CN115311414A - Live-action rendering method and device based on digital twinning and related equipment - Google Patents

Live-action rendering method and device based on digital twinning and related equipment

Info

Publication number
CN115311414A
Authority
CN
China
Prior art keywords: target, dimensional model, spherical, initial, camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210962015.0A
Other languages
Chinese (zh)
Inventor
吴志全
孙珂
杨舵
段伋
尹鹤珠
张龙平
王显贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210962015.0A priority Critical patent/CN115311414A/en
Publication of CN115311414A publication Critical patent/CN115311414A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a live-action rendering method and device based on digital twins and related equipment, relates in particular to artificial intelligence fields such as image processing, digital twins and virtual reality, and can be applied to smart-city, city-management and public-security emergency scenarios. The scheme is as follows: constructing an initial three-dimensional model; constructing a spherical panorama within a set spherical region from a target panorama obtained by photographing a plurality of target objects with a camera; determining a target scaling according to the camera's height above the horizontal ground when shooting, the set view radius of the camera and the radius of the set spherical region; mapping the initial three-dimensional model into the set spherical region according to the target scaling to obtain a target three-dimensional model; and rendering the target three-dimensional model with the spherical panorama. Because a single spherical panorama renders a reduced-scale model, the amount of data the machine must process and its occupation of machine resources are reduced, so the method can run on lower-performance electronic equipment, improving its applicability.

Description

Live-action rendering method and device based on digital twin and related equipment
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to image processing, digital twins and virtual reality, which can be applied to smart-city, city-management and public-security emergency scenarios, and specifically to a live-action rendering method and device based on digital twins and related equipment.
Background
With the rapid development of image technology and the digital economy, three-dimensional model rendering is widely applied in industries such as city construction, movies, games and public security emergency response. At present, rendering a three-dimensional model with images can create a vivid virtual environment and meet various demonstration or training requirements, giving users an immersive in-scene experience.
Disclosure of Invention
The disclosure provides a method and a device for rendering a real scene based on digital twins and related equipment.
According to an aspect of the present disclosure, there is provided a digital twin-based live-action rendering method, including: according to the three-dimensional space information of a plurality of target objects, constructing an initial three-dimensional model corresponding to the target objects; acquiring a target panoramic image obtained by shooting a plurality of target objects by a camera, and constructing a spherical panoramic image in a set spherical area according to the target panoramic image; determining a target scaling according to the height information of the camera from the horizontal ground when shooting, the set view radius of the camera and the radius of the set spherical area; mapping the initial three-dimensional model to the set spherical region according to the target scaling to obtain a target three-dimensional model; and rendering the target three-dimensional model by adopting the spherical panoramic image.
According to another aspect of the present disclosure, there is provided a digital twin-based live-action rendering apparatus including: the system comprises a first construction module, a second construction module and a third construction module, wherein the first construction module is used for constructing an initial three-dimensional model corresponding to a plurality of target objects according to three-dimensional space information of the target objects; the second construction module is used for acquiring a target panoramic image obtained by shooting a plurality of target objects by a camera and constructing a spherical panoramic image in a set spherical area according to the target panoramic image; the determining module is used for determining a target scaling according to the height information of the camera from the horizontal ground when shooting, the set vision radius of the camera and the radius of the set spherical area; the mapping module is used for mapping the initial three-dimensional model to the set spherical region according to the target scaling so as to obtain a target three-dimensional model; and the rendering module is used for rendering the target three-dimensional model by adopting the spherical panorama.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described in the embodiments of the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic diagram of setting a spherical region according to an embodiment of the present disclosure;
FIG. 6 is a schematic illustration according to a fifth embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a sixth embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a rendered target three-dimensional model according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram according to a seventh embodiment of the present disclosure;
fig. 10 is a block diagram of an electronic device for implementing a digital twin-based live-action rendering method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, scenes such as cities, movies, games and public security emergencies are modeled three-dimensionally with an engine or a 3D modeling tool from information about the scene. A live-action photograph is then taken with a camera, the photograph is processed, and a map for the model is extracted; during the modeling or rendering phase, the map is added to the surface of the model. However, acquiring and processing this information is costly, and when the model is rendered, the large number of live-action pictures places high demands on machine resources, so the approach cannot be used on lower-performance electronic equipment.
In order to solve the above problems, the present disclosure provides a live-action rendering method and device based on digital twins, and related equipment.
The digital twinning-based live-action rendering method, device and related equipment of the embodiment of the disclosure are described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. It should be noted that the embodiment of the present disclosure is exemplified by the rendering method being configured in a digital twin-based live-action rendering apparatus, and the rendering apparatus may be applied to any electronic device, so that the electronic device may perform a digital twin-based live-action rendering function.
The electronic device may be any device having computing capability, for example a personal computer (PC) or a mobile terminal, and the mobile terminal may be a hardware device having an operating system, a touch screen and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant or a wearable device.
As shown in fig. 1, the digital twin-based live-action rendering method may include the steps of:
step 101, constructing an initial three-dimensional model corresponding to a plurality of target objects according to three-dimensional space information of the plurality of target objects.
In the embodiment of the present disclosure, an engine or a 3D modeling tool may be used to construct a model from the three-dimensional spatial information of a plurality of target objects to obtain an initial three-dimensional model. The target objects may be urban buildings, character scenes in games or movies, or public-security emergency scenes; the initial three-dimensional model may be a transparent three-dimensional model; and the three-dimensional spatial information may include size information, position information, angle information and the like of each target object.
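As a rough, non-authoritative illustration of the three-dimensional spatial information described above (size, position and angle per target object); the names TargetObject and build_initial_model are hypothetical, not from the patent:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TargetObject:
    name: str
    size: tuple       # (width, depth, height)
    position: tuple   # (x, y, z) in world coordinates
    angle: float      # yaw rotation, degrees

def build_initial_model(objects: List[TargetObject]) -> Dict[str, dict]:
    """Collect the spatial information into a description that an engine
    or 3D modeling tool could instantiate as a transparent model."""
    return {o.name: {"size": o.size, "position": o.position,
                     "angle": o.angle, "opacity": 0.0}   # fully transparent
            for o in objects}

model = build_initial_model(
    [TargetObject("building_a", (40.0, 25.0, 88.0), (120.0, 60.0, 0.0), 15.0)])
```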
And 102, acquiring a target panoramic image obtained by shooting a plurality of target objects by a camera, and constructing a spherical panoramic image in a set spherical area according to the target panoramic image.
Further, a camera may be used to capture a 360-degree panorama of the plurality of target objects, a target panorama may be determined from the captured panoramic views, and the target panorama may then be mapped onto the spherical surface of the set spherical region to obtain the spherical panorama. It should be noted that the center of the set spherical region may be the position of the camera.
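The patent does not fix a projection for the target panorama; assuming the common equirectangular layout, the sphere construction can be sketched as a direction-to-texture-coordinate lookup (illustrative names only):

```python
import math

def equirect_uv(direction):
    """For a unit view direction from the sphere centre (the camera
    position), return the (u, v) texture coordinate, each in [0, 1],
    of the equirectangular target panorama that textures the sphere."""
    x, y, z = direction
    lon = math.atan2(y, x)                    # longitude in [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, z)))   # latitude in [-pi/2, pi/2]
    u = lon / (2.0 * math.pi) + 0.5
    v = 0.5 - lat / math.pi
    return u, v

# Example: the direction straight ahead along +x maps to the image centre.
print(equirect_uv((1.0, 0.0, 0.0)))  # (0.5, 0.5)
```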
And 103, determining the target scaling according to the height information from the horizontal ground when the camera shoots, the set view radius of the camera and the radius of the set spherical area.
Further, to reduce the scale of the initial three-dimensional model, the initial three-dimensional model may be scaled and the scaled initial three-dimensional model may be mapped into the set spherical region, and thus, in the embodiment of the present disclosure, the scaling of the initial three-dimensional model may be determined.
As a possible implementation manner of the embodiment of the present disclosure, the target scaling may be determined according to the height information from the horizontal ground when the camera shoots, the set view radius of the camera, and the radius of the set spherical area.
And step 104, mapping the initial three-dimensional model to a set spherical area according to the target scaling so as to obtain a target three-dimensional model.
Further, scaling the initial three-dimensional model according to a target scaling, and translating the scaled initial three-dimensional model into a set spherical region to obtain a target three-dimensional model, wherein it needs to be noted that each target object in the target three-dimensional model is aligned with a corresponding target object in the spherical panorama.
And 105, rendering the target three-dimensional model by adopting the spherical panorama.
And then, rendering the target three-dimensional model by adopting the spherical panorama to obtain a rendered three-dimensional model.
Therefore, operations on the spherical panorama translate into operations on the target three-dimensional model, meeting users' personalized requirements.
In conclusion, an initial three-dimensional model corresponding to a plurality of target objects is constructed according to the three-dimensional space information of the target objects; acquiring a target panoramic image obtained by shooting a plurality of target objects by a camera, and constructing a spherical panoramic image in a set spherical area according to the target panoramic image; determining a target scaling according to height information of a camera from the horizontal ground when shooting, a set view radius of the camera and a radius of a set spherical area; mapping the initial three-dimensional model to a set spherical region according to the target scaling so as to obtain a target three-dimensional model; the target three-dimensional model is rendered by the spherical panorama, so that the initial three-dimensional model is mapped into the set spherical area according to the target scaling, the target three-dimensional model with a small scale can be obtained, and the target three-dimensional model with the small scale is rendered by the spherical panorama, so that the data amount processed by a machine can be reduced, the machine resource occupation can be reduced, the method can be applied to electronic equipment with low performance, and the applicability of the method is improved.
In order to clearly illustrate how the above embodiment maps the initial three-dimensional model into the set spherical region according to the target scaling to obtain the target three-dimensional model, the present disclosure proposes another digital twinning-based live-action rendering method.
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure.
As shown in fig. 2, the rendering method may include the steps of:
step 201, according to the three-dimensional space information of a plurality of target objects, constructing an initial three-dimensional model corresponding to the plurality of target objects.
Step 202, obtaining a target panorama obtained by shooting a plurality of target objects by a camera, and constructing a spherical panorama in a set spherical area according to the target panorama.
And step 203, determining the target scaling according to the height information from the horizontal ground when the camera shoots, the set view radius of the camera and the radius of the set spherical area.
And 204, zooming the initial three-dimensional model according to the target zooming proportion to obtain the zoomed initial three-dimensional model.
In the embodiment of the present disclosure, in order to reduce the scale of the model, the initial three-dimensional model is scaled by using the target scaling ratio, and the scaled initial three-dimensional model can be obtained. For example, the size information of the initial three-dimensional model may be scaled using a target scaling.
And step 205, aligning each target object of the scaled initial three-dimensional model with each target object in the spherical panoramic image to obtain a target three-dimensional model.
Further, to import the scaled initial three-dimensional model into the spherical region, the scaled initial three-dimensional model may be translated into the spherical region.
In order to avoid the deviation between each target object in the target three-dimensional model and each target object in the spherical panorama from affecting the subsequent rendering effect, as a possible implementation manner of the embodiment of the present disclosure, each target object of the scaled initial three-dimensional model may be aligned with each target object in the spherical panorama to obtain the target three-dimensional model.
And step 206, rendering the target three-dimensional model by adopting the spherical panorama.
It should be noted that the execution processes of steps 201 to 203 and step 206 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In conclusion, the initial three-dimensional model is zoomed according to the target zoom scale to obtain the zoomed initial three-dimensional model; and aligning each target object of the scaled initial three-dimensional model with each target object in the spherical panoramic image to obtain the target three-dimensional model, so that the initial three-dimensional model can be mapped into the set spherical region by aligning each target object of the scaled initial three-dimensional model with each target object in the spherical panoramic image, the scale of the target three-dimensional model in the set spherical region can be reduced, the data amount processed by a machine is reduced, and meanwhile, the rendering effect of the target three-dimensional model can be further improved.
In order to clearly illustrate how the above embodiments map the initial three-dimensional model into the set spherical region according to the target scaling to obtain the target three-dimensional model, the present disclosure proposes another digital twin-based live-action rendering method.
Fig. 3 is a schematic diagram according to a third embodiment of the present disclosure.
As shown in fig. 3, the rendering method may include the steps of:
step 301, constructing an initial three-dimensional model corresponding to a plurality of target objects according to three-dimensional space information of the plurality of target objects.
Step 302, obtaining a target panorama obtained by shooting a plurality of target objects by a camera, and constructing a spherical panorama in a set spherical area according to the target panorama.
Step 303, determining a target scaling according to the height information from the horizontal ground when the camera shoots, the set view radius of the camera and the radius of the set spherical area.
And step 304, zooming the initial three-dimensional model according to the target zooming proportion to obtain a zoomed initial three-dimensional model.
And 305, transforming each pixel point in the spherical panoramic image to the world coordinate system according to the mapping relation between the image coordinate system and the world coordinate system so as to obtain the pose information of each point of each target object in the spherical panoramic image in the world coordinate system.
In the embodiment of the present disclosure, in order to align each target object of the scaled initial three-dimensional model with each target object in the spherical panorama, and further improve the rendering effect of the model, each pixel point of the spherical panorama may be transformed into a world coordinate system, and the pose information of each point of each target object in the scaled initial three-dimensional model is aligned with the pose information of each point in the world coordinate system.
As a possible implementation manner of the embodiment of the present disclosure, each pixel point in the spherical panorama can be transformed into the world coordinate system according to the mapping relationship between the image coordinate system and the world coordinate system, so that the pose information of each point of the target object in the spherical panorama in the world coordinate system can be obtained.
And step 306, aligning the pose information of each point of each target object in the scaled initial three-dimensional model with the pose information of each point in the world coordinate system to obtain the target three-dimensional model.
And aligning the pose information of each point of each target object in the scaled initial three-dimensional model with the pose information of each point in the world coordinate system to obtain the target three-dimensional model.
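A minimal sketch of this pixel-to-world transform, assuming the spherical panorama is stored as an equirectangular image and the set spherical region (radius r) is centred at the camera; the coordinate conventions and function name are assumptions, not taken from the patent:

```python
import math

def pixel_to_world(i, j, width, height, r, camera=(0.0, 0.0, 0.0)):
    """Map pixel (i, j) of a width x height equirectangular spherical
    panorama to the world-coordinate point it textures on the set
    spherical region of radius r centred at the camera."""
    lon = (i / width - 0.5) * 2.0 * math.pi    # longitude in [-pi, pi]
    lat = (0.5 - j / height) * math.pi         # latitude in [-pi/2, pi/2]
    X, Y, Z = camera
    return (X + r * math.cos(lat) * math.cos(lon),
            Y + r * math.cos(lat) * math.sin(lon),
            Z + r * math.sin(lat))
```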
And 307, rendering the target three-dimensional model by adopting the spherical panorama.
It should be noted that the execution processes of steps 301 to 304 and step 307 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this, and are not described again.
In summary, each pixel point in the spherical panorama is transformed to be under the world coordinate system according to the mapping relation between the image coordinate system and the world coordinate system, so as to obtain the pose information of each point of each target object in the spherical panorama under the world coordinate system; the method comprises the steps of aligning the position and pose information of each point of each target object in the initial three-dimensional model after zooming with the position and pose information of each point in a world coordinate system to obtain a target three-dimensional model, converting each pixel point in a spherical panorama into the world coordinate system, aligning the position and pose information of each point of each target object in the initial three-dimensional model after zooming with the position and pose information of each point in the world coordinate system to realize mapping of the initial three-dimensional model into a set spherical area, reduce the scale of the target three-dimensional model in the set spherical area, reduce the data volume processed by a machine and further improve the rendering effect of the target three-dimensional model.
To clearly illustrate how the above embodiments determine the target scaling according to the height information from the horizontal ground when the camera shoots, the set view radius of the camera, and the radius of the set spherical region, the present disclosure proposes another digital twin-based live-action rendering method.
Fig. 4 is a schematic diagram according to a fourth embodiment of the present disclosure.
As shown in fig. 4, the rendering method may include the steps of:
step 401, constructing an initial three-dimensional model corresponding to a plurality of target objects according to the three-dimensional space information of the plurality of target objects.
And 402, acquiring a target panoramic image obtained by shooting a plurality of target objects by a camera, and constructing a spherical panoramic image in a set spherical area according to the target panoramic image.
And step 403, determining target zooming parameters according to the height information from the horizontal ground when the camera shoots and the set view radius of the camera.
In the embodiment of the disclosure, the target zooming parameter can be obtained by calculating according to the height information from the horizontal ground when the camera shoots and the set view radius of the camera.
And step 404, taking the ratio of the radius of the set spherical area to the target scaling parameter as the target scaling.
Further, the radius of the set spherical region is compared with the target scaling parameter, and the ratio of the radius of the set spherical region to the target scaling parameter is used as the target scaling.
And 405, mapping the initial three-dimensional model to a set spherical area according to the target scaling to obtain a target three-dimensional model.
For example, as shown in fig. 5, the height information from the horizontal ground when the camera shoots is H, the set view radius of the camera is D, and the radius of the spherical area is r, wherein the camera may be a perspective camera, and the target scaling k may be expressed as the following formula:
k = r/√(H² + D²);
Further, the initial three-dimensional model is scaled according to the target scaling k to obtain a scaled initial three-dimensional model, the scaled three-dimensional model can be translated, and each target object of the scaled initial three-dimensional model is aligned with each target object in the spherical panorama to obtain the target three-dimensional model. Taking the position of the camera in the world coordinate system as P_camera(X, Y, H), the position P(x, y, z) of a point in the initial three-dimensional model is converted into the position P′(x′, y′, z′) of the corresponding point in the target three-dimensional model, which can be expressed as the following formulas:
x′=kx+(1-k)X;
y′=ky+(1-k)Y;
z′=kz+(1-k)H;
wherein the position P (x, y, z) of a point in the initial three-dimensional model is generated from the real position of the point in the world coordinate system.
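As an illustrative sketch of steps 403 to 405, assuming the target scaling parameter √(H² + D²) recovered above (the original formula image is not reproduced in this text); the function names are hypothetical:

```python
import math

def target_scale(r: float, H: float, D: float) -> float:
    """Target scaling k = r / sqrt(H^2 + D^2): the set sphere radius
    divided by the distance from the camera to the edge of its view
    circle (the target scaling parameter of step 403)."""
    return r / math.sqrt(H * H + D * D)

def map_point(p, camera, k):
    """Apply x' = kx + (1-k)X (and likewise for y and z): scale the
    model point p about the camera position P_camera = (X, Y, H)."""
    return tuple(k * a + (1.0 - k) * c for a, c in zip(p, camera))

# Example: camera 30 m above ground, 400 m view radius, 50-unit sphere.
k = target_scale(r=50.0, H=30.0, D=400.0)
p_new = map_point((120.0, 60.0, 10.0), camera=(0.0, 0.0, 30.0), k=k)
```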
And 406, rendering the target three-dimensional model by using the spherical panorama.
It should be noted that the execution processes of steps 401 to 402 and step 406 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this, and are not described again.
In conclusion, the target zooming parameters are determined according to the height information of the camera from the horizontal ground when shooting and the set view radius of the camera; the ratio of the radius of the set spherical area to the target zoom parameter is used as the target zoom scale, and thus the target zoom scale can be obtained from the height information from the horizontal ground when the camera takes a picture, the set view radius of the camera, and the radius of the set spherical area.
In order to clearly illustrate how the above embodiments acquire the initial panoramic views corresponding to the plurality of target objects and construct the spherical panoramic view within the set spherical area according to the initial panoramic views, the present disclosure proposes another digital twin-based live-action rendering method.
Fig. 6 is a schematic diagram according to a fifth embodiment of the present disclosure.
As shown in fig. 6, the rendering method may include the steps of:
step 601, constructing an initial three-dimensional model corresponding to a plurality of target objects according to the three-dimensional space information of the plurality of target objects.
Step 602, performing all-around shooting on each target object to obtain an initial panorama of each target object.
In the embodiment of the present disclosure, by performing 360-degree panoramic shooting on the target objects, an initial panorama of each target object may be obtained, where the number of initial panoramas corresponding to each target object may be one or more.
And step 603, splicing the initial panoramas of the target objects to obtain the target panorama corresponding to the plurality of target objects.
Further, the initial panoramas of the target objects can be spliced according to the pose information of the target objects, so that a single target panorama corresponding to the plurality of target objects is obtained.
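One possible realization of the splicing step, sketched with OpenCV's high-level stitcher; note that the patent splices according to known pose information, whereas cv2.Stitcher estimates the alignment itself, so this is a stand-in rather than the patented procedure:

```python
import cv2

def stitch_target_panorama(image_paths):
    """Splice the initial panoramas of the target objects into a single
    target panorama using OpenCV's feature-based stitcher."""
    images = [cv2.imread(p) for p in image_paths]
    stitcher = cv2.Stitcher_create()
    status, pano = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed, status={status}")
    return pano
```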
And step 604, mapping the spherical surface of the target panoramic image in the set spherical area to obtain a spherical panoramic image.
Further, according to the area size of the inner surface of the set spherical area, the target panorama is subjected to mapping processing to obtain a spherical panorama, so that the area size of the spherical panorama is consistent with the area size of the inner surface of the set spherical area.
And step 605, determining the target scaling according to the height information from the horizontal ground when the camera shoots, the set view radius of the camera and the radius of the set spherical area.
Step 606, mapping the initial three-dimensional model to a set spherical area according to the target scaling to obtain a target three-dimensional model.
And step 607, rendering the target three-dimensional model by adopting the spherical panorama.
It should be noted that the execution processes of step 601, step 605 to step 607 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this, and are not described again.
In conclusion, the initial panorama of each target object is obtained by performing all-around shooting on each target object; the initial panoramic pictures of the target objects are spliced to obtain the target panoramic pictures corresponding to the target objects, and therefore the initial panoramic pictures of the target objects are spliced and mapped to obtain one spherical panoramic picture.
To clearly illustrate how a spherical panorama is used to render a three-dimensional model of an object, the present disclosure proposes another digital twin-based live-action rendering method.
Fig. 7 is a schematic diagram according to a sixth embodiment of the present disclosure.
As shown in fig. 7, the rendering method may include the steps of:
step 701, constructing an initial three-dimensional model corresponding to a plurality of target objects according to three-dimensional space information of the plurality of target objects.
Step 702, obtaining a target panorama obtained by shooting a plurality of target objects by a camera, and constructing a spherical panorama in a set spherical area according to the target panorama.
And 703, determining the target scaling according to the height information from the horizontal ground when the camera shoots, the set view radius of the camera and the radius of the set spherical area.
Step 704, mapping the initial three-dimensional model to a set spherical region according to the target scaling to obtain a target three-dimensional model.
Step 705, coordinate information of the vertex of each target object in the target three-dimensional model is obtained.
In the embodiment of the present disclosure, the coordinate information of the vertex of each target object in the target three-dimensional model may be generated according to the real coordinate information of each vertex of each target object in the world coordinate system.
Step 706, a plurality of triangular surfaces are constructed according to the coordinate information of the vertex of each target object.
And step 707, rendering the plurality of triangular surfaces by using the spherical panorama.
Furthermore, a plurality of triangular surfaces can be constructed according to the coordinate information of the vertex of each target object, and the plurality of triangular surfaces are rendered by adopting the pose information of each point of each target object in the spherical panorama under the world coordinate system.
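A minimal sketch of the triangular-surface construction, using fan triangulation as a stand-in for whatever triangulation the renderer actually applies; it assumes each object face lists its vertices in order around a convex boundary:

```python
def fan_triangulate(vertices):
    """Build triangular faces from one target object's ordered vertex
    coordinates: (v0, v1, v2), (v0, v2, v3), ..."""
    return [(vertices[0], vertices[i], vertices[i + 1])
            for i in range(1, len(vertices) - 1)]

# Example: a rectangular facade becomes two triangles.
quad = [(0, 0, 0), (10, 0, 0), (10, 0, 30), (0, 0, 30)]
faces = fan_triangulate(quad)
```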
Further, in response to a trigger operation on the spherical panorama, determining position information of a trigger point of the spherical panorama; color marking the position information of the trigger point to obtain the position information after color marking; the position information after the color marking is displayed, so that the operation on the target three-dimensional model can be realized through the spherical panoramic image, and the individual requirements of users are met.
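A sketch of the colour marking of the trigger point, assuming the spherical panorama is held as an H x W x 3 NumPy image and the trigger point arrives as normalised texture coordinates; both interface choices are assumptions:

```python
import numpy as np

def mark_trigger_point(panorama: np.ndarray, u: float, v: float,
                       color=(255, 0, 0), radius: int = 4) -> np.ndarray:
    """Colour-mark the trigger point (u, v) of the spherical panorama
    with a small square so the marked position can then be displayed."""
    h, w = panorama.shape[:2]
    cy, cx = int(v * (h - 1)), int(u * (w - 1))
    y0, y1 = max(0, cy - radius), min(h, cy + radius + 1)
    x0, x1 = max(0, cx - radius), min(w, cx + radius + 1)
    panorama[y0:y1, x0:x1] = color
    return panorama
```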
In conclusion, coordinate information of the vertices of each target object in the target three-dimensional model is acquired; a plurality of triangular surfaces are constructed according to the coordinate information of the vertices; and the spherical panorama is adopted to render the triangular surfaces. In this way, the target three-dimensional model can be rendered with a single spherical panorama, reducing the data volume processed by the machine and the occupation of machine resources, so the method can be applied to electronic equipment with lower performance, improving its applicability.
In order to clearly illustrate the above embodiments, the description will now be made by way of example.
For example, as shown in fig. 8, a digital twin-based live-action rendering method according to an embodiment of the present disclosure may include the following steps:
1. constructing a transparent three-dimensional model;
2. constructing a spherical panoramic view;
3. determining a target scaling according to height information from the horizontal ground when the camera shoots, the set view radius of the camera and the radius of a set spherical area, scaling the three-dimensional model according to the target scaling to obtain a scaled transparent three-dimensional model, and translating the scaled transparent three-dimensional model into the spherical area;
4. the scaled transparent three-dimensional model can be rendered with the spherical panorama, so that a user operating on the spherical panorama thereby operates on the scaled transparent three-dimensional model.
According to the live-action rendering method based on the digital twin, an initial three-dimensional model corresponding to a plurality of target objects is constructed according to three-dimensional space information of the target objects; acquiring a target panoramic image obtained by shooting a plurality of target objects by a camera, and constructing a spherical panoramic image in a set spherical area according to the target panoramic image; determining a target scaling according to height information of a camera from a horizontal ground when shooting, a set view radius of the camera and a radius of a set spherical area; mapping the initial three-dimensional model to the set spherical region according to the target scaling to obtain a target three-dimensional model; the target three-dimensional model is rendered by the spherical panorama, so that the initial three-dimensional model is mapped into the set spherical area according to the target scaling, the target three-dimensional model with a smaller scale can be obtained, and the target three-dimensional model with the smaller scale is rendered by the spherical panorama, so that the data amount processed by a machine can be reduced, the occupation of machine resources is reduced, the method can be applied to electronic equipment with lower performance, and the applicability of the method is improved.
In order to implement the above embodiments, the present disclosure provides a live-action rendering apparatus based on a digital twin.
Fig. 9 is a schematic diagram according to a seventh embodiment of the present disclosure. As shown in fig. 9, the digital twin-based live-action rendering apparatus 900 includes: a first building module 910, a second building module 920, a determination module 930, a mapping module 940 and a rendering module 950.
The first constructing module 910 is configured to construct an initial three-dimensional model corresponding to a plurality of target objects according to three-dimensional spatial information of the plurality of target objects; a second construction module 920, configured to obtain a target panorama obtained by shooting a plurality of target objects with a camera, and construct a spherical panorama in a set spherical area according to the target panorama; a determining module 930, configured to determine a target scaling according to height information from a horizontal ground when the camera shoots, a set view radius of the camera, and a radius of a set spherical region; a mapping module 940, configured to map the initial three-dimensional model into a set spherical region according to the target scaling, so as to obtain a target three-dimensional model; and a rendering module 950, configured to render the target three-dimensional model by using the spherical panorama.
As a possible implementation manner of the embodiment of the present disclosure, the mapping module 940 is configured to: zooming the initial three-dimensional model according to the target zooming proportion to obtain a zoomed initial three-dimensional model; and aligning each target object of the scaled initial three-dimensional model with each target object in the spherical panorama to obtain a target three-dimensional model.
As a possible implementation manner of the embodiment of the present disclosure, the mapping module 940 is further configured to: according to the mapping relation between the image coordinate system and the world coordinate system, each pixel point in the spherical panoramic image is transformed to be under the world coordinate system, so that the pose information of each point of each target object in the spherical panoramic image under the world coordinate system is obtained; and aligning the pose information of each point of each target object in the scaled initial three-dimensional model with the pose information of each point in a world coordinate system to obtain a target three-dimensional model.
As a possible implementation manner of the embodiment of the present disclosure, the determining module 930 is configured to: determining a target zooming parameter according to height information of the camera from the horizontal ground when shooting and the set view radius of the camera; and taking the ratio of the radius of the set spherical area to the target scaling parameter as the target scaling.
As a possible implementation manner of the embodiment of the present disclosure, the second constructing module 920 is configured to: performing all-around shooting on each target object to obtain an initial panoramic image of each target object; splicing the initial panoramic pictures of the target objects to obtain target panoramic pictures corresponding to the target objects; and mapping the spherical surface of the target panoramic image in the set spherical area to obtain a spherical panoramic image.
As a possible implementation manner of the embodiment of the present disclosure, the rendering module 950 is configured to: acquiring coordinate information of vertexes of all target objects in the target three-dimensional model; constructing a plurality of triangular surfaces according to the coordinate information of the top point of each target object; and rendering the plurality of triangular surfaces by adopting a spherical panoramic image.
As a possible implementation manner of the embodiment of the present disclosure, the digital twin-based live-action rendering apparatus 900 further includes: the device comprises an acquisition module, a marking module and a display module.
The acquisition module is used for responding to the triggering operation of the spherical panoramic image and determining the position information of the triggering point of the spherical panoramic image; the marking module is used for carrying out color marking on the position information of the trigger point to obtain the position information after the color marking; and the display module is used for displaying the position information after the color marking.
The live-action rendering device based on the digital twin of the embodiment of the disclosure constructs an initial three-dimensional model corresponding to a plurality of target objects according to three-dimensional space information of the plurality of target objects; acquiring a target panorama obtained by shooting a plurality of target objects by a camera, and constructing a spherical panorama in a set spherical area according to the target panorama; determining a target scaling according to height information of a camera from the horizontal ground when shooting, a set view radius of the camera and a radius of a set spherical area; mapping the initial three-dimensional model to the set spherical region according to the target scaling so as to obtain a target three-dimensional model; the target three-dimensional model is rendered by the spherical panorama, so that the initial three-dimensional model is mapped into the set spherical area according to the target scaling, the target three-dimensional model with a smaller scale can be obtained, and the target three-dimensional model with the smaller scale is rendered by the spherical panorama, so that the data amount processed by a machine can be reduced, the occupation of machine resources is reduced, the method can be applied to electronic equipment with lower performance, and the applicability of the method is improved.
In order to implement the above embodiment, the present disclosure further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the digital twin based live-action rendering method of the above embodiments.
To achieve the above embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the digital twin-based live-action rendering method according to the above embodiments.
In order to implement the above embodiments, the present disclosure also proposes a computer program product, which includes a computer program that, when being executed by a processor, implements the digital twin-based live-action rendering method according to the above embodiments.
In the technical scheme of the present disclosure, the processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal information of the related user are all performed under the premise of obtaining the consent of the user, and all meet the regulations of the related laws and regulations, and do not violate the good custom of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 10 shows a schematic block diagram of an example electronic device 1000 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the apparatus 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the device 1000 can also be stored. The calculation unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
A number of components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and a communication unit 1009 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1009 allows the device 1000 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Computing unit 1001 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 1001 executes the respective methods and processes described above, such as the rendering method. For example, in some embodiments, the rendering method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1000 via ROM 1002 and/or communications unit 1009. When the computer program is loaded into RAM 1003 and executed by the computing unit 1001, one or more steps of the rendering method described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the digital twin-based live-action rendering method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is the discipline that studies making a computer simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking and planning), and it has technologies at both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, knowledge graph technology, and the like.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A digital twin-based live-action rendering method includes:
according to three-dimensional space information of a plurality of target objects, constructing an initial three-dimensional model corresponding to the target objects;
acquiring a target panoramic image obtained by shooting a plurality of target objects by a camera, and constructing a spherical panoramic image in a set spherical area according to the target panoramic image;
determining a target scaling according to the height information of the camera from the horizontal ground when shooting, the set view radius of the camera and the radius of the set spherical area;
mapping the initial three-dimensional model to the set spherical region according to the target scaling to obtain a target three-dimensional model;
and rendering the target three-dimensional model by adopting the spherical panoramic image.
2. The method of claim 1, wherein said mapping said initial three-dimensional model into said set spherical region according to said target scale to obtain a target three-dimensional model comprises:
zooming the initial three-dimensional model according to the target zooming proportion to obtain a zoomed initial three-dimensional model;
and aligning each target object of the scaled initial three-dimensional model with each target object in the spherical panoramic image to obtain the target three-dimensional model.
3. The method of claim 2, wherein said aligning each target object of said scaled initial three-dimensional model with each target object of said spherical panorama to obtain said target three-dimensional model comprises:
transforming each pixel point in the spherical panoramic image to be under the world coordinate system according to the mapping relation between the image coordinate system and the world coordinate system so as to obtain the pose information of each point of each target object in the spherical panoramic image under the world coordinate system;
and aligning the pose information of each point of each target object in the scaled initial three-dimensional model with the pose information of each point in the world coordinate system to obtain the target three-dimensional model.
4. The method of claim 1, wherein the determining a target scaling from the height information from the horizontal ground when the camera is capturing, the set field of view radius of the camera, and the radius of the set spherical region comprises:
determining a target zooming parameter according to the height information of the camera from the horizontal ground when shooting and the set view radius of the camera;
and taking the ratio of the radius of the set spherical area to the target scaling parameter as the target scaling.
5. The method of claim 1, wherein the acquiring a target panorama obtained by shooting a plurality of target objects with a camera and constructing a spherical panorama within a set spherical area according to the target panorama comprises:
performing all-round looking shooting on each target object to obtain an initial panorama of each target object;
splicing the initial panoramic pictures of the target objects to obtain a plurality of target panoramic pictures corresponding to the target objects;
and mapping the spherical surface of the target panoramic image in the set spherical area to obtain the spherical panoramic image.
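
A sketch of claim 5 using OpenCV's stitcher for the stitching step; the spherical mapping reuses pixel_to_world() from the claim 3 sketch, and the coarse sampling stride is an arbitrary illustration choice.

    import cv2
    import numpy as np

    def build_spherical_panorama(look_around_shots, sphere_radius):
        # Stitch the look-around shots into the target panorama.
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, target_pano = stitcher.stitch(look_around_shots)
        if status != 0:  # 0 is Stitcher::OK
            raise RuntimeError(f"stitching failed with status {status}")
        # Spherically map the panorama onto the set spherical area by placing
        # a coarse sample of its pixels on the sphere via pixel_to_world().
        h, w = target_pano.shape[:2]
        points = np.array([pixel_to_world(u, v, w, h, sphere_radius)
                           for v in range(0, h, 32)
                           for u in range(0, w, 32)])
        return target_pano, points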
6. The method of claim 1, wherein the rendering the target three-dimensional model using the spherical panorama comprises:
acquiring coordinate information of the vertices of each target object in the target three-dimensional model;
constructing a plurality of triangular faces according to the coordinate information of the vertices of each target object; and
rendering the plurality of triangular faces using the spherical panorama.
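
A sketch of the triangulation in claim 6, assuming each object's vertices arrive as an ordered (N, 3) array and using a simple fan triangulation (the claims do not specify the scheme); the resulting faces would then be textured from the spherical panorama by the renderer.

    import numpy as np

    def build_triangle_faces(object_vertices):
        # Fan triangulation: face i spans vertices 0, i, i+1 of each object.
        faces = []
        for verts in object_vertices:      # one ordered (N, 3) array per object
            for i in range(1, len(verts) - 1):
                faces.append(np.stack([verts[0], verts[i], verts[i + 1]]))
        return faces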
7. The method according to any one of claims 1-6, further comprising:
determining, in response to a trigger operation on the spherical panorama, position information of the trigger point on the spherical panorama;
color-marking the position information of the trigger point to obtain color-marked position information; and
displaying the color-marked position information.
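
A sketch of the trigger-point marking of claim 7, assuming the trigger arrives as pixel coordinates on the displayed panorama; the red colour and 8-pixel marker radius are illustration choices, not taken from the disclosure.

    import cv2

    def mark_trigger_point(panorama, trigger_uv, color=(0, 0, 255)):
        # Draw a filled circle at the triggered position; the colour-marked
        # image is what gets displayed per claim 7.
        marked = panorama.copy()
        cv2.circle(marked, trigger_uv, radius=8, color=color, thickness=-1)
        return marked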
8. A digital twin-based live-action rendering apparatus, comprising:
a first construction module configured to construct an initial three-dimensional model corresponding to a plurality of target objects according to three-dimensional spatial information of the target objects;
a second construction module configured to acquire a target panorama obtained by shooting the plurality of target objects with a camera, and to construct a spherical panorama within a set spherical area according to the target panorama;
a determining module configured to determine a target scaling according to the height of the camera above the horizontal ground at the time of shooting, the set view radius of the camera, and the radius of the set spherical area;
a mapping module configured to map the initial three-dimensional model into the set spherical area according to the target scaling to obtain a target three-dimensional model; and
a rendering module configured to render the target three-dimensional model using the spherical panorama.
9. The apparatus of claim 8, wherein the mapping module is configured to:
scale the initial three-dimensional model according to the target scaling to obtain a scaled initial three-dimensional model; and
align each target object of the scaled initial three-dimensional model with the corresponding target object in the spherical panorama to obtain the target three-dimensional model.
10. The apparatus of claim 9, wherein the mapping module is further configured to:
transform each pixel point in the spherical panorama into the world coordinate system according to the mapping relationship between the image coordinate system and the world coordinate system, so as to obtain pose information, in the world coordinate system, of each point of each target object in the spherical panorama; and
align the pose information of each point of each target object in the scaled initial three-dimensional model with the corresponding pose information in the world coordinate system to obtain the target three-dimensional model.
11. The apparatus of claim 8, wherein the determining module is configured to:
determine a target scaling parameter according to the height of the camera above the horizontal ground at the time of shooting and the set view radius of the camera; and
take the ratio of the radius of the set spherical area to the target scaling parameter as the target scaling.
12. The apparatus of claim 8, wherein the second construction module is configured to:
perform look-around shooting on each target object to obtain an initial panorama of each target object;
stitch the initial panoramas of the target objects to obtain the target panorama corresponding to the plurality of target objects; and
spherically map the target panorama within the set spherical area to obtain the spherical panorama.
13. The apparatus of claim 8, wherein the rendering module is configured to:
acquire coordinate information of the vertices of each target object in the target three-dimensional model;
construct a plurality of triangular faces according to the coordinate information of the vertices of each target object; and
render the plurality of triangular faces using the spherical panorama.
14. The apparatus according to any one of claims 8-13, further comprising:
an acquisition module configured to determine, in response to a trigger operation on the spherical panorama, position information of the trigger point on the spherical panorama;
a marking module configured to color-mark the position information of the trigger point to obtain color-marked position information; and
a display module configured to display the color-marked position information.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202210962015.0A 2022-08-11 2022-08-11 Live-action rendering method and device based on digital twinning and related equipment Pending CN115311414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210962015.0A CN115311414A (en) 2022-08-11 2022-08-11 Live-action rendering method and device based on digital twinning and related equipment


Publications (1)

Publication Number Publication Date
CN115311414A 2022-11-08

Family

ID=83861295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210962015.0A Pending CN115311414A (en) 2022-08-11 2022-08-11 Live-action rendering method and device based on digital twinning and related equipment

Country Status (1)

Country Link
CN (1) CN115311414A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination