CN115937482A - Holographic scene dynamic construction method and system capable of adapting to screen size - Google Patents


Info

Publication number
CN115937482A
Authority
CN
China
Prior art keywords
scene
bridge construction
holographic
holographic scene
bridge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211484940.3A
Other languages
Chinese (zh)
Other versions
CN115937482B (en)
Inventor
朱军
吴鉴霖
郭煜坤
党沛
李维炼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202211484940.3A
Publication of CN115937482A
Application granted
Publication of CN115937482B
Legal status: Active
Anticipated expiration



Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The invention discloses a holographic scene dynamic construction method and system adaptive to screen size, belongs to the field of surveying and mapping geographic information, and solves the problems of high detail loss and a low scene-rendering frame rate in prior-art holographic scene dynamic construction methods. The method comprises the steps of acquiring digital twin bridge construction scene data; importing the digital twin bridge construction scene data to construct a bridge construction holographic scene, then dynamically constructing a screen-size-adaptive holographic scene based on the bridge construction holographic scene, and obtaining the positions at which the visual windows of the bridge construction holographic scene draw their views, thereby obtaining a bridge construction holographic scene adapted to the screen size; and optimizing the bridge construction holographic scene during interaction and drawing the optimized bridge construction holographic scene based on the digital twin bridge construction scene data to obtain the drawn bridge construction holographic scene. The method is used for dynamically constructing holographic scenes.

Description

Holographic scene dynamic construction method and system capable of adapting to screen size
Technical Field
A holographic scene dynamic construction method and system adaptive to screen size are used for dynamically constructing holographic scenes and belong to the field of surveying and mapping geographic information.
Background
As key nodes for the interconnection of transport facilities, bridges and junction projects have gradually extended into difficult mountainous areas. Complex terrain, frequent mountain disasters, severe meteorological conditions and other factors in mountainous areas pose great challenges to bridge construction technology and engineering quality. Against the strategic background of informatization, and with the rapid development of industrial technology and new-generation information technology, deeply integrating the construction process of bridges in difficult mountainous areas with modern information technology and gradually developing intelligent construction systems and refined management modes is an important future direction for bridge construction. Construction conditions for bridges in difficult mountainous areas are severe, construction periods are long and control is difficult; geological disasters such as landslides and debris flows at unfavorable geological positions, as well as the influence of mountain canyon wind, sunlight and temperature differences on construction, must be considered; construction involves many types of components and a complex assembly process; and, in addition, different construction sequences lead to different stress states of the bridge and affect its structural stability. Therefore, how to account for the comprehensive influence of the surrounding geographic environment on bridge construction is a key scientific problem that must urgently be solved in order to carry out digital simulation of the whole bridge construction process in a difficult environment.
Digital twin technology addresses the interaction between a digital model and its physical entity and, as a key enabling technology for putting digital transformation ideas and goals into practice, plays an important role in realizing digital simulation of the whole bridge construction process. At present, research on and application of digital twin models in bridge engineering are still at an early stage, and most existing research focuses on conceptual abstraction of the digital twin and its application to specific engineering projects. The key to digital twin bridge construction simulation at this stage is to establish a high-fidelity three-dimensional visual model of the physical entity (the bridge and its surrounding scene) so as to provide a three-dimensional visual operation platform for subsequent applications. Virtual geographic environments and building information models are the key methods for constructing such a platform.
Holographic projection technology is considered one of the best three-dimensional visualization means by virtue of its ability to present three-dimensional objects comprehensively and to display picture content in an omnidirectional, three-dimensional manner. With its unique display effect, holography provides a three-dimensional visualization means for digital twins. Compared with traditional screen-based three-dimensional visualization or head-mounted VR, holographic visualization allows a user to conveniently observe a multi-angle, multi-directional three-dimensional scene with the naked eye and without other equipment, provides a new visualization and interaction means for centralized study and judgment by multiple people, and meets the digital twin requirements of real-time discussion and timely feedback. At present, holographic projection technology is mainly applied in fields such as education, games, military affairs, industry, cultural relic exhibition and medicine. By contrast, holographic techniques have found little use in digital twinning, particularly in bridge construction. Applying holographic technology to digital twinning in the prior art has the following technical problems:
1. The holographic scene dynamic construction methods of the prior art suffer from high detail loss and a low scene-rendering frame rate.
2. Real-time dynamic construction cannot be realized: a holographic video source needs to be made in advance and the holographic display cannot adapt to the screen size, resulting in a poor holographic display effect.
Disclosure of Invention
The invention aims to provide a method and a system for dynamically constructing a holographic scene adaptive to screen size, which solve the problems of high detail loss and a low scene-rendering frame rate in prior-art holographic scene dynamic construction methods.
In order to achieve the purpose, the invention adopts the technical scheme that:
a dynamic construction method of a holographic scene with a self-adaptive screen size comprises the following steps:
step 1, acquiring digital twin bridge construction scene data;
step 2, importing digital twin bridge construction scene data to construct a bridge construction holographic scene, and then carrying out self-adaption screen size holographic scene dynamic construction based on the bridge construction holographic scene to obtain the position of a visual window drawing view of the bridge construction holographic scene, so as to obtain a bridge construction holographic scene with self-adaption screen size;
and 3, optimizing the bridge construction holographic scene obtained in the step 2 during interaction, and drawing the optimized bridge construction holographic scene based on the digital twin bridge construction scene data to obtain the drawn bridge construction holographic scene.
Further, the digital twin bridge construction scene data in step 1 include digital elevation data, thematic data, bridge BIM models of the physical bridge components, inclination data, monitoring data, management data and geographic information data, wherein the digital elevation data include terrain; the thematic data include rivers, vegetation, roads, ground objects and measurement data; the bridge BIM data include building information models of the bridge deck, piers, suspension cables and bridge spans; the inclination data include digital surface models of terrain, ground objects, rivers, trees and buildings; the monitoring data include bridge construction stage monitoring data, wind field monitoring data in the construction scene, temperature field monitoring data and bridge stress field monitoring data; the management data include bridge component attributes and bridge construction progress; and the geographic information data include images, terrain, roads, rivers and buildings.
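To make the composition of this data concrete, the following is a minimal sketch of a container for the scene data enumerated above. It is illustrative only; the class and field names are assumptions for exposition and are not part of the patented method.

```python
# Illustrative container for the digital twin bridge construction scene data.
# Field names are assumed; each field would hold the corresponding datasets
# listed in the description above.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DigitalTwinBridgeSceneData:
    digital_elevation: Dict[str, object] = field(default_factory=dict)  # terrain
    thematic_data: Dict[str, object] = field(default_factory=dict)      # rivers, vegetation, roads, ground objects, measurements
    bridge_bim: Dict[str, object] = field(default_factory=dict)         # deck, pier, suspension cable, bridge span BIM models
    inclination_data: Dict[str, object] = field(default_factory=dict)   # digital surface models of terrain, buildings, etc.
    monitoring_data: Dict[str, object] = field(default_factory=dict)    # construction stage, wind, temperature, stress fields
    management_data: Dict[str, object] = field(default_factory=dict)    # component attributes, construction progress
    geographic_data: Dict[str, object] = field(default_factory=dict)    # imagery, terrain, roads, rivers, buildings
```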
Further, the specific steps of step 2 are:
2.1, importing digital twin bridge construction scene data to construct a bridge construction holographic scene, arranging four virtual cameras for rendering and drawing scenes in real time in the bridge construction holographic scene based on the bridge construction holographic scene, and constructing a linkage window based on the four virtual cameras, namely, the four virtual cameras are always aligned to a unified area with the same action, wherein the bridge construction holographic scene is the virtual scene for bridge construction;
and 2.2, carrying out self-adaptive screen size picture segmentation and dynamic layout based on a Pepper principle, a holographic projection imaging principle and a linkage window, and obtaining the position of a visual window drawing view of the bridge construction holographic scene after dynamic layout, namely obtaining the bridge construction holographic scene with the self-adaptive screen size.
Further, the specific steps of step 2.1 are:
Step 2.11, importing the digital twin bridge construction scene data to construct a bridge construction holographic scene; then, based on the bridge construction holographic scene, arranging in it four virtual cameras used for rendering and drawing the scene in real time; taking the plane of the four virtual cameras as the XY plane, the direction perpendicular to it as the Z axis, and the center of the bridge construction holographic scene as the origin to establish a coordinate system; and calculating the transformation relation between the virtual cameras so that the cameras in the bridge construction holographic scene aim at the same object in the same posture, wherein the transformation relation between the virtual cameras comprises the translation, scaling and rotation between every two virtual cameras, and the translation means that when a virtual camera in the bridge construction holographic scene is displaced, the other three virtual cameras are displaced by the same amount;
setting the distance from each virtual camera to the origin to be l 0 Moving the bridge building holographic scene to a point (x) 0 ,y 0 ,z 0 ) Then, the rotation of the virtual camera in the Y-axis direction is transformed into:
Figure SMS_1
in the Y-axis direction, the scaling of the virtual camera is:
Figure SMS_2
wherein |y_0| ≤ l_0;
Similarly, the rotation of the virtual camera in the X-axis direction is transformed into:
Figure SMS_3
in the X-axis direction, the scaling of the virtual camera is:
Figure SMS_4
wherein |x_0| ≤ l_0;
Wherein α_y is the Euler angle of the virtual camera coordinate system about the y-axis, β_z is the rotation angle about the z-axis of the coordinate system after the virtual camera has rotated, α_z is the Euler angle of the virtual camera coordinate system about the z-axis, x_0, y_0 and z_0 are respectively the distances the bridge construction holographic scene moves in the x-, y- and z-axis directions, and l_0 is the distance from each virtual camera to the bridge construction holographic scene;
and 2.12, adjusting each virtual camera based on the transformation relation among the virtual cameras to obtain a linkage window, namely uniformly linking the four virtual cameras.
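The following is a hedged sketch of the camera linkage described in steps 2.11-2.12. The patent's exact rotation and scaling formulas are given only as figure images, so the arctangent aiming angles and the distance-ratio scale used below are an assumed, illustrative reconstruction rather than the patented equations; the function and variable names are likewise placeholders.

```python
# Hedged sketch: four virtual cameras sit in the XY plane at distance l_0 from
# the origin (the scene centre). When the scene is moved to (x_0, y_0, z_0),
# every camera is re-aimed and re-scaled so the linked windows keep showing
# the same object in the same pose.
import math

def relink_camera(cam_pos, scene_offset):
    """cam_pos: camera position in the XY plane, e.g. (0.0, -l_0, 0.0) for the
    front camera; scene_offset: the point (x_0, y_0, z_0) the scene moved to."""
    cx, cy, cz = cam_pos
    x0, y0, z0 = scene_offset
    vx, vy, vz = x0 - cx, y0 - cy, z0 - cz        # camera-to-scene vector
    yaw = math.atan2(vx, vy)                      # assumed in-plane aiming angle
    pitch = math.atan2(vz, math.hypot(vx, vy))    # assumed out-of-plane aiming angle
    l0 = math.hypot(cx, cy)                       # original camera-to-origin distance
    dist = max(math.sqrt(vx * vx + vy * vy + vz * vz), 1e-9)
    scale = l0 / dist                             # assumed scale keeping apparent size constant
    return yaw, pitch, scale

def relink_all(cameras, scene_offset):
    # Step 2.12: the same interaction is applied to all four cameras so the
    # linked windows stay aimed at the same area in unison.
    return {name: relink_camera(pos, scene_offset) for name, pos in cameras.items()}
```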
Further, the specific steps of step 2.2 are:
Step 2.21, based on the holographic projection imaging principle, defining the screen as a pixel coordinate system, with the four vertex coordinates of the screen defined as a(0, 0), b(0, n), c(m, 0) and d(m, n), wherein m and n are the resolution of the screen, so that the coordinate of the screen center o is determined as (m/2, n/2); in order to keep the constructed holographic picture always at the midpoint of the picture and conform to the holographic projection imaging principle, constructing a square of side length n centered on the point o, the four vertices of the square being a'(m/2 - n/2, 0), b'(m/2 - n/2, n), c'(m/2 + n/2, 0) and d'(m/2 + n/2, n); and performing adaptive screen-size picture segmentation based on the four vertices of the square to obtain the imaging area of the holographic picture;
step 2.22, dynamically arranging four visual windows dynamically generated by the four virtual cameras based on the holographic image imaging area, and obtaining the positions of the visual window drawing views of the bridge construction holographic scene after dynamic arrangement, wherein the four visual windows are the linkage windows obtained in the step 2.12;
when the maximum value of the frame range after dynamic layout is obtained according to the holographic projection imaging principle, the bottom side length is as follows:
L=w+2h (4)
h/w = n/m, then
Figure SMS_5
Figure SMS_6
Wherein w is the width of the visible window, and h is the height of the visible window;
after dynamic layout, the front view positions drawn by the visual windows are as follows:
Figure SMS_7
the rear view positions are:
Figure SMS_8
the left view positions are:
Figure SMS_9
the right view positions are:
Figure SMS_10
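The following is a hedged sketch of the picture segmentation and dynamic layout of steps 2.21-2.22. The window-size and view-position formulas above are available only as figure images, so this sketch assumes that the layout must fit inside the central square of side n, giving w = m·n/(m + 2n) and h = n²/(m + 2n) from L = w + 2h and h/w = n/m, and it assumes a cross-shaped placement of the four linked views; both are illustrative reconstructions, not the patented formulas.

```python
# Hedged sketch: carve the holographic imaging square out of an m x n screen
# and lay the four linked view windows out inside it.
def hologram_layout(m, n):
    # imaging square centred at (m/2, n/2) with side n (step 2.21)
    square = {
        "a'": (m / 2 - n / 2, 0), "b'": (m / 2 - n / 2, n),
        "c'": (m / 2 + n / 2, 0), "d'": (m / 2 + n / 2, n),
    }
    # assumed maximal window size satisfying L = w + 2h <= n with h / w = n / m
    w = m * n / (m + 2 * n)
    h = n * n / (m + 2 * n)
    cx, cy = m / 2, n / 2
    views = {  # assumed cross arrangement, each entry (x, y, width, height)
        "front": (cx - w / 2, 0, w, h),
        "back":  (cx - w / 2, n - h, w, h),
        "left":  (cx - n / 2, cy - w / 2, h, w),          # side views rotated 90 degrees
        "right": (cx + n / 2 - h, cy - w / 2, h, w),
    }
    return square, views

if __name__ == "__main__":
    square, views = hologram_layout(1920, 1080)  # one of the resolutions tested in the embodiment
    print(square)
    print(views)
```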
further, the specific steps of step 3 are:
step 3.1, optimizing the bridge construction holographic scene during interaction;
and 3.2, performing real-time rendering and drawing of the bridge construction holographic scene based on the digital twin bridge construction scene data loaded in the digital twin platform by the optimized bridge construction holographic scene to obtain the drawn bridge construction holographic scene.
Further, the specific steps of step 3.1 are:
step 3.11, acquiring fuzzy ranges of objects in each visual window during interaction, namely fuzzy areas, wherein the fuzzy areas comprise fuzzy areas generated by linear motion fuzzy and rotary motion fuzzy, and calculating a fuzzy degree through point spread functions of the linear motion fuzzy and the rotary motion fuzzy, wherein the interaction comprises movement, rotation and scaling;
and 3.12, simplifying the fuzzy area by adopting a simplifying means, namely reducing the data precision to obtain a simplified bridge construction holographic scene, namely obtaining the optimized bridge construction holographic scene during interaction, wherein the simplifying means comprises network simplification or texture compression.
Further, in step 3.11:
The point spread function of linear motion blur has two parameters, the total displacement and the motion direction. The blurred image g(x, y) is produced by the original image f(x, y) moving linearly in a direction at an angle α to the x-axis, and the value at any point of the blurred image is:
Figure SMS_11
wherein g(x, y) is the value at any point of the blurred image, x_0(t) is the motion component of the bridge construction holographic scene in the x direction at time t, and y_0(t) is its motion component in the y direction at time t; if the total displacement of the object is a and the total time is T_m, the rate of motion is
Figure SMS_12
Figure SMS_13
Then there are:
Figure SMS_14
the fuzzy area of the linear motion blur is obtained after discretization of the formula 12, which is as follows:
Figure SMS_15
wherein L' is the number of pixels the bridge construction holographic scene moves, namely the blur scale, i is the i-th pixel, u = [i·cos α], v = [i·sin α], and α represents the movement direction;
Calculated by convolution, the blur region may be expressed as:
g(x, y) = f(x, y) * h(u, v)
where h(u, v) is the point spread function:
Figure SMS_16
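As a hedged illustration of this linear motion blur model, the short sketch below averages the image along the motion path with a uniform 1/L' weight; the exact point spread function above is given only as a figure image, so the uniform kernel is an assumption, and numpy is used purely for exposition.

```python
import numpy as np

def linear_blur(f, blur_scale, alpha):
    """f: original image as a 2-D array; blur_scale: L', the number of pixels
    the scene moved; alpha: motion direction in radians. Returns the blurred
    image g by averaging f along the motion path."""
    blur_scale = max(1, int(blur_scale))
    g = np.zeros_like(f, dtype=float)
    for i in range(blur_scale):
        u = int(round(i * np.cos(alpha)))    # u = [i cos(alpha)]
        v = int(round(i * np.sin(alpha)))    # v = [i sin(alpha)]
        g += np.roll(np.roll(f, u, axis=1), v, axis=0)   # sample f(x - u, y - v)
    return g / blur_scale                    # uniform point spread weight 1 / L'
```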
Rotational motion blur differs from linear motion blur: it is a space-variant motion blur whose blur parameters differ along different blur paths, and the farther a point is from the rotation center, the larger its blur scale; points at the same distance from the rotation center have the same degree of blur, that is, the image on the same ring is blurred to the same degree, and the rotational motion blur is distributed along the different rotation paths;
if the rotation center is the origin (0, 0), the distance from any pixel point i (x, y) in the blurred image g (x, y) to the rotation center is
Figure SMS_17
Let the object rotate for a time T_s with rotation angular velocity ω; the relationship between the blurred image g(x, y) and the original image f(x, y) is:
Figure SMS_18
expressed in polar coordinate form:
Figure SMS_19
wherein r is the radial coordinate, namely the distance from the origin to i(x, y), and θ is the angular coordinate, namely the angle whose initial side is the positive x-axis and whose terminal side is the ray from the origin through i(x, y);
let l = r, θ, s = r ω t, r being denoted as subscript, h r The point spread function is h, the point spread function of any pixel point i (x, y) with the distance r from the rotation center is h r (i) Then:
Figure SMS_20
wherein
Figure SMS_21
After the discretization processing is performed on the formula 16, a blurred region of the rotational motion blur is obtained, and the following results are obtained:
Figure SMS_22
wherein i = 0, 1, 2, ..., N_r - 1; g_r(i) and f_r(i) are the blurred pixel value and the original gray value of the i-th pixel point on the blur path; N_r represents the number of pixels; and L_r represents the blur scale expressed as a number of pixels;
the point spread function in the form of a rotational motion blur matrix is obtained based on equation 17 as:
Figure SMS_23
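As a hedged illustration of the rotational motion blur model of equations 16 to 18, the sketch below averages pixel values along each circular blur path with a uniform 1/L_r weight, where the blur scale L_r grows with the distance r from the rotation center. The matrix form of the point spread function is available only as a figure image, so the uniform weight and the arc-length step are assumptions for exposition.

```python
import numpy as np

def rotational_blur(f, omega, t_s, center):
    """f: original image (2-D array); omega: angular velocity in rad/s;
    t_s: rotation time; center: (cx, cy) rotation centre in pixels."""
    rows, cols = f.shape
    cx, cy = center
    g = np.zeros_like(f, dtype=float)
    for y in range(rows):
        for x in range(cols):
            r = np.hypot(x - cx, y - cy)                 # distance to the rotation centre
            l_r = max(1, int(round(r * omega * t_s)))    # assumed blur scale in pixels
            theta0 = np.arctan2(y - cy, x - cx)
            acc = 0.0
            for i in range(l_r):                         # average along the blur arc
                theta = theta0 - i / max(r, 1e-6)        # ~1 pixel of arc length per step
                xi = int(round(cx + r * np.cos(theta)))
                yi = int(round(cy + r * np.sin(theta)))
                if 0 <= xi < cols and 0 <= yi < rows:
                    acc += f[yi, xi]
            g[y, x] = acc / l_r                          # uniform 1 / L_r point spread weight
    return g
```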
a screen size adaptive holographic scene dynamic construction system comprises:
an acquisition module: acquiring digital twin bridge construction scene data;
a dynamic construction module: importing digital twin bridge construction scene data to construct a bridge construction holographic scene, and then carrying out self-adaption screen size holographic scene dynamic construction based on the bridge construction holographic scene to obtain the position of a visual window drawing view of the bridge construction holographic scene, so as to obtain a bridge construction holographic scene with self-adaption screen size;
a drawing module: and during interaction, optimizing the bridge construction holographic scene obtained by the dynamic construction module, and drawing the optimized bridge construction holographic scene based on the digital twin bridge construction scene data to obtain the drawn bridge construction holographic scene.
Further, the dynamic building module is specifically implemented by the following steps:
2.1, constructing a digital twin bridge construction scene based on the digital twin bridge construction scene data to obtain a bridge construction holographic scene, arranging four virtual cameras used for rendering and drawing the scene in real time in the bridge construction holographic scene, and constructing a linkage window based on the four virtual cameras, namely, the four virtual cameras are always aligned to a unified area with the same action, wherein the bridge construction holographic scene is the virtual scene for bridge construction;
2.2, carrying out self-adaptive screen size picture segmentation and dynamic layout based on a Pepper's principle, a holographic projection imaging principle and a linkage window, and obtaining the position of a visual window drawing view of the bridge construction holographic scene after dynamic layout, namely obtaining the bridge construction holographic scene with the self-adaptive screen size;
the drawing module is concretely implemented by the following steps:
step 3.1, optimizing the bridge construction holographic scene during interaction;
and 3.2, performing real-time rendering and drawing of the bridge construction holographic scene based on the digital twin bridge construction scene data loaded in the digital twin platform by the optimized bridge construction holographic scene to obtain the drawn bridge construction holographic scene.
Compared with the prior art, the invention has the advantages that:
1. the method can realize the real-time construction of the self-adaptive screen-size-division bridge construction holographic scene, and the provided optimization method can reduce 30-45% of scene drawing data on the premise of ensuring less detail loss, thereby obviously improving the frame rate.
2. The invention applies a motion blur algorithm that takes motion time, motion direction and other related parameters to calculate the blur range produced when the scene moves; data of lower precision are allocated to the blur range and data of higher precision are allocated where no blur occurs, which reduces the data loading pressure and improves drawing efficiency. That is, by calculating the linear motion blur and the rotational motion blur, the blur range of objects in the bridge construction holographic scene during interaction, namely the blur area, is obtained, and fewer resources are allocated to the blur area through the corresponding point spread functions for reduced-resolution rendering;
3. the method can dynamically construct the holographic scene of the bridge construction in real time without pre-manufacturing a holographic video source, and the holographic display is carried out in a self-adaptive screen size manner, so that the holographic display effect is good.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic diagram of the framework of the present invention;
FIG. 2 is a schematic diagram of a framework for dynamically constructing a holographic scene with an adaptive screen size according to the present invention;
FIG. 3 is a schematic representation of the holographic imaging region segmentation in accordance with the present invention;
FIG. 4 is a diagram illustrating a software and hardware configuration table according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a prototype system interface in an embodiment of the invention;
FIG. 6 is a schematic diagram of an experiment result and analysis of a holographic scene construction with an adaptive screen size according to an embodiment of the present invention;
FIG. 7 shows the automatic construction and visualization of the hologram at 1920 x 1080 resolution in an embodiment of the invention, where (a) is the automatically constructed picture and (b) is the visualization effect;
FIG. 8 shows the automatic construction and visualization of the hologram at 1920 x 1200 resolution in an embodiment of the invention, where (a) is the automatically constructed picture and (b) is the visualization effect;
FIG. 9 shows the automatic construction and visualization of the hologram at 1680 x 1050 resolution in an embodiment of the invention, where (a) is the automatically constructed picture and (b) is the visualization effect;
FIG. 10 shows the automatic construction and visualization of the hologram at 1440 x 1050 resolution in an embodiment of the invention, where (a) is the automatically constructed picture and (b) is the visualization effect;
FIG. 11 is a comparison graph of scene rendering efficiency before and after optimization according to an embodiment of the invention;
fig. 12 is a scene rendering diagram of an embodiment of the present invention, in which (a) is a view scene before optimization, (b) is a view scene after linear motion blur, and (c) is a view scene after rotational motion blur before optimization;
fig. 13 is a schematic diagram of recording the current frame rate of rendering a holographic scene and the number of triangles rendered in the scene every 1s in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
In order to solve the above problems, real-time holographic scene visualization research has been carried out and a holographic scene dynamic construction method adaptive to screen size is provided. Taking the holographic imaging principle into account, screen-size-adaptive holographic pictures are dynamically constructed through the linkage of four cameras, realizing real-time visualization of the bridge construction holographic scene. On this basis, the visual characteristics of the human eye are considered and scene visualization is optimized with a motion blur algorithm, improving scene drawing efficiency and achieving efficient rendering of the bridge construction scene.
A dynamic construction method of a holographic scene with a self-adaptive screen size comprises the following steps:
step 1, acquiring digital twin bridge construction scene data; the digital twin bridge construction scene data comprises digital elevations, thematic data, bridge BIM models of bridge entity components, inclination data, monitoring data, management data and geographic information data, wherein the digital elevations comprise terrains, the thematic data comprise rivers, vegetations, roads, ground objects and measurement data, the bridge BIM data comprise bridge decks, piers, suspension cables and building information models of bridge spans of the bridge, the inclination data comprise terrains, ground objects, rivers, trees and digital ground surface models of the buildings, the monitoring data comprise bridge construction stage monitoring data, wind field monitoring data in a construction scene, temperature field monitoring data and bridge stress field monitoring data, the management data comprise bridge component attributes and bridge construction progress, and the geographic information data comprise images, terrains, roads, rivers and buildings. In practice, data for other scenarios may also be used.
Step 2, importing digital twin bridge construction scene data to construct a bridge construction holographic scene, and then dynamically constructing a self-adaptive screen size holographic scene based on the bridge construction holographic scene to obtain the position of a visual window drawing view of the bridge construction holographic scene, so as to obtain the bridge construction holographic scene with the self-adaptive screen size; the method comprises the following specific steps:
step 2.1, importing digital twin bridge construction scene data to construct a bridge construction holographic scene, arranging four virtual cameras for rendering and drawing scenes in real time in the bridge construction holographic scene based on the bridge construction holographic scene, and constructing a linkage window based on the four virtual cameras, namely the four virtual cameras are always aligned to a unified area with the same action, wherein the bridge construction holographic scene is the virtual scene for bridge construction; the method comprises the following specific steps:
Step 2.11, importing the digital twin bridge construction scene data to construct a bridge construction holographic scene; then, based on the bridge construction holographic scene, arranging in it four virtual cameras used for rendering and drawing the scene in real time; taking the plane of the four virtual cameras as the XY plane, the direction perpendicular to it as the Z axis, and the center of the bridge construction holographic scene as the origin to establish a coordinate system; and calculating the transformation relation between the virtual cameras so that the cameras in the bridge construction holographic scene aim at the same object in the same posture, wherein the transformation relation between the virtual cameras comprises the translation, scaling and rotation between every two virtual cameras, and the translation means that when a virtual camera in the bridge construction holographic scene is displaced, the other three virtual cameras are displaced by the same amount;
setting the distance from each virtual camera to the origin to be l 0 Moving the bridge building holographic scene to point (x) 0 ,y 0 ,z 0 ) Then, the rotation of the virtual camera in the Y-axis direction is transformed into:
Figure SMS_24
in the Y-axis direction, the scaling of the virtual camera is:
Figure SMS_25
wherein |y_0| ≤ l_0;
Similarly, the rotation of the virtual camera in the X-axis direction is transformed into:
Figure SMS_26
in the X-axis direction, the scaling of the virtual camera is:
Figure SMS_27
wherein |x_0| ≤ l_0;
Wherein α_y is the Euler angle of the virtual camera coordinate system about the y-axis, β_z is the rotation angle about the z-axis of the coordinate system after the virtual camera has rotated, α_z is the Euler angle of the virtual camera coordinate system about the z-axis, x_0, y_0 and z_0 are respectively the distances the bridge construction holographic scene moves in the x-, y- and z-axis directions, and l_0 is the distance from each virtual camera to the bridge construction holographic scene;
and 2.12, adjusting each virtual camera based on the transformation relation among the virtual cameras to obtain a linkage window, namely uniformly linking the four virtual cameras.
And 2.2, carrying out self-adaptive screen size picture segmentation and dynamic layout based on a Pepper's principle, a holographic projection imaging principle (namely the holographic imaging characteristic in the graph) and a linkage window, and obtaining the position of a visual window drawing view of the bridge construction holographic scene after dynamic layout, namely obtaining the bridge construction holographic scene with the self-adaptive screen size. The method comprises the following specific steps:
Step 2.21, based on the holographic projection imaging principle, defining the screen as a pixel coordinate system, with the four vertex coordinates of the screen defined as a(0, 0), b(0, n), c(m, 0) and d(m, n), wherein m and n are the resolution of the screen, so that the coordinate of the screen center o is determined as (m/2, n/2); in order to keep the constructed holographic picture always at the midpoint of the picture and conform to the holographic projection imaging principle, constructing a square of side length n centered on the point o, the four vertices of the square being a'(m/2 - n/2, 0), b'(m/2 - n/2, n), c'(m/2 + n/2, 0) and d'(m/2 + n/2, n); and performing adaptive screen-size picture segmentation based on the four vertices of the square to obtain the imaging area of the holographic picture;
step 2.22, dynamically arranging four visual windows dynamically generated by the four virtual cameras based on the holographic image imaging area, and obtaining the positions of the visual window drawing views of the bridge construction holographic scene after dynamic arrangement, wherein the four visual windows are the linkage windows obtained in the step 2.12;
when the maximum value of the frame range after dynamic layout is obtained according to the holographic projection imaging principle, the bottom side length is as follows:
L=w+2h (4)
h/w = n/m, then
Figure SMS_28
Figure SMS_29
Wherein w is the width of the visible window, and h is the height of the visible window;
after dynamic layout, the front view positions drawn by the visual windows are as follows:
Figure SMS_30
the rear view positions are:
Figure SMS_31
the left view positions are:
Figure SMS_32
the right view positions are:
Figure SMS_33
the steps are based on the principle of holographic technology, dynamically construct a visual window of self-adaptive screen resolution and reasonably and dynamically arrange according to the characteristics of holographic imaging, so as to achieve the real-time holographic effect visual display of self-adaptive screen size.
And 3, optimizing the bridge construction holographic scene obtained in the step 2 during interaction, and drawing the optimized bridge construction holographic scene based on the digital twin bridge construction scene data to obtain the drawn bridge construction holographic scene.
The method comprises the following specific steps:
step 3.1, optimizing the bridge construction holographic scene during interaction; the method comprises the following specific steps:
step 3.11, acquiring fuzzy ranges of objects in each visual window during interaction, namely fuzzy areas, wherein the fuzzy areas comprise fuzzy areas generated by linear motion fuzzy and rotational motion fuzzy, and obtaining fuzzy degrees through point spread functions of the linear motion fuzzy and the rotational motion fuzzy, wherein the interaction comprises moving, rotating and zooming;
the persistence of vision is a phenomenon in which light generated by the retina by light ceases to act on the retina and remains for a certain period of time. When an object moves rapidly, after an image seen by human eyes disappears, due to the phenomenon of vision persistence, the impression of the visual nerves on the object does not disappear immediately, but a scene (Jianvingjie and the like 2015) with motion blur (motion blur) seen by the human eyes is formed after a period of time, and the motion blur can be caused by instantaneous motion of a holographic scene during construction interaction or even display. Meanwhile, the dynamic construction of the holographic scene is performed by four virtual cameras at the same time, and compared with a single virtual camera, the construction of the holographic scene inevitably causes huge pressure on the rendering. Considering that the holographic scene exploration interaction is a human subject and needs to integrate the physiological characteristics and psychological requirements of people, the optimization method suitable for bridge construction of the holographic scene is designed, the scene drawing efficiency is further improved, and the vertigo sense of a user during exploration and analysis is reduced.
In the bridge construction holographic scene, the bridge body is mostly located in the central area of the scene, and the scene edges are mostly terrain, rivers, sky and other elements related to the bridge. Therefore, taking the bridge, i.e. the position at which the cameras are aimed in the bridge construction holographic scene, as the center, the designed algorithm calculates the motion blur area formed around the bridge and allocates fewer resources to the blur area for reduced-resolution rendering; the rendering data can thus be greatly reduced while the loss of perceived detail is minimized.
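The following is an illustrative sketch of this idea: objects falling inside the computed blur area receive lower-precision data (simplified meshes, compressed textures), while the sharp central region keeps full detail. The blur-scale threshold and the level-of-detail interface are assumptions for exposition, not part of the patent text.

```python
def assign_detail_levels(scene_objects, blur_scale_at, blur_threshold=3):
    """scene_objects: iterable of objects exposing a .screen_pos attribute and
    two detail flags; blur_scale_at(pos): blur scale in pixels at a screen
    position, e.g. obtained from the linear or rotational blur models."""
    for obj in scene_objects:
        blurred = blur_scale_at(obj.screen_pos) > blur_threshold
        obj.use_simplified_mesh = blurred       # reduced vertex / triangle count
        obj.use_compressed_texture = blurred    # lower-resolution textures
```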
In the bridge construction holographic scene, a virtual camera is aimed at the scene object in real time to simulate the state of the human eye observing the scene. The point spread function of linear motion blur has two parameters, the total displacement and the motion direction. The blurred image g(x, y) is produced by the original image f(x, y) moving linearly in a direction at an angle α to the x-axis, and the value at any point of the blurred image is:
Figure SMS_34
wherein g(x, y) is the value at any point of the blurred image, x_0(t) is the motion component of the bridge construction holographic scene in the x direction at time t, and y_0(t) is its motion component in the y direction at time t; if the total displacement of the object is a and the total time is T_m, the rate of motion is
Figure SMS_35
Figure SMS_36
Then there are:
Figure SMS_37
the fuzzy area of the linear motion blur is obtained after discretization of the formula 12, which is as follows:
Figure SMS_38
wherein L' is the number of pixels the bridge construction holographic scene moves, namely the blur scale, i is the i-th pixel, u = [i·cos α], v = [i·sin α], and α represents the movement direction;
Calculated by convolution, the blur region may be expressed as:
g(x, y) = f(x, y) * h(u, v)
where h(u, v) is the point spread function:
Figure SMS_39
Rotational motion blur differs from linear motion blur: it is a space-variant motion blur whose blur parameters differ along different blur paths, and the farther a point is from the rotation center, the larger its blur scale; points at the same distance from the rotation center have the same degree of blur, that is, the image on the same ring is blurred to the same degree, and the rotational motion blur is distributed along the different rotation paths;
if the rotation center is the origin (0, 0), the distance from any pixel point i (x, y) in the blurred image g (x, y) to the rotation center is
Figure SMS_40
Let the object rotate for a time T_s with rotation angular velocity ω; the relationship between the blurred image g(x, y) and the original image f(x, y) is:
Figure SMS_41
expressed in polar coordinate form:
Figure SMS_42
wherein r is the radial coordinate, namely the distance from the origin to i(x, y), and θ is the angular coordinate, namely the angle whose initial side is the positive x-axis and whose terminal side is the ray from the origin through i(x, y);
let l = r, θ, s = r ω t, r being denoted as subscript, h r The point spread function is h, the point spread function of any pixel point i (x, y) with the distance r from the rotation center is h r (i) Then:
Figure SMS_43
wherein
Figure SMS_44
After the discretization processing is performed on the formula 16, a blurred region of the rotational motion blur is obtained, and the following results are obtained:
Figure SMS_45
wherein i = 0, 1, 2, ..., N_r - 1; g_r(i) and f_r(i) are the blurred pixel value and the original gray value of the i-th pixel point on the blur path; N_r represents the number of pixels; and L_r represents the blur scale expressed as a number of pixels;
the point spread function in the form of a rotational motion blur matrix is obtained based on equation 17 as:
Figure SMS_46
and 3.12, simplifying the fuzzy area by adopting a simplifying means, namely reducing the data precision to obtain a simplified bridge construction holographic scene, namely obtaining the optimized bridge construction holographic scene during interaction, wherein the simplifying means comprises network simplification or texture compression.
And 3.2, performing real-time rendering and drawing of the bridge construction holographic scene based on the digital twin bridge construction scene data loaded in the digital twin platform by the optimized bridge construction holographic scene to obtain the drawn bridge construction holographic scene.
Examples
The selected case area is the construction scene of a large bridge under construction (101°46′ to 102°25′, 29°54′ to 30°10′) in Luzhou County of the Ganjin Autonomous Prefecture, Sichuan Province, which is taken as a case for test analysis; the area contains elements such as bridges, buildings, rivers and hills. The correctness of the proposed method is evaluated from two angles: the real-time construction efficiency of the holographic scene and the scene rendering efficiency.
The software and hardware configuration of the prototype development environment is shown in fig. 4. Based on this development environment, a digital-twin-driven bridge construction holographic scene interaction and query analysis system was developed; its main interface is shown in fig. 5, and its main functions include bridge construction holographic scene visualization, optimized drawing, project introduction, bridge component attribute query and bridge progress simulation.
The dynamic construction of the holographic scene (the bridge construction holographic scene) can be measured by two indicators: the holographic scene rendering frame rate and the holographic picture construction effect at different screen resolutions. The rendering time between frames of the holographic scene reflects the real-time efficiency of scene construction and also determines the user's experience of the holographic scene. After the data were loaded, the holographic scene construction time was recorded at different screen resolutions. The results of the experiment are shown in FIG. 6.
Research shows that the human eye perceives 24 to 30 frames per second. Over the whole test, the average construction time of each holographic frame at the different screen resolutions is 15.48 ms and the average rendering efficiency is 65.46 fps, exceeding the minimum number of frames captured by the human eye per second, so smooth rendering and real-time construction of the holographic scene can be guaranteed.
With the screen-adaptive holographic scene construction method, holographic pictures can be dynamically constructed and visually displayed for display terminals with different screen resolutions. Experiments prove that the method provided by the invention can realize holographic scene visualization at different screen resolutions, with smooth rendering and real-time construction of the holographic picture. Specifically, fig. 7 shows the effect of automatically constructing and visualizing the hologram at 1920 × 1080 resolution, fig. 8 at 1920 × 1200 resolution, fig. 9 at 1680 × 1050 resolution, and fig. 10 at 1440 × 1050 resolution.
The scene rendering optimization is a key for reducing the vertigo of a user and improving the interactive experience, and can effectively support interactive exploration and query analysis of the user on the holographic scene built in the bridge construction, so that the analysis of the scene drawing efficiency before and after the optimization is particularly important, and is specifically shown in fig. 11.
Fig. 12 analyses the data volume drawn for the bridge construction holographic scene. In this case, 15 moments during the user's interactive browsing of the scene were randomly sampled, and the triangular faces that need to be drawn by the motion-blur-based scene rendering optimization method are reduced by about 30%-45% compared with the original scene.
The rendering frame rate of the current holographic scene and the number of triangles rendered in the scene were recorded every 1 s. The results of the experiment are shown in FIG. 13.
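A minimal sketch of this measurement loop is given below; the engine query functions are placeholders assumed for illustration.

```python
import time

def sample_metrics(get_fps, get_triangle_count, duration_s=120):
    """Record the current rendering frame rate and rendered triangle count
    once per second, as in the experiment."""
    samples = []
    for _ in range(duration_s):
        samples.append((get_fps(), get_triangle_count()))
        time.sleep(1.0)
    return samples
```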
After optimization, the average rendering frame rate over the whole test is 77.79 fps, about 17.7% higher than the rendering efficiency before optimization, and the difference between the frame rates of the experimental group and the control group is statistically significant (p = 4.28E-15 < 0.05). The standard deviation of the experimental group is 9.57 and that of the control group is 11.85, reflecting that the frame rate of the experimental group is more stable than that of the control group. The frame rate improves mainly because a judgment is made according to the blur region: if an object lies in the blur region, its data precision is reduced by means such as mesh simplification and texture compression, so the scene data are greatly reduced while high-quality rendering of the important region is preserved, and rendering efficiency is improved. The data in the later stage of the experiment are mainly bridge data; unlike the mountain data of the earlier stage, building data have more vertices and triangular faces in the same area, so the rendering frame rates of both the experimental group and the control group drop after 80 seconds. According to the Pearson correlation coefficient calculation, the correlation coefficient between the experimental group's frame rate and the number of triangular faces is -0.89 and that of the control group is -0.91, which further shows that the rendering frame rate and the number of triangular faces are negatively correlated: rendering efficiency improves when the triangular faces in the scene are reduced.
Aiming at the problems that existing holographic scenes are mostly displayed statically, that a holographic video source is made in advance and that real-time display cannot be realized, a holographic scene dynamic construction method adaptive to screen size is proposed. First, according to the Pepper's ghost principle, picture dynamic layout adaptive to the screen size is achieved through the linkage of four cameras, realizing the dynamic construction of holographic pictures of the bridge construction scene; on this basis, the visual characteristics of the human eye are considered and scene visualization is optimized with a motion blur algorithm to improve scene drawing efficiency. A prototype system was built in the experimental area based on this method. Experiments show that the screen-size-adaptive holographic scene dynamic construction method can dynamically construct holographic pictures for display terminals with different screen resolutions, and that the holographic scene optimization method considering human visual characteristics raises the average rendering frame rate of the holographic scene to 77.79 frames per second, about 17.7% higher than before optimization, reduces the triangular faces that need to be rendered by about 30%-45% compared with the original scene, and rarely causes picture stutter or tearing when a large amount of scene data suddenly needs to be rendered in the holographic scene, which greatly improves the user experience of the holographic scene. Efficient rendering and display of the digital twin holographic scene for bridge construction can thus be realized.
In conclusion, the twin system of the bridge construction scene is visually displayed by means of holographic projection. Because existing holographic projection is mainly used for static display, with the holographic video source mostly made in advance and real-time, dynamic display lacking, the algorithm designed herein realizes real-time dynamic holographic projection display and screen-size-adaptive dynamic construction of the holographic scene; in addition, an algorithm designed according to the visual characteristics of the human eye optimizes the scene data loading rate and scene rendering efficiency.
Although this research has made some progress, much room for improvement remains: interactive applications in the current holographic scene are still weak. In subsequent research, gesture recognition interaction equipment will be combined so that users can browse, query and analyze the bridge construction scene through gesture interaction. This natural gesture interaction can reduce the user's human-computer interaction learning cost and improve cognitive efficiency.

Claims (10)

1. A dynamic construction method of a holographic scene with a self-adaptive screen size is characterized by comprising the following steps:
step 1, acquiring digital twin bridge construction scene data;
step 2, importing digital twin bridge construction scene data to construct a bridge construction holographic scene, and then carrying out self-adaption screen size holographic scene dynamic construction based on the bridge construction holographic scene to obtain the position of a visual window drawing view of the bridge construction holographic scene, so as to obtain a bridge construction holographic scene with self-adaption screen size;
and 3, optimizing the bridge construction holographic scene obtained in the step 2 during interaction, and drawing the optimized bridge construction holographic scene based on the digital twin bridge construction scene data to obtain the drawn bridge construction holographic scene.
2. The method for dynamically constructing the screen size-adaptive holographic scene according to claim 1, wherein the digital twin bridge construction scene data in the step 1 comprises digital elevation, thematic data, bridge BIM models of solid bridge parts, inclination data, monitoring data, management data and geographic information data, wherein the digital elevation comprises terrain, the thematic data comprises rivers, vegetation, roads, ground objects and measurement data, the bridge BIM data comprises bridge decks, piers, suspension cables and building information models of bridge spans of a bridge, the inclination data comprises terrain, ground objects, rivers, trees and digital earth surface models of buildings, the monitoring data comprises bridge construction stage monitoring data, wind field monitoring data in a construction scene, temperature field monitoring data and bridge stress field monitoring data, the management data comprises bridge part attributes and bridge construction progress, and the geographic information data comprises images, terrain, roads, rivers and buildings.
3. The method for dynamically constructing a screen-size-adaptive holographic scene according to claim 2, wherein the specific steps of step 2 are as follows:
2.1, importing digital twin bridge construction scene data to construct a bridge construction holographic scene, arranging four virtual cameras for rendering and drawing scenes in real time in the bridge construction holographic scene based on the bridge construction holographic scene, and constructing a linkage window based on the four virtual cameras, namely, the four virtual cameras are always aligned to a unified area with the same action, wherein the bridge construction holographic scene is the virtual scene for bridge construction;
and 2.2, carrying out self-adaptive screen size picture segmentation and dynamic layout based on the Pepper principle, the holographic projection imaging principle and the linkage window, and obtaining the position of a visual window drawing view of the bridge construction holographic scene after dynamic layout, thus obtaining the bridge construction holographic scene with the self-adaptive screen size.
4. The method for dynamically constructing a screen-size-adaptive holographic scene according to claim 3, wherein the specific steps of step 2.1 are as follows:
step 2.11, importing digital twin bridge construction scene data to construct a bridge construction holographic scene, arranging four virtual cameras for rendering and drawing the scene in real time in the bridge construction holographic scene based on the bridge construction holographic scene, taking the plane of the four virtual cameras as an XY axis, taking the direction perpendicular to the XY axis as a Z axis, taking the center of the bridge construction holographic scene as an origin to establish a coordinate system, and calculating a transformation relation between the virtual cameras so as to enable the cameras in the bridge construction holographic scene to be aligned to the same object in the same posture, wherein the transformation relation between the virtual cameras comprises translation, scaling and rotation between every two virtual cameras, the translation means that the virtual cameras in the bridge construction holographic scene are displaced, and the other three virtual cameras perform displacement in the same scale;
Let the distance from each virtual camera to the origin be l_0. After the bridge construction holographic scene is moved to a point (x_0, y_0, z_0), the rotation transformation of the virtual camera in the Y-axis direction is:
Figure FDA0003961757560000021
in the Y-axis direction, the scaling of the virtual camera is:
Figure FDA0003961757560000022
wherein |y_0| ≤ l_0;
Similarly, the rotation of the virtual camera in the X-axis direction is transformed into:
Figure FDA0003961757560000023
in the X-axis direction, the scaling of the virtual camera is:
Figure FDA0003961757560000024
wherein |x_0| ≤ l_0;
Wherein α_y is the Euler angle of the virtual camera coordinate system about the y-axis, β_z is the rotation angle about the z-axis of the coordinate system after the virtual camera has rotated, α_z is the Euler angle of the virtual camera coordinate system about the z-axis, x_0, y_0 and z_0 are respectively the distances the bridge construction holographic scene moves in the x-, y- and z-axis directions, and l_0 is the distance from each virtual camera to the bridge construction holographic scene;
and 2.12, adjusting each virtual camera based on the transformation relations among the virtual cameras to obtain the linkage window, i.e. linking the four virtual cameras so that they act in unison.
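For illustration only, the following Python sketch shows one way the linkage window of step 2.12 could behave: four virtual cameras sit on the XY plane at distance l_0 from the scene centre (here at yaw angles 0°, 90°, 180° and 270°, an assumption not stated in the claim), and moving the scene to (x_0, y_0, z_0) retargets all four cameras in a single step. The class and method names (VirtualCamera, LinkageWindow, move_scene) are hypothetical; the claim's exact rotation and scaling formulas are reproduced only as images above and are not re-derived here.

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    """One of the four render cameras placed on the XY plane, a distance l0 from the scene centre."""
    yaw_deg: float                       # fixed placement angle around the scene: 0/90/180/270
    l0: float                            # distance from the camera to the scene centre
    target: tuple = (0.0, 0.0, 0.0)      # point the camera currently looks at

    def position(self):
        a = math.radians(self.yaw_deg)
        # the camera stays on a circle of radius l0 in the XY plane, re-centred on its target
        return (self.target[0] + self.l0 * math.cos(a),
                self.target[1] + self.l0 * math.sin(a),
                self.target[2])

class LinkageWindow:
    """Keeps the four cameras linked: a move of the scene is applied to all of them at once."""
    def __init__(self, l0):
        self.cameras = [VirtualCamera(yaw, l0) for yaw in (0, 90, 180, 270)]

    def move_scene(self, x0, y0, z0):
        # the scene is shifted to (x0, y0, z0); every camera retargets in the same step,
        # which is the "same area, same action" behaviour of the linkage window
        for cam in self.cameras:
            cam.target = (x0, y0, z0)

# usage: move the bridge scene and read back the four synchronised camera positions
link = LinkageWindow(l0=10.0)
link.move_scene(1.0, 2.0, 0.0)
for cam in link.cameras:
    print(cam.yaw_deg, [round(c, 2) for c in cam.position()])
```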
5. The method for dynamically constructing a screen-size-adaptive holographic scene according to claim 4, wherein the specific steps of the step 2.2 are as follows:
step 2.21, based on the holographic projection imaging principle, defining the screen as a pixel coordinate system and the four vertex coordinates of the screen as a(0, 0), b(0, n), c(m, 0) and d(m, n), wherein m and n are the screen resolution, so that the coordinate of the screen centre o is (m/2, n/2); in order to keep the constructed holographic picture at the midpoint of the picture and conform to the holographic projection imaging principle, constructing a square with side length n centred on the point o, and then performing screen-size-adaptive picture segmentation based on the four vertices of the square, a'(m/2-n/2, 0), b'(m/2-n/2, n), c'(m/2+n/2, 0) and d'(m/2+n/2, n), to obtain the imaging area of the holographic picture;
step 2.22, dynamically arranging, on the holographic picture imaging area, the four visual windows generated by the four virtual cameras, and obtaining the positions of the view-drawing visual windows of the bridge construction holographic scene after the dynamic layout, wherein the four visual windows are the linkage window obtained in step 2.12;
when the picture range after the dynamic layout reaches its maximum according to the holographic projection imaging principle, the bottom side length is:
L=w+2h (4)
h/w = n/m, then
w = Lm/(m + 2n)
h = Ln/(m + 2n)
wherein w is the width of each visual window and h is the height of each visual window;
after dynamic layout, the front view positions drawn by the visual windows are as follows:
[front view position formula, shown only as image FDA0003961757560000033 in the original]
the rear view positions are:
[rear view position formula, shown only as image FDA0003961757560000034 in the original]
the left view positions are:
[left view position formula, shown only as image FDA0003961757560000035 in the original]
the right view positions are:
[right view position formula, shown only as image FDA0003961757560000041 in the original]
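As a rough check of the segmentation and layout described in claim 5, the Python sketch below computes the central square and one possible cross-shaped arrangement of the four view windows. The square vertices follow step 2.21 directly; the window size uses L = w + 2h with h/w = n/m, taking L equal to the square side n, which is an assumption, and the four window rectangles stand in for the position formulas that appear only as images in the source. The function name holographic_layout is hypothetical.

```python
def holographic_layout(m, n, L=None):
    """Screen-size-adaptive segmentation for a four-view holographic picture (steps 2.21-2.22).

    Returns the central square a', b', c', d' and an assumed cross-shaped placement
    of the four view windows; the patent's own position formulas are only available
    as images, so the rectangles below are illustrative.
    """
    # central square of side n, centred on the screen midpoint o = (m/2, n/2)
    x_left, x_right = m / 2 - n / 2, m / 2 + n / 2
    square = {"a'": (x_left, 0), "b'": (x_left, n), "c'": (x_right, 0), "d'": (x_right, n)}

    # window size from L = w + 2h and h/w = n/m; here we assume L equals the square side n
    L = n if L is None else L
    w = L * m / (m + 2 * n)
    h = L * n / (m + 2 * n)

    cx, cy = m / 2, n / 2
    views = {  # (x, y, width, height); front/back against the bottom/top edges, left/right rotated 90 degrees
        "front": (cx - w / 2, 0, w, h),
        "back":  (cx - w / 2, n - h, w, h),
        "left":  (x_left, cy - w / 2, h, w),
        "right": (x_right - h, cy - w / 2, h, w),
    }
    return square, views

square, views = holographic_layout(1920, 1080)
print(square)
for name, rect in views.items():
    print(name, [round(v, 1) for v in rect])
```

With m = 1920 and n = 1080 this gives w ≈ 508 px and h ≈ 286 px, and the four windows tile the central square without overlapping, consistent with L = w + 2h.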
6. the method for dynamically constructing a screen-size-adaptive holographic scene according to claim 5, wherein the specific steps in step 3 are as follows:
step 3.1, optimizing the bridge construction holographic scene during interaction;
and 3.2, performing real-time rendering and drawing of the optimized bridge construction holographic scene based on the digital twin bridge construction scene data loaded in the digital twin platform, to obtain the drawn bridge construction holographic scene.
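A minimal frame-loop sketch of steps 3.1 and 3.2, assuming hypothetical optimize_scene and render_scene callables; the patent does not prescribe this loop structure, it merely orders optimization before drawing.

```python
def run_interaction_loop(scene, twin_data, frames, optimize_scene, render_scene):
    """Per frame: optimize the holographic scene while the user interacts (step 3.1),
    then render it from the digital twin bridge construction data (step 3.2)."""
    drawn = None
    for interaction in frames:                       # e.g. a stream of move/rotate/zoom events
        simplified = optimize_scene(scene, interaction)
        drawn = render_scene(simplified, twin_data)  # real-time rendering from the twin platform data
    return drawn

# toy usage with stand-in functions
result = run_interaction_loop(
    scene={"name": "bridge"}, twin_data={"piers": 4},
    frames=["rotate", "zoom"],
    optimize_scene=lambda s, e: {**s, "lod": "coarse" if e == "rotate" else "full"},
    render_scene=lambda s, d: f"rendered {s} with {len(d)} twin layers",
)
print(result)
```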
7. The method for dynamically constructing a screen-size-adaptive holographic scene according to claim 6, wherein the specific steps of step 3.1 are as follows:
step 3.11, acquiring, during interaction, the blur range of the objects in each visual window, namely the blurred region, wherein the blurred regions comprise regions generated by linear motion blur and by rotational motion blur, and calculating the blur degree through the point spread functions of the linear motion blur and the rotational motion blur, wherein the interaction comprises moving, rotating and zooming;
and 3.12, simplifying the blurred region by a simplification means, i.e. reducing the data precision, to obtain a simplified bridge construction holographic scene, namely the optimized bridge construction holographic scene during interaction, wherein the simplification means comprises mesh simplification or texture compression.
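A minimal sketch of the idea in step 3.12, assuming the blur scale from step 3.11 is already available in pixels; the function name pick_lod and the thresholds are illustrative, and whether the reduced precision is realised by mesh simplification or texture compression is left to the renderer.

```python
def pick_lod(blur_scale_px, thresholds=(2, 8, 20)):
    """Map a blur scale (pixels) to a level of detail: 0 = full mesh/texture,
    higher levels = progressively simplified geometry / more compressed textures.
    The thresholds are illustrative, not taken from the patent."""
    level = 0
    for t in thresholds:
        if blur_scale_px >= t:
            level += 1
    return level

# during a rotation, objects far from the rotation centre blur more and drop to coarser LODs
for blur in (0.5, 4.0, 12.0, 30.0):
    print(f"blur {blur:>4} px -> LOD {pick_lod(blur)}")
```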
8. The method for dynamically constructing screen-size-adaptive holographic scene according to claim 7, wherein in the step 3.11:
the point spread function of the linear motion blur has two parameters, the total displacement and the motion direction; the blurred image g(x, y) is obtained from the original image f(x, y) by linear motion in the direction forming an angle α with the x-axis, and the value at any point of the blurred image is:
g(x, y) = ∫_0^{T_m} f(x − x_0(t), y − y_0(t)) dt
wherein g(x, y) is the value at any point of the blurred image, x_0(t) is the motion component of the bridge construction holographic scene in the x direction at time t, and y_0(t) is its motion component in the y direction at time t; if the total displacement of the object is a and the total time is T_m, the motion components can be written as
x_0(t) = (a·t/T_m)·cos α
y_0(t) = (a·t/T_m)·sin α
Then there are:
g(x, y) = ∫_0^{T_m} f(x − (a·t/T_m)·cos α, y − (a·t/T_m)·sin α) dt (12)
the blurred region of the linear motion blur is obtained after discretizing formula 12, as follows:
g(x, y) = (1/L') Σ_{i=0}^{L'−1} f(x − u, y − v)
wherein L' is the number of pixels the bridge construction holographic scene moves, namely the blur scale, i is the i-th pixel, u = [i·cos α], v = [i·sin α], and α represents the movement direction;
the calculation of the blurred region by convolution may be:
g(x, y) = f(x, y) * h(u, v)
where h(u, v) is the point spread function:
h(u, v) = 1/L' when u = [i·cos α] and v = [i·sin α] for some i with 0 ≤ i ≤ L' − 1; otherwise h(u, v) = 0
rotational motion blur differs from linear motion blur: it is a space-variant motion blur whose blur parameters differ on different blur paths, and the farther a point is from the rotation centre, the larger the blur scale; points at the same distance from the rotation centre have the same blur degree, i.e. the image on the same ring is blurred to the same degree, and the rotational motion blur is distributed along the different rotation paths;
if the rotation center is the origin (0, 0), the distance from any pixel point i (x, y) in the blurred image g (x, y) to the rotation center is
r = √(x² + y²)
let the object rotation time be T_s and the angular velocity of rotation be ω; the relationship between the blurred image g(x, y) and the original image f(x, y) is:
g(x, y) = (1/T_s) ∫_0^{T_s} f(x·cos(ωt) + y·sin(ωt), −x·sin(ωt) + y·cos(ωt)) dt
expressed in polar coordinate form:
g(r, θ) = (1/T_s) ∫_0^{T_s} f(r, θ − ωt) dt
wherein r is the radial coordinate, i.e. the distance from the origin to i(x, y), and θ is the angular coordinate, i.e. the angle between the positive x-axis and the ray from the origin through i(x, y);
let l = rθ and s = rωt, with r written as a subscript; the point spread function of any pixel point i(x, y) at distance r from the rotation centre is denoted h_r(i), and then:
g_r(l) = (1/(r·ω·T_s)) ∫_0^{r·ω·T_s} f_r(l − s) ds (16)
wherein
h_r(s) = 1/(r·ω·T_s) for 0 ≤ s ≤ r·ω·T_s, and h_r(s) = 0 otherwise
after discretizing formula 16, the blurred region of the rotational motion blur is obtained as follows:
g_r(i) = (1/L_r) Σ_{j=0}^{L_r−1} f_r(i − j) (17)
wherein i = 0, 1, 2, …, N_r − 1, g_r(i) and f_r(i) are the blurred pixel value and the original grey value of the i-th pixel point on the blur path, N_r represents the number of pixels on the path, and L_r represents the blur scale expressed in pixels;
the point spread function of the rotational motion blur, in matrix form, is obtained from equation 17 as:
[matrix form of the rotational motion blur point spread function, shown only as image FDA0003961757560000063 in the original]
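To make the two point spread functions concrete, the numpy sketch below builds a discrete linear-motion-blur kernel with uniform weights 1/L' along the direction α, and estimates the rotational blur scale on a ring of radius r as the swept arc length r·ω·T_s. The function names, the rounding scheme and the kernel-size choice are ours, not the patent's.

```python
import numpy as np

def linear_blur_psf(L, alpha_deg):
    """Discrete linear-motion-blur PSF: L pixels of motion at angle alpha,
    each sampled position weighted 1/L (uniform energy along the blur path)."""
    alpha = np.deg2rad(alpha_deg)
    h = np.zeros((L, L))
    for i in range(L):
        u = int(round(i * np.cos(alpha)))   # u = [i cos(alpha)]
        v = int(round(i * np.sin(alpha)))   # v = [i sin(alpha)]
        h[v % L, u % L] += 1.0 / L          # wrap keeps indices inside the kernel for any angle
    return h

def rotational_blur_scale(r, omega, T_s):
    """Blur scale on the ring of radius r: arc length swept in time T_s at angular
    velocity omega (in pixels when r is in pixels). Larger r -> larger blur scale."""
    return int(round(r * omega * T_s))

psf = linear_blur_psf(L=9, alpha_deg=30)
print(round(psf.sum(), 3))                  # ~1.0: the kernel preserves image energy
for r in (10, 50, 200):
    print(f"r = {r:>3} px -> L_r = {rotational_blur_scale(r, omega=2.0, T_s=0.2)} px")
```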
9. a dynamic holographic scene construction system capable of adapting to screen size is characterized by comprising:
an acquisition module: acquiring digital twin bridge construction scene data;
a dynamic construction module: importing digital twin bridge construction scene data to construct a bridge construction holographic scene, and then dynamically constructing a self-adaptive screen size holographic scene based on the bridge construction holographic scene to obtain the position of a visual window drawing view of the bridge construction holographic scene, so as to obtain the bridge construction holographic scene with the self-adaptive screen size;
a drawing module: during interaction, optimizing the bridge construction holographic scene obtained by the dynamic construction module, and drawing the optimized bridge construction holographic scene based on the digital twin bridge construction scene data to obtain the drawn bridge construction holographic scene.
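Purely as an illustration of how the three modules of claim 9 could be wired together, with all class names, method names and data fields hypothetical:

```python
class AcquisitionModule:
    def get_twin_data(self):
        # stand-in for loading digital twin bridge construction scene data
        return {"terrain": "...", "bridge_model": "...", "sensors": "..."}

class DynamicConstructionModule:
    def build(self, twin_data, screen_w, screen_h):
        # import the twin data, build the holographic scene and lay out the four view windows
        return {"scene": twin_data, "viewports": f"{screen_w}x{screen_h} cross layout"}

class DrawingModule:
    def draw(self, holo_scene, twin_data, interaction=None):
        # optimise during interaction, then render from the twin data
        detail = "simplified" if interaction else "full"
        return f"frame ({detail}) of {len(twin_data)} twin layers"

# acquisition -> dynamic construction -> drawing
acq, build, draw = AcquisitionModule(), DynamicConstructionModule(), DrawingModule()
data = acq.get_twin_data()
scene = build.build(data, 1920, 1080)
print(draw.draw(scene, data, interaction="rotate"))
```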
10. The screen-size-adaptive dynamic holographic scene construction system according to claim 9, wherein the dynamic construction module is implemented by the following steps:
2.1, importing the digital twin bridge construction scene data to construct a bridge construction holographic scene, arranging in it four virtual cameras for rendering and drawing the scene in real time, and constructing a linkage window based on the four virtual cameras, namely the four virtual cameras always aim at the same area and perform the same action, wherein the bridge construction holographic scene is the virtual scene of the bridge construction;
2.2, carrying out screen-size-adaptive picture segmentation and dynamic layout based on the Pepper's ghost principle, the holographic projection imaging principle and the linkage window, and obtaining the positions of the view-drawing visual windows of the bridge construction holographic scene after the dynamic layout, namely obtaining the bridge construction holographic scene adapted to the screen size;
the drawing module is specifically implemented by the following steps:
step 3.1, optimizing the bridge construction holographic scene during interaction;
and 3.2, performing real-time rendering and drawing of the optimized bridge construction holographic scene based on the digital twin bridge construction scene data loaded in the digital twin platform, to obtain the drawn bridge construction holographic scene.
CN202211484940.3A 2022-11-24 2022-11-24 Holographic scene dynamic construction method and system for self-adapting screen size Active CN115937482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211484940.3A CN115937482B (en) 2022-11-24 2022-11-24 Holographic scene dynamic construction method and system for self-adapting screen size

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211484940.3A CN115937482B (en) 2022-11-24 2022-11-24 Holographic scene dynamic construction method and system for self-adapting screen size

Publications (2)

Publication Number Publication Date
CN115937482A true CN115937482A (en) 2023-04-07
CN115937482B CN115937482B (en) 2023-09-15

Family

ID=86648235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211484940.3A Active CN115937482B (en) 2022-11-24 2022-11-24 Holographic scene dynamic construction method and system for self-adapting screen size

Country Status (1)

Country Link
CN (1) CN115937482B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102137247B1 (en) * 2019-12-26 2020-07-24 연세대학교산학협력단 System for manufacturing composite bridge prototype using 3d printing precast segment, and method for the same
CN113223162A (en) * 2021-04-13 2021-08-06 交通运输部科学研究院 Method and device for constructing digital twin scene of inland waterway
CN114943141A (en) * 2022-04-28 2022-08-26 国网浙江省电力有限公司金华供电公司 Transformer substation dynamic simulation method based on model mapping and identification
CN115131498A (en) * 2022-06-08 2022-09-30 浙江工业大学 Method for quickly constructing intelligent water conservancy digital twin model of reservoir
CN115327770A (en) * 2022-07-26 2022-11-11 山西传媒学院 Self-adaptive holographic function screen modulation method
CN115310638A (en) * 2022-09-20 2022-11-08 广东电网有限责任公司 Transformer substation operation and maintenance method and system based on digital twins
CN115330955A (en) * 2022-09-20 2022-11-11 罗中祥 Product visualization management and control method based on digital twinning technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张昀昊 (ZHANG Yunhao): "Task-driven adaptive visualization method for landslide disaster emergency scenes", China Doctoral Dissertations Full-text Database, Basic Sciences, no. 05

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237574A (en) * 2023-10-11 2023-12-15 西南交通大学 Task-driven geographical digital twin scene enhancement visualization method and system
CN117237574B (en) * 2023-10-11 2024-03-26 西南交通大学 Task-driven geographical digital twin scene enhancement visualization method and system

Also Published As

Publication number Publication date
CN115937482B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN107564089B (en) Three-dimensional image processing method, device, storage medium and computer equipment
CN108564527B (en) Panoramic image content completion and restoration method and device based on neural network
Wei et al. Fisheye video correction
CN109523622A (en) A kind of non-structured light field rendering method
Li et al. Three-dimensional traffic scenes simulation from road image sequences
CN110245199A (en) A kind of fusion method of high inclination-angle video and 2D map
CN115937482B (en) Holographic scene dynamic construction method and system for self-adapting screen size
Lukasczyk et al. Voidga: A view-approximation oriented image database generation approach
CN112766215A (en) Face fusion method and device, electronic equipment and storage medium
CN115527016A (en) Three-dimensional GIS video fusion registration method, system, medium, equipment and terminal
CN114926612A (en) Aerial panoramic image processing and immersive display system
CN110400366B (en) Real-time flood disaster visualization simulation method based on OpenGL
Bao et al. Artificial Intelligence and VR Environment Design of Digital Museum Based on Embedded Image Processing
Siegel et al. Superimposing height-controllable and animated flood surfaces into street-level photographs for risk communication
CN113673567A (en) Panorama emotion recognition method and system based on multi-angle subregion self-adaption
TW202223842A (en) Image processing method and device for panorama image
Xu et al. Real-time panoramic map modeling method based on multisource image fusion and three-dimensional rendering
CN112037313A (en) VR scene optimization method based on tunnel visual field
CN106228509A (en) Performance methods of exhibiting and device
Huixuan et al. Innovative Practice of Virtual Reality Technology in Animation Production
CN110889889A (en) Oblique photography modeling data generation method applied to immersive display equipment
Zou et al. Research on Multi-source Data Fusion of 3D Scene in Power Grid
CN115457220B (en) Simulator multi-screen visual simulation method based on dynamic viewpoint
Xu et al. Research on digital modeling and optimization of virtual reality scene
Neumann et al. Augmented Virtual Environments (AVE): for Visualization of Dynamic Imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant