CN116740249A - Distributed three-dimensional scene rendering system - Google Patents

Publication number: CN116740249A (granted as CN116740249B)
Application number: CN202311025376.3A
Authority: CN (China)
Prior art keywords: module, rendering, model, observation, visual
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 邓正秋, 吕绍和
Current and original assignee: Hunan Malanshan Video Advanced Technology Research Institute Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Hunan Malanshan Video Advanced Technology Research Institute Co ltd
Priority to CN202311025376.3A

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image data processing or generation, in general
    • G06T 15/00: 3D [three-dimensional] image rendering
    • G06T 15/06: Ray-tracing
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of three-dimensional rendering, and in particular to a distributed three-dimensional scene rendering system, comprising: a splitting module for splitting the three-dimensional scene according to a preset splitting strategy; an orientation module for determining visual directions; a plurality of observation modules for forming corresponding visual rendering information; a plurality of shaping modules for generating dimension-reducing visual information positioned by the visual direction; a feedback module for forming adjusted visual information; and a distribution module for determining the corresponding adjusted visual information according to an observation module's position and visual direction. By arranging these modules, the model of the three-dimensional scene is rendered in a distributed manner, and once rendering is complete the final image is rapidly presented at the corresponding view angle, which effectively improves both the efficiency and the timeliness of three-dimensional scene rendering.

Description

Distributed three-dimensional scene rendering system
Technical Field
The invention relates to the technical field of three-dimensional rendering, in particular to a distributed three-dimensional scene rendering system.
Background
Rendering is a computer drawing process that occupies a large amount of computing resources. In real-time applications, untimely rendering degrades the user experience and severely constrains the artistic construction of the scene.
Rendering a three-dimensional model in full is feasible for small scenes, but for large scenes the sheer number of model surfaces often exceeds the image-processing capability of a computer.
Chinese patent grant bulletin number CN103700133B discloses a three-dimensional scene distributed-rendering synchronous refreshing method and system. The method comprises: obtaining three-dimensional model data of the scene to be refreshed, and grouping the data according to the rendering performance parameters of the graphic workstations and a preset rendering frame rate; performing distributed rendering on each group of three-dimensional model data in turn to generate corresponding images; and synchronously refreshing and displaying the images rendered from the same group of model data. That scheme renders the scene to be refreshed into a plurality of pictures with the three-dimensional model data as the reference and refreshes the images of each group synchronously, thereby realizing synchronous refreshing of the three-dimensional content in the display unit.
However, the above method cannot render large scenes in real time.
Disclosure of Invention
Therefore, the invention provides a distributed three-dimensional scene rendering system to solve the problem in the prior art that large scenes cannot be rendered in real time, which reduces the timeliness of scene rendering.
To achieve the above object, the present invention provides a distributed three-dimensional scene rendering system, comprising:
the splitting module is used for splitting the three-dimensional scene according to a preset splitting strategy and forming scene segmentation areas;
the orientation module is connected with the splitting module and used for determining a visual direction according to the positional relation between each scene segmentation area and the model to be rendered;
the plurality of observation modules are connected with the splitting module and the orientation module and used for displaying the model to be rendered, rendering the corresponding model to be rendered according to the visual direction and a preset rendering strategy for the scene segmentation area in which each single observation module is located, and forming corresponding visual rendering information;
the plurality of shaping modules are connected with the splitting module and the orientation module, and each is connected with a corresponding observation module, for generating dimension-reducing visual information of the scene segmentation area for the model to be rendered according to the visual direction and the visual rendering information;
the feedback module is connected with each shaping module and each observation module and used for adjusting the dimension-reducing visual information according to a preset visual adjustment strategy to form adjusted visual information;
the distribution module is connected with each observation module and the orientation module, and is used for determining the corresponding adjusted visual information according to an observation module's position and visual direction and transmitting it to that observation module;
wherein the preset splitting strategy is to split the scene with the model to be rendered as the origin; the preset rendering strategy is to render each model surface of the model to be rendered corresponding to the visual direction; and the preset visual adjustment strategy is to adjust the shape of the dimension-reducing visual information according to the visual direction.
Further, under the orientation condition, the observation module determines a projection surface according to the visual direction and projects the model to be rendered onto that surface to form observation surface data;
wherein the projection surface is a plane perpendicular to the visual direction;
and the orientation condition is that the observation module is observing the model to be rendered.
Further, under the rendering condition, the observation module renders each model surface of the model to be rendered that is projected onto the projection surface according to the observation surface data, wherein a projection area threshold is set in the observation module:
if the projection of a single model surface on the projection surface is smaller than the projection area threshold, the observation module judges that the model surface can be ignored and blurs the material of that surface;
if the projection of a single model surface on the projection surface is greater than or equal to the projection area threshold, the observation module renders the model surface;
wherein the projection area threshold is positively correlated with the display resolution of the observation module;
and the rendering condition is that the single observation module executes the preset rendering strategy on the model to be rendered.
Further, under the rendering condition, the shaping module projects the rendered model to be rendered onto the observation surface, forming the dimension-reducing visual information.
Further, under the observation condition, the feedback module deforms the corresponding dimension-reducing visual information according to the visual direction using the preset visual adjustment strategy, wherein a single piece of dimension-reducing visual information is deformed by perspective deformation according to the included angle between the visual direction and the projection surface that formed the information;
and the observation condition is that the shaping module has formed the dimension-reducing visual information of the model to be rendered.
Further, when executing the preset splitting strategy, the splitting module splits the three-dimensional space into scene segmentation areas of a preset size and determines a rendering space according to the size of the model to be rendered;
wherein the preset size is positively correlated with the display resolution of the observation module;
and the rendering space is the space in which, when a single observation module is located within it, the projection of the model to be rendered on that observation module's projection surface is larger than the projection area threshold.
Further, when executing the preset splitting strategy, if the model to be rendered comprises a plurality of entities, the splitting module places the model into a single cubic space whose side is the longest side of the model, and executes the preset splitting strategy with the geometric center of that space as the origin.
Further, under the preset splitting strategy, the orientation module takes the geometric center of each scene segmentation area as the starting point and records the vector pointing to the origin as the visual direction of that scene segmentation area.
Further, the distribution module takes the direction of the line connecting a single observation module's position to the origin, selects the adjusted visual information corresponding to the visual direction with the smallest included angle to that direction, and transmits it to the observation module.
Further, when the observation modules complete rendering of the model to be rendered according to the preset rendering strategy, the corresponding rendered data are transmitted to the distribution module, and the distribution module combines the rendering data of the individual model surfaces into the corresponding rendered model and stores that model's data.
Compared with the prior art, the invention performs distributed rendering on the model of the three-dimensional scene by arranging the splitting module, the orientation module, the plurality of observation modules, the plurality of shaping modules, the feedback module and the distribution module, and rapidly presents the rendered final image at the corresponding view angle once rendering is complete, thereby effectively improving both the rendering efficiency and the timeliness of three-dimensional scene rendering.
Further, by projecting the rendered graphics, the appearance of the rendered model is recorded and a corresponding image is formed, which effectively improves image precision while further improving the timeliness of three-dimensional scene rendering.
Further, by splitting the scene, determining the corresponding observation direction and deforming the image according to that direction, the resources required for image processing are effectively reduced while the rendering efficiency is improved, further improving the timeliness of three-dimensional scene rendering.
Further, by storing the rendered model off-device, the resource waste caused by repeated rendering on a single device is effectively reduced and the rendering efficiency improved, further improving the timeliness of three-dimensional scene rendering.
Drawings
FIG. 1 is a schematic diagram of a distributed three-dimensional scene rendering system of the present invention;
FIG. 2 is a schematic view of the visual direction of the embodiment of the present invention;
FIG. 3 is a schematic diagram of a model surface according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of image distortion according to an embodiment of the present invention;
wherein: 1-a model to be rendered; 2-visual direction; 3-a projection surface; 4-scene segmentation areas; 5-observing surface; 6-datum point; 61-reference image; 7-transforming points; 71-transform the image.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to fig. 1, which is a schematic structural diagram of a distributed three-dimensional scene rendering system according to the present invention, the system comprises:
the splitting module, used for splitting the three-dimensional scene according to a preset splitting strategy and forming scene segmentation areas;
the orientation module, connected with the splitting module and used for determining a visual direction according to the positional relation between each scene segmentation area and the model to be rendered;
the plurality of observation modules, connected with the splitting module and the orientation module, used for displaying the model to be rendered, rendering the corresponding model to be rendered according to the visual direction and a preset rendering strategy for the scene segmentation area in which each single observation module is located, and forming corresponding visual rendering information;
the plurality of shaping modules, connected with the splitting module and the orientation module and each connected with a corresponding observation module, used for generating dimension-reducing visual information of the scene segmentation area for the model to be rendered according to the visual direction and the visual rendering information;
the feedback module, connected with each shaping module and each observation module, used for adjusting the dimension-reducing visual information according to a preset visual adjustment strategy to form adjusted visual information;
the distribution module, connected with each observation module and the orientation module, used for determining the corresponding adjusted visual information according to the observation module's position and visual direction and transmitting it to the corresponding observation module;
wherein the preset splitting strategy is to split the scene with the model to be rendered as the origin; the preset rendering strategy is to render each model surface of the model to be rendered corresponding to the visual direction; and the preset visual adjustment strategy is to adjust the shape of the dimension-reducing visual information according to the visual direction.
According to the invention, the model of the three-dimensional scene is rendered in a distributed manner by arranging the splitting module, the orientation module, the plurality of observation modules, the plurality of shaping modules, the feedback module and the distribution module, and the rendered final image is rapidly presented at the corresponding view angle once rendering is complete, so that both the rendering efficiency and the timeliness of three-dimensional scene rendering are effectively improved.
Fig. 2 is a schematic view of the visual direction in an embodiment of the present invention: the geometric center of the model 1 to be rendered is the target point, the geometric center of a single scene segmentation area 4 is the starting point, the direction of the line connecting them is the visual direction 2, and the plane perpendicular to the visual direction 2 is the projection surface.
Under the orientation condition, the observation module determines a projection surface according to the visual direction and projects the model to be rendered onto that surface to form observation surface data;
wherein the projection surface is a plane perpendicular to the visual direction;
and the orientation condition is that the observation module is observing the model to be rendered.
In an implementation, the projection surface may be arranged as a plane passing through the geometric center of the scene segmentation area corresponding to the visual direction.
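The visual direction and projection surface described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the function names are invented, vectors are plain tuples, and the projection surface is taken as the plane through the region's geometric center with the visual direction as its unit normal.

```python
import math

def normalize(v):
    # Scale a vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def visual_direction(region_center, model_center):
    # Unit vector from the geometric center of a scene segmentation area
    # toward the geometric center of the model to be rendered (the origin
    # of the split), as in Fig. 2.
    return normalize(tuple(m - r for m, r in zip(model_center, region_center)))

def project_onto_plane(point, plane_point, normal):
    # Orthogonal projection of a model vertex onto the projection surface:
    # the plane through `plane_point` whose unit normal is the visual direction.
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, normal))
    return tuple(p - d * n for p, n in zip(point, normal))
```

For example, a region centered at the spatial origin looking at a model centered at (0, 0, 10) has visual direction (0, 0, 1), and a vertex at (1, 2, 5) projects to (1, 2, 0) on the plane through the region center.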
Fig. 3 is a schematic diagram of a model surface according to an embodiment of the present invention, in which the observation surface 5 is the surface of the model to be rendered as projected on the projection surface. It can be understood that the appearance of the model to be rendered is composed of a plurality of surfaces, and the more surfaces it is composed of, the smoother its appearance.
Under the rendering condition, the observation module renders each model surface of the model to be rendered that is projected onto the projection surface according to the observation surface data, wherein a projection area threshold is set in the observation module:
if the projection of a single model surface on the projection surface is smaller than the projection area threshold, the observation module judges that the model surface can be ignored and blurs the material of that surface;
if the projection of a single model surface on the projection surface is greater than or equal to the projection area threshold, the observation module renders the model surface;
wherein the projection area threshold is positively correlated with the display resolution of the observation module;
and the rendering condition is that a single observation module executes the preset rendering strategy on the model to be rendered.
It will be appreciated that the display resolution of the observation module is related to its definition.
In an implementation, for an observation module with 720P definition, the projection area threshold is the area occupied by 12 pixels;
for an observation module with 1080P definition, the projection area threshold is the area occupied by 20 pixels;
for an observation module with 4K definition, the projection area threshold is the area occupied by 60 pixels.
It is understood that the blurring process includes halation, mosaic, Gaussian blurring and the like, and the means is not unique; in an implementation, the blurring may be performed so that the region shows no color difference from nearby regions.
Specifically, under the rendering condition, the shaping module projects the rendered model to be rendered onto the observation surface, forming the dimension-reducing visual information.
By projecting the rendered graphics, the appearance of the rendered model is recorded and a corresponding image is formed, which effectively improves image precision while further improving the timeliness of three-dimensional scene rendering.
Specifically, under the observation condition, the feedback module deforms the corresponding dimension-reducing visual information according to the visual direction using the preset visual adjustment strategy, wherein a single piece of dimension-reducing visual information is deformed by perspective deformation according to the included angle between the visual direction and the projection surface that formed the information;
and the observation condition is that the shaping module has formed the dimension-reducing visual information of the model to be rendered.
Fig. 4 is a schematic diagram of image deformation according to an embodiment of the present invention: the reference point 6 is the point for which the dimension-reducing visual information is generated, the visual information generated there is the reference image 61, and when the observation module moves to the transformation point 7, the image it observes is the transformed image 71.
It can be understood that the dimension-reducing visual information generated in the above manner is a planar image. In an implementation, with the observation point, that is, the position of the observation module, as the apex, the edges of the planar image enclose a pyramid with that apex; the positions from which the observation module can observe the image lie on the sides of this pyramid, and when the apex shifts, the image seen from the sides of the pyramid shifts synchronously.
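The perspective deformation of the planar image can be illustrated with a deliberately simplified sketch. This is an assumption-laden toy, not the patent's method: a full implementation would apply a projective (homography) warp, while this keeps only the dominant foreshortening term along one axis.

```python
import math

def foreshorten(corners, angle_rad):
    """Toy perspective deformation: when the line of sight deviates from the
    normal of the projection surface by `angle_rad`, the planar image is
    foreshortened along the tilt axis by cos(angle_rad).  `corners` are
    (x, y) pairs of the image's corner points."""
    k = math.cos(angle_rad)
    return [(x * k, y) for x, y in corners]
```

Viewed head-on (angle 0) the image is unchanged; viewed at 60 degrees off-normal, its width shrinks to half.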
Specifically, when executing the preset splitting strategy, the splitting module splits the three-dimensional space into scene segmentation areas of a preset size and determines a rendering space according to the size of the model to be rendered;
wherein the preset size is positively correlated with the display resolution of the observation module;
and the rendering space is the space in which, when a single observation module is located within it, the projection of the model to be rendered on that observation module's projection surface is larger than the projection area threshold.
It can be understood that the size of the model to be rendered is unrelated to the number of its model surfaces; when the model to be rendered is larger, and the resolution of the observation module higher, the image load on the observation module also increases.
in implementation, for a 720P definition observation module, the preset dimensions are corresponding spaces of 400 pixels in length, width and height;
for an observation module with 1080P definition, presetting corresponding spaces with length, width and height of 1000 pixels;
for the 4K definition observation module, the preset size is the corresponding space with 1600 pixels in length, width and height.
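The splitting into preset-size areas amounts to tiling the space with cubes. A minimal sketch using the embodiment's example edge lengths (400, 1000 and 1600 pixels); function and constant names are invented for illustration:

```python
import math

# Preset cube edge lengths, positively correlated with display resolution.
PRESET_REGION_EDGE_PX = {"720p": 400, "1080p": 1000, "4k": 1600}

def count_scene_segmentation_areas(extent_px, resolution):
    """Number of preset-size cubic scene segmentation areas needed to cover
    a cubic space of side `extent_px` for the given display resolution."""
    per_axis = math.ceil(extent_px / PRESET_REGION_EDGE_PX[resolution])
    return per_axis ** 3
```

For instance, a cubic space 2000 pixels on a side splits into 2 areas per axis at 1080P definition, i.e. 8 scene segmentation areas.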
Specifically, when executing the preset splitting strategy, if the model to be rendered comprises a plurality of entities, the splitting module places the model into a single cubic space whose side is the longest side of the model, and executes the preset splitting strategy with the geometric center of that space as the origin.
Specifically, under the preset splitting strategy, the orientation module takes the geometric center of each scene segmentation area as the starting point and records the vector pointing to the origin as the visual direction of that scene segmentation area.
By splitting the scene, determining the corresponding observation direction and deforming the image according to that direction, the resources required for image processing are effectively reduced while the rendering efficiency is improved, thereby further improving the timeliness of three-dimensional scene rendering.
Specifically, the distribution module takes the direction of the line connecting a single observation module's position to the origin, selects the adjusted visual information corresponding to the visual direction with the smallest included angle to that direction, and transmits it to the observation module.
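The smallest-included-angle selection above is equivalent to picking the candidate visual direction with the largest dot product against the observer-to-origin direction. A hedged sketch with invented names, assuming all directions are unit vectors:

```python
def pick_adjusted_view(observer_dir, candidate_dirs):
    """Index of the candidate visual direction with the smallest included
    angle to the observer-to-origin direction, i.e. the largest dot product
    (smaller angle means larger cosine for unit vectors)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(range(len(candidate_dirs)),
               key=lambda i: dot(observer_dir, candidate_dirs[i]))
```

An observer looking along (0, 0, 1) among candidates along the three axes would be served the adjusted visual information of the (0, 0, 1) direction.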
Specifically, when each observation module completes rendering of the model to be rendered under the preset rendering strategy, the corresponding rendered data are transmitted to the distribution module, and the distribution module combines the rendering data of the individual model surfaces into the corresponding rendered model and stores that model's data.
By storing the rendered model off-device, the resource waste caused by repeated rendering on a single device is effectively reduced and the rendering efficiency improved, thereby further improving the timeliness of three-dimensional scene rendering.
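The per-surface collection and off-device storage described above can be sketched as a tiny store. This is a minimal illustration under assumed names (class, methods and identifiers are not from the patent):

```python
class RenderStore:
    """Sketch of the distribution module's store: rendered model-surface
    data arriving from observation modules is merged per model and kept,
    so no single device has to re-render an already-rendered surface."""

    def __init__(self):
        self._models = {}

    def submit(self, model_id, surface_id, surface_data):
        # Record one model surface's finished render data.
        self._models.setdefault(model_id, {})[surface_id] = surface_data

    def assemble(self, model_id):
        # Combine the stored per-surface render data into one rendered model.
        return dict(self._models.get(model_id, {}))
```

Observation modules would call `submit` as each surface finishes, and any module needing the whole model retrieves the combined result via `assemble` instead of rendering again.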
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the invention and is not intended to limit the invention; various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A distributed three-dimensional scene rendering system, comprising:
the splitting module is used for splitting the three-dimensional scene according to a preset splitting strategy and forming scene segmentation areas;
the orientation module is connected with the splitting module and used for determining a visual direction according to the positional relation between each scene segmentation area and the model to be rendered;
the plurality of observation modules are connected with the splitting module and the orientation module and used for displaying the model to be rendered and rendering the corresponding model to be rendered, according to the scene segmentation area in which each single observation module is located and the visual direction, by a preset rendering strategy, to form corresponding visual rendering information;
the plurality of shaping modules are connected with the splitting module and the orientation module, and each is connected with a corresponding observation module, for generating dimension-reducing visual information of the scene segmentation area for the model to be rendered according to the visual direction and the visual rendering information;
the feedback module is connected with each shaping module and each observing module and is used for adjusting the dimension-reducing visual information according to a preset visual adjustment strategy so as to form adjustment visual information;
the distribution module is connected with each observation module and the orientation module, and is used for determining corresponding visual adjustment information according to the positions of the observation modules and the visual directions and transmitting the visual adjustment information to the corresponding observation modules;
the preset segmentation strategy is to segment a scene by taking the model to be rendered as an origin; the preset rendering strategy is to render each model surface of the model to be rendered corresponding to the visual direction; the preset visual adjustment strategy is to adjust the shape of the dimension-reducing visual information according to the visual direction.
2. The distributed three-dimensional scene rendering system of claim 1, wherein the observation module determines a projection plane from the visual direction under directional conditions and projects the model to be rendered on the projection plane to form observation plane data;
wherein the projection surface is a surface perpendicular to the visual direction;
and the orientation condition is that the observation module observes the model to be rendered.
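The projection step described in claim 2 can be sketched as follows. This is a minimal illustration, not the patented implementation; it assumes the projection surface passes through the origin, and all function and variable names are hypothetical:

```python
import numpy as np

def project_onto_view_plane(points, view_dir):
    """Project 3D points onto the plane through the origin that is
    perpendicular to view_dir (the claimed 'projection surface')."""
    d = np.asarray(view_dir, dtype=float)
    d = d / np.linalg.norm(d)
    pts = np.asarray(points, dtype=float)
    # Remove each point's component along the viewing direction;
    # what remains lies in the projection surface.
    return pts - np.outer(pts @ d, d)
```

Because only the component along the viewing direction is discarded, the result is the orthographic footprint of the model on a surface perpendicular to the visual direction, matching the wording of the claim.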
3. The distributed three-dimensional scene rendering system of claim 2, wherein, under the rendering condition, the observation module is configured to render, according to the observation surface data, each model surface of the model to be rendered that is projected onto the projection surface, a projection area threshold being set in the observation module;
if the projection of a single model surface onto the projection surface is smaller than the projection area threshold, the observation module judges that the model surface can be ignored and blurs the material of that model surface;
if the projection of a single model surface onto the projection surface is greater than or equal to the projection area threshold, the observation module renders that model surface;
wherein the projection area threshold is positively correlated with the resolution displayed by the observation module;
the rendering condition is that the single observation module executes the preset rendering strategy on the model to be rendered.
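The per-surface decision of claim 3 amounts to a simple level-of-detail test. A sketch follows; the claim only states that the threshold is positively correlated with display resolution, so the linear factor `k` and the function name are hypothetical:

```python
def classify_model_surfaces(projected_areas, resolution, k=1e-4):
    """Per-surface decision from claim 3: surfaces whose projected area
    falls below a resolution-dependent threshold are deemed negligible
    and their material is blurred; the rest are rendered normally."""
    threshold = k * resolution  # positively correlated with resolution
    return ["blur" if area < threshold else "render" for area in projected_areas]
```

At a higher resolution the threshold rises, so fewer surfaces qualify as negligible, which is consistent with the positive correlation stated in the claim.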
4. The distributed three-dimensional scene rendering system of claim 3, wherein, under the rendering condition, the shaping module projects the rendered model to be rendered onto the observation surface to form the dimension-reduced visual information.
5. The distributed three-dimensional scene rendering system of claim 4, wherein, under the observation condition, the feedback module deforms the corresponding dimension-reduced visual information according to the visual direction in accordance with the preset visual adjustment strategy; the deformation of a single piece of dimension-reduced visual information is a perspective deformation determined by the included angle between the visual direction and the projection surface that formed the dimension-reduced visual information;
and the observation condition is that the shaping module has formed the dimension-reduced visual information of the model to be rendered.
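One plausible reading of the perspective deformation in claim 5 is foreshortening by the sine of the included angle between the viewing direction and the surface that produced the dimension-reduced visual information (equivalently, the cosine of the angle to that surface's normal). This is a hypothetical simplification, not the claimed deformation itself:

```python
import math

def perspective_scale(angle_to_surface_rad):
    """Foreshortening factor for a piece of dimension-reduced visual
    information, as a function of the included angle between the current
    viewing direction and the projection surface that formed it.
    A head-on view (pi/2 to the surface) keeps full size; a grazing
    view (angle near zero) collapses it."""
    return math.sin(angle_to_surface_rad)
```

A full implementation would apply a homography rather than a uniform scale, but the angle dependence sketched here captures why the deformation varies with the visual direction.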
6. The distributed three-dimensional scene rendering system of claim 5, wherein the segmentation module, when executing the preset segmentation strategy, splits the three-dimensional space into the scene segmentation areas of a preset size and determines a rendering space according to the distance from the model to be rendered;
wherein the preset size is positively correlated with the resolution displayed by the observation module;
and when a single observation module is within the rendering space, the projection of the model to be rendered onto the projection surface corresponding to that observation module is larger than the projection area threshold.
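The space partitioning of claim 6 can be sketched as a uniform cubic grid centred on the model. A minimal illustration, with hypothetical names, mapping a position (for example that of an observation module) to its scene segmentation area:

```python
import numpy as np

def grid_region_index(point, cell_size):
    """Claim 6 sketch: partition 3D space (origin at the model to be
    rendered) into cubic scene segmentation areas of a preset size,
    and map a 3D point to the integer index of its area."""
    return tuple(np.floor(np.asarray(point, dtype=float) / cell_size).astype(int))
```

All observation modules that fall in the same cell share one visual direction and hence one piece of dimension-reduced visual information, which is what makes the distribution in later claims possible.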
7. The distributed three-dimensional scene rendering system of claim 5, wherein, when the preset segmentation strategy is executed, if the model to be rendered comprises a plurality of entities, the segmentation module places the model to be rendered into a single cubic space sized with reference to the longest side of the model to be rendered, and executes the preset segmentation strategy with the geometric center of that space as the origin.
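The bounding step of claim 7 can be sketched as computing an axis-aligned cube around all entities. The function name and the return convention are hypothetical:

```python
import numpy as np

def bounding_cube(vertices):
    """Claim 7 sketch: place a multi-entity model into a single cubic
    space whose side is taken from the model's longest extent.
    Returns (geometric_center, side_length); the center then serves
    as the origin for the preset segmentation strategy."""
    v = np.asarray(vertices, dtype=float)
    lo, hi = v.min(axis=0), v.max(axis=0)
    return (lo + hi) / 2.0, float((hi - lo).max())
```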
8. The distributed three-dimensional scene rendering system of claim 5 or 6, wherein, under the preset segmentation strategy, the orientation module takes the geometric center of each scene segmentation area as a reference, and the vector pointing from that center to the origin is recorded as the visual direction of the scene segmentation area.
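Claim 8's visual direction is simply the vector from a region's centre toward the origin. A sketch, with normalisation added for convenience (the claim does not require it):

```python
import numpy as np

def region_visual_direction(region_center):
    """Claim 8: the visual direction of a scene segmentation area is
    the vector from its geometric centre toward the origin (where the
    model to be rendered sits under the preset segmentation strategy)."""
    c = np.asarray(region_center, dtype=float)
    return -c / np.linalg.norm(c)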
9. The distributed three-dimensional scene rendering system of claim 8, wherein the distribution module, according to the direction of the line connecting the position of a single observation module to the origin, selects the adjusted visual information corresponding to the visual direction having the smallest included angle with that direction, and transmits the adjusted visual information to that observation module.
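The smallest-included-angle selection of claim 9 reduces to a maximum dot product over unit vectors. A sketch with hypothetical names:

```python
import numpy as np

def select_adjusted_view(observer_position, cached_directions):
    """Claim 9 sketch: among the cached visual directions, pick the one
    with the smallest included angle to the observer-to-origin direction.
    Maximising the cosine (dot product of unit vectors) is equivalent to
    minimising the angle.  Returns the index of the chosen entry."""
    d = -np.asarray(observer_position, dtype=float)
    d /= np.linalg.norm(d)
    dirs = np.asarray(cached_directions, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return int(np.argmax(dirs @ d))
```

The distribution module would then transmit the adjusted visual information stored at the returned index to the observation module.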
10. The distributed three-dimensional scene rendering system of claim 9, wherein, when the observation modules complete rendering of the model to be rendered under the preset rendering strategy, each transmits its completed rendering data to the distribution module, and the distribution module combines the rendering data of the respective model surfaces into a corresponding rendering model and stores the data of the rendering model.
CN202311025376.3A 2023-08-15 2023-08-15 Distributed three-dimensional scene rendering system Active CN116740249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311025376.3A CN116740249B (en) 2023-08-15 2023-08-15 Distributed three-dimensional scene rendering system

Publications (2)

Publication Number Publication Date
CN116740249A true CN116740249A (en) 2023-09-12
CN116740249B CN116740249B (en) 2023-10-27

Family

ID=87919058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311025376.3A Active CN116740249B (en) 2023-08-15 2023-08-15 Distributed three-dimensional scene rendering system

Country Status (1)

Country Link
CN (1) CN116740249B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876555A (en) * 2024-03-12 2024-04-12 西安城市发展资源信息有限公司 Efficient rendering method of three-dimensional model data based on POI retrieval

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100091014A1 (en) * 2006-12-22 2010-04-15 Victor Shenkar Split-scene rendering of a three-dimensional model
KR20100075351A (en) * 2008-12-24 2010-07-02 한국전자통신연구원 Method and system for rendering mobile computer graphic
US20120293514A1 (en) * 2011-05-16 2012-11-22 General Electric Company Systems and methods for segmenting three dimensional image volumes
CN103927780A (en) * 2014-05-05 2014-07-16 广东威创视讯科技股份有限公司 Multi-display-card rendering method and 3D display system
CN110738721A (en) * 2019-10-12 2020-01-31 四川航天神坤科技有限公司 Three-dimensional scene rendering acceleration method and system based on video geometric analysis
CN111656407A (en) * 2018-01-05 2020-09-11 微软技术许可有限责任公司 Fusing, texturing, and rendering views of a dynamic three-dimensional model
CN115671719A (en) * 2022-10-17 2023-02-03 网易(杭州)网络有限公司 Game scene optimization method, device, equipment and storage medium
CN116468736A (en) * 2023-03-29 2023-07-21 广东横琴全域空间人工智能有限公司 Method, device, equipment and medium for segmenting foreground image based on spatial structure

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIA Bowen; WANG Shigang; LI Tianshu; ZHANG Lizhong; ZHAO Yan: "Sparse Acquisition Integral Imaging System", Journal of Harbin Institute of Technology, no. 05 *

Similar Documents

Publication Publication Date Title
CN109348119B (en) Panoramic monitoring system
CN116740249B (en) Distributed three-dimensional scene rendering system
EP0680019A2 (en) Image processing method and apparatus
CN107193372A (en) From multiple optional position rectangle planes to the projecting method of variable projection centre
CN101180653A (en) Method and device for three-dimensional rendering
JP2005339313A (en) Method and apparatus for presenting image
CN110648274A (en) Fisheye image generation method and device
DE102017118714A1 (en) Multi-level camera carrier system for stereoscopic image acquisition
CN106530212B (en) Lens video distortion correction device
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
CN102447925A (en) Method and device for synthesizing virtual viewpoint image
CN101729920A (en) Method for displaying stereoscopic video with free visual angles
CN102368826A (en) Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
US20180322671A1 (en) Method and apparatus for visualizing a ball trajectory
CN105979241B (en) A kind of quick inverse transform method of cylinder three-dimensional panoramic video
US10757345B2 (en) Image capture apparatus
Nonaka et al. Fast plane-based free-viewpoint synthesis for real-time live streaming
JP2006163547A (en) Program, system and apparatus for solid image generation
CN110149508A (en) A kind of array of figure generation and complementing method based on one-dimensional integrated imaging system
JP7394566B2 (en) Image processing device, image processing method, and image processing program
CN114742954A (en) Method for constructing large-scale diversified human face image and model data pairs
CN115883792B (en) Cross-space live-action user experience system utilizing 5G and 8K technologies
Zhang et al. Design of a 3D reconstruction model of multiplane images based on stereo vision
CN114419949B (en) Automobile rearview mirror image reconstruction method and rearview mirror
CN117061720B (en) Stereo image pair generation method based on monocular image and depth image rendering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant