CN106887032A - Three-dimensional scene rendering method and system and related equipment - Google Patents

Three-dimensional scene rendering method and system and related equipment

Info

Publication number
CN106887032A
CN106887032A (application CN201510929262.0A)
Authority
CN
China
Prior art keywords
cloud
distant view
three-dimensional scene
information
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510929262.0A
Other languages
Chinese (zh)
Other versions
CN106887032B (en)
Inventor
陆音
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd
Priority to CN201510929262.0A
Publication of CN106887032A
Application granted
Publication of CN106887032B
Legal status: Active (Current)
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional scene rendering method and system and related equipment, relating to the field of human-computer interaction. The three-dimensional scene rendering method includes: the cloud receives interaction control information sent by a terminal; the cloud culls the corresponding three-dimensional scene in the cloud according to the interaction control information; the cloud generates a hierarchical depth map of the culled three-dimensional scene; the cloud sends the culling result of the three-dimensional scene and the hierarchical depth map to the terminal, so that the terminal composites the scene according to the culling result of the three-dimensional scene and the hierarchical depth map. By having the more capable cloud complete the culling of the three-dimensional scene and having the terminal composite the culling result sent by the cloud, the efficiency of three-dimensional scene rendering can be improved and the performance requirements on the terminal can be reduced.

Description

Three-dimensional scene rendering method and system and related equipment
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a three-dimensional scene rendering method and system and related equipment.
Background technology
With the continuous improvement of terminal and network capabilities such as 4G (fourth-generation mobile communication technology), mobile 3D (three-dimensional) interactive applications such as stereoscopic navigation, virtual exhibition, augmented and virtual reality, and mobile MMO (Massive Multiplayer Online) games have become increasingly rich, constantly bringing brand-new interactive experiences.
However, the 3D scenes in modern mobile interactive applications are increasingly large and complex. Performing real-time, efficient spatial culling on complex 3D scenes, so as to avoid drawing the 3D objects that are not within the screen range, has become one of the key means of improving the 3D interactive rendering efficiency of mobile terminals. However, when this technique is applied to mobile terminals, a series of problems remain to be solved.
The memory capacity and processing capability of a mobile terminal's CPU (Central Processing Unit) are limited. When real-time, fine-grained spatial culling is performed on a large-scale complex 3D scene, virtual memory is easily made to page in and out frequently, which consumes the limited memory resources while reducing the terminal's processing performance, so that the mobile terminal's CPU becomes overloaded and responds sluggishly, making it difficult to meet real-time interaction requirements. In addition, the GPU of a mobile terminal generally has limited computing capacity and cannot perform real-time advanced lighting, shading and post-processing computations over a large area of screen pixels, so real-time rendering cannot be achieved, which seriously affects the interactive experience of 3D interactive applications on mobile terminals.
The content of the invention
The technical problem to be solved by the embodiments of the present invention is: how to improve the rendering efficiency of a three-dimensional scene so as to satisfy the experience requirements of three-dimensional interactive applications on mobile terminals.
According to one aspect of the embodiments of the present invention, a three-dimensional scene rendering method is provided, including: the cloud receives interaction control information sent by a terminal; the cloud culls the corresponding three-dimensional scene in the cloud according to the interaction control information; the cloud generates a hierarchical depth map of the culled three-dimensional scene; the cloud sends the culling result of the three-dimensional scene and the hierarchical depth map to the terminal, so that the terminal composites the scene according to the culling result of the three-dimensional scene and the hierarchical depth map.
In one embodiment, the interaction control information includes camera (lens) information and scene setting information; the cloud culling the corresponding three-dimensional scene in the cloud according to the interaction control information includes: the cloud determines the three-dimensional scene to be culled according to the scene setting information, and culls the three-dimensional scene according to the camera information and a pre-built three-dimensional scene spatial index structure.
In one embodiment, the cloud generating the hierarchical depth map of the culled three-dimensional scene includes: the cloud generates a texture-mapped hierarchical depth map of the culled three-dimensional scene by means of hierarchical Z-depth.
In one embodiment, the three-dimensional scene in the cloud is a static three-dimensional scene; the terminal compositing the scene according to the culling result of the three-dimensional scene and the hierarchical depth map includes: the terminal culls a dynamic three-dimensional near scene according to the interaction control information and an established spatial index structure of the dynamic three-dimensional near scene; the terminal inserts the culling result of the dynamic three-dimensional near scene into the corresponding level and depth in the culling result of the static three-dimensional scene according to the hierarchical depth map, so as to perform scene compositing.
In one embodiment, the method further includes: the cloud renders the corresponding planar distant-view image in the cloud according to the interaction control information; the cloud sends the rendered planar distant-view image information to the terminal, so that the terminal composites the scene according to the rendered planar distant-view image.
In one embodiment, the cloud rendering the corresponding planar distant-view image in the cloud according to the interaction control information includes: the cloud retrieves whether a corresponding planar distant-view image exists in a cache; if so, the rendered planar distant-view image is obtained from the cache; if not, the corresponding planar distant-view image in the cloud is rendered according to the interaction control information, and the rendered planar distant-view image is saved into the cache.
In one embodiment, the cloud retrieving whether a corresponding planar distant-view image exists in the cache includes: the cloud encodes the interaction control information to obtain a code of the planar distant-view image; the cloud retrieves whether a corresponding planar distant-view image exists in the cache according to the code of the planar distant-view image.
In one embodiment, the cloud encoding the interaction control information includes: the cloud encodes the camera position information by means of Hilbert space-filling curve encoding, encodes the camera direction information by means of ordered solid-angle division, and hashes the encoded camera position information and camera direction information to obtain the code of the planar distant-view image in the scene corresponding to the scene setting information; wherein the interaction control information includes the camera position information, the camera direction information and the scene setting information.
In one embodiment, the cloud encodes the culling result of the three-dimensional scene, the hierarchical depth map and the rendered planar distant-view image information separately into independent streams, encapsulates them into a multiplexed stream with a time axis, and sends the multiplexed stream to the terminal.
According to a second aspect of the embodiments of the present invention, a cloud server for three-dimensional scene rendering is provided, including: an interaction control information receiving module, configured to receive interaction control information sent by a terminal; a three-dimensional scene culling module, configured to cull the corresponding three-dimensional scene in the cloud according to the interaction control information; a hierarchical depth map generation module, configured to generate a hierarchical depth map of the culled three-dimensional scene; and a sending module, configured to send the culling result of the three-dimensional scene and the hierarchical depth map to the terminal, so that the terminal composites the scene according to the culling result of the three-dimensional scene and the hierarchical depth map.
In one embodiment, the interaction control information includes camera information and scene setting information; the three-dimensional scene culling module is configured to determine the three-dimensional scene to be culled according to the scene setting information, and to cull the three-dimensional scene according to the camera information and the pre-built three-dimensional scene spatial index structure.
In one embodiment, the hierarchical depth map generation module is configured to generate the texture-mapped hierarchical depth map of the culled three-dimensional scene by means of hierarchical Z-depth.
In one embodiment, the cloud server further includes: a planar distant-view rendering module, configured to render the corresponding planar distant-view image in the cloud according to the interaction control information; the sending module is configured to send the rendered planar distant-view image information to the terminal, so that the terminal composites the scene according to the rendered planar distant-view image.
In one embodiment, the planar distant-view rendering module includes a cache retrieval unit, a rendering unit and a cache unit; the cache retrieval unit is configured to retrieve whether a corresponding planar distant-view image exists in the cache; the rendering unit is configured to obtain the rendered planar distant-view image from the cache when a corresponding planar distant-view image exists in the cache, and to render the corresponding planar distant-view image in the cloud according to the interaction control information when no corresponding planar distant-view image exists in the cache; the cache unit is configured to save the rendered planar distant-view image into the cache.
In one embodiment, the cache retrieval unit includes an encoding subunit and a retrieval subunit; the encoding subunit is configured to encode the interaction control information to obtain the code of the planar distant-view image; the retrieval subunit is configured to retrieve whether a corresponding planar distant-view image exists in the cache according to the code of the planar distant-view image.
In one embodiment, the encoding subunit is configured to encode the camera position information by means of Hilbert space-filling curve encoding, encode the camera direction information by means of ordered solid-angle division, and hash the encoded camera position information and camera direction information to obtain the code of the planar distant-view image in the scene corresponding to the scene setting information; wherein the interaction control information includes the camera position information, the camera direction information and the scene setting information.
In one embodiment, the sending module is configured to encode the culling result of the three-dimensional scene, the hierarchical depth map and the rendered planar distant-view image information separately into independent streams, encapsulate them into a multiplexed stream with a time axis, and send the multiplexed stream to the terminal.
According to a third aspect of the embodiments of the present invention, a terminal for three-dimensional scene rendering is provided, including: a receiving module, configured to receive the culling result of a static three-dimensional scene and the hierarchical depth map of the static three-dimensional scene sent by the cloud; a three-dimensional near-scene culling module, configured to cull a dynamic three-dimensional near scene according to interaction control information and an established spatial index structure of the dynamic three-dimensional near scene; and a scene compositing module, configured to insert the culling result of the dynamic three-dimensional near scene into the corresponding level and depth in the culling result of the static three-dimensional scene according to the hierarchical depth map of the static three-dimensional scene, so as to perform scene compositing.
According to a fourth aspect of the embodiments of the present invention, a three-dimensional scene rendering system is provided, including any one of the foregoing cloud servers and the foregoing terminal.
By completing the culling of the three-dimensional scene on the more capable cloud and compositing the culling results sent by the cloud on the terminal, the present invention can improve the efficiency of three-dimensional scene rendering and reduce the performance requirements on the terminal.
Further features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments of the present invention with reference to the accompanying drawings.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can also obtain other drawings from these drawings without creative effort.
Fig. 1 shows a schematic flowchart of an embodiment of the three-dimensional scene rendering method of the present invention.
Fig. 2 shows a schematic flowchart of another embodiment of the three-dimensional scene rendering method of the present invention.
Fig. 3 shows a schematic structural diagram of an embodiment of the three-dimensional scene rendering system of the present invention.
Fig. 4 shows a schematic structural diagram of an embodiment of the cloud server for three-dimensional scene rendering of the present invention.
Fig. 5 shows a schematic structural diagram of an embodiment of the terminal for three-dimensional scene rendering of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them. The following description of at least one exemplary embodiment is in fact merely illustrative and in no way serves as any limitation on the present invention or its application or use. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The three-dimensional scene rendering method of an embodiment of the present invention is described below with reference to Fig. 1.
Fig. 1 is a flowchart of an embodiment of the three-dimensional scene rendering method of the present invention. As shown in Fig. 1, the method of this embodiment includes:
Step S102: the cloud receives interaction control information sent by a terminal.
Step S104: the cloud culls the corresponding three-dimensional scene in the cloud according to the interaction control information.
Here, the interaction control information may include camera information and scene setting information. When the application has multiple three-dimensional scenes, the cloud may determine the three-dimensional scene to be culled according to the scene setting information, and cull the three-dimensional scene according to the camera information and the pre-built three-dimensional scene spatial index structure, so as to generate a culling result containing only the part that will be presented on the screen. The spatial index structure of the three-dimensional scene may be an octree, a k-d tree (k-dimensional tree, a data structure for partitioning a k-dimensional data space), a BSP tree (binary space partitioning tree), and so on.
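By way of illustration only, the following minimal C++ sketch shows how a pre-built octree could be traversed against the camera frustum to keep only the objects that may appear on screen; the Frustum, AABB and OctreeNode types and the intersects test are simplified assumptions, not part of the original disclosure.

```cpp
#include <array>
#include <memory>
#include <vector>

struct AABB { float min[3], max[3]; };             // axis-aligned bounding box of a node
struct Plane { float n[3]; float d; };              // dot(n, p) + d >= 0 means "inside"
struct Frustum { std::array<Plane, 6> planes; };    // built from the camera (lens) information

// Conservative test: reject a box only if it lies entirely outside one frustum plane.
bool intersects(const Frustum& f, const AABB& b) {
    for (const Plane& p : f.planes) {
        float px = p.n[0] > 0 ? b.max[0] : b.min[0];
        float py = p.n[1] > 0 ? b.max[1] : b.min[1];
        float pz = p.n[2] > 0 ? b.max[2] : b.min[2];
        if (p.n[0] * px + p.n[1] * py + p.n[2] * pz + p.d < 0) return false;
    }
    return true;
}

struct OctreeNode {
    AABB bounds;
    std::vector<int> objectIds;                      // ids of objects stored at this node
    std::array<std::unique_ptr<OctreeNode>, 8> children;
};

// Collect the ids of objects whose nodes are at least partially inside the frustum.
void cullOctree(const OctreeNode& node, const Frustum& frustum, std::vector<int>& visible) {
    if (!intersects(frustum, node.bounds)) return;   // the whole subtree is off-screen
    visible.insert(visible.end(), node.objectIds.begin(), node.objectIds.end());
    for (const auto& child : node.children)
        if (child) cullOctree(*child, frustum, visible);
}
```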
Step S106: the cloud generates the hierarchical depth map of the culled three-dimensional scene.
Here, the cloud may generate the texture-mapped hierarchical depth map of the culled three-dimensional scene by means of hierarchical Z-depth (Hierarchy Z). Hierarchy Z uses a maximum method to generate the hierarchical depth map. Specifically: at each level, every pixel stores the depth value corresponding to that pixel, and the maximum depth value among 4 adjacent pixels is taken as the depth value of the corresponding pixel in the next level. Those skilled in the art may, as needed, also generate the hierarchical depth map in other ways, which will not be described here again.
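A minimal sketch of the maximum-based reduction just described is given below, assuming a square depth buffer whose side length is a power of two; each coarser level stores, per pixel, the maximum depth of the corresponding 2x2 block of the finer level. The function and type names are illustrative only.

```cpp
#include <algorithm>
#include <vector>

// One level of the hierarchical depth map: a 'size' x 'size' grid of depth values.
struct DepthLevel { int size; std::vector<float> z; };

// Build the full hierarchy from the finest level down to a single pixel.
std::vector<DepthLevel> buildHierarchicalZ(DepthLevel finest) {
    std::vector<DepthLevel> levels;
    levels.push_back(std::move(finest));
    while (levels.back().size > 1) {
        const DepthLevel& src = levels.back();
        int half = src.size / 2;
        DepthLevel dst{half, std::vector<float>(static_cast<size_t>(half) * half)};
        for (int y = 0; y < half; ++y)
            for (int x = 0; x < half; ++x) {
                // The maximum of the 2x2 neighbourhood becomes the coarser pixel's depth.
                float a = src.z[(2 * y) * src.size + 2 * x];
                float b = src.z[(2 * y) * src.size + 2 * x + 1];
                float c = src.z[(2 * y + 1) * src.size + 2 * x];
                float d = src.z[(2 * y + 1) * src.size + 2 * x + 1];
                dst.z[static_cast<size_t>(y) * half + x] = std::max(std::max(a, b), std::max(c, d));
            }
        levels.push_back(std::move(dst));
    }
    return levels;
}
```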
Step S108: the cloud sends the culling result of the three-dimensional scene and the hierarchical depth map to the terminal, so that the terminal composites the scene according to the culling result of the three-dimensional scene and the hierarchical depth map.
By completing the culling of the three-dimensional scene on the more capable cloud and compositing the culling results sent by the cloud on the terminal, the efficiency of three-dimensional scene rendering can be improved and the performance requirements on the terminal can be reduced, which is particularly suitable for large-scale, real-time three-dimensional interactive systems or applications.
In a large-scale three-dimensional interactive application, the three-dimensional scene usually includes both a static three-dimensional scene and a dynamic three-dimensional scene. The static three-dimensional scene is generally larger in scale, such as fixed buildings and fixed natural landscape in the scene; the dynamic three-dimensional scene is relatively small in scale, usually an interactively controlled object in the near scene or an NPC (Non-Player Character), etc. Therefore, the larger static three-dimensional scene can be rendered in the cloud, while the dynamic three-dimensional near scene, which is within the terminal's rendering capability, is rendered on the terminal. Accordingly, in step S108, the terminal compositing the scene according to the culling result of the three-dimensional scene and the hierarchical depth map may include: the terminal culls the dynamic three-dimensional near scene according to the interaction control information and the established spatial index structure of the dynamic three-dimensional near scene; the terminal inserts the culling result of the dynamic three-dimensional near scene into the corresponding level and depth in the culling result of the static three-dimensional scene according to the hierarchical depth map, so as to perform scene compositing. By compositing the dynamic three-dimensional near scene rendered by the terminal with the static three-dimensional scene rendered by the cloud with the help of the hierarchical depth map generated by the cloud, correct occlusion relationships can be maintained in the composited scene. In this way, the terminal performs the rendering of the dynamic three-dimensional near scene within its capability, which improves rendering efficiency. If needed, the dynamic three-dimensional near scene may also be rendered in the cloud in the same way as the static three-dimensional scene, which will not be described here again.
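Assuming the cloud result arrives as a color buffer together with the finest level of the hierarchical depth map, the sketch below illustrates how a terminal could insert its locally rendered dynamic near-scene fragments at the correct depth; the buffer layout and names are assumptions made for illustration.

```cpp
#include <cstdint>
#include <vector>

struct Frame {
    int width = 0, height = 0;
    std::vector<uint32_t> color;   // packed RGBA, one value per pixel
    std::vector<float> depth;      // one depth value per pixel (smaller means nearer)
};

// A fragment produced by the terminal while rasterizing the dynamic near scene.
struct Fragment { int x, y; float depth; uint32_t color; };

// Composite near-scene fragments over the cloud-rendered static scene,
// keeping whichever surface is nearer so that occlusion stays correct.
void compositeNearScene(Frame& staticScene, const std::vector<Fragment>& nearFragments) {
    for (const Fragment& f : nearFragments) {
        if (f.x < 0 || f.y < 0 || f.x >= staticScene.width || f.y >= staticScene.height) continue;
        size_t i = static_cast<size_t>(f.y) * staticScene.width + f.x;
        if (f.depth < staticScene.depth[i]) {        // the near-scene fragment is in front
            staticScene.depth[i] = f.depth;
            staticScene.color[i] = f.color;
        }
    }
}
```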
In addition to three-dimensional objects, a three-dimensional scene may also include a planar distant view. The planar distant view is usually a two-dimensional image; it changes little and its content is relatively fixed, and it is used to increase the sense of depth of the three-dimensional scene and enrich the picture content. In the present invention, the planar distant view in the three-dimensional scene may be rendered in the following way: the cloud renders the corresponding planar distant-view image in the cloud according to the interaction control information; the cloud sends the rendered planar distant-view image information to the terminal, so that the terminal composites the scene according to the rendered planar distant-view image. With this method, only the part of the planar distant-view image that will be presented on the screen needs to be rendered, which efficiently satisfies the requirement for picture richness.
Because the planar distant-view image changes relatively little, the rendered content may repeat. Therefore, the cloud may render the corresponding planar distant-view image using the following method: the cloud retrieves whether a corresponding planar distant-view image exists in the cache; if so, the rendered planar distant-view image is obtained from the cache; if not, the corresponding planar distant-view image in the cloud is rendered according to the interaction control information, and the rendered planar distant-view image is saved into the cache. With this method, only planar distant-view images that are not already cached need to be rendered, instead of re-rendering every time, which improves rendering efficiency.
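A minimal cache-or-render sketch of this flow, where the key is produced by the encoding described below and the renderDistantView callback stands in for the actual renderer (both are assumptions for illustration):

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

using Image = std::vector<uint8_t>;   // an encoded planar distant-view image

// Return the cached image for 'key' if present; otherwise render it once, cache it and return it.
Image getOrRenderDistantView(std::unordered_map<uint64_t, Image>& cache, uint64_t key,
                             const std::function<Image()>& renderDistantView) {
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;        // cache hit: reuse the stored image
    Image img = renderDistantView();                 // cache miss: render and remember
    cache.emplace(key, img);
    return img;
}
```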
When the cloud retrieves whether a corresponding planar distant-view image exists in the cache, the search may be performed according to the interaction control information, i.e.: the cloud encodes the interaction control information to obtain the code of the planar distant-view image; the cloud retrieves whether a corresponding planar distant-view image exists in the cache according to the code of the planar distant-view image. Retrieving by code is more efficient and easier to store.
The way in which the cloud encodes the interaction control information may be: the cloud encodes the camera position information by means of Hilbert space-filling curve encoding, encodes the camera direction information by means of ordered solid-angle division, and hashes the encoded camera position information and camera direction information to obtain the code of the planar distant-view image in the scene corresponding to the scene setting information; wherein the interaction control information includes the camera position information, the camera direction information and the scene setting information. Hilbert curve encoding is a compressible spatial position encoding; encoding the camera position information by means of a Hilbert space-filling curve can represent a particular position in space with a string as short as possible, which makes it easy to generate a corresponding hash index value in the cloud and improves indexing efficiency. Encoding the camera direction information by means of ordered solid-angle division means that the camera direction is quantized into several solid angles, each direction corresponding to a certain patch of the sphere, and the directions are then numbered. Using the camera position information, the camera direction information and the scene setting information, the part of the three-dimensional scene that needs to be rendered can be located accurately.
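Purely to illustrate the encode-then-hash idea: the sketch below quantizes the camera position onto a space-filling curve, quantizes the direction into numbered solid-angle bins, and hashes both together with the scene identifier. The original disclosure specifies a Hilbert curve; this sketch substitutes a simpler Morton (Z-order) code as a stand-in, and all names, bit widths and quantization steps are assumptions.

```cpp
#include <cmath>
#include <cstdint>
#include <functional>
#include <string>

// Spread the low 21 bits of v so that two zero bits separate each original bit.
static uint64_t spreadBits(uint64_t v) {
    v &= 0x1FFFFF;
    v = (v | v << 32) & 0x001F00000000FFFFULL;
    v = (v | v << 16) & 0x001F0000FF0000FFULL;
    v = (v | v << 8)  & 0x100F00F00F00F00FULL;
    v = (v | v << 4)  & 0x10C30C30C30C30C3ULL;
    v = (v | v << 2)  & 0x1249249249249249ULL;
    return v;
}

// Quantize the camera position to a grid and interleave the coordinates into one curve index
// (Morton code used here as a stand-in for the Hilbert encoding in the disclosure).
uint64_t positionCode(float x, float y, float z, float cellSize) {
    auto q = [cellSize](float c) {
        return static_cast<uint64_t>(static_cast<int64_t>(std::floor(c / cellSize))) & 0x1FFFFF;
    };
    return spreadBits(q(x)) | (spreadBits(q(y)) << 1) | (spreadBits(q(z)) << 2);
}

// Quantize the camera direction into numbered solid-angle bins (simple yaw/pitch bins here).
uint32_t directionCode(float yawRad, float pitchRad, int bins) {
    const float pi = 3.14159265f;
    float yaw = std::fmod(yawRad, 2 * pi);
    if (yaw < 0) yaw += 2 * pi;
    uint32_t yawBin = static_cast<uint32_t>(yaw / (2 * pi) * bins) % bins;
    uint32_t pitchBin = static_cast<uint32_t>((pitchRad + pi / 2) / pi * bins) % bins;
    return yawBin * static_cast<uint32_t>(bins) + pitchBin;
}

// Combine the scene identifier, position code and direction code into one cache key.
uint64_t distantViewKey(const std::string& sceneId, uint64_t posCode, uint32_t dirCode) {
    uint64_t h = std::hash<std::string>{}(sceneId);
    h ^= posCode + 0x9E3779B97F4A7C15ULL + (h << 6) + (h >> 2);   // hash-combine
    h ^= dirCode + 0x9E3779B97F4A7C15ULL + (h << 6) + (h >> 2);
    return h;
}
```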
When the culling and rendering results for the three-dimensional scene in the cloud are sent to the terminal, the following method may be used: the cloud encodes the culling result of the three-dimensional scene, the hierarchical depth map and the rendered planar distant-view image information separately into independent streams, encapsulates them into a multiplexed stream with a time axis, and sends the multiplexed stream to the terminal. Using the timeline information, the terminal can decode the multiplexed stream sent by the cloud and restore it into the culling result of the three-dimensional scene, the hierarchical depth map and the rendered planar distant-view image information. After decoding, the terminal can use the planar distant-view image information sent by the cloud as the bottom layer of the rasterized picture, composite the rendering result of the three-dimensional scene with the planar distant view into the final output picture, output it to the screen, and complete the update of the three-dimensional scene picture on the terminal.
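As a schematic illustration of the time-axis multiplexing (the packet structure and field names are invented here; a real deployment would more likely reuse an existing container format), already-encoded packets of the three sub-streams can be stamped with a shared presentation time and interleaved into one stream that the terminal demultiplexes and re-synchronizes:

```cpp
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <vector>

enum class StreamType : uint8_t { CullResult = 0, HierarchicalDepth = 1, DistantViewVideo = 2 };

// One packet of one sub-stream, stamped with a position on the common time axis.
struct MuxPacket {
    StreamType type;
    uint64_t timestampUs;            // presentation time in microseconds
    std::vector<uint8_t> payload;    // independently encoded data
};

// Interleave the packets of the three independent streams by timestamp.
std::vector<MuxPacket> multiplex(std::vector<MuxPacket> cull,
                                 std::vector<MuxPacket> depth,
                                 std::vector<MuxPacket> video) {
    std::vector<MuxPacket> mux;
    mux.reserve(cull.size() + depth.size() + video.size());
    for (auto* src : {&cull, &depth, &video})
        mux.insert(mux.end(), std::make_move_iterator(src->begin()),
                   std::make_move_iterator(src->end()));
    std::stable_sort(mux.begin(), mux.end(),
                     [](const MuxPacket& a, const MuxPacket& b) { return a.timestampUs < b.timestampUs; });
    return mux;
}
```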
The three-dimensional scene rendering method of another embodiment of the present invention is described below with reference to Fig. 2.
Fig. 2 is a flowchart of another embodiment of the three-dimensional scene rendering method of the present invention. As shown in Fig. 2, the method of this embodiment includes:
Step S202: the terminal sends interaction control information to a front-end access server located in the cloud, wherein the interaction control information includes camera information and scene setting information.
Step S204: the front-end access server located in the cloud sends the interaction control information to the cloud render farm.
Step S206: the cloud render farm culls the corresponding large-scale static three-dimensional scene in the cloud according to the interaction control information, generates the hierarchical depth map of the culled static three-dimensional scene, and then renders the corresponding planar distant-view image in the cloud at a lower resolution according to the interaction control information.
Step S208: the cloud render farm sends the culled static three-dimensional scene and the hierarchical depth map to a front-end bitstream packaging server.
Step S210: the cloud render farm sends the rendered planar distant-view image information to a video encoding cluster.
Step S212: the video encoding cluster performs video encoding on the planar distant-view image information and generates a planar distant-view video stream.
Step S214: the video encoding cluster sends the planar distant-view video stream to the front-end bitstream packaging server.
Step S216: the front-end bitstream packaging server encodes the culled static three-dimensional scene, the hierarchical depth map and the planar distant-view video stream separately into independent streams, adds the corresponding timeline marks, and encapsulates these independent streams into a multiplexed stream that can be transmitted in real time.
Step S218: the front-end bitstream packaging server sends the multiplexed stream to the terminal through the front-end access server.
Step S220: the terminal decodes the multiplexed stream sent by the cloud.
Step S222: the terminal establishes the spatial index structure of the corresponding small-scale dynamic three-dimensional near scene according to the interaction control information, and culls the dynamic three-dimensional near scene.
Step S224: the terminal merges the dynamic three-dimensional near scene with the static three-dimensional scene according to the hierarchical depth map, inserting the culling result of the dynamic three-dimensional near scene into the corresponding level and depth in the culling result of the static three-dimensional scene.
Step S226: the terminal renders the merged three-dimensional scene to be presented on the screen, uses the planar distant-view image as the bottom layer of the rasterized picture, generates the final output picture, outputs it to the screen, and completes the picture update.
With the above method, the terminal and the cloud can each exploit their own performance and characteristics to complete the hybrid rendering process efficiently.
The system for three-dimensional scene rendering of an embodiment of the present invention is described below with reference to Fig. 3.
Fig. 3 is a structural diagram of an embodiment of the three-dimensional scene rendering system of the present invention. As shown in Fig. 3, the system of this embodiment includes: a cloud server 32 and a terminal 34. The cloud server 32 is configured to cull the corresponding three-dimensional scene in the cloud according to the interaction control information sent by the terminal 34 and to generate the hierarchical depth map, and the terminal 34 is configured to composite the scene according to the culling result of the three-dimensional scene and the hierarchical depth map sent by the cloud server 32.
The cloud server for three-dimensional scene rendering of an embodiment of the present invention is described below with reference to Fig. 4.
Fig. 4 is a structural diagram of an embodiment of the cloud server for three-dimensional scene rendering of the present invention. As shown in Fig. 4, the server 32 of this embodiment includes: an interaction control information receiving module 422, configured to receive the interaction control information sent by the terminal; a three-dimensional scene culling module 424, configured to cull the corresponding three-dimensional scene in the cloud according to the interaction control information; a hierarchical depth map generation module 426, configured to generate the hierarchical depth map of the culled three-dimensional scene; and a sending module 428, configured to send the culling result of the three-dimensional scene and the hierarchical depth map to the terminal, so that the terminal composites the scene according to the culling result of the three-dimensional scene and the hierarchical depth map.
Here, the interaction control information may include camera information and scene setting information; the three-dimensional scene culling module 424 is configured to determine the three-dimensional scene to be culled according to the scene setting information, and to cull the three-dimensional scene according to the camera information and the pre-built three-dimensional scene spatial index structure.
Here, the hierarchical depth map generation module 426 may be configured to generate the texture-mapped hierarchical depth map of the culled three-dimensional scene by means of hierarchical Z-depth.
Here, the cloud server 32 may further include: a planar distant-view rendering module, configured to render the corresponding planar distant-view image in the cloud according to the interaction control information; the sending module 428 is configured to send the rendered planar distant-view image information to the terminal 34, so that the terminal composites the scene according to the rendered planar distant-view image.
Here, the planar distant-view rendering module may include a cache retrieval unit, a rendering unit and a cache unit; the cache retrieval unit is configured to retrieve whether a corresponding planar distant-view image exists in the cache; the rendering unit is configured to obtain the rendered planar distant-view image from the cache when a corresponding planar distant-view image exists in the cache, and to render the corresponding planar distant-view image in the cloud according to the interaction control information when no corresponding planar distant-view image exists in the cache; the cache unit is configured to save the rendered planar distant-view image into the cache.
Here, the cache retrieval unit may include an encoding subunit and a retrieval subunit; the encoding subunit is configured to encode the interaction control information to obtain the code of the planar distant-view image; the retrieval subunit is configured to retrieve whether a corresponding planar distant-view image exists in the cache according to the code of the planar distant-view image.
Here, the encoding subunit may be configured to encode the camera position information by means of Hilbert space-filling curve encoding, encode the camera direction information by means of ordered solid-angle division, and hash the encoded camera position information and camera direction information to obtain the code of the planar distant-view image in the scene corresponding to the scene setting information; wherein the interaction control information includes the camera position information, the camera direction information and the scene setting information.
Here, the sending module 428 may be configured to encode the culling result of the three-dimensional scene, the hierarchical depth map and the rendered planar distant-view image information separately into independent streams, encapsulate them into a multiplexed stream with a time axis, and send the multiplexed stream to the terminal.
The terminal for three-dimensional scene rendering of an embodiment of the present invention is described below with reference to Fig. 5.
Fig. 5 is a structural diagram of an embodiment of the terminal for three-dimensional scene rendering of the present invention. As shown in Fig. 5, the terminal 34 of this embodiment includes: a receiving module 542, configured to receive the culling result of the static three-dimensional scene and the hierarchical depth map of the static three-dimensional scene sent by the cloud; a three-dimensional near-scene culling module 544, configured to cull the dynamic three-dimensional near scene according to the interaction control information and the established spatial index structure of the dynamic three-dimensional near scene; and a scene compositing module 546, configured to insert the culling result of the dynamic three-dimensional near scene into the corresponding level and depth in the culling result of the static three-dimensional scene according to the hierarchical depth map of the static three-dimensional scene, so as to perform scene compositing.
In addition, the method according to the present invention may also be implemented as a computer program product comprising a computer-readable medium on which a computer program for executing the above-mentioned functions defined in the method of the present invention is stored. Those skilled in the art will also appreciate that the various illustrative logical blocks, modules, circuits and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or a combination of both.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (19)

1. A three-dimensional scene rendering method, comprising:
receiving, by the cloud, interaction control information sent by a terminal;
culling, by the cloud, the corresponding three-dimensional scene in the cloud according to the interaction control information;
generating, by the cloud, a hierarchical depth map of the culled three-dimensional scene;
sending, by the cloud, the culling result of the three-dimensional scene and the hierarchical depth map to the terminal, so that the terminal composites the scene according to the culling result of the three-dimensional scene and the hierarchical depth map.
2. The method according to claim 1, characterized in that the interaction control information includes camera information and scene setting information;
the cloud culling the corresponding three-dimensional scene in the cloud according to the interaction control information comprises:
determining, by the cloud, the three-dimensional scene to be culled according to the scene setting information, and culling the three-dimensional scene according to the camera information and a pre-built three-dimensional scene spatial index structure.
3. The method according to claim 1, characterized in that the cloud generating the hierarchical depth map of the culled three-dimensional scene comprises:
generating, by the cloud, a texture-mapped hierarchical depth map of the culled three-dimensional scene by means of hierarchical Z-depth.
4. The method according to claim 1, characterized in that the three-dimensional scene in the cloud is a static three-dimensional scene;
the terminal compositing the scene according to the culling result of the three-dimensional scene and the hierarchical depth map comprises:
culling, by the terminal, a dynamic three-dimensional near scene according to the interaction control information and an established spatial index structure of the dynamic three-dimensional near scene;
inserting, by the terminal, the culling result of the dynamic three-dimensional near scene into the corresponding level and depth in the culling result of the static three-dimensional scene according to the hierarchical depth map, so as to perform scene compositing.
5. The method according to claim 1, characterized by further comprising:
rendering, by the cloud, the corresponding planar distant-view image in the cloud according to the interaction control information;
sending, by the cloud, the rendered planar distant-view image information to the terminal, so that the terminal composites the scene according to the rendered planar distant-view image.
6. The method according to claim 5, characterized in that the cloud rendering the corresponding planar distant-view image in the cloud according to the interaction control information comprises:
retrieving, by the cloud, whether a corresponding planar distant-view image exists in a cache; if so, obtaining the rendered planar distant-view image from the cache; if not, rendering the corresponding planar distant-view image in the cloud according to the interaction control information, and saving the rendered planar distant-view image into the cache.
7. The method according to claim 6, characterized in that the cloud retrieving whether a corresponding planar distant-view image exists in the cache comprises:
encoding, by the cloud, the interaction control information to obtain a code of the planar distant-view image;
retrieving, by the cloud, whether a corresponding planar distant-view image exists in the cache according to the code of the planar distant-view image.
8. The method according to claim 7, characterized in that the cloud encoding the interaction control information comprises:
encoding, by the cloud, the camera position information by means of Hilbert space-filling curve encoding, encoding the camera direction information by means of ordered solid-angle division, and hashing the encoded camera position information and camera direction information to obtain the code of the planar distant-view image in the scene corresponding to the scene setting information;
wherein the interaction control information includes the camera position information, the camera direction information and the scene setting information.
9. The method according to claim 5, characterized in that
the cloud encodes the culling result of the three-dimensional scene, the hierarchical depth map and the rendered planar distant-view image information separately into independent streams, encapsulates them into a multiplexed stream with a time axis, and sends the multiplexed stream to the terminal.
10. A cloud server for three-dimensional scene rendering, comprising:
an interaction control information receiving module, configured to receive interaction control information sent by a terminal;
a three-dimensional scene culling module, configured to cull the corresponding three-dimensional scene in the cloud according to the interaction control information;
a hierarchical depth map generation module, configured to generate a hierarchical depth map of the culled three-dimensional scene;
a sending module, configured to send the culling result of the three-dimensional scene and the hierarchical depth map to the terminal, so that the terminal composites the scene according to the culling result of the three-dimensional scene and the hierarchical depth map.
11. The server according to claim 10, characterized in that the interaction control information includes camera information and scene setting information;
the three-dimensional scene culling module is configured to determine the three-dimensional scene to be culled according to the scene setting information, and to cull the three-dimensional scene according to the camera information and the pre-built three-dimensional scene spatial index structure.
12. The server according to claim 10, characterized in that the hierarchical depth map generation module is configured to generate a texture-mapped hierarchical depth map of the culled three-dimensional scene by means of hierarchical Z-depth.
13. The server according to claim 10, characterized by further comprising:
a planar distant-view rendering module, configured to render the corresponding planar distant-view image in the cloud according to the interaction control information;
the sending module being configured to send the rendered planar distant-view image information to the terminal, so that the terminal composites the scene according to the rendered planar distant-view image.
14. The server according to claim 13, characterized in that the planar distant-view rendering module includes a cache retrieval unit, a rendering unit and a cache unit;
the cache retrieval unit is configured to retrieve whether a corresponding planar distant-view image exists in the cache;
the rendering unit is configured to obtain the rendered planar distant-view image from the cache when a corresponding planar distant-view image exists in the cache, and to render the corresponding planar distant-view image in the cloud according to the interaction control information when no corresponding planar distant-view image exists in the cache;
the cache unit is configured to save the rendered planar distant-view image into the cache.
15. The server according to claim 14, characterized in that the cache retrieval unit includes an encoding subunit and a retrieval subunit;
the encoding subunit is configured to encode the interaction control information to obtain the code of the planar distant-view image;
the retrieval subunit is configured to retrieve whether a corresponding planar distant-view image exists in the cache according to the code of the planar distant-view image.
16. The server according to claim 15, characterized in that the encoding subunit is configured to encode the camera position information by means of Hilbert space-filling curve encoding, encode the camera direction information by means of ordered solid-angle division, and hash the encoded camera position information and camera direction information to obtain the code of the planar distant-view image in the scene corresponding to the scene setting information;
wherein the interaction control information includes the camera position information, the camera direction information and the scene setting information.
17. The server according to claim 13, characterized in that the sending module is configured to encode the culling result of the three-dimensional scene, the hierarchical depth map and the rendered planar distant-view image information separately into independent streams, encapsulate them into a multiplexed stream with a time axis, and send the multiplexed stream to the terminal.
18. A terminal for three-dimensional scene rendering, comprising:
a receiving module, configured to receive the culling result of a static three-dimensional scene and the hierarchical depth map of the static three-dimensional scene sent by the cloud;
a three-dimensional near-scene culling module, configured to cull a dynamic three-dimensional near scene according to interaction control information and an established spatial index structure of the dynamic three-dimensional near scene;
a scene compositing module, configured to insert the culling result of the dynamic three-dimensional near scene into the corresponding level and depth in the culling result of the static three-dimensional scene according to the hierarchical depth map of the static three-dimensional scene, so as to perform scene compositing.
19. A three-dimensional scene rendering system, comprising:
the cloud server according to any one of claims 10 to 17, and
the terminal according to claim 18.
CN201510929262.0A 2015-12-15 2015-12-15 Three-dimensional scene rendering method and system and related equipment Active CN106887032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510929262.0A CN106887032B (en) 2015-12-15 2015-12-15 Three-dimensional scene rendering method and system and related equipment

Publications (2)

Publication Number Publication Date
CN106887032A 2017-06-23
CN106887032B (en) 2020-09-29

Family

ID=59173883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510929262.0A Three-dimensional scene rendering method and system and related equipment 2015-12-15 2015-12-15 Active CN106887032B (en)

Country Status (1)

Country Link
CN (1) CN106887032B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663722A (en) * 2011-01-31 2012-09-12 微软公司 Moving object segmentation using depth images
CN103918012A (en) * 2011-11-07 2014-07-09 史克威尔·艾尼克斯控股公司 Rendering system, rendering server, control method thereof, program, and recording medium
CN103918011A (en) * 2011-11-07 2014-07-09 史克威尔·艾尼克斯控股公司 Rendering system, rendering server, control method thereof, program, and recording medium
CN103297393A (en) * 2012-02-27 2013-09-11 洛阳圈圈堂商贸有限公司 Method and system for achieving visual presentation of client side
CN102663804A (en) * 2012-03-02 2012-09-12 赞奇科技发展有限公司 Quick interactive graphic rendering method
CN104796393A (en) * 2014-05-30 2015-07-22 厦门极致互动网络技术有限公司 Online game system and method based on server real-time rendering
CN105096373A (en) * 2015-06-30 2015-11-25 华为技术有限公司 Media content rendering method, user device and rendering system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110418127A (en) * 2019-07-29 2019-11-05 南京师范大学 Virtual reality fusion device and method based on template pixel under a kind of Web environment
CN110490979A (en) * 2019-07-29 2019-11-22 南京师范大学 Virtual reality fusion device and method based on depth map under a kind of Web environment
CN110418127B (en) * 2019-07-29 2021-05-11 南京师范大学 Operation method of pixel template-based virtual-real fusion device in Web environment
CN110490979B (en) * 2019-07-29 2023-07-21 南京师范大学 Virtual-real fusion device and method based on depth map in Web environment
CN114615528A (en) * 2020-12-03 2022-06-10 中移(成都)信息通信科技有限公司 VR video playing method, system, device and medium
CN114615528B (en) * 2020-12-03 2024-04-19 中移(成都)信息通信科技有限公司 VR video playing method, system, equipment and medium
CN114268784A (en) * 2021-12-31 2022-04-01 东莞仲天电子科技有限公司 Method for improving near-to-eye display experience effect of AR (augmented reality) equipment

Also Published As

Publication number Publication date
CN106887032B (en) 2020-09-29

Similar Documents

Publication Publication Date Title
Würmlin et al. 3D video fragments: Dynamic point samples for real-time free-viewpoint video
US9773343B2 (en) Method for real-time and realistic rendering of complex scenes on internet
US10636201B2 (en) Real-time rendering with compressed animated light fields
WO2012037863A1 (en) Method for simplifying and progressively transmitting 3d model data and device therefor
CN106887032A (en) Three-dimensional scenic rendering intent and system and relevant device
CN102834849A (en) Image drawing device for drawing stereoscopic image, image drawing method, and image drawing program
CN104637089A (en) Three-dimensional model data processing method and device
CN101477701A (en) Built-in real tri-dimension rendering process oriented to AutoCAD and 3DS MAX
WO2011082650A1 (en) Method and device for processing spatial data
CN116977531A (en) Three-dimensional texture image generation method, three-dimensional texture image generation device, computer equipment and storage medium
Andújar et al. Visualization of Large‐Scale Urban Models through Multi‐Level Relief Impostors
Levkovich-Maslyuk et al. Depth image-based representation and compression for static and animated 3-D objects
WO2021245326A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
CN114419099A (en) Method for capturing motion trail of virtual object to be rendered
CN104737539A (en) Device, program, and method for reducing data size of multiple images containing similar information
Luo et al. Quad-tree atlas ray casting: a gpu based framework for terrain visualization and its applications
US20230119830A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
CN114255328A (en) Three-dimensional reconstruction method for ancient cultural relics based on single view and deep learning
Beacco et al. A flexible approach for output‐sensitive rendering of animated characters
CN206975717U (en) A kind of rendering device of extensive three-dimensional animation
CN101488230B (en) VirtualEarth oriented ture three-dimensional stereo display method
WO2020012071A1 (en) A method, an apparatus and a computer program product for volumetric video coding
WO2018040831A1 (en) Graphic identification code generation method and apparatus
Sun et al. Large-scale vector data displaying for interactive manipulation in 3D landscape map
CN101482977B (en) Microstation oriented implantation type true three-dimensional stereo display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant