CN117830497A - Method and system for intelligently distributing 3D rendering power consumption resources
- Publication number
- CN117830497A (application CN202410061637.5A)
- Authority
- CN
- China
- Prior art keywords
- virtual camera
- information
- rendering
- control input
- input signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T15/205—Image-based rendering (G06T15/00—3D [Three Dimensional] image rendering; G06T15/10—Geometric effects; G06T15/20—Perspective computation)
- G06T15/04—Texture mapping
- G06T15/06—Ray-tracing
- G06T15/506—Illumination models (G06T15/50—Lighting effects)
- G06T15/60—Shadow generation (G06T15/50—Lighting effects)
- H04N13/363—Image reproducers using image projection screens (H04N13/00—Stereoscopic video systems; Multi-view video systems)
- H04N13/398—Synchronisation thereof; Control thereof (H04N13/30—Image reproducers)
Abstract
The application provides a method for intelligently allocating 3D rendering power consumption resources, belonging to the technical field of computer 3D visualization. The method detects and acquires a virtual camera control input signal in a 3D scene, generates a complete motion track of the virtual camera, renders a plurality of pixel frames corresponding to that complete motion track, packages the pixel frames in time-sequence order into a new pixel frame set, stores the newly generated set so that it covers the original data, and cyclically projects it onto the screen image in time-sequence order. This solves the problem that, over a continuous time period, the supply of the 3D real-time rendering computing service provided by the system and the user's need to acquire information in a low-interactivity state cannot be matched to each other.
Description
Technical Field
The invention relates to the technical field of computer 3D visualization, in particular to a method and a system for intelligently distributing 3D rendering power consumption resources.
Background
3D real-time rendering technology in prior-art applications focuses mainly on interactivity and real-time performance: during rendering, the system must continuously compute and update the image on the screen to keep interaction and feedback with the user responsive. This means the system continuously performs heavy data processing and computing tasks, using computer graphics techniques such as geometry calculation, illumination calculation and texture mapping to convert three-dimensional scene data into two-dimensional images, rendering three-dimensional objects on a two-dimensional screen to simulate a three-dimensional scene in the real world. A large number of computing modules and computing resources in the system are therefore permanently occupied, these modules continuously consume a large amount of power, and the system stays at a high level of energy consumption.
In the current digital context, where 3D scene information is presented to audiences at conferences, exhibitions, in teaching or in entertainment, user requirements on the effect of 3D rendering have shifted relative to 3D real-time rendering in the traditional sense. On the one hand, to let multiple information recipients receive the same 3D scene information in the same space-time setting, users place higher demands on the 3D rendering result, namely the pixel resolution of the two-dimensional images and screens and the image display area in the actual venue. In practice, large-screen projection is often adopted to enhance the recipients' visual effect and immersion and to improve the efficiency of 3D scene information transmission, but the high resolution of the output images in turn places higher demands on system computing performance and energy consumption. On the other hand, when multiple information recipients receive 3D scene information, they mainly watch the two-dimensional (dynamic) images displayed by the system on the large projection screen. The visual enhancement brought by large-screen projection reduces, to some extent, the users' actual need to send interactive instructions to the system; in practice, the frequency with which users issue operation instructions while 3D scene information is shown on a large projection screen drops sharply, and interactivity between user and system is greatly reduced.
For this new application scenario, in which multiple information recipients receive 3D scene information under the same space-time conditions through large-screen projection, the main technical means at present is still 3D real-time rendering. Over a continuous time period, the supply of the 3D real-time rendering computing service provided by the system and the user's need to acquire information in a low-interactivity state cannot be matched to each other: the system keeps processing and computing large amounts of data, so its energy consumption is excessive and its computing efficiency low. The system runs at a single power consumption level, processing 3D rendering data in real time at high energy cost, and cannot intelligently allocate hardware resources to improve the efficiency of 3D rendering data processing and reduce system energy consumption.
Disclosure of Invention
To remedy the defects of the prior art, the invention provides a method and a system for intelligently allocating 3D rendering power consumption resources. They solve the problem that, over a continuous time period, the supply of the 3D real-time rendering computing service provided by the system and the user's need to acquire information in a low-interactivity state cannot be matched to each other, thereby reducing the system's data processing energy consumption, improving operating efficiency and intelligently allocating the system's energy consumption resources. To achieve the above object, the invention adopts the following technical scheme:
In a first aspect, the invention provides a method for intelligently allocating 3D rendering power consumption resources, characterized by having an initial pixel frame set that can be cyclically projected onto a screen image, and 3D model basic scene information comprising scene object information, light source and shadow information, and virtual camera information. The method comprises:
S1, detecting whether a virtual camera control input signal exists; if so, executing step S2, and if not, executing step S5;
S2, acquiring the virtual camera control input signal and generating a complete motion track of the virtual camera;
S3, performing 3D rendering frame by frame according to the complete motion track of the virtual camera and the 3D model basic scene information, generating a plurality of pixel frames corresponding to the complete motion track;
S4, arranging and packaging the pixel frames generated in step S3 in time-sequence order to generate a new pixel frame set, and storing it so that it covers the initial pixel frame set;
S5, cyclically projecting the pixel frames in the pixel frame set onto the screen image in time-sequence order.
In a second aspect, the invention further provides a system for intelligently allocating 3D rendering power consumption resources, characterized by comprising a control signal monitoring module, a motion track processing module, a 3D rendering module, a pixel frame storage module and a screen image display module:
the control signal monitoring module is used to detect whether a virtual camera control input signal exists, passing control to the motion track processing module if it does and to the screen image display module if it does not; the virtual camera control input signal comprises virtual camera position information and virtual camera view angle information;
the motion track processing module is used to acquire the virtual camera control input signal and generate a complete motion track of the virtual camera;
the 3D rendering module is used to perform 3D rendering frame by frame according to the complete motion track of the virtual camera and the 3D model basic scene information, generating a plurality of pixel frames corresponding to the complete motion track;
the pixel frame storage module is used to arrange and package the plurality of pixel frames generated by the 3D rendering module in time-sequence order into a new pixel frame set, and to store it so that it covers the initial pixel frame set;
the screen image display module is used to cyclically project the pixel frames in the pixel frame set onto the screen image in time-sequence order.
In a third aspect, the present invention also provides a computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements a method as described above.
In a fourth aspect, the present invention further provides an electronic device, including a processor and a memory;
the memory is used for storing one or more programs;
the processor is configured to perform the method described above by invoking the one or more programs.
In a fifth aspect, the invention also provides a computer program product comprising a computer program and/or instructions, characterized in that the computer program and/or instructions, when executed by a processor, implement the steps of the method as described above.
The technical scheme provided by the invention has the following beneficial effects: in an application scenario where multiple information recipients receive 3D scene information under the same space-time conditions through large-screen projection, the scheme detects and acquires the virtual camera control input signal in the 3D scene, generates a complete motion track of the virtual camera, renders a plurality of pixel frames corresponding to that complete motion track, packages them in time-sequence order into a new pixel frame set, stores the newly generated set so that it covers the original data, and cyclically projects it onto the screen image in time-sequence order. This solves the problem that, over a continuous time period, the supply of the 3D real-time rendering computing service provided by the system and the user's need to acquire information in a low-interactivity state cannot be matched to each other, and achieves the technical effects of reducing the system's data processing energy consumption, improving operating efficiency and intelligently allocating the system's energy consumption resources.
Drawings
In order to illustrate the technical solutions of the present invention more clearly, the drawings required for the description of the invention are briefly introduced below.
Fig. 1 is a schematic flow chart of a method for intelligently allocating 3D rendering power consumption resources according to the present invention.
Fig. 2 is a schematic flow chart of a method for automatically generating a complete motion track of a virtual camera.
Fig. 3 is a schematic diagram of a system structure for intelligently allocating 3D rendering power consumption resources according to the present invention.
Fig. 4 is a schematic structural diagram of an exemplary electronic device provided by the present invention.
Detailed Description
The method for intelligently allocating 3D rendering power consumption resources provided by the embodiments of the present application solves, in an application scenario where multiple information recipients receive 3D scene information under the same space-time conditions through large-screen projection, the problem that the supply of the 3D real-time rendering computing service provided by the system and the user's need to acquire information cannot be matched to each other over a continuous time period, thereby reducing the system's data processing energy consumption, improving operating efficiency and intelligently allocating the system's energy consumption resources. Example embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some, not all, of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
Current 3D real-time rendering technology focuses mainly on interactivity and real-time performance: during rendering, the system must continuously compute and update the image on the screen to keep interaction and feedback with the user responsive. This means the system continuously performs heavy data processing and computing tasks, using computer graphics techniques such as geometry calculation, illumination calculation and texture mapping to convert three-dimensional scene data into two-dimensional images, rendering three-dimensional objects on a two-dimensional screen to simulate a three-dimensional scene in the real world. A large number of computing modules and computing resources in the system are therefore permanently occupied, these modules continuously consume a large amount of power, and the system stays at a high level of energy consumption.
In the now-ubiquitous application scenario in which multiple information recipients receive 3D scene information under the same space-time conditions through large-screen projection, users' resolution requirements on the system's 3D rendering results rise while interactivity between user and system falls. Over a continuous time period, the supply of the 3D real-time rendering computing service provided by the system and the user's need to acquire information in a low-interactivity state cannot be matched to each other: the system keeps processing and computing large amounts of data, so its energy consumption is excessive and its computing efficiency low. The system runs at a single power consumption level, processing 3D rendering data in real time at high energy cost, and cannot intelligently allocate hardware resources to improve the efficiency of 3D rendering data processing and reduce system energy consumption.
In view of these technical problems, the overall idea of the technical scheme provided by the application is as follows:
the embodiments of the application provide a method for intelligently allocating 3D rendering power consumption resources, characterized by having an initial pixel frame set that can be cyclically projected onto a screen image, and 3D model basic scene information comprising scene object information, light source and shadow information, and virtual camera information. The method comprises the following steps:
S1, detecting whether a virtual camera control input signal exists; if so, executing step S2, and if not, executing step S5;
S2, acquiring the virtual camera control input signal and generating a complete motion track of the virtual camera;
S3, performing 3D rendering frame by frame according to the complete motion track of the virtual camera and the 3D model basic scene information, generating a plurality of pixel frames corresponding to the complete motion track;
S4, arranging and packaging the pixel frames generated in step S3 in time-sequence order to generate a new pixel frame set, and storing it so that it covers the initial pixel frame set;
S5, cyclically projecting the pixel frames in the pixel frame set onto the screen image in time-sequence order.
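Purely as an illustration of this overall idea, the following minimal Python sketch shows the S1-S5 control flow; every helper name (poll_input, close_loop, render, project) is a hypothetical stand-in for the corresponding step described above, not part of the claimed method:

```python
import time

# Hypothetical stand-ins for the steps described above.
poll_input = lambda: None                          # S1: controller poll; None = no signal
close_loop = lambda sig: sig + sig[::-1]           # S2: naive closed loop (out and back)
render = lambda sample, scene: ("frame", sample)   # S3: one pixel frame per track sample
project = print                                    # S5: stand-in for the display driver

def run(initial_frames, scene, fps=24):
    frames = list(initial_frames)                  # pre-stored initial pixel frame set
    while True:
        signal = poll_input()                      # S1: detect control input
        if signal:                                 # signal present: S2, S3, S4
            frames = [render(s, scene) for s in close_loop(signal)]
        for frame in frames:                       # S5: one cyclic pass, then re-check S1
            project(frame)
            time.sleep(1.0 / fps)
```

In the actual method, step S5 keeps looping until a new control input is detected; the sketch approximates this by re-checking S1 after each full cyclic pass.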
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present application provides a method for intelligently allocating 3D rendering power consumption resources, characterized by having an initial pixel frame set that can be cyclically projected onto a screen image, and 3D model basic scene information comprising scene object information, light source and shadow information, and virtual camera information. The method comprises:
Step S1: detecting whether a virtual camera control input signal exists; if so, executing step S2, and if not, executing step S5;
Specifically, the 3D model basic scene information is an important basis for constructing and presenting a 3D model scene, and a precondition for converting the 3D model into, and finally presenting it as, a video picture composed of continuous two-dimensional images. It mainly comprises scene object information, light source and shadow information, and virtual camera information. All of this information can be set manually in 3D modeling software such as Autodesk Maya or Blender; it can also be extracted automatically through image recognition and computer vision techniques.
Regarding scene object information, each scene object may be described as one or more three-dimensional models. These models may be simple geometric shapes such as cubes, spheres and polygons, or complex characters, vehicles, buildings and the like. Attribute information such as the type, shape, size, position and rotation angle of an object determines its appearance and position in the rendered image. In addition, an object may possess material and texture information, which determines its reflectance, transparency, color and other properties under illumination.
Regarding light source and shadow information, light sources are important factors affecting how an object appears in an image. Attributes of a light source such as its type (e.g., point light, parallel light, ambient light), position, direction and color determine the brightness and shading of objects in the image. Shadow generation and casting information is also an important part, as shadows increase the depth and stereoscopic impression of the image.
Regarding virtual camera information, the virtual camera determines the perspective and composition of the rendered image. The position, orientation, focal length, viewing angle and other parameters of the camera determine the level of detail of the objects and scenes visible in the image. By adjusting the camera parameters, rendering effects of different styles can be achieved, such as a grand panorama with a wide viewing angle or a detail close-up with a narrow viewing angle.
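As a purely illustrative sketch, the three kinds of basic scene information could be organized as follows (Python; all type and field names are assumptions made for this example):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SceneObject:              # scene object information
    shape: str                  # e.g. "cube", "sphere", or a complex mesh
    position: Vec3
    rotation: Vec3
    scale: Vec3
    material: str = ""          # material/texture: reflectance, color, ...

@dataclass
class LightSource:              # light source and shadow information
    kind: str                   # "point", "parallel", "ambient"
    position: Vec3
    direction: Vec3
    color: Vec3
    casts_shadow: bool = True   # shadow generation and casting

@dataclass
class VirtualCamera:            # virtual camera information
    position: Vec3
    orientation: Vec3
    focal_length: float
    fov_deg: float              # wide -> grand panorama; narrow -> close-up

@dataclass
class BasicSceneInfo:           # the "3D model basic scene information"
    objects: List[SceneObject] = field(default_factory=list)
    lights: List[LightSource] = field(default_factory=list)
    camera: Optional[VirtualCamera] = None
```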
Note that the virtual camera information is, in the invention, the main form in which the user transmits control signals to the system; it is the main input signal by which the system intelligently allocates 3D rendering power consumption resources and the related data processing resources, and the key information through which the system realizes human-machine interaction.
Detecting whether the virtual camera control input signal exists includes detecting it in real time. The frequency of this real-time detection may lie between 20 and 100 times per second; this frequency is merely exemplary and does not limit the scope of the claims or the disclosure of the specification, since any detection frequency that achieves the technical effect of detecting the virtual camera control input signal in real time is reasonable and can serve as the detection frequency of the real-time detection of the present invention. Over a continuous time sequence, each real-time detection result is either 'no signal input' or 'signal input'. In general, in the new application scenario where multiple information recipients receive 3D scene information under the same space-time conditions through large-screen projection, interactivity between user and system is low, so the proportion of 'no signal input' results in the time sequence is usually significantly higher than the proportion of 'signal input' results.
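A minimal sketch of such a real-time detection loop is shown below (Python; the read_input hook is a hypothetical stand-in for the controller interface, and the 50 Hz default is one choice within the exemplary 20-100 range):

```python
import time

def monitor(read_input, poll_hz=50):
    """Poll for a virtual camera control input signal at a fixed rate.

    poll_hz can lie anywhere in the exemplary 20-100 range cited above;
    read_input is a hypothetical hook returning a signal or None.
    Returns as soon as a signal is detected (the S1 -> S2 branch);
    while nothing is detected, the stored set keeps playing (S1 -> S5).
    """
    period = 1.0 / poll_hz
    while True:
        signal = read_input()
        if signal is not None:
            return signal
        time.sleep(period)
```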
Furthermore, if the detection result is 'no signal input', the method displays on the large screen, from the preloaded pixel frames and related data, the video picture composed of continuous images; specifically, it shows the initial pixel frame set that can be cyclically projected onto the screen image. That is, while the detection result remains 'no signal input', the method of the present application cyclically projects the preloaded initial pixel frame set onto the screen image. What the information recipients actually see on the large screen is a video picture composed of a segment of continuous pixel frames joined end to end; during cyclic playback they cannot distinguish the beginning of the video from its end. When the detection result is 'no signal input', that is, the virtual camera control input signal does not exist in this step, the method of the invention jumps directly from step S1 to step S5 and executes the detailed method of step S5; see step S5 for details.
If the detection result is 'signal input', that is, the virtual camera control input signal exists in this step, the method of the invention proceeds from step S1 to step S2 and executes the detailed method of step S2; see step S2 for details.
Further, the virtual camera control input signal comprises virtual camera position information, virtual camera view angle information and virtual camera focal length information.
Specifically, the virtual camera position information is a time-ordered set of coordinates of the virtual camera in three-dimensional space, input by the user through a controller. The virtual camera view angle information is a time-ordered set of orientations of the virtual camera in three-dimensional space, input by the user through a controller or adjusted automatically by the controller, so that observation requirements in different scenes can be met. The virtual camera focal length information is a time-ordered set of focal lengths, input by the user through a controller; processed in the subsequent steps, it allows focusing and depth-of-field effects to be finely controlled.
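For illustration, such a control input signal can be represented as a time-ordered list of samples (Python; the class and field names are assumptions made for this example):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CameraSample:
    t: float                                # timestamp in seconds
    position: Tuple[float, float, float]    # coordinates in 3D space
    view_angle: Tuple[float, float, float]  # orientation of the camera
    focal_length: float                     # controls focus and depth of field

# the control input signal: samples arranged in time sequence
ControlInputSignal = List[CameraSample]
```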
Step S2: acquiring the virtual camera control input signal and generating a complete motion track of the virtual camera;
Specifically, the virtual camera control input signal includes virtual camera position information, virtual camera view angle information, virtual camera focal length information, and other parameter information that determines the perspective and composition of the rendered image. These parameters determine the level of detail of the objects and scenes visible in the image; by adjusting them, rendering effects of different styles can be achieved, such as a grand panorama with a wide viewing angle or a detail close-up with a narrow viewing angle.
It should be noted that the virtual camera control input signal, including the virtual camera position information, view angle information and focal length information, is smooth and continuous in time sequence.
In the subsequent steps, after the user inputs the virtual camera control input signal, the invention performs 3D rendering of the other 3D model basic scene information, including scene object information and light source and shadow information, according to that signal. This generates the pixel frames, and their combined set, corresponding to the virtual camera control input signal; the set forms a continuous picture that is presented as a segment of video on the screen terminal.
In addition, although the virtual camera control input signal input by the user is smooth and continuous in time sequence, the start and end of the generated signal are generally clearly separated: the values of the virtual camera position information, view angle information and other parameters at the start of the continuous signal differ greatly from their values at the end. If the method rendered directly from the virtual camera control input signal, the picture content of the first frame and the last frame of the generated pixel frame set would differ considerably; if that set were then played cyclically, an obvious picture jump would appear while the video is displayed on the large screen. This would defeat the very purpose of large-screen projection, which is to enhance the viewers' visual effect and immersion and to improve the efficiency of 3D scene information transmission.
To solve the problem of obvious jumps in the video picture formed by the generated pixel frame set, the invention adopts an interpolation method and, from the virtual camera control input signal, further generates a closed-loop virtual camera motion track, namely the complete motion track of the virtual camera. In the complete motion track, the virtual camera position information, view angle information, focal length information and other parameters remain smoothly connected in time sequence, and the difference between the values at the start and the end of the continuous signal is markedly reduced, leaving the two ends in a connected state. Therefore, after 3D rendering according to the camera position, view angle and other parameter information recorded by the complete motion track, the difference between the picture content of the first frame and the last frame of the pixel frame set is very small. If the set is played cyclically, no picture jump occurs while the video is displayed on the large screen; the start and end pictures of the video are continuous, so large-screen projection can fulfil its purpose of enhancing the viewers' visual effect and immersion and improving the efficiency of 3D scene information transmission.
Further, as shown in fig. 2, acquiring the virtual camera control input signal and generating a complete motion track of the virtual camera includes:
Step S201: acquiring the virtual camera control input signal and generating a first motion track of the virtual camera;
Before the virtual camera control input signal is acquired, it may also be preprocessed and normalized in advance. The preprocessing includes eliminating noise and interference in the signal; the normalization makes the values of the different parameters (such as position, view angle and focal length) comparable and computable;
the preprocessing further includes dynamically adjusting the smoothing parameters: when the virtual camera moves at high speed, smoothing is reduced to keep more detail, and when it moves at low speed, smoothing is increased to eliminate noise, so that the parameters of the data smoothing are adjusted dynamically according to the movement speed and acceleration of the virtual camera;
generating the first motion track of the virtual camera includes recording the virtual camera control input signals, such as the position, view angle and focal length of the virtual camera, in time sequence, and generating the first motion track from them;
generating the first motion track further includes smoothing its data with a moving average or exponential smoothing method to reduce jitter and irregularity in the track; the smoothing parameters can be adjusted to the actual application scenario to improve the quality of the generated result;
Step S202: generating a second motion track of the virtual camera from the virtual camera control input signal by curve interpolation, such that the start of the second motion track connects smoothly to the end of the first motion track and the end of the second motion track connects smoothly to the start of the first motion track;
generating the second motion track by curve interpolation includes curve-fitting the smoothed data with algorithms such as polynomial fitting or spline interpolation; the objective of the curve fitting is to produce a smooth, continuous curve to serve as the second motion track of the virtual camera;
the curve fitting further includes adaptive curve fitting: regions of sharp variation are fitted with a higher-order polynomial or a more complex spline function, while regions of gentle variation use a low-order polynomial or simple linear interpolation.
Step S203: combining and splicing the first motion track and the second motion track of the virtual camera to generate the complete motion track of the virtual camera.
The combining and splicing connect the tracks end to end: the start of the second motion track connects smoothly to the end of the first, and the end of the second motion track connects smoothly to the start of the first, so that the complete motion track of the virtual camera forms a closed loop.
Specifically, the scheme disclosed in this step first generates the first motion track of the virtual camera from the virtual camera control input signal provided by the user. The first motion track comprises virtual camera position information, view angle information, focal length information and other parameters. It is continuous in time sequence, but the start and end of the signal are usually clearly separated, and the values of the position, view angle and other parameters at the start of the continuous signal usually differ greatly from their values at the end. After the first motion track is generated, the method acquires its track data; specifically, it continuously records the position, view angle, focal length and orientation of the virtual camera as it moves along the track, and this acquired data is used for the subsequent curve interpolation processing.
From the generated first motion track and the acquired data, a second motion track corresponding to the first is generated through the curve interpolation processing step. The start of the second motion track connects smoothly to the end of the first, and the end of the second connects smoothly to the start of the first, so that the two tracks join end to end and the motion track of the virtual camera forms a closed loop.
Further, the curve interpolation processing step comprises data smoothing, curve fitting and closed-loop generation. Data smoothing eliminates noise and irregularities in the track data and can be implemented with various smoothing algorithms, for example the moving average method or exponential smoothing. Curve fitting applies a fitting algorithm, such as polynomial fitting or spline interpolation, to the smoothed track data; its aim is to find a curve that smoothly connects the end point of the track back to its start point, and this curve becomes the second motion track of the virtual camera. The second motion track generated by the curve fitting algorithm connects smoothly, at its start, to the end of the first motion track and, at its end, to the start of the first motion track, thereby producing a closed loop, namely the complete motion track of the virtual camera.
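A sketch of the data smoothing and curve interpolation described above is given below, assuming NumPy and SciPy are available. Closing the loop with a periodic cubic spline is one of the spline interpolation options named above; the function name, the moving-average window and the half-duration return leg are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def close_track(t, values, n_return=30, window=5):
    """Close one camera parameter track (e.g. the x coordinate) into a loop.

    t:      (N,) strictly increasing sample times of the first motion track
    values: (N,) the recorded parameter values over time
    Returns (times, values) covering the smoothed first track plus an
    interpolated second track that leads smoothly back to the start.
    """
    t = np.asarray(t, dtype=float)
    values = np.asarray(values, dtype=float)

    # data smoothing (moving average) to reduce jitter in the track
    kernel = np.ones(window) / window
    smooth = np.convolve(values, kernel, mode="same")

    # append the start value after the end and fit a periodic cubic
    # spline: its tail segment is the second motion track, meeting the
    # first track with matched value and derivatives where the loop closes
    duration = t[-1] - t[0]
    t_closed = np.append(t, t[-1] + 0.5 * duration)  # return leg: half duration
    v_closed = np.append(smooth, smooth[0])          # end value = start value
    spline = CubicSpline(t_closed, v_closed, bc_type="periodic")

    t_return = np.linspace(t[-1], t_closed[-1], n_return)[1:]  # skip duplicate
    return (np.concatenate([t, t_return]),
            np.concatenate([smooth, spline(t_return)]))
```

The periodic boundary condition forces the value and derivatives of the curve to match where the return leg meets the start of the first track, which is exactly the smooth end-to-start connection required of the second motion track; in practice each camera parameter (each position coordinate, view angle component and the focal length) would be closed in the same way.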
Step S3: performing 3D rendering frame by frame according to the complete motion track of the virtual camera and the 3D model basic scene information, generating a plurality of pixel frames corresponding to the complete motion track;
Specifically, the virtual camera makes one complete pass along the complete motion track, that is, it moves from the start of the first motion track to the end of the second motion track, and 3D rendering is performed continuously during this motion; the rendering result is a plurality of pixel frames. The rendering frequency (frames per second, FPS) is usually 24 or more, since 24 FPS is currently the accepted standard at which the human eye perceives continuous motion; below this value the displayed video may stutter or appear unsmooth. If higher rendering quality and smoother motion are required, the rendering frequency can of course be raised appropriately, but this also increases rendering time and the consumption of computing resources, so a trade-off must be made according to actual needs.
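For illustration, the frame count, frame rate and clip length relate as in the sketch below (Python; render_one is a hypothetical hook into the renderer, not a defined API):

```python
def render_track(samples, scene, render_one, fps=24):
    """Render one pixel frame per sample of the complete motion track.

    samples:    time-ordered camera samples covering the closed track
    render_one: hypothetical hook into the frame renderer (ray tracing,
                texture mapping and depth testing happen inside it)
    Returns the pixel frames and the clip length in seconds at playback;
    24 fps is the floor cited above for motion the eye sees as continuous.
    """
    frames = [render_one(s, scene) for s in samples]
    return frames, len(frames) / fps

# e.g. 240 track samples played back at 24 fps give a 10-second loop;
# covering the same 10 seconds at 48 fps needs 480 rendered frames,
# roughly doubling rendering time and computing resource consumption.
```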
Further, the frame-by-frame 3D rendering includes ray tracing, texture mapping and depth testing. Ray tracing accurately models and computes the materials, illumination conditions and other properties of each object in the scene, and can simulate real-world light propagation and reflection. Texture mapping maps 2D texture images onto the surfaces of the 3D model, with appropriate stretching and deformation, improving the visual realism of the model. Depth testing detects whether objects closer to the camera occlude objects farther away, ensuring the accuracy of the rendering result.
In addition, the frame-by-frame 3D rendering also includes antialiasing, dynamic shadows and physical simulation. Antialiasing blurs the image or renders to a higher-resolution target, reducing aliasing artifacts in the image. Dynamic shadows compute shadows in real time for the light sources and objects in the scene, improving the realism and dynamism of scene shadows. Physical simulation accurately models and computes the mass, elasticity, collisions and other physical properties of objects, improving the realism of object motion in the scene.
In addition, since the complete motion track of the virtual camera may contain non-smooth, discontinuous parts, the video picture combined from the pixel frames may stutter or play unsmoothly. Transition effects such as gradual change or blurring can be added to the pixel frames corresponding to those parts, so that the combined video plays smoothly without stuttering and the perceived quality for large-screen terminal viewers improves.
Step S4: arranging and packaging the pixel frames generated in step S3 in time-sequence order to generate a new pixel frame set, and storing it so that it covers the initial pixel frame set;
Specifically, before the method of the present application executes steps S1 to S3, an initial pixel frame set that can be cyclically projected onto the screen image is stored in advance; both the initial and the new pixel frames may be kept in a high-speed storage device to improve the efficiency of storing and reading the pixel frame sets. After the virtual camera control input signal is detected in step S1, the above steps are executed to generate the plurality of pixel frames corresponding to the complete motion track of the virtual camera, and these frames are arranged and packaged in time-sequence order into a new pixel frame set. When this pixel frame set is played, the 3D scene picture following the user's control signal can be displayed continuously on the large-screen terminal.
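One way to store the packaged set so that it covers the previous one safely is sketched below (Python; the pickle container and file name are assumptions made for the example):

```python
import os
import pickle
import tempfile

def store_frame_set(frames, path="frame_set.pkl"):
    """Package time-ordered pixel frames and overwrite the stored set.

    The write-then-rename pattern replaces the old set atomically, so a
    concurrent playback loop never reads a half-written frame set.
    """
    packaged = {"order": "by_timestamp", "frames": frames}
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(packaged, f)
    os.replace(tmp, path)   # the new set covers the initial set
```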
Further, the storage, covering and reading of the initial and new pixel frames also include video compression, that is, lossy or lossless compression of the initial and new pixel frames, reducing the occupation of storage space and network bandwidth.
Step S5: cyclically projecting the pixel frames in the pixel frame set onto the screen image in time-sequence order.
Specifically, while cyclically playing the stored pixel frame set, the method does not need to execute any 3D rendering step; continuous 3D scene pictures are output simply by playing the pixel frame set.
Cyclic projection means projecting the pixel frames in the set onto the screen image one after another in time-sequence order. The process loops: when the last pixel frame has been projected, projection resumes from the first pixel frame.
Further, cyclic projection includes timing control, typically accomplished with a precise clock signal or another synchronization mechanism, to ensure that each pixel frame is projected to the screen terminal in the correct order.
Screen projection includes using image processing algorithms and drivers to output to, and interact with, a projector, display or other display device; specific hardware and software algorithms are involved in accurately projecting the information in each pixel frame into the screen image.
Furthermore, parallel processing or pipelining may be employed to optimize projection throughput, and image processing and calibration methods may be employed to reduce image distortion.
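A minimal sketch of the cyclic projection with clock-based timing control follows (Python; the show hook stands in for the projector or display driver):

```python
import itertools
import time

def project_loop(frames, fps=24, show=print):
    """Cyclically project frames in time-sequence order.

    After the last frame the cycle returns to the first, so the viewer
    sees an endless clip; a monotonic clock paces each frame so that
    every pixel frame reaches the screen in the correct order and time.
    """
    period = 1.0 / fps
    deadline = time.monotonic()
    for frame in itertools.cycle(frames):
        show(frame)
        deadline += period
        time.sleep(max(0.0, deadline - time.monotonic()))
```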
In summary, the method for intelligently allocating 3D rendering power consumption resources provided by the embodiments of the present application has the following technical effects:
In an application scenario where multiple information recipients receive 3D scene information under the same space-time conditions through large-screen projection, the scheme detects and acquires the virtual camera control input signal in the 3D scene, generates a complete motion track of the virtual camera, renders a plurality of pixel frames corresponding to that complete motion track, packages them in time-sequence order into a new pixel frame set, stores the newly generated set so that it covers the original data, and cyclically projects it onto the screen image in time-sequence order. This solves the problem that, over a continuous time period, the supply of the 3D real-time rendering computing service provided by the system and the user's need to acquire information in a low-interactivity state cannot be matched to each other, and achieves the technical effects of reducing the system's data processing energy consumption, improving operating efficiency and intelligently allocating the system's energy consumption resources.
Example 2
Based on the same inventive concept as the method for intelligently allocating 3D rendering power consumption resources in the foregoing embodiment, the present invention further provides a system for intelligently allocating 3D rendering power consumption resources, as shown in fig. 3, comprising:
a control signal monitoring module 31, a motion track processing module 32, a 3D rendering module 33, a pixel frame storage module 34 and a screen image display module 35:
the control signal monitoring module 31 is configured to detect whether a virtual camera control input signal exists, passing control to the motion track processing module if it does and to the screen image display module if it does not; the virtual camera control input signal comprises virtual camera position information, virtual camera view angle information and virtual camera focal length information;
the motion track processing module 32 is configured to acquire the virtual camera control input signal and generate a complete motion track of the virtual camera;
the motion track processing module 32 is further configured to preprocess and normalize the virtual camera control input signal before it is acquired, where the preprocessing includes eliminating noise and interference in the signal and the normalization makes the values of the different parameters (such as position, view angle and focal length) comparable and computable; the preprocessing also includes dynamically adjusting the smoothing parameters, reducing smoothing to keep more detail during high-speed movement and increasing smoothing to eliminate noise during low-speed movement;
Further, the motion track processing module 32 comprises a motion track generation module 321, a curve interpolation module 322 and a motion track synthesis module 323:
the motion track generation module 321 is configured to acquire the virtual camera control input signal and generate a first motion track of the virtual camera;
the motion track generation module 321 is further configured to smooth the data of the first motion track with a moving average or exponential smoothing method to reduce jitter and irregularity in the track; the smoothing parameters can be adjusted to the actual application scenario to improve the quality of the generated result;
the curve interpolation module 322 is configured to generate a second motion track of the virtual camera from the virtual camera control input signal by curve interpolation, such that the start of the second motion track connects smoothly to the end of the first motion track and the end of the second motion track connects smoothly to the start of the first motion track;
the curve interpolation module 322 further performs curve fitting on the smoothed data with algorithms such as polynomial fitting or spline interpolation; the curve fitting further includes adaptive curve fitting, using a higher-order polynomial or a more complex spline function for regions of sharp variation and a low-order polynomial or simple linear interpolation for regions of gentle variation;
the motion track synthesis module 323 is configured to combine and splice the first motion track and the second motion track of the virtual camera to generate the complete motion track of the virtual camera.
The 3D rendering module 33 is configured to perform 3D rendering frame by frame according to the complete motion track of the virtual camera and the 3D model basic scene information, generating a plurality of pixel frames corresponding to the complete motion track;
the pixel frame storage module 34 is configured to arrange and package the plurality of pixel frames generated by the 3D rendering module in time-sequence order into a new pixel frame set, and to store it so that it covers the initial pixel frame set;
the screen image display module 35 is configured to cyclically project the pixel frames in the pixel frame set onto the screen image in time-sequence order.
The modifications and specific examples of the method for intelligently allocating 3D rendering power consumption resources described for the first embodiment in fig. 1 and fig. 2 apply equally to the system of this embodiment in fig. 3. From the foregoing detailed description of the method, those skilled in the art can clearly see how the system is implemented, so for brevity of description it is not detailed here.
Exemplary readable storage medium
The embodiments of the present invention also provide a computer-readable storage medium capable of implementing all the steps of the method in the above embodiments, the computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements all the steps of the method in the above embodiments.
Exemplary electronic device
An electronic device according to an embodiment of the present application is described below with reference to fig. 4.
Fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the application.
Based on the inventive concept of the method for intelligently allocating 3D rendering power consumption resources as in the previous embodiments, the present invention further provides an electronic device, which is characterized by comprising a processor and a memory; wherein the memory is used for storing one or more programs; the processor is configured to execute any step of the above-described method for intelligently allocating 3D rendering power consumption resources by invoking the one or more programs.
In fig. 4, a bus architecture is represented by bus 400. Bus 400 may comprise any number of interconnected buses and bridges, linking together various circuits, including one or more processors, represented by processor 403, and memory, represented by memory 404. Bus 400 may also link various other circuits, such as peripheral devices, voltage regulators and power management circuits; these are well known in the art and therefore not described further herein. Bus interface 406 provides an interface between bus 400 and the receiver 401 and transmitter 402. The receiver 401 and the transmitter 402 may be the same element, namely a transceiver, providing a unit for communicating with various other systems over a transmission medium.
The processor 403 is responsible for managing the bus 400 and general processing, while the memory 404 may be used to store data used by the processor 403 in performing operations.
The embodiment of the invention provides a method for intelligently allocating 3D rendering power consumption resources, characterized by having an initial pixel frame set that can be cyclically projected onto a screen image, and 3D model basic scene information comprising scene object information, light source and shadow information, and virtual camera information. The method comprises: S1, detecting whether a virtual camera control input signal exists; if so, executing step S2, and if not, executing step S5; S2, acquiring the virtual camera control input signal and generating a complete motion track of the virtual camera; S3, performing 3D rendering frame by frame according to the complete motion track of the virtual camera and the 3D model basic scene information, generating a plurality of pixel frames corresponding to the complete motion track; S4, arranging and packaging the pixel frames generated in step S3 in time-sequence order to generate a new pixel frame set, and storing it so that it covers the initial pixel frame set; S5, cyclically projecting the pixel frames in the pixel frame set onto the screen image in time-sequence order. The method can detect and acquire the virtual camera control input signal in the 3D scene, generate a complete motion track of the virtual camera, generate a plurality of pixel frames corresponding to that track, package them in time-sequence order into a new pixel frame set, store the newly generated set so that it covers the original data, and cyclically project it onto the screen image in time-sequence order, thereby solving the problem that, over a continuous time period, the supply of the 3D real-time rendering computing service provided by the system and the user's need to acquire information in a low-interactivity state cannot be matched to each other.
Additionally, the electronic device may further include a communication module, an input unit, an audio processor, a display, a power supply, and the like. The processor (or controller, or operation control) employed may comprise a microprocessor or other processor device and/or logic device that receives inputs and controls the operation of the various components of the electronic device. The memory may be one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or another suitable device; it may store the above-mentioned related data information as well as a program for processing it, and the processor may execute the program stored in the memory to realize information storage, processing, and so on. The input unit provides input to the processor and may be a key or a touch input device. The power supply provides power for the electronic device. The display displays objects such as images and characters and may be, for example, an LCD display. The communication module is a transmitter/receiver that transmits and receives signals via an antenna; it is coupled to the processor to provide an input signal and receive an output signal, in the same manner as a conventional mobile communication terminal. Based on different communication technologies, several communication modules, such as a cellular network module, a Bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module is also coupled to a speaker and a microphone via the audio processor to provide audio output through the speaker and receive audio input from the microphone, implementing the usual telecommunications functions. The audio processor may include any suitable buffers, decoders, amplifiers, and so forth; it is also coupled to the central processor, so that sound can be recorded on the host through the microphone and sound stored on the host can be played through the speaker.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

While preferred embodiments of the present invention have been described, those skilled in the art may make additional variations and modifications to these embodiments once they learn of the basic inventive concepts. It is therefore intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto; any changes or substitutions easily conceived by those skilled in the art within the technical scope of the present disclosure should fall within its scope of protection. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.
Claims (10)
1. A method for intelligently allocating 3D rendering power consumption resources, characterized by having an initial pixel frame set that can be cyclically projected to a screen image and 3D model basic scene information, the 3D model basic scene information including scene object information, light source and shadow information, and virtual camera information, the method comprising:
S1, detecting whether a virtual camera control input signal exists; if so, executing step S2, and if not, executing step S5;
S2, acquiring the virtual camera control input signal and generating a complete virtual camera motion trajectory;
S3, performing 3D rendering frame by frame according to the complete virtual camera motion trajectory and the 3D model basic scene information, and generating a plurality of pixel frames corresponding to the complete virtual camera motion trajectory;
S4, arranging and packaging the pixel frames generated in step S3 in time sequence to generate a new pixel frame set, and storing it so as to overwrite the initial pixel frame set;
S5, cyclically projecting the pixel frames in the pixel frame set onto the screen image in time-sequence order.
2. The method of claim 1, wherein acquiring the virtual camera control input signal and generating the complete virtual camera motion trajectory comprises:
S201, acquiring the virtual camera control input signal and generating a first virtual camera motion trajectory;
S202, generating a second virtual camera motion trajectory from the virtual camera control input signal by curve interpolation, wherein the start of the second trajectory connects smoothly to the end of the first trajectory and the end of the second trajectory connects smoothly to the start of the first trajectory, so that the two trajectories form a closed loop;
S203, combining and splicing the first and second virtual camera motion trajectories to generate the complete virtual camera motion trajectory.
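As a minimal sketch of step S202, assuming the first trajectory is given as an array of sampled camera positions, cubic Hermite interpolation is one possible curve interpolation method (the claim does not mandate a particular one) that makes both joints tangent-continuous:

```python
import numpy as np

def close_trajectory(first_traj, n_return=60):
    """Sketch of S202/S203: build a return segment whose start joins the end
    of the input-driven trajectory and whose end joins its start, so splicing
    the two yields a smooth closed loop of camera positions."""
    first_traj = np.asarray(first_traj, dtype=float)  # shape (N, 3), N >= 2
    p0, p1 = first_traj[-1], first_traj[0]            # go from the end back to the start
    m0 = first_traj[-1] - first_traj[-2]              # exit tangent of the first trajectory
    m1 = first_traj[1] - first_traj[0]                # entry tangent of the first trajectory
    t = np.linspace(0.0, 1.0, n_return)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1                         # cubic Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    scale = n_return                                  # per-sample tangents -> unit parameter
    second_traj = h00*p0 + h10*scale*m0 + h01*p1 + h11*scale*m1
    # Drop duplicated endpoints so the spliced loop has no repeated poses.
    return np.vstack([first_traj, second_traj[1:-1]])
```

The Hermite basis guarantees that the return segment begins at `p0` with tangent proportional to `m0` and ends at `p1` with tangent proportional to `m1`, which is exactly the smooth-connection condition the claim states at both joints.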
3. The method of claim 2, wherein the virtual camera control input signal comprises virtual camera position information, virtual camera view angle information, and virtual camera focal length information.
4. The method of claim 2, wherein the frame-by-frame 3D rendering comprises ray tracing, texture mapping, and depth testing:
the ray tracing comprises modeling the material and illumination conditions of each object in the scene;
the texture mapping comprises mapping 2D texture images onto the surface of the 3D model and applying stretching deformation;
the depth testing comprises detecting whether objects in the scene that are closer to the camera occlude objects that are farther away.
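The depth testing described above is conventionally realized with a z-buffer; the following sketch, where the fragment tuples `(x, y, depth, color)` are hypothetical inputs, illustrates the occlusion rule:

```python
import numpy as np

def depth_test(fragments, width, height):
    """Sketch of a z-buffer: per pixel, keep only the fragment closest to
    the camera, so nearer objects occlude farther ones."""
    z_buffer = np.full((height, width), np.inf)   # start "infinitely far"
    color_buffer = np.zeros((height, width, 3))
    for x, y, depth, color in fragments:          # fragment: (x, y, depth, rgb)
        if depth < z_buffer[y, x]:                # closer than what is stored?
            z_buffer[y, x] = depth
            color_buffer[y, x] = color
    return color_buffer
```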
5. The method of claim 4, wherein the frame-by-frame 3D rendering further comprises antialiasing, dynamic shadows, and physical simulation:
the antialiasing comprises blurring the image and rendering objects at high resolution;
the dynamic shadows comprise performing real-time shadow calculation for the light sources and objects in the scene;
the physical simulation comprises modeling the mass, elasticity, and collisions of objects.
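The antialiasing described in this claim, rendering at high resolution and then blurring, matches the pattern of supersampling. A sketch, where `render_fn` is a hypothetical renderer that takes an output resolution and returns an image array:

```python
import numpy as np

def supersample_aa(render_fn, width, height, factor=2):
    """Sketch of supersampling antialiasing: render at factor-times the
    target resolution, then box-filter (average) down to blur jagged edges."""
    hi = np.asarray(render_fn(width * factor, height * factor))
    hi = hi.reshape(height, factor, width, factor, -1)  # group factor x factor blocks
    return hi.mean(axis=(1, 3))                         # average each block
```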
6. A system for intelligently allocating 3D rendering power consumption resources, characterized by comprising a control signal monitoring module, a motion trajectory processing module, a 3D rendering module, a pixel frame storage module, and a screen image display module:
the control signal monitoring module is used for detecting whether a virtual camera control input signal exists, entering the motion trajectory processing module if it exists, and entering the screen image display module if it does not; the virtual camera control input signal comprises virtual camera position information, virtual camera view angle information, and virtual camera focal length information;
the motion trajectory processing module is used for acquiring the virtual camera control input signal and generating a complete virtual camera motion trajectory;
the 3D rendering module is used for performing 3D rendering frame by frame according to the complete virtual camera motion trajectory and the 3D model basic scene information, and generating a plurality of pixel frames corresponding to the complete trajectory;
the pixel frame storage module is used for arranging and packaging the plurality of pixel frames generated by the 3D rendering module in time sequence to generate a new pixel frame set, and storing it so as to overwrite the initial pixel frame set;
the screen image display module is used for cyclically projecting the pixel frames in the pixel frame set onto the screen image in time-sequence order.
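Purely as an illustration of the module decomposition in claim 6, the following skeleton wires the five modules together; all class and method names are hypothetical and not mandated by the claims:

```python
class RenderingSystem:
    """Sketch of claim 6's module decomposition; each argument is an object
    implementing the corresponding module's role."""
    def __init__(self, monitor, trajectory, renderer, storage, display):
        self.monitor = monitor        # control signal monitoring module
        self.trajectory = trajectory  # motion trajectory processing module
        self.renderer = renderer      # 3D rendering module
        self.storage = storage        # pixel frame storage module
        self.display = display        # screen image display module

    def tick(self, scene_info):
        signal = self.monitor.detect()
        if signal is not None:                              # input present:
            path = self.trajectory.full_trajectory(signal)  # rebuild trajectory,
            frames = self.renderer.render(path, scene_info) # re-render frames,
            self.storage.overwrite(frames)                  # overwrite the stored set
        self.display.project_next(self.storage.frames)      # always: cyclic projection
```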
7. The system of claim 6, wherein the motion trajectory processing module comprises a motion trajectory generation module, a curve interpolation module, and a motion trajectory synthesis module:
the motion trajectory generation module is used for acquiring the virtual camera control input signal and generating a first virtual camera motion trajectory;
the curve interpolation module is used for generating a second virtual camera motion trajectory from the virtual camera control input signal by curve interpolation, wherein the start of the second trajectory connects smoothly to the end of the first trajectory and the end of the second trajectory connects smoothly to the start of the first trajectory;
the motion trajectory synthesis module is used for combining and splicing the first and second virtual camera motion trajectories to generate the complete virtual camera motion trajectory.
8. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1-5.
9. An electronic device comprising a processor and a memory;
the memory is used for storing one or more programs;
the processor is configured to perform the method of any one of claims 1-5 by invoking the one or more programs.
10. A computer program product comprising a computer program and/or instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202410061637.5A | 2024-01-16 | 2024-01-16 | Method and system for intelligently distributing 3D rendering power consumption resources |

Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN117830497A | 2024-04-05 |

Family ID: 90524058

Status: Pending (CN202410061637.5A, filed 2024-01-16, published as CN117830497A)
Legal Events

| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |