CN116563085B - Large-scale parallel processing method and system for offline rendering - Google Patents

Large-scale parallel processing method and system for offline rendering

Info

Publication number
CN116563085B
Authority
CN
China
Prior art keywords
rendering
initial
frame
typical
user side
Prior art date
Legal status
Active
Application number
CN202310819601.4A
Other languages
Chinese (zh)
Other versions
CN116563085A (en)
Inventor
邓正秋 (Deng Zhengqiu)
徐振语 (Xu Zhenyu)
Current Assignee
Hunan Malanshan Video Advanced Technology Research Institute Co., Ltd.
Original Assignee
Hunan Malanshan Video Advanced Technology Research Institute Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hunan Malanshan Video Advanced Technology Research Institute Co., Ltd.
Priority to CN202310819601.4A
Publication of CN116563085A
Application granted
Publication of CN116563085B
Status: Active


Classifications

    • G06T 1/20 — Processor architectures; Processor configuration, e.g. pipelining (G PHYSICS; G06 COMPUTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06F 9/5027 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals (G06F ELECTRIC DIGITAL DATA PROCESSING; G06F 9/00 Arrangements for program control; G06F 9/46 Multiprogramming arrangements)
    • G06F 9/5061 — Partitioning or combining of resources
    • G06T 15/205 — Image-based rendering (G06T 15/00 3D [Three Dimensional] image rendering; G06T 15/10 Geometric effects; G06T 15/20 Perspective computation)
    • G06V 20/46 — Extracting features or characteristics from video content, e.g. video fingerprints, representative shots or key frames (G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V 20/40 Scene-specific elements in video content)
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network (H04L TRANSMISSION OF DIGITAL INFORMATION; H04L 67/00 Network arrangements or protocols for supporting network services or applications)
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES)


Abstract

The invention discloses a large-scale parallel processing method and system for offline rendering. In the method, the cloud acquires the initial video frames uploaded by a user side and the processing progress of each rendering node; it extracts typical frames, distributes them to the user side, and predicts the first rendering duration the typical frames will need at the user side; it then determines the cloud rendering workload. The typical frames are removed from the initial video frames uploaded by the user side, and the remaining frames are added to a first waiting sequence as an initial file; the cloud creates a virtual rendering group corresponding to the initial file. The initial file in the first waiting sequence is distributed to a second waiting sequence corresponding to the virtual rendering group, and each rendering node in the virtual rendering group is controlled to render the initial video frames in the second waiting sequence, so that the deviation between the second rendering duration, in which the cloud completes its rendering, and the first rendering duration stays within a set deviation. The technical scheme helps remedy the defect of existing offline rendering that a user cannot predict the rendering duration and can only keep waiting for rendering to finish.

Description

Large-scale parallel processing method and system for offline rendering
Technical Field
The invention relates to the technical field of image processing, in particular to a large-scale parallel processing method for offline rendering and a large-scale parallel processing system for offline rendering.
Background
Image rendering is an image processing technique: the process of converting a three-dimensional light-energy transfer process into a two-dimensional image.
The work done in image rendering is to generate an image through geometric transformation, projective transformation, perspective transformation and window clipping, then shade it using the acquired material and shadow information. When rendering finishes, the image information is output to an image file or a video file, or the image is generated in a frame buffer of the display device.
Offline rendering differs from real-time rendering. It is common in film and television animation, and generally refers to a computer rendering pictures according to predefined lights and trajectories; once rendering is complete, the pictures are played back continuously to realize the animation effect, and no picture is displayed during the computation.
With the rapid development of image processing technology, a large number of offline rendering computing tasks are generated. These rendering demands are not evenly distributed across time periods, so rendering users end up waiting on a large scale. Because offline rendering involves many rendering nodes, the wait is long when the rendering workload is heavy; yet the user cannot predict the rendering duration and can only keep waiting for rendering to finish, so the user experience is poor.
Disclosure of Invention
The invention mainly aims to provide a large-scale parallel processing method and a large-scale parallel processing system for offline rendering, to remedy the defect of existing offline rendering that a user cannot predict the rendering duration and can only keep waiting for rendering to finish, which makes for a poor user experience.
In order to achieve the above object, in the method for massively parallel processing of offline rendering provided by the present invention, a cloud is communicatively connected to a plurality of rendering nodes, and the method includes the following steps:
acquiring an initial video frame to be rendered, which is uploaded by a user side, and acquiring rendering parameters set by the user;
acquiring current processing progress information of each rendering node;
extracting a typical frame from each initial video frame;
distributing the typical frames to a user side, and predicting first rendering time of the typical frames at the user side according to computing resources and rendering parameters of the user side;
determining cloud rendering workload according to other initial video frames and rendering parameters except the typical frames;
removing the typical frames from the initial video frames uploaded by the user side, and adding the remaining frames to a first waiting sequence as an initial file, wherein the initial file comprises every initial video frame except the typical frames;
Creating a virtual rendering group corresponding to the initial file according to the processing progress information of each rendering node and the first rendering time of the typical frame at the user side;
the initial file in the first waiting sequence is distributed to a second waiting sequence corresponding to the virtual rendering group, so that each rendering node in the virtual rendering group is controlled to render each initial video frame in the second waiting sequence, and the deviation of the second rendering time length of the cloud end completing the rendering and the first rendering time length is within the set deviation.
Preferably, the step of distributing the initial file in the first waiting sequence to a second waiting sequence corresponding to the virtual rendering group to control each rendering node in the virtual rendering group to render each initial video frame in the second waiting sequence, so that a deviation between a second rendering time length when the cloud finishes rendering and the first rendering time length is within a set deviation includes:
the cloud end distributes monitoring units for each virtual rendering group, wherein the monitoring units of different virtual rendering groups are in communication connection;
the monitoring unit acquires the initial video frames in the corresponding initial file, arranges them into a processing sequence in descending order of predicted workload, and establishes a rendering result set, wherein the empty rendering result set holds initial elements arranged in frame order;
the monitoring unit detects the processing progress information of the virtual rendering group to judge whether each rendering node is idle, controls each idle rendering node to extract initial video frames from the processing sequence in turn for rendering, sends each rendering result file to the rendering result set, and replaces the initial element at the corresponding position according to the frame order of the initial video frame corresponding to that rendering result file.
Preferably, the step of distributing the initial file in the first waiting sequence to a second waiting sequence corresponding to the virtual rendering group to control each rendering node in the virtual rendering group to render each initial video frame in the second waiting sequence, so that the deviation between the second rendering time length when the cloud finishes rendering and the first rendering time length is within the set deviation, further includes:
the monitoring unit detects the rendering start time, the completion proportion of the processing sequence and the completion proportion of the user side, and detects the current processing progress of each rendering node in the virtual rendering group;
when the completion proportion of the processing sequence is smaller than that of the user side, and the deviation between the two completion proportions is larger than the set deviation, the monitoring unit sends the remaining initial video frames in the processing sequence to rendering nodes outside the virtual rendering group for queue-insertion processing, sends the resulting rendering result files to the rendering result set, and replaces the initial elements at the corresponding positions according to the frame order of the initial video frames corresponding to those rendering result files.
Preferably, the step of extracting a representative frame from each initial video frame includes:
acquiring a key frame in an initial video frame uploaded by a user side;
extracting the number of models to be rendered from each key frame to obtain the average number of models;
and determining a plurality of typical frames from each key frame according to the average model number and the processing progress information of each rendering node of the cloud.
Preferably, the step of distributing the typical frame to the user side and predicting the first rendering time length of the typical frame at the user side according to the computing resource and the rendering parameter of the user side includes:
the cloud allocates the typical frames to the user side;
the cloud performs rendering test on one typical frame at the user side to obtain a correction coefficient determined according to the computing resource condition of the user side;
acquiring the size of a typical frame;
obtaining the number of models of a typical frame;
acquiring a typical frame number;
obtaining the resolution, minimum subdivision, maximum subdivision, noise threshold and light cache subdivision of rendering;
predicting the first rendering time of the typical frame at the user side according to the size of the typical frame, the number of models, the number of the typical frames, the rendering resolution, the minimum subdivision, the maximum subdivision, the noise threshold, the light cache subdivision and the correction coefficient.
Preferably, the step of creating a virtual rendering group corresponding to the initial file according to the processing progress information of each rendering node and the first rendering time length of the typical frame at the user side includes:
the cloud sorts all rendering nodes by remaining rendering time from short to long according to the processing progress information of each rendering node;
according to the first rendering duration and the ordering of the rendering nodes, selecting nodes from the front of the ordering, among the rendering nodes whose remaining rendering time is shorter than the set duration, to create a virtual rendering group.
Preferably, the method further comprises:
the cloud end is in communication connection with each monitoring unit through the scheduling unit;
the scheduling unit compares the processing progress of the cloud with that of the user side, and monitors whether each virtual rendering group has released idle rendering nodes, wherein a released idle rendering node is a node released after all initial video frames of the initial file corresponding to its virtual rendering group have been distributed;
and when the processing progress of the cloud is smaller than that of the user side, the scheduling unit supplements the released idle rendering nodes to the virtual rendering group.
Preferably, the total rendering time of the processing sequence, i.e. the second rendering duration, is calculated in the following manner:
the method comprises the steps of obtaining the size, the number of models, the rendering resolution, the minimum subdivision, the maximum subdivision, the noise threshold and the light cache subdivision of each initial video frame in an initial file;
the total work volume of the process sequence is predicted according to the following formula:
$t_i = T\left(k_s\frac{s_i}{s_0} + k_m\frac{m_i}{m_0} + k_r\frac{r_i}{r_0} + k_a\frac{a_i}{a_0} + k_b\frac{b_i}{b_0} + k_n\frac{n_0}{n_i} + k_g\frac{g_i}{g_0}\right), \qquad T_2 = \sum_{i=1}^{n} t_i$
where $t_i$ is the predicted rendering time of the i-th initial video frame in the initial file, $i = 1, 2, \dots, n$, and n is the number of initial video frames in the initial file; T is a preset standard time, $T > 0$; $s_i$ is the size of the i-th initial video frame and $s_0$ the standard size; $k_s$ is the size coefficient, a constant; $m_i$ is the number of models of the i-th initial video frame and $m_0$ the standard number of models; $k_m$ is the model coefficient, a constant taken from several preset model coefficients according to the model types of the initial video frame; $r_i$ is the rendering resolution of the i-th initial video frame and $r_0$ the standard rendering resolution; $k_r$ is the resolution coefficient, a constant; $a_i$ is the minimum subdivision of the i-th initial video frame and $a_0$ the standard minimum subdivision; $k_a$ is the minimum subdivision coefficient, a constant; $b_i$ is the maximum subdivision of the i-th initial video frame and $b_0$ the standard maximum subdivision; $k_b$ is the maximum subdivision coefficient, a constant; $n_i$ is the noise threshold of the i-th initial video frame and $n_0$ the standard noise threshold; $k_n$ is the noise threshold coefficient, a constant; $g_i$ is the light cache subdivision of the i-th initial video frame and $g_0$ the standard light cache subdivision; $k_g$ is the light cache subdivision coefficient, a constant; $T_2$ is the total predicted rendering duration of the initial file, namely the second rendering duration;
the proportion of completion of the processing sequence is determined with reference to the following:
$p_2 = Q / T_2$
where $p_2$ is the completion proportion of the processing sequence and Q is the rendering start duration (the time elapsed since rendering began).
Preferably, the total working amount of the user side is determined by the first rendering duration in the following specific manner:
$t_j = \frac{T}{Y}\left(k_s\frac{s_j}{s_0} + k_m\frac{m_j}{m_0} + k_r\frac{r_j}{r_0} + k_a\frac{a_j}{a_0} + k_b\frac{b_j}{b_0} + k_n\frac{n_0}{n_j} + k_g\frac{g_j}{g_0}\right), \qquad T_1 = \sum_{j=1}^{m} t_j$
where $t_j$ is the predicted rendering time of the j-th typical frame allocated to the user side, $j = 1, 2, \dots, m$, and m is the number of typical frames allocated to the user side; $T_1$ is the first rendering duration; $s_j$ is the size of the j-th typical frame allocated to the user side; $m_j$ its number of models; $r_j$ its rendering resolution; $a_j$ its minimum subdivision; $b_j$ its maximum subdivision; $n_j$ its noise threshold; $g_j$ its light cache subdivision; Y is a correction coefficient determined according to the computing resource condition of the user side, Y > 0;
the completion proportion of the user side is determined by referring to the following modes:
$p_1 = Q / T_1$
where $p_1$ is the completion proportion of the user side.
In addition, in order to achieve the above object, the present invention also proposes a massively parallel processing system for offline rendering, for executing the method of any one of the above objects; the system comprises a cloud and a plurality of rendering nodes, wherein the cloud is in communication connection with each rendering node.
In this technical scheme, the user side uploads the initial video frames to be rendered to the cloud and sets the rendering parameters. The cloud extracts at least one typical frame from the initial video frames, distributes the typical frames to the user side, and predicts the first rendering duration the typical frames will need to finish rendering at the user side. The cloud then determines the workload of the cloud rendering work from the remaining initial video frames and the rendering parameters set by the user, and adds those remaining frames to the first waiting sequence as the initial file awaiting cloud rendering. According to the first rendering duration, the cloud organizes several rendering nodes into a virtual rendering group to process the initial file and controls the rendering nodes in the group to pace their rendering against the rendering progress of the typical frames at the user side, so that the virtual rendering group finishes rendering the initial file at about the time the typical frames finish rendering at the user side. The user can thus watch the processing progress of the typical frames locally and estimate the offline rendering duration from that progress.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment of a method for performing off-line rendering massively parallel processing according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The description as it relates to "first", "second", etc. in the present invention is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the present invention, unless specifically stated and limited otherwise, the terms "connected," "affixed," and the like are to be construed broadly, and for example, "affixed" may be a fixed connection, a removable connection, or an integral body; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In addition, the technical solutions of the embodiments of the present invention may be combined with each other, but it is necessary to be based on the fact that those skilled in the art can implement the technical solutions, and when the technical solutions are contradictory or cannot be implemented, the combination of the technical solutions should be considered as not existing, and not falling within the scope of protection claimed by the present invention.
Referring to fig. 1, in a first embodiment of the method for offline rendering massively parallel processing according to the present invention, a cloud end is communicatively connected to a plurality of rendering nodes, the method includes the following steps:
step S10, an initial video frame to be rendered, which is uploaded by a user side, is obtained, and rendering parameters set by the user are obtained;
Step S20, current processing progress information of each rendering node is obtained;
step S30, extracting typical frames from each initial video frame;
step S40, distributing the typical frames to the user side, and predicting the first rendering time of the typical frames at the user side according to the computing resources and rendering parameters of the user side;
step S50, determining cloud rendering workload according to other initial video frames and rendering parameters except the typical frames;
step S60, removing the typical frames from the initial video frames uploaded by the user side, and adding the remaining frames to a first waiting sequence as an initial file, wherein the initial file comprises every initial video frame except the typical frames;
step S70, creating a virtual rendering group corresponding to the initial file according to the processing progress information of each rendering node and the first rendering time of the typical frame at the user side;
step S80, the initial file in the first waiting sequence is distributed to a second waiting sequence corresponding to the virtual rendering group, so that each rendering node in the virtual rendering group is controlled to render each initial video frame in the second waiting sequence, and the deviation of the second rendering time length of the cloud end completing the rendering and the first rendering time length is within the set deviation.
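For illustration only, the following sketch outlines steps S10 through S80 as a single planning routine. It is a minimal sketch in Python; every identifier (Frame, split_job, plan, the group-sizing rule) is an assumption for illustration, not taken from the patent.

```python
# Hypothetical sketch of steps S10-S80; all names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    workload: float          # predicted rendering time of this frame

def split_job(frames, typical_ids):
    """S30/S60: separate the typical frames from the initial file."""
    typical = [f for f in frames if f.frame_id in typical_ids]
    initial_file = [f for f in frames if f.frame_id not in typical_ids]
    return typical, initial_file

def plan(frames, typical_ids):
    typical, initial_file = split_job(frames, typical_ids)
    t1 = sum(f.workload for f in typical)          # S40: first rendering duration
    cloud_workload = sum(f.workload for f in initial_file)    # S50
    nodes_needed = max(1, round(cloud_workload / t1))         # S70 sizing (assumed rule)
    return typical, initial_file, t1, nodes_needed            # S80 targets |T2 - T1| small

# Example: 6 frames, frames 2 and 5 chosen as typical frames.
frames = [Frame(i, w) for i, w in enumerate([5.0, 2.0, 3.0, 8.0, 1.0, 3.0])]
print(plan(frames, {2, 5}))
```

With these numbers the typical frames take 6.0 units at the user side, the cloud keeps 16.0 units of workload, and roughly three nodes are needed to finish in a comparable time.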
In this technical scheme, the user side uploads the initial video frames to be rendered to the cloud and sets the rendering parameters. The cloud extracts at least one typical frame from the initial video frames, distributes the typical frames to the user side, and predicts the first rendering duration the typical frames will need to finish rendering at the user side. The cloud then determines the workload of the cloud rendering work from the remaining initial video frames and the rendering parameters set by the user, and adds those remaining frames to the first waiting sequence as the initial file awaiting cloud rendering. According to the first rendering duration, the cloud organizes several rendering nodes into a virtual rendering group to process the initial file and controls the rendering nodes in the group to pace their rendering against the rendering progress of the typical frames at the user side, so that the virtual rendering group finishes rendering the initial file at about the time the typical frames finish rendering at the user side. The user can thus watch the processing progress of the typical frames locally and estimate the offline rendering duration from that progress.
When the cloud extracts the typical frames, it can determine them according to the current processing progress of each cloud rendering node and the rendering workload of the initial video frames uploaded by the user side, so that the cloud finishes its offline rendering at about the time the user side finishes rendering the typical frames. For example, when the cloud rendering nodes are currently idle, the cloud's rendering capability is ample and the work can be completed quickly, so an initial video frame with a smaller rendering workload can be chosen as the typical frame and sent to the user side. Conversely, when the cloud rendering nodes are busy, the cloud needs a longer time to finish, so an initial video frame with a larger rendering workload can be chosen as the typical frame: the user side then relieves rendering pressure on the cloud, and its relatively longer rendering time leaves the cloud a correspondingly longer window to finish its own work.
When the cloud creates the virtual rendering group, a corresponding number of rendering nodes are selected according to the amount of rendering workload corresponding to the initial video frame uploaded by the user side, for example, when the rendering workload is more, more rendering nodes are selected, and when the rendering workload is less, fewer rendering nodes are selected.
The deviation of the second rendering duration from the first rendering duration may be judged by their ratio. Specifically, since the actual time rendering needs is hard to predict, once the first rendering duration is determined a verification period can be derived from it. Each time the verification period elapses, the cloud determines a second duty ratio, the proportion of the initial file's total workload the cloud has completed, and a first duty ratio, the proportion of the typical frames' total workload the user side has completed, and then judges whether cloud rendering progress stays within the set deviation of user-side progress by checking whether the ratio of the second duty ratio to the first lies within the set deviation. The verification period is shorter than the first rendering duration, preferably one tenth to one quarter of it; clearly, the shorter the verification period, the sooner progress drift between the cloud and the user side is corrected.
Here the set deviation includes the ratio of the second rendering duration to the first rendering duration lying within a set ratio. Further, besides this ratio criterion, the set deviation may also include: the absolute value of the difference between the second rendering duration and the first rendering duration lying within a set difference.
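A minimal sketch of this verification-period check follows, assuming the duty ratios are measured in workload units; the band (0.9, 1.1) and the optional absolute-difference criterion are assumed example values, not figures from the patent.

```python
def within_set_deviation(cloud_done, cloud_total, user_done, user_total,
                         ratio_band=(0.9, 1.1), abs_diff=None, t1=None, t2=None):
    """Compare the cloud duty ratio (completed share of the initial file's
    workload) with the user-side duty ratio (completed share of the typical
    frames' workload); optionally also check |T2 - T1|."""
    second_ratio = cloud_done / cloud_total      # cloud duty ratio
    first_ratio = user_done / user_total         # user-side duty ratio
    ok = ratio_band[0] <= second_ratio / first_ratio <= ratio_band[1]
    if abs_diff is not None and t1 is not None and t2 is not None:
        ok = ok and abs(t2 - t1) <= abs_diff     # set absolute difference
    return ok

# Re-checked every verification period, e.g. one eighth of the first rendering
# duration (the text prefers one tenth to one quarter).
print(within_set_deviation(40.0, 100.0, 3.0, 7.0))   # 0.40 / 0.4286 ≈ 0.93 -> True
```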
Further, the virtual rendering group may include a plurality of rendering nodes with similar processing schedules.
Further, the user side can display user-side rendering progress information from the predicted first rendering duration of the typical frames and the time already processed, so the user can observe the rendering progress intuitively and infer the offline rendering progress from it.
In a second embodiment of the present invention, based on the first embodiment of the present invention, the step S80 includes:
step S81, the cloud end distributes monitoring units for each virtual rendering group, wherein the monitoring units of different virtual rendering groups are in communication connection;
step S82, the monitoring unit acquires the initial video frames in the corresponding initial file, arranges them into a processing sequence in descending order of predicted workload, and establishes a rendering result set, wherein the empty rendering result set holds initial elements arranged in frame order;
step S83, the monitoring unit detects the processing progress information of the virtual rendering group to judge whether each rendering node is idle, controls each idle rendering node to extract initial video frames from the processing sequence in turn for rendering, sends each rendering result file to the rendering result set, and replaces the initial element at the corresponding position according to the frame order of the initial video frame corresponding to that rendering result file.
The monitoring unit is configured to monitor the processing progress information of each rendering node in the virtual rendering group, and feed back the processing progress information to the cloud end and other monitoring units, that is, the processing progress information of each rendering node acquired by the cloud end in step S20 may be specifically acquired by the monitoring unit.
An idle node set is formed at the cloud. After all initial video frames of the initial file assigned to a virtual rendering group have been distributed, each monitoring unit places the rendering nodes no longer assigned any initial video frame into the idle node set as released idle rendering nodes, so the nodes in the idle node set can go on to join other virtual rendering groups.
In the present embodiment, the initial video frames are arranged into the processing sequence in descending order of predicted workload (the predicted workload can be determined from the predicted rendering time $t_i$ of each initial video frame). Whenever a rendering node in the virtual rendering group finishes its current task, it extracts the next initial video frame from the processing sequence in order, so the frames with large workloads (for example frames needing several hours of work) are rendered first. In the later stage of cloud offline rendering, the frames left over are therefore all frames with relatively small workloads; at that point, if the cloud's offline rendering progress fails to keep up with the user side's typical-frame rendering progress, these small frames can be distributed to rendering nodes outside the virtual rendering group for queuing or queue-insertion, which helps organize more rendering nodes to catch up. This is easy to see: for leftover frames making up the same proportion of the total offline rendering workload, if the large frames had been left to the end there would be few of them, they could only be spread across a few rendering nodes, and each of those nodes would still need a long time to render, so the catch-up effect would be weak. Processing the large frames first instead leaves many small frames at the end, which can be spread across many rendering nodes, each finishing quickly, so the catch-up effect is much more pronounced.
In this embodiment, establishing a rendering result set and replacing the initial elements at the corresponding positions according to the frame order of the corresponding initial video frames has the following benefit: whether a video frame is rendered by a node inside the virtual rendering group or by an additional node outside it, the rendering result files always replace the initial elements of the rendering result set in the original order, so the resulting files stay arranged in the frame order of the initial video frames, the results never get scrambled, and no sorting of the rendering results is needed.
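The following sketch shows one way to realize the largest-first processing sequence and the order-preserving result set, reusing the Frame record from the earlier sketch; the class and function names are assumptions for illustration.

```python
import heapq

class RenderResultSet:
    """Result set pre-filled with placeholder 'initial elements' in frame order."""
    def __init__(self, frame_ids):
        self.slots = {fid: None for fid in sorted(frame_ids)}
    def put(self, frame_id, result_file):
        self.slots[frame_id] = result_file       # replace the placeholder in place
    def ordered_results(self):
        return list(self.slots.values())         # already in frame order, no sort

def processing_sequence(frames):
    """Max-heap keyed on predicted workload: heaviest frames pop first."""
    heap = [(-f.workload, f.frame_id, f) for f in frames]
    heapq.heapify(heap)
    return heap

def pop_next(heap):
    # Any idle node -- inside the group, or an outside node inserted later --
    # takes the next heaviest frame; results land in the set keyed by frame id,
    # so ordering holds no matter which node rendered the frame.
    _, fid, frame = heapq.heappop(heap)
    return fid, frame
```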
Further, after offline rendering finishes and the cloud returns the rendering result set to the user side, the user side stores the rendering result files of its allocated typical frames into the rendering result set, replacing the initial elements corresponding to the typical frames. The user side thus obtains the ordered rendering result files with no extra effort and finally only needs to compress them with a video coding format (such as H.264) to produce a film playable on computers, televisions and other players.
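As a usage sketch of that final step, the ordered result files can be handed to a standard encoder; the directory layout and frame naming below are assumptions, and ffmpeg is assumed to be installed.

```python
import subprocess

def encode_film(png_dir="results", out="film.mp4", fps=24):
    # Once the ordered result files are written as frame_0001.png,
    # frame_0002.png, ... a single H.264 encode produces the playable film.
    subprocess.run(
        ["ffmpeg", "-framerate", str(fps), "-i", f"{png_dir}/frame_%04d.png",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", out],
        check=True)
```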
In a third embodiment of the present invention, based on the second embodiment of the present invention, the step S80 further includes:
step S84, the monitoring unit detects the rendering start time, the completion proportion of the processing sequence and the completion proportion of the user side, and detects the current processing progress of each rendering node in the virtual rendering group;
step S85, when the completion proportion of the processing sequence is smaller than that of the user side, and the deviation between the two completion proportions is larger than the set deviation, the monitoring unit sends the remaining initial video frames in the processing sequence to rendering nodes outside the virtual rendering group for queue-insertion processing, sends the resulting rendering result files to the rendering result set, and replaces the initial elements at the corresponding positions according to the frame order of the initial video frames corresponding to those rendering result files.
The total rendering time of the processing sequence, i.e. the second rendering duration, is calculated in the following specific manner:
the method comprises the steps of obtaining the size, the number of models, the rendering resolution, the minimum subdivision, the maximum subdivision, the noise threshold and the light cache subdivision of each initial video frame in an initial file;
the total work volume of the process sequence is predicted according to the following formula:
$t_i = T\left(k_s\frac{s_i}{s_0} + k_m\frac{m_i}{m_0} + k_r\frac{r_i}{r_0} + k_a\frac{a_i}{a_0} + k_b\frac{b_i}{b_0} + k_n\frac{n_0}{n_i} + k_g\frac{g_i}{g_0}\right), \qquad T_2 = \sum_{i=1}^{n} t_i$
where $t_i$ is the predicted rendering time of the i-th initial video frame in the initial file, $i = 1, 2, \dots, n$, and n is the number of initial video frames in the initial file; T is a preset standard time, $T > 0$; $s_i$ is the size of the i-th initial video frame and $s_0$ the standard size; $k_s$ is the size coefficient, a constant; $m_i$ is the number of models of the i-th initial video frame and $m_0$ the standard number of models; $k_m$ is the model coefficient, a constant taken from several preset model coefficients according to the model types of the initial video frame; $r_i$ is the rendering resolution of the i-th initial video frame and $r_0$ the standard rendering resolution; $k_r$ is the resolution coefficient, a constant; $a_i$ is the minimum subdivision of the i-th initial video frame and $a_0$ the standard minimum subdivision; $k_a$ is the minimum subdivision coefficient, a constant; $b_i$ is the maximum subdivision of the i-th initial video frame and $b_0$ the standard maximum subdivision; $k_b$ is the maximum subdivision coefficient, a constant; $n_i$ is the noise threshold of the i-th initial video frame and $n_0$ the standard noise threshold; $k_n$ is the noise threshold coefficient, a constant; $g_i$ is the light cache subdivision of the i-th initial video frame and $g_0$ the standard light cache subdivision; $k_g$ is the light cache subdivision coefficient, a constant; $T_2$ is the total predicted rendering duration of the initial file, namely the second rendering duration;
the proportion of completion of the processing sequence is determined with reference to the following:
$p_2 = Q / T_2$
where $p_2$ is the completion proportion of the processing sequence and Q is the rendering start duration (the time elapsed since rendering began).
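A minimal sketch of this prediction follows. The linear combination and the inverted noise-threshold term (a lower threshold means more work) follow the reconstructed formula above, which is itself an assumption consistent with the stated variable definitions; all record and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class FrameParams:
    size: float
    models: float
    resolution: float
    min_subdiv: float
    max_subdiv: float
    noise: float
    light_cache: float

def predict_time(f: FrameParams, std: FrameParams, k: FrameParams, T: float) -> float:
    """Predicted rendering time t_i; k holds the constant coefficients and
    std the standard reference values."""
    return T * (k.size * f.size / std.size
                + k.models * f.models / std.models
                + k.resolution * f.resolution / std.resolution
                + k.min_subdiv * f.min_subdiv / std.min_subdiv
                + k.max_subdiv * f.max_subdiv / std.max_subdiv
                + k.noise * std.noise / f.noise          # lower threshold -> more work
                + k.light_cache * f.light_cache / std.light_cache)

def completion(Q: float, times: list) -> float:
    return Q / sum(times)    # p2 = Q / T2 (and p1 = Q / T1 on the user side)
```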
In a fourth embodiment of the present invention, based on the first to third embodiments of the present invention, the step S30 includes:
step S31, acquiring the key frames among the initial video frames uploaded by the user side;
step S32, extracting the number of models to be rendered from each key frame to obtain the average number of models;
step S33, determining several typical frames from the key frames according to the average number of models and the processing progress information of each cloud rendering node.
In this embodiment, one or more typical frames may be sent to the client for processing.
The key frames among the frames uploaded by the user side may be determined according to scene and model: for example, initial video frames sharing the same scene and models may form a run of consecutive frames, and the first frame of that run serves as the key frame;
obtaining the number of models in each key frame makes it possible to determine the average number of models per frame across the initial video frames uploaded by the user;
according to the processing progress information of the cloud rendering nodes, when the cloud is busy, key frames whose model count exceeds the average are used as typical frames; more generally, frames with a larger or smaller model count than the average are selected as typical frames according to how busy the cloud is. When the cloud is busy the user side thus receives more complex, detailed typical frames to process and spends a longer rendering time; when the cloud is idle the user side receives simpler typical frames and spends a shorter rendering time, so the user side's rendering progress can be matched to the cloud's.
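One possible selection rule is sketched below; the busy_level scalar and the threshold mapping are assumptions used purely to illustrate the idea of shifting the model-count cutoff with cloud load.

```python
def select_typical_frames(key_frames, busy_level):
    """Pick typical frames relative to the average model count; the busier the
    cloud (busy_level in [0, 1], an assumed summary of node progress), the
    more models the chosen frames contain."""
    avg = sum(kf.models for kf in key_frames) / len(key_frames)
    threshold = avg * (0.5 + busy_level)     # assumed mapping, illustration only
    chosen = [kf for kf in key_frames if kf.models >= threshold]
    # Fall back to the simplest key frame if nothing clears the threshold.
    return chosen or [min(key_frames, key=lambda kf: kf.models)]
```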
In a fifth embodiment of the present invention, based on the fourth embodiment of the present invention, the step S40 includes:
step S41, the cloud allocates the typical frames to the user side;
step S42, the cloud performs rendering test on one typical frame at the user side to obtain a correction coefficient determined according to the computing resource condition of the user side;
step S43, obtaining the size of a typical frame;
step S44, obtaining the number of models of the typical frame;
Step S45, obtaining the typical frame quantity;
step S46, obtaining the resolution, minimum subdivision, maximum subdivision, noise threshold and light cache subdivision of rendering;
step S47, predicting the first rendering time of the typical frame at the user side according to the size, the model number, the typical frame number, the rendering resolution, the minimum subdivision, the maximum subdivision, the noise threshold, the light cache subdivision and the correction coefficient of the typical frame.
The total working amount of the user side is determined through the first rendering time length, and the specific mode is as follows:
$t_j = \frac{T}{Y}\left(k_s\frac{s_j}{s_0} + k_m\frac{m_j}{m_0} + k_r\frac{r_j}{r_0} + k_a\frac{a_j}{a_0} + k_b\frac{b_j}{b_0} + k_n\frac{n_0}{n_j} + k_g\frac{g_j}{g_0}\right), \qquad T_1 = \sum_{j=1}^{m} t_j$
where $t_j$ is the predicted rendering time of the j-th typical frame allocated to the user side, $j = 1, 2, \dots, m$, and m is the number of typical frames allocated to the user side; $T_1$ is the first rendering duration; $s_j$ is the size of the j-th typical frame allocated to the user side; $m_j$ its number of models; $r_j$ its rendering resolution; $a_j$ its minimum subdivision; $b_j$ its maximum subdivision; $n_j$ its noise threshold; $g_j$ its light cache subdivision; Y is a correction coefficient determined according to the computing resource condition of the user side, Y > 0;
The completion proportion of the user side is determined by referring to the following modes:
$p_1 = Q / T_1$
where $p_1$ is the completion proportion of the user side.
Here one typical frame undergoes a rendering test at the user side in order to evaluate the user side's computing resource condition: if the computing resources are good, the correction coefficient is higher, possibly greater than 1; if they are poor, it is smaller, possibly less than 1. Specifically, after the rendering test completes, the cloud controls the user side and the corresponding virtual rendering group to start rendering at the same time.
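A minimal sketch of deriving Y from the rendering test follows; render_typical_frame is an assumed callable supplied by the client, and the predicted/measured ratio is one plausible definition consistent with the description (a fast client yields Y > 1).

```python
import time

def correction_coefficient(render_typical_frame, predicted_time):
    """Time one typical frame on the user side; Y = predicted / measured, so
    good computing resources give Y > 1 and poor ones Y < 1 (Y > 0 always)."""
    start = time.monotonic()
    render_typical_frame()
    measured = time.monotonic() - start
    return predicted_time / measured
```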
Further, if the cloud detects that the user side's rendering test of the typical frame stalls for longer than a set duration, the typical frame can be assigned to the initial file and rendered by the rendering nodes in the virtual rendering group, and whichever typical-frame result is fed back first, from the user side or from the rendering nodes, is taken as the rendering result.
Further, the first rendering duration and the second rendering duration also need to satisfy:
$|T_2 - T_1| \le \Delta T$
where $\Delta T$ is the set time difference, a constant greater than 0.
Step S80, further includes:
the monitoring unit determines, from the completion proportion of the user side, the completion proportion of the processing sequence and the rendering start duration, whether the rendering time difference between the user side and the virtual rendering group exceeds the set time difference; if so, the monitoring unit sends the remaining initial video frames in the processing sequence to rendering nodes outside the virtual rendering group for queue-insertion processing, sends the resulting rendering result files to the rendering result set, and replaces the initial elements at the corresponding positions according to the frame order of the corresponding initial video frames.
The monitoring unit determines whether the rendering time difference between the user side and the virtual rendering group exceeds the set time difference from the completion proportion of the user side, the completion proportion of the processing sequence and the rendering start duration, in the following manner:
when the set time difference is exceeded, the processing progress of the current virtual rendering group clearly lags behind, and the monitoring unit must help raise the rendering progress by sending rendering tasks to rendering nodes outside the virtual rendering group.
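The sketch below illustrates this rebalancing step, building on pop_next from the earlier sketch. The lag estimate extrapolated from $p_1$, $p_2$ and Q, and the node interfaces (queue_length, insert_at_front), are assumptions for illustration only.

```python
def rebalance(p1, p2, Q, set_time_diff, sequence_heap, outside_nodes, results):
    """Monitoring-unit sketch: if the group lags the user side by more than the
    set time difference, queue-insert the remaining (small) frames on nodes
    outside the virtual rendering group."""
    lag = Q / p2 - Q / p1 if p1 > 0 and p2 > 0 else 0.0   # extrapolated T2 - T1
    if lag <= set_time_diff:
        return
    while sequence_heap:
        fid, frame = pop_next(sequence_heap)
        node = min(outside_nodes, key=lambda n: n.queue_length())
        # Queue-insertion: the frame jumps ahead of the node's queued work, and
        # the finished file lands in the result set keyed by frame id.
        node.insert_at_front(frame, on_done=lambda res, fid=fid: results.put(fid, res))
```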
According to the first to fifth embodiments of the present invention, in a sixth embodiment of the present invention, the step S70 includes:
step S71, the cloud sorts all rendering nodes by remaining rendering time from short to long according to the processing progress information of each rendering node;
step S72, according to the first rendering duration and the ordering of the rendering nodes, selecting nodes from the front of the ordering, among the rendering nodes whose remaining rendering time is shorter than the set duration, to create the virtual rendering group.
This embodiment selects idle or relatively idle rendering nodes to create the virtual rendering group, avoiding the overlong user waits that would result if nodes selected into the group were still stuck on previously assigned rendering tasks.
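For illustration, steps S71 and S72 might look like the sketch below; the remaining_time attribute and the sizing rule tying group size to the first rendering duration are assumptions.

```python
def create_virtual_group(nodes, t1, total_workload, cutoff):
    """Sort nodes by remaining rendering time, shortest first, keep those under
    the set cutoff, and take enough from the front to finish total_workload in
    roughly t1 (the first rendering duration)."""
    ready = sorted((n for n in nodes if n.remaining_time < cutoff),
                   key=lambda n: n.remaining_time)
    needed = max(1, round(total_workload / t1))   # assumed sizing rule
    return ready[:needed]
```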
In a seventh embodiment of the present invention, based on the second or third embodiment of the present invention, the method further includes:
step S90, the cloud end is in communication connection with each monitoring unit through a scheduling unit;
step S100, the scheduling unit compares the processing progress of the cloud with that of the user side, and monitors whether each virtual rendering group has released idle rendering nodes, wherein a released idle rendering node is a node released after all initial video frames of the initial file corresponding to its virtual rendering group have been distributed;
in step S110, when the processing progress of the cloud end is smaller than the processing progress of the user end, the scheduling unit supplements the released idle rendering nodes to the virtual rendering group.
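A sketch of steps S90 to S110 follows; the group and progress interfaces are assumed for illustration.

```python
def schedule(groups, user_progress):
    """Scheduling-unit sketch: gather nodes released by groups whose initial
    files are fully distributed, and lend them to groups whose cloud progress
    trails the corresponding user side."""
    idle_pool = [n for g in groups for n in g.take_released_nodes()]
    lagging = sorted((g for g in groups if g.progress() < user_progress(g)),
                     key=lambda g: g.progress())   # worst-lagging group first
    for g in lagging:
        if not idle_pool:
            break
        g.add_node(idle_pool.pop())
```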
In addition, in order to achieve the above object, the present invention also proposes a massively parallel processing system for offline rendering, for executing the method; the system comprises a cloud and a plurality of rendering nodes, wherein the cloud is in communication connection with each rendering node.
The foregoing description covers only preferred embodiments of the present invention and is not intended to limit its scope; equivalent structural changes made using the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (10)

1. The large-scale parallel processing method for offline rendering is characterized in that a cloud end is in communication connection with a plurality of rendering nodes, and the method comprises the following steps:
acquiring an initial video frame to be rendered, which is uploaded by a user side, and acquiring rendering parameters set by the user;
acquiring current processing progress information of each rendering node;
extracting a typical frame from each initial video frame;
distributing the typical frames to a user side, and predicting first rendering time of the typical frames at the user side according to computing resources and rendering parameters of the user side;
determining cloud rendering workload according to other initial video frames and rendering parameters except the typical frames;
removing the typical frames from the initial video frames uploaded by the user side, and adding the remaining frames to a first waiting sequence as an initial file, wherein the initial file comprises every initial video frame except the typical frames;
creating a virtual rendering group corresponding to the initial file according to the processing progress information of each rendering node and the first rendering time of the typical frame at the user side;
the initial file in the first waiting sequence is distributed to a second waiting sequence corresponding to the virtual rendering group, so that each rendering node in the virtual rendering group is controlled to render each initial video frame in the second waiting sequence, and the deviation of the second rendering time length of the cloud end completing the rendering and the first rendering time length is within the set deviation.
2. The method for performing off-line rendering on a large scale parallel processing according to claim 1, wherein the step of distributing the initial file in the first waiting sequence to the second waiting sequence corresponding to the virtual rendering group to control each rendering node in the virtual rendering group to render each initial video frame in the second waiting sequence, so that the deviation between the second rendering time length when the cloud end finishes rendering and the first rendering time length is within a set deviation comprises the following steps:
the cloud end distributes monitoring units for each virtual rendering group, wherein the monitoring units of different virtual rendering groups are in communication connection;
the monitoring unit acquires the initial video frames in the corresponding initial file, arranges them into a processing sequence in descending order of predicted workload, and establishes a rendering result set, wherein the empty rendering result set holds initial elements arranged in frame order;
the monitoring unit detects the processing progress information of the virtual rendering group to judge whether each rendering node is idle, controls each idle rendering node to extract initial video frames from the processing sequence in turn for rendering, sends each rendering result file to the rendering result set, and replaces the initial element at the corresponding position according to the frame order of the initial video frame corresponding to that rendering result file.
3. The method according to claim 2, wherein the step of distributing the initial file in the first waiting sequence to the second waiting sequence corresponding to the virtual rendering group to control each rendering node in the virtual rendering group to render each initial video frame in the second waiting sequence, so that the deviation between the second rendering time length when the cloud finishes rendering and the first rendering time length is within the set deviation, further comprises:
the monitoring unit detects the rendering start time, the completion proportion of the processing sequence and the completion proportion of the user side, and detects the current processing progress of each rendering node in the virtual rendering group;
when the completion proportion of the processing sequence is smaller than that of the user side, and the deviation between the two completion proportions is larger than the set deviation, the monitoring unit sends the remaining initial video frames in the processing sequence to rendering nodes outside the virtual rendering group for queue-insertion processing, sends the resulting rendering result files to the rendering result set, and replaces the initial elements at the corresponding positions according to the frame order of the initial video frames corresponding to those rendering result files.
4. The method of off-line rendering massively parallel processing as claimed in claim 1, wherein said step of extracting representative frames from each initial video frame includes:
acquiring a key frame in an initial video frame uploaded by a user side;
extracting the number of models to be rendered from each key frame to obtain the average number of models;
and determining a plurality of typical frames from each key frame according to the average model number and the processing progress information of each rendering node of the cloud.
5. The method for performing off-line rendering on a large scale parallel processing according to claim 4, wherein the step of distributing the typical frame to the user terminal and predicting the first rendering time of the typical frame at the user terminal according to the computing resource and the rendering parameter of the user terminal comprises:
the cloud allocates the typical frames to the user side;
the cloud performs rendering test on one typical frame at the user side to obtain a correction coefficient determined according to the computing resource condition of the user side;
acquiring the size of a typical frame;
obtaining the number of models of a typical frame;
acquiring a typical frame number;
obtaining the resolution, minimum subdivision, maximum subdivision, noise threshold and light cache subdivision of rendering;
Predicting the first rendering time of the typical frame at the user side according to the size of the typical frame, the number of models, the number of the typical frames, the rendering resolution, the minimum subdivision, the maximum subdivision, the noise threshold, the light cache subdivision and the correction coefficient.
6. The massively parallel processing method for offline rendering according to claim 1, wherein the step of creating the virtual rendering group corresponding to the initial file according to the processing progress information of each rendering node and the first rendering duration of the typical frames at the user side comprises:
the cloud sorts all rendering nodes by remaining rendering time, from shortest to longest, according to the processing progress information of each rendering node;
and, according to the first rendering duration and the ordering, selecting the foremost rendering nodes in the ordering from the rendering nodes whose remaining rendering time is shorter than a set duration, to create the virtual rendering group.
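A minimal sketch of this grouping; the group-sizing rule (per_node_budget) is an assumption, since the claim only fixes the ordering and the set-duration filter:

def create_virtual_group(nodes, t1, set_duration, per_node_budget):
    # nodes: list of (node_id, remaining_seconds) from the progress information.
    ordered = sorted(nodes, key=lambda n: n[1])            # shortest remaining first
    eligible = [n for n in ordered if n[1] < set_duration]
    needed = max(1, round(t1 / per_node_budget))           # assumed sizing rule
    return [node_id for node_id, _ in eligible[:needed]]

nodes = [("n1", 30), ("n2", 5), ("n3", 90), ("n4", 12)]
print(create_virtual_group(nodes, t1=600, set_duration=60, per_node_budget=300))   # ['n2', 'n4']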
7. The massively parallel processing method for offline rendering according to claim 2, further comprising:
the cloud is communicatively connected with each monitoring unit through a scheduling unit;
the scheduling unit compares the processing progress of the cloud with the processing progress of the user side, and monitors whether each virtual rendering group has released idle rendering nodes, a released idle rendering node being a node freed after the initial file corresponding to its virtual rendering group has been fully distributed;
and when the processing progress of the cloud is smaller than that of the user side, the scheduling unit supplements the released idle rendering nodes to the lagging virtual rendering group.
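Sketched minimally, with assumed data shapes, the scheduling unit moves freed nodes into any group whose cloud progress trails its user side:

def schedule(groups, released_pool):
    for group in groups:
        if group["p_cloud"] < group["p_user"] and released_pool:
            group["nodes"].append(released_pool.pop())   # supplement a freed node

groups = [{"name": "g1", "p_cloud": 0.4, "p_user": 0.7, "nodes": ["n1"]}]
pool = ["n9"]
schedule(groups, pool)
print(groups[0]["nodes"], pool)   # ['n1', 'n9'] []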
8. The massively parallel processing method for offline rendering according to claim 3, wherein the total rendering time of the processing sequence is calculated in the following manner:
obtaining the size, number of models, rendering resolution, minimum subdivision, maximum subdivision, noise threshold and light cache subdivision of each initial video frame in the initial file;
the total workload of the processing sequence is predicted according to the following formula:

t_i = t · α·(S_i/S_0) · β·(M_i/M_0) · γ·(R_i/R_0) · δ·(B_i/B_0) · ε·(C_i/C_0) · ζ·(E_i/E_0) · η·(L_i/L_0), 1 ≤ i ≤ N

T_2 = Σ_{i=1}^{N} t_i

wherein t_i is the predicted rendering time of the i-th initial video frame in the initial file, and N is the number of initial video frames in the initial file; t is a preset standard time, t > 0; S_i is the size of the i-th initial video frame and S_0 is the standard size; α is the size coefficient, a constant; M_i is the number of models of the i-th initial video frame and M_0 is the standard model number; β is the model coefficient, a constant whose value is taken from a plurality of preset model coefficients according to the model types of the initial video frame; R_i is the rendering resolution of the i-th initial video frame and R_0 is the standard rendering resolution; γ is the resolution coefficient, a constant; B_i is the minimum subdivision of the i-th initial video frame and B_0 is the standard minimum subdivision; δ is the minimum subdivision coefficient, a constant; C_i is the maximum subdivision of the i-th initial video frame and C_0 is the standard maximum subdivision; ε is the maximum subdivision coefficient, a constant; E_i is the noise threshold of the i-th initial video frame and E_0 is the standard noise threshold; ζ is the noise threshold coefficient, a constant; L_i is the light cache subdivision of the i-th initial video frame and L_0 is the standard light cache subdivision; η is the light cache subdivision coefficient, a constant; T_2 is the total predicted rendering duration of the initial file, i.e. the second rendering duration;
the completion proportion of the processing sequence is determined as follows:

p_2 = Q / T_2

wherein p_2 is the completion proportion of the processing sequence and Q is the duration elapsed since the rendering start time.
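In code, with the per-frame times t_i taken as given (they would come from a computation like the claim 5 sketch above):

def second_rendering_duration(per_frame_times):
    # T2: sum of the predicted per-frame rendering times over the initial file.
    return sum(per_frame_times)

def completion_proportion(q_elapsed, total_duration):
    # p = Q / T, capped at 1.0 once the predicted duration has fully elapsed.
    return min(1.0, q_elapsed / total_duration)

t2 = second_rendering_duration([4.0, 6.5, 5.5, 8.0])   # example values, seconds
print(t2, completion_proportion(12.0, t2))              # 24.0 0.5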
9. The massively parallel processing method for offline rendering according to claim 8, wherein the total workload of the user side, i.e. the first rendering duration, is determined in the following manner:

t_j = Y · t · α·(S_j/S_0) · β·(M_j/M_0) · γ·(R_j/R_0) · δ·(B_j/B_0) · ε·(C_j/C_0) · ζ·(E_j/E_0) · η·(L_j/L_0), 1 ≤ j ≤ m

T_1 = Σ_{j=1}^{m} t_j

wherein t_j is the predicted rendering time of the j-th typical frame assigned to the user side, and m is the number of typical frames assigned to the user side; T_1 is the first rendering duration; S_j is the size of the j-th typical frame assigned to the user side; M_j is its number of models; R_j is its rendering resolution; B_j is its minimum subdivision; C_j is its maximum subdivision; E_j is its noise threshold; L_j is its light cache subdivision; Y is the correction coefficient determined according to the computing resource condition of the user side, Y > 0;
the completion proportion of the user side is determined as follows:

p_1 = Q / T_1

wherein p_1 is the completion proportion of the user side.
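And the matching user-side computation, where the correction coefficient Y scales every predicted typical-frame time (Y and the example values are illustrative assumptions):

def first_rendering_duration(per_frame_times, y_correction):
    # T1: Y-corrected sum of the predicted typical-frame times (Y > 0).
    return y_correction * sum(per_frame_times)

t1 = first_rendering_duration([4.0, 6.5, 5.5], y_correction=1.2)
p1 = min(1.0, 8.0 / t1)   # Q = 8 seconds since the rendering start time
print(round(t1, 2), round(p1, 3))   # 19.2 0.417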
10. A massively parallel processing system for offline rendering, configured to perform the method of any one of claims 1 to 9, the system comprising a cloud and a plurality of rendering nodes, the cloud being communicatively connected to each rendering node.
CN202310819601.4A 2023-07-06 2023-07-06 Large-scale parallel processing method and system for offline rendering Active CN116563085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310819601.4A CN116563085B (en) 2023-07-06 2023-07-06 Large-scale parallel processing method and system for offline rendering

Publications (2)

Publication Number Publication Date
CN116563085A CN116563085A (en) 2023-08-08
CN116563085B true CN116563085B (en) 2023-09-01

Family

ID=87488223

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116761018B (en) * 2023-08-18 2023-10-17 湖南马栏山视频先进技术研究院有限公司 Real-time rendering system based on cloud platform
CN116828215B (en) * 2023-08-30 2023-11-14 湖南马栏山视频先进技术研究院有限公司 Video rendering method and system for reducing local computing power load
CN116866621B (en) * 2023-09-05 2023-11-03 湖南马栏山视频先进技术研究院有限公司 Cloud synchronization method and system for video real-time rendering

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11232532B2 (en) * 2018-05-30 2022-01-25 Sony Interactive Entertainment LLC Multi-server cloud virtual reality (VR) streaming
US11100698B2 (en) * 2019-06-28 2021-08-24 Ati Technologies Ulc Real-time GPU rendering with performance guaranteed power management

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017088484A1 (en) * 2015-11-24 2017-06-01 成都赫尔墨斯科技有限公司 Cloud computing based real-time off-screen rendering method, apparatus and system
CN111640173A (en) * 2020-05-09 2020-09-08 杭州群核信息技术有限公司 Cloud rendering method and system for home-based roaming animation based on specific path
CN112085646A (en) * 2020-09-09 2020-12-15 江苏普旭软件信息技术有限公司 Distant view and close view distributed parallel rendering method and system of aviation simulator
CN114494553A (en) * 2022-01-21 2022-05-13 杭州游聚信息技术有限公司 Real-time rendering method, system and equipment based on rendering time estimation and LOD selection
CN114996619A (en) * 2022-06-27 2022-09-02 平安科技(深圳)有限公司 Page display method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Quantitative Verification of Cloud Rendering Task Scheduling Based on Probabilistic Model Checking; Gao Honghao; Miao Huaikou; Liu Haoyu; Xu Huahu; Yu Zhiruo; Journal of Software (Issue 06); full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant