CN114004979A - High-cost-performance data storage method and system in cloud rendering - Google Patents

High-cost-performance data storage method and system in cloud rendering

Info

Publication number
CN114004979A
CN114004979A (application number CN202111304336.3A)
Authority
CN
China
Prior art keywords
data
space
storage
cloud rendering
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111304336.3A
Other languages
Chinese (zh)
Other versions
CN114004979B (en)
Inventor
梅向东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Cudatec Co ltd
Original Assignee
Jiangsu Cudatec Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Cudatec Co ltd filed Critical Jiangsu Cudatec Co ltd
Priority to CN202111304336.3A priority Critical patent/CN114004979B/en
Publication of CN114004979A publication Critical patent/CN114004979A/en
Application granted granted Critical
Publication of CN114004979B publication Critical patent/CN114004979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608Saving storage space on storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647Migration mechanisms
    • G06F3/0649Lifecycle management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0653Monitoring storage devices or systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a cost-effective data storage method and system for cloud rendering. The method comprises: obtaining cloud rendering data to be stored; setting a planned resource usage frequency based on the application scenario of the data; extracting characteristic elements and a basic turnover heat α of the data; setting a theoretical resource access frequency according to the characteristic elements; acquiring the actual resource access frequency; calculating a data access heat F; and comparing F with α: if F > α, the storage space is switched from a larger space to a smaller space, otherwise from a smaller space to a larger space. For different business rhythms and data access frequencies, the invention takes a storage mode that achieves three-frequency resonance as the optimal resource usage, stores cold and hot data separately, and adjusts the storage space of hot, cold, and ice data in real time, thereby achieving the technical effects of releasing local storage space and reducing cost.

Description

High-cost-performance data storage method and system in cloud rendering
Technical Field
The invention relates to a high-cost-performance data storage method and system in cloud rendering.
Background
In digital content production such as animated films, movie special effects, and architectural visualization, the cloud desktop can provide strong GPU computing power and elastic storage resources at the infrastructure layer for the late-stage rendering process. In terms of project cost management, the factors influencing cloud rendering cost mainly include computing power and storage resources, of which storage cost accounts for more than half; control and management of cloud storage cost are therefore a key problem.
In the industry, stored data are divided into three categories according to business rhythm and data access heat: hot storage data, cold storage data, and ice storage data, each with different characteristics. In terms of access frequency, hot storage is accessed most frequently, cold storage second, and ice storage almost never; in terms of storage period, ice storage is kept longest, cold storage second, and hot storage shortest; in terms of storage space, ice storage data occupies the most space. The data held in hot and cold storage overlap, so flexible scheduling and flow between them can be realized.
In existing data storage for cloud desktop rendering, a distributed storage system is adopted: multiple servers share the storage load through a horizontally scaled, multi-level, multi-node standardized storage space, and a location server is used to locate storage information, so that the cluster can be expanded as the data volume grows rapidly. However, this storage mode applies a single storage strategy to different business rhythms, so the storage cost is difficult to control effectively, cluster resources are wasted, and the storage cost increases.
In the process of implementing the technical solution of the invention in the embodiments of the present application, the inventors found that the above technology has at least the following technical problem:
the storage mode applies a single storage strategy to different business rhythms, the storage cost is difficult to control effectively, cluster resources are wasted, and the storage cost increases.
Moreover, for data with different storage characteristics, different storage schemes produce very different storage costs, and the difference is large, so an unsuitable scheme can multiply the storage cost.
Therefore, a data storage method with high cost performance in cloud rendering is needed.
Disclosure of Invention
The embodiments of the present application provide a cost-effective data storage method in cloud rendering, which solves the technical problem of high storage cost in the prior art and achieves the technical effects of releasing local storage space and reducing cost.
In view of the above, the present invention has been developed to provide a solution that solves, or at least partially solves, the above problems.
In a first aspect, an embodiment of the present application provides a cost-effective data storage method in cloud rendering, where the method includes:
acquiring cloud rendering data to be stored;
setting a planned resource usage frequency ω1 based on the application scenario of the data;
extracting characteristic elements and a basic turnover heat α of the data;
setting a theoretical resource access frequency omega according to the characteristic elements2
Acquiring an actual resource access frequency omega;
calculating the data access heat F,
(the expression for F is given as an equation image, Figure BDA0003339582930000021, in the original publication)
where K is the frequency of the data, ω′1 = ω1 mod Ω, and ω′2 = ω2 mod Ω;
comparing F with α: if F > α, the storage space is switched from a larger space to a smaller space; otherwise, it is switched from a smaller space to a larger space.
Further, wherein the storage space comprises:
an ice space, a cold space, and a hot space; the spatial size relationship of the three spaces is as follows: ice space > cold space > hot space.
Further, the obtaining cloud rendering data to be stored includes:
classifying the cloud rendering data to be stored into five types of data: model data, trajectory data, evolution data, combined data, and interface data; K is the frequency of the classified data, K = k1·k2·k3·k4·k5; the extracting of the characteristic elements of the data includes extracting and storing the associated characteristic elements according to a preset relevance.
Further, wherein the calculating the data access heat F includes:
judging, according to the calculated data access heat F, whether the criterion of three-frequency resonance is met; if not, performing iterative optimization of the characteristic elements and resetting the planned resource usage frequency ω1 and the theoretical resource access frequency ω2 until the criterion of three-frequency resonance is met;
the criterion of three-frequency resonance is: 0.9 ≤ K/F ≤ 1.
Further, after comparing F with α and switching the storage space accordingly (from the larger space to the smaller space if F > α, otherwise from the smaller space to the larger space), the method further includes:
analyzing the user according to a mode-priority principle, and optimizing the storage scheme according to the user's requirements on storage price and performance.
Further, wherein,
when the data is stored in the ice space, the data is classified and compressed.
Further, the cold space is divided into a refrigerating space and a cold storage space, and the space size relationship between them is: refrigerating space > cold storage space.
On the other hand, the application also provides a cost-effective data storage system in cloud rendering, wherein the system comprises:
a first obtaining unit, configured to obtain cloud rendering data to be stored;
a first setting unit for setting a planned resource usage frequency ω based on an application scenario of data1
A first extraction unit for extracting a feature element and a basic turning heat α of data;
a second setting unit for setting the theoretical resource access frequency ω based on the characteristic element2
A second obtaining unit, configured to obtain an actual resource access frequency Ω;
a first calculation unit for calculating a data access heat F,
Figure BDA0003339582930000031
wherein K is the frequency of the data, ω'1=ω1modΩ,ω′2=ω2modΩ;
A first comparing unit for comparing F with α;
and the first execution unit is used for switching the storage space from the large space to the small space if F is larger than alpha after the judgment of the first comparison unit, and otherwise, switching the storage space from the small space to the large space.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a bus, a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor, where the transceiver, the memory, and the processor are connected via the bus, and when the computer program is executed by the processor, the steps in the method for storing cost-effective data in cloud rendering described in any one of the above are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the foregoing cost-effective data storage method in cloud rendering.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the invention provides a high-cost-performance data storage method in cloud rendering for a cloud desktop, sets a storage mode of achieving three-frequency resonance as an optimal resource usage for different business rhythms and data access frequencies, separately stores cold and hot data, and adjusts a storage space for the cold and hot ice data in real time, so that the technical effects of releasing a local storage space and reducing the cost are achieved.
The foregoing description is only an overview of the technical solutions of the present application, and the present application can be implemented according to the content of the description in order to make the technical means of the present application more clearly understood, and the following detailed description of the present application is given in order to make the above and other objects, features, and advantages of the present application more clearly understandable.
Drawings
Fig. 1 is a schematic flow chart of a cost-effective data storage method in cloud rendering according to an embodiment of the present application;
fig. 2 is a schematic diagram of classification data in the embodiment of the present application.
Fig. 3 is a schematic diagram illustrating monitoring and obtaining an actual resource access frequency in an embodiment of the present application.
FIG. 4 is a diagram illustrating various data transfers in a cold and hot ice space according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a data storage control system with high cost performance in cloud rendering according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device for executing a method of controlling output data according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a first setting unit 12, a first extracting unit 13, a second setting unit 14, a second obtaining unit 15, a first calculating unit 16, a first comparing unit 17, a bus 1110, a processor 1120, a transceiver 1130, a bus interface 1140, a memory 1150 and a user interface 1160.
Detailed Description
In the description of the embodiments of the present invention, it should be apparent to those skilled in the art that the embodiments of the present invention can be embodied as methods, apparatuses, electronic devices, and computer-readable storage media. Thus, embodiments of the invention may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), a combination of hardware and software. Furthermore, in some embodiments, embodiments of the invention may also be embodied in the form of a computer program product in one or more computer-readable storage media having computer program code embodied in the medium.
The computer-readable storage media described above may take any combination of one or more computer-readable storage media. The computer-readable storage medium includes: an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, a flash memory, an optical fiber, a compact disc read-only memory, an optical storage device, a magnetic storage device, or any combination thereof. In embodiments of the invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, device, or apparatus.
Summary of the application
The method, the apparatus, and the electronic device of the embodiments of the present application are described below with reference to flowcharts and/or block diagrams.
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions. These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner. Thus, the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The embodiments of the present invention will be described below with reference to the drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a cost-effective data storage method in cloud rendering, where the method includes:
step S100, cloud rendering data needing to be stored are obtained;
step S200, setting a planned resource usage frequency ω1 based on the application scenario of the data;
step S300, extracting characteristic elements and a basic turnover heat α of the data;
step S400, setting a theoretical resource access frequency ω2 according to the characteristic elements;
step S500, obtaining an actual resource access frequency Ω;
step S600, calculating the data access heat F,
(the expression for F is given as an equation image, Figure BDA0003339582930000061, in the original publication)
where K is the frequency of the data, ω′1 = ω1 mod Ω, and ω′2 = ω2 mod Ω;
step S700, comparing F with α: if F > α, the storage space is switched from a larger space to a smaller space; otherwise, it is switched from a smaller space to a larger space.
In step S100, after the cloud rendering data to be stored is obtained, it is classified into five types of data as shown in fig. 2: model data, trajectory data, evolution data, combined data, and interface data; K is the frequency of the classified data, K = k1·k2·k3·k4·k5. In the following steps S200 to S700, the data referred to are the classified data. Specifically, the model data is the base rendering data set; the trajectory data is the set of external assets and local data used in trajectory scene rendering; the evolution data is the data set obtained after transforming information such as shading and lighting; the combined data is the data set obtained by superposing the trajectory data and the evolution data; the interface data is the interface rendering data set that interfaces with the application.
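As a minimal illustrative sketch of this classification step (the category names, field names, and the product form K = k1·k2·k3·k4·k5 follow the description above; all identifiers are hypothetical and not part of the patent):

```python
from dataclasses import dataclass
from enum import Enum


class DataCategory(Enum):
    """The five data types of step S100."""
    MODEL = "model"            # base rendering data set
    TRAJECTORY = "trajectory"  # external assets and local data used in trajectory scenes
    EVOLUTION = "evolution"    # data set after shading/lighting transformations
    COMBINED = "combined"      # trajectory data superposed with evolution data
    INTERFACE = "interface"    # interface rendering data exchanged with the application


@dataclass
class ClassifiedData:
    category: DataCategory
    item_id: str
    frequency: float  # per-category frequency k_i


def combined_frequency(k_by_category: dict) -> float:
    """K = k1 * k2 * k3 * k4 * k5: product of the five per-category frequencies."""
    k = 1.0
    for category in DataCategory:
        k *= k_by_category[category]
    return k
```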
In step S200, the application scenarios of the classified data (e.g., animation, film, special effects, and architectural visualization) are analyzed, the user's storage requirements and cost-performance requirements are comprehensively considered (i.e., which files need to be stored, which files have storage requirements, and the fee the user is willing to pay), and the planned resource usage frequency ω1 is set for each business schedule. In general, planning resource usage leans toward global planning from the phase (temporal) perspective of the work, such as how much space each of the modeling/tiling/rendering phases is given and how storage is planned.
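A minimal sketch of how such a per-phase plan could be recorded; the phase names and numbers below are assumptions chosen purely for illustration:

```python
# Planned resource usage per business phase (hypothetical values): a space budget
# and the planned resource usage frequency omega1 (accesses per unit time).
PLANNED_USAGE = {
    "modeling":  {"space_gb": 200,  "omega1": 0.2},
    "tiling":    {"space_gb": 500,  "omega1": 0.5},
    "rendering": {"space_gb": 1500, "omega1": 2.0},
}


def planned_omega1(phase: str) -> float:
    """Look up the planned resource usage frequency omega1 for the current phase."""
    return PLANNED_USAGE[phase]["omega1"]
```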
In step S300, a traceability-hypothesis approach is adopted: storage strategies and solutions are made by forming hypotheses and making data-based decisions, the characteristic elements of each classified data type are extracted, and a basic turnover heat α of the classified data is obtained by analysis and used as the basic threshold for switching storage spaces. Extracting the characteristic elements of the data includes extracting and storing the associated characteristic elements according to the preset relevance: since each classified data type has many characteristic elements, extracting and storing all of them would occupy a lot of resource space, so the characteristic elements that need to be extracted and stored, such as storage resources and resource types, can be preset, and only these associated characteristic elements are extracted and stored. The basic turnover heat α is formed step by step: first a fixed base value is set, then turnover test analysis is carried out, and the value is continuously optimized to obtain a stable turnover heat with a low switching cost.
In step S400, a theoretical resource access frequency ω2 is set based on the characteristic elements. Specifically, the theoretical resource access frequency ω2 refines the planned resource usage frequency ω1, stating more specifically how resources are allocated to each link.
In step S500, during the rendering workflow, information such as the storage object, storage period, storage method, benefit correlation, and entity is monitored as shown in fig. 3 and recorded in real time, so as to obtain the actual resource access frequency Ω. Specifically, the storage object includes the classified data and the characteristic elements of each classified data type; the storage period indicates whether the data is stored short-term, medium-term, or long-term; the storage method includes a qualitative aspect (the type of data stored) and a quantitative aspect (the storage space required); benefit correlation refers to whether the data is internal or external; the entity indicates whether it is the user's data or the system's own data.
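The monitored information could be captured as simple access records from which Ω is derived; the sketch below assumes that Ω is simply the number of accesses observed per unit time, and the field names are illustrative, following fig. 3:

```python
import time
from dataclasses import dataclass, field


@dataclass
class AccessRecord:
    """One monitored storage access event."""
    storage_object: str     # classified data item or characteristic element
    storage_period: str     # "short", "medium", or "long"
    storage_method: str     # qualitative (data type) and/or quantitative (space needed)
    benefit_internal: bool  # True for internal data, False for external data
    entity: str             # "user" or "system"
    timestamp: float = field(default_factory=time.time)


def actual_access_frequency(records: list, window_seconds: float) -> float:
    """Omega: monitored accesses per unit time within the observation window."""
    if window_seconds <= 0:
        return 0.0
    now = time.time()
    recent = [r for r in records if now - r.timestamp <= window_seconds]
    return len(recent) / window_seconds
```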
In step S600, a storage model is constructed from the characteristic elements, the monitored actual resource access frequency is compared with the planned resource usage frequency and with the theoretical resource access frequency respectively, and the data access heat F is calculated,
(the expression for F is given as an equation image, Figure BDA0003339582930000071, in the original publication)
where K is the frequency of the data, ω′1 = ω1 mod Ω, and ω′2 = ω2 mod Ω. According to the calculation formula, the closer the actual resource access frequency is to the planned resource usage frequency and to the theoretical resource access frequency, the smaller the remainders are and the closer F is to K. Whether the criterion of three-frequency resonance is met is judged according to the calculated data access heat F; the criterion of three-frequency resonance is: 0.9 ≤ K/F ≤ 1. If the criterion is not met, iterative optimization of the characteristic elements is carried out, and the planned resource usage frequency ω1 and the theoretical resource access frequency ω2 are reset, until the criterion of three-frequency resonance is met.
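The exact expression for F appears only as an equation image in the published text. The sketch below therefore uses one plausible reconstruction, F = K + ω′1 + ω′2 with ω′1 = ω1 mod Ω and ω′2 = ω2 mod Ω, chosen only because it reproduces the stated behaviour (smaller remainders bring F closer to K, and 0.9 ≤ K/F ≤ 1 holds when they are small); it is an assumption, not the patented formula.

```python
def data_access_heat(k: float, omega1: float, omega2: float, big_omega: float) -> float:
    """Illustrative reconstruction of F (the true formula is an image in the patent).

    F approaches K as both remainders approach zero, matching the description above."""
    if big_omega <= 0:
        raise ValueError("actual resource access frequency must be positive")
    omega1_rem = omega1 % big_omega  # omega'1 = omega1 mod Omega
    omega2_rem = omega2 % big_omega  # omega'2 = omega2 mod Omega
    return k + omega1_rem + omega2_rem


def is_three_frequency_resonance(k: float, f: float) -> bool:
    """Three-frequency resonance criterion from the text: 0.9 <= K/F <= 1."""
    return f > 0 and 0.9 <= k / f <= 1.0
```

Under this reconstruction, the iterative loop of step S600 would simply recompute F after each adjustment of ω1 and ω2 and stop once is_three_frequency_resonance(K, F) returns True.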
In step S700, F and α are compared: if F > α, the storage space is switched from a larger space to a smaller space; otherwise, it is switched from a smaller space to a larger space. Specifically, when the access heat changes from low to high, i.e. when the access heat F of the classified data exceeds the basic turnover heat α (F > α), the storage space of the classified data is switched from Li to Lj (where i > j); conversely, when the access heat changes from high to low, F < α, and the storage space of the classified data is switched from Li to Lj (where i < j), so as to reduce the storage cost caused by jitter. The storage space comprises an ice space, a cold space, and a hot space, with the spatial size relationship: ice space > cold space > hot space. The cold space is further divided into a refrigerating space and a cold storage space, with the refrigerating space larger than the cold storage space. Fig. 4 shows the waking up of the various stages and the various data as they jump between the ice, cold, and hot spaces: from left to right in the figure, the storage space changes from large to small, the storage time from long to short, and the storage cost from small to large. Further, when data is stored in the ice space, it is classified and compressed to reduce the storage cost further. After data in the cold storage space reaches a preset standard, it is moved into the refrigerating space; when it reaches a further standard, it is compressed into the ice space.
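A minimal sketch of the switching rule, under the assumption that a switch moves one tier at a time (Li to Lj with adjacent i and j) to limit jitter, which is one possible reading of the description; the tier names are illustrative:

```python
from enum import IntEnum


class Tier(IntEnum):
    """Storage tiers ordered by space size: a larger value means a larger, colder space."""
    HOT = 0            # smallest space, most frequently accessed
    COLD_STORAGE = 1   # smaller part of the cold space
    REFRIGERATING = 2  # larger part of the cold space
    ICE = 3            # largest space, rarely accessed, stored classified and compressed


def next_tier(current: Tier, f: float, alpha: float) -> Tier:
    """If F > alpha, move toward a smaller (hotter) space; otherwise toward a larger (colder) one."""
    if f > alpha:
        return Tier(max(current - 1, Tier.HOT))   # large space -> small space
    return Tier(min(current + 1, Tier.ICE))       # small space -> large space
```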
Further, the result of step S700 may be used as a preliminary storage scheme; the user is then analyzed according to a mode-priority principle, and the storage scheme is optimized according to the user's requirements on storage price and performance.
To sum up, the cost-effective data storage method and system in cloud rendering provided by the embodiments of the present application have the following technical effects: for different business rhythms and data access frequencies, a storage mode that achieves three-frequency resonance is taken as the optimal resource usage, cold and hot data are stored separately, and the storage space of hot, cold, and ice data is adjusted in real time, thereby releasing local storage space and reducing cost.
Example two
Based on the same inventive concept as the method for storing the cost-effective data in the cloud rendering in the foregoing embodiment, the present invention further provides a system for storing the cost-effective data in the cloud rendering, as shown in fig. 5, where the system includes:
a first obtaining unit 11, configured to obtain cloud rendering data to be stored;
a first setting unit 12 for setting a planned resource usage frequency ω based on an application scenario of data1
A first extraction unit 13 for extracting a feature element and a basic turning heat α of data;
a second setting unit 14 for setting the theoretical resource access frequency ω based on the characteristic element2
A second obtaining unit 15, configured to obtain an actual resource access frequency Ω;
a first calculation unit 16, for calculating the data access heat F,
(the expression for F is given as an equation image, Figure BDA0003339582930000081, in the original publication)
where K is the frequency of the data, ω′1 = ω1 mod Ω, and ω′2 = ω2 mod Ω;
A first comparing unit 17 for comparing F with α;
the first comparing unit further includes a first executing unit, and the first executing unit is configured to switch the storage space from the large space to the small space if F > α after the first comparing unit determines that F > α, and otherwise, switch the storage space from the small space to the large space.
Further, the first obtaining unit 11 further includes a first classifying unit, configured to classify the cloud rendering data to be stored.
Further, the system comprises a first dividing unit, configured to divide the storage space into an ice space, a cold space, and a hot space.
Further, the first calculating unit 16 further includes a first judging unit for judging whether triple-frequency resonance is achieved.
Further, the cost-effective data storage system in cloud rendering also includes a first compression unit, configured to compress data stored in the ice space. The variations and specific examples of the cost-effective data storage method in cloud rendering of the first embodiment (fig. 1) are also applicable to the cost-effective data storage system in cloud rendering of this embodiment.
In addition, an embodiment of the present invention further provides an electronic device, which includes a bus, a transceiver, a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the transceiver, the memory, and the processor are connected via the bus, and when the computer program is executed by the processor, each process of the embodiment of the method for storing high cost performance data in cloud rendering is implemented, and the same technical effect can be achieved, and details are not repeated here to avoid repetition.
Exemplary electronic device
Specifically, referring to fig. 6, an embodiment of the present invention further provides an electronic device, which includes a bus 1110, a processor 1120, a transceiver 1130, a bus interface 1140, a memory 1150, and a user interface 1160.
In an embodiment of the present invention, the electronic device further includes: a computer program stored in the memory 1150 and executable on the processor 1120, wherein the computer program, when executed by the processor 1120, implements the processes of the above-described embodiments of the cost-effective data storage method in cloud rendering.
A transceiver 1130 for receiving and transmitting data under the control of the processor 1120.
In embodiments of the invention in which a bus architecture (represented by bus 1110) is used, bus 1110 may include any number of interconnected buses and bridges, with bus 1110 connecting various circuits including one or more processors, represented by processor 1120, and memory, represented by memory 1150.
Bus 1110 represents one or more of any of several types of bus structures, including a memory bus, and a memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include: industry standard architecture bus, micro-channel architecture bus, expansion bus, video electronics standards association, peripheral component interconnect bus.
Processor 1120 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits in hardware or instructions in software in a processor. The processor described above includes: general purpose processors, central processing units, network processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, complex programmable logic devices, programmable logic arrays, micro-control units or other programmable logic devices, discrete gates, transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in embodiments of the present invention may be implemented or performed. For example, the processor may be a single core processor or a multi-core processor, which may be integrated on a single chip or located on multiple different chips.
Processor 1120 may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly performed by a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor. The software modules may reside in random access memory, flash memory, read only memory, programmable read only memory, erasable programmable read only memory, registers, and the like, as is known in the art. The readable storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
The bus 1110 may also connect various other circuits such as peripherals, voltage regulators, or power management circuits to provide an interface between the bus 1110 and the transceiver 1130, as is well known in the art. Therefore, the embodiments of the present invention will not be further described.
The transceiver 1130 may be one element or may be multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. For example: the transceiver 1130 receives external data from other devices, and the transceiver 1130 transmits data processed by the processor 1120 to other devices. Depending on the nature of the computer system, a user interface 1160 may also be provided, such as: touch screen, physical keyboard, display, mouse, speaker, microphone, trackball, joystick, stylus.
It is to be appreciated that in embodiments of the invention, the memory 1150 may further include memory located remotely with respect to the processor 1120, which may be coupled to a server via a network. One or more portions of the above-described network may be an ad hoc network, an intranet, an extranet, a virtual private network, a local area network, a wireless local area network, a wide area network, a wireless wide area network, a metropolitan area network, the internet, a public switched telephone network, a plain old telephone service network, a cellular telephone network, a wireless fidelity network, and a combination of two or more of the above. For example, the cellular telephone network and the wireless network may be a global system for mobile communications, code division multiple access, global microwave interconnect access, general packet radio service, wideband code division multiple access, long term evolution, LTE frequency division duplex, LTE time division duplex, long term evolution-advanced, universal mobile communications, enhanced mobile broadband, mass machine type communications, ultra-reliable low latency communications, etc.
It is to be understood that the memory 1150 in embodiments of the present invention can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. Wherein the nonvolatile memory includes: read-only memory, programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, or flash memory.
The volatile memory includes: random access memory, which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as: static random access memory, dynamic random access memory, synchronous dynamic random access memory, double data rate synchronous dynamic random access memory, enhanced synchronous dynamic random access memory, synchronous link dynamic random access memory, and direct memory bus random access memory. The memory 1150 of the electronic device described in the embodiments of the invention includes, but is not limited to, the above and any other suitable types of memory.
In an embodiment of the present invention, memory 1150 stores the following elements of operating system 1151 and application programs 1152: an executable module, a data structure, or a subset thereof, or an expanded set thereof.
Specifically, the operating system 1151 includes various system programs such as: a framework layer, a core library layer, a driver layer, etc. for implementing various basic services and processing hardware-based tasks. Applications 1152 include various applications such as: media player, browser, used to realize various application services. A program implementing a method of an embodiment of the invention may be included in application program 1152. The application programs 1152 include: applets, objects, components, logic, data structures, and other computer system executable instructions that perform particular tasks or implement particular abstract data types.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned method for storing high performance-price ratio data in cloud rendering, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The above description is only a specific implementation of the embodiments of the present invention, but the scope of the embodiments of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present invention, and all such changes or substitutions should be covered by the scope of the embodiments of the present invention. Therefore, the protection scope of the embodiments of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A cost-effective data storage method in cloud rendering, wherein the method comprises the following steps:
acquiring cloud rendering data to be stored;
setting a planned resource usage frequency ω1 based on the application scenario of the data;
extracting characteristic elements and a basic turnover heat α of the data;
setting a theoretical resource access frequency ω2 according to the characteristic elements;
acquiring an actual resource access frequency Ω;
calculating the data access heat F,
(the expression for F is given as an equation image, Figure FDA0003339582920000011, in the original publication)
where K is the frequency of the data, ω′1 = ω1 mod Ω, and ω′2 = ω2 mod Ω;
comparing F with α: if F > α, the storage space is switched from a larger space to a smaller space; otherwise, it is switched from a smaller space to a larger space.
2. The method for storing the data with high cost performance in the cloud rendering according to claim 1, wherein the storage space comprises:
an ice space, a cold space, and a hot space; the spatial size relationship of the three spaces is as follows: ice space > cold space > hot space.
3. The method for storing cost-effective data in cloud rendering according to claim 2, wherein the obtaining of the cloud rendering data to be stored includes:
classifying the cloud rendering data to be stored into five types of data: model data, trajectory data, evolution data, combined data, and interface data; K is the frequency of the classified data, K = k1·k2·k3·k4·k5;
The extracting of the feature elements of the data includes extracting and storing the associated feature elements according to a preset relevance.
4. The method for storing cost-effective data in cloud rendering according to claim 1, wherein the calculating the data access heat F comprises:
judging whether the criterion of three-frequency resonance is met according to the calculated data access heat F; if not, performing iterative optimization of the characteristic elements and resetting the planned resource usage frequency ω1 and the theoretical resource access frequency ω2 until the criterion of three-frequency resonance is met;
the criterion of three-frequency resonance is: 0.9 ≤ K/F ≤ 1.
5. The method for storing cost-effective data in cloud rendering according to claim 4, wherein after comparing F with α and switching the storage space accordingly (from the larger space to the smaller space if F > α, otherwise from the smaller space to the larger space), the method further comprises:
and analyzing the user by adopting a mode priority principle, and optimizing a storage scheme according to the requirements of the user on the storage price and the performance.
6. A cost-effective data storage method in cloud rendering as recited in claim 2,
when the data is stored in the ice space, the data is classified and compressed.
7. The data storage method with high cost performance in cloud rendering according to claim 2, wherein the cold space is divided into a refrigerating space and a cold storage space, and the space size relationship between them is: refrigerating space > cold storage space.
8. A cost-effective data storage system in cloud rendering, wherein the system comprises:
a first obtaining unit, configured to obtain cloud rendering data to be stored;
a first setting unit, for setting a planned resource usage frequency ω1 based on the application scenario of the data;
a first extraction unit, for extracting the characteristic elements and the basic turnover heat α of the data;
a second setting unit, for setting the theoretical resource access frequency ω2 according to the characteristic elements;
A second obtaining unit, configured to obtain an actual resource access frequency Ω;
a first calculation unit, for calculating the data access heat F,
(the expression for F is given as an equation image, Figure FDA0003339582920000021, in the original publication)
where K is the frequency of the data, ω′1 = ω1 mod Ω, and ω′2 = ω2 mod Ω;
A first comparing unit for comparing F with α;
and the first execution unit is used for switching the storage space from the large space to the small space if F is larger than alpha after the judgment of the first comparison unit, and otherwise, switching the storage space from the small space to the large space.
9. A system for storing cost-effective data in cloud rendering, comprising a bus, a transceiver, a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the transceiver, the memory and the processor are connected via the bus, and wherein the computer program, when executed by the processor, implements the steps of the method for storing cost-effective data in cloud rendering according to any one of claims 1-7.
10. A computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps in the method for cost-effective data storage in cloud rendering according to any of claims 1-7.
CN202111304336.3A 2021-11-05 2021-11-05 High-cost performance data storage method and system in cloud rendering Active CN114004979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111304336.3A CN114004979B (en) 2021-11-05 2021-11-05 High-cost performance data storage method and system in cloud rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111304336.3A CN114004979B (en) 2021-11-05 2021-11-05 High-cost performance data storage method and system in cloud rendering

Publications (2)

Publication Number Publication Date
CN114004979A true CN114004979A (en) 2022-02-01
CN114004979B CN114004979B (en) 2023-09-01

Family

ID=79927807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111304336.3A Active CN114004979B (en) 2021-11-05 2021-11-05 High-cost performance data storage method and system in cloud rendering

Country Status (1)

Country Link
CN (1) CN114004979B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007213564A (en) * 2006-01-10 2007-08-23 Ted Impact Co Ltd Resource exploitation supporting method, information processor constituting network system for supporting resource exploitation, and computer program for supporting resource exploitation
CN109274752A (en) * 2018-10-10 2019-01-25 腾讯科技(深圳)有限公司 The access method and device, electronic equipment, storage medium of block chain data
CN109358821A (en) * 2018-12-12 2019-02-19 山东大学 A kind of cold and hot data store optimization method of cloud computing of cost driving
CN109857737A (en) * 2019-01-03 2019-06-07 平安科技(深圳)有限公司 A kind of cold and hot date storage method and device, electronic equipment
US20200372699A1 (en) * 2019-05-24 2020-11-26 Nvidia Corporation Fine grained interleaved rendering applications in path tracing for cloud computing environments
CN112306964A (en) * 2019-07-31 2021-02-02 国际商业机器公司 Metadata-based scientific data characterization driven on a large scale by knowledge databases
CN112825023A (en) * 2019-11-20 2021-05-21 上海商汤智能科技有限公司 Cluster resource management method and device, electronic equipment and storage medium
CN110908608A (en) * 2019-11-22 2020-03-24 苏州浪潮智能科技有限公司 Storage space saving method and system
CN111309732A (en) * 2020-02-19 2020-06-19 杭州朗和科技有限公司 Data processing method, device, medium and computing equipment
CN111951363A (en) * 2020-07-16 2020-11-17 广州玖的数码科技有限公司 Cloud computing chain-based rendering method and system and storage medium
CN112860189A (en) * 2021-02-19 2021-05-28 山东大学 Cost-driven cold and hot layered cloud storage redundancy storage method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NING WU: "Multistage Heat-Storage Scheduling for Integrated Energy System Using CloudPSS-IESLab", 2020 10th International Conference on Power and Energy System (ICPES), pages 512-516 *
张淼波: "浅析云存储数据中心存储系统优化访问的策略" [A brief analysis of strategies for optimizing access in cloud storage data center storage systems], 电脑知识与技术 [Computer Knowledge and Technology], no. 26 *

Also Published As

Publication number Publication date
CN114004979B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN113242568B (en) Task unloading and resource allocation method in uncertain network environment
CN111010434B (en) Optimized task unloading method based on network delay and resource management
EP2705433B1 (en) Method and system for dynamically creating and servicing master-slave pairs within and across switch fabrics of a portable computing device
WO2021088207A1 (en) Mixed deployment-based job scheduling method and apparatus for cloud computing cluster, server and storage device
US20180136976A1 (en) Temporal task scheduling in a hybrid system
CN109669774B (en) Hardware resource quantification method, hardware resource arrangement method, hardware resource quantification device and hardware resource arrangement device and network equipment
WO2017157160A1 (en) Data table joining mode processing method and apparatus
CN111813506A (en) Resource sensing calculation migration method, device and medium based on particle swarm algorithm
TWI775210B (en) Data dividing method and processor for convolution operation
CN113645637B (en) Method and device for unloading tasks of ultra-dense network, computer equipment and storage medium
US11422858B2 (en) Linked workload-processor-resource-schedule/processing-system—operating-parameter workload performance system
US20160080284A1 (en) Method and apparatus for executing application based on open computing language
WO2009156809A1 (en) Method, apparatus and computer program product for distributed information management
CN114004979B (en) High-cost performance data storage method and system in cloud rendering
CN113766269A (en) Video caching strategy determination method, video data processing method, device and storage medium
CN109450684B (en) Method and device for expanding physical node capacity of network slicing system
WO2022063157A1 (en) Parameter configuration method and related system
Li et al. Traffic at-a-glance: Time-bounded analytics on large visual traffic data
Wang et al. Teaching mechanism empowered by virtual simulation: Edge computing–driven approach
CN114298705A (en) Cloud desktop accurate charging method and system based on charging engine
Jian et al. A HDFS dynamic load balancing strategy using improved niche PSO algorithm in cloud storage
CN115065685B (en) Cloud computing resource scheduling method, device, equipment and medium
CN111030856B (en) Cloud-based data access method, electronic device and computer readable medium
WO2023151465A1 (en) Method for adjusting specification parameter of ssd and related product
CN116582949A (en) Resource scheduling method integrating security decision and calculation acceleration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant