CN111131390A - Storage caching method for improving cloud rendering concurrency number - Google Patents
- Publication number
- CN111131390A (application number CN201911167739.0A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- storage
- server
- machine head
- storage machine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/141—Setup of application sessions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/28—Indexing scheme for image data processing or generation, in general involving image processing hardware
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a storage caching method for improving the cloud rendering concurrency number, comprising the following steps: setting up N rendering servers to form a rendering cluster, and configuring one storage head for every n rendering servers; each storage head mounts a back-end storage server that stores the rendering tasks, and the storage heads provide a virtual IP to the outside through a load balancing server; a rendering server accesses the virtual IP, and the load balancing server allocates one storage head to connect with that rendering server according to the load on each storage head; the storage head loads rendering task data from the storage server in real time as the rendering server reads it, and caches the data in the storage head's local storage; the rendering server then reads the cached rendering task data and performs the rendering computation. The invention enables rapid acquisition of rendering task data and improves the utilization efficiency of the storage heads.
Description
Technical Field
The invention relates to the field of cloud rendering storage, in particular to a storage caching method for improving cloud rendering concurrency.
Background
When a render farm is rendering, one storage head is configured for every n rendering servers according to the size of the rendering cluster, so 10n rendering servers require 10 storage heads. However, the data for a single rendering task resides on one particular storage head, so the read speed of the n rendering servers assigned to it is bounded by that head's load (its storage bandwidth performance). For example, if a single storage head provides 40 Gbps of bandwidth and 400 rendering servers read from it concurrently, each rendering server gets only about 100 Mbps. Because the storage heads are independent of one another, idle capacity on other heads cannot be used even when their loads are small; the read speed of rendering-server data cannot be effectively improved, and storage head utilization stays low.
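The bandwidth arithmetic above can be checked with a short sketch (the 40 Gbps and 400-server figures come from the example in the text; the function name is illustrative):

```python
# Per-server read speed when all concurrent renderers share one storage head.
def per_server_mbps(head_bandwidth_gbps: float, concurrent_servers: int) -> float:
    """Divide a single head's bandwidth evenly among the servers reading from it."""
    return head_bandwidth_gbps * 1000 / concurrent_servers

# 400 rendering servers sharing a single 40 Gbps storage head:
print(per_server_mbps(40, 400))  # 100.0 Mbps per server -- the bottleneck described above
```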
Accordingly, the prior art is deficient and needs improvement.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a storage caching method for improving the cloud rendering concurrency number, solving the prior-art problems that the read speed of rendering-server data cannot be effectively improved and that storage head utilization is low.
The technical scheme of the invention is as follows: a storage caching method for improving cloud rendering concurrency number comprises the following steps:
s1: setting N rendering servers to form a rendering cluster, and configuring a storage machine head for each N rendering servers; and N is more than or equal to N.
S2: and the load balancing server is respectively connected with the N rendering servers in the rendering cluster and is simultaneously connected with the set storage machine head.
S3: and the storage machine head is mounted on a storage server which stores the rendering tasks and is arranged at the rear end, and the storage machine head provides a virtual IP externally through the load balancing server.
S4: and the rendering server accesses the virtual IP, and the load balancing server allocates one storage machine head to be connected with the rendering server according to the load condition of each storage machine head.
S5: and the storage machine head loads the rendering task data of the storage server in real time according to the rendering task data read by the rendering server and caches the rendering task data in a local memory of the storage machine head.
And for the same rendering task, rendering by a plurality of rendering servers at the same time, and when the storage machine heads connected with the rendering servers for processing the rendering task are different, mutually synchronizing rendering task data read by all the rendering servers for processing the rendering task to the local storage of the storage machine heads.
And for the same rendering task, simultaneously rendering by a plurality of rendering servers, and when the storage machine heads connected with the rendering servers for processing the rendering task are the same, caching the rendering task data read by all the rendering servers for processing the rendering task to the local storage by the corresponding storage machine heads.
S6: and the rendering server reads the rendering task data cached in the local memory and performs rendering calculation.
N rendering servers form a rendering cluster, and, according to the cluster's size, one storage head is configured for every n rendering servers. Each storage head mounts the back-end storage server holding the rendering tasks and provides a virtual IP to the outside through the load balancing server. When a rendering server needs rendering task data, it simply accesses the virtual IP; the load balancing server then allocates one storage head to connect with it according to the load on each head. The storage head loads the requested task data from the storage server in real time and caches it in its local storage, and the rendering server performs the rendering computation. When the storage heads connected to the several rendering servers working on the same task differ, those heads synchronize the task data they have read with one another; once the data a rendering server needs is already in a head's local storage, the server obtains it directly from that cache, so the same data never has to be fetched twice and rendering efficiency improves. Because the load balancing server can pick the optimal storage head based on each head's actual load, rendering task data is acquired quickly and the utilization efficiency of the storage heads is effectively improved.
Further, step S1 is: set up N rendering servers to form a rendering cluster, establish communication connections among the N rendering servers, and configure one storage head for every n rendering servers.
Further, the method also includes step S7: after rendering is finished, the storage head reclaims the local storage occupied by the rendering task. Reclaiming the occupied storage after rendering completes leaves sufficient space for the next task and keeps the acquisition of rendering task data fast.
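Step S7's reclamation can be sketched as evicting a finished task's data from a head's local storage (the dict cache and function name are illustrative assumptions, not the patent's implementation):

```python
def reclaim(local_storage: dict, task_id: str) -> int:
    """Remove a finished rendering task's cached data; return the bytes freed."""
    data = local_storage.pop(task_id, b"")
    return len(data)

cache = {"task-1": b"x" * 1024, "task-2": b"y" * 2048}
freed = reclaim(cache, "task-1")
print(freed)          # 1024 bytes freed for the next task
print(sorted(cache))  # only task-2 remains cached
```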
Further, the local storage is local memory and/or a solid-state disk. The local memory can cache the retrieved rendering task data in real time.
Further, the load is the storage bandwidth performance, and the rendering task data is the original file to be rendered.
Further, step S4 is: the rendering server accesses the virtual IP, and the load balancing server selects the storage head with the smallest load, according to the load on each storage head, to connect with the rendering server. Choosing the least-loaded head further speeds up the acquisition of rendering task data and hence the rendering itself.
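The minimum-load allocation in this variant of S4 reduces to picking the head whose storage bandwidth in use is smallest; a one-line sketch with purely illustrative load figures:

```python
def allocate_head(loads_gbps: dict) -> str:
    """Return the storage head with the smallest current load (bandwidth in use)."""
    return min(loads_gbps, key=loads_gbps.get)

# Hypothetical current loads for four storage heads, in Gbps:
loads = {"head-1": 31.5, "head-2": 12.0, "head-3": 25.0, "head-4": 38.0}
print(allocate_head(loads))  # head-2 has the most spare bandwidth
```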
Further, step S5 also includes: when the same rendering task is processed by a single rendering server, the local storage of the storage head connected to that server caches the task's data.
By adopting this scheme, the invention provides a storage caching method for improving the cloud rendering concurrency number. N rendering servers form a rendering cluster, and one storage head is configured for every n rendering servers according to the cluster's size. Each storage head mounts the back-end storage server holding the rendering tasks and provides a virtual IP to the outside through the load balancing server. When a rendering server needs rendering task data, it accesses the virtual IP, and the load balancing server selects the storage head with the smallest load to connect with it. The storage head loads the task data from the storage server in real time and caches it in its local storage, and the rendering server performs the rendering computation. When the heads connected to several rendering servers working on the same task differ, those heads synchronize the task data they have read with one another; once the data a rendering server needs is in a head's local storage, the server obtains it directly from that cache, so the same data never has to be fetched twice and rendering efficiency improves. Because the load balancing server picks the optimal storage head based on each head's actual load, rendering task data is acquired quickly and the utilization efficiency of the storage heads is effectively improved.
Drawings
FIG. 1 is a block flow diagram of the present invention;
FIG. 2 is a block diagram of the present invention.
Wherein: rendering cluster 1, rendering server 10, virtual IP 2, load balancing server 3, storage head 4, local storage 40, storage server 5.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Referring to fig. 1 and fig. 2, the present invention provides a storage caching method for increasing cloud rendering concurrency number, including the following steps:
S1: Set up 400 rendering servers 10 to form a rendering cluster 1, and configure one storage head 4 for every 40 rendering servers 10.
S2: the load balancing server 3 establishes connection with 400 rendering servers 10 in the rendering cluster 1 respectively, and simultaneously establishes connection with the set storage head 4.
S3: the storage head 4 mounts a storage server 5 which stores rendering tasks at the rear end, and the storage head 4 provides a virtual IP2 to the outside through the load balancing server 3.
S4: the rendering server 10 accesses the virtual IP2, and the load balancing server 3 allocates one storage head 4 to connect with the rendering server 10 according to the load condition of each storage head 4.
S5: the storage head 4 loads the rendering task data of the storage server 5 in real time according to the rendering task data read by the rendering server 10, and caches the rendering task data in the local memory 40 of the storage head 4.
When the same rendering task is rendered by several rendering servers 10 at once and the storage heads 4 to which those servers are connected differ, the corresponding storage heads 4 synchronize with one another the task data read by all of the rendering servers 10 processing that task, each caching it in its local storage 40.
When the same rendering task is rendered by several rendering servers 10 at once and those servers are connected to the same storage head 4, that storage head 4 caches in its local storage 40 the task data read by all of the rendering servers 10 processing the task.
S6: the rendering server 10 reads the rendering task data cached in the local storage 40, and performs rendering calculation.
The 400 rendering servers 10 form a rendering cluster 1, and, according to the cluster's size, one storage head 4 is configured for every 40 rendering servers 10. Each storage head 4 mounts the back-end storage server 5 holding the rendering tasks and provides a virtual IP 2 to the outside through the load balancing server 3. When a rendering server 10 needs rendering task data, it accesses the virtual IP 2; the load balancing server 3 then allocates one storage head 4 to connect with it according to the load on each head. The storage head 4 loads the requested task data from the storage server 5 in real time and caches it in its local storage 40, and the rendering server 10 performs the rendering computation. When the storage heads 4 connected to several rendering servers 10 working on the same task differ, the corresponding heads synchronize the task data they have read with one another; once the data a rendering server 10 needs is in the local storage 40 of a storage head 4, the server obtains it directly from that cache, so the same data never has to be fetched twice and rendering efficiency improves. Because the load balancing server 3 can pick the optimal storage head 4 based on each head's actual load, rendering task data is acquired quickly and the utilization efficiency of the storage heads 4 is effectively improved.
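In this embodiment (400 servers, one head per 40 servers, hence 10 heads), balancing reads across heads rather than funneling them through one changes the per-server budget considerably. A hedged back-of-the-envelope sketch, reusing the 40 Gbps per-head figure from the Background example:

```python
def balanced_per_server_mbps(heads: int, head_bandwidth_gbps: float, servers: int) -> float:
    """Per-server read speed when the load balancer spreads servers evenly over heads."""
    servers_per_head = servers / heads
    return head_bandwidth_gbps * 1000 / servers_per_head

print(balanced_per_server_mbps(10, 40, 400))  # 1000.0 Mbps with 10 balanced heads
print(balanced_per_server_mbps(1, 40, 400))   # 100.0 Mbps when all share one head
```

This assumes a perfectly even spread and identical heads; real gains depend on cache hit rates and actual load distribution.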
Step S1 is: set up 400 rendering servers 10 to form a rendering cluster 1, establish communication connections among the 400 rendering servers 10, and configure one storage head 4 for every 40 rendering servers 10.
The method further comprises step S7: after rendering is finished, the storage head 4 reclaims the local storage occupied by the rendering task. Reclaiming the occupied storage after rendering completes leaves sufficient space for the next task and keeps the acquisition of rendering task data fast.
The local storage 40 is local memory and a solid-state disk. The local memory can cache the retrieved rendering task data in real time.
The load is a storage bandwidth performance; the rendering task data is a rendered original file.
Step S4 is: the rendering server 10 accesses the virtual IP 2, and the load balancing server 3 selects the storage head 4 with the smallest load, according to the load on each storage head 4, to connect with the rendering server 10. Choosing the least-loaded storage head 4 further speeds up the acquisition of rendering task data and hence the rendering itself.
Step S5 further includes: when the same rendering task is processed by a single rendering server 10, the local storage 40 of the storage head 4 connected to that server caches the task's data.
In summary, the present invention provides a storage caching method for improving the cloud rendering concurrency number. 400 rendering servers form a rendering cluster, and one storage head is configured for every 40 rendering servers according to the cluster's size. Each storage head mounts the back-end storage server holding the rendering tasks and provides a virtual IP to the outside through a load balancing server. When a rendering server needs rendering task data, it accesses the virtual IP, and the load balancing server selects the storage head with the smallest load to connect with it. The storage head loads the task data from the storage server in real time and caches it in its local storage, and the rendering server performs the rendering computation. When the heads connected to several rendering servers working on the same task differ, those heads synchronize the task data they have read with one another; once the data a rendering server needs is in a head's local storage, the server obtains it directly from that cache, so the same data never has to be fetched twice and rendering efficiency improves. Because the load balancing server picks the optimal storage head based on each head's actual load, rendering task data is acquired quickly and the utilization efficiency of the storage heads is effectively improved.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (7)
1. A storage caching method for improving the cloud rendering concurrency number, characterized by comprising the following steps:
S1: setting up N rendering servers to form a rendering cluster, and configuring one storage head for every n rendering servers, where N ≥ n;
S2: a load balancing server connecting to each of the N rendering servers in the rendering cluster and, at the same time, to each configured storage head;
S3: each storage head mounting the back-end storage server that stores the rendering tasks, the storage heads providing a virtual IP to the outside through the load balancing server;
S4: a rendering server accessing the virtual IP, and the load balancing server allocating one storage head to connect with the rendering server according to the load on each storage head;
S5: the storage head loading rendering task data from the storage server in real time as the rendering server reads it, and caching the data in the storage head's local storage;
for the same rendering task rendered by a plurality of rendering servers simultaneously, when the storage heads connected to the rendering servers processing the task differ, the corresponding storage heads synchronizing with one another the task data read by all of those rendering servers, each caching it in its local storage;
for the same rendering task rendered by a plurality of rendering servers simultaneously, when the storage heads connected to the rendering servers processing the task are the same, the corresponding storage head caching in its local storage the task data read by all of those rendering servers;
S6: the rendering server reading the rendering task data cached in the local storage and performing the rendering computation.
2. The storage caching method for improving the cloud rendering concurrency number according to claim 1, wherein step S1 is: setting up N rendering servers to form a rendering cluster, establishing communication connections among the N rendering servers, and configuring one storage head for every n rendering servers.
3. The storage caching method for improving the cloud rendering concurrency number according to claim 1, further comprising step S7: after rendering is finished, the storage head reclaiming the local storage occupied by the rendering task.
4. The storage caching method for improving the cloud rendering concurrency number according to claim 1, wherein the local storage is local memory and/or a solid-state disk.
5. The storage caching method for improving the cloud rendering concurrency number according to claim 1, wherein the load is the storage bandwidth performance, and the rendering task data is the original file to be rendered.
6. The storage caching method for improving the cloud rendering concurrency number according to claim 5, wherein step S4 is: the rendering server accessing the virtual IP, and the load balancing server selecting the storage head with the smallest load, according to the load on each storage head, to connect with the rendering server.
7. The storage caching method for improving the cloud rendering concurrency number according to claim 1, wherein step S5 further comprises: when the same rendering task is processed by a single rendering server, the local storage of the storage head connected to that rendering server caching the task's data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911167739.0A CN111131390B (en) | 2019-11-25 | 2019-11-25 | Storage caching method for improving cloud rendering concurrency number |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111131390A true CN111131390A (en) | 2020-05-08 |
CN111131390B CN111131390B (en) | 2022-06-21 |
Family
ID=70496544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911167739.0A Active CN111131390B (en) | 2019-11-25 | 2019-11-25 | Storage caching method for improving cloud rendering concurrency number |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111131390B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104580422A (en) * | 2014-12-26 | 2015-04-29 | 赞奇科技发展有限公司 | Cluster rendering node data access method based on shared cache |
US20160296842A1 (en) * | 2013-12-26 | 2016-10-13 | Square Enix Co., Ltd. | Rendering system, control method, and storage medium |
CN106850759A (en) * | 2016-12-31 | 2017-06-13 | 广州勤加缘科技实业有限公司 | MySQL database clustering methods and its processing system |
CN107483390A (en) * | 2016-06-08 | 2017-12-15 | 成都赫尔墨斯科技股份有限公司 | A kind of cloud rendering web deployment subsystem, system and cloud rendering platform |
CN108494588A (en) * | 2018-03-12 | 2018-09-04 | 深圳市瑞驰信息技术有限公司 | A kind of system and method for cluster block device dynamic QoS configuration |
Non-Patent Citations (1)
Title |
---|
李淼淼: ""GlusterFS 分布式存储系统在云渲染平台中的应用研究"", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114338725A (en) * | 2021-12-31 | 2022-04-12 | 深圳市瑞云科技有限公司 | Distributed storage scheduling method for improving large-scale cluster rendering upper limit |
CN114338725B (en) * | 2021-12-31 | 2024-01-30 | 深圳市瑞云科技有限公司 | Distributed storage scheduling method for improving upper limit of large-scale cluster rendering |
CN115794424A (en) * | 2023-02-13 | 2023-03-14 | 成都古河云科技有限公司 | Method for accessing three-dimensional model through distributed architecture |
CN115794424B (en) * | 2023-02-13 | 2023-04-11 | 成都古河云科技有限公司 | Method for accessing three-dimensional model through distributed architecture |
Also Published As
Publication number | Publication date |
---|---|
CN111131390B (en) | 2022-06-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: 518000, 17th floor, Block B, Sunshine Technology Innovation Center, No. 2 Shanghua Road, Nanshan Street, Nanshan District, Shenzhen City, Guangdong Province
Patentee after: Shenzhen Ruiyun Technology Co.,Ltd.
Address before: 518000, 17th floor, Block B, Sunshine Technology Innovation Center, No. 2 Shanghua Road, Nanshan Street, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: SHENZHEN RAYVISION TECHNOLOGY CO.,LTD.