CN109976904A - Processing method for Redis memory management in an acquisition system - Google Patents
Processing method for Redis memory management in an acquisition system
- Publication number
- CN109976904A (application CN201910138994.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- cache
- redis
- caching
- acquisition system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a processing method for Redis memory management in an acquisition system, and belongs to the field of computers. A distributed cache server is added to the acquisition system to perform data caching and instruction caching. Based on Redis memory management, the distributed cache server is responsible for executing a front-end communication archive data synchronization caching function, synchronizing device terminal and collection point archive information to the cache, pre-storing the message data parsed by the acquisition front end in the distributed cache, and storing it into the production database in batches in a timed manner. The invention relieves the pressure on the relational database and achieves high-performance data reading, dynamic node expansion, and automatic discovery and switching of faulty nodes.
Description
Technical Field
The invention belongs to the field of computers and relates to a processing method for Redis memory management in an acquisition system.
Background
The bottleneck of the system's data processing capability is mainly the data storage capability after protocol analysis. Traditional processing modes mostly combine distributed multi-threaded data acquisition (communication/protocol analysis) with single-threaded storage (warehousing), which causes storage blockage. There is therefore a need to improve on the prior art through thread pooling and database connection pooling techniques.
Disclosure of Invention
In view of the above, the present invention provides a processing method for Redis memory management in an acquisition system. Database connections are established dynamically according to the volume of data to be processed, which avoids wasting connection resources and ensures the performance of large-capacity data acquisition, computation and warehousing, and multi-user concurrent access.
The purpose of the invention is realized by the following technical scheme:
a processing method for Redis memory management in an acquisition system comprises the following steps: adding a distributed cache server to the acquisition system;
performing data caching and instruction caching;
based on Redis memory management, the distributed cache server is responsible for executing a front-end communication archive data synchronization caching function, synchronizing the archive information of the device terminals and collection points to the cache, pre-storing the message data parsed by the acquisition front end in the distributed cache, and storing the message data into the production database in batches in a timed manner.
Further, the data caching comprises a data warehousing cache and a compute data cache;
wherein the data warehousing cache is as follows: the collected data is cached in an in-memory database before warehousing, and the cached records are warehoused in batch mode once their number reaches a threshold; to avoid the situation where the number of cached records does not reach the threshold for a long time and the data is never warehoused, a time threshold is also set; batch warehousing is started as soon as either the record threshold or the time threshold is reached, avoiding the large time overhead of warehousing records one by one.
The compute data cache is as follows: the front-end communication archive data synchronization caching function is executed to synchronize the terminal device and collection point archive information; meanwhile, the collected data and model data are cached so that they can be loaded quickly for analysis and statistics.
Further, the instruction caching comprises storage capacity estimation and storage scheme design;
wherein the storage capacity estimation is: estimating the capacity requirement according to the current business acquisition requirements, and calculating the total memory capacity as the memory resources required by the server at runtime plus the memory resources occupied by the business applications;
the storage scheme is designed as follows: (1) executing the front-end communication archive data synchronization caching function and synchronizing the archive information of the device terminals and collection points to the cache, so that it can be loaded quickly for analysis and statistics; (2) pre-storing the message data parsed by the acquisition front end in the distributed cache, and dumping it to the production database in batches in a timed or periodic manner.
Further, the Redis-based cache nodes form an interconnected graph structure in the network topology and are connected in such a way that every 2 servers form 1 master-slave replication set;
the MASTER-SLAVE replication set comprises 1 MASTER node and 1 SLAVE node; wherein,
the MASTER node provides data read-write service;
the SLAVE node provides data reading services.
Further, when a Redis cache node is initialized, the device archive information is loaded once, and later changed data is written to the MASTER node through on-demand business updates.
The invention has the beneficial effects that:
(1) The invention reduces the pressure on the relational database and achieves high-performance data reading, dynamic node expansion, and automatic discovery and switching of faulty nodes.
(2) In the invention, the cached data uses multi-threading and a multi-level storage buffer pool: the system dynamically applies for database connections according to the amount and priority of the data to be stored, uses multi-threading to achieve parallel warehousing, and adopts a concurrency-control access strategy to reduce resource contention for database access and improve the storage efficiency of the system. The memory caching and connection buffer pool techniques improve the data storage capability; meanwhile, in an emergency the caching technique can fall back to the file system for storage, improving the reliability of the system in case of failure.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings, in which:
FIG. 1 is a diagram of a multi-threaded access architecture;
FIG. 2 is a diagram of the Redis cache topology.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be understood that the preferred embodiments are illustrative of the invention only and are not limiting upon the scope of the invention.
The cached data uses multi-threading and a multi-level storage buffer pool: the system dynamically applies for database connections according to the amount and priority of the data to be stored, uses multi-threading to achieve parallel warehousing, and adopts a concurrency-control access strategy to reduce resource contention for database access and improve the storage efficiency of the system, as shown in FIG. 1.
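As a concrete illustration of this multi-threaded warehousing pattern, the following minimal sketch (not the patent's actual implementation) uses a thread pool in which each worker opens its own database connection on demand and inserts a whole batch per transaction; sqlite3 merely stands in for the production relational database, and all table and column names are illustrative.

```python
# Sketch of parallel batch warehousing with per-thread, on-demand connections.
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor

DB_PATH = "acquisition.db"      # stand-in for the production relational database
_local = threading.local()      # one connection per worker thread

def get_connection():
    """Lazily open a connection for the current thread ("dynamic" allocation)."""
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect(DB_PATH)
    return _local.conn

def warehouse_batch(batch):
    """Insert one batch of parsed message records in a single transaction."""
    conn = get_connection()
    with conn:  # commits the whole batch at once instead of row by row
        conn.executemany("INSERT INTO messages VALUES (?, ?)", batch)
    return len(batch)

if __name__ == "__main__":
    init = sqlite3.connect(DB_PATH)
    with init:
        init.execute("CREATE TABLE IF NOT EXISTS messages (point_id TEXT, payload TEXT)")
    init.close()
    # Fake batches of (collection point, payload) records taken from the cache.
    batches = [[(f"point-{i}", f"frame-{i}-{j}") for j in range(100)]
               for i in range(8)]
    # The pool size would normally scale with data volume and priority.
    with ThreadPoolExecutor(max_workers=4) as pool:
        stored = sum(pool.map(warehouse_batch, batches))
    print(f"warehoused {stored} records")
```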
The memory caching and connection buffer pool techniques improve the data storage capability; meanwhile, in an emergency the caching technique can fall back to the file system for storage, improving the reliability of the system in case of failure.
The bottleneck of the system's data processing capability is mainly the data storage capability after protocol analysis. Traditional processing modes mostly combine distributed multi-threaded data acquisition (communication/protocol analysis) with single-threaded storage (warehousing), which causes storage blockage. Through thread and database connection pooling techniques, the software dynamically establishes database connections according to the volume of data to be processed, which avoids wasting connection resources and ensures the performance of large-capacity data acquisition, computation and warehousing, and multi-user concurrent access.
1. Data caching
Distributed caching, one of the distributed computing technologies, is an in-memory distributed solution. Distributed caching can effectively resolve scalability bottlenecks and reduce both the memory overhead of the application server and the read/write pressure on the relational database.
Data warehousing cache: the collected data is cached in an in-memory database before warehousing, and warehoused in batch mode once the number of cached records reaches a threshold. To avoid the situation where the number of cached records does not reach the threshold for a long time and the data is never warehoused, a time threshold is also set. Batch warehousing is started as soon as either the record threshold or the time threshold is reached. Batch warehousing avoids the large time overhead of warehousing records one by one.
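The record-threshold / time-threshold flushing described above can be sketched as follows, assuming a local Redis instance serves as the in-memory database and a warehouse_batch() callback (such as the one sketched earlier) performs the batch INSERT; the key name and threshold values are illustrative assumptions.

```python
# Sketch of threshold-driven batch warehousing backed by a Redis list.
import time
import redis

CACHE_KEY = "acq:pending"      # Redis list holding parsed records awaiting warehousing
RECORD_THRESHOLD = 500         # flush when this many records are cached...
TIME_THRESHOLD = 5.0           # ...or when this many seconds have elapsed

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cache_record(record: str) -> None:
    """Called by the acquisition front end for every parsed message."""
    r.rpush(CACHE_KEY, record)

def flush_loop(warehouse_batch) -> None:
    """Flush cached records whenever either threshold is reached."""
    last_flush = time.monotonic()
    while True:
        count = r.llen(CACHE_KEY)
        timed_out = time.monotonic() - last_flush >= TIME_THRESHOLD
        if count >= RECORD_THRESHOLD or (timed_out and count > 0):
            # Take a batch off the head of the list atomically (transactional pipeline).
            pipe = r.pipeline()
            pipe.lrange(CACHE_KEY, 0, RECORD_THRESHOLD - 1)
            pipe.ltrim(CACHE_KEY, RECORD_THRESHOLD, -1)
            batch, _ = pipe.execute()
            warehouse_batch(batch)          # one batch INSERT instead of row-by-row writes
            last_flush = time.monotonic()
        else:
            time.sleep(0.2)
```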
Compute data cache: the front-end communication archive data synchronization caching function is executed to synchronize terminal device and collection point archive information; meanwhile, the collected data, model data and the like are cached so that they can be loaded quickly for analysis and statistics.
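A minimal sketch of this archive synchronization, assuming a local Redis instance; the key layout and archive fields are illustrative assumptions rather than the patent's actual schema.

```python
# Sketch of caching device terminal / collection point archive information as Redis hashes.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def sync_archive(terminal_id: str, archive: dict) -> None:
    """Write one terminal's archive record into the cache (initial load or later update)."""
    r.hset(f"archive:terminal:{terminal_id}", mapping=archive)

def load_archive(terminal_id: str) -> dict:
    """Fast lookup used by analysis and statistics instead of querying the RDBMS."""
    return r.hgetall(f"archive:terminal:{terminal_id}")

if __name__ == "__main__":
    sync_archive("T0001", {"point_id": "P42", "protocol": "example", "interval": "15min"})
    print(load_archive("T0001"))
```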
2. Instruction cache
The downlink instructions are cached and an instruction queue mechanism is introduced; the downlink instructions are processed asynchronously, which improves instruction processing efficiency and the stability of mass instruction issuing.
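The instruction queue can be sketched with a Redis list, where producers enqueue downlink instructions and a worker consumes them asynchronously; the queue name and instruction format below are illustrative assumptions.

```python
# Sketch of an asynchronous downlink-instruction queue on a Redis list.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
QUEUE_KEY = "acq:instructions"

def enqueue_instruction(terminal_id: str, command: str) -> None:
    """Producer side: enqueue a downlink instruction and return immediately."""
    r.lpush(QUEUE_KEY, json.dumps({"terminal": terminal_id, "command": command}))

def instruction_worker(send_to_terminal) -> None:
    """Consumer side: pop instructions one by one and issue them asynchronously."""
    while True:
        item = r.brpop(QUEUE_KEY, timeout=1)   # blocks up to 1 s waiting for work
        if item is None:
            continue
        _, payload = item
        instruction = json.loads(payload)
        send_to_terminal(instruction["terminal"], instruction["command"])
```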
3. Storage capacity estimation
Based on the current archive data volume, estimated from the customer archives of 16 million users in 2020, the archive data volume is about 48 GB, and the daily data collection volume is 278 GB.
According to the current acquisition service requirements, cached data is kept for 3 days, so the capacity requirement is 278 GB × 3 + 48 GB = 882 GB.
The server itself needs memory resources at runtime, and the memory occupied by the business application should generally not exceed 80% of the total capacity, so the total memory requirement is about 882 GB × 5/4 ≈ 1102 GB.
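The figures above can be re-derived with a few lines of arithmetic (values taken from this section; the only assumption is that the 80% ceiling is applied as total = requirement / 0.8):

```python
# Re-derivation of the capacity estimate quoted in the text.
ARCHIVE_GB = 48            # archive data volume
DAILY_GB = 278             # daily collection volume
RETENTION_DAYS = 3         # cached data is kept for 3 days
UTILISATION_CEILING = 0.8  # business application uses at most 80% of total memory

cache_need_gb = DAILY_GB * RETENTION_DAYS + ARCHIVE_GB   # 882 GB
total_memory_gb = cache_need_gb / UTILISATION_CEILING    # 1102.5 GB, quoted as ~1102 GB

print(f"cache requirement: {cache_need_gb} GB")
print(f"total memory with headroom: {total_memory_gb:.1f} GB")
```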
4. Storage scheme design
In this project, a distributed cache server is added to the design, which mainly accomplishes the following:
it is responsible for executing the front-end communication archive data synchronization caching function, synchronizing the archive information of the device terminals and collection points to the cache so that it can be loaded quickly during analysis and statistics;
the message data parsed by the acquisition front end is pre-stored in the distributed cache, and a batch dump to the production database is performed in a timed or periodic manner. The Redis caching topology is shown in FIG. 2.
The Redis-based cache nodes form an interconnected graph structure in the network topology. For redundancy, every 2 servers form a master-slave replication set (namely 1 MASTER node and 1 SLAVE node); the MASTER node can both read and write data, while the SLAVE node only provides data reading services.
During initialization, the device archive information is loaded once, and later changed data is written to the MASTER node through on-demand business updates; the collected data is parsed by the acquisition front-end processor and then written to the MASTER node.
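A minimal sketch of this read/write split, assuming illustrative host names for one replication set: the acquisition front-end processor writes to the MASTER node, while analysis reads from the SLAVE replica.

```python
# Sketch of routing writes to the MASTER and reads to the SLAVE of a replication set.
import redis

master = redis.Redis(host="cache-master-1", port=6379, decode_responses=True)
replica = redis.Redis(host="cache-slave-1", port=6379, decode_responses=True)

def write_parsed_message(point_id: str, payload: str) -> None:
    """The acquisition front-end processor writes parsed data to the MASTER."""
    master.rpush(f"acq:pending:{point_id}", payload)

def read_archive(terminal_id: str) -> dict:
    """Analysis and statistics read from the SLAVE to offload the MASTER."""
    return replica.hgetall(f"archive:terminal:{terminal_id}")
```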
The cache servers must meet the memory capacity requirement; the required number of servers is 1102 GB / 512 GB ≈ 2. Considering capacity reserve and master-slave data replication, 4 servers are needed, plus 1 additional control server, for a total of 5 servers.
Distributed caching is an in-memory data management system that reduces the pressure on the relational database, reads data with high performance, dynamically expands nodes, and automatically discovers and switches faulty nodes. Commonly used distributed caches include Redis and Memcached, both of which store data in memory and are NoSQL in-memory databases; a comparative analysis of the two is shown in Table 1.
TABLE 1 Comparative analysis of Redis and Memcached
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (5)
- 1. A processing method for Redis memory management in an acquisition system, characterized by comprising the following steps: adding a distributed cache server to the acquisition system; performing data caching and instruction caching; wherein, based on Redis memory management, the distributed cache server is responsible for executing a front-end communication archive data synchronization caching function, synchronizing the archive information of the device terminals and collection points to the cache, pre-storing the message data parsed by the acquisition front end in the distributed cache, and storing the message data into the production database in batches in a timed manner.
- 2. The processing method for Redis memory management in an acquisition system according to claim 1, wherein the data caching comprises a data warehousing cache and a compute data cache; wherein the data warehousing cache is as follows: the collected data is cached in an in-memory database before warehousing, and the cached records are warehoused in batch mode once their number reaches a threshold; to avoid the situation where the number of cached records does not reach the threshold for a long time and the data is never warehoused, a time threshold is also set; batch warehousing is started as soon as either the record threshold or the time threshold is reached, avoiding the large time overhead of warehousing records one by one; and the compute data cache is as follows: the front-end communication archive data synchronization caching function is executed to synchronize the terminal device and collection point archive information; meanwhile, the collected data and model data are cached so that they can be loaded quickly for analysis and statistics.
- 3. The processing method for Redis memory management in an acquisition system according to claim 1, wherein the instruction caching comprises storage capacity estimation and storage scheme design; wherein the storage capacity estimation is: estimating the capacity requirement according to the current business acquisition requirements, and calculating the total memory capacity as the memory resources required by the server at runtime plus the memory resources occupied by the business applications; and the storage scheme is designed as follows: (1) executing the front-end communication archive data synchronization caching function and synchronizing the archive information of the device terminals and collection points to the cache, so that it can be loaded quickly for analysis and statistics; (2) pre-storing the message data parsed by the acquisition front end in the distributed cache, and dumping it to the production database in batches in a timed or periodic manner.
- 4. The processing method for Redis memory management in an acquisition system according to claim 1, wherein the Redis-based cache nodes form an interconnected graph structure in the network topology and are connected in such a way that every 2 servers form 1 master-slave replication set; the master-slave replication set comprises 1 MASTER node and 1 SLAVE node, wherein the MASTER node provides data read and write services and the SLAVE node provides data reading services.
- 5. The processing method for Redis memory management in an acquisition system according to claim 4, wherein when a Redis cache node is initialized, the device archive information is loaded once, and later changed data is written to the MASTER node through on-demand business updates.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910138994.6A CN109976904A (en) | 2019-02-25 | 2019-02-25 | Processing method of the Redis memory management in acquisition system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910138994.6A CN109976904A (en) | 2019-02-25 | 2019-02-25 | Processing method of the Redis memory management in acquisition system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109976904A true CN109976904A (en) | 2019-07-05 |
Family
ID=67077291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910138994.6A Pending CN109976904A (en) | 2019-02-25 | 2019-02-25 | Processing method of the Redis memory management in acquisition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109976904A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111125132A (en) * | 2019-12-19 | 2020-05-08 | 紫光云(南京)数字技术有限公司 | Data storage system and storage method |
CN111209271A (en) * | 2019-12-25 | 2020-05-29 | 深圳供电局有限公司 | Electric power data complementary acquisition method and device, computer equipment and storage medium |
CN112286767A (en) * | 2020-11-03 | 2021-01-29 | 浪潮云信息技术股份公司 | Redis cache analysis method |
CN112364105A (en) * | 2020-09-16 | 2021-02-12 | 贵州电网有限责任公司 | Collection file management method and system based on Redis |
CN112597172A (en) * | 2021-01-05 | 2021-04-02 | 中国铁塔股份有限公司 | Data writing method, system and storage medium |
CN114390069A (en) * | 2022-01-30 | 2022-04-22 | 青岛海尔科技有限公司 | Data access method, system, equipment and storage medium based on distributed cache |
CN115481158A (en) * | 2022-09-22 | 2022-12-16 | 北京泰策科技有限公司 | Automatic loading and converting method for data distributed cache |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103530335A (en) * | 2013-09-30 | 2014-01-22 | 广东电网公司汕头供电局 | In-stockroom operation method and device of electric power measurement acquisition system |
CN103646111A (en) * | 2013-12-25 | 2014-03-19 | 普元信息技术股份有限公司 | System and method for realizing real-time data association in big data environment |
CN204462736U (en) * | 2015-03-06 | 2015-07-08 | 苏州智电节能科技有限公司 | A kind of real-time dynamic monitoring system being applied to comprehensive energy |
CN105589951A (en) * | 2015-12-18 | 2016-05-18 | 中国科学院计算机网络信息中心 | Distributed type storage method and parallel query method for mass remote-sensing image metadata |
CN106453297A (en) * | 2016-09-30 | 2017-02-22 | 努比亚技术有限公司 | Master and slave time delay detection method, device and system |
CN107689999A (en) * | 2017-09-14 | 2018-02-13 | 北纬通信科技南京有限责任公司 | A kind of full-automatic computational methods of cloud platform and device |
CN108322542A (en) * | 2018-02-12 | 2018-07-24 | 广州市贝聊信息科技有限公司 | Data-updating method, system, device and computer readable storage medium |
CN108829508A (en) * | 2018-03-30 | 2018-11-16 | 北京趣拿信息技术有限公司 | task processing method and device |
CN108961080A (en) * | 2018-06-29 | 2018-12-07 | 渤海人寿保险股份有限公司 | Insurance business distributed approach, device, storage medium and terminal |
CN109299079A (en) * | 2018-09-11 | 2019-02-01 | 南京朝焱智能科技有限公司 | A kind of high-speed data library design method |
CN109327437A (en) * | 2018-09-29 | 2019-02-12 | 深圳市多易得信息技术股份有限公司 | Concurrent websocket business information processing method and server-side |
- 2019-02-25 CN CN201910138994.6A patent/CN109976904A/en active Pending
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103530335A (en) * | 2013-09-30 | 2014-01-22 | 广东电网公司汕头供电局 | In-stockroom operation method and device of electric power measurement acquisition system |
CN103646111A (en) * | 2013-12-25 | 2014-03-19 | 普元信息技术股份有限公司 | System and method for realizing real-time data association in big data environment |
CN204462736U (en) * | 2015-03-06 | 2015-07-08 | 苏州智电节能科技有限公司 | A kind of real-time dynamic monitoring system being applied to comprehensive energy |
CN105589951A (en) * | 2015-12-18 | 2016-05-18 | 中国科学院计算机网络信息中心 | Distributed type storage method and parallel query method for mass remote-sensing image metadata |
CN106453297A (en) * | 2016-09-30 | 2017-02-22 | 努比亚技术有限公司 | Master and slave time delay detection method, device and system |
CN107689999A (en) * | 2017-09-14 | 2018-02-13 | 北纬通信科技南京有限责任公司 | A kind of full-automatic computational methods of cloud platform and device |
CN108322542A (en) * | 2018-02-12 | 2018-07-24 | 广州市贝聊信息科技有限公司 | Data-updating method, system, device and computer readable storage medium |
CN108829508A (en) * | 2018-03-30 | 2018-11-16 | 北京趣拿信息技术有限公司 | task processing method and device |
CN108961080A (en) * | 2018-06-29 | 2018-12-07 | 渤海人寿保险股份有限公司 | Insurance business distributed approach, device, storage medium and terminal |
CN109299079A (en) * | 2018-09-11 | 2019-02-01 | 南京朝焱智能科技有限公司 | A kind of high-speed data library design method |
CN109327437A (en) * | 2018-09-29 | 2019-02-12 | 深圳市多易得信息技术股份有限公司 | Concurrent websocket business information processing method and server-side |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111125132A (en) * | 2019-12-19 | 2020-05-08 | 紫光云(南京)数字技术有限公司 | Data storage system and storage method |
CN111209271A (en) * | 2019-12-25 | 2020-05-29 | 深圳供电局有限公司 | Electric power data complementary acquisition method and device, computer equipment and storage medium |
CN112364105A (en) * | 2020-09-16 | 2021-02-12 | 贵州电网有限责任公司 | Collection file management method and system based on Redis |
CN112286767A (en) * | 2020-11-03 | 2021-01-29 | 浪潮云信息技术股份公司 | Redis cache analysis method |
CN112286767B (en) * | 2020-11-03 | 2023-02-03 | 浪潮云信息技术股份公司 | Redis cache analysis method |
CN112597172A (en) * | 2021-01-05 | 2021-04-02 | 中国铁塔股份有限公司 | Data writing method, system and storage medium |
CN114390069A (en) * | 2022-01-30 | 2022-04-22 | 青岛海尔科技有限公司 | Data access method, system, equipment and storage medium based on distributed cache |
CN114390069B (en) * | 2022-01-30 | 2024-03-22 | 青岛海尔科技有限公司 | Data access method, system, equipment and storage medium based on distributed cache |
CN115481158A (en) * | 2022-09-22 | 2022-12-16 | 北京泰策科技有限公司 | Automatic loading and converting method for data distributed cache |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109976904A (en) | Processing method of the Redis memory management in acquisition system | |
CN102831156B (en) | Distributed transaction processing method on cloud computing platform | |
US10659554B2 (en) | Scalable caching of remote file data in a cluster file system | |
US8799213B2 (en) | Combining capture and apply in a distributed information sharing system | |
CN107168657B (en) | Virtual disk hierarchical cache design method based on distributed block storage | |
US9146934B2 (en) | Reduced disk space standby | |
US7783601B2 (en) | Replicating and sharing data between heterogeneous data systems | |
CN103294710B (en) | A kind of data access method and device | |
CN103595797B (en) | Caching method for distributed storage system | |
CN104361030A (en) | Distributed cache architecture with task distribution function and cache method | |
CN113377868B (en) | Offline storage system based on distributed KV database | |
US20200019474A1 (en) | Consistency recovery method for seamless database duplication | |
CN107888687B (en) | Proxy client storage acceleration method and system based on distributed storage system | |
WO2023159976A1 (en) | Data segmented writing method, data reading method and apparatus | |
CN111984191A (en) | Multi-client caching method and system supporting distributed storage | |
CN110750372B (en) | Log system and log management method based on shared memory | |
CN111159176A (en) | Method and system for storing and reading mass stream data | |
CN110807039A (en) | Data consistency maintenance system and method in cloud computing environment | |
CN110083306A (en) | A kind of distributed objects storage system and storage method | |
US10642750B2 (en) | System and method of a shared memory hash table with notifications and reduced memory utilization | |
CN112463073A (en) | Object storage distributed quota method, system, equipment and storage medium | |
CN104281673A (en) | Cache building system and method for database | |
CN112579528A (en) | Method for efficiently accessing files at server side of embedded network file system | |
CN114706836B (en) | Data life cycle management method based on airborne embedded database | |
CN115167778A (en) | Storage management method, system and server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20190705 |
|
RJ01 | Rejection of invention patent application after publication |