CN104484136B - Method for persistent high-concurrency in-memory data - Google Patents
Method for persistent high-concurrency in-memory data
Info
- Publication number
- CN104484136B (granted publication), CN104484136A, CN201410822558.8A (application)
- Authority
- CN
- China
- Prior art keywords
- data
- layer program
- program
- write
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/70—Software maintenance or management
- G06F8/73—Program documentation
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The present invention relates to a method for persistent high-concurrency in-memory data, particularly suited to situations where data access must be highly concurrent and highly real-time. The method comprises: S1, an application-layer program sends a data read/write request to an access-layer program; S2, the access-layer program classifies and merges the read/write requests; S3, the access-layer program dispatches the read/write requests to a cache-layer program; S4, the cache-layer program determines whether the request reads or writes data; S5, the cache-layer program returns a response packet to the access-layer program and goes to step S7; S6, a data synchronization tool reads the journal file and goes to step S10; S9, the procedure ends. Beneficial effects of the present invention: it provides highly concurrent, highly real-time data access services and reduces the cost of data design and development for application-layer developers; it guarantees the complete safety of the data and strong resilience against data loss; and it maximizes the cache-layer program's capacity for processing concurrent transactions.
Description
Technical field
The present invention relates to the field of network communication technology, and more specifically to a method, based on an in-memory database, for highly concurrent reading and writing of data. It allows developers of mass-scale services to store data quickly and conveniently, meets requirements for highly concurrent and highly real-time access to the data, and frees the application-layer program from handling distributed storage and deployment of the data, thereby improving development efficiency.
Background art
In the development of mass-scale Internet services, designing and developing the data that application-layer programs read and write frequently and in large volume is a problem that software designers and developers must solve. Such data must support highly concurrent, highly real-time access, must remain consistent across a clustered deployment of the application-layer program, and must be protected against the damage caused by abnormal program exits, system reboots and other exceptions. These have become the problems that must be solved in managing the data of mass-scale services.
At present, the prior art generally works as follows:
1. In many existing systems, data is stored in a relational database; when the application-layer program starts, it loads all the data from the database, and when data changes, it sends the changed data to the other servers in the clustered deployment.
2. Data that is read and written frequently is kept in an open-source in-memory database.
The main defects of the prior art are:
1. Storing the data in a traditional relational database: on the one hand, the response speed and processing capacity for data that is read and written frequently and accessed with high concurrency are relatively low; on the other hand, for application-layer programs deployed in a cluster, the cost of synchronizing data between them is relatively high and the consistency of the data cannot be guaranteed.
2. Keeping the data in an open-source in-memory database: on the one hand, because the in-memory database service directly serves the external application-layer programs, the processing logic and network I/O inside the program are constrained and the processing capacity of a single-machine service cannot be maximized; on the other hand, because the data is kept in memory, with only part of it periodically saved to disk, part or all of the data is lost on service restart, system power failure and other abnormal conditions, so the data's resilience against such risks is low.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for persistent high-concurrency in-memory data that addresses the relatively low response speed and processing capacity of existing systems for highly concurrent, large-volume data access.
The technical solution adopted by the present invention to solve the above technical problem is as follows: a method for persistent high-concurrency in-memory data, the method comprising:
S1, an application-layer program sends a data read/write request to an access-layer program;
S2, the access-layer program classifies and merges the read/write requests;
S3, the access-layer program dispatches the read/write requests to a cache-layer program;
S4, the cache-layer program determines whether the request reads or writes data; if it reads data, go to step S41; if it writes data, go to step S42;
S41, the cache-layer program reads the cached data and goes to step S5;
S42, the cache-layer program modifies the cached data and goes to step S5; at the same time, it appends a journal record of the written data to the journal file and goes to step S6;
S5, the cache-layer program returns a response packet to the access-layer program and goes to step S7;
S6, a data synchronization tool reads the journal file and goes to step S10;
S7, the access-layer program splits apart the merged response packets;
S8, the access-layer program returns the read/write data response packets to the application-layer program and goes to step S9;
S9, the procedure ends;
S10, the data synchronization tool assembles the journal records into packets;
S11, the standby host of the cache layer synchronizes the journal file, and goes to step S9.
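To make the sequence above concrete, the following is a minimal Go sketch of how a cache-layer program might implement the branch of step S4: a read returns the cached value (steps S41 and S5), while a write modifies the cached data, emits a journal record for the synchronization path (steps S42, S5 and S6), and then answers the access layer. Every identifier here (Request, CacheLayer, the journal channel, and so on) is an illustrative assumption, not something specified by the patent.

```go
package cache

import (
	"fmt"
	"sync"
)

// Request is a hypothetical read/write request forwarded by the access layer (step S3).
type Request struct {
	Key   string
	Value string
	Write bool // true = write data (S42), false = read data (S41)
}

// Response is the packet returned to the access layer in step S5.
type Response struct {
	Key   string
	Value string
	OK    bool
}

// CacheLayer holds the in-memory data and the journal consumed by the
// data synchronization tool and the standby host (steps S6, S10, S11).
type CacheLayer struct {
	mu      sync.RWMutex
	data    map[string]string
	journal chan string
}

func NewCacheLayer() *CacheLayer {
	return &CacheLayer{data: make(map[string]string), journal: make(chan string, 1024)}
}

// Handle implements the branch of step S4.
func (c *CacheLayer) Handle(req Request) Response {
	if req.Write {
		// S42: modify the cached data and append a journal record of the write.
		c.mu.Lock()
		c.data[req.Key] = req.Value
		c.mu.Unlock()
		c.journal <- fmt.Sprintf("SET %s %s", req.Key, req.Value)
		return Response{Key: req.Key, Value: req.Value, OK: true}
	}
	// S41: read the cached data.
	c.mu.RLock()
	v, ok := c.data[req.Key]
	c.mu.RUnlock()
	// S5: response packet back to the access layer.
	return Response{Key: req.Key, Value: v, OK: ok}
}
```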
In the method for persistent high-concurrency in-memory data of the present invention, the access-layer program of step S2 classifies and merges the read/write requests by first classifying them according to the cache-layer location of the requested data item, then, according to the request type, merging the request transactions that each address a single record, and finally forwarding the merged requests to the corresponding cache-layer program.
In the method for persistent high-concurrency in-memory data of the present invention, the cache layer comprises a primary storage layer and a secondary storage layer for data disaster tolerance.
In the method for persistent high-concurrency in-memory data of the present invention, the cache layer manages the data and periodically dumps all the data in the cache layer's memory to disk.
In the method for persistent high-concurrency in-memory data of the present invention, while the cache-layer program periodically dumps all the data in memory to disk, the cache-layer program records each write request from the user.
In the method for persistent high-concurrency in-memory data of the present invention, the memory of the cache layer comprises at least one Cluster, a Cluster comprises at least one Block, and a Block comprises at least one Chunk; each time memory is allocated, a Chunk of suitable size is selected according to the pre-allocated memory size.
In the method for persistent high-concurrency in-memory data of the present invention, when releasing memory the cache layer first returns it to a memory management module, and the memory management module decides whether the cache layer actually releases the memory.
In the method for persistent high-concurrency in-memory data of the present invention, each time data is written, the data is synchronized to the standby host of the cache layer.
In the method for persistent high-concurrency in-memory data of the present invention, each time data is written, a journal record of the data is appended to the journal file.
Implementing the method for persistent high-concurrency in-memory data of the present invention has the following beneficial effects: it provides highly concurrent, highly real-time data access services and reduces the cost of data design and development for application-layer developers; it guarantees the complete safety of the data and strong resilience against data loss; and it maximizes the cache-layer program's capacity for processing concurrent transactions.
Brief description of the drawings
Fig. 1 is a schematic diagram of a common in-memory database service model in the prior art;
Fig. 2 is a sequence chart of the read/write data operation flow of a preferred embodiment of the method for persistent high-concurrency in-memory data of the present invention;
Fig. 3 is a data storage schematic diagram of the preferred embodiment of the method of the present invention;
Fig. 4 is a sequence chart of the flow in which an application-layer program of the preferred embodiment of the method of the present invention reads data;
Fig. 5 is a sequence chart of the flow in which an application-layer program of the preferred embodiment of the method of the present invention writes data.
Detailed description of the embodiments
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.
As shown in Fig. 2, the sequence chart of the read/write data operation flow of the preferred embodiment of the method for persistent high-concurrency in-memory data of the present invention, the method starts at step S0. After step S0 it proceeds to step S1: the application-layer program sends a data read/write request to the access-layer program. Then, in step S2, the access-layer program classifies and merges the read/write requests. Then, in step S3, the access-layer program dispatches the read/write requests to the cache-layer program. Then, in step S4, the cache-layer program determines whether the request reads or writes data: if it reads data, go to step S41; if it writes data, go to step S42. In step S41 the cache-layer program reads the cached data and goes to step S5. In step S42 the cache-layer program modifies the cached data and goes to step S5; at the same time it appends a journal record of the written data to the journal file and goes to step S6. In step S5 the cache-layer program returns a response packet to the access-layer program and goes to step S7. In step S6 the data synchronization tool reads the journal file and goes to step S10. In step S7 the access-layer program splits apart the merged response packets. Then, in step S8, the access-layer program returns the read/write data response packets to the application-layer program and goes to step S9. In step S10 the data synchronization tool assembles the journal records into packets. Then, in step S11, the standby host of the cache layer synchronizes the journal file and goes to step S9. Finally the method ends at step S9.
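Steps S6, S10 and S11 describe an asynchronous path in which a data synchronization tool reads the journal file, assembles the journal records into packets, and lets the standby host of the cache layer apply them. A rough Go sketch of such a tool is shown below; the packet size, record format and all identifiers are illustrative assumptions rather than details taken from the patent.

```go
package synctool

import (
	"bufio"
	"os"
)

// Packet is a batch of journal records shipped to the standby host (step S10).
type Packet struct {
	Records []string
}

// ReadJournal reads the journal file written by the cache-layer program (step S6)
// and groups the records into packets of at most batchSize entries.
func ReadJournal(path string, batchSize int) ([]Packet, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var packets []Packet
	current := Packet{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		current.Records = append(current.Records, scanner.Text())
		if len(current.Records) == batchSize {
			packets = append(packets, current)
			current = Packet{}
		}
	}
	if len(current.Records) > 0 {
		packets = append(packets, current)
	}
	return packets, scanner.Err()
}

// ShipToStandby stands in for step S11, in which the standby host of the cache
// layer applies the packets to bring its copy of the journal up to date.
func ShipToStandby(packets []Packet, apply func(Packet) error) error {
	for _, p := range packets {
		if err := apply(p); err != nil {
			return err
		}
	}
	return nil
}
```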
Further, the access-layer program of step S2 classifies and merges the read/write requests by first classifying them according to the cache-layer location of the requested data item, then, according to the request type, merging the request transactions that each address a single record, and finally forwarding the merged requests to the corresponding cache-layer program.
Further, the cache layer comprises a primary storage layer and a secondary storage layer for data disaster tolerance.
Further, the cache layer also comprises a memory management module for managing the in-memory data.
Further, the cache layer manages the data and periodically dumps all the data in the cache layer's memory to disk.
Further, while the cache-layer program periodically dumps all the data in memory to disk, the cache-layer program records each write request from the user.
Further, the memory of the cache layer comprises at least one Cluster, a Cluster comprises at least one Block, and a Block comprises at least one Chunk; each time memory is allocated, a Chunk of suitable size is selected according to the pre-allocated memory size.
Further, when releasing memory the cache layer first returns it to the memory management module, and the memory management module decides whether the cache layer actually releases the memory.
The access-layer program classifies and merges the requests of the application-layer program and forwards them to the relevant cache-layer program; the cache-layer program manages the in-memory data, guarantees the safety of the data, and handles the data requests from the access-layer program.
1. Access layer
The access-layer program directly serves the application-layer program. Its main functions include:
distributing the application layer's requests to the corresponding cache-layer programs, realizing distributed deployment of the data, so that the application-layer developer does not need to design distributed storage for the data;
merging the application layer's request transactions, reducing the network I/O overhead of the back-end cache-layer program, so as to increase the cache-layer program's capacity for processing concurrent transactions.
For a request from the application layer, the access layer first classifies the request according to the cache-layer location of the requested data item, then, according to the request type, merges the request transactions that each address a single record, and finally forwards the merged request to the corresponding cache-layer program.
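As a minimal illustration of this classification and merging, the following Go sketch buckets single-record requests by the cache-layer node that owns the requested key and merges them, per node, by request type. The routing hash and all type names are assumptions chosen for the example, since the patent does not prescribe a particular routing function.

```go
package access

// Request is a hypothetical single-record read or write request from the application layer.
type Request struct {
	Key   string
	Write bool
}

// Batch is one merged request transaction forwarded to a single cache-layer program.
type Batch struct {
	Node   string // address of the cache-layer program that owns these keys
	Reads  []Request
	Writes []Request
}

// routeKey maps a key to the cache-layer node where the data item is stored.
// This simple hash is a placeholder; the patent only requires that classification
// follow the cache-layer location of the requested data item. Assumes nodes is non-empty.
func routeKey(key string, nodes []string) string {
	var h uint32
	for _, c := range key {
		h = h*31 + uint32(c)
	}
	return nodes[h%uint32(len(nodes))]
}

// ClassifyAndMerge groups requests by cache-layer node, then by request type,
// so that many single-record transactions become one batch per node.
func ClassifyAndMerge(reqs []Request, nodes []string) []Batch {
	byNode := make(map[string]*Batch)
	for _, r := range reqs {
		node := routeKey(r.Key, nodes)
		b, ok := byNode[node]
		if !ok {
			b = &Batch{Node: node}
			byNode[node] = b
		}
		if r.Write {
			b.Writes = append(b.Writes, r)
		} else {
			b.Reads = append(b.Reads, r)
		}
	}
	out := make([]Batch, 0, len(byNode))
	for _, b := range byNode {
		out = append(out, *b)
	}
	return out
}
```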
2. Cache layer
The cache-layer program manages the data, periodically saves all the data in memory to disk, and records each write operation from the user. The cache-layer program is deployed as a primary host and a standby host to provide data disaster tolerance. Its main functions include:
1) Business data management: management related to data storage, data interfaces, data expansion, and so on.
2) Memory management. The memory pool comprises several Clusters, a Cluster comprises several Blocks, and a Block comprises several Chunks. Each time memory is allocated, a Chunk of suitable size is selected according to the pre-allocated memory size; when memory is released, it is first returned to the memory management module, which decides whether to actually release it (see the sketch after this list).
3) Data journal. Every write operation appends a journal record of the written data to the journal file.
4) Data synchronization. Every write operation is synchronized to the standby host.
5) Data migration. When the data capacity or concurrency of a single machine of the primary service no longer meets the demand of the application service, the data is stored in a distributed manner and additional sets of primary/standby servers are deployed accordingly. During migration, the data to be migrated is first moved from the current server to the newly deployed server while the write journal is synchronized to the new server; after the migration completes, the access-layer service program is notified to modify its scheduling policy, thereby ensuring seamless capacity expansion of the storage.
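The memory management of item 2) above can be pictured with the simplified Go sketch below: chunks are pre-allocated in a few size classes inside blocks and clusters, allocation picks the smallest chunk that fits, and a release only hands the chunk back to the memory manager, which decides whether to keep it for reuse. The size classes, the always-keep release policy and the identifiers are assumptions made for illustration; the patent only fixes the Cluster/Block/Chunk structure and the role of the memory management module.

```go
package mempool

import "errors"

// Chunk is a fixed-size piece of memory handed out by the pool.
type Chunk struct {
	buf []byte
}

// Block keeps free chunks of one size class.
type Block struct {
	chunkSize int
	free      []*Chunk
}

// Cluster groups blocks of different chunk sizes; a pool may hold several clusters.
type Cluster struct {
	blocks []*Block // sorted by ascending chunk size
}

// MemoryManager decides whether released memory is really freed or kept for reuse.
type MemoryManager struct {
	cluster *Cluster
}

// NewMemoryManager pre-allocates a few illustrative size classes (64 B, 256 B, 1 KiB).
func NewMemoryManager(chunksPerBlock int) *MemoryManager {
	sizes := []int{64, 256, 1024}
	c := &Cluster{}
	for _, s := range sizes {
		b := &Block{chunkSize: s}
		for i := 0; i < chunksPerBlock; i++ {
			b.free = append(b.free, &Chunk{buf: make([]byte, s)})
		}
		c.blocks = append(c.blocks, b)
	}
	return &MemoryManager{cluster: c}
}

// Allocate picks the smallest chunk size class that fits the requested size.
func (m *MemoryManager) Allocate(size int) (*Chunk, error) {
	for _, b := range m.cluster.blocks {
		if b.chunkSize >= size && len(b.free) > 0 {
			ch := b.free[len(b.free)-1]
			b.free = b.free[:len(b.free)-1]
			return ch, nil
		}
	}
	return nil, errors.New("no suitable chunk available")
}

// Release returns the chunk to the manager; here the manager always keeps it
// for reuse, standing in for the "decide whether to really free" policy.
func (m *MemoryManager) Release(ch *Chunk) {
	for _, b := range m.cluster.blocks {
		if b.chunkSize == cap(ch.buf) {
			b.free = append(b.free, ch)
			return
		}
	}
}
```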
Further, each time data is written, the data is synchronized to the standby host of the cache layer.
Further, each time data is written, a journal record of the data is appended to the journal file.
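The two statements above describe a journal-plus-replication write path. A minimal Go sketch of that path, under the assumption of a simple line-oriented journal format and an abstract standby transport, might look as follows; neither the record layout nor the Standby interface comes from the patent.

```go
package journal

import (
	"fmt"
	"os"
	"time"
)

// Standby is whatever transport ships journal records to the standby cache-layer host.
type Standby interface {
	Sync(record string) error
}

// Writer appends one record per write operation to the journal file and
// forwards the same record to the standby host.
type Writer struct {
	file    *os.File
	standby Standby
}

func NewWriter(path string, standby Standby) (*Writer, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	return &Writer{file: f, standby: standby}, nil
}

// OnWrite is called after the cached data has been modified (step S42).
func (w *Writer) OnWrite(key, value string) error {
	record := fmt.Sprintf("%d SET %s %s\n", time.Now().UnixNano(), key, value)
	if _, err := w.file.WriteString(record); err != nil {
		return err
	}
	// Replicate to the standby host; in the patent a separate synchronization
	// tool reads the journal file and assembles the records into packets.
	return w.standby.Sync(record)
}
```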
Implementing the method for persistent high-concurrency in-memory data of the present invention has the following beneficial effects: it provides highly concurrent, highly real-time data access services and reduces the cost of data design and development for application-layer developers; it guarantees the complete safety of the data and strong resilience against data loss; and it maximizes the cache-layer program's capacity for processing concurrent transactions.
As shown in Fig. 3, the data storage schematic diagram of the preferred embodiment of the method for persistent high-concurrency in-memory data of the present invention: in the cache layer, the business data management handles the management of the storage, the capacity of the data, and so on; the data stored under business data management is managed in tiers.
As shown in Fig. 4, in the sequence chart of the flow in which the application-layer program of the preferred embodiment of the method for persistent high-concurrency in-memory data of the present invention reads data, the read operation starts at step S100. After step S100 it proceeds to step S110: the application-layer program sends a read request to the access-layer program. Then, in step S120, the access-layer program classifies and merges the read requests. Then, in step S130, the access-layer program dispatches the read requests to the cache-layer program. Then, in step S140, the cache-layer program reads the cached data. Then, in step S150, the cache-layer program returns a response packet to the access-layer program. Then, in step S160, the access-layer program splits apart the merged response packets. Then, in step S170, the access-layer program returns the read-data response packets to the application-layer program. Finally the flow ends at step S180.
With the data-reading flow of the preferred embodiment of the method for persistent high-concurrency in-memory data of the present invention, developers of mass-scale services can store data quickly and conveniently, access to the read data meets the requirements of high concurrency and high real-time performance, and the application-layer program does not need to handle distributed storage and deployment of the read data, which improves development efficiency.
As shown in Fig. 5, in the sequence chart of the flow in which the application-layer program of the preferred embodiment of the method for persistent high-concurrency in-memory data of the present invention writes data, the write operation starts at step S200. After step S200 it proceeds to step S210: the application-layer program sends a write request to the access-layer program. Then, in step S220, the access-layer program classifies and merges the write requests. Then, in step S230, the access-layer program dispatches the write requests to the cache-layer program. Then, in step S240, the cache-layer program modifies the cached data. Then, in step S250, the cache-layer program returns a response packet to the access-layer program. Then, in step S260, the access-layer program splits apart the merged response packets. Then, in step S270, the access-layer program returns the read/write data response packets to the application-layer program and goes to step S280. In parallel with step S240, step S290 is performed: the cache-layer program appends a journal record of the written data to the journal file. Then, in step S300, the data synchronization tool reads the journal file. Then, in step S310, the data synchronization tool assembles the journal records into packets. Then, in step S320, the standby host of the cache layer synchronizes the journal file and goes to step S280. Finally the flow ends at step S280.
With the data-writing flow of the preferred embodiment of the method for persistent high-concurrency in-memory data of the present invention, developers of mass-scale services can store data quickly and conveniently, access to the written data meets the requirements of high concurrency and high real-time performance, and the application-layer program does not need to handle distributed storage and deployment of the written data, which improves development efficiency.
Compared with the prior art, the advantages of the method for persistent high-concurrency in-memory data of the present invention are: it provides highly concurrent, highly real-time data access services and reduces the cost of data design and development for application-layer developers; it guarantees the complete safety of the data and strong resilience against data loss; and it maximizes the cache-layer program's capacity for processing concurrent transactions.
The foregoing is merely an embodiment of the present invention and is not intended to limit the scope of the present invention. Any equivalent structural transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (8)
1. A method for persistent high-concurrency in-memory data, characterized in that the method comprises:
S1, an application-layer program sends a data read/write request to an access-layer program;
S2, the access-layer program classifies and merges the read/write requests;
S3, the access-layer program dispatches the read/write requests to a cache-layer program;
S4, the cache-layer program determines whether the request reads or writes data; if it reads data, go to step S41; if it writes data, go to step S42;
S41, the cache-layer program reads the cached data and goes to step S5;
S42, the cache-layer program modifies the cached data and goes to step S5; at the same time, the cache-layer program appends a journal record of the written data to the journal file and goes to step S6;
S5, the cache-layer program returns a response packet to the access-layer program and goes to step S7;
S6, a data synchronization tool reads the journal file and goes to step S10;
S7, the access-layer program splits apart the merged response packets;
S8, the access-layer program returns the read/write data response packets to the application-layer program and goes to step S9;
S9, the procedure ends;
S10, the data synchronization tool assembles the journal records into packets;
S11, the standby host of the cache layer synchronizes the journal file and goes to step S9.
2. The method for persistent high-concurrency in-memory data according to claim 1, characterized in that the access-layer program of step S2 classifies and merges the read/write requests by first classifying them according to the cache-layer location of the requested data item, then, according to the request type, merging the request transactions that each address a single record, and finally forwarding the merged requests to the corresponding cache-layer program.
3. The method for persistent high-concurrency in-memory data according to claim 1 or 2, characterized in that the cache layer comprises a primary storage layer and a secondary storage layer for data disaster tolerance.
4. The method for persistent high-concurrency in-memory data according to claim 1, characterized in that the cache layer manages the data and periodically dumps all the data in the cache layer's memory to disk.
5. The method for persistent high-concurrency in-memory data according to claim 1 or 4, characterized in that, while the cache-layer program periodically dumps all the data in memory to disk, the cache-layer program records each write request from the user.
6. The method for persistent high-concurrency in-memory data according to claim 1, characterized in that the memory of the cache layer comprises at least one Cluster, a Cluster comprises at least one Block, and a Block comprises at least one Chunk; each time memory is allocated, a Chunk of suitable size is selected according to the pre-allocated memory size.
7. The method for persistent high-concurrency in-memory data according to claim 1 or 6, characterized in that when releasing memory the cache layer first returns it to a memory management module, and the memory management module decides whether the cache layer actually releases the memory.
8. The method for persistent high-concurrency in-memory data according to claim 1, characterized in that each time data is written, the data is synchronized to the standby host of the cache layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410822558.8A CN104484136B (en) | 2014-12-25 | 2014-12-25 | Method for persistent high-concurrency in-memory data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104484136A CN104484136A (en) | 2015-04-01 |
CN104484136B true CN104484136B (en) | 2017-09-29 |
Family
ID=52758684
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410822558.8A Active CN104484136B (en) | 2014-12-25 | 2014-12-25 | Method for persistent high-concurrency in-memory data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104484136B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107766220A (en) * | 2017-10-30 | 2018-03-06 | 成都交大光芒科技股份有限公司 | Vehicle-mounted overhead contact line state-detection monitoring device detects data network share method |
CN110597904B (en) * | 2018-05-25 | 2023-11-24 | 海能达通信股份有限公司 | Data synchronization method, standby machine and host machine |
CN108718285B (en) * | 2018-06-15 | 2022-06-03 | 北京奇艺世纪科技有限公司 | Flow control method and device of cloud computing cluster and server |
CN109670975B (en) * | 2018-12-17 | 2021-02-05 | 泰康保险集团股份有限公司 | Method, medium, and electronic device for generating a single number in a computer system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102262680A (en) * | 2011-08-18 | 2011-11-30 | 北京新媒传信科技有限公司 | Distributed database proxy system based on massive data access requirement |
CN103580891A (en) * | 2012-07-27 | 2014-02-12 | 腾讯科技(深圳)有限公司 | Data synchronization method and system and servers |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020169827A1 (en) * | 2001-01-29 | 2002-11-14 | Ulrich Thomas R. | Hot adding file system processors |
CN102129469B (en) * | 2011-03-23 | 2014-06-04 | 华中科技大学 | Virtual experiment-oriented unstructured data accessing method |
US8706834B2 (en) * | 2011-06-30 | 2014-04-22 | Amazon Technologies, Inc. | Methods and apparatus for remotely updating executing processes |
CN103793291B (en) * | 2012-11-01 | 2017-04-19 | 华为技术有限公司 | Distributed data copying method and device |
CN103853671B (en) * | 2012-12-07 | 2018-03-02 | 北京百度网讯科技有限公司 | A kind of data write-in control method and device |
CN103778071A (en) * | 2014-01-20 | 2014-05-07 | 华为技术有限公司 | Cache space distribution method and device |
- 2014-12-25: CN application CN201410822558.8A granted as CN104484136B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN104484136A (en) | 2015-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Van Renen et al. | Managing non-volatile memory in database systems | |
CN103885728B (en) | A kind of disk buffering system based on solid-state disk | |
CN105205014B (en) | A kind of date storage method and device | |
CN108804031A (en) | Best titime is searched | |
CN110226157A (en) | Dynamic memory for reducing row buffering conflict remaps | |
US20150127691A1 (en) | Efficient implementations for mapreduce systems | |
CN105224444B (en) | Log generation method and device | |
US20180107601A1 (en) | Cache architecture and algorithms for hybrid object storage devices | |
CN103558992A (en) | Off-heap direct-memory data stores, methods of creating and/or managing off-heap direct-memory data stores, and/or systems including off-heap direct-memory data store | |
CN104484136B (en) | A kind of method of sustainable high concurrent internal storage data | |
US10310904B2 (en) | Distributed technique for allocating long-lived jobs among worker processes | |
CN104580437A (en) | Cloud storage client and high-efficiency data access method thereof | |
CN107003814A (en) | Effective metadata in storage system | |
CN101375241A (en) | Efficient data management in a cluster file system | |
CN109800185A (en) | A kind of data cache method in data-storage system | |
CN113377868A (en) | Offline storage system based on distributed KV database | |
CN109542907A (en) | Database caches construction method, device, computer equipment and storage medium | |
CN107888687B (en) | Proxy client storage acceleration method and system based on distributed storage system | |
CN105354046B (en) | Database update processing method and system based on shared disk | |
CN103218305B (en) | The distribution method of memory space | |
CN104572505A (en) | System and method for ensuring eventual consistency of mass data caches | |
CN105915626B (en) | A kind of data copy initial placement method towards cloud storage | |
CN112334891A (en) | Centralized storage for search servers | |
CN108132759A (en) | A kind of method and apparatus that data are managed in file system | |
CN113448897B (en) | Optimization method suitable for pure user mode far-end direct memory access |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |