CN108614782B - Cache access method for data processing system - Google Patents
Cache access method for data processing system
- Publication number
- CN108614782B CN201810404294.2A
- Authority
- CN
- China
- Prior art keywords
- cpu core
- data
- cache
- counter
- value
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
Abstract
There is provided a cache access method for a data processing system comprising a first CPU core, a second CPU core and a cache. The method comprises: a first operation in which the first CPU core requests to read first data from the cache, the cache provides the first data to the first CPU core, and the cache updates a state associated with the first data to indicate that the first data is read by the first CPU core; and a second operation in which, when the first data is updated by the second CPU core, the state associated with the first data is queried in the cache, the second CPU core updates the first data to the cache based on the query result, the cache provides the updated first data to the first CPU core, and the cache updates the state associated with the first data to indicate that the first data is updated by the second CPU core.
Description
Technical Field
The present application relates to cache access methods. In particular, the present application relates to a cache coherency handling method in a shared cache multi-core processor system.
Background
In a data processing system with multiple CPU cores, each CPU core has its own exclusive cache, typically the L1 cache. The CPU cores also share a larger cache, typically referred to as the L2 cache, which all CPU cores may access.
In such a multi-core CPU architecture, if one CPU core updates data in the shared cache, the other CPU cores accessing that data need to know that it has been updated. In the prior art, a directory is provided in the shared cache to record which CPU cores have copied which data.
However, in the prior art, each update a CPU core makes to its local cache must be propagated to the shared cache, as well as to the local caches of the other CPU cores that use the data. Moreover, the directory itself occupies considerable storage space. As the number of CPU cores increases, the memory occupied by a conventional directory structure grows rapidly with it, increasing the overhead of cache coherency processing and also significantly increasing power consumption.
Disclosure of Invention
According to an embodiment of the present application, a cache access method for a data processing system is provided.
According to a first aspect of the present application, there is provided a cache access method for a data processing system, the data processing system comprising a first CPU core, a second CPU core and a cache, the method comprising: a first operation in which the first CPU core requests to read first data from the cache, the cache provides the first data to the first CPU core, and the cache updates a state associated with the first data to indicate that the first data is read by the first CPU core; and a second operation in which, when the first data is updated by the second CPU core, the state associated with the first data is queried in the cache, the second CPU core updates the first data to the cache based on the query result, the cache provides the updated first data to the first CPU core, and the cache updates the state associated with the first data to indicate that the first data is updated by the second CPU core.
According to a second aspect of the present application, there is provided a first cache access method for a data processing system according to the second aspect of the present application, the data processing system comprising a first CPU core, a second CPU core and a cache, the first CPU core comprising a first counter and the second CPU core comprising a second counter, the method comprising: a first operation in which the first CPU core requests to read first data from the cache, the cache provides the first data to the first CPU core, and the first CPU core instructs the cache to record a first count value associated with the first data, the first count value being greater than the value of the first counter when the first operation is performed; the cache updates a state associated with the first data to indicate that the first data is read by the first CPU core; when the second CPU core requests to update the first data, the state associated with the first data is queried in the cache, the first count value is read based on the query result, and a second value of the second counter of the second CPU core is set, the second value being greater than the first count value; the cache provides the first data to the second CPU core, the cache updates the state associated with the first data to indicate that the first data is updated by the second CPU core, and the second CPU core saves a copy of the first data and updates the copy; the first CPU core executes predetermined processing and increments the first counter accordingly; the second CPU core executes predetermined processing and increments the second counter accordingly; and a second operation in which the first CPU core requests to read the first data from the cache, the first CPU core queries the state associated with the first data in the cache, the state indicating that the first data is updated by the second CPU core, the cache requests the value of the second counter from the second CPU core, and the cache requests the second CPU core to update the first data to the cache based on the value of the second counter being greater than the first count value; the cache provides the updated first data to the first CPU core, and the first CPU core instructs the cache to set the first count value to a third value that is greater than both the value of the second counter and the value of the first counter when the second operation is performed.
According to the first cache access method for a data processing system of the second aspect of the present application, there is provided a second cache access method for a data processing system according to the second aspect of the present application, further comprising: the first CPU core increments the first counter according to a clock, and the second CPU core increments the second counter according to the clock.
According to the first cache access method for a data processing system of the second aspect of the present application, there is provided a third cache access method for a data processing system according to the second aspect of the present application, further comprising: the first CPU core increments the first counter in accordance with executed instructions, and the second CPU core increments the second counter in accordance with executed instructions.
According to the first to third cache access methods for a data processing system of the second aspect of the present application, there is provided a fourth cache access method for a data processing system according to the second aspect of the present application, wherein the data processing system further comprises a third CPU core, the third CPU core comprising a third counter, the method further comprising: a third operation in which the third CPU core requests to read the first data from the cache, the third CPU core queries the state associated with the first data in the cache, and, if the state indicates that the first data is read by the first CPU core, the cache provides the first data to the third CPU core, and the third CPU core instructs the cache to record a third count value associated with the first data, the third count value being greater than the value of the third counter when the third operation is performed; the cache updates the state associated with the first data to indicate that the first data is read by the third CPU core.
According to the first to third cache access methods for a data processing system of the second aspect of the present application, there is provided a fifth cache access method for a data processing system according to the second aspect of the present application, wherein the data processing system further comprises a third CPU core, the third CPU core comprising a third counter, the method further comprising: a fourth operation in which the third CPU core requests to read the first data from the cache, the third CPU core queries the state associated with the first data in the cache, and, if the state indicates that the first data is updated by the second CPU core, the cache requests the second CPU core to update the first data to the cache based on the value of the second counter being greater than the first count value; the cache provides the updated first data to the third CPU core, and the third CPU core instructs the cache to set the first count value to a fourth value that is greater than both the value of the second counter and the value of the third counter when the fourth operation is performed.
Drawings
FIG. 1 is a block diagram of a data processing system implemented in accordance with the present application;
FIG. 2 is a schematic diagram of a cache entry structure according to an embodiment of the present application; and
FIG. 3 is a flow chart of a cache access method according to an embodiment of the present application.
Detailed Description
FIG. 1 is a block diagram of a data processing system implemented in accordance with the present application. In the embodiment of FIG. 1, the data processing system includes four CPU cores: CPU core 1, CPU core 2, CPU core 3, and CPU core 4. Each CPU core includes a counter: the CPU core 1 comprises a counter 1, the CPU core 2 comprises a counter 2, the CPU core 3 comprises a counter 3, and the CPU core 4 comprises a counter 4. The data processing system of FIG. 1 also includes a shared cache, which all four CPU cores can access.
In one embodiment, the counters in the CPU core may be set by a program running in the CPU core. The value of the counter indicates the progress of program execution. In another embodiment, a counter in the CPU core indicates the time of program execution in the CPU core or a clock count of the CPU core.
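As a rough illustration, the three counter behaviors just described can be modeled as follows. This is a minimal sketch under our own naming; `CoreCounter`, `tick`, `retire`, and `set_phase` are illustrative names, not terms from the patent.

```python
class CoreCounter:
    """Illustrative model of a per-core counter: its value indicates
    the progress of program execution on that core."""

    def __init__(self) -> None:
        self.value = 0

    def tick(self, cycles: int = 1) -> None:
        # Clock-driven progress: advance with elapsed clock cycles.
        self.value += cycles

    def retire(self, instructions: int = 1) -> None:
        # Instruction-driven progress: advance with executed instructions.
        self.value += instructions

    def set_phase(self, phase: int) -> None:
        # Program-set progress: the running program sets its own phase.
        self.value = phase
```

Any of the three update styles works with the scheme described below; what matters is only that the counter increases monotonically during normal execution.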
The shared cache of FIG. 1 includes multiple entries. A schematic structural diagram of a cache entry according to an embodiment of the present application is shown in FIG. 2. Each entry includes a plurality of regions. In one example, an entry includes a tag area, a data area, a flag area, a status area, an owner area, and a count area. The tag, data, and flag regions are conventional structures of cache entries: the tag area stores the address of the cached data item in the main memory, the data area records the cached data itself, and the flag area records whether the cache entry is valid. In a cache entry according to an embodiment of the present application, a status area is further provided for recording the usage state of the cache entry, including but not limited to which CPU core or cores have read the cache entry, which CPU core has updated the cache entry, whether the cache entry is not yet used, and the like.
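The entry layout just described might be modeled as a plain record. The field names below (`tag`, `data`, `valid`, `status`, `owner`, `count`) are our own shorthand for the six regions, not identifiers taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional, Set, Tuple

@dataclass
class CacheEntry:
    tag: int = 0                 # main-memory address of the cached item (tag area)
    data: str = ""               # the cached data itself (data area)
    valid: bool = False          # whether the entry holds valid data (flag area)
    status: Set[Tuple[str, int]] = field(default_factory=set)  # usage state, e.g. {("read_by", 1)}
    owner: Optional[int] = None  # core that last updated the data (owner area)
    count: int = 0               # count value set by a reading core (count area)
```

For example, the entry in the third line of FIG. 2 would be `CacheEntry(tag=0x0006, data="GHI", valid=True, status={("read_by", 1)})`.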
The count area of the cache entry in FIG. 2 stores a count value. When a CPU core (for example, CPU core 1) reads a cached entry, it can notify the other CPU cores of the time up to which it wishes to use the data corresponding to the entry, or the program phase up to which it wishes to use that data, by setting the count area of the entry it reads. Within this time or phase, even if another CPU core (for example, CPU core 2) updates the data corresponding to the entry, CPU core 1 does not need to obtain the updated data. In this way, according to the embodiment of the application, the demand for data consistency processing in the shared cache system is reduced, storage space is saved, and the efficiency of cache coherency processing is improved.
FIG. 3 is a flow chart of a cache access method according to an embodiment of the present application. In the embodiment according to the present application, after system initialization, the count value of the counter in each CPU core is initialized (e.g., to 0), and each entry in the shared cache is initialized. When the CPU core 1 wishes to read data, the data is fetched from the main memory, filled into a shared cache entry, and also sent to the CPU core 1. By way of example, the data is filled into the cache entry indicated by the third line of FIG. 2: the address of the data in the main memory is 0x0006 (see FIG. 2, tag area), the content of the data is GHI (data area), the cache entry is valid (flag area), and the data is read by the CPU core 1 (status area).
In the shared cache entry, in addition to recording the address of the data in the main memory and the data itself, and setting the flag area to valid, the entry also records that the data is read by the CPU core 1. When reading the data, the CPU core 1 also supplies a count value, which is recorded in the count area of the cache entry. The CPU core 1 derives the count value from the value of its own counter 1. In the embodiment according to the present application, the count value provided by the CPU core 1 is greater than the current value of the counter 1 and indicates the stage up to which the program on the CPU core 1 wishes to use the data. For example, if the CPU core 1 wishes to use the data for 100 clock cycles (without caring whether other CPU cores update the data in the meantime), the CPU core 1 provides a count value equal to the current value of the counter 1 plus 100. In another example, the current execution phase of the program on the CPU core 1 is 5 (represented by the current value of the counter 1), and the program wishes to use the data within 100 phases of future program execution; the CPU core 1 provides a count value of 100. Accordingly, 100 is written into the count area of the cache entry indicated by the third line of FIG. 2.
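As a rough sketch of this first read path, with the shared cache modeled as a Python dict of entries (the function name `read_with_count` and the `use_window` parameter are our own illustrative choices, not terms from the patent):

```python
def read_with_count(cache, core_id, counter_value, addr, main_memory, use_window):
    """Sketch of the first read: fetch data into the shared cache and
    record a count value greater than the reading core's counter.
    `use_window` models the number of clock cycles (or program phases)
    during which the core intends to use the data."""
    entry = cache.get(addr)
    if entry is None:
        # Miss: fill the entry from main memory.
        entry = {"data": main_memory[addr], "valid": True,
                 "status": set(), "count": 0}
        cache[addr] = entry
    # Record that this core has read the data (status area).
    entry["status"].add(("read_by", core_id))
    # The recorded count value exceeds the core's current counter,
    # marking the phase up to which the core tolerates stale data.
    entry["count"] = counter_value + use_window
    return entry["data"]
```

Running the example from the text, CPU core 1 (counter at 0) reading address 0x0006 with a window of 100 leaves a count value of 100 in the entry.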
Each CPU core executes predetermined processing according to its loaded program and updates the value of its counter accordingly.
Suppose the CPU core 3 wishes to read the data at main memory address 0x0006. The CPU core 3 queries the shared cache, finds that the data (GHI) at main memory address 0x0006 is cached in the shared cache, and finds, from the state of the data (GHI) in the shared cache, that the data is read by the CPU core 1. According to an embodiment of the present application, the CPU core 3 reads the count value (e.g., 100) corresponding to the data (GHI) from the shared cache. If the CPU core 3 determines that the value of its own counter 3 is not greater than the count value (e.g., 100) read from the shared cache, it obtains the data (GHI) from the shared cache and updates the status area of the entry to indicate that the data is read by the CPU core 1 and the CPU core 3, without changing the count value corresponding to the data (GHI) in the shared cache. If the CPU core 3 determines that the value of its own counter 3 is greater than the count value (e.g., 100) read from the shared cache, it obtains the data (GHI) from the shared cache, updates the status area of the entry to indicate that the data is read by the CPU core 1 and the CPU core 3, and also updates the count value corresponding to the data (GHI) in the shared cache; the updated count value is greater than the value of the counter 3.
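The second reader's path might be sketched as follows, under the same illustrative dict-based model (`read_shared` is our own name; the rule shown — leave the count area unchanged unless the reader's counter exceeds it — follows the paragraph above):

```python
def read_shared(cache, core_id, counter_value, addr):
    """Sketch of a second reader's path (CPU core 3 in the text).
    If the reader's counter does not exceed the stored count value,
    the count area is left unchanged; otherwise it is raised above
    the reader's counter."""
    entry = cache[addr]
    entry["status"].add(("read_by", core_id))
    if counter_value > entry["count"]:
        # Extend the window so it also covers this reader's phase.
        entry["count"] = counter_value + 1  # any value > counter_value works
    return entry["data"]
```

With the stored count value at 100, a reader whose counter is 50 leaves the count area untouched, while a reader whose counter is 150 raises it above 150.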
Suppose the CPU core 2 wishes to update the data at main memory address 0x0006. The CPU core 2 queries the shared cache and finds that the data (GHI) at main memory address 0x0006 is cached in the shared cache. The CPU core 2 also queries the state of the data (GHI) in the shared cache and finds that the data is read by the CPU core 1. According to an embodiment of the present application, the CPU core 2 reads the count value (e.g., 100) corresponding to the data (GHI) from the shared cache and sets its counter 2 to a new value greater than both the count value read from the shared cache and the original value of the counter 2. In this way, by setting the value of the counter 2, the CPU core 2 marks that the data at main memory address 0x0006 is updated by the CPU core 2, and that the update occurs after the time at which the CPU core 1 wishes to use that data, or after the stage in which the program on the CPU core 1 wishes to use it. The CPU core 2 records the updated data for main memory address 0x0006 in its local cache, without updating the data corresponding to main memory address 0x0006 in the shared cache.
In another embodiment, the CPU core 2 queries the state of the data (GHI) in the shared cache and finds that the data is read by both the CPU core 1 and the CPU core 3. The CPU core 2 reads the count value (e.g., 100) corresponding to the data (GHI) from the shared cache and sets its counter 2 to a new value greater than both the count value read from the shared cache and the original value of the counter 2. In this way, by setting the value of the counter 2, the CPU core 2 marks that the data at main memory address 0x0006 is updated by the CPU core 2, and that the update occurs after the time at which the CPU core 1 and the CPU core 3 wish to use that data, or after the stage in which the programs on the CPU core 1 and the CPU core 3 wish to use it. The CPU core 2 records the updated data for main memory address 0x0006 in its local cache, without updating the data corresponding to main memory address 0x0006 in the shared cache.
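The writer's path can be sketched in the same illustrative model; note that the shared cache entry's data is deliberately left untouched. The function name `update_locally` and the `core` dict shape (`{"id": ..., "counter": ...}`) are our own assumptions.

```python
def update_locally(cache, local_cache, core, addr, new_data):
    """Sketch of the writer's path (CPU core 2 in the text): instead of
    writing through to the shared cache, the writer raises its own
    counter above the entry's count value and keeps the updated copy
    only in its local cache."""
    entry = cache[addr]
    # The new counter value must exceed both the stored count value
    # and the writer's current counter value.
    core["counter"] = max(entry["count"], core["counter"]) + 1
    # Mark in the status area that this core holds an update.
    entry["status"].add(("updated_by", core["id"]))
    # The shared cache data area is NOT rewritten here.
    local_cache[addr] = new_data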
As each CPU core executes predetermined processing according to its loaded program, the value of each counter is updated accordingly. Before the value of the counter of the CPU core 1 reaches 100 (by way of example, 100 being the count value recorded in the shared cache entry corresponding to the data at main memory address 0x0006), the CPU core 1 neither cares about nor accesses the updated data, even though the CPU core 2 has updated the data corresponding to main memory address 0x0006.
As time passes, the time at which the CPU core 1 wishes to use the data at main memory address 0x0006 is reached; or, as the program executes, the stage at which the program on the CPU core 1 wishes to use the data at main memory address 0x0006 is reached. The CPU core 1 then wishes to access the data at main memory address 0x0006 again.
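The second read — CPU core 1 returning to the data after its window has passed — might look like this in the same illustrative model. `reread_after_window` is our own name; the write-back condition (writer's counter greater than the stored count value) mirrors the second operation described in the disclosure.

```python
def reread_after_window(cache, local_caches, cores, reader_id, writer_id, addr):
    """Sketch of the second read (CPU core 1 re-reading after its
    window has passed). If the status shows the data was updated by
    another core and that core's counter exceeds the stored count
    value, the writer's copy is written back to the shared cache
    before the reader receives it."""
    entry = cache[addr]
    if ("updated_by", writer_id) in entry["status"]:
        writer_counter = cores[writer_id]["counter"]
        if writer_counter > entry["count"]:
            # Pull the up-to-date copy out of the writer's local cache.
            entry["data"] = local_caches[writer_id][addr]
    # The reader records a fresh count value above both counters.
    reader_counter = cores[reader_id]["counter"]
    entry["count"] = max(reader_counter, cores[writer_id]["counter"]) + 1
    entry["status"].add(("read_by", reader_id))
    return entry["data"]
```

Continuing the running example: with the stored count value at 100 and counter 2 at 101, the re-read pulls the writer's copy into the shared cache and leaves a count value above both counters.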
In this way, when a CPU core updates data in the cache, a cache data synchronization operation across the multi-core system is not necessarily triggered, saving both storage overhead and operation overhead in the cache.
The description of the present application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the application in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art.
Claims (5)
1. A cache access method for a data processing system, the data processing system including a first CPU core, a second CPU core and a cache, the first CPU core including a first counter and the second CPU core including a second counter, the method comprising:
a first operation, in which the first CPU core requests to read first data from the cache, the cache provides the first data to the first CPU core, and the first CPU core instructs the cache to record a first count value, wherein the first count value is associated with the first data and is greater than the value of the first counter when the first operation is performed; the cache updating a state associated with the first data to indicate that the first data is read by the first CPU core;
when the second CPU core requests to update the first data, querying the state associated with the first data in the cache, reading the first count value based on the query result, and setting a second value of the second counter of the second CPU core, wherein the second value is greater than the first count value; the cache providing the first data to the second CPU core, the cache updating the state associated with the first data to indicate that the first data is updated by the second CPU core, the second CPU core saving a copy of the first data and updating the copy of the first data;
the first CPU core executes predetermined processing and increments the first counter accordingly;
the second CPU core executes predetermined processing and increments the second counter accordingly;
a second operation, in which the first CPU core requests to read the first data from the cache, the first CPU core queries the state associated with the first data in the cache, the state indicating that the first data is updated by the second CPU core, the cache requests the value of the second counter of the second CPU core from the second CPU core, and the cache requests the second CPU core to update the first data to the cache based on the value of the second counter being greater than the first count value; the cache provides the updated first data to the first CPU core, and the first CPU core instructs the cache to set the first count value to a third value that is greater than both the value of the second counter and the value of the first counter when the second operation is performed.
2. The method of claim 1, further comprising: the first CPU core increments the first counter according to a clock, and the second CPU core increments the second counter according to the clock.
3. The method of claim 2, further comprising: the first CPU core increments the first counter in accordance with the executed instruction, and the second CPU core increments the second counter in accordance with the executed instruction.
4. The method of any of claims 1-3, wherein the data processing system further comprises a third CPU core, the third CPU core comprising a third counter, the method further comprising:
a third operation, in which the third CPU core requests to read the first data from the cache, the third CPU core queries the state associated with the first data in the cache, and, if the state indicates that the first data is read by the first CPU core, the cache provides the first data to the third CPU core, and the third CPU core instructs the cache to record a third count value associated with the first data, the third count value being greater than the value of the third counter when the third operation is performed; the cache updates the state associated with the first data to indicate that the first data is read by the third CPU core.
5. The method of any of claims 1-3, wherein the data processing system further comprises a third CPU core, the third CPU core comprising a third counter, the method further comprising:
a fourth operation, in which the third CPU core requests to read the first data from the cache, the third CPU core queries the state associated with the first data in the cache, and, if the state indicates that the first data is updated by the second CPU core, the cache requests the second CPU core to update the first data to the cache based on the value of the second counter being greater than the first count value; the cache provides the updated first data to the third CPU core, and the third CPU core instructs the cache to set the first count value to a fourth value that is greater than both the value of the second counter and the value of the third counter when the fourth operation is performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810404294.2A CN108614782B (en) | 2018-04-28 | 2018-04-28 | Cache access method for data processing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108614782A CN108614782A (en) | 2018-10-02 |
CN108614782B true CN108614782B (en) | 2020-05-01 |
Family
ID=63661486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810404294.2A Active CN108614782B (en) | 2018-04-28 | 2018-04-28 | Cache access method for data processing system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108614782B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111782419B (en) * | 2020-06-23 | 2023-11-14 | 北京青云科技股份有限公司 | Cache updating method, device, equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107111553A (en) * | 2015-01-13 | 2017-08-29 | 高通股份有限公司 | System and method for providing dynamic caching extension in many cluster heterogeneous processor frameworks |
CN107423234A (en) * | 2016-04-18 | 2017-12-01 | 联发科技股份有限公司 | Multicomputer system and caching sharing method |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8074026B2 (en) * | 2006-05-10 | 2011-12-06 | Intel Corporation | Scatter-gather intelligent memory architecture for unstructured streaming data on multiprocessor systems |
US8112594B2 (en) * | 2007-04-20 | 2012-02-07 | The Regents Of The University Of Colorado | Efficient point-to-point enqueue and dequeue communications |
US9396024B2 (en) * | 2008-10-14 | 2016-07-19 | Vmware, Inc. | Online computation of cache occupancy and performance |
US8909879B2 (en) * | 2012-06-11 | 2014-12-09 | International Business Machines Corporation | Counter-based entry invalidation for metadata previous write queue |
CN105359116B (en) * | 2014-03-07 | 2018-10-19 | 华为技术有限公司 | Buffer, shared cache management method and controller |
US9513693B2 (en) * | 2014-03-25 | 2016-12-06 | Apple Inc. | L2 cache retention mode |
CN105426319B (en) * | 2014-08-19 | 2019-01-11 | 超威半导体产品(中国)有限公司 | Dynamic buffering zone devices and method |
CN105677580B (en) * | 2015-12-30 | 2019-04-12 | 杭州华为数字技术有限公司 | The method and apparatus of access cache |
JP2018005667A (en) * | 2016-07-05 | 2018-01-11 | 富士通株式会社 | Cache information output program, cache information output method and information processing device |
CN106250348B (en) * | 2016-07-19 | 2019-02-12 | 北京工业大学 | A kind of heterogeneous polynuclear framework buffer memory management method based on GPU memory access characteristic |
- 2018-04-28 CN CN201810404294.2A patent/CN108614782B/en active Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20200407 Address after: Room 801, 8th floor, R & D zone, Philips Research Building, No. 12, Shihua Road, Futian Free Trade Zone, Futian District, Shenzhen, Guangdong Province Applicant after: Shenzhen Huayang International Engineering Cost Consulting Co., Ltd Address before: 075000 04 B, three floor, East Century Square, Chahar, Zhangjiakou, Zhangjiakou, Qiaodong District, Hebei Applicant before: ZHANGJIAKOU HAOYANG TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant | ||