CN110378471A - Operation method, device and related products - Google Patents


Publication number
CN110378471A
CN110378471A (application CN201910671013.4A)
Authority
CN
China
Prior art keywords
data
storage capacity
data storage
stored
dimension
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910671013.4A
Other languages
Chinese (zh)
Other versions
CN110378471B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cambricon Technologies Corp Ltd
Beijing Zhongke Cambrian Technology Co Ltd
Original Assignee
Beijing Zhongke Cambrian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Cambrian Technology Co Ltd filed Critical Beijing Zhongke Cambrian Technology Co Ltd
Priority to CN201910671013.4A priority Critical patent/CN110378471B/en
Publication of CN110378471A publication Critical patent/CN110378471A/en
Application granted granted Critical
Publication of CN110378471B publication Critical patent/CN110378471B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This disclosure relates to an operation method, a device, and related products. The product includes a controller unit, and the controller unit includes an instruction cache unit, an instruction processing unit, and a storage queue unit. The instruction cache unit is configured to store computation instructions associated with artificial neural network operations; the instruction processing unit is configured to parse a computation instruction to obtain multiple operation instructions; the storage queue unit is configured to store an instruction queue, the instruction queue including multiple operation instructions or computation instructions to be executed in the sequential order of the queue. Through the above method, the disclosure can improve the operation efficiency of the related products when performing neural network model operations.

Description

Operation method, device and Related product
Technical field
This disclosure relates to the technical field of information processing, and more particularly to an operation method, a device, and related products.
Background technique
In the field of artificial intelligence, neural network algorithms have recently become popular machine learning algorithms, achieving very good results in various fields such as image recognition, speech recognition, and natural language processing. As neural network algorithms develop, their complexity keeps increasing, and to improve accuracy, the scale of the models is also gradually growing.
Summary of the invention
According to a first aspect of the disclosure, a data storage method is provided. The method comprises:
reading data to be stored;
determining a first data storage capacity of the data to be stored;
extending the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain a second data storage capacity with the smallest time overhead;
storing the data to be stored according to the second data storage capacity.
According to a second aspect of the disclosure, a data storage device is provided, comprising:
a reading unit, configured to read data to be stored;
a first data storage capacity determination unit, configured to determine a first data storage capacity of the data to be stored;
a second data storage capacity determination unit, configured to extend the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain a second data storage capacity with the smallest time overhead;
a storage unit, configured to store the data to be stored according to the second data storage capacity.
According to a third aspect of the disclosure, an arithmetic device is provided, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the method described in the first aspect above.
According to a fourth aspect of the disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored; when executed by a processor, the computer program instructions implement the method described in the first aspect above.
According to the data storage method, device, and related products of the aspects of the disclosed embodiments, by reading the data to be stored, determining the first data storage capacity of the data to be stored, extending the first data storage capacity in corresponding dimensions according to the data to be stored to obtain the second data storage capacity with the smallest time overhead, and storing the data to be stored according to the second data storage capacity, a storage scheme for data in deep learning hardware can be determined. Storing the data to be stored by this scheme can effectively improve the memory utilization of the hardware device; at the same time, the process of determining the storage scheme by this method has a small time cost, which can effectively improve the efficiency of the determination process.
Other features and aspects of the disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the disclosure together with the specification, and serve to explain the principles of the disclosure.
Fig. 1 shows the dimension information described by a Dimension according to an embodiment of the disclosure.
Fig. 2 shows a flow chart of a data storage method according to an embodiment of the disclosure.
Fig. 3 shows a flow chart of a data storage method according to an embodiment of the disclosure.
Fig. 4 shows a flow chart of a data storage method according to an embodiment of the disclosure.
Fig. 5 shows a flow chart of a data storage method according to an embodiment of the disclosure.
Fig. 6 shows a parameter configuration schematic diagram according to an embodiment of the disclosure.
Fig. 7 shows a flow chart of a data storage method according to an embodiment of the disclosure.
Fig. 8 shows a schematic diagram of an application example according to the disclosure.
Fig. 9 shows a schematic diagram of an application example according to the disclosure.
Fig. 10 shows a block diagram of a data storage device according to an embodiment of the disclosure.
Fig. 11 shows a block diagram of a data storage device according to an embodiment of the disclosure.
Specific embodiment
Various exemplary embodiments, features, and aspects of the disclosure are described in detail below with reference to the accompanying drawings. Identical reference numerals in the drawings indicate functionally identical or similar elements. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The dedicated word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" should not be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the disclosure. Those skilled in the art will appreciate that the disclosure can equally be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the disclosure.
Since the on-chip resources of a deep learning accelerator are limited, each tensor (Tensor) is divided into several blocks that are computed separately. On a vector processor, dividing a vector into several pieces for separate computation can be called strip mining. On a deep learning accelerator, dividing a tensor becomes more difficult, mainly because the inputs and outputs of the operations processed by the accelerator are multi-dimensional Tensors rather than one-dimensional vectors; every dimension of a Tensor can be divided, so finding a suitable partition strategy is harder than on a vector processor. Because more dimensions can be divided, many more data block sizes can be formed, which also poses challenges for dividing the data and locating each data block. To better represent the partitioning of data, each dimension of a Tensor can be wrapped in a specific data structure, Dimension. Each Dimension contains the partition information of one dimension; from the Dimension of each dimension, the address and size of each data block into which the Tensor is divided can be computed directly.
A Dimension structure can be used to represent the information of one dimension and, after the dimension is divided, the way each position is accessed. A Dimension can be described by four variables: the starting position (start), the end position (end), the step length (stride), and the length accessed each time (extent). Fig. 1 shows the dimension information described by a Dimension according to an embodiment of the disclosure. As can be seen from the figure, the corresponding dimension of the Tensor starts from 0 with a length of 10; when traversing this dimension, the access window moves with a stride of 2, starting from 0 and accessing a range of length 4 each time.
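As an illustrative sketch (not part of the original disclosure), the four-variable Dimension structure described above, and the per-block offsets and sizes it implies, can be modeled as follows; the `blocks` helper name is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    start: int   # starting position of the traversal
    end: int     # end position (exclusive) along this dimension
    stride: int  # how far the access window moves each step
    extent: int  # length accessed each time

    def blocks(self):
        """Yield (offset, length) for each access window along this
        dimension, clipping the final window at the dimension's end."""
        offset = self.start
        while offset < self.end:
            yield offset, min(self.extent, self.end - offset)
            offset += self.stride

# The example of Fig. 1: length-10 dimension, stride 2, extent 4.
dim = Dimension(start=0, end=10, stride=2, extent=4)
windows = list(dim.blocks())
```

With these values, the windows start at offsets 0, 2, 4, 6, 8, and the last one is clipped to length 2 by the end of the dimension.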
As can be seen from the Dimension data structure above, an important problem in designing a deep learning accelerator is the processing and management of data. To manage the data on a deep learning hardware device, the data on which computation or memory access can be performed needs to be managed and segmented. Data segmentation reasonably divides a large block of data so that each small piece after division fits into the cache of the deep learning hardware device. The segment size affects parallelism: if the segments are very small, the number of memory accesses increases, and since each memory access requires a start-up time, the overall memory access time grows with the number of accesses, which is unfavorable for memory-access-intensive algorithms such as fully connected layers. If the segments are large, however, compute-intensive operations may incur long waiting times when the instructions are not sufficiently parallel, which also increases the time the hardware device consumes at runtime. Therefore, how to effectively segment the data on a deep learning hardware device, so that it is stored in the device in a relatively reasonable way, has become an urgent problem to be solved.
To ensure the memory access and operation efficiency of the hardware device, the embodiments of the present disclosure propose a data storage method. In an application example of the disclosure, the storage amount of the data to be stored in the hardware device can first be set, in each dimension, to the value x_i (i = 1, 2, ..., n) that reaches the minimum storage, where n is the number of dimensions of the data to be stored. Then, according to the actual conditions of the data to be stored and the relevant parameters of the hardware device, it can be determined in which dimensions the data storage capacity can be extended. After the extensible dimensions have been determined, M(i) can be used to estimate the memory access overhead that increasing the i-th dimension of the data to be stored will cause; according to the values of M(i), the dimension with the smallest memory access overhead is determined and taken as the dimension in which to increase the data storage capacity, and the storage amount in that dimension is then increased. By repeating this process, the data storage capacity is extended in every extensible dimension until the segment size of no dimension can be extended further. The data storage capacity obtained at that point can be used as the size for storing the data to be stored, and the data to be stored can then be stored, according to this value, into the deep-learning-related hardware device for subsequent memory access and computation.
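The iterative process described above can be sketched as a small greedy search. This is an illustration under assumed interfaces, not the patent's implementation: `fits` and `cost` stand in for the hardware-capacity check and the memory-access overhead estimate M(i), both of which the disclosure leaves open:

```python
from math import ceil, prod

def greedy_tile(shape, min_tile, step, fits, cost):
    """Start from the minimal per-dimension storage amounts x_i and
    repeatedly grow the dimension whose extension gives the lowest
    estimated overhead, until no extension fits or helps."""
    tile = list(min_tile)
    while True:
        base = cost(tile)
        best = None
        for i in range(len(tile)):
            if tile[i] >= shape[i]:
                continue  # dimension already fully covered
            cand = tile[:]
            cand[i] = min(cand[i] + step[i], shape[i])
            if not fits(cand):
                continue  # would exceed on-chip storage
            c = cost(cand)
            if c <= base and (best is None or c < best[0]):
                best = (c, cand)
        if best is None:
            return tile  # no extension reduces the overhead further
        tile = best[1]

# Toy model: overhead = number of blocks, capacity = 16 elements.
shape = (8, 8)
fits = lambda t: prod(t) <= 16
cost = lambda t: prod(ceil(s / x) for s, x in zip(shape, t))
best_tile = greedy_tile(shape, (1, 1), (1, 1), fits, cost)
```

Under this toy cost the search settles on a 4 x 4 tile, the largest block count reduction that still fits the assumed 16-element buffer.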
Fig. 2 shows a flow chart of a data storage method according to an embodiment of the disclosure. The data storage method may be executed by a terminal device or other processing equipment, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and so on. In some possible implementations, the data storage method may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in the figure, the method may include:
Step S11, reading data to be stored.
Step S12, determining a first data storage capacity of the data to be stored.
Step S13, extending the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain a second data storage capacity with the smallest time overhead.
Step S14, storing the data to be stored according to the second data storage capacity.
In the embodiments disclosed above, the content, size, and storage mode of the data to be stored can be flexibly determined according to the storage device and the actual conditions of the data to be stored, and are not limited here.
According to the data storage method, device, and related products of the aspects of the disclosed embodiments, by reading the data to be stored, determining the first data storage capacity of the data to be stored, extending the first data storage capacity in corresponding dimensions according to the data to be stored to obtain the second data storage capacity with the smallest time overhead, and storing the data to be stored according to the second data storage capacity, a storage scheme for data in deep learning hardware can be determined. Storing the data to be stored by this scheme can effectively improve the memory utilization of the hardware device; at the same time, the process of determining the storage scheme by this method has a small time cost, which can effectively improve the efficiency of the determination process.
In the embodiments disclosed above, the implementation of step S11 is not limited; any way of reading or determining the data to be stored can serve as an implementation of step S11 and is not enumerated or restricted here. The implementation of step S12 is likewise not limited, that is, how the first data storage capacity of the data to be stored is determined can be decided flexibly according to the actual situation. Fig. 3 shows a flow chart of a data storage method according to an embodiment of the disclosure. As shown in the figure, in one possible implementation, step S11 may include:
Step S111, determining a storage object of the data to be stored.
Step S112, taking the minimum data storage amount of the storage object as the first data storage capacity.
In the embodiments disclosed above, the implementation of step S111 is not limited; that is, how the storage object of the data to be stored is determined is not restricted. What the storage object of the data to be stored specifically refers to can also be determined flexibly: in one example, the storage object may be the storage location of the data to be stored; in another example, the storage object may be the hardware device model that stores the data. Whatever the storage object specifically refers to, the purpose of determining it is to determine, through step S112 and according to the actual conditions of the storage object, its minimum data storage amount to serve as the first data storage capacity. Since the implementation of the storage object is not limited, the specific implementation of step S112 is likewise not limited and can be determined flexibly according to the actual situation. In one example, since the data to be stored may be multi-dimensional tensor data, the specific number of dimensions is not limited here and can be determined flexibly. Accordingly, the minimum data amount that the storage object can store in each dimension can be determined from the hardware conditions of the storage object; based on these per-dimension minimum data amounts, the minimum storable data amount when the data are stored across all dimensions in the storage object can be obtained, and this minimum data storage amount serves as the first data storage capacity in preparation for the subsequent storage-amount extension process. The minimum data amount that the storage object can store in each dimension is likewise unrestricted and can be determined flexibly according to the actual situation. In one example, the minimum data amounts storable in each dimension may be identical, that is, the same data amount in every dimension; in another example, they may differ, that is, the minimum storable data amount may vary by dimension, in which case the minimum data amount of each dimension can be determined separately according to the actual conditions of that dimension, and all dimensions are then aggregated to obtain the first data storage capacity.
By determining the storage object of the data to be stored and taking its minimum data storage amount as the first data storage capacity, the process of determining the data storage scheme is converted from exhaustively enumerating every storage scheme and computing its elapsed time to find the optimum, into extending from the smallest data storage capacity and determining the optimal scheme by comparing the time cost before and after each extension. Compared with the exhaustive approach, this process can significantly shorten the time to search for the optimal data storage scheme while still achieving good memory utilization.
After the first data storage capacity has been determined, it can be extended in the corresponding dimensions according to the data to be stored, per step S13, to obtain the second data storage capacity with the smallest time overhead. The implementation of step S13 is likewise not limited; that is, the way the first data storage capacity is extended can be determined flexibly according to the actual situation. Fig. 4 shows a flow chart of a data storage method according to an embodiment of the disclosure. As shown in the figure, in one possible implementation, step S13 may include:
Step S131, determining candidate dimensions of the first data storage capacity, where a candidate dimension is a dimension of the first data storage capacity in which the data storage capacity can be extended.
Step S132, extending the first data storage capacity in each candidate dimension respectively.
Step S133, obtaining the second data storage capacity after extension has been completed in all candidate dimensions, where the time overhead of the second data storage capacity in each candidate dimension is minimal.
As can be seen from the disclosed embodiments above, the data to be stored may have multiple dimensions. During data extension, it may be that extending every dimension reduces the overall time consumption of the hardware device, or that only some dimensions reduce the overall time consumption after extension while other dimensions increase it. Therefore, in one possible implementation, the candidate dimensions of the first data storage capacity can first be determined through step S131, indicating in which dimensions extending the data storage capacity of the first data storage capacity can reduce the overall time consumption of the hardware device. If extending any dimension would increase the overall time consumption of the hardware device, the first data storage capacity at that point no longer has any meaningful extension; that is, the second data storage capacity with the smallest time overhead has been obtained, and the method can proceed to step S14.
By determining the candidate dimensions of the first data storage capacity, extending the first data storage capacity in each candidate dimension respectively, and then obtaining the second data storage capacity after extension is completed in all candidate dimensions, the above steps can effectively ensure that the second data storage capacity has the smallest time overhead in every dimension, so that the finally stored data have an optimal segmentation; this can effectively reduce the memory access time and improve operation efficiency.
Based on the disclosed embodiments above, step S131 may be used to determine the candidate dimensions of the first data storage capacity, that is, in which dimensions the first data storage capacity is to be extended. In the embodiments of the disclosure, the implementation of step S131 is not limited; that is, how to determine which dimensions can be extended is not restricted. In one example, for each dimension, the memory usage after increasing the data storage capacity can be estimated to decide whether that dimension can be used for data storage capacity extension; finally, all dimensions usable for extension are collected as the candidate dimensions. The specific estimation method can be chosen flexibly according to the actual situation and is not specifically limited here.
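One simple way to realize the per-dimension memory-usage estimate just described, offered as a hedged sketch (the byte-based capacity check and all parameter names are assumptions, since the disclosure leaves the estimation method open):

```python
from math import prod

def candidate_dims(tile, shape, step, elem_size, capacity):
    """Return indices of dimensions that can still be extended: the
    dimension is not yet fully covered, and growing it by one step
    keeps the tile within the assumed on-chip capacity (in bytes)."""
    cands = []
    for i in range(len(tile)):
        if tile[i] >= shape[i]:
            continue  # nothing left to extend in this dimension
        grown = list(tile)
        grown[i] = min(grown[i] + step[i], shape[i])
        if prod(grown) * elem_size <= capacity:
            cands.append(i)
    return cands

# 4-byte elements, 160-byte buffer: only the first two dimensions of
# this 3-D tile can still grow without overflowing the buffer.
dims = candidate_dims((4, 4, 2), (8, 8, 8), (1, 1, 1),
                      elem_size=4, capacity=160)
```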
After the candidate dimensions of the first data storage capacity have been determined through step S131, the first data storage capacity can be extended in each candidate dimension respectively through step S132. How exactly the first data storage capacity is extended in each dimension is unrestricted. Fig. 5 shows a flow chart of a data storage method according to an embodiment of the disclosure. As shown in the figure, in one possible implementation, step S132 may include:
Step S1321, taking the first data storage capacity as the current data storage capacity.
Step S1322, extending the current data storage capacity by a preset data storage amount in the candidate dimension.
Step S1323, calculating the time overhead of the current data storage capacity before and after the extension respectively; if the time overhead of the extended data storage capacity does not exceed that of the data storage capacity before extension, taking the extended data storage capacity as the current data storage capacity and returning to the step of extending the current data storage capacity.
Step S1324, if the time overhead of the extended data storage capacity exceeds that of the data storage capacity before extension, taking the current data storage capacity as the extension result of the first data storage capacity in the candidate dimension.
As can be seen from the disclosed embodiment above, when extending the first data storage capacity in each candidate dimension, the first data storage capacity can first be taken as the current data storage capacity; then, by repeatedly adding the preset storage amount to the current data storage capacity until the time overhead becomes larger after an addition, the extension of the data storage capacity in the candidate dimension is realized. The size of each extension can be the preset data storage amount, whose specific size is not limited and can be determined flexibly according to the hardware device and the actual conditions of the data to be stored. In one example, the preset data storage amount can be the same as the minimum data storage amount proposed in the disclosed embodiments above; that is, when increasing the storage amount in a candidate dimension, each increment can be the minimum data amount storable in that dimension. As the disclosed embodiments show, the minimum data amounts in different dimensions may be the same or different; similarly, when extending by the preset data storage amount in each candidate dimension, the preset amounts in different candidate dimensions may be the same or different, determined flexibly according to the actual situation.
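Steps S1321 to S1324 can be sketched as the following per-dimension loop; the toy cost function is an assumption chosen only so that the stopping condition is visible, not the disclosure's actual overhead model:

```python
from math import ceil

def extend_in_dim(tile, dim, step, limit, cost):
    """Grow tile[dim] by `step` while the estimated time overhead does
    not increase; stop at the first growth that costs strictly more
    (steps S1321-S1324, under an assumed cost function)."""
    current = list(tile)
    while current[dim] + step <= limit:
        grown = list(current)
        grown[dim] += step
        if cost(grown) > cost(current):
            break  # S1324: extension made things worse, keep current
        current = grown  # S1323: accept the extension and try again
    return current

# Assumed overhead: loads along a length-16 axis plus a fill penalty
# that grows with the tile, so the loop stops before the limit.
toy_cost = lambda t: ceil(16 / t[0]) + t[0]
result = extend_in_dim([2], dim=0, step=2, limit=16, cost=toy_cost)
```

With this toy cost, growing from 2 to 4 lowers the overhead (10 to 8) and is accepted, while growing to 6 raises it to 9 and is rejected, so the loop returns a tile of 4.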
As can be seen from the disclosed embodiments above, in each candidate dimension, determining how many extensions of the first data storage capacity are ultimately needed depends on the comparison of the time overhead before and after each extension of the data storage capacity. Which kind of time overhead is specifically meant can be chosen flexibly according to the actual situation and is not limited to the following disclosed embodiments. In one possible implementation, the time overhead may include the memory access time overhead and the computation time overhead. In one possible implementation, the time overhead may include the computation time overhead.
In one possible implementation, the time overhead may include the memory access time overhead. When determining the optimal storage mode of the data to be stored, the way the data are divided, segmented, and then stored has little impact on the computation time needed when operating on the stored data; moreover, at the data level of an intermediate representation, it is difficult to assess the performance of each computation statement. The computation time overhead therefore has little influence on the overall time overhead. By counting a time overhead that includes the memory access time overhead, the efficiency of determining the time overhead can be improved, thereby reducing the overall time consumption of the data storage method and improving the efficiency of determining the optimal storage mode.
The specific way of counting the memory access time overhead is likewise unrestricted in the embodiments of the disclosure and can be determined according to the actual conditions of the data to be stored and the structure of the hardware device itself. In one possible implementation, the way of determining the memory access time overhead under the current storage amount can be as follows:
A deep learning accelerator generally contains multiple on-chip caches that can be used to store the operands of instructions, and each computation requires loading a block of data from main memory into the on-chip cache. Different accelerators may have very different storage hierarchies; to be compatible with different cache designs, a configurable cache registration mechanism can be used, by which users configure the cache attributes of their own hardware, including the number, size, latency, bandwidth, and so on of the caches. By reading the cache parameters during scheduling, the instruction generator can better estimate the running time and perform scheduling. When configuring caches, the storage structure of the accelerator can first be abstracted into a series of definable parameters. Fig. 6 shows a parameter configuration schematic diagram according to an embodiment of the disclosure; as shown in the figure, a configurable parameter interface can be provided to relevant personnel for customized caching. Each defined cache performs independent address-space management. In the embodiments of the disclosure, a series of operations can be provided to realize operations on the on-chip caches; which operations are specifically included is not limited in the embodiments of the disclosure and can be set according to the actual situation, not limited to the following disclosed embodiments. In one example, they may include allocate, release, load, store, and move.
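The cache registration mechanism above can be illustrated with a minimal sketch. Everything here is assumed for illustration: the class and field names, the bump-allocator policy, and the whole-cache `release` are simplifications standing in for the per-cache independent address-space management the disclosure describes:

```python
from dataclasses import dataclass

@dataclass
class CacheSpec:
    """User-defined on-chip buffer: size in bytes, latency per access,
    and bandwidth in bytes per cycle (illustrative attribute set)."""
    name: str
    size: int
    latency: float
    bandwidth: float

class CacheRegistry:
    """Each registered cache gets independent address-space management;
    here a simple bump allocator per cache."""
    def __init__(self):
        self.caches = {}
        self.offsets = {}

    def register(self, spec):
        self.caches[spec.name] = spec
        self.offsets[spec.name] = 0

    def allocate(self, name, nbytes):
        off = self.offsets[name]
        if off + nbytes > self.caches[name].size:
            raise MemoryError(f"{name} overflow")
        self.offsets[name] = off + nbytes
        return off

    def release(self, name):
        self.offsets[name] = 0  # simplistic: frees the whole cache

reg = CacheRegistry()
reg.register(CacheSpec("vmem", size=1024, latency=10.0, bandwidth=64.0))
a = reg.allocate("vmem", 256)
b = reg.allocate("vmem", 256)
```

A scheduler could read `latency` and `bandwidth` off the registered spec when estimating memory access time, as the following formula does.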
As can be seen from the disclosed embodiments above, through the cache registration mechanism, the memory access latency and bandwidth for loading from and storing to a given storage region can be obtained, so the memory access time of one Load and one Store can be estimated from this latency and bandwidth. In one example, when calculating the memory access time of a fully connected layer, the memory access time of one Load of the input can be:

in_seg_size / B_vmem + L_vmem

where in_seg_size represents the size of the input data after segmentation, B_vmem represents the bandwidth of the cache, and L_vmem represents the latency of one read. The total Load time for the input data can be the number of input data parts, in_parts, multiplied by the memory access time of one Load; multiplying by the number of loop iterations then yields the overall memory access time.
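The formula above and the totaling rule can be written out directly; the numeric values below are purely illustrative, not parameters from the disclosure:

```python
def load_time(seg_size, bandwidth, latency):
    """One Load of a segment: in_seg_size / B_vmem + L_vmem,
    i.e. transfer time plus a fixed start-up latency."""
    return seg_size / bandwidth + latency

def total_access_time(seg_size, parts, loops, bandwidth, latency):
    """Overall memory access time: per-Load time multiplied by the
    number of input parts (in_parts) and the loop count."""
    return parts * loops * load_time(seg_size, bandwidth, latency)

# Illustrative numbers: 256-byte segments, 64 B/cycle, 10-cycle latency.
t = load_time(256, 64.0, 10.0)                     # 256/64 + 10 = 14
total = total_access_time(256, 4, 2, 64.0, 10.0)   # 4 * 2 * 14 = 112
```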
It should be noted that the disclosed embodiments above describe one possible implementation of step S13, in which steps S131, S132, and S133 can be executed in order, or flexibly in a different order according to the actual situation; the operation order among the above steps can be determined flexibly and is not limited to the implementations of the disclosed embodiments above. In one possible implementation, after extension is completed and the second data storage capacity is obtained through step S133, the change in the data storage capacity may produce new dimensions in which the data storage capacity can be further extended. Therefore, in one example, after the second data storage capacity is obtained through step S133, the method can return to step S131 for another judgment, taking the second data storage capacity again as the first data storage capacity and judging whether there are new candidate dimensions in which the data storage capacity can be extended; the time consumption is then repeatedly determined through step S13 until the data storage capacity with the smallest time overhead is obtained as the final second data storage capacity.
By taking the first data storage capacity as the current data storage capacity, extending the current data storage capacity by a preset data storage capacity in a candidate dimension, and separately calculating the time overhead before and after the extension, the extension can proceed iteratively: when the time overhead after the extension does not exceed the time overhead before the extension, the current data storage capacity continues to be extended; when the time overhead after the extension exceeds the time overhead before the extension, the current data storage capacity is taken as the second data storage capacity. Through this cyclic process, the second data storage capacity with the smallest time overhead in each candidate dimension can be obtained more conveniently, which saves the time of determining the optimal storage mode and improves the efficiency of determining it.
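The cyclic extension described here can be sketched as follows (an illustrative sketch, not the patented implementation; `time_overhead` stands for whatever overhead model the embodiment uses, and the convex cost in the test usage is a made-up example):

```python
def extend_in_dimension(amount, step, time_overhead):
    # Grow the current data storage capacity by `step` while the time
    # overhead after the extension does not exceed the overhead before it;
    # stop at the first extension that makes the overhead larger, and
    # return the last capacity whose overhead was no worse.
    current = amount
    while time_overhead(current + step) <= time_overhead(current):
        current += step
    return current
```

The loop terminates as soon as one extension strictly increases the overhead, which matches the stopping rule of step S13.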
After the second data storage capacity is obtained, the data to be stored can be stored according to the second data storage capacity through step S14. The implementation of step S14 is not restricted and can be flexibly chosen according to the actual situation; it is not limited to the following disclosed embodiments. Fig. 7 shows a flowchart of the data storage method according to an embodiment of the present disclosure. As shown, in one possible implementation, step S14 may include:
Step S141: divide the data to be stored according to the second data storage capacity to obtain a division result.
Step S142: store the division result.
It can be seen from the above disclosed embodiments that the second data storage capacity fully takes into account the dimension division of the data to be stored, and its time overhead in each candidate dimension is the smallest. Therefore, by dividing the data to be stored according to the second data storage capacity, obtaining a division result and storing it, this storage mode can greatly reduce the time consumed by the stored data during memory access, thereby improving the memory access efficiency of the hardware device.
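Steps S141 and S142 amount to chunking the data by the second data storage capacity; a minimal sketch (the function name and the list-based representation of the data are illustrative assumptions):

```python
def divide(data, capacity):
    # Divide the data to be stored into pieces no larger than the second
    # data storage capacity; each piece can then be stored in turn.
    return [data[i:i + capacity] for i in range(0, len(data), capacity)]
```

For example, dividing ten elements by a capacity of four yields two full pieces and one remainder piece.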
The data storage method proposed in the above disclosed application embodiments can be further optimized and applied according to the actual storage situation of the hardware device. In one possible implementation, for a hardware device that uses double buffering as an optimization for data storage, the memory size of the hardware device can be reduced to half of the original before performing the search. In one possible implementation, for an architecture with multiple storage structures, the data storage method proposed in the disclosed embodiments can be applied recursively to every two adjacent levels of the storage hierarchy. In one example, after data storage to the memory space of a certain level has been realized through the data storage method proposed in the disclosed embodiments, the segmented stored data can be taken as a whole and further segmented for the memory space of the next higher level by applying the data storage method again, wherein the higher-level memory space can be a memory space closer to the computation or with a smaller fragment size.
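The recursive application across adjacent levels of the storage hierarchy might look like the following sketch (hypothetical names; `plan` stands in for whatever per-level segmentation the method produces, here exercised with a simple fixed-size split):

```python
def store_hierarchy(data, levels, plan):
    # Apply the segmentation method between every two adjacent levels of
    # the storage hierarchy: the pieces produced for one level are each
    # taken as a whole and segmented again for the next (smaller) level.
    current = [data]
    for level in levels:  # from the larger memory space to the smaller one
        next_pieces = []
        for piece in current:
            next_pieces.extend(plan(piece, level))
        current = next_pieces
    return current
```

Each level only sees the pieces produced by the level above it, which is the "taken as a whole" recursion described in the text.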
The data storage method proposed in the application embodiments of the present disclosure can significantly shorten the time of determining the optimal data storage while obtaining a good memory utilization rate. Experiments have confirmed that, when this method is applied to the AlexNet network, compared with determining the optimal storage method by directly searching all possible data storage methods for the optimal solution, the method proposed in the above disclosed embodiments can shorten the number of search steps by 161.44x.
Application example
In the application example of the present disclosure, a fully connected layer with an input of 1024 and an output of 256 is taken as an example to explain the specific implementation of the data storage method. Before the specific segmentation is realized, the segmentation method can first be defined through formulas and the like. In this example, the method of determining the optimal storage mode is to segment the data to be stored and then store it on the hardware device in segmented form; therefore, an optimal data segmentation strategy needs to be found, that is, the segment sizes are estimated and the optimal solution is searched for in the solution space. To this end, an objective function can be established first, and the optimal solution is then obtained based on the objective function.
In the application example of the present disclosure, the segment sizes, that is, the quantities to be solved, are denoted by variables x1, x2, ..., xn, where n is the number of dimensions. The final goal is to find a group of x that satisfies, for all m, Fm(x1, x2, ..., xn) ≤ MemSize_m, where m denotes the identifier of an on-chip cache. Fm computes, for each segmentation strategy, the space required on memory m; this value can be obtained by analyzing the allocation function (ALLOC) and the release function (RELEASE).
As proposed in the disclosed embodiments above, in this application example the content to be segmented is a fully connected layer with an input of 1024 and an output of 256; the data storage location can be a Cambricon deep learning chip containing two on-chip caches (of sizes 64KB and 732KB respectively), and the operation of the fully connected layer is carried out using matrix operations. Fig. 8 shows a schematic diagram of an application example of the present disclosure. As shown, when the data structure of the fully connected layer is segmented, the segment sizes have not yet been determined and the return values are undetermined variables. The number of dimensions n = 2; the variable x1 is the input size in_seg_size and x2 is the output size out_seg_size; m = [vmem, mmem], Fvmem(x1, x2) = x1 + x2 × 2 and Fmmem(x1, x2) = x1 × x2. Therefore, in this application example, the constraint conditions are: x1 + x2 × 2 < 32K && x1 × x2 < 384K.
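The two constraints can be written directly as a feasibility check (a sketch; treating 32K and 384K as element counts is an assumption made for illustration):

```python
K = 1024  # the limits in the constraints are given in units of K

def feasible(x1, x2):
    # Constraints from the application example:
    # F_vmem(x1, x2) = x1 + x2 * 2 must stay below 32K, and
    # F_mmem(x1, x2) = x1 * x2 must stay below 384K.
    return x1 + x2 * 2 < 32 * K and x1 * x2 < 384 * K
```

The full, unsegmented layer (x1 = 1024, x2 = 256) already satisfies both limits, while an input segment of 32K elements alone violates the vmem constraint.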
After the constraint conditions have been determined, the total memory access time can be estimated as the objective function; the goal of the application example of the present disclosure is to minimize the memory access time. In this application example, only the memory access time is assessed and the computation time is not, for two reasons: first, the fragment size has little influence on the total computation time; second, the level of the intermediate representation is unfavorable for assessing the performance of each computing statement. The cache registration mechanism proposed in the above disclosed embodiments can obtain the memory access latency and bandwidth of Load and Store for a certain block of storage region, so the memory access time of Load and Store can be estimated. For the fully connected layer of this application example, the memory access time of one Load of the input is: in_seg_size ÷ B_vmem + L_vmem, wherein B_vmem denotes the bandwidth of the cache and L_vmem denotes the latency of a single read. The total Load time for the input data is in_parts multiplied by the memory access time of one Load; multiplying again by the loop count gives the overall memory access time.
In the application example of the present disclosure, the goal of the partition strategy is defined as minimizing the memory access time while filling the memory as much as possible. To find the optimal solution, the most straightforward approach is to exhaustively search the entire design space, evaluate the time of each candidate, and select the candidate with the shortest time as the storage mode. This method guarantees the optimal solution, but since the design space is very large, the compilation time would be excessively long. Therefore, the application example of the present disclosure proposes a data storage method that can significantly shorten the search time while obtaining a good memory utilization rate. Fig. 9 shows a schematic diagram of an application example of the present disclosure. As shown, the data storage method proposed in this application example mainly comprises the following processes:
The first step, initialization: the segment sizes xi (i = 1, 2, ..., n) are first set to a minimum value. In the later steps, these values are gradually increased until the memory limit is reached.
The second step, obtaining the dimension to be increased: in the application example of the present disclosure, M(i) is used to assess the memory access overhead caused after the i-th dimension is increased; the dimension i with the smallest memory access overhead is selected as the dimension to be increased, and the fragment size of the selected dimension is then increased.
The third step: the second step is repeated continuously until the fragment size of none of the dimensions can continue to be increased, and the program then exits.
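The three steps above can be sketched as a single greedy loop (an illustrative sketch; `cost` plays the role of M(i) and `fits` encodes the memory limits, both supplied by the caller):

```python
def greedy_segment_search(n_dims, min_sizes, steps, cost, fits):
    # Greedy search for segment sizes x_1..x_n: start from minimum sizes,
    # repeatedly grow the dimension whose increase causes the smallest
    # memory access overhead, and exit when no dimension can still grow
    # without exceeding the memory limits.
    x = list(min_sizes)
    while True:
        best = None
        for i in range(n_dims):
            trial = list(x)
            trial[i] += steps[i]
            if not fits(trial):
                continue  # this dimension can no longer be increased
            overhead = cost(trial)  # M(i): overhead after growing dim i
            if best is None or overhead < best[0]:
                best = (overhead, i)
        if best is None:
            return x  # no dimension can still grow: exit the program
        x[best[1]] += steps[best[1]]
```

With a cost that decreases as segments grow (fewer Loads) and a simple budget on the sum of sizes, the loop grows both dimensions until the budget is exhausted.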
Figure 10 shows a block diagram of a data storage device according to an embodiment of the present disclosure. As shown, the device 20 includes:
a reading unit 21, configured to read data to be stored;
a first data storage capacity determination unit 22, configured to determine a first data storage capacity of the data to be stored;
a second data storage capacity determination unit 23, configured to extend the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain a second data storage capacity with the smallest time overhead;
a storage unit 24, configured to store the data to be stored according to the second data storage capacity.
In one possible implementation, the first data storage capacity determination unit is configured to:
determine a storage object of the data to be stored;
take the minimum data storage capacity of the storage object as the first data storage capacity.
In one possible implementation, the second data storage capacity determination unit is configured to:
determine candidate dimensions of the first data storage capacity, wherein a candidate dimension is a dimension in which data storage capacity extension can be performed on the first data storage capacity;
extend the first data storage capacity in each candidate dimension respectively;
obtain the second data storage capacity after the extension is completed in all candidate dimensions, wherein the time overhead of the second data storage capacity in each candidate dimension is the smallest.
In one possible implementation, the second data storage capacity determination unit is further configured to:
take the first data storage capacity as a current data storage capacity;
in the candidate dimension, extend the current data storage capacity by a preset data storage capacity;
separately calculate the time overhead of the current data storage capacity before and after the extension; if the time overhead of the extended data storage capacity does not exceed the time overhead of the data storage capacity before the extension, take the extended data storage capacity as the current data storage capacity and return to the step of extending the current data storage capacity;
if the time overhead of the extended data storage capacity exceeds the time overhead of the data storage capacity before the extension, take the current storage capacity as the extension result of the first data storage capacity in the candidate dimension.
In one possible implementation, time overhead includes memory access time overhead.
In one possible implementation, storage unit is used for:
Data to be stored is divided according to the second data storage capacity, obtains division result;
Store division result.
Figure 11 is a block diagram of a data storage device 1300 according to an exemplary embodiment. For example, the device 1300 may be provided as a server. Referring to Fig. 11, the device 1300 includes a processing component 1322, which further includes one or more processors, and memory resources represented by a memory 1332 for storing instructions executable by the processing component 1322, such as application programs. An application program stored in the memory 1332 may include one or more modules, each corresponding to a group of instructions. In addition, the processing component 1322 is configured to execute the instructions to perform the above method.
The device 1300 may also include a power supply component 1326 configured to perform power management of the device 1300, a wired or wireless network interface 1350 configured to connect the device 1300 to a network, and an input/output (I/O) interface 1358. The device 1300 can operate based on an operating system stored in the memory 1332, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example a memory 1332 including computer program instructions, which can be executed by the processing component 1322 of the device 1300 to complete the above method.
The present disclosure may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or downloaded to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, an electronic circuit, for example a programmable logic circuit, a field-programmable gate array (FPGA) or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions so as to implement various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present disclosure. It will be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices so as to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or actions, or by combinations of special-purpose hardware and computer instructions.
The foregoing may be better understood according to the following clauses:
Clause A1. A data storage method, the method comprising:
reading data to be stored;
determining a first data storage capacity of the data to be stored;
extending the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain a second data storage capacity with the smallest time overhead;
storing the data to be stored according to the second data storage capacity.
Clause A2. The method according to clause A1, wherein determining the first data storage capacity of the data to be stored comprises:
determining a storage object of the data to be stored;
taking the minimum data storage capacity of the storage object as the first data storage capacity.
Clause A3. The method according to clause A1, wherein extending the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain the second data storage capacity with the smallest time overhead, comprises:
determining candidate dimensions of the first data storage capacity, wherein a candidate dimension is a dimension in which data storage capacity extension can be performed on the first data storage capacity;
extending the first data storage capacity in each candidate dimension respectively;
obtaining the second data storage capacity after the extension is completed in all the candidate dimensions, wherein the time overhead of the second data storage capacity in each candidate dimension is the smallest.
Clause A4. The method according to clause A3, wherein extending the first data storage capacity comprises:
taking the first data storage capacity as a current data storage capacity;
in the candidate dimension, extending the current data storage capacity by a preset data storage capacity;
separately calculating the time overhead of the current data storage capacity before and after the extension; if the time overhead of the extended data storage capacity does not exceed the time overhead of the data storage capacity before the extension, taking the extended data storage capacity as the current data storage capacity and returning to the step of extending the current data storage capacity;
if the time overhead of the extended data storage capacity exceeds the time overhead of the data storage capacity before the extension, taking the current storage capacity as the extension result of the first data storage capacity in the candidate dimension.
Clause A5. The method according to clause A1, wherein the time overhead comprises memory access time overhead.
Clause A6. The method according to clause A1, wherein storing the data to be stored according to the second data storage capacity comprises:
dividing the data to be stored according to the second data storage capacity to obtain a division result;
storing the division result.
Clause B7. A data storage device, comprising:
a reading unit, configured to read data to be stored;
a first data storage capacity determination unit, configured to determine a first data storage capacity of the data to be stored;
a second data storage capacity determination unit, configured to extend the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain a second data storage capacity with the smallest time overhead;
a storage unit, configured to store the data to be stored according to the second data storage capacity.
Clause B8. The device according to clause B7, wherein the first data storage capacity determination unit is configured to:
determine a storage object of the data to be stored;
take the minimum data storage capacity of the storage object as the first data storage capacity.
Clause B9. The device according to clause B7, wherein the second data storage capacity determination unit is configured to:
determine candidate dimensions of the first data storage capacity, wherein a candidate dimension is a dimension in which data storage capacity extension can be performed on the first data storage capacity;
extend the first data storage capacity in each candidate dimension respectively;
obtain the second data storage capacity after the extension is completed in all the candidate dimensions, wherein the time overhead of the second data storage capacity in each candidate dimension is the smallest.
Clause B10. The device according to clause B9, wherein the second data storage capacity determination unit is further configured to:
take the first data storage capacity as a current data storage capacity;
in the candidate dimension, extend the current data storage capacity by a preset data storage capacity;
separately calculate the time overhead of the current data storage capacity before and after the extension; if the time overhead of the extended data storage capacity does not exceed the time overhead of the data storage capacity before the extension, take the extended data storage capacity as the current data storage capacity and return to the step of extending the current data storage capacity;
if the time overhead of the extended data storage capacity exceeds the time overhead of the data storage capacity before the extension, take the current storage capacity as the extension result of the first data storage capacity in the candidate dimension.
Clause B11. The device according to clause B7, wherein the time overhead comprises memory access time overhead.
Clause B12. The device according to clause B7, wherein the storage unit is configured to:
divide the data to be stored according to the second data storage capacity to obtain a division result;
store the division result.
Clause C13. A data storage device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method according to any one of clauses A1 to A6.
Clause D14. A non-volatile computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of clauses A1 to A6.
The embodiments of the present disclosure have been described above. The above description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application or technical improvements over the technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A data storage method, characterized in that the method comprises:
reading data to be stored;
determining a first data storage capacity of the data to be stored;
extending the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain a second data storage capacity with the smallest time overhead;
storing the data to be stored according to the second data storage capacity.
2. The method according to claim 1, characterized in that determining the first data storage capacity of the data to be stored comprises:
determining a storage object of the data to be stored;
taking the minimum data storage capacity of the storage object as the first data storage capacity.
3. The method according to claim 1, characterized in that extending the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain the second data storage capacity with the smallest time overhead, comprises:
determining candidate dimensions of the first data storage capacity, wherein a candidate dimension is a dimension in which data storage capacity extension can be performed on the first data storage capacity;
extending the first data storage capacity in each candidate dimension respectively;
obtaining the second data storage capacity after the extension is completed in all the candidate dimensions, wherein the time overhead of the second data storage capacity in each candidate dimension is the smallest.
4. The method according to claim 3, characterized in that extending the first data storage capacity comprises:
taking the first data storage capacity as a current data storage capacity;
in the candidate dimension, extending the current data storage capacity by a preset data storage capacity;
separately calculating the time overhead of the current data storage capacity before and after the extension; if the time overhead of the extended data storage capacity does not exceed the time overhead of the data storage capacity before the extension, taking the extended data storage capacity as the current data storage capacity and returning to the step of extending the current data storage capacity;
if the time overhead of the extended data storage capacity exceeds the time overhead of the data storage capacity before the extension, taking the current storage capacity as the extension result of the first data storage capacity in the candidate dimension.
5. The method according to claim 1, characterized in that the time overhead comprises memory access time overhead.
6. The method according to claim 1, characterized in that storing the data to be stored according to the second data storage capacity comprises:
dividing the data to be stored according to the second data storage capacity to obtain a division result;
storing the division result.
7. A data storage device, characterized in that it comprises:
a reading unit, configured to read data to be stored;
a first data storage capacity determination unit, configured to determine a first data storage capacity of the data to be stored;
a second data storage capacity determination unit, configured to extend the first data storage capacity in corresponding dimensions according to the data to be stored, to obtain a second data storage capacity with the smallest time overhead;
a storage unit, configured to store the data to be stored according to the second data storage capacity.
8. The device according to claim 7, characterized in that the first data storage capacity determination unit is configured to:
determine a storage object of the data to be stored;
take the minimum data storage capacity of the storage object as the first data storage capacity.
9. A data storage device, characterized in that it comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of claims 1 to 6.
10. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 6.
CN201910671013.4A 2019-07-24 2019-07-24 Operation method, device and related product Active CN110378471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910671013.4A CN110378471B (en) 2019-07-24 2019-07-24 Operation method, device and related product


Publications (2)

Publication Number Publication Date
CN110378471A true CN110378471A (en) 2019-10-25
CN110378471B CN110378471B (en) 2021-06-01

Family

ID=68255618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910671013.4A Active CN110378471B (en) 2019-07-24 2019-07-24 Operation method, device and related product

Country Status (1)

Country Link
CN (1) CN110378471B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160062900A1 (en) * 2014-08-29 2016-03-03 International Business Machines Corporation Cache management for map-reduce applications
US20180088996A1 (en) * 2016-09-23 2018-03-29 Apple Inc. Systems and Methods of Memory Allocation for Neural Networks
CN109190758A (en) * 2018-09-04 2019-01-11 地平线(上海)人工智能技术有限公司 Method and apparatus for the tensor data of convolutional neural networks to be unfolded
CN110033086A (en) * 2019-04-15 2019-07-19 北京异构智能科技有限公司 Hardware accelerator for neural network convolution algorithm

Also Published As

Publication number Publication date
CN110378471B (en) 2021-06-01

Similar Documents

Publication Publication Date Title
US20220300812A1 (en) Workflow optimization
Meng et al. Training deeper models by GPU memory optimization on TensorFlow
CN112579063B (en) Acceleration method for exploring optimization space in deep learning compiler
CN110058883A (en) A kind of CNN accelerated method and system based on OPU
KR20220127878A (en) Adaptive Search Method and Apparatus for Neural Networks
CN111738434A (en) Method for executing deep neural network on heterogeneous processing unit
CN110377340A (en) Operation method, device and Related product
EP4290824A1 (en) Task allocation method and apparatus based on internet-of-things device, and network training method and apparatus
US11366806B2 (en) Automated feature generation for machine learning application
CN110889497B (en) Learning task compiling method of artificial intelligence processor and related product
US20230394110A1 (en) Data processing method, apparatus, device, and medium
CN116501505B (en) Method, device, equipment and medium for generating data stream of load task
CN115423082A (en) Automatic optimization method for depth model calculation graph related to hardware characteristics
Zheng et al. Chimera: An analytical optimizing framework for effective compute-intensive operators fusion
Yang et al. Efficient GPU memory management for nonlinear DNNs
Wen et al. Taso: Time and space optimization for memory-constrained DNN inference
CN116680063B (en) Task scheduling method, device, computing system, electronic equipment and storage medium
CN117271101A (en) Operator fusion method and device, electronic equipment and storage medium
CN115392467B (en) Cloud edge cooperative self-adaptive depth reasoning method for real-time processing of mass data
US20210142197A1 (en) Methods and systems for diverse instance generation in artificial intelligence planning
Shu et al. ROAM: memory-efficient large DNN training via optimized operator ordering and memory layout
CN110378471A (en) Operation method, device and Related product
Roy et al. Trust-region based multi-objective optimization for low budget scenarios
CN112633516A (en) Performance prediction and machine learning compilation optimization method and device
KR20230058621A (en) Memory-limit scheduling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 room 644, comprehensive research building, No. 6 South Road, Haidian District Academy of Sciences, Beijing

Applicant after: Zhongke Cambricon Technology Co., Ltd.

Address before: 100190 room 644, comprehensive research building, No. 6 South Road, Haidian District Academy of Sciences, Beijing

Applicant before: Beijing Zhongke Cambricon Technology Co., Ltd.

GR01 Patent grant