CN110083532B - Method and device for positioning operation errors in fusion mode based on deep learning framework
- Publication number
- CN110083532B (application CN201910298218.2A)
- Authority
- CN
- China
- Prior art keywords
- layer
- target
- network
- line
- determining
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/362—Software debugging
- G06F11/3644—Software debugging by instrumenting at runtime
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The target network under the deep learning framework is first confirmed to run correctly layer by layer. When the fusion mode then runs incorrectly, the network structure that is run is adjusted repeatedly, gradually narrowing the range of possible causes of the fusion-mode operation error until the error layer is found, so that the cause of the error can be located quickly and the efficiency of fixing the problem is improved.
Description
Technical Field
The application relates to the technical field of electronics, in particular to a method and a device for positioning operation errors in a fusion mode based on a deep learning framework.
Background
Under a deep learning framework, a deep-learning network can be run layer by layer or in a fusion mode. Compared with the layer-by-layer mode, running the network in fusion mode brings a very large performance improvement and greatly reduces the amount of input/output (IO), among other advantages. However, in fusion mode none of the intermediate results produced during the run are output, so when the fusion mode produces an error and needs to be debugged, the large number of layers and the complexity of the neural network make it very inconvenient to locate the error. Therefore, the problem of locating the cause of an error that occurs when running in the fusion mode of a deep learning framework needs to be solved.
Disclosure of Invention
The embodiments of the application provide a method and a device for positioning operation errors in a fusion mode based on a deep learning framework, which locate the error layer causing the fusion-mode operation error by rerunning a target network and using the operation results, so that the cause of the error can be found quickly and the efficiency of fixing the problem is improved.
In a first aspect, an embodiment of the present application provides a method for positioning an operation error in a fusion mode based on a deep learning framework, where the method includes:
when a target network under the deep learning framework operates correctly layer by layer and the fusion mode operates incorrectly, determining a first target layer according to target information, wherein the target information is the number of lines of text of a description text file of the target network;
operating the target network to the first target layer to obtain an operation result;
updating the first target layer according to the operation result to obtain a second target layer;
and operating the target network to the second target layer until an error layer causing the operation error of the fusion mode is obtained.
In a second aspect, there is provided a deep learning framework based fusion mode run error localization apparatus comprising a controller and an executor, wherein,
The controller is used for determining a first target layer according to target information when the target network under the deep learning framework operates correctly layer by layer and the fusion mode operates incorrectly, wherein the target information is the number of lines of text of a description text file of the target network;
the executor is used for running the target network to the first target layer to obtain a running result;
the controller is further used for updating the first target layer according to the operation result to obtain a second target layer;
the executor is further configured to run the target network to the second target layer until an error layer that causes the fusion mode operation error is obtained.
Optionally, in the aspect of determining the first target layer according to the target information, the controller is specifically configured to:
determining a first middle line of the target network from the text line numbers of the description text file of the target network according to a dichotomy;
and determining the first target layer according to the first middle row.
Optionally, in the aspect of determining the first target layer according to the first middle row, the controller is specifically configured to:
determining a first layer start line or a first layer end line closest to the first intermediate line;
And determining a layer before the layer corresponding to the first layer starting line or the first layer ending line as the first target layer.
Optionally, in the running the target network to the first target layer, the executor is specifically configured to:
performing text annotation on the descriptive contents corresponding to each layer of network behind the first target layer in the descriptive text file corresponding to the target network to obtain the annotated descriptive text file, and controlling the target network to operate through the descriptive text file;
and operating the target network to the first target layer according to the annotated description text file.
Optionally, in the text annotation of the description content corresponding to each layer of network after the first target layer in the description text file corresponding to the target network, the executor is specifically configured to:
and adding a preset symbol in front of the descriptive content corresponding to each row in each layer of network after the first target layer number in the descriptive text file corresponding to the target network.
Optionally, in the running the target network to the first target layer, the executor is specifically configured to:
counting the layer starting line number of each layer in the target network;
And executing the target network to the first target layer according to the layer starting line number of each layer.
Optionally, in the aspect of updating the first target layer according to the operation result to obtain a second target layer, the controller is specifically configured to:
dividing the target network into a first partial network before the first target layer and a second partial network except the first partial network;
and determining the second target layer by using the line-number and layer-number information of the first partial network or of the second partial network according to the operation result.
Optionally, in the determining of the second target layer by using the line-number and layer-number information of the first partial network or of the second partial network, the controller is specifically configured to:
use the line-number and layer-number information of the first partial network as new target information if the operation result is wrong, or the line-number and layer-number information of the second partial network as new target information if the operation result is correct; and determine the second target layer according to the new target information.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method steps according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform the method steps of the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes the operation error locating device in the fusion mode based on the deep learning framework according to the second aspect.
In a sixth aspect, embodiments of the present application provide a chip packaging structure, where the chip packaging structure includes the chip described in the fifth aspect;
in a seventh aspect, an embodiment of the present application provides a board card, where the board card includes the chip package structure described in the sixth aspect.
In an eighth aspect, an embodiment of the present application provides an electronic device, where the electronic device includes the chip described in the fifth aspect or the board card described in the seventh aspect.
In some embodiments, the electronic device comprises a data processing device, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a cell phone, a vehicle recorder, a navigator, a sensor, a camera, a server, a cloud server, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
In some embodiments, the vehicle comprises an aircraft, a ship, and/or a vehicle; the household appliances comprise televisions, air conditioners, microwave ovens, refrigerators, electric cookers, humidifiers, washing machines, electric lamps, gas cookers and range hoods; the medical device includes a nuclear magnetic resonance apparatus, a B-mode ultrasonic apparatus, and/or an electrocardiograph apparatus.
According to the technical scheme, when the target network under the deep learning framework runs correctly layer by layer but the fusion mode runs incorrectly, a first target layer is determined according to target information; the target network is run to the first target layer to obtain an operation result; the first target layer is updated according to the operation result to obtain a second target layer; and the target network is run to the second target layer, until the error layer causing the fusion-mode operation error is obtained. By rerunning the target network to each updated target layer, the network structure that needs to be run can be adjusted continuously, gradually narrowing the range of possible causes of the fusion-mode operation error until the error layer is found, so that the cause of the error can be found quickly and the efficiency of fixing the problem is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1A is a schematic flow chart of a method for positioning operation errors in a fusion mode based on a deep learning framework according to an embodiment of the present application;
fig. 1B is a schematic illustration of an error layer determination of a target network according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an operation error locating device in a fusion mode based on a deep learning framework according to an embodiment of the present application;
fig. 3 is a structural diagram of a board card assembly according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Running a neural network in fusion mode mainly involves inter-layer fusion within the neural network, including support for forward fusion, backward fusion and pyramid fusion. Inter-layer fusion can reduce memory accesses in the network and thereby improve network performance. The fusion behavior of pyramid fusion is relatively complex: pyramid fusion establishes a sub-fusion white list and attempts to fuse any operators in the white list, and the fusion constraints include on-chip resources and network topology limitations. In fusion mode, the parameters of all layers that can be computed on the artificial intelligence processor are set and passed into the fusion graph, and the artificial intelligence learning library completes the construction and optimization of the whole graph; only the specified inputs and outputs serve as the overall inputs and outputs of the network, so the space used for intermediate results can be reused to reduce memory occupation. Under the deep learning framework, when the computation is correct, the result of running the neural network in fusion mode is completely consistent with the result of running it layer by layer; therefore, when the neural network runs correctly layer by layer but the fusion mode runs incorrectly, the cause of the error needs to be located.
As shown in fig. 1A, fig. 1A is a flow chart of a method for positioning operation errors in a fusion mode based on a deep learning framework according to an embodiment of the present application, where the method includes:
101. and when the target network under the deep learning framework operates correctly layer by layer and the fusion mode operates incorrectly, determining a first target layer according to target information, wherein the target information is the number of text lines of a description text file of the target network.
In the embodiment of the application, the network description files of a neural network under the deep learning framework include a text file in protobuf format (prototxt) and a binary file (caffemodel) that stores the weights and parameters. Under the deep learning framework, the network structure is obtained from the prototxt file and the caffemodel file, and the forward or backward run of the network is then performed according to the parsed network structure. The prototxt file can control the structure of the network; it may contain only part of the network structure of the caffemodel file, i.e., the prototxt file may describe a subset of the entire network. Thus, by modifying the prototxt file, the run of the network can be controlled.
When the target network under the deep learning framework operates correctly layer by layer but the fusion mode operates incorrectly, the first target layer may be determined according to the target information, where the target information is the number of text lines of the description text file of the target network; for example, the prototxt file of the target network may contain 214 lines of text describing 23 layers in total.
Optionally, in the step 101, determining the first target layer according to the target information may include the following steps:
11. determining a first middle line of the target network from the text line numbers of the description text file of the target network according to a dichotomy;
12. and determining the first target layer according to the first middle row.
Wherein the first middle line is determined according to the dichotomy (binary search), for example as mid1 = ⌊(a1 + b1) / 2⌋, where a1 is the first network start line of the target network and b1 is the first network termination line of the target network. For example, when the prototxt file of the target network has 214 lines of text, the middle line may be determined to be line 107.
The first target layer is determined according to the first middle line; for example, the layer in which the middle line is located may be determined to be the first target layer, e.g. if line 107 falls in the 11th layer, the first target layer may be determined to be the 11th layer.
Optionally, in step 12, determining the first target layer according to the first intermediate line may include the following steps:
a1, determining a first layer starting line or a first layer ending line closest to the first intermediate line;
a2, determining a layer before the layer corresponding to the first layer starting line or the first layer ending line as the first target layer.
In the multi-layer network included in the target network, each layer may include a layer start line and a layer end line. Taking the line-number and layer-number information of a prototxt file of the target network provided in the embodiment of the present application as an example: the prototxt file has 214 lines of text, the network has 23 layers in addition to the input layer, and the first layer starts at line 9.
In this embodiment of the present application, after the first middle line (line 107) is determined, the layer start line or layer end line closest to the first middle line may be determined; here the closest layer start line is line 105 and the closest layer end line is line 116, so the first target layer may be determined to be the layer before the 11th layer that starts at line 105, i.e., the 10th layer.
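A rough sketch of this line-based bisection step is given below; it is not part of the patent text, and the file layout, the helper names and the use of layer start lines only (ignoring layer end lines) are assumptions of the sketch:

```python
# A rough sketch of the line-based bisection step described above (not from the patent).
# Assumptions: the prototxt file is available as a list of lines, every layer definition
# starts with a line beginning with "layer {", and only layer start lines are used.

def layer_start_lines(prototxt_lines):
    """Return the 1-based line numbers at which each layer definition starts."""
    return [i + 1 for i, line in enumerate(prototxt_lines)
            if line.strip().startswith("layer {")]

def first_target_layer(prototxt_lines):
    """Bisect the text line range and snap to the nearest layer start line."""
    a1, b1 = 1, len(prototxt_lines)            # e.g. lines 1 .. 214
    mid = (a1 + b1) // 2                       # first middle line, e.g. 107
    starts = layer_start_lines(prototxt_lines)
    # index of the layer whose start line is closest to the middle line
    k = min(range(len(starts)), key=lambda i: abs(starts[i] - mid))
    # the layer *before* the layer starting at that line is taken as the first target layer,
    # e.g. the layer before layer 11 (start line 105) is layer 10
    return max(k, 1)                           # 1-based layer index
```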
Optionally, in the embodiment of the present application, the first intermediate layer of the target network may also be determined directly according to the dichotomy, i.e. as the middle of the layer range between c1, the start layer of the target network, and d1, the termination layer of the target network. For example, if the target network has 23 layers, the first intermediate layer is the 11th layer, and the first target layer is then determined to be the layer before the first intermediate layer, i.e., the 10th layer.
102. And operating the target network to the first target layer to obtain an operation result.
In the embodiment of the application, considering that the text content of the prototxt file can be edited, the target network can be controlled to run only up to the first target layer by performing text annotation on part of the text content of the prototxt file.
Optionally, in the step 102, the step of operating the target network to the first target layer may include the steps of:
21. performing text annotation on the descriptive contents corresponding to each layer of network behind the first target layer in the descriptive text file corresponding to the target network to obtain the annotated descriptive text file, and controlling the target network to operate through the descriptive text file;
22. and operating the target network to the first target layer according to the annotated description text file.
Text annotation is performed on the descriptive contents corresponding to each layer of the network after the first target layer in the description text file, yielding the annotated description text file, and the target network is then controlled to run up to the first target layer according to the annotated description text file. For example, if the target network includes 23 layers and the first target layer is layer 10, text annotation can be performed on the descriptive contents corresponding to each layer after layer 10 in the prototxt file, that is, on the contents from line 105 onward, so as to control the target network to run up to layer 10.
Optionally, in the step 21, text annotation is performed on the descriptive content corresponding to each layer of network after the first target layer in the descriptive text file corresponding to the target network, which may include the following steps:
and adding a preset symbol in front of the descriptive content corresponding to each row in each layer of network after the first target layer number in the descriptive text file corresponding to the target network.
The text annotation of the descriptive contents corresponding to each layer of the network after the first target layer in the description text file is done by adding a preset symbol in front of the descriptive content of each line in those layers; for example, a '#' sign can be added in front of the descriptive content, so that the layers that do not need to be run in the target network are commented out.
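As an illustration of this annotation step, the following sketch prefixes '#' to every line from a given cut line onward; the helper name, file paths and cut-line argument are assumptions for illustration, since the patent itself gives no code:

```python
# A sketch of the annotation step (helper name, file paths and the cut-line argument
# are assumptions for illustration; the patent itself does not give code).

def annotate_after(prototxt_path, cut_line, out_path):
    """Prefix '#' to every line from cut_line (1-based) to the end of the file,
    so the framework only builds and runs the layers described before cut_line."""
    with open(prototxt_path, "r") as f:
        lines = f.readlines()
    for i in range(cut_line - 1, len(lines)):
        if not lines[i].lstrip().startswith("#"):   # avoid annotating a line twice
            lines[i] = "#" + lines[i]
    with open(out_path, "w") as f:
        f.writelines(lines)

# e.g. comment out everything from line 105 onward so only layers 1-10 are run:
# annotate_after("target_network.prototxt", 105, "target_network_upto_layer10.prototxt")
```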
Optionally, in the step 102, the step of operating the target network to the first target layer may include the steps of:
23. counting the layer starting line number of each layer in the target network;
24. and executing the target network to the first target layer according to the layer starting line number of each layer.
In this embodiment, considering that the network runs in units of layers, when running the target network to the first target layer, the layer start line number of each layer may be counted first, and the run can then be stopped at the first target layer by locating the layer start line of each layer.
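A small sketch of this counting step is shown below; the sample prototxt fragment and the "layer {" convention are assumptions used only for illustration:

```python
# A sketch of counting layer start lines (the sample fragment below is illustrative
# only and is not taken from the patent's prototxt file).

sample_prototxt = """\
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
"""

start_lines = [i + 1 for i, line in enumerate(sample_prototxt.splitlines())
               if line.startswith("layer {")]
print(start_lines)   # -> [1, 7]; with these numbers the run can be stopped at any chosen layer
```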
103. And updating the first target layer according to the operation result to obtain a second target layer.
After the target network is run to the first target layer, the operation result is either correct or wrong; in either case, the range in which the cause of the error lies can be narrowed according to the operation result, and the first target layer is updated accordingly.
Optionally, in step 103, updating the first target layer according to the operation result to obtain a second target layer may include the following steps:
31. dividing the target network into a first partial network before the first target layer and a second partial network except the first partial network;
32. and determining the second target layer by using the line-number and layer-number information of the first partial network or of the second partial network according to the operation result.
For example, if the target network includes 23 layers, the target network may be divided into a first partial network consisting of layers 1 to 10 and a second partial network consisting of layers 11 to 23, and the second target layer is then determined by using the line-number and layer-number information of the first partial network or of the second partial network.
Optionally, in the step 32, determining the second target layer by using the line-number and layer-number information of the first partial network or of the second partial network may include the following steps:
b1, if the operation result is wrong, using the line-number and layer-number information of the first partial network as new target information; if the operation result is correct, using the line-number and layer-number information of the second partial network as new target information;
and B2, determining the second target layer according to the new target information.
If the operation result is wrong, the cause of the fusion-mode operation error lies in the first partial network, so the line-number and layer-number information of the first partial network can be used as new target information; for example, if the target network comprises 23 layers, the line-number and layer-number information of layers 1 to 10 can be used as new target information. If the operation result is correct, the cause of the fusion-mode operation error lies in the second partial network, so the line-number and layer-number information of the second partial network can be used as new target information; for example, the line-number and layer-number information of layers 11 to 23 can be used as new target information.
Further, the second target layer may be determined according to the new target information. Specifically, if the operation result is wrong, the line-number and layer-number information of layers 1 to 10 is used as the new target information; the second middle line of the first partial network is determined from the text line numbers of its description text file according to the dichotomy, giving line 52, and the second target layer is then determined according to the second middle line as the layer before the layer whose start line is closest to the second middle line, giving layer 4 as the second target layer. If the operation result is correct, the line-number and layer-number information of layers 11 to 23 is used as the new target information; the third middle line of the second partial network is determined from the text line numbers of its description text file according to the dichotomy, giving line 160, and the second target layer is then determined according to the third middle line as the layer before the layer whose start line is closest to the third middle line: the layer start line closest to the third middle line is line 161, which corresponds to the 17th layer, and the layer before it is the 16th layer, so the second target layer becomes layer 16.
104. And operating the target network to the second target layer until an error layer causing the operation error of the fusion mode is obtained.
In the embodiment of the application, to run the target network to the second target layer, text annotation may be performed on the descriptive contents corresponding to each layer of the network after the second target layer in the description text file corresponding to the target network, the annotated description text file is obtained, and the target network is then run to the second target layer according to the annotated description text file. In a specific implementation, if the operation result was wrong, the second target layer is layer 4 and the descriptive contents of each layer after layer 4 need to be annotated; since the contents of each layer after layer 10 were already annotated when the target network was run to the first target layer, only the contents of layers 5 to 10 need additional annotation, that is, lines 48 to 104 are text-annotated. If the operation result was correct, the second target layer is layer 16 and the descriptive contents of each layer after layer 16 need to be annotated; since the contents of each layer after layer 10 were already annotated, the text annotation of the contents of layers 11 to 16 can be cancelled, that is, the annotation of lines 105 to 160 is cancelled.
By running the target network to the second target layer, a new operation result is obtained, and the operations of steps 103 and 104 can then be repeated until the error layer causing the fusion-mode operation error is found. In this way, by rerunning the target network to each updated target layer, the network structure that needs to be run can be adjusted continuously, gradually narrowing the range of possible causes of the fusion-mode operation error until the error layer is found.
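The overall search can be summarized by the following condensed sketch; it bisects layer indices directly, whereas the patent bisects text line numbers and maps them back to layers, and run_up_to() is a hypothetical placeholder for annotating the prototxt, running the fusion mode up to a layer and checking the result:

```python
# A condensed sketch of the overall search. It bisects layer indices directly, whereas
# the patent bisects text line numbers and maps them back to layers; run_up_to() is a
# hypothetical placeholder that annotates the prototxt, runs the network in fusion mode
# up to the given layer and returns True if the result matches the layer-by-layer run.

def locate_error_layer(num_layers, run_up_to):
    """Binary search over layer indices; run_up_to(k) is True if layers 1..k run correctly."""
    lo, hi = 1, num_layers            # the error layer is known to lie in [lo, hi]
    while lo < hi:
        target = (lo + hi) // 2       # current target layer
        if run_up_to(target):         # result correct -> error is after the target layer
            lo = target + 1
        else:                         # result wrong   -> error is at or before the target layer
            hi = target
    return lo                         # the error layer

# e.g. with 23 layers and the error actually in layer 16:
# locate_error_layer(23, lambda k: k < 16)   # -> 16
```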
For example, fig. 1B provides a schematic illustration of determining the error layer of a target network. As shown in fig. 1B, the prototxt file of the target network has 214 lines and the network has 23 layers. The first target layer may be determined to be layer 10, and text annotation is performed on the descriptive contents corresponding to lines 105 to 214 in the prototxt file, so as to control the target network to run to layer 10. If the result of running the target network to layer 10 is wrong, the second target layer can be determined to be layer 4; text annotation is then performed on the descriptive contents corresponding to lines 48 to 104 in the prototxt file, and the target network is controlled to run to layer 4. If the result of running the target network to layer 10 is correct, the second target layer can be determined to be layer 16; the text annotations corresponding to lines 105 to 160 in the prototxt file are then cancelled, and the target network is controlled to run to layer 16.
If the result of running the target network to layer 4 is wrong, the third target layer can be determined to be layer 1; text annotation is then performed on the descriptive contents corresponding to lines 20 to 47 in the prototxt file, and the target network is controlled to run to layer 1. If the result of running the target network to layer 4 is correct, the third target layer can be determined to be layer 6; the text annotations corresponding to lines 48 to 65 in the prototxt file are then cancelled, and the target network is controlled to run to layer 6. If the result of running the target network to layer 16 is wrong, the third target layer can be determined to be layer 13; text annotation is then performed on the descriptive contents corresponding to lines 123 to 160 in the prototxt file, and the target network is controlled to run to layer 13. If the result of running the target network to layer 16 is correct, the third target layer can be determined to be layer 19; the text annotations corresponding to lines 161 to 184 in the prototxt file are then cancelled, and the target network is controlled to run to layer 19. If the result of running the target network to layer 1 is wrong, the error layer can be determined to be layer 1.
Similarly, if the result of running the target network to layer 2 is wrong, the error layer is determined to be layer 2; if the result of running the target network to layer 2 is correct, the error layer is determined to be layer 3.
If the result of running the target network to layer 3 is correct, the error layer can be determined to be layer 4.
If the result of running the target network to layer 5 is wrong, the error layer is determined to be layer 5; if it is correct, the error layer is determined to be layer 6.
If the result of running the target network to layer 7 is wrong, the error layer is determined to be layer 7; if it is correct, the error layer is determined to be layer 8.
If the result of running the target network to layer 9 is wrong, the error layer is determined to be layer 9; if it is correct, the error layer is determined to be layer 10.
If the result of running the target network to layer 11 is wrong, the error layer is determined to be layer 11; if it is correct, the error layer is determined to be layer 12.
If the result of running the target network to layer 12 is correct, the error layer can be determined to be layer 13.
If the result of running the target network to layer 14 is wrong, the error layer is determined to be layer 14; if it is correct, the error layer is determined to be layer 15.
If the result of running the target network to layer 15 is correct, the error layer is determined to be layer 16; if the result of running the target network to layer 17 is wrong, the error layer can be determined to be layer 17.
If the result of running the target network to layer 18 is wrong, the error layer is determined to be layer 18; if it is correct, the error layer is determined to be layer 19.
If the result of running the target network to layer 20 is wrong, the error layer is determined to be layer 20; if it is correct, the error layer is determined to be layer 21.
If the result of running the target network to layer 22 is wrong, the error layer is determined to be layer 22; if it is correct, the error layer is determined to be layer 23.
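As a rough back-of-the-envelope check (not stated in the patent), each rerun roughly halves the suspect range, so for the 23-layer example the error layer is isolated in about

```latex
\lceil \log_2 23 \rceil = 5 \ \text{reruns, versus up to 23 runs when each layer is checked in turn.}
```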
According to the technical scheme, when the target network under the deep learning framework runs correctly layer by layer but the fusion mode runs incorrectly, the first target layer is determined according to the target information, the target network is run to the first target layer to obtain an operation result, the first target layer is updated according to the operation result to obtain a second target layer, and the target network is run to the second target layer, until the error layer causing the fusion-mode operation error is obtained. By rerunning the target network to each updated target layer, the network structure that needs to be run can be adjusted continuously, gradually narrowing the range of possible causes of the fusion-mode operation error until the error layer is found, so that the cause of the error can be found quickly and the efficiency of fixing the problem is improved.
The following describes the device for positioning operation errors in the fusion mode based on the deep learning framework.
Referring to fig. 2, fig. 2 provides an apparatus for positioning operation errors in a fusion mode based on a deep learning framework, which includes a processor 101 and a storage unit 102, where the processor 101 includes a controller 10 and an executor 11, and the storage unit 102 includes a register 21 and a random access memory (RAM) 22, wherein:
the controller 10 is configured to determine a first target layer according to target information when a target network under the deep learning framework operates correctly layer by layer and a fusion mode operates incorrectly, where the target information is a number of lines of text of a description text file of the target network;
the executor 11 is configured to operate the target network to the first target layer to obtain an operation result;
the controller 10 is further configured to update the first target layer according to the operation result to obtain a second target layer;
the executor 11 is further configured to run the target network to the second target layer until an error layer that causes the fusion mode operation error is obtained.
The target network refers to a neural network under a deep learning framework.
The register 21 and the RAM 22 are used for storing the description text files of the target network. The network description files comprise a text file in protobuf format (prototxt) and a binary file (caffemodel) that stores the weights and parameters.
In one possible embodiment, in the determining the first target layer according to the target information, the controller is specifically configured to:
determining a first middle line of the target network from the text line numbers of the description text file of the target network according to a dichotomy;
and determining the first target layer according to the first middle row.
In a possible embodiment, in said determining a first target layer according to said first intermediate line, said controller is specifically configured to:
determining a first layer start line or a first layer end line closest to the first intermediate line;
and determining a layer before the layer corresponding to the first layer starting line or the first layer ending line as the first target layer.
In one possible embodiment, in said running said target network to said first target layer, said executor is specifically configured to:
Performing text annotation on the descriptive contents corresponding to each layer of network behind the first target layer in the descriptive text file corresponding to the target network to obtain the annotated descriptive text file, and controlling the target network to operate through the descriptive text file;
and operating the target network to the first target layer according to the annotated description text file.
In one possible embodiment, the executor is specifically configured to, in terms of text annotation of the description content corresponding to each layer of network after the first target layer in the description text file corresponding to the target network:
and adding a preset symbol in front of the descriptive content corresponding to each row in each layer of network after the first target layer number in the descriptive text file corresponding to the target network.
In one possible embodiment, in said running said target network to said first target layer, said executor is specifically configured to:
counting the layer starting line number of each layer in the target network;
and executing the target network to the first target layer according to the layer starting line number of each layer.
In one possible embodiment, in the aspect of updating the first target layer according to the operation result to obtain a second target layer, the controller is specifically configured to:
Dividing the target network into a first partial network before the first target layer and a second partial network except the first partial network;
and determining the second target layer by using the line-number and layer-number information of the first partial network or of the second partial network according to the operation result.
In one possible embodiment, in the determining of the second target layer by using the line-number and layer-number information of the first partial network or of the second partial network, the controller is specifically configured to:
use the line-number and layer-number information of the first partial network as new target information if the operation result is wrong, or the line-number and layer-number information of the second partial network as new target information if the operation result is correct; and determine the second target layer according to the new target information.
According to the technical scheme, when the target network under the deep learning framework runs correctly layer by layer but the fusion mode runs incorrectly, the first target layer is determined according to the target information, the target network is run to the first target layer to obtain an operation result, the first target layer is updated according to the operation result to obtain a second target layer, and the target network is run to the second target layer, until the error layer causing the fusion-mode operation error is obtained. By rerunning the target network to each updated target layer, the network structure that needs to be run can be adjusted continuously, gradually narrowing the range of possible causes of the fusion-mode operation error until the error layer is found, so that the cause of the error can be found quickly and the efficiency of fixing the problem is improved.
The present application also discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the steps of the method for error localization in a fusion mode based on a deep learning framework as shown in fig. 1A.
The present application also discloses a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform the steps of the deep learning framework based fusion mode run error localization method as shown in fig. 1A.
In some embodiments, a chip is also disclosed, which includes the above-described run error localization device in a fusion mode based on a deep learning framework as shown in fig. 2.
In some embodiments, a chip package structure is disclosed, which includes the chip described above.
In some embodiments, a board card is disclosed that includes the above-described chip package structure. Referring to fig. 3, in addition to the chip 389, the board may include other supporting components, including but not limited to: a memory device 390, an interface device 391 and a control device 392;
The memory device 390 is connected to the chip in the chip package structure through a bus and is used for storing data. The memory device may include multiple groups of storage units 393, and each group of storage units is connected to the chip through a bus. It is understood that each group of storage units may be DDR SDRAM (Double Data Rate SDRAM, double data rate synchronous dynamic random access memory).
DDR can double the speed of SDRAM without increasing the clock frequency: DDR allows data to be read out on both the rising and falling edges of the clock pulse, so DDR is twice as fast as standard SDRAM. In one embodiment, the memory device may include 4 groups of storage units, and each group may include a plurality of DDR4 chips. In one embodiment, the chip may include four 72-bit DDR4 controllers, where 64 bits of each 72-bit controller are used to transfer data and 8 bits are used for ECC checking. It is understood that when DDR4-3200 chips are used in each group of storage units, the theoretical bandwidth of data transfer can reach 25600 MB/s.
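For reference, the quoted figure is consistent with the data-rate arithmetic (a rough check, not taken from the patent text):

```latex
3200\ \text{MT/s} \times 64\ \text{bit} \div 8\ \tfrac{\text{bit}}{\text{byte}} = 25600\ \text{MB/s per group of DDR4-3200 memory}
```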
In one embodiment, each set of memory cells includes a plurality of double rate synchronous dynamic random access memories arranged in parallel. DDR can transfer data twice in one clock cycle. And a controller for controlling DDR is arranged in the chip and is used for controlling data transmission and data storage of each storage unit.
The interface device is electrically connected to the chip in the chip package structure and is used for implementing data transmission between the chip and an external device, such as a server or a computer. For example, in one embodiment, the interface device may be a standard PCIe interface, and the data to be processed is transferred from the server to the chip through the standard PCIe interface to implement the data transfer. Preferably, when a PCIe 3.0 x16 interface is used for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device may be another interface; the application is not limited to the specific form of the other interface, as long as the interface unit can implement the transfer function. In addition, the calculation results of the chip are transmitted back to the external device (e.g., a server) by the interface device.
The control device is electrically connected with the chip and is used for monitoring the state of the chip. Specifically, the chip and the control device may be electrically connected through an SPI interface. The control device may comprise a micro controller unit (MCU). The chip may include a plurality of processing chips, a plurality of processing cores, or a plurality of processing circuits, and may drive a plurality of loads, so the chip can be in different working states such as multi-load and light-load. The control device can regulate and control the working states of the plurality of processing chips, the plurality of processing cores and/or the plurality of processing circuits in the chip.
In some embodiments, an electronic device is provided that includes the chip or the board.
The electronic device includes a data processing device, a robot, a computer, a printer, a scanner, a tablet, an intelligent terminal, a cell phone, a vehicle recorder, a navigator, a sensor, a camera, a server, a cloud server, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
The vehicle comprises an aircraft, a ship and/or a vehicle; the household appliances comprise televisions, air conditioners, microwave ovens, refrigerators, electric cookers, humidifiers, washing machines, electric lamps, gas cookers and range hoods; the medical device includes a nuclear magnetic resonance apparatus, a B-mode ultrasonic apparatus, and/or an electrocardiograph apparatus.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional manners of dividing the actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units described above may be implemented either in hardware or in software program modules.
The integrated units, if implemented in the form of software program modules, may be stored in a computer-readable memory for sale or use as a stand-alone product. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The foregoing has outlined rather broadly the more detailed description of embodiments of the present application, wherein specific examples are provided herein to illustrate the principles and embodiments of the present application, the above examples being provided solely to assist in the understanding of the methods of the present application and the core ideas thereof; meanwhile, as those skilled in the art will have modifications in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.
Claims (13)
1. A method for locating an operation error in a fusion mode based on a deep learning framework, the method comprising:
when a target network under the deep learning framework runs correctly layer by layer but runs incorrectly in the fusion mode, determining a first target layer according to target information, wherein the target information is the number of text lines of a description text file of the target network;
running the target network to the first target layer to obtain an operation result;
updating the first target layer according to the operation result to obtain a second target layer; and
running the target network to the second target layer, and so on, until an error layer causing the fusion mode operation error is obtained;
wherein the updating the first target layer according to the operation result to obtain a second target layer comprises:
dividing the target network into a first partial network before the first target layer and a second partial network other than the first partial network; and
determining the second target layer according to the operation result by using line count and layer count information of the first partial network or line count and layer count information of the second partial network;
wherein the determining the second target layer by using the line count and layer count information of the first partial network or the line count and layer count information of the second partial network comprises:
if the operation result is incorrect, using the line count and layer count information of the first partial network as new target information; if the operation result is correct, using the line count and layer count information of the second partial network as new target information; and
determining the second target layer according to the new target information.
2. The method of claim 1, wherein the determining a first target layer according to the target information comprises:
determining a first middle line of the target network from the text line numbers of the description text file of the target network by bisection; and
determining the first target layer according to the first middle line.
3. The method of claim 2, wherein the determining the first target layer according to the first middle line comprises:
determining a first layer start line or a first layer end line closest to the first middle line; and
determining a layer preceding the layer corresponding to the first layer start line or the first layer end line as the first target layer.
4. The method of claim 3, wherein the running the target network to the first target layer comprises:
performing text annotation on description content corresponding to each layer after the first target layer in the description text file corresponding to the target network, to obtain an annotated description text file, wherein the target network is controlled to run through the description text file; and
running the target network to the first target layer according to the annotated description text file.
5. The method of claim 4, wherein the performing text annotation on the description content corresponding to each layer after the first target layer in the description text file corresponding to the target network comprises:
adding a preset symbol in front of the description content of each line in each layer after the first target layer in the description text file corresponding to the target network.
6. The method of claim 1, wherein the running the target network to the first target layer comprises:
counting a layer start line number of each layer in the target network; and
running the target network to the first target layer according to the layer start line number of each layer.
7. A device for locating an operation error in a fusion mode based on a deep learning framework, comprising a controller and an executor, wherein:
the controller is configured to, when a target network under the deep learning framework runs correctly layer by layer but runs incorrectly in the fusion mode, determine a first target layer according to target information, wherein the target information is the number of text lines of a description text file of the target network;
the executor is configured to run the target network to the first target layer to obtain an operation result;
the controller is further configured to update the first target layer according to the operation result to obtain a second target layer; and
the executor is further configured to run the target network to the second target layer, and so on, until an error layer causing the fusion mode operation error is obtained;
wherein, in updating the first target layer according to the operation result to obtain a second target layer, the controller is specifically configured to:
divide the target network into a first partial network before the first target layer and a second partial network other than the first partial network; and
determine the second target layer according to the operation result by using line count and layer count information of the first partial network or line count and layer count information of the second partial network;
wherein, in determining the second target layer by using the line count and layer count information of the first partial network or the line count and layer count information of the second partial network, the controller is specifically configured to:
if the operation result is incorrect, use the line count and layer count information of the first partial network as new target information; if the operation result is correct, use the line count and layer count information of the second partial network as new target information; and determine the second target layer according to the new target information.
8. The device of claim 7, wherein, in determining the first target layer according to the target information, the controller is specifically configured to:
determine a first middle line of the target network from the text line numbers of the description text file of the target network by bisection; and
determine the first target layer according to the first middle line.
9. The device of claim 8, wherein, in determining the first target layer according to the first middle line, the controller is specifically configured to:
determine a first layer start line or a first layer end line closest to the first middle line; and
determine a layer preceding the layer corresponding to the first layer start line or the first layer end line as the first target layer.
10. The device of claim 9, wherein, in running the target network to the first target layer, the executor is specifically configured to:
perform text annotation on description content corresponding to each layer after the first target layer in the description text file corresponding to the target network, to obtain an annotated description text file, wherein the target network is controlled to run through the description text file; and
run the target network to the first target layer according to the annotated description text file.
11. The device of claim 10, wherein, in performing text annotation on the description content corresponding to each layer after the first target layer in the description text file corresponding to the target network, the executor is specifically configured to:
add a preset symbol in front of the description content of each line in each layer after the first target layer in the description text file corresponding to the target network.
12. The device of claim 11, wherein, in running the target network to the first target layer, the executor is specifically configured to:
count a layer start line number of each layer in the target network; and
run the target network to the first target layer according to the layer start line number of each layer.
13. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
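For illustration only (not part of the claims): a minimal Python sketch of the bisection-style error localization recited in claims 1-3 and 6, assuming a Caffe-style prototxt description file in which every layer block begins with a line containing `layer {`. The helper names, the `run_fused_up_to` callback, and the layer-boundary convention are all hypothetical and are not defined by the patent.

```python
# Illustrative sketch of the bisection-style localization in claims 1-3 and 6.
# Assumes a Caffe-style prototxt in which each layer block starts with a line
# beginning with "layer {". run_fused_up_to(layer_index) is a hypothetical hook
# that runs the network in fusion mode up to and including that layer and
# returns True if the run is correct.

def layer_start_lines(prototxt_lines):
    """Collect the 0-based start line of every layer block (claim 6)."""
    return [i for i, line in enumerate(prototxt_lines)
            if line.strip().startswith("layer {")]

def nearest_layer_before(middle_line, start_lines):
    """Map a middle line to the last layer starting at or before it (claim 3)."""
    candidates = [idx for idx, start in enumerate(start_lines) if start <= middle_line]
    return candidates[-1] if candidates else 0

def locate_error_layer(prototxt_lines, run_fused_up_to):
    """Bisect over the file's text line range until the error layer is found
    (claims 1 and 2)."""
    start_lines = layer_start_lines(prototxt_lines)
    low, high = 0, len(prototxt_lines) - 1        # target information: text line range
    error_layer = len(start_lines) - 1            # worst case: the last layer
    while low <= high:
        middle_line = (low + high) // 2           # first middle line, by bisection
        target_layer = nearest_layer_before(middle_line, start_lines)
        if run_fused_up_to(target_layer):
            # Correct so far: keep the line range of the second partial network.
            if target_layer + 1 >= len(start_lines):
                break
            low = start_lines[target_layer + 1]
        else:
            # Error reproduced: keep the line range of the first partial network.
            error_layer = target_layer
            high = start_lines[target_layer] - 1
    return error_layer
```

Because each iteration halves the remaining line range, the number of partial runs grows logarithmically with the size of the description file rather than linearly with the number of layers.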
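Likewise for illustration only: a minimal sketch of the text-annotation step in claims 4-5 and 11, under the same hypothetical prototxt assumptions as above. The preset symbol is taken to be `#`, on the assumption that the framework treats lines starting with that symbol as comments; the actual symbol is whatever the target framework ignores when parsing the description text file.

```python
# Illustrative sketch of the annotation in claims 4-5: every line belonging to
# a layer after the target layer is prefixed with a preset symbol (here '#'),
# so the framework only builds and runs the network up to the target layer.
# start_lines is the list produced by layer_start_lines() in the previous sketch.

def annotate_after_target(prototxt_lines, start_lines, target_layer, symbol="#"):
    """Return a copy of the description file with every layer after
    target_layer commented out."""
    if target_layer + 1 >= len(start_lines):
        return list(prototxt_lines)               # no layers after the target layer
    cutoff = start_lines[target_layer + 1]        # first line of the next layer
    return [line if i < cutoff else symbol + line
            for i, line in enumerate(prototxt_lines)]
```

A caller could write the annotated lines to a temporary file and pass that file to the framework's normal model loader, which is one way the step of controlling the target network to run through the description text file could be realized.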
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910298218.2A CN110083532B (en) | 2019-04-12 | 2019-04-12 | Method and device for positioning operation errors in fusion mode based on deep learning framework |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110083532A (en) | 2019-08-02
CN110083532B (en) | 2023-05-23
Family
ID=67415203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910298218.2A Active CN110083532B (en) | 2019-04-12 | 2019-04-12 | Method and device for positioning operation errors in fusion mode based on deep learning framework |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110083532B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110751272B (en) * | 2019-10-30 | 2021-02-23 | 珠海格力电器股份有限公司 | Method, device and storage medium for positioning data in convolutional neural network model |
CN112116081B (en) * | 2020-09-29 | 2023-09-08 | 杭州海康威视数字技术股份有限公司 | Optimization method and device for deep learning network |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101256526A (en) * | 2008-03-10 | 2008-09-03 | 清华大学 | Method for implementing document condition compatibility maintenance in inspection point fault-tolerant technique |
CN107306419A (en) * | 2016-04-21 | 2017-10-31 | 中国移动通信集团广东有限公司 | A kind of end-to-end quality appraisal procedure and device |
CN109067581A (en) * | 2018-08-03 | 2018-12-21 | 中国联合网络通信集团有限公司 | Calculating network selecting method and platform based on analytic hierarchy process (AHP) |
CN109299216A (en) * | 2018-10-29 | 2019-02-01 | 山东师范大学 | A kind of cross-module state Hash search method and system merging supervision message |
CN109582579A (en) * | 2018-11-30 | 2019-04-05 | 腾讯音乐娱乐科技(深圳)有限公司 | Applied program testing method, device, electronic equipment and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050038832A1 (en) * | 2003-08-14 | 2005-02-17 | International Business Machines Corporation | Application error recovery using solution database |
Also Published As
Publication number | Publication date |
---|---|
CN110083532A (en) | 2019-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110096310B (en) | Operation method, operation device, computer equipment and storage medium | |
CN110119807B (en) | Operation method, operation device, computer equipment and storage medium | |
CN110083532B (en) | Method and device for positioning operation errors in fusion mode based on deep learning framework | |
CN109739703A (en) | Adjust wrong method and Related product | |
CN109754084A (en) | Processing method, device and the Related product of network structure | |
CN111353124A (en) | Operation method, operation device, computer equipment and storage medium | |
WO2021185262A1 (en) | Computing apparatus and method, board card, and computer readable storage medium | |
CN110261758B (en) | Device under test verification device and related product | |
CN111061507A (en) | Operation method, operation device, computer equipment and storage medium | |
CN111723920B (en) | Artificial intelligence computing device and related products | |
CN110020720B (en) | Operator splicing method and device | |
CN111258732B (en) | Data processing method, data processing device and electronic equipment | |
CN111047030A (en) | Operation method, operation device, computer equipment and storage medium | |
CN111026440B (en) | Operation method, operation device, computer equipment and storage medium | |
CN111124497B (en) | Operation method, operation device, computer equipment and storage medium | |
CN111340202A (en) | Operation method, device and related product | |
CN111339060B (en) | Operation method, device, computer equipment and storage medium | |
CN111338694B (en) | Operation method, device, computer equipment and storage medium | |
CN111275197B (en) | Operation method, device, computer equipment and storage medium | |
CN111290789B (en) | Operation method, operation device, computer equipment and storage medium | |
CN112395008A (en) | Operation method, operation device, computer equipment and storage medium | |
CN113033791B (en) | Computing device, integrated circuit device, board card and order preserving method for order preserving | |
CN111767999A (en) | Data processing method and device and related products | |
CN111353125B (en) | Operation method, operation device, computer equipment and storage medium | |
CN117591378B (en) | Temperature control method, system, equipment and storage medium of server |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 100000 room 644, No. 6, No. 6, South Road, Beijing Academy of Sciences; Applicant after: Zhongke Cambrian Technology Co.,Ltd. Address before: 100000 room 644, No. 6, No. 6, South Road, Beijing Academy of Sciences; Applicant before: Beijing Zhongke Cambrian Technology Co.,Ltd. |
| GR01 | Patent grant | |