CN114021691A - Neural network model quantization method, system, device and computer readable medium - Google Patents

Neural network model quantization method, system, device and computer readable medium

Info

Publication number
CN114021691A
Authority
CN
China
Prior art keywords
model
quantization
neural network
quantized
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111190372.1A
Other languages
Chinese (zh)
Inventor
陈其宾
李锐
张晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Science Research Institute Co Ltd
Original Assignee
Shandong Inspur Science Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Science Research Institute Co Ltd
Priority to CN202111190372.1A
Publication of CN114021691A
Priority to PCT/CN2022/105317
Status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a neural network model quantization method, system, device and computer readable medium, belonging to the technical field of neural networks and aiming at solving the technical problem of how to avoid calculating the activation value quantization factor during inference. The method comprises the following steps: constructing a neural network model and training it; for the target model, calculating a model weight quantization factor based on the quantization range by computing the maximum absolute value of the model weights; for each layer of the target model, calculating an activation value quantization factor by minimizing the mean square error; for each layer of the target model, performing model inference with the fixed-point quantized weights and activation values, and inversely quantizing the inference result into the int32 data type; for each layer of the target model, quantizing each operator in an asymmetric quantization manner, quantizing the floating point model weights to the int8 data type and the activation values to the uint8 data type.

Description

Neural network model quantization method, system, device and computer readable medium
Technical Field
The invention relates to the technical field of neural networks, in particular to a neural network model quantization method, system, device and computer readable medium.
Background
In recent years, neural network models have been widely used in many fields and have achieved excellent results. However, owing to high model complexity and large model size, neural network models suffer from low inference efficiency and long inference time, especially when running on low-performance mobile devices and low-power devices. Therefore, how to design a model that consumes few resources and predicts in real time while preserving prediction accuracy has become a practical problem. Low-power devices such as single-chip microcomputers require models with low resource consumption. Fields with demanding real-time requirements, such as speech recognition and automatic driving, require models capable of real-time prediction. To address this problem, one can start from efficient model architecture design, model architectures tailored to specific hardware, network pruning, knowledge distillation, model quantization, and so on. Model quantization works well on this problem: quantizing a model from floating point to fixed point effectively reduces the model size while improving inference speed. For example, a model with ten million 32-bit floating point parameters occupies about 40 MB, while the same model quantized to 8-bit integers occupies about 10 MB, a fourfold reduction.
In order to improve the model inference speed, how to avoid calculating the activation value quantization factor during inference is a technical problem to be solved.
Disclosure of Invention
The technical task of the present invention is to provide a neural network model quantization method, system, device and computer readable medium to solve the technical problem of how to avoid calculating the activation value quantization factor during inference.
In a first aspect, a method for quantizing a neural network model according to the present invention calculates an activation value quantization factor for each layer of the neural network model by a minimization equation before inference is performed by the neural network model, the method including the steps of:
constructing a neural network model, and training the neural network model to obtain a floating point type neural network model as a target model;
for the target model, calculating a model weight quantization factor based on the quantization range by computing the maximum absolute value of the model weights;
for each layer of the target model, calculating an activation value quantization factor of each layer of the target model by minimizing a mean square error;
for each layer of the target model, performing model inference with the fixed-point quantized weights and activation values, and inversely quantizing the inference result into the int32 data type;
for each layer of the target model, quantizing each operator in an asymmetric quantization mode, quantizing the floating point model weights into the int8 data type and the activation values into the uint8 data type, to obtain the final quantized model.
Preferably, for the target model, the model weights are quantized to the int8 type, with a quantization range of [-128, 127].
Preferably, a test data set is obtained, the mean square error between the quantized and unquantized outputs of each layer of the target model is calculated based on the test data set, and the activation value quantization factor is obtained by minimizing this mean square error.
Preferably, the mean square error formula is:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}$$

where n is the number of samples, $y_i$ denotes the unquantized output and $\hat{y}_i$ denotes the quantized output.
In a second aspect, the present invention provides a neural network model quantization system for calculating an activation value quantization factor for each layer of a neural network model by a minimization equation before inference is performed by the neural network model, the system comprising:
the construction and training module, which is used for constructing a neural network model and training the neural network model to obtain a floating point neural network model as the target model;
the quantization factor calculation module, which is applied to the target model and is used for calculating the model weight quantization factor based on the quantization range by computing the maximum absolute value of the model weights;
the activation value quantization factor calculation module is applied to each layer of the target model and used for calculating the activation value quantization factor of each layer of the target model by minimizing the mean square error;
the inference and inverse quantization module, which is applied to each layer of the target model and is used for performing model inference with the fixed-point quantized weights and activation values and inversely quantizing the inference result into the int32 data type;
and the final quantization module, which is applied to each layer of the target model and is used for quantizing each operator in an asymmetric quantization mode, quantizing the floating point model weights into the int8 data type, and quantizing the activation values into the uint8 data type to obtain the final quantized model.
Preferably, for the target model, the model weights are quantized to the int8 type, with a quantization range of [-128, 127].
Preferably, the activation value quantization factor calculation module is configured to calculate the activation value quantization factor by:
and acquiring a test set data set, calculating the mean square error of each layer of quantized and unquantized target models, and obtaining an activation value quantization factor by minimizing the mean square error.
Preferably, the mean square error formula is:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}$$

where n is the number of samples, $y_i$ denotes the unquantized output and $\hat{y}_i$ denotes the quantized output.
In a third aspect, the apparatus of the present invention comprises: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor is configured to invoke the machine-readable program to perform the method of any of the first aspects.
In a fourth aspect, the present invention provides a computer readable medium having stored thereon computer instructions, which, when executed by a processor, cause the processor to perform the method of any of the first aspects.
The neural network model quantization method, system, device and computer readable medium of the invention have the following advantages: the activation value quantization factor of each layer is calculated in advance, before model inference, by minimizing the mean square error; this preserves model precision while improving inference speed, since no quantization factor has to be computed at inference time.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a neural network model quantization method according to embodiment 1.
Detailed Description
The present invention is further described in the following with reference to the drawings and the specific embodiments so that those skilled in the art can better understand the present invention and can implement the present invention, but the embodiments are not to be construed as limiting the present invention, and the embodiments and the technical features of the embodiments can be combined with each other without conflict.
The embodiment of the invention provides a neural network model quantization method, system, device and computer readable medium, which are used for solving the technical problem of how to avoid calculating the activation value quantization factor during inference.
Example 1:
the invention relates to a neural network model quantification method, which is characterized in that before the neural network model carries out reasoning, an activation value quantification factor of each layer of the neural network model is calculated through a minimization equation, and the method comprises the following steps:
s100, constructing a neural network model, and training the neural network model to obtain a floating point type neural network model as a target model;
s200, calculating a model weight quantization factor based on a quantization range by calculating the maximum value of the absolute value of the model weight for the target model;
s300, for each layer of the target model, calculating an activation value quantization factor of each layer of the target model by minimizing a mean square error;
s400, carrying out model reasoning on each layer of the target model through the quantized weight and the activation value of the fixed point type, and inversely quantizing a reasoning result into an int32 data type;
s500, quantizing each operator in an asymmetric quantization mode for each layer of the target model, quantizing the weight of the floating point type model into int8 data type, and quantizing the activation value into uint8 data type to obtain a final quantized model.
In this embodiment, step S200 calculates the model weight quantization factor based on the quantization range and the maximum absolute value of the model weights; the model weights are quantized to the int8 type, so the quantization range is [-128, 127]. A minimal sketch of this step follows.
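The NumPy sketch below is illustrative only: the factor definition scale = max|W| / 127 and the function names are assumptions consistent with the description, not the patent's prescribed implementation.

```python
import numpy as np

def weight_quant_factor(w: np.ndarray) -> float:
    """Weight quantization factor derived from the maximum absolute weight."""
    max_abs = float(np.max(np.abs(w)))
    return max_abs / 127.0 if max_abs > 0 else 1.0  # guard against all-zero weights

def quantize_weights(w: np.ndarray):
    """Quantize floating point weights to int8 in the range [-128, 127]."""
    scale = weight_quant_factor(w)
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale
```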
Step S300 obtains the activation value quantization factor of each layer by minimizing the mean square error: based on a subset of the test data set, the mean square error between the quantized output and the unquantized output of each layer is calculated, and the factor that minimizes this error is selected (a search sketch is given below the formula). The mean square error formula is:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}$$

where n is the number of samples, $y_i$ denotes the unquantized output and $\hat{y}_i$ denotes the quantized output.
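One way to realize this minimization is a grid search over candidate scales, keeping the scale whose quantize-dequantize round trip yields the smallest mean square error against the original activations. The sketch below is a hedged illustration: the candidate grid, the assumption of non-negative (post-ReLU) activations so that the zero point is 0, and the function names are all assumptions rather than the patent's exact procedure.

```python
import numpy as np

def mse(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Mean square error between the unquantized and quantized outputs."""
    return float(np.mean((y - y_hat) ** 2))

def activation_quant_factor(acts: np.ndarray, n_candidates: int = 100) -> float:
    """Search for the uint8 scale that minimizes the quantization MSE."""
    max_abs = float(np.max(np.abs(acts)))
    if max_abs == 0.0:
        return 1.0
    best_scale, best_err = 1.0, float("inf")
    for k in range(1, n_candidates + 1):
        scale = (max_abs * k / n_candidates) / 255.0  # candidate factor
        q = np.clip(np.round(acts / scale), 0, 255)   # fake-quantize to uint8
        err = mse(acts, q * scale)                    # dequantize and compare
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale
```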
Steps S400 and S500 are repeated for each layer of the model to finally obtain the model output; a sketch of one such quantized layer follows.
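Putting steps S400 and S500 together for a single fully connected layer gives the end-to-end sketch below. Everything here is an assumption consistent with the description rather than the patent's exact procedure: the asymmetric uint8 mapping with a zero point, the symmetric int8 weights, all helper names, and the reading of "inversely quantizing the inference result" as rescaling the int32 accumulator back to real values.

```python
import numpy as np

def quantize_activations_asym(x: np.ndarray, scale: float):
    """Asymmetric uint8 quantization of activations with a zero point."""
    zero_point = int(np.clip(np.round(-x.min() / scale), 0, 255))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, zero_point

def quantized_dense(x_q: np.ndarray, x_zp: int, x_scale: float,
                    w_q: np.ndarray, w_scale: float) -> np.ndarray:
    """Fixed-point inference with int32 accumulation, then inverse quantization."""
    # Accumulate in int32 so the uint8 x int8 products cannot overflow (step S400).
    acc_i32 = (x_q.astype(np.int32) - x_zp) @ w_q.astype(np.int32)
    # Inverse-quantize the int32 result back to real-valued layer outputs.
    return acc_i32.astype(np.float32) * (x_scale * w_scale)

# Usage with the helpers sketched above (hypothetical names):
# w_q, w_scale = quantize_weights(w)              # symmetric int8 weights
# a_scale = activation_quant_factor(calib_acts)   # MSE-minimizing factor
# x_q, x_zp = quantize_activations_asym(x, a_scale)
# y = quantized_dense(x_q, x_zp, a_scale, w_q, w_scale)
```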
Example 2:
the neural network model quantization system is used for calculating the activation value quantization factor of each layer of the neural network model through a minimization equation before the neural network model carries out reasoning. The system comprises a construction training module, a quantization factor calculation module, an activation value quantization factor calculation module, an inference inverse quantization module and a final quantization module, wherein the construction training module is used for constructing a neural network model and training the neural network model to obtain a floating point type neural network model as a target model; the quantization factor calculation module is applied to the target model and used for calculating the model weight quantization factor based on the quantization range by calculating the maximum value of the absolute value of the model weight; the activation value quantization factor calculation module is applied to each layer of the target model and used for calculating the activation value quantization factor of each layer of the target model by minimizing the mean square error; the reasoning inverse quantization module is applied to each layer of the target model and is used for carrying out model reasoning through the quantized weight and the activation value of the fixed point type and inversely quantizing a reasoning result into an int32 data type; and the final quantization module is applied to each layer of the target model and used for quantizing each operator in an asymmetric quantization mode, quantizing the weight of the floating point type model into int8 data type, and quantizing the activation value into uint8 data type to obtain a final quantized model.
For the target model, the model weights are quantized to the int8 type, with a quantization range of [-128, 127].
The activation value quantization factor calculation module calculates the activation value quantization factor as follows: a test data set is acquired, the mean square error between the quantized and unquantized outputs of each layer of the target model is calculated, and the activation value quantization factor is obtained by minimizing this mean square error. The mean square error formula is:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}$$

where n is the number of samples, $y_i$ denotes the unquantized output and $\hat{y}_i$ denotes the quantized output.
Example 3:
the apparatus of the present invention comprises: at least one memory and at least one processor; at least one memory for storing a machine readable program; and the at least one processor is used for calling the machine-readable program to execute the method disclosed by the embodiment 1 of the invention.
Example 4:
the embodiment of the invention also provides a computer readable medium, wherein computer instructions are stored on the computer readable medium, and when the computer instructions are executed by a processor, the processor is enabled to execute the method disclosed in the embodiment 1 of the invention. Specifically, a system or an apparatus equipped with a storage medium on which software program codes that realize the functions of any of the above-described embodiments are stored may be provided, and a computer (or a CPU or MPU) of the system or the apparatus is caused to read out and execute the program codes stored in the storage medium.
In this case, the program code itself read from the storage medium can realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code constitute a part of the present invention.
Examples of the storage medium for supplying the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD + RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer via a communications network.
Further, it should be clear that the functions of any one of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform a part or all of the actual operations based on instructions of the program code.
Further, it is to be understood that the program code read out from the storage medium is written to a memory provided in an expansion board inserted into the computer or to a memory provided in an expansion unit connected to the computer, and then causes a CPU or the like mounted on the expansion board or the expansion unit to perform part or all of the actual operations based on instructions of the program code, thereby realizing the functions of any of the above-described embodiments.
It should be noted that not all steps and modules in the above flows and system structure diagrams are necessary, and some steps or modules may be omitted according to actual needs. The execution order of the steps is not fixed and can be adjusted as required. The system structure described in the above embodiments may be a physical structure or a logical structure, that is, some modules may be implemented by the same physical entity, or some modules may be implemented by a plurality of physical entities, or some components in a plurality of independent devices may be implemented together.
In the above embodiments, the hardware unit may be implemented mechanically or electrically. For example, a hardware element may comprise permanently dedicated circuitry or logic (such as a dedicated processor, FPGA or ASIC) to perform the corresponding operations. The hardware elements may also comprise programmable logic or circuitry, such as a general purpose processor or other programmable processor, that may be temporarily configured by software to perform the corresponding operations. The specific implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
While the invention has been shown and described in detail in the drawings and in the preferred embodiments, it is not intended to limit the invention to the embodiments disclosed, and it will be apparent to those skilled in the art that many more embodiments of the invention are possible that combine the features of the different embodiments described above and still fall within the scope of the invention.

Claims (10)

1. A neural network model quantization method is characterized in that before inference is carried out on a neural network model, an activation value quantization factor of each layer of the neural network model is calculated through a minimization equation, and the method comprises the following steps:
constructing a neural network model, and training the neural network model to obtain a floating point type neural network model as a target model;
for the target model, calculating a model weight quantization factor based on the quantization range by computing the maximum absolute value of the model weights;
for each layer of the target model, calculating an activation value quantization factor of each layer of the target model by minimizing a mean square error;
for each layer of the target model, performing model inference with the fixed-point quantized weights and activation values, and inversely quantizing the inference result into the int32 data type;
for each layer of the target model, quantizing each operator in an asymmetric quantization mode, quantizing the floating point model weights into the int8 data type, and quantizing the activation values into the uint8 data type to obtain the final quantized model.
2. The neural network model quantization method of claim 1, wherein for the target model, the model weights are quantized to the int8 type, with a quantization range of [-128, 127].
3. The neural network model quantization method of claim 1 or 2, wherein a test data set is obtained, the mean square error between the quantized output and the unquantized output of each layer of the target model is calculated based on the test data set, and the activation value quantization factor is obtained by minimizing this mean square error.
4. The neural network model quantization method of claim 3, wherein the mean square error formula is:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}$$

wherein n is the number of samples, $y_i$ denotes the unquantized output and $\hat{y}_i$ denotes the quantized output.
5. A neural network model quantization system for calculating an activation value quantization factor for each layer of a neural network model by a minimization equation before inference is performed by the neural network model, the system comprising:
the construction and training module, which is used for constructing a neural network model and training the neural network model to obtain a floating point neural network model as the target model;
the quantization factor calculation module, which is applied to the target model and is used for calculating the model weight quantization factor based on the quantization range by computing the maximum absolute value of the model weights;
the activation value quantization factor calculation module is applied to each layer of the target model and used for calculating the activation value quantization factor of each layer of the target model by minimizing the mean square error;
the inference and inverse quantization module, which is applied to each layer of the target model and is used for performing model inference with the fixed-point quantized weights and activation values and inversely quantizing the inference result into the int32 data type;
and the final quantization module, which is applied to each layer of the target model and is used for quantizing each operator in an asymmetric quantization mode, quantizing the floating point model weights into the int8 data type, and quantizing the activation values into the uint8 data type to obtain the final quantized model.
6. The neural network model quantization system of claim 5, wherein for the target model, the model weights are quantized to the int8 type, with a quantization range of [-128, 127].
7. The neural network model quantization system of claim 5 or 6, wherein the activation value quantization factor calculation module is configured to calculate the activation value quantization factor by:
and acquiring a test set data set, calculating the mean square error of each layer of quantized and unquantized target models, and obtaining an activation value quantization factor by minimizing the mean square error.
8. The neural network model quantization system of claim 7, wherein the mean square error formula is:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}$$

wherein n is the number of samples, $y_i$ denotes the unquantized output and $\hat{y}_i$ denotes the quantized output.
9. An apparatus, comprising: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor, configured to invoke the machine readable program to perform the method of any of claims 1 to 4.
10. Computer readable medium, characterized in that it has stored thereon computer instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 4.
CN202111190372.1A 2021-10-13 2021-10-13 Neural network model quantification method, system, device and computer readable medium Withdrawn CN114021691A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111190372.1A CN114021691A (en) 2021-10-13 2021-10-13 Neural network model quantification method, system, device and computer readable medium
PCT/CN2022/105317 WO2023060959A1 (en) 2021-10-13 2022-07-13 Neural network model quantification method, system and device, and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111190372.1A CN114021691A (en) 2021-10-13 2021-10-13 Neural network model quantification method, system, device and computer readable medium

Publications (1)

Publication Number Publication Date
CN114021691A (en) 2022-02-08

Family

ID=80056227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111190372.1A Withdrawn CN114021691A (en) 2021-10-13 2021-10-13 Neural network model quantification method, system, device and computer readable medium

Country Status (2)

Country Link
CN (1) CN114021691A (en)
WO (1) WO2023060959A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114492778A (en) * 2022-02-16 2022-05-13 安谋科技(中国)有限公司 Operation method of neural network model, readable medium and electronic device
CN114821660A (en) * 2022-05-12 2022-07-29 山东浪潮科学研究院有限公司 Pedestrian detection inference method based on embedded equipment
WO2023060959A1 (en) * 2021-10-13 2023-04-20 山东浪潮科学研究院有限公司 Neural network model quantification method, system and device, and computer-readable medium
WO2024031989A1 (en) * 2022-08-11 2024-02-15 山东浪潮科学研究院有限公司 Memory optimization method and system for deep learning reasoning of embedded device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12008467B2 (en) * 2019-07-01 2024-06-11 Baidu Usa Llc Asymmetric quantization for compression and for acceleration of inference for neural networks
CN111814955B (en) * 2020-06-19 2024-05-31 浙江大华技术股份有限公司 Quantification method and equipment for neural network model and computer storage medium
CN111950715A (en) * 2020-08-24 2020-11-17 云知声智能科技股份有限公司 8-bit integer full-quantization inference method and device based on self-adaptive dynamic shift
CN111950716A (en) * 2020-08-25 2020-11-17 云知声智能科技股份有限公司 Quantification method and system for optimizing int8
CN112766484A (en) * 2020-12-30 2021-05-07 上海熠知电子科技有限公司 Floating point neural network model quantization system and method
CN114021691A (en) * 2021-10-13 2022-02-08 山东浪潮科学研究院有限公司 Neural network model quantification method, system, device and computer readable medium


Also Published As

Publication number Publication date
WO2023060959A1 (en) 2023-04-20

Similar Documents

Publication Publication Date Title
CN114021691A (en) Neural network model quantification method, system, device and computer readable medium
US11783227B2 (en) Method, apparatus, device and readable medium for transfer learning in machine learning
US10360899B2 (en) Method and device for processing speech based on artificial intelligence
US11861474B2 (en) Dynamic placement of computation sub-graphs
CN112232497A (en) Method, system, device and medium for compiling AI chip
CN112689303B (en) Edge cloud cooperative resource joint allocation method, system and application
CN111985495B (en) Model deployment method, device, system and storage medium
CN114528924B (en) Image classification model reasoning method, device, equipment and medium
CN115544815A (en) Method and device for generating fan model
CN112633502B (en) Cross-platform execution method and device of deep learning model and electronic equipment
CN111144571A (en) Deep learning reasoning operation method and middleware
CN114926701A (en) Model training method, target detection method and related equipment
CN113673532B (en) Target detection method and device based on quantitative model
CN115940202A (en) Multi-inverter power distribution control method, device and equipment based on artificial intelligence
CN115526320A (en) Neural network model inference acceleration method, apparatus, electronic device and medium
CN112633516B (en) Performance prediction and machine learning compiling optimization method and device
CN113779366A (en) Automatic optimization deployment method and device for neural network architecture for automatic driving
CN115222025B (en) Artificial intelligence model deployment and artificial intelligence operation method and system
CN116187235A (en) Method and system for designing chip architecture based on mathematical modeling
CN117440248B (en) Method and system for realizing target servo intelligent control based on axial image stabilization technology
CN117931211A (en) Model deployment method, device, apparatus, chip and storage medium
CN117741607A (en) Target parameter estimation method and device based on intelligent optimization algorithm
CN116431205A (en) Data stream policy generation method and device, electronic equipment and storage medium
CN117149438A (en) Method and device for obtaining computing task unloading strategy
CN116468116A (en) Model searching method, device, chip, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220208