CN113033767A - Knowledge distillation-based data compression recovery method and system for neural network

Info

Publication number
CN113033767A
Authority
CN
China
Prior art keywords: data, network, knowledge, knowledge distillation, representing
Prior art date: 2021-02-19
Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Application number
CN202110188891.8A
Other languages
Chinese (zh)
Inventor
田永鸿
马力
高峰
彭佩玺
邢培银
高文
Current Assignee: Peking University
Original Assignee: Peking University
Priority date: 2021-02-19
Filing date: 2021-02-19
Publication date: 2021-06-25
Application filed by Peking University
Priority to CN202110188891.8A
Publication of CN113033767A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/04 - Protocols for data compression, e.g. ROHC
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The disclosure relates to the technical field of neural networks, and in particular to a knowledge distillation-based data compression recovery method and system for a neural network. The method comprises the following steps: inputting raw data into a knowledge distillation network for training; inputting compressed data, obtained by compressing the raw data, into a target network; analyzing and training the target network fed with the compressed data through the knowledge distillation network; and outputting the trained target network data to obtain a recovered analysis result. The system comprises: a data compression module for generating compressed data; a knowledge distillation module for distilling the data received from the data compression module into a high-quality feature stream; and an inference module for deploying the recovered analysis result. The method and system reduce the loss that compression causes to image, video, or audio data, greatly improve the performance of the target network, and yield analysis results of greatly improved accuracy.

Description

Knowledge distillation-based data compression recovery method and system for neural network
Technical Field
The present disclosure relates to the field of neural network technology, and more particularly, to a method and system for data compression and recovery of a neural network based on knowledge distillation.
Background
With the rapid development of artificial neural networks, neural network models are being applied ever more widely. The data consumed by many neural networks has undergone lossy compression, which inflicts damage on the data signal that is difficult to recover. This loss not only degrades human perception of the data but also reduces the performance of the various networks.
Many restoration techniques attempt to recover such losses at the signal level. Taking picture compression as an example, compression artifact removal aims to undo, as far as possible, the damage that lossy compression causes to a picture, in order to satisfy both human viewing and various visual analysis tasks. Existing methods aim to restore the signal values of the pictures, but we find that such signal-value restoration contributes little to the neural network itself.
Knowledge distillation is a model compression method that is widely used in industry because it is simple and effective; lossy compression of the input data, however, still degrades the performance of the target network.
The present application therefore proposes an improved method and system to at least partially solve the above technical problem.
Disclosure of Invention
To achieve the above technical object, the present disclosure provides a data compression recovery method for a neural network based on knowledge distillation, including:
inputting the raw data into a knowledge distillation network for training;
inputting compressed data obtained by compressing the original data into a target network;
analyzing and training the target network into which the compressed data is input through the knowledge distillation network;
and outputting the trained target network data to obtain a recovered analysis result.
Specifically, the knowledge distillation network may be a teacher network, the teacher network being a representation knowledge teacher network or a signal knowledge teacher network.
Further, the representation knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l_task(f(x; θ), y)
where θ* denotes the representation knowledge teacher network obtained by training, l_task denotes the associated task loss function, and x and y denote the raw data and the output data, respectively.
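As a purely illustrative sketch of this training step, the teacher objective can be written in PyTorch roughly as follows; the ResNet-50 backbone, the cross-entropy task loss, the optimizer, and the ten-class setting are assumptions made for the example and are not specified by the disclosure:

    # Hypothetical sketch: train the representation knowledge teacher f(.; theta)
    # on raw data, i.e. theta* = argmin_theta l_task(f(x; theta), y).
    # Backbone, task loss, optimizer, and class count are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    teacher = resnet50(num_classes=10)      # f(.; theta)
    l_task = nn.CrossEntropyLoss()          # assumed task loss l_task
    opt = torch.optim.SGD(teacher.parameters(), lr=0.01, momentum=0.9)

    def teacher_step(x: torch.Tensor, y: torch.Tensor) -> float:
        """One optimization step on a batch of raw data x with labels y."""
        opt.zero_grad()
        loss = l_task(teacher(x), y)
        loss.backward()
        opt.step()
        return loss.item()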
Further, the signal knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l(f(F_s; θ), x)
where F_s denotes the feature representation of the target network, x denotes the raw data, and l denotes a function measuring the difference between the two data.
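A minimal sketch under one possible reading of this step: a decoder f(·; θ) is trained to map the target network's feature representation F_s back to the raw signal x. This reading, the decoder architecture, the L1 difference measure, and the assumed feature shape are all illustrative assumptions:

    # Hypothetical sketch of signal knowledge teacher training: make the
    # reconstruction f(F_s; theta) close to the raw data x under the measure l.
    # Assumes features F_s of shape (N, 512, H/4, W/4) for raw data of shape
    # (N, 3, H, W); the architecture and the L1 measure are assumed choices.
    import torch
    import torch.nn as nn

    decoder = nn.Sequential(                # f(.; theta), assumed architecture
        nn.ConvTranspose2d(512, 128, kernel_size=4, stride=2, padding=1),
        nn.ReLU(),
        nn.ConvTranspose2d(128, 3, kernel_size=4, stride=2, padding=1),
    )
    l = nn.L1Loss()                         # assumed difference measure l
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)

    def signal_teacher_step(feats: torch.Tensor, x: torch.Tensor) -> float:
        """feats: target-network features F_s; x: the corresponding raw data."""
        opt.zero_grad()
        loss = l(decoder(feats), x)
        loss.backward()
        opt.step()
        return loss.item()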
Further, when the representation knowledge teacher network trains the target network fed with the compressed data, it constrains the difference between the feature representation that the target network extracts from the compressed data and the high-quality feature representation extracted by the representation knowledge teacher network, as follows:
θ* = argmin_θ l(M(C(x); θ), M_RKT(x))
where M_RKT denotes the high-quality feature representation of the representation knowledge teacher network, M denotes the feature representation obtained by the target network, and C(x) denotes the compressed data.
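A sketch of this constraint as a feature-matching loss between the target network run on compressed data C(x) and the frozen teacher run on raw data x; the MSE difference measure and the features() accessor are hypothetical names introduced only for this example:

    # Hypothetical sketch of the representation knowledge distillation constraint:
    # pull the target network's features M(C(x)) toward the frozen teacher's
    # high-quality features M_RKT(x); MSE stands in for the measure l.
    import torch
    import torch.nn as nn

    feat_loss = nn.MSELoss()                # assumed difference measure

    def distill_step(target, teacher, x, c_x, opt) -> float:
        """x: raw data; c_x: compressed data C(x); features() is an assumed hook."""
        opt.zero_grad()
        with torch.no_grad():
            m_rkt = teacher.features(x)     # M_RKT: high-quality teacher features
        m = target.features(c_x)            # M: features extracted from C(x)
        loss = feat_loss(m, m_rkt)
        loss.backward()
        opt.step()
        return loss.item()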
Preferably, the target network is a convolutional neural network, a recurrent neural network and/or a capsule network.
Preferably, the original data is picture data, video data and/or audio data, and the compressed data is picture compressed data, video compressed data and/or audio compressed data.
The present disclosure provides a data compression recovery system for a neural network based on knowledge distillation, comprising:
the data compression module is used for acquiring original data, compressing the original data and generating compressed data;
a knowledge distillation module for distilling the data received from the data compression module into a high-quality feature stream;
and an inference module for deploying the recovered analysis result output from the knowledge distillation module, as sketched below.
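Purely as a sketch of how the three modules could be wired together (the JPEG codec standing in for the lossy compressor, and all class and method names, are assumptions for illustration):

    # Hypothetical wiring of the three modules described above.
    import io
    from PIL import Image

    class DataCompressionModule:
        """Acquires raw pictures and produces compressed data; JPEG at an
        assumed quality of 30 stands in for the lossy codec."""
        def compress(self, img: Image.Image) -> Image.Image:
            buf = io.BytesIO()
            img.convert("RGB").save(buf, format="JPEG", quality=30)
            buf.seek(0)
            return Image.open(buf).copy()

    class KnowledgeDistillationModule:
        """Holds the teacher and target networks; training would apply the
        feature constraint sketched above to each (raw, compressed) pair."""
        def __init__(self, teacher, target):
            self.teacher, self.target = teacher, target

    class InferenceModule:
        """Deploys the trained target network for the analysis task."""
        def __init__(self, target):
            self.target = target
        def analyze(self, data):
            return self.target(data)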
Specifically, the knowledge distillation module comprises a knowledge distillation network and a target network, and the knowledge distillation network analyzes and trains the target network into which the compressed data is input.
Specifically, the knowledge distillation network is a representation knowledge teacher network or a signal knowledge teacher network.
The beneficial effects of the present disclosure are as follows:
The present disclosure provides a knowledge distillation-based data compression recovery method and system for a neural network, which reduce the loss that compression causes to picture, video, or audio data, greatly improve the performance of the target network, and yield analysis results of greatly improved accuracy.
Drawings
Fig. 1 shows a schematic flow diagram of embodiment 1 of the present disclosure;
fig. 2 shows a schematic structural diagram of embodiment 2 of the present disclosure;
fig. 3 shows a process schematic of embodiment 3 of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure. It will be apparent to one skilled in the art that the present disclosure may be practiced without one or more of these details. In other instances, well-known features of the art have not been described in order to avoid obscuring the present disclosure.
It should be noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the disclosure. As used herein, the singular is intended to include the plural unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Exemplary embodiments according to the present disclosure will now be described in more detail with reference to the accompanying drawings. These exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to only the embodiments set forth herein. The figures are not drawn to scale, wherein certain details may be exaggerated and omitted for clarity. The shapes of various regions, layers, and relative sizes and positional relationships therebetween shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, as actually required.
Example 1:
the present disclosure provides a data compression recovery method of a neural network based on knowledge distillation, as shown in fig. 1, including:
inputting the raw data into a knowledge distillation network for training;
inputting compressed data obtained by compressing original data into a target network;
analyzing and training the target network into which the compressed data is input through a knowledge distillation network;
and outputting the trained target network data to obtain a recovered analysis result.
In particular, the knowledge distillation network may be a teacher network, the teacher network being a representation knowledge teacher network or a signal knowledge teacher network.
Further, the representation knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l_task(f(x; θ), y)
where θ* denotes the representation knowledge teacher network obtained by training, l_task denotes the associated task loss function, and x and y denote the raw data and the output data, respectively.
Further, the signal knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l(f(F_s; θ), x)
where F_s denotes the feature representation of the target network, x denotes the raw data, and l denotes a function measuring the difference between the two data.
Further, when the representation knowledge teacher network performs analysis training on the target network fed with the compressed data, it constrains the difference between the feature representation that the target network extracts from the compressed data and the high-quality feature representation extracted by the representation knowledge teacher network, as follows:
θ* = argmin_θ l(M(C(x); θ), M_RKT(x))
where M_RKT denotes the high-quality feature representation of the representation knowledge teacher network, M denotes the feature representation obtained by the target network, and C(x) denotes the compressed data.
Preferably, the target network is a convolutional neural network, a recurrent neural network and/or a capsule network.
Preferably, the original data is picture data, video data and/or audio data, and the compressed data is picture compressed data, video compressed data and/or audio compressed data.
Example 2:
the present disclosure provides a data compression recovery system of a neural network based on knowledge distillation, as shown in fig. 2, including:
the data compression module is used for acquiring original data, compressing the original data and generating compressed data;
a knowledge distillation module for extracting data incoming from the data compression module into a high quality feature stream;
and an inference module for deploying the recovered analysis result output from the knowledge distillation module.
Specifically, the knowledge distillation module comprises a knowledge distillation network and a target network, the knowledge distillation network training the target network into which the compressed data has been input.
Specifically, the knowledge distillation network may be a teacher network, the teacher network being a representation knowledge teacher network or a signal knowledge teacher network.
Further, the raw data is input into the knowledge distillation network of the knowledge distillation module for training; the compressed data, obtained by compressing the raw data in the data compression module, is input into the target network of the knowledge distillation module; the target network fed with the compressed data is analyzed and trained through the knowledge distillation network in the knowledge distillation module; and the trained target network data is output to the inference module for deploying tasks, as sketched below.
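The data flow of the preceding paragraph, combined into one hypothetical training loop (the loader, optimizer, compress function, features() hook, and InferenceModule are assumptions carried over from the sketches above, not part of the disclosure):

    # Hypothetical end-to-end loop for this embodiment: raw data feeds the
    # trained teacher, its compressed counterpart feeds the target network,
    # the teacher's features supervise the target, and the trained target is
    # finally handed to the inference module for deployment.
    import torch

    def train_and_deploy(loader, compress, teacher, target, opt, feat_loss):
        teacher.eval()                          # teacher was pre-trained on raw data
        for x in loader:                        # x: batch of raw data
            c_x = compress(x)                   # compressed data from the compression module
            opt.zero_grad()
            with torch.no_grad():
                t_feats = teacher.features(x)   # high-quality features (assumed hook)
            s_feats = target.features(c_x)      # target features on compressed data
            loss = feat_loss(s_feats, t_feats)
            loss.backward()
            opt.step()
        return InferenceModule(target)          # deploy the trained target network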
Specifically, the representation knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l_task(f(x; θ), y)
where θ* denotes the representation knowledge teacher network obtained by training, l_task denotes the associated task loss function, and x and y denote the raw data and the output data, respectively.
Further, the signal knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l(f(F_s; θ), x)
where F_s denotes the feature representation of the target network, x denotes the raw data, and l denotes a function measuring the difference between the two data.
Further, when the representation knowledge teacher network analyzes and trains the target network fed with the compressed data, it constrains the difference between the feature representation that the target network extracts from the compressed data and the high-quality feature representation extracted by the representation knowledge teacher network, as follows:
θ* = argmin_θ l(M(C(x); θ), M_RKT(x))
where M_RKT denotes the high-quality feature representation of the representation knowledge teacher network, M denotes the feature representation obtained by the target network, and C(x) denotes the compressed data.
Preferably, the target network is a convolutional neural network, a recurrent neural network and/or a capsule network.
Preferably, the original data is picture data, video data and/or audio data, and the compressed data is picture compressed data, video compressed data and/or audio compressed data.
Example 3:
the present disclosure provides a data compression and recovery system of a neural network based on knowledge distillation, which includes a data compression module, a knowledge distillation module and an inference module, wherein, as shown in fig. 3, the knowledge distillation module includes a representation knowledge teacher network and a target network, and the process of representing knowledge distillation is realized from the knowledge teacher network to the target network.
Specifically, the raw data is input into the representation knowledge teacher network of the knowledge distillation module for training; the compressed data, obtained by compressing the raw data in the data compression module, is input into the target network of the knowledge distillation module; the target network fed with the compressed data is trained through the representation knowledge teacher network in the knowledge distillation module; and finally the trained target network data is output to the inference module for deploying tasks.
Specifically, the representation knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l_task(f(x; θ), y)
where θ* denotes the representation knowledge teacher network obtained by training, l_task denotes the associated task loss function, and x and y denote the raw data and the output data, respectively.
Further, when the representation knowledge teacher network analyzes and trains the target network fed with the compressed data, it constrains the difference between the feature representation that the target network extracts from the compressed data and the high-quality feature representation extracted by the representation knowledge teacher network, as follows:
θ* = argmin_θ l(M(C(x); θ), M_RKT(x))
where M_RKT denotes the high-quality feature representation of the representation knowledge teacher network, M denotes the feature representation obtained by the target network, and C(x) denotes the compressed data.
Preferably, the target network is a convolutional neural network, a recurrent neural network and/or a capsule network.
Preferably, the original data is picture data, video data and/or audio data, and the compressed data is picture compressed data, video compressed data and/or audio compressed data.
Example 4:
the present disclosure provides a computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of a method for data compression recovery for a neural network based on knowledge distillation: inputting the raw data into a knowledge distillation network for training; inputting compressed data obtained by compressing the original data into a target network; analyzing and training the target network into which the compressed data is input through the knowledge distillation network; and outputting the trained target network data to obtain a recovered analysis result.
Example 5:
the present disclosure also provides a storage medium having stored thereon computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of a method for data compression recovery for a neural network based on knowledge distillation: inputting the raw data into a knowledge distillation network for training; inputting compressed data obtained by compressing the original data into a target network; analyzing and training the target network into which the compressed data is input through the knowledge distillation network; and outputting the trained target network data to obtain a recovered analysis result.
The knowledge distillation-based data compression recovery method and system for a neural network provided by the embodiments of the present disclosure can be implemented as a computer program. The computer program may be integrated into an application or run as a standalone tool application. Devices and media for knowledge distillation-based data compression recovery of a neural network in the disclosed embodiments include, but are not limited to: personal computers, tablet computers, handheld devices, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem. The user terminals may be given different names in different networks, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent or user device, cellular telephone, cordless telephone, personal digital assistant (PDA), or terminal equipment in a 5G network or a future evolved network.
The embodiments of the present disclosure have been described above. However, this is for illustrative purposes only and is not intended to limit the scope of the present disclosure. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A data compression recovery method of a neural network based on knowledge distillation is characterized by comprising the following steps:
inputting the raw data into a knowledge distillation network for training;
inputting compressed data obtained by compressing the original data into a target network;
analyzing and training the target network into which the compressed data is input through the knowledge distillation network;
and outputting the trained target network data to obtain a recovered analysis result.
2. The method for data compression and recovery of the neural network based on knowledge distillation as claimed in claim 1, wherein the knowledge distillation network is a representation knowledge teacher network or a signal knowledge teacher network.
3. The method for data compression and recovery of the neural network based on knowledge distillation as claimed in claim 2, wherein the representation knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l_task(f(x; θ), y)
where θ* denotes the representation knowledge teacher network obtained by training, l_task denotes the associated task loss function, and x and y denote the raw data and the output data, respectively.
4. The method for data compression and recovery of the neural network based on knowledge distillation as claimed in claim 2, wherein the signal knowledge teacher network is trained on the raw data as follows:
θ* = argmin_θ l(f(F_s; θ), x)
where F_s denotes the feature representation of the target network, x denotes the raw data, and l denotes a function measuring the difference between the two data.
5. The method for data compression and recovery of the neural network based on knowledge distillation as claimed in claim 2, wherein the representation knowledge teacher network is used for analyzing and training the target network into which the compressed data is input, constraining the difference between the feature representation extracted from the compressed data by the target network and the high-quality feature representation extracted by the representation knowledge teacher network, as follows:
θ* = argmin_θ l(M(C(x); θ), M_RKT(x))
where M_RKT denotes the high-quality feature representation of the representation knowledge teacher network, M denotes the feature representation obtained by the target network, and C(x) denotes the compressed data.
6. The method for data compression recovery of a neural network based on knowledge distillation of claim 1, wherein the target network is a convolutional neural network, a recurrent neural network and/or a capsule network.
7. The method for data compression recovery of a neural network based on knowledge distillation of claim 1, wherein the raw data is picture data, video data and/or audio data.
8. A data compression recovery system for a neural network based on knowledge distillation, comprising:
the data compression module is used for acquiring original data, compressing the original data and generating compressed data;
a knowledge distillation module for distilling the data received from the data compression module into a high-quality feature stream;
and an inference module for deploying the recovered analysis result output from the knowledge distillation module.
9. The knowledge distillation based data compression recovery system for a neural network of claim 8, wherein the knowledge distillation module comprises a knowledge distillation network and a target network, the knowledge distillation network performing analytical training on the target network into which the compressed data has been input.
10. The knowledge distillation based neural network data compression recovery system of claim 9, wherein the knowledge distillation network is a representation knowledge teacher network or a signal knowledge teacher network.
CN202110188891.8A (priority date: 2021-02-19; filing date: 2021-02-19) Knowledge distillation-based data compression recovery method and system for neural network; status: Pending; publication: CN113033767A (en)

Priority Applications (1)

Application Number: CN202110188891.8A; Priority Date: 2021-02-19; Filing Date: 2021-02-19; Title: Knowledge distillation-based data compression recovery method and system for neural network

Applications Claiming Priority (1)

Application Number: CN202110188891.8A; Priority Date: 2021-02-19; Filing Date: 2021-02-19; Title: Knowledge distillation-based data compression recovery method and system for neural network

Publications (1)

Publication Number: CN113033767A; Publication Date: 2021-06-25

Family ID: 76461310

Family Applications (1)

Application Number: CN202110188891.8A; Status: Pending; Publication: CN113033767A (en); Title: Knowledge distillation-based data compression recovery method and system for neural network

Country Status (1)

Country: CN; Link: CN113033767A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110880036A * 2019-11-20 2020-03-13 Tencent Technology (Shenzhen) Co., Ltd. Neural network compression method and device, computer equipment and storage medium
CN111160533A * 2019-12-31 2020-05-15 Sun Yat-sen University Neural network acceleration method based on cross-resolution knowledge distillation
CN112200062A * 2020-09-30 2021-01-08 Guangzhou Yuncong Artificial Intelligence Technology Co., Ltd. Target detection method and device based on neural network, machine readable medium and equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116382979A * 2023-04-07 2023-07-04 Hainan Xiangwenfei Network Technology Co., Ltd. Data loss prevention disaster recovery method and server combined with expert system neural network
CN116382979B * 2023-04-07 2024-03-19 Metering Center of State Grid Jibei Electric Power Co., Ltd. Data loss prevention disaster recovery method and server combined with expert system neural network

Similar Documents

Publication Publication Date Title
CN111598776B (en) Image processing method, image processing device, storage medium and electronic apparatus
US11138903B2 (en) Method, apparatus, device and system for sign language translation
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
CN110490296A (en) A kind of method and system constructing convolutional neural networks (CNN) model
CN107909583B (en) Image processing method and device and terminal
CN110837842A (en) Video quality evaluation method, model training method and model training device
US11716438B2 (en) Method for motion estimation, non-transitory computer-readable storage medium, and electronic device
CN111414879A (en) Face shielding degree identification method and device, electronic equipment and readable storage medium
WO2020062191A1 (en) Image processing method, apparatus and device
EP4207195A1 (en) Speech separation method, electronic device, chip and computer-readable storage medium
CN111694978A (en) Image similarity detection method and device, storage medium and electronic equipment
EP4249869A1 (en) Temperature measuring method and apparatus, device and system
CN111343356A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN111429374A (en) Method and device for eliminating moire in image
CN107145855B (en) Reference quality blurred image prediction method, terminal and storage medium
CN113033767A (en) Knowledge distillation-based data compression recovery method and system for neural network
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
US20230281956A1 (en) Method for generating objective function, apparatus, electronic device and computer readable medium
CN113658065A (en) Image noise reduction method and device, computer readable medium and electronic equipment
CN113902636A (en) Image deblurring method and device, computer readable medium and electronic equipment
CN111626035B (en) Layout analysis method and electronic equipment
CN117541770A (en) Data enhancement method and device and electronic equipment
CN114283493A (en) Artificial intelligence-based identification system
CN113240599A (en) Image toning method and device, computer-readable storage medium and electronic equipment
CN111382764B (en) Neural network model building method and device for face recognition or gesture recognition and computer readable storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication

Application publication date: 20210625