CN112990337B - Multi-stage training method for target identification - Google Patents


Info

Publication number
CN112990337B
CN112990337B (application CN202110348354.5A)
Authority
CN
China
Prior art keywords
data
training
sample
targets
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110348354.5A
Other languages
Chinese (zh)
Other versions
CN112990337A (en)
Inventor
于效宇
李富超
刘艳
陈颖璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongshan Boceda Electronic Technology Co ltd
Original Assignee
University of Electronic Science and Technology of China Zhongshan Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China Zhongshan Institute filed Critical University of Electronic Science and Technology of China Zhongshan Institute
Priority to CN202110348354.5A priority Critical patent/CN112990337B/en
Priority to PCT/CN2021/091056 priority patent/WO2022205554A1/en
Publication of CN112990337A publication Critical patent/CN112990337A/en
Application granted granted Critical
Publication of CN112990337B publication Critical patent/CN112990337B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-stage training method for target identification, which comprises the following steps: preprocessing the data set used for training; dividing the preprocessed result into several types of data subsets with a data set classification module; mixing the data subsets of the various types in different proportions to obtain several new data sets of different types; dividing the process of training the model into a plurality of training stages; and loading the new data sets into the different stages in sequence. Through this multi-stage training mode, the invention can simply and efficiently improve the average precision with which the model identifies the various categories, without changing the training time or the size of the original model.

Description

Multi-stage training method for target identification
Technical Field
The invention relates to the fields of target-recognition training methods, image recognition and artificial intelligence, and in particular to a multi-stage training method for target recognition.
Background
In the field of target identification, models achieve ever better performance by relying on continuously expanding data sets. During training, however, most models do not consider the characteristics of the samples or the difficulty the model has in extracting features from them: pictures are loaded into the model randomly and with equal weight, and the samples are not trained in a targeted manner. For the samples in a data set, the amount of characteristic information in each sample differs, the time the model needs to learn that information differs, and the characteristic information of different samples is often correlated; most models ignore this correlation. Through such disordered learning, the model cannot, to a certain extent, grasp either the characteristic information of the samples or the correlation between them, and therefore cannot learn enough useful characteristic information over the whole learning process. These problems affect the final performance of the model to a certain extent.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a multi-stage training method for target recognition. According to the method, the model is trained through a multi-stage mode, the characteristic information of the sample is used as a training basis, and each stage is trained by adopting data sets with different characteristics, so that the training of the model becomes orderly, and finally the performance of the model is improved.
In order to solve the technical problem, the invention adopts the following technical scheme:
a multi-stage training method for target identification is optimized on the basis of a traditional training method, the training method integrates adopted data sets again in the training process to form new data sets with various characteristics, and the whole single training process is divided into a plurality of training stages, and the training method comprises four modules:
the data set preprocessing module is used for preprocessing the adopted data set, the main content of the preprocessing is to collect the basic characteristics of each data sample, and the basic characteristics of the data sample comprise the area of each target on the sample, the number of the targets on the sample, the size of the targets on the sample and the type of the targets on the sample;
the data set classification module classifies the data sets according to the basic characteristics counted by the data set preprocessing module, classifies the samples meeting the set threshold value into the same class, and the classification standard comprises the area mean value of the targets on the samples, the number of the targets on the samples, the size of the targets on the samples or the type of the targets on the samples;
a data set integration module for reintegrating the classified data subsets into a new data set, wherein in the reintegration process each data subset is assigned a weight parameter w_i, i being the serial number of the data subset; the proportion of the corresponding data subset in the whole data set is adjusted through the set weight parameter, so that the training at each stage targets the data set differently;
and the data set loading module is used for loading the integrated new data set into each stage for training, and expanding the data set before loading, wherein the main method for expanding is to randomly cover the identified target, generate a new sample every time the target is covered, and allocate a new label.
The method comprises the following steps:
firstly, preprocessing an adopted data set;
classifying the data set according to the preprocessing result;
thirdly, remixing the classified data subsets according to different proportions to form a new data set;
and step four, dividing the process of training the model into a plurality of stages, and then loading new data sets into different stages in sequence.
The specific steps of the first step are as follows:
and preprocessing the adopted data set, wherein the main content of preprocessing is to collect basic characteristics of each data sample, and the basic characteristics of the data sample comprise the area of each target on the sample, the number of targets on the sample, the size of the targets on the sample and the type of the targets on the sample.
In step two, the data set is classified according to the result of the preprocessing; the classification standard comprises one or more of the area mean of the targets on the sample, the number of targets on the sample or the type of the targets on the sample, and this pre-classification divides the data set into a plurality of data subsets with specific characteristics.
The second step comprises the following specific steps:
and classifying according to the result of the preprocessing, and dividing the whole data set into a simple type data subset and a complex type data subset according to the number of the targets on the sample, wherein the samples with the number of the identified targets being less than 5 but not background are defined as the simple type data subset, and the samples with the number of the background sample and the identified targets being more than 5 are defined as the complex type data subset.
In step three, remixing is performed: a weight parameter is set for each data subset, the proportion of each data subset in the whole data set is adjusted through its weight parameter, and the data subsets with different weights are then remixed.
In the process of step three, each data subset is given a separate weight greater than zero, so that the data volume of a data subset can be increased or decreased through its assigned weight.
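As a non-limiting sketch of the remixing of step three: each subset i receives a weight w_i > 0 and is resampled to roughly w_i times its original size before mixing. Resampling with replacement when a subset is enlarged is an assumption of this example; the method itself only requires that the weights be greater than zero.

```python
import random

# Remix several data subsets according to per-subset weights w_i > 0.
def remix(subsets, weights, seed=0):
    rng = random.Random(seed)
    mixed = []
    for subset, w in zip(subsets, weights):
        n = int(round(w * len(subset)))
        if n <= len(subset):
            mixed += rng.sample(subset, n)                             # shrink the subset
        else:
            mixed += subset + rng.choices(subset, k=n - len(subset))   # enlarge by resampling
    rng.shuffle(mixed)
    return mixed
```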
Step four, multi-stage training: the traditional single training run is divided into a plurality of stages, and the data set imported at each stage differs, i.e. data sets are imported several times; each epoch is treated as one stage, and the data set trained in each epoch may be the same as or different from that of the other epochs.
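A minimal sketch of the multi-stage loop itself; build_loader and train_one_epoch are hypothetical placeholders for the user's own data pipeline and training step, one epoch being treated as one stage as the text allows.

```python
# Train the model stage by stage, each stage on its own remixed data set.
def multi_stage_train(model, stage_datasets, build_loader, train_one_epoch):
    for stage, dataset in enumerate(stage_datasets, start=1):
        loader = build_loader(dataset)            # stage-specific data loader
        train_one_epoch(model, loader)            # one epoch = one training stage
        print(f"finished stage {stage} on {len(dataset)} samples")
    return model
```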
In one example, the training process is divided into three stages. The data set preprocessing module reads and counts the number of targets on each sample, and according to the statistical result the data set classification module divides the COCO data set into two types of data subsets, a simple-type data subset A and a complex-type data subset B; the classification standard is that samples with fewer than 5 targets that are not background are classified into the simple-type data subset A, while background samples and samples with more than 5 targets are classified into the complex-type data subset B;
in the first stage, the data subsets of the simple type data subset A and the complex type data subset B are not processed, namely the corresponding weights are set to be 1, and then the data subsets are only randomly mixed and loaded into a model for training;
in the second phase, the data amount of the simple type data subset A is reduced
Figure 100002_DEST_PATH_IMAGE004
The data quantity of the complex type data subset B is increased
Figure 100002_DEST_PATH_IMAGE006
In the process, the sum of the data volumes of the two types of data subsets is kept to be the same as that of the original data set, and then the data subsets are loaded into a model for training after being randomly mixed;
in the third stage, the data amount of the simple type data subset A is reduced
Figure 100002_DEST_PATH_IMAGE008
The number of times of the total number of the parts,the data quantity of the complex type data subset B is increased
Figure 100002_DEST_PATH_IMAGE010
And then random mixing is carried out, and then model training is carried out.
The invention has the beneficial effects that: according to the method, staged training is added, the characteristic information of the sample is used as a training basis, and each stage is trained by adopting the data sets with different characteristics, so that the training of the model becomes orderly, and finally the performance of the model is improved.
Drawings
The invention is further illustrated by the following examples in conjunction with the drawings.
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a block flow diagram of the method;
FIG. 3 is a block flow diagram of one example of the present invention.
Detailed Description
Referring to fig. 1 to 3, a multi-stage training method for object recognition is optimized on the basis of a conventional training method. During training, the method reintegrates the adopted data set into new data sets with different characteristics and splits the whole single training process into a plurality of training stages. The training method comprises four modules:
the data set preprocessing module is used for preprocessing the adopted data set, the main content of the preprocessing is to collect the basic characteristics of each data sample, and the basic characteristics of the data sample comprise the area of each target on the sample, the number of the targets on the sample, the size of the targets on the sample and the type of the targets on the sample;
the data set classification module classifies the data sets according to the basic characteristics counted by the data set preprocessing module, classifies the samples meeting the set threshold value into the same class, and the classification standard comprises the area average value of the targets on the samples, the number of the targets on the samples, the size of the targets on the samples or the type of the targets on the samples;
a data set integration module for reintegrating the classified data subsets into a new data set, wherein in the reintegration process each data subset is assigned a weight parameter w_i, i being the serial number of the data subset; the proportion of the corresponding data subset in the whole data set is adjusted through the set weight parameter, so that the training at each stage targets the data set differently;
and the data set loading module is used for loading the integrated new data set into each stage for training, and expanding the data set before loading, wherein the main method for expanding is to randomly cover the identified target, generate a new sample every time the target is covered, and allocate a new label.
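As a non-limiting sketch of the expansion performed by the data set loading module: one identified target is covered at random and a new sample with a new label is produced. Filling the covered region with the image mean, and removing the covered target from the new label, are assumptions made for this example; the text only requires that an identified target be covered and a new label be allocated.

```python
import copy
import numpy as np

# Cover one identified target at random and return a new sample with a new label.
def cover_one_target(image, targets, rng=None):
    """image: HxWx3 uint8 array; targets: list of dicts with 'bbox' = (x, y, w, h)."""
    rng = rng or np.random.default_rng()
    idx = int(rng.integers(len(targets)))
    x, y, w, h = (int(v) for v in targets[idx]["bbox"])
    new_image = image.copy()
    new_image[y:y + h, x:x + w] = image.mean(axis=(0, 1)).astype(image.dtype)  # cover the target
    new_targets = copy.deepcopy(targets)
    del new_targets[idx]                       # the covered target is dropped from the new label
    return new_image, new_targets
```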
The method comprises the following steps:
firstly, preprocessing an adopted data set;
classifying the data set according to the preprocessing result;
thirdly, remixing the classified data subsets according to different proportions to form a new data set;
and step four, dividing the process of training the model into a plurality of stages, and then loading new data sets into different stages in sequence.
The specific steps of the first step are as follows:
and preprocessing the adopted data set, wherein the main content of preprocessing is to collect basic characteristics of each data sample, and the basic characteristics of the data sample comprise the area of each target on the sample, the number of targets on the sample, the size of the targets on the sample and the type of the targets on the sample.
And step two, classifying according to the result of the preprocessing, wherein the classification standard comprises the area mean of the targets on the sample, the number of the targets on the sample or the type of the targets on the sample, and the data set is divided into a plurality of data subsets with specific characteristics through the pre-classification.
The second step comprises the following specific steps:
and classifying according to the result of the preprocessing, and dividing the whole data set into a simple type data subset and a complex type data subset according to the number of targets on the sample, wherein the number of the identified targets is less than 5 but not the background samples is defined as the simple type data subset, and the number of the background samples and the number of the identified targets is more than 5 are defined as the complex type data subset.
And step three, remixing, namely setting a weight parameter for each data subset, influencing the proportion of each data subset in the whole data set through the weight parameter, and remixing the data subsets with different weights.
Setting weights in the process of the third step, namely, each data subset is provided with a separate weight, and the range of the weights is larger than zero, so that the data volume of a certain data subset can be increased or decreased by the weight correspondingly distributed.
Step four, multi-stage training: the traditional single training run is divided into a plurality of stages, and the data set imported at each stage differs, i.e. data sets are imported several times; each epoch is treated as one stage, and the data set trained in each epoch may be the same as or different from that of the other epochs.
As shown in FIG. 2, in the general example of the multi-stage training method, the whole training process of the model is divided into n training stages, and each epoch can in fact be used as one stage; the conventional training method is thus a special case of the present training method. The classification criteria of the data set classification module are various and may include the area mean of the targets, the number of targets on the sample, the type of the targets on the sample, and the like. For any one of the criteria, multiple thresholds may be set to divide the samples during classification, and the data set integration module then assigns a weight to each data subset for reintegration. The data set loading module decides whether to perform data expansion, and the samples are then imported into the model for training; at each training stage, the model learns the characteristics of the data set used in that stage.
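A minimal sketch of such a multi-threshold classification, assuming the target count is the chosen criterion; the thresholds (2, 5), which split the data into three subsets, are illustrative values and are not taken from the patent.

```python
# Divide the samples into len(thresholds) + 1 subsets by target count.
def split_by_thresholds(features, thresholds=(2, 5)):
    bounds = sorted(thresholds)
    subsets = [[] for _ in range(len(bounds) + 1)]
    for img_id, f in features.items():
        bin_idx = sum(f["count"] >= t for t in bounds)   # index of the bin the count falls into
        subsets[bin_idx].append(img_id)
    return subsets
```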
As shown in fig. 3, in a specific example of the training method, the training process is first divided into three stages. The data set preprocessing module reads and counts the number of targets on each sample, and according to the statistical result the data set classification module divides the COCO data set into two types of data subsets, a simple-type data subset A and a complex-type data subset B; samples with fewer than 5 targets that are not background are classified into the simple-type data subset A, and background samples and samples with more than 5 targets are classified into the complex-type data subset B;
in the first stage, the data subsets of the simple type data subset A and the complex type data subset B are not processed, namely the corresponding weight is set to be 1, and then the data subsets are only randomly mixed and loaded into a model for training;
in the second phase, the data amount of the simple type data subset A is reduced
Figure 920158DEST_PATH_IMAGE004
The data quantity of the complex type data subset B is increased
Figure 168737DEST_PATH_IMAGE006
In the process, the sum of the data volumes of the two types of data subsets is kept to be the same as that of the original data set, and then the data subsets are loaded into a model for training after being randomly mixed;
in a third phase, data subsets of the simple type data subset A are reduced
Figure 157421DEST_PATH_IMAGE008
Data subset increase of data subset B of complex type
Figure 440635DEST_PATH_IMAGE010
And then, carrying out random mixing and loading the model for training, wherein although the data subsets of the simple type data set A and the complex type data set B in the stage are the same as the data subsets in the second stage, the reduction and the increase of the data are different, so as to ensure that the model can be trained to different types of data as much as possible.
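As a non-limiting sketch of the three-stage schedule of this example: the factors by which subset A shrinks in each stage (1.0, 0.8, 0.6 below) are illustrative assumptions, since the patent states them only symbolically, and subset B grows by exactly the number of samples removed from A, so the total data volume stays equal to that of the original data set.

```python
import random

# Build the three stage-specific data sets from subsets A (simple) and B (complex).
def build_stage_datasets(simple_ids, complex_ids, shrink_a=(1.0, 0.8, 0.6), seed=0):
    rng = random.Random(seed)
    stages = []
    for factor in shrink_a:
        kept_a = rng.sample(simple_ids, int(round(factor * len(simple_ids))))
        removed = len(simple_ids) - len(kept_a)
        extra_b = rng.choices(complex_ids, k=removed) if removed else []   # grow B by the amount removed from A
        stage = kept_a + complex_ids + extra_b
        rng.shuffle(stage)
        stages.append(stage)
    return stages
```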
The above embodiments do not limit the scope of the present invention, and those skilled in the art can make modifications and variations without departing from the overall spirit of the present invention.

Claims (1)

1. A multi-stage training method for target recognition is optimized on the basis of a conventional training method and is characterized in that the training method reintegrates the adopted data sets in the training process to form new data sets with various characteristics, and splits the whole single training process into a plurality of training stages, and comprises four modules:
the data set preprocessing module is used for preprocessing the adopted data set, the main content of the preprocessing is to collect the basic characteristics of each data sample, and the basic characteristics of the data sample comprise the area of each target on the sample, the number of the targets on the sample, the size of the targets on the sample and the type of the targets on the sample;
the data set classification module classifies the data sets according to the basic characteristics counted by the data set preprocessing module, classifies the samples meeting the set threshold value into the same class, and the classification standard comprises the area average value of the targets on the samples, the number of the targets on the samples, the size of the targets on the samples or the type of the targets on the samples;
a data set integration module for reintegrating the classified data subsets into a new data set, wherein in the reintegration process each data subset is assigned a weight parameter w_i, i being the serial number of the data subset; the proportion of the corresponding data subset in the whole data set is adjusted through the set weight parameter, so that the training at each stage targets the data set differently;
the data set loading module is used for loading the integrated new data set into each stage for training, and expanding the data set before loading, wherein the main method for expanding is to randomly cover the identified target, generate a new sample every time the target is covered, and allocate a new label;
the multi-stage training method for object identification comprises the following steps:
firstly, preprocessing an adopted data set;
classifying the data set according to the preprocessing result;
thirdly, remixing the classified data subsets according to different proportions to form a new data set;
dividing the process of training the model into a plurality of stages, and then sequentially loading new data sets into different stages;
the specific steps of the first step are as follows:
preprocessing the adopted data set, wherein the main content of the preprocessing is to collect basic characteristics of each data sample, and the basic characteristics of the data sample comprise the area of each target on the sample, the number of the targets on the sample, the size of the targets on the sample and the type of the targets on the sample;
classifying according to a preprocessing result, wherein the classification standard comprises the area average value of the targets on the sample, the number of the targets on the sample, the size of the targets on the sample or the type of the targets on the sample, and the data set is divided into a plurality of data subsets with specific characteristics through pre-classification;
the second step comprises the following specific steps:
classifying according to the result of the preprocessing, and dividing the whole data set into a simple-type data subset and a complex-type data subset according to the number of targets on each sample, wherein samples with fewer than 5 identified targets that are not background are defined as the simple-type data subset, and background samples together with samples having more than 5 identified targets are defined as the complex-type data subset;
remixing in the process of the third step, respectively setting a weight parameter for each data subset, influencing the proportion of each data subset in the whole data set through the weight parameter, and remixing each data subset with different weights;
setting weights in the process of the third step, namely, each data subset is provided with an individual weight, and the range of the weights is larger than zero, namely, the data volume of the data subsets can be increased or decreased through the distributed weights;
step four, multi-stage training: the traditional single training run is divided into a plurality of stages, and the data set imported at each stage differs, i.e. data sets are imported several times; each epoch is treated as one stage, and the data set trained in each epoch may be the same as or different from that of the other epochs;
the multi-stage training method facing the target identification comprises the steps that a training process is divided into three stages, a data set preprocessing module reads and counts the target number on a sample, according to the statistical result, a data set classifying module divides a COCO data set into two types of data subsets, namely a simple type data subset A and a complex type data subset B, the classification standard is that samples with the target number smaller than 5 but not background samples are classified into the simple type data subset A, and the background samples and the samples with the target number larger than 5 are classified into the complex type data subset B;
in the first stage, the data subsets of the simple type data subset A and the complex type data subset B are not processed, namely the corresponding weights are set to be 1, and then the data subsets are loaded into a model for training after only being randomly mixed;
in the second stage, the data amount of the simple-type data subset A is reduced by a set factor and the data amount of the complex-type data subset B is increased by a set factor; in the process, the sum of the data volumes of the two types of data subsets is kept the same as that of the original data set, and the data subsets are then randomly mixed and loaded into the model for training;
in the third stage, the data amount of the simple-type data subset A is reduced by another set factor and the data amount of the complex-type data subset B is increased by another set factor, after which the data subsets are randomly mixed and the model is trained.
CN202110348354.5A 2021-03-31 2021-03-31 Multi-stage training method for target identification Active CN112990337B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110348354.5A CN112990337B (en) 2021-03-31 2021-03-31 Multi-stage training method for target identification
PCT/CN2021/091056 WO2022205554A1 (en) 2021-03-31 2021-04-29 Multi-stage training method for target recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110348354.5A CN112990337B (en) 2021-03-31 2021-03-31 Multi-stage training method for target identification

Publications (2)

Publication Number Publication Date
CN112990337A CN112990337A (en) 2021-06-18
CN112990337B true CN112990337B (en) 2022-11-29

Family

ID=76338647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110348354.5A Active CN112990337B (en) 2021-03-31 2021-03-31 Multi-stage training method for target identification

Country Status (2)

Country Link
CN (1) CN112990337B (en)
WO (1) WO2022205554A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5555317A (en) * 1992-08-18 1996-09-10 Eastman Kodak Company Supervised training augmented polynomial method and apparatus for character recognition
CN106778853A (en) * 2016-12-07 2017-05-31 中南大学 Unbalanced data sorting technique based on weight cluster and sub- sampling
CN108345904A (en) * 2018-01-26 2018-07-31 华南理工大学 A kind of Ensemble Learning Algorithms of the unbalanced data based on the sampling of random susceptibility
CN108595585B (en) * 2018-04-18 2019-11-12 平安科技(深圳)有限公司 Sample data classification method, model training method, electronic equipment and storage medium
CN112215303B (en) * 2020-11-05 2022-02-11 北京理工大学 Image understanding method and system based on self-learning attribute

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693265A (en) * 2011-02-15 2012-09-26 通用电气公司 A method of constructing a mixture model
CN104217219A (en) * 2014-09-15 2014-12-17 西安电子科技大学 Polarization SAR image classification method based on matching pursuit selection integration
CN108460523A (en) * 2018-02-12 2018-08-28 阿里巴巴集团控股有限公司 A kind of air control rule generating method and device
CN110414587A (en) * 2019-07-23 2019-11-05 南京邮电大学 Depth convolutional neural networks training method and system based on progressive learning
CN111950630A (en) * 2020-08-12 2020-11-17 深圳市烨嘉为技术有限公司 Small sample industrial product defect classification method based on two-stage transfer learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Self-loop intelligent text recognition based on multi-stage data generation; 马新强 et al.; Pattern Recognition and Artificial Intelligence; 2020-05-31; Vol. 33, No. 5; pp. 468-477 *

Also Published As

Publication number Publication date
WO2022205554A1 (en) 2022-10-06
CN112990337A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN103544506A (en) Method and device for classifying images on basis of convolutional neural network
CN102385592B (en) Image concept detection method and device
CN110390347B (en) Condition-guided countermeasure generation test method and system for deep neural network
CN110874604A (en) Model training method and terminal equipment
CN103366367A (en) Pixel number clustering-based fuzzy C-average value gray level image splitting method
CN109741341A (en) A kind of image partition method based on super-pixel and long memory network in short-term
Zhong et al. A comparative study of image classification algorithms for Foraminifera identification
CN110717554A (en) Image recognition method, electronic device, and storage medium
CN109492093A (en) File classification method and electronic device based on gauss hybrid models and EM algorithm
CN106874943A (en) Business object sorting technique and system
CN102411592B (en) Text classification method and device
Wang et al. Spatial weighting for bag-of-features based image retrieval
CN114283350A (en) Visual model training and video processing method, device, equipment and storage medium
CN114398485B (en) Expert portrait construction method and device based on multi-view fusion
CN108229505A (en) Image classification method based on FISHER multistage dictionary learnings
CN112990337B (en) Multi-stage training method for target identification
CN105868272A (en) Multimedia file classification method and apparatus
CN111553424A (en) CGAN-based image data balancing and classifying method
CN112861934A (en) Image classification method and device of embedded terminal and embedded terminal
CN103793714A (en) Multi-class discriminating device, data discrimination device, multi-class discriminating method and data discriminating method
Zhang et al. Image scene categorization using multi-bag-of-features
CN115424086A (en) Multi-view fine-granularity identification method and device, electronic equipment and medium
CN115205573A (en) Image processing method, device and equipment
Min et al. Optimized Dense Convolutional Neural Networks for Micro-expression Recognition
Rezky et al. Handling Imbalanced Data Using a Cascade Model for Image-Based Human Action Recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231222

Address after: 528400 area C, 5th floor, building C, 148 Tanshen South Road, Tanzhou town, Zhongshan City, Guangdong Province

Patentee after: ZHONGSHAN BOCEDA ELECTRONIC TECHNOLOGY CO.,LTD.

Address before: 528400, Xueyuan Road, 1, Shiqi District, Guangdong, Zhongshan

Patentee before: University OF ELECTRONIC SCIENCE AND TECHNOLOGY OF CHINA, ZHONGSHAN INSTITUTE