CN115620155A - Transformer substation target detection method and system and computer storage medium - Google Patents
- Publication number
- CN115620155A (application CN202211632345.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- model
- matting
- iteration
- prediction result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a transformer substation target detection method, a transformer substation target detection system, and a computer storage medium. The method comprises the following steps: inputting all pictures in a manually calibrated training set into an original classification model for multiple rounds of training to obtain a violation classification model; inputting all pictures in the manually calibrated training set into a student model for training, and inputting all pictures in the uncalibrated training set into a teacher model and the student model for training; compensating for the shortcomings of the teacher model during student-model training; and supervising the training of the student model and the teacher model with the violation classification model. The method trains on both a manually calibrated training set and an uncalibrated training set through the student and teacher models, solving the problem that training sets are difficult to collect once a transformer substation is put into service, and introduces a violation classification model to supervise the training of the student and teacher models, yielding higher target-detection accuracy.
Description
Technical Field
The invention relates to the technical field of target detection, in particular to a transformer substation target detection method, a transformer substation target detection system and a computer storage medium.
Background
With the rapid development of deep learning, it has been widely adopted across many fields, and in recent years substations have begun to introduce it as well. A transformer substation is a place where personnel accidents can easily occur, so substation safety is critical, and the conventional approach of having workers monitor the site carries a large labor cost. Deep-learning-based target detection has therefore been widely adopted by substations in recent years, freeing up manpower. However, conventional supervised target detection requires collecting large data sets and manually calibrating the data, and data collection at a substation is very difficult, mainly for the following reasons: 1. violation targets appear infrequently at a substation, so the collected data set is sparse; violation scenarios can be staged manually, but manually staged data sets lack generality; 2. before a substation is put into service, data can be staged artificially, but once it is in service the site becomes very dangerous and violation targets can no longer be staged. The current solution to these problems is to train a semi-supervised substation violation target detection model, but existing semi-supervised training has the following problems:
(1) The lack of a calibrated data set for supervision during training on the uncalibrated data set results in low target-detection accuracy;
(2) The student model lacks the guidance of the teacher model when predicting results;
in view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a transformer substation target detection method, a transformer substation target detection system and a computer storage medium, and aims to solve the problems that in the prior art, a semi-supervised transformer substation violation target detection model lacks a calibrated data set for supervision, and a student model lacks guidance of a teacher model when predicting a result.
In order to achieve the above object, in one aspect, the present invention provides a substation target detection method, including: S1, sequentially performing a matting operation and a scaling operation on the target frames in all pictures in a manually calibrated training set to obtain a first scaled matting set; inputting the first scaled matting set into an original classification model for multiple rounds of training to obtain a violation classification model; S2, extracting a manually calibrated iteration picture and an uncalibrated iteration picture from the manually calibrated training set and the uncalibrated training set, respectively; S3, inputting each manually calibrated iteration picture into a student model for training to obtain each manually calibrated iteration loss value; sequentially performing the matting and scaling operations on the target frames in each manually calibrated iteration picture to obtain a second scaled matting set, and screening the second scaled matting set according to the violation classification model to obtain a first retained matting set; S4, inputting each uncalibrated iteration picture into a teacher model for training to obtain a first target prediction result; according to the first target prediction result, sequentially performing the matting and scaling operations on the target frames in all uncalibrated iteration pictures to obtain a third scaled matting set; inputting the third scaled matting set into the violation classification model to obtain a first classification prediction result; calculating a pseudo-label target weight from the first target prediction result and the first classification prediction result; and screening the third scaled matting set to obtain a second retained matting set; S5, inputting each uncalibrated iteration picture into the student model and the teacher model to obtain a second target prediction result;
calculating each uncalibrated iteration loss value from the second target prediction result and the pseudo-label target weight; S6, calculating a total iteration loss value from each manually calibrated iteration loss value and each uncalibrated iteration loss value; updating the student model and the teacher model according to the total iteration loss value to obtain a current-iteration student model and a current-iteration teacher model; S7, inputting the first scaled matting set, the first retained matting set, and the second retained matting set into the violation classification model for multiple rounds of training to obtain an updated violation classification model; S8, repeating steps S2-S7 until all pictures of the manually calibrated training set and the uncalibrated training set have been trained on, performing multiple rounds of such training to obtain a target student model and a target teacher model; and S9, inputting a picture to be detected into the target student model and the target teacher model for detection to obtain the target position and target category.
Optionally, sequentially performing the matting and scaling operations on the target frames in all pictures in the manually calibrated training set to obtain the first scaled matting set includes: S11, performing the matting operation on all target frames in all pictures in the manually calibrated training set to obtain a first matting set, which includes: performing an intersection ratio operation on any two target frames to obtain an intersection ratio; when the intersection ratio exceeds a first preset intersection ratio threshold, matting the two target frames together as one picture; when the intersection ratio is greater than 0 and smaller than the first preset intersection ratio threshold, matting only the target frame with the larger area of the two; when the intersection ratio equals 0, matting the two target frames separately; and taking all pictures obtained by matting as the first matting set; S12, performing the scaling operation on the first matting set to obtain the first scaled matting set.
Optionally, S12 includes: scaling each picture in the first matting set by a preset scaling value; and performing a black-edge filling operation on the scaled picture so that the ratio of its long side to its short side equals a preset ratio.
Optionally, S3 includes: inputting each manually calibrated iteration picture into the student model for training to obtain a third target prediction result; calculating each manually calibrated iteration loss value from the third target prediction result and the manual calibration result; when the category of a target frame in the third target prediction result is judged consistent with the category of the corresponding manually calibrated target frame, judging whether the intersection ratio of the two frames exceeds a second preset intersection ratio threshold; if so, sequentially performing the matting and scaling operations on the target frame from the third target prediction result to obtain a second scaled matting set; inputting all pictures in the second scaled matting set into the violation classification model to obtain a second classification prediction result; and when the second classification prediction result of any picture in the second scaled matting set is judged consistent with the category of the manual calibration result and its category score is greater than a first preset score, deleting that picture, thereby obtaining the first retained matting set.
Optionally, screening the third scaled matting set to obtain the second retained matting set includes: judging whether the category of each target frame in the first target prediction result is consistent with the first classification prediction result; if not, deleting that target frame; otherwise, retaining the target frame and judging whether the category score of the first classification prediction result is greater than a second preset score: if so, deleting the corresponding third scaled matte, otherwise retaining it, thereby obtaining the second retained matting set.
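This screening step can be sketched as follows. This is an illustrative sketch only: the dictionary field names are assumptions, and the 0.75 default for the "second preset score" is a placeholder, since the patent does not state its value.

```python
def screen_pseudo_labels(candidates, score_thresh=0.75):
    """Screen teacher pseudo-labels with the violation classifier.

    candidates: list of dicts with keys (names are illustrative)
      'det_class' - class predicted by the teacher detector for the box
      'cls_class' - class predicted by the violation classifier on the matte
      'cls_score' - classifier confidence for cls_class

    Returns (retained_boxes, retained_mattes): boxes whose two predicted
    classes agree, and the subset of their mattes the classifier is NOT yet
    confident on (kept as the second retained matting set for further
    classifier training).
    """
    retained_boxes, retained_mattes = [], []
    for c in candidates:
        if c['det_class'] != c['cls_class']:
            continue  # class disagreement: drop the pseudo-label entirely
        retained_boxes.append(c)
        if c['cls_score'] <= score_thresh:
            retained_mattes.append(c)  # hard example: keep its matte
    return retained_boxes, retained_mattes
```

Confidently classified mattes are dropped from the retained set because, per S7, the retained mattes are fed back to retrain the classifier, which learns most from its uncertain cases.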
Optionally, the pseudo label is a target frame retained from the first target prediction result; the pseudo-label target weight w is calculated from two quantities: s₁, the category score of the first classification prediction result corresponding to the retained target frame, and s₂, the category score of the first target prediction result for the retained target frame.
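The weight formula itself is not legible in this extraction of the patent. As a sketch only, a common choice in semi-supervised detection, shown here purely as an assumption, is to take the product of the two scores:

```python
def pseudo_label_weight(cls_score, det_score):
    """Weight of a retained pseudo-label.

    cls_score: category score s1 from the violation classification model
    det_score: category score s2 from the teacher detector
    The product form below is an ASSUMED reconstruction; the patent's exact
    formula is not reproduced in the source text.
    """
    return cls_score * det_score
```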
Optionally, S5 includes: inputting each uncalibrated iteration picture into the backbone network of the student model to obtain a feature map; inputting the feature map into the detection head of the student model and the detection head of the teacher model to obtain a fourth target prediction result and a fifth target prediction result, respectively; merging the fourth and fifth target prediction results to obtain the second target prediction result; and calculating each uncalibrated iteration loss value from the second target prediction result and the pseudo-label target weight.
Optionally, updating the student model and the teacher model according to the total iteration loss value to obtain the current-iteration student model and the current-iteration teacher model includes: back-propagating the total iteration loss value through the student model to obtain the current-iteration student model; and assigning the weights of the current-iteration student model to the teacher model to obtain the current-iteration teacher model.
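The teacher update can be sketched as below, with weights represented as plain name-to-value dicts for simplicity. `keep=0.0` reproduces the direct assignment described here; `keep > 0` gives the exponential-moving-average variant common in teacher-student training, mentioned only as an aside and not claimed by the patent.

```python
def update_teacher(student_weights, teacher_weights, keep=0.0):
    """Copy the student's weights into the teacher after an iteration.

    keep=0.0: direct assignment (the update described in the patent).
    0 < keep < 1: EMA update, a common variant shown for illustration.
    """
    return {name: keep * teacher_weights[name] + (1.0 - keep) * w
            for name, w in student_weights.items()}
```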
In another aspect, the present invention provides a substation target detection system, including: a violation classification model training unit, configured to sequentially perform the matting and scaling operations on the target frames in all pictures in the manually calibrated training set to obtain a first scaled matting set, and input the first scaled matting set into an original classification model for multiple rounds of training to obtain a violation classification model; an extraction unit, configured to extract a manually calibrated iteration picture and an uncalibrated iteration picture from the manually calibrated training set and the uncalibrated training set, respectively; a first calculation and screening unit, configured to input each manually calibrated iteration picture into a student model for training to obtain each manually calibrated iteration loss value, sequentially perform the matting and scaling operations on the target frames in each manually calibrated iteration picture to obtain a second scaled matting set, and screen the second scaled matting set according to the violation classification model to obtain a first retained matting set; a second calculation and screening unit, configured to input each uncalibrated iteration picture into the teacher model for training to obtain a first target prediction result, sequentially perform the matting and scaling operations on the target frames in all uncalibrated iteration pictures according to the first target prediction result to obtain a third scaled matting set, input the third scaled matting set into the violation classification model to obtain a first classification prediction result, calculate a pseudo-label target weight from the first target prediction result and the first classification prediction result, and screen the third scaled matting set to obtain a
second retained matting set; a third calculation unit, configured to input each uncalibrated iteration picture into the student model and the teacher model to obtain a second target prediction result, and calculate each uncalibrated iteration loss value from the second target prediction result and the pseudo-label target weight; a fourth calculation and updating unit, configured to calculate a total iteration loss value from each manually calibrated iteration loss value and each uncalibrated iteration loss value, and update the student model and the teacher model according to the total iteration loss value to obtain a current-iteration student model and a current-iteration teacher model; a violation classification model updating unit, configured to input the first scaled matting set, the first retained matting set, and the second retained matting set into the violation classification model for multiple rounds of training to obtain an updated violation classification model; an output unit, configured to output the target student model and target teacher model obtained after all pictures of the manually calibrated and uncalibrated training sets have been trained on, from the extraction unit through the violation classification model updating unit, over multiple rounds of training; and a detection unit, configured to input a picture to be detected into the target student model and the target teacher model for detection to obtain the target position and target category.
The invention also provides a computer storage medium on which a computer program is stored, which when executed by a processor implements the substation target detection method described above.
The invention has the beneficial effects that:
the invention provides a transformer substation target detection method, a transformer substation target detection system and a computer storage medium, wherein the method trains a manual calibration training set and an uncalibrated training set through a student model and a teacher model, and solves the problem that the training sets are difficult to collect after a transformer substation is put into use; a violation classification model is introduced to supervise the training of the student model and the teacher model; the violation classification model is optimized according to the cutout of each round of the semi-supervised detection model, and then the cutout target is screened according to the predicted score and category of each time, so that the classification accuracy of the violation classification model is optimized; in the semi-supervised model training, the characteristic layer of the student model is combined with the detection head of the teacher model, so that the defects of the teacher model are optimized in the training process of the student model. The substation target detection method provided by the invention has higher target detection accuracy.
Drawings
Fig. 1 is a flowchart of a substation target detection method according to an embodiment of the present invention;
FIG. 2 is a flowchart of obtaining a first matting set according to an embodiment of the present invention;
FIG. 3 is a flowchart of obtaining a first scaled matting set provided by an embodiment of the invention;
FIG. 4 is a flowchart of calculating each manually calibrated iteration loss value and obtaining a first retained matting set according to an embodiment of the present invention;
FIG. 5 is a flowchart of calculating a loss value for each uncalibrated iteration according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a substation target detection system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Conventional supervised target detection requires collecting large data sets and manually calibrating the data, and data collection at a transformer substation is very difficult, mainly for the following reasons: 1. violation targets occur infrequently at a substation, so the collected data set is sparse; violation scenarios can be staged manually, but manually staged data sets lack generality; 2. before a substation is put into service, data can be staged artificially, but once it is in service the site becomes very dangerous and violation targets can no longer be staged. The current solution to these problems is to train a semi-supervised substation violation target detection model, but existing semi-supervised training has the following problems:
(1) The lack of a calibrated data set for supervision during training on the uncalibrated data set results in low target-detection accuracy;
(2) The student model lacks the guidance of the teacher model when predicting results.
To address these problems, the invention provides a transformer substation target detection method that combines a violation classification model with a semi-supervised training model.
Specifically, fig. 1 is a flowchart of a substation target detection method provided in an embodiment of the present invention, and as shown in fig. 1, the method includes:
S1, sequentially performing a matting operation and a scaling operation on the target frames in all pictures in a manually calibrated training set to obtain a first scaled matting set; inputting the first scaled matting set into an original classification model for multiple rounds of training to obtain a violation classification model;
Specifically, sequentially performing the matting and scaling operations on the target frames in all pictures (pictures captured at the substation) in the manually calibrated training set to obtain the first scaled matting set includes:
S11, performing the matting operation on all target frames in all pictures in the manually calibrated training set to obtain a first matting set. Fig. 2 is a flowchart of obtaining the first matting set according to an embodiment of the present invention; as shown in fig. 2, S11 includes:
s111, performing intersection ratio operation on any two target frames to obtain an intersection ratio;
The intersection ratio is calculated according to the following formula:

IOU = area(A ∩ B) / area(A ∪ B)

where IOU is the intersection ratio, A is the region covered by the first target frame, and B is the region covered by the second target frame.
S112, when the intersection ratio exceeds a first preset intersection ratio threshold, matting the two target frames together as one picture;
Specifically, the first preset intersection ratio threshold is set to 0.2; when two target frames overlap and their intersection ratio exceeds 0.2, the two target frames are matted together as one picture, and labels are assigned according to the different target categories in the picture.
Further, if two target frames have already been matted as one picture and the intersection ratio of a third target frame with that picture exceeds 0.2, the three target frames are matted together as one picture, with labels likewise assigned per target category.
S113, when the intersection ratio is greater than 0 and smaller than the first preset intersection ratio threshold, matting only the target frame with the larger area of the two;
Specifically, when the two target frames overlap and the intersection ratio is smaller than 0.2, the smaller target frame is blacked out and the larger target frame is matted to obtain one picture.
S114, when the intersection ratio equals 0, matting the two target frames separately;
Specifically, when the two target frames do not overlap, each is matted directly, yielding two pictures.
S115, taking all pictures obtained by matting as the first matting set;
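The pairwise decision rules in S112-S114 can be sketched for two frames as below. This is a minimal sketch: an intersection ratio exactly equal to the threshold is treated as the "larger-frame" case, since the patent leaves that boundary unspecified, and grouping more than two frames transitively (the three-frame case above) is omitted.

```python
def matting_decision(box_a, box_b, thresh=0.2):
    """Apply rules S112-S114 to a pair of frames (x1, y1, x2, y2).

    Returns 'together'  (IoU > thresh: matte both frames as one picture),
            'larger'    (0 < IoU <= thresh: blacken the smaller frame,
                         matte only the larger one),
            'separate'  (IoU == 0: matte each frame on its own).
    """
    # Inline intersection-over-union of the two axis-aligned boxes.
    iw = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    v = inter / (area_a + area_b - inter)
    if v > thresh:
        return 'together'
    if v > 0:
        return 'larger'
    return 'separate'
```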
S12, performing the scaling operation on the first matting set to obtain the first scaled matting set.
Specifically, fig. 3 is a flowchart of obtaining the first scaled matting set according to an embodiment of the present invention; as shown in fig. 3, S12 includes:
S121, scaling each picture in the first matting set by a preset scaling value, where the preset scaling value is randomly selected within a preset scaling range (0.5-0.8):

w' = w × s,  h' = h × s

where w' is the width of the scaled picture, h' is the height of the scaled picture, w and h are the width and height of the current picture in the first matting set, and s is the preset scaling value.
And S122, performing a black-edge filling operation on the scaled picture so that the ratio of its long side to its short side equals a preset ratio a:b (4:3 in this application).
Specifically, with long = max(w', h') and short = min(w', h'):

if long / short > a : b, the short side is padded:  Δshort = long × b / a − short

otherwise, the long side is padded:  Δlong = short × a / b − long

where long is the long side of the scaled picture, short is its short side, Δshort is the length added to the short side, and Δlong is the length added to the long side. After this operation, the ratio of the long side to the short side of the scaled picture equals the preset ratio a:b.
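The scale-then-pad step of S121-S122 can be sketched as follows (returning only the padded dimensions; actual pixel filling with black is omitted for brevity):

```python
def scale_and_pad(width, height, scale, a=4, b=3):
    """Scale a matte by `scale`, then pad with black so long:short == a:b.

    Returns the (width, height) of the padded picture; which side receives
    the padding follows the rule in S122.
    """
    w, h = width * scale, height * scale
    long_side, short_side = max(w, h), min(w, h)
    if long_side / short_side > a / b:
        short_side = long_side * b / a   # pad the short side up to ratio a:b
    else:
        long_side = short_side * a / b   # pad the long side up to ratio a:b
    return (long_side, short_side) if w >= h else (short_side, long_side)
```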
S13, inputting the first scaled matting set into the original classification model for multiple rounds of training to obtain the violation classification model;
Specifically, N pictures are extracted from the first scaled matting set as iteration pictures, and each iteration picture is input into the original classification model for training, giving a classification iteration loss value:

L_cls = −(1/N) Σ_{n=1..N} Σ_{c=1..C} y_{n,c} log(p_{n,c})

where L_cls is the classification iteration loss value, N is the number of iteration pictures (set to 32 in this application), C is the number of violation categories (set to 5 in this application), p_{n,c} is the predicted probability that the n-th picture belongs to the c-th category, and y_{n,c} is 1 if the n-th picture is labeled with the c-th category and 0 otherwise.
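As a sketch of this mean cross-entropy over the batch (variable names are illustrative; the loss formula as printed here is a standard cross-entropy reconstruction, since the original equation image is not reproduced in the source text):

```python
import math

def classification_loss(probs, labels):
    """Mean cross-entropy over N iteration pictures.

    probs:  per-picture probability vectors over the C violation categories
    labels: ground-truth category index for each picture
    """
    total = -sum(math.log(p[y]) for p, y in zip(probs, labels))
    return total / len(probs)
```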
All pictures in the first scaled matting set are trained on in this way over multiple rounds until the classification iteration loss value fluctuates within a first preset loss range, at which point training stops, yielding the violation classification model.
S2, extracting a manually-calibrated iteration picture and an uncalibrated iteration picture from the manually-calibrated training set and the uncalibrated training set respectively;
specifically, assuming that the artificial calibration training set comprises 100 pictures and the uncalibrated training set comprises 100 pictures, extracting one picture from the artificial calibration training set as an artificial calibration iterative picture, and extracting one picture from the uncalibrated training set as an uncalibrated iterative picture; the artificial calibration iteration picture and the uncalibrated iteration picture keep a first preset proportion (the first preset proportion in the application is 1:1).
Further, before S3, data preprocessing is performed on all pictures in the manually calibrated training set, and the preprocessed training set is input into an initial teacher model for multiple rounds of training to obtain the teacher model; the weights of the teacher model are then assigned to the initial student model to obtain the student model;
S3, inputting each manually calibrated iteration picture into the student model for training to obtain each manually calibrated iteration loss value; sequentially performing the matting and scaling operations on the target frames in each manually calibrated iteration picture to obtain a second scaled matting set, and screening the second scaled matting set according to the violation classification model to obtain a first retained matting set;
Fig. 4 is a flowchart of calculating each manually calibrated iteration loss value and obtaining the first retained matting set according to an embodiment of the present invention; as shown in fig. 4, S3 includes:
s31, performing data preprocessing (turning, affine transformation and other operations) on each artificial calibration iteration picture (one picture in the application), inputting each artificial calibration iteration picture subjected to data preprocessing into a student model for model training, and obtaining a third target prediction result (coordinates, categories and category scores of the target); calculating to obtain an iteration loss value of each artificial calibration according to the third target prediction result and the artificial calibration result;
Specifically, each manual calibration iteration regression loss value and each manual calibration iteration classification loss value are calculated according to the third target prediction result and the manual calibration result; the two loss values are added to obtain each manual calibration iteration loss value.
S32, when the type of the target frame calibrated by the third target prediction result is judged to be consistent with the type of the target frame calibrated by the artificial calibration result, judging whether the intersection ratio of the target frame calibrated by the third target prediction result and the target frame calibrated by the artificial calibration result exceeds a second preset intersection ratio threshold (0.7 in the application);
S33, if yes, sequentially carrying out a matting operation and a scaling operation on the target frame calibrated by the third target prediction result to obtain the second scaling matting set; inputting all pictures in the second scaling matting set into the violation classification model for model training to obtain a second classification prediction result;
S34, when the second classification prediction result of any picture in the second scaling matting set is judged to be consistent with the category of the manual calibration result and the category score of the second classification prediction result is larger than a first preset score (0.75 in this application), deleting that picture; the pictures that remain constitute the first retention matting set.
That is, assuming that the category predicted by the violation classification model for the current second scaling matte is consistent with the manually calibrated category, it is judged whether the category score (confidence score) predicted by the violation classification model for the current second scaling matte is less than or equal to the first preset score (0.75 in this application); if so, the current second scaling matte is retained.
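Step S32 above relies on the intersection ratio (IoU) between a predicted target frame and a manually calibrated target frame. A minimal sketch, assuming (x1, y1, x2, y2) corner coordinates (a coordinate convention assumed for illustration, not stated in the patent):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted frame whose IoU with the manually calibrated frame exceeds the second preset threshold (0.7) then proceeds to the matting and classification screening of steps S33 and S34.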
S4, inputting each uncalibrated iteration picture into a teacher model for model training to obtain a first target prediction result; according to the first target prediction result, sequentially carrying out matting operation and scaling operation on target frames in all uncalibrated iterative pictures to obtain a third scaling matting set; inputting the third scaling matting set into the violation classification model for model training to obtain a first classification prediction result; calculating to obtain a pseudo label target weight according to the first target prediction result and the first classification prediction result; screening the third zooming matting set to obtain a second reserved matting set;
Specifically, each uncalibrated iteration picture (one uncalibrated iteration picture, without data preprocessing) is input into the teacher model for model training to obtain the first target prediction result (i.e. the predicted target position coordinates, categories and category scores); according to the predicted target position coordinates, the target frames in all uncalibrated iteration pictures are sequentially subjected to a matting operation (consistent with the matting operation in S1) and a scaling operation (consistent with the scaling operation in S1) to obtain a third scaling matting set (assuming that two mattes are cut out); the third scaling matting set is input into the violation classification model for model training to obtain a first classification prediction result;
the screening the third scaled matte set to obtain a second retained matte set includes:
judging whether the category of the target frame calibrated by the first target prediction result is consistent with that of the first classification prediction result; if not, deleting the target frame calibrated by the first target prediction result; otherwise, retaining the target frame calibrated by the first target prediction result and judging whether the category score of the first classification prediction result is larger than a second preset score (0.7 in this application); if yes, deleting the corresponding third scaling matte, otherwise retaining the corresponding third scaling matte, thereby obtaining the second reserved matting set.
Specifically, assuming that two target frames, namely a first target frame and a second target frame, are predicted from one uncalibrated iteration picture, the two target frames are respectively subjected to the matting operation and the scaling operation to obtain two third scaling mattes, namely third scaling matte 1 and third scaling matte 2; whether the category of the first target frame is consistent with the category of third scaling matte 1 predicted by the violation classification model is judged, and if not, the first target frame is deleted; otherwise, the first target frame is retained, and whether the category score of third scaling matte 1 predicted by the violation classification model is larger than 0.7 is judged; if yes, third scaling matte 1 is deleted, otherwise third scaling matte 1 is retained, thereby obtaining the second reserved matting set.
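The screening just illustrated can be sketched as below. The dictionary layout and the category names are assumptions made for illustration only; the thresholds follow the text (second preset score 0.7).

```python
def screen_pseudo_label_mattes(detections, classifier_outputs, second_preset_score=0.7):
    """Screen predicted target frames and their scaled mattes (step S4 screening).

    detections:         list of dicts with 'box' and 'category' predicted by the
                        teacher model (layout assumed for this sketch).
    classifier_outputs: parallel list of (category, score) pairs produced by the
                        violation classification model for each third scaling matte.
    Returns (kept_boxes, kept_matte_indices): boxes kept as pseudo labels, and
    the indices of mattes retained as the second reserved matting set.
    """
    kept_boxes, kept_matte_indices = [], []
    for idx, (det, (cls_cat, cls_score)) in enumerate(zip(detections, classifier_outputs)):
        if det["category"] != cls_cat:
            continue  # inconsistent category: delete the target frame
        kept_boxes.append(det)  # consistent category: keep the pseudo-label box
        if cls_score <= second_preset_score:
            # low-confidence mattes are retained to further train the classifier
            kept_matte_indices.append(idx)
    return kept_boxes, kept_matte_indices

# Illustrative data: two predicted frames from one uncalibrated picture.
detections = [
    {"box": (0, 0, 10, 10), "category": "no_helmet"},
    {"box": (5, 5, 20, 20), "category": "smoking"},
]
classifier_outputs = [("no_helmet", 0.9), ("no_smoking", 0.5)]
kept_boxes, kept_matte_indices = screen_pseudo_label_mattes(detections, classifier_outputs)
```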
The retained target frames calibrated by the first target prediction result are used as pseudo labels.

The pseudo-label target weight is calculated according to the first target prediction result and the first classification prediction result; specifically, it is computed from the category score of the first classification prediction result corresponding to each retained target frame and the category score of the first target prediction result for that retained target frame.
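The weight formula itself appears only as an image in the original publication and is not reproduced in this text. Purely as an illustrative sketch, assuming the two category scores are simply averaged (an assumption, not the patent's stated formula):

```python
def pseudo_label_weight(cls_score, det_score):
    """Illustrative pseudo-label target weight.

    ASSUMPTION: the patent's exact formula is not recoverable from this text;
    averaging the violation-classifier score and the detector score is one
    plausible symmetric combination, used here only as a sketch.
    """
    return 0.5 * (cls_score + det_score)
```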
This process shows that the pseudo labels are supervised by the violation classification model, so their accuracy is higher.
S5, inputting each uncalibrated iteration picture into the student model and the teacher model to obtain a second target prediction result; calculating to obtain each uncalibrated iteration loss value according to the second target prediction result and the pseudo label target weight;
Specifically, fig. 5 is a flowchart of calculating each uncalibrated iteration loss value provided in the embodiment of the present invention; as shown in fig. 5, S5 includes:
S51, performing strong data preprocessing (flipping, affine transformation and other operations) on each uncalibrated iteration picture (one uncalibrated iteration picture in this application), and inputting each strongly preprocessed uncalibrated iteration picture into the backbone network of the student model to obtain a feature map; inputting the feature map into the detection head of the student model and the detection head of the teacher model respectively to obtain a fourth target prediction result and a fifth target prediction result;

S52, merging the fourth target prediction result and the fifth target prediction result to obtain the second target prediction result; and calculating each uncalibrated iteration loss value according to the second target prediction result and the pseudo-label target weight.
Specifically, performing an intersection ratio operation on the target frames with the same category calibrated in the second target prediction result, and when the intersection ratio value is greater than a third preset intersection ratio threshold value (0.7 in the present application), retaining the target frames with high category scores, and deleting the target frames with low category scores to obtain a remaining target frame set;
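The duplicate-removal step above (when two same-category target frames overlap with an intersection ratio above 0.7, keep the one with the higher category score) can be sketched as follows; the box/score dictionary layout is an illustrative assumption.

```python
def dedup_same_category(boxes, iou_threshold=0.7):
    """Merge duplicate same-category detections after combining the student
    and teacher predictions: of two same-category boxes whose IoU exceeds
    the threshold, only the higher-scoring one is retained.

    boxes: list of dicts {'box': (x1, y1, x2, y2), 'category': str, 'score': float}.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    kept = []
    # Visit boxes from highest to lowest score so higher scores win ties.
    for cand in sorted(boxes, key=lambda d: d["score"], reverse=True):
        if all(k["category"] != cand["category"]
               or iou(k["box"], cand["box"]) <= iou_threshold
               for k in kept):
            kept.append(cand)
    return kept

merged = dedup_same_category([
    {"box": (0, 0, 10, 10), "category": "a", "score": 0.9},
    {"box": (0, 0, 10, 9), "category": "a", "score": 0.8},   # IoU 0.9 with the above
    {"box": (50, 50, 60, 60), "category": "b", "score": 0.7},
])
```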
carrying out matting operation and zooming operation on the residual target frame set in sequence to obtain a fourth zooming matting set; inputting the fourth scaling matting set into the violation classification model for model training to obtain a third classification prediction result;
judging whether the categories of the target frame calibrated by the third classification prediction result are consistent with the categories of the residual target frame set, and if not, deleting the target frames corresponding to the residual target frame set; otherwise, reserving the target frames corresponding to the remaining target frame sets, and judging whether the category score of the third classification prediction result is within a preset score range (higher than 0.35 and lower than 0.85 in the present application), if so, reserving the corresponding fourth zooming matting, and obtaining a third reserved matting set.
Each uncalibrated iteration loss value is calculated according to the second target prediction result and the pseudo-label target weight, specifically as follows:

the target frames corresponding to the remaining target frame set are evaluated against the pseudo labels to obtain each uncalibrated iteration classification loss value and each uncalibrated iteration regression loss value, and each uncalibrated iteration loss value is calculated from these two loss values and the pseudo-label target weight by the formula:

L_u = w × (L_cls + L_reg)

where w is the pseudo-label target weight, L_cls is each uncalibrated iteration classification loss value, L_reg is each uncalibrated iteration regression loss value, and L_u is each uncalibrated iteration loss value.
In this process, the student model not only optimizes its own deficiencies but also compensates for the defects of the teacher model.
S6, calculating to obtain an iteration total loss value according to each manually calibrated iteration loss value and each uncalibrated iteration loss value; updating the student model and the teacher model according to the iteration total loss value to obtain a current iteration student model and a current iteration teacher model;
Specifically, the iteration total loss value is calculated according to the following formula:

L = (1 / N) × ( Σ_{i=1..N_s} L_i^s + Σ_{j=1..N_u} L_j^u )

where L is the iteration total loss value, N is the number of pictures in one iteration (i.e. the sum of the number of manual calibration iteration pictures and the number of uncalibrated iteration pictures), N_s is the number of manual calibration iteration pictures, N_u is the number of uncalibrated iteration pictures, i denotes the i-th manual calibration iteration picture, j denotes the j-th uncalibrated iteration picture, L_i^s is the i-th manual calibration iteration loss value, and L_j^u is the j-th uncalibrated iteration loss value.
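Read as the average of all per-picture loss values over the combined iteration batch, the total loss can be sketched as:

```python
def iteration_total_loss(labeled_losses, unlabeled_losses):
    """Iteration total loss: the average of each manual calibration iteration
    loss value and each uncalibrated iteration loss value, taken over the
    combined number of iteration pictures."""
    n = len(labeled_losses) + len(unlabeled_losses)
    if n == 0:
        raise ValueError("at least one iteration picture is required")
    return (sum(labeled_losses) + sum(unlabeled_losses)) / n
```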
The updating the student model and the teacher model according to the iteration total loss value to obtain a current iteration student model and a current iteration teacher model comprises the following steps:
carrying out back propagation on the student model according to the iteration total loss value to obtain a current iteration student model; and assigning the weight of the current iteration student model to the weight of the teacher model to obtain the current iteration teacher model.
The specific formula is as follows:

θ_T^(k) = m · θ_T^(k−1) + (1 − m) · θ_S^(k)

where θ_T^(k) is the weights of the teacher model at the current iteration, θ_T^(k−1) is the weights of the teacher model at the last iteration, θ_S^(k) is the weights of the student model at the current iteration, k is the current iteration step number, K is the preset number of iteration steps, and the momentum coefficient m is determined by k and K.
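The exact momentum schedule over the current step k and the preset step count K is shown only as an image in the original publication. The sketch below therefore uses the standard exponential-moving-average (EMA) update with the ramped momentum of Mean Teacher as a stand-in assumption, not the patent's stated schedule.

```python
def ema_update(teacher_weights, student_weights, step, max_momentum=0.999):
    """EMA weight transfer from student to teacher.

    ASSUMPTION: the patent's momentum schedule is not reproduced in this text;
    the ramp m = min(1 - 1/(step + 1), max_momentum), as used in Mean Teacher,
    is shown instead as an illustrative choice.
    """
    m = min(1.0 - 1.0 / (step + 1), max_momentum)
    return [m * t + (1.0 - m) * s for t, s in zip(teacher_weights, student_weights)]
```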
S7, inputting the first scaling matting set, the first reserving matting set and the second reserving matting set into the violation classification model for multi-round model training to obtain an updated violation classification model;
Specifically, the first scaling matting set, the first retaining matting set, the second reserved matting set and the third reserved matting set are input into the violation classification model for a first round of model training to obtain the category and category score (confidence score) of each picture and a current-round classification loss value; multiple rounds of training are repeated until the current-round classification loss value fluctuates within a second preset loss range, at which point model training stops and the updated violation classification model is obtained. Specifically, if during the multi-round model training the category of a picture keeps changing, or its category score fluctuates beyond a preset fluctuation range (greater than 0.6 in this application) for n consecutive times (5 times in this application), that picture is deleted.
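The instability-based deletion described above can be sketched as follows; the per-round history structure is an assumption made for illustration.

```python
def prune_unstable_samples(history, n=5, fluctuation=0.6):
    """Delete pictures whose predictions are unstable across training rounds.

    history: {picture_id: [(category, score), ...]} — per-round predictions of
    the violation classification model (structure assumed for this sketch).
    A picture is deleted when, for n consecutive round transitions, its category
    changes or its score jumps beyond the preset fluctuation range.
    """
    deleted = set()
    for pic, rounds in history.items():
        unstable = 0
        for (prev_cat, prev_score), (cat, score) in zip(rounds, rounds[1:]):
            if cat != prev_cat or abs(score - prev_score) > fluctuation:
                unstable += 1
                if unstable >= n:
                    deleted.add(pic)
                    break
            else:
                unstable = 0  # a stable round resets the consecutive counter
    return deleted

history = {
    # category flips every round -> unstable, deleted
    "pic_flip": [("a", 0.9), ("b", 0.9), ("a", 0.9), ("b", 0.9), ("a", 0.9), ("b", 0.9)],
    # identical prediction every round -> stable, kept
    "pic_stable": [("a", 0.9)] * 6,
}
deleted = prune_unstable_samples(history)
```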
S8, repeating the steps S2-S7 until all pictures of the manual calibration training set and the non-calibration training set are trained, and performing multi-round model training to obtain a target student model and a target teacher model;
Specifically, steps S2-S7 are repeated until all pictures of the manual calibration training set and the uncalibrated training set have been trained, obtaining the student model of the current round, the teacher model of the current round and the total loss value of the current round;

multi-round model training is then performed on the manual calibration training set and the uncalibrated training set until the total loss value of the current round fluctuates within a third preset loss range, at which point model training stops and the target student model and the target teacher model are obtained.
And S9, inputting the picture to be detected into the target student model and the target teacher model for detection to obtain the target position and the target category.
Fig. 6 is a schematic structural diagram of a substation target detection system according to an embodiment of the present invention, and as shown in fig. 6, the system includes:
the violation classification model training unit 201 is configured to perform matting operation and scaling operation on the target frames in all the pictures in the manual calibration training set in sequence to obtain a first scaling matting set; inputting the first zooming matting set into an original classification model to perform multi-round model training to obtain a violation classification model;
an extracting unit 202, configured to extract an artificially calibrated iterative picture and an uncalibrated iterative picture from the artificially calibrated training set and the uncalibrated training set, respectively;
the first calculating and screening unit 203 is used for inputting each artificial calibration iteration picture into a student model for model training to obtain each artificial calibration iteration loss value; carrying out matting operation and scaling operation on the target frame in each manual calibration iteration picture in sequence to obtain a second scaling matting set, and screening the second scaling matting set according to the violation classification model to obtain a first retaining matting set;
the second calculating and screening unit 204 is configured to input each uncalibrated iterative picture into the teacher model for model training to obtain a first target prediction result; according to the first target prediction result, sequentially carrying out matting operation and scaling operation on target frames in all uncalibrated iterative pictures to obtain a third scaling matting set; inputting the third scaling matting set into the violation classification model for model training to obtain a first classification prediction result; calculating to obtain a pseudo label target weight according to the first target prediction result and the first classification prediction result; screening the third zooming matting set to obtain a second reserved matting set;
a third calculating unit 205, configured to input each uncalibrated iterative picture into the student model and the teacher model to obtain a second target prediction result; calculating to obtain each uncalibrated iteration loss value according to the second target prediction result and the pseudo label target weight;
a fourth calculating and updating unit 206, configured to calculate a total iteration loss value according to each artificially calibrated iteration loss value and each uncalibrated iteration loss value; updating the student model and the teacher model according to the iteration total loss value to obtain a current iteration student model and a current iteration teacher model;
a violation classification model updating unit 207, configured to input the first scaling matting set, the first reserving matting set, and the second reserving matting set into the violation classification model for multiple-round model training, so as to obtain an updated violation classification model;
The output unit 208 is configured to output the target student model and the target teacher model obtained after all pictures of the manual calibration training set and the uncalibrated training set have been processed by the units from the extraction unit through the violation classification model updating unit and multi-round model training has been performed;
and the detection unit 209 is configured to input the picture to be detected into the target student model and the target teacher model for detection, so as to obtain a target position and a target category.
The invention has the beneficial effects that:
The invention provides a transformer substation target detection method, system and computer storage medium. The method trains on a manual calibration training set and an uncalibrated training set through a student model and a teacher model, which alleviates the difficulty of collecting calibrated training data after a transformer substation is put into use. A violation classification model is introduced to supervise the training of the student model and the teacher model; the violation classification model is optimized with the mattes produced in each round of semi-supervised detection training, and the matted targets are in turn screened according to the predicted score and category each time, which improves the classification accuracy of the violation classification model. In the semi-supervised model training, the feature map of the student model is also fed to the detection head of the teacher model, so the defects of the teacher model are optimized during the training of the student model. The substation target detection method provided by the invention therefore achieves higher target detection accuracy.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A transformer substation target detection method is characterized by comprising the following steps:
S1, sequentially carrying out a matting operation and a scaling operation on target frames in all pictures in a manual calibration training set to obtain a first scaling matting set; inputting the first scaling matting set into an original classification model for multi-round model training to obtain a violation classification model;
S2, extracting a manually calibrated iteration picture and an uncalibrated iteration picture from the manual calibration training set and an uncalibrated training set respectively;
S3, inputting each manually calibrated iteration picture into a student model for model training to obtain each manual calibration iteration loss value; carrying out the matting operation and the scaling operation on the target frame in each manually calibrated iteration picture in sequence to obtain a second scaling matting set, and screening the second scaling matting set according to the violation classification model to obtain a first retaining matting set;
S4, inputting each uncalibrated iteration picture into a teacher model for model training to obtain a first target prediction result; carrying out the matting operation and the scaling operation on target frames in all uncalibrated iteration pictures in sequence according to the first target prediction result to obtain a third scaling matting set; inputting the third scaling matting set into the violation classification model for model training to obtain a first classification prediction result; calculating a pseudo label target weight according to the first target prediction result and the first classification prediction result; screening the third scaling matting set to obtain a second reserved matting set;
S5, inputting each uncalibrated iteration picture into the student model and the teacher model to obtain a second target prediction result; calculating each uncalibrated iteration loss value according to the second target prediction result and the pseudo label target weight;
S6, calculating an iteration total loss value according to each manual calibration iteration loss value and each uncalibrated iteration loss value; updating the student model and the teacher model according to the iteration total loss value to obtain a current iteration student model and a current iteration teacher model;
S7, inputting the first scaling matting set, the first retaining matting set and the second reserved matting set into the violation classification model for multi-round model training to obtain an updated violation classification model;
S8, repeating the steps S2-S7 until all pictures of the manual calibration training set and the uncalibrated training set are trained, and performing multi-round model training to obtain a target student model and a target teacher model;
S9, inputting a picture to be detected into the target student model and the target teacher model for detection to obtain a target position and a target category.
2. The method as claimed in claim 1, wherein the sequentially performing the matting operation and the scaling operation on the target frames in all the pictures in the manual calibration training set to obtain a first scaling matting set comprises:
S11, performing the matting operation on all target frames in all pictures in the manual calibration training set to obtain a first matting set, which comprises the following steps:
performing intersection ratio operation on any two target frames to obtain an intersection ratio;
when the intersection ratio exceeds a first preset intersection ratio threshold value, using the two target frames as a picture for matting;
when the intersection ratio is larger than 0 and smaller than the first preset intersection ratio threshold, matting the target frame with the larger area of the two target frames;
when the intersection ratio is equal to 0, performing sectional drawing on the two target frames;
all pictures obtained by matting are used as the first matting set;
S12, carrying out the scaling operation on the first matting set to obtain the first scaling matting set.
3. The method according to claim 2, wherein the S12 comprises:
zooming each picture in the first matting set according to a preset zooming value;
and performing black edge filling operation on the zoomed picture to enable the proportion of the long edge and the short edge of the zoomed picture to be a preset proportion.
4. The method of claim 1, wherein the S3 comprises:
inputting each artificial calibration iteration picture into a student model for model training to obtain a third target prediction result; calculating to obtain an iteration loss value of each artificial calibration according to the third target prediction result and the artificial calibration result;
when the type of the target frame calibrated by the third target prediction result is judged to be consistent with the type of the target frame calibrated by the artificial calibration result, judging whether the intersection ratio of the target frame calibrated by the third target prediction result and the target frame calibrated by the artificial calibration result exceeds a second preset intersection ratio threshold value;
if yes, sequentially carrying out matting operation and zooming operation on the target frame calibrated by the third target prediction result to obtain a second zooming matting set; inputting all pictures in the second scaling sectional picture set into the violation classification model for model training to obtain a second classification prediction result;
and when the second classification prediction result of any picture in the second zooming matting set is judged to be consistent with the category of the artificial calibration result and the category score of the second classification prediction result is larger than a first preset score, deleting the picture to obtain the first retaining matting set.
5. The method according to claim 1, wherein the screening the third scaling matting set to obtain a second reserved matting set comprises:
judging whether the category of the target frame calibrated by the first target prediction result is consistent with that of the first classification prediction result, and if not, deleting the target frame calibrated by the first target prediction result; otherwise, retaining the target frame calibrated by the first target prediction result, and judging whether the category score of the first classification prediction result is larger than a second preset score; if yes, deleting the corresponding third scaling matte, otherwise retaining the corresponding third scaling matte to obtain the second reserved matting set.
6. The method of claim 5, wherein:
the pseudo label is a reserved target frame calibrated by the first target prediction result;
the pseudo tag target weight is calculated by the following formula:
7. The method according to claim 1, wherein the S5 comprises:
inputting each uncalibrated iteration picture into a backbone network in the student model to obtain a feature map; inputting the characteristic graphs into a detection head in the student model and a detection head in the teacher model respectively to obtain a fourth target prediction result and a fifth target prediction result respectively;
merging the fourth target prediction result and the fifth target prediction result to obtain a second target prediction result; and calculating to obtain each uncalibrated iteration loss value according to the second target prediction result and the pseudo label target weight.
8. The method of claim 1, wherein updating the student model and the teacher model according to the iterative total loss value to obtain a current iterative student model and a current iterative teacher model comprises:
carrying out back propagation on the student model according to the iteration total loss value to obtain a current iteration student model; and assigning the weight of the current iteration student model to the weight of the teacher model to obtain the current iteration teacher model.
9. A substation target detection system, comprising:
the violation classification model training unit is used for sequentially carrying out image matting operation and zooming operation on the target frames in all the pictures in the manual calibration training set to obtain a first zooming image matting set; inputting the first scaling matting set into an original classification model for multi-round model training to obtain a violation classification model;
the extraction unit is used for extracting an artificial calibration iteration picture and an uncalibrated iteration picture from the artificial calibration training set and the uncalibrated training set respectively;
the first calculation and screening unit is used for inputting each manual calibration iteration picture into a student model for model training to obtain each manual calibration iteration loss value; carrying out a matting operation and a scaling operation on the target frame in each manual calibration iteration picture in sequence to obtain a second scaling matting set, and screening the second scaling matting set according to the violation classification model to obtain a first retaining matting set;
the second calculation and screening unit is used for inputting each uncalibrated iteration picture into the teacher model for model training to obtain a first target prediction result; according to the first target prediction result, sequentially carrying out matting operation and scaling operation on target frames in all uncalibrated iterative pictures to obtain a third scaling matting set; inputting the third scaling matting set into the violation classification model for model training to obtain a first classification prediction result; calculating to obtain a pseudo label target weight according to the first target prediction result and the first classification prediction result; screening the third zooming matting set to obtain a second reserved matting set;
the third calculation unit is used for inputting each uncalibrated iteration picture into the student model and the teacher model to obtain a second target prediction result; calculating to obtain each uncalibrated iteration loss value according to the second target prediction result and the pseudo label target weight;
the fourth calculating and updating unit is used for calculating to obtain an iteration total loss value according to each artificial calibration iteration loss value and each uncalibrated iteration loss value; updating the student model and the teacher model according to the iteration total loss value to obtain a current iteration student model and a current iteration teacher model;
the violation classification model updating unit is used for inputting the first scaling matting set, the first reserving matting set and the second reserving matting set into the violation classification model for multi-round model training to obtain an updated violation classification model;
the output unit is used for outputting the target student model and the target teacher model obtained after all pictures of the manual calibration training set and the uncalibrated training set have been processed by the units from the extraction unit through the violation classification model updating unit and multi-round model training has been performed;
and the detection unit is used for inputting the picture to be detected into the target student model and the target teacher model for detection to obtain the target position and the target category.
10. A computer storage medium having a computer program stored thereon, the program, when executed by a processor, implementing a substation target detection method according to any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211632345.XA CN115620155B (en) | 2022-12-19 | 2022-12-19 | Transformer substation target detection method and system and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115620155A true CN115620155A (en) | 2023-01-17 |
CN115620155B CN115620155B (en) | 2023-03-10 |
Family
ID=84879669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211632345.XA Active CN115620155B (en) | 2022-12-19 | 2022-12-19 | Transformer substation target detection method and system and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115620155B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130114942A1 (en) * | 2011-11-07 | 2013-05-09 | General Electric Company | Automatic Surveillance Video Matting Using a Shape Prior |
CN114399686A (en) * | 2021-11-26 | 2022-04-26 | 中国科学院计算机网络信息中心 | Remote sensing image ground feature identification and classification method and device based on weak supervised learning |
CN114581350A (en) * | 2022-02-23 | 2022-06-03 | 清华大学 | Semi-supervised learning method suitable for monocular 3D target detection task |
CN114998691A (en) * | 2022-06-24 | 2022-09-02 | 浙江华是科技股份有限公司 | Semi-supervised ship classification model training method and device |
CN115359062A (en) * | 2022-10-24 | 2022-11-18 | 浙江华是科技股份有限公司 | Method and system for dividing and calibrating monitoring target through semi-supervised example |
Non-Patent Citations (2)
Title |
---|
WEI LI ET AL.: "Improving Audio-visual Speech Recognition Performance with Cross-modal Student-teacher Training" * |
GE SHIMING; ZHAO SHENGWEI; LIU WENYU; LI CHENYU: "Face Recognition Based on Deep Feature Distillation" * |
Also Published As
Publication number | Publication date |
---|---|
CN115620155B (en) | 2023-03-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103632158B (en) | Forest fire prevention monitor method and forest fire prevention monitor system | |
CN110084165B (en) | Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation | |
CN111144232A (en) | Transformer substation electronic fence monitoring method based on intelligent video monitoring, storage medium and equipment | |
CN109214280B (en) | Shop identification method and device based on street view, electronic equipment and storage medium | |
CN104463253B (en) | Passageway for fire apparatus safety detection method based on adaptive background study | |
CN110942072A (en) | Quality evaluation-based quality scoring and detecting model training and detecting method and device | |
CN112462774A (en) | Urban road supervision method and system based on unmanned aerial vehicle navigation following and readable storage medium | |
CN113076899B (en) | High-voltage transmission line foreign matter detection method based on target tracking algorithm | |
CN115272656A (en) | Environment detection alarm method and device, computer equipment and storage medium | |
CN106023199B (en) | A kind of flue gas blackness intelligent detecting method based on image analysis technology | |
CN114782897A (en) | Dangerous behavior detection method and system based on machine vision and deep learning | |
CN111476102A (en) | Safety protection method, central control equipment and computer storage medium | |
CN112270671B (en) | Image detection method, device, electronic equipment and storage medium | |
CN115620155B (en) | Transformer substation target detection method and system and computer storage medium | |
CN114529869A (en) | Training method of illegal fire detection model and detection method using model | |
CN107797981A (en) | A kind of target text recognition methods and device | |
CN116884192A (en) | Power production operation risk early warning method, system and equipment | |
CN112288701A (en) | Intelligent traffic image detection method | |
CN111553199A (en) | Motor vehicle traffic violation automatic detection technology based on computer vision | |
CN115829324A (en) | Personnel safety risk silent monitoring method | |
CN114926791A (en) | Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment | |
CN114694090A (en) | Campus abnormal behavior detection method based on improved PBAS algorithm and YOLOv5 | |
CN115761607B (en) | Target identification method, device, terminal equipment and readable storage medium | |
CN114821327B (en) | Method and system for extracting and processing characteristics of power line and tower and storage medium | |
CN111191648B (en) | Method and device for image recognition based on deep learning network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||