CN114580631A - Model training method, smoke and fire detection method, device, electronic equipment and medium - Google Patents


Info

Publication number
CN114580631A
CN114580631A
Authority
CN
China
Prior art keywords
deep learning
learning model
image set
sample image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210214314.6A
Other languages
Chinese (zh)
Other versions
CN114580631B (en)
Inventor
安梦涛
程军
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210214314.6A priority Critical patent/CN114580631B/en
Publication of CN114580631A publication Critical patent/CN114580631A/en
Application granted granted Critical
Publication of CN114580631B publication Critical patent/CN114580631B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217: Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193: Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The disclosure provides a training method for a deep learning model, a smoke and fire detection method and apparatus, an electronic device, and a storage medium, relating to the field of artificial intelligence and, in particular, to deep learning. The specific implementation scheme is as follows: determining an evaluation value characterizing the evaluation performance of a deep learning model, where the deep learning model has been trained using a first sample image set; in response to detecting that the evaluation value does not reach a predefined range, processing the first sample image set to obtain a second sample image set; and training the deep learning model using the second sample image set.

Description

Model training method, smoke and fire detection method, apparatus, electronic device, and medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a training method for a deep learning model, a smoke and fire detection method, an apparatus, an electronic device, and a storage medium.
Background
Deep learning, also known as deep structured learning or hierarchical learning, is part of a broader family of machine learning methods based on artificial neural networks. Deep learning architectures, such as deep neural networks, deep belief networks, recurrent neural networks, and convolutional neural networks, have been applied in fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection, and board game programs. In all of these fields, the accuracy of the output results depends on proper model training.
Disclosure of Invention
The disclosure provides a training method of a deep learning model, a smoke and fire detection method, a smoke and fire detection device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a training method of a deep learning model, including:
determining an evaluation value for representing evaluation performance of a deep learning model, wherein the deep learning model is obtained by training with a first sample image set; in response to detecting that the evaluation value does not reach a predefined range, processing the first sample image set to obtain a second sample image set; and training the deep learning model by using the second sample image set.
According to another aspect of the present disclosure, there is provided a smoke and fire detection method, comprising: inputting an image to be detected into a deep learning model to obtain a detection result, wherein the deep learning model is trained using the training method described above.
According to another aspect of the present disclosure, there is provided a training apparatus for a deep learning model, including: a determining module configured to determine an evaluation value characterizing the evaluation performance of a deep learning model, where the deep learning model is trained using a first sample image set; a processing module configured to process the first sample image set to obtain a second sample image set in response to detecting that the evaluation value does not reach a predefined range; and a training module configured to train the deep learning model using the second sample image set.
According to another aspect of the present disclosure, there is provided a smoke and fire detection apparatus, comprising: an acquisition module configured to input an image to be detected into a deep learning model to obtain a detection result, wherein the deep learning model is trained by the training apparatus described above.
According to another aspect of the present disclosure, there is provided an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the deep learning model training method and the smoke and fire detection method of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the deep learning model training method and the smoke and fire detection method of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the deep learning model training method and the smoke and fire detection method of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 schematically illustrates an exemplary system architecture to which the training method and apparatus for a deep learning model and the smoke and fire detection method and apparatus may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow diagram of a method of training a deep learning model according to an embodiment of the disclosure;
FIG. 3 schematically illustrates an overall flow diagram for training a deep learning model according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a smoke and fire detection method according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of a training apparatus for deep learning models, in accordance with an embodiment of the present disclosure;
FIG. 6 schematically illustrates a block diagram of a smoke and fire detection device according to an embodiment of the present disclosure; and
FIG. 7 illustrates a schematic block diagram of an example electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information of the users involved comply with the provisions of relevant laws and regulations, necessary confidentiality measures are taken, and public order and good customs are not violated.
In the technical scheme of the disclosure, before the personal information of the user is acquired or collected, the authorization or the consent of the user is acquired.
Fire is a frequent disaster that endangers people's lives and property, so timely fire detection is critical. When performing fire detection in fire-prone scenes such as residences, gas stations, roads, and forests, the target detection technology of PaddleX (the PaddlePaddle full-pipeline development tool) can be applied to automatically detect smoke and fire in a monitored region, helping relevant personnel respond in time and minimizing casualties and property loss.
Smoke and fire detection technology comprises steps such as image acquisition, image preprocessing, image combination, smoke and fire target detection, and deep learning. One method for detecting a smoke and fire target comprises: acquiring multiple consecutive frames of video images; determining a candidate smoke and fire region in each frame according to its color distribution; determining a pixel motion region in each frame according to changes across the consecutive frames; and determining the target smoke and fire region in each frame from the candidate region and the pixel motion region.
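The color-and-motion candidate pipeline described above can be sketched in a few lines. This is an illustrative toy, not the patent's implementation: frames are nested lists of RGB tuples, and `is_fire_color` is a hypothetical caller-supplied palette test.

```python
def color_candidates(frame, is_fire_color):
    """Pixels whose colour matches a smoke/fire palette predicate."""
    return {(x, y) for y, row in enumerate(frame)
                   for x, px in enumerate(row) if is_fire_color(px)}

def motion_candidates(prev_frame, frame, threshold=30):
    """Pixels whose intensity changed between consecutive frames."""
    moved = set()
    for y, (prev_row, row) in enumerate(zip(prev_frame, frame)):
        for x, (p, c) in enumerate(zip(prev_row, row)):
            if abs(sum(c) - sum(p)) > threshold * 3:
                moved.add((x, y))
    return moved

def target_region(prev_frame, frame, is_fire_color):
    """Intersect the colour candidates with the motion candidates,
    mirroring the combination step the method describes."""
    return color_candidates(frame, is_fire_color) & motion_candidates(prev_frame, frame)
```

A real pipeline would operate on arrays and connected regions rather than individual pixels, but the intersection of color and motion evidence is the core idea.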
In implementing the concept of the present disclosure, the inventors found that smoke and fire detection produces many interference samples, which easily cause false detections. For example, many everyday objects, such as clouds and red lights, closely resemble smoke or fire and are difficult to distinguish, easily causing false detections. In addition, the smoke and fire detection rate is low while fires develop quickly; if a fire cannot be identified, and human intervention triggered, at its initial stage, a detection result output only at the middle stage of the fire loses its significance.
The present disclosure provides a training method for a deep learning model, a smoke and fire detection method, a smoke and fire detection apparatus, an electronic device, and a storage medium. The training method comprises: determining an evaluation value characterizing the evaluation performance of a deep learning model, where the deep learning model has been trained using a first sample image set; in response to detecting that the evaluation value does not reach a predefined range, processing the first sample image set to obtain a second sample image set; and training the deep learning model using the second sample image set.
Fig. 1 schematically illustrates an exemplary system architecture to which the training method and apparatus for a deep learning model and the smoke and fire detection method and apparatus may be applied, according to an embodiment of the present disclosure.
It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied, intended to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments cannot be applied to other devices, systems, environments, or scenarios. For example, in another embodiment, the system architecture may include only a terminal device, which implements the training method and apparatus for the deep learning model and the smoke and fire detection method and apparatus provided by the embodiments of the present disclosure without interacting with a server.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a knowledge reading application, a web browser application, a search application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for content browsed by a user using the terminal devices 101, 102, 103. The background management server may analyze and otherwise process received data such as user requests, and feed back a processing result (e.g., a web page, information, or data obtained or generated according to the user request) to the terminal device. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that remedies the drawbacks of traditional physical hosts and VPS (Virtual Private Server) services, namely high management difficulty and weak service scalability. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that the deep learning model training method and the smoke and fire detection method provided by the embodiments of the present disclosure may generally be performed by the terminal device 101, 102, or 103. Accordingly, the deep learning model training apparatus and the smoke and fire detection apparatus provided by the embodiments of the present disclosure may also be disposed in the terminal device 101, 102, or 103.
Alternatively, the deep learning model training method and the smoke and fire detection method provided by the embodiments of the present disclosure may generally be executed by the server 105. Accordingly, the deep learning model training apparatus and the smoke and fire detection apparatus may generally be disposed in the server 105. These methods may also be executed by a server or server cluster that is different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105; accordingly, the apparatuses may also be disposed in such a server or server cluster.
For example, when a deep learning model needs to be trained, the terminal devices 101, 102, 103 may acquire a first sample image set and transmit it to the server 105. The server 105 then determines an evaluation value characterizing the evaluation performance of the deep learning model trained using the first sample image set, processes the first sample image set to obtain a second sample image set in response to detecting that the evaluation value does not reach a predefined range, and trains the deep learning model using the second sample image set. Alternatively, a server or server cluster capable of communicating with the terminal devices 101, 102, 103 and/or the server 105 may analyze the performance of the deep learning model and carry out its training.
For example, when smoke and fire detection is required, the terminal devices 101, 102, 103 may acquire an image to be detected and send it to the server 105, which inputs the image into the deep learning model to obtain a detection result. Alternatively, a server or server cluster capable of communicating with the terminal devices 101, 102, 103 and/or the server 105 may analyze the image to be detected and produce the detection result.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for an implementation.
Fig. 2 schematically shows a flow chart of a training method of a deep learning model according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
In operation S210, an evaluation value for characterizing evaluation performance of a deep learning model trained using the first sample image set is determined.
In operation S220, in response to detecting that the evaluation value does not reach the predefined range, the first sample image set is processed to obtain a second sample image set.
In operation S230, the deep learning model is trained using the second sample image set.
According to an embodiment of the present disclosure, the deep learning model may be used to detect target information in an image, where the target information may include at least one of smoke information and fire information. In the case where the deep learning model is used to detect smoke information and fire information in an image, the output of the model may include the category of the detected target information, such as a smoke category or a fire category. The output may also include position information of the detected smoke information and fire information in the image, such as the coordinates of a pixel, or a set of pixel coordinates, covered by the detected target information.
According to an embodiment of the present disclosure, the first sample image set may include images having target information. An image with target information may be one acquired by an image acquisition device from a scene containing the target information, or one formed by cutting the target information out of such an image, enlarging or reducing it, and pasting it onto a randomly selected background picture. The target information in an image may occupy an area of any size; for example, it may cover less than 3% of the image area. When the deep learning model is to be trained to detect information such as smoke and fire, the target information may include smoke information and fire information.
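The cut-and-paste augmentation mentioned here can be sketched as follows. The `paste_patch` helper and its list-of-lists image format are illustrative assumptions, not part of the patent; scaling of the patch is omitted for brevity.

```python
import random

def paste_patch(background, patch, seed=None):
    """Paste a cropped target patch onto a random location of a background
    image (images as 2-D lists of pixel values); returns a new image and
    leaves the background untouched."""
    rng = random.Random(seed)
    bh, bw = len(background), len(background[0])
    ph, pw = len(patch), len(patch[0])
    top = rng.randrange(bh - ph + 1)   # random top-left corner
    left = rng.randrange(bw - pw + 1)
    out = [row[:] for row in background]
    for dy in range(ph):
        for dx in range(pw):
            out[top + dy][left + dx] = patch[dy][dx]
    return out
```

A practical version would also record the paste location as the position label of the synthesized sample.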
According to an embodiment of the present disclosure, for each image in the first sample image set, tag information may be configured, and the tag information may include at least one of category tag information representing a category of the target information, position tag information representing a position of the target information in the image, and the like. For example, the tag information associated with the image with smoke information may include a smoke category tag and a smoke location tag, and the tag information associated with the image with fire information may include a fire category tag and a fire location tag. The tag information associated with the image having the smoke information and the fire information may include a smoke category tag, a fire category tag, a smoke location tag, and a fire location tag, and the tag information associated with the image not having the smoke information and the fire information may include an empty tag, and the like.
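A minimal shape for this per-image tag information might look like the following; the `make_label` helper and its dictionary keys are hypothetical, chosen only to mirror the category and position tags described above (an empty list plays the role of the empty tag).

```python
def make_label(smoke_boxes=(), fire_boxes=()):
    """Build per-image annotation: a category tag plus a position tag
    (here a bounding box) per target; background-only images get []."""
    labels = []
    for box in smoke_boxes:
        labels.append({"category": "smoke", "bbox": list(box)})
    for box in fire_boxes:
        labels.append({"category": "fire", "bbox": list(box)})
    return labels
```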
According to an embodiment of the present disclosure, the evaluation value may include at least one of an image-level recall rate, an image-level false detection rate, mAP (mean average precision), and IoU (intersection over union), which are used to evaluate the performance of the deep learning model.
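Of these metrics, IoU is the most self-contained. A standard implementation for `(x1, y1, x2, y2)` boxes (an assumed box convention, since the patent does not specify one) is:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```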
According to embodiments of the present disclosure, the deep learning model may include a target detection model implemented based on the single-stage PP-YOLO algorithm, an effective and efficient target detector. The deep learning model can be obtained by training a PP-YOLO model with the first sample image set, the second sample image set, and so on. Using PP-YOLO improves the real-time performance of target detection and helps the model effectively detect target information that occupies a small area of the image.
According to the embodiment of the disclosure, various evaluation values for evaluating the current performance of the model can be calculated for the deep learning model trained by using the first sample image set. Then, in a case where it is detected that at least one of the plurality of types of evaluation values does not reach the predefined range, the deep learning model may be further trained using a second sample image set different from the first sample image set. For the deep learning model obtained by further training, the evaluation value can be continuously calculated, and under the condition that the evaluation value does not reach the predefined range, the deep learning model can be continuously trained by using the new sample image set until the evaluation value of the deep learning model obtained by training reaches the predefined range, and the training process can be terminated.
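The evaluate-then-retrain loop described above can be sketched generically. Everything here (the `train`, `evaluate`, and `augment` callables, and the round cap) is a placeholder abstraction for illustration, not the patent's code:

```python
def train_until_acceptable(train, evaluate, augment, samples, in_range, max_rounds=10):
    """Retrain with a reworked sample set until the evaluation value
    reaches the predefined range, then stop (the S210-S230 loop)."""
    model = train(samples)
    for _ in range(max_rounds):
        score = evaluate(model)
        if in_range(score):
            break                       # evaluation value reached the range
        samples = augment(samples)      # first set -> second set, and so on
        model = train(samples)
    return model
```

The `max_rounds` cap is a practical safeguard not mentioned in the patent, which terminates purely on the evaluation value reaching the predefined range.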
According to an embodiment of the present disclosure, in a case where the evaluation value is a recall rate, the evaluation value not reaching the predefined range may include the recall rate being less than a first preset value. In the case where the evaluation value is a false detection rate, the evaluation value not reaching the predefined range may include the false detection rate being greater than a second preset value. The first preset value and the second preset value can be set by user, and the set values of the first preset value and the second preset value can be different.
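A sketch of this range check follows. The threshold values below are illustrative placeholders, since the patent leaves the first and second preset values user-defined:

```python
def evaluation_out_of_range(metric_name, value,
                            min_recall=0.9, max_false_rate=0.05):
    """True when the evaluation value misses its predefined range:
    recall below the first preset value, or false detection rate above
    the second preset value. Defaults are illustrative, not from the patent."""
    if metric_name == "recall":
        return value < min_recall
    if metric_name == "false_detection_rate":
        return value > max_false_rate
    raise ValueError(f"unknown metric: {metric_name}")
```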
It should be noted that, at any given round of training, all sample image sets used so far can be regarded as the first sample image set, and the second sample image set represents the sample image set to be used in the next round of training.
Through the embodiments of the present disclosure, the deep learning model can be further trained when its evaluation value does not reach the predefined range, yielding a better-optimized model; when the optimized deep learning model is used for target detection such as smoke and fire detection, the accuracy of the detection result is effectively improved.
The method shown in fig. 2 is further described below with reference to specific embodiments.
According to an embodiment of the present disclosure, in the case where the evaluation value is the recall rate, determining the evaluation value for characterizing the evaluation performance of the deep learning model may include: detecting each image in a first image set using the deep learning model to obtain first target images on which target information is detected, where the first image set includes first images having target information; and determining the recall rate from the ratio of the number of first target images to the number of first images.
According to an embodiment of the present disclosure, the first image set may include a plurality of randomly selected first images having target information, and may also include images without target information. When calculating the recall rate, the calculation may be performed based only on the first images having target information and their detection results.
According to the embodiment of the present disclosure, when target detection is performed on each image in the first image set using the deep learning model, a first image having target information on which a target is detected may be considered recalled. By calculating the proportion of all recalled first images among all first images having target information, the image-level recall rate of the current deep learning model can be determined.
According to an embodiment of the present disclosure, when calculating the recall rate, the first images having target information may also first be divided into a plurality of batches, the proportion of recalled first images then calculated batch by batch, and the image-level recall rate of the current deep learning model determined from these proportions. This batch-wise method may include: for each batch, calculating the proportion of recalled first images among the first images in that batch, yielding a plurality of proportions. After the plurality of proportions is obtained, the recall rate may be determined from the smallest proportion, from the average of the proportions, or from the most frequent proportion, without limitation here.
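The batch-wise recall computation, with the three reduction choices the text lists (minimum, average, most frequent), might look like this; `batched_recall` and its per-image flag encoding (1 for recalled, 0 otherwise) are assumptions for illustration.

```python
def batched_recall(recalled_flags, batch_size, reduce="mean"):
    """Image-level recall computed per batch of positive images, then
    reduced by the minimum, the mean, or the most frequent proportion."""
    batches = [recalled_flags[i:i + batch_size]
               for i in range(0, len(recalled_flags), batch_size)]
    props = [sum(b) / len(b) for b in batches]   # per-batch recalled fraction
    if reduce == "min":
        return min(props)
    if reduce == "mean":
        return sum(props) / len(props)
    if reduce == "mode":
        return max(set(props), key=props.count)
    raise ValueError(f"unknown reduction: {reduce}")
```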
Through the embodiments of the present disclosure, introducing the recall rate as an index for evaluating the performance of the deep learning model correctly reflects its detection effect, facilitates efficient training of a better-optimized model, and further improves the accuracy of the deep learning model in target detection.
According to an embodiment of the present disclosure, in the case where the evaluation value is the false detection rate, determining the evaluation value for characterizing the evaluation performance of the deep learning model may include: detecting each image in a second image set using the deep learning model to obtain second target images on which target information is detected, where the second image set includes second images without target information; and determining the false detection rate from the ratio of the number of second target images to the number of second images.
According to an embodiment of the present disclosure, the second image set may include a plurality of randomly selected second images without target information, and may also include images having target information. When calculating the false detection rate, the calculation may be performed based only on the second images without target information and their detection results.
According to the embodiments of the present disclosure, when target detection is performed on each image in the second image set using the deep learning model, any second image without the target information on which a target is nevertheless detected may be regarded as falsely detected. By calculating the proportion of falsely detected second images among all second images without the target information, the picture-level false detection rate of the current deep learning model can be determined.
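As a minimal sketch (assuming, as above, a `detect` callable that reports whether any detection box is produced), the picture-level false detection rate reduces to:

```python
def false_detection_rate(negative_images, detect):
    """Proportion of images without target information on which the
    model nevertheless detects a target (i.e. is falsely triggered)."""
    false_hits = sum(1 for img in negative_images if detect(img))
    return false_hits / len(negative_images)
```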
According to an embodiment of the present disclosure, when calculating the false detection rate, the second images without the target information may also first be divided into a plurality of batches. The proportion of falsely detected second images within each batch is then calculated, and the picture-level false detection rate of the current deep learning model is determined from these batch-wise proportions. Determining the false detection rate in this batch-wise manner may include: for each batch, calculating the proportion of falsely detected second images among the second images in that batch, so as to obtain a plurality of proportions. After the plurality of proportions is obtained, the false detection rate may be determined according to the largest of the proportions, according to the average of the proportions, or according to the most frequently occurring proportion, without being limited thereto.
Through the embodiments of the present disclosure, the false detection rate is introduced as an index for evaluating the performance of the deep learning model. It can correctly reflect the detection effect of the model, facilitates efficient training of a better-optimized deep learning model, and thereby improves the accuracy of target detection using the deep learning model.
Whether a target is detected is independent of the number of detection boxes: as long as the detection result contains at least one detection box, the target is considered detected. For a detection result without any detection box, the corresponding image is considered to contain no detected target.
According to an embodiment of the present disclosure, processing the first sample image set to obtain the second sample image set may include at least one of the following: performing data augmentation processing on the first sample image set to obtain the second sample image set; and adding, to the first sample image set, negative sample images whose similarity to the target information is greater than a preset threshold, to obtain the second sample image set.
According to an embodiment of the present disclosure, the data augmentation strategies used for the data augmentation processing may include at least one of: Mixup (a simple, data-independent augmentation that blends image pairs), RandomDistort (randomly change brightness), RandomExpand (randomly expand the canvas with fill), RandomContrast (randomly change contrast), RandomColor (randomly change color), RandomCrop (random crop), RandomInterp (random interpolation when resizing), RandomFlip (random flip), Resize (resize), BatchRandomResize (batch-wise random resize), Normalize (normalize), and the like, without being limited thereto.
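Of the strategies listed above, Mixup is the one with non-obvious arithmetic: it blends two images (and their labels) with a weight drawn from a Beta distribution. Below is a plain-Python sketch, with `alpha` as an assumed hyperparameter not specified in the patent (0.2 is a common choice) and images represented as nested lists for simplicity.

```python
import random

def mixup(img_a, img_b, label_a, label_b, alpha=0.2):
    # Blend weight lam ~ Beta(alpha, alpha); the same weight is applied
    # to the pixel values and to the (here scalar) labels.
    lam = random.betavariate(alpha, alpha)
    mixed = [[lam * pa + (1 - lam) * pb for pa, pb in zip(ra, rb)]
             for ra, rb in zip(img_a, img_b)]
    return mixed, lam * label_a + (1 - lam) * label_b
```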
According to the embodiments of the present disclosure, the preset threshold may be user-defined. The negative sample images may include images having at least one of the following types of information: cloud information, snow mountain information, light information, and sun information, without being limited thereto.
It should be noted that the above processing manners are merely exemplary embodiments. Other processing methods known in the art may also be used; for example, a newly acquired sample image set may serve as the second sample image set, as long as the second sample image set differs from the first sample image set.
Through the embodiments of the present disclosure, in a case where it is determined that the evaluation value of the deep learning model does not reach the predefined range, the second sample image set used for further training may be constructed by data augmentation, by adding negative sample images, or both. Training the deep learning model with the second sample image set can then effectively improve its performance, in particular the accuracy of target detection using the deep learning model.
According to an embodiment of the present disclosure, the deep learning model may include a deformable convolution submodel. The deformable convolution submodel can be trained together with the rest of the deep learning model.
According to the embodiments of the present disclosure, a deformable convolution (DCN) submodel may be added on the basis of the PP-YOLO model. The deformable convolution submodel can adapt to targets of various shapes and sizes, which makes it particularly suitable for target detection in scenes with large scale variation, such as smoke and fire.
According to an embodiment of the present disclosure, different backbones (backbone networks) may also be configured in the deep learning model. For example, selecting ResNet101 as the backbone network yields a deep learning model with higher accuracy, while selecting ResNet50 yields a deep learning model with higher detection speed. ResNet is a residual network.
According to the embodiment of the disclosure, the deep learning model is obtained by training with the first sample image set on the basis of a pre-training model.
According to embodiments of the present disclosure, the pre-trained model may be a model trained on the COCO public data set, a model trained on a smoke-and-fire-related data set, or the like. After the pre-trained model is obtained, it can be further trained on a smoke and fire data set to obtain the deep learning model. Because the deep learning model is obtained through transfer learning, the detection effect of the model can be further improved, meeting the requirements of high recall and low false detection.
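The staged training can be pictured as: keep the feature weights learned on the public data set and re-initialize only the task-specific detection head before fine-tuning on smoke and fire data. The sketch below assumes weights stored in a flat name-to-value dict with a `head.` prefix for the detection head — an illustrative layout, not an interface defined by the patent.

```python
def init_for_finetune(pretrained, head_prefix="head.", head_init=0.0):
    """Build the starting weights for fine-tuning: transfer everything
    except the detection head, which is re-initialized."""
    finetuned = {}
    for name, weight in pretrained.items():
        if name.startswith(head_prefix):
            finetuned[name] = head_init   # re-initialize task-specific head
        else:
            finetuned[name] = weight      # keep transferred backbone features
    return finetuned
```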
FIG. 3 schematically shows an overall flow diagram for training a deep learning model according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S310 to S340.
In operation S310, for the trained deep learning model, an evaluation value for characterizing evaluation performance of the deep learning model is determined.
In operation S320, it is determined whether the evaluation value reaches the predefined range. If yes, operation S330 is performed; if not, operation S340 is performed, followed by operations S310 to S320 again.
In operation S330, the training process is ended.
In operation S340, training sample images used for training the deep learning model are updated, and the deep learning model is trained using the new training sample images.
It should be noted that, in a case where the evaluation value includes multiple evaluation indices such as the recall rate and the false detection rate, the determination result of operation S320 is yes only if every evaluation index reaches its predefined range. Conversely, if even one evaluation index fails to reach its predefined range, the determination result of operation S320 is no.
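The loop of operations S310 to S340, with this multi-index check, can be condensed as follows. `evaluate`, `update_samples`, and `train` stand in for the corresponding operations of Fig. 3; the thresholds and the round cap are illustrative assumptions.

```python
def train_until_qualified(model, samples, evaluate, update_samples, train,
                          min_recall, max_false_rate, max_rounds=10):
    for _ in range(max_rounds):
        recall, false_rate = evaluate(model)               # S310
        if recall >= min_recall and false_rate <= max_false_rate:
            return model                                    # S320 yes -> S330
        samples = update_samples(samples)                   # S320 no -> S340
        model = train(model, samples)
    return model
```

Note that every index must pass before the loop ends; failing either the recall or the false-detection check triggers another sample update and training round.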
Through the embodiment of the disclosure, the deep learning model can be trained by combining the evaluation value of the deep learning model, so that the optimized deep learning model can be obtained, and the accuracy of the model detection result is improved.
Figure 4 schematically illustrates a flow chart of a method of smoke and fire detection according to an embodiment of the present disclosure.
As shown in fig. 4, the method includes operation S410.
In operation S410, an image to be detected is input into the deep learning model, and a detection result is obtained.
According to an embodiment of the present disclosure, the deep learning model may be a model trained based on the above training method. For example, smoke and fire data sets may first be acquired and a deep learning target detection model trained, so that the position and type of fire and smoke in any image can be obtained from the model. Once a deep learning model capable of smoke and fire detection has been obtained through preliminary training, it can be evaluated by combining the picture-level recall rate and the picture-level false detection rate. In a case where it is determined that the recall rate or the false detection rate of the current model does not reach its predefined range, the deep learning model can be further trained and optimized with a new sample image set.
According to the embodiments of the present disclosure, the trained or optimized deep learning model may be deployed on a client, such as a Jetson NX (a small artificial intelligence edge computing device). After deployment, a user can feed in videos, pictures, folders, and the like acquired in real time, and the deep learning model analyzes the content and determines whether it contains fire information or smoke information. If it does, the position and type of the fire or smoke can be output.
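The client-side use can be sketched as below. The result format — each detection as a `(label, box, score)` tuple — and the `model` callable are assumptions for illustration; the actual deployed interface (e.g. on a Jetson NX) depends on the inference runtime.

```python
def report_smoke_and_fire(frames, model):
    """Run the model on each incoming frame and collect the position
    and type of every detected fire or smoke region."""
    reports = []
    for frame in frames:
        for label, box, score in model(frame):
            if label in ("fire", "smoke"):
                reports.append({"type": label, "box": box, "score": score})
    return reports
```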
Through the embodiments of the present disclosure, the smoke and fire detection method has a high detection speed, so that in a smoke and fire detection scene the model can respond immediately when a fire breaks out. In addition, the method has high detection accuracy: it can distinguish smoke and fire from everyday objects that closely resemble them, ensuring reliable detection.
Fig. 5 schematically shows a block diagram of a training apparatus for deep learning models according to an embodiment of the present disclosure.
As shown in fig. 5, the training apparatus 500 for deep learning model includes a determination module 510, a processing module 520, and a training module 530.
The determining module 510 is configured to determine an evaluation value for characterizing the evaluation performance of a deep learning model, where the deep learning model is trained using the first sample image set.
The processing module 520 is configured to process the first sample image set to obtain a second sample image set in response to detecting that the evaluation value does not reach the predefined range.
The training module 530 is configured to train the deep learning model with the second sample image set.
According to an embodiment of the present disclosure, the evaluation value includes the recall rate. The determining module includes a first detection unit and a first determining unit.
The first detection unit is configured to detect each image in a first image set by using the deep learning model to obtain first target images on which target information is detected, where the first image set includes first images having the target information.
The first determining unit is configured to determine the recall rate according to the ratio of the number of the first target images to the number of the first images.
According to an embodiment of the present disclosure, the evaluation value includes the false detection rate. The determining module includes a second detection unit and a second determining unit.
The second detection unit is configured to detect each image in a second image set by using the deep learning model to obtain second target images on which target information is detected, where the second image set includes second images without the target information.
The second determining unit is configured to determine the false detection rate according to the ratio of the number of the second target images to the number of the second images.
According to an embodiment of the present disclosure, the processing module includes at least one of a processing unit and an adding unit.
The processing unit is configured to perform data augmentation processing on the first sample image set to obtain the second sample image set.
The adding unit is configured to add, to the first sample image set, negative sample images whose similarity to the target information is greater than a preset threshold, to obtain the second sample image set.
According to an embodiment of the present disclosure, the target information includes at least one of smoke information and fire information.
According to an embodiment of the present disclosure, a deformable convolution submodel is included in the deep learning model.
According to an embodiment of the present disclosure, the second sample image set comprises images having at least one of the following information: cloud information, snow mountain information, light information, and sun information.
According to the embodiment of the disclosure, the deep learning model is obtained by training with the first sample image set on the basis of a pre-training model.
Fig. 6 schematically illustrates a block diagram of a smoke and fire detection device according to an embodiment of the present disclosure.
As shown in fig. 6, the smoke and fire detection device 600 includes an acquisition module 610.
The acquisition module 610 is configured to input an image to be detected into the deep learning model to obtain a detection result, where the deep learning model is trained based on the above training apparatus.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the deep learning model training method and the smoke detection method of the present disclosure.
According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform the training method of the deep learning model and the smoke detection method of the present disclosure.
According to an embodiment of the disclosure, a computer program product comprising a computer program which, when executed by a processor, implements the deep learning model training method and the smoke detection method of the disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 performs the various methods and processes described above, such as the training method of the deep learning model and the smoke detection method. For example, in some embodiments, the training method of the deep learning model and the smoke detection method may be implemented as computer software programs tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the training method of the deep learning model and the smoke detection method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the training method of the deep learning model and the smoke detection method.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (20)

1. A training method of a deep learning model comprises the following steps:
determining an evaluation value for representing evaluation performance of a deep learning model, wherein the deep learning model is obtained by training with a first sample image set;
in response to detecting that the evaluation value does not reach a predefined range, processing the first sample image set to obtain a second sample image set; and
training the deep learning model using the second sample image set.
2. The method of claim 1, wherein the assessment value comprises a recall rate;
the determining an evaluation value for characterizing evaluation performance of the deep learning model includes:
detecting each image in a first image set by using the deep learning model to obtain a first target image with detected target information, wherein the first image set comprises a first image with the target information; and
determining the recall rate according to the ratio of the number of the first target images to the number of the first images.
3. The method according to claim 1 or 2, wherein the evaluation value comprises a false detection rate;
the determining an evaluation value for characterizing evaluation performance of the deep learning model includes:
detecting each image in a second image set by using the deep learning model to obtain a second target image with detected target information, wherein the second image set comprises second images without target information; and
determining the false detection rate according to the ratio of the number of the second target images to the number of the second images.
4. The method of any of claims 1-3, wherein the processing the first sample image set to obtain a second sample image set comprises at least one of:
performing data augmentation processing on the first sample image set to obtain a second sample image set; and
adding, to the first sample image set, a negative sample image whose similarity to the target information is greater than a preset threshold, to obtain the second sample image set.
5. The method of any of claims 2-4, wherein the target information includes at least one of smoke information and fire information.
6. The method of any of claims 1-5, wherein the deep learning model includes deformable convolution submodels therein.
7. The method of any of claims 1-6, wherein the second sample image set includes images having at least one of: cloud information, snow mountain information, light information and sun information.
8. The method of any of claims 1-7, wherein the deep learning model is trained using the first sample image set based on a pre-trained model.
9. A smoke detection method comprising:
inputting an image to be detected into a deep learning model to obtain a detection result;
wherein the deep learning model is trained based on the training method of any one of claims 1-8.
10. A training apparatus for deep learning models, comprising:
the determining module is used for determining an evaluation value for representing the evaluation performance of a deep learning model, wherein the deep learning model is obtained by training with a first sample image set;
the processing module is used for processing the first sample image set to obtain a second sample image set in response to the detection that the evaluation value does not reach the predefined range; and
and the training module is used for training the deep learning model by utilizing the second sample image set.
11. The apparatus of claim 10, wherein the evaluation value comprises a recall rate;
the determining module comprises:
the first detection unit is used for detecting each image in a first image set by using the deep learning model to obtain a first target image with detected target information, wherein the first image set comprises a first image with the target information; and
a first determining unit, configured to determine the recall rate according to a ratio of the number of the first target images to the number of the first images.
12. The apparatus according to claim 10 or 11, wherein the evaluation value comprises a false detection rate;
the determining module comprises:
the second detection unit is used for detecting each image in a second image set by using the deep learning model to obtain a second target image with detected target information, wherein the second image set comprises second images without target information; and
a second determining unit, configured to determine the false detection rate according to the ratio of the number of the second target images to the number of the second images.
13. The apparatus of any of claims 10-12, wherein the processing module comprises at least one of:
the processing unit is used for carrying out data augmentation processing on the first sample image set to obtain a second sample image set; and
an adding unit, configured to add, to the first sample image set, a negative sample image whose similarity to the target information is greater than a preset threshold, to obtain the second sample image set.
14. The apparatus of any of claims 11-13, wherein the target information comprises at least one of smoke information and fire information.
15. The apparatus of any one of claims 10-14, wherein a deformable convolution submodel is included in the deep learning model.
16. The apparatus of any of claims 10-15, wherein the second sample image set comprises images having at least one of: cloud information, snow mountain information, light information, and sun information.
17. A smoke detection device comprising:
the acquisition module is used for inputting the image to be detected into the deep learning model to obtain a detection result;
wherein the deep learning model is trained based on the training apparatus of any one of claims 10-16.
18. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8 or 9.
19. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any of claims 1-8 or 9.
20. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8 or 9.
CN202210214314.6A 2022-03-04 2022-03-04 Model training method, smoke and fire detection method, device, electronic equipment and medium Active CN114580631B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210214314.6A CN114580631B (en) 2022-03-04 2022-03-04 Model training method, smoke and fire detection method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210214314.6A CN114580631B (en) 2022-03-04 2022-03-04 Model training method, smoke and fire detection method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN114580631A true CN114580631A (en) 2022-06-03
CN114580631B CN114580631B (en) 2023-09-08

Family

ID=81773508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210214314.6A Active CN114580631B (en) 2022-03-04 2022-03-04 Model training method, smoke and fire detection method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114580631B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706150A (en) * 2019-07-12 2020-01-17 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2021008456A1 (en) * 2019-07-12 2021-01-21 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN111310621A (en) * 2020-02-04 2020-06-19 北京百度网讯科技有限公司 Remote sensing satellite fire point identification method, device, equipment and storage medium
CN112950570A (en) * 2021-02-25 2021-06-11 昆明理工大学 Crack detection method combining deep learning and dense continuous central point
CN113704531A (en) * 2021-03-10 2021-11-26 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112883714A (en) * 2021-03-17 2021-06-01 广西师范大学 ABSC task syntactic constraint method based on dependency graph convolution and transfer learning
CN113792791A (en) * 2021-09-14 2021-12-14 百度在线网络技术(北京)有限公司 Processing method and device for visual model
CN113870284A (en) * 2021-09-29 2021-12-31 柏意慧心(杭州)网络科技有限公司 Method, apparatus, and medium for segmenting medical images
CN114118287A (en) * 2021-11-30 2022-03-01 北京百度网讯科技有限公司 Sample generation method, sample generation device, electronic device and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LYU, Junjie et al., "A deep-learning-based smoke and fire recognition algorithm" ("基于深度学习的烟火识别算法"), Information Technology and Informatization (《信息技术与信息化》), no. 12, 31 December 2021 (2021-12-31), pages 220-222 *
LI, Ce et al., "A butterfly detection algorithm based on transfer learning and deformable convolution deep learning" ("一种迁移学习和可变形卷积深度学习的蝴蝶检测算法"), Acta Automatica Sinica (《自动化学报》), vol. 45, no. 9, 30 September 2019 (2019-09-30), pages 1772-1782 *

Also Published As

Publication number Publication date
CN114580631B (en) 2023-09-08

Similar Documents

Publication Publication Date Title
CN112801164A (en) Training method, device and equipment of target detection model and storage medium
CN112861885B (en) Image recognition method, device, electronic equipment and storage medium
CN112989995B (en) Text detection method and device and electronic equipment
CN113591864B (en) Training method, device and system for text recognition model framework
CN113205041A (en) Structured information extraction method, device, equipment and storage medium
CN113780297A (en) Image processing method, device, equipment and storage medium
CN117710921A (en) Training method, detection method and related device of target detection model
CN117557777A (en) Sample image determining method and device, electronic equipment and storage medium
CN117351462A (en) Construction operation detection model training method, device, equipment and storage medium
CN115496916B (en) Training method of image recognition model, image recognition method and related device
CN114445711B (en) Image detection method, image detection device, electronic equipment and storage medium
CN114580631B (en) Model training method, smoke and fire detection method, device, electronic equipment and medium
CN115761698A (en) Target detection method, device, equipment and storage medium
CN115131315A (en) Image change detection method, device, equipment and storage medium
CN117615363B (en) Method, device and equipment for analyzing personnel in target vehicle based on signaling data
CN113128601B (en) Training method of classification model and method for classifying images
US20220383626A1 (en) Image processing method, model training method, relevant devices and electronic device
CN118038402A (en) Traffic light detection method and device, electronic equipment and storage medium
CN114842073A (en) Image data amplification method, apparatus, device, medium, and computer program product
CN114627423A (en) Object detection and model training method, device, equipment and storage medium
CN115131825A (en) Human body attribute identification method and device, electronic equipment and storage medium
CN114037865A (en) Image processing method, apparatus, device, storage medium, and program product
CN115359574A (en) Human face living body detection and corresponding model training method, device and storage medium
CN112925942A (en) Data searching method, device, equipment and storage medium
CN113011158A (en) Information anomaly detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant