CN113128440A - Target object identification method, device, equipment and storage medium based on edge equipment - Google Patents
- Publication number
- CN113128440A CN113128440A CN202110465403.3A CN202110465403A CN113128440A CN 113128440 A CN113128440 A CN 113128440A CN 202110465403 A CN202110465403 A CN 202110465403A CN 113128440 A CN113128440 A CN 113128440A
- Authority
- CN
- China
- Prior art keywords
- target object
- object recognition
- model
- image set
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44521—Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
Abstract
The invention relates to intelligent decision-making technology and discloses a target object identification method based on edge equipment, which comprises the following steps: screening the target objects in a training picture set to obtain a sample image set containing a preset type of target object; training a target object recognition model with the sample image set to obtain a standard target object recognition model; performing parameter reduction on the model parameters of the standard target object recognition model to obtain a reduced standard target object recognition model; extracting the target object recognition process of the standard target object recognition model, compiling the process into a dynamic link library and embedding the dynamic link library into edge equipment; and transmitting an image to be recognized into the edge equipment for target object recognition to obtain a recognition result. In addition, the invention relates to blockchain technology, and the dynamic link library can be stored in a node of the blockchain. The invention also provides a target object identification device based on edge equipment, an electronic device and a computer-readable storage medium. The invention can solve the problem of low target object identification accuracy.
Description
Technical Field
The invention relates to the technical field of intelligent decision-making, and in particular to a target object identification method, device and equipment based on edge equipment, and a computer-readable storage medium.
Background
With the development of computer vision and artificial intelligence technologies, target identification is widely applied not only in the industrial field but also in the environmental field. Target identification refers to the process of locating the position of a target in an image. In the environmental field, it is usually necessary to identify targets on a river or hillside, such as floating objects in a river, and to determine the specific condition of the environment from the identified targets, so that a more detailed environmental protection scheme can be formulated.
Existing detection methods are generally based on the cloud, or on a local server combined with a deep learning model. They place high demands on hardware resources, local servers are expensive, and detection accuracy is low.
Disclosure of Invention
The invention provides a target object identification method and device based on edge equipment and a computer readable storage medium, and mainly aims to solve the problem of low target object identification accuracy.
In order to achieve the above object, the present invention provides an object identification method based on edge device, including:
carrying out target object screening processing on the training picture set to obtain a sample image set with preset kinds of target objects;
training a pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model;
carrying out parameter reduction processing on the model parameters of the standard target object identification model to obtain a reduced standard target object identification model;
inputting the sample image set into the reduced standard target object recognition model for target object recognition processing, extracting the target object recognition processing process, and compiling the process into a dynamic link library;
and embedding the dynamic link library into edge equipment, establishing connection between the edge equipment and preset visual equipment, and transmitting an image to be identified received by the visual equipment into the edge equipment for identifying a target object to obtain an identification result.
Optionally, the performing parameter reduction processing on the model parameters of the standard object recognition model to obtain a reduced standard object recognition model includes:
obtaining a type corresponding to a model parameter of the standard target object recognition model;
- and converting the type corresponding to the model parameter into a preset type to obtain the reduced standard target object recognition model.
Optionally, the extracting the process of the target object recognition processing and compiling the process into a dynamic link library includes:
converting the target object identification process in the target object identification process into a corresponding identification operation code;
compiling the identification operation code into an engineering file and writing the engineering file into a pre-acquired template link library to obtain a dynamic link library.
Optionally, said embedding the dynamic link library into an edge device comprises:
acquiring a fixed path of the edge device;
and loading the dynamic link library into the edge device through the fixed path by utilizing a calling interface of the dynamic link library.
Optionally, after establishing the connection between the edge device and the preset vision device, the method further includes:
sending image data acquired by the preset vision equipment to the edge equipment within a preset time;
judging whether the image data received by the edge device is complete;
and if the received image data is incomplete, reestablishing the connection between the edge device and the visual device.
Optionally, the training processing on the pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model includes:
inputting the training image set into the target object recognition model for feature extraction to obtain a training result;
calculating a loss value between the training result and a preset standard result by using a preset loss function;
when the loss value is larger than or equal to a preset loss threshold value, adjusting parameters of the target object recognition model, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
when the loss value is smaller than the loss threshold value, obtaining a trained target object recognition model;
inputting the verification image set into the trained target object identification model for verification processing to obtain a verification result;
when the verification result is that the verification fails, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
when the verification result is that the verification passes, inputting the test image set into the target object identification model that passed verification for test processing;
when the test processing fails, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
and when the test processing is passed, obtaining a standard target object recognition model.
Optionally, the performing target object screening processing on the training image set to obtain a sample image set with a preset type of target object includes:
image cutting is carried out on the training image set according to a preset image size, and a cut image set is obtained;
and screening out a sample image set with preset types of target objects from the cut image set.
In order to solve the above problem, the present invention further provides an object recognition apparatus based on edge device, the apparatus comprising:
the target object screening module is used for carrying out target object screening processing on the training image set to obtain a sample image set with preset types of target objects;
the model training module is used for training a pre-constructed target object recognition model by utilizing the sample image set to obtain a standard target object recognition model;
the parameter reduction module is used for carrying out parameter reduction processing on the model parameters of the standard target object identification model to obtain a reduced standard target object identification model;
the process compiling module is used for inputting the sample image set into the reduced standard object recognition model for object recognition processing, extracting the process of the object recognition processing and compiling the process into a dynamic link library;
and the target object identification module is used for embedding the dynamic link library into edge equipment, establishing connection between the edge equipment and preset visual equipment, and transmitting an image to be identified, which is received by the visual equipment, into the edge equipment for target object identification to obtain an identification result.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
and the processor executes the instructions stored in the memory to realize the target object identification method based on the edge device.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, which stores at least one instruction, where the at least one instruction is executed by a processor in an electronic device to implement the edge device-based object identification method described above.
The embodiment of the invention trains a pre-constructed target object recognition model with a sample image set containing a preset type of target object to obtain a standard target object recognition model with a higher target object recognition capability. In order to combine the standard target object recognition model with edge equipment, the model parameters of the standard target object recognition model are subjected to parameter reduction processing, which reduces the storage space occupied by the model and at the same time accelerates computation. The process of target object recognition processing is extracted, compiled into a dynamic link library and embedded into edge equipment; compared with a cloud server or a local server, the edge equipment greatly saves system transmission bandwidth and greatly reduces system cost. Therefore, the target object identification method, device, electronic device and computer-readable storage medium based on edge equipment provided by the invention can solve the problem of low target object identification accuracy.
Drawings
Fig. 1 is a schematic flowchart of a target object identification method based on edge device according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating one step of the edge device-based object recognition method shown in FIG. 1;
FIG. 3 is a functional block diagram of an object recognition apparatus based on edge devices according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device for implementing the edge device-based target object identification method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a target object identification method based on edge equipment. The execution subject of the target object identification method based on edge equipment includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiment of the present application. In other words, the target object identification method based on edge equipment may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a target object identification method based on edge devices according to an embodiment of the present invention. In this embodiment, the target object identification method based on the edge device includes:
and S1, performing target object screening processing on the training picture set to obtain a sample image set with preset types of target objects.
In one embodiment of the present invention, the training picture set may be pictures collected at river checkpoints, gathered from image capturing devices arranged at each river checkpoint. The collected checkpoint pictures do not necessarily contain targets, such as floating objects, so the pictures need to be subjected to target screening processing to obtain river images that do contain the targets.
Specifically, the step of performing target object screening processing on the training image set to obtain a sample image set with a preset type of target object includes:
image cutting is carried out on the training image set according to a preset image size, and a cut image set is obtained;
and screening out a sample image set with preset types of target objects from the cut image set.
In detail, the size of the images collected by the image capturing device is fixed and depends on the parameters set on the device, but this size is not necessarily convenient for subsequent image processing, and not all images in the training image set contain the target object, so further target object screening processing is required.
In the embodiment of the invention, the images from the river checkpoints can be cut to a suitable size using Photoshop to facilitate subsequent screening; the image size may be 24 × 24.
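The cutting step above can also be sketched programmatically as a simple tiling computation. The 24 × 24 size comes from the text; the function name, non-overlapping stride, and dropping of partial edge tiles are illustrative assumptions (the patent performs this step manually in Photoshop):

```python
def crop_boxes(width, height, tile=24):
    """Compute (left, upper, right, lower) boxes that tile an image
    into non-overlapping tile x tile crops; edge strips that cannot
    fill a full tile are dropped."""
    boxes = []
    for top in range(0, height - tile + 1, tile):
        for left in range(0, width - tile + 1, tile):
            boxes.append((left, top, left + tile, top + tile))
    return boxes

# A 48x48 checkpoint image yields four 24x24 crops.
print(len(crop_boxes(48, 48)))  # 4
```

Each box could then be passed to any image library's crop routine, and crops containing the preset type of target object kept as the sample image set.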
And S2, training the pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model.
In the embodiment of the present invention, before training a pre-constructed target recognition model by using the sample image set, the method includes:
and dividing the sample image set into a training image set, a verification image set and a test image set according to a preset proportion.
Wherein the preset ratio is 8:1:1.
In detail, the training image set is used for the subsequent training of the model and provides the samples for model fitting; the verification image set is a sample set held out during model training, which can be used to adjust the hyper-parameters of the model and to make a preliminary evaluation of its capability; and the test image set is used to test the model and evaluate its generalization capability.
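The 8:1:1 division described above can be sketched as follows; the shuffling, the fixed seed, and the function name are illustrative assumptions, since the patent only specifies the ratio:

```python
import random

def split_dataset(samples, ratios=(8, 1, 1), seed=0):
    """Shuffle the sample image set and split it into training,
    verification and test subsets according to the preset ratio."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    total = sum(ratios)
    n_train = len(samples) * ratios[0] // total
    n_val = len(samples) * ratios[1] // total
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 80 10 10
```

Holding the verification and test subsets apart from fitting, as the text describes, is what allows them to estimate hyper-parameter quality and generalization respectively.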
Specifically, the training processing of the pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model includes:
inputting the training image set into the target object recognition model for feature extraction to obtain a training result;
calculating a loss value between the training result and a preset standard result by using a preset loss function;
when the loss value is larger than or equal to a preset loss threshold value, adjusting parameters of the target object recognition model, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
when the loss value is smaller than the loss threshold value, obtaining a trained target object recognition model;
inputting the verification image set into the trained target object identification model for verification processing to obtain a verification result;
when the verification result is that the verification fails, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
when the verification result is that the verification passes, inputting the test image set into a verification passing target object identification model for test processing;
when the test processing fails, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
and when the test processing is passed, obtaining a standard target object recognition model.
In detail, the training result is the result obtained by inputting the training images into the target object recognition model for target object recognition; the standard result is the accurate, pre-determined judgment of whether a target object is present in the pictures collected at each river checkpoint; the target object recognition model may be an EfficientNet deep learning network; and the preset loss function may be a cross-entropy loss function, a hinge loss function or an exponential loss function.
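Of the candidate loss functions named above, cross-entropy is the most common choice for this kind of classification; a minimal binary cross-entropy sketch follows. This is the textbook form only, not the patent's own formula (which includes an error factor α and is not reproduced in this text):

```python
import math

def cross_entropy(y_hat, y, eps=1e-12):
    """Mean binary cross-entropy between predicted probabilities y_hat
    and 0/1 labels y; eps guards against log(0)."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(y_hat, y)) / len(y)

loss = cross_entropy([0.9, 0.1], [1, 0])
```

A confident, correct prediction drives the loss toward zero, so comparing it against a preset loss threshold, as in the training flow above, is a natural stopping rule.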
Specifically, the calculating the loss value of the training result and the preset standard result by using the preset loss function includes:
and calculating the loss value between the training result and the preset standard result by using a preset calculation formula, wherein L is the loss value, Ŷ is the training result, Y is the standard result, and α represents an error factor, which is a preset constant.
Further, the step of inputting the verification image set into the trained target object recognition model for verification processing is to input the verification image set into the trained target object recognition model to obtain a verification recognition result, judge whether the verification recognition result is consistent with a preset real recognition result, if not, adjust internal parameters in the target object recognition model until the verification recognition result is consistent with the real recognition result, and then pass the verification processing.
Wherein the internal parameter may be a gradient parameter or a weight parameter of the trained target recognition model.
And when the verification result is that the verification passes, the test image set is input into the target object identification model that passed verification for test processing to obtain a test identification result, and whether the test identification result is consistent with a preset real test result is judged; if not, the method returns to the step of training the target object identification model until the test identification result is consistent with the real test result, and the standard target object identification model is output.
In the embodiment of the invention, the target object recognition model is tested by using the test image set, and whether the target object recognition model needs to be trained again is judged according to the obtained test result, so that the generalization capability of the model can be better evaluated.
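The train–verify–test control flow described in S2 can be sketched as a single loop; the model interface (`forward`, `adjust_parameters`, `validate`, `test`) and the toy `MockModel` are illustrative assumptions standing in for the EfficientNet network and its training machinery:

```python
def train_until_standard(model, loss_fn, train_set, val_set, test_set,
                         loss_threshold=0.1, max_rounds=100):
    """Skeleton of S2: train until the loss value drops below the
    preset threshold, then verify, then test; any failure returns
    to the training step."""
    for _ in range(max_rounds):
        loss = loss_fn(model.forward(train_set), train_set)
        if loss >= loss_threshold:
            model.adjust_parameters()      # loss too high: keep training
            continue
        if not model.validate(val_set):    # verification processing
            model.adjust_parameters()
            continue
        if not model.test(test_set):       # test processing
            model.adjust_parameters()
            continue
        return model                       # standard recognition model
    raise RuntimeError("did not converge within max_rounds")

class MockModel:
    """Toy stand-in whose loss halves on each parameter adjustment."""
    def __init__(self):
        self.loss = 1.0
    def forward(self, data):
        return self.loss
    def adjust_parameters(self):
        self.loss *= 0.5
    def validate(self, data):
        return self.loss < 0.1
    def test(self, data):
        return self.loss < 0.1

model = train_until_standard(MockModel(), lambda pred, y: pred,
                             None, None, None)
print(round(model.loss, 4))  # 0.0625
```

The three `continue` branches mirror the three "returning to the step of inputting the training image set" clauses in the text.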
And S3, carrying out parameter reduction processing on the model parameters of the standard target object recognition model to obtain a reduced standard target object recognition model.
In the embodiment of the invention, in order to combine the standard target object identification model with the edge device, the standard target object identification model needs to be compressed. Because the model parameters in the standard target object identification model are floating-point values, it is difficult to compress the model with a common compression algorithm, so the model undergoes parameter conversion processing: 32-bit floating-point numbers are approximately stored and calculated using 8-bit integers. Since storage drops from 4 bytes to 1 byte, the storage space occupied by the model after parameter conversion is reduced by 75%.
In detail, in the embodiment of the present invention, the performing parameter reduction processing on the model parameter of the standard object recognition model to obtain a reduced standard object recognition model includes:
obtaining a type corresponding to a model parameter of the standard target object recognition model;
and converting the type corresponding to the model parameter into a preset type to obtain the reduced standard target object recognition model.
The types of the model parameters include integer, floating-point, character, null and other types. In this scheme, the type corresponding to the model parameters of the standard target object identification model is usually a 32-bit floating-point number, and the preset type is an 8-bit integer.
In the embodiment of the invention, after the parameter reduction processing is carried out, the occupied storage space of the model can be reduced, and meanwhile, the calculation can be accelerated.
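The patent only states that 32-bit floats are approximately stored as 8-bit integers; a common way to do this is affine quantization with a scale and zero point, sketched below. The specific mapping (min–max range, rounding, clamping) is an assumption, not the patent's stated scheme:

```python
def quantize_int8(weights):
    """Map 32-bit float weights onto 8-bit integers via an affine
    (scale + zero-point) mapping; storing 1 byte instead of 4 gives
    the 75% size reduction described in the text."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point))
         for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats for calculation."""
    return [(v - zero_point) * scale for v in q]

w = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize_int8(w)
w2 = dequantize(q, s, z)
print(max(abs(a - b) for a, b in zip(w, w2)) < s)  # True
```

The round trip stays within one quantization step of the original values, which is why 8-bit storage can approximate 32-bit weights well enough for recognition while also speeding up integer arithmetic on the edge device.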
S4, inputting the sample image set into the reduced standard object recognition model for object recognition processing, extracting the process of the object recognition processing, and compiling the process into a dynamic link library.
In an embodiment of the present invention, the process of the object identification processing is a process of inputting the sample image set into the reduced standard object identification model to provide an identification result.
Specifically, referring to fig. 2, the process of extracting the target object recognition processing and compiling the process into a dynamic link library includes:
s401, converting the target object identification process in the target object identification process into a corresponding identification operation code;
s402, compiling the identification operation codes into engineering files and writing the engineering files into a pre-acquired template link library to obtain a dynamic link library.
The dynamic link library file is a program function library under Linux, and the pre-acquired template link library may be the TensorFlow C++ dynamic link library.
S5, embedding the dynamic link library into edge equipment, establishing connection between the edge equipment and preset visual equipment, and transmitting an image to be recognized received by the visual equipment into the edge equipment for recognizing a target object to obtain a recognition result.
In the embodiment of the invention, the edge device is a small server with an AI chip, and the vision device is a front-end camera. The edge device contains the overall engineering code, which includes the dynamic link library.
Specifically, the embedding the dynamic link library into an edge device includes:
acquiring a fixed path of the edge device;
and loading the dynamic link library into the edge device through the fixed path by utilizing a calling interface of the dynamic link library.
In detail, a fixed path exists on the edge device, and the dynamic link library exposes a plurality of data interfaces, each corresponding to a different function. A calling interface of the dynamic link library is used to load the dynamic link library onto the edge device via the fixed path, which improves the efficiency of embedding the dynamic link library into the edge device.
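Loading a shared library from a fixed path and calling its exported interfaces can be sketched with Python's `ctypes` (the equivalent of `dlopen`/`dlsym` on Linux). The patent's actual recognition library and its entry points are not given, so the system math library stands in here purely to show the calling pattern:

```python
import ctypes
import ctypes.util

def load_library(fixed_path):
    """Load a shared library (.so) from a fixed path and return a
    handle through which its exported calling interfaces are reached."""
    return ctypes.CDLL(fixed_path)

# Stand-in for the recognition .so: the system math library.
libm = load_library(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double      # declare the interface's
libm.sqrt.argtypes = [ctypes.c_double]   # signature before calling
print(libm.sqrt(2.25))  # 1.5
```

In the patent's setting, the handle's exported recognition function would be invoked on each incoming image instead of `sqrt`.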
Further, after establishing the connection between the edge device and the preset vision device, the method further comprises:
sending image data acquired by the preset vision equipment to the edge equipment within a preset time;
judging whether the image data received by the edge device is complete;
and if the received image data is incomplete, reestablishing the connection between the edge device and the visual device.
In detail, the image data is acquired by 16 cameras. Judging whether the image data received by the edge device is complete means judging whether the data collected by all 16 cameras has been received. Only when the acquired image data is complete is the image to be identified, received by the vision device, transmitted to the edge device for target object identification to obtain an identification result.
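The completeness check and reconnection logic can be sketched as follows; the camera count of 16 comes from the text, while the function names and the representation of received data as channel identifiers are illustrative assumptions:

```python
EXPECTED_CAMERAS = 16  # one frame per camera channel is required

def frames_complete(received_channels):
    """The edge device treats the image data as complete only when a
    frame from every one of the 16 camera channels has arrived."""
    return len(set(received_channels)) == EXPECTED_CAMERAS

def receive_or_reconnect(received_channels, reconnect):
    """If any channel is missing, re-establish the connection to the
    vision device instead of running recognition on partial data."""
    if not frames_complete(received_channels):
        reconnect()
        return False
    return True

ok = receive_or_reconnect(range(16), reconnect=lambda: None)
print(ok)  # True
```

Gating recognition on completeness, as the text requires, prevents a dropped camera feed from silently producing recognition results over a partial view of the river checkpoint.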
Compared with traditional image processing and modeling methods, features extracted by deep learning are more robust and yield higher accuracy; and compared with a cloud server or a local server, the edge device greatly saves system transmission bandwidth and greatly reduces system cost.
The method trains a pre-constructed target object recognition model with a sample image set containing a preset type of target object to obtain a standard target object recognition model with a higher target object recognition capability. In order to combine the standard target object recognition model with edge equipment, the model parameters of the standard target object recognition model are subjected to parameter reduction processing, which reduces the storage space occupied by the model and at the same time accelerates computation. The process of target object recognition processing is extracted, compiled into a dynamic link library and embedded into edge equipment; compared with a cloud server or a local server, the edge equipment greatly saves system transmission bandwidth and greatly reduces system cost. Therefore, the target object identification method based on edge equipment can solve the problem of low target object identification accuracy.
Fig. 3 is a functional block diagram of an object recognition apparatus based on edge devices according to an embodiment of the present invention.
The object recognition apparatus 100 based on the edge device according to the present invention may be installed in an electronic device. According to the implemented functions, the object recognition apparatus 100 based on edge device may include an object screening module 101, a model training module 102, a parameter reduction module 103, a process compiling module 104 and an object recognition module 105. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the target object screening module 101 is configured to perform target object screening processing on the training image set to obtain a sample image set with a preset type of target object;
the model training module 102 is configured to train a pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model;
the parameter reduction module 103 is configured to perform parameter reduction processing on the model parameters of the standard target object recognition model to obtain a reduced standard target object recognition model;
the process compiling module 104 is configured to input the sample image set into the reduced standard object recognition model for object recognition processing, extract a process of the object recognition processing, and compile the process into a dynamic link library;
the target object recognition module 105 is configured to embed the dynamic link library into edge equipment, establish a connection between the edge equipment and preset visual equipment, and transmit an image to be recognized, which is received by the visual equipment, to the edge equipment to perform target object recognition, so as to obtain a recognition result.
In detail, the specific implementation of each module of the object recognition apparatus 100 based on edge device is as follows:
firstly, the target object screening module 101 performs target object screening processing on the training image set to obtain a sample image set with preset types of target objects.
In the embodiment of the present invention, the training image set contains pictures taken at river checkpoints and is collected from image capturing devices arranged at each checkpoint. Because a collected picture does not necessarily contain a target object, the pictures must undergo target object screening processing to obtain river images that do contain a target object.
Specifically, the target object screening module 101 performs target object screening processing on a training image set to obtain a sample image set with a preset type of target object, and includes:
image cutting is carried out on the training image set according to a preset image size, and a cut image set is obtained;
and screening out a sample image set with preset types of target objects from the cut image set.
In detail, the size of the images collected by the image capturing device is fixed and depends on the parameters configured on the device, but this size is not necessarily convenient for subsequent image processing, and not all images in the training image set contain a target object, so further target object screening processing is required.
In the embodiment of the invention, images from the river checkpoints can be cropped to a suitable size, such as 24 × 24 pixels, with a tool such as Photoshop to facilitate subsequent screening.
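As an illustrative sketch (the embodiment itself crops manually in Photoshop), the centered 24 × 24 crop window could also be computed programmatically; the function name and signature below are hypothetical:

```python
def center_crop_box(width, height, size=24):
    """Return (left, upper, right, lower) coordinates of a centered
    size x size crop window, in the box format used by common image
    libraries (e.g. PIL's Image.crop)."""
    if width < size or height < size:
        raise ValueError("image smaller than crop size")
    left = (width - size) // 2
    upper = (height - size) // 2
    return (left, upper, left + size, upper + size)
```

The same box can then be applied to every image in the training image set before screening.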
And step two, the model training module 102 trains a pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model.
In the embodiment of the present invention, before training a pre-constructed target recognition model by using the sample image set, the method includes:
and dividing the sample image set into a training image set, a verification image set and a test image set according to a preset proportion.
Wherein the preset ratio is 8:1:1.
in detail, the training image set is used for subsequent training and serves as the sample for model fitting; the verification image set is a sample set held out during model training, used to tune the model's hyperparameters and to evaluate its capability preliminarily; and the test image set is used to test the model and evaluate its generalization capability.
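A minimal sketch of the 8:1:1 split described above, assuming the sample image set is simply a list of items; `split_samples` and its parameters are hypothetical names, not taken from the embodiment:

```python
import random

def split_samples(samples, ratios=(8, 1, 1), seed=0):
    """Shuffle the sample image set and split it into training,
    verification and test subsets according to the preset ratio."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # deterministic shuffle for the sketch
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test
```

For 100 samples this yields 80 training, 10 verification and 10 test images.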
Specifically, the model training module 102 performs training processing on a pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model, including:
inputting the training image set into the target object recognition model for feature extraction to obtain a training result;
calculating a loss value between the training result and a preset standard result by using a preset loss function;
when the loss value is larger than or equal to a preset loss threshold value, adjusting parameters of the target object recognition model, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
when the loss value is smaller than the loss threshold value, obtaining a trained target object recognition model;
inputting the verification image set into the trained target object identification model for verification processing to obtain a verification result;
when the verification result is that the verification fails, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
when the verification result is that the verification passes, inputting the test image set into the target object recognition model that passed verification for test processing;
when the test processing fails, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
and when the test processing passes, obtaining a standard target object recognition model.
In detail, the training result is obtained by inputting the training images into the target object recognition model for target object recognition, and the standard result is the accurate, pre-determined judgment of whether a target object appears in the pictures collected at each river checkpoint. The target object recognition model may be an EfficientNet deep learning network, and the preset loss function may be a cross-entropy loss function, a hinge loss function, or an exponential loss function.
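The overall train / verify / test control flow in the steps above can be sketched as follows; the callables and the threshold default are hypothetical stand-ins for the model operations the embodiment describes:

```python
def train_until_standard(train_step, verify, test,
                         loss_threshold=0.1, max_rounds=100):
    """Drive the flow described above: repeat feature extraction and
    loss computation until the loss falls below the preset threshold,
    then run verification and test processing; on any failure, return
    to training. `train_step` returns a loss value; `verify` and
    `test` return booleans. Returns True once a standard model is
    obtained, False if max_rounds is exhausted."""
    for _ in range(max_rounds):
        loss = train_step()          # feature extraction + loss value
        if loss >= loss_threshold:   # adjust parameters, train again
            continue
        if not verify():             # verification image set check
            continue
        if not test():               # test image set check
            continue
        return True                  # standard recognition model obtained
    return False
```

The `max_rounds` guard is an addition for safety; the patent text itself simply loops until all three stages pass.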
Specifically, the calculating the loss value of the training result and the preset standard result by using the preset loss function includes:
and calculating the loss value of the training result against the preset standard result by using a preset calculation formula, wherein the formula takes the training result and the standard result Y as inputs and outputs the loss value, and α represents an error factor, which is a preset constant.
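The exact formula is not reproduced in this text, so the following is only a hypothetical reconstruction, assuming an α-weighted binary cross-entropy over the named quantities (training result, standard result Y, error factor α):

```python
import math

def loss_value(y_hat, y, alpha=1.0):
    """Hypothetical reconstruction of the loss: alpha-weighted binary
    cross-entropy between the training result y_hat and the standard
    result y. The patent's actual formula may differ."""
    eps = 1e-12  # guard against log(0)
    return -alpha * (y * math.log(y_hat + eps)
                     + (1 - y) * math.log(1 - y_hat + eps))
```

Under this assumption the loss shrinks as the training result approaches the standard result, consistent with the threshold comparison in the training steps above.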
Further, inputting the verification image set into the trained target object recognition model for verification processing means inputting the verification image set into the trained model to obtain a verification recognition result and judging whether it is consistent with a preset real recognition result; if not, the internal parameters of the target object recognition model are adjusted until the verification recognition result is consistent with the real recognition result, at which point the verification processing passes.
Wherein the internal parameter may be a gradient parameter or a weight parameter of the trained target recognition model.
When the verification result is that the verification passes, the test image set is input into the target object recognition model that passed verification for test processing, obtaining a test recognition result. Whether the test recognition result is consistent with a preset real test result is then judged; if not, the method returns to the step of training the target object recognition model until the two are consistent, after which the standard target object recognition model is output.
In the embodiment of the invention, the target object recognition model is tested by using the test image set, and whether the target object recognition model needs to be trained again is judged according to the obtained test result, so that the generalization capability of the model can be better evaluated.
And step three, the parameter reduction module 103 performs parameter reduction processing on the model parameters of the standard target object identification model to obtain a reduced standard target object identification model.
In the embodiment of the invention, in order to combine the standard target object identification model with the edge device, the model needs to be compressed. Because the model parameters in the standard target object identification model are floating-point values, common compression algorithms struggle to shrink the model's footprint, so parameter conversion processing is applied instead: 32-bit floating-point numbers are approximately stored and computed as 8-bit integers. Since each value then occupies 1 byte instead of 4, the storage space occupied by the converted model is reduced by 75 percent.
In detail, in the embodiment of the present invention, the parameter reduction module 103 performs parameter reduction processing on the model parameters of the standard object recognition model to obtain a reduced standard object recognition model, including:
obtaining a type corresponding to a model parameter of the standard target object recognition model;
and converting the type corresponding to the model parameter into a preset type to obtain the reduced standard target object recognition model.
The types of the model parameters include integer, floating-point, character, null, and the like. In this scheme, the model parameters of the standard target object identification model are usually 32-bit floating-point numbers, and the preset type is 8-bit integer.
In the embodiment of the invention, after the parameter reduction processing is carried out, the occupied storage space of the model can be reduced, and meanwhile, the calculation can be accelerated.
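A toy illustration of the float32 → int8 conversion described above (affine quantization with a scale and zero point); real deployments would typically rely on a framework's quantization tooling, so the functions below are only a sketch:

```python
def quantize_int8(weights):
    """Affine float -> int8 conversion: store each parameter in one
    byte instead of four (about a 75% storage reduction)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0      # fall back if all values equal
    zero_point = round(-128 - lo / scale)  # maps lo near -128
    q = [max(-128, min(127, round(w / scale) + zero_point))
         for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Approximate recovery of the float weights for computation."""
    return [(v - zero_point) * scale for v in q]
```

The round trip loses at most about one quantization step per weight, which is the accuracy trade-off the parameter reduction accepts in exchange for smaller storage and faster integer arithmetic.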
Step four, the process compiling module 104 inputs the sample image set into the reduced standard object recognition model for object recognition processing, extracts the process of the object recognition processing, and compiles the process into a dynamic link library.
In an embodiment of the present invention, the process of the target object recognition processing is the procedure by which the sample image set is input into the reduced standard object recognition model to produce a recognition result.
Specifically, the extracting the process of the target object recognition processing and compiling the process into a dynamic link library includes:
converting the target object recognition procedure in the target object recognition processing into a corresponding recognition operation code;
compiling the identification operation code into an engineering file and writing the engineering file into a pre-acquired template link library to obtain a dynamic link library.
The dynamic link library file is a program function library under Linux, and the pre-acquired template link library may be a TensorFlow C++ dynamic link library.
And step five, the target object recognition module 105 embeds the dynamic link library into edge equipment, establishes connection between the edge equipment and preset visual equipment, and transmits the image to be recognized received by the visual equipment into the edge equipment for target object recognition to obtain a recognition result.
In the embodiment of the invention, the edge device is a small server with an AI chip, and the vision device is a front-end camera; the edge device contains the overall engineering code, which includes the dynamic link library.
Specifically, the embedding the dynamic link library into an edge device includes:
acquiring a fixed path of the edge device;
and loading the dynamic link library into the edge device through the fixed path by utilizing a calling interface of the dynamic link library.
In detail, a fixed path exists on the edge device, and the dynamic link library exposes a plurality of data interfaces, each corresponding to a different function. A calling interface of the dynamic link library is used to load the library into the edge device along the fixed path, which improves the efficiency of embedding the dynamic link library into the edge device.
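In Python, loading a compiled shared library from a fixed path through a calling interface might look like the following sketch; the path and function name are hypothetical, as the embodiment's actual loading code is not given:

```python
import ctypes
import os

def load_recognition_library(fixed_path):
    """Load the compiled recognition .so from the edge device's fixed
    path via ctypes, the standard-library calling interface for
    dynamic link libraries. Returns the library handle, or None if
    the path does not exist or cannot be loaded."""
    if not os.path.exists(fixed_path):
        return None
    try:
        return ctypes.CDLL(fixed_path)
    except OSError:
        return None
```

Once loaded, the handle's exported symbols correspond to the library's data interfaces, each providing a different function.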
Further, after establishing the connection between the edge device and the preset vision device, the method further comprises:
sending image data acquired by the preset vision equipment to the edge equipment within a preset time;
judging whether the image data received by the edge device is complete;
and if the received image data is incomplete, reestablishing the connection between the edge device and the visual device.
In detail, the image data is acquired by 16 camera channels, so judging whether the image data received by the edge device is complete means judging whether the data from all 16 channels has been received. Only when the acquired image data is complete is the image to be recognized received by the vision device transmitted to the edge device for target object recognition to obtain a recognition result.
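A minimal sketch of the completeness check for the 16 camera channels, assuming received frames are keyed by channel index; the names are hypothetical:

```python
def missing_channels(frames, expected_channels=16):
    """Check whether image data from all camera channels has arrived.
    `frames` maps channel index -> image payload (or None if absent).
    Returns the channel indexes whose data is missing; a non-empty
    result would trigger reestablishing the connection."""
    return [ch for ch in range(expected_channels)
            if frames.get(ch) is None]
```

An empty return value means the received image data is complete and recognition can proceed.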
Compared with traditional image-processing modeling methods, the features extracted by deep learning are more robust and yield higher accuracy, and compared with a cloud server or a local server, the edge device greatly saves system transmission bandwidth and reduces system cost.
The apparatus trains a pre-constructed target object recognition model with a sample image set containing a preset type of target object, obtaining a standard target object recognition model with strong target object recognition capability. To combine this model with an edge device, parameter reduction is performed on the model parameters, which reduces the storage space occupied by the model and at the same time accelerates computation. The process of the target object recognition processing is then extracted, compiled into a dynamic link library, and embedded into the edge device; compared with a cloud server or a local server, the edge device greatly saves system transmission bandwidth and reduces system cost. Therefore, the target object recognition device based on the edge device can solve the problem of low target object recognition accuracy.
Fig. 4 is a schematic structural diagram of an electronic device for implementing an object identification method based on an edge device according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication interface 12 and a bus 13, and may further comprise a computer program, such as an edge device based object recognition program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disk, optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory Card (Flash Card) provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of the edge device-based object recognition program, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 10 is the control unit of the electronic device: it connects the various components of the electronic device using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules stored in the memory 11 (e.g., the edge device-based object recognition program) and calling data stored in the memory 11.
The bus 13 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 13 may be divided into an address bus, a data bus, a control bus, etc. The bus 13 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 4 only shows an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The object recognition program based on edge device stored in the memory 11 of the electronic device 1 is a combination of instructions, which when executed in the processor 10, can realize:
carrying out target object screening processing on the training picture set to obtain a sample image set with preset kinds of target objects;
training a pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model;
carrying out parameter reduction processing on the model parameters of the standard target object identification model to obtain a reduced standard target object identification model;
inputting the sample image set into the reduced standard target object recognition model for target object recognition processing, extracting the target object recognition processing process, and compiling the process into a dynamic link library;
and embedding the dynamic link library into edge equipment, establishing connection between the edge equipment and preset visual equipment, and transmitting an image to be identified received by the visual equipment into the edge equipment for identifying a target object to obtain an identification result.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the corresponding embodiments of fig. 1 to fig. 2, which is not repeated herein.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
carrying out target object screening processing on the training picture set to obtain a sample image set with preset kinds of target objects;
training a pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model;
carrying out parameter reduction processing on the model parameters of the standard target object identification model to obtain a reduced standard target object identification model;
inputting the sample image set into the reduced standard target object recognition model for target object recognition processing, extracting the target object recognition processing process, and compiling the process into a dynamic link library;
and embedding the dynamic link library into edge equipment, establishing connection between the edge equipment and preset visual equipment, and transmitting an image to be identified received by the visual equipment into the edge equipment for identifying a target object to obtain an identification result.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. An object identification method based on edge equipment is characterized by comprising the following steps:
carrying out target object screening processing on the training picture set to obtain a sample image set with preset kinds of target objects;
training a pre-constructed target object recognition model by using the sample image set to obtain a standard target object recognition model;
carrying out parameter reduction processing on the model parameters of the standard target object identification model to obtain a reduced standard target object identification model;
inputting the sample image set into the reduced standard target object recognition model for target object recognition processing, extracting the target object recognition processing process, and compiling the process into a dynamic link library;
and embedding the dynamic link library into edge equipment, establishing connection between the edge equipment and preset visual equipment, and transmitting an image to be identified received by the visual equipment into the edge equipment for identifying a target object to obtain an identification result.
2. The edge device-based object recognition method of claim 1, wherein the performing parameter reduction processing on the model parameters of the standard object recognition model to obtain a reduced standard object recognition model comprises:
obtaining a type corresponding to a model parameter of the standard target object recognition model;
and converting the type corresponding to the model parameter into a preset type to obtain the reduced standard target object recognition model.
3. The edge device-based object recognition method of claim 1, wherein the extracting the process of the object recognition processing and compiling the process into a dynamically linked library comprises:
converting the target object recognition procedure in the target object recognition processing into a corresponding recognition operation code;
compiling the identification operation code into an engineering file and writing the engineering file into a pre-acquired template link library to obtain a dynamic link library.
4. The edge device-based object recognition method of claim 1, wherein the embedding the dynamically linked library into an edge device comprises:
acquiring a fixed path of the edge device;
and loading the dynamic link library into the edge device through the fixed path by utilizing a calling interface of the dynamic link library.
5. The edge device-based object recognition method of claim 1, wherein after establishing the connection between the edge device and a preset vision device, the method further comprises:
sending image data acquired by the preset vision equipment to the edge equipment within a preset time;
judging whether the image data received by the edge device is complete;
and if the received image data is incomplete, reestablishing the connection between the edge device and the visual device.
6. The edge device-based object recognition method of claim 1, wherein the training of the pre-constructed object recognition model with the sample image set to obtain a standard object recognition model comprises:
inputting the training image set into the target object recognition model for feature extraction to obtain a training result;
calculating a loss value between the training result and a preset standard result by using a preset loss function;
when the loss value is larger than or equal to a preset loss threshold value, adjusting parameters of the target object recognition model, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
when the loss value is smaller than the loss threshold value, obtaining a trained target object recognition model;
inputting the verification image set into the trained target object identification model for verification processing to obtain a verification result;
when the verification result is that the verification fails, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
when the verification result is that the verification passes, inputting the test image set into the target object recognition model that passed verification for test processing;
when the test processing fails, returning to the step of inputting the training image set to the target object recognition model for feature extraction to obtain a training result;
and when the test processing passes, obtaining a standard target object recognition model.
7. The edge device-based target object identification method according to any one of claims 1 to 6, wherein the step of performing target object screening processing on the training image set to obtain a sample image set with a preset type of target object includes:
image cutting is carried out on the training image set according to a preset image size, and a cut image set is obtained;
and screening out a sample image set with preset types of target objects from the cut image set.
8. A target object recognition apparatus based on an edge device, the apparatus comprising:
a target object screening module, configured to perform target object screening processing on a training image set to obtain a sample image set with a preset type of target object;
a model training module, configured to train a pre-constructed target object recognition model using the sample image set to obtain a standard target object recognition model;
a parameter reduction module, configured to perform parameter reduction processing on model parameters of the standard target object recognition model to obtain a reduced standard target object recognition model;
a process compiling module, configured to input the sample image set into the reduced standard target object recognition model for target object recognition processing, extract the process of the target object recognition processing, and compile the process into a dynamic link library;
and a target object recognition module, configured to embed the dynamic link library into an edge device, establish a connection between the edge device and a preset visual device, and transmit an image to be recognized, received by the visual device, into the edge device for target object recognition to obtain a recognition result.
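The parameter reduction module above shrinks the model for edge deployment; the description elsewhere mentions floating-point conversion, which suggests low-bit quantization. The simple min-max affine scheme below is one plausible reading offered as an illustration, not the patent's exact method:

```python
def quantize_weights(weights, num_bits=8):
    """Map floating-point model weights to low-bit integers via affine
    (min-max) quantization, and also return the dequantized approximations
    so the reconstruction error can be inspected."""
    lo, hi = min(weights), max(weights)
    qmax = (1 << num_bits) - 1                       # e.g. 255 for 8 bits
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]   # integers in [0, qmax]
    dequant = [lo + v * scale for v in q]            # approximate originals
    return q, scale, lo, dequant
```

Storing 8-bit integers plus one scale and offset per tensor cuts weight storage roughly fourfold versus 32-bit floats, which is the kind of reduction that makes a model fit on a resource-constrained edge device.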
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the edge device based object recognition method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the edge device-based object recognition method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110465403.3A CN113128440A (en) | 2021-04-28 | 2021-04-28 | Target object identification method, device, equipment and storage medium based on edge equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113128440A true CN113128440A (en) | 2021-07-16 |
Family
ID=76781033
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110465403.3A Pending CN113128440A (en) | 2021-04-28 | 2021-04-28 | Target object identification method, device, equipment and storage medium based on edge equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113128440A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110069715A (en) * | 2019-04-29 | 2019-07-30 | 腾讯科技(深圳)有限公司 | A kind of method of information recommendation model training, the method and device of information recommendation |
CN110472529A (en) * | 2019-07-29 | 2019-11-19 | 深圳大学 | Target identification navigation methods and systems |
WO2020057000A1 (en) * | 2018-09-19 | 2020-03-26 | 深圳云天励飞技术有限公司 | Network quantization method, service processing method and related products |
CN112711423A (en) * | 2021-01-18 | 2021-04-27 | 深圳中兴网信科技有限公司 | Engine construction method, intrusion detection method, electronic device and readable storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113674176A (en) * | 2021-08-23 | 2021-11-19 | 北京市商汤科技开发有限公司 | Image restoration method and device, electronic equipment and storage medium |
CN113674176B (en) * | 2021-08-23 | 2024-04-16 | 北京市商汤科技开发有限公司 | Image restoration method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112446025A (en) | Federal learning defense method and device, electronic equipment and storage medium | |
CN113283446B (en) | Method and device for identifying object in image, electronic equipment and storage medium | |
CN113705462B (en) | Face recognition method, device, electronic equipment and computer readable storage medium | |
CN112396005A (en) | Biological characteristic image recognition method and device, electronic equipment and readable storage medium | |
CN112137591B (en) | Target object position detection method, device, equipment and medium based on video stream | |
CN111738212B (en) | Traffic signal lamp identification method, device, equipment and medium based on artificial intelligence | |
CN112052850A (en) | License plate recognition method and device, electronic equipment and storage medium | |
CN113961473A (en) | Data testing method and device, electronic equipment and computer readable storage medium | |
CN114550076A (en) | Method, device and equipment for monitoring area abnormal behaviors and storage medium | |
CN113065607A (en) | Image detection method, image detection device, electronic device, and medium | |
CN113806434A (en) | Big data processing method, device, equipment and medium | |
CN115471775A (en) | Information verification method, device and equipment based on screen recording video and storage medium | |
CN112668575A (en) | Key information extraction method and device, electronic equipment and storage medium | |
CN114639152A (en) | Multi-modal voice interaction method, device, equipment and medium based on face recognition | |
CN112104662B (en) | Far-end data read-write method, device, equipment and computer readable storage medium | |
CN113128440A (en) | Target object identification method, device, equipment and storage medium based on edge equipment | |
CN112541688A (en) | Service data checking method and device, electronic equipment and computer storage medium | |
CN112580505B (en) | Method and device for identifying network point switch door state, electronic equipment and storage medium | |
CN113221888B (en) | License plate number management system test method and device, electronic equipment and storage medium | |
CN114942855A (en) | Interface calling method and device, electronic equipment and storage medium | |
CN115082736A (en) | Garbage identification and classification method and device, electronic equipment and storage medium | |
CN111859985B (en) | AI customer service model test method and device, electronic equipment and storage medium | |
CN114390200A (en) | Camera cheating identification method, device, equipment and storage medium | |
CN113419951A (en) | Artificial intelligence model optimization method and device, electronic equipment and storage medium | |
CN113191455B (en) | Edge computing box election method and device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||