CN117974607A - Tongue image recognition method based on federal learning - Google Patents

Tongue image recognition method based on federal learning

Info

Publication number
CN117974607A
CN117974607A
Authority
CN
China
Prior art keywords
weight
model
convolution
tongue
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410154175.1A
Other languages
Chinese (zh)
Inventor
刘忆宁
廖哲皓
蔡雪玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202410154175.1A priority Critical patent/CN117974607A/en
Publication of CN117974607A publication Critical patent/CN117974607A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a tongue image recognition method based on federated learning, which comprises the following steps: performing preliminary processing on the tongue image; extracting features from the processed tongue image with the DRNet deep neural network model; training the model through federated learning, with each client applying differential privacy to protect weight privacy when uploading model weights; and having the central server aggregate the weights with a weighted federated averaging algorithm. The invention can effectively extract key features from tongue images and identify the corresponding constitution types while protecting data privacy.

Description

Tongue image recognition method based on federal learning
Technical Field
The invention relates to the technical field of medical image analysis, and in particular to a tongue image recognition method based on federated learning.
Background
In traditional Chinese medicine diagnosis, tongue diagnosis is an important component: the tongue manifestation reflects a person's constitution type, which can be distinguished by observing physiological characteristics of the tongue such as color and cracks. Traditionally this method relies mainly on the doctor's experience, but different doctors cannot be guaranteed to reach the same tongue classification for the same patient, so accurate tongue type identification remains a great challenge for traditional Chinese medicine practitioners.
In recent years, with the development of artificial intelligence and deep learning, using these technologies to analyze tongue images and assist constitution recognition has become a research hotspot, bringing new possibilities for modernizing constitution recognition in traditional Chinese medicine: tongue images are recognized with deep learning so that constitution types can be identified more accurately and objectively.
However, tongue images, as a kind of medical image, are limited by the privacy and data-sharing constraints of medical imaging. Because of the sensitivity of medical data, medical images usually cannot be shared publicly; different institutions, organizations, and enterprises hold medical data of different scales and heterogeneity that are difficult to integrate, forming data islands that limit the training and optimization of deep learning models. Under these conditions, federated learning, as a new machine learning paradigm, offers a way to solve the problem: applied to tongue diagnosis image analysis, it can improve recognition accuracy while protecting individual privacy, overcoming the limitations of the prior art.
Disclosure of Invention
The invention provides a tongue image recognition method based on federated learning that effectively extracts key features from tongue images to identify the corresponding constitution types while also protecting data privacy.
The technical solution of the invention mainly comprises the following:
1. The system first receives the input tongue image and performs preliminary processing, including cropping, scaling, and normalization, to facilitate subsequent analysis.
2. Features are extracted from the processed tongue image with the DRNet deep neural network model. The DRNet model includes convolution layers, dynamic convolution layers, and fully connected layers which, in combination, effectively extract key physiological features from the image.
3. The clients train the DRNet model through federated learning, applying differential privacy to protect weight privacy when uploading model weights, and the central server aggregates the weights with a weighted federated averaging algorithm, improving the global performance of the model.
The DRNet model comprises four tongue image classification sub-models that identify tooth marks, cracks, tongue color, and tongue coating color respectively. The models are deployed locally to analyze the tongue image under examination and extract the four physiological characteristics. Finally, the system produces a recognition result from a comprehensive analysis of these characteristics, and the specific constitution classification is judged by doctors according to their professional knowledge.
The DRNet deep neural network model in step 2 is constructed as follows:
(1) The DRNet (DyResNet) network is a variant of the residual network ResNet and includes multiple convolution layers, multiple dynamic convolution layers, and a fully connected layer. This structure enables DRNet to effectively extract deep features of the image while enhancing the model's ability to recognize key features.
(2) In the convolution layers, DRNet employs the ReLU activation function, which helps alleviate the vanishing-gradient problem and speeds up model convergence. The pooling layers use max pooling, which effectively reduces the feature dimensionality while retaining the most important feature information.
(3) The dynamic convolution layer in DRNet has 64 input planes and 64 output planes, a 3x3 convolution kernel, stride 1, and padding 1. Adding the dynamic convolution layer improves the model's adaptability and flexibility with respect to the input data.
(4) The final fully connected layer uses a softmax activation function to convert the feature vector into a probability distribution, which helps produce more accurate discrimination in multi-class tasks.
The dynamic convolution layer mentioned in item (1) is constructed as follows:
(a) Initialization of the multi-head attention mechanism: a multi-head attention module containing N independent heads is constructed. Each head learns a different feature representation through its own fully connected layer sequence. The attention weight of each head is calculated as
A(i) = softmax(FC2(i)(FC1(i)(X)) / T)
where A(i) denotes the weight of the i-th attention head, T is a temperature parameter, FC1(i) and FC2(i) are the two fully connected layers of the i-th head, and X is the input feature map.
(b) Weight generation and application: the output of each head is processed by a softmax function to generate a set of weights. These weights are multiplied with the corresponding convolution kernels to dynamically adjust the response of each kernel to the input features. The kernel applied in the convolution is calculated as
Kdynamic = Σ_i A(i)·K(i)
where Kdynamic is the dynamic convolution kernel and K(i) is the convolution kernel corresponding to the i-th attention head.
(c) Synthesis of the dynamic convolution kernel: all attention-weighted convolution kernels are merged to form the core component of the dynamic convolution layer.
(d) Convolution operation and post-processing: the input feature map is convolved with the synthesized dynamic convolution kernel, followed by batch normalization and an activation function to produce the output feature map:
Y = Activation(BN(Conv(X, Kdynamic)))
where Y is the output feature map, BN is batch normalization, Activation is the activation function, and Conv denotes the convolution operation.
(e) Multi-head attention feature fusion: the features of the heads are fused by concatenation or another synthesis method to exploit the information captured by the different heads. The fusion can be expressed as
Fmerged = concat(A(1), A(2), …, A(N))
where Fmerged is the fused feature.
(f) Generating the output feature map: the synthesized features are processed by a final activation function and output as the final feature map Y, serving the subsequent network layers or as the model output.
The federated learning and model aggregation process in step 3 is as follows:
(a) Distributed training: in the federated learning framework, the neural network model is deployed to multiple clients, each of which trains the model independently on its local data. To ensure training consistency and efficiency, all training hyperparameters, such as the learning rate, batch size, and number of local training rounds, are issued by the central server, keeping the clients synchronized during training.
(b) Model weight upload and privacy enhancement: after local training, each client uploads its trained model weights to the central server. Differential privacy secures the model weights in transit: each client randomly perturbs its weights before uploading,
Wi' = Wi + Noise(λ)
where Wi is the model weight of the i-th client, Wi' is the perturbed weight, and Noise(λ) is random noise generated from the privacy budget λ.
(c) Model weight aggregation: the central server aggregates the model weights with a weighted federated averaging algorithm (weighted FedAvg), which can adjust each client's contribution according to the amount, quality, or performance of its data to optimize the aggregated model:
Wglobal = Σ_i ki·Wi'
where ki is the weight factor of the i-th client.
(d) Weight distribution and iteration: the aggregated model weights are then distributed to the clients for the next training round. This iterative loop gradually optimizes the model globally while preserving the privacy of each client's data.
In addition, each client trains models for four different tongue characteristics (tongue color, tongue coating color, tooth marks, and cracks), improving the model's ability to recognize tongue image features accurately.
The traditional Chinese medicine constitution identification process is as follows:
(a) Image analysis: the system first receives and analyzes the input tongue image. At this stage, four independent models identify and score the image on four key features: tongue color, tongue coating color, cracks, and tooth marks. Each model discriminates the following subdivisions: tongue color (pale white, red, dark red), tongue coating color (yellow, white, none), tooth marks (present or absent), and cracks (present or absent).
(b) Feature selection: within each feature class, the system selects the label with the highest likelihood as the representation of the tongue image in that class.
(c) Comprehensive classification: the system then considers the four characteristics together and classifies the traditional Chinese medicine constitution according to the correspondence between constitution types and tongue manifestations.
Drawings
FIG. 1 is a schematic diagram of a dynamic convolutional layer module of the present invention;
FIG. 2 is a flow chart of the federated tongue coating recognition model training process of the present invention;
FIG. 3 is an overall flow chart of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the drawings and embodiments.
The general flow of this embodiment is shown in FIG. 3 and comprises the following steps:
s1, establishing a data set and preprocessing data: the images are manually classified and training and testing datasets are created.
S2, design of the DRNet deep neural network structure: a model combining dynamic convolution layers and a multi-head attention mechanism is constructed to capture the complex features of tongue images. The dynamic convolution layer adaptively adjusts its convolution kernel according to the input data, and the multi-head attention mechanism lets the model focus on key parts of the image along multiple dimensions, further improving recognition accuracy. The dynamic convolution layer architecture of this embodiment is shown in FIG. 1.
S3, federated learning model training: models are trained independently on multiple clients to protect data privacy, and the learning results are merged through federated learning. The federated learning model training flow of this embodiment is shown in FIG. 2.
S4, tongue condition recognition: the tongue coating image under test is input into the trained network model, its physiological characteristics are determined, and the constitution is classified accordingly. The tongue image constitution recognition flow of this embodiment is shown in FIG. 3.
The tongue coating image dataset used in this embodiment was collected by medical institutions such as provincial traditional Chinese medicine hospitals, with the tongue coating region specially cropped. The physiological characteristics were manually classified by professional doctors: tongue color comprises pale red, dark red, and pale white; tongue coating color comprises white coating, yellow coating, and no coating; tooth marks are divided into present and absent; cracks are divided into present and absent; and all classes were labeled according to a unified standard.
In step S1, after data enhancement such as adding noise, changing brightness, adding random points, and mirroring, the tongue coating image dataset is divided into a training set and a testing set. For the training set, the pictures are first randomly resized and cropped to 224 x 224, then randomly flipped horizontally to increase sample diversity; each picture is then converted into a tensor, and the three RGB channels are normalized per channel. The validation set is processed similarly, but random cropping is replaced by center cropping so that the model can learn the features of the central region of the image.
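As an illustrative, non-limiting sketch, the preprocessing described above may be implemented with torchvision as follows; the ImageNet normalization constants and the 256-pixel resize before the center crop are assumptions, since the disclosure only fixes the 224 x 224 crop, the random horizontal flip, and the per-channel RGB normalization:

    import torchvision.transforms as T

    # Assumed ImageNet channel statistics; the disclosure does not specify them.
    MEAN, STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

    train_transform = T.Compose([
        T.RandomResizedCrop(224),   # random-size transform, cropped to 224 x 224
        T.RandomHorizontalFlip(),   # random horizontal flip for sample diversity
        T.ToTensor(),               # convert the picture to a tensor
        T.Normalize(MEAN, STD),     # normalize the three RGB channels per channel
    ])

    val_transform = T.Compose([
        T.Resize(256),              # assumed pre-resize before the center crop
        T.CenterCrop(224),          # center crop preserves central-region features
        T.ToTensor(),
        T.Normalize(MEAN, STD),
    ])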
In step S2, the DRNet deep neural network is constructed as follows (a structural sketch in PyTorch follows the list):
(1) The DRNet (DyResNet) network is a variant of the residual network ResNet and includes multiple convolution layers, multiple dynamic convolution layers, and a fully connected layer. This structure enables DRNet to effectively extract deep features of the image while enhancing the model's ability to recognize key features.
(2) In the convolution layers, DRNet employs the ReLU activation function, which helps alleviate the vanishing-gradient problem and speeds up model convergence. The pooling layers use max pooling, which effectively reduces the feature dimensionality while retaining the most important feature information.
(3) The dynamic convolution layer in DRNet has 64 input planes and 64 output planes, a 3x3 convolution kernel, stride 1, and padding 1. Adding the dynamic convolution layer improves the model's adaptability and flexibility with respect to the input data.
(4) The final fully connected layer uses a softmax activation function to convert the feature vector into a probability distribution, which helps produce more accurate discrimination in multi-class tasks.
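As a structural sketch under stated assumptions (layer counts and channel widths beyond points (1)-(4) are not fixed by this disclosure), the DRNet skeleton can be written in PyTorch as follows; DynamicConv2d denotes the dynamic convolution module sketched after the construction steps below:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DRNet(nn.Module):
        # Minimal DyResNet sketch: an ordinary convolution stage, a residual
        # stage built from dynamic convolutions (64 planes, 3x3, stride 1,
        # padding 1), and a softmax-terminated fully connected head.
        def __init__(self, num_classes: int):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
                nn.BatchNorm2d(64),
                nn.ReLU(inplace=True),                 # ReLU eases vanishing gradients
                nn.MaxPool2d(3, stride=2, padding=1),  # max pooling keeps salient features
            )
            self.dyn1 = DynamicConv2d(64, 64, kernel_size=3, stride=1, padding=1)
            self.dyn2 = DynamicConv2d(64, 64, kernel_size=3, stride=1, padding=1)
            self.head = nn.Linear(64, num_classes)     # final fully connected layer

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.stem(x)
            x = x + self.dyn2(self.dyn1(x))            # ResNet-style residual connection
            x = F.adaptive_avg_pool2d(x, 1).flatten(1)
            return F.softmax(self.head(x), dim=1)      # probability distribution, point (4)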
The dynamic convolution layer mentioned in item (1) is constructed as follows (an illustrative module follows these steps):
(a) Initialization of the multi-head attention mechanism: a multi-head attention module containing N independent heads is constructed. Each head learns a different feature representation through its own fully connected layer sequence. The attention weight of each head is calculated as
A(i) = softmax(FC2(i)(FC1(i)(X)) / T)
where A(i) denotes the weight of the i-th attention head, T is a temperature parameter, FC1(i) and FC2(i) are the two fully connected layers of the i-th head, and X is the input feature map.
(b) Weight generation and application: the output of each head is processed by a softmax function to generate a set of weights. These weights are multiplied with the corresponding convolution kernels to dynamically adjust the response of each kernel to the input features. The kernel applied in the convolution is calculated as
Kdynamic = Σ_i A(i)·K(i)
where Kdynamic is the dynamic convolution kernel and K(i) is the convolution kernel corresponding to the i-th attention head.
(c) Synthesis of the dynamic convolution kernel: all attention-weighted convolution kernels are merged to form the core component of the dynamic convolution layer.
(d) Convolution operation and post-processing: the input feature map is convolved with the synthesized dynamic convolution kernel, followed by batch normalization and an activation function to produce the output feature map:
Y = Activation(BN(Conv(X, Kdynamic)))
where Y is the output feature map, BN is batch normalization, Activation is the activation function, and Conv denotes the convolution operation.
(e) Multi-head attention feature fusion: the features of the heads are fused by concatenation or another synthesis method to exploit the information captured by the different heads. The fusion can be expressed as
Fmerged = concat(A(1), A(2), …, A(N))
where Fmerged is the fused feature.
(f) Generating the output feature map: the synthesized features are processed by a final activation function and output as the final feature map Y, serving the subsequent network layers or as the model output.
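The following non-limiting PyTorch sketch realizes steps (a)-(f); the softmax being taken across the N heads and the global average pooling used to feed the fully connected pair FC1(i), FC2(i) are assumptions made to obtain a concrete, runnable module:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicConv2d(nn.Module):
        def __init__(self, in_planes, out_planes, kernel_size=3, stride=1,
                     padding=1, num_heads=4, temperature=30.0):
            super().__init__()
            self.stride, self.padding = stride, padding
            self.temperature = temperature  # T in the attention formula
            # (a) one FC1 -> FC2 pair per attention head, fed by pooled features
            self.heads = nn.ModuleList(
                nn.Sequential(nn.Linear(in_planes, in_planes // 2),   # FC1(i)
                              nn.ReLU(inplace=True),
                              nn.Linear(in_planes // 2, 1))           # FC2(i)
                for _ in range(num_heads))
            # one candidate convolution kernel K(i) per head
            self.kernels = nn.Parameter(0.01 * torch.randn(
                num_heads, out_planes, in_planes, kernel_size, kernel_size))
            self.bn = nn.BatchNorm2d(out_planes)

        def forward(self, x):
            b, c, h, w = x.shape
            ctx = F.adaptive_avg_pool2d(x, 1).flatten(1)   # assumed GAP context
            # (e) per-head outputs are concatenated before the softmax
            logits = torch.cat([head(ctx) for head in self.heads], dim=1)
            attn = F.softmax(logits / self.temperature, dim=1)  # A(i), step (b)
            # (c) synthesize Kdynamic = sum_i A(i) * K(i), one kernel per sample
            kernel = torch.einsum('bn,noikl->boikl', attn, self.kernels)
            # (d) grouped convolution applies each sample's kernel, then BN + ReLU
            out = F.conv2d(x.reshape(1, b * c, h, w),
                           kernel.reshape(-1, c, *kernel.shape[-2:]),
                           stride=self.stride, padding=self.padding, groups=b)
            return F.relu(self.bn(out.reshape(b, -1, *out.shape[-2:])))  # (f)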
In step S3, the federated learning and model aggregation steps are as follows (a sketch of steps (b) and (c) follows the list):
(a) Distributed training: in the federated learning framework, the neural network model is deployed to multiple clients, each of which trains the model independently on its local data. To ensure training consistency and efficiency, all training hyperparameters, such as the learning rate, batch size, and number of local training rounds, are issued by the central server, keeping the clients synchronized during training.
(b) Model weight upload and privacy enhancement: after local training, each client uploads its trained model weights to the central server. Differential privacy secures the model weights in transit: each client randomly perturbs its weights before uploading,
Wi' = Wi + Noise(λ)
where Wi is the model weight of the i-th client, Wi' is the perturbed weight, and Noise(λ) is random noise generated from the privacy budget λ.
(c) Model weight aggregation: the central server aggregates the model weights with a weighted federated averaging algorithm (weighted FedAvg), which can adjust each client's contribution according to the amount, quality, or performance of its data to optimize the aggregated model:
Wglobal = Σ_i ki·Wi'
where ki is the weight factor of the i-th client.
(d) Weight distribution and iteration: the aggregated model weights are then distributed to the clients for the next training round. This iterative loop gradually optimizes the model globally while preserving the privacy of each client's data.
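A minimal sketch of steps (b) and (c), assuming Laplace noise for Noise(λ) and weight factors ki normalized to sum to one (the disclosure fixes neither choice):

    import numpy as np

    def perturb_weights(weights: dict, lam: float) -> dict:
        # Client side, step (b): Wi' = Wi + Noise(lambda); Laplace noise assumed.
        return {name: w + np.random.laplace(scale=lam, size=w.shape)
                for name, w in weights.items()}

    def weighted_fedavg(client_weights: list, factors: list) -> dict:
        # Server side, step (c): Wglobal = sum_i ki * Wi', with ki normalized.
        k = np.asarray(factors, dtype=float)
        k = k / k.sum()
        agg = {name: np.zeros_like(w) for name, w in client_weights[0].items()}
        for k_i, weights in zip(k, client_weights):
            for name, w in weights.items():
                agg[name] = agg[name] + k_i * w
        return agg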
In addition, each client trains models for four different tongue characteristics (tongue color, tongue coating color, tooth marks, and cracks), improving the model's ability to recognize tongue image features accurately.
In step S4, tongue condition recognition proceeds as follows (an inference sketch follows the steps):
(a) Image analysis: the system first receives the input tongue image, preprocesses it, and crops it to 224 x 224. Four independent models then identify and score the image on four key features: tongue color, tongue coating color, cracks, and tooth marks. Each model discriminates the following subdivisions: tongue color (pale white, red, dark red), tongue coating color (yellow, white, none), tooth marks (present or absent), and cracks (present or absent).
(b) Feature selection: within each feature class, the system selects the label with the highest likelihood as the representation of the tongue image in that class.
(c) Comprehensive classification: the system then considers the four characteristics together and classifies the traditional Chinese medicine constitution according to the correspondence between constitution types and tongue manifestations.
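The recognition of steps (a)-(c) can be sketched as follows; the label sets and the models mapping are hypothetical names mirroring the subdivisions listed above, and the final mapping from the four selected labels to a constitution type is the doctor-defined correspondence of step (c):

    import torch

    # Hypothetical label sets mirroring the four feature classes above.
    LABELS = {
        "tongue_color":  ["pale white", "red", "dark red"],
        "coating_color": ["yellow", "white", "none"],
        "tooth_marks":   ["present", "absent"],
        "cracks":        ["present", "absent"],
    }

    @torch.no_grad()
    def recognize(image: torch.Tensor, models: dict) -> dict:
        # Run the four independent sub-models on one preprocessed 224 x 224
        # image and keep, per feature class, the most likely label (steps a-b).
        batch = image.unsqueeze(0)            # add the batch dimension
        result = {}
        for feature, model in models.items():
            probs = model(batch).squeeze(0)   # softmax output of the sub-model
            result[feature] = LABELS[feature][int(probs.argmax())]
        return result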
Technical features and beneficial effects of the invention:
The prior art frequently faces data privacy leakage and low processing efficiency in centralized learning environments. Through the innovative deep learning model DRNet, the invention effectively extracts key features from tongue images and identifies the traditional Chinese medicine constitution type while effectively protecting data privacy. Specifically:
(1) Data privacy protection: with federated learning, clients upload only model parameters rather than raw data, effectively protecting data privacy. Differential privacy further strengthens this protection and secures the client data.
(2) Processing efficiency: by using distributed computing resources, the invention accelerates model training, adapts to large-data scenarios, and improves learning efficiency.
(3) Accuracy: the DRNet model combines deep learning with four sub-models responsible for identifying tooth marks, cracks, tongue color, and coating color respectively; integrating these features improves recognition accuracy.
(4) Generalization: DRNet improves the model's generalization across different tongue conditions and can adapt to diverse clinical tongue data.

Claims (3)

1. A tongue image recognition method based on federated learning, comprising preliminary processing of tongue images, characterized in that: features are extracted from the processed tongue image with a DRNet deep neural network model;
the clients train the model through federated learning, applying differential privacy to protect weight privacy when model weights are uploaded, and a central server aggregates the weights with a weighted federated averaging algorithm;
the DRNet deep neural network model comprises a plurality of convolution layers, a plurality of dynamic convolution layers, and a fully connected layer;
the dynamic convolution layer is constructed as follows:
(a) Initialization of the multi-head attention mechanism: a multi-head attention module containing N independent heads is constructed, each head learning a different feature representation through its own fully connected layer sequence, with the attention weight of each head calculated as A(i) = softmax(FC2(i)(FC1(i)(X)) / T), where A(i) denotes the weight of the i-th attention head, T is a temperature parameter, FC1(i) and FC2(i) are the two fully connected layers of the i-th head, and X is the input feature map;
(b) Weight generation and application: the output of each head is processed by a softmax function to generate a set of weights; these weights are multiplied with the corresponding convolution kernels to dynamically adjust the response of each kernel to the input features, with the kernel applied in the convolution calculated as Kdynamic = Σ_i A(i)·K(i), where Kdynamic is the dynamic convolution kernel and K(i) is the convolution kernel corresponding to the i-th attention head;
(c) Synthesis of the dynamic convolution kernel: all attention-weighted convolution kernels are merged to form the core component of the dynamic convolution layer;
(d) Convolution operation and post-processing: the input feature map is convolved with the synthesized dynamic convolution kernel, followed by batch normalization and an activation function to produce the output feature map, calculated as Y = Activation(BN(Conv(X, Kdynamic))), where Y is the output feature map, BN is batch normalization, Activation is the activation function, and Conv denotes the convolution operation;
(e) Multi-head attention feature fusion: the features of the heads are fused by concatenation or another synthesis method to exploit the information captured by the different heads, expressed as Fmerged = concat(A(1), A(2), …, A(N)), where Fmerged is the fused feature;
(f) Generating the output feature map: the synthesized features are processed by a final activation function and output as the final feature map Y.
2. The method of claim 1, wherein the process of training the model through federated learning comprises:
(a) Distributed training;
(b) Model weight upload and privacy enhancement: after local training, each client uploads its trained model weights to a central server; differential privacy secures the weights in transit, with each client randomly perturbing its weights before uploading as Wi' = Wi + Noise(λ), where Wi is the model weight of the i-th client, Wi' is the perturbed weight, and Noise(λ) is random noise generated from the privacy budget λ;
(c) Model weight aggregation: the central server aggregates the model weights with a weighted federated averaging algorithm that adjusts each client's contribution according to the amount, quality, or performance of its data to optimize the aggregated model, expressed as Wglobal = Σ_i ki·Wi', where ki is the weight factor of the i-th client;
(d) Weight distribution and iteration.
3. The method according to claim 1 or 2, characterized in that:
in the convolution layers, the DRNet deep neural network uses the ReLU activation function, and the pooling layers use max pooling;
the dynamic convolution layer has 64 input planes and 64 output planes, a 3x3 convolution kernel, stride 1, and padding 1;
the fully connected layer uses a softmax activation function to convert feature vectors into probability distributions.
CN202410154175.1A 2024-02-02 2024-02-02 Tongue image recognition method based on federal learning Pending CN117974607A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410154175.1A CN117974607A (en) 2024-02-02 2024-02-02 Tongue image recognition method based on federal learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410154175.1A CN117974607A (en) 2024-02-02 2024-02-02 Tongue image recognition method based on federal learning

Publications (1)

Publication Number Publication Date
CN117974607A true CN117974607A (en) 2024-05-03

Family

ID=90850966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410154175.1A Pending CN117974607A (en) 2024-02-02 2024-02-02 Tongue image recognition method based on federal learning

Country Status (1)

Country Link
CN (1) CN117974607A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination