CN116958192A - Event camera image reconstruction method based on diffusion model

Event camera image reconstruction method based on diffusion model

Info

Publication number
CN116958192A
Authority
CN
China
Prior art keywords
event
training
image
model
image reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310986749.7A
Other languages
Chinese (zh)
Inventor
杨敬钰
毕春洋
岳焕景
李坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202310986749.7A priority Critical patent/CN116958192A/en
Publication of CN116958192A publication Critical patent/CN116958192A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an event camera image reconstruction method based on a diffusion model, belonging to the technical field of digital image signal processing. Existing deep-learning-based dense image reconstruction algorithms for event cameras suffer from the fact that event data is motion dependent and exhibits considerable noise and non-ideal effects, so their performance is neither good nor stable. The method introduces the now well-developed diffusion model and uses the generative power of the generative model to guide the image reconstruction process, compensating for insufficient event information and unstable reconstruction results. Moreover, the invention establishes a connection between event-camera image reconstruction and diffusion models, so that subsequent diffusion-based tasks such as super-resolution and colorization can be easily applied to event camera image reconstruction.

Description

Event camera image reconstruction method based on diffusion model
Technical Field
The invention belongs to the technical field of digital image signal processing, and particularly relates to an event camera image reconstruction method based on a diffusion model.
Background
An event camera is a new type of vision sensor, also known as a dynamic vision sensor or DAVIS (Dynamic and Active-Pixel Vision Sensor). Inspired by biological vision systems and designed around the principle of perceiving only moving objects, it achieves unique characteristics such as high temporal resolution, high dynamic range and low power consumption through an asynchronous, pixel-independent imaging paradigm, and successfully overcomes the problems of ordinary cameras with respect to spatial redundancy, motion blur and the like. Event cameras are therefore widely applied and perform well in fields such as high-speed motion estimation, high dynamic range mapping, and feature detection and tracking.
Unlike conventional cameras that image by accumulating photons during an exposure, each pixel of an event camera is equipped with an independent photo-sensing module. When the brightness at a pixel changes by more than a preset threshold, a differential pulse signal (also referred to as event data) is output. Each event is encoded as a four-tuple (x_i, y_i, t_i, p_i), where (x_i, y_i) are the pixel coordinates, t_i is the trigger time, and p_i is the polarity of the brightness change. Because all pixels operate independently, the data output of the event camera is temporally asynchronous and spatially sparse. This imaging paradigm reduces the amount of redundant data and discards the conventional notion of imaging in fixed time units. However, due to its non-Euclidean data structure, existing image reconstruction algorithms have difficulty accurately reconstructing images from event data. It is therefore necessary to design new algorithms suited to the spatio-temporal nature of event data.
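As an illustrative sketch (not part of the filing), the per-pixel triggering rule described above can be written as follows; the Event tuple, the helper function, and the example threshold value are assumptions made for this sketch.

```python
from typing import NamedTuple, List

class Event(NamedTuple):
    x: int      # pixel column
    y: int      # pixel row
    t: float    # trigger timestamp
    p: int      # polarity: +1 for brightness increase, -1 for decrease

def maybe_emit_events(x: int, y: int, t: float,
                      log_intensity: float, last_log_intensity: float,
                      threshold: float = 0.25) -> List[Event]:
    """Emit events whenever the log-intensity change at a pixel exceeds the contrast threshold."""
    delta = log_intensity - last_log_intensity
    polarity = 1 if delta > 0 else -1
    # one event per threshold crossing, following the usual DVS pixel model
    return [Event(x, y, t, polarity) for _ in range(int(abs(delta) // threshold))]
```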
Currently, mainstream processing methods for event data fall into two categories: event-by-event processing and grouped-event (event packet) processing. Event-by-event processing is commonly used for tasks such as event denoising, feature extraction and image reconstruction filters, updating the system state in real time through differential computation. To overcome the limited amount of data carried by a single event, grouped-event processing accumulates the event data within a fixed event window. Common representations include event frames, voxel grids and three-dimensional point sets.
In recent years, deep learning has made remarkable breakthroughs in the field of image processing, and event camera image reconstruction methods based on deep learning frameworks perform better than traditional methods. However, event data depends on motion and exhibits considerable noise and non-ideal effects, so performance is neither good nor stable, and further improvement and optimization are needed.
In view of the above, the present invention proposes an event camera image reconstruction method based on a diffusion model.
Disclosure of Invention
The invention aims to provide an event camera image reconstruction method based on a diffusion model to solve the problems described in the background art, namely that event data carries limited information and is easily affected by noise. Event camera image reconstruction is guided by the generative capacity of a pre-trained generative diffusion model, that is, by the image prior it captures, so that the reconstructed image retains good detail.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
An event camera image reconstruction method based on a diffusion model, specifically comprising the following steps:
S1, acquiring an event stream: capturing the scene to be observed with an event camera, or generating it with an event camera simulator, to obtain event stream data of the target scene;
S2, preprocessing event data: because the event data stream cannot be fed directly into components commonly used in image processing algorithms, such as CNNs, the event data obtained in step S1 is preprocessed and represented in the form of event frames using a grouped-event (event packet) processing method;
S3, reconstruction network design: designing a reconstruction network comprising a pre-trained Stable Diffusion model and a pre-trained event data feature extractor; the pre-trained Stable Diffusion model comprises a pre-trained Stable Diffusion network part, and the pre-trained event data feature extractor comprises a pre-trained event data feature extraction network part and a fusion part;
S4, designing the DDIM sampling process: designing DDIM sampling hyperparameters for event camera image reconstruction, the hyperparameters comprising the number of sampling steps, the step spacing, and the unconditional-generation (classifier-free) guidance scale, so as to optimize the imaging result;
S5, obtaining a reconstructed image: encoding a noise image with the pre-trained autoencoder of the pre-trained Stable Diffusion model, inputting the encoded latent representation, the text guidance information and the event frame into the reconstruction network constructed in step S3, extracting event features with the pre-trained event data feature extractor, injecting them into the pre-trained Stable Diffusion model through the connection part, and generating the denoised image at the current time step;
S6, generating a reconstructed image: based on the DDIM sampling design of S4, repeating the single-step process of S5 to obtain the latent representation of the reconstructed image, and finally decoding it with the pre-trained autoencoder to generate the reconstructed image.
Preferably, S2 specifically includes the following:
The event points within the interval ΔT = t_k − t_0 are encoded into an event frame by linear interpolation, with the polarity p_i of each event point mapped to the two temporal channels closest to it, where E_k denotes the encoding result of the event points within the ΔT interval, p_i denotes the event point polarity, B denotes the number of voxel grid channels, and t_i denotes the timestamp of the event point.
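One standard form of this linear-interpolation (bilinear-in-time) encoding, consistent with the variable definitions above, is sketched here for illustration; the exact expression in the original filing is not reproduced and may differ:

$$E_k(x, y, c) = \sum_{i} p_i \, \max\!\left(0,\; 1 - \left| c - \frac{(B-1)(t_i - t_0)}{t_k - t_0} \right| \right), \qquad c \in \{0, \dots, B-1\},$$

where the sum runs over the events within ΔT that fall on pixel (x, y).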
Preferably, the inputs of the pre-trained Stable Diffusion model are the time-embedded noise image and the text guidance information, and the input of the pre-trained event data feature extractor is an event frame; the pre-trained event data feature extractor is pre-trained on an event-to-image reconstruction task and only its encoder part is used; and the pre-trained Stable Diffusion model and the connection part adopt the mature ControlNet structure.
Compared with the prior art, the invention provides an event camera image reconstruction method based on a diffusion model, which has the following beneficial effects:
the invention provides an event camera image reconstruction method based on a diffusion model, when an event image is reconstructed, as the information quantity contained in the event is influenced by motion and the influence on the quality of the reconstructed image for the attribute of brightness change is greatly reduced, the diffusion model applies strong generating capability, the original situation of generating a real image is made up, the generating quality is greatly improved, and the pre-training diffusion model is replaced to be well adapted to superdistribution, coloring and other works really achieve the adaptation of event processing and traditional image frame processing.
Drawings
FIG. 1 is a general flow chart of an event camera image reconstruction method based on a diffusion model according to the present invention;
FIG. 2 is a flow chart of the acquisition of the reconstructed image described in Example 1 of the present invention;
FIG. 3 shows an example of the experimental results in Example 1 of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, and not all, of the embodiments of the present invention.
Example 1:
the invention provides an event camera image reconstruction method based on a diffusion model, which specifically comprises the following steps:
s1, generating a simulation data set. Considering the difficulty of acquiring a real data set, the invention adopts an ESIM simulator. The simulator, in combination with the rendering engine and the event simulator, is capable of dynamically adaptively sampling event data. The invention sets the event trigger threshold of the simulator between 0.2 and 0.5 according to the estimation of the real data set to approximate the performance of the real data set to the maximum extent. By this method, the present invention is able to generate event data that approximates a real scene for further investigation and analysis.
S2, preprocessing event data. Considering both algorithm execution speed and reconstruction quality, the grouped-event (event packet) processing method is adopted to represent the event data from S1 as event frames. Specifically, the event points within the interval ΔT = t_k − t_0 are encoded into an event frame by linear interpolation, with the polarity p_i of each event point mapped to the two temporal channels closest to it, as formulated in S2 of the disclosure above, where E_k denotes the encoding result of the event points within the ΔT interval, p_i denotes the event point polarity, B denotes the number of voxel grid channels (here set to B = 5), and t_i is the timestamp of the event point.
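A minimal NumPy sketch of this event-frame encoding, given purely for illustration, is shown below; the function name, argument layout and array conventions are assumptions made for the sketch.

```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, H, W, B=5):
    """Encode an event packet into a B x H x W grid with linear temporal
    interpolation: each event's polarity is split between the two channels
    closest to its normalized timestamp."""
    grid = np.zeros((B, H, W), dtype=np.float32)
    t = t.astype(np.float64)
    t_norm = (B - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)   # map timestamps to [0, B-1]
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, B - 1)
    w_right = t_norm - left
    np.add.at(grid, (left, y, x), p * (1.0 - w_right))
    np.add.at(grid, (right, y, x), p * w_right)
    return grid
```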
S3, data augmentation. To generate the simulation dataset, the present invention employs the ESIM simulator, considering the difficulty of acquiring real datasets. The simulator combines a rendering engine with an event simulator and can generate event data in a dynamically adaptive manner. To stay as close as possible to the characteristics of real datasets, the event trigger threshold of the simulator is set in the range 0.2 to 0.5 based on an estimate from real data. In this way, event data resembling real scenes can be generated, providing useful resources for subsequent research and analysis.
S4, as shown in the Event Frame Encoder Blocks part of FIG. 1, the incoming event frame V (of size B×H×W) is first fed into a recurrent convolutional backbone network consisting of a head and three recurrent convolution blocks. The head converts the input event voxel grid V into a first-scale feature map f_0 (of size C_0×H×W). In the present invention, C_0 is set to 32.
Event-based video reconstruction can be improved by exploiting the temporal consistency between successive frames. Each recurrent block therefore contains a ConvLSTM layer that uses the previous state to enhance the temporal stability of the reconstruction. Furthermore, in each recurrent block a convolution layer with stride 2 halves the spatial size of the features, while the number of channels doubles as the scale level increases, i.e., C_l = C_0 × 2^l. Thus, the three stacked recurrent blocks produce a three-scale feature pyramid, which can be expressed as:
in the method, in the process of the invention,the output of the ConvLstm layer at time t is indicated.
S5, as shown in the Stable Diffusion and ControlNet part of FIG. 1, Stable Diffusion contains four pre-trained parts: an Autoencoder, a Time Encoder, a Prompt Encoder and a UNet. The mature Stable Diffusion control network, ControlNet, is selected as the backbone of image reconstruction. The low-dimensional features obtained in S4 (of size 256 × 32) are passed through a zero convolution layer to obtain a feature vector f_1' with the same dimensions as the output features of the first encoding block of Stable Diffusion. The initial Input is a pure-noise image sampled from a Gaussian distribution with the same dimensions as the low-dimensional representation output by the pre-trained autoencoder. The Input of the current time step is processed by the first encoding block of the ControlNet to obtain f_1; f_1' + f_1 is then fed into the subsequent encoding blocks, and the output feature of each encoding block of the ControlNet, after a zero convolution, is fed into Stable Diffusion and added to the corresponding feature to exert conditional control. The Output obtained under this condition, semantic and time-step control is the noise prediction for the current time step.
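The zero-convolution fusion can be sketched as follows (illustrative only; the class and argument names are assumptions, and real ControlNet implementations apply such additions at every encoder scale rather than at a single step):

```python
import torch
import torch.nn as nn

def zero_conv(channels):
    """1x1 convolution initialized to zero, so the conditioning branch
    starts as a no-op that does not disturb the frozen backbone."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ControlledEncoderStep(nn.Module):
    """Sketch of one conditioning step: event features pass through a zero
    convolution and are added to the ControlNet branch feature f1; the fused
    control feature is again zero-convolved before being added to the
    corresponding frozen Stable Diffusion feature."""
    def __init__(self, channels):
        super().__init__()
        self.zero_in = zero_conv(channels)
        self.zero_out = zero_conv(channels)

    def forward(self, event_feat, control_feat, sd_feat):
        f1_prime = self.zero_in(event_feat)        # event features -> f1'
        fused = control_feat + f1_prime            # f1 + f1'
        return sd_feat + self.zero_out(fused)      # inject into Stable Diffusion
```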
S6, reconstructed image acquisition. Referring to FIG. 2, the per-time-step operation of the algorithm framework is as described in S5. The present invention adopts the standard DDIM generation procedure, with the DDIM hyperparameters chosen as 50 sampling steps, an unconditional-generation (classifier-free) guidance scale of 9.0, and η = 0.0. Starting from pure Gaussian noise, at each time step the network predicts the noise contained in the current Input under the semantic, event-frame and time-step conditions; the Input is denoised in the standard DDIM manner to obtain the new Input, and the operation of step S5 is executed again. After all time steps have been executed, the reconstructed Output is obtained. Since Stable Diffusion generates in a latent space (images are encoded into a low-dimensional representation by the official pre-trained autoencoder before being fed into the network), the reconstructed latent obtained here is likewise decoded by the autoencoder to yield the reconstructed image.
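An illustrative sketch of the DDIM loop with classifier-free guidance and η = 0 follows; the function signature, the conditioning interface (encode_cond) and the alphas_cumprod schedule argument are assumptions, and the η noise term is omitted because η = 0 makes sampling deterministic.

```python
import torch

@torch.no_grad()
def ddim_sample(model, encode_cond, decode, event_frame, prompt_emb,
                latent_shape, alphas_cumprod, num_steps=50,
                guidance_scale=9.0, num_train_steps=1000, device="cuda"):
    """Minimal DDIM sampling loop with classifier-free guidance (eta = 0)."""
    z = torch.randn(latent_shape, device=device)          # start from pure Gaussian noise
    timesteps = torch.linspace(num_train_steps - 1, 0, num_steps).long()
    cond = encode_cond(event_frame)                       # event features for conditioning
    for i, t in enumerate(timesteps):
        eps_cond = model(z, t, prompt_emb, cond)          # conditional noise prediction
        eps_uncond = model(z, t, None, None)              # unconditional noise prediction
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[timesteps[i + 1]] if i + 1 < num_steps else torch.ones_like(a_t)
        z0_pred = (z - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # predicted clean latent
        z = a_prev.sqrt() * z0_pred + (1 - a_prev).sqrt() * eps  # DDIM update with eta = 0
    return decode(z)                                      # autoencoder decodes latent to image
```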
S7, loss function. Given an image z_0, the diffusion algorithm adds noise to the image step by step, producing a noisy image z_t, where t represents the number of noise-addition steps. When t is large enough, the image approximates pure noise. Given a set of conditions including the time step t, a text prompt c_t, and a task-specific condition c_f, the image diffusion algorithm learns a network ε_θ to predict the noise added to the noisy image z_t.
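The standard conditional noise-prediction objective consistent with these definitions is sketched below for illustration; the exact formula in the filing is not reproduced and may differ in notation:

$$\mathcal{L} = \mathbb{E}_{z_0,\, t,\, c_t,\, c_f,\, \epsilon \sim \mathcal{N}(0,1)} \left[ \left\| \epsilon - \epsilon_\theta\!\left(z_t,\, t,\, c_t,\, c_f\right) \right\|_2^2 \right]$$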
where L represents the overall learning objective of the entire diffusion model. This learning objective can be used directly to fine-tune the diffusion model.
S8, training process. A sample is randomly selected from the given data, and a time step t is randomly selected in the range 1 to T. The sample and the time step t are passed as inputs to the diffusion model, which samples a random noise and adds it to the input sample to form a noisy sample. The noisy sample and the time step t are then passed as inputs to the UNet neural network model.
The UNet model generates a sinusoidal position encoding from the time step t, combines it with the input sample, predicts the added noise and returns it as output. The loss between the noise predicted by the UNet model and the previously sampled random noise is then computed with the L2 loss function; a gradient is computed from this loss and the weights of the UNet model are updated.
These steps are repeated until the UNet model finishes training. Through this training process, the UNet model learns how to predict the appropriate noise and how to combine the position encoding with the input sample, thereby producing more accurate results.
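One training iteration as described in S7 and S8 can be sketched as follows (illustrative only; the UNet call signature and the batch layout are assumptions made for the sketch):

```python
import torch
import torch.nn.functional as F

def training_step(unet, batch, alphas_cumprod, num_train_steps=1000):
    """One diffusion training step: sample t, add noise to the latent,
    predict the noise with the UNet, and return the L2 loss."""
    z0, text_cond, event_cond = batch                    # latent, prompt embedding, event features
    t = torch.randint(0, num_train_steps, (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    z_t = a_t.sqrt() * z0 + (1 - a_t).sqrt() * noise     # forward diffusion
    noise_pred = unet(z_t, t, text_cond, event_cond)     # sinusoidal time embedding handled inside the UNet
    return F.mse_loss(noise_pred, noise)                 # L2 loss against the sampled noise
```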
Under the PyTorch Lightning (pytorch_lightning) framework, the training hyperparameters are set to a batch size of 4, a learning rate of 0.0005, and a training duration of 50 epochs.
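A minimal PyTorch Lightning wrapper reflecting these hyperparameters might look as follows; the methods compute_loss and trainable_parameters and the choice of the AdamW optimizer are assumptions not taken from the filing:

```python
import pytorch_lightning as pl
import torch

class EventDiffusionModule(pl.LightningModule):
    """Illustrative wrapper: only the ControlNet/event branch is assumed trainable,
    while the frozen Stable Diffusion weights are left untouched."""
    def __init__(self, model, lr=5e-4):
        super().__init__()
        self.model, self.lr = model, lr

    def training_step(self, batch, batch_idx):
        return self.model.compute_loss(batch)   # noise-prediction L2 loss as in S7/S8

    def configure_optimizers(self):
        return torch.optim.AdamW(self.model.trainable_parameters(), lr=self.lr)

# trainer = pl.Trainer(max_epochs=50)
# trainer.fit(EventDiffusionModule(model), train_dataloader)  # batch size 4 set in the dataloader
```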
Example 2:
based on example 1 but with the difference that:
the invention selects the data set HQF in the event field as a reference data set, and selects the advanced comparison method E2VID trained on the data set generated in the same mode, which has a greatly better effect on the synthesis of the high-speed event stream video than the prior technical level. In the experiment, the method and the E2VID method are used for generating the HQF data set completely, the generating effect of the method is obviously better than that of the E2VID, two challenging data streams are selected in the diagram and are respectively boxes and reflective materials, the front part of the data stream is unstable, the generating effect is generally poor, the first three frames are taken as an illustration, and the method is better than the E2VID method in the aspects of global structure, contrast, image quality and the like as seen in FIG. 3, wherein the first row of each group of graphs is the reconstruction effect of the invention, the second row of graphs is real data, and the third row of graphs is E2VID reconstruction result. The method makes up the situation that the information quantity of the event is insufficient and the reconstruction effect is unstable by utilizing the pre-training strong image prior, so that the method has better performance.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical scheme and inventive concept of the present invention, shall be covered by the scope of protection of the present invention.

Claims (3)

1. An event camera image reconstruction method based on a diffusion model, characterized by comprising the following steps:
S1, acquiring an event stream: capturing the scene to be observed with an event camera, or generating it with an event camera simulator, to obtain event stream data of the target scene;
S2, preprocessing event data: preprocessing the event data obtained in step S1 and representing it in the form of event frames using a grouped-event (event packet) processing method;
S3, reconstruction network design: designing a reconstruction network comprising a pre-trained Stable Diffusion model and a pre-trained event data feature extractor; the pre-trained Stable Diffusion model comprises a pre-trained Stable Diffusion network part, and the pre-trained event data feature extractor comprises a pre-trained event data feature extraction network part and a fusion part;
S4, designing a DDIM sampling process: designing DDIM sampling hyperparameters for event camera image reconstruction, the hyperparameters comprising the number of sampling steps, the step spacing, and the unconditional-generation (classifier-free) guidance scale;
S5, obtaining a reconstructed image: encoding a noise image with the pre-trained autoencoder of the pre-trained Stable Diffusion model, inputting the encoded latent representation, the text guidance information and the event frame into the reconstruction network constructed in step S3, extracting event features with the pre-trained event data feature extractor, injecting them into the pre-trained Stable Diffusion model through the connection part, and generating the denoised image at the current time step;
S6, generating a reconstructed image: based on the DDIM sampling design of S4, repeating the single-step process of S5 to obtain the latent representation of the reconstructed image, and finally decoding it with the pre-trained autoencoder to generate the reconstructed image.
2. The diffusion model-based event camera image reconstruction method according to claim 1, wherein S2 specifically comprises the following:
the event points within the interval ΔT = t_k − t_0 are encoded into an event frame by linear interpolation, with the polarity p_i of each event point mapped to the two temporal channels closest to it, where E_k denotes the encoding result of the event points within the ΔT interval, p_i denotes the event point polarity, B denotes the number of voxel grid channels, and t_i denotes the timestamp of the event point.
3. The diffusion model-based event camera image reconstruction method according to claim 1, wherein the inputs of the pre-trained Stable Diffusion model are a time-embedded noise image and text guidance information, and the input of the pre-trained event data feature extractor is an event frame; the pre-trained event data feature extractor is pre-trained on an event-to-image reconstruction task and only its encoder part is used; and the pre-trained Stable Diffusion model and the connection part adopt the mature ControlNet structure.
CN202310986749.7A 2023-08-07 2023-08-07 Event camera image reconstruction method based on diffusion model Pending CN116958192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310986749.7A CN116958192A (en) 2023-08-07 2023-08-07 Event camera image reconstruction method based on diffusion model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310986749.7A CN116958192A (en) 2023-08-07 2023-08-07 Event camera image reconstruction method based on diffusion model

Publications (1)

Publication Number Publication Date
CN116958192A true CN116958192A (en) 2023-10-27

Family

ID=88444462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310986749.7A Pending CN116958192A (en) 2023-08-07 2023-08-07 Event camera image reconstruction method based on diffusion model

Country Status (1)

Country Link
CN (1) CN116958192A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392259A (en) * 2023-12-11 2024-01-12 电子科技大学 Method for reconstructing CT image by dual-view cross-mode
CN117392259B (en) * 2023-12-11 2024-04-16 电子科技大学 Method for reconstructing CT image by dual-view cross-mode

Similar Documents

Publication Publication Date Title
CN109064507B (en) Multi-motion-stream deep convolution network model method for video prediction
US10924755B2 (en) Real time end-to-end learning system for a high frame rate video compressive sensing network
CN110363716B (en) High-quality reconstruction method for generating confrontation network composite degraded image based on conditions
CN113034380B (en) Video space-time super-resolution method and device based on improved deformable convolution correction
CN110634108B (en) Composite degraded network live broadcast video enhancement method based on element-cycle consistency confrontation network
CN112418409B (en) Improved convolution long-short-term memory network space-time sequence prediction method by using attention mechanism
CN109151474A (en) A method of generating new video frame
CN110933429B (en) Video compression sensing and reconstruction method and device based on deep neural network
CN110135386B (en) Human body action recognition method and system based on deep learning
CN116958192A (en) Event camera image reconstruction method based on diffusion model
CN109949217A (en) Video super-resolution method for reconstructing based on residual error study and implicit motion compensation
CN111798370A (en) Manifold constraint-based event camera image reconstruction method and system
CN116740223A (en) Method for generating image based on text
CN113379606B (en) Face super-resolution method based on pre-training generation model
Jiang et al. Event-based low-illumination image enhancement
US20230254230A1 (en) Processing a time-varying signal
CN111784583A (en) Cyclic random super-resolution generation countermeasure network for precipitation graph
CN116309213A (en) High-real-time multi-source image fusion method based on generation countermeasure network
Ercan et al. HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks
Kong et al. Progressive motion context refine network for efficient video frame interpolation
CN116095183A (en) Data compression method and related equipment
CN111861877A (en) Method and apparatus for video hyper-resolution
CN112527860B (en) Method for improving typhoon track prediction
Chang et al. Stip: A spatiotemporal information-preserving and perception-augmented model for high-resolution video prediction
CN117097876B (en) Event camera image reconstruction method based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination