CN115083571A - Pathological section processing method, computer device and storage medium - Google Patents

Info

Publication number
CN115083571A
CN115083571A (application CN202210702933.XA)
Authority
CN
China
Prior art keywords: image, images, pathological section, digitized, feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210702933.XA
Other languages
Chinese (zh)
Inventor
淳秋坪
石峰
周翔
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202210702933.XA
Publication of CN115083571A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Abstract

The present application relates to a pathological section processing method, a computer device, and a storage medium. The method comprises the following steps: acquiring images of the pathological sections according to the acquisition parameters to obtain a plurality of corresponding digital images; naming the digitized images according to an acquisition sequence; determining the position of the digitized image according to the name of the digitized image; splicing the digital images according to the positions of the digital images to obtain corresponding pathological section images; classifying the pathological section images by using a preset neural network to obtain a classification result; the neural network is obtained by training in advance according to the pathological section images marked with pathological tissues. By adopting the method, the working efficiency can be improved.

Description

Pathological section processing method, computer device and storage medium
This application is a divisional application of the Chinese invention patent application filed on August 28, 2019, with application number 201910802010X, entitled "Pathological section processing method, computer device and storage medium".
Technical Field
The present application relates to the field of computer technologies, and in particular, to a pathological image aided analysis method, system, computer device, and storage medium.
Background
With the advent of the Industry 4.0 era, mechanical automation is advancing rapidly across industries, and the field of medical imaging is no exception. Microscopy is the gold standard in the medical imaging field and has long been a focus of research and development in the medical industry. In the conventional slide-reading workflow, however, a user places the stained pathological section on the stage of a microscope and searches and observes it by switching between objective lenses of different magnifications before giving a result. For example, the user may need to locate the tissue region with a 10x objective, search for the region of interest within that tissue region with a 20x objective, and then observe detailed features of cell morphology in the region of interest with a 40x objective, which makes the reading time excessively long. Moreover, to prevent reading errors, several users are usually required to read the same pathological section until a consistent result is reached, which multiplies the reading time and reduces work efficiency.
Disclosure of Invention
In view of the above, it is necessary to provide a pathological section processing method, a computer device, and a storage medium capable of improving work efficiency.
A pathological section processing method, the method comprising:
acquiring images of the pathological sections according to the acquisition parameters to obtain a plurality of corresponding digital images;
naming the digitized images according to an acquisition sequence;
determining the position of the digitized image according to the name of the digitized image;
splicing the digitized images according to the positions of the digitized images to obtain corresponding pathological section images;
classifying the pathological section images by using a preset neural network to obtain a classification result; the neural network is obtained by training in advance according to the pathological section images marked with pathological tissues.
In one embodiment, the acquisition parameters include at least one of acquisition mode, acquisition size and moving step length;
the method for acquiring the pathological section according to the acquisition parameters to obtain the corresponding digital image comprises the following steps:
and traversing and collecting the pathological section according to at least one of the collection mode, the collection size and the moving step length to obtain a corresponding digital image.
In one embodiment, the acquisition mode includes one or both of an S-mode and a Z-mode.
In one embodiment, the stitching the digitized images according to the positions of the digitized images to obtain corresponding pathological section images includes:
arranging the digitized images according to the positions of the digitized images to obtain images to be spliced;
and taking the middle row image and the middle column image of the image to be spliced as a splicing boundary, and splicing the digital images in the image to be spliced according to the splicing boundary to obtain a pathological section image corresponding to the pathological section.
In one embodiment, the step of splicing the digitized images in the images to be spliced according to the splicing boundary by using the middle row image and the middle column image of the images to be spliced as the splicing boundary to obtain a pathological section image corresponding to the pathological section includes:
splicing the digitized images in the middle row images and the middle column images to form a cross-shaped image;
dividing the images to be spliced according to the boundaries of the cross-shaped images to obtain four splicing areas;
and respectively splicing the digital images in the splicing areas to obtain pathological section images corresponding to the pathological sections.
In one embodiment, the stitching the digitized images in the respective stitching regions to obtain a pathological section image corresponding to the pathological section includes:
determining a repetition region in the adjacent digitized images in the stitching region;
extracting feature points of the repeated region and feature descriptors corresponding to the feature points;
matching the feature points of the adjacent digitized images according to the feature descriptors to determine matching points;
and splicing the adjacent digital images based on the matching points, wherein the spliced image is a pathological section image corresponding to the pathological section.
In one embodiment, before the stitching the digitized images according to the positions of the digitized images to obtain corresponding pathological section images, the method further includes:
and carrying out brightness correction on the digitized image to obtain the digitized image after the brightness correction.
A computer device comprising a memory storing a computer program and a processor implementing the steps of any of the pathological section processing methods described above when the computer program is executed.
A computer-readable storage medium, on which a computer program is stored, which computer program, when executed by a processor, carries out the steps of the pathological section processing method of any one of the above.
According to the pathological section processing method, the computer equipment and the storage medium, the pathological section is subjected to image acquisition according to the received acquisition parameters, and a plurality of corresponding digital images of the pathological section are obtained, so that the digital images corresponding to the pathological section are automatically obtained. And naming the digitized images according to the acquired sequence, determining the positions according to the names of the digitized images, and splicing the digitized images to obtain corresponding pathological section images. Then, the pathological section images are classified based on the neural network to obtain a classification result, so that the judgment of a user can be assisted. According to the method, the user can realize image acquisition and analysis of the pathological section by inputting the acquisition parameters, and the user does not need to manually operate the microscope to read the pathological section, so that the working efficiency is improved.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of a pathological section treatment method;
FIG. 2 is a schematic flow chart of a method for pathological section processing according to one embodiment;
FIG. 3 is a schematic flowchart of the step of stitching the digitized images according to their positions to obtain corresponding pathological section images according to an embodiment;
FIG. 4 is a schematic illustration of images to be stitched in one embodiment;
FIG. 5 is a schematic illustration of the manner of stitching in one embodiment;
FIG. 6 is a schematic flowchart illustrating a step of stitching the digitized images in the stitching region to obtain a pathological section image corresponding to a pathological section in one embodiment;
FIG. 7 is a schematic diagram of a neural network in one embodiment;
fig. 8 is a block diagram showing the structure of a pathological section processing apparatus according to an embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The pathological section processing method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the microscope 104 via a network. The terminal 102 controls the microscope 104 to acquire images of pathological sections through the acquisition parameters, so as to obtain a plurality of corresponding digital images. The terminal 102 names the digital images according to the acquisition sequence; the terminal 102 determines the position of the digitized image according to the name of the digitized image; splicing the digital images according to the positions of the digital images to obtain corresponding pathological section images; the terminal 102 classifies the pathological section images by using a preset neural network to obtain a classification result; the neural network is obtained by training in advance according to the pathological section images marked with pathological tissues. The terminal 102 may be, but is not limited to, various personal computers, laptops, smartphones, tablets and portable wearable devices, and the microscope 104 is an automatic microscope, such as a micro-motion platform microscope. It is to be understood that the microscope 104 is an automated scanning capable microscope.
In one embodiment, as shown in fig. 2, a pathological section processing method is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
and 202, carrying out image acquisition on the pathological section according to the acquisition parameters to obtain a plurality of corresponding digital images.
Wherein the pathological section is one of pathological specimens, and is prepared by taking pathological tissues with a certain size and using a histopathology method. The acquisition parameters are information indicating how to perform image acquisition on the pathological section. The image acquisition refers to a method for acquiring a digitized image, for example, a digitized image corresponding to a pathological section is acquired by scanning and shooting the pathological section.
Specifically, when the prepared pathological section needs to be analyzed, the user configures the acquisition parameters for image acquisition through a terminal connected with the microscope and issues an acquisition instruction to the microscope through the terminal. After receiving the user's acquisition instruction, the terminal responds to it and controls the microscope to acquire images of the pathological section according to the configured acquisition parameters, obtaining a plurality of corresponding digitized images. For example, the user first places the pathological section on the stage of the microscope, which can be understood as a stage fitted with a micro-motion platform. The micro-motion platform is a device that can move in the X and Y directions; motors are mounted along both axes and drive the platform's movement. During image acquisition, the terminal moves the micro-motion platform according to the acquisition parameters so that the region of the pathological section to be acquired falls within the shooting range of the camera, and controls the camera installed in the microscope to photograph that region, thereby acquiring the corresponding digitized image.
In one embodiment, the acquisition parameters include at least one or more of acquisition mode, acquisition size, and movement step size. Acquiring images of the pathological sections according to the acquisition parameters to obtain a plurality of corresponding digital images specifically comprises: and traversing and collecting the pathological sections according to the collection mode, the collection size and the moving step length to obtain a corresponding digital image.
The acquisition mode refers to the way the micro-motion platform is controlled to move, and includes but is not limited to one or both of an S-mode and a Z-mode; in other words, the acquisition mode determines the moving direction of the micro-motion platform. The acquisition size refers to the size of the acquired region, i.e. the acquisition range: the size of the slice region that falls within the shooting range each time the pathological section is captured. The moving step length is the distance the micro-motion platform moves at a time; different step lengths determine the size of the repeat region between vertically and horizontally adjacent images. For example, the acquisition size includes a length and a width. If the moving step length is smaller than the length and width, the acquired regions overlap, so any two adjacent images naturally share a repeat region. If the moving step length equals the length and width, no overlap is formed, which makes it difficult to stitch the images into a complete picture; if the moving step length exceeds the length and width, acquisition blind areas appear, regions are missed, and the image information is incomplete.
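The relationship between moving step length and acquisition size described above can be sketched as follows. This is a minimal Python illustration, not part of the patent; the pixel values are hypothetical:

```python
def overlap_px(tile_size: int, step: int) -> int:
    """Overlap (in pixels) between two adjacent tiles captured `step`
    pixels apart. Positive: usable repeat region; zero: tiles only
    touch; negative: a blind gap is left between captures."""
    return tile_size - step

# A 1024-px-wide tile advanced by 900 px leaves a 124-px repeat strip.
assert overlap_px(1024, 900) == 124
# A step equal to the tile size leaves no repeat region to match features in.
assert overlap_px(1024, 1024) == 0
# A step larger than the tile size produces an acquisition blind area.
assert overlap_px(1024, 1100) == -76
```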
Specifically, when the pathological section is subjected to image acquisition according to the acquisition parameters, the moving direction and the moving distance of the micro-motion platform are controlled according to the acquisition mode and the moving step length in the acquisition parameters, and the pathological section is acquired according to the acquisition size. The pathological section is collected once when the micro-motion platform moves once, until each area of the pathological section is collected, the traversal collection is completed, and a plurality of corresponding digital images are obtained.
And step S204, naming the digitized images according to the acquisition sequence.
The acquisition order refers to the order in which the digitized images are acquired, including but not limited to top to bottom, left to right, bottom to top, and right to left. The acquisition order and the acquisition mode together determine how the micro-motion platform moves during image acquisition. For example, with an S-mode and an acquisition order of right to left and top to bottom, the micro-motion platform traces a forward "S"; conversely, with an order of bottom to top and left to right, it traces an inverted "S". That is, the pathological section is acquired row by row to obtain the corresponding digitized images. Naming the digitized images according to the acquisition order can therefore be understood as naming them according to the row and column in which each digitized image is located; for example, a digitized image may be named "row x, column y".
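The serpentine (S-mode) traversal and the row/column naming it implies can be sketched as follows. This is an illustrative Python sketch under the assumption of an "r{row}_c{col}" name format, which stands in for the patent's "row x, column y" naming:

```python
def s_mode_names(rows: int, cols: int) -> list:
    """Return tile names in serpentine (S-mode) acquisition order:
    even-numbered rows scanned left to right, odd rows right to left.
    Each tile is named by the grid cell it covers, not by capture order,
    so its position can later be recovered from the name alone."""
    names = []
    for r in range(rows):
        col_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in col_order:
            names.append(f"r{r}_c{c}")
    return names

# 2 x 3 grid: row 0 left-to-right, then row 1 right-to-left.
assert s_mode_names(2, 3) == ["r0_c0", "r0_c1", "r0_c2",
                              "r1_c2", "r1_c1", "r1_c0"]
```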
In step S206, the position of the digitized image is determined according to the name of the digitized image.
And S208, splicing the digital images according to the positions of the digital images to obtain corresponding pathological section images.
The stitching refers to stitching two or more images into one image, and in this embodiment, all the acquired digitized images are stitched into one complete image. Since the digitized image represents an image of a certain area in a pathological section. Therefore, when all the digitized images are spliced into a complete image, the complete image is the digitized image corresponding to the pathological section, i.e. the pathological section image.
Specifically, after the position of the digitized images is determined according to the names of the digitized images, the digitized images are arranged according to the position of each digitized image, and adjacent digitized images are spliced according to the arrangement sequence. And when all the digitized images are spliced with the adjacent digitized images, obtaining the corresponding pathological section images. Because the digitized images are named according to the acquisition sequence after being acquired, the adjacent relation among the digitized images can be determined and the correct splicing sequence can be obtained through name arrangement, so that a complete image can be accurately spliced.
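Recovering each image's position from its name and arranging the images for stitching can be sketched as follows. The "r{row}_c{col}" name format is a hypothetical stand-in for the patent's "x rows and x columns" naming:

```python
import re

def arrange(names: list) -> list:
    """Recover each tile's grid position from its 'r{row}_c{col}' name
    and lay the tiles out row by row, ready for stitching."""
    pos = {}
    for name in names:
        m = re.fullmatch(r"r(\d+)_c(\d+)", name)
        pos[(int(m.group(1)), int(m.group(2)))] = name
    n_rows = max(r for r, _ in pos) + 1
    n_cols = max(c for _, c in pos) + 1
    return [[pos[(r, c)] for c in range(n_cols)] for r in range(n_rows)]

# Names may arrive in arbitrary (e.g. serpentine acquisition) order;
# the arrangement depends only on the names, so order does not matter.
grid = arrange(["r1_c0", "r0_c1", "r0_c0", "r1_c1"])
assert grid == [["r0_c0", "r0_c1"], ["r1_c0", "r1_c1"]]
```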
Step S210, classifying the pathological section images by using a preset neural network to obtain a classification result; the neural network is obtained by training in advance according to the pathological section images marked with pathological tissues.
The neural network is an algorithmic mathematical model simulating animal neural network behavior characteristics and performing distributed parallel information processing. In the embodiment, the neural network is obtained by training in advance according to the pathological section image labeled with pathological tissues.
Specifically, when a pathological section image is obtained, the pre-trained neural network is called. The pathological section image is input into the neural network, which performs operations such as convolution and pooling on it, determines the category of the pathological tissue in the image, and outputs a classification result, thereby assisting the user in further analyzing the pathological tissue in the pathological section. The neural network may be any one or more of networks such as ResNet (Residual Neural Network), VGG (Visual Geometry Group network), GoogLeNet (Google Inception network), DenseNet (Densely Connected Convolutional Network), and the like.
According to the pathological section processing method, the image acquisition is carried out on the pathological section according to the received acquisition parameters, and the digital images corresponding to a plurality of pathological sections are obtained, so that the digital images corresponding to the pathological sections are automatically obtained. And naming the digitized images according to the acquired sequence, determining the positions according to the names of the digitized images, and splicing the digitized images to obtain corresponding pathological section images. Then, the pathological section images are classified based on the neural network to obtain a classification result, so that the judgment of a user can be assisted. According to the method, the user can realize image acquisition and analysis of the pathological section by inputting the acquisition parameters, and the user does not need to manually operate the microscope to read the pathological section, so that the working efficiency is improved.
In one embodiment, as shown in fig. 3, stitching the digitized images according to their positions to obtain corresponding pathological section images includes the following steps:
and S302, arranging the digitized images according to the positions of the digitized images to obtain images to be spliced.
As shown in fig. 4, a schematic diagram of images to be stitched is provided. Specifically, the position of each digitized image is determined according to its name, and the digitized images are then arranged according to these positions to form an image containing multiple rows and columns of digitized images; this arrangement is the image to be stitched. For example, a digitized image named "row 1, column 1" is placed at the first row and first column of the image to be stitched.
And S304, splicing the digital images in the images to be spliced according to the splicing boundary by taking the middle row images and the middle column images of the images to be spliced as the splicing boundary to obtain pathological section images corresponding to pathological sections.
Wherein the middle row is a row between the first row and the last row, excluding the first row and the last row. The middle line image then refers to the digitized image in that line. The middle column is a column between the first column and the last column, excluding the first column and the last column. The middle column image then refers to the digitized image in that column.
Specifically, after the position of the digitized image is determined based on the name of the digitized image, the digitized images are arranged based on the position of the digitized image, and then any row except the first row and the last row is selected as the intermediate row, and any column except the first column and the last column is selected as the intermediate column. And taking the digitized images in the middle rows and the digitized images in the middle columns as splicing boundaries of image splicing, and splicing the digitized images in the images to be spliced according to the splicing boundaries. It is understood that the splice is initiated with the splice boundary as the starting splice point. And when all the digital images in the images to be spliced are spliced, the obtained image is the pathological section image corresponding to the pathological section.
In this embodiment, the position of each digitized image is determined from its name, and the digitized images are arranged into the image to be stitched before stitching. This ensures that the digitized images are connected in the correct order, prevents stitching errors, and improves stitching accuracy.
In one embodiment, the step of splicing the digital images in the images to be spliced according to the splicing boundary by using the middle row image and the middle column image of the images to be spliced as the splicing boundary to obtain the pathological section image corresponding to the pathological section specifically includes: splicing the digital images in the middle row images and the middle column images to form a cross-shaped image; dividing images to be spliced according to the boundaries of the cross-shaped images to obtain four spliced areas; and respectively splicing the digital images in the splicing areas to obtain pathological section images corresponding to the pathological sections.
The cross-shaped image is an image with a shape similar to a cross. Since the rows and the columns are in a vertical crossing relationship, in this embodiment, images obtained by stitching images in any one row and any one column are all cross-shaped images.
Specifically, as shown in fig. 5, a schematic diagram of the splicing manner is provided. Referring to fig. 5, after the middle row (i) and the middle column (ii) are determined, the digitized images in the middle row (i) and the middle column (ii) are first stitched to obtain a cross-shaped image composed of row (i) and column (ii). When stitching the digitized images in the middle row (i) and the middle column (ii), the repeat region between adjacent digitized images is determined, matching points suitable for stitching are determined from the feature points in the repeat region and their corresponding feature descriptors, and the digitized images are then stitched according to the determined matching points. Next, taking the middle row (i) and the middle column (ii) as dividing lines, the whole image to be stitched is divided into four stitching regions, namely region (iii), region (iv), region (v), and region (vi) in fig. 5. After the four stitching regions divided by the cross-shaped image are obtained, the images in each stitching region are stitched independently. Stitching may start from the cross-shaped boundary included in each region, or from any pair of adjacent digitized images within the region. When the stitching of the digitized images in all stitching regions is completed, a complete pathological section image is obtained.
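The division of the tile grid into four regions around the cross can be sketched as follows. This Python sketch only models the index ranges of the regions; whether the cross row/column is shared by the adjacent regions as a starting boundary is a design choice made here for illustration:

```python
def split_quadrants(rows: int, cols: int, mid_r: int, mid_c: int) -> list:
    """Divide a rows x cols tile grid into four stitching regions,
    using the chosen middle row/column as dividing lines. Each region
    keeps the cross row/column as its starting stitching boundary."""
    # The middle row/column must exclude the first and last row/column.
    assert 0 < mid_r < rows - 1 and 0 < mid_c < cols - 1
    top, bottom = range(0, mid_r + 1), range(mid_r, rows)
    left, right = range(0, mid_c + 1), range(mid_c, cols)
    return [(top, left), (top, right), (bottom, left), (bottom, right)]

quads = split_quadrants(5, 5, 2, 2)
assert len(quads) == 4
# Every region contains the cross (row 2 / column 2) as its boundary.
assert all(2 in rr and 2 in cc for rr, cc in quads)
```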
In this embodiment, the stitching work is completed by stitching the four divided regions independently, which avoids inaccurate stitching caused by blank or blurred regions that frequently appear at image boundaries, thereby improving stitching accuracy.
In one embodiment, as shown in fig. 6, stitching the digitized images in the stitching region to obtain a pathological section image corresponding to a pathological section includes the following steps:
step S602, determining a repeat region in the adjacent digitized image in the stitching region.
The repeated area refers to the same area between adjacent images, and the size of the repeated area is determined by the moving step length of the acquisition parameters.
Specifically, when image stitching is performed, the overlapped area can be obtained from the adjacent digitized images through the moving step of the acquisition parameter. For example, if two digitized images are in an up-down adjacent relationship, the overlap region is obtained from the lower boundary of one of the digitized images according to the moving step length, and the overlap region is obtained from the upper boundary of the other digitized image. Or, the intersection processing is carried out on two adjacent digital images to obtain the repeated area.
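Extracting the repeat region from the moving step length can be sketched as follows, for the vertically adjacent case described above. This is an illustrative Python sketch; images are modeled as lists of pixel rows, and the sizes are hypothetical:

```python
def repeat_regions(img_a: list, img_b: list, tile_h: int, step: int):
    """For two vertically adjacent tiles (img_a directly above img_b),
    return the row strips expected to depict the same slide area: the
    bottom (tile_h - step) rows of img_a and the top rows of img_b."""
    ov = tile_h - step
    assert ov > 0, "a positive repeat region requires step < tile height"
    return img_a[-ov:], img_b[:ov]

# Toy 4-row 'images' as lists of rows; a step of 3 leaves 1 shared row.
img_a = [[0, 0], [1, 1], [2, 2], [9, 9]]
img_b = [[9, 9], [5, 5], [6, 6], [7, 7]]
strip_a, strip_b = repeat_regions(img_a, img_b, tile_h=4, step=3)
assert strip_a == strip_b == [[9, 9]]
```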
Step S604, feature points of the overlapping area and feature descriptors corresponding to the feature points are extracted.
The feature points and the feature descriptors are in one-to-one correspondence, and the number of the feature points is the same as that of the feature descriptors. The feature points may also be called key points, and the feature descriptors are parameters for describing the feature points.
Specifically, after the repeat regions of the adjacent digitized images are obtained, the feature points and feature descriptors in those regions are extracted using a preset feature extraction algorithm. It should be understood that, since the repeat region exists on both adjacent digitized images, the feature points and feature descriptors are extracted from the respective repeat regions of both images. The feature extraction algorithm includes, but is not limited to, the SURF, SIFT, Harris corner, ORB, and FAST algorithms.
And step S606, matching the feature points of the adjacent digitized images according to the feature descriptors, and determining the matching points.
The feature descriptors are parameters for describing the feature points, that is, the feature descriptors of the feature points in the respective repetition regions of two adjacent images can be used to determine the same feature points in the respective repetition regions in a matching manner, and the same feature points are the matching points. It is understood that the matching points include two feature points, i.e., a pair of feature points, which are respectively from two adjacent images.
Specifically, after the feature points and the corresponding feature descriptors of the respective repetition regions of two adjacent digital images are obtained, the distance between the feature descriptors in the repetition regions of two adjacent digital images may be calculated by using a hamming distance algorithm or the like, so as to obtain distance values between a plurality of feature descriptor pairs. And comparing the calculated distance values with a preset distance threshold, and removing the feature descriptor pair corresponding to the distance and the corresponding feature point when the calculated distance value is greater than the preset distance threshold. And when the calculated distance value is smaller than or equal to the preset distance threshold value, reserving the feature descriptor pair corresponding to the distance and the corresponding feature point pair, and finally obtaining a plurality of reserved feature point pairs and feature descriptor pairs, wherein the reserved feature point pairs are matched points obtained through matching.
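The Hamming-distance matching with a threshold described above can be sketched as follows. This is a minimal pure-Python illustration assuming binary descriptors packed as integers (as with ORB-style descriptors); the threshold value is hypothetical:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match(desc_a: list, desc_b: list, thresh: int) -> list:
    """For each descriptor in desc_a, find its nearest neighbour in
    desc_b; keep the (i, j) pair only if the distance is within the
    preset threshold, otherwise discard it as an unreliable match."""
    pairs = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= thresh:
            pairs.append((i, j))
    return pairs

# Both descriptors in desc_a are within distance 1 of desc_b[0].
assert match([0b1010, 0b1111], [0b1011, 0b0000], thresh=1) == [(0, 0), (1, 0)]
```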
In addition, if more accurate matching points are needed, the matching points obtained above can be matched again using a preset matching algorithm. The preset matching algorithm includes, but is not limited to, the RANSAC (random sample consensus) algorithm, the KNN (K-nearest neighbor) matching algorithm, the BFMatcher (brute-force matching) algorithm, and the like.
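The threshold-based matching described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the binary descriptor values and the distance threshold are made up, and a real pipeline would obtain descriptors from an extractor such as ORB.

```python
import numpy as np

def match_by_hamming(desc_a, desc_b, max_dist):
    """Match binary descriptors (uint8 rows) from two overlap regions.

    For each descriptor in desc_a, find the nearest descriptor in desc_b by
    Hamming distance; keep the pair only if the distance is <= max_dist.
    Returns a list of (index_in_a, index_in_b, distance) tuples.
    """
    matches = []
    for i, da in enumerate(desc_a):
        # Hamming distance = number of differing bits, via XOR + bit count
        dists = [int(np.unpackbits(da ^ db).sum()) for db in desc_b]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:          # retain the pair; else discard it
            matches.append((i, j, dists[j]))
    return matches

# Toy 2-byte descriptors for the two repeated regions (hypothetical values)
a = np.array([[0b10110000, 0b00001111],
              [0b11111111, 0b00000000]], dtype=np.uint8)
b = np.array([[0b10110001, 0b00001111],                  # 1 bit from a[0]
              [0b00000000, 0b11111111]], dtype=np.uint8)  # 16 bits from a[1]

print(match_by_hamming(a, b, max_dist=4))  # only the a[0]<->b[0] pair survives
```

The pairs this returns correspond to the retained feature point pairs in the text; a second, more robust pass (e.g. RANSAC) could then prune outliers among them.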
And step S608, splicing the adjacent digital images based on the matching points, wherein the spliced image is a pathological section image corresponding to the pathological section.
Specifically, when the feature points and feature descriptors are extracted using the preset feature extraction algorithm, the coordinates of each feature point, i.e., coordinates in the image coordinate system, are also obtained, so each matching point likewise has corresponding coordinates. After the relative displacement of each matching point is calculated from these coordinates, the relative displacement corresponding to the repeated region of the two adjacent digitized images is obtained from the relative displacements of the matching points and their corresponding weights. This value is taken as the target relative displacement between the two adjacent digitized images, and the two digitized images are stitched using it.
The relative displacement of a matching point can be obtained by subtracting the coordinates of its two feature points and taking the absolute value, which gives the relative displacement between the two feature points, i.e., the relative displacement of the matching point. The relative displacements of the other matching points can be calculated in the same way. The weight can be calculated from the distance between the feature descriptors in the repeated region; the calculation formula is as follows:
$$ w_i = \frac{1/d_i}{\sum_{j=1}^{n} 1/d_j} $$
where d_i represents the distance between the feature descriptors corresponding to the ith matching point, w_i represents the weight corresponding to the ith matching point pair, and n is the number of matching points. The weight corresponding to each matching point and the relative displacement between the two feature points of that matching point are then combined by weighted summation to obtain the relative displacement corresponding to the repeated region, which is used as the target relative displacement between the two adjacent digitized images.
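The weighted summation can be sketched as below. This assumes normalized inverse-distance weights for w_i (one plausible form consistent with the description that weights are computed from descriptor distances); the coordinates and distances are made-up illustrative values.

```python
import numpy as np

def region_displacement(pts_ref, pts_mov, desc_dists):
    """Weighted relative displacement of an overlap region.

    pts_ref, pts_mov : (n, 2) coordinates of the matched feature points
                       in the reference and moving images, respectively.
    desc_dists       : (n,) descriptor distances d_i; smaller = better match.
    Weights w_i = (1/d_i) / sum_j(1/d_j), so closer descriptor pairs
    contribute more, and the weights sum to 1.
    """
    pts_ref = np.asarray(pts_ref, float)
    pts_mov = np.asarray(pts_mov, float)
    d = np.asarray(desc_dists, float)
    disp = np.abs(pts_ref - pts_mov)        # per-match |dx|, |dy|
    w = (1.0 / d) / np.sum(1.0 / d)         # normalized inverse-distance weights
    return tuple(w @ disp)                  # weighted sum -> (dx, dy)

# Three hypothetical matches; the first (d=1) dominates the estimate
dx, dy = region_displacement(
    pts_ref=[(100, 40), (130, 60), (90, 55)],
    pts_mov=[(10, 38), (42, 59), (2, 52)],
    desc_dists=[1, 2, 2],
)
print(round(dx, 1), round(dy, 1))  # 89.0 2.0
```

The resulting (dx, dy) pair plays the role of the target horizontal and vertical relative displacement between the two adjacent digitized images.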
The target relative displacement comprises a target vertical relative displacement and a target horizontal relative displacement. When two adjacent digitized images are stitched, one of them is determined as the reference image and the other as the moving image. Then, taking the reference image as the standard, the moving image is translated in the vertical direction by the target vertical relative displacement and in the horizontal direction by the target horizontal relative displacement, thereby stitching the two adjacent digitized images. It should be understood that the moving image may be translated first in the vertical direction and then in the horizontal direction, or first in the horizontal direction and then in the vertical direction.
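The translation-based stitch can be sketched as pasting both tiles onto a common canvas, with the moving image offset by the target displacements. The tile sizes and offsets here are arbitrary illustrative values, and overlap pixels are simply overwritten by the moving image rather than blended.

```python
import numpy as np

def stitch_pair(ref, mov, dy, dx):
    """Place `ref` at the origin and `mov` shifted by (dy, dx) on one canvas."""
    h = max(ref.shape[0], dy + mov.shape[0])
    w = max(ref.shape[1], dx + mov.shape[1])
    canvas = np.zeros((h, w), dtype=ref.dtype)
    canvas[:ref.shape[0], :ref.shape[1]] = ref
    # the moving image overwrites the overlap region
    canvas[dy:dy + mov.shape[0], dx:dx + mov.shape[1]] = mov
    return canvas

ref = np.ones((4, 6), dtype=np.uint8)       # reference tile, all 1s
mov = np.full((4, 6), 2, dtype=np.uint8)    # moving tile, all 2s
out = stitch_pair(ref, mov, dy=0, dx=4)     # horizontal stitch, 2-column overlap
print(out.shape)        # (4, 10)
print(out[0].tolist())  # [1, 1, 1, 1, 2, 2, 2, 2, 2, 2]
```

Applying the vertical and horizontal shifts in either order produces the same canvas, matching the remark above.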
In this embodiment, secondary matching is performed using the feature points and feature descriptors, so the matching degree of the obtained matching points is higher. Consequently, when the relative displacements of the matching points are used to calculate the relative displacement of the repeated region, the resulting displacement, and hence the relative displacement between two adjacent digitized images, is more accurate; the images can therefore be stitched more precisely, improving the quality of image stitching.
In one embodiment, illumination non-uniformity arises because image acquisition is typically affected by factors such as ambient light and the microscope beam. Once the illumination is non-uniform, the local brightness of the image becomes inconsistent. Although the difference may not be noticeable in a single digitized image, when the digitized images are stitched into a complete image, the complete image exhibits a grid-like rasterization artifact, which degrades the quality of the whole image. Therefore, after the digitized images are obtained, brightness correction is performed on them, so that the brightness-corrected digitized images are stitched and the rasterization artifact is avoided.
Performing brightness correction on the digitized images to obtain brightness-corrected digitized images specifically includes: respectively acquiring the brightness channel image in each digitized image; carrying out mean value calculation on the image matrices corresponding to the brightness channel images to obtain a mean image; normalizing the mean image to generate a mask image; and dividing the image matrix corresponding to the brightness channel image of each digitized image by the image matrix corresponding to the mask image, the image corresponding to the resulting image matrix being the brightness-corrected digitized image.
Specifically, the luminance channel image is the image of the luminance channel; taking color spaces as examples, it can be understood as the V-channel image in the HSV color space or the Y-channel image in the YIQ color space. In terms of the image matrix, this amounts to extracting the matrix dimension corresponding to luminance. After the luminance channel images are separated, the image matrices of all the luminance channel images are accumulated and then averaged; the resulting matrix is the mean image. The mean image is then normalized to obtain the mask image. A mask image is an image in which each pixel value lies between 0 and 1, so normalization can be understood as rescaling each pixel of the mean image from the range 0-255 to the range 0-1. The separation of the luminance channel and the normalization can be performed with related image processing tools such as MATLAB or OpenCV. Finally, the luminance channel image of each digitized image is divided element-wise by the mask image; the divided image is the brightness-corrected image. That is, the image matrix corresponding to the luminance channel image of the digitized image is divided by the image matrix corresponding to the mask image, and the image corresponding to the resulting matrix is the brightness-corrected image.
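The brightness-correction steps above can be sketched with plain NumPy. This is a simplified illustration on synthetic luminance tiles; a real pipeline would first split the V or Y channel with an image library such as OpenCV, and the clipping and epsilon guard are added here to keep the sketch numerically safe.

```python
import numpy as np

def brightness_correct(lum_images):
    """Correct a set of luminance-channel tiles for uneven illumination.

    1. Average all tiles into a mean image.
    2. Normalize the mean image from [0, 255] to [0, 1] -> mask image.
    3. Divide each tile element-wise by the mask to flatten the field.
    """
    stack = np.stack([np.asarray(im, float) for im in lum_images])
    mean_img = stack.mean(axis=0)
    mask = mean_img / 255.0                  # pixel values in [0, 1]
    mask = np.clip(mask, 1e-6, None)         # guard against division by zero
    return [np.clip(im / mask, 0, 255) for im in stack]

# Two synthetic 1x4 tiles sharing the same left-dark illumination pattern
tiles = [np.array([[64, 128, 192, 255]], float),
         np.array([[32, 64, 96, 128]], float)]
corrected = brightness_correct(tiles)
# the unevenly lit second tile flattens to a nearly constant value
print(corrected[1].round(1).tolist())
```

Because both tiles share the same illumination pattern, dividing by the mask removes it, which is what prevents the grid-like artifact after stitching.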
In this embodiment, brightness correction is performed on the digitized images and the brightness-corrected digitized images are stitched, which prevents rasterization artifacts in the images and thus improves image quality.
In one embodiment, as shown in fig. 7, a schematic structural diagram of a neural network is provided, and the pathological section images are classified based on the neural network shown in fig. 7 to obtain a classification result. Referring to fig. 7, the neural network structure includes four dense connection modules (Dense Blocks), four convolutional layers, four pooling layers, and one activation layer. Specifically, the obtained pathological section image is input into the neural network, and feature extraction is performed through the convolutional layers, dense connection modules, and pooling layers of the network to obtain the feature map output by the third dense connection module and the feature map output by the last dense connection module. The feature map output by the last dense connection module is up-sampled to obtain a new feature map, and the new feature map is fused with the feature map output by the third dense connection module to obtain fused features. The fused features are further pooled, and the classification result is output through the activation layer. The feature fusion includes feature addition and feature merging.
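The fusion step described above (up-sample the deeper feature map, then fuse it with the shallower one) can be illustrated with NumPy. The 2x nearest-neighbor up-sampling and the tiny feature maps are purely illustrative choices, not details taken from the patent's network.

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbor 2x up-sampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fuse(shallow, deep):
    """Up-sample the deep map to the shallow map's size, then fuse both ways."""
    up = upsample2x(deep)
    added = shallow + up                             # feature addition (same C)
    merged = np.concatenate([shallow, up], axis=0)   # feature merging (concat C)
    return added, merged

shallow = np.ones((2, 4, 4))      # stand-in for the third dense block's output
deep = np.full((2, 2, 2), 3.0)    # stand-in for the last dense block's output
added, merged = fuse(shallow, deep)
print(added.shape, merged.shape)  # (2, 4, 4) (4, 4, 4)
print(added[0, 0, 0])             # 1 + 3 = 4.0
```

Addition keeps the channel count unchanged, while merging doubles it; which of the two the network uses at a given point determines the input size of the following pooling layer.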
It should be understood that although the steps in the flowcharts of figs. 2, 3, and 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict order restriction on the performance of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2, 3, and 6 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times; the order of performing these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a pathological section processing apparatus including: an acquisition module 802, a generation module 804, a determination module 806, a splicing module 808, and a classification module 810, wherein:
and the acquisition module 802 is configured to perform image acquisition on the pathological section according to the acquisition parameters to obtain a plurality of corresponding digital images.
A generating module 804, configured to name the digitized images according to the acquisition order.
A determining module 806 for determining a location of the digitized image from the name of the digitized image;
and the splicing module 808 is configured to splice the digitized images according to the positions of the digitized images to obtain corresponding pathological section images.
The classification module 810 is configured to classify the pathological section images by using a preset neural network to obtain a classification result; the neural network is obtained by training in advance according to the pathological section images marked with pathological tissues.
In one embodiment, the acquisition module 802 is further configured to perform traversal acquisition on the pathological section according to at least one of an acquisition mode, an acquisition size, and a moving step size, so as to obtain a corresponding digitized image.
In one embodiment, the splicing module 808 is further configured to arrange the digitized images according to the positions of the digitized images to obtain images to be spliced; and, taking the middle row image and the middle column image of the image to be spliced as a splicing boundary, splice the digitized images in the image to be spliced according to the splicing boundary to obtain a pathological section image corresponding to the pathological section.
In one embodiment, the splicing module 808 is further configured to splice the digitized images in the middle row image and the middle column image to form a cross-shaped image; divide the images to be spliced according to the boundaries of the cross-shaped image to obtain four splicing regions; and splice the digitized images in each splicing region respectively to obtain a pathological section image corresponding to the pathological section.
In one embodiment, the splicing module 808 is further configured to determine repeated regions in adjacent digitized images in the splicing region; extract feature points of the repeated region and feature descriptors corresponding to the feature points; match the feature points of the adjacent digitized images according to the feature descriptors to determine matching points; and splice the adjacent digitized images based on the matching points, the spliced image being the pathological section image corresponding to the pathological section.
In one embodiment, the pathological section processing apparatus further comprises a correction module for performing brightness correction on the digitized images to obtain brightness-corrected digitized images.
In one embodiment, the correction module is further configured to acquire the brightness channel image in each digitized image; carry out mean value calculation on the image matrices corresponding to the brightness channel images to obtain a mean image; normalize the mean image to generate a mask image; and divide the image matrix corresponding to the brightness channel image of each digitized image by the image matrix corresponding to the mask image, the image corresponding to the resulting image matrix being the brightness-corrected digitized image.
For the specific definition of the pathological section processing device, reference may be made to the above definition of the pathological section processing method, which is not described herein again. The modules in the pathological section processing device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a pathological section processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring images of the pathological sections according to the acquisition parameters to obtain a plurality of corresponding digital images;
naming the digitized images according to an acquisition sequence;
determining the position of the digitized image according to the name of the digitized image;
splicing the digitized images according to the positions of the digitized images to obtain corresponding pathological section images;
classifying the pathological section images by using a preset neural network to obtain a classification result; the neural network is obtained by training in advance according to the pathological section images marked with pathological tissues.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and traversing and collecting the pathological section according to at least one of a collection mode, a collection size and a moving step length to obtain a corresponding digital image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: arranging the digitized images according to the positions of the digitized images to obtain images to be spliced; and taking the middle row image and the middle column image of the image to be spliced as a splicing boundary, and splicing the digital images in the image to be spliced according to the splicing boundary to obtain a pathological section image corresponding to the pathological section.
In one embodiment, the processor, when executing the computer program, further performs the steps of: splicing the digital images in the middle row images and the middle column images to form a cross-shaped image; dividing images to be spliced according to the boundaries of the cross-shaped images to obtain four spliced areas; and respectively splicing the digital images in the splicing areas to obtain pathological section images corresponding to the pathological sections.
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining a repeated area in the adjacent digital images in the splicing area; extracting feature points of the repeated region and feature descriptors corresponding to the feature points; matching the feature points of the adjacent digital images according to the feature descriptors to determine matching points; and splicing the adjacent digital images based on the matching points, wherein the spliced images are pathological section images corresponding to pathological sections.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and performing brightness correction on the digital image to obtain a digital image after brightness correction.
In one embodiment, the processor, when executing the computer program, further performs the steps of: respectively acquiring the brightness channel image in each digitized image; carrying out mean value calculation on the image matrices corresponding to the brightness channel images to obtain a mean image; normalizing the mean image to generate a mask image; and dividing the image matrix corresponding to the brightness channel image of each digitized image by the image matrix corresponding to the mask image, the image corresponding to the resulting image matrix being the brightness-corrected digitized image.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring images of the pathological sections according to the acquisition parameters to obtain a plurality of corresponding digital images;
naming the digitized images according to an acquisition sequence;
determining the position of the digitized image according to the name of the digitized image;
splicing the digital images according to the positions of the digital images to obtain corresponding pathological section images;
classifying the pathological section images by using a preset neural network to obtain a classification result; the neural network is obtained by training in advance according to the pathological section images marked with pathological tissues.
In one embodiment, the computer program when executed by the processor further performs the steps of: and traversing and collecting the pathological section according to at least one of a collection mode, a collection size and a moving step length to obtain a corresponding digital image.
In one embodiment, the computer program when executed by the processor further performs the steps of: arranging the digitized images according to the positions of the digitized images to obtain images to be spliced; and taking the middle row image and the middle column image of the image to be spliced as a splicing boundary, and splicing the digital images in the image to be spliced according to the splicing boundary to obtain a pathological section image corresponding to the pathological section.
In one embodiment, the computer program when executed by the processor further performs the steps of: splicing the digital images in the middle row images and the middle column images to form a cross-shaped image; dividing images to be spliced according to the boundaries of the cross-shaped images to obtain four spliced areas; and respectively splicing the digital images in each splicing area to obtain a pathological section image corresponding to the pathological section.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining a repeated area in the adjacent digital images in the splicing area; extracting feature points of the repeated region and feature descriptors corresponding to the feature points; matching the feature points of the adjacent digital images according to the feature descriptors to determine matching points; and splicing the adjacent digital images based on the matching points, wherein the spliced images are pathological section images corresponding to pathological sections.
In one embodiment, the computer program when executed by the processor further performs the steps of: and performing brightness correction on the digital image to obtain a digital image after brightness correction.
In one embodiment, the computer program when executed by the processor further performs the steps of: respectively acquiring the brightness channel image in each digitized image; carrying out mean value calculation on the image matrices corresponding to the brightness channel images to obtain a mean image; normalizing the mean image to generate a mask image; and dividing the image matrix corresponding to the brightness channel image of each digitized image by the image matrix corresponding to the mask image, the image corresponding to the resulting image matrix being the brightness-corrected digitized image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A pathological section processing method is characterized by comprising the following steps:
acquiring images of the pathological sections according to the acquisition parameters to obtain a plurality of corresponding digital images;
determining a location of the digitized image;
splicing the digitized images according to the positions of the digitized images to obtain spliced areas;
determining a repetition region in the adjacent digitized images in the stitching region;
acquiring feature points of the repeated area and feature descriptors corresponding to the feature points;
matching the feature points of the adjacent digital images according to the feature descriptors corresponding to the feature points to determine matching points;
based on the matching points, splicing the adjacent digital images to obtain a pathological section image corresponding to the pathological section;
classifying the pathological section images by using a preset neural network to obtain a classification result; the neural network is obtained by training in advance according to the pathological section image marked with pathological tissues.
2. The method according to claim 1, wherein the matching the feature points of the adjacent digitized images according to the feature descriptors corresponding to the feature points to determine the matching points comprises:
calculating the distance between the feature descriptors in the repeated areas of two adjacent digital images;
comparing the calculated distances with a preset distance threshold;
when the calculated distance is larger than the distance threshold value, removing the feature descriptor pair corresponding to the distance and the corresponding feature point pair;
when the calculated distance is smaller than or equal to the distance threshold value, reserving a feature descriptor pair corresponding to the distance and a corresponding feature point pair;
and obtaining the matching points according to the reserved characteristic point pairs.
3. The method according to claim 1 or 2, wherein the stitching adjacent digitized images based on the matching points to obtain a pathological section image corresponding to the pathological section comprises:
acquiring coordinates of each feature point;
calculating the relative displacement of each matching point by using each coordinate;
calculating to obtain the relative displacement corresponding to the repeated area of two adjacent digital images according to the relative displacement of the matching point and the weight corresponding to the matching point;
and splicing two adjacent digital images according to the relative displacement corresponding to the repeated area, wherein the spliced image is a pathological section image corresponding to the pathological section.
4. The method of claim 1, wherein stitching the digitized images according to their positions to obtain a stitched region comprises:
arranging the digitized images according to the positions of the digitized images to obtain images to be spliced;
taking the middle row image and the middle column image of the image to be spliced as a splicing boundary, and splicing the digital images in the middle row image and the middle column image to form a cross-shaped image;
and dividing the images to be spliced according to the boundaries of the cross-shaped images to obtain the spliced area.
5. The method of claim 1, wherein the acquisition parameters include at least one of acquisition mode, acquisition size, and movement step size;
the acquiring the pathological section image according to the acquisition parameters to obtain the corresponding digital image comprises the following steps:
and traversing and collecting the pathological section according to at least one of the collection mode, the collection size and the moving step length to obtain a corresponding digital image.
6. The method of claim 5, wherein the acquisition mode comprises any one or more of an S-mode and a Z-mode.
7. The method of claim 1, wherein before stitching the digitized images according to their positions to obtain a stitched region, the method further comprises:
respectively acquiring a brightness channel image in each digital image;
carrying out mean value calculation on image matrixes corresponding to the brightness channel images to obtain mean value images;
normalizing the mean value image to generate a mask image;
and dividing the image matrix corresponding to the brightness channel image of each digitized image by the image matrix corresponding to the mask image, the image corresponding to the resulting image matrix being the brightness-corrected digitized image.
8. The method according to claim 1, wherein the obtaining of the feature points of the repeated region and the feature descriptors corresponding to the feature points comprises:
and extracting the feature points of the repeated region in the adjacent digital images in the splicing region and the feature descriptors corresponding to the feature points by using a preset feature extraction algorithm.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202210702933.XA 2019-08-28 2019-08-28 Pathological section processing method, computer device and storage medium Pending CN115083571A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210702933.XA CN115083571A (en) 2019-08-28 2019-08-28 Pathological section processing method, computer device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910802010.XA CN110600106B (en) 2019-08-28 2019-08-28 Pathological section processing method, computer device and storage medium
CN202210702933.XA CN115083571A (en) 2019-08-28 2019-08-28 Pathological section processing method, computer device and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910802010.XA Division CN110600106B (en) 2019-08-28 2019-08-28 Pathological section processing method, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN115083571A true CN115083571A (en) 2022-09-20

Family

ID=68856013

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910802010.XA Active CN110600106B (en) 2019-08-28 2019-08-28 Pathological section processing method, computer device and storage medium
CN202210702933.XA Pending CN115083571A (en) 2019-08-28 2019-08-28 Pathological section processing method, computer device and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910802010.XA Active CN110600106B (en) 2019-08-28 2019-08-28 Pathological section processing method, computer device and storage medium

Country Status (1)

Country Link
CN (2) CN110600106B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260561A (en) * 2020-02-18 2020-06-09 中国科学院光电技术研究所 Rapid multi-graph splicing method for mask defect detection
CN111354444A (en) * 2020-02-28 2020-06-30 上海商汤智能科技有限公司 Pathological section image display method and device, electronic equipment and storage medium
CN113470788B (en) * 2021-07-08 2023-11-24 山东志盈医学科技有限公司 Synchronous browsing method and device for multiple digital slices

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100454078C (en) * 2006-06-22 2009-01-21 北京普利生仪器有限公司 Method for preparing microscopic image of holographic digitalized sliced sheet
CN101571663B (en) * 2009-06-01 2011-05-04 北京航空航天大学 Distributed online regulating method for splicing multiple projectors
CN101751659B (en) * 2009-12-24 2012-07-25 北京优纳科技有限公司 Large-volume rapid image splicing method
JP5609742B2 (en) * 2011-03-31 2014-10-22 カシオ計算機株式会社 Imaging apparatus, image composition method, and program
CN103985254B (en) * 2014-05-29 2016-04-06 四川川大智胜软件股份有限公司 A kind of multi-view point video for large scene traffic monitoring merges and traffic parameter acquisition method
CN105488775A (en) * 2014-10-09 2016-04-13 东北大学 Six-camera around looking-based cylindrical panoramic generation device and method
CN105069743B (en) * 2015-07-28 2018-06-26 中国科学院长春光学精密机械与物理研究所 Detector splices the method for real time image registration
CN106683044B (en) * 2015-11-10 2020-04-28 中国航天科工集团第四研究院指挥自动化技术研发与应用中心 Image splicing method and device of multi-channel optical detection system
CN106408573A (en) * 2016-08-31 2017-02-15 诸暨微因生物科技有限公司 Whole slide digital pathological image processing and analysis method
CN106709898A (en) * 2017-03-13 2017-05-24 微鲸科技有限公司 Image fusing method and device
CN107064019B (en) * 2017-05-18 2019-11-26 西安交通大学 The device and method for acquiring and dividing for dye-free pathological section high spectrum image
CN107316275A (en) * 2017-06-08 2017-11-03 宁波永新光学股份有限公司 A kind of large scale Microscopic Image Mosaicing algorithm of light stream auxiliary
CN109598673A (en) * 2017-09-30 2019-04-09 深圳超多维科技有限公司 Image split-joint method, device, terminal and computer readable storage medium
CN108537730B (en) * 2018-03-27 2021-10-22 宁波江丰生物信息技术有限公司 Image splicing method
CN108898171B (en) * 2018-06-20 2022-07-22 深圳市易成自动驾驶技术有限公司 Image recognition processing method, system and computer readable storage medium
CN110084270A (en) * 2019-03-22 2019-08-02 上海鹰瞳医疗科技有限公司 Pathological section image-recognizing method and equipment

Also Published As

Publication number Publication date
CN110600106B (en) 2022-07-05
CN110600106A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN110738207B (en) Character detection method for fusing character area edge information in character image
US20230419472A1 (en) Defect detection method, device and system
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
CN110245662B (en) Detection model training method and device, computer equipment and storage medium
CN110600106B (en) Pathological section processing method, computer device and storage medium
CN111310841B (en) Medical image classification method, medical image classification device, medical image classification apparatus, medical image classification computer device, and medical image classification storage medium
CN110163193B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110211076B (en) Image stitching method, image stitching equipment and readable storage medium
CN110427970A (en) Image classification method, device, computer equipment and storage medium
CN109961399B (en) Optimal suture line searching method based on image distance transformation
CN110363774B (en) Image segmentation method and device, computer equipment and storage medium
CN114037637B (en) Image data enhancement method and device, computer equipment and storage medium
CN110517186B (en) Method, device, storage medium and computer equipment for eliminating invoice seal
CN110473172B (en) Medical image anatomical centerline determination method, computer device and storage medium
CN111160288A (en) Gesture key point detection method and device, computer equipment and storage medium
CN112884782B (en) Biological object segmentation method, apparatus, computer device, and storage medium
CN115331245B (en) Table structure identification method based on image instance segmentation
CN111832561B (en) Character sequence recognition method, device, equipment and medium based on computer vision
CN114549462A (en) Focus detection method, device, equipment and medium based on visual angle decoupling Transformer model
JP2007025902A (en) Image processor and image processing method
CN110781887A (en) License plate screw detection method and device and computer equipment
CN114549603A (en) Method, system, device and medium for converting labeling coordinate of cytopathology image
CN112464802B (en) Automatic identification method and device for slide sample information and computer equipment
CN111291716A (en) Sperm cell recognition method, device, computer equipment and storage medium
CN109063601A (en) Cheilogramma detection method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination