CN116363055A - Linear detection method based on template classification and explicit linear descriptor - Google Patents


Info

Publication number
CN116363055A
CN116363055A (application CN202310054269.7A)
Authority
CN
China
Prior art keywords
pixel
straight line
template
probability
line
Prior art date
Legal status
Pending
Application number
CN202310054269.7A
Other languages
Chinese (zh)
Inventor
陈馨雨
贾棋
樊鑫
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202310054269.7A priority Critical patent/CN116363055A/en
Publication of CN116363055A publication Critical patent/CN116363055A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/13: Segmentation; edge detection
    • G06V 10/762 and 10/763: Recognition using clustering; non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/764: Recognition using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 2207/10004: Still image; photographic image

Abstract

The invention relates to the technical field of digital image processing, and in particular to a straight line detection method based on template classification and explicit straight line descriptors. The method uses the local linear structure between adjacent pixels to help determine which pixels lie on a line segment. The constructed local linear structure templates capture a robust relationship between adjacent pixels while indicating the direction of the line, and the explicit straight line descriptors are discriminative for extending and merging short line segments. This alleviates the tendency of traditional methods to produce many redundant short segments and erroneous segments. Compared with deep learning methods, the method obtains competitive results using only 1% of the training data, saving computational resources. The proposed method achieves high detection accuracy and good robustness to rotation and noise.

Description

Linear detection method based on template classification and explicit linear descriptor
Technical Field
The invention relates to the technical field of digital image processing, in particular to a straight line detection method based on template classification and explicit straight line descriptors.
Background
Straight lines provide the most basic geometric structure in an image and are critical for further image processing. Straight line detection is an important and challenging fundamental task in computer vision, with wide application in practical scenarios such as 3D reconstruction, scene parsing, and image stitching. These applications require not only accurate detection of line segments but also robustness under special conditions such as noise and rotation.
Based on how they are implemented, existing straight line detection methods fall into two main categories: traditional methods and deep learning based methods. Traditional methods are further divided into straight line detection algorithms based on spatial transformation and those based on image gradients. Each has its own characteristics: spatial transformation based methods can find longer and more continuous line segments but easily produce erroneous long lines, while image gradient based edge detection methods locate the endpoints of a straight line more easily but tend to produce shorter lines. Deep learning based straight line detection methods have attracted wide attention for their remarkable performance, but they generally use discrete features to locate segment endpoints while ignoring the spatial distribution relationship between adjacent pixels; they also require a complete training set and incur high computational cost. Considering that image gradient based edge detection methods offer higher detection efficiency, the invention aims to detect more robust straight lines by improving on this basis.
Disclosure of Invention
The invention provides a straight line detection method based on template classification and explicit straight line descriptors. The local linear structure templates contain linear structures in different directions and represent the spatial distribution relationship of adjacent pixels; they can reflect whether adjacent pixels are collinear. By means of the linear structure templates, the invention exploits the correlation of direction and texture among all pixel points on a straight line to obtain more accurate and stable line segments.
The technical scheme of the invention is as follows:
A straight line detection method based on template classification and explicit straight line descriptors comprises the following steps:
step 100, constructing a local linear structure template through K-means clustering, wherein the template comprises linear structures in different directions; training a random forest classifier based on the linear structure template to predict the probability that the image pixel points belong to each type of template;
step 200, for each image to be detected, calculating the probability of each pixel falling on a straight line by using a trained classifier to obtain a probability map;
step 300, based on the probability map, connecting and dividing the pixel chain according to the difference of adjacent pixels, and fitting an initial line segment by using a least square method;
step 400, constructing a straight line descriptor based on the probability distribution of the pixel points to extend and merge short line segments, and evaluating the consistency of on-line pixels to obtain robust line segments.
Preferably, in step 100, clustering refers to dividing unlabeled input data samples into several classes according to a similarity principle, so that the samples within each class share similar attributes. K-means is a common unsupervised clustering method; it is widely used because its design is simple, it is relatively easy to implement, it converges quickly, and its clustering result does not depend on the input order of the data. Based on the observation that the pixels on a straight line are surrounded by similar texture features, the invention uses K-means to cluster image blocks with similar linear structures, each centered on a pixel point, into one class, obtaining a number of templates with different linear structures that represent straight lines in different directions. For the image block represented by each pixel point, the class it belongs to is known after clustering, and the color gradient features and autocorrelation features of the image block are used as input features to train a classifier that predicts the template class to which each pixel point belongs.
Preferably, in step 200, for a picture to be detected, all pixels must be classified one by one using the trained classification model; the result may be one of the template classes or the background class. When a new picture is input, each pixel point is traversed, the feature vector of the corresponding image block is input into the trained random forest, and the probabilities that the pixel point belongs to each class of linear template, together with the probability of the background, are output. The probabilities over all template classes are summed to obtain the probability that the pixel point lies on a straight line. Finally, a non-maximum suppression algorithm is used to extract the outline of the straight lines, yielding a matrix of the same size as the original image in which the value at each position is the probability that the corresponding pixel of the original image belongs to a straight line, i.e., the probability map.
Preferably, in step 300, because a larger straight line probability value indicates a more salient straight line, the maximum point in the probability map is selected as an anchor point, and pixel points with similar directions are then searched within the eight-neighborhood of the pixel and connected into a chain. Due to the inherent complexity of the image, a pixel chain may contain pixels that meet the minimum direction constraint but are not collinear. If the number of pixel points in a chain exceeds a threshold, its two ends are connected to generate a line segment; the pixel point in the chain farthest from this line is found and its distance to the segment is computed. If this distance is greater than one pixel, the chain is divided into two parts at that pixel point, and the process iterates until all pixel chains are collinear. The resulting series of nearly collinear pixel chains are fitted to initial short line segments using the least squares method.
Preferably, in step 400, in order to obtain more complete and accurate line segments, an explicit straight line descriptor is proposed to characterize a straight line. Step 200 yields the probability that each pixel belongs to each class of templates, and it can be observed that the distributions of the pixels on a given straight line over the N templates are approximately the same, concentrated on certain templates. An N-dimensional 'voter' corresponding to the N template classes is set up; for each pixel point on the straight line, the templates corresponding to its top-n probability values are selected as 'votes'; then the first m templates (m < N) receiving the most votes in the 'voter' are selected; finally, the mean of the probabilities on these m templates over all pixel points on the line is computed to obtain the feature of the straight line.
For the initial short line segments obtained in step 300, the corresponding straight line features are computed by the method of step 400. Adjacent pixel points are sought along the direction of the straight line; if the sum of squared differences between a candidate pixel's probabilities on the templates of the current straight line feature and the descriptor is smaller than a threshold, the pixel is added to the pixel chain and the line is re-fitted, yielding an extended straight line.
The invention has the following beneficial effects: the proposed straight line detection method based on template classification and explicit straight line descriptors uses the local linear structure between adjacent pixels to help determine which pixels lie on a line segment. The constructed local linear structure templates capture a robust relationship between adjacent pixels while indicating the direction of the line, and the explicit straight line descriptors are discriminative for extending and merging short line segments. This alleviates the tendency of traditional methods to produce many redundant short segments and erroneous segments. Compared with deep learning methods, the method obtains competitive results using only 1% of the training data, saving computational resources. The proposed method achieves high detection accuracy and good robustness to rotation and noise.
Drawings
FIG. 1 is a workflow diagram of a method of line detection based on template classification and explicit line descriptors of the present invention;
FIG. 2 is a schematic diagram of a partial linear template;
FIG. 3 is a schematic diagram of extracting image blocks and training a random forest classifier.
Fig. 4 is a schematic diagram of a pixel chain segmentation strategy.
Fig. 5 (a) is a schematic diagram of labeling three line segments on an image, and fig. 5 (b) to 5 (d) are schematic diagrams of probability distributions of pixels on the three line segments over 50 template categories, respectively.
Fig. 6 (a) is a probability distribution of N points over the template class, and fig. 6 (b) is a schematic diagram of a voting strategy of straight line descriptors constructed by N points.
Fig. 7 is a schematic diagram of a segment expansion and merging strategy.
Detailed Description
The following describes the embodiments of the present invention further with reference to the drawings and technical schemes.
As shown in fig. 1, the method for detecting the straight line based on the template classification and the explicit straight line descriptor comprises the following steps:
and 100, constructing a local linear structure template through K-means clustering, wherein the template comprises linear structures in different directions. Based on the linear structure templates, training a random forest classifier is used for predicting the probability that the image pixel points belong to each type of templates.
In this step, for each RGB image, a binary image is manually annotated, and image blocks whose center point lies on an annotated line are extracted. Each pixel point on a straight line corresponds to an image block of fixed size, used to represent various local linear structures. To gain robustness to slight displacements, features are obtained using the Daisy descriptor. The features of the linear templates are then clustered using the K-means method (k = 50 in this embodiment). To visualize each cluster center, the average of the image blocks in the same class is used as the template for that center. As shown in fig. 2, these cluster centers represent the most common local linear structures in real scenes, and the corresponding image blocks are referred to as linear structure templates.
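The template construction described above can be sketched as follows. This is a minimal illustration only: it runs a hand-rolled K-means on flattened pixel patches, whereas the patent clusters Daisy descriptors of the patches; the function names, the 16x16 patch size, and the random toy data are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def kmeans(feats, k, iters=20, seed=0):
    """Minimal K-means on row vectors: returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its members.
        for i in range(k):
            if np.any(labels == i):
                centers[i] = feats[labels == i].mean(axis=0)
    return labels, centers

def build_linear_templates(patches, k=5):
    """Cluster flattened patches; template i is the mean patch of cluster i,
    mirroring the patent's use of class means as visual templates."""
    n, h, w = patches.shape
    labels, _ = kmeans(patches.reshape(n, h * w).astype(float), k)
    templates = np.stack([patches[labels == i].mean(axis=0)
                          if np.any(labels == i) else np.zeros((h, w))
                          for i in range(k)])
    return labels, templates

# Toy usage: 200 random 16x16 "patches" grouped into 5 template classes.
rng = np.random.default_rng(1)
labels, templates = build_linear_templates(rng.random((200, 16, 16)), k=5)
```

In the patent, k = 50 and each patch is centered on an annotated line pixel, so each cluster mean depicts one recurring local linear structure.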
As indicated by the double arrow in fig. 3, corresponding image blocks in the RGB image are sampled for each template class. Because there are 50 linear templates, these image blocks belong to 50 different categories. Pixel points falling on line segments are marked as on-line pixels. To distinguish on-line from non-on-line pixels, image blocks whose center point is not on a line segment are sampled as a non-linear class. For each detected pixel to correspond to a class in the templates, the classifier must be discriminative and able to classify multiple classes accurately. A random forest classifier meets these requirements, and the color gradient features and autocorrelation features of the image blocks are used as its input features.
The specific description is as follows: FIG. 3 is a schematic diagram of extracting image blocks and training a random forest classifier. After clustering, it can be known to which class each binary image block belongs. The category is regarded as a label of the image block, and the corresponding image block characteristics are used as input of a classifier for training.
And 200, calculating the probability of each pixel falling on a straight line by using a trained classifier for each image to be detected, and obtaining a probability map.
For each image to be detected, an image block centered on each pixel point is extracted, and the corresponding features are input into the trained classifier to obtain the probability that each image pixel belongs to each class of templates, i.e., a probability map. The probabilities over the 50 linear template categories are summed to represent the probability that the pixel falls on a line segment. Taking into account differences in sharpness and in the number of lines between images, an adaptive threshold is used to determine the positions of the final line pixel points: a pixel in the probability map is considered an on-line pixel when the sum of its probabilities over the linear template classes is above the threshold.
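The conversion from per-class probabilities to a line probability map can be sketched as follows. The shape conventions, the function name, and the mean-plus-standard-deviation adaptive threshold are illustrative assumptions; the patent states that an adaptive threshold is used but does not specify its rule.

```python
import numpy as np

def line_probability_map(class_probs, thresh=None):
    """class_probs: (H, W, C) array where channels 0..C-2 are the linear
    template classes and channel C-1 is the background class.
    Returns (prob_map, mask): the summed template probability per pixel
    and the thresholded on-line pixel mask."""
    prob_map = class_probs[..., :-1].sum(axis=-1)
    if thresh is None:
        # Hypothetical adaptive rule: mean plus one standard deviation.
        thresh = prob_map.mean() + prob_map.std()
    return prob_map, prob_map >= thresh

# Toy usage: an 8x8 image with 50 template classes plus background,
# with valid per-pixel distributions drawn from a Dirichlet.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(51), size=(8, 8))   # shape (8, 8, 51)
pmap, mask = line_probability_map(probs)
```

Summing the template channels and comparing against one threshold matches the step's description; only the adaptive rule itself is a stand-in.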
Step 300, based on the probability map, connecting and dividing the pixel chain according to the difference of adjacent pixels, and fitting an initial line segment by using a least square method.
In this step, the pixel point with the maximum on-line probability is selected as a starting point, and adjacent pixels with similar gradient directions within its eight-neighborhood are connected to generate a series of pixel chains. Due to the inherent complexity of the image, a pixel chain may contain pixels that meet the minimum direction constraint but are not collinear. To solve this problem, a point-line distance segmentation strategy is adopted to evaluate the collinearity of the pixels in each chain.
As shown in fig. 4, each cell represents one pixel. Let the pixel chain between A and B be shown by the gray grid. The length of a pixel chain is the number of pixels in it; if the length exceeds the threshold θ_l, the two ends of the chain are connected to generate a virtual line segment L_AB (the long segment in the figure). If the pixels in the chain belong to the same line segment, they should be close to L_AB. The pixel point D in the chain farthest from L_AB is detected, and its distance to L_AB is computed. If this distance is greater than 1 pixel, the chain is split in two at pixel D, resulting in pixel chains AC and BD. The pixel chains are iteratively segmented until the pixels on every chain are nearly collinear. Finally, least squares fitting is applied to the segmented pixel chains to obtain the initial short line segments.
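The point-line distance segmentation strategy above can be sketched as a recursive split (the helper name and toy coordinates are illustrative; the chain building from anchor points and the final least squares fit are omitted):

```python
import numpy as np

def split_chain(chain, max_dist=1.0):
    """Recursively split a pixel chain at the pixel farthest from the
    virtual segment joining the chain's endpoints, until every pixel is
    within max_dist (1 pixel in the patent) of its chain's segment."""
    chain = np.asarray(chain, dtype=float)
    if len(chain) <= 2:
        return [chain]
    a, b = chain[0], chain[-1]
    ab = b - a
    # Perpendicular distance of each pixel to the segment A-B
    # (z-component of the 2D cross product, normalized by |AB|).
    d = np.abs(ab[0] * (chain[:, 1] - a[1]) - ab[1] * (chain[:, 0] - a[0]))
    d = d / (np.linalg.norm(ab) + 1e-12)
    i = int(d.argmax())
    if d[i] <= max_dist:
        return [chain]  # already nearly collinear
    # Split at the farthest pixel D and recurse on both halves.
    return split_chain(chain[:i + 1], max_dist) + split_chain(chain[i:], max_dist)

# An L-shaped chain splits into two nearly collinear pieces.
parts = split_chain([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]])
```

Each returned piece would then be fitted with least squares to give an initial short segment.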
And 400, constructing a straight line descriptor based on the probability distribution of the pixel points to expand and merge the short line segments, and evaluating the consistency of the on-line pixels to obtain a steady line segment.
This step proposes a new straight line descriptor to evaluate the consistency of adjacent short line segments. Collinear pixels exhibit similar probability distributions over the 50 cluster centers. As shown in figs. 5 (a) to 5 (d), three line segments are marked in fig. 5 (a). Taking L1 in fig. 5 (b) as an example, the horizontal axis represents the 50 template categories and the vertical axis the corresponding probability values. Each color represents the probability distribution of one on-line pixel; the probability distributions of collinear points can be seen to be very similar, while L2 and L3 show that the distributions of non-collinear points differ greatly. The consistency of adjacent line segments is evaluated on this basis.
This step designs a voting strategy over all on-line pixels to construct a robust straight line descriptor. As shown in figs. 6 (a) and 6 (b), assume there are N pixels on one line segment; the probability distributions of two of them are shown at the top. Since collinear points have similar distribution trends over the same template classes but different peaks, voting is performed using the classes where the peaks occur. For each pixel, the top 20 probability values are selected and one vote is cast for each corresponding category. As shown in the histogram, the 15 categories with the most votes are selected from the 50 categories; their class labels are shown in the bottom array. The mean of the probabilities of all pixel points on these 15 categories is computed, and the resulting 15-dimensional vector is used as the straight line descriptor. To make the descriptor insensitive to the other 35 classes, a simple normalization scheme is applied.
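The voting construction of the 15-dimensional descriptor can be sketched as follows. The helper name and the plain L2 normalization are assumptions: the text says a simple normalization scheme is used but does not specify which.

```python
import numpy as np

def line_descriptor(pixel_probs, n_votes=20, dims=15):
    """Build a straight line descriptor from the template probabilities
    of the pixels on one segment (the voting strategy of step 400).

    pixel_probs: (n_pixels, n_classes) array, n_classes = 50 here.
    Each pixel votes for its n_votes most probable classes; the dims
    classes with the most votes are kept, and the descriptor is the mean
    probability over those classes, L2-normalized (assumed scheme)."""
    n_classes = pixel_probs.shape[1]
    top = np.argsort(pixel_probs, axis=1)[:, -n_votes:]   # votes per pixel
    votes = np.bincount(top.ravel(), minlength=n_classes)
    kept = np.sort(np.argsort(votes)[-dims:])             # winning classes
    desc = pixel_probs[:, kept].mean(axis=0)
    return kept, desc / (np.linalg.norm(desc) + 1e-12)

# Toy segment of 10 pixels with valid distributions over 50 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(50), size=10)
kept, desc = line_descriptor(probs)
```

The returned `kept` indices matter when comparing a candidate pixel against the descriptor, since the pixel's probabilities on those same 15 classes serve as its feature.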
Based on the fitted line segments and their descriptors, existing line segments can be extended into longer ones. As shown in fig. 7, line segments L1 and L2 are fitted from different pixels. To extend the segment along the direction of L1, starting from one of its endpoints, an adjacent pixel A and the two pixels B and C above and below it can be found. The three pixels A, B and C are sorted in ascending order of their distance to L1, and the features of the candidate pixels are compared with that of L1 in turn. Since the straight line descriptor is a 15-dimensional feature, the probabilities of a pixel point on the corresponding 15 categories are used as the feature of the pixel in this embodiment. If the Euclidean distance between the pixel feature and the current line descriptor is less than a predefined threshold θ_m = 1, the pixel is added to L1 and the other candidate pixels are discarded. L1 is then re-fitted with the added pixel to obtain a new extended line segment. The extension process is iterative.
During extension, if there are candidate pixels that also belong to another fitted line segment L2, for example pixels D and E, and the Euclidean distance between the straight line descriptors of L1 and L2 is smaller than the threshold θ_m, L1 and L2 are merged by least squares fitting. To ensure the collinearity of L1 and L2, the fitting error is used as a constraint: if the fitting error is less than 1, the new fitted line segment L3 is retained. The same strategy is applied at both endpoints of a segment. If a line segment satisfies both gradient magnitude and direction consistency, it is marked as a detected line segment.
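The acceptance test used during extension can be sketched as follows. The helper name and the toy vectors are illustrative (the rejected distribution is exaggerated for clarity); what the patent specifies is the comparison of a candidate pixel's probabilities on the descriptor's 15 classes against the descriptor, with threshold θ_m = 1.

```python
import numpy as np

def try_extend(line_desc, kept, candidate_probs, theta_m=1.0):
    """Accept a candidate pixel if the Euclidean distance between its
    probabilities on the kept template classes and the segment's
    descriptor is below theta_m (the extension criterion of step 400)."""
    feat = candidate_probs[kept]          # pixel feature on the 15 classes
    return float(np.linalg.norm(feat - line_desc)) < theta_m

# Toy check: a pixel whose distribution matches the descriptor is
# accepted; one with a very different distribution is rejected.
kept = np.arange(15)
desc = np.full(15, 0.2)
match = np.zeros(50)
match[kept] = 0.2                         # identical on the kept classes
differ = np.zeros(50)
differ[kept] = 2.2                        # far from the descriptor
accepted = try_extend(desc, kept, match)
rejected = try_extend(desc, kept, differ)
```

The same distance test between two segments' descriptors (rather than pixel versus descriptor) decides whether L1 and L2 are merged.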

Claims (2)

1. A straight line detection method based on template classification and explicit straight line descriptors is characterized by comprising the following steps:
step 100, constructing a local linear structure template through K-means clustering, wherein the template comprises linear structures in different directions; training a random forest classifier based on the linear structure template to predict the probability that the image pixel points belong to each type of template;
step 200, for each image to be detected, calculating the probability of each pixel falling on a straight line by using a trained classifier to obtain a probability map;
step 300, based on the probability map, connecting and dividing the pixel chain according to the difference of adjacent pixels, and fitting an initial line segment by using a least square method;
step 400, constructing a straight line descriptor based on the probability distribution of the pixel points to extend and merge short line segments, and evaluating the consistency of on-line pixels to obtain robust line segments.
2. The straight line detection method based on template classification and explicit straight line descriptors according to claim 1, characterized in that:
in step 100, K-means is used to cluster image blocks with similar linear structures, each centered on a pixel point, into one class, obtaining a number of templates with different linear structures that represent straight lines in different directions; for the image block represented by each pixel point, the class it belongs to is known after clustering, and the color gradient features and autocorrelation features of the image block are used as input features to train a classifier that predicts the template class to which each pixel point belongs;
in step 200, for a picture to be detected, all pixels must be classified one by one using the trained classification model, and the result may be one of the template classes or the background class; when a new picture is input, each pixel point is traversed, the feature vector of the corresponding image block is input into the trained random forest, and the probabilities that the pixel point belongs to each class of linear template, together with the probability of the background, are output; the probabilities over all template classes are summed to obtain the probability that the pixel point lies on a straight line; finally, a non-maximum suppression algorithm is used to extract the outline of the straight lines, yielding a matrix of the same size as the original image in which the value at each position is the probability that the corresponding pixel of the original image belongs to a straight line, i.e., a probability map;
in step 300, because a larger straight line probability value indicates a more salient straight line, the maximum point in the probability map is selected as an anchor point, and pixel points with similar directions are then searched within the eight-neighborhood of the pixel and connected into a chain; due to the inherent complexity of the image, a pixel chain may contain pixels that meet the minimum direction constraint but are not collinear; if the number of pixel points in a chain exceeds a threshold, its two ends are connected to generate a line segment, the pixel point in the chain farthest from this line is found, and its distance to the segment is computed; if this distance is greater than one pixel, the chain is divided into two parts at that pixel point, and the process iterates until all pixel chains are collinear; the resulting series of nearly collinear pixel chains are fitted to initial short line segments using the least squares method;
in step 400, in order to obtain more complete and accurate line segments, an explicit straight line descriptor is proposed to express the features of a straight line; step 200 yields the probability that each pixel belongs to each class of templates, and the distributions of the pixels on a given straight line over the N templates are approximately the same, concentrated on certain templates; an N-dimensional 'voter' corresponding to the N template classes is set up; for each pixel point on the straight line, the templates corresponding to its top-n probability values are selected as 'votes'; then the first m templates receiving the most votes in the 'voter' are selected; finally, the mean of the probabilities on these m templates over all pixel points on the line is computed to obtain the feature of the straight line;
for the initial short line segments obtained in step 300, the corresponding straight line features are computed by the method of step 400; adjacent pixel points are sought along the direction of the straight line; if the sum of squared differences between a candidate pixel's probabilities on the templates of the current straight line feature and the descriptor is smaller than a threshold, the pixel is added to the pixel chain and the line is re-fitted, yielding an extended straight line.
CN202310054269.7A 2023-02-03 2023-02-03 Linear detection method based on template classification and explicit linear descriptor Pending CN116363055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310054269.7A CN116363055A (en) 2023-02-03 2023-02-03 Linear detection method based on template classification and explicit linear descriptor


Publications (1)

Publication Number Publication Date
CN116363055A true CN116363055A (en) 2023-06-30

Family

ID=86916580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310054269.7A Pending CN116363055A (en) 2023-02-03 2023-02-03 Linear detection method based on template classification and explicit linear descriptor

Country Status (1)

Country Link
CN (1) CN116363055A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination