CN113705593A - Method for generating training data and electronic device - Google Patents

Method for generating training data and electronic device

Info

Publication number
CN113705593A
CN113705593A (Application CN202010436701.5A)
Authority
CN
China
Prior art keywords
image
training
data
warping
silhouette
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010436701.5A
Other languages
Chinese (zh)
Inventor
孙民
朱宏国
王传崴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202010436701.5A priority Critical patent/CN113705593A/en
Publication of CN113705593A publication Critical patent/CN113705593A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning


Abstract

The invention provides a method for generating training data and an electronic device. The method includes the following steps: obtaining an object model of a specific object; obtaining a first image of the object model at a first angle and a first silhouette corresponding to the first image; extracting a first object image from the first image based on the first silhouette; embedding the first object image into a first background image to generate a first training image; generating first label data of the first object image in the first training image; and using the first training image and the first label data as first training data of the specific object. Training data can thereby be generated automatically, quickly, and correctly, reducing the associated time and money costs.

Description

Method for generating training data and electronic device
Technical Field
The present invention relates to deep learning technologies, and in particular, to a method for generating training data and an electronic device.
Background
At present, deep learning requires huge amounts of training data to achieve good results. However, some training data are difficult to obtain during product development, which is very time-consuming and costly for the manufacturer.
Generally, training data are generated and collected by manual labeling. However, because labeling work related to human body posture or object posture is very difficult for humans, each label takes a lot of time and the error rate is high.
Furthermore, for certain objects unique to some vendors (e.g., a proprietary product), existing data sets on the market may not be usable, so vendors may have to collect the actual data themselves, which consumes extra time and money.
Disclosure of Invention
The present invention provides a method for generating training data and an electronic device, which can be used to solve the above technical problems.
The invention provides a method for generating training data, which comprises the following steps: obtaining an object model of a specific object; obtaining a first image of the object model at a first angle and a first silhouette corresponding to the first image; extracting, from the first image based on the first silhouette, a first object image of the specific object at the first angle; embedding the first object image into a first background image to generate a first training image; generating first label data of the first object image in the first training image; and using the first training image and the first label data as first training data of the specific object.
The invention also provides an electronic device, which comprises a storage circuit and a processor. The storage circuit stores a plurality of modules. The processor is coupled to the storage circuit and accesses the modules to execute the following steps: obtaining an object model of a specific object; obtaining a first image of the object model at a first angle and a first silhouette corresponding to the first image; extracting, from the first image based on the first silhouette, a first object image of the specific object at the first angle; embedding the first object image into a first background image to generate a first training image; generating first label data of the first object image in the first training image; and using the first training image and the first label data as first training data of the specific object.
Based on the above, the present invention can automatically, rapidly and correctly generate training data, thereby reducing the associated time and money costs.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention;
FIG. 2 is a flow diagram illustrating a method of generating training data in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an object model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating extraction of the first object image according to FIG. 3;
FIG. 5A is a diagram illustrating generation of a first training image according to an embodiment of the present invention;
FIGS. 5B and 5C are schematic diagrams illustrating the generation of training images according to various embodiments of the present invention;
FIG. 6 is a schematic diagram illustrating extraction of the second object image according to FIG. 3.
Detailed Description
Reference will now be made in detail to exemplary embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the invention. In different embodiments, the electronic device 100 of the present invention can be various computer devices, such as a personal computer, a cloud server, a workstation, a notebook computer, etc., or various smart devices, such as a smart phone, a tablet computer, etc., but not limited thereto.
As shown in fig. 1, the electronic device 100 may include a storage circuit 102 and a processor 104. The storage circuit 102 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, other similar devices, or a combination thereof, and can be used to record a plurality of program codes or modules.
The processor 104 is coupled to the storage circuit 102, and may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, a controller, a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), any other type of integrated circuit, a state machine, an Advanced RISC Machine (ARM)-based processor, and the like.
In an embodiment of the present invention, the processor 104 may access the module and the program code recorded in the storage circuit 102 to implement the method for generating training data according to the present invention, and details thereof are described below.
Referring to fig. 2, a flowchart of a method for generating training data according to an embodiment of the invention is shown. The method of this embodiment can be executed by the electronic device 100 of fig. 1, and details of steps in fig. 2 are described below in conjunction with components shown in fig. 1.
First, in step S210, the processor 104 may retrieve an object model of a particular object. In different embodiments, the specific object may be a product unique to a manufacturer or various objects, but may not be limited thereto. In order to make the concept of the present invention easier to understand, the following is additionally described with reference to fig. 3, fig. 4, and fig. 5A to fig. 5C, but the following is only exemplary and not intended to limit the possible embodiments of the present invention.
Fig. 3 is a schematic diagram of an object model according to an embodiment of the invention. In this embodiment, the object model 300 is, for example, a three-dimensional object model corresponding to a specific object, and its file type may be, but is not limited to, an .obj file or an .fbx file. In fig. 3, the specific object corresponding to the object model 300 is, for example, an object having a barrel, a handle, and a spout, but this is only an example and is not intended to limit the possible embodiments of the present invention. In other embodiments, the designer may select any object that the artificial intelligence model is to be trained to recognize as the specific object under consideration, and is not limited to the embodiment shown in fig. 3. In addition, in various embodiments, one or more feature points marked by the relevant personnel may exist on the object model 300, and the positions of the feature points may be set as required. For example, feature points may be marked on the object model 300 at the positions of the spout, the handle, the pot bottom, etc., but the present invention may not be limited thereto.
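As a hedged illustration of how such a model file might be read, the sketch below parses vertex records from a Wavefront .obj file in pure Python. The function name and the minimal parsing logic are assumptions made for illustration only; a production pipeline would more likely rely on a dedicated 3D library such as trimesh or Open3D.

```python
def load_obj_vertices(path):
    """Return the list of (x, y, z) vertex tuples from a Wavefront .obj file.

    Only geometric vertex lines ("v x y z") are read; faces, normals, and
    texture coordinates are ignored in this minimal sketch.
    """
    vertices = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "v":  # geometric vertex record
                vertices.append(tuple(float(c) for c in parts[1:4]))
    return vertices
```

Such a vertex list could then be handed to a renderer that produces the images and silhouettes used in the following steps.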
Thereafter, in step S220, the processor 104 may obtain a first image when the object model 300 is at a first angle and a first silhouette corresponding to the first image, and in step S230, extract a first object image of a specific object representing the first angle from the first image based on the first silhouette.
Please refer to fig. 4, which is a schematic diagram illustrating extraction of the first object image according to fig. 3. In the present embodiment, it is assumed that the object model 300 is rotated to assume the first angle shown in fig. 4. In this case, the processor 104 may take the first image 410 of the object model 300 at the first angle by screenshot or other similar means. At the same time, the processor 104 may retrieve the first silhouette 420 corresponding to the first image 410, for example via a related software function (such as a silhouette function provided by the rendering software), but may not be limited thereto.
After obtaining the first image 410 and the first silhouette 420, the processor 104 may, for example, acquire an image area corresponding to the non-shadow portion 420a in the first silhouette 420 from the first image 410 as the first object image 430 (i.e., a specific object presenting the first angle described above), but may not be limited thereto.
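The extraction of step S230 amounts to masking the rendered image with the silhouette. Below is a minimal sketch assuming NumPy arrays for the image and silhouette; the patent does not prescribe an implementation, so the function name and RGBA output format are illustrative choices.

```python
import numpy as np

def extract_object_image(image, silhouette):
    """Keep only the pixels where the silhouette is non-zero.

    image:      (H, W, 3) uint8 array (the rendered first image)
    silhouette: (H, W) array, non-zero where the object is visible
    Returns an RGBA image whose alpha channel comes from the silhouette,
    so the background becomes transparent for later compositing.
    """
    mask = (silhouette > 0).astype(np.uint8)
    rgba = np.dstack([image, mask * 255])  # alpha channel from silhouette
    rgba[:, :, :3] *= mask[:, :, None]     # zero out background pixels
    return rgba
```

Producing an alpha channel here simplifies the embedding step, since compositing can then read the object's footprint directly from the crop.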
Thereafter, in step S240, the processor 104 may embed the first object image 430 into the first background image to generate a first training image.
In various embodiments, the first background image may be various pre-stored indoor/outdoor scene images, or a scene image captured and acquired in real time, but is not limited thereto. Referring to fig. 5A, a schematic diagram of generating a first training image according to an embodiment of the invention is shown.
In fig. 5A, assuming that the first background image 510 obtained by the processor 104 is a fisheye image as shown, the processor 104 may accordingly embed the first object image 430 into the first background image 510 to generate a first training image 510a. In the embodiment of the present invention, the processor 104 may embed the first object image 430 at any position of the first background image 510 to generate the first training image 510a, and is not limited to the embodiment shown in fig. 5A.
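The embedding of step S240 can be sketched as alpha-compositing the extracted RGBA crop onto the background at a chosen position. The function below is an assumption for illustration (the patent does not specify a compositing method) and omits boundary clipping for brevity.

```python
import numpy as np

def embed_object(background, object_rgba, top, left):
    """Alpha-composite an RGBA object crop onto a copy of the background.

    top/left give the embedding position (upper-left corner of the crop).
    The crop is assumed to fit entirely inside the background.
    """
    out = background.copy()
    h, w = object_rgba.shape[:2]
    alpha = object_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = out[top:top + h, left:left + w].astype(np.float32)
    blended = alpha * object_rgba[:, :, :3] + (1.0 - alpha) * region
    out[top:top + h, left:left + w] = blended.astype(np.uint8)
    return out
```

Repeating this call with different backgrounds or positions yields the diversified training images discussed later.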
Thereafter, in step S250, the processor 104 may generate first label data of the first object image 430 in the first training image 510a, and take the first training image 510a and the first label data as first training data of the specific object in step S260.
In various embodiments, the processor 104 may generate the first labeled data of the first object image 430 in the first training image 510a based on a bounding box labeling (bounding box annotation) technique, a segmentation (segmentation) technique, or other related existing labeling techniques.
For ease of understanding, it is assumed below that the processor 104 employs bounding box labeling techniques, but may not be so limited. Specifically, in fig. 3, the object model 300 may have a reference point 300a, and the reference point 300a exists in both the first image 410 and the first object image 430.
In an embodiment, after the object model 300 is rotated to assume the first angle in fig. 4, the processor 104 may automatically generate a corresponding bounding box. In one embodiment, the bounding box is, for example, the smallest rectangular box that frames the specific object in the first image 410 (but may not be limited thereto). In this case, the processor 104 may record the relative positions of the top-left and bottom-right corners of the bounding box with respect to the reference point 300a. For example, if the reference point 300a in the first image 410 is regarded as the origin of a coordinate system, these relative positions can be characterized as coordinates relative to that origin, but the invention is not limited thereto.
Accordingly, during the process of embedding the first object image 430 into the first background image 510, the processor 104 may determine an embedding position in the first background image 510 and align the reference point 300a in the first object image 430 with the embedding position to generate the first training image 510a.
Then, the processor 104 may reversely deduce the positions of the upper-left and lower-right corners of the bounding box in the first training image 510a based on the embedding position corresponding to the reference point 300a, and record this position information as the first label data. In various embodiments, the first label data is, for example, a .json file, a .txt file, or another similar description file, but may not be limited thereto.
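The reverse deduction described above can be sketched as follows, where the offsets of the bounding-box corners from the reference point are assumed to have been recorded beforehand. All names and the JSON record layout are hypothetical choices for illustration.

```python
import json

def bbox_label(embed_xy, tl_offset, br_offset, label="object"):
    """Re-derive bounding-box corners in the training image.

    embed_xy:  (x, y) where the reference point was placed in the background
    tl_offset: (dx, dy) of the box's top-left corner relative to the reference
    br_offset: (dx, dy) of the bottom-right corner relative to the reference
    Returns a JSON string, one possible shape for the first label data.
    """
    ex, ey = embed_xy
    record = {
        "label": label,
        "x_min": ex + tl_offset[0], "y_min": ey + tl_offset[1],
        "x_max": ex + br_offset[0], "y_max": ey + br_offset[1],
    }
    return json.dumps(record)
```

Because the offsets are fixed once the model is rendered at a given angle, only the embedding position changes between training images, which is what makes the labels cheap to regenerate.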
Furthermore, assuming the segmentation technique is used, after the processor 104 embeds the first object image 430 into the first background image 510 to generate the first training image 510a, the processor 104 may additionally generate a completely black image having the same size as the first background image 510. Next, since the processor 104 knows the embedding position of the first object image 430 in the first background image 510, the processor 104 may insert a color block having the same size and contour as the first object image 430 into the completely black image at the embedding position to generate the first label data corresponding to the first training image 510a, but is not limited thereto.
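The segmentation-style label described above can be sketched as pasting the object's silhouette into a completely black image at the embedding position. This is a minimal NumPy illustration with assumed names; it reuses the alpha channel of the extracted crop as the object contour.

```python
import numpy as np

def segmentation_label(bg_shape, object_alpha, top, left):
    """Build a segmentation label: a black image the size of the background
    with the object's silhouette painted white at the embedding position.

    bg_shape:     shape of the background image, e.g. (H, W, 3)
    object_alpha: (h, w) alpha/silhouette of the embedded object crop
    """
    mask = np.zeros(bg_shape[:2], dtype=np.uint8)   # completely black image
    h, w = object_alpha.shape
    mask[top:top + h, left:left + w] = (object_alpha > 0) * 255
    return mask
```

Because the embedding position is known exactly, the label is pixel-accurate by construction, with no manual annotation involved.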
For further details of the above-mentioned bounding box labeling and segmentation techniques, reference may be made to related technical documents (e.g., Russell, B.C., Torralba, A., Murphy, K.P. et al. LabelMe: A Database and Web-Based Tool for Image Annotation. Int J Comput Vis 77, 157-173 (2008); https://doi.org/10.1007/s11263-007-0090-8), which are not repeated herein.
In some embodiments, after obtaining the first training data, the processor 104 may provide the first training data to an associated artificial intelligence model to allow the artificial intelligence model to learn how to recognize the specific object, but is not limited thereto.
As can be seen from the above, compared with the conventional approach of generating training data by manual labeling, the present invention can automatically, correctly, and quickly generate the required first training data with the electronic device 100. In addition, since the method of the present invention can be implemented by the electronic device 100 as a cloud server, the generation of the training data can be completed in the cloud, unlike existing labeling operations that need to be completed locally.
In addition, since data sets of fisheye images are difficult to obtain from network resources, the present invention can quickly generate first training data based on fisheye images. Furthermore, the present invention can efficiently and accurately generate suitable training data for a specific object unique to a particular vendor, without being limited to existing network data sets.
In some embodiments, the first object image 430 may also be embedded in a variety of different background images to produce different training data for artificial intelligence model learning.
Please refer to fig. 5B and 5C, which are schematic diagrams illustrating the generation of training images according to various embodiments of the present invention. In fig. 5B and 5C, after the processor 104 obtains the background images 520 and 530, the first object image 430 may be embedded in the background images 520 and 530 to generate the training images 520a and 530a, respectively, and the related marking data may be obtained according to the previous teachings, which are not described herein in detail.
After obtaining the training images 520a, 530a and their associated label data, the processor 104 may also feed such information into the artificial intelligence model for the artificial intelligence model to better learn how to recognize the specific object, but may not be limited thereto.
In addition to embedding the same object image (e.g., the first object image 430) in different background images to generate different training images, the present invention can also obtain a corresponding object image (hereinafter referred to as a second object image) after rotating the object model 300 to different angles, and embed the second object image in various background images to generate more diversified training images.
Please refer to fig. 6, which is a schematic diagram illustrating the second object image being extracted according to fig. 3. In the present embodiment, it is assumed that the object model 300 is rotated to assume the second angle shown in fig. 6. In this case, the processor 104 may take a second image 610 of the object model 300 at a second angle by screenshot or other similar means. At the same time, the processor 104 may retrieve a second silhouette 620 corresponding to the second image 610.
After obtaining the second image 610 and the second silhouette 620, the processor 104 may, for example, acquire an image area corresponding to the non-shadow portion 620a in the second silhouette 620 from the second image 610 as the second object image 630 (i.e., a specific object presenting the second angle described above), but may not be limited thereto.
Thereafter, the processor 104 may embed the second object image 630 into various background images (e.g., the first background image 510, the background images 520, 530, etc.) to generate more diversified training images, but is not limited thereto.
In addition, although the above embodiments are described with a fisheye image as the background image, the present invention is also applicable to background images such as planar images, 360-degree images, and the like.
In some embodiments, the background image considered by the present invention may also be an image obtained by performing preprocessing on an original image. In different embodiments, the preprocessing may include various image warping processes, such as at least one of affine warping, perspective-n-point (PnP) warping, parametric warping, 2D image transformation, forward warping, inverse warping, non-parametric image warping, and mesh warping, but is not limited thereto.
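As one hedged example of the warping processes listed above, inverse warping maps each output pixel back through the inverse of the transform and samples the source image there. The nearest-neighbour sketch below assumes a 2x3 affine matrix and NumPy arrays; interpolation and performance concerns are ignored.

```python
import numpy as np

def inverse_affine_warp(image, matrix):
    """Warp an image by inverse mapping under a 2x3 affine matrix.

    For each output pixel (x, y), the inverse transform gives the source
    pixel to sample; out-of-range samples stay black. Nearest-neighbour
    sampling only, for illustration.
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    inv = np.linalg.inv(np.vstack([matrix, [0, 0, 1]]))[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy = np.rint(inv @ coords).astype(int)
    valid = (0 <= sx) & (sx < w) & (0 <= sy) & (sy < h)
    out[ys.ravel()[valid], xs.ravel()[valid]] = image[sy[valid], sx[valid]]
    return out
```

Inverse mapping is generally preferred over forward warping here because every output pixel receives exactly one sample, avoiding the holes that forward mapping can leave.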
Furthermore, after a training image is obtained, the present invention may further update the training image by performing the above-described image warping processes on it, but may not be limited thereto.
In some embodiments, after the training image is obtained, the present invention may also use different image augmentation methods (e.g., giving different light sources/shooting angles) to increase the data volume, but may not be limited thereto.
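As a simple illustration of such augmentation, scaling pixel intensities approximates a change of light source. The function name, the scalar-factor model, and the clipping behaviour are assumptions for illustration; real pipelines typically combine many such transforms.

```python
import numpy as np

def augment_brightness(image, factor):
    """Scale pixel intensities to simulate a different light source,
    clipping the result back into the valid uint8 range."""
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)
```

Applying several factors to one training image multiplies the data volume without any additional rendering or labeling work, since the label data are unchanged by a brightness shift.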
In summary, the present invention can embed object images corresponding to various angles of a specific object into various background images to automatically, rapidly, and correctly generate the required training data. In addition, the invention can complete the generation of the training data in the cloud, unlike existing labeling operations that need to be completed locally. Moreover, the invention can quickly generate training data based on fisheye images. Furthermore, the present invention can efficiently and accurately generate suitable training data for a specific object unique to a particular vendor, without being limited to existing network data sets.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. A method of generating training data, comprising:
obtaining an object model of a specific object;
obtaining a first image of the object model at a first angle and a first silhouette corresponding to the first image;
retrieving a first object image of the specific object representing the first angle from the first image based on the first silhouette;
embedding the first object image into a first background image to generate a first training image;
generating first label data of the first object image in the first training image;
and using the first training image and the first label data as first training data of the specific object.
2. The method of claim 1, further comprising:
obtaining a second image of the object model at a second angle and a second silhouette corresponding to the second image;
retrieving a second object image of the specific object representing the second angle from the second image based on the second silhouette;
embedding the second object image into a second background image to generate a second training image;
generating second label data of the second object image in the second training image;
and using the second training image and the second label data as second training data of the specific object.
3. The method of claim 2, further comprising:
feeding the first training data and the second training data into an artificial intelligence model to train the artificial intelligence model to recognize the specific object.
4. The method of claim 1, further comprising:
a first original image is obtained, and image preprocessing is performed on the first original image to generate the first background image.
5. The method of claim 4, wherein the image preprocessing comprises at least one of affine warping, perspective-n-point warping, parametric warping, 2D image transformation, forward warping, inverse warping, non-parametric image warping, and mesh warping.
6. The method of claim 1, further comprising:
performing an image warping process on the first training image to update the first training image.
7. The method of claim 1, wherein the first background image comprises at least one of a planar image, a fisheye image, a 360 degree image.
8. The method of claim 1, wherein the step of generating the first label data of the first object image in the first training image comprises:
generating the first label data of the first object image in the first training image based on a bounding box labeling technique or a segmentation technique.
9. The method of claim 1, wherein the first label data is associated with a first image area occupied by the first object image in the first training image.
10. The method of claim 1, further comprising:
obtaining a third image of the object model at a third angle and a third silhouette corresponding to the third image;
retrieving a third object image of the specific object representing the third angle from the third image based on the third silhouette;
embedding the third object image into the first background image to generate a third training image;
generating third label data of the third object image in the third training image;
and using the third training image and the third label data as third training data of the specific object.
11. An electronic device, comprising:
a storage circuit that stores a plurality of modules; and
a processor coupled to the storage circuit and accessing the plurality of modules to perform the following steps:
obtaining an object model of a specific object;
obtaining a first image of the object model at a first angle and a first silhouette corresponding to the first image;
retrieving a first object image of the specific object representing the first angle from the first image based on the first silhouette;
embedding the first object image into a first background image to generate a first training image;
generating first label data of the first object image in the first training image;
and using the first training image and the first label data as first training data of the specific object.
CN202010436701.5A 2020-05-21 2020-05-21 Method for generating training data and electronic device Pending CN113705593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010436701.5A CN113705593A (en) 2020-05-21 2020-05-21 Method for generating training data and electronic device


Publications (1)

Publication Number Publication Date
CN113705593A true CN113705593A (en) 2021-11-26

Family

ID=78645852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010436701.5A Pending CN113705593A (en) 2020-05-21 2020-05-21 Method for generating training data and electronic device

Country Status (1)

Country Link
CN (1) CN113705593A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544496A (en) * 2018-11-19 2019-03-29 南京旷云科技有限公司 Generation method, the training method and device of object detection model of training data
CN110059724A (en) * 2019-03-20 2019-07-26 东软睿驰汽车技术(沈阳)有限公司 A kind of acquisition methods and device of visual sample
CN110414480A (en) * 2019-08-09 2019-11-05 威盛电子股份有限公司 Training image production method and electronic device
CN111144487A (en) * 2019-12-27 2020-05-12 二十一世纪空间技术应用股份有限公司 Method for establishing and updating remote sensing image sample library
WO2020093694A1 (en) * 2018-11-07 2020-05-14 华为技术有限公司 Method for generating video analysis model, and video analysis system
US20200193591A1 (en) * 2018-12-17 2020-06-18 Bodygram, Inc. Methods and systems for generating 3d datasets to train deep learning networks for measurements estimation



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination