CN113033248A - Image identification method and device and computer readable storage medium - Google Patents

Image identification method and device and computer readable storage medium

Info

Publication number
CN113033248A
CN113033248A
Authority
CN
China
Prior art keywords
target
geometric shape
image
parameters
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911252746.0A
Other languages
Chinese (zh)
Inventor
张洪伟 (Zhang Hongwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911252746.0A
Publication of CN113033248A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/20 Image preprocessing
    • G06V 10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G06V 20/60 Type of objects
    • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 20/64 Three-dimensional objects
    • G06V 2201/07 Target detection (indexing scheme)


Abstract

The embodiment of the application discloses an image recognition method and apparatus and a computer-readable storage medium, which can improve the efficiency and accuracy of image recognition. The method comprises the following steps: acquiring a target image and obtaining point cloud data of the target image; performing geometric shape detection on the point cloud data to obtain target geometric shape parameters; acquiring a direction line of the target geometric shape on a display interface of the target image; performing validity detection on the target geometric shape parameters based on the direction line and a preset detection threshold; when the target geometric shape parameters are detected to be valid, performing parameter iteration based on the target geometric shape parameters and the point cloud data to obtain target parameters; and performing plane expansion on the surface of the target geometric shape in the target image based on the target parameters to obtain a recognition result.

Description

Image identification method and device and computer readable storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image recognition method and apparatus, and a computer-readable storage medium.
Background
At present, when target detection and target recognition are performed on image information, an image containing a complex background carries considerable interfering information, so misrecognition or recognition failure easily occurs. Moreover, when another pattern with the same shape as the object of interest exists in the same image, conventional algorithms cannot distinguish the object of interest, and false detection may occur.
Disclosure of Invention
Embodiments of the present application are intended to provide an image recognition method and apparatus, and a computer-readable storage medium, which can improve efficiency and accuracy of image recognition.
The technical solution of the present application is implemented as follows:
the embodiment of the application provides an image identification method, which comprises the following steps:
acquiring a target image and obtaining point cloud data of the target image;
performing geometric shape detection on the point cloud data to obtain target geometric shape parameters;
acquiring a direction line of the target geometric shape on a display interface of the target image;
performing validity detection on the target geometric shape parameters based on the direction line and a preset detection threshold;
when the target geometric shape parameters are detected to be valid, performing parameter iteration based on the target geometric shape parameters and the point cloud data to obtain target parameters;
and performing plane expansion on the surface of the target geometric shape in the target image based on the target parameters to obtain a recognition result.
The embodiment of the application provides an image recognition apparatus, which comprises an acquisition unit, a shape detection unit, an interaction unit, a validity detection unit, an iteration unit and a recognition unit, wherein,
the acquisition unit is used for acquiring a target image and obtaining point cloud data of the target image;
the shape detection unit is used for performing geometric shape detection on the point cloud data to obtain target geometric shape parameters;
the interaction unit is used for acquiring a direction line of the target geometric shape on a display interface of the target image;
the validity detection unit is used for performing validity detection on the target geometric shape parameters based on the direction line and a preset detection threshold;
the iteration unit is used for performing parameter iteration based on the target geometric shape parameters and the point cloud data to obtain target parameters when the target geometric shape parameters are detected to be valid;
and the recognition unit is used for performing plane expansion on the surface of the target geometric shape in the target image based on the target parameters to obtain a recognition result.
The embodiment of the application provides an image recognition device comprising a processor, a memory and a communication bus, wherein the memory communicates with the processor through the communication bus, the memory stores one or more programs executable by the processor, and when the one or more programs are executed, the processor performs the image recognition method described above.
The embodiment of the application provides a computer readable storage medium, which stores one or more programs, and the one or more programs can be executed by one or more processors to realize the image recognition method according to any one of the above items.
The embodiment of the application provides an image recognition method and apparatus and a computer-readable storage medium, which can improve the efficiency and accuracy of image recognition. The method comprises: acquiring a target image and obtaining point cloud data of the target image; performing geometric shape detection on the point cloud data to obtain target geometric shape parameters; acquiring a direction line of the target geometric shape on a display interface of the target image; performing validity detection on the target geometric shape parameters based on the direction line and a preset detection threshold; when the target geometric shape parameters are detected to be valid, performing parameter iteration based on the target geometric shape parameters and the point cloud data to obtain target parameters; and performing plane expansion on the surface of the target geometric shape in the target image based on the target parameters to obtain a recognition result. With this method, the validity of the target geometric shape parameters can be judged with the assistance of the direction line acquired on the display interface, and only the target geometric shape parameters that satisfy the direction-line constraint are selected for further iterative computation. This reduces the amount of iterative computation of the image detection algorithm and improves recognition efficiency, while ensuring that the finally obtained target parameters conform to the target geometric shape corresponding to the direction line, improving the accuracy of image recognition.
Drawings
FIG. 1-1 is a first schematic view of a scene for identifying an image of a geometric surface in an image;
FIG. 1-2 is a second schematic view of a scene for identifying an image of a geometric surface in an image;
FIG. 2 is a diagram illustrating an image recognition effect when background information of a target image is complex;
FIG. 3 is a diagram illustrating the image recognition effect when multiple identical geometric shapes exist in the target image;
fig. 4 is a schematic diagram of an image recognition system according to an embodiment of the present application;
fig. 5 is a first flowchart illustrating an image recognition method according to an embodiment of the present application;
fig. 6 is a schematic flowchart illustrating a second image recognition method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a method of receiving a trajectory line drawn for a target object in a target image;
FIG. 8 is a first schematic diagram of a method for obtaining directional lines of a target geometry based on trajectory lines;
fig. 9 is a third schematic flowchart of an image recognition method according to an embodiment of the present application;
FIG. 10 is a second schematic diagram of a method for obtaining directional lines of a target geometry based on trajectory lines;
fig. 11 is a fourth schematic flowchart of an image recognition method according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the three-dimensional direction plane of the direction line in three-dimensional space;
fig. 13 is a fifth schematic flowchart of an image recognition method according to an embodiment of the present application;
fig. 14 is a schematic diagram of a validity detection method provided in an embodiment of the present application;
fig. 15 is a sixth schematic flowchart of an image recognition method according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a direction line of a target geometric shape obtained by dragging a target line segment;
fig. 17 is a seventh flowchart illustrating an image recognition method according to an embodiment of the present application;
FIG. 18-1 is a first schematic diagram of obtaining directional lines of a target geometry via text detection;
FIG. 18-2 is a second schematic diagram of obtaining directional lines of a target geometry by text detection;
fig. 19 is a first schematic structural diagram of an image recognition apparatus according to an embodiment of the present application;
fig. 20 is a second schematic structural diagram of an image recognition apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
At present, in scenes where the image on a geometric surface is recognized within an image (for example, recognizing the outer package of a cylindrical commodity as shown in fig. 1-1, or recognizing image and text information posted on a cylindrical building as shown in fig. 1-2), it often happens that, because the background information of the target image is complex, the outline of the target geometric shape cannot be clearly detected and recognition of the target geometric shape fails, as shown in fig. 2. Alternatively, multiple identical geometric shapes may exist in the target image; for example, as shown in fig. 3, when multiple cylinders exist in the target image, the target cylinder whose surface image needs to be recognized cannot be distinguished. Existing technical solutions must increase algorithm complexity to improve recognition accuracy, so recognition efficiency is low; moreover, they cannot distinguish the target object whose surface image is to be recognized from among multiple identical geometric shapes.
The image recognition system provided by the embodiment of the application is shown in fig. 4 and includes an image acquisition module 10, a geometry detection module 20, an interface interaction module 30, a validity judging module 40, an image transformation module 50, and a display module 60. The image acquisition module 10 is configured to acquire point cloud data of a target image and includes an image acquisition device capable of acquiring three-dimensional data information. Illustratively, the image acquisition device may be a camera with a time-of-flight (TOF) sensor, a structured light camera, a binocular camera, or the like, as well as an RGB image acquisition device. The geometry detection module 20 is configured to detect, by using an image detection algorithm, the target geometric shape parameters contained in the point cloud data of the target image according to a preset geometry type, where the preset geometry type may be the same shape type as the target object to be recognized; for example, when a cylindrical target object in the target image needs to be recognized, the geometry type may be preset to cylinder, and the geometry detection module 20 detects the contained cylinder shape parameters from the point cloud data of the target image. The interface interaction module 30 is configured to receive, from an interaction interface of the image recognition system, externally interacted operation information for the target object, and to obtain the direction line of the target geometric shape according to that operation information. Illustratively, a trajectory line drawn by a user along the axis of the target object of interest (for example, a target cylinder) is received on the display screen of the terminal, and the direction line of the target cylinder is then obtained from the trajectory line.
The validity judging module 40 is configured to perform validity detection on the target geometric shape parameters detected by the geometry detection module 20 according to the direction line acquired from the interface interaction module 30: it determines whether the geometric shape formed by the spatial range defined by the target geometric shape parameters is consistent with the target geometric shape on which the direction line lies, and if so, the validity detection passes. The geometry detection module 20 is further configured to iterate the target geometric shape parameters that pass validity detection, for example using a random sample consensus (RANSAC) algorithm to optimize them iteratively, so as to obtain the optimal target parameters. The image transformation module 50 is configured to unroll the target geometric shape in the target image from a curved surface into a plane according to the target parameters, so as to obtain the recognition result of the target geometric shape. The display module 60 is configured to display the recognition result on a display interface.
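The curved-to-plane expansion performed by the image transformation module can be sketched as follows. This is an illustrative snippet under our own parameterization (the patent does not specify the mapping): each coordinate of the unrolled plane corresponds to an angle/height position on the cylinder surface, which in turn corresponds to a 3-D point that could be sampled from the source image.

```python
import numpy as np

def cylinder_surface_points(axis, center, radius, thetas, heights):
    """Map unrolled-plane coordinates back onto the cylinder surface: a flat
    point (radius*theta, h) corresponds to the 3-D surface point
    center + radius*cos(theta)*u + radius*sin(theta)*w + h*axis,
    where (u, w, axis) is an orthonormal frame around the cylinder axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # pick any vector not parallel to the axis to seed the frame
    seed = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(seed, axis)) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, seed)
    u = u / np.linalg.norm(u)
    w = np.cross(axis, u)
    pts = [np.asarray(center, dtype=float)
           + radius * np.cos(t) * u + radius * np.sin(t) * w + h * axis
           for h in heights for t in thetas]
    return np.array(pts)
```

Sampling the source image at the projections of these points, row by row in (theta, h), yields the tiled flat image of the cylinder surface.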
Fig. 5 is an alternative flowchart of an image recognition method according to an embodiment of the present application, which will be described with reference to the steps shown in fig. 5.
S101, collecting a target image and acquiring point cloud data of the target image.
The image recognition method provided by the embodiment of the application is suitable for a scene for carrying out surface image recognition and extraction on the target geometric shape in the image.
Illustratively, a scene in which notices or explanatory information on the surface of a building or commodity is recognized through image acquisition, or in which a target obstacle is detected in an acquired image.
In the embodiment of the application, the image recognition device firstly acquires a target image through the image acquisition equipment and acquires point cloud data of the target image.
In the embodiment of the application, the point cloud data is a data set of three-dimensional coordinate points of the target image. The image recognition device needs to acquire point cloud data of the target image through an image acquisition device capable of acquiring three-dimensional information of the target image, for example, the image recognition device acquires the point cloud data through a TOF sensor integrated in a camera, and also can acquire the point cloud data through a structured light device or a depth camera based on a triangulation technology, which is not limited in the embodiment of the present application.
In the embodiment of the application, the point cloud data of the obtained target image can be point cloud data stored in advance, and illustratively, when a picture shot by using three-dimensional image acquisition equipment is scanned, the point cloud data stored in advance in the picture can be obtained; or point cloud data acquired in real time from a distance sensor such as a TOF sensor or a laser radar sensor; illustratively, when a camera viewing interface is used to directly photograph or scan a real object in a real environment, point cloud data acquired in real time can be obtained.
It should be noted that, in the embodiment of the present application, when the TOF sensor integrated in the camera is used to acquire the point cloud data, the TOF sensor has the characteristics of being unaffected by illumination changes and object textures, and on the premise of meeting the precision requirement, the acquisition cost of the point cloud data can also be reduced.
S102, carrying out geometric shape detection on the point cloud data to obtain a target geometric shape parameter.
In the embodiment of the application, after the image recognition device acquires the point cloud data, geometric shape detection is performed on the basis of the point cloud data to obtain the target geometric shape parameters.
In the embodiment of the application, when the geometric shape to be detected is a cylinder, the image recognition device may use a random sample consensus (RANSAC) algorithm to randomly extract two points from the point cloud data, perform normal estimation and normal constraint on the two selected points respectively, and compute the cylinder parameters representable by the two selected points and their corresponding normals as the target geometric shape parameters.
In embodiments of the present application, the target geometric shape parameters may include an axis vector and a center point. In some embodiments, the axis vector is n = (nx, ny, nz), where nx, ny, nz are the three directional components of the axis vector n in three-dimensional space, and the center point is o = (ox, oy, oz), where ox, oy, oz are the three-dimensional coordinates of the center point.
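The two-point cylinder hypothesis described above can be sketched as follows. This is a minimal illustration of the idea, assuming numpy; the function and variable names are ours, not the patent's, and the exact sampling formulas used in the patent are not reproduced. Given two sampled surface points with unit normals, the axis is orthogonal to both normals, and the center and radius follow from a small least-squares system.

```python
import numpy as np

def cylinder_from_two_points(p1, n1, p2, n2):
    """Hypothesize cylinder parameters (axis vector n, center point o, radius r)
    from two sampled surface points and their unit normals, as in two-point
    RANSAC cylinder sampling. Assumes the two normals are not parallel."""
    axis = np.cross(n1, n2)                  # axis is orthogonal to both normals
    axis = axis / np.linalg.norm(axis)       # unit axis vector n = (nx, ny, nz)
    # Both p1 - t1*n1 and p2 - t2*n2 must lie on the axis line; projecting onto
    # the plane orthogonal to the axis gives a 2-unknown least-squares system.
    P = np.eye(3) - np.outer(axis, axis)     # projector orthogonal to the axis
    A = np.column_stack((P @ n1, -(P @ n2)))
    b = P @ (p1 - p2)
    (t1, _), *_ = np.linalg.lstsq(A, b, rcond=None)
    o = p1 - t1 * n1                         # center point o = (ox, oy, oz)
    return axis, o, abs(t1)                  # radius r = |t1| for unit normals
```

In a RANSAC loop, such a hypothesis would then be scored by how many point cloud points lie at distance r from the recovered axis.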
S103, acquiring a direction line of the target geometric shape on a display interface of the target image.
In the embodiment of the application, after the image recognition device obtains the target geometric shape parameters, the image recognition device obtains the direction line of the target geometric shape on the display interface of the target image.
In the embodiment of the application, when the image recognition device starts an image recognition function to acquire a target image and performs geometric shape detection on point cloud data of the target image, the image recognition device can display an RGB camera preview of the target image on a display interface so that a user or other external interaction objects can specify an axis of a target geometric shape in the preview as a direction line of the target geometric shape.
In the embodiment of the application, the image recognition device may acquire the direction line provided by the external interaction information from the display interface, and use the direction line as auxiliary information for recognizing the target geometric shape, so as to limit the result range of geometric shape detection, so that the detected target geometric shape parameter conforms to the target geometric shape specified by the direction line selected from the interaction information.
In the embodiment of the present application, based on fig. 5, the image recognition apparatus may obtain the direction line of the target geometric shape on the display interface of the target image in the manner shown in fig. 6, including S1031 to S1032, as follows:
S1031, receiving a trajectory line drawn for a target object in the target image on a display interface of the target image; the target object is an object in the target image that conforms to the target geometric shape.
In this embodiment, the image recognition apparatus may obtain the trajectory line drawn along the axis of the target object through the user's touch-and-drag operation on the target object in the RGB camera preview.
Illustratively, as shown in fig. 7, the user draws a trajectory line P along the axis of the cylinder in the display interface through a touch-and-drag operation, where P is represented by the set of trajectory points {p0, p1, p2, …, pN}, and the coordinate of the i-th trajectory point is pi = (ui, vi), with ui the abscissa and vi the ordinate of the i-th trajectory point.
In the embodiment of the application, the target object is an object which conforms to the target geometric shape in the target image. For example, when the target image contains a complex background or the target image contains a plurality of graphic objects with the same shape, the image recognition device may receive a trajectory line drawn by the user on the target image display interface to distinguish the target object from the background image and other graphic objects with the same shape.
Note that, in order to ensure that the direction line can be obtained, the trajectory line drawn for the target object in the target image is preferably drawn on the axis of the target object.
S1032, obtaining a direction line of the target geometric shape based on the cross product (vector product) of the three-dimensional coordinates of the start position and the three-dimensional coordinates of the end position of the trajectory line.
In this embodiment of the application, after the image recognition device obtains the trajectory line, the direction line of the target geometric shape may be obtained based on the start point and the end point in the trajectory line.
In the embodiment of the present application, as shown in fig. 7, the trajectory line drawn on the display interface is an irregular curve formed by many consecutive trajectory points. To extract the direction information and normalization information contained in the trajectory line, the image recognition device cross-multiplies the three-dimensional coordinates of the start position with the three-dimensional coordinates of the end position of the trajectory line, and uses the resulting vector product as the direction line of the target geometric shape.
In this embodiment, the image recognition apparatus may obtain the direction line of the target geometric shape based on a cross-product vector product of the three-dimensional coordinates of the start position and the three-dimensional coordinates of the end position in the trajectory line by using formula (1), where formula (1) is as follows:
l = p0 × pN    (1)
where p0 is the three-dimensional coordinate of the start point of the trajectory line in homogeneous coordinates, pN is the three-dimensional coordinate of the end point of the trajectory line in homogeneous coordinates, and l is the two-dimensional direction line corresponding to the trajectory line; l comprises the 3 components of the two-dimensional direction line in three-dimensional space, l = (lx, ly, lz).
In formula (1), the image recognition means cross-multiplies the three-dimensional coordinate p0 of the start position with the three-dimensional coordinate pN of the end position, and takes the resulting vector product as the direction line of the target geometric shape. As shown in fig. 8, the direction line corresponds to the line through the start position p0 and the end position pN, i.e., the vector product computed by the cross product of p0 and pN.
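The computation of formula (1) can be sketched as follows (an illustrative numpy snippet; the names are ours): the 2-D endpoints are lifted to homogeneous coordinates and their cross product yields the line through both points.

```python
import numpy as np

def direction_line(p0_uv, pN_uv):
    """Formula (1): lift the 2-D start and end trajectory points to homogeneous
    coordinates (u, v, 1) and take their cross product, giving the direction
    line l = p0 x pN = (lx, ly, lz)."""
    p0 = np.array([p0_uv[0], p0_uv[1], 1.0])
    pN = np.array([pN_uv[0], pN_uv[1], 1.0])
    return np.cross(p0, pN)  # every point (u, v) on the line satisfies l.(u, v, 1) = 0
```

For example, the line through (0, 0) and (3, 3) comes out proportional to (-3, 3, 0), i.e., the line v = u.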
In the embodiment of the present application, based on fig. 6, the image recognition apparatus may also obtain the direction line of the target geometric shape on the display interface of the target image in the manner shown in fig. 9, including S1033-S1034, as follows:
and S1033, performing principal component analysis based on the trajectory line to obtain a principal component direction and a three-dimensional coordinate mean value point of each trajectory point on the trajectory line.
In this embodiment of the application, the image recognition device may also perform Principal Component Analysis (PCA) on the drawn trajectory line, and obtain Principal Component directions of all trajectory point sets on the trajectory line and a three-dimensional coordinate mean point of each trajectory point.
In the embodiment of the present application, as shown in fig. 10, the image recognition apparatus performs PCA on the trajectory line P to obtain the PCA principal component direction v0 of the trajectory line P and the mean point p̄ of the three-dimensional coordinates of the trajectory points on the trajectory line.
S1034, obtaining a direction line of the target geometric shape based on the principal component direction and the three-dimensional coordinate mean value point.
In the embodiment of the application, the principal component direction of the track line represents the direction information of the track line, the three-dimensional coordinate mean point of each track point on the track line represents the normalized coordinate information of each track point, and the image recognition device can obtain the direction line of the target geometric shape based on the principal component direction and the three-dimensional coordinate mean point.
In the embodiment of the present application, the image recognition apparatus may obtain a direction line of the target geometric shape based on the principal component direction and the three-dimensional coordinate mean value point by using formula (2), where formula (2) is as follows:
l = (p̄ + v0) × p̄    (2)
where v0 is the principal component direction obtained in S1033 and p̄ is the three-dimensional coordinate mean point. The image recognition device cross-multiplies the sum of the three-dimensional coordinate mean point and the principal component direction with the three-dimensional coordinate mean point to obtain the two-dimensional direction line l corresponding to the trajectory line.
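Steps S1033-S1034 can be sketched as follows (our own illustrative code, not the patent's implementation): PCA gives the principal direction v0 of the trajectory points, and formula (2) forms the line through the mean point along that direction.

```python
import numpy as np

def direction_line_pca(track_points):
    """S1033-S1034 sketch: PCA principal direction v0 of the trajectory points
    plus the mean point p_bar give the direction line via formula (2):
    l = (p_bar + v0) x p_bar (homogeneous coordinates; v0 has w = 0)."""
    pts = np.asarray(track_points, dtype=float)  # shape (N, 2): (u, v) per point
    mean2 = pts.mean(axis=0)
    # principal component = top right-singular vector of the centered points
    _, _, vt = np.linalg.svd(pts - mean2, full_matrices=False)
    v0 = np.append(vt[0], 0.0)                   # a direction, not a point (w = 0)
    p_bar = np.append(mean2, 1.0)                # the mean point (w = 1)
    return np.cross(p_bar + v0, p_bar)
```

Compared with using only the start and end points, this averages out jitter over the whole drawn trajectory.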
And S104, performing validity detection on the target geometric shape parameters based on the direction line and a preset detection threshold.
In the embodiment of the application, after the image recognition device obtains the direction line, it performs validity detection on the target geometric shape parameters based on the direction line and a preset detection threshold.
In the embodiment of the present application, as described in S102, the target geometric shape parameters are obtained by extracting sample points from the point cloud data and performing geometric shape detection on them. The parameters obtained from a single detection are therefore not necessarily the parameters of the target geometric shape corresponding to the target object specified by the direction line. To improve the accuracy of image recognition, it is necessary to detect whether the geometric shape formed by the target geometric shape parameters is valid, i.e., whether it is the target geometric shape specified by the direction line.
In this embodiment of the application, based on fig. 9, the method for the image recognition device to perform validity detection on the target geometric shape parameter based on the direction line and the preset detection threshold may be as shown in fig. 11, including S1041-S1042, as follows:
S1041, performing plane fitting processing on the direction line based on a preset imaging projection matrix to obtain characteristic parameters of a three-dimensional direction plane; the three-dimensional direction plane is the plane in which the direction line lies in three-dimensional space, and the characteristic parameters comprise the three components of the plane's normal vector and an offset component.
In the embodiment of the application, because the direction line is acquired on the display interface of image acquisition, the image recognition device can perform plane fitting processing on the direction line based on the preset imaging projection matrix of the image acquisition equipment, find the plane where the direction line is located in the three-dimensional space of the camera coordinate system and use the plane as the three-dimensional direction plane, so as to further utilize the characteristic parameters of the three-dimensional direction plane to perform validity detection on the target geometric shape parameters.
In the embodiment of the present application, the characteristic parameters of the three-dimensional direction plane include three coordinate components and one normal vector component.
In this embodiment of the application, based on the preset imaging projection matrix, the image recognition device computes the three-dimensional direction plane formed by the camera optical axis and the direction line. This plane may be as shown in fig. 12, where π is the three-dimensional direction plane with 4 plane characteristic parameters, π = (a, b, c, d); a, b, c are the three components of the plane's normal vector and d is the plane's offset component.
In the embodiment of the present application, the preset imaging projection matrix may be a 3 × 4 matrix whose element values are known and determined by the parameters of the specific camera hardware and lens; the direction line may be represented as a 3 × 1 vector, and the image recognition device may perform plane fitting processing on the direction line based on the preset imaging projection matrix through formula (3), as follows:
π = Pᵀl (3)
wherein, P is a preset imaging projection matrix.
In formula (3), the image recognition apparatus transposes the preset imaging projection matrix P to obtain a 4 × 3 matrix Pᵀ, multiplies Pᵀ by the 3 × 1 vector corresponding to the direction line l, and calculates the four characteristic parameters of the three-dimensional direction plane π from the equation obtained after the multiplication.
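The back-projection of formula (3) can be sketched in NumPy. The projection matrix values below are hypothetical placeholders for calibrated camera parameters; only the relationship π = Pᵀl is taken from the description:

```python
import numpy as np

# Hypothetical intrinsic parameters; a real P comes from camera calibration.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
P = np.array([[fx, 0.0, cx, 0.0],
              [0.0, fy, cy, 0.0],
              [0.0, 0.0, 1.0, 0.0]])  # preset 3x4 imaging projection matrix

# Direction line l in homogeneous image coordinates, as a 3x1 vector.
l = np.array([1.0, -1.0, 10.0])

# Formula (3): the 3D plane containing the camera center and the line.
pi = P.T @ l  # 4-vector of characteristic parameters (a, b, c, d)
```

Any 3D point X that projects onto l satisfies l·(PX) = (Pᵀl)·X = π·X = 0, which is why πᵀ = lᵀP yields the plane swept out by the line's viewing rays; with P = K[I|0] the plane passes through the camera center at the origin, so d = 0.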
S1042, carrying out validity detection on the geometric shape parameters of the target based on the characteristic parameters of the three-dimensional direction plane and a preset detection threshold.
In the embodiment of the present application, after obtaining the three-dimensional direction plane from the plane fitting processing on the direction line, the image recognition device compares the angle and position between the three-dimensional direction plane and the geometric shape represented by the target geometric shape parameter, so as to verify whether that geometric shape lies on the three-dimensional direction plane specified by the direction line. Specifically, the image recognition device may perform validity detection on the target geometric shape parameter based on the characteristic parameters of the three-dimensional direction plane and a preset detection threshold.
In this embodiment of the application, the method in S1042 of fig. 11 for performing validity detection on the target geometric shape parameter based on the characteristic parameters of the three-dimensional direction plane and the preset detection threshold may be as shown in fig. 13, including S301 to S303, as follows:
S301, taking the absolute value of the dot product of the three coordinate components and the axis vector to obtain a first result.
In the embodiment of the application, the image recognition device may calculate an included angle between a central axis vector of the target geometric shape parameter and a three-dimensional direction plane as a first result.
In this embodiment, the image recognition apparatus may obtain a first result by taking an absolute value of a number product of three coordinate components and an axis vector according to formula (4), as follows:
θ=|(a,b,c)·n| (4)
wherein θ is the included angle between the axis vector of the target geometric shape parameter and the three-dimensional direction plane, that is, the first result. The image recognition device takes the dot product of the three coordinate components (a, b, c) of the characteristic parameters of the three-dimensional direction plane with the axis vector n in the target geometric shape parameters, and takes the absolute value of that dot product to obtain the first result θ, which characterizes the included angle between the axis vector of the target geometric shape parameter and the three-dimensional direction plane.
S302, calculating the distance from the central point to the three-dimensional direction plane according to the three-dimensional coordinates of the central point and the characteristic parameters of the three-dimensional direction plane to obtain a second result.
In this embodiment, the image recognition apparatus may calculate, according to the three-dimensional coordinate of the central point and the characteristic parameter of the three-dimensional direction plane, a distance from the central point to the three-dimensional direction plane by using formula (5), to obtain a second result, as follows:
D = |a·o_x + b·o_y + c·o_z + d| / √(a² + b² + c²) (5)
wherein o_x, o_y and o_z are the three-dimensional coordinates of the central point o. The image recognition device substitutes the three-dimensional coordinates o_x, o_y, o_z of the central point and the three-dimensional direction plane characteristic parameters (a, b, c, d) into formula (5) for calculation to obtain the second result D, where D is the distance from the central point o to the three-dimensional direction plane π(a, b, c, d).
And S303, when the first result is smaller than or equal to a preset angle threshold value and the second result is smaller than or equal to a preset distance threshold value, determining that the target geometric shape parameter is valid.
In the embodiment of the application, after the image recognition device obtains the first result and the second result, validity detection is performed on the target geometric shape parameter according to the first result and the second result, and when the first result is smaller than or equal to a preset angle threshold value and the second result is smaller than or equal to a preset distance threshold value, it is determined that the target geometric shape parameter is valid.
In this embodiment, the preset angle threshold may be a set included-angle tolerance constant θt. In some embodiments of the present application, the preset angle threshold may be 0.1, or may be another value, which is not limited in this embodiment.
In this embodiment, the preset distance threshold may be a set distance tolerance constant dt. The preset distance threshold may also be a variable related to the depth coordinate of the center point; for example, the preset distance threshold may be set to dt·oz.
In the embodiment of the present application, as shown in fig. 14, when the first result is less than or equal to the preset angle threshold, that is, θ ≤ θt, and the second result is less than or equal to the preset distance threshold, that is, D ≤ dt·oz or D ≤ dt, the image recognition device determines that the target geometric shape parameters are valid. In this case, the included angle between the axis vector of the target geometric shape and the three-dimensional direction plane is small, that is, the inclinations are consistent, and the distance from the central point of the target geometric shape to the three-dimensional direction plane is short, that is, the spatial positions are close, so the target geometric shape corresponds to the target object indicated by the direction line on the display interface.
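The S301–S303 check can be sketched as follows, assuming the plane normal (a, b, c) and the axis vector n are unit vectors; the threshold values are illustrative, not taken from the description:

```python
import numpy as np

def is_valid(pi, n, o, theta_t=0.1, d_t=0.05):
    """Validity detection per formulas (4) and (5).

    pi: characteristic parameters (a, b, c, d) of the 3D direction plane,
        with (a, b, c) assumed unit-length.
    n:  unit axis vector of the candidate geometric shape.
    o:  3D coordinates (o_x, o_y, o_z) of the shape's center point.
    theta_t, d_t: illustrative angle / distance tolerance constants.
    """
    a, b, c, d = pi
    # Formula (4): first result |(a, b, c) . n| -- small when the axis
    # lies (almost) in the three-dimensional direction plane.
    theta = abs(np.dot((a, b, c), n))
    # Formula (5): second result, point-to-plane distance of the center.
    dist = abs(a * o[0] + b * o[1] + c * o[2] + d) / np.linalg.norm((a, b, c))
    # Depth-scaled distance tolerance, as in D <= dt * oz.
    return theta <= theta_t and dist <= d_t * o[2]
```

For example, an axis lying in the plane with a center close to it passes the check, while an axis parallel to the plane normal fails it.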
And S105, when the target geometric shape parameters are detected to be effective, performing parameter iteration processing based on the target geometric shape parameters and the point cloud data to obtain target parameters.
In the embodiment of the application, when the target geometric shape parameter is detected to be effective, for the first iteration, the image recognition device takes the effective target geometric shape parameter as an initial target geometric shape parameter, two points are randomly selected again from point cloud data to perform the calculation and the effectiveness detection of the next target geometric shape parameter, and when the next target geometric shape parameter also passes the effectiveness detection, the image recognition device obtains the current effective target geometric shape parameter. The image recognition device compares the initial target geometric shape parameter obtained by the first iteration with the current effective target geometric shape parameter, if the fitting degree of the current effective target geometric shape parameter is higher, the current effective target geometric shape parameter is used as the optimal target geometric shape parameter, the calculation of the target geometric shape parameter is carried out in the subsequent iteration by the same method, the effectiveness detection and the optimal parameter updating are carried out, and when the iteration condition is finally met, the image recognition device can obtain the target parameter.
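The loop just described resembles a RANSAC-style search in which the direction-line constraint gates each candidate before the best-fit comparison. A simplified sketch under that reading, with a toy 2D line model standing in for the target geometric shape (the sampling scheme, fitness measure, and tolerances are schematic assumptions, not the patent's exact procedure):

```python
import random
import numpy as np

def ransac_with_direction(points, dir_slope, slope_tol=0.1,
                          inlier_tol=0.05, iters=200, seed=0):
    """Direction-constrained iterative fitting sketch (toy 2D line model)."""
    points = np.asarray(points, dtype=float)
    rng = random.Random(seed)
    best_params, best_inliers = None, -1
    for _ in range(iters):
        # Randomly select two points from the point cloud.
        i, j = rng.sample(range(len(points)), 2)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-9:
            continue
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        # Validity detection: candidates inconsistent with the direction
        # line are discarded instead of being iterated on further.
        if abs(slope - dir_slope) > slope_tol:
            continue
        # Keep the valid candidate with the best fitting degree.
        resid = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
        inliers = int((resid < inlier_tol).sum())
        if inliers > best_inliers:
            best_params, best_inliers = (slope, intercept), inliers
    return best_params, best_inliers
```

On data dominated by a line of slope 0.5 plus a few outliers, the search recovers parameters near that slope while never spending fitting work on candidates that violate the direction constraint.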
And S106, performing plane expansion on the surface of the target geometric shape in the target image based on the target parameters to obtain a recognition result.
In this embodiment, the image recognition device may perform plane expansion on the surface of the target geometric shape in the target image based on the target parameter, so as to obtain a recognition result.
In the embodiment of the application, the target parameter is a curved surface parameter of a target geometric shape corresponding to the target image, the image recognition device can unfold a curved surface represented by the target parameter into a plane based on the target parameter, and take the image after the plane unfolding as a recognition result.
In this application embodiment, the image recognition device can utilize the existing method to realize that the curved surface in the space is expanded to the plane tiling, and this application embodiment is no longer repeated.
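As one example of such an existing method, for the cylinder case the standard unrolling maps a surface point at angle φ and height z to plane coordinates (r·φ, z). A minimal sketch, assuming a cylinder aligned with the z-axis (the general fitted-axis case would require a change of basis first):

```python
import numpy as np

def unwrap_cylinder(points, r):
    """Unroll points on a z-aligned cylinder of radius r onto a plane.

    Each surface point (x, y, z) with x^2 + y^2 = r^2 maps to
    (u, v) = (r * atan2(y, x), z): arc length along the circumference
    becomes the horizontal plane coordinate, and height is preserved.
    """
    pts = np.asarray(points, dtype=float)
    u = r * np.arctan2(pts[:, 1], pts[:, 0])
    v = pts[:, 2]
    return np.column_stack([u, v])
```

Because arc length and height are both preserved, this mapping is an isometry of the cylindrical surface, which is what makes flattened text or patterns on the surface readable.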
It can be understood that, in the embodiment of the present application, the target geometric shape parameter may be determined with the assistance of the direction line acquired on the display interface, and only the target geometric shape parameters conforming to the direction-line constraint are retained for further iterative computation. This reduces the amount of iterative computation of the image detection algorithm, improves recognition efficiency, ensures that the finally obtained target parameters conform to the target geometric shape corresponding to the direction line, and improves the accuracy of image recognition.
In the embodiment of the present application, after S105, S107 is further included, as follows:
and S107, when the target geometric shape parameters are detected to be invalid, the parameter iteration processing is not carried out, and the target geometric shape parameters are obtained again for validity detection.
In the embodiment of the application, when the target geometric shape parameter is detected to be invalid, it is indicated that the target geometric shape represented by the target geometric shape parameter does not conform to the target object corresponding to the direction line, so that the image recognition device does not use the invalid target geometric shape parameter to perform iterative optimization, reselects the image point from the point cloud data to calculate the target geometric shape parameter, and performs validity detection.
It can be understood that when the target geometric shape parameters are detected to be invalid, the image recognition device is not suitable for further processing the invalid parameters, and the point cloud data is reselected to carry out a new round of calculation and detection on the target geometric shape parameters, so that the calculation workload is reduced, meanwhile, the target geometric shape parameters are effectively screened, and finally, the efficiency and the accuracy of image recognition are improved.
In this embodiment of the application, in S103, the method for acquiring the direction line of the target geometric shape on the display interface of the target image by the image recognition device may further include, as shown in fig. 15, S401-S402, as follows:
S401, receiving a dragging instruction of a target line segment in a target image on a display interface of the target image; the target line segment is any line segment on the target object which is in accordance with the target geometric shape in the target image.
In this embodiment of the application, the image recognition device may also pre-display any line segment on the target object that conforms to the target geometric shape in the target image as the target line segment on the display interface of the target image, and receive a drag instruction for the target line segment in the target image.
S402, based on the dragging instruction, dragging the target line segment to obtain a direction line of the target geometric shape, wherein the direction line of the target geometric shape is consistent with the axis of the target object.
In this embodiment of the application, the image recognition device may drag the target line segment based on the drag instruction, so that the target line segment is consistent with the axis of the target object and serves as the direction line of the target geometric shape.
In some embodiments of the present application, for example, when the target geometry is a cylinder, as shown in fig. 16, the image recognition device pre-displays an axis corresponding to the cylinder on the screen, and the user drags and rotates the pre-displayed axis to be consistent with the axis of the cylinder.
In this embodiment of the application, in S103, the method for acquiring the direction line of the target geometric shape on the display interface of the target image by the image recognition device may further include, as shown in fig. 17, S501-S504, as follows:
S501, performing text detection on a target object in the target image on a display interface of the target image to obtain at least one text box on the target object; the target object is an object which is in accordance with the target geometric shape in the target image.
In this embodiment, the image recognition device may further perform text detection on a target object of the target image display interface to obtain at least one text box on the target object.
In this embodiment of the application, compared to a complex pattern shape, at least one text box is a regular shape on the target object, and a direction line corresponding to the target object may be generated based on the at least one text box.
In the embodiment of the application, the image recognition device may perform text detection on the target object by using a text detection algorithm such as the SegLink model or EAST (An Efficient and Accurate Scene Text Detector), and extract the direction of the text to obtain at least one text box.
S502, receiving a trigger instruction for the target object, and acquiring a trigger position from the trigger instruction.
In the embodiment of the present application, as shown in fig. 18-1, the image recognition apparatus receives a trigger instruction for a target object on a display interface, and acquires a display interface coordinate corresponding to an operation in the trigger instruction as a trigger position.
In this embodiment of the present application, the trigger instruction may be a click instruction or other forms of screen operation instructions, and this embodiment of the present application is not limited.
S503, in at least one text box, determining a target text box in the preset area range of the trigger position.
In the embodiment of the application, after the image recognition device obtains the trigger position, in at least one text box, the text box in the preset area range of the trigger position is determined as the target text box.
In some embodiments of the present application, the image recognition device may also determine a text box closest to the trigger position in the at least one text box as the target text box.
S504, taking the vertical direction of the target text box as a direction line of the target geometric shape.
In the embodiment of the present application, as shown in fig. 18-2, since the target text box is the text box closest to the trigger position, the target object in which the target text box is located is most likely the target object specified by the trigger position, and therefore the image recognition apparatus takes the vertical direction of the target text box as the direction line.
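One way to read off the vertical direction of the target text box is to take the direction from the midpoint of its bottom edge to the midpoint of its top edge. The corner ordering below is an assumption (detectors such as EAST typically return a quadrilateral, and this sketch uses mathematical rather than image y-down coordinates):

```python
import numpy as np

def textbox_vertical_direction(quad):
    """Unit vector along the vertical axis of a text-box quadrilateral.

    quad: 4x2 array of corners, assumed ordered top-left, top-right,
          bottom-right, bottom-left.
    """
    quad = np.asarray(quad, dtype=float)
    top_mid = (quad[0] + quad[1]) / 2.0       # midpoint of top edge
    bottom_mid = (quad[3] + quad[2]) / 2.0    # midpoint of bottom edge
    d = top_mid - bottom_mid
    return d / np.linalg.norm(d)
```

For a rotated text box the same construction yields a tilted unit vector, so the direction line follows the text orientation rather than the screen axes.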
It can be understood that, in the embodiment of the application, the image recognition device may obtain the direction line of the target geometric shape through multiple interaction modes, so that the image recognition range is limited through an interaction method, and the efficiency and accuracy of image recognition are improved.
The embodiment of the present application provides an image recognition apparatus 5, as shown in fig. 19, the image recognition apparatus 5 includes an acquisition unit 100, a shape detection unit 200, an interaction unit 300, an effectiveness detection unit 400, an iteration unit 500, and a recognition unit 600, wherein,
the acquisition unit 100 is configured to acquire a target image and acquire point cloud data of the target image;
the shape detection unit 200 is configured to perform geometric shape detection on the point cloud data to obtain a target geometric shape parameter;
the interaction unit 300 is configured to obtain a direction line of a target geometric shape on a display interface of the target image;
the validity detection unit 400 is configured to perform validity detection on the target geometric shape parameter based on the direction line and a preset detection threshold;
the iteration unit 500 is configured to, when it is detected that the target geometric shape parameter is valid, perform parameter iteration processing based on the target geometric shape parameter and the point cloud data to obtain a target parameter;
the identifying unit 600 is configured to perform plane expansion on the surface of the target geometric shape in the target image based on the target parameter, so as to obtain an identification result.
In some embodiments of the present application, the validity detecting unit 400 is further configured to perform plane fitting processing on the direction line based on a preset imaging projection matrix to obtain a characteristic parameter of a three-dimensional direction plane; the three-dimensional direction plane is a plane where the direction line is located in a three-dimensional space, and the characteristic parameter comprises three coordinate components and a normal vector component;
and carrying out effectiveness detection on the target geometric shape parameters based on the characteristic parameters of the three-dimensional direction plane and the preset detection threshold.
In some embodiments of the present application, the interaction unit 300 is further configured to receive, on the display interface of the target image, a trajectory line drawn for a target object in the target image; wherein the target object is an object in the target image which conforms to a target geometric shape; obtain a direction line of the target geometric shape based on the cross product (vector product) of the three-dimensional coordinates of the starting position and the three-dimensional coordinates of the end position in the trajectory line; or, based on the trajectory line, perform principal component analysis to obtain a principal component direction and a three-dimensional coordinate mean value point of the trajectory points on the trajectory line; and obtain a direction line of the target geometric shape based on the principal component direction and the three-dimensional coordinate mean value point.
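The principal-component branch can be sketched as: stack the 3D trajectory points, take their mean as the anchor point, and use the leading eigenvector of their covariance matrix as the direction. This is a standard PCA computation, not the patent's exact procedure:

```python
import numpy as np

def direction_from_trajectory(traj_points):
    """Direction line from trajectory points via principal component analysis.

    Returns the three-dimensional coordinate mean point of the trajectory
    points and the principal component direction: the unit eigenvector of
    their covariance matrix with the largest eigenvalue.
    """
    pts = np.asarray(traj_points, dtype=float)
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return mean, eigvecs[:, -1]
```

The mean point anchors the direction line in space, and the eigenvector's sign is arbitrary, so downstream comparisons (such as the dot product in formula (4)) should take absolute values.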
In some embodiments of the present application, the interaction unit 300 is further configured to receive, on the display interface of the target image, a drag instruction for a target line segment in the target image; the target line segment is any line segment on a target object which accords with a target geometric shape in the target image; and dragging the target line segment based on the dragging instruction to obtain a direction line of the target geometric shape, wherein the direction line of the target geometric shape is consistent with the axis of the target object.
In some embodiments of the present application, the interaction unit 300 is further configured to perform text detection on a target object in the target image on a display interface of the target image, so as to obtain at least one text box on the target object; wherein the target object is an object in the target image which conforms to a target geometric shape; receiving a trigger instruction of the target object, and acquiring a trigger position from the trigger instruction; and in the at least one text box, determining a target text box within a preset area range of the trigger position; and taking the vertical direction of the target text box as a direction line of the target geometric shape.
In some embodiments of the present application, the validity detecting unit 400 is further configured to transpose the preset imaging projection matrix, and use a product of the transposed preset imaging projection matrix and a direction vector included in the direction line as a feature parameter of the three-dimensional direction plane.
In some embodiments of the present application, the target geometry parameters include an axis vector and a center point, the axis vector and the center point corresponding one-to-one; the preset detection threshold comprises a preset angle threshold and a preset distance threshold; the validity detecting unit 400 is further configured to take an absolute value of a number product of the three coordinate components and the axial vector to obtain a first result; calculating the distance from the central point to the three-dimensional direction plane according to the three-dimensional coordinate of the central point and the characteristic parameters of the three-dimensional direction plane to obtain a second result; and when the first result is smaller than or equal to the preset angle threshold and the second result is smaller than or equal to the preset distance threshold, determining that the target geometric shape parameter is valid.
In some embodiments of the present application, the validity detecting unit 400 is further configured to, when it is detected that the target geometry parameter is invalid, not perform parameter iteration, and re-acquire the target geometry parameter to perform validity detection.
In some embodiments of the present application, the acquisition unit 100 is further configured to acquire point cloud data through a time-of-flight sensor.
An embodiment of the present application provides an image recognition apparatus 6, as shown in fig. 20, where the image recognition apparatus 6 includes: a processor 125, a memory 126, and a communication bus 127, the memory 126 in communication with the processor 125 via the communication bus 127, the memory 126 storing one or more programs executable by the processor 125, the processor 125 performing the image recognition method as in any one of the above when the one or more programs are executed.
Embodiments of the present application provide a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement an image recognition method as described in any one of the above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (12)

1. An image recognition method, comprising:
acquiring a target image and acquiring point cloud data of the target image;
carrying out geometric shape detection on the point cloud data to obtain a target geometric shape parameter;
acquiring a direction line of a target geometric shape on a display interface of the target image;
based on the direction line and a preset detection threshold value, carrying out validity detection on the target geometric shape parameter;
when the target geometric shape parameters are detected to be effective, performing parameter iteration processing based on the target geometric shape parameters and the point cloud data to obtain target parameters;
and performing plane expansion on the surface of the target geometric shape in the target image based on the target parameters to obtain a recognition result.
2. The method according to claim 1, wherein the performing validity detection on the target geometric shape parameter based on the direction line and a preset detection threshold comprises:
performing plane fitting processing on the direction line based on a preset imaging projection matrix to obtain characteristic parameters of a three-dimensional direction plane; the three-dimensional direction plane is a plane where the direction line is located in a three-dimensional space, and the characteristic parameter comprises three coordinate components and a normal vector component;
and carrying out effectiveness detection on the target geometric shape parameters based on the characteristic parameters of the three-dimensional direction plane and the preset detection threshold.
3. The method according to claim 1 or 2, wherein the obtaining of the direction line of the target geometric shape on the display interface of the target image comprises:
receiving a trajectory line drawn for a target object in the target image on a display interface of the target image; wherein the target object is an object in the target image which conforms to a target geometric shape;
obtaining a direction line of the target geometric shape based on the cross product (vector product) of the three-dimensional coordinates of the starting position and the three-dimensional coordinates of the end position in the trajectory line;
or, based on the trajectory line, performing principal component analysis to obtain a principal component direction and a three-dimensional coordinate mean value point of each trajectory point on the trajectory line;
and obtaining a direction line of the target geometric shape based on the principal component direction and the three-dimensional coordinate mean value point.
4. The method according to claim 1 or 2, wherein the obtaining of the direction line of the target geometric shape on the display interface of the target image comprises:
receiving a dragging instruction of a target line segment in the target image on a display interface of the target image; the target line segment is any line segment on a target object which accords with a target geometric shape in the target image;
and dragging the target line segment based on the dragging instruction to obtain a direction line of the target geometric shape, wherein the direction line of the target geometric shape is consistent with the axis of the target object.
5. The method according to claim 1 or 2, wherein the obtaining of the direction line of the target geometric shape on the display interface of the target image comprises:
performing text detection on a target object in the target image on a display interface of the target image to obtain at least one text box on the target object; wherein the target object is an object in the target image which conforms to a target geometric shape;
receiving a trigger instruction for the target object, and acquiring a trigger position from the trigger instruction;
determining a target text box in the preset area range of the trigger position in the at least one text box;
and taking the vertical direction of the target text box as a direction line of the target geometric shape.
6. The method according to claim 2, wherein performing plane fitting processing on the direction line based on a preset imaging projection matrix to obtain characteristic parameters of a three-dimensional direction plane comprises:
and transposing the preset imaging projection matrix, and taking the product of the transposed preset imaging projection matrix and the direction vector contained in the direction line as the characteristic parameter of the three-dimensional direction plane.
7. The method of claim 2, wherein the target geometry parameters include an axis vector and a center point, the axis vector and the center point having a one-to-one correspondence; the preset detection threshold comprises a preset angle threshold and a preset distance threshold; the detecting the effectiveness of the target geometric shape parameter based on the characteristic parameter of the three-dimensional direction plane and the preset detection threshold comprises:
taking an absolute value of the number product of the three coordinate components and the axial vector to obtain a first result;
calculating the distance from the central point to the three-dimensional direction plane according to the three-dimensional coordinate of the central point and the characteristic parameters of the three-dimensional direction plane to obtain a second result;
and when the first result is smaller than or equal to the preset angle threshold and the second result is smaller than or equal to the preset distance threshold, determining that the target geometric shape parameter is valid.
8. The method according to claim 1, wherein after the validity detection of the target geometry parameter based on the direction line and a preset detection threshold, the method further comprises:
and when the target geometric shape parameters are detected to be invalid, skipping the parameter iteration processing, and reacquiring the target geometric shape parameters for validity detection.
9. The method of claim 1, wherein the obtaining point cloud data for the target image comprises:
point cloud data is acquired by a time-of-flight sensor.
10. An image recognition apparatus, comprising an acquisition unit, a shape detection unit, an interaction unit, a validity detection unit, an iteration unit and a recognition unit, wherein:
the acquisition unit is configured to acquire a target image and acquire point cloud data of the target image;
the shape detection unit is configured to perform geometric shape detection on the point cloud data to obtain target geometric shape parameters;
the interaction unit is configured to acquire a direction line of a target geometric shape on a display interface of the target image;
the validity detection unit is configured to detect the validity of the target geometric shape parameters based on the direction line and a preset detection threshold;
the iteration unit is configured to, when the target geometric shape parameters are detected to be valid, perform parameter iteration processing based on the target geometric shape parameters and the point cloud data to obtain target parameters; and
the recognition unit is configured to perform plane expansion on a surface of the target geometric shape in the target image based on the target parameters to obtain a recognition result.
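The control flow that claims 8 and 10 describe can be sketched as a pipeline in which each unit is injected as a callable; every function name below is a hypothetical stand-in for the corresponding unit, since the patent specifies behavior rather than an API:

```python
def recognize(image, get_point_cloud, detect_shape, get_direction_line,
              is_valid, refine, unfold, max_iters=10):
    """Illustrative sketch of the recognition pipeline; all callables
    are hypothetical stand-ins for the patent's units."""
    cloud = get_point_cloud(image)          # acquisition unit
    params = detect_shape(cloud)            # shape detection unit
    line = get_direction_line(image)        # interaction unit
    for _ in range(max_iters):
        if is_valid(params, line):          # validity detection unit
            target = refine(params, cloud)  # iteration unit: refine parameters
            return unfold(image, target)    # recognition unit: plane expansion
        # Claim 8: on invalid parameters, re-detect and test validity again.
        params = detect_shape(cloud)
    return None  # no valid geometry found within the iteration budget


# Usage example with stub callables: detection fails validity once,
# then succeeds on the re-detected parameters.
calls = {"n": 0}

def detect(cloud):
    calls["n"] += 1
    return calls["n"]

result = recognize(
    "img",
    get_point_cloud=lambda im: "cloud",
    detect_shape=detect,
    get_direction_line=lambda im: "line",
    is_valid=lambda p, line: p >= 2,
    refine=lambda p, cloud: p * 10,
    unfold=lambda im, t: f"result-{t}",
)
```

The `max_iters` bound is an added safeguard, not part of the claims, so the loop terminates even if detection never produces valid parameters.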
11. An image recognition apparatus, comprising a processor, a memory, and a communication bus, wherein the memory communicates with the processor through the communication bus, the memory stores one or more programs executable by the processor, and when the one or more programs are executed, the processor performs the method of any one of claims 1 to 9.
12. A computer-readable storage medium storing one or more programs which are executable by one or more processors to perform the method of any one of claims 1 to 9.
CN201911252746.0A 2019-12-09 2019-12-09 Image identification method and device and computer readable storage medium Pending CN113033248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252746.0A CN113033248A (en) 2019-12-09 2019-12-09 Image identification method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911252746.0A CN113033248A (en) 2019-12-09 2019-12-09 Image identification method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113033248A true CN113033248A (en) 2021-06-25

Family

ID=76451003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911252746.0A Pending CN113033248A (en) 2019-12-09 2019-12-09 Image identification method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113033248A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658037A (en) * 2021-08-24 2021-11-16 凌云光技术股份有限公司 Method and device for converting depth image into gray image
CN113658037B (en) * 2021-08-24 2024-05-14 凌云光技术股份有限公司 Method and device for converting depth image into gray level image
WO2023207651A1 (en) * 2022-04-29 2023-11-02 华为技术有限公司 Cover pulling method of robot, and robot

Similar Documents

Publication Publication Date Title
CN108764048B (en) Face key point detection method and device
US20210190497A1 (en) Simultaneous location and mapping (slam) using dual event cameras
CN107810522B (en) Real-time, model-based object detection and pose estimation
CN110807350B (en) System and method for scan-matching oriented visual SLAM
US9208607B2 (en) Apparatus and method of producing 3D model
EP2430588B1 (en) Object recognition method, object recognition apparatus, and autonomous mobile robot
EP2770783B1 (en) A wearable information system having at least one camera
US9560273B2 (en) Wearable information system having at least one camera
CN109033989B (en) Target identification method and device based on three-dimensional point cloud and storage medium
CN110176075B (en) System and method for simultaneous consideration of edges and normals in image features through a vision system
JP6054831B2 (en) Image processing apparatus, image processing method, and image processing program
JP2008298631A (en) Map change detection device and method, and program
US11842514B1 (en) Determining a pose of an object from rgb-d images
JP7156515B2 (en) Point cloud annotation device, method and program
CN114119864A (en) Positioning method and device based on three-dimensional reconstruction and point cloud matching
CN107949851B (en) Fast and robust identification of end points of objects within a scene
JP6172432B2 (en) Subject identification device, subject identification method, and subject identification program
JP6817742B2 (en) Information processing device and its control method
CN111382637A (en) Pedestrian detection tracking method, device, terminal equipment and medium
CN113033248A (en) Image identification method and device and computer readable storage medium
CN109741306B (en) Image processing method applied to dangerous chemical storehouse stacking
JP2012220271A (en) Attitude recognition apparatus, attitude recognition method, program and recording medium
JP5953166B2 (en) Image processing apparatus and program
CN116721156A (en) Workpiece position positioning method, device, computer equipment and storage medium
JP2015045919A (en) Image recognition method and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination