CN115797442A - Simulation image reinjection method of target position and related equipment thereof

Publication number: CN115797442A (granted publication: CN115797442B)
Application number: CN202211538620.1A
Applicant / Current assignee: Kunyi Electronic Technology Shanghai Co Ltd
Inventors: Fang Zhigang (方志刚), Chen Qi (陈奇)
Original language: Chinese (zh)
Legal status: Granted; active
Classification: Image Processing (AREA)
Abstract

The application relates to a method for reinjecting a simulation image of a target position and related equipment thereof. The method includes the following steps: acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle; determining at least one path of second camera adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera; forming a simulation image matched with the target position based on the real image and the depth map corresponding to the at least one path of second camera and the first camera data corresponding to the target camera; and reinjecting the simulation image to the data processing unit of the second vehicle. By using the first camera data of the first vehicle to form a simulation image at the target position of the second vehicle, the application further enriches the second camera data, enhances its adaptability, and expands its application range.

Description

Simulation image reinjection method of target position and related equipment thereof
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method for reinjecting a simulation image of a target position and a related device thereof.
Background
At present, unmanned driving technology is developing rapidly, and it depends most heavily on data collected under real conditions. In the algorithm development process, data collected under actual conditions is reinjected (i.e., injected back) into the controller, so that the algorithm effect can be verified and the efficiency of algorithm development and verification can be improved.
Specifically, an algorithm (e.g., a machine-learned neural network) is provided in the controller. After algorithm development is complete, the onboard camera may inject the captured video data into the controller. The algorithm of the controller can process the collected video data to obtain an output result, thereby realizing various functions such as target identification.
In the algorithm development process, training, verification and other work need to be performed on the algorithm (such as a neural network), and various video data need to be injected into the algorithm of the controller. The data source of the video data may be actually acquired video data or simulated video data. However, in the prior art, when video data needs to be injected into the controller of a vehicle of a new vehicle type, either a camera of the new vehicle type must actually acquire real video data, or simulated video data must be specially produced for the camera of the new vehicle type. As a result, the video data available for injection into the vehicle of the new vehicle type is limited, and good training and verification effects cannot be achieved.
Disclosure of Invention
In view of this, the present application provides a method for reinjecting a simulation image of a target position and related equipment thereof, which can form a simulation image at the target position of a second vehicle by using the first camera data of a first vehicle, thereby further enriching the second camera data of the second vehicle, enhancing the adaptability of the second camera data, and expanding the application scope of the second camera data.
According to an aspect of the present application, there is provided a simulation image reinjection method of a target position, the method including: acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprise an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position is different from the position of each second camera, and the second camera data comprise the external parameter matrix of each second camera and a real image shot by each second camera; determining at least one path of second camera adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera; forming a simulation image matched with the target position based on the real image and the depth map corresponding to the at least one path of second camera and the first camera data corresponding to the target camera; and reinjecting the simulation image to a data processing unit of the second vehicle.
According to still another aspect of the present application, there is provided a simulation image reinjection apparatus of a target camera, including: the camera data acquisition module is used for acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprise an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position is different from the position of each second camera, and the second camera data comprise an external parameter matrix of the second camera and a real image shot by the second camera; the camera determining module is used for determining at least one path of second camera adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera; the image forming module is used for forming a simulation image matched with the target position based on a real image and a depth map which are respectively corresponding to the at least one path of second camera and first camera data corresponding to the target camera; and the reinjection module is used for reinjecting the simulation image to a data processing unit of the second vehicle.
In the present application, first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle are acquired; at least one path of second camera adjacent to a target position is determined according to the external parameter matrix of the target camera and the external parameter matrix of each second camera; a simulation image matched with the target position is formed based on the real image and the depth map corresponding to the at least one path of second camera and the first camera data corresponding to the target camera; and finally, the simulation image is reinjected to the data processing unit of the second vehicle. In this way, the first camera data of the first vehicle is used to form a simulation image at the target position of the second vehicle, which further enriches the second camera data, enhances its adaptability, and expands its application range.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a simulation image reinjection method of a target location according to an embodiment of the present application.
FIG. 2 shows a schematic view of a target location and a target camera of an embodiment of the application.
Fig. 3 shows a block diagram of a simulation image reinjection apparatus of a target camera according to an embodiment of the present application.
Fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "center," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," and the like indicate orientations or positional relationships based on those shown in the drawings, are used only for convenience in describing the present application and to simplify the description, and are not intended to indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the present application. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be construed broadly: a connection may, for example, be a fixed connection, a removable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intervening medium; and it may be an internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
The following disclosure provides many different embodiments or examples for implementing different features of the application. In order to simplify the disclosure of the present application, specific example components and arrangements are described below. Of course, they are merely examples and are not intended to limit the present application. Moreover, the present application may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, examples of various specific processes and materials are provided herein, but one of ordinary skill in the art may recognize applications of other processes and/or use of other materials. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
Fig. 1 shows a flowchart of a simulation image reinjection method of a target position according to an embodiment of the present application. As shown in fig. 1, the method for injecting the simulation image of the target position of the present application may include:
step S1: acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprise an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position is different from the position of each second camera, and the second camera data comprise the external parameter matrix of the second camera and a real image shot by the second camera;
the plurality of first cameras may be mounted on a first vehicle, and there may be one or more first vehicles. Correspondingly, the plurality of second cameras may be mounted on a second vehicle, and there may be one or more second vehicles. It is to be understood that the application is not limited to the number of first vehicles and second vehicles.
The first camera data may be a data set, and the first camera data may include parameters such as an extrinsic parameter matrix of each first camera and an intrinsic parameter matrix of each first camera. Similar to the first camera data, the second camera data may be a data set, and the second camera data may include parameters such as an extrinsic parameter matrix of each of the second cameras and an intrinsic parameter matrix of each of the second cameras. It should be noted that the first camera data may further include original images captured by the first cameras, and the second camera data may further include real images captured by the second cameras.
For example, the first vehicle may be an old vehicle type, the second vehicle may be a new vehicle type, and the first camera data and the second camera data may be stored in the corresponding database.
The external parameter matrix of the target camera and the internal parameter matrix of the target camera may be obtained simultaneously or sequentially. The external parameter matrix may include rotation parameters and translation parameters, which are used to convert coordinate points of the world coordinate system into coordinate points of the camera coordinate system; the internal parameter matrix may include parameters of the camera itself, such as the camera focal length, which are used to convert coordinate points of the camera coordinate system into coordinate points of the pixel coordinate system. The internal parameter matrix is typically fixed, while the external parameter matrix is related to the position, orientation and other parameters of the camera.
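For concreteness, the following Python/NumPy sketch shows one conventional way such matrices are organized; the numeric values, the 3x4 layout of the external parameter matrix and the variable names are illustrative assumptions and are not taken from the patent.

```python
import numpy as np

# Hypothetical external parameters of a camera: rotation r11..r33 and
# translation tx, ty, tz (world coordinate system -> camera coordinate system).
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])          # rotation part (identity here, for illustration only)
t = np.array([0.5, -0.2, 1.6])           # translation part in metres (assumed values)

# 3x4 external parameter matrix [R | t]
extrinsic = np.hstack([R, t.reshape(3, 1)])

# Hypothetical internal parameters: focal lengths fx, fy and principal point cx, cy (in pixels).
fx, fy, cx, cy = 1000.0, 1000.0, 960.0, 540.0
intrinsic = np.array([[fx, 0.0, cx],
                      [0.0, fy, cy],
                      [0.0, 0.0, 1.0]])

# A world point is first rotated/translated by the extrinsics,
# then projected onto the image plane by the intrinsics.
```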
Fig. 2 shows a schematic view of a target position and a target camera of an embodiment of the present application.
As shown in fig. 2, a plurality of first cameras may be mounted on a first vehicle and a plurality of second cameras may be mounted on a second vehicle. One of the plurality of first cameras may be designated as the target camera; this target camera exists on the first vehicle, while no corresponding camera exists at the same position on the second vehicle.
Specifically, the position of the target camera is the same as a preset target position on the second vehicle, and the target position is different from the position of each second camera. In the embodiment of the present application, no camera exists on the second vehicle at the same position as the target camera of the first vehicle. Thus, a number of second cameras adjacent to the target position may be utilized to generate the simulation image that would be captured if a second camera were installed at the target position.
Because the positions of the first cameras mounted on the first vehicle differ from the positions of the second cameras mounted on the second vehicle, or because their models differ, the viewing angles of the first cameras differ from those of the second cameras. The camera data of the first vehicle therefore cannot be used directly for the second vehicle and must be converted with the method of the present application before being used for the second vehicle. In the case where the position of a second camera is the same as the position of a first camera, the first camera data corresponding to that first camera can be directly transplanted to the second camera having the same position.
Step S2: determining at least one path of second camera adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera;
in the embodiment of the present application, since the second vehicle does not have a camera at the same position as the target camera, it is necessary to use at least one path of second camera near the target position of the second vehicle to simulate a simulation image of the target position, so as to approximate the image that would be captured after a camera is installed at the target position. It should be noted that, since the external parameter matrix reflects the viewing angle information of a camera, the internal parameter matrix does not need to be involved in the process of determining the at least one path of second camera adjacent to the target position, which improves the efficiency of selecting the at least one path of second camera.
Further, determining at least one second camera adjacent to the target location according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras may include:
step S21: extracting a first translation parameter, a second translation parameter and a third translation parameter corresponding to the target camera through the external parameter matrix of the target camera;
for example, C1, C2 N And N first cameras are arranged in total, wherein N is a positive integer. For each first camera, an extrinsic parameter matrix for the corresponding first camera may be acquired. Illustratively, for the target camera C1, the first translation parameter, the second translation parameter, and the third translation parameter in the extrinsic parameter matrix corresponding to the target camera may be respectively represented as t x(C1) 、t y(C1) 、t z(C1) And translation parameters representing the transformation of the target object shot by the target camera from the world coordinate system to the camera coordinate system.
Step S22: respectively extracting a fourth translation parameter, a fifth translation parameter and a sixth translation parameter corresponding to each second camera through an external parameter matrix of each second camera;
wherein the fourth translation parameter, the fifth translation parameter and the sixth translation parameter may respectively be t_x, t_y and t_z in the external parameter matrix of each second camera; they are the translation parameters representing the transformation of the target object shot by that second camera from the world coordinate system to the camera coordinate system.
Step S23: respectively calculating Euclidean distances between the second cameras and the target camera based on a fourth translation parameter, a fifth translation parameter and a sixth translation parameter corresponding to the second cameras and a first translation parameter, a second translation parameter and a third translation parameter corresponding to the target camera to obtain a plurality of camera distances corresponding to the second cameras;
for example, for the target camera C1 on the first vehicle and a second camera on the second vehicle, the Euclidean distance corresponding to that second camera may be obtained from the translation parameters of the two cameras. Specifically, the difference between the first translation parameter and the fourth translation parameter is squared, the difference between the second translation parameter and the fifth translation parameter is squared, the difference between the third translation parameter and the sixth translation parameter is squared, the three squared differences are summed, and the square root of the sum is taken to obtain the camera distance corresponding to that second camera. Similarly, the camera distances corresponding to the other second cameras can be obtained.
Step S24: and selecting at least one path of second camera lower than a preset camera distance threshold value as at least one path of second camera adjacent to the target position.
Wherein the camera distance threshold can be set as required. The at least one path of second camera may be n paths of second cameras, where n is a positive integer. Optionally, n may be set to 2 or 3; that is, the 2 or 3 paths of second cameras closest to the target position may be selected as the reference.
By determining at least one path of second camera close to the target position by using the external parameter matrix of the target camera and the external parameter matrix of each second camera, the accuracy of camera data migration of different vehicles can be improved, and the calculated data amount is reduced.
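A minimal sketch of the selection procedure of steps S21 to S24 is given below, assuming that the translation parameters of the target camera and of every second camera are expressed in a common reference frame; the helper name select_adjacent_cameras, the example coordinates and the threshold value are illustrative assumptions rather than values disclosed by the patent.

```python
import numpy as np

def select_adjacent_cameras(target_t, second_ts, distance_threshold):
    """Return indices of second cameras whose Euclidean distance to the
    target camera's translation vector is below the threshold (steps S21-S24)."""
    target_t = np.asarray(target_t, dtype=float)           # (t_x, t_y, t_z) of the target camera
    distances = []
    for idx, t in enumerate(second_ts):                     # (t_x, t_y, t_z) of each second camera
        d = np.sqrt(np.sum((target_t - np.asarray(t, dtype=float)) ** 2))
        distances.append((idx, d))
    return [idx for idx, d in distances if d < distance_threshold]

# Assumed example values (metres): one target camera and three second cameras.
target_translation = (1.8, 0.6, 1.4)
second_translations = [(1.9, 0.5, 1.4), (1.0, -0.6, 1.4), (2.0, 0.8, 1.3)]
adjacent = select_adjacent_cameras(target_translation, second_translations,
                                   distance_threshold=0.5)
print(adjacent)   # e.g. [0, 2]: the two second cameras closest to the target position
```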
And step S3: forming a simulation image matched with the target position based on the real image and the depth map corresponding to the at least one path of second camera and the first camera data corresponding to the target camera;
further, before forming a simulated image matched with the target position based on the real image, the depth map and the first camera data corresponding to the target camera, which correspond to the at least one second camera, the method for reinjecting the simulated image of the target position includes:
step S301: and processing the real image shot by the at least one path of second camera based on an SFM algorithm to generate a depth map corresponding to the at least one path of second camera.
The second camera data may be a data set including a plurality of real images captured by the second cameras. In step S301, the at least one second camera may be found in the second camera data according to the number of the second camera, and then a plurality of real images captured by the at least one second camera may be directly extracted.
The depth map may be used to characterize a distance between a target object photographed by a target camera and the target camera, i.e., target depth information. For example, for the second vehicle, a target camera is installed in front of the second vehicle on the left, the linear distance from the traffic light shot by the target camera to the camera is 10m, and the linear distance from the pedestrian shot by the target camera to the target camera is 15m, and both the traffic light and the pedestrian can be used as the target object. The target object may be one or more, and the distance between the target object and the second vehicle camera may be specifically measured based on the geometric center of the target object and the optical center of the camera.
In practical applications, the coordinates of the target object photographed by the target camera may be calibrated using the world coordinate system. The origin of the world coordinate system is independent of the specific position of the target camera and can be selected according to actual needs. In general, the coordinates of the target object in the world coordinate system cannot be projected directly to the image of the two-dimensional plane, and further conversion is required. For example, the external parameter matrix may be used to convert coordinates in the world coordinate system into the camera coordinate system, and then the internal parameter matrix may be used to convert corresponding coordinates in the camera coordinate system into the pixel coordinate system. For example, the origin of the camera coordinate system may be the optical center of the camera, and the origin of the pixel coordinate system may be located at the upper left corner of the captured image.
The depth information can be obtained through radar detection and also can be obtained through a binocular vision mode. It can be understood that there are various ways to obtain the depth information, and the application is not limited thereto.
The SFM algorithm, that is, the Structure From Motion algorithm, is capable of reconstructing a three-dimensional structure from a series of two-dimensional image sequences containing visual motion information.
Illustratively, based on the SFM algorithm, two adjacent real images of the plurality of real images may be selected for calculation; feature points in the different real images are matched, the corresponding fundamental matrix and essential matrix are calculated, and the depth maps corresponding to the at least one path of second camera are reconstructed from the fundamental matrix and the essential matrix. Each path of second camera in the at least one path of second camera corresponds to one depth map. For any one path of second camera, the depth map corresponding to that second camera may include the distance between the target object and that second camera.
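A compressed sketch of how depth could be obtained from two adjacent real images with an SFM-style pipeline is shown below, assuming OpenCV is available and that the internal parameter matrix K of the second camera is known. The calls used (ORB matching, essential matrix estimation, pose recovery, triangulation) are standard OpenCV functions, but this pipeline is only one possible realization of step S301, not the patent's prescribed implementation; the recovered depths are up to an unknown scale unless additional information is used.

```python
import cv2
import numpy as np

def sparse_depth_from_two_views(img1, img2, K):
    """Estimate sparse depths for matched feature points between two adjacent
    real images of the same second camera (one possible form of step S301)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Essential matrix and relative pose between the two views.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier matches to obtain 3D points, then read off depth (Z).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return pts1[inl], pts3d[:, 2]      # pixel positions and their depths (up to scale)
```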
Further, forming a simulation image matched with the target position based on the real image, the depth map and the first camera data corresponding to the target camera, which are respectively corresponding to the at least one second camera, includes:
step S31: acquiring an internal parameter matrix of each second camera in the at least one path of second cameras;
in step S31, an internal parameter matrix of each second camera in the at least one path of second cameras may be extracted from the second camera data.
Step S32: based on the external parameter matrix of each second camera in the at least one path of second cameras, the internal parameter matrix of each second camera, the depth map and the real image corresponding to the at least one path of second cameras, projecting the pixel points of the real image from the pixel coordinate system to the world coordinate system to obtain a plurality of first pixel points under the world coordinate system;
the target object may include a plurality of feature points, and each feature point may correspond to a pixel point on an image photographed based on the target object. Each feature point has a first pixel point in the world coordinate system. In practical applications, the feature points corresponding to each of the depth maps may be translated to a world coordinate system.
Because of the difference in viewing angle between the first camera and the target camera, and because pixel points in the pixel coordinate system cannot be transformed directly into coordinate points in the world coordinate system, each depth map is used to lift the corresponding pixel points into the world coordinate system; the projection of the pixel points of the real images from the pixel coordinate system to the world coordinate system is thus realized through each depth map. In other words, without a depth map, the projection of the pixel points of the plurality of real images from the pixel coordinate system to the world coordinate system cannot be realized; with the external parameter matrix of each second camera in the at least one path of second cameras, the internal parameter matrix of each second camera, and the depth map corresponding to each second camera, the pixel points of the plurality of real images can be projected from the pixel coordinate system to the world coordinate system.
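The inverse transformation of step S32 can be summarized with the following sketch, assuming a pinhole model without distortion: a pixel (u, v) with depth d taken from the depth map is first lifted into the camera coordinate system with the inverse of the internal parameter matrix and then moved into the world coordinate system with the inverse of the external parameter transform. The function name pixels_to_world and the variable names are illustrative assumptions.

```python
import numpy as np

def pixels_to_world(pixels_uv, depths, K, R, t):
    """Project pixel points of a real image back to the world coordinate system
    (the inverse transformation described for step S32).

    pixels_uv : (N, 2) pixel coordinates from the real image of a second camera
    depths    : (N,)   depth of each pixel taken from that camera's depth map
    K         : (3, 3) internal parameter matrix of the second camera
    R, t      : rotation (3, 3) and translation (3,) from its external parameter matrix
    """
    uv1 = np.hstack([pixels_uv, np.ones((len(pixels_uv), 1))])     # homogeneous pixel coordinates
    cam_pts = (np.linalg.inv(K) @ uv1.T) * depths                  # camera coordinate system
    world_pts = np.linalg.inv(R) @ (cam_pts - t.reshape(3, 1))     # world coordinate system
    return world_pts.T                                             # (N, 3) first pixel points
```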
Step S33: and forming a simulation image matched with the target position based on the plurality of first pixel points and the first camera data corresponding to the target camera.
Further, forming a simulation image matched with the target position based on the plurality of first pixel points and the first camera data corresponding to the target camera includes:
step S331: generating an extrinsic parameter matrix of the target location using an extrinsic parameter matrix of the target camera;
in one example, the extrinsic parameter matrix of the target camera may be directly used as the extrinsic parameter matrix of the target location.
Step S332: according to the external parameter matrix of the target position, projecting part or all of the first pixel points to a camera coordinate system corresponding to the target camera to obtain a plurality of second pixel points under the camera coordinate system;
in one example, the conversion relationship between the first pixel point and the second pixel point can be expressed by formula (1) as follows:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \qquad (1)$$

where X, Y and Z represent the first pixel point of the target object in the world coordinate system; X_c, Y_c and Z_c represent the second pixel point of the target object in the camera coordinate system; the nine parameters r_11 to r_33 in the external parameter matrix represent the rotation of the target object from the world coordinate system to the camera coordinate system; and t_x, t_y and t_z in the external parameter matrix represent the translation of the target object from the world coordinate system to the camera coordinate system.
Because the origin of the world coordinate system does not coincide with the origin of the camera coordinate system, when a point in the world coordinate system is to be projected onto the image plane, the world coordinate system must first be converted into the camera coordinate system using the external parameter matrix; the external parameter matrix represents the rotation and translation of this conversion. Of course, since the target object may include a plurality of feature points, the actual processing objects of formula (1) may be the feature points of the target object. From the perspective of the pixel coordinate system, the feature points correspond to the pixel points in the captured image.
Step S333: and forming a simulation image matched with the target position based on the plurality of second pixel points and the first camera data corresponding to the target camera.
The transformation of formula (1) can be regarded as part of the forward transformation process, while the process of step S32 can be regarded as an inverse transformation process.
In the present application, the positions of the target object and the target camera are mainly calibrated by using three coordinate systems, namely, a world coordinate system, a camera coordinate system, and a pixel coordinate system. It will be understood by those skilled in the art that there are other possible variations of the coordinate transformation, and the present application is not limited to the transformation between coordinate systems.
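As a compact illustration of formula (1) and step S332, the following sketch applies the rotation and translation of the external parameter matrix of the target position to first pixel points in the world coordinate system; the function name world_to_camera and the argument names are assumptions for illustration.

```python
import numpy as np

def world_to_camera(world_pts, R, t):
    """Formula (1): transform first pixel points (world coordinate system) into
    second pixel points (camera coordinate system of the target position)."""
    # [Xc, Yc, Zc]^T = R @ [X, Y, Z]^T + t, applied to every point
    return (R @ np.asarray(world_pts, dtype=float).T + np.asarray(t, dtype=float).reshape(3, 1)).T
```

For the target position, R and t can be taken directly from the external parameter matrix of the target camera, as described in step S331.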
Further, according to the external parameter matrix of the target position, projecting part or all of the first pixel points to a camera coordinate system corresponding to the target camera to obtain a plurality of second pixel points in the camera coordinate system, including:
step S3321: acquiring a visual angle range of the target camera;
wherein the viewing angle range of the target camera may be the maximum viewing angle that the target camera can observe. For example, the maximum viewing angle that the target camera can observe when mounted at the left front of the vehicle may be 120 degrees, and the maximum viewing angle that the target camera can observe when mounted at the right front of the vehicle may be 180 degrees. The viewing angle ranges of the target camera at different positions may be different. In other words, the maximum viewing angle that the target camera can observe may be related to the coordinates of the target camera itself.
Step S3322: judging whether each first pixel point in the plurality of first pixel points is located in the visual angle range;
step S3323: if the first pixel point is located in the visual angle range, projecting the first pixel point to a camera coordinate system corresponding to the target position; and if the first pixel point is positioned outside the visual angle range, filtering the first pixel point.
It should be noted that the viewing angle range of the target camera may be related to the installation position of the camera and the performance of the camera itself, and the viewing angle range of the target camera is limited. For example, the target camera may capture a range of 120 degrees in the horizontal direction and 120 degrees in the vertical direction. Therefore, when projecting each first pixel point, it is necessary to determine whether each of the plurality of first pixel points is located within the viewing angle range.
Wherein, the target object can be selected according to the requirement. The target object corresponding to the first image shot by the first camera is the same as the target object corresponding to the real image shot by the second camera. The first camera and the second camera can shoot the same target object from different visual angles.
In the case that a first pixel point is located within the viewing angle range, the first pixel point can be projected directly to the camera coordinate system corresponding to the target position; in the case that a first pixel point is located outside the viewing angle range, that first pixel point can be filtered out. By judging whether each of the plurality of first pixel points is located within the viewing angle range, the amount of coordinate data in the coordinate projection process can be reduced, and the generation efficiency of the simulation image can be improved.
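One simple way to realize the viewing-angle check of steps S3321 to S3323 is sketched below, under the assumption that the viewing angle range is symmetric about the camera's optical axis and that the check is performed once a point has been expressed in the camera coordinate system of the target position; the half-angle thresholds and the function name within_view_angle are illustrative assumptions.

```python
import numpy as np

def within_view_angle(cam_pt, h_fov_deg=120.0, v_fov_deg=120.0):
    """Return True if a point in the camera coordinate system of the target
    position lies inside the assumed viewing angle range of the target camera."""
    Xc, Yc, Zc = cam_pt
    if Zc <= 0:                                      # behind the camera: outside the viewing range
        return False
    h_angle = np.degrees(np.arctan2(abs(Xc), Zc))    # horizontal offset from the optical axis
    v_angle = np.degrees(np.arctan2(abs(Yc), Zc))    # vertical offset from the optical axis
    return h_angle <= h_fov_deg / 2 and v_angle <= v_fov_deg / 2

# First pixel points for which this check fails are filtered out before projection.
```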
Further, the first camera data further includes an internal parameter matrix of each first camera, and a simulated image matched with the target position is formed based on the plurality of second pixel points and the first camera data corresponding to the target camera, including:
step S3331: acquiring an internal parameter matrix of the target camera;
wherein, in step S3331, the internal parameter matrix of the target camera may be extracted from the first camera data.
Step S3332: generating an internal parameter matrix of the target position by using the internal parameter matrix of the target camera;
in one example, the internal parameter matrix of the target camera may be directly used as the internal parameter matrix of the target location.
Step S3333: according to the internal parameter matrix of the target position, projecting each second pixel point of the second pixel points to a pixel coordinate system corresponding to the target position to obtain a plurality of third pixel points under the pixel coordinate system;
in one example, the conversion relationship between the second pixel point and the third pixel point can be expressed by formula (2) as follows:
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \qquad (2)$$

where X_c, Y_c and Z_c represent the second pixel point of the target object in the camera coordinate system; x and y represent the third pixel point of the target object in the pixel coordinate system; c_x and c_y in the internal parameter matrix represent the pixel coordinates on the image corresponding to the origin of the camera coordinate system; and f_x and f_y in the internal parameter matrix represent the camera focal length.
In addition, distortion factors can be considered in the process of converting the second pixel points into the third pixel points through the internal parameter matrix, for example by adding a radial distortion coefficient and a tangential distortion coefficient to the internal parameter matrix. Adding the distortion factors can reduce the deviation and deformation between the theoretically calculated pixel points and the pixel points in the actual situation. In practical applications, whether the distortion factors are added can be determined according to actual needs, and the application is not limited in this respect.
It should be noted that equation (2) is performed based on the camera projection principle. The transformation process of equation (2) may also be considered as part of the forward transformation process. In the present application, the first pixel point may be a coordinate of the target object in the real world, where the coordinate is three-dimensional; the second pixel point can be an intermediate coordinate, and the coordinate is also three-dimensional; the third pixel point may be a coordinate of the target object in the photographed image, and the coordinate is two-dimensional.
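Formula (2) can be sketched in code as follows, ignoring distortion; the function name camera_to_pixel is an assumption, and the distortion handling mentioned above would be inserted here if needed.

```python
import numpy as np

def camera_to_pixel(cam_pts, K):
    """Formula (2): project second pixel points (camera coordinate system) to
    third pixel points (pixel coordinate system) using the internal parameter
    matrix of the target position. Distortion is ignored in this sketch."""
    cam_pts = np.asarray(cam_pts, dtype=float)
    uvw = (K @ cam_pts.T).T                    # [fx*Xc + cx*Zc, fy*Yc + cy*Zc, Zc] per point
    return uvw[:, :2] / uvw[:, 2:3]            # divide by Zc -> (x, y) pixel coordinates
```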
Step S3334: and forming a simulation image matched with the target position based on the third pixel points.
Wherein the simulated image is available to the second vehicle. For example, the simulated image may be input into a data processing unit of the second vehicle, such that the data processing unit performs training, verification, testing, or the like, using the simulated image.
It should be noted that, in the process of processing each depth map, the orientation of the virtual viewing angle at the target position is assumed to be the same; that is, the rotation parameters r11 to r33 in the external parameters of the virtual camera at the target position and of the target camera are taken to be the same, so that only the translation difference between the viewing angles is considered and the rotation difference is not considered. For ease of understanding, the at least one path of second camera can be regarded as cameras that are close to the target position; the orientation of each second camera in the at least one path of second camera is therefore similar to the orientation of the target camera, and only deviations to the left, right, up and down occur, which cause the viewing angle deviation. A translation can therefore be applied in the process of processing each depth map.
And step S4: reinjecting the simulated image to a data processing unit of the second vehicle.
Further, reinjecting the simulated image to a data processing unit of the second vehicle, comprising:
step S41: filling missing pixel points in the simulation image by adopting a bilinear difference algorithm and/or an image restoration algorithm to obtain a fitting image corresponding to the target position;
each pixel point of the simulation image may correspond to a fixed RGB value (e.g., a gray scale). The missing or damaged pixel points can be repaired by a bilinear difference algorithm and/or an image inpainting (image inpainting) algorithm. Illustratively, the simulation image can be supplemented according to semantic information. The image inpainting algorithm may be based on generating a countermeasure network (GAN).
It should be noted that the bilinear interpolation algorithm and the image restoration algorithm may be used together or alternatively. It can be understood by those skilled in the art that there are many implementation forms of the bilinear interpolation algorithm and the image restoration algorithm, and the application is not limited to their specific implementations.
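A minimal sketch of step S41 is given below, assuming OpenCV: missing pixel points are marked in a mask and filled by an inpainting call (a bilinear-interpolation-based filling, or a GAN-based inpainting model, could be used instead or in addition, as described above). The function name fill_missing_pixels and the inpainting radius are illustrative assumptions.

```python
import cv2
import numpy as np

def fill_missing_pixels(sim_image, valid_mask, radius=3):
    """Fill missing pixel points of the simulation image to obtain the fitted
    image corresponding to the target position (one possible form of step S41).

    sim_image  : HxWx3 uint8 simulation image with holes
    valid_mask : HxW bool array, True where a pixel point was actually projected
    """
    holes = np.uint8(~valid_mask) * 255        # inpainting expects non-zero at missing pixels
    fitted = cv2.inpaint(sim_image, holes, radius, cv2.INPAINT_TELEA)
    return fitted
```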
Step S42: reinjecting the fitted image to a data processing unit of the second vehicle.
Wherein, there may be one or more fitting images. In the case of multiple fitted images, the fitted images may be further fused to make the fitted images at the target positions closer to the actual situation.
Further, the fitting image is reinjected to the data processing unit of the second vehicle, which may include:
step S421: inputting the formed fitting image matched with the target camera into a data processing unit of a second vehicle for neural network training to obtain a training value corresponding to the target position;
for example, the second vehicle may include an industrial personal computer and an injection device. The fitting image can be used as video data, decoded by an industrial personal computer (also called a real-time machine), and then injected into the data processing unit through injection equipment (such as a video injection board card). The algorithm of the data processing unit may be based on a neural network model, which performs neural network training with the fitted image as an input, thereby obtaining a training value corresponding to the target position.
Step S422: comparing the training value with a real value acquired by a camera installed at the target position to obtain a comparison result corresponding to the training value;
in one example, the training values may be compared to actual values collected by cameras installed at the target locations, and the comparison may be equal or unequal. Under the condition that the comparison results are equal, the fitting image can better fit the image actually acquired by the camera installed at the target position; and under the condition that the comparison result is unequal, the fitting image cannot be well fitted with the image actually acquired by the camera installed at the target position, and the neural network model needs to be readjusted.
Step S423: and adjusting a neural network model in the data processing unit according to the comparison result.
The neural network model may be a DNN, CNN, LSTM, ResNet or other network model; the application is not limited to the type of neural network model.
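The reinjection-and-verification loop of steps S421 to S423 is sketched below in framework-neutral form; model.predict, model.update and the tolerance value are placeholders for the data processing unit's neural network interface and the comparison rule, and real_values stands for the ground truth collected by a camera installed at the target position. All of these names are assumptions rather than elements disclosed by the patent.

```python
def train_and_verify(model, fitted_images, real_values, tolerance=1e-3):
    """Steps S421-S423: feed fitted images to the data processing unit's neural
    network, compare its training values with the real values, and adjust."""
    for fitted_image, real_value in zip(fitted_images, real_values):
        training_value = model.predict(fitted_image)               # step S421: training value at the target position
        matches = abs(training_value - real_value) <= tolerance    # step S422: comparison result
        if not matches:
            model.update(fitted_image, real_value)                 # step S423: adjust the neural network model
```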
In one embodiment, because there is usually considerable overlap between the simulated image (or the fitted image) and the real images, the simulated image (or fitted image) and the real images acquired by the second cameras may be stitched together based on the overlapping portions to form an image to be verified. Likewise, since the real images themselves usually overlap considerably, the real images acquired by the second cameras may be stitched together based on the overlapping portions to form a reference image. The similarity between the image to be verified and the reference image is then compared to obtain similarity evaluation information representing the similarity. Illustratively, if the similarity evaluation information is positively correlated with the similarity, the simulated image may be reinjected when the similarity evaluation information is higher than a similarity threshold, and otherwise is not reinjected; if the similarity evaluation information is negatively correlated with the similarity, the simulated image may be reinjected when the similarity evaluation information is lower than the similarity threshold, and otherwise is not reinjected. Further, the simulated image (or fitted image) may be reinjected in synchronization with the other real images.
Through this scheme of deciding, based on the similarity, whether to reinject the simulated image or the fitted image, the embodiment of the application can avoid reinjecting images with poor simulation or fitting results (for example, poor accuracy), and avoids the mismatch that such reinjection might otherwise cause among the synchronously reinjected images observed by the data processing unit, thereby improving the training, verification and testing effects of the algorithm.
It should be noted that the above scheme is mainly applicable to situations where there is no need to fill missing pixel points in the simulation image; in situations where missing pixel points exist, the scheme may or may not be applied.
In one example, the case where there are missing pixel points to be filled and the case where there are no missing pixel points to be filled can both be considered. Since differences between the images may be caused by defects in the filling algorithm or other related defects, different similarity thresholds may be used for the different situations. For example, if missing pixel points need to be filled (i.e., the image to be reinjected is a fitted image), the similarity threshold may be set to a first similarity threshold; if no missing pixel points need to be filled (i.e., the image to be reinjected is a simulation image), the similarity threshold may be set to a second similarity threshold. Further, if the similarity evaluation information is positively correlated with the similarity, the first similarity threshold is smaller than the second similarity threshold; if the similarity evaluation information is negatively correlated with the similarity, the first similarity threshold is larger than the second similarity threshold.
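The similarity-gated reinjection decision described in the two paragraphs above can be summarized as the following sketch; the similarity measure (a normalized correlation score here), the stitching step that produces the two input images and both threshold values are assumptions chosen only to illustrate the branching logic, with the similarity evaluation information assumed to be positively correlated with the similarity.

```python
import numpy as np

def should_reinject(image_to_verify, reference_image, used_filling,
                    first_threshold=0.90, second_threshold=0.95):
    """Decide whether to reinject a fitted/simulation image, using a smaller
    threshold when missing pixel points had to be filled (fitted image) and a
    larger one when they did not (simulation image), as discussed above."""
    a = image_to_verify.astype(float).ravel()
    b = reference_image.astype(float).ravel()
    similarity = float(np.dot(a - a.mean(), b - b.mean()) /
                       (np.linalg.norm(a - a.mean()) * np.linalg.norm(b - b.mean()) + 1e-12))
    threshold = first_threshold if used_filling else second_threshold
    return similarity > threshold
```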
To sum up, through the adaptation adjustment between the first camera data and the second camera data, the application uses the coordinate mapping relationship and the depth map to fit a simulated image at the viewing angle of the target position, so that the first camera data can be adapted to the second vehicle. This further enriches the second camera data of the second vehicle, enhances the adaptability of the second camera data, and expands its application range. In addition, the adapted simulation image can be used for neural network training, which improves the training precision of the neural network; at the same time, the new vehicle type can be rapidly adapted based on the first camera data and the second camera data, which also improves the research, development and testing efficiency for the new vehicle type.
Fig. 3 shows a block diagram of a simulation image reinjection apparatus of a target camera according to an embodiment of the present application.
As shown in fig. 3, the simulation image reinjection apparatus 30 of the target camera according to the embodiment of the present application may include:
a camera data acquiring module 31, configured to acquire first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, where the first camera data includes an extrinsic parameter matrix of each first camera, the first camera set includes a preset target camera, the second camera set includes a plurality of second cameras, a position of the target camera is the same as a preset target position on the second vehicle, the target position is different from a position of each second camera, and the second camera data includes an extrinsic parameter matrix of the second camera and a real image captured by the second camera;
a camera determining module 32, configured to determine at least one path of second cameras adjacent to the target location according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras;
an image forming module 33, configured to form a simulation image matched with the target position based on the real image and the depth map corresponding to each of the at least one second camera and the first camera data corresponding to the target camera;
a reinjection module 34 for reinjecting the simulation image to a data processing unit of the second vehicle.
Furthermore, the present application provides a computer-readable medium on which a computer program is stored, and the computer program, when executed by a processor, implements the simulation image reinjection method of the target position.
Further, the present application also provides an electronic device, including: one or more processors; and a storage device configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the simulation image reinjection method of the target position.
Fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 4, the electronic device may be used to implement the simulation image reinjection method of the target position. In particular, the electronic device may comprise a computer system. It should be noted that the electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 4, the computer system includes a Central Processing Unit (CPU) 1801, which can perform various appropriate actions and processes, such as executing the method described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1802 or a program loaded from a storage portion 1808 into a Random Access Memory (RAM) 1803. In the RAM 1803, various programs and data necessary for system operation are also stored. The CPU 1801, ROM 1802, and RAM 1803 are connected to each other via a bus 1804. An Input/Output (I/O) interface 1805 is also connected to bus 1804.
The following components are connected to the I/O interface 1805: an input portion 1806 including a keyboard, a mouse, and the like; an output section 1807 including a Display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 1808 including a hard disk and the like; and a communication section 1809 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1809 performs communication processing via a network such as the internet. A driver 1810 is also connected to the I/O interface 1805 as needed. A removable medium 1811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1810 as necessary, so that a computer program read out therefrom is mounted in the storage portion 1808 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program carried on a computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication section 1809, and/or installed from the removable medium 1811. When the computer program is executed by the Central Processing Unit (CPU) 1801, the various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiment; or may be separate and not incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The simulated image reinjection method of a target position and the related device thereof provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the technical solution and core idea of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (10)

1. A method for simulated image reinjection of a target position, the method comprising:
acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprise an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position is different from the position of each second camera, and the second camera data comprise the external parameter matrix of the second camera and a real image shot by the second camera;
determining at least one path of second camera adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera;
forming a simulation image matched with the target position based on the real image and the depth map corresponding to the at least one path of second camera and the first camera data corresponding to the target camera;
reinjecting the simulated image to a data processing unit of the second vehicle.
2. The method of claim 1, wherein determining at least one path of second cameras adjacent to the target location according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrix of each second camera comprises:
extracting a first translation parameter, a second translation parameter and a third translation parameter corresponding to the target camera through the external parameter matrix of the target camera;
respectively extracting a fourth translation parameter, a fifth translation parameter and a sixth translation parameter corresponding to each second camera through an external parameter matrix of each second camera;
respectively calculating Euclidean distances between the second cameras and the target camera based on a fourth translation parameter, a fifth translation parameter and a sixth translation parameter corresponding to the second cameras and a first translation parameter, a second translation parameter and a third translation parameter corresponding to the target camera to obtain a plurality of camera distances corresponding to the second cameras;
and selecting at least one path of second camera whose camera distance is lower than a preset camera distance threshold as the at least one path of second camera adjacent to the target position.
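
As a non-limiting illustration of the camera-distance selection described in claim 2, the following Python sketch assumes that each external parameter matrix is a 4x4 homogeneous transform whose last column carries the three translation parameters; the function and variable names and the distance threshold are assumptions made for illustration rather than elements of the application.

    import numpy as np

    def select_adjacent_cameras(target_extrinsic, second_extrinsics, distance_threshold):
        # Translation parameters are read from the last column of the 4x4 extrinsic matrix.
        t_target = target_extrinsic[:3, 3]
        adjacent = []
        for idx, extrinsic in enumerate(second_extrinsics):
            t_cam = extrinsic[:3, 3]
            # Euclidean distance between this second camera and the target camera.
            camera_distance = np.linalg.norm(t_cam - t_target)
            if camera_distance < distance_threshold:
                adjacent.append(idx)
        return adjacent
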
3. The method of claim 1, wherein before forming the simulation image matching the target position based on the real image and the depth map corresponding to the at least one path of second camera and the first camera data corresponding to the target camera, the method comprises:
and processing the real image shot by the at least one path of second camera based on an SFM algorithm to generate the depth map corresponding to the at least one path of second camera.
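
Claim 3 only requires that the depth map be generated with an SFM-style algorithm. As a non-limiting illustration of one building block of such a pipeline, the sketch below triangulates matched pixel points from two real images whose poses are assumed known, using OpenCV; the names, the availability of matched points, and the world-to-camera convention of the external parameter matrices are assumptions, and a full SFM pipeline (feature matching, pose estimation, densification into a dense depth map) is not shown.

    import numpy as np
    import cv2

    def triangulate_depth(K1, RT1, K2, RT2, pts1, pts2):
        # Projection matrices P = K [R|t] for two calibrated cameras (RT assumed world-to-camera, 4x4).
        P1 = K1 @ RT1[:3, :]
        P2 = K2 @ RT2[:3, :]
        # pts1, pts2: 2xN float arrays of matched pixel coordinates in the two real images.
        points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)      # 4xN homogeneous world points
        points_3d = (points_h[:3] / points_h[3]).T                # Nx3 world points
        # Depth of each point expressed in the first camera's frame (z after the transform).
        cam_points = (RT1[:3, :3] @ points_3d.T).T + RT1[:3, 3]
        return points_3d, cam_points[:, 2]
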
4. The method of claim 1, wherein forming the simulation image matched with the target position based on the real image and the depth map corresponding to the at least one path of second camera and the first camera data corresponding to the target camera comprises:
acquiring an internal parameter matrix of each second camera in the at least one path of second cameras;
based on the external parameter matrix of each second camera in the at least one path of second cameras, the internal parameter matrix of each second camera, the depth map and the real image corresponding to the at least one path of second cameras, projecting the pixel points of the real image from the pixel coordinate system to the world coordinate system to obtain a plurality of first pixel points under the world coordinate system;
and forming a simulated image matched with the target position based on the plurality of first pixel points and the first camera data corresponding to the target camera.
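
As a non-limiting illustration of the back-projection in claim 4, the following sketch assumes the depth map stores metric depth per pixel, the internal parameter matrix is the usual 3x3 pinhole matrix, and the external parameter matrix is supplied as a 4x4 camera-to-world transform; all names and conventions are assumptions made for illustration.

    import numpy as np

    def pixels_to_world(depth_map, K, cam_to_world):
        h, w = depth_map.shape
        # Pixel grid in homogeneous pixel coordinates (u, v, 1).
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
        z = depth_map.reshape(-1)
        # Pixel coordinate system -> camera coordinate system: X_cam = z * K^-1 * (u, v, 1).
        cam_points = (np.linalg.inv(K) @ pix.T).T * z[:, None]
        # Camera coordinate system -> world coordinate system via the extrinsic transform.
        cam_points_h = np.hstack([cam_points, np.ones((cam_points.shape[0], 1))])
        world_points = (cam_to_world @ cam_points_h.T).T[:, :3]
        return world_points

Each resulting first pixel point can keep the colour of its source pixel in the real image, so that the colour can later be carried into the simulation image.
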
5. The method of claim 4, wherein forming a simulated image matching the target location based on the plurality of first pixel points and first camera data corresponding to the target camera comprises:
generating an extrinsic parameter matrix of the target location using an extrinsic parameter matrix of the target camera;
according to the external parameter matrix of the target position, projecting part or all of the first pixel points to a camera coordinate system corresponding to the target camera to obtain a plurality of second pixel points under the camera coordinate system;
and forming a simulation image matched with the target position based on the plurality of second pixel points and the first camera data corresponding to the target camera.
6. The method of claim 5, wherein the step of projecting part or all of the first pixel points to a camera coordinate system corresponding to the target camera according to the extrinsic parameter matrix of the target position to obtain a plurality of second pixel points in the camera coordinate system comprises:
acquiring a visual angle range of the target camera;
judging whether each first pixel point in the plurality of first pixel points is located in the visual angle range or not;
if the first pixel point is located in the visual angle range, projecting the first pixel point to a camera coordinate system corresponding to the target position; and if the first pixel point is positioned outside the visual angle range, filtering the first pixel point.
7. The method of claim 5, wherein the first camera data further includes an internal parameter matrix of each first camera, and the forming of the simulated image matching the target position based on the plurality of second pixel points and the first camera data corresponding to the target camera comprises:
acquiring an internal parameter matrix of the target camera;
generating an internal parameter matrix of the target position using the internal parameter matrix of the target camera;
according to the internal parameter matrix of the target position, projecting each second pixel point of the second pixel points to a pixel coordinate system corresponding to the target position to obtain a plurality of third pixel points under the pixel coordinate system;
and forming a simulation image matched with the target position based on the third pixel points.
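
Claims 5 to 7 together re-project the first pixel points into the target position. The following non-limiting sketch assumes the target position reuses the target camera's external parameter matrix (here given as a world-to-camera transform) and internal parameter matrix, and approximates the view-angle check of claim 6 by keeping only points with positive depth that fall inside the image bounds; the per-point colours are assumed to have been carried over from the real images, and depth ordering of overlapping points is omitted for brevity.

    import numpy as np

    def world_to_simulated_image(world_points, colors, world_to_cam, K, height, width):
        # colors: Nx3 uint8 array, one colour per first pixel point.
        sim_image = np.zeros((height, width, 3), dtype=np.uint8)
        # World coordinate system -> camera coordinate system of the target position (claim 5).
        pts_h = np.hstack([world_points, np.ones((world_points.shape[0], 1))])
        cam_pts = (world_to_cam @ pts_h.T).T[:, :3]
        # View-angle filtering (claim 6): keep only points in front of the target camera.
        in_front = cam_pts[:, 2] > 0
        cam_pts, colors = cam_pts[in_front], colors[in_front]
        # Camera coordinate system -> pixel coordinate system via the intrinsic matrix (claim 7).
        proj = (K @ cam_pts.T).T
        u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
        v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
        inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        sim_image[v[inside], u[inside]] = colors[inside]
        return sim_image
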
8. The method of claim 1, wherein the reinjecting the simulated image to the data processing unit of the second vehicle comprises:
filling missing pixel points in the simulation image by adopting a bilinear interpolation algorithm and/or an image restoration algorithm to obtain a fitted image corresponding to the target position;
reinjecting the fitted image to a data processing unit of the second vehicle.
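
As a non-limiting illustration of the hole-filling in claim 8, the sketch below uses OpenCV's image restoration (inpainting) routine; treating exactly-zero pixels of the simulation image as missing, and the inpainting radius of 3, are assumptions made for illustration.

    import numpy as np
    import cv2

    def fill_missing_pixels(sim_image):
        # Pixels that received no projected point are assumed to be exactly zero.
        missing_mask = np.all(sim_image == 0, axis=2).astype(np.uint8)
        # Image restoration (inpainting) fills the holes from the surrounding pixels.
        fitted_image = cv2.inpaint(sim_image, missing_mask, 3, cv2.INPAINT_TELEA)
        return fitted_image
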
9. An apparatus for simulated image reinjection of a target position, the apparatus comprising:
the camera data acquisition module is used for acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprise an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position is different from the position of each second camera, and the second camera data comprise an external parameter matrix of the second camera and a real image shot by the second camera;
the camera determining module is used for determining at least one path of second camera adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera;
the image forming module is used for forming a simulation image matched with the target position based on the real image and the depth map respectively corresponding to the at least one path of second camera and the first camera data corresponding to the target camera;
and the reinjection module is used for reinjecting the simulation image to a data processing unit of the second vehicle.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the simulated image reinjection method of a target position as claimed in any one of claims 1 to 8.
CN202211538620.1A 2022-12-01 Simulation image reinjection method of target position and related equipment thereof Active CN115797442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211538620.1A CN115797442B (en) 2022-12-01 Simulation image reinjection method of target position and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211538620.1A CN115797442B (en) 2022-12-01 Simulation image reinjection method of target position and related equipment thereof

Publications (2)

Publication Number Publication Date
CN115797442A (en) 2023-03-14
CN115797442B (en) 2024-06-07

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3034555A1 (en) * 2015-04-03 2016-10-07 Continental Automotive France METHOD FOR DETERMINING THE DIRECTION OF THE MOVEMENT OF A MOTOR VEHICLE
CN113868873A (en) * 2021-09-30 2021-12-31 重庆长安汽车股份有限公司 Automatic driving simulation scene expansion method and system based on data reinjection
CN114299230A (en) * 2021-12-21 2022-04-08 中汽创智科技有限公司 Data generation method and device, electronic equipment and storage medium
CN114723820A (en) * 2022-03-09 2022-07-08 福思(杭州)智能科技有限公司 Road data multiplexing method, driving assisting system, driving assisting device and computer equipment
CN114821497A (en) * 2022-02-24 2022-07-29 广州文远知行科技有限公司 Method, device and equipment for determining position of target object and storage medium

Similar Documents

Publication Publication Date Title
CN109840500B (en) Three-dimensional human body posture information detection method and device
US20190385355A1 (en) Three-dimensional representation by multi-scale voxel hashing
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
EP2235955A1 (en) Method and system for converting 2d image data to stereoscopic image data
CN112116639B (en) Image registration method and device, electronic equipment and storage medium
US20170272724A1 (en) Apparatus and method for multi-view stereo
CN109948441B (en) Model training method, image processing method, device, electronic equipment and computer readable storage medium
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
CN116051747A (en) House three-dimensional model reconstruction method, device and medium based on missing point cloud data
CN113610918A (en) Pose calculation method and device, electronic equipment and readable storage medium
CN116433843A (en) Three-dimensional model reconstruction method and device based on binocular vision reconstruction route
CN111882655A (en) Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
CN116468769A (en) Depth information estimation method based on image
CN112270748B (en) Three-dimensional reconstruction method and device based on image
CN114463408A (en) Free viewpoint image generation method, device, equipment and storage medium
CN109741245B (en) Plane information insertion method and device
CN109816791B (en) Method and apparatus for generating information
CN115797442B (en) Simulation image reinjection method of target position and related equipment thereof
CN116524382A (en) Bridge swivel closure accuracy inspection method system and equipment
CN115797442A (en) Simulation image reinjection method of target position and related equipment thereof
CN111179331A (en) Depth estimation method, depth estimation device, electronic equipment and computer-readable storage medium
CN115439534A (en) Image feature point matching method, device, medium, and program product
CN111178501B (en) Optimization method, system, electronic equipment and device for dual-cycle countermeasure network architecture
CN110490950B (en) Image sample generation method and device, computer equipment and storage medium
Xing et al. Scale-consistent fusion: from heterogeneous local sampling to global immersive rendering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant