CN114529800A - Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle - Google Patents

Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle

Info

Publication number: CN114529800A
Application number: CN202210030533.9A
Authority: CN (China)
Prior art keywords: coordinate system, unmanned aerial vehicle, SLAM, information
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 梁亚东, 罗飞
Current and original assignee: South China University of Technology (SCUT)
Filing date / priority date: 2022-01-12
Publication date: 2022-05-24
Application filed by South China University of Technology (SCUT), with priority to CN202210030533.9A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/05 — Three-dimensional [3D] modelling: geographic models
    • G06T 19/003 — Manipulating 3D models or images for computer graphics: navigation within 3D models or images
    • G06T 7/246 — Image analysis, analysis of motion: feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/73 — Image analysis, determining position or orientation of objects or cameras: feature-based methods
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • G06T 2207/10012 — Image acquisition modality: stereo images
    • G06T 2207/10024 — Image acquisition modality: color image
    • G06T 2210/21 — Indexing scheme for image generation or computer graphics: collision detection, intersection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an obstacle avoidance method, system, device and medium for a rotor unmanned aerial vehicle. The method comprises the following steps: in the flying process of the unmanned aerial vehicle, tracking and shooting the environment around the unmanned aerial vehicle with an RGB-D camera; performing image semantic segmentation and semantic map construction on the image information shot by the RGB-D camera using a SLAM method; constructing a camera coordinate system on the basis of the image information, obtaining the representation of pixel coordinates in the world coordinate system through the conversion relation between the camera coordinate system and the world coordinate system, and constructing a three-dimensional model of the environment around the unmanned aerial vehicle as the obstacle information; and controlling the flight of the unmanned aerial vehicle according to the obstacle information and carrying out corresponding path planning to ensure safe and accurate flight. The method acquires environment information during flight and uses the SLAM composition method, combined with a deep-learning image processing method, to construct a three-dimensional map and obtain obstacle information, on the basis of which the rotor unmanned aerial vehicle is assisted to successfully bypass obstacles. The method can be widely applied in the technical field of unmanned aerial vehicle obstacle avoidance and path planning.

Description

Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle
Technical Field
The invention relates to the technical field of unmanned aerial vehicle obstacle avoidance and path planning, in particular to a rotor unmanned aerial vehicle obstacle avoidance method, system, device and medium.
Background
Despite the great advances made in drone technology in the past, drones still face many difficulties when navigating real environments. This is because the flight environment of the drone is in most cases unknown and uncertain, which increases the difficulty of obstacle-avoiding flight. Accurate modeling and immediate perception of the surrounding environment are especially important as preconditions for the safe flight of the unmanned aerial vehicle, but a technical scheme capable of accurately perceiving obstacles in real time is still lacking at present.
Disclosure of Invention
In order to solve at least one of the technical problems in the prior art to a certain extent, the invention aims to provide a method, a system, a device and a medium for avoiding obstacles of a rotor unmanned aerial vehicle.
The technical scheme adopted by the invention is as follows:
a rotor unmanned aerial vehicle obstacle avoidance method comprises the following steps:
in the flying process of the unmanned aerial vehicle, tracking and photographing the surrounding environment of the unmanned aerial vehicle by adopting an RGB-D camera;
performing image semantic segmentation and semantic map construction on image information shot by an RGB-D camera by using an SLAM method;
a camera coordinate system is constructed on the basis of the image information, the representation of the pixel coordinates in the world coordinate system is obtained by using the conversion relation between the camera coordinate system and the world coordinate system, and a three-dimensional model of the environment around the unmanned aerial vehicle is constructed as the obstacle information;
and controlling the flight of the unmanned aerial vehicle according to the obstacle information, and carrying out corresponding path planning to ensure the safety and accuracy of the flight of the unmanned aerial vehicle.
Further, the image semantic segmentation and semantic map construction by using the SLAM method includes:
based on deep learning, adding semantic information to SLAM to perform image semantic segmentation and semantic map construction;
and promoting image semantic segmentation by adopting the geometric consistency between the images obtained in the SLAM, so that the SLAM and the semantic segmentation can complement each other.
Further, the deep learning-based image semantic segmentation and semantic map construction by adding semantic information to the SLAM includes:
semantic information is added to the SLAM by combining a deep learning technology method and a semi-dense SLAM technology based on video stream;
and for two-dimensional semantic information, three-dimensional mapping is carried out after the association between the connected key frames with spatial consistency is combined.
Further, the obtaining of the representation relationship of the pixel coordinate in the world coordinate system by using the conversion relationship between the camera coordinate system and the world coordinate system includes:
acquiring a first relation between a pixel plane coordinate system and an image plane coordinate system;
acquiring a second relation between the world coordinate system and the camera coordinate system;
acquiring a third relation between a camera coordinate system and an image plane coordinate system;
and acquiring the representation relation of the pixel coordinates in the world coordinate system according to the first relation, the second relation and the third relation.
Further, the obtaining of the representation relationship of the pixel coordinate in the world coordinate system according to the first relationship, the second relationship and the third relationship includes:
1) the first relation between the pixel plane coordinate system and the image plane coordinate system:

the origin of the pixel plane coordinate system is located at the upper left corner of the image, the u-axis points right, parallel to the x-axis, and the v-axis is parallel to the y-axis; the difference between the pixel plane coordinate system and the image plane coordinate system is a scaling and a translation of the origin;

assuming that the physical size of each pixel in the u-axis and v-axis directions is dx and dy, the formula is obtained:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0 \tag{1}$$

where dx and dy denote the physical size of a unit pixel projected on the x-axis and y-axis of the image plane coordinate system (x, y), respectively, and $(u_0, v_0)$ is the image plane center;

based on equation (1), the relation is written in matrix form using linear algebra as follows:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{dx} & 0 & u_0 \\ 0 & \tfrac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{2}$$

2) the second relation between the world coordinate system and the camera coordinate system:

the camera coordinate system is obtained from the world coordinate system by a rigid body transformation, expressed through a rotation matrix R and a translation vector t, with the formula:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{3}$$

where $(X_C, Y_C, Z_C)$ are the camera coordinates, (X, Y, Z) are the world coordinates, R is the 3×3 rotation matrix and t is the 3×1 translation column vector; the 4×4 matrix

$$\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

is the external reference matrix;

3) the third relation between the camera coordinate system and the image plane coordinate system:

the relation between the camera coordinate system and the image plane coordinate system is in fact a projection, which satisfies the similar-triangle principle, with projection equations:

$$\frac{x}{f} = \frac{X_C}{Z_C}, \qquad \frac{y}{f} = \frac{Y_C}{Z_C}$$

that is:

$$x = f\,\frac{X_C}{Z_C}, \qquad y = f\,\frac{Y_C}{Z_C} \tag{4}$$

where (x, y) are the image plane coordinates, $(X_C, Y_C, Z_C)$ are the camera coordinates, and f is the focal length of the camera; equation (4) is expressed in matrix form as:

$$Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \tag{5}$$

integrating the matrix relations above gives:

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{f}{dx} & 0 & u_0 & 0 \\ 0 & \tfrac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{6}$$

where

$$M_1 = \begin{bmatrix} \tfrac{f}{dx} & 0 & u_0 & 0 \\ 0 & \tfrac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

is referred to as the internal reference matrix, and

$$M_2 = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

is referred to as the external reference matrix.
Another technical scheme adopted by the invention is as follows:
a rotary wing unmanned aerial vehicle keeps away barrier system includes:
the data acquisition module is used for tracking and taking a picture of the environment around the unmanned aerial vehicle by adopting an RGB-D camera in the flight process of the unmanned aerial vehicle;
the model construction module is used for carrying out image semantic segmentation and semantic map construction on image information shot by the RGB-D camera by utilizing an SLAM method;
the coordinate conversion module is used for constructing a camera coordinate system on the basis of the image information, acquiring the representation relation of pixel coordinates in the world coordinate system by utilizing the conversion relation between the camera coordinate system and the world coordinate system, and constructing the three-dimensional environment of the surrounding environment of the unmanned aerial vehicle as the obstacle information;
and the flight control module is used for controlling the flight of the unmanned aerial vehicle according to the obstacle information and carrying out corresponding path planning, ensuring the safety and accuracy of the flight of the unmanned aerial vehicle.
Further, the image semantic segmentation and semantic map construction by using the SLAM method includes:
based on deep learning, adding semantic information to SLAM to perform image semantic segmentation and semantic map construction;
and promoting image semantic segmentation by adopting the geometric consistency between the images obtained in the SLAM, so that the SLAM and the semantic segmentation can complement each other.
Further, the deep learning-based image semantic segmentation and semantic map construction by adding semantic information to the SLAM includes:
semantic information is added to the SLAM by combining a deep learning technology method and a semi-dense SLAM technology based on video stream;
and for two-dimensional semantic information, three-dimensional mapping is carried out after the association between the connected key frames with spatial consistency is combined.
Another technical scheme adopted by the invention is as follows:
the utility model provides a rotor unmanned aerial vehicle keeps away barrier device, includes:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
Another technical scheme adopted by the invention is as follows:
a computer readable storage medium in which a processor executable program is stored, which when executed by a processor is for performing the method as described above.
The invention has the beneficial effects that: the detected surrounding environment information is sent back to the unmanned aerial vehicle during flight, and a three-dimensional map is constructed from the environment picture information by the SLAM composition method combined with a deep-learning image processing method to obtain the obstacle information; a suitable obstacle avoidance algorithm then uses the obstacle information to assist the rotor unmanned aerial vehicle in successfully bypassing obstacles.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description is made on the drawings of the embodiments of the present invention or the related technical solutions in the prior art, and it should be understood that the drawings in the following description are only for convenience and clarity of describing some embodiments in the technical solutions of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of implementing obstacle avoidance based on RGB-D depth SLAM in an embodiment of the present invention;
FIG. 2 is a flow chart of the RGB-D SLAM algorithm in an embodiment of the present invention;
FIG. 3 is a diagram of a transformation relationship between a world coordinate system and a camera coordinate system according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality" means two or more; terms such as "greater than", "less than" and "exceeding" are understood as excluding the stated number, while terms such as "above", "below" and "within" are understood as including it. If "first" and "second" are used, they only serve to distinguish technical features and are not to be understood as indicating or implying relative importance, implicitly indicating the number of technical features indicated, or implicitly indicating the precedence of the technical features indicated.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
Existing real-time modeling and three-dimensional reconstruction technology is mature, and the well-known SLAM technology, i.e. real-time localization and mapping, has been applied in a great number of areas. Deep learning technology, in turn, obtains the intrinsic rules and representation levels of sample data by learning from a sample data set; the information obtained in the learning process is very helpful for interpreting data such as text, images and sounds, and its final aim is to enable a machine to analyze and learn like a human and to recognize such data. Deep learning is a complex machine learning algorithm that achieves results in speech and image recognition far exceeding the prior related art. A currently popular research direction is to combine deep learning with SLAM technology to obtain their respective advantages and complete complex tasks with improved results. Ma L, Stückler J, Kerl C et al ("Multi-View Deep Learning for Consistent Semantic Mapping with RGB-D Cameras") propose a novel deep neural network approach to semantic segmentation in RGB-D image sequences. The main innovation is to train the network in a self-supervised way to predict multi-view consistent semantic information. During testing, semantic predictions fused into a semantic keyframe map show higher consistency than predictions from a network trained on single-view pictures. The network architecture performs semantic segmentation based on a recent single-view deep learning method for RGB-D and depth image fusion, and optimizes the effect of this method through multi-scale error minimization. On this basis, the present embodiment provides a rotor unmanned aerial vehicle obstacle avoidance method based on deep learning fused with SLAM.
As shown in fig. 1 and fig. 2, the present embodiment provides an obstacle avoidance method for a rotor-wing drone, including the following steps:
and S1, in the flying process of the unmanned aerial vehicle, carrying out real-time tracking and photographing on the surrounding environment by using an RGB-D camera carried by the unmanned aerial vehicle.
The RGB-D camera carried by the unmanned aerial vehicle itself performs real-time tracking and shooting of the surrounding environment, where the surrounding environment comprises information such as the obstacles the unmanned aerial vehicle meets during flight. This has the following advantages: dense or semi-dense depth maps can be obtained directly, without computing feature points and descriptors; the framework is also simpler than traditional SLAM and can be divided into front-end RGB-D camera tracking and back-end model reconstruction.
The RGB-D camera is widely used for realizing rapid three-dimensional reconstruction and dense track tracking, and real-time tracking shooting refers to the fact that the RGB-D camera is placed on a base of the unmanned aerial vehicle and shoots the surrounding environment in real time along with the flight course of the unmanned aerial vehicle.
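As an illustration of why per-pixel depth is convenient here, the following is a minimal sketch (not part of the patent itself) of back-projecting an RGB-D depth image into a point cloud in the camera coordinate system; the pinhole intrinsics fx, fy, cx, cy are assumed calibration values:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-frame 3D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)       # shape (h, w, 3)
    return points[depth > 0]                        # drop invalid zero-depth pixels
```

Each valid pixel thus yields a 3D point directly, with no feature extraction or descriptor matching.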
And S2, according to the image information shot by the RGB-D camera, adding semantic information to traditional SLAM by using the SLAM method combined with the existing advantages of deep learning in image processing, and performing image semantic segmentation and semantic map construction.

The SLAM method is a simultaneous localization and mapping method: the rotor unmanned aerial vehicle flies in an unknown environment and gradually draws a complete map of the unknown environment while flying, where a complete map covers every corner of the environment that can be reached without obstruction.
In addition, the existing advantage of deep learning in the aspect of image processing refers to adding semantic information on the traditional SLAM to perform image semantic segmentation and semantic map construction. Geometric consistency between images obtained in a SLAM system is used to promote image semantic segmentation, so that SLAM and semantic segmentation can complement each other.
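One hedged illustration of this complementarity: per-keyframe class probabilities can be warped into a common view using the SLAM poses and then fused, so that geometrically consistent views reinforce each other's labels. The sketch below assumes the warping step exists elsewhere and fuses the aligned probability maps with a naive Bayes product; it is an interpretation of the idea, not the patent's prescribed algorithm:

```python
import numpy as np

def fuse_class_probabilities(aligned_prob_maps):
    """Fuse per-pixel class probabilities from several keyframes that have
    already been warped into one reference view via the SLAM poses.

    aligned_prob_maps: list of (H, W, C) arrays of class probabilities.
    Returns a (H, W) label map from the renormalised product of probabilities.
    """
    log_sum = np.zeros_like(aligned_prob_maps[0])
    for p in aligned_prob_maps:
        log_sum += np.log(np.clip(p, 1e-8, 1.0))   # clip to avoid log(0)
    fused = np.exp(log_sum)
    fused /= fused.sum(axis=-1, keepdims=True)     # renormalise over classes
    return fused.argmax(axis=-1)
```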
S3, a camera coordinate system is constructed on the basis of the image information, and a three-dimensional model of the surrounding environment, namely the obstacle information, is constructed based on the conversion between the camera coordinate system and the world coordinate system.
In three-dimensional mapping, semantic information is difficult to obtain. We address this problem effectively by combining advanced deep learning methods with semi-dense, video-stream-based SLAM technology. In this method, two-dimensional semantic information is mapped into three dimensions after combining the associations between connected keyframes with spatial consistency. Not every keyframe in a sequence needs to be semantically segmented, so the computation time is reasonable for the real-time demands of unmanned aerial vehicle obstacle avoidance. The basic steps are as follows (a structural sketch follows the list):
1) Input an RGB image.
2) Select and refine keyframes.
3) Perform 2D semantic segmentation.
4) Perform 3D reconstruction and semantic optimization.
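The four steps above might be organised as the following loop; `slam`, `segment_2d` and `semantic_map` are hypothetical placeholders standing in for the tracking front-end, the 2D segmentation network, and the 3D semantic map back-end, so this is a structural sketch rather than a concrete API:

```python
def semantic_mapping_loop(rgbd_stream, slam, segment_2d, semantic_map):
    """Structural sketch of the four steps: input, keyframe selection,
    2D semantic segmentation, and 3D reconstruction with semantic fusion."""
    for rgb, depth in rgbd_stream:                        # 1) input RGB(-D) frames
        pose, is_keyframe = slam.track(rgb, depth)        # front-end camera tracking
        if not is_keyframe:
            continue                                      # 2) only keyframes are refined
        labels = segment_2d(rgb)                          # 3) 2D semantic segmentation
        semantic_map.integrate(rgb, depth, pose, labels)  # 4) 3D reconstruction
        semantic_map.optimize()                           # fuse labels across keyframes
    return semantic_map
```

Segmenting only keyframes is what keeps the computation time reasonable for real-time obstacle avoidance.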
S4, the unmanned aerial vehicle acquires corresponding obstacle information and starts to implement obstacle avoidance instructions, corresponding path planning is carried out, and safety and accuracy of real-time flight of the unmanned aerial vehicle are guaranteed.
The unmanned aerial vehicle acquires the corresponding obstacle information and begins executing obstacle avoidance instructions. An obstacle avoidance processing device is designed for the unmanned aerial vehicle; it receives the three-dimensional mapping information of the obstacles, generates a corresponding obstacle avoidance path, and sends it to the actuator device, thereby controlling the safety and accuracy of the flight.
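The patent does not fix a particular planner, so the following is only a minimal sketch of the obstacle-check step, assuming the three-dimensional mapping information has been discretised into a boolean occupancy voxel grid; the grid, voxel size and safety margin are illustrative assumptions:

```python
import numpy as np

def first_blocked_waypoint(path, occupancy, voxel_size, margin=1):
    """Return the index of the first waypoint whose voxel neighbourhood is
    occupied, or None if the planned path is clear.

    path: (N, 3) array of world-frame waypoints (meters).
    occupancy: boolean 3D voxel grid built from the obstacle information.
    """
    for i, p in enumerate(path):
        ix, iy, iz = (p / voxel_size).astype(int)
        lo = np.maximum([ix - margin, iy - margin, iz - margin], 0)
        if occupancy[lo[0]:ix + margin + 1,
                     lo[1]:iy + margin + 1,
                     lo[2]:iz + margin + 1].any():
            return i        # trigger replanning before reaching this waypoint
    return None
```

When a blocked index is returned, the planner would regenerate the path from the preceding waypoint and send the new trajectory to the actuator device.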
Referring to fig. 3, in some alternative embodiments, the transformation relationship between the world coordinate system and the camera coordinate system in step S3 is as follows:

This involves the relationships among four coordinate systems, namely the pixel plane coordinate system (u, v), the image plane coordinate system (x, y), the camera coordinate system (Xc, Yc, Zc) and the world coordinate system (Xw, Yw, Zw). The coordinates in the four systems are related through the internal and external parameters of the camera, so that the world coordinates of a point can be deduced in reverse from its coordinates on a captured picture, which achieves the purpose of three-dimensional reconstruction; the parameters involved are the internal and external parameters to be calibrated.

1) Relation between the pixel plane coordinate system and the image plane coordinate system

The origin of the pixel plane coordinate system is located at the upper left corner of the image, the u-axis points right, parallel to the x-axis, and the v-axis is parallel to the y-axis. The difference between the pixel plane coordinate system and the image plane coordinate system is a scaling and a translation of the origin.

Assume that the physical size of each pixel in the u-axis and v-axis directions is dx and dy. The following formula can be derived:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0 \tag{1}$$

where dx and dy denote the physical size of a unit pixel projected on the x-axis and y-axis of the image plane coordinate system (x, y), respectively, and $(u_0, v_0)$ is the image plane center.

Based on equation (1), the relation is written in matrix form as follows:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{dx} & 0 & u_0 \\ 0 & \tfrac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{2}$$

2) Relation between the world coordinate system and the camera coordinate system

The camera coordinate system is obtained from the world coordinate system by a rigid body transformation, expressed through a rotation matrix R and a translation vector t:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{3}$$

where $(X_C, Y_C, Z_C)$ are the camera coordinates, (X, Y, Z) are the world coordinates, R is the 3×3 rotation matrix and t is the 3×1 translation column vector; the 4×4 matrix

$$\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

is the external reference matrix.

3) Relation between the camera coordinate system and the image plane coordinate system

The relation between the camera coordinate system and the image plane coordinate system is in fact a projection, which satisfies the similar-triangle principle. The projection equations are:

$$\frac{x}{f} = \frac{X_C}{Z_C}, \qquad \frac{y}{f} = \frac{Y_C}{Z_C}$$

that is:

$$x = f\,\frac{X_C}{Z_C}, \qquad y = f\,\frac{Y_C}{Z_C} \tag{4}$$

where (x, y) are the image plane coordinates, $(X_C, Y_C, Z_C)$ are the camera coordinates, and f is the focal length of the camera. Equation (4) is expressed in matrix form as:

$$Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \tag{5}$$

Integrating the matrix relations above gives:

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{f}{dx} & 0 & u_0 & 0 \\ 0 & \tfrac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{6}$$

where

$$M_1 = \begin{bmatrix} \tfrac{f}{dx} & 0 & u_0 & 0 \\ 0 & \tfrac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

is referred to as the internal reference matrix, and

$$M_2 = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

is referred to as the external reference matrix.

Further, let

$$M = M_1 M_2$$

so that $Z_C\,[u, v, 1]^T = M\,[X, Y, Z, 1]^T$.
In summary, in this embodiment the RGB-D camera used for environment detection is placed on the base of the unmanned aerial vehicle, the detected surrounding environment information is sent back to the unmanned aerial vehicle during flight, and a three-dimensional map is constructed from the environment picture information by the SLAM composition method combined with a deep-learning image processing method to obtain the obstacle information. A suitable obstacle avoidance algorithm then uses the obstacle information to assist the rotor unmanned aerial vehicle in successfully bypassing obstacles. Compared with acquiring the information of the whole route map in advance, the method shoots the environment information of obstacles encountered during flight in real time, acquires the information through coordinate transformation, quickly triggers the emergency obstacle avoidance device, and controls the flight state of the rotor unmanned aerial vehicle. This effectively improves the pose estimation accuracy of the RGB-D camera in a dynamic environment, meets the high requirement on response speed during flight, and ensures that the unmanned aerial vehicle avoids obstacles in real time and flies safely to the target point.
The above method is explained in detail with reference to specific examples below.
The rotor unmanned aerial vehicle obstacle avoidance method based on deep learning fusion SLAM provided by the embodiment comprises the following steps:
the method comprises the following steps: in the flying process of the unmanned aerial vehicle, the surrounding environment is tracked and photographed in real time by using an RGB-D camera carried by the unmanned aerial vehicle.
In one embodiment, the following schemes are adopted for the RGB-D camera:
1) Binocular vision: ZED, Tango;
2) Structured light: Kinect v1, Xtion;
3) TOF (Time of Flight): Kinect v2, RealSense.
the advantages of using an RGB-D camera are: dense or semi-dense depth maps can be obtained directly without calculating feature points and descriptors. The framework is also simpler than the traditional SLAM, and can be divided into front-end RGB-D camera tracking and back-end model reconstruction. The method is widely used for realizing rapid three-dimensional reconstruction and dense track tracking.
Each of the three schemes above has its own advantages and disadvantages, as shown in Table 1.

TABLE 1 — advantages and disadvantages of the three RGB-D camera schemes (the table is reproduced as an image in the original publication)
Since the unmanned aerial vehicle of this embodiment flies in an outdoor, uncertain environment, a camera adopting the TOF (Time of Flight) scheme is suitable.
The RGB-D camera is arranged on a base of the unmanned aerial vehicle and shoots the surrounding environment in real time along with the flight course of the unmanned aerial vehicle.
Step two: according to the image information shot by the RGB-D camera, semantic information is added to traditional SLAM using the SLAM method combined with the existing advantages of deep learning in image processing, and image semantic segmentation and semantic map construction are performed. The SLAM method is a simultaneous localization and mapping method: the rotor unmanned aerial vehicle flies in an unknown environment and draws a complete map of the unknown environment while flying, where a complete map covers every corner of the environment that can be reached without obstruction.
And combining the existing advantages of deep learning in the aspect of image processing, adding semantic information on the traditional SLAM, and performing image semantic segmentation and semantic map construction. Geometric consistency between images obtained in a SLAM system is used to promote image semantic segmentation, so that SLAM and semantic segmentation can complement each other.
In three-dimensional mapping, semantic information is difficult to obtain. We address this problem effectively by combining advanced deep learning methods with semi-dense, video-stream-based SLAM technology. In this method, two-dimensional semantic information is mapped into three dimensions after combining the associations between connected keyframes with spatial consistency. Not every keyframe in a sequence needs to be semantically segmented, so the computation time is reasonable for the real-time demands of unmanned aerial vehicle obstacle avoidance. The basic steps are as follows:
1) Input an RGB image;
2) Select and refine keyframes;
3) Perform 2D semantic segmentation;
4) Perform 3D reconstruction and semantic optimization.
Step three: a camera coordinate system is constructed on the basis of the image information, and a three-dimensional environment of the surrounding environment, namely obstacle information, is constructed according to the conversion between the camera coordinate system and a world coordinate system.
The transformation relationship between the camera coordinate system and the world coordinate system is as follows:
referring to fig. 3, the principle and steps are as follows:
this relates to the relationship of four coordinate systems, namely the pixel plane coordinate system (u, v), the image plane coordinate system (x, y), the camera coordinate system (Xc, Yc, Zc) and the world coordinate system (Xw, Yw, Zw). The coordinates of the four coordinate systems are related through the internal and external parameters of the camera, so that the coordinates of a point in a world coordinate system can be reversely deduced from the coordinates of one point on a shot picture, and the purpose of three-dimensional reconstruction is achieved. And the assumed parameters are the internal and external parameters to be calibrated.
1) Relation between pixel coordinates and image plane coordinate system
The origin of the pixel coordinate system is located in the upper left corner of the image, the u-axis is parallel to the right to the x-axis, and the v-axis is parallel to the y-axis. The difference between the pixel coordinate system and the image plane coordinate system is a zoom and a translation of an origin.
Assume that the physical dimensions of each pixel in the u-axis and v-axis directions are dx and dy. The following formula can be derived:
Figure BDA0003466275920000111
where dx and dy denote unit pixels projected on the x-axis and y-axis of the image plane coordinate system (x, y), respectively; u. of0,v0Is the image plane center.
From the above equation, the equation is represented in matrix form using knowledge of linear algebra as follows:
Figure BDA0003466275920000112
2) relation between world coordinate system and camera coordinate system
The world coordinate system can obtain a camera coordinate system through rigid body transformation, namely under the condition that the state structure of an object is not changed, the pose of the object is changed through rotating, translating and the like, and the pose of the object can be obtained through a rotating matrix R and a translating matrix t, wherein the formula is as follows:
Figure BDA0003466275920000121
wherein (X)C,YC,ZC) Is the camera coordinate, (X, Y, Z) is the world coordinate, R is the three-dimensional rotating square matrix, t is the three-dimensional translation column vector, is the four-dimensional square matrix, i.e. the
Figure BDA0003466275920000122
Is an external reference matrix.
3) Relation between camera coordinate system and image plane coordinate system
The relation between the camera coordinate system and the image plane coordinate system is actually a projection relation, and the projection relation satisfies the similar triangle principle, and the projection equation is as follows:
Figure BDA0003466275920000123
the formula is as follows:
Figure BDA0003466275920000124
wherein (X, y) is the image plane coordinate, (X)C,YC,ZC) Is the camera coordinates and f is the focal length of the camera.
The above formula is expressed in matrix form as:
Figure BDA0003466275920000125
4) formula integration
Integrating the matrix relations to obtain the following formula:
Figure BDA0003466275920000126
further, let
Figure BDA0003466275920000131
Wherein the content of the first and second substances,
Figure BDA0003466275920000132
referred to as an internal reference matrix.
Figure BDA0003466275920000133
Referred to as the external reference matrix.
From the above analysis, the pixel coordinate of a certain point under the camera coordinate system can be obtained according to the values of the internal reference matrix and the external reference matrix, so that the obstacle information is obtained, and a theoretical basis is provided for the obstacle avoidance of the next step.
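To make the analysis concrete, here is a minimal sketch of both directions of the chain, assuming a calibrated 3×3 internal reference matrix K (the left block of M_1 above) and external parameters R, t; the numeric values below are hypothetical calibration values, not from the patent:

```python
import numpy as np

def world_to_pixel(P_w, K, R, t):
    """Project a world-frame point to pixel coordinates:
    world -> camera (rigid transform), then camera -> pixel (projection)."""
    P_c = R @ P_w + t
    u, v, w = K @ P_c          # w equals the depth Z_C
    return u / w, v / w, P_c[2]

def pixel_to_world(u, v, depth, K, R, t):
    """Invert the chain for one pixel whose depth Z_C is known, e.g. from
    the RGB-D camera: pixel -> camera -> world."""
    P_c = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    return R.T @ (P_c - t)     # inverse of the rigid body transformation

# Hypothetical calibration values for illustration only:
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
u, v, z = world_to_pixel(np.array([0.5, 0.2, 2.0]), K, R, t)
assert np.allclose(pixel_to_world(u, v, z, K, R, t), [0.5, 0.2, 2.0])
```

The per-pixel depth provided by the RGB-D camera is what makes the inverse direction well-posed, and hence what turns pixel observations into the obstacle information used for avoidance.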
Step four: the unmanned aerial vehicle acquires corresponding obstacle information and starts to implement obstacle avoidance instructions, corresponding path planning is carried out, and safety and accuracy of real-time flight of the unmanned aerial vehicle are guaranteed.
In one embodiment of the invention, an obstacle avoidance processing device is designed for the unmanned aerial vehicle; it receives the three-dimensional mapping information of the obstacles, generates a corresponding obstacle avoidance path, and sends it to the actuator device, thereby controlling the safety and accuracy of the flight.
According to the invention, the RGB-D camera used for environment detection is arranged on the base of the unmanned aerial vehicle, the detected surrounding environment information is sent back to the unmanned aerial vehicle during flight, and a three-dimensional map is constructed from the environment picture information by the SLAM composition method combined with a deep-learning image processing method to obtain the obstacle information. A suitable obstacle avoidance algorithm then uses the obstacle information to assist the rotor unmanned aerial vehicle in successfully bypassing obstacles. Compared with acquiring the information of the whole route map in advance, the method shoots the environment information of obstacles encountered during flight in real time, acquires the information through coordinate transformation, quickly triggers the emergency obstacle avoidance device, and controls the flight state of the rotor unmanned aerial vehicle; this effectively improves the pose estimation accuracy of the RGB-D camera in a dynamic environment, meets the high requirement on response speed during flight, and ensures that the unmanned aerial vehicle avoids obstacles in real time and flies safely to the target point.
In summary, after adopting the above scheme, the invention constructs a three-dimensional map using the SLAM composition method combined with a deep-learning image processing method and obtains the obstacle information. A suitable obstacle avoidance algorithm uses this information to assist the rotor unmanned aerial vehicle in successfully bypassing obstacles, effectively improving the speed of environmental image processing during flight and enabling real-time obstacle avoidance. It has practical popularization value.
This embodiment further provides a rotor unmanned aerial vehicle obstacle avoidance system, comprising:
the data acquisition module is used for tracking and photographing the environment around the unmanned aerial vehicle by adopting an RGB-D camera in the flying process of the unmanned aerial vehicle;
the model construction module is used for carrying out image semantic segmentation and semantic map construction on image information shot by the RGB-D camera by utilizing an SLAM method;
the coordinate conversion module is used for constructing a camera coordinate system on the basis of the image information, acquiring the representation relation of pixel coordinates in the world coordinate system by utilizing the conversion relation between the camera coordinate system and the world coordinate system, and constructing the three-dimensional environment of the surrounding environment of the unmanned aerial vehicle as the obstacle information;
and the flight control module is used for controlling the flight of the unmanned aerial vehicle according to the obstacle information and carrying out corresponding path planning, ensuring the safety and accuracy of the flight of the unmanned aerial vehicle.
Further as an optional implementation manner, the performing image semantic segmentation and semantic map construction by using the SLAM method includes:
based on deep learning, adding semantic information to SLAM to perform image semantic segmentation and semantic map construction;
and promoting image semantic segmentation by adopting the geometric consistency between the images obtained in the SLAM, so that the SLAM and the semantic segmentation can complement each other.
Further as an optional implementation manner, the adding semantic information to the SLAM based on the deep learning to perform image semantic segmentation and semantic map construction includes:
semantic information is added to the SLAM by combining a deep learning technology method and a semi-dense SLAM technology based on video stream;
and for two-dimensional semantic information, three-dimensional mapping is carried out after the association between the connected key frames with spatial consistency is combined.
The rotor unmanned aerial vehicle obstacle avoidance system can execute the rotor unmanned aerial vehicle obstacle avoidance method provided by the method embodiment of the invention, can execute any combination implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
This embodiment further provides a rotor unmanned aerial vehicle obstacle avoidance device, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method shown in Fig. 1.
The rotor unmanned aerial vehicle obstacle avoidance device can execute the rotor unmanned aerial vehicle obstacle avoidance method provided by the method embodiment of the invention, can execute any combination implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
The embodiment of the application also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.
This embodiment also provides a storage medium storing instructions or a program for executing the rotor unmanned aerial vehicle obstacle avoidance method provided by the method embodiment of the invention; when the instructions or the program are run, any combination of the steps of the method embodiment can be executed, with the corresponding functions and beneficial effects of the method.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A rotor unmanned aerial vehicle obstacle avoidance method is characterized by comprising the following steps:
in the flying process of the unmanned aerial vehicle, tracking and photographing the surrounding environment of the unmanned aerial vehicle by adopting an RGB-D camera;
performing image semantic segmentation and semantic map construction on image information shot by an RGB-D camera by using an SLAM method;
a camera coordinate system is constructed on the basis of the image information, the representation of the pixel coordinates in the world coordinate system is obtained by using the conversion relation between the camera coordinate system and the world coordinate system, and a three-dimensional model of the environment around the unmanned aerial vehicle is constructed as the obstacle information; and the flight of the unmanned aerial vehicle is controlled according to the obstacle information, with corresponding path planning carried out to ensure the safety and accuracy of the flight of the unmanned aerial vehicle.
2. The rotor unmanned aerial vehicle obstacle avoidance method according to claim 1, wherein the image semantic segmentation and semantic map construction using the SLAM method comprise:
based on deep learning, adding semantic information to SLAM to perform image semantic segmentation and semantic map construction;
and promoting image semantic segmentation by adopting the geometric consistency between the images obtained in the SLAM, so that the SLAM and the semantic segmentation can complement each other.
3. The rotor unmanned aerial vehicle obstacle avoidance method according to claim 2, wherein adding semantic information to SLAM based on deep learning to perform image semantic segmentation and semantic map construction comprises:
semantic information is added to the SLAM by combining a deep learning technology method and a semi-dense SLAM technology based on video stream;
and for two-dimensional semantic information, three-dimensional mapping is carried out after the association between the connected key frames with spatial consistency is combined.
4. The rotor unmanned aerial vehicle obstacle avoidance method according to claim 1, wherein the obtaining of the representation of the pixel coordinates in the world coordinate system by using the conversion relation between the camera coordinate system and the world coordinate system comprises:
acquiring a first relation between a pixel plane coordinate system and an image plane coordinate system;
acquiring a second relation between the world coordinate system and the camera coordinate system;
acquiring a third relation between a camera coordinate system and an image plane coordinate system;
and acquiring the representation relation of the pixel coordinates in the world coordinate system according to the first relation, the second relation and the third relation.
5. The rotor unmanned aerial vehicle obstacle avoidance method according to claim 4, wherein the obtaining of the representation of the pixel coordinates in the world coordinate system according to the first relation, the second relation and the third relation comprises:

1) the first relation between the pixel plane coordinate system and the image plane coordinate system:

the origin of the pixel plane coordinate system is located at the upper left corner of the image, the u-axis points right, parallel to the x-axis, and the v-axis is parallel to the y-axis; the difference between the pixel plane coordinate system and the image plane coordinate system is a scaling and a translation of the origin;

assuming that the physical size of each pixel in the u-axis and v-axis directions is dx and dy, the formula is obtained:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0 \tag{1}$$

where dx and dy denote the physical size of a unit pixel projected on the x-axis and y-axis of the image plane coordinate system (x, y), respectively, and $(u_0, v_0)$ is the image plane center;

based on equation (1), the relation is written in matrix form using linear algebra as follows:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{dx} & 0 & u_0 \\ 0 & \tfrac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{2}$$

2) the second relation between the world coordinate system and the camera coordinate system:

the camera coordinate system is obtained from the world coordinate system by a rigid body transformation, expressed through a rotation matrix R and a translation vector t, with the formula:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{3}$$

where $(X_C, Y_C, Z_C)$ are the camera coordinates, (X, Y, Z) are the world coordinates, R is the 3×3 rotation matrix and t is the 3×1 translation column vector; the 4×4 matrix

$$\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

is the external reference matrix;

3) the third relation between the camera coordinate system and the image plane coordinate system:

the relation between the camera coordinate system and the image plane coordinate system is in fact a projection, which satisfies the similar-triangle principle, with projection equations:

$$\frac{x}{f} = \frac{X_C}{Z_C}, \qquad \frac{y}{f} = \frac{Y_C}{Z_C}$$

that is:

$$x = f\,\frac{X_C}{Z_C}, \qquad y = f\,\frac{Y_C}{Z_C} \tag{4}$$

where (x, y) are the image plane coordinates, $(X_C, Y_C, Z_C)$ are the camera coordinates, and f is the focal length of the camera;

equation (4) is expressed in matrix form as:

$$Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} \tag{5}$$

integrating the matrix relations above gives:

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \tfrac{f}{dx} & 0 & u_0 & 0 \\ 0 & \tfrac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{6}$$

where

$$M_1 = \begin{bmatrix} \tfrac{f}{dx} & 0 & u_0 & 0 \\ 0 & \tfrac{f}{dy} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

is referred to as the internal reference matrix, and

$$M_2 = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$$

is referred to as the external reference matrix.
6. A rotor unmanned aerial vehicle obstacle avoidance system, characterized by comprising:
the data acquisition module is used for tracking and photographing the environment around the unmanned aerial vehicle by adopting an RGB-D camera in the flying process of the unmanned aerial vehicle;
the model construction module is used for carrying out image semantic segmentation and semantic map construction on image information shot by the RGB-D camera by utilizing an SLAM method;
the coordinate conversion module is used for constructing a camera coordinate system on the basis of the image information, acquiring the representation relation of pixel coordinates in the world coordinate system by utilizing the conversion relation between the camera coordinate system and the world coordinate system, and constructing the three-dimensional environment of the surrounding environment of the unmanned aerial vehicle as the obstacle information;
and the flight control module is used for controlling the flight of the unmanned aerial vehicle according to the obstacle information and carrying out corresponding path planning, so as to ensure the safety and accuracy of the flight of the unmanned aerial vehicle.
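As a purely schematic illustration of claim 6, the skeleton below arranges the four modules as Python classes; all class and method names are assumptions for illustration, since the claim does not prescribe any particular API:

```python
# Schematic skeleton of the four modules recited in claim 6.
# Class and method names are illustrative assumptions only.

class DataAcquisitionModule:
    """Tracks and photographs the UAV's surroundings with an RGB-D camera."""
    def capture(self):
        ...  # would return an RGB frame plus an aligned depth frame

class ModelConstructionModule:
    """Runs SLAM-based image semantic segmentation and semantic mapping."""
    def build(self, rgb, depth):
        ...  # would return per-pixel semantic labels and a camera pose

class CoordinateConversionModule:
    """Maps pixel coordinates to the world frame via equation (6)."""
    def to_world(self, labels, depth, pose):
        ...  # would return three-dimensional obstacle information

class FlightControlModule:
    """Plans a safe, accurate path around the reconstructed obstacles."""
    def plan(self, obstacles):
        ...  # would issue flight-control commands
```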
7. The unmanned rotorcraft obstacle avoidance system of claim 6, wherein said image semantic segmentation and semantic mapping using SLAM methods comprises:
based on deep learning, adding semantic information to SLAM to perform image semantic segmentation and semantic map construction;
and promoting image semantic segmentation by adopting the geometric consistency between the images obtained in the SLAM, so that the SLAM and the semantic segmentation can complement each other.
8. The unmanned gyroplane obstacle avoidance system of claim 7, wherein the deep learning based semantic information is added to SLAM for image semantic segmentation and semantic map construction, comprising:
semantic information is added to the SLAM by combining a deep learning method with a video-stream-based semi-dense SLAM technique;
and for the two-dimensional semantic information, three-dimensional mapping is performed after combining the relations between connected keyframes that have spatial consistency.
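A minimal sketch of this three-dimensional mapping step follows, assuming each keyframe provides per-pixel semantic labels, an aligned depth image from the RGB-D camera, an intrinsic matrix K, and a camera-to-world pose; the function name and the label handling are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def backproject_labels(labels, depth, K, T_wc):
    """Back-project labelled keyframe pixels into the world frame.

    labels: (H, W) semantic class id per pixel
    depth:  (H, W) depth Z_C per pixel, in metres (0 = no measurement)
    K:      (3, 3) intrinsic matrix
    T_wc:   (4, 4) camera-to-world pose of the keyframe
    returns an (N, 4) array of rows [X, Y, Z, label].
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0                          # skip pixels without depth
    # Invert the projection of equation (6): X_C = (u - u0) * Z_C / (f/dx)
    x_c = (u.ravel()[valid] - K[0, 2]) * z[valid] / K[0, 0]
    y_c = (v.ravel()[valid] - K[1, 2]) * z[valid] / K[1, 1]
    pts_c = np.stack([x_c, y_c, z[valid], np.ones_like(x_c)])
    pts_w = (T_wc @ pts_c)[:3].T           # rigid transform to the world frame
    return np.column_stack([pts_w, labels.ravel()[valid]])
```

Accumulating these points over the connected keyframes and fusing the labels per voxel (for example by majority vote) would then yield the spatially consistent three-dimensional semantic map the claim describes.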
9. An obstacle avoidance device for a rotor unmanned aerial vehicle, characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a processor-executable program, wherein the program, when executed by a processor, performs the method according to any one of claims 1 to 5.
CN202210030533.9A 2022-01-12 2022-01-12 Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle Pending CN114529800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210030533.9A CN114529800A (en) 2022-01-12 2022-01-12 Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN114529800A true CN114529800A (en) 2022-05-24

Family

ID=81621265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210030533.9A Pending CN114529800A (en) 2022-01-12 2022-01-12 Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN114529800A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117406706A (en) * 2023-08-11 2024-01-16 汕头大学 Multi-agent obstacle avoidance method and system combining causal model and deep reinforcement learning
CN117406706B (en) * 2023-08-11 2024-04-09 汕头大学 Multi-agent obstacle avoidance method and system combining causal model and deep reinforcement learning
CN117826845A (en) * 2024-03-04 2024-04-05 易创智芯(西安)科技有限公司 Aviation operation safety active obstacle avoidance and planning method

Similar Documents

Publication Publication Date Title
US10818029B2 (en) Multi-directional structured image array capture on a 2D graph
US10430995B2 (en) System and method for infinite synthetic image generation from multi-directional structured image array
Jörgensen et al. Monocular 3d object detection and box fitting trained end-to-end using intersection-over-union loss
CN108335353B (en) Three-dimensional reconstruction method, device and system of dynamic scene, server and medium
CN108428255B (en) Real-time three-dimensional reconstruction method based on unmanned aerial vehicle
CN107329490B (en) Unmanned aerial vehicle obstacle avoidance method and unmanned aerial vehicle
US20210097717A1 (en) Method for detecting three-dimensional human pose information detection, electronic device and storage medium
US11064178B2 (en) Deep virtual stereo odometry
EP3886053A1 (en) Slam mapping method and system for vehicle
WO2020113423A1 (en) Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
CN113850126A (en) Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle
CN114529800A (en) Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle
CN112312113B (en) Method, device and system for generating three-dimensional model
Yang et al. Reactive obstacle avoidance of monocular quadrotors with online adapted depth prediction network
WO2019127518A1 (en) Obstacle avoidance method and device and movable platform
CN115035235A (en) Three-dimensional reconstruction method and device
CN110348351B (en) Image semantic segmentation method, terminal and readable storage medium
Sharma et al. Unsupervised learning of depth and ego-motion from cylindrical panoramic video
CN105335959B (en) Imaging device quick focusing method and its equipment
CN113496503A (en) Point cloud data generation and real-time display method, device, equipment and medium
CN117252912A (en) Depth image acquisition method, electronic device and storage medium
CN116206050A (en) Three-dimensional reconstruction method, electronic device, and computer-readable storage medium
Sharma et al. Unsupervised learning of depth and ego-motion from cylindrical panoramic video with applications for virtual reality
Garau et al. Unsupervised continuous camera network pose estimation through human mesh recovery
Liang et al. Research and Hardware Implementation of Binocular Vision Obstacle Avoidance for UAV

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination