CN116168306A - Unmanned aerial vehicle video target tracking method based on region re-search - Google Patents


Info

Publication number
CN116168306A
Authority
CN
China
Prior art keywords
target
unmanned aerial
aerial vehicle
image
coordinate system
Prior art date
Legal status
Pending
Application number
CN202211099235.1A
Other languages
Chinese (zh)
Inventor
左毅
周中元
罗子娟
李雪松
缪伟鑫
Current Assignee
CETC 28 Research Institute
Original Assignee
CETC 28 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 28 Research Institute

Classifications

    • G06V 20/17: Terrestrial scenes taken from planes or by drones
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
    • G06V 20/40: Scenes; scene-specific elements in video content
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses an unmanned aerial vehicle video target tracking method based on region re-search, which comprises the following steps: select the target to be tracked with a bounding box in the current video frame, save it as a template, initialize a deep-learning-based tracking model with the template, and compute and store the target's coordinate values. In the next frame, locate the target with the tracking model and judge with the SIFT method whether the target image matches the stored template; if matching succeeds, enter the next frame and continue tracking; if it fails, search for the target again using an image matching method. If the re-searched target is correct, update the target template and the target coordinate values; if it is incorrect, repeat the above steps to search again. The method provided by the invention effectively solves the loss of the tracked target caused by severe jitter, frame loss, and rapid focal-length switching in unmanned aerial vehicle video, has high reliability, and can still track the target stably and accurately in scenes with complex backgrounds and heavy noise interference.

Description

Unmanned aerial vehicle video target tracking method based on region re-search
Technical Field
The invention relates to a video target tracking method, and in particular to an unmanned aerial vehicle video target tracking method based on region re-search.
Background
Because of their unique advantages and flexibility, unmanned aerial vehicles perform many reconnaissance tasks in modern warfare, and they are commonly equipped with airborne measurement systems to track targets. The unmanned aerial vehicle reports information such as the target's position, providing a basis for command decisions. Accurate target tracking by unmanned aerial vehicles is an important means of acquiring battlefield target information and grasping the battlefield situation, and an important prerequisite for command decision-making and weapon strikes. Current tracking methods based on unmanned aerial vehicle imagery fall into two categories: video target tracking based on deep learning, and tracking based on target positioning.
Owing to the successful application of deep learning in object detection and recognition, some researchers have adapted this framework, which differs from traditional methods, to the tracking problem with good results. For example, the SiamFC algorithm re-implements the traditional correlation filtering method as a twin (Siamese) network, and its running speed meets real-time requirements. SiamRCNN borrows ideas from multi-target tracking: it records the tracked target and other suspicious targets during tracking and obtains the current target state through a Tracklet Dynamic Programming Algorithm, which has the advantage of eliminating the influence of background distractors on target tracking. Combining target segmentation with tracking is the latest trend: FTMU adopts Mask R-CNN as the candidate-region extraction and mask segmentation network, and uses a reinforcement learning method to select the current tracking mode, either IoU-based matching or appearance-feature-based matching; the method also uses a reinforcement learning network to decide whether to update the target template. On the other hand, since an unmanned aerial vehicle generally reports its longitude, latitude, and altitude, the absolute position of a target in the unmanned aerial vehicle video can be calculated from such information, so methods based on target positioning are also commonly used for unmanned aerial vehicle video.
However, the above methods place high demands on video quality. Unmanned aerial vehicle video typically suffers from severe jitter, frame loss, and rapid switching of the camera focal length; applying the above methods directly leads to target tracking errors, and because they lack a re-search capability the tracked target may be lost.
Disclosure of Invention
Purpose of the invention: the technical problem to be solved by the invention is to provide, in view of the shortcomings of the prior art, an unmanned aerial vehicle video target tracking method based on region re-search.
To solve the above technical problem, the invention discloses an unmanned aerial vehicle video target tracking method based on region re-search, which comprises the following steps:
step 1, select the target to be tracked with a bounding box in the current frame of the unmanned aerial vehicle video; the unmanned aerial vehicle video refers to video shot by an unmanned aerial vehicle;
step 2, calculate the position information of the target to be tracked, store it, generate the target image feature template, and initialize the tracking model; the position information of the target to be tracked is its longitude, latitude, and altitude in the geodetic coordinate system;
step 3, detect the target to be tracked in the next frame of the unmanned aerial vehicle video based on the tracking model to obtain a detection result, and locate the target to obtain a positioning result;
step 4, judge whether the detection result of step 3 is correct;
step 5, if the detection result is correct, update the position information of the target to be tracked using the positioning result of step 3, and enter the next frame of the unmanned aerial vehicle video;
step 6, if the detection result is incorrect, re-detect the target within a selected range based on the target position information stored in step 2, and judge again whether the detection result is correct;
step 7, if the re-detection result is correct, update the target position information, update the target image feature template, update the tracking model, and enter the next frame of the unmanned aerial vehicle video;
step 8, if the re-detection result is incorrect, enter the next frame of the unmanned aerial vehicle video and repeat step 6.
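Steps 1-8 above can be sketched as a single tracking loop. The sketch below is illustrative only: `track`, `is_correct`, and `update_template` are hypothetical callables standing in for the tracking-model detection of step 3, the correctness judgment of step 4, and the template update of step 7.

```python
def tracking_loop(frames, init_target, track, is_correct, update_template):
    """Sketch of the region re-search tracking loop (steps 1-8).

    track(frame, template)      -> detection (step 3 / step 6 re-detection)
    is_correct(det, template)   -> bool (step 4 judgment)
    update_template(det)        -> new template (step 7-2)
    """
    template = init_target          # step 2: saved target template
    positions = []                  # per-frame results; None while target is lost
    searching = False               # True while in re-search mode (steps 6/8)

    for frame in frames:
        det = track(frame, template)
        if is_correct(det, template):
            positions.append(det)           # steps 5/7: accept the detection
            if searching:
                template = update_template(det)  # step 7: template updated only
            searching = False                    # after a successful re-search
        else:
            positions.append(None)
            searching = True                # steps 6/8: keep re-searching
    return positions, template
```

Note the design choice implied by the steps: the template is refreshed only after a successful re-search (step 7), not on every correct frame, which keeps the template from drifting during normal tracking.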
The step of selecting the target to be tracked with a bounding box in the current frame of the unmanned aerial vehicle video in step 1 of the invention is as follows:
based on the current single-frame image of the unmanned aerial vehicle video, select the target to be tracked with a bounding box to obtain the target's position information in the current single-frame image, including the center-point coordinates (x, y), where x is the center-point abscissa and y is the center-point ordinate, the width w and height h of the target box, and the size of the current single-frame image, where W is the width and H is the height.
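The box selection above reduces to simple arithmetic. A minimal sketch follows; the corner-based input format (two drag corners in pixels) is an assumption, since the text does not specify how the box is drawn.

```python
def box_to_target_info(x1, y1, x2, y2):
    """Convert a user-drawn box, given as two corner points in pixels,
    into the step-1 position information: center point (x, y) plus the
    target box width w and height h."""
    x = (x1 + x2) / 2.0   # center-point abscissa
    y = (y1 + y2) / 2.0   # center-point ordinate
    w = abs(x2 - x1)      # target box width
    h = abs(y2 - y1)      # target box height
    return x, y, w, h
```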
The step 2 of the invention comprises the following steps:
step 2-1, from the unmanned aerial vehicle's longitude α, latitude λ, altitude h, pitch angle θ, yaw angle ψ, roll angle φ, pitch angle alpha relative to the target, rotation angle beta relative to the target, camera focal length f, and image resolution r, compute through coordinate-system conversion the longitude x_g, latitude y_g, and height z_g coordinates of the target to be tracked in the geodetic rectangular coordinate system;
step 2-2, store the position information (w, h) of the target to be tracked in the image shot by the unmanned aerial vehicle, where w and h represent the target's pixel extent in the image, i.e. its width and height, together with its geodetic coordinates (x_g, y_g, z_g);
Step 2-3, cutting an original image, namely an image shot by the unmanned aerial vehicle, according to the specific coordinates of the target to be tracked, which are determined by frame selection in the step 1, and storing the cut image as a target image feature template;
step 2-4, select the DeepSORT tracking model (paper: https://arxiv.org/pdf/1703.07402.pdf; code: https://github.com/nwojke/deep_sort) as the tracking model; a pre-trained tracking model is adopted, and the parameters of the tracking model are not updated;
step 2-5, input the target image feature template into the tracking model to complete the initialization of the tracking model.
The coordinate-system conversion in step 2-1 of the invention, i.e. the method of computing the longitude x_g, latitude y_g, and height z_g coordinates of the target to be tracked in the geodetic rectangular coordinate system, is as follows:
step 2-1-1, based on the coordinates of the target to be tracked in the image shot by the unmanned aerial vehicle, the camera focal length, and the image resolution, complete the conversion from the image coordinate system to the camera coordinate system:
x_c = x×f − W/2
y_c = y×f − H/2
z_c = 0
where x_c, y_c, and z_c are the x-, y-, and z-axis coordinates in the camera coordinate system; f is the camera focal length, W is the image width, and H is the image height;
step 2-1-2, based on the coordinates of the target to be tracked in the camera coordinate system, complete the conversion from the camera coordinate system to the unmanned aerial vehicle base coordinate system, obtaining the coordinate values (x_b, y_b, z_b):
x_b = x_c×r + R·cos(alpha)·cos(beta)
y_b = y_c×r + R·cos(alpha)·sin(beta)
z_b = z_c + R·sin(alpha)
where x_b is the abscissa, y_b the ordinate, and z_b the height in the unmanned aerial vehicle base coordinate system; r is the image resolution, R is the camera-to-target range, alpha is the camera pitch angle, and beta is the camera rotation angle;
step 2-1-3, based on the coordinates of the target to be tracked in the unmanned aerial vehicle base coordinate system, obtain the conversion matrix from the unmanned aerial vehicle base coordinate system to the geodetic rectangular coordinate system by the homogeneous coordinate transformation method; the conversion comprises: unmanned aerial vehicle base coordinate system to carrier coordinate system, carrier coordinate system to geographic coordinate system, and geographic coordinate system to geodetic rectangular coordinate system. Finally, the coordinate values (x_g, y_g, z_g) of the target to be tracked in the geodetic rectangular coordinate system are computed. The transformation matrix is:
[overall transformation equation rendered as an image in the original]
where x_g is the abscissa, y_g the ordinate, and z_g the height of the target to be tracked in the geodetic rectangular coordinate system, and where the transformation matrix from the geodetic rectangular coordinate system to the geographic system is:
[matrix rendered as an image in the original]
where α_s, λ_s, and h_s are respectively the geodetic longitude, latitude, and altitude of the carrier, e is the first eccentricity of the Earth reference ellipsoid, and R_N is the radius of curvature;
the transformation matrix from the geographic system to the carrier system is:
[matrix rendered as an image in the original]
where ψ_as, θ_as, and φ_as are respectively the yaw angle, pitch angle, and roll angle of the carrier;
the transformation matrix from the carrier coordinate system to the unmanned aerial vehicle base coordinate system is:
[matrix rendered as an image in the original]
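The closed-form parts of the conversion, steps 2-1-1 and 2-1-2, can be transcribed directly; a minimal sketch follows. The final homogeneous transformation to the geodetic rectangular coordinate system is omitted here because its matrices appear only as images in the original. The reading of R as the camera-to-target range (with r the image resolution) is an assumption.

```python
import math

def image_to_camera(x, y, f, W, H):
    """Step 2-1-1: image coordinate system -> camera coordinate system.
    (x, y) is the target center in the image, f the camera focal length,
    W and H the image width and height."""
    x_c = x * f - W / 2.0
    y_c = y * f - H / 2.0
    z_c = 0.0
    return x_c, y_c, z_c

def camera_to_base(x_c, y_c, z_c, r, R, alpha, beta):
    """Step 2-1-2: camera coordinate system -> UAV base coordinate system.
    r is the image resolution; R, alpha, beta describe the camera-to-target
    geometry (range, camera pitch angle, camera rotation angle, radians)."""
    x_b = x_c * r + R * math.cos(alpha) * math.cos(beta)
    y_b = y_c * r + R * math.cos(alpha) * math.sin(beta)
    z_b = z_c + R * math.sin(alpha)
    return x_b, y_b, z_b
```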
the method for detecting the target to be tracked in the step 3 of the invention comprises the following steps:
sending the next frame image of the unmanned aerial vehicle video into the initialized tracking model to obtain a detection result, namely the position coordinate of the target in the next frame image, and calculating the position (x g_new ,y g_new ,z g_new ) The method comprises the steps of carrying out a first treatment on the surface of the The detection result includes a detected target image.
The method for judging whether the detection result is correct in step 4 of the invention is as follows:
step 4-1, compute the displacement distance of the target to be tracked in the geodetic coordinate system:
d = √((x_g_new − x_g)² + (y_g_new − y_g)² + (z_g_new − z_g)²)
step 4-2, compare the computed result with a preset threshold η; if it exceeds the threshold, the motion of the target to be tracked is considered abnormal; if it is smaller than the threshold, proceed to the next judgment;
step 4-3, crop the detected target image based on the detection result of the tracking model in step 3;
step 4-4, use the SIFT method (see https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf) to compare whether the target image in the next frame is consistent with the target image feature template stored in step 2; if the number of matched keypoints exceeds a threshold η, the images are judged consistent and the detection is judged successful; otherwise it is judged unsuccessful.
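The two-stage judgment of step 4 can be sketched as follows. This is a pure-Python sketch: in practice the matched-keypoint count would come from a SIFT implementation (e.g. OpenCV's), and the two thresholds, both written η in the text, would normally be tuned independently.

```python
import math

def displacement(p_old, p_new):
    """Step 4-1: Euclidean displacement of the target in the geodetic
    coordinate system between (x_g, y_g, z_g) and (x_g_new, y_g_new, z_g_new)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_new, p_old)))

def detection_correct(p_old, p_new, num_matched_keypoints,
                      dist_threshold, match_threshold):
    """Steps 4-1 to 4-4: accept the detection only if the geodetic
    displacement is below its threshold (no abnormal motion) AND enough
    SIFT keypoints match the stored template."""
    if displacement(p_old, p_new) >= dist_threshold:   # step 4-2: abnormal motion
        return False
    return num_matched_keypoints > match_threshold     # step 4-4: SIFT consistency
```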
The specific steps for updating the position information of the target to be tracked in step 5 of the invention are as follows:
using the same method as step 2-1, compute the longitude, latitude, and altitude of the target to be tracked in the geodetic coordinate system from the unmanned aerial vehicle position information, camera information, and target position of the current frame of the unmanned aerial vehicle video, and update the stored target position information.
The specific steps of re-detecting the target to be tracked within the selected range and judging whether the re-detection result is correct in step 6 of the invention are as follows:
step 6-1, construct a re-detection region centered on the stored target position, i.e. the target image coordinates (px, py); the corner coordinates of the re-detection region are ((px−w/2, py−h/2), (px−w/2, py+h/2), (px+w/2, py+h/2), (px+w/2, py−h/2)), where w is the width of the target image and h is the height of the target image;
step 6-2, re-detect the target in the re-detection region using the method of step 3;
step 6-3, judge whether the re-detected target is correct using the method of step 4.
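The re-detection region of step 6-1 can be built as a simple axis-aligned box around the stored target image coordinates. A minimal sketch follows; clamping to the image bounds is an added safeguard not spelled out in the text.

```python
def re_detection_region(px, py, w, h, img_w, img_h):
    """Step 6-1: re-detection region centered on the stored target image
    coordinates (px, py), with the target's width w and height h.
    Returns (left, top, right, bottom), clamped to the image bounds."""
    left   = max(0.0, px - w / 2.0)
    top    = max(0.0, py - h / 2.0)
    right  = min(float(img_w), px + w / 2.0)
    bottom = min(float(img_h), py + h / 2.0)
    return left, top, right, bottom
```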
In step 7 of the invention, if the re-detected target is correct, the specific steps are as follows:
step 7-1, using the method of step 2-1, compute the longitude and latitude of the target in the geodetic coordinate system from the unmanned aerial vehicle position information, camera information, and target position of the current frame, and update the previously stored target position information;
step 7-2, replace the target template with the cropped re-detected target image, to be used as the template for detection and correctness judgment in the next frame.
If the re-detected target is incorrect, the specific steps in step 8 of the invention are as follows:
step 8-1, in the next frame image, detect the current position of the target based on the DeepSORT tracking model, and crop the detected target image;
step 8-2, judge whether the detected target image is correct using the method described in step 4.
Beneficial effects:
Compared with the prior art, the unmanned aerial vehicle video target tracking method based on region re-search provided by the invention can continue to track the target accurately under complex conditions such as severe jitter and frame loss in the unmanned aerial vehicle video, rapid changes of the camera focal length, long-term occlusion of the target, and heavy background noise, without losing the target. Meanwhile, compared with existing tracking algorithms, the method maintains higher accuracy when processing multi-resolution, low-quality video with complex background noise interference, and does not mistakenly track the wrong target.
Drawings
The foregoing and/or other advantages of the invention will become more apparent from the following detailed description of the invention taken in conjunction with the accompanying drawings.
FIG. 1 is a schematic diagram of a workflow framework of the present invention.
FIG. 2 is a schematic diagram illustrating the transformation of tracking object coordinates in the present invention.
Detailed Description
An unmanned aerial vehicle video target tracking method based on region re-search, as shown in fig. 1, comprises the following steps:
step 1, select the target to be tracked with a bounding box in the current frame of the unmanned aerial vehicle video; the unmanned aerial vehicle video refers to video shot by an unmanned aerial vehicle;
The steps of selecting the target to be tracked with a bounding box in the current frame of the unmanned aerial vehicle video are as follows:
based on the current single-frame image of the unmanned aerial vehicle video, select the target to be tracked with a bounding box to obtain the target's position information in the current single-frame image, including the center-point coordinates (x, y), where x is the center-point abscissa and y is the center-point ordinate, the width w and height h of the target box, and the size of the current single-frame image, where W is the width and H is the height.
Step 2, calculate the position information of the target to be tracked, store it, generate the target image feature template, and initialize the tracking model; the position information of the target to be tracked is its longitude, latitude, and altitude in the geodetic coordinate system;
Step 2 specifically comprises the following steps:
step 2-1, from the unmanned aerial vehicle's longitude α, latitude λ, altitude h, pitch angle θ, yaw angle ψ, roll angle φ, pitch angle alpha relative to the target, rotation angle beta relative to the target, camera focal length f, and image resolution r, compute through coordinate-system conversion the longitude x_g, latitude y_g, and height z_g coordinates of the target to be tracked in the geodetic rectangular coordinate system;
The coordinate-system conversion, i.e. the method of computing the longitude x_g, latitude y_g, and height z_g coordinates of the target to be tracked in the geodetic rectangular coordinate system, is as follows:
step 2-1-1, based on the coordinates of the target to be tracked in the image shot by the unmanned aerial vehicle, the camera focal length, and the image resolution, complete the conversion from the image coordinate system to the camera coordinate system:
x_c = x×f − W/2
y_c = y×f − H/2
z_c = 0
where x_c, y_c, and z_c are the x-, y-, and z-axis coordinates in the camera coordinate system; f is the camera focal length, W is the image width, and H is the image height;
step 2-1-2, based on the coordinates of the target to be tracked in the camera coordinate system, complete the conversion from the camera coordinate system to the unmanned aerial vehicle base coordinate system, obtaining the coordinate values (x_b, y_b, z_b):
x_b = x_c×r + R·cos(alpha)·cos(beta)
y_b = y_c×r + R·cos(alpha)·sin(beta)
z_b = z_c + R·sin(alpha)
where x_b is the abscissa, y_b the ordinate, and z_b the height in the unmanned aerial vehicle base coordinate system; r is the image resolution, R is the camera-to-target range, alpha is the camera pitch angle, and beta is the camera rotation angle;
step 2-1-3, based on the coordinates of the target to be tracked in the unmanned aerial vehicle base coordinate system, obtain the conversion matrix from the unmanned aerial vehicle base coordinate system to the geodetic rectangular coordinate system by the homogeneous coordinate transformation method; the conversion comprises: unmanned aerial vehicle base coordinate system to carrier coordinate system, carrier coordinate system to geographic coordinate system, and geographic coordinate system to geodetic rectangular coordinate system. Finally, the coordinate values (x_g, y_g, z_g) of the target to be tracked in the geodetic rectangular coordinate system are computed. The transformation matrix is:
[overall transformation equation rendered as an image in the original]
where x_g is the abscissa, y_g the ordinate, and z_g the height of the target to be tracked in the geodetic rectangular coordinate system, and where the transformation matrix from the geodetic rectangular coordinate system to the geographic system is:
[matrix rendered as an image in the original]
where α_s, λ_s, and h_s are respectively the geodetic longitude, latitude, and altitude of the carrier, e is the first eccentricity of the Earth reference ellipsoid, and R_N is the radius of curvature;
the transformation matrix from the geographic system to the carrier system is:
[matrix rendered as an image in the original]
where ψ_as, θ_as, and φ_as are respectively the yaw angle, pitch angle, and roll angle of the carrier;
the transformation matrix from the carrier coordinate system to the unmanned aerial vehicle base coordinate system is:
[matrix rendered as an image in the original]
step 2-2, store the position information (w, h) of the target to be tracked in the image shot by the unmanned aerial vehicle, where w and h represent the target's pixel extent in the image, i.e. its width and height, together with its geodetic coordinates (x_g, y_g, z_g);
Step 2-3, cutting an original image, namely an image shot by the unmanned aerial vehicle, according to the specific coordinates of the target to be tracked, which are determined by frame selection in the step 1, and storing the cut image as a target image feature template;
step 2-4, select the DeepSORT tracking model (paper: https://arxiv.org/pdf/1703.07402.pdf; code: https://github.com/nwojke/deep_sort) as the tracking model; a pre-trained tracking model is adopted, and the parameters of the tracking model are not updated;
step 2-5, input the target image feature template into the tracking model to complete the initialization of the tracking model.
Step 3, detect the target to be tracked in the next frame of the unmanned aerial vehicle video based on the tracking model to obtain a detection result, and locate the target to obtain a positioning result; the specific method is as follows:
feed the next frame image of the unmanned aerial vehicle video into the initialized tracking model to obtain the detection result, i.e. the position coordinates of the target in the next frame image, and compute the target's new position (x_g_new, y_g_new, z_g_new) in the geodetic coordinate system; the detection result includes the detected target image.
Step 4, judge whether the detection result of step 3 is correct; the specific method is as follows:
step 4-1, compute the displacement distance of the target to be tracked in the geodetic coordinate system:
d = √((x_g_new − x_g)² + (y_g_new − y_g)² + (z_g_new − z_g)²)
step 4-2, compare the computed result with a preset threshold η; if it exceeds the threshold, the motion of the target to be tracked is considered abnormal; if it is smaller than the threshold, proceed to the next judgment;
step 4-3, crop the detected target image based on the detection result of the tracking model in step 3;
step 4-4, use the SIFT method (see https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf) to compare whether the target image in the next frame is consistent with the target image feature template stored in step 2; if the number of matched keypoints exceeds a threshold η, the images are judged consistent and the detection is judged successful; otherwise it is judged unsuccessful.
Step 5, if the detection result is correct, update the position information of the target to be tracked using the positioning result of step 3, and enter the next frame of the unmanned aerial vehicle video;
The specific steps of updating the position information of the target to be tracked are as follows:
using the same method as step 2-1, compute the longitude, latitude, and altitude of the target to be tracked in the geodetic coordinate system from the unmanned aerial vehicle position information, camera information, and target position of the current frame of the unmanned aerial vehicle video, and update the stored target position information.
Step 6, if the detection result is incorrect, re-detect the target within the selected range based on the target position information stored in step 2, and judge again whether the detection result is correct; the specific steps are as follows:
step 6-1, construct a re-detection region centered on the stored target position, i.e. the target image coordinates (px, py); the corner coordinates of the re-detection region are ((px−w/2, py−h/2), (px−w/2, py+h/2), (px+w/2, py+h/2), (px+w/2, py−h/2)), where w is the width of the target image and h is the height of the target image;
step 6-2, re-detect the target in the re-detection region using the method of step 3;
step 6-3, judge whether the re-detected target is correct using the method of step 4.
Step 7, if the re-detection result is correct, updating the position information of the target, updating the target image characteristic template, updating the tracking model and entering the next frame of the unmanned aerial vehicle video;
if the re-detection target is correct, the specific steps are as follows:
step 7-1, calculating longitude and latitude of the target in the geodetic coordinate system according to the unmanned aerial vehicle position information, the camera information and the target position of the current frame by adopting the method in the step 2-1, and updating the previously stored target position information;
and 7-2, replacing the target template with the cropped re-detected target image, which serves as the template for detection and correctness judgment in the next frame.
And 8, if the re-detection result is incorrect, entering the next frame of the unmanned aerial vehicle video, and repeating the step 6.
If the re-detection target is incorrect, the specific steps are as follows:
step 8-1, detecting the current position of a target based on a DeepSort tracking model in the next frame of image, and cutting the detected target image;
and 8-2, judging whether the detected target image is correct by adopting the method described in the step 4.
Examples:
as shown in fig. 1, an unmanned aerial vehicle video target tracking method based on area re-search includes the following steps:
1) In the current frame of the original unmanned aerial vehicle video, frame-select the target to be tracked, i.e. select the tracking target in a single frame;
2) Calculating longitude, latitude and altitude of a target in a geodetic coordinate system, storing target position information, generating a target image feature template, and initializing a tracking model;
3) In the next frame of image of the unmanned aerial vehicle video, tracking a target based on a deep learning model, and detecting a positioning target;
4) Judging whether the detected target is correct or not;
5) If the detected target is correct, updating the coordinate position information of the target, and entering the next frame;
6) If the target is incorrect, detecting the target again in the selected range based on the stored target position information, and judging whether the target is detected again correctly or not;
7) If the re-detection target is correct, updating target position information, updating a target image characteristic template, updating a tracking model and entering the next frame;
8) If the re-detection target is incorrect, entering the next frame, and repeating the step 6;
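The loop over steps 3)-8) can be sketched as below; all function names are hypothetical placeholders for the operations described above:

```python
def track_video(frames, detect, redetect, is_valid,
                update_position, update_template):
    """Per-frame control flow of steps 3)-8): detect with the tracking
    model, validate, and on failure fall back to region re-detection."""
    for frame in frames:
        result = detect(frame)            # step 3: tracking-model detection
        if is_valid(result):              # step 4: correctness judgment
            update_position(result)       # step 5: update stored position
            continue
        result = redetect(frame)          # step 6: search near stored position
        if is_valid(result):
            update_position(result)       # step 7: update position ...
            update_template(result)       # ... and refresh the template
        # step 8: otherwise keep the old state and retry on the next frame
```

The sketch deliberately keeps the stored position and template unchanged when both detection and re-detection fail, matching the "enter the next frame and repeat" behavior of step 8.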
in the step 1, the specific steps of selecting a target to be tracked in a frame of the unmanned aerial vehicle video current frame are as follows:
step 1-1, based on the current single-frame image in the unmanned aerial vehicle video, frame-select the target to be tracked and obtain the position information of the target in the current image, which comprises the center point coordinates (x, y), the length and width (w, h) of the target frame, and the image size (W, H). In this embodiment, the frame-selected white vehicle is the tracking target; the center of the target frame is (76, 119) and its length and width are (65, 37); taking the image center as the coordinate origin, the center point is corrected to the coordinates (-148, -49).
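The origin correction in step 1-1 is a simple shift; the frame dimensions W and H are not stated in the text, but a 448x336 image is consistent with the example numbers (an assumption):

```python
def to_center_origin(x, y, W, H):
    """Shift pixel coordinates (origin at the top-left corner) so that
    the image center becomes the coordinate origin, as in step 1-1."""
    return (x - W / 2, y - H / 2)

# With an assumed 448x336 frame, the box center (76, 119) maps to the
# corrected coordinates (-148, -49) quoted in the embodiment.
```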
Calculating longitude, latitude and altitude of a target in a geodetic coordinate system, storing target position information, generating a target image feature template, and initializing a tracking model, wherein the specific steps are as follows:
step 2-1, according to the longitude α, latitude λ, altitude h, pitch angle θ, yaw angle ψ and roll angle φ of the unmanned aerial vehicle, the pitch angle alpha relative to the target, the rotation angle beta relative to the target, the camera focal length f and the image resolution r, calculating the longitude x_g, latitude y_g and height z_g coordinates of the target in the geodetic rectangular coordinate system through coordinate system conversion;
step 2-2, cutting the original image according to the specific coordinates of the tracking target frame-selected in step 1, and storing the cut image as the target template;
step 2-3, selecting the DeepSort tracking model as the tracking model in the invention; a pre-trained model is adopted directly and the model parameters are not updated;
step 2-4, sending the target template into a tracking model, and initializing the tracking model
As shown in fig. 2, in step 2-1, the specific steps of calculating the longitude, latitude, and altitude coordinates of the target in the geodetic rectangular coordinate system by coordinate system transformation are as follows:
step 2-1-1, based on the coordinates of the target in the image and the focal length and resolution of the camera, the conversion from the image coordinate system to the camera coordinate system is realized:
x_c = x×f - W/2
y_c = y×f - H/2
z_c = 0
The coordinates in this embodiment are x_b = -148, y_b = -49, z_b = 0.
Step 2-1-2, based on the coordinates of the target in the camera coordinate system, converting from the camera coordinate system to the unmanned aerial vehicle base coordinate system to obtain the coordinate values (x_b, y_b, z_b) of the target in the base coordinate system:
x_b = x_c×r + Rcos(alpha)cos(beta)
y_b = y_c×r + Rcos(alpha)sin(beta)
z_b = z_c + Rsin(alpha)
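Steps 2-1-1 and 2-1-2 transcribe directly into code; the formulas below are exactly those printed above, with f, r, R, alpha and beta taken as inputs (angles in radians), and make no claim beyond what the text states:

```python
from math import cos, sin

def image_to_camera(x, y, f, W, H):
    # step 2-1-1: image coordinate system -> camera coordinate system
    return (x * f - W / 2, y * f - H / 2, 0.0)

def camera_to_base(xc, yc, zc, r, R, alpha, beta):
    # step 2-1-2: camera coordinate system -> UAV base coordinate system;
    # R, alpha, beta are the range, pitch and rotation relative to the target
    return (xc * r + R * cos(alpha) * cos(beta),
            yc * r + R * cos(alpha) * sin(beta),
            zc + R * sin(alpha))
```

With f = r = 1 and R = 0 (hypothetical values, chosen only to exercise the formulas), the embodiment's corrected center (76, 119) in a 448x336 frame passes through unchanged as (-148, -49, 0).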
Step 2-1-3, based on the coordinates of the target in the unmanned aerial vehicle base coordinate system, obtaining the transformation matrix from the base coordinate system to the geodetic rectangular coordinate system by the homogeneous coordinate transformation method. The chain comprises the base coordinate system to the carrier coordinate system (the carrier coordinate system has its origin at the carrier's center of mass, the X axis along the carrier's longitudinal axis toward the nose, the Y axis toward the right wing, and the Z axis pointing downward as determined by the right-hand rule), the carrier coordinate system to the geographic coordinate system, and the geographic coordinate system to the geodetic rectangular coordinate system. Finally, the coordinate values (x_g, y_g, z_g) of the target in the geodetic rectangular coordinate system are obtained; the transformation matrix is:
[transformation matrix image not reproduced in the source]
the transformation matrix from the rectangular coordinate system of the earth to the geographic system is as follows:
[matrix image not reproduced in the source]
wherein the geodetic longitude, latitude and altitude of the carrier α_s, λ_s, h_s are 88°11′47″, 42°14′27″ and 4324 m respectively, the first eccentricity e of the Earth reference ellipsoid is 0.00669, and the radius of curvature R_N is 6378137.0 m.
[equation image not reproduced in the source]
The transformation matrix from the geographic system to the carrier system is:
[matrix image not reproduced in the source]
Because the relevant parameters are missing from the unmanned aerial vehicle video, the yaw angle, pitch angle and roll angle of the carrier, ψ_as, θ_as and γ_as, are each set to 0°.
The transformation matrix from the carrier system to the base coordinate system is:
[matrix image not reproduced in the source]
Based on the coordinates of the target in the base coordinate system, the coordinates of the target in the geodetic coordinate system are obtained as (-880.64, -1667.1, 5241.03).
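The final link of such a chain, geodetic coordinates to the Earth rectangular (ECEF) frame, follows the textbook formula; the sketch below uses the e² and R_N values quoted in the embodiment as defaults and is not a reconstruction of the exact matrices omitted from the source images:

```python
from math import cos, sin, sqrt, radians

def geodetic_to_ecef(lon_deg, lat_deg, h, a=6378137.0, e2=0.00669):
    """Textbook geodetic -> Earth rectangular (ECEF) conversion; N is the
    prime-vertical radius of curvature (R_N in the text)."""
    lon, lat = radians(lon_deg), radians(lat_deg)
    N = a / sqrt(1.0 - e2 * sin(lat) ** 2)
    return ((N + h) * cos(lat) * cos(lon),
            (N + h) * cos(lat) * sin(lon),
            (N * (1.0 - e2) + h) * sin(lat))
```

At the equator and prime meridian with h = 0, the result is (a, 0, 0), i.e. one semi-major axis from the Earth's center.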
In the next frame of image of the unmanned aerial vehicle video in the step 3, a tracking model is adopted, and the specific steps of detecting the target are as follows:
step 3-1, sending the next frame image of the unmanned aerial vehicle video into the initialized tracking model to obtain the model prediction result, namely the position coordinates of the target in the next frame image. The frame is fed into the pretrained DeepSort model, which first detects targets with YOLO to generate a candidate target list and then screens out the tracked target using Kalman filtering and an association metric.
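DeepSort's motion screening is built on a constant-velocity Kalman filter; the predict step of such a filter is sketched below as an illustration of the idea, not as the actual DeepSort implementation:

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Kalman predict step: propagate state x and covariance P through
    the constant-velocity transition matrix F with process noise Q."""
    return F @ x, F @ P @ F.T + Q

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
x = np.array([76.0, 119.0, 2.0, -1.0])   # [px, py, vx, vy] in pixels
P, Q = np.eye(4), 0.01 * np.eye(4)
x_pred, P_pred = kalman_predict(x, P, F, Q)  # predicted center (78.0, 118.0)
```

Candidates whose detected position lies far from the predicted center are then down-weighted by the association metric.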
In the step 4, the specific steps for judging whether the detection target is correct are as follows:
step 4-1, calculating the displacement of the target in the geodetic coordinate system as follows:
d = sqrt((x_g_new - x_g)^2 + (y_g_new - y_g)^2 + (z_g_new - z_g)^2)
step 4-2, comparing the calculation result with a preset threshold θ; if the result exceeds θ, the target motion is considered abnormal, and if it is smaller than θ, the next judgment is performed;
step 4-3, cutting the detected target image based on the prediction result of the tracking model;
and 4-4, comparing, by the SIFT method, whether the target image in the next frame is consistent with the stored target template; if the number of matched key points exceeds a threshold η, the detection is judged successful, otherwise unsuccessful;
In this experiment the threshold θ is set to 10. Because the focal length of the video changes rapidly, the target displacement computed from the detection results of two consecutive frames, (-703.66, -1722.47, 5190.59) and (-828.14, -1679, 5253.08), is 145.9; since this exceeds the threshold, region re-detection is needed. The threshold η is set to 5, i.e. if the number of matching points exceeds 5 the detected target is considered correct, otherwise incorrect.
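The two-stage check of step 4 — displacement gate θ, then keypoint-count gate η — reduces to a few lines; `detection_ok` is a hypothetical name, and the sample points reproduce the 145.9 displacement from the experiment:

```python
from math import dist  # Euclidean distance, Python 3.8+

def detection_ok(prev_pos, new_pos, n_matches, theta=10.0, eta=5):
    """Step 4: reject when the geodetic displacement exceeds theta,
    then require more than eta matched SIFT keypoints."""
    if dist(prev_pos, new_pos) > theta:
        return False  # abnormal motion -> fall back to region re-detection
    return n_matches > eta

d = dist((-703.66, -1722.47, 5190.59), (-828.14, -1679.0, 5253.08))
# d is about 145.9, far above theta = 10, so re-detection is triggered
```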
In step 5, if the detected target is correct, updating the longitude and latitude of the target, and entering the next frame specifically comprises the following steps:
step 5-1, calculating the longitude and latitude of the target in the geodetic coordinate system by the same method as step 2-1, according to the unmanned aerial vehicle position information, camera information and target position of the current frame, and updating the previously stored target longitude and latitude;
in step 6, if the target is incorrect, detecting the target again in the selected range based on the current longitude and latitude, and judging whether the target is correct again or not, wherein the specific steps are as follows:
step 6-1, constructing a re-detection area centered on the stored target image coordinates (x, y); the area corner coordinates are ((x-2×w, y-2×h), (x-2×w, y+2×h), (x+2×w, y+2×h), (x+2×w, y-2×h)). Centered on (-359, -165), the re-detection area corners are ((-447, -233), (-447, -97), (-271, -97), (-271, -233)).
Step 6-2, detecting the target again in the re-detection area by adopting a target matching method, and storing the re-detected target;
step 6-3, judging whether the re-detection target is correct or not by adopting the method in the step 4;
When the camera focal length of the unmanned aerial vehicle video changes rapidly, frames are severely dropped, or the video shakes strongly, the DeepSort algorithm has difficulty tracking the target accurately and the target is lost. A re-detection area is selected from the stored target position information, the target is detected again within that area by an image matching method, and the lost target is recovered, achieving stable target tracking.
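The image-matching re-detection inside the window can be illustrated with a brute-force sum-of-squared-differences search over a 2-D intensity array; this is a minimal stand-in for whatever matcher the method actually uses, with hypothetical names:

```python
def match_template(window, tmpl):
    """Return the top-left offset inside `window` where `tmpl` has the
    smallest sum of squared differences (step 6-2 as a minimal sketch)."""
    wh, ww = len(window), len(window[0])
    th, tw = len(tmpl), len(tmpl[0])
    best_ssd, best_pos = None, None
    for i in range(wh - th + 1):
        for j in range(ww - tw + 1):
            ssd = sum((window[i + di][j + dj] - tmpl[di][dj]) ** 2
                      for di in range(th) for dj in range(tw))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (i, j)
    return best_pos
```

A real implementation would use a normalized correlation or feature-based matcher for robustness to illumination change, but the search-inside-a-window structure is the same.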
In step 7, if the detected target is correct again, updating the longitude and latitude of the target, updating the characteristic template of the target image, updating the tracking model, and entering the next frame, wherein the specific steps are as follows:
step 7-1, calculating the longitude and latitude of the target in the geodetic coordinate system by the same method as step 2-1, according to the unmanned aerial vehicle position information, camera information and target position of the current frame, and updating the previously stored target longitude and latitude;
step 7-2, replacing the target template with the cropped re-detected target image, which serves as the template for detection and correctness judgment in the next frame;
In step 8, if the re-detected target is again incorrect, the next frame is entered and step 6 is repeated; the specific steps are as follows:
and 8-1, detecting the current position of the target based on the DeepSort tracking model in the next frame of image. Cutting the image according to the model prediction result to generate a target image;
step 8-2, judging whether the target image is correct or not by adopting the method described in the step 4;
In a specific implementation, the application provides a computer storage medium and a corresponding data processing unit. The computer storage medium can store a computer program which, when executed by the data processing unit, carries out the unmanned aerial vehicle video target tracking method based on region re-search described above, including some or all of the steps of each embodiment. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
It will be apparent to those skilled in the art that the technical solutions in the embodiments of the present invention may be implemented by means of a computer program and a corresponding general hardware platform. Based on this understanding, the technical solutions may be embodied essentially in the form of a computer program, i.e. a software product, which may be stored in a storage medium and includes several instructions that cause a device containing a data processing unit (a personal computer, a server, a single-chip microcontroller (MCU), a network device, etc.) to perform the methods described in the embodiments or parts thereof.
The invention provides an unmanned aerial vehicle video target tracking method based on region re-search, and there are many specific ways to realize the technical scheme; the above is only a preferred embodiment of the invention. It should be pointed out that one of ordinary skill in the art can make several improvements and modifications without departing from the principle of the invention, and these improvements and modifications are also considered within the protection scope of the invention. Components not explicitly described in this embodiment can be implemented with the prior art.

Claims (10)

1. The unmanned aerial vehicle video target tracking method based on the area re-search is characterized by comprising the following steps of:
step 1, selecting a target to be tracked in a frame in a current frame of an unmanned aerial vehicle video;
step 2, calculating the position information of the target to be tracked, storing the position information of the target, generating a target image characteristic template, and initializing a tracking model; the position information of the target to be tracked is longitude, latitude and altitude of the target to be tracked in a geodetic coordinate system;
step 3, detecting a target to be tracked in the next frame of the unmanned aerial vehicle video based on the tracking model to obtain a detection result, and positioning the target to be tracked to obtain a positioning result;
step 4, judging whether the detection result in the step 3 is correct or not;
step 5, if the detection result is correct, updating the position information of the target to be tracked by adopting the positioning result in the step 3, and entering the next frame of the unmanned aerial vehicle video;
step 6, if the detection result is incorrect, detecting the target to be tracked again in the selected range based on the stored position information of the target in the step 2, and judging whether the detection result is correct again;
step 7, if the re-detection result is correct, updating the position information of the target, updating the target image characteristic template, updating the tracking model and entering the next frame of the unmanned aerial vehicle video;
and 8, if the re-detection result is incorrect, entering the next frame of the unmanned aerial vehicle video, and repeating the step 6.
2. The method for tracking the target of the unmanned aerial vehicle based on the area re-search according to claim 1, wherein the step of framing the target to be tracked in the current frame of the unmanned aerial vehicle video in step 1 is as follows:
based on the current single-frame image of the unmanned aerial vehicle video, frame-selecting the target to be tracked to obtain the position information of the target in the current single-frame image, wherein the position information comprises the center point coordinates (x, y), x being the abscissa and y the ordinate of the center point, the length w and width h of the target frame, and the size (W, H) of the current single-frame image, W being the width and H the height.
3. The unmanned aerial vehicle video target tracking method based on the area re-search according to claim 2, wherein step 2 comprises:
step 2-1, according to the longitude α, latitude λ, altitude h, pitch angle θ, yaw angle ψ and roll angle φ of the unmanned aerial vehicle, the pitch angle alpha relative to the target, the rotation angle beta relative to the target, the camera focal length f and the image resolution r, calculating the longitude x_g, latitude y_g and height z_g coordinates of the target to be tracked in the geodetic rectangular coordinate system through coordinate system conversion;
step 2-2, storing the position information of the target to be tracked: the width w and height h, in pixels, of the target in the image shot by the unmanned aerial vehicle, and the longitude, latitude and height (x_g, y_g, z_g) in the geodetic rectangular coordinate system;
Step 2-3, cutting an original image, namely an image shot by the unmanned aerial vehicle, according to the specific coordinates of the target to be tracked, which are determined by frame selection in the step 1, and storing the cut image as a target image feature template;
step 2-4, selecting the DeepSort tracking model as the tracking model, adopting a pre-trained tracking model, wherein the parameters of the tracking model are not updated;
and 2-5, inputting the target image characteristic template into the tracking model to finish the initialization of the tracking model.
4. The unmanned aerial vehicle video target tracking method based on area re-search according to claim 3, wherein the method of calculating, by the coordinate system conversion in step 2-1, the longitude x_g, latitude y_g and height z_g coordinates of the target to be tracked in the geodetic rectangular coordinate system comprises the following steps:
step 2-1-1, based on the coordinates of the target to be tracked in the image shot by the unmanned aerial vehicle, the focal length of the camera and the resolution of the image, completing conversion from an image coordinate system to a camera coordinate system:
x_c = x×f - W/2
y_c = y×f - H/2
z_c = 0
wherein x_c, y_c and z_c represent the x-axis, y-axis and z-axis coordinates of the target in the camera coordinate system; f is the focal length of the camera, W is the width of the image, and H is the height of the image;
step 2-1-2, completing the conversion from the camera coordinate system to the unmanned aerial vehicle base coordinate system based on the coordinates of the target to be tracked in the camera coordinate system, obtaining the coordinate values (x_b, y_b, z_b):
x_b = x_c×r + Rcos(alpha)cos(beta)
y_b = y_c×r + Rcos(alpha)sin(beta)
z_b = z_c + Rsin(alpha)
wherein x_b, y_b and z_b represent the abscissa, ordinate and height of the target in the unmanned aerial vehicle base coordinate system; r represents the image resolution, R the range of the camera relative to the target, alpha the camera pitch angle relative to the target, and beta the camera rotation angle relative to the target;
2-1-3, obtaining the transformation matrix from the unmanned aerial vehicle base coordinate system to the geodetic rectangular coordinate system by the homogeneous coordinate transformation method, based on the coordinates of the target to be tracked in the base coordinate system; the chain comprises the base coordinate system to the carrier coordinate system, the carrier coordinate system to the geographic coordinate system, and the geographic coordinate system to the geodetic rectangular coordinate system; finally, the coordinate values (x_g, y_g, z_g) of the target to be tracked in the geodetic rectangular coordinate system are calculated; the transformation matrix is:
[transformation matrix image not reproduced in the source]
wherein x_g, y_g and z_g are respectively the abscissa, ordinate and height values of the target to be tracked in the geodetic rectangular coordinate system, wherein
[matrix image not reproduced in the source]
The transformation matrix from the rectangular coordinate system of the earth to the geographic system is as follows:
[matrix image not reproduced in the source]
wherein α_s, λ_s and h_s are respectively the geodetic longitude, latitude and altitude of the carrier, e is the first eccentricity of the Earth reference ellipsoid, and R_N is the radius of curvature;
[equation image not reproduced in the source]
the transformation matrix from geographical system to carrier system:
[matrix image not reproduced in the source]
wherein ψ_as, θ_as and γ_as are respectively the yaw angle, pitch angle and roll angle of the carrier;
[matrix image not reproduced in the source]
the method comprises the steps of transforming a matrix from a carrier coordinate system to a unmanned aerial vehicle base coordinate system: />
Figure FDA0003837726760000038
5. The unmanned aerial vehicle video target tracking method based on the area re-search according to claim 4, wherein the method for detecting the target to be tracked in the step 3 comprises:
sending the next frame image of the unmanned aerial vehicle video into the initialized tracking model to obtain the detection result, namely the position coordinates of the target in the next frame image, and calculating the new position (x_g_new, y_g_new, z_g_new) of the target in the geodetic rectangular coordinate system; the detection result includes the detected target image.
6. The method for tracking a video object of an unmanned aerial vehicle based on area re-search according to claim 5, wherein the method for judging whether the detection result is correct in step 4 comprises:
step 4-1, calculating the displacement distance of the target to be tracked in the geodetic coordinate system, as follows:
d = sqrt((x_g_new - x_g)^2 + (y_g_new - y_g)^2 + (z_g_new - z_g)^2)
step 4-2, comparing the calculation result with a preset threshold θ; if the result exceeds θ, the movement of the target to be tracked is considered abnormal, and if it is smaller than θ, the next judgment is performed;
step 4-3, cutting the detected target image based on the detection result of the tracking model in step 3;
step 4-4, comparing, by the SIFT method, whether the target image in the next frame is consistent with the target image feature template stored in step 2; if the number of matched key points exceeds a threshold η, they are judged consistent and the detection is judged successful, otherwise unsuccessful.
7. The unmanned aerial vehicle video target tracking method based on area re-search according to claim 6, wherein the specific step of updating the position information of the target to be tracked in step 5 is as follows:
Adopting the same method as step 2-1, calculate the longitude, latitude and altitude of the target to be tracked in the geodetic coordinate system from the unmanned aerial vehicle position information, camera information and target position of the current frame of the unmanned aerial vehicle video, and update the stored position information of the target.
8. The method for tracking the video target of the unmanned aerial vehicle based on the area re-search according to claim 7, wherein the specific steps of re-detecting the target to be tracked within the selected range and judging whether the re-detection result is correct in step 6 are as follows:
step 6-1, constructing a re-detection area centered on the stored position information of the target, namely the target image coordinates (px, py); the re-detection area corner coordinates are ((px-w/2, py-h/2), (px-w/2, py+h/2), (px+w/2, py+h/2), (px+w/2, py-h/2)), wherein w represents the width of the target image and h represents the height of the target image;
step 6-2, detecting the target again in the re-detection area by adopting the method in the step 3;
and 6-3, judging whether the re-detection target is correct or not by adopting the method in the step 4.
9. The method for tracking the video target of the unmanned aerial vehicle based on the area re-search according to claim 8, wherein if the re-detection target is correct in the step 7, the specific steps are as follows:
step 7-1, calculating longitude and latitude of the target in the geodetic coordinate system according to the unmanned aerial vehicle position information, the camera information and the target position of the current frame by adopting the method in the step 2-1, and updating the previously stored target position information;
and 7-2, replacing the target template with the cropped re-detected target image, which serves as the template for detection and correctness judgment in the next frame.
10. The method for tracking the video target of the unmanned aerial vehicle based on the area re-search according to claim 9, wherein if the re-detection target is incorrect in the step 8, the specific steps are as follows:
step 8-1, detecting the current position of a target based on a DeepSort tracking model in the next frame of image, and cutting the detected target image;
and 8-2, judging whether the detected target image is correct by adopting the method described in the step 4.
CN202211099235.1A 2022-09-08 2022-09-08 Unmanned aerial vehicle video target tracking method based on region re-search Pending CN116168306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211099235.1A CN116168306A (en) 2022-09-08 2022-09-08 Unmanned aerial vehicle video target tracking method based on region re-search

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211099235.1A CN116168306A (en) 2022-09-08 2022-09-08 Unmanned aerial vehicle video target tracking method based on region re-search

Publications (1)

Publication Number Publication Date
CN116168306A true CN116168306A (en) 2023-05-26

Family

ID=86415166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211099235.1A Pending CN116168306A (en) 2022-09-08 2022-09-08 Unmanned aerial vehicle video target tracking method based on region re-search

Country Status (1)

Country Link
CN (1) CN116168306A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116866719A (en) * 2023-07-12 2023-10-10 山东恒辉软件有限公司 Intelligent analysis processing method for high-definition video content based on image recognition
CN116866719B (en) * 2023-07-12 2024-02-02 山东恒辉软件有限公司 Intelligent analysis processing method for high-definition video content based on image recognition

Similar Documents

Publication Publication Date Title
CN111932588B (en) Tracking method of airborne unmanned aerial vehicle multi-target tracking system based on deep learning
CN110197232B (en) Image matching method based on edge direction and gradient features
CN110033411B (en) High-efficiency road construction site panoramic image splicing method based on unmanned aerial vehicle
CN110113560B (en) Intelligent video linkage method and server
CN113313659B (en) High-precision image stitching method under multi-machine cooperative constraint
CN110047108A (en) UAV position and orientation determines method, apparatus, computer equipment and storage medium
CN111967337A (en) Pipeline line change detection method based on deep learning and unmanned aerial vehicle images
CN114399675A (en) Target detection method and device based on machine vision and laser radar fusion
CN116168306A (en) Unmanned aerial vehicle video target tracking method based on region re-search
CN111273701B (en) Cloud deck vision control system and control method
CN115861860A (en) Target tracking and positioning method and system for unmanned aerial vehicle
CN111898428A (en) Unmanned aerial vehicle feature point matching method based on ORB
CN116977902B (en) Target tracking method and system for on-board photoelectric stabilized platform of coastal defense
CN116866719B (en) Intelligent analysis processing method for high-definition video content based on image recognition
CN109459046B (en) Positioning and navigation method of suspension type underwater autonomous vehicle
CN116665097A (en) Self-adaptive target tracking method combining context awareness
CN113781524B (en) Target tracking system and method based on two-dimensional label
CN116188545A (en) Online registering method for infrared and visible light sensors based on IMU and odometer
CN115511853A (en) Remote sensing ship detection and identification method based on direction variable characteristics
JPH07146121A (en) Recognition method and device for three dimensional position and attitude based on vision
CN115035326A (en) Method for accurately matching radar image and optical image
CN114821113A (en) Monocular vision inertia SLAM method and system based on adaptive robust kernel
Ogawa et al. Reducing false positives in object tracking with Siamese network
CN112924708B (en) Speed estimation method suitable for underwater near-bottom operation vehicle
CN113111890B (en) Remote water surface infrared target rapid tracking method based on water antenna

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination