CN112200227A - Airplane detection method based on airplane 3d model - Google Patents

Airplane detection method based on airplane 3d model

Info

Publication number
CN112200227A
CN112200227A
Authority
CN
China
Prior art keywords
airplane
picture
plane
pictures
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011040774.9A
Other languages
Chinese (zh)
Inventor
李爱林
黄涛
文戈
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huafu Information Technology Co ltd
Original Assignee
Shenzhen Huafu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huafu Information Technology Co ltd filed Critical Shenzhen Huafu Information Technology Co ltd
Priority to CN202011040774.9A priority Critical patent/CN112200227A/en
Publication of CN112200227A publication Critical patent/CN112200227A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an airplane detection method based on an airplane 3d model, relating to the technical field of image processing. The method comprises the following steps: step one, opening the 3d model of the airplane to obtain screenshots of the airplane body, with airplane masks, at different angles; step two, generating vivid airplane pictures, with the background pictures uniformly set to a fixed size. Because the airplane angles in the generated data are extremely varied, far beyond the angles a camera can collect, the deep learning model learns more accurate airplane features, so the false alarm rate is lower. The generated airplane pictures are vivid and their backgrounds extremely varied, so the deep learning model can more accurately avoid interference from background features, and the missed detection rate is lower. The all-round diversity of the training data ensures that the deep learning model achieves good results in whatever scene it is applied to, with better robustness and generalization.

Description

Airplane detection method based on airplane 3d model
Technical Field
The invention relates to the technical field of image processing, in particular to an airplane detection method based on an airplane 3d model.
Background
Thanks to the development of deep learning, artificial intelligence products have sprung up in many fields in recent years. In the field of airport flight management, as civil aviation authorities encourage airports above the ten-million-passenger level to deploy A-CDM systems, all parties are required to enter into the system the time nodes at which airplanes arrive at or leave their stands. To eliminate the pain points of early manual entry, which was inefficient, error-prone and untimely, airports have begun to record airplane arrivals and departures automatically using deep-learning-based airplane detection methods.
The current technical status is as follows:
Deep learning is a technology that depends heavily on massive data, and the airplane detection data openly available on the Internet is scarce and can only be used for academic demonstrations. If it is applied directly to real-time detection of airplanes in airport streaming media, airplanes can barely be detected, because airplane angles, environment background, weather, camera imaging quality and so on vary greatly in reality. Moreover, an airplane's long wings make the background occupy a large proportion of its bounding box, so the background strongly influences airplane detection, and the features extracted from a small amount of data cannot cover real scenes.
Secondly, the airplane detection technology currently used in some airports mainly collects several months of monitoring video from all cameras in the airport, extracts frames to obtain millions of pictures, manually annotates the bounding-box information of every airplane in each picture, and feeds this into a deep learning model for training to obtain a production model. Its first disadvantage is that the economic cost of this process is undoubtedly enormous, the industry annotation speed for such boxes being around 300 pictures per person per day (8 hours). Its second disadvantage is that the airplane detection model obtained this way can only be applied to the airport's existing cameras; once a camera changes substantially, a new camera is added, or the model is applied to another airport, serious missed detection still occurs. Even if an airport has thousands of cameras, each camera has a fixed scene, a fixed airplane trajectory and a fixed airplane angle, so a deep learning model trained on such airport data still lacks generalization and robustness.
The above information disclosed in this background section is only intended to enhance understanding of the background of the disclosure, and may therefore contain information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
In order to solve the problem of insufficient diversity of airplane angles and scenes in collected data, the invention provides an airplane detection method based on an airplane 3d model, which uses the airplane 3d model to generate an unlimited number of vivid airplane pictures under various weather conditions, while obtaining the bounding-box information of the airplane in each picture.
In order to achieve the purpose, the invention provides the following technical scheme: an airplane detection method based on an airplane 3d model comprises the following steps:
the method comprises the following steps: opening the 3d model of the airplane to obtain screenshots of the airplane body with airplane masks at different angles;
step two: generating a vivid airplane picture; uniformly setting background pictures into a fixed size, randomly selecting a point in a picture range as the upper left corner of a pasted plane, selecting a plane screenshot with a mask, obtaining an external rectangle of the plane according to the mask, picking up the plane according to the external rectangle, setting a size for the plane, and finally pasting the plane onto the selected point of the background picture by using a paste function of a PIL library according to the mask of the plane screenshot to obtain a vivid plane picture and generate frame information of the plane in the picture;
step three: the plane picture and the corresponding frame information thereof generated in the step two and the plane and frame information of the real application scene form a training set for deep learning, plane data are generated in a programmed mode, and data which can be used for plane detection training can be obtained through the data of the real application scene;
step four: simulating airplane pictures under various weather conditions, taking a picture in a training set, taking a picture in the weather of night, rain, snow and the like, storing the picture as the same resolution as the picture in the training set, adding the two pictures according to the weight of 1 to 1, namely multiplying the pixels on the same positions of the two pictures by 0.5 respectively, then adding the pixels to obtain a simulated picture, generating the simulated picture according to the process, and adding the simulated picture into the training set corresponding to the information of the airplane frame;
step five: training a deep learning model: the generated airplane pictures, the airplane pictures under various weather conditions, a small number of real airplane pictures and airplane frames corresponding to the pictures are input into a neural network for reasonable training, and then the deep learning model can be obtained.
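Step two of the list above can be sketched with the PIL library the method names. The function below is a minimal illustration, assuming an RGBA screenshot whose alpha layer is the airplane mask; the helper name, the 512 × 512 background size and the keys of the returned frame-info dictionary are illustrative choices, not prescribed by the patent:

```python
from PIL import Image

def paste_airplane(background, screenshot, top_left, airplane_size):
    """Paste an RGBA airplane screenshot onto a background using its mask."""
    bg = background.convert("RGB").resize((512, 512))  # fixed background size
    plane = screenshot.convert("RGBA")

    # Circumscribed rectangle of the mask: bounding box of the alpha layer.
    bbox = plane.getchannel("A").getbbox()
    plane = plane.crop(bbox).resize(airplane_size)     # crop out and resize the airplane

    # PIL's paste accepts the RGBA image itself as the mask, so only pixels
    # with non-zero alpha (the airplane body) land on the background.
    bg.paste(plane, top_left, mask=plane)

    x, y = top_left
    w, h = plane.size
    # Frame info: upper-left corner coordinates plus width and height.
    return bg, {"x": x, "y": y, "w": w, "h": h}
```

Random selection of the paste point, the screenshot and the airplane size, as described in step two, would wrap around this function.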
The specific steps of the first step are as follows:
(1) opening a 3d model file of the airplane by using three-dimensional image viewing software;
(2) directly rendering to obtain screenshots of the airplane body with airplane masks at different angles; the screenshots are PNG pictures in RGBA format, and the airplane mask is the A (alpha) layer of the RGBA format, which represents the opacity of each pixel. The airplane mask is essentially a binary picture in which 0 means transparent and 1 means opaque: in the airplane screenshot, the A layer is 0 at background pixels and 1 at airplane pixels.
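As a minimal sketch of this representation (the function name is an illustrative assumption), the binary mask can be read straight out of the screenshot's A layer:

```python
import numpy as np
from PIL import Image

def airplane_mask(screenshot):
    """Return the binary airplane mask from an RGBA screenshot:
    0 where the A (alpha) layer marks transparent background,
    1 where it marks opaque airplane body."""
    rgba = np.array(screenshot.convert("RGBA"))  # H x W x 4 array
    return (rgba[:, :, 3] > 0).astype(np.uint8)  # threshold the alpha band
```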
Alternatively, the specific steps of the first step are as follows:
(1) opening the 3d model file through a website for viewing 3d models online;
(2) rotating the airplane while recording the screen to obtain a video file, then extracting frames to obtain airplane screenshots at different angles;
(3) obtaining the mask of each airplane screenshot using the opencv library.
In the second step, the background picture is one or more of a high-definition wallpaper picture, a high-resolution picture and a real application scene picture.
In the second step, the size of the background picture is 512 × 512 pixels.
In the second step, the frame information includes: the abscissa and ordinate of the upper left corner and the width and height of the airplane.
In the third step, about 100 real application scene pictures are used per camera.
The technical effects and advantages of the invention are:
1. because the airplane angles in the generated data are extremely varied, far beyond the angles a camera can collect, the deep learning model learns more accurate airplane features, so the false alarm rate is lower;
2. the generated airplane pictures are vivid and their backgrounds extremely diverse, so the deep learning model can more accurately avoid interference from background features, and the missed detection rate is lower;
3. the all-round diversity of the training data ensures that the deep learning model achieves good results in whatever scene it is applied to, with better robustness and generalization;
4. the deep learning model of the invention achieves higher accuracy on the airplane detection task at lower economic cost.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Example 1:
the invention provides an airplane detection method based on an airplane 3d model, which comprises the following steps:
Step one: open an airplane 3d model file with three-dimensional image viewing software and render directly to obtain screenshots of the airplane body with airplane masks at different angles. The screenshots are PNG pictures in RGBA format; the airplane mask is the A (alpha) layer of the RGBA format, which represents the opacity of each pixel and is essentially a binary picture, 0 meaning transparent and 1 meaning opaque: in the airplane screenshot, the A layer is 0 at background pixels and 1 at airplane pixels. With the mask, only the airplane body within the airplane's irregular edge in the screenshot is pasted into the background picture, while the background outside the airplane's edge is not, which yields a vivid airplane picture;
Step two: generate a vivid airplane picture. High-definition wallpaper pictures, high-resolution pictures and real application scene pictures are used as background pictures to diversify the backgrounds. The background pictures are uniformly set to 512 × 512 pixels; then a point is randomly selected within the picture as the upper-left corner for pasting the airplane, an airplane screenshot with a mask is selected, the airplane's circumscribed rectangle is obtained from the mask, the airplane is cropped out by this rectangle and given a size, and finally the paste function of the PIL library pastes the airplane onto the selected point of the background picture according to the mask of the airplane screenshot. While the vivid airplane picture is obtained, the airplane's bounding-box information in the picture is generated, comprising the abscissa and ordinate of the upper-left corner and the width and height of the airplane;
Step three: the airplane pictures and corresponding bounding-box information generated in step two, together with the airplane and bounding-box information of real application scenes, form a deep learning training set. Airplane data are generated programmatically, and real application scene data, about 100 pictures per camera, are added to obtain data usable for airplane detection training. A model trained only on the generated data can already be used in a real production environment but still misses some detections; adding about 100 pictures of the real application scene of each camera reaches a level with almost no missed or false detections;
Step four: simulate airplane pictures under various weather conditions. Take a picture from the training set and a picture taken in weather such as night, rain or snow, save the latter at the same resolution as the training picture, and add the two pictures with 1:1 weights, i.e. multiply the pixels at the same positions of the two pictures by 0.5 each and then add them to obtain a simulated picture. Generate simulated pictures by this process and add them, with the corresponding airplane bounding-box information, to the training set. Because airplanes in real scenes are darker at night and in rain, snow and similar weather, and the training set contains little such data, the missed detection rate in these conditions is higher than on sunny days, so this data needs to be added;
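The 1:1 pixel averaging of step four maps directly onto PIL's blend function; a minimal sketch, with the wrapper name as an illustrative assumption:

```python
from PIL import Image

def simulate_weather(train_pic, weather_pic):
    """Average a training picture with a night/rain/snow picture:
    pixels at the same positions are each multiplied by 0.5 and
    added, i.e. a 1:1 blend at the training picture's resolution."""
    weather = weather_pic.convert("RGB").resize(train_pic.size)  # same resolution
    return Image.blend(train_pic.convert("RGB"), weather, alpha=0.5)
```

The bounding-box information of the training picture is unchanged by the blend, so the simulated picture reuses it as-is.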
Step five: training a deep learning model: the generated airplane pictures, the airplane pictures under various weather conditions, a small number of real airplane pictures and airplane frames corresponding to the pictures are input into a neural network for reasonable training, and then the deep learning model with high accuracy, low false alarm rate and low omission factor can be obtained.
Example 2:
the invention provides an airplane detection method based on an airplane 3d model, which comprises the following steps:
Step one: open the 3d model file through a website for viewing 3d models online, rotate the airplane while recording the screen to obtain a video file, then extract frames to obtain airplane screenshots at different angles, and obtain the mask of each airplane screenshot using the opencv library;
Step two: generate a vivid airplane picture. High-definition wallpaper pictures, high-resolution pictures and real application scene pictures are used as background pictures; the background pictures are uniformly set to 512 × 512 pixels, a point is randomly selected within the picture as the upper-left corner for pasting the airplane, an airplane screenshot with a mask is selected, the airplane's circumscribed rectangle is obtained from the mask, the airplane is cropped out by this rectangle and given a size, and finally the paste function of the PIL library pastes the airplane onto the selected point of the background picture according to the mask of the airplane screenshot. While the vivid airplane picture is obtained, the airplane's bounding-box information in the picture is generated, comprising the abscissa and ordinate of the upper-left corner and the width and height of the airplane;
Step three: the airplane pictures and corresponding bounding-box information generated in step two, together with the airplane and bounding-box information of real application scenes (about 100 pictures per camera), form a deep learning training set; airplane data are generated programmatically and, combined with the real application scene data, yield data usable for airplane detection training;
Step four: simulate airplane pictures under various weather conditions: take a picture from the training set and a picture taken in weather such as night, rain or snow, save the latter at the same resolution as the training picture, and add the two pictures with 1:1 weights, i.e. multiply the pixels at the same positions of the two pictures by 0.5 each and then add them to obtain a simulated picture; generate simulated pictures by this process and add them, with the corresponding airplane bounding-box information, to the training set;
Step five: train a deep learning model: the generated airplane pictures, the airplane pictures under various weather conditions, a small number of real airplane pictures, and the airplane boxes corresponding to these pictures are fed into a neural network for training to obtain the deep learning model.
Finally, it should be noted that: first, in the description of the present application, unless otherwise specified and limited, the terms "mounted", "connected" and "connecting" should be understood broadly and may denote a mechanical connection, an electrical connection, or communication between two elements, whether direct or indirect; "upper", "lower", "left" and "right" only indicate a relative positional relationship, which may change when the absolute position of the described object changes;
secondly, in the drawings of the disclosed embodiments of the invention, only the structures related to the disclosed embodiments are involved, and other structures can follow common designs; where no conflict arises, the same embodiment and different embodiments of the invention can be combined with each other;
finally, the above description covers only preferred embodiments of the invention and does not limit it; any modification, equivalent replacement or improvement made within the spirit and principle of the invention shall be included in its protection scope.

Claims (7)

1. An airplane detection method based on a 3d airplane model is characterized by comprising the following steps:
the method comprises the following steps: opening the 3d model of the airplane to obtain screenshots of the airplane body with airplane masks at different angles;
step two: generating a vivid airplane picture; uniformly setting background pictures into a fixed size, randomly selecting a point in a picture range as the upper left corner of a pasted plane, selecting a plane screenshot with a mask, obtaining an external rectangle of the plane according to the mask, picking up the plane according to the external rectangle, setting a size for the plane, and finally pasting the plane onto the selected point of the background picture by using a paste function of a PIL library according to the mask of the plane screenshot to obtain a vivid plane picture and generate frame information of the plane in the picture;
step three: the plane picture and the corresponding frame information thereof generated in the step two and the plane and frame information of the real application scene form a training set for deep learning, plane data are generated in a programmed mode, and data which can be used for plane detection training can be obtained through the data of the real application scene;
step four: simulating airplane pictures under various weather conditions, taking a picture in a training set, taking a picture in the weather of night, rain, snow and the like, storing the picture as the same resolution as the picture in the training set, adding the two pictures according to the weight of 1 to 1, namely multiplying the pixels on the same positions of the two pictures by 0.5 respectively, then adding the pixels to obtain a simulated picture, generating the simulated picture according to the process, and adding the simulated picture into the training set corresponding to the information of the airplane frame;
step five: training a deep learning model: the generated airplane pictures, the airplane pictures under various weather conditions, a small number of real airplane pictures and airplane frames corresponding to the pictures are input into a neural network for reasonable training, and then the deep learning model can be obtained.
2. The aircraft detection method based on the 3d aircraft model according to claim 1, characterized in that: the specific steps of the first step are as follows:
(1) opening a 3d model file of the airplane by using three-dimensional image viewing software;
(2) directly rendering to obtain screenshots of the airplane body with airplane masks at different angles; the screenshots are PNG pictures in RGBA format, and the airplane mask is the A (alpha) layer of the RGBA format, which represents the opacity of each pixel; the airplane mask is essentially a binary picture in which 0 means transparent and 1 means opaque: in the airplane screenshot, the A layer is 0 at background pixels and 1 at airplane pixels.
3. The aircraft detection method based on the 3d aircraft model according to claim 1, characterized in that: the specific steps of the first step are as follows:
(1) opening the 3d model file through a website for viewing 3d models online;
(2) rotating the airplane while recording the screen to obtain a video file, then extracting frames to obtain airplane screenshots at different angles;
(3) obtaining the mask of each airplane screenshot using the opencv library.
4. The aircraft detection method based on the 3d aircraft model according to claim 1, characterized in that: in the second step, the background picture is one or more of a high-definition wallpaper picture, a high-resolution picture and a real application scene picture.
5. The aircraft detection method based on the 3d aircraft model according to claim 1, characterized in that: in the second step, the size of the background picture is 512 × 512 pixels.
6. The aircraft detection method based on the 3d aircraft model according to claim 1, characterized in that: in the second step, the frame information includes: the abscissa and ordinate of the upper left corner and the width and height of the airplane.
7. The aircraft detection method based on the 3d aircraft model according to claim 1, characterized in that: in the third step, about 100 real application scene pictures are used per camera.
CN202011040774.9A 2020-09-28 2020-09-28 Airplane detection method based on airplane 3d model Pending CN112200227A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011040774.9A CN112200227A (en) 2020-09-28 2020-09-28 Airplane detection method based on airplane 3d model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011040774.9A CN112200227A (en) 2020-09-28 2020-09-28 Airplane detection method based on airplane 3d model

Publications (1)

Publication Number Publication Date
CN112200227A true CN112200227A (en) 2021-01-08

Family

ID=74006784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011040774.9A Pending CN112200227A (en) 2020-09-28 2020-09-28 Airplane detection method based on airplane 3d model

Country Status (1)

Country Link
CN (1) CN112200227A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409286A (en) * 2018-10-25 2019-03-01 哈尔滨工程大学 Ship target detection method based on the enhancing training of pseudo- sample
CN109977983A (en) * 2018-05-07 2019-07-05 广州逗号智能零售有限公司 Obtain the method and device of training image
CN110069972A (en) * 2017-12-11 2019-07-30 赫克斯冈技术中心 Automatic detection real world objects
CN110084304A (en) * 2019-04-28 2019-08-02 北京理工大学 A kind of object detection method based on generated data collection
WO2019177738A1 (en) * 2018-03-13 2019-09-19 Toyota Research Institute, Inc. Systems and methods for reducing data storage in machine learning
CN110852332A (en) * 2019-10-29 2020-02-28 腾讯科技(深圳)有限公司 Training sample generation method and device, storage medium and electronic equipment
US20200167966A1 (en) * 2018-11-27 2020-05-28 Raytheon Company Computer architecture for artificial image generation using auto-encoder


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RICHARD SZELISKI: "Computer Vision: Algorithms and Applications" (《计算机视觉-算法与应用》), 31 January 2012, Tsinghua University Press *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210108