CN109918988A - A portable UAV detection system combining imaging simulation technology - Google Patents
A portable UAV detection system combining imaging simulation technology Download PDF Info
- Publication number
- CN109918988A (application CN201811649068.7A)
- Authority
- CN
- China
- Prior art keywords
- uav
- module
- detection
- portable
- detection system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Landscapes
- Image Analysis (AREA)
Abstract
The present invention proposes a portable UAV detection system that combines imaging simulation technology, relating to the interdisciplinary field of imaging simulation and machine vision. The system uses imaging simulation to simplify data collection and shorten the model training cycle, and performs inference with a network whose internal weights have been tuned, thereby detecting UAVs. The portable UAV detection system of the present invention can detect multiple types of UAVs in real time against a variety of complex backgrounds. It uses deep learning to overcome the poor generalization of conventional template matching, and by incorporating imaging simulation it simplifies data collection and shortens the model training cycle. Because it can also be deployed on a variety of embedded devices, it provides a portable, intelligent UAV detection system.
Description
Technical field
The invention belongs to the interdisciplinary field of imaging simulation and machine vision, and relates to a portable UAV detection system that combines imaging simulation technology.
Background technique
In recent years, UAV technology has developed rapidly, and more and more people use UAVs for a wide range of creative work. Consequently, in no-fly zones and public places the flight of UAVs must be supervised; otherwise public safety and privacy are at risk. A crucial step in UAV supervision is UAV detection: quickly locating the UAV in each frame of a video. Supervision will undoubtedly move toward intelligent systems, so designing a portable, intelligent detection system is essential.
Nowadays, artificial intelligence is steadily transforming daily life. To meet the demand for UAV detection in complex environments and to overcome the poor generalization of conventional template matching to unknown objects, an intelligent UAV detection system is needed. With the development of deep learning, UAV detection based on deep learning can adapt to complex background environments and retains good accuracy and efficiency even under difficult conditions (e.g., occlusion, strong light, small targets). However, deep-learning-based UAV detection relies on large amounts of training data, and UAVs are unusual detection targets compared with everyday objects: relevant UAV training data are hard to find in the public datasets used to train neural networks. Manually collecting videos of specific backgrounds and UAV types from the Internet requires a great deal of time, and the cost of manual annotation must also be considered.
Summary of the invention
In order to solve the above technical problems, the invention proposes a portable UAV detection system that combines imaging simulation technology. The system uses imaging simulation to simplify data collection and shorten the model training cycle, and performs inference with a network whose internal weights have been tuned, thereby detecting UAVs.
The invention is implemented with the following technical solution:
A portable UAV detection system combining imaging simulation technology comprises an imaging simulation module, a deep-learning-based detection training module, an image/video acquisition module, and a model detection computation module. The imaging simulation module is connected to the deep-learning-based detection training module; the detection training module is connected to the model detection computation module; and the image/video acquisition module is connected to the model detection computation module.
The imaging simulation module renders photo-realistic UAV images with corresponding annotation data through simulation and supplies them as training data to the deep-learning-based detection training module.
The deep-learning-based detection training module learns the deep features of UAVs and realizes localization, recognition, tracking, and keypoint identification of UAVs.
The image/video acquisition module captures UAV video data, pre-processes the input video, and feeds it to the model detection computation module.
The model detection computation module uses the data provided by the deep-learning-based detection training module as computation weights to form a detection network model and processes the video data input from the image/video acquisition module in real time. It performs real-time detection on video from a variety of surveillance scenes, localizes and recognizes UAVs, and outputs their location information.
Wherein, the imaging simulation module uses a physically based renderer and combines different panoramic environment maps, UAV 3D models, measured surface-material reflectance data, camera intrinsics, extrinsics, and target positions to render realistic UAV simulation images.
Preferably, during rendering the environment map serves as the light source, providing the 360-degree illumination incident on the UAV. The environment map is synthesized from photographs of the same scene taken at several specific angles, or obtained from a database of panoramic environment maps.
Wherein, more appearance variations are obtained by changing the material assigned to the UAV 3D model, the materials including alloys and plastics.
Preferably, the deep-learning-based detection training module is configured with convolutional neural networks of multiple architectures and uses the training data provided by the imaging simulation module to learn the deep features of UAVs.
Wherein, the deep-learning-based detection training module can be deployed on a variety of hardware devices, with the system environment configuration and the choice of convolutional network architecture depending on the computing capability of the device.
Preferably, for each of the network architectures, pre-trained internal network weights are used to obtain the initial parameters of each convolution kernel and the parameters of the fully connected layers.
Wherein, the model detection computation module uses the internal network weights trained by the deep-learning-based detection training module as computation weights to form a detection model, performs inference on the video data input from the image/video acquisition module, and realizes localization and recognition of UAVs.
Preferably, the model detection computation module is configured with a deep-learning-based object detection algorithm and a tracking algorithm; when the detection speed of the object detection algorithm cannot reach the required real-time frame rate, the tracking algorithm is invoked automatically.
Advantageous effects: the portable UAV detection system of the present invention, which combines imaging simulation technology, can detect multiple types of UAVs in real time against a variety of complex backgrounds. It uses deep learning to overcome the poor generalization of conventional template matching, and by incorporating imaging simulation it simplifies data collection and shortens the model training cycle. Because it can also be deployed on a variety of embedded devices, it provides a portable, intelligent UAV detection system.
Detailed description of the invention
Fig. 1 is a schematic diagram of a portable UAV detection system combining imaging simulation technology, provided in an embodiment of the present invention.
Specific embodiment
A specific embodiment of the invention is explained in detail below with reference to the drawing.
As shown in Fig. 1, a portable UAV detection system combining imaging simulation technology, provided in an embodiment of the present invention, comprises four modules: an imaging simulation module 1, a deep-learning-based detection training module 2, an image/video acquisition module 3, and a model detection computation module 4. The imaging simulation module 1 uses a physically based renderer; its inputs are a variety of panoramic environment maps and UAV 3D models of various types, and after rendering it outputs a large and diverse set of annotated UAV simulation images. The detection training module 2 trains convolutional neural networks on the data provided by module 1 and outputs network weights for UAV detection. The image/video acquisition module 3 is an acquisition device such as a camera; it captures images or video of the real scene and outputs a video stream to the model detection computation module 4. Finally, module 4 processes the input with the trained model, localizes and recognizes UAVs, and outputs their location information.
The present invention combines imaging simulation with deep learning. First, photo-realistic UAV images with corresponding annotation data are rendered through simulation, solving the problem that real image data are difficult to obtain and providing training data for the deep learning model. Then, state-of-the-art deep learning techniques are used to localize, recognize, track, and identify keypoints of UAVs. Next, the whole system is deployed on miniaturized embedded devices, making it portable and compact. Finally, the UAV detection system trained on simulated data can accurately, quickly, and efficiently detect UAV targets in images and video captured by the camera against a variety of complex backgrounds, yielding an intelligent system that can detect UAVs under complex conditions. By configuring the runtime environment, the system can be deployed on a variety of embedded devices.
1. Imaging simulation module:
This module uses a physically based renderer and combines different panoramic environment maps, UAV 3D models, measured surface-material reflectance data, camera intrinsics, extrinsics, and target positions to render a large and diverse set of realistic UAV simulation images. During rendering, the environment map serves as the light source, providing the 360-degree illumination incident on the UAV. Typically, the environment map is synthesized from photographs of the same scene taken at several specific angles, or obtained from a database of panoramic environment maps. While rendering each simulation image, the coordinates of the target bounding box, the positions of the target's key parts, and the positions of all target pixels in the image can be obtained exactly, providing training data for the deep-learning-based detection training module 2.
To increase the diversity of the simulated UAVs, in addition to using more 3D models to obtain more shape variations, the material assigned to a model can be changed to obtain more appearance variations. UAVs are mostly made of various alloys and plastics, so a database containing measured reflectance data for 100 materials can be used.
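The exact bounding-box annotation mentioned above falls out of the simulation for free: since the UAV's 3D model and the camera parameters are known, the model's vertices can be projected into the image and their pixel extent taken as the box. The following is a minimal, hypothetical sketch of that idea with a plain pinhole camera; the intrinsics, pose, and four vertices are illustrative assumptions, not values from the patent.

```python
# Sketch: deriving a 2D bounding-box label for a rendered UAV by projecting
# its 3D model vertices through a pinhole camera (intrinsics K, pose R, t).

def project(points, K, R, t):
    """Project 3D world points into pixel coordinates (pinhole model)."""
    pixels = []
    for X, Y, Z in points:
        # world -> camera coordinates
        xc = R[0][0]*X + R[0][1]*Y + R[0][2]*Z + t[0]
        yc = R[1][0]*X + R[1][1]*Y + R[1][2]*Z + t[1]
        zc = R[2][0]*X + R[2][1]*Y + R[2][2]*Z + t[2]
        # perspective division, then apply focal length and principal point
        u = K[0][0] * xc / zc + K[0][2]
        v = K[1][1] * yc / zc + K[1][2]
        pixels.append((u, v))
    return pixels

def bounding_box(pixels):
    """Axis-aligned box (u_min, v_min, u_max, v_max) over projected points."""
    us = [p[0] for p in pixels]
    vs = [p[1] for p in pixels]
    return (min(us), min(vs), max(us), max(vs))

# Identity pose; focal length 500 px, principal point (320, 240).
K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0, 0, 0]

# Four illustrative extreme vertices of a UAV model 5 m in front of the camera.
verts = [(-0.5, -0.2, 5.0), (0.5, -0.2, 5.0), (-0.5, 0.2, 5.0), (0.5, 0.2, 5.0)]
box = bounding_box(project(verts, K, R, t))
print(box)  # (270.0, 220.0, 370.0, 260.0)
```

The same projection, applied to labeled keypoints of the model (such as rotor centers) or to every rendered surface point, yields the key-part and pixel-level annotations the module describes.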
2. Deep-learning-based detection training module:
This module trains convolutional neural networks on the training data provided by the imaging simulation module 1 to learn the deep features of UAVs. During training, the loss function is computed and the internal network weights are adjusted by backpropagation, improving the ability to localize and recognize UAVs. The module is configured with convolutional neural networks of multiple architectures to meet the computing demands of different hardware devices.
The module can be deployed on a variety of hardware devices. For each device's computing capability, the corresponding system environment is configured and a suitable network architecture is selected, combining single-stage fast detectors with two-stage object detection algorithms, so that training and detection work on different hardware. For each architecture, pre-trained internal network weights provide the initial parameters of each convolution kernel and the parameters of the fully connected layers. Transfer learning is then applied on this basis to switch the task to UAV detection. Training the networks on the data provided by module 1 and tuning the model improves the precision and recall of single-class or multi-class detection; the precision/recall trade-off is balanced for different scene requirements, and accuracy is improved under complex illumination (too dark, overexposed), complex backgrounds (urban, rural), non-rigid deformation, low resolution, and blurred images.
3. Image/video acquisition module:
This module captures UAV video data with a high-speed camera, pre-processes the input with camera functions such as anti-overexposure and anti-shake, and then feeds the video stream into the model detection computation module 4.
4. Model detection computation module:
This module uses the trained weights provided by the deep-learning-based detection training module 2 as computation weights to form a detection network model and processes the video stream from the image/video acquisition module 3 in real time. Using the multi-architecture computation weights and the tracking algorithm configured in the module, it performs real-time detection on video from a variety of surveillance scenes, localizes and recognizes UAVs, and outputs their location information. The module is configured with several deep-learning-based object detection algorithms and tracking algorithms to meet the precision and speed requirements of detection tasks in different scenes. When the detection speed of the object detection algorithm cannot reach the required real-time frame rate, the tracking algorithm is invoked automatically, increasing detection speed and enabling fast tracking of the UAV.
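The automatic fallback just described can be sketched as a simple per-frame scheduling decision. The detector latency, frame budget, and re-detection interval below are invented for illustration; the patent does not specify how often the detector is re-run once tracking takes over.

```python
# Sketch of the detect-or-track fallback: if the detector fits within the
# real-time frame budget, use it on every frame; otherwise invoke the tracker
# automatically, re-running the detector periodically to correct drift.

def schedule(n_frames, detector_ms, budget_ms, redetect_every=10):
    """Return, per frame, which algorithm to run ('detector' or 'tracker')."""
    if detector_ms <= budget_ms:
        # Detector keeps up with the required frame rate: always detect.
        return ["detector"] * n_frames
    # Detector too slow for real time: track between periodic detections.
    return ["detector" if i % redetect_every == 0 else "tracker"
            for i in range(n_frames)]

# A 30 fps stream allows ~33 ms per frame; a 120 ms detector cannot keep up.
plan = schedule(6, detector_ms=120, budget_ms=33, redetect_every=3)
print(plan)  # ['detector', 'tracker', 'tracker', 'detector', 'tracker', 'tracker']
```

The design choice is the usual one in surveillance pipelines: the tracker is much cheaper than the detector, so interleaving the two preserves real-time throughput at the cost of slightly stale boxes between detections.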
The present invention addresses the problem of UAV supervision by providing a portable UAV detection system that combines imaging simulation technology. It can detect multiple types of UAVs in real time against a variety of complex backgrounds, uses deep learning to overcome the poor generalization of conventional template matching, and by incorporating imaging simulation simplifies data collection and shortens the model training cycle. Because it can also be deployed on a variety of embedded devices, it provides a portable, intelligent UAV detection system.
Therefore, quickly generating training data with imaging simulation and training the network model is essential. Compared with manual collection, rendering UAV images with imaging simulation not only quickly yields large numbers of UAV images with exact bounding-box data, but also provides the positions of any key part of the UAV (such as a rotor), and even pixel-level labels of which pixels in the image belong to the UAV.
It is obvious to those skilled in the art that the embodiments of the present invention are not limited to the details of the above exemplary embodiments, and that the embodiments can be realized in other specific forms without departing from their spirit or essential attributes. The embodiments are therefore to be considered in all respects as illustrative and not restrictive; the scope of the embodiments is indicated by the appended claims rather than by the foregoing description, and all changes that fall within the meaning and range of equivalents of the claims are intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claims. Furthermore, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units, modules, or devices stated in a system, device, or terminal claim may also be implemented by the same unit, module, or device through software or hardware.
Finally, it should be noted that the above embodiments only illustrate, and do not limit, the technical solution of the embodiments of the present invention. Although the embodiments have been described in detail with reference to the above preferred embodiments, those skilled in the art should understand that modifications or equivalent replacements of the technical solution do not depart from its spirit and scope.
Claims (9)
1. A portable UAV detection system combining imaging simulation technology, comprising an imaging simulation module, a deep-learning-based detection training module, an image/video acquisition module, and a model detection computation module, wherein the imaging simulation module is connected to the deep-learning-based detection training module, the detection training module is connected to the model detection computation module, and the image/video acquisition module is connected to the model detection computation module; characterized in that:
the imaging simulation module renders photo-realistic UAV images with corresponding annotation data through simulation and supplies them as training data to the deep-learning-based detection training module;
the deep-learning-based detection training module learns the deep features of UAVs and realizes localization, recognition, tracking, and keypoint identification of UAVs;
the image/video acquisition module captures UAV video data, pre-processes the input video, and feeds it to the model detection computation module;
the model detection computation module uses the data provided by the deep-learning-based detection training module as computation weights to form a detection network model and processes the video data input from the image/video acquisition module in real time; it performs real-time detection on video from a variety of surveillance scenes, localizes and recognizes UAVs, and outputs their location information.
2. The portable UAV detection system combining imaging simulation technology according to claim 1, characterized in that: the imaging simulation module uses a physically based renderer and combines different panoramic environment maps, UAV 3D models, measured surface-material reflectance data, camera intrinsics, extrinsics, and target positions to render realistic UAV simulation images.
3. The portable UAV detection system combining imaging simulation technology according to claim 2, characterized in that: during rendering the environment map serves as the light source, providing the 360-degree illumination incident on the UAV; the environment map is synthesized from photographs of the same scene taken at several specific angles, or obtained from a database of panoramic environment maps.
4. The portable UAV detection system combining imaging simulation technology according to claim 2, characterized in that: more appearance variations are obtained by changing the material assigned to the UAV 3D model, the materials including alloys and plastics.
5. The portable UAV detection system combining imaging simulation technology according to claim 1, characterized in that: the deep-learning-based detection training module is configured with convolutional neural networks of multiple architectures and uses the training data provided by the imaging simulation module to learn the deep features of UAVs.
6. The portable UAV detection system combining imaging simulation technology according to claim 5, characterized in that: the deep-learning-based detection training module can be deployed on a variety of hardware devices, with the system environment configuration and the choice of convolutional network architecture depending on the computing capability of the device.
7. The portable UAV detection system combining imaging simulation technology according to claim 5, characterized in that: for each of the network architectures, pre-trained internal network weights are used to obtain the initial parameters of each convolution kernel and the parameters of the fully connected layers.
8. The portable UAV detection system combining imaging simulation technology according to claim 7, characterized in that: the model detection computation module uses the internal network weights trained by the deep-learning-based detection training module as computation weights to form a detection model, performs inference on the video data input from the image/video acquisition module, and realizes localization and recognition of UAVs.
9. The portable UAV detection system combining imaging simulation technology according to claim 8, characterized in that: the model detection computation module is configured with a deep-learning-based object detection algorithm and a tracking algorithm; when the detection speed of the object detection algorithm cannot reach the required real-time frame rate, the tracking algorithm is invoked automatically.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811649068.7A CN109918988A (en) | 2018-12-30 | 2018-12-30 | A portable UAV detection system combining imaging simulation technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109918988A true CN109918988A (en) | 2019-06-21 |
Family
ID=66960059
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811649068.7A Pending CN109918988A (en) | 2018-12-30 | 2018-12-30 | A portable UAV detection system combining imaging simulation technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109918988A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101917637A (en) * | 2010-06-24 | 2010-12-15 | 清华大学 | Relighting method and system based on a free-viewpoint light transport matrix |
CN107589758A (en) * | 2017-08-30 | 2018-01-16 | 武汉大学 | Intelligent field UAV rescue method and system based on dual-source video analysis |
WO2018023556A1 (en) * | 2016-08-04 | 2018-02-08 | SZ DJI Technology Co., Ltd. | Methods and systems for obstacle identification and avoidance |
CN107862705A (en) * | 2017-11-21 | 2018-03-30 | 重庆邮电大学 | UAV small-target detection method based on motion features and deep learning features |
CN107895378A (en) * | 2017-10-12 | 2018-04-10 | 西安天和防务技术股份有限公司 | Object detection method and device, storage medium, electronic equipment |
CN108711172A (en) * | 2018-04-24 | 2018-10-26 | 中国海洋大学 | UAV identification and localization method based on fine-grained classification |
CN108897342A (en) * | 2018-08-22 | 2018-11-27 | 江西理工大学 | Localization and tracking method and system for fast-moving civilian multi-rotor UAVs |
CN108986050A (en) * | 2018-07-20 | 2018-12-11 | 北京航空航天大学 | Image and video enhancement method based on multi-branch convolutional neural networks |
CN109099779A (en) * | 2018-08-31 | 2018-12-28 | 江苏域盾成鹫科技装备制造有限公司 | UAV detection and intelligent interception system |
Non-Patent Citations (3)
Title |
---|
SHASHA YU et al.: "A Low-complexity Autonomous 3D Localization Method for Unmanned Aerial Vehicles by Binocular Stereovision Technology", 2018 10th International Conference on Intelligent Human-Machine Systems and Cybernetics * |
YI CHENG et al.: "Positioning method research for unmanned aerial vehicles based on Meanshift tracking algorithm", 2017 29th Chinese Control and Decision Conference (CCDC) * |
PAN Zhisong et al.: "A survey of online learning algorithms", Journal of Data Acquisition and Processing * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113222070A (en) * | 2021-06-03 | 2021-08-06 | 中国科学院软件研究所 | Automatic labeling method and system for simulation image data |
CN114283237A (en) * | 2021-12-20 | 2022-04-05 | 中国人民解放军军事科学院国防科技创新研究院 | Unmanned aerial vehicle simulation video generation method |
CN114283237B (en) * | 2021-12-20 | 2024-05-10 | 中国人民解放军军事科学院国防科技创新研究院 | Unmanned aerial vehicle simulation video generation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109523552A | Three-dimensional object detection method based on frustum point clouds | |
CN109697726A | End-to-end target state estimation method based on an event camera | |
CN108648194A | Three-dimensional target recognition, segmentation and pose measurement method and device based on CAD models | |
CN112270688B | Foreground extraction method, apparatus, device and storage medium | |
CN112525107B | Structured-light three-dimensional measurement method based on an event camera | |
CN106155299B | Method and device for gesture control of smart devices | |
CN107066975B | Video recognition and tracking system and method based on a depth sensor | |
CN110782498B | Fast general calibration method for visual sensing networks | |
CN109492618A | Object detection method and device based on grouped dilated convolutional neural network models | |
CN111915746A | Weakly-labeled three-dimensional point cloud target detection method and labeling tool | |
CN108305250A | Synchronous recognition and localization method for machine components in unstructured robot vision inspection | |
CN115082254A | Lean-control digital twin system for substations | |
CN108010122B | Method and system for reconstructing and measuring three-dimensional human body models | |
CN109918988A | A portable UAV detection system combining imaging simulation technology | |
CN109993806A | Color recognition method, device and electronic equipment | |
CN108090922A | Intelligent target tracking trajectory recording method | |
CN113379698A | Illumination estimation method based on stepwise joint supervision | |
CN104680570A | Video-based motion capture system and method | |
CN113743358A | Landscape visual feature recognition method based on omnidirectional acquisition and intelligent computation | |
CN111260687B | Aerial video target tracking method based on a semantic-aware network and correlation filtering | |
CN113686314A | Monocular water-surface target segmentation and monocular ranging method for a shipborne camera | |
CN109102527A | Method and device for acquiring video actions based on identification points | |
WO2020212776A1 | Creating training data variability in machine learning for object labelling from images | |
CN109781068A | Ground simulation evaluation system and method for vision measurement systems in space applications | |
CN109145861A | Emotion recognition device and method, head-mounted display device, storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190621 |