CN111178165B - Automatic extraction method for air-to-ground target information based on small sample training video - Google Patents
- Publication number: CN111178165B
- Application number: CN201911273109.1A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/40 — Scenes; scene-specific elements in video content
- G06F18/24147 — Pattern recognition; classification by distances to closest patterns, e.g. nearest-neighbour classification
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06T5/70 — Image enhancement or restoration; denoising, smoothing
- G06T5/90 — Image enhancement or restoration; dynamic range modification of images or parts thereof
- G06V10/267 — Image preprocessing; segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V20/46 — Extracting features or characteristics from video content, e.g. video fingerprints, representative shots or key frames
- G06V20/49 — Segmenting video sequences, e.g. determining units such as shots or scenes
- G06V2201/07 — Target detection
- Y02A90/10 — Information and communication technologies (ICT) supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses an automatic extraction method for air-to-ground target information based on small-sample training video. The method comprises image acquisition, image preprocessing, image segmentation, target feature extraction, target classification, rule learning, and error detection; when a recognition result is wrong, secondary training is performed and learning and error detection are repeated. After an unmanned aerial vehicle or other aircraft collects video of ground targets such as people, vehicles and animals, the corresponding target types are extracted and classified, yielding information such as target type, motion direction and speed, and improving the reconnaissance value of the aircraft.
Description
Technical Field
The invention relates to an information acquisition method, and in particular to an automatic extraction method for air-to-ground target information based on small-sample training video.
Background
In recent years, with the development of low-altitude detection aircraft such as unmanned aerial vehicles (UAVs) and sounding balloons, and especially the rapid development of UAVs, the number of civil and police UAVs has grown explosively. The civil UAV fleet was expected to exceed 3 million units by 2020. UAVs are widely used for low-altitude reconnaissance and surveillance, which makes low-altitude ground video an increasingly important source of information in such applications.
At present, most UAVs capture video with a 1080p gimbal pod with 30x optical zoom. UAVs typically fly above 150 meters; vertical take-off and landing types in particular can fly above 3000 meters, at speeds above 60 km/h. Under these conditions, small, slow-moving targets such as people, cars and animals appear in the video at extremely low pixel density — often fewer than 100 pixels for a person. Existing video information extraction methods struggle to extract useful information in this regime, which greatly limits UAV reconnaissance and surveillance applications, in particular automatic alarm and early warning for people and vehicles at low and medium altitudes. Many UAVs today merely collect ground video that is then screened by human eyes, which greatly reduces their efficiency and increases manpower costs.
At present there is no dedicated information extraction method for low-altitude, low-speed ground targets in video; only general methods exist for extracting and identifying small targets. These can extract small, slow ("small-slow") target information from low-altitude ground video to some extent, but low-altitude ground video is captured from top-down and oblique viewpoints, whose characteristics differ greatly from ground-level imagery.
Traditional detection and extraction of targets against a moving background mainly uses background-model methods, optical-flow methods, inter-frame difference methods, and machine-learning neural-network methods. Background-model and inter-frame difference methods both require a significant change between the background and the moving target. Because air-to-ground small-slow targets are captured from a fast-moving aircraft at long range, the pixel density of a small moving target in the image is extremely low, inter-frame displacement is small, the duration is extremely short, few usable frames appear in the video, and noise interference is large, so the robustness of target extraction is extremely poor. In recent years, registration, change detection, false-alarm elimination and motion tracking based on video context information have been used, but in actual scene detection, when the target pixel density is below 150 and the shortest dimension is below 10 pixels, the accuracy of small-slow target information extraction is below 75%, which greatly limits the value of low-altitude ground information extraction. Optical-flow fields have also recently become popular for target extraction, for example a video target tracking method based on SURF feature-point graph matching and a motion generation model, which describes the target object with a set of speeded-up robust features (SURF) to realize target extraction and tracking.
In recent years, machine-learning methods have solved many problems that are difficult for hand-crafted algorithms. For small targets, common international practice is to apply detectors such as FRCNN, SSD and YOLO V3 to low-altitude air-to-ground video small-slow target extraction and analysis. Because few learning samples and few air-to-ground algorithm models exist, current work relies on sample libraries such as image libraries, the UCF library, the Kinetics library and the COCO dataset, inducing object features from grayscale gradients, line curvature, basic edges, color blocks and textures, with basic features reused in the shallow network layers. These methods have low accuracy for detecting and extracting small-slow targets in low-altitude ground video: experiments put current accuracy at roughly 45%-65%, with a high false-alarm rate — some distance from practical use.
Disclosure of Invention
The invention aims to provide a low-altitude, low-speed target information extraction method that trains on low-altitude ground video with small samples, based on an RCNN neural network and an OFSBL operator, to address the problems of low resolution, steep (near-vertical) viewing angles, high flight altitude and speed of the aircraft, low pixel density of slow targets, the lack of an air-to-ground sample library, small target feature quantity, and the short duration of targets in the video. With this method, after a UAV or other aircraft collects video of ground targets such as people, vehicles and animals, the corresponding types can be extracted and classified, yielding information such as target type, motion direction and speed, and improving the reconnaissance value of the aircraft.
The invention adopts the technical scheme that:
An automatic extraction method for air-to-ground target information based on small-sample training video comprises the following steps:
S1, image acquisition: acquire image information by video recording;
S2, image preprocessing: reduce noise and clutter interference in the image and enhance the contrast between target and background;
S3, image segmentation: locate and separate the objects to be identified from the image;
S4, target feature extraction: after the target to be identified is segmented from the image, describe it mathematically to obtain its feature vectors;
S5, target classification: compare each feature vector of the object with the feature vectors representing each object in the classifier's feature library; once the closest match for the image object is determined, the classifier assigns a confidence probability that the object belongs to the given type, and the class of the object is determined from this confidence;
S6, when the recognition result is wrong, input the error sample for secondary training and repeat image preprocessing to increase the robustness of slow-target extraction;
S7, rule learning: use feature operators combining YOLO V3 with grayscale-gradient vectors and optical-flow fields as the learning algorithm, with FRCNN network learning;
S8, error detection: automatically compare the labelled atlas with the checking result, apply parameter corrections to accelerate network convergence, and perform target classification again.
Further, in S3, the image is segmented into 32×32 sub-images.
Further, in S4, the grayscale Sobel vector and the optical-flow field of each 32×32 sub-image are described mathematically.
Further, in S5, the global optical-flow field and grayscale gradient feature points of each frame are extracted with the fast robust feature OFSBL method, and the optical-flow vector and the grayscale Sobel gradient vector are combined into a new vector used as the machine-learning feature vector.
Further, in S8, each image-block vector is compared with the sample vectors, and convergence is declared when the vector similarity exceeds 75%, giving the recognized category.
The invention has the following beneficial effects:
1. Compared with traditional methods, the detection accuracy for low-speed targets in low-altitude ground video is greatly improved, exceeding 93% when the target pixel density is above 150;
2. OFSBL feature parameters can be adjusted manually, and the network converges quickly;
3. The advantages of YOLO V3 and FRCNN are combined, giving a wide training application range;
4. Positive and negative training are combined, giving high detection robustness.
Drawings
FIG. 1 is a flow chart of learning training according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention discloses a method for automatically extracting air-to-ground target information based on small-sample training video, comprising the following steps:
S1, image acquisition: acquire image information by video recording;
S2, image preprocessing: reduce noise in the image with a median filter and enhance the contrast between target and background;
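The preprocessing step can be sketched with plain numpy. This is a hedged illustration, not the patent's implementation: the function names are my own, and the linear contrast stretch is one common choice the patent does not specify.

```python
import numpy as np

def median_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter via edge-padded shifted stacks (numpy only)."""
    p = np.pad(img, 1, mode="edge")
    # Collect the 9 neighbours of every pixel as a stack, then take the median.
    stack = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

def stretch_contrast(img: np.ndarray) -> np.ndarray:
    """Linear contrast stretch to the full [0, 255] range."""
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.float64)
    return (img - lo) * 255.0 / (hi - lo)
```

The median filter suppresses isolated salt-and-pepper noise (a single bright pixel disappears because its eight neighbours outvote it), which matters at the very low target pixel counts the patent targets.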
S3, image segmentation: divide the image into 32×32 sub-images;
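Splitting a frame into non-overlapping 32×32 blocks is straightforward with numpy reshaping; the zero-padding of ragged edges below is my own assumption, since the patent does not say how non-multiple frame sizes are handled.

```python
import numpy as np

def tile_image(img: np.ndarray, size: int = 32) -> np.ndarray:
    """Split an HxW image into non-overlapping size x size blocks.

    The image is zero-padded on the right/bottom so both dimensions
    become multiples of `size`; returns an array of shape (n, size, size).
    """
    h, w = img.shape
    ph, pw = (-h) % size, (-w) % size
    padded = np.pad(img, ((0, ph), (0, pw)))
    H, W = padded.shape
    blocks = padded.reshape(H // size, size, W // size, size).swapaxes(1, 2)
    return blocks.reshape(-1, size, size)
```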
S4, target feature extraction: after the target to be identified is segmented from the image, describe it mathematically to obtain the feature vector of each image block;
specifically, the grayscale Sobel vector and the optical-flow field of each 32×32 sub-image are described mathematically to obtain the block's feature vector.
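The Sobel part of the block descriptor can be sketched as follows. The patent does not define the exact form of its "grayscale Sobel vector"; the 8-bin gradient-magnitude histogram here is an illustrative stand-in of my own.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2_same(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Same'-size 2-D correlation with zero padding (numpy only, 3x3 kernels)."""
    p = np.pad(img, 1)
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def sobel_feature(block: np.ndarray) -> np.ndarray:
    """Normalised gradient-magnitude histogram of a 32x32 block."""
    gx = conv2_same(block, SOBEL_X)
    gy = conv2_same(block, SOBEL_Y)
    mag = np.hypot(gx, gy)                       # per-pixel gradient magnitude
    hist, _ = np.histogram(mag, bins=8, range=(0.0, mag.max() + 1e-9))
    return hist / hist.sum()                     # 8-bin descriptor, sums to 1
```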
S5, target classification: compare each feature vector of the object with the feature vectors representing each object in the classifier's feature library; once the closest match for the image object is determined, the classifier assigns a confidence probability that the object belongs to the given type, and the class is determined from this confidence. The global optical-flow field and grayscale gradient feature points of each frame are extracted with the fast robust feature OFSBL algorithm, and the optical-flow vector and the grayscale Sobel gradient vector are combined into a new vector used as the machine-learning feature vector.
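OFSBL is the patent's own operator and is not publicly documented, so the sketch below shows only the generic part of S5: concatenating a flow descriptor with a gradient descriptor, then taking the nearest library match with cosine similarity as the confidence. All names and the cosine choice are assumptions.

```python
import numpy as np

def fuse_features(flow_vec: np.ndarray, sobel_vec: np.ndarray) -> np.ndarray:
    """Concatenate L2-normalised flow and gradient descriptors into one vector."""
    def unit(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([unit(flow_vec), unit(sobel_vec)])

def classify(feature: np.ndarray, library: dict):
    """Nearest match in the feature library, with cosine similarity as confidence.

    `library` maps class name -> reference feature vector.
    Returns (best_class, similarity score).
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    best = max(library, key=lambda k: cos(feature, library[k]))
    return best, cos(feature, library[best])
```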
S6, when the recognition result is wrong, input the error sample for secondary training and repeat image preprocessing to increase the robustness of slow-target extraction;
S7, rule learning: use feature operators combining YOLO V3 with grayscale-gradient vectors and optical-flow fields as the learning algorithm, with FRCNN network learning;
S8, error detection: automatically compare the labelled atlas with the checking result, apply parameter corrections to accelerate network convergence, and perform target classification again.
In error detection, image-block vectors are compared with sample vectors, and convergence is declared when the vector similarity exceeds 75%.
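The 75% convergence check of S8 reduces to a single threshold comparison. A minimal sketch, again assuming cosine similarity as the comparison measure (the patent only says "vector contrast value"):

```python
import numpy as np

MATCH_THRESHOLD = 0.75   # the patent converges when similarity exceeds 75%

def is_match(block_vec: np.ndarray, sample_vec: np.ndarray,
             threshold: float = MATCH_THRESHOLD) -> bool:
    """Accept a class assignment only when the block/sample similarity
    clears the threshold; below it, the sample would be fed back for
    secondary training (S6)."""
    sim = float(block_vec @ sample_vec /
                (np.linalg.norm(block_vec) * np.linalg.norm(sample_vec) + 1e-12))
    return sim > threshold
```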
Claims (5)
1. An automatic extraction method for air-to-ground target information based on small-sample training video, characterized by comprising the following steps:
S1, image acquisition: acquire image information by video recording;
S2, image preprocessing: reduce noise and clutter interference in the image and enhance the contrast between target and background;
S3, image segmentation: locate and separate the objects to be identified from the image;
S4, target feature extraction: after the target to be identified is segmented from the image, describe it mathematically to obtain its feature vectors;
S5, target classification: compare each feature vector of the object with the feature vectors representing each object in the classifier's feature library; once the closest match for the image object is determined, the classifier assigns a confidence probability that the object belongs to the given type, and the class of the object is determined from this confidence;
S6, when the recognition result is wrong, input the error sample for secondary training and repeat image preprocessing to increase the robustness of slow-target extraction;
S7, rule learning: use feature operators combining YOLO V3 with grayscale-gradient vectors and optical-flow fields as the learning algorithm, with FRCNN network learning;
S8, error detection: automatically compare the labelled atlas with the checking result, apply parameter corrections to accelerate network convergence, and perform target classification again.
2. The automatic extraction method according to claim 1, characterized in that: in S3, the image is segmented into 32×32 sub-images.
3. The automatic extraction method according to claim 1, characterized in that: in S4, the grayscale Sobel vector and the optical-flow field of each 32×32 sub-image are described mathematically.
4. The automatic extraction method according to claim 1, characterized in that: in S5, the global optical-flow field and grayscale gradient feature points of each frame are extracted with the fast robust feature OFSBL method, and the optical-flow vector and the grayscale Sobel gradient vector are combined into a new vector used as the machine-learning feature vector.
5. The automatic extraction method according to claim 1, characterized in that: in S8, each image-block vector is compared with the sample vectors, and convergence is declared when the vector similarity exceeds 75%.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911273109.1A (CN111178165B) | 2019-12-12 | 2019-12-12 | Automatic extraction method for air-to-ground target information based on small sample training video |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911273109.1A (CN111178165B) | 2019-12-12 | 2019-12-12 | Automatic extraction method for air-to-ground target information based on small sample training video |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111178165A | 2020-05-19 |
| CN111178165B | 2023-07-18 |
Family: ID=70655452

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911273109.1A (CN111178165B, active) | Automatic extraction method for air-to-ground target information based on small sample training video | 2019-12-12 | 2019-12-12 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN111178165B (en) |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017107188A1 * | 2015-12-25 | 2017-06-29 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Method and apparatus for rapidly recognizing video classification |
| CN109376591A * | 2018-09-10 | 2019-02-22 | Wuhan University | Ship target detection method with joint training of deep-learning features and visual features |
| CN109492561A * | 2018-10-29 | 2019-03-19 | Beijing Institute of Remote Sensing Equipment | Remote-sensing image ship detection method based on an improved YOLO V2 model |

Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105447872A * | 2015-12-03 | 2016-03-30 | Sun Yat-sen University | Method for automatically identifying liver tumor type in ultrasonic image |
- 2019-12-12: application CN201911273109.1A filed in China; granted as patent CN111178165B (active)
Non-Patent Citations (2)

| Title |
|---|
| Vehicle recognition method for complex scenes based on deep learning; Yu Sheng, Chen Jingdong, Wang Xinyu; Computer and Digital Engineering (09); full text * |
| Research on meta-learning methods for small-sample target recognition by UAVs; Li Hongnan, Wu Lizhen, Niu Yifeng, Wang Chang; Unmanned Systems Technology (06); full text * |
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN111178165A | 2020-05-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
CN111145545B (en) | Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning | |
CN113359810B (en) | Unmanned aerial vehicle landing area identification method based on multiple sensors | |
EP2874097A2 (en) | Automatic scene parsing | |
CN109949361A (en) | A kind of rotor wing unmanned aerial vehicle Attitude estimation method based on monocular vision positioning | |
CN109460709A (en) | The method of RTG dysopia analyte detection based on the fusion of RGB and D information | |
CN108446634B (en) | Aircraft continuous tracking method based on combination of video analysis and positioning information | |
CN111326023A (en) | Unmanned aerial vehicle route early warning method, device, equipment and storage medium | |
CN110147714B (en) | Unmanned aerial vehicle-based coal mine goaf crack identification method and detection system | |
CN108830246B (en) | Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment | |
CN109949593A (en) | A kind of traffic lights recognition methods and system based on crossing priori knowledge | |
CN106446785A (en) | Passable road detection method based on binocular vision | |
CN109492525B (en) | Method for measuring engineering parameters of base station antenna | |
CN111831010A (en) | Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice | |
CN112818905A (en) | Finite pixel vehicle target detection method based on attention and spatio-temporal information | |
CN115113206A (en) | Pedestrian and obstacle detection method for assisting driving of underground railcar | |
CN114038193A (en) | Intelligent traffic flow data statistical method and system based on unmanned aerial vehicle and multi-target tracking | |
CN114689030A (en) | Unmanned aerial vehicle auxiliary positioning method and system based on airborne vision | |
CN116109950A (en) | Low-airspace anti-unmanned aerial vehicle visual detection, identification and tracking method | |
Liu et al. | Vehicle detection from aerial color imagery and airborne LiDAR data | |
CN110503647A (en) | Wheat plant real-time counting method based on deep learning image segmentation | |
Cheng et al. | Moving Target Detection Technology Based on UAV Vision | |
CN111597992B (en) | Scene object abnormity identification method based on video monitoring | |
CN111178165B (en) | Automatic extraction method for air-to-ground target information based on small sample training video | |
CN110458064B (en) | Low-altitude target detection and identification method combining data driving type and knowledge driving type | |
CN111950524A (en) | Orchard local sparse mapping method and system based on binocular vision and RTK |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |