CN113487738B - Building based on virtual knowledge migration and shielding area monomer extraction method thereof - Google Patents

Building based on virtual knowledge migration and shielding area monomer extraction method thereof

Info

Publication number
CN113487738B
CN113487738B (application CN202110707259.XA)
Authority
CN
China
Prior art keywords
building
image
sim
label
simulation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110707259.XA
Other languages
Chinese (zh)
Other versions
CN113487738A (en)
Inventor
闫奕名 (Yan Yiming)
杨柳青 (Yang Liuqing)
宿南 (Su Nan)
冯收 (Feng Shou)
赵春晖 (Zhao Chunhui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202110707259.XA priority Critical patent/CN113487738B/en
Publication of CN113487738A publication Critical patent/CN113487738A/en
Application granted granted Critical
Publication of CN113487738B publication Critical patent/CN113487738B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention provides a method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer. It relates to the field of remote sensing image information extraction and aims at the perspective (amodal) instance segmentation problem in building information extraction, where training samples are insufficient and both the target conditions and the occlusion conditions are highly uncertain. A virtual knowledge generation module is introduced to automatically acquire a large amount of training data whose occlusion conditions are labelled according to the "real" situation, whose semantics are similar to the real scene and whose observation angles are comprehensively covered, which alleviates the shortage of training samples. By combining instance segmentation with an occlusion judgment module and a feature pyramid network, the diversity of building shapes, scales and occlusion conditions is handled, and high accuracy of building perspective instance segmentation is achieved.

Description

Method for individual extraction of buildings and their occluded areas based on virtual knowledge transfer
Technical Field
The invention relates to the field of remote sensing image information extraction, in particular to the technical field of building scene simulation and building information extraction in remote sensing images.
Background
Building information extraction from visible-light remote sensing images is an important research subject in both the national defense and civilian fields. Most related research is currently carried out on orthographic remote sensing images; however, in some special cases a sufficient number of orthographic images cannot be obtained, whereas oblique remote sensing images are easier to acquire. In practical applications, research on oblique remote sensing images therefore plays a very important role.
With the continuing application of deep learning to remote sensing image processing, object detection, semantic segmentation and instance segmentation have become new approaches to extracting building information from remote sensing images. Instance segmentation integrates object detection and semantic segmentation: it can classify every pixel and delineate object boundaries one by one to extract the buildings in an image, and is therefore an important method in the field of building information extraction. In 2016, researchers proposed amodal instance segmentation (also rendered as perspective instance segmentation), which further predicts the contours of the occluded parts of objects in a scene on top of conventional instance segmentation. In oblique remote sensing images occlusion is unavoidable; if the occluded building contours can be predicted by perspective instance segmentation, the three-dimensional structure of the building is constrained and, at the same time, the building texture information lost to occlusion can be recovered, which is of great significance for research on buildings in remote sensing images.
Due to the complexity of perspective instance segmentation of oblique remote sensing images, two problems remain to be solved:
First, buildings vary greatly in shape and scale, so most instance segmentation strategies based on detection boxes are difficult to configure in a targeted way. Meanwhile, the occlusion conditions in images taken from different observation angles are very complex, and the accuracy of building perspective instance segmentation is hard to guarantee.
Second, the data annotation required for perspective instance segmentation is very laborious and hard to obtain in large quantities, and the shortage of training samples further increases the difficulty of perspective instance segmentation.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems of insufficient training samples and low accuracy of building perspective instance segmentation in the prior art, a method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer is provided.
The technical solution adopted by the invention to solve the above problems is as follows: the method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer comprises the following steps:
Step one: acquire a simulation scene V_Sim of the scene to be identified and the label V_Label of the simulation scene V_Sim; then use the simulation scene V_Sim and its label V_Label to obtain a simulation image S_Image and the corresponding ground truth S_Label; finally, use the simulation image S_Image and its corresponding ground truth S_Label to form the virtual knowledge K_sim;
Step two: according to the virtual knowledge K_sim, obtain the detection box regions Boxes of the whole buildings and of the building facades and the attention map r_k of each instance in the simulation image S_Image, and generate the bases of the whole image, which include the base s_k used by each instance;
Step three: merge the base s_k with the attention map r_k of each instance to obtain the mask prediction m_d, then judge whether an occlusion region exists in the detection box regions Boxes of the whole building and of the facade; if not, take the mask prediction m_d as the perspective instance segmentation result; if an occlusion region exists, predict the occlusion mask m_occ of the occluded region, combine the mask prediction m_d and the occlusion mask m_occ into a comprehensive mask, and take the comprehensive mask as the perspective instance segmentation result;
Step four: obtain a pre-trained model pre_model from the above steps, perform transfer learning with the pre-trained model pre_model and a small number of real remote sensing image training samples to obtain the perspective instance segmentation model final_model, and then use final_model to complete the individual extraction of the buildings and their occluded areas.
Further, the simulation scene V_Sim is obtained by mapping the terrain, ground objects and texture information of the scene to be identified.
Further, the simulation image S_Image and the corresponding ground truth S_Label are obtained by processing the simulation scene V_Sim and its label V_Label with the same projection model P.
Further, in step two the detection box regions Boxes of the whole buildings and of the facades, and the attention map r_k of each instance in the image, are obtained by an FCOS-based object detector.
Further, the generation of the bases of the whole image in step two is realized by a BlendMask network.
Further, in step three the merging of the base s_k and the attention map r_k of each instance is achieved by the Blender strategy of BlendMask.
Further, in step three whether an occlusion region exists in the detection box regions Boxes of the whole building and of the facade is judged by an occlusion judgment network.
Further, in step three the occlusion mask m_occ of the occluded region is predicted by the occlusion judgment network.
Further, the virtual knowledge K_sim is expressed as:
K_sim = {S_Image, S_Label}
S_Image = P(V_Sim{Terrain, Object, Texture_Sim})
S_Label = P(V_Label{Terrain, Object, Texture_Label})
where P represents the projection model, Terrain represents the terrain, Object represents the ground objects, Texture represents texture information, Texture_Sim represents the simulated texture, and Texture_Label represents the label texture.
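As a minimal illustration of how a pair {S_Image, S_Label} could be assembled in practice, the Python sketch below applies one shared projection routine to the simulated scene and to its label scene for several viewpoints. The function project, its arguments and the dictionary layout are illustrative assumptions standing in for the projection model P; they are not part of the claimed method.

import numpy as np

def project(scene, viewpoint):
    # Hypothetical stand-in for the projection model P: it should render the
    # scene description {Terrain, Object, Texture} from the given viewpoint.
    # Here it only returns a blank image of the requested size so the sketch runs.
    h, w = viewpoint.get("size", (512, 512))
    return np.zeros((h, w, 3), dtype=np.uint8)

def build_virtual_knowledge(v_sim, v_label, viewpoints):
    # Form K_sim = {S_Image, S_Label}: the SAME projection is applied to the
    # simulated scene V_Sim and to its label scene V_Label for every viewpoint,
    # so each rendered image is paired with a pixel-accurate label.
    k_sim = []
    for p in viewpoints:
        s_image = project(v_sim, p)    # S_Image = P(V_Sim{Terrain, Object, Texture_Sim})
        s_label = project(v_label, p)  # S_Label = P(V_Label{Terrain, Object, Texture_Label})
        k_sim.append({"S_Image": s_image, "S_Label": s_label})
    return k_sim

# Usage with assumed placeholder scene descriptions and two viewpoints.
pairs = build_virtual_knowledge({"Terrain": None, "Object": None, "Texture": "sim"},
                                {"Terrain": None, "Object": None, "Texture": "label"},
                                [{"size": (512, 512)}, {"size": (512, 512)}])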
Further, the mask prediction m_d is expressed as:
m_d = Σ_{k=1}^{K} r_k^d ∘ s_k, d = 1, …, D
where K is the set number of bases, k is the base index, D is the number of all predicted detection boxes, d is the detection box index, s denotes a base, r denotes an attention map, and ∘ denotes the element-wise product.
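To make this notation concrete, the following sketch evaluates the sum with a single einsum over tensor shapes assumed in the style of BlendMask (K bases shared by the image, K attention maps per detection); the shapes and resolutions are assumptions, not the exact configuration of the invention.

import torch

def blend(bases, attentions):
    # Compute m_d = sum_k r_k^d * s_k (element-wise product, summed over the bases).
    # bases:      (K, H, W)    - bases s_k shared by the whole image
    # attentions: (D, K, H, W) - attention maps r_k^d, one set per detection box d
    # returns:    (D, H, W)    - mask prediction m_d for each detection
    return torch.einsum('dkhw,khw->dhw', attentions, bases)

# Example with assumed sizes: K = 4 bases, D = 3 detections, 56x56 mask resolution.
bases = torch.rand(4, 56, 56)
attentions = torch.softmax(torch.rand(3, 4, 56, 56), dim=1)
masks = blend(bases, attentions)   # shape (3, 56, 56)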
The invention has the following beneficial effects:
The invention provides a method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer, aiming at the perspective instance segmentation problem in building information extraction, where training samples are insufficient and both the target conditions and the occlusion conditions are highly uncertain. A virtual knowledge generation module is introduced to automatically acquire a large amount of training data whose occlusion conditions are labelled according to the "real" situation, whose semantics are similar to the real scene and whose observation angles are comprehensively covered, which alleviates the shortage of training samples. By combining instance segmentation with an occlusion judgment module and a feature pyramid network, the diversity of building shapes, scales and occlusion conditions is handled, and high accuracy of building perspective instance segmentation is achieved.
Drawings
FIG. 1 is an overall flow chart of the present application;
FIG. 2 is a schematic diagram of the virtual knowledge generation module;
FIG. 3 is a schematic diagram of the perspective instance segmentation network module.
Detailed Description
It should be noted that, in the case of conflict, the various embodiments disclosed in the present application may be combined with each other.
Embodiment one: specifically, referring to FIG. 1, the method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to this embodiment includes the following steps:
Step one: acquire a simulation scene V_Sim of the scene to be identified and the label V_Label of the simulation scene V_Sim; then use the simulation scene V_Sim and its label V_Label to obtain a simulation image S_Image and the corresponding ground truth S_Label; finally, use the simulation image S_Image and its corresponding ground truth S_Label to form the virtual knowledge K_sim.
Step two: according to the virtual knowledge K_sim, obtain the detection box regions Boxes of the whole buildings and of the building facades and the attention map r_k of each instance in the simulation image S_Image, and generate the bases of the whole image, which include the base s_k used by each instance; here an instance refers to a single target object. The bases of the whole image are generated by part of the BlendMask network structure: after the feature pyramid of the image has been built, the bases of the whole image are obtained. BlendMask contains an FCOS detector, which consists of three parts: a backbone network, a feature pyramid and a detection head. After the image enters the network, it first passes through the FCOS detector, i.e. through the backbone network, the feature pyramid network and the detection head in turn. The virtual knowledge generation module is shown in FIG. 2.
The image enters the backbone network for feature extraction; the output feature maps are sent to the feature pyramid network to obtain feature maps at different scales, which form the feature pyramid; the bases of the whole image are then output from the feature pyramid.
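A deliberately simplified sketch of this flow is given below: a toy backbone produces multi-scale features, lateral connections and top-down upsampling merge them in the style of a feature pyramid, and a small head maps the finest level to K bases. All module sizes and layer counts are assumptions chosen only to keep the sketch runnable; the invention itself relies on the standard FCOS/BlendMask backbone and feature pyramid.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPNBases(nn.Module):
    # Toy illustration of: backbone -> feature pyramid -> bases of the whole image.
    def __init__(self, num_bases=4, channels=64):
        super().__init__()
        # "Backbone": three conv stages with increasing stride (stand-in for ResNet).
        self.stage1 = nn.Sequential(nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        # Lateral 1x1 convs plus top-down additions form the feature pyramid.
        self.lat1 = nn.Conv2d(channels, channels, 1)
        self.lat2 = nn.Conv2d(channels, channels, 1)
        self.lat3 = nn.Conv2d(channels, channels, 1)
        # "Bottom module": predicts the K bases from the finest pyramid level.
        self.bases_head = nn.Conv2d(channels, num_bases, 3, padding=1)

    def forward(self, x):
        c1 = self.stage1(x)
        c2 = self.stage2(c1)
        c3 = self.stage3(c2)
        p3 = self.lat3(c3)
        p2 = self.lat2(c2) + F.interpolate(p3, size=c2.shape[-2:], mode='nearest')
        p1 = self.lat1(c1) + F.interpolate(p2, size=c1.shape[-2:], mode='nearest')
        return self.bases_head(p1)   # bases s_k of the whole image, shape (N, K, H/2, W/2)

bases = TinyFPNBases()(torch.rand(1, 3, 256, 256))   # -> (1, 4, 128, 128)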
Step three: each example substrate skWith an attention map r for each examplekMerging to obtain mask prediction mdThen judging whether shielding areas exist in detection frame areas Box of the whole building and the vertical face, if not, predicting m by using a maskdAs a result of the perspective example segmentation, if an occlusion region exists, the occlusion mask m of the occlusion region is predictedoccThen predict m with the maskdAnd a shielding mask moccPerforming a comprehensive mask, and taking the result of the comprehensive mask as a perspective example segmentation result;
Step four: obtain a pre-trained model pre_model from the above steps, perform transfer learning with the pre-trained model pre_model and a small number of real remote sensing image training samples to obtain the perspective instance segmentation model final_model, and then use final_model to complete the individual extraction of the buildings and their occluded areas. The perspective instance segmentation network module is shown in FIG. 3.
"A small number of real remote sensing image training samples" in this step means that a model can be trained without a large number of real remote sensing images. Normally, a large number of sample images is needed to train any network well; in reality, however, large numbers of sample images are not always available, and labelling real samples is laborious. Virtual samples are therefore used for training (which avoids manual image labelling), and the learned knowledge is then transferred to real images. This solves the problem of the lack of real samples, which is one of the problems addressed by this invention.
In other words, "a small number" means that, relative to the large number of images that would otherwise be required, the application needs only very few real images to obtain the result.
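The transfer step of step four then amounts to loading the weights pre-trained on virtual knowledge and fine-tuning them on the small real set. The sketch below shows that pattern for a generic PyTorch model with an assumed data loader and loss interface; it is not the exact training configuration of the invention.

import torch

def transfer_learn(model, pretrained_path, real_loader, epochs=10, lr=1e-4):
    # Fine-tune a model pre-trained on virtual knowledge (pre_model) with a
    # small set of real remote sensing samples to obtain final_model.
    model.load_state_dict(torch.load(pretrained_path))   # weights from virtual-sample training
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in real_loader:               # few labelled real images
            loss = model(images, targets)                  # ASSUMPTION: model returns a scalar loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model                                           # final_model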
A virtual knowledge generation module is introduced to automatically acquire a large amount of training data whose occlusion conditions are labelled according to the "real" situation, whose semantics are similar to the real scene and whose observation angles are comprehensively covered, thereby alleviating the shortage of training samples. The method combines the single-stage instance segmentation method BlendMask with an occlusion judgment module and a feature pyramid network to handle the diversity of building shapes, scales and occlusion conditions; BlendMask performs well in inference speed and in detecting small objects and objects split apart by occlusion. Finally, the pre-trained model obtained from virtual-sample training is combined with real samples for transfer learning to obtain a training model oriented to real samples.
In addition, to process the roof (top surface) and the facade of a building separately: the facade is strongly affected by the observation angle and is not suitable for segmentation as an independent instance, so the roof and the whole building are treated as two instance classes, and the roof and facade pixels of each building can then be obtained from the subordination relation between the two.
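For this roof/facade split, a minimal sketch is to subtract each roof instance mask from the whole-building instance mask it belongs to; the pairing of roof and whole-building instances (the subordination relation) is assumed to be given, for example by IoU matching.

import numpy as np

def facade_pixels(whole_mask, roof_mask):
    # Facade pixels of one building = pixels of the whole-building instance mask
    # that are not covered by its matched roof instance mask (boolean arrays).
    return np.logical_and(whole_mask, np.logical_not(roof_mask))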
Embodiment two: this embodiment is a further description of embodiment one; the difference from embodiment one is that the simulation scene V_Sim is obtained by mapping the terrain, ground objects and texture information of the scene to be identified.
Embodiment three: this embodiment is a further description of embodiment one; the difference from embodiment one is that the simulation image S_Image and the corresponding ground truth S_Label are obtained by processing the simulation scene V_Sim and its label V_Label with the same projection model P.
Embodiment four: this embodiment is a further description of embodiment one; in step two, the detection box regions Boxes of the whole buildings and of the facades, and the attention map r_k of each instance in the image, are obtained by an FCOS-based object detector.
Embodiment five: this embodiment is a further description of embodiment one; the difference from embodiment one is that the generation of the bases of the whole image in step two is realized by a BlendMask network.
Embodiment six: this embodiment is a further description of embodiment one; the difference from embodiment one is that in step three the merging of the base s_k and the attention map r_k of each instance is achieved by the Blender strategy of BlendMask.
Embodiment seven: this embodiment is a further description of embodiment one; the difference from embodiment one is that in step three whether an occlusion region exists in the detection box regions Boxes of the whole building and of the facade is judged by an occlusion judgment network.
Embodiment eight: this embodiment is a further description of embodiment one; the difference from embodiment one is that in step three the occlusion mask m_occ of the occluded region is predicted by the occlusion judgment network.
Embodiment nine: this embodiment is a further description of embodiment one; the difference from embodiment one is that the virtual knowledge K_sim is expressed as:
K_sim = {S_Image, S_Label}
S_Image = P(V_Sim{Terrain, Object, Texture_Sim})
S_Label = P(V_Label{Terrain, Object, Texture_Label})
where P represents the projection model, Terrain represents the terrain, Object represents the ground objects, Texture represents texture information, Texture_Sim represents the simulated texture, and Texture_Label represents the label texture.
Embodiment ten: this embodiment is a further description of embodiment one; the difference from embodiment one is that the mask prediction m_d is expressed as:
m_d = Σ_{k=1}^{K} r_k^d ∘ s_k, d = 1, …, D
where K is the set number of bases, k is the base index, D is the number of all predicted detection boxes, d is the detection box index, s denotes a base, r denotes an attention map, and ∘ denotes the element-wise product.
It should be noted that the detailed description is only intended to illustrate and explain the technical solution of the invention and does not limit the scope of protection of the claims. All modifications and variations falling within the claims and the description are intended to be included within the scope of the invention.

Claims (9)

1. A method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer, characterized by comprising the following steps:
Step one: acquiring a simulation scene V_Sim of the scene to be identified and the label V_Label of the simulation scene V_Sim; then using the simulation scene V_Sim and its label V_Label to obtain a simulation image S_Image and the corresponding ground truth S_Label; and finally using the simulation image S_Image and its corresponding ground truth S_Label to form virtual knowledge K_sim;
Step two: according to the virtual knowledge K_sim, obtaining the detection box regions Boxes of the whole buildings and of the building facades and the attention map r_k of each instance in the simulation image S_Image, and generating the bases of the whole image, which include the base s_k used by each instance, wherein the generation of the bases of the whole image in step two is realized by a BlendMask network;
Step three: merging the base s_k with the attention map r_k of each instance to obtain a mask prediction m_d, and then judging whether an occlusion region exists in the detection box regions Boxes of the whole building and of the facade; if no occlusion region exists, taking the mask prediction m_d as the perspective instance segmentation result; if an occlusion region exists, predicting the occlusion mask m_occ of the occluded region, combining the mask prediction m_d and the occlusion mask m_occ into a comprehensive mask, and taking the comprehensive mask as the perspective instance segmentation result;
Step four: obtaining a pre-trained model pre_model from the above steps, performing transfer learning with the pre-trained model pre_model and a small number of real remote sensing image training samples to obtain a perspective instance segmentation model final_model, and then using the perspective instance segmentation model final_model to complete the individual extraction of the buildings and their occluded areas.
2. The method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to claim 1, wherein the simulation scene V_Sim is obtained by mapping the terrain, ground objects and texture information of the scene to be identified.
3. The method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to claim 1, wherein the simulation image S_Image and the corresponding ground truth S_Label are obtained by processing the simulation scene V_Sim and its label V_Label with the same projection model P.
4. The method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to claim 1, wherein in step two the detection box regions Boxes of the whole buildings and of the facades, and the attention map r_k of each instance in the image, are obtained by an FCOS-based object detector.
5. The method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to claim 1, wherein in step three the merging of the base s_k and the attention map r_k of each instance is achieved by the Blender strategy of BlendMask.
6. The method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to claim 1, wherein in step three whether an occlusion region exists in the detection box regions Boxes of the whole building and of the facade is judged by an occlusion judgment network.
7. The method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to claim 1, wherein in step three the occlusion mask m_occ of the occluded region is predicted by the occlusion judgment network.
8. The method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to claim 1, wherein the virtual knowledge K_sim is expressed as:
K_sim = {S_Image, S_Label}
S_Image = P(V_Sim{Terrain, Object, Texture_Sim})
S_Label = P(V_Label{Terrain, Object, Texture_Label})
where P represents the projection model, Terrain represents the terrain, Object represents the ground objects, Texture represents texture information, Texture_Sim represents the simulated texture, and Texture_Label represents the label texture.
9. The method for the individual extraction of buildings and their occluded areas based on virtual knowledge transfer according to claim 1, wherein the mask prediction m_d is expressed as:
m_d = Σ_{k=1}^{K} r_k^d ∘ s_k, d = 1, …, D
where K is the set number of bases, k is the base index, D is the number of all predicted detection boxes, d is the detection box index, s denotes a base, r denotes an attention map, and ∘ denotes the element-wise product.
CN202110707259.XA 2021-06-24 2021-06-24 Building based on virtual knowledge migration and shielding area monomer extraction method thereof Active CN113487738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110707259.XA CN113487738B (en) 2021-06-24 2021-06-24 Building based on virtual knowledge migration and shielding area monomer extraction method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110707259.XA CN113487738B (en) 2021-06-24 2021-06-24 Building based on virtual knowledge migration and shielding area monomer extraction method thereof

Publications (2)

Publication Number Publication Date
CN113487738A CN113487738A (en) 2021-10-08
CN113487738B true CN113487738B (en) 2022-07-05

Family

ID=77936209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110707259.XA Active CN113487738B (en) 2021-06-24 2021-06-24 Building based on virtual knowledge migration and shielding area monomer extraction method thereof

Country Status (1)

Country Link
CN (1) CN113487738B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10067509B1 (en) * 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
CN109409286A (en) * 2018-10-25 2019-03-01 哈尔滨工程大学 Ship target detection method based on the enhancing training of pseudo- sample
CN110009657A (en) * 2019-04-01 2019-07-12 南京信息工程大学 A kind of video object dividing method based on the modulation of pyramid network
CN110472089A (en) * 2019-08-16 2019-11-19 重庆邮电大学 A kind of infrared and visible images search method generating network based on confrontation
CN112163634A (en) * 2020-10-14 2021-01-01 平安科技(深圳)有限公司 Example segmentation model sample screening method and device, computer equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9390329B2 (en) * 2014-04-25 2016-07-12 Xerox Corporation Method and system for automatically locating static occlusions
CN107909065B (en) * 2017-12-29 2020-06-16 百度在线网络技术(北京)有限公司 Method and device for detecting face occlusion
CN110163048B (en) * 2018-07-10 2023-06-02 腾讯科技(深圳)有限公司 Hand key point recognition model training method, hand key point recognition method and hand key point recognition equipment
US11016492B2 (en) * 2019-02-28 2021-05-25 Zoox, Inc. Determining occupancy of occluded regions
US11544900B2 (en) * 2019-07-25 2023-01-03 General Electric Company Primitive-based 3D building modeling, sensor simulation, and estimation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10067509B1 (en) * 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
CN109409286A (en) * 2018-10-25 2019-03-01 哈尔滨工程大学 Ship target detection method based on the enhancing training of pseudo- sample
CN110009657A (en) * 2019-04-01 2019-07-12 南京信息工程大学 A kind of video object dividing method based on the modulation of pyramid network
CN110472089A (en) * 2019-08-16 2019-11-19 重庆邮电大学 A kind of infrared and visible images search method generating network based on confrontation
CN112163634A (en) * 2020-10-14 2021-01-01 平安科技(深圳)有限公司 Example segmentation model sample screening method and device, computer equipment and medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"An Instance Segmentation-Based Method to Obtain the Leaf Age and Plant Centre of Weeds in Complex Field Environments";Longzhe Quan等;《sensors》;20210513;第21卷(第10期);第1-22页 *
"Shadow Detection and Removal for Occluded Object Information Recovery in Urban High-Resolution Panchromatic Satellite Images";Nan Su等;《IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING》;20160610;第9卷(第6期);第1-15页 *
基于Mask R-CNN的行人分割;胡剑秋等;《指挥控制与仿真》;20200410;第42卷(第05期);第42-46页 *

Also Published As

Publication number Publication date
CN113487738A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN111209810B (en) Boundary frame segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time through visible light and infrared images
Song et al. Automated pavement crack damage detection using deep multiscale convolutional features
CN110622213B (en) System and method for depth localization and segmentation using 3D semantic maps
CN111476781B (en) Concrete crack identification method and device based on video semantic segmentation technology
CN112561966B (en) Sparse point cloud multi-target tracking method fusing spatio-temporal information
CN110188705A (en) A kind of remote road traffic sign detection recognition methods suitable for onboard system
CN110852182B (en) Depth video human body behavior recognition method based on three-dimensional space time sequence modeling
CN111985367A (en) Pedestrian re-recognition feature extraction method based on multi-scale feature fusion
CN106203277A (en) Fixed lens real-time monitor video feature extracting method based on SIFT feature cluster
CN103617413B (en) Method for identifying object in image
US20220315243A1 (en) Method for identification and recognition of aircraft take-off and landing runway based on pspnet network
CN103426179A (en) Target tracking method and system based on mean shift multi-feature fusion
CN114758362B (en) Clothing changing pedestrian re-identification method based on semantic perception attention and visual shielding
CN115115672B (en) Dynamic vision SLAM method based on target detection and feature point speed constraint
CN112507845A (en) Pedestrian multi-target tracking method based on CenterNet and depth correlation matrix
CN114966696A (en) Transformer-based cross-modal fusion target detection method
Zhao et al. YOLO-highway: An improved highway center marking detection model for unmanned aerial vehicle autonomous flight
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
CN110751077B (en) Optical remote sensing picture ship detection method based on component matching and distance constraint
CN113487738B (en) Building based on virtual knowledge migration and shielding area monomer extraction method thereof
CN103426178B (en) Target tracking method and system based on mean shift in complex scene
Su et al. Which CAM is better for extracting geographic objects? A perspective from principles and experiments
Lei et al. Ship detection based on deep learning under complex lighting
Wang et al. The building area recognition in image based on faster-RCNN
Wang et al. Measuring driving behaviors from live video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant