CN110135354B - Change detection method based on live-action three-dimensional model - Google Patents

Change detection method based on live-action three-dimensional model

Info

Publication number
CN110135354B
CN110135354B (application CN201910412076.8A)
Authority
CN
China
Prior art keywords
change
model
image
live
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910412076.8A
Other languages
Chinese (zh)
Other versions
CN110135354A (en)
Inventor
黄先锋
张帆
石芸
赵峻弘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhai Dashi Intelligence Technology Co ltd
Wuhan University WHU
Original Assignee
Wuhai Dashi Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhai Dashi Intelligence Technology Co ltd filed Critical Wuhai Dashi Intelligence Technology Co ltd
Priority to CN201910412076.8A priority Critical patent/CN110135354B/en
Publication of CN110135354A publication Critical patent/CN110135354A/en
Application granted granted Critical
Publication of CN110135354B publication Critical patent/CN110135354B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images

Abstract

The invention discloses a change detection method based on a live-action three-dimensional model, which specifically comprises the following steps: S1, calculating the overlapping area; S2, resampling the model; S3, segmenting the DOM and DSM and generating a patch object set; S4, judging whether each patch is a change area; S5, collecting sample data of different ground-feature types; S6, training the samples; S7, generating a classifier with a deep-learning method; and S8, predicting the types of the changed ground features. The method uses both the color and the geometric information of the live-action three-dimensional model: it segments the data with an object-oriented method and performs change detection with the object as the basic unit, and once a change area is determined, it identifies the change type of the ground feature with a deep-learning method. This greatly improves the change-detection precision and the accuracy of change-type identification, while the combined use of color and geometric information improves detection precision and enriches the classification basis.

Description

Change detection method based on live-action three-dimensional model
Technical Field
The invention relates to the technical field of three-dimensional digitization, in particular to a change detection method based on a live-action three-dimensional model.
Background
Change detection is one of the key technologies in fields such as land-cover monitoring, land-use monitoring, disaster assessment, disaster prediction and geographic information data updating, and has long attracted attention. Change detection comprises change-area detection and change-type identification. The traditional change-detection process consists of generating a difference map and classifying the changed areas; difference maps are obtained by methods such as image differencing and image ratioing. These pixel-based methods are only applicable to large-scale satellite images or low-resolution aerial images; on the increasingly common high-resolution images they easily produce a large number of fragments, generating excessive pseudo-change regions and hindering later data processing. Traditional classification methods are divided into supervised and unsupervised classification; however, they are image-based and use only the color information of the images, so the classification basis is too narrow and the classification accuracy is not high.
With the rapid development of unmanned aerial vehicle (UAV) technology, UAV images are increasingly used for acquiring geographic information source data owing to their low acquisition cost, high efficiency and high resolution. Live-action three-dimensional model data generated from UAV images have also become an important kind of geographic information data; such data carry both color and geometric information and can be applied to change detection. Moreover, given the high resolution of UAV images, applying an object-oriented method that performs change detection with the segmented object as the basic unit can greatly improve the change-detection precision.
Deep learning is a new field of machine-learning research. Its motivation lies in building and simulating neural networks that analyze and learn like the human brain, imitating the brain's mechanisms for interpreting data such as images, sounds and text. Deep learning aims to learn better features by constructing machine-learning models with many hidden layers and massive training data, thereby improving classification accuracy. Tracing its roots, the concept of deep learning derives from research on artificial neural networks: low-level features are combined into more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of the data. The convolutional neural network (CNN), a deep-learning network with a convolutional structure designed for image classification and recognition, can automatically extract spatial features from images; it takes the pixels to be classified together with their neighborhood pixels as input and converts them into features that a machine-learning task can exploit effectively. In recent years the use of neural networks for image classification has matured and its application fields have expanded; compared with traditional classification methods, deep learning has a strong ability to learn the essential characteristics of a data set from a small number of samples and can greatly improve the accuracy of identifying the types of changed ground features.
In summary, the invention provides a change detection method based on a live-action three-dimensional model, which utilizes live-action three-dimensional model data, detects a change area by an object-oriented method, and identifies a change type by a deep learning method, thereby greatly improving the change detection precision and the change type identification accuracy.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a change detection method based on a live-action three-dimensional model, which solves the problem that the traditional change detection method based on images only utilizes the color information of the images and is difficult to achieve satisfactory effect on change detection precision and change type identification accuracy.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme: a change detection method based on a live-action three-dimensional model specifically comprises the following steps:
S1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional models;
S2, performing texture resampling and elevation resampling on the three-dimensional models to generate a digital orthophoto map (DOM) and a digital surface model (DSM);
S3, performing image processing on the DOM and DSM from step S2, then performing object-oriented segmentation to generate a patch object set;
S4, judging whether each patch is a change area according to the elevation change within the patch object;
S5, collecting sample data of the different ground-feature types;
S6, training the samples;
S7, generating a classifier;
and S8, inputting the color and elevation information of the patches to obtain the types of the changed ground features.
Preferably, the calculation of the overlapping area in step S1 specifically comprises: first reading in the live-action three-dimensional model data from before and after the change and computing the boundary range of each, to obtain the overlapping area; then setting the length and width of a block, subdividing the overlapping area into blocks, and carrying out the subsequent processing block by block.
Preferably, the model resampling in step S2 is divided into texture resampling and elevation resampling. A horizontal sampling grid is generated according to the boundary range of the block, with the grid spacings Δx and Δy set according to the resolution of the model. Given the initial grid coordinate (x0, y0), the horizontal coordinates of grid point (i, j) are: x = x0 + i*Δx, y = y0 + j*Δy.
Preferably, the digital orthophoto map DOM is generated by taking, at the horizontal coordinates of each grid point, the texture color value of the model point at the corresponding position as the z value; the digital surface model DSM is then generated by taking the elevation value of the model point at the corresponding position as the z value.
Preferably, in step S3 the DOM and DSM are segmented to generate the patch object set. Using the generated digital orthophoto map, the image is segmented with an efficient graph-based segmentation method into a number of regions with distinctive properties; details in low-variation regions are preserved while details in high-variation regions are ignored, which reduces the generation of fine fragments and yields a good segmentation result. Using the digital surface model DSM, the elevation values of the grid points are stretched to the range 0-255 to generate a gray image, which is divided into a number of non-overlapping regions by threshold segmentation. The two segmentation results are merged to obtain the final patch object set.
Preferably, in step S4 whether a patch is a change area is judged as follows: for each patch object, the mean of the elevation differences within the patch is computed and a threshold is set; if the mean elevation difference is above the threshold the patch is considered a candidate change area, otherwise it is considered unchanged. This yields the initial change areas.
Preferably, the classifier in step S7 is generated by a deep-learning method. A deep-learning network is formed by gathering many neural units together into a hierarchical structure; the simplest network consists of an input layer, an output layer and a hidden layer, each layer containing several neurons. The neurons of each layer are connected to the neurons of the next layer, the output of one layer serving as the input of the next; such a network is also called a fully connected network.
Preferably, the prediction of the changed ground-feature type in step S8 consists of inputting a ground-feature patch into the classifier, which computes and outputs the probability of each class; the class with the highest probability is the class of the changed ground feature.
(III) advantageous effects
The invention provides a change detection method based on a live-action three-dimensional model. Compared with the prior art, it has the following beneficial effects. The method comprises: S1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional models; S2, performing texture resampling and elevation resampling on the three-dimensional models to generate a digital orthophoto map DOM and a digital surface model DSM; S3, performing image processing on the DOM and DSM, then object-oriented segmentation, to generate a patch object set; S4, judging whether each patch is a change area according to the elevation change within the patch object; S5, collecting sample data of the different ground-feature types; S6, training the samples; S7, generating a classifier; and S8, inputting the color and elevation information of the patches to obtain the types of the changed ground features. Using both the color and the geometric information of the live-action three-dimensional model, the method segments the data with an object-oriented approach and performs change detection with the object as the basic unit; once a change area is determined, the change type of the ground feature is identified with a deep-learning method. This greatly improves the change-detection precision and the accuracy of change-type identification, provides favorable conditions for further rejecting pseudo-change areas, improves detection precision through the combined use of color and geometric information, and enriches the classification basis.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a technical solution: a change detection method based on a live-action three-dimensional model specifically comprises the following steps:
S1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional models;
S2, performing texture resampling and elevation resampling on the three-dimensional models to generate a digital orthophoto map (DOM) and a digital surface model (DSM);
S3, performing image processing on the DOM and DSM from step S2, then performing object-oriented segmentation to generate a patch object set;
S4, judging whether each patch is a change area according to the elevation change within the patch object;
S5, collecting sample data of the different ground-feature types;
S6, training the samples;
S7, generating a classifier;
and S8, inputting the color and elevation information of the patches to obtain the types of the changed ground features.
In the present invention, the calculation of the overlapping area in step S1 specifically comprises: first reading in the live-action three-dimensional model data from before and after the change and computing the boundary range of each, to obtain the overlapping area; then setting the length and width of a block, subdividing the overlapping area into blocks, and carrying out the subsequent processing block by block.
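The overlap computation and blocking described above can be sketched as follows. This is an illustrative reading of the step, not the patented implementation; the function names and the (xmin, ymin, xmax, ymax) extent convention are assumptions.

```python
def bbox_intersection(b1, b2):
    """Intersect two (xmin, ymin, xmax, ymax) model extents; None if disjoint."""
    xmin, ymin = max(b1[0], b2[0]), max(b1[1], b2[1])
    xmax, ymax = min(b1[2], b2[2]), min(b1[3], b2[3])
    if xmin >= xmax or ymin >= ymax:
        return None
    return (xmin, ymin, xmax, ymax)

def tile_region(bbox, block_w, block_h):
    """Subdivide the overlapping region into blocks of the given length and width."""
    xmin, ymin, xmax, ymax = bbox
    blocks = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            blocks.append((x, y, min(x + block_w, xmax), min(y + block_h, ymax)))
            x += block_w
        y += block_h
    return blocks
```

Each block returned by `tile_region` would then be processed independently in the subsequent steps.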
In the invention, the model resampling in step S2 is divided into texture resampling and elevation resampling. A horizontal sampling grid is generated according to the boundary range of the block, with the grid spacings Δx and Δy set according to the resolution of the model. Given the initial grid coordinate (x0, y0), the horizontal coordinates of grid point (i, j) are: x = x0 + i*Δx, y = y0 + j*Δy.
According to the horizontal coordinates of the grid points, the texture color values of the model points at the corresponding positions on the model are taken as z values to generate the digital orthophoto map DOM; then, again according to the horizontal coordinates of the grid points, the elevation values of the model points at the corresponding positions are taken as z values to generate the digital surface model DSM.
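Both rasters follow the same grid formula x = x0 + i*Δx, y = y0 + j*Δy; only the sampled value differs (color for the DOM, elevation for the DSM). A minimal sketch, where the sampler callback stands in for the model lookup the patent describes and all names are assumptions:

```python
def grid_coordinates(x0, y0, dx, dy, ncols, nrows):
    """Horizontal coordinates x = x0 + i*dx, y = y0 + j*dy for every grid point."""
    return [[(x0 + i * dx, y0 + j * dy) for i in range(ncols)] for j in range(nrows)]

def resample(grid, sample_fn):
    """Build a raster by evaluating a per-point sampler at each grid coordinate.
    sample_fn(x, y) would return a texture color (DOM) or an elevation (DSM)."""
    return [[sample_fn(x, y) for (x, y) in row] for row in grid]
```

For example, `resample(grid, dom_color_at)` and `resample(grid, dsm_height_at)` would produce the DOM and DSM from the same grid.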
In the invention, the DOM and DSM are segmented in step S3 to generate the patch object set. Using the generated digital orthophoto map, the image is segmented with an efficient graph-based segmentation method into a number of regions with distinctive properties; details in low-variation regions are preserved while details in high-variation regions are ignored, which reduces the generation of fine fragments and yields a good segmentation result. Using the digital surface model DSM, the elevation values of the grid points are stretched to the range 0-255 to generate a gray image, which is divided into a number of non-overlapping regions by threshold segmentation. The two segmentation results are merged to obtain the final patch object set.
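The DSM half of this step, stretching elevations to 0-255 and thresholding the resulting gray image, can be sketched as below. This is a simplified illustration (binary threshold only); the DOM side would use a graph-based segmenter as the text describes, and the function names are assumptions.

```python
def stretch_to_gray(dsm):
    """Linearly stretch elevation values to the 0-255 range for a gray image."""
    flat = [z for row in dsm for z in row]
    zmin, zmax = min(flat), max(flat)
    span = (zmax - zmin) or 1.0  # guard against a perfectly flat DSM
    return [[int(round(255 * (z - zmin) / span)) for z in row] for row in dsm]

def threshold_segment(gray, t):
    """Split the gray image into non-overlapping label regions by threshold."""
    return [[1 if v >= t else 0 for v in row] for row in gray]
```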
In the present invention, in step S4 whether a patch is a change area is judged as follows: for each patch object, the mean of the elevation differences within the patch is computed and a threshold is set; if the mean elevation difference is above the threshold the patch is considered a candidate change area, otherwise it is considered unchanged. This yields the initial change areas.
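The candidate-change test reduces to a mean-of-differences threshold. A minimal sketch, under the assumption that the per-patch elevation differences (DSM after minus DSM before) have already been collected:

```python
def is_candidate_change(dz_values, threshold):
    """A patch is a candidate change area when the mean absolute elevation
    difference inside it exceeds the threshold; otherwise it is unchanged."""
    mean_dz = sum(abs(d) for d in dz_values) / len(dz_values)
    return mean_dz > threshold
```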
The method for eliminating pseudo-change areas is as follows: 1) unimportant change areas such as vegetation are removed using a vegetation index: the vegetation index EGI = 2G − R − B, or nEGI = (2G − R − B)/(2G + R + B), of a patch object is calculated from the RGB values of the image, and if the vegetation index before and after the change exceeds a threshold the patch is regarded as a pseudo-change area; 2) small isolated regions can be regarded as pseudo changes, since genuinely changed areas are generally larger; 3) patches with irregular geometric characteristics, such as long, narrow or highly concave shapes, are regarded as pseudo-change areas. If a patch is confirmed as a change area, step S8 is executed; otherwise the process ends.
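The vegetation-index test in rule 1) follows directly from the formulas EGI = 2G − R − B and nEGI = (2G − R − B)/(2G + R + B); a small sketch (the function names are assumptions):

```python
def egi(r, g, b):
    """Excess green index: EGI = 2G - R - B."""
    return 2 * g - r - b

def negi(r, g, b):
    """Normalized excess green index: nEGI = (2G - R - B) / (2G + R + B)."""
    return (2 * g - r - b) / (2 * g + r + b)
```

A patch whose index exceeds the chosen threshold both before and after the change would be flagged as vegetation and rejected as a pseudo change.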
In the present invention, the classifier in step S7 is generated by a deep-learning method. A deep-learning network is formed by gathering many neural units together into a hierarchical structure; the simplest network consists of an input layer, an output layer and a hidden layer, each layer containing several neurons. The neurons of each layer are connected to the neurons of the next layer, the output of one layer serving as the input of the next; such a network is also called a fully connected network. Deep learning generally uses a multi-layer neural network composed of three parts: 1) the input layer, responsible for data acquisition; 2) the hidden part, invisible from outside, which extracts features through a combination of n convolution layers and pooling layers; 3) the output layer, composed of a fully connected multi-layer perceptron classifier.
The last layer of the classification model is usually a Softmax regression model, which works by accumulating the feature evidence for an input belonging to a certain class and then converting it into the probability of that class. The features are described as:
features_i = Σ_j W_i,j · x_j + b_i
where i denotes class i, j denotes pixel j of the image, b_i is the bias (representing the intrinsic tendency of the data), W_i,j is a weight parameter, and x is the input image data.
Next, softmax is computed over all the features:
softmax(x)=normalize(exp(x))
the probability of the i-th class being determined can be obtained by the following equation.
Figure BDA0002063117950000071
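The linear feature scores and the softmax normalization above can be written out in a few lines. This is an illustrative sketch, not the patented implementation; the function names are assumptions, and the max is subtracted before exponentiating purely for numerical stability (it does not change the result).

```python
import math

def features(W, x, b):
    """features_i = sum_j W[i][j] * x[j] + b[i]: the linear score of each class."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def softmax(scores):
    """softmax(x) = normalize(exp(x)): convert scores to class probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```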
To train the model, a loss function must be defined that describes the model's classification precision on the problem; the smaller the loss, the smaller the deviation of the model's classification results from the true values, i.e. the more accurate the model. For multi-class problems, cross-entropy is usually used as the loss function. It is defined below, where y is the predicted probability distribution and y' is the true probability distribution (i.e. the one-hot encoding of the label); it judges how accurately the model estimates the true distribution:
H_y'(y) = −Σ_i y'_i · log(y_i)
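The cross-entropy definition translates directly; a sketch (the `eps` guard against log(0) is an implementation assumption, not part of the formula):

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    """H_{y'}(y) = -sum_i y'_i * log(y_i), with y' the one-hot label distribution."""
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))
```

A perfect prediction gives a loss near zero, while spreading probability away from the true class increases it.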
Back-propagation is the application of stochastic gradient descent (SGD) to neural networks. A common SGD optimization algorithm is used to minimize the loss function: at each iteration the gradient-descent method takes the negative gradient direction as the new search direction, so that the objective function decreases step by step, and the most appropriate weight parameters of the perceptron are solved from the known input values (images) and true output values (prediction probabilities).
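A single gradient-descent update, stepping each parameter along the negative gradient, looks like this in isolation. This is a sketch of the update rule only; in real training the gradients would be obtained by back-propagating the cross-entropy loss through the network.

```python
def sgd_step(params, grads, lr):
    """One stochastic-gradient-descent update: move each parameter a small
    step (learning rate lr) against its gradient to reduce the loss."""
    return [p - lr * g for p, g in zip(params, grads)]
```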
Deep learning is thus applied to change detection in three main steps: 1) sample collection: for ground-feature types such as roads, buildings, bare land and vegetation, a number of ground-feature patches with different resolutions, viewing angles and degrees of illumination are selected on the map and placed in a sample library; 2) sample training: a classification formula and a loss function are defined, then an optimization algorithm; iterative training follows, with each iteration updating the parameters to reduce the loss, until globally optimal parameters are reached. To accomplish the task well, the method adopts two different network structures, the Google Inception Net V3 structure and the SegNet structure, to recognize and segment the images; 3) classifier generation: the model parameters output by training are saved so that they can be loaded at prediction time.
In the invention, the prediction of the changed ground-feature type in step S8 consists of inputting the ground-feature patch into the classifier, which computes and outputs the probability of each class; the class with the highest probability is the class of the changed ground feature.
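The final decision is an argmax over the classifier's output probabilities; a minimal sketch (the class names are illustrative assumptions, not the patent's label set):

```python
def predict_class(probabilities, class_names):
    """The class with the highest output probability is the predicted
    type of the changed ground feature."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return class_names[best]
```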
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (3)

1. A change detection method based on a live-action three-dimensional model, characterized by specifically comprising the following steps:
S1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional models;
S2, performing texture resampling and elevation resampling on the three-dimensional models to generate a digital orthophoto map (DOM) and a digital surface model (DSM);
S3, performing image processing on the DOM and DSM from step S2, then performing object-oriented segmentation to generate a patch object set;
S4, judging whether each patch is a change area according to the elevation change within the patch object;
S5, collecting sample data of the different ground-feature types;
S6, training the samples;
S7, generating a classifier;
S8, inputting the color and elevation information of the patches to obtain the types of the changed ground features;
the step S1 of calculating the overlap area specifically includes: firstly, reading in live-action three-dimensional model data before and after change, calculating the boundary range of areas before and after the change to obtain an overlapped area before and after the change, then setting the length and width of a block, dividing the overlapped area into blocks again, and carrying out subsequent processing on each block;
the model resampling in step S2 is divided into texture resampling and elevation resampling, a horizontal sampling grid is generated according to the boundary range of the block, the size Δ x and Δ y of the grid are set according to the resolution of the model, and the initial coordinate (x) of the grid is known0,y0) The horizontal coordinates of grid point (i, j) may be found as: x ═ x0+i*Δx,y=y0+ j Δ y, according to the horizontal coordinates of the grid points, taking the texture color values of the model points on the corresponding positions on the model as z values, so as to generate a digital positive shot image DOM, and then according to the horizontal coordinates of the grid points, taking the elevation values of the model points on the corresponding positions on the model as z values, so as to generate a digital ground model DSM;
the DOM and DSM are segmented in the step S3 to generate a patch object set, the generated digital positive-shot image is utilized, an effective segmentation method based on a graph is adopted to segment the image, the image is segmented into a plurality of specific areas with unique properties, the details of low-variation areas can be kept, and the details of high-variation areas can be ignored, so that the generation of fine fragments is reduced, a good segmentation effect is obtained, a digital ground model DSM is utilized to stretch the elevation value of the grid point to a range of 0-255, a gray level image is generated, the image is segmented into a plurality of non-overlapping areas by a threshold segmentation method, and the two segmentation results are combined to obtain the final patch object set;
in step S4, it is determined whether the change region is a region of each patch object, and the average of the height differences in the patches is counted, and a threshold is set, and if the average of the height differences is higher than the threshold, the change region is considered as a candidate change region, otherwise, the change region is considered as a no-change region, and an initial change region is generated.
2. The change detection method based on the live-action three-dimensional model according to claim 1, characterized in that: the classifier in step S7 is generated by a deep-learning method; a deep-learning network is formed by gathering many neural units together into a hierarchical structure; the simplest network consists of an input layer, an output layer and a hidden layer, each layer containing several neurons; the neurons of each layer are connected to the neurons of the next layer, the output of one layer serving as the input of the next; such a network is also called a fully connected network.
3. The change detection method based on the live-action three-dimensional model according to claim 1, characterized in that: the prediction of the changed ground-feature type in step S8 consists of inputting the ground-feature patch into the classifier, which computes and outputs the probability of each class; the class with the highest probability is the class of the changed ground feature.
CN201910412076.8A 2019-05-17 2019-05-17 Change detection method based on live-action three-dimensional model Active CN110135354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910412076.8A CN110135354B (en) 2019-05-17 2019-05-17 Change detection method based on live-action three-dimensional model


Publications (2)

Publication Number Publication Date
CN110135354A CN110135354A (en) 2019-08-16
CN110135354B true CN110135354B (en) 2022-03-29

Family

ID=67574999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910412076.8A Active CN110135354B (en) 2019-05-17 2019-05-17 Change detection method based on live-action three-dimensional model

Country Status (1)

Country Link
CN (1) CN110135354B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113064954B (en) * 2020-01-02 2024-03-26 沈阳美行科技股份有限公司 Map data processing method, device, equipment and storage medium
CN111259955B (en) * 2020-01-15 2023-12-08 国家测绘产品质量检验测试中心 Reliable quality inspection method and system for geographical national condition monitoring result
CN113515971A (en) * 2020-04-09 2021-10-19 阿里巴巴集团控股有限公司 Data processing method and system, network system and training method and device thereof
CN111723643B (en) * 2020-04-12 2024-03-01 四川川测研地科技有限公司 Target detection method based on fixed-area periodic image acquisition
CN112149920A (en) * 2020-10-17 2020-12-29 河北省地质环境监测院 Regional geological disaster trend prediction method
CN113515798B (en) * 2021-07-05 2022-08-12 中山大学 Urban three-dimensional space expansion simulation method and device
CN115482466B (en) * 2022-09-28 2023-04-28 广西壮族自治区自然资源遥感院 Three-dimensional model vegetation area lightweight processing method based on deep learning
CN115861826B (en) * 2023-02-27 2023-05-12 武汉天际航信息科技股份有限公司 Configuration method, computing device and storage medium for model-oriented overlapping area

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10058596A1 (en) * 2000-11-25 2002-06-06 Aventis Pharma Gmbh Method of screening chemical compounds for modulating the interaction of an EVH1 domain or a protein with an EVH1 domain with an EVH1 binding domain or a protein with an EVH1 binding domain, and a method for detecting said interaction
CN103839286A (en) * 2014-03-17 2014-06-04 武汉大学 True-orthophoto optimization sampling method of object semantic constraint
CN104049245A (en) * 2014-06-13 2014-09-17 中原智慧城市设计研究院有限公司 Urban building change detection method based on LiDAR point cloud spatial difference analysis
CN105893972A (en) * 2016-04-08 2016-08-24 深圳市智绘科技有限公司 Automatic illegal building monitoring method based on image and realization system thereof
CN107844802A (en) * 2017-10-19 2018-03-27 中国电建集团成都勘测设计研究院有限公司 Water and soil conservation value method based on unmanned plane low-altitude remote sensing and object oriented classification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Urban Land Use Classification from UAV Remote Sensing Images Fused with a Digital Surface Model"; Song Xiaoyang et al.; Journal of Geo-information Science; May 2018; Vol. 20, No. 5; pp. 703-711 *

Also Published As

Publication number Publication date
CN110135354A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110135354B (en) Change detection method based on live-action three-dimensional model
CN110136154B (en) Remote sensing image semantic segmentation method based on full convolution network and morphological processing
EP3614308B1 (en) Joint deep learning for land cover and land use classification
US10984532B2 (en) Joint deep learning for land cover and land use classification
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN104915636B (en) Remote sensing image road recognition methods based on multistage frame significant characteristics
CN107067405B (en) Remote sensing image segmentation method based on scale optimization
CN113033520B (en) Tree nematode disease wood identification method and system based on deep learning
CN110060273B (en) Remote sensing image landslide mapping method based on deep neural network
CN110853026A (en) Remote sensing image change detection method integrating deep learning and region segmentation
Tan et al. Vehicle detection in high resolution satellite remote sensing images based on deep learning
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN113449594A (en) Multilayer network combined remote sensing image ground semantic segmentation and area calculation method
CN114596500A (en) Remote sensing image semantic segmentation method based on channel-space attention and DeeplabV3plus
CN111738164B (en) Pedestrian detection method based on deep learning
CN111611960B (en) Large-area ground surface coverage classification method based on multilayer perceptive neural network
CN110889840A (en) Effectiveness detection method of high-resolution 6 # remote sensing satellite data for ground object target
CN115512247A (en) Regional building damage grade assessment method based on image multi-parameter extraction
CN110287798B (en) Vector network pedestrian detection method based on feature modularization and context fusion
CN114332644B (en) Large-view-field traffic density acquisition method based on video satellite data
CN109635726B (en) Landslide identification method based on combination of symmetric deep network and multi-scale pooling
CN115019163A (en) City factor identification method based on multi-source big data
CN107358625B (en) SAR image change detection method based on SPP Net and region-of-interest detection
CN110276270B (en) High-resolution remote sensing image building area extraction method
Deepan et al. Comparative analysis of scene classification methods for remotely sensed images using various convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230901

Address after: 430205 room 01, 4 / F, building B2, phase II of financial background service center base construction project, No. 77, Guanggu Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province

Patentee after: WUHAI DASHI INTELLIGENCE TECHNOLOGY CO.,LTD.

Patentee after: WUHAN University

Address before: 430000 Room 01, 02, 8 Floors, Building B18, Building 2, Financial Background Service Center, 77 Guanggu Avenue, Wuhan Donghu New Technology Development Zone, Hubei Province

Patentee before: WUHAI DASHI INTELLIGENCE TECHNOLOGY CO.,LTD.