CN112396006B - Building damage identification method and device based on machine learning and computing equipment - Google Patents


Info

Publication number
CN112396006B
CN112396006B (application number CN202011324668.3A)
Authority
CN
China
Prior art keywords
disaster
image
subnetwork
network
building damage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011324668.3A
Other languages
Chinese (zh)
Other versions
CN112396006A (en)
Inventor
白琰冰
杨翰方
Current Assignee
Renmin University of China
Original Assignee
Renmin University of China
Priority date
Filing date
Publication date
Application filed by Renmin University of China
Priority to CN202011324668.3A
Publication of CN112396006A
Application granted
Publication of CN112396006B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Abstract

The invention discloses a machine learning-based building damage identification method, apparatus, and computing device. The method comprises the following steps: acquiring a pre-disaster image and a post-disaster image of a target area; inputting the pre-disaster image and the post-disaster image into a first subnetwork of a pre-trained building damage identification model; inputting the output of the first subnetwork into a pre-disaster subnetwork and a post-disaster subnetwork of the building damage identification model, respectively; inputting the outputs of the pre-disaster subnetwork and the post-disaster subnetwork into a second subnetwork of the building damage identification model; and generating a building damage identification result for the target area according to the output of the second subnetwork. This scheme enables end-to-end building damage identification, improves identification accuracy, greatly reduces model parameters, improves training efficiency, and lowers overall computation cost. Moreover, the scheme is simple and feasible, and suitable for large-scale application and implementation.

Description

Machine learning-based building damage identification method, apparatus, and computing device
Technical Field
The invention relates to the technical field of machine learning, and in particular to a machine learning-based building damage identification method, apparatus, and computing device.
Background
Buildings are places where people commonly work and live, so evaluating the degree of damage to buildings after a disaster is of great significance for rescue, emergency response, and the like.
Currently, machine learning algorithms are commonly used to evaluate the extent of damage to post-disaster buildings. Chinese patent application CN111126308A provides an automatic identification method for damaged buildings that combines pre-disaster and post-disaster remote sensing image information.
However, the inventors found in practice that the prior art has the following drawbacks: the automatic damaged-building identification method of CN111126308A uses two models (a pre-disaster model and a post-disaster model) to process the pre-disaster and post-disaster images separately, which involves many model parameters, low model training efficiency, and high computation cost. In addition, because the training and prediction processes of the pre-disaster model and the post-disaster model are mutually independent, the relationship between the pre-disaster image and the post-disaster image cannot be accurately learned, so building damage identification accuracy is low.
Disclosure of Invention
The present invention has been made in view of the above-mentioned problems, and it is an object of the present invention to provide a machine learning based building damage identification method, apparatus and computing device that overcomes or at least partially solves the above-mentioned problems.
According to one aspect of the present invention, there is provided a building damage recognition method based on machine learning, including:
acquiring a pre-disaster image and a post-disaster image of a target area;
inputting the pre-disaster image and the post-disaster image into a first sub-network of a pre-trained building damage identification model;
respectively inputting the output result of the first subnetwork into a pre-disaster subnetwork and a post-disaster subnetwork in the building damage identification model;
inputting the output result of the pre-disaster subnetwork and the post-disaster subnetwork to a second subnetwork in the building damage identification model;
and generating a building damage identification result corresponding to the target area according to the output result of the second sub-network.
Optionally, the first subnetwork includes: a dilated residual network;
and/or the pre-disaster subnetwork and the post-disaster subnetwork have the same network structure; wherein the pre-disaster subnetwork and/or the post-disaster subnetwork comprises: a dilated residual network and an SE-ResNet network;
and/or the second subnetwork comprises: a dilated residual network, an SE-ResNet network, and a pyramid pooling network.
Optionally, inputting the output results of the pre-disaster subnetwork and the post-disaster subnetwork into the second subnetwork in the building damage identification model further includes:
subtracting the output results of the pre-disaster subnetwork and the post-disaster subnetwork, and inputting the result of the subtraction into the second subnetwork in the building damage identification model.
Optionally, before the pre-disaster image and the post-disaster image are input to the first sub-network of the pre-trained building damage identification model, the method further includes:
acquiring an initial sample set and constructing an initial building damage identification model;
preprocessing the initial sample set to obtain a preprocessed sample set;
training the initial building damage recognition model by using the preprocessed sample set to obtain a trained building damage recognition model.
Optionally, the preprocessing the initial sample set to obtain a preprocessed sample set further includes:
allocating a corresponding image-level damage level label to each sample image in the initial sample set; determining, according to the proportion of each image-level damage level label in the initial sample set, the number of times the sample images with that label are used; and generating the preprocessed sample set according to the sample images in the initial sample set and their corresponding numbers of uses;
and/or performing a preset type of image processing on the sample images in the initial sample set, and generating the preprocessed sample set according to the sample images in the initial sample set and the sample images after image processing.
Optionally, the loss function adopted by the building damage identification model during training is a weighted binary cross-entropy loss function based on dice loss and focal loss.
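The patent names the loss but does not disclose, in this portion of the text, how the dice and focal terms are weighted. The following NumPy sketch shows one way such a combined loss can be assembled for a binary (e.g. building / no building) mask; the weights `w_dice` and `w_focal` are hypothetical.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft dice loss: 1 - 2|P∩T| / (|P| + |T|); rewards region overlap
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    # Focal loss: binary cross-entropy down-weighted by (1 - p_t)^gamma,
    # which focuses training on hard, misclassified pixels
    pred = np.clip(pred, eps, 1.0 - eps)
    p_t = np.where(target == 1, pred, 1.0 - pred)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

def combined_loss(pred, target, w_dice=0.5, w_focal=0.5):
    # Hypothetical weighting; the patent does not disclose the actual weights
    return w_dice * dice_loss(pred, target) + w_focal * focal_loss(pred, target)
```

A perfect prediction drives both terms toward zero, while confident wrong pixels are penalized heavily by the focal term.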
Optionally, after the generating the building damage identification result corresponding to the target area, the method further includes:
and acquiring the policy data, matching the policy data with the building damage identification result, and generating a claim settlement plan according to the matching result.
According to another aspect of the present invention, there is provided a machine learning-based building damage recognition apparatus, comprising:
the data acquisition module is suitable for acquiring a pre-disaster image and a post-disaster image of the target area;
the input module is suitable for inputting the pre-disaster image and the post-disaster image into a pre-trained building damage recognition model;
the model prediction module is suitable for inputting the pre-disaster image and the post-disaster image into a first sub-network in a pre-trained building damage identification model; respectively inputting the output result of the first subnetwork into a pre-disaster subnetwork and a post-disaster subnetwork in the building damage identification model; inputting the output result of the pre-disaster subnetwork and the post-disaster subnetwork to a second subnetwork in the building damage identification model; and generating a building damage identification result corresponding to the target area according to the output result of the second sub-network.
Optionally, the first subnetwork includes: a dilated residual network;
and/or the pre-disaster subnetwork and the post-disaster subnetwork have the same network structure; wherein the pre-disaster subnetwork and/or the post-disaster subnetwork comprises: a dilated residual network and an SE-ResNet network;
and/or the second subnetwork comprises: a dilated residual network, an SE-ResNet network, and a pyramid pooling network.
Optionally, the model prediction module is further adapted to: and subtracting output results of the pre-disaster subnetwork and the post-disaster subnetwork, and inputting the subtracted output results to a second subnetwork in the building damage identification model.
Optionally, the apparatus further includes: the model training module is suitable for acquiring an initial sample set and constructing an initial building damage recognition model;
preprocessing the initial sample set to obtain a preprocessed sample set;
training the initial building damage recognition model by using the preprocessed sample set to obtain a trained building damage recognition model.
Optionally, the model training module is further adapted to: allocate a corresponding image-level damage level label to each sample image in the initial sample set; determine, according to the proportion of each image-level damage level label in the initial sample set, the number of times the sample images with that label are used; and generate the preprocessed sample set according to the sample images in the initial sample set and their corresponding numbers of uses;
and/or perform a preset type of image processing on the sample images in the initial sample set, and generate the preprocessed sample set according to the sample images in the initial sample set and the sample images after image processing.
Optionally, the loss function adopted by the building damage identification model during training is a weighted binary cross-entropy loss function based on dice loss and focal loss.
Optionally, the apparatus further includes: and the claim plan generation module is suitable for acquiring the policy data after the building damage identification result corresponding to the target area is generated, matching the policy data with the building damage identification result, and generating the claim plan according to the matching result.
According to yet another aspect of the present invention, there is provided a computing device comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the building damage identification method based on machine learning.
According to still another aspect of the present invention, there is provided a computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the above-described machine learning-based building damage identification method.
According to the machine learning-based building damage identification method, apparatus, and computing device provided by the invention, a pre-disaster image and a post-disaster image of a target area are first acquired; the pre-disaster image and the post-disaster image are then input into a first subnetwork of a pre-trained building damage identification model; the output of the first subnetwork is input into a pre-disaster subnetwork and a post-disaster subnetwork of the building damage identification model, respectively; the outputs of the pre-disaster and post-disaster subnetworks are input into a second subnetwork of the building damage identification model; and a building damage identification result for the target area is generated according to the output of the second subnetwork. This scheme enables end-to-end building damage identification, improves identification accuracy, greatly reduces model parameters, improves training efficiency, and lowers overall computation cost. Moreover, the scheme is simple and feasible, and suitable for large-scale application and implementation.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the invention may be more clearly understood and implemented in accordance with the contents of the specification, and in order that the above and other objects, features, and advantages of the invention may be more readily apparent, specific embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a schematic flow chart of a building damage identification method based on machine learning according to a first embodiment of the invention;
fig. 2 is a schematic diagram showing a specific structure of a building damage recognition model applied to the first embodiment;
fig. 3 shows a schematic structural diagram of a hole residual network applied to the first embodiment of the present invention;
FIG. 4 is a schematic diagram showing data processing of an SE-ResNet network applied to the first embodiment of the invention;
FIG. 5 is a schematic diagram of pyramid pooling network data processing applied to a first embodiment of the present invention;
fig. 6 is a schematic flow chart of a building damage identification method based on machine learning according to a second embodiment of the present invention;
fig. 7 is a schematic flow chart of a building damage identification method based on machine learning according to a third embodiment of the present invention;
fig. 8 is a schematic functional structure diagram of a building damage recognition device based on machine learning according to a fourth embodiment of the present invention;
fig. 9 is a schematic structural diagram of a computing device according to a sixth embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Example 1
Fig. 1 is a schematic flow chart of a building damage identification method based on machine learning according to an embodiment of the invention. The method provided in this embodiment may be executed by a corresponding computing device, and the execution subject of the method is not limited in this embodiment.
As shown in fig. 1, the method includes:
step S110: and acquiring a pre-disaster image and a post-disaster image of the target area.
First, the area in which building damage is to be identified, namely the target area, is determined. The specific position, size, etc. of the target area are not limited in this embodiment. Optionally, to enhance the user experience, this embodiment provides a visual operation interface in which a map may be presented, and the user determines the target area through a selection operation on the map.
Further, aiming at the determined target area, a corresponding pre-disaster image and a post-disaster image are acquired. The pre-disaster image and the post-disaster image may be satellite images, or may be images acquired through unmanned aerial vehicle, aerial photography, or the like. In short, the present embodiment does not limit the acquisition mode of the pre-disaster image and the post-disaster image.
Step S120: and inputting the pre-disaster image and the post-disaster image into a first sub-network of the pre-trained building damage identification model.
After the pre-disaster image and the post-disaster image of the target area are obtained, a pair of pre-disaster and post-disaster images of the same area are input into a pre-trained building damage recognition model together, so that the building damage recognition model can recognize the building damage degree corresponding to the target area.
The building damage identification model includes a first subnetwork, a pre-disaster subnetwork, a post-disaster subnetwork, and a second subnetwork, as shown in fig. 2. During recognition, the pre-disaster image and the post-disaster image enter the first subnetwork together as input data. The first subnetwork shares the same set of parameters for processing the pre-disaster image and the post-disaster image.
In an alternative embodiment, because the residual network (Residual Network) is easy to optimize and can gain accuracy from increased depth, and because dilated convolution (Dilated Convolution) can expand the receptive field and thus help extract image contours and edges, a dilated residual network (Dilated Residual Network) is generated based on the residual network and dilated convolution and applied in the first subnetwork, the pre-disaster subnetwork, the post-disaster subnetwork, and the second subnetwork. This allows peripheral contour information of the image to be extracted, makes the acquired information more complete, and enables identification and localization of building contours in the image. This embodiment does not limit how the dilated residual network is generated from the residual network and dilated convolution; for example, the standard convolution in the original path of the residual network may be replaced by dilated convolution.
Further optionally, the dilated residual network in this embodiment is composed of a first dilated residual sub-network and a second dilated residual sub-network, whose specific structures are shown in fig. 3, where c is a convolution layer, b is a batch normalization layer, r is a ReLU layer, "+" indicates that two results are added, and arrows mark the data flow. As the figure shows, the first dilated residual sub-network has one more convolution layer and batch normalization layer than the second dilated residual sub-network; these are used to adjust the picture size, number of channels, and the like to fit the subsequent data processing.
In addition to the dilated residual network, the first subnetwork further comprises a convolution layer, a batch normalization layer, and a ReLU layer. The specific structure of the first subnetwork is shown in fig. 2, where c is a convolution layer, b is a batch normalization layer, r is a ReLU layer, RB1 is the first dilated residual sub-network, and RB2 is the second dilated residual sub-network. In the first subnetwork, the number of first dilated residual sub-networks may be 1 and the number of second dilated residual sub-networks may be 2, which is not limited in this embodiment.
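The receptive-field gain from dilation mentioned above can be checked with the standard convolution arithmetic. A small pure-Python illustration follows; the layer configurations in the usage note are examples, not the patent's actual network:

```python
def effective_kernel(k, d):
    # A k-by-k convolution with dilation rate d spans k + (k-1)(d-1) input positions per axis
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    # layers: list of (kernel_size, dilation, stride) tuples, input to output.
    # Standard receptive-field recursion: each layer adds (k_eff - 1) times the cumulative stride.
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (effective_kernel(k, d) - 1) * jump
        jump *= s
    return rf
```

Three stacked 3x3 layers with dilation rates 1, 2, and 4 see a 15x15 input window, versus 7x7 for the same stack without dilation, with no extra parameters and no loss of resolution, which is why dilation helps capture peripheral contour information.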
Step S130: and respectively inputting the output result of the first subnetwork into a pre-disaster subnetwork and a post-disaster subnetwork in the building damage identification model.
The output of the first subnetwork flows to the pre-disaster subnetwork and the post-disaster subnetwork, which are arranged in parallel. The pre-disaster subnetwork processes the pre-disaster image data and the post-disaster subnetwork processes the post-disaster image data; the two subnetworks have the same network structure but different parameters.
In an alternative implementation, if the pre-disaster and post-disaster images are satellite images, the data volume is large and a conventional neural network would converge slowly. To address this drawback, the building damage identification model adopts an SE-ResNet network in the pre-disaster subnetwork, the post-disaster subnetwork, and/or the second subnetwork to improve the convergence speed of the model.
The SE-ResNet network combines the SE module (Squeeze-and-Excitation attention mechanism) with the residual network (Residual Network). It is a weighting mechanism that assigns a weight to each channel of the feature maps and can improve the classification performance of the image network. To determine each channel's weight, it computes the average of that channel's activation values and then transforms the result through two fully connected layers with ReLU and sigmoid activation functions, thereby obtaining reasonable channel weights. The SE-ResNet network is shown schematically in fig. 4, where GAP is global average pooling (Global Average Pooling), ω1 and ω2 are the first and second linear layers respectively, r is the ReLU layer, S is the sigmoid function, and × denotes multiplication.
Thus, the pre-disaster subnetwork and/or the post-disaster subnetwork comprises a dilated residual network and an SE-ResNet network; the specific structure is shown in fig. 2, where SE is the SE-ResNet network. In the pre-disaster subnetwork and/or the post-disaster subnetwork, the number of first dilated residual sub-networks may be 1, the number of second dilated residual sub-networks in the first (from top to bottom) group in the figure may be 3, and the number in the second group may be 22.
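The channel-weighting pipeline described for fig. 4 (GAP, two linear layers with ReLU and sigmoid, then channel-wise multiplication) can be sketched in NumPy as follows. The weight shapes and the reduction ratio in the toy usage are assumptions, since the patent does not give layer sizes here:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation channel recalibration.

    x  : (C, H, W) activation maps for one sample
    w1 : (C_mid, C) first linear layer (squeezes C channels to C_mid)
    w2 : (C, C_mid) second linear layer (restores C channels)
    """
    z = x.mean(axis=(1, 2))                # GAP: average activation per channel
    s = np.maximum(w1 @ z, 0.0)            # first linear layer + ReLU
    w = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # second linear layer + sigmoid -> weights in (0, 1)
    return x * w[:, None, None]            # multiply each channel's map by its weight

# Toy usage with an assumed reduction ratio of 2 (C=4 channels squeezed to C_mid=2)
rng = np.random.default_rng(42)
x = rng.random((4, 8, 8))
out = se_block(x, rng.standard_normal((2, 4)), rng.standard_normal((4, 2)))
```

Because the sigmoid weights lie strictly between 0 and 1, the block can only attenuate channels, letting the network emphasize informative channels relative to the rest.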
Step S140: and inputting the output result of the pre-disaster subnetwork and the post-disaster subnetwork into a second subnetwork in the building damage identification model.
Specifically, the output results of the pre-disaster subnetwork and the post-disaster subnetwork are subtracted and then input into a second subnetwork in the building damage identification model.
In an alternative embodiment, the second subnetwork comprises a dilated residual network, an SE-ResNet network, and a pyramid pooling network. The pyramid pooling network (Pyramid Pooling Module) is shown schematically in fig. 5, where g represents the activation map of a single channel and N represents the number of pooling cells in a column or row. The pyramid pooling network aggregates the activation map of each channel in a pyramid fashion: it lays an N×N grid over each channel's activation map, each grid cell covers a square region of the map, and each covered region is pooled to a single value according to the pooling scheme. Through this dimensionality reduction, each activation map is quantized into a fixed-length vector for each value of N; the vectors obtained under the different values of N are concatenated into a representation vector for the channel, and the result is finally output after processing through a concatenation function. In this way, the most critical features of each pixel in the image can be identified, while information that may interfere with the model or is useless is discarded, making the model more sensitive to feature changes and further improving the model's overall identification accuracy.
Further, as shown in fig. 2, PPM is the pyramid pooling network and d is a dropout layer. As can be seen, the second subnetwork further comprises a dropout layer and a convolution layer in addition to the dilated residual network, the SE-ResNet network, and the pyramid pooling network.
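A simplified NumPy sketch of the per-channel pyramid pooling just described follows. The pyramid levels (1, 2, 3, 6) are the common PSPNet-style choice and are an assumption here, since the patent does not list the grid sizes:

```python
import numpy as np

def adaptive_avg_pool(a, n):
    # Average-pool one 2D activation map g into an n-by-n grid of near-equal bins
    h, w = a.shape
    ys = np.linspace(0, h, n + 1).astype(int)
    xs = np.linspace(0, w, n + 1).astype(int)
    return np.array([[a[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                      for j in range(n)] for i in range(n)])

def pyramid_pool(x, levels=(1, 2, 3, 6)):
    # x: (C, H, W). For each channel, pool at every pyramid level and
    # concatenate the flattened grids into one representation vector.
    return np.stack([
        np.concatenate([adaptive_avg_pool(ch, n).ravel() for n in levels])
        for ch in x
    ])
```

With these levels, every channel is reduced to a vector of 1 + 4 + 9 + 36 = 50 values regardless of the input's spatial size, which is what makes the concatenated representation fixed-length.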
Step S150: and generating a building damage identification result corresponding to the target area according to the output result of the second subnetwork.
The output of the second subnetwork includes two types of results: first, a building identification result, which identifies the areas of buildings on the map of the target area; and second, a damage level identification result. The two types of results are multiplied to obtain the final building damage identification result, in which each part of the target area is marked with a building damage level label comprising at least one of: no building, no damage, slight damage, severe damage, and complete damage.
The present embodiment does not limit the presentation form of the building damage recognition result. In an optional implementation manner, the building damage identification result may identify the damage level of each pixel in each building on the target area map, that is, the embodiment can implement the damage level identification of the pixel point level; in another alternative embodiment, the building damage identification result may identify the damage level of each building on the target area map.
Thus, this embodiment uses the building damage identification model to process the pre-disaster and post-disaster images and output the building damage identification result, integrating building identification and damage identification into one whole and realizing end-to-end building damage identification. In the building damage identification model provided by this embodiment, which comprises a first subnetwork, a second subnetwork, a pre-disaster subnetwork, and a post-disaster subnetwork, the first subnetwork shares one set of parameters when processing the pre-disaster and post-disaster images, while the pre-disaster and post-disaster subnetworks process the pre-disaster and post-disaster images with their own respective parameters. The relationship between the pre-disaster and post-disaster images can therefore be fully learned, improving building damage identification accuracy while greatly reducing model parameters, improving training efficiency, and lowering overall computation cost. Moreover, the scheme provided by this embodiment is simple and feasible, and suitable for large-scale application and implementation.
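The parameter-sharing structure summarized above can be illustrated with a toy sketch: one shared first subnetwork, two branch subnetworks with identical shapes but separate weights, feature subtraction, then one second subnetwork. Plain linear maps stand in for the real convolutional subnetworks here (an illustration of the data flow only, not the patent's implementation):

```python
import numpy as np

def identify(pre_img, post_img, w_shared, w_pre, w_post, w_second):
    # First subnetwork: ONE set of parameters processes both images
    f_pre = w_shared @ pre_img
    f_post = w_shared @ post_img
    # Branch subnetworks: same shape, separately learned parameters
    g_pre = w_pre @ f_pre
    g_post = w_post @ f_post
    # Branch outputs are subtracted before entering the second subnetwork
    return w_second @ (g_post - g_pre)

rng = np.random.default_rng(0)
D = 8
w_shared, w_pre, w_post, w_second = (rng.standard_normal((D, D)) for _ in range(4))
result = identify(rng.standard_normal(D), rng.standard_normal(D),
                  w_shared, w_pre, w_post, w_second)
```

Because both branches reuse the first subnetwork's parameters, the total parameter count is well below that of two fully independent pre-disaster and post-disaster models, which is the source of the training-efficiency and cost savings claimed above.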
Example two
Fig. 6 is a schematic flow chart of a building damage identification method based on machine learning according to a second embodiment of the present invention. The method provided in this embodiment is further optimized for the method in embodiment one.
As shown in fig. 6, the method includes:
step S610: and acquiring an initial sample set and constructing an initial building damage identification model.
The initial sample set may be obtained from an xBD dataset (natural disaster image dataset) and/or a dataset generated by means of a drone, aerial photography, etc. The initial sample set contains a large number of pre-disaster images and post-disaster images of different areas.
Further, the specific construction process of the initial building damage recognition model in this step may refer to the description related to the structure of the building damage recognition model in the first embodiment, which is not described herein.
Step S620: the initial sample set is preprocessed to obtain a preprocessed sample set.
To improve the prediction accuracy of the building damage identification model, after the initial sample set is obtained, this embodiment preprocesses the initial sample set to obtain a preprocessed sample set, using one or more of the following preprocessing methods.
In the first preprocessing method, sample images of classes with a small proportion are used multiple times.
In practical scenarios, after a major disaster the number of completely damaged and undamaged buildings is larger than the number of slightly and severely damaged buildings, so the numbers of images corresponding to the different damage levels in the initial sample set differ greatly. To prevent this sample imbalance from affecting subsequent model training accuracy, this embodiment uses the sample images of the under-represented classes in the initial sample set multiple times. The specific implementation is as follows:
First, a corresponding image-level damage level label is assigned to each sample image in the initial sample set. To determine the image-level damage level label of a sample image, the damage level label of each pixel in the sample image is first determined, where the damage level label includes at least one of the following: no building, no damage, slight damage, severe damage, and complete damage. Then, for any sample image, the number of pixels corresponding to each pixel-level damage level label is counted, and the pixel-level damage level label with the largest product of pixel count and corresponding class label weight is determined as the image-level damage level label of the sample image. For example, if the product of the number of slightly damaged pixels and the slight-damage weight is the largest in a sample image, the image-level damage level label of that sample image is slight damage.
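A minimal sketch of this weighted-argmax labelling rule. The label names and the weight values are assumptions for illustration only; the embodiment does not specify them numerically.

```python
import numpy as np

# Count the pixels of each damage level, multiply each count by a class
# label weight, and take the label with the largest product.
LABELS = ["no_building", "no_damage", "slight_damage",
          "severe_damage", "complete_damage"]
CLASS_WEIGHTS = np.array([0.1, 0.5, 2.0, 2.0, 1.0])  # assumed values

def image_level_label(mask: np.ndarray) -> str:
    """mask holds one label index (0..4) per pixel."""
    counts = np.bincount(mask.ravel(), minlength=len(LABELS))
    return LABELS[int(np.argmax(counts * CLASS_WEIGHTS))]
```

With these assumed weights, 30 slight-damage pixels (30 × 2.0 = 60) outweigh 100 no-damage pixels (100 × 0.5 = 50), so the image-level label becomes slight damage, matching the example in the text.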
Further, the number of times the sample images of each image-level damage level label are used is determined according to the proportion of that label in the initial sample set. Specifically, the number of sample images corresponding to each image-level damage level label is counted in the initial sample set, and the proportion of each image-level damage level label is then determined; finally, the number of uses of the sample images of each label is determined according to that proportion. The proportion of an image-level damage level label in the initial sample set is negatively correlated with the number of uses of its sample images. For example, if the slight-damage label has the smallest proportion in the initial sample set, the sample images belonging to the slight-damage label are used the most times.
Finally, a preprocessed sample set is generated according to the sample images in the initial sample set and their numbers of uses. For example, if the number of uses of a certain sample image is 3, that sample image is used 3 times during model training, which is equivalent to the preprocessed sample set containing 3 copies of the sample image.
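The reuse-count step above can be sketched as follows. The embodiment only requires that a label's proportion be negatively correlated with its number of uses; the inverse-proportion rule and the rounding below are assumptions.

```python
from collections import Counter

def reuse_counts(image_labels):
    counts = Counter(image_labels)
    largest = max(counts.values())
    # repeat each class until its total roughly matches the largest class
    return {label: max(1, round(largest / c)) for label, c in counts.items()}

def expand_sample_set(samples, image_labels):
    """Duplicate each sample according to its label's reuse count."""
    times = reuse_counts(image_labels)
    expanded = []
    for sample, label in zip(samples, image_labels):
        expanded.extend([sample] * times[label])
    return expanded
```

For instance, with 6 no-damage images and 2 slight-damage images, the slight-damage images are each used 3 times, giving both classes 6 copies in the preprocessed set.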
In the second preprocessing mode, additional sample images are obtained by applying image processing to the sample images in the initial sample set, thereby expanding the sample set.
Specifically, a preset type of image processing is first performed on the sample images in the initial sample set, where the preset type of image processing includes at least one of the following: random flipping, rotation, image offset, channel offset, Gaussian blur, color adjustment, brightness adjustment, saturation adjustment, and contrast adjustment.
Further, a preprocessed sample set is generated from the sample images in the initial sample set together with the image-processed sample images. In this way, the sample set is expanded and the generalization capability of the model is further improved.
In addition, the image-level damage level labels whose sample images are to undergo image processing can be selected according to the proportion of each image-level damage level label in the initial sample set. For example, if slight damage has the smallest proportion, the sample images corresponding to slight damage can be subjected to image processing, which further balances the samples while expanding the sample set and improving the generalization capability of the model.
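A numpy-only sketch of the second preprocessing mode, implementing three of the listed operations (random flipping, rotation, and brightness adjustment); the flip probability and the brightness range are illustrative assumptions.

```python
import random
import numpy as np

def augment(img: np.ndarray, rng: random.Random) -> np.ndarray:
    if rng.random() < 0.5:
        img = np.fliplr(img)                    # random horizontal flip
    img = np.rot90(img, k=rng.randrange(4))     # rotate by 0/90/180/270 deg
    factor = rng.uniform(0.8, 1.2)              # brightness adjustment
    return np.clip(img.astype(np.float64) * factor, 0, 255).astype(np.uint8)
```

Note that when a pre-disaster/post-disaster pair is augmented, the same seeded generator should drive both images so they remain spatially aligned.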
Step S630: training the initial building damage recognition model by using the preprocessed sample set to obtain a trained building damage recognition model.
In the training process, the evaluation indexes adopted include precision, recall, F1 score, and intersection over union (IoU). The specific calculation formulas are as follows:

F_1^0 = 2TP / (2TP + FP + FN)   Formula (2-1)

In formula (2-1), F_1^0 is the F1 score of the building identification stage, TP is the number of pixels correctly classified as building, FP is the number of pixels misclassified as building, and FN is the number of building pixels misclassified as background.

P_j = TP_j / (TP_j + FP_j)   Formula (2-2)

In formula (2-2), P_j is the precision of the j-th class in the damage level classification stage, TP_j is the number of pixels correctly classified into class j, and FP_j is the number of pixels misclassified into class j.

R_j = TP_j / (TP_j + FN_j)   Formula (2-3)

In formula (2-3), R_j is the recall of the j-th class in the damage level classification stage, TP_j is the number of pixels correctly classified into class j, and FN_j is the number of class-j pixels misclassified into other classes.

F_{1,j} = 2 * P_j * R_j / (P_j + R_j)   Formula (2-4)

In formula (2-4), F_{1,j} is the F1 score of the j-th class in the damage level classification stage, P_j is its precision, and R_j is its recall.

IoU_j = TP_j / (TP_j + FP_j + FN_j)   Formula (2-5)

In formula (2-5), IoU_j is the IoU of the j-th class in the damage level classification stage, with TP_j, FP_j, and FN_j as defined above.

IoU = (1/n) * Σ_j IoU_j   Formula (2-6)

In formula (2-6), IoU is the macro IoU of the damage level classification stage, IoU_j is the IoU of the j-th class, and n is the number of classes in the damage level classification stage.
In addition, the loss function employed in this embodiment is a weighted binary cross entropy loss function based on Dice loss and focal loss.
L_s = -[ω_{s,1} * y_s * log(p_s) + ω_{s,0} * (1 - y_s) * log(1 - p_s)]   Formula (2-7)

In formula (2-7), L_s is the loss function of the building identification stage, ω_{s,1} is the weight of the building class, ω_{s,0} is the weight of the non-building class, and y_s and p_s are the ground-truth label and the predicted building segmentation probability, respectively.

Seg_{c-i} = ω_1 * Dice(m_p, m_t) + ω_2 * Focal(m_p, m_t)   Formula (2-8)

L_d = Σ_{i=1}^{n} ω_{c-i} * Seg_{c-i}   Formula (2-9)

In formulas (2-8) and (2-9), Seg_{c-i} is the binary cross entropy of the i-th damage level; ω_1 and ω_2 are the weights of the Dice loss and the focal loss, respectively (where ω_1 is less than ω_2); m_p and m_t are the prediction mask and the ground-truth mask of damage level i, respectively; Dice denotes the Dice loss and Focal denotes the focal loss; L_d is the loss function of the damage level classification stage; ω_{c-i} is the weight of the i-th damage level (larger weights can be assigned to slight damage and severe damage to further mitigate the imbalance in identification precision across levels); and n is the number of damage levels.
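A numpy sketch of the damage-level loss in formulas (2-8) and (2-9): per damage level, a weighted sum of Dice loss and focal loss, then a weighted sum over levels. The gamma exponent and all weight values below are illustrative; the embodiment only fixes that w1 is less than w2.

```python
import numpy as np

def dice_loss(p, t, eps=1e-7):
    """Dice(m_p, m_t): 1 minus the Dice overlap coefficient."""
    inter = (p * t).sum()
    return 1.0 - (2 * inter + eps) / (p.sum() + t.sum() + eps)

def focal_loss(p, t, gamma=2.0, eps=1e-7):
    """Focal(m_p, m_t): cross entropy down-weighted for easy pixels."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(t == 1, p, 1 - p)        # probability of the true class
    return float((-((1 - pt) ** gamma) * np.log(pt)).mean())

def damage_loss(preds, masks, level_weights, w1=0.25, w2=0.75):
    """Formula (2-9): weighted sum of the per-level losses of (2-8)."""
    return sum(w * (w1 * dice_loss(p, t) + w2 * focal_loss(p, t))
               for w, (p, t) in zip(level_weights, zip(preds, masks)))
```

Passing larger entries of `level_weights` for slight and severe damage reproduces the re-weighting suggested above for the underrepresented levels.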
Step S640: the method comprises the steps of obtaining a pre-disaster image and a post-disaster image of a target area, and inputting the pre-disaster image and the post-disaster image into a pre-trained building damage recognition model to obtain a building damage recognition result corresponding to the target area.
The specific implementation process of this step may refer to the description of the corresponding part in the first embodiment, and this embodiment is not described herein.
Therefore, in the model training stage, this embodiment uses the underrepresented sample images multiple times and obtains additional sample images through image processing of the initial sample set, thereby achieving sample balancing and sample expansion and improving the training precision of the model, in particular greatly improving the identification precision for slight and severe damage. Furthermore, adopting the weighted binary cross entropy loss function based on Dice loss and focal loss can further improve the identification accuracy of the model.
Example III
Fig. 7 is a flow chart of a building damage identification method based on machine learning according to a third embodiment of the present invention. The method provided in this embodiment is a further optimization of the method in the first embodiment.
As shown in fig. 7, the method includes:
step S710: the method comprises the steps of obtaining a pre-disaster image and a post-disaster image of a target area, and inputting the pre-disaster image and the post-disaster image into a pre-trained building damage recognition model to obtain a building damage recognition result corresponding to the target area.
The specific implementation process of this step may refer to the description of the corresponding part in the first embodiment, and this embodiment is not described herein.
Step S720: and acquiring the policy data, matching the policy data with the building damage identification result, and generating a claim settlement plan according to the matching result.
This embodiment can apply the building damage recognition result to the insurance claim scenario. Specifically, after the building damage recognition result is obtained, policy data of a preset type is further extracted from the system. The preset type of policy data refers to policies whose insured subject matter is building-related, such as catastrophe property policies.
The extracted policy data is then matched with the building damage identification result. In the matching process, the location of the insured subject matter (building targets such as houses and workshops) in each policy can be matched against the building damage identification result to determine the policies that require claim settlement, the claim amount of each policy, and so on, after which a claim settlement plan is generated.
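A hypothetical sketch of this policy-matching step. The field names, the damage-to-payout ratios, and the `damage_at` lookup are all illustrative assumptions, not details fixed by this embodiment.

```python
# Assumed mapping from predicted damage level to payout fraction.
PAYOUT_RATIO = {"no_damage": 0.0, "slight_damage": 0.2,
                "severe_damage": 0.6, "complete_damage": 1.0}

def build_claims(policies, damage_at):
    """damage_at(lon, lat) -> damage-level string from the model output."""
    plan = []
    for p in policies:
        level = damage_at(p["lon"], p["lat"])
        amount = p["insured_amount"] * PAYOUT_RATIO.get(level, 0.0)
        if amount > 0:                       # only policies needing settlement
            plan.append({"policy_id": p["id"], "level": level,
                         "amount": amount})
    return plan
```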
In addition, the building damage recognition result can be applied to post-disaster rescue and post-disaster reconstruction scenes, for example, rescue grades corresponding to different areas can be generated according to the building damage recognition result, wherein the higher the damage degree is, the higher the rescue grade is.
Therefore, this embodiment applies the building damage recognition result to the insurance claim scenario and automatically generates a claim settlement plan according to the matching result of the policy data and the building damage recognition result, which improves claim settlement efficiency and helps the insurer allocate funds.
Example IV
Fig. 8 is a schematic functional structure diagram of a building damage recognition device based on machine learning according to a fourth embodiment of the present invention. As shown in fig. 8, the apparatus includes: a data acquisition module 81, an input module 82 and a model prediction module 83.
The data acquisition module is suitable for acquiring a pre-disaster image and a post-disaster image of the target area;
the input module is suitable for inputting the pre-disaster image and the post-disaster image into a pre-trained building damage recognition model;
the model prediction module is suitable for inputting the pre-disaster image and the post-disaster image into a first sub-network in a pre-trained building damage identification model; respectively inputting the output result of the first subnetwork into a pre-disaster subnetwork and a post-disaster subnetwork in the building damage identification model; inputting the output result of the pre-disaster subnetwork and the post-disaster subnetwork to a second subnetwork in the building damage identification model; and generating a building damage identification result corresponding to the target area according to the output result of the second sub-network.
Optionally, the first subnetwork includes: a hole residual network;
and/or the pre-disaster sub-network and the post-disaster sub-network have the same network structure; wherein the pre-disaster subnetwork and/or the post-disaster subnetwork comprises: a hole residual network and an SE-ResNet network;
and/or, the second subnetwork comprises: hole residual network, SE-ResNet network, and pyramid pooling network.
Optionally, the model prediction module is further adapted to: and subtracting output results of the pre-disaster subnetwork and the post-disaster subnetwork, and inputting the subtracted output results to a second subnetwork in the building damage identification model.
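The wiring of the model prediction module, including the subtraction step, can be sketched as follows; the four stage functions are toy stand-ins for the actual convolutional sub-networks, and only the connections between them follow the text.

```python
import numpy as np

def first_subnetwork(x):          # shared parameters: called on both images
    return x * 0.5

def pre_disaster_subnetwork(f):   # own parameters
    return f + 1.0

def post_disaster_subnetwork(f):  # own parameters
    return f + 2.0

def second_subnetwork(d):         # would apply SE-ResNet + pyramid pooling
    return d

def predict(pre_img, post_img):
    f_pre = first_subnetwork(pre_img)       # same weights applied twice
    f_post = first_subnetwork(post_img)
    # subtract the two branch outputs before the second sub-network
    diff = post_disaster_subnetwork(f_post) - pre_disaster_subnetwork(f_pre)
    return second_subnetwork(diff)
```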
Optionally, the apparatus further includes: the model training module is suitable for acquiring an initial sample set and constructing an initial building damage recognition model;
preprocessing the initial sample set to obtain a preprocessed sample set;
training the initial building damage recognition model by using the preprocessed sample set to obtain a trained building damage recognition model.
Optionally, the model training module is further adapted to: distribute corresponding image-level damage level labels for sample images in the initial sample set; determine the corresponding number of uses of the sample images of each image-level damage level label according to the proportion of each image-level damage level label in the initial sample set; and generate a preprocessed sample set according to the sample images in the initial sample set and their corresponding numbers of uses;
and/or perform a preset type of image processing on the sample images in the initial sample set; and generate a preprocessed sample set according to the sample images in the initial sample set and the image-processed sample images.
Optionally, the loss function adopted by the building damage identification model in the model training process is: a weighted binary cross entropy loss function based on Dice loss and focal loss.
Optionally, the apparatus further includes: and the claim plan generation module is suitable for acquiring the policy data after the building damage identification result corresponding to the target area is generated, matching the policy data with the building damage identification result, and generating the claim plan according to the matching result.
The specific implementation process of each module in the device may refer to the description of the corresponding part in the first embodiment, which is not described herein.
Therefore, this embodiment uses the building damage identification model to process the pre-disaster image and the post-disaster image and output a building damage identification result, thereby integrating building identification and damage identification and realizing end-to-end building damage identification. In addition, the building damage identification model provided by this embodiment comprises a first sub-network, a second sub-network, a pre-disaster sub-network, and a post-disaster sub-network; the first sub-network processes the pre-disaster image and the post-disaster image with a single shared set of parameters, while the pre-disaster sub-network and the post-disaster sub-network process them with their own respective parameters, so the relationship between the pre-disaster image and the post-disaster image can be fully learned. This improves building damage identification precision while greatly reducing model parameters, improving training efficiency, and lowering overall computation cost. Moreover, the scheme provided by this embodiment is simple and feasible, and is suitable for large-scale application and implementation.
Example five
According to a fifth embodiment of the present invention, there is provided a non-volatile computer storage medium storing at least one executable instruction for performing the machine learning-based building damage identification method in any of the above method embodiments.
Therefore, the embodiment can realize end-to-end building damage identification, improve the building damage identification precision, greatly reduce model parameters, improve training efficiency and reduce overall calculation cost. Moreover, the scheme provided by the embodiment is simple and feasible, and is suitable for large-scale application and implementation.
Example six
Fig. 9 is a schematic structural diagram of a computing device according to the sixth embodiment of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in fig. 9, the computing device may include: a processor 902, a communication interface (Communications Interface) 904, a memory 906, and a communication bus 908.
Wherein: processor 902, communication interface 904, and memory 906 communicate with each other via a communication bus 908. A communication interface 904 for communicating with network elements of other devices, such as clients or other servers. The processor 902 is configured to execute the program 910, and specifically may perform the relevant steps in any of the embodiments of the machine learning-based building damage identification method described above.
In particular, the program 910 may include program code including computer-operating instructions.
The processor 902 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be the same type of processor, such as one or more CPUs, or different types of processors, such as one or more CPUs and one or more ASICs.
A memory 906 for storing a program 910. Memory 906 may comprise high-speed RAM memory or may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 910 may be used to cause the processor 902 to perform operations comprising:
acquiring a pre-disaster image and a post-disaster image of a target area;
inputting the pre-disaster image and the post-disaster image into a first sub-network of a pre-trained building damage identification model;
respectively inputting the output result of the first subnetwork into a pre-disaster subnetwork and a post-disaster subnetwork in the building damage identification model;
Inputting the output result of the pre-disaster subnetwork and the post-disaster subnetwork to a second subnetwork in the building damage identification model;
and generating a building damage identification result corresponding to the target area according to the output result of the second sub-network.
In an alternative embodiment, the first subnetwork includes: a hole residual network;
and/or the pre-disaster sub-network and the post-disaster sub-network have the same network structure; wherein the pre-disaster subnetwork and/or the post-disaster subnetwork comprises: a hole residual network and an SE-ResNet network;
and/or, the second subnetwork comprises: hole residual network, SE-ResNet network, and pyramid pooling network.
In an alternative embodiment, the program 910 may be specifically configured to cause the processor 902 to perform the following operations:
and subtracting output results of the pre-disaster subnetwork and the post-disaster subnetwork, and inputting the subtracted output results to a second subnetwork in the building damage identification model.
In an alternative embodiment, the program 910 may be specifically configured to cause the processor 902 to perform the following operations:
acquiring an initial sample set and constructing an initial building damage identification model;
Preprocessing the initial sample set to obtain a preprocessed sample set;
training the initial building damage recognition model by using the preprocessed sample set to obtain a trained building damage recognition model.
In an alternative embodiment, the program 910 may be specifically configured to cause the processor 902 to perform the following operations:
distributing corresponding image-level damage level labels for sample images in the initial sample set; determining the corresponding number of uses of the sample images of each image-level damage level label according to the proportion of each image-level damage level label in the initial sample set; generating a preprocessed sample set according to the sample images in the initial sample set and their corresponding numbers of uses;
and/or performing a preset type of image processing on the sample images in the initial sample set; and generating a preprocessed sample set according to the sample image in the initial sample set and the sample image after image processing.
In an alternative embodiment, the loss function adopted by the building damage identification model in the model training process is: a weighted binary cross entropy loss function based on Dice loss and focal loss.
In an alternative embodiment, the program 910 may be specifically configured to cause the processor 902 to perform the following operations:
and acquiring the policy data, matching the policy data with the building damage identification result, and generating a claim settlement plan according to the matching result.
Therefore, the embodiment can realize end-to-end building damage identification, improve the building damage identification precision, greatly reduce model parameters, improve training efficiency and reduce overall calculation cost. Moreover, the scheme provided by the embodiment is simple and feasible, and is suitable for large-scale application and implementation.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functionality of some or all of the components according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (8)

1. A machine learning based building damage identification method, comprising:
acquiring a pre-disaster image and a post-disaster image of a target area;
inputting the pre-disaster image and the post-disaster image into a first sub-network of a pre-trained building damage identification model; wherein, in the first sub-network, the processing of the pre-disaster image and the post-disaster image shares the same set of parameters, the first sub-network comprises a first hole residual subnetwork and a second hole residual subnetwork, and the first hole residual subnetwork has one more convolutional layer and one more batch normalization layer than the second hole residual subnetwork;
respectively inputting the output result of the first subnetwork into a pre-disaster subnetwork and a post-disaster subnetwork in the building damage identification model;
subtracting output results of the pre-disaster subnetwork and the post-disaster subnetwork, and inputting the subtracted output results to a second subnetwork in the building damage identification model;
generating a building damage identification result corresponding to the target area according to the output result of the second sub-network;
and acquiring the policy data, matching the policy data with the building damage identification result, and generating a claim settlement plan according to the matching result.
2. The method of claim 1, wherein the pre-disaster subnetwork has the same network structure as the post-disaster subnetwork; wherein the pre-disaster subnetwork and/or the post-disaster subnetwork comprises: a hole residual network and an SE-ResNet network;
And/or, the second subnetwork comprises: hole residual network, SE-ResNet network, and pyramid pooling network.
3. The method of claim 1 or 2, wherein prior to said inputting the pre-disaster image and the post-disaster image into the first sub-network of a pre-trained building damage identification model, the method further comprises:
acquiring an initial sample set and constructing an initial building damage identification model;
preprocessing the initial sample set to obtain a preprocessed sample set;
training the initial building damage recognition model by using the preprocessed sample set to obtain a trained building damage recognition model.
4. The method of claim 3, wherein the pre-processing the initial sample set to obtain a pre-processed sample set further comprises:
distributing corresponding image-level damage level labels for sample images in the initial sample set; determining the corresponding number of uses of the sample images of each image-level damage level label according to the proportion of each image-level damage level label in the initial sample set; generating a preprocessed sample set according to the sample images in the initial sample set and their corresponding numbers of uses;
And/or performing a preset type of image processing on the sample images in the initial sample set; and generating a preprocessed sample set according to the sample image in the initial sample set and the sample image after image processing.
5. The method of claim 3, wherein the loss function employed by the building damage identification model in the model training process is: a weighted binary cross entropy loss function based on Dice loss and focal loss.
6. A machine learning based building damage identification device, comprising:
the data acquisition module is suitable for acquiring a pre-disaster image and a post-disaster image of the target area;
the input module is suitable for inputting the pre-disaster image and the post-disaster image into a pre-trained building damage recognition model;
the model prediction module is suitable for inputting the pre-disaster image and the post-disaster image into a first sub-network in a pre-trained building damage identification model; respectively inputting the output result of the first subnetwork into a pre-disaster subnetwork and a post-disaster subnetwork in the building damage identification model; inputting the output result of the pre-disaster subnetwork and the post-disaster subnetwork to a second subnetwork in the building damage identification model; and generating a building damage identification result corresponding to the target area according to the output result of the second sub-network.
7. A computing device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform operations corresponding to the machine learning based building damage identification method of any one of claims 1-5.
8. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the machine learning based building damage identification method of any one of claims 1-5.
CN202011324668.3A 2020-11-23 2020-11-23 Building damage identification method and device based on machine learning and computing equipment Active CN112396006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011324668.3A CN112396006B (en) 2020-11-23 2020-11-23 Building damage identification method and device based on machine learning and computing equipment

Publications (2)

Publication Number Publication Date
CN112396006A CN112396006A (en) 2021-02-23
CN112396006B true CN112396006B (en) 2023-11-14

Family

ID=74607776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011324668.3A Active CN112396006B (en) 2020-11-23 2020-11-23 Building damage identification method and device based on machine learning and computing equipment

Country Status (1)

Country Link
CN (1) CN112396006B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091369A (en) * 2014-07-23 2014-10-08 武汉大学 Unmanned aerial vehicle remote-sensing image building three-dimensional damage detection method
CN110287932A (en) * 2019-07-02 2019-09-27 中国科学院遥感与数字地球研究所 Road blockage information extraction based on deep learning image semantic segmentation
CN110728665A (en) * 2019-09-30 2020-01-24 西安电子科技大学 SAR image change detection method based on parallel probabilistic neural network
CN111126308A (en) * 2019-12-26 2020-05-08 西南交通大学 Automatic damaged building identification method combining pre-disaster remote sensing image information and post-disaster remote sensing image information
CN112241764A (en) * 2020-10-23 2021-01-19 北京百度网讯科技有限公司 Image recognition method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fei Zhao. Building Damage Evaluation from Satellite Imagery using Deep Learning. https://ieeexplore.ieee.org/document/9191541, 2020, pp. 82-89. *


Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN107945204B (en) Pixel-level image matting method based on a generative adversarial network
CN108647585B (en) Traffic sign detection method based on a multi-scale recurrent attention network
CN111738110A (en) Remote sensing image vehicle target detection method based on multi-scale attention mechanism
US11308714B1 (en) Artificial intelligence system for identifying and assessing attributes of a property shown in aerial imagery
CN111027576B (en) Co-saliency detection method based on a co-saliency generative adversarial network
CN112598045A (en) Method for training neural network, image recognition method and image recognition device
CN113361645B (en) Target detection model construction method and system based on meta learning and knowledge memory
CN113743417B (en) Semantic segmentation method and semantic segmentation device
CN115222946B (en) Single-stage instance image segmentation method and device and computer equipment
CN112465709B (en) Image enhancement method, device, storage medium and equipment
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN113822951A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112149526B (en) Lane line detection method and system based on long-distance information fusion
CN116740364B (en) Image semantic segmentation method based on reference mechanism
CN111179270A (en) Image co-segmentation method and device based on attention mechanism
CN112215100A (en) Target detection method for degraded image under unbalanced training sample
Guo et al. Salient object detection from low contrast images based on local contrast enhancing and non-local feature learning
CN115115863A (en) Water surface multi-scale target detection method, device and system and storage medium
CN114998610A (en) Target detection method, device, equipment and storage medium
CN114596503A (en) Road extraction method based on remote sensing satellite image
CN113793341A (en) Automatic driving scene semantic segmentation method, electronic device and readable medium
CN115131621A (en) Image quality evaluation method and device
CN112396006B (en) Building damage identification method and device based on machine learning and computing equipment
CN117115616A (en) Real-time low-illumination image target detection method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant