CN117689481A - Natural disaster insurance processing method and system based on unmanned aerial vehicle video data - Google Patents

Natural disaster insurance processing method and system based on unmanned aerial vehicle video data

Info

Publication number: CN117689481A
Application number: CN202410156344.5A
Authority: CN (China)
Prior art keywords: video, image, unmanned aerial vehicle, disaster
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN117689481B (en)
Inventors: 邓可, 高云, 肖振峰
Current Assignee: Guoren Property Insurance Co ltd
Original Assignee: Guoren Property Insurance Co ltd
Events: application filed by Guoren Property Insurance Co ltd; priority to CN202410156344.5A; publication of CN117689481A; application granted; publication of CN117689481B


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40: Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a natural disaster insurance processing method and system based on unmanned aerial vehicle video data. The method comprises: using an unmanned aerial vehicle to collect aerial video images after a natural disaster occurs; performing preliminary preprocessing of the images; separating water areas, farmland, buildings and roads in the video frame images according to the color features of the video images; extracting key features from the identified farmland areas to form feature vectors; inputting the formed feature vectors into a trained convolutional neural network for processing; having the convolutional neural network output the specific location of the affected farmland, the type and area of the affected crops, the crop yield loss, and disaster characteristics including flooding, drought and insect damage; and automatically integrating the recognition results into the insurance company's claims settlement system through an automated claims processing flow. By combining unmanned aerial vehicle technology with a convolutional neural network, the method automates and refines the traditional disaster loss assessment process, improving the efficiency of claims processing and the accuracy and reliability of the assessment.

Description

Natural disaster insurance processing method and system based on unmanned aerial vehicle video data
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to a natural disaster insurance processing method and system based on unmanned aerial vehicle video data.
Background
In modern agricultural insurance, especially in relation to natural disasters (such as floods, drought and insect pests), accurate and rapid assessment of crop losses is of paramount importance. Traditional loss assessment methods rely on ground surveys and manual analysis, which are time-consuming, labor-intensive and inefficient in the event of a large-scale disaster. Developing an efficient, automated loss assessment system is therefore particularly urgent. With the development of unmanned aerial vehicle (UAV) technology, UAVs have become an important tool in agriculture and disaster management. A UAV can quickly cover large areas and collect high-resolution aerial video data, providing a new perspective for disaster assessment. However, how to effectively extract useful information from massive amounts of UAV video data and accurately evaluate crop losses remains one of the main challenges currently faced.
In recent years, convolutional neural networks (CNNs) have achieved significant results in image processing and pattern recognition. A CNN can automatically learn and extract image features and is well suited to processing complex image data. Combining UAV technology with CNNs to develop a natural disaster insurance processing method based on UAV video data is therefore of great significance. However, applying a convolutional neural network directly does not produce the desired results: the training and recognition structure of the network must be adapted to the characteristics of UAV video data, the network input and processing differ greatly across application scenarios, and no existing scheme is tailored to the UAV scenario. By combining UAV technology with a convolutional neural network, the traditional disaster loss assessment process can be automated and refined, the efficiency of claims processing improved, and the accuracy and reliability of assessment increased. The approach has broad application potential and can be extended to other types of natural disaster assessment and management.
Disclosure of Invention
In view of the problems in the prior art, the invention provides a natural disaster insurance processing method and system based on unmanned aerial vehicle video data. The method comprises: performing aerial photography of the affected area with an unmanned aerial vehicle after a natural disaster occurs and collecting high-definition aerial video images; preprocessing the images; separating water areas, farmland, buildings and roads in the video frame images according to the color features of the video images; for the identified farmland areas, extracting key features including the color, size and growth density of the crops, and combining them with the season, the real-time month and the longitude and latitude to form a feature vector; inputting the formed feature vector into a trained convolutional neural network to process the video images; having the convolutional neural network output the specific location of the affected farmland, the type and area of the affected crops, the crop yield loss and the identified disaster characteristics, where the disaster characteristics include flooding, drought and insect damage; and automatically integrating the recognition results into the insurance company's claims settlement system through an automated claims processing flow. By combining unmanned aerial vehicle technology with a convolutional neural network, the method automates and refines the traditional disaster loss assessment process, improving the efficiency of claims processing and the accuracy and reliability of the assessment.
A natural disaster insurance processing method based on unmanned aerial vehicle video data comprises the following steps:
step S1: using an unmanned aerial vehicle to carry out aerial photography on an affected area after natural disasters occur, and collecting aerial photography video images;
step S2: the aerial video image is transmitted to a video image processing unit in real time through a wireless network, and the video image processing unit extracts the range of the disaster area and the crop damage condition from the aerial video;
step S21: preprocessing, including image denoising, video format conversion and resolution adjustment;
step S22: separating a water area, a farmland, a building and a road in the video frame image according to the color characteristics of the video image by using a Canny edge detector;
step S3: determining a damage degree caused by the disaster from the image data provided from the video image processing unit;
step S31: for the identified farmland area, extracting key characteristics including the color, size and growth density of crops, and forming a characteristic vector by combining seasons, real-time months and longitudes and latitudes;
step S32: inputting the formed feature vector into a trained convolutional neural network for processing;
The activation function f(x) adopted by the convolutional neural network is an improved tanh function that incorporates the unmanned aerial vehicle's video resolution, flight altitude and capture location, where x is the element feature value input to the activation function, R denotes the video resolution of the unmanned aerial vehicle, e is the base of the natural logarithm, h is the unmanned aerial vehicle flight altitude, w_r is the resolution weight coefficient, w_g is the geographic location weight coefficient, and g is the geographic location parameter determined by lat and lon, the latitude and longitude coordinate values of the video image capture location;
step S33: the convolutional neural network outputs the specific position of a disaster-affected farmland, the type and area of the affected crops, the crop yield loss and identifying disaster characteristics, wherein the disaster characteristics comprise flood, drought and insect attack;
step S4: automatically integrating the recognition results into the insurance company's claims settlement system through an automated claims processing flow.
Preferably, the step S21: preprocessing, including denoising processing by Gaussian filtering, converting a video format into an AVI format, and reducing the resolution of an original high-definition video, namely reducing 3840x2160 pixels of the original resolution of an aerial video to 1920x1080 pixels.
Preferably, the step S22: separating waters, farmlands, buildings, roads in video frames from video image color features using a Canny edge detector, comprising: firstly, identifying different areas according to a color range by adopting a color filter, wherein a water area is blue or dark gray, a farmland is green, and buildings and roads are gray or brown; secondly, applying a Canny edge detection algorithm to carry out edge detection; finally, using a template matching technology, matching in a template set for each detected edge contour to identify whether the shape belongs to a natural structure or an artificial structure, wherein the natural structure comprises farmlands and water areas, and the artificial structure comprises buildings and roads.
Preferably, the step S21: performing preprocessing, further including performing image stabilization processing, including:
first, an optical flow algorithm is utilized to track the pixel point motion between continuous frames in the video sequence so as to determine the motion pattern in the video, and the optical flow vector v is estimated as:
v = (AᵀA + λI)⁻¹ Aᵀ b
wherein A is a matrix composed of the gradients of the video frame image, representing the spatial gradients of the pixels within a window; T denotes matrix transposition; I is a matrix or value representing the intensity of ambient illumination, used to adjust the optical flow estimate to reflect the effect of illumination variation; λ is the weight coefficient of illumination intensity; and b is the temporal gradient vector, representing the change of the pixel values within the window over time;
secondly, calculating offset and rotation quantity of each frame of image caused by unmanned aerial vehicle motion according to the optical flow data;
geometrically transforming each frame of images to compensate for these offsets and rotations, thereby stabilizing the image sequence;
detecting and removing boundary distortion or black edges in the stabilized video sequence; frames with an inconsistent frame rate are processed using an intra-frame interpolation technique.
Preferably, the step S32 of inputting the formed feature vector into a trained convolutional neural network to process the video image comprises: inputting the preprocessed video image and the formed feature vector into the convolutional neural network as two independent input channels; extracting the spatial features of the video image and the abstract features of the feature vector respectively through the convolutional layers and pooling layers of the network; fusing the spatial features of the video image and the abstract features of the feature vector in the middle layers of the network to form comprehensive features; and processing the comprehensive features through the fully connected layers of the network.
The application also provides a natural disaster insurance processing system based on unmanned aerial vehicle video data, including:
the unmanned aerial vehicle video acquisition module is used for performing aerial photography on the affected area after the natural disasters occur, and collecting aerial photography video images;
the wireless transmission module is used for transmitting the aerial video image to the video image processing unit in real time through a wireless network, and the video image processing unit extracts the range of the disaster area and the crop damage condition from the aerial video;
the preprocessing module comprises image denoising, video format conversion and resolution adjustment;
the Canny edge detector module separates water areas, farmlands, buildings and roads in the video frame images according to the color characteristics of the video images;
the damage degree determining module: determining a damage degree caused by the disaster from the image data provided from the video image processing unit;
the key feature extraction module: for the identified farmland area, extracting key characteristics including the color, size and growth density of crops, and forming a characteristic vector by combining seasons, real-time months and longitudes and latitudes;
the convolutional neural network processing module: inputting the formed feature vector into a trained convolutional neural network to process the video image;
The activation function f(x) adopted by the convolutional neural network is an improved tanh function that incorporates the unmanned aerial vehicle's video resolution, flight altitude and capture location, where x is the element feature value input to the activation function, R denotes the video resolution of the unmanned aerial vehicle, e is the base of the natural logarithm, h is the unmanned aerial vehicle flight altitude, w_r is the resolution weight coefficient, w_g is the geographic location weight coefficient, and g is the geographic location parameter determined by lat and lon, the latitude and longitude coordinate values of the video image capture location;
the convolutional neural network output module: the convolutional neural network outputs the specific position of a disaster-affected farmland, the type and area of the affected crops, the crop yield loss and identifying disaster characteristics, wherein the disaster characteristics comprise flood, drought and insect attack;
and the insurance automation claim settlement processing flow module automatically integrates the identification result into an insurance company claim settlement system.
Preferably, the preprocessing module comprises video format conversion, resolution adjustment and image denoising to facilitate subsequent analysis and processing, wherein the video format is converted to AVI, the original high-definition video is reduced in resolution, i.e., the original aerial-video resolution of 3840x2160 pixels is reduced to 1920x1080 pixels, and denoising is performed by Gaussian filtering.
Preferably, the Canny edge detector module separates water areas, farmlands, buildings and roads in the video frame according to the color characteristics of the video image, and comprises: firstly, identifying different areas according to a color range by adopting a color filter, wherein a water area is blue or dark gray, a farmland is green, and buildings and roads are gray or brown; secondly, applying a Canny edge detection algorithm to carry out edge detection; finally, using a template matching technology, matching in a template set for each detected edge contour to identify whether the shape belongs to a natural structure or an artificial structure, wherein the natural structure comprises farmlands and water areas, and the artificial structure comprises buildings and roads.
Preferably, the preprocessing module further includes performing image stabilization processing, including:
first, an optical flow algorithm is used to track pixel point motion between successive frames in the video sequence to determine the motion pattern in the video, and the optical flow vector v is estimated as:
v = (AᵀA + λI)⁻¹ Aᵀ b
wherein A is a matrix composed of the gradients of the video frame image, representing the spatial gradients of the pixels within a window; T denotes matrix transposition; I is a matrix or value representing the intensity of ambient illumination, used to adjust the optical flow estimate to reflect the effect of illumination variation; λ is the weight coefficient of illumination intensity; and b is the temporal gradient vector, representing the change of the pixel values within the window over time;
secondly, calculating offset and rotation quantity of each frame of image caused by unmanned aerial vehicle motion according to the optical flow data;
geometrically transforming each frame of images to compensate for these offsets and rotations, thereby stabilizing the image sequence;
detecting and removing boundary distortion or black edges in the stabilized video sequence; frames with an inconsistent frame rate are processed using an intra-frame interpolation technique.
Preferably, the convolutional neural network processing module inputs the formed feature vector into a trained convolutional neural network to process the video image: the preprocessed video image and the formed feature vector are input into the convolutional neural network as two independent input channels; the convolutional layers and pooling layers of the network extract the spatial features of the video image and the abstract features of the feature vector respectively; the two sets of features are fused in the middle layers of the network to form comprehensive features; and the comprehensive features are processed through the fully connected layers of the network.
The invention provides a natural disaster insurance processing method and system based on unmanned aerial vehicle video data, which can realize the following beneficial technical effects:
1. according to the method, an unmanned plane technology is combined with a convolutional neural network, a traditional disaster loss evaluation process is automated and accurate, the convolutional neural network outputs the specific position of a disaster-affected farmland, the type and the area of crops affected, the crop yield loss and disaster feature identification, and the disaster feature identification comprises flooding, drought and insect damage; the automatic claim settlement processing flow integrates the identification result into the claim settlement system of the insurance company automatically, and by the method, the efficiency of claim settlement processing is improved, and the accuracy and reliability of evaluation are improved.
2. The convolutional neural network processing module inputs the formed feature vector into a trained convolutional neural network to process the video image; the activation function f(x) adopted by the convolutional neural network is an improved activation function formed by improving the tanh activation function: the longitude and latitude, the video resolution of the unmanned aerial vehicle and the flight altitude of the unmanned aerial vehicle are added into the training process, so that judgment accuracy is greatly improved. Here x is the element feature value input to the activation function, R denotes the video resolution of the unmanned aerial vehicle, e is the base of the natural logarithm, h is the unmanned aerial vehicle flight altitude, w_r is the resolution weight coefficient, w_g is the geographic location weight coefficient, and g is the geographic location parameter determined by lat and lon, the latitude and longitude coordinate values of the video image capture location. This greatly improves the degree of intelligence and the accuracy of judging the type and area of affected crops and the crop yield loss.
3. The present application uses an optical flow algorithm to track pixel motion between successive frames of a video sequence to determine the motion pattern in the video and estimate the optical flow vector v. The matrix (or value) I of ambient illumination intensity, its weight coefficient λ and the temporal gradient vector b are added into the optical flow calculation, which greatly strengthens the stability estimate and makes the video frames smoother and more accurate.
4. The preprocessed video image and the formed feature vector are input into the trained convolutional neural network as two independent input channels; the convolutional layers and pooling layers of the network extract the spatial features of the video image and the abstract features of the feature vector respectively; these are fused in the middle layers of the network to form comprehensive features, which are processed through the fully connected layers of the network. This way of combining features greatly improves the degree of feature fusion, maximizes the features obtained from the video frame images, and improves judgment accuracy.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of steps of a natural disaster insurance processing method based on unmanned aerial vehicle video data;
fig. 2 is a schematic diagram of a natural disaster insurance processing system based on unmanned aerial vehicle video data.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1:
in order to solve the above-mentioned problems in the prior art, as shown in fig. 1, a natural disaster insurance processing method based on unmanned aerial vehicle video data includes:
Step S1: using an unmanned aerial vehicle to carry out aerial photography on an affected area after natural disasters occur, and collecting high-definition aerial photography video images; in some embodiments, flood disasters occur in agricultural areas, resulting in damage to large areas of farmland. In order to quickly evaluate losses and initiate an insurance claim process, detailed image data of the disaster area needs to be collected. Deploying unmanned aerial vehicle: selecting a suitable unmanned aerial vehicle: unmanned aerial vehicles equipped with high-resolution cameras, such as DJI Phantom 4 Pro or similar models, are selected in view of the need to collect high-definition video. These drones can shoot 4K resolution video, ensuring image sharpness. Setting a flight plan: and planning a flight route of the unmanned aerial vehicle according to the geographic information of the disaster area. Ensuring coverage of all critical areas, including different types of farms and areas where disaster is most severe. Performing aerial photography tasks: starting the unmanned aerial vehicle, and automatically flying according to a preset flight plan. In the whole flight process, the camera of the unmanned aerial vehicle continuously records high-definition videos. In certain critical areas, such as farms known to be severely damaged or of special value, the drone may reduce the flight altitude for more detailed shooting. Video data collection: the unmanned aerial vehicle returns after completing the predetermined flight path. And downloading the recorded video data from the unmanned aerial vehicle camera. And (3) performing preliminary inspection on the video data to ensure that all key areas are covered and that the video quality meets the analysis requirements.
Step S2: the aerial video image is transmitted to a video image processing unit in real time through a wireless network, and the video image processing unit extracts the range of the disaster area and the crop damage condition from the aerial video; real-time transmission of video data: in some embodiments, the real-time video data transmission system is started while the drone is returning. This may be achieved by a wireless transmission module (e.g., wi-Fi or 4G LTE module) on the drone, sending the video data to the ground station in real time or uploading directly to the cloud server. To ensure stability and speed during transmission, high bandwidth wireless networks are used and real-time compression of video data is performed as necessary to reduce the bandwidth required for transmission. Setting of a video image processing unit: the video image processing unit may be a high-performance computer or server equipped with sufficient processing power and memory space for processing large amounts of video data. The unit has specialized image processing software and algorithms installed thereon for analyzing the video data. Extracting disaster areas and damaged crops: after the video data arrives at the processing unit, format conversion and decoding are carried out first, so that the compatibility of the video format and processing software is ensured. Each frame of image in the video is analyzed using an image recognition algorithm. The algorithm first identifies and marks areas of the farmland and then evaluates the extent of disaster recovery in those areas. For example, the algorithm may identify characteristics of flooded areas, color changes of damaged crops, and the like. For each frame of image, the algorithm outputs the location of the affected area and the estimated damage level. This data can then be used to generate detailed disaster assessment reports.
Step S21: preprocessing, including video format conversion, resolution adjustment and image denoising, so as to facilitate subsequent analysis and processing; video format conversion: in some embodiments, the original video format captured by the drone is the MOV format, which is a common high definition video format. To ensure that video processing software can process these videos compatibly and efficiently, it is necessary to convert them into a more general format, such as AVI or MP4. Format conversion is performed using video conversion software (e.g., FFmpeg). Such software is able to quickly and without loss of quality convert video from one format to another. Resolution adjustment: assume that the original video has a resolution of 4K (3840 x2160 pixels). While high resolution video provides more detail, their file size is larger and more time consuming to process. To balance the details and processing speed, the resolution of the video is adjusted to 1080p (1920 x1080 pixels). The target resolution is set in the video conversion software and the videos are batched to ensure that all videos are adjusted to the same resolution. Image denoising: noise may be present in the video due to the fact that the drone may be affected by vibration or lighting conditions during flight. In order to improve the image quality, the video needs to be subjected to denoising processing. Each frame in the video is processed using gaussian filtering or other denoising algorithms. Gaussian filtering is a commonly used image smoothing technique that can effectively remove image noise while preserving important structural information.
Step S22: separating a water area, a farmland, a building and a road in the video frame image according to the color characteristics of the video image by using a Canny edge detector; in some embodiments, in processing flood disaster video data collected by unmanned aerial vehicles, it is desirable to distinguish between different areas in the video, such as waters, farms, buildings and roads. For this purpose, a Canny edge detector is used in combination with color feature recognition to achieve this goal. Color filter application: color filters are applied to the preprocessed video frames to identify the different regions. Waters are usually blue or dark grey in color, identified by setting a specific color threshold. Farms typically appear green, identified by the hue, saturation and brightness range of the green. The building and the road are then grey or brown in color, which can be detected by the corresponding color range.
The Canny edge detection algorithm applies: after applying the color filters, each color filtered region is further processed using a Canny edge detection algorithm. The Canny algorithm can effectively detect edges in the image and help distinguish between adjacent different regions. Parameters of the Canny algorithm, such as thresholds, are adjusted to optimize the effect of edge detection. Shape recognition and region segmentation: shape recognition techniques are applied to distinguish between natural and man-made structures. For example, farms typically present regular rectangles or polygons, while natural body edges are more irregular. Each detected region is identified and labeled using a template matching technique or shape-based classification algorithm. After a flood disaster, edges of different areas have been identified by a Canny edge detector in video data collected using the drone. To further accurately identify these areas (e.g., waters, farms, buildings, and roads), we will apply template matching techniques and shape classification algorithms. In some embodiments, a template matching technique applies: a set of templates of field shapes is prepared, including typical farmland shapes (e.g., rectangles, irregular polygons), natural shapes of water areas (e.g., irregular shapes), and specific shapes of buildings and roads (e.g., rectangles, lines). For each region in the video identified by color filtering and edge detection, a template matching technique is used for comparison. This typically involves calculating the similarity between the shape of the region and the individual templates, and selecting the template with the highest similarity as the matching result. The shape classification algorithm applies: existing shape classification algorithms are developed or used that are capable of classifying different regions according to characteristics of the shape (e.g., edge length, angle, curvature, etc.). For each region in the video, its shape features are extracted and its class (e.g., farmland, waters, buildings or roads) is determined using the classification algorithm. Marking and verification: in the video image, each identified region is marked according to its category, for example, different colors or labels are used to represent different types of regions. And selecting part of video frames for manual verification to ensure the accuracy of template matching and shape classification. The parameters of the template set or classification algorithm are adjusted as necessary.
Step S3: determining a damage degree caused by the disaster from the image data provided from the video image processing unit; disaster area identification: using previous processing steps, disaster-stricken areas, waters, buildings and roads have been identified. In some embodiments, these areas are further analyzed, particularly those marked as farms and buildings, which are typically where flood disasters are most severe. Evaluation of the degree of damage: for the farmland areas, the extent of damage was assessed. This can be achieved by analysing the extent of coverage of the crop, colour change (e.g. green to brown) and flooding. For buildings and roads, it is evaluated whether they are flooded and whether the structure has obvious signs of damage. Image analysis technique application: image analysis techniques, such as pixel-level comparisons and object recognition algorithms, are applied to automate the process of assessing the extent of damage. For example, by comparing images of the same area before and after a flood, the area of change can be identified and the area and extent of damage to the crop can be estimated.
Step S31: for the identified farmland area, extracting key characteristics including the color, size and growth density of crops, and forming a characteristic vector by combining seasons, real-time months and longitudes and latitudes;
In some embodiments, video data of the affected area of the farmland is collected using a drone after a flood disaster. Now, key features of crops need to be extracted from the video frames, and feature vectors are formed by combining information such as seasons, real-time months, longitude and latitude and the like. Extracting key features of video frame images: and extracting the features of the farmland area from the unmanned aerial vehicle video frame by using an image processing technology. Color extraction: color analysis algorithms (e.g., HSV color space analysis) are used to evaluate the color of crops. Healthy crops often show a vivid green color, and may turn yellow or brown after disaster. Size and growth density measurements: image segmentation techniques and morphological operations (e.g., dilation, erosion) are applied to identify individual plants, and their size and distribution density are measured. This can be estimated by calculating the coverage of the green pixels. Acquiring time and geographic information: month information: the real-time month when the drone collects the data may be obtained directly from the flight log or metadata. For example, if the drone flies at 4/15 of 2024, the month information is "4 months". Longitude and latitude information: unmanned aerial vehicles are generally equipped with GPS devices that can record the precise geographic location of each video frame at the time of capture. Such information is typically stored in metadata of the video, and latitude and longitude information can be extracted by parsing the metadata. Formation of feature vectors: and combining the characteristics of crop color, size, growth density and the like extracted from the video frame with the acquired month and longitude and latitude information to form a characteristic vector. For example, the feature vector may be an array containing color index, average plant size, vegetation density, month (e.g., 4), longitude and latitude values. Normalization and preprocessing of feature vectors: all data in the feature vector are normalized and normalized to ensure that they are suitable for subsequent machine learning or deep learning analysis. For example, all numerical features may be scaled to a range of 0 to 1, with month features being one-hot coded. By the embodiment, key features of crops can be accurately extracted from unmanned aerial vehicle video data, and the features are combined with important time and geographic information, so that a foundation is provided for deeper data analysis and damage assessment.
In some embodiments, after experiencing a natural disaster such as a flood, the drone video data has been used to identify the area of the farmland that was affected by the disaster. The next step is to extract key features from these identified farmland areas and combine other relevant information to form feature vectors for subsequent depth analysis and damage assessment. In some embodiments, crop feature extraction: the critical features, including the color, size and growth density of the crop, are extracted from the identified field area in the unmanned aerial vehicle video using image processing techniques. For example, color recognition techniques are used to analyze the health of crops. Healthy crops often appear vividly green, and may turn yellow or brown after a disaster. The average height and the intensity of crops are estimated by using object size measurement technology, and the information can be used as an index of growth density and damage degree. Environmental factors are considered: seasonal and timing information about the field area, such as the current month, and geographic location data (latitude and longitude) of the field is collected. This information is critical to understanding the growth cycle of the crop and the seasonal affection that may be experienced. Feature vector formation: and integrating the extracted crop characteristics with the environmental factor data to form a comprehensive characteristic vector. For example, the feature vector may include a color index, average height, growth density, current month, latitude and longitude, etc. of the crop. Data preprocessing and normalization: the data in the eigenvectors are preprocessed and normalized to ensure that they are suitable for input into the convolutional neural network. This may include adjusting the range of values, processing missing data, and normalizing the data format. By the embodiment, key farmland characteristics and environment information can be effectively extracted and combined from the unmanned aerial vehicle video data, and a rich data base is provided for subsequent deep analysis and damage evaluation.
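The feature-vector assembly and normalization described above could be sketched as follows; the value ranges used for scaling (e.g., an assumed maximum plant size of 2 m) and the 12-way one-hot month encoding are illustrative assumptions.

```python
import numpy as np

def build_feature_vector(color_index, avg_plant_size, vegetation_density,
                         month, latitude, longitude):
    """Assemble and normalise one farmland feature vector (illustrative ranges)."""
    # Scale numeric features to [0, 1] using assumed value ranges.
    numeric = np.array([
        color_index,                      # assumed already in [0, 1]
        avg_plant_size / 2.0,             # assumed maximum plant size of 2 m
        vegetation_density,               # fraction of green pixels, [0, 1]
        (latitude + 90.0) / 180.0,        # latitude scaled to [0, 1]
        (longitude + 180.0) / 360.0,      # longitude scaled to [0, 1]
    ])
    month_onehot = np.zeros(12)
    month_onehot[month - 1] = 1.0         # month encoded as one-hot
    return np.concatenate([numeric, month_onehot])   # 17-dimensional vector
```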
Step S32: inputting the formed feature vector into a trained convolutional neural network to process the video image; preparation of the convolutional neural network: in some embodiments, the structural design of the convolutional neural network is as follows. Input layer: the input to the network includes two parts, one part being the preprocessed video image and the other part being the feature vector. The video image input channel adopts a plurality of convolution layers, and the feature vector input is connected directly to the fully connected layers. Convolution layers: for the video image data, multiple convolution layers are designed to extract the spatial features of the image. Each convolution layer is followed by a pooling layer to reduce the dimensions and enhance the features. Fully connected layers: after the convolution layers, several fully connected layers are added for integrating the data in the feature vector with the image features extracted by the convolution layers. Output layer: the network ends with an output layer that gives a prediction of the extent of crop damage, such as a classification (mild, moderate, severe) or a percentage of loss. The network training process comprises the following steps. Data preparation: a training dataset is prepared, including historical drone video images and corresponding damage assessment reports; the dataset should contain disaster situations of various degrees. Preprocessing: the video images are subjected to necessary preprocessing such as resizing and normalization, and the data format of the feature vector is ensured to be compatible with the network input. Training the network: the network is trained using a supervised learning approach; the input includes the video image and the feature vector, with the label being the corresponding degree of damage. During training, the network performance is optimized by adjusting the learning rate, regularization parameters and the like. Verification and tuning: the performance of the network is tested using a validation dataset, and the network structure or training parameters are adjusted according to the verification results to improve accuracy and reduce overfitting.
In some embodiments, a convolutional neural network model suitable for image and feature vector analysis is selected or designed. The network should be able to process not only image data, but also numerical and classification data. For example, a network architecture with multiple convolutional layers and fully-connected layers is used. The convolution layer is used to process video image data, while the full link layer is used to integrate image features and additional feature vectors. Training a network: the neural network is trained using historical data, including past disaster videos and corresponding damage-assessment reports. This allows the network to learn how to identify disaster conditions from the images and feature vectors. In the training process, network parameters (such as learning rate, layer number, filter size, etc.) are continuously adjusted to achieve optimal performance. Feature vector and video image input: the extracted feature vectors and the preprocessed video images are fed to the network as input data. For video images, a particular key frame may be selected, or a series of representative images may be extracted from the video. The feature vectors may be input directly to the fully connected layer of the network while the video images are input to the convolutional layer. Network processing and output: the network processes the input data through its multiple layers, extracting and learning key information in the images and feature vectors. The output of the network may be an estimate of the extent of crop damage, for example, the output may be a classification of the extent of disaster (mild, moderate, severe) or a percentage of loss.
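A minimal PyTorch sketch of such a two-channel network is shown below; the layer counts, channel widths and the 17-dimensional feature vector (matching the normalization sketch above) are assumptions rather than the patented configuration, and the output here is a three-class damage grade (mild, moderate, severe).

```python
import torch
import torch.nn as nn

class DualInputDamageNet(nn.Module):
    """Sketch of a two-channel network: a convolutional branch for video frames
    and a dense branch for the feature vector, fused in the middle of the network.
    All layer sizes are illustrative assumptions."""
    def __init__(self, feature_dim=17, num_damage_classes=3):
        super().__init__()
        self.image_branch = nn.Sequential(               # spatial features of the frame
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
        )
        self.vector_branch = nn.Sequential(               # abstract features of the vector
            nn.Linear(feature_dim, 32), nn.ReLU(),
        )
        self.head = nn.Sequential(                        # fusion + fully connected layers
            nn.Linear(32 * 8 * 8 + 32, 128), nn.ReLU(),
            nn.Linear(128, num_damage_classes),           # e.g. mild / moderate / severe
        )

    def forward(self, image, feature_vector):
        fused = torch.cat([self.image_branch(image),
                           self.vector_branch(feature_vector)], dim=1)
        return self.head(fused)
```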
The activation function f(x) adopted by the convolutional neural network is an improved tanh function that incorporates the unmanned aerial vehicle's video resolution, flight altitude and capture location, where x is the element feature value input to the activation function, R denotes the video resolution of the unmanned aerial vehicle, e is the base of the natural logarithm, h is the unmanned aerial vehicle flight altitude, w_r is the resolution weight coefficient, w_g is the geographic location weight coefficient, and g is the geographic location parameter determined by lat and lon, the latitude and longitude coordinate values of the video image capture location;
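The exact formula of the improved activation function is not reproduced in this text; the snippet below shows one plausible form in which the UAV video resolution, flight altitude and capture coordinates modulate a tanh response, purely as an assumption for illustration.

```python
import math

def improved_tanh(x, resolution, altitude, lat, lon, w_res=0.1, w_geo=0.05):
    """One assumed form of a tanh activation modulated by video resolution (R),
    flight altitude (h) and a geographic parameter g derived from lat/lon.
    Not the formula disclosed in the patent, only an illustrative guess."""
    g = math.cos(math.radians(lat)) * math.cos(math.radians(lon))   # assumed geo parameter
    scale = 1.0 + w_res * math.log(1.0 + resolution / (altitude + 1e-6)) + w_geo * g
    return math.tanh(scale * x)
```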
step S33: the convolutional neural network outputs the specific position of a disaster-affected farmland, the type and area of the affected crops, the crop yield loss and identifying disaster characteristics, wherein the disaster characteristics comprise flood, drought and insect attack; in some embodiments, the output of the convolutional neural network includes several parts: the location of the affected farmland, the type of crop affected, the area of damaged crop, the estimated yield loss, and the identified disaster characteristics (e.g., flooding, drought, insect damage). For example, the network may output a thermodynamic diagram indicating the most severely affected area of the farmland; classification results of one crop type, such as rice, wheat, etc.; an estimate of damaged area, and a percentage of yield loss; and a disaster type recognition result. Disaster location and crop type analysis: the thermodynamic diagram of the network output is used to determine the most severe area to be subjected to the disaster. For example, darker colored areas indicate a higher degree of damage. According to the classification result of the crop types, the types of the crops mainly planted in each disaster-stricken area can be determined. Damage extent and yield loss assessment: the extent of damage to each field area was assessed based on the area damaged and percent yield loss from the network output. For example, if a particular paddy field is marked with a 50% yield loss, this means that the rice yield in that area is expected to be reduced by half as compared to normal. Disaster feature recognition application: based on the identified disaster characteristics, such as flooding, drought or insect damage, the cause and possible long term impact of the disaster can be further analyzed. For example, if an area is identified as being affected by a flood, the relevant departments may need to be concerned with rainfall forecast and water level conditions for several days in the future.
Step S4: and (3) automatically integrating the identification result into an insurance company claim settlement system by an automatic claim settlement processing flow. After flood disasters, unmanned aerial vehicle video data are analyzed by using a convolutional neural network, and the damage degree of crops is estimated. It is now desirable to automatically integrate these analysis results into the insurance company's claim settlement system to expedite the claim settlement process. Formatting and normalization of analysis results: the data output by the convolutional neural network (including the disaster area map, the type of crop damage, the damage degree, the yield loss percentage, etc.) is formatted into a format that can be identified and processed by the insurance company claims system. For example, the data may be converted to JSON or XML format, which contains all necessary information such as latitude and longitude of the disaster area, crop type, damage degree, etc. Data is transmitted to the insurance company system: and uploading the processed data to an insurance company claim settlement system by using a secure network connection. Ensuring compliance with all data protection during data transmission. Data processing in the claim system: in an insurance company claim settlement system, an automated script or program is provided to receive, parse, and process the uploaded data. The system automatically determines the amount of the claim to be paid, generates a claim report, and distributes the claim settlement request to the corresponding claim settlement processor according to the uploaded data. Claims decision and notification: the claims processor reviews the claims reports generated by the system and the recommended claims amount, making the necessary adjustments. Once the claims amount is determined, the system automatically notifies the farmer or related beneficiary about the claims decision and subsequent flows.
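A sketch of the result formatting and hand-off to a claims system follows; the endpoint URL, field names and JSON layout are illustrative assumptions rather than an actual insurer interface.

```python
import json
import urllib.request

def submit_claim_payload(assessment,
                         endpoint="https://claims.example.com/api/drone-assessments"):
    """Package one CNN assessment result as JSON and push it to a claims system.
    The endpoint URL and field names are assumptions for illustration."""
    payload = {
        "field_location": {"lat": assessment["lat"], "lon": assessment["lon"]},
        "crop_type": assessment["crop_type"],
        "damaged_area_ha": assessment["damaged_area_ha"],
        "yield_loss_percent": assessment["yield_loss_percent"],
        "disaster_type": assessment["disaster_type"],    # flood / drought / pest
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:            # HTTPS transfer
        return resp.status
```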
In some embodiments, the step S21: preprocessing, including video format conversion, resolution adjustment and image denoising, so as to facilitate subsequent analysis and processing, wherein the video format is converted into an AVI format, the resolution of the original high-definition video is reduced, namely, the original resolution 3840x2160 pixels of the aerial video are reduced to 1920x1080 pixels; and denoising by adopting Gaussian filtering. After a flood disaster, the drone is used to capture high definition video of the affected area. These videos require a series of preprocessing steps to facilitate subsequent analysis and processing. Video format conversion: the original video is recorded by the drone at 4K resolution (3840 x2160 pixels), possibly in MOV or MP4 format. In order to ensure compatibility of video processing software, it is necessary to convert the video format into AVI format. All original video files are converted to AVI format using a video conversion tool (e.g., FFmpeg) to execute a format conversion command. Resolution adjustment: while 4K video provides high definition, the file is bulky and time consuming to process. To optimize processing speed, video resolution is reduced to 1080p (1920 x1080 pixels). During the video conversion process, target resolution parameters are set and all video files are processed in batches to ensure that they have uniform resolution. Image denoising: because unmanned aerial vehicle probably receives vibrations when flying, noise appears in the video. In order to improve the image quality, a denoising process is performed on the video. And denoising by applying a Gaussian filter algorithm. Gaussian filtering is a commonly used image smoothing technique that can effectively reduce image noise while preserving important details. A gaussian filter is applied to each video frame using image processing software (e.g., openCV).
In some embodiments, the step S22: separating waters, farmlands, buildings, roads in video frames from video image color features using a Canny edge detector, comprising: firstly, identifying different areas according to a color range by adopting a color filter, wherein a water area is blue or dark gray, a farmland is green, and buildings and roads are gray or brown; when using color filters for region identification, this can be done according to typical color characteristics exhibited by different regions. Color recognition may be performed in the RGB (red green blue) or HSV (hue, saturation, brightness) color space, which is generally better suited for handling color recognition tasks, as it better conforms to the human perception of color. And (3) water area identification: RGB space: in the RGB color space, the water area usually appears blue, and its RGB values may be close to (0, 0, 255). Dark grey waters may exhibit low brightness in RGB and R, G, B values near each other. HSV space: in the HSV color space, the hue (H) range of blue is typically between about 210 ° and 240 °, and the saturation (S) and the brightness (V) can be adjusted as the case may be. The saturation of dark grey waters is lower as well as the brightness. And (3) farmland identification: RGB space: the green farmland is in RGB color space, which may have values close to (0, 255, 0). HSV space: the hue range of green is typically between about 70 ° and 140 °, with saturation and brightness adjusted according to the shade of the particular green. Building and road identification: RGB space: gray and brown buildings and roads may not be easily distinguished in RGB space because they may have moderate values in all three dimensions of R, G, B. HSV space: for gray, saturation is lower, while hue can cover a wider range; for brown colors, the hue may be between 15 ° and 30 °. Color filter application: by setting a specific color range threshold, a color filter is applied to each video frame in HSV space. For example, a threshold may be set to capture pixels with hues in the green range, thereby identifying areas of the farmland. Pixel regions screened according to the threshold may be labeled as corresponding region categories, e.g., regions meeting the green threshold range are labeled as farmlands.
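The HSV color filtering described above could be sketched as follows; note that OpenCV stores hue as 0-179 (degrees divided by two), so the degree ranges given in the text are halved here, and the saturation/value bounds are assumptions.

```python
import cv2

def segment_by_color(frame_bgr):
    """Label water and farmland pixels by HSV thresholds (illustrative bounds)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Blue water: roughly 210-240 degrees of hue -> 105-120 in OpenCV units.
    water_mask = cv2.inRange(hsv, (105, 50, 40), (120, 255, 255))
    # Green farmland: roughly 70-140 degrees of hue -> 35-70 in OpenCV units.
    farmland_mask = cv2.inRange(hsv, (35, 40, 40), (70, 255, 255))
    return water_mask, farmland_mask
```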
Secondly, applying a Canny edge detection algorithm to carry out edge detection; canny edge detection algorithm principle: noise reduction: first, the image is subjected to smoothing processing to remove noise. By applying a gaussian filter. Gradient calculation: the algorithm finds potential edges by computing gradients of the image. This involves calculating the rate at which the brightness of the image varies in the horizontal and vertical directions. Non-maximum suppression: this step aims at refining the potential edges. The algorithm traverses each pixel and removes those pixels that are not the brightest points of the edge. Double threshold detection: finally, the algorithm applies two thresholds to determine the true edge. Pixels below the low threshold are excluded, pixels above the high threshold are considered edges, and pixels between the two are determined based on their connectivity. Edge detection implementation: for each frame of image extracted from the video shot by the unmanned aerial vehicle, a gaussian filter is first applied to perform noise reduction processing. Next, the gradients for each frame of image are computed and non-maxima suppression is applied to refine the potential edges. Then, by setting appropriate high and low thresholds, a double threshold detection method is used to determine and highlight the true edges. Application of edge information: the highlighted edges can clearly indicate the outline of the disaster area, such as the edges of waters, boundaries of farmlands, outlines of damaged buildings, etc., by Canny edge detection. Such edge information may further be used to segment and classify disaster areas, helping to identify the extent and scope of the flood affected areas.
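A compact sketch of the edge-detection stage is shown below; cv2.Canny internally performs the gradient computation, non-maximum suppression and double thresholding described above, and the 50/150 thresholds and 5x5 Gaussian kernel are illustrative assumptions.

```python
import cv2

def detect_region_edges(frame_bgr, low_threshold=50, high_threshold=150):
    """Gaussian noise reduction followed by Canny edge detection and contour
    extraction (OpenCV 4.x return signature); thresholds are illustrative."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # noise reduction
    edges = cv2.Canny(blurred, low_threshold, high_threshold)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return edges, contours
```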
Finally, using a template matching technology, matching in a template set for each detected edge contour to identify whether the shape belongs to a natural structure or an artificial structure, wherein the natural structure comprises farmlands and water areas, and the artificial structure comprises buildings and roads. When processing unmanned video data after flood disasters, we need to distinguish between natural structures (e.g. farms, waters) and man-made structures (e.g. buildings, roads) in the video. The template matching technology is used as an identification method based on image similarity. Template matching technology principle: template matching is a process of finding small images (templates) in a large image. The algorithm slides the template image over the large image and calculates the similarity of both at each location. Similarity calculation: similarity is typically calculated by luminance correlation, for example using cross correlation or normalized cross correlation. Normalized cross-correlation can reduce the effects of illumination variation. Best match: the template, when slid over a large image, generates a similarity score at each location. The location of the highest score is considered where the template best matches the large image. Preparing a template: to identify natural and man-made structures, we prepare a series of template images, each template representing a particular structure type. For example, for natural structures, farmland and water templates of different shapes and sizes may be prepared; for man-made structures, various shapes of building and road templates may be included. Template matching is implemented: for each contour identified by Canny edge detection, we will find the best matching template in the set of templates. And carrying out similarity calculation on each contour and each template. This can be achieved by sliding each template over the contour area and calculating the similarity at each location. And determining which template has the highest similarity with each contour so as to judge whether the contour is a natural structure or an artificial structure. Results application: by means of template matching technology, edge contours in videos can be classified into farmlands, water areas, buildings or roads and the like.
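The text describes matching each detected contour against a template set; one lightweight way to sketch this on contours is Hu-moment shape matching with cv2.matchShapes, used here in place of pixel-level sliding-window correlation, with the labelled template contours assumed to be prepared in advance.

```python
import cv2

def classify_contour(contour, template_contours):
    """Match one detected contour against a labelled template set using Hu-moment
    shape matching; lower scores mean more similar shapes. The template set and
    labels ('farmland', 'water', 'building', 'road') are assumed inputs."""
    best_label, best_score = None, float("inf")
    for label, template in template_contours.items():
        score = cv2.matchShapes(contour, template, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best_label, best_score = label, score
    return best_label, best_score
```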
In some embodiments, the step S21: performing preprocessing, further including performing image stabilization processing, including:
first, an optical flow algorithm is used to track pixel point motion between successive frames in the video sequence to determine the motion pattern in the video, and the optical flow vector v is estimated as:
v = (AᵀA + λI)⁻¹ Aᵀ b
wherein A is a matrix composed of the gradients of the video frame image, representing the spatial gradients of the pixels within a window; T denotes matrix transposition; I is a matrix or value representing the intensity of ambient illumination, used to adjust the optical flow estimate to reflect the effect of illumination variation; λ is the weight coefficient of illumination intensity; and b is the temporal gradient vector, representing the change of the pixel values within the window over time;
Optical flow algorithm principle: Basic concept: optical flow refers to the pattern of movement of pixels in an image sequence over time. By analysing the pixel changes between two consecutive frames, the direction and distance of movement of each pixel can be estimated. Optical flow vector estimation: estimating optical flow vectors typically involves computing pixel gradients in space and time. This is done by modelling the change in pixel brightness over time, usually under the assumption that pixel brightness remains constant over a short period. Implementing the optical flow algorithm: Preprocessing: the video shot by the unmanned aerial vehicle is given preliminary processing, such as cropping, denoising and brightness adjustment, to prepare it for optical flow analysis. Optical flow vector calculation: for each frame in the video, the pixel gradients between that frame and the next are computed, and the optical flow vectors are calculated with the formula above, where matrix A contains the spatial gradient information and matrix I contains the illumination intensity information used to adjust the optical flow estimate; a sketch of this step appears below. Pixel tracking: the calculated optical flow vectors are used to track the motion trajectories of the pixels in each frame, which helps determine the image shake pattern caused by unmanned aerial vehicle movement or external factors. Video stabilization processing: Motion compensation: each frame is motion-compensated according to the optical flow vectors, cancelling the picture offset and rotation caused by the movement of the unmanned aerial vehicle by adjusting pixel positions. Image correction: after motion compensation, the image may need further correction, such as removing distortion introduced by the stabilization process. Post-processing and output: the stabilized video is post-processed, for example by adjusting the frame rate and converting the format, to facilitate subsequent analysis; the stabilized video is then output, providing higher-quality visual data for disaster damage assessment and other analyses.
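As a sketch of the optical flow vector calculation, the snippet below uses OpenCV's dense Farneback method as a readily available stand-in for the regularized least-squares formulation given above; the parameter values are illustrative assumptions.

import cv2

def estimate_dense_flow(prev_gray, next_gray):
    # Returns an (H, W, 2) array of per-pixel displacements (dx, dy).
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)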
Secondly, the offset and rotation of each frame image caused by unmanned aerial vehicle motion are calculated from the optical flow data. When processing flood-disaster videos shot by an unmanned aerial vehicle, the footage drifts and rotates because the flight of the unmanned aerial vehicle is not perfectly stable; to stabilize these videos, the offset and rotation of each frame caused by the unmanned aerial vehicle motion must be calculated. Optical flow data analysis: in the previous step, vectors describing pixel motion were obtained with the optical flow algorithm; these vectors contain the movement information of each pixel from one frame to the next. Offset calculation: Average displacement vector: all optical flow vectors in each frame are averaged to obtain an overall average displacement vector, which represents the average direction and distance of image movement. Calculation method: let the set of optical flow vectors be {v1, v2, ..., vn}, where each vi is a two-dimensional vector containing the displacements in the horizontal and vertical directions; the average of these vectors gives the average offset. Rotation calculation: Calculating the rotation angle: estimating the rotation is more involved; the basic idea is to estimate the rotation angle by comparing the position changes of corresponding points in adjacent frames. Calculation method: several key points in the image are selected and the rotation is estimated by tracking how their positions change across successive frames; for example, the average rotation angle can be computed by measuring the angular change of these points relative to the image centre, as in the sketch below. Image compensation: each frame is adjusted according to the calculated offset and rotation. This typically involves both a translation and a rotation: the translation moves the image pixels according to the average displacement vector, and the rotation turns the image about its centre by the calculated rotation angle.
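A minimal sketch of the offset and rotation estimation described above, assuming a dense flow field such as the one returned by the previous snippet; the 32-pixel sampling grid for the key points is an illustrative choice.

import numpy as np

def estimate_shift_and_rotation(flow):
    h, w = flow.shape[:2]
    # Average displacement vector over all optical flow vectors.
    dx = float(np.mean(flow[..., 0]))
    dy = float(np.mean(flow[..., 1]))
    # Rotation: track sample points and average their angular change
    # about the image centre before and after applying their flow vectors.
    ys, xs = np.mgrid[0:h:32, 0:w:32]
    cx, cy = w / 2.0, h / 2.0
    a0 = np.arctan2(ys - cy, xs - cx)
    a1 = np.arctan2(ys + flow[ys, xs, 1] - cy, xs + flow[ys, xs, 0] - cx)
    d = a1 - a0
    dtheta = float(np.mean(np.arctan2(np.sin(d), np.cos(d))))  # wrap-safe mean angle
    return dx, dy, dtheta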
Each frame image is then geometrically transformed to compensate for these offsets and rotations, thereby stabilizing the image sequence. The average displacement vector and rotation angle of each frame have already been obtained, and each frame of the video is now prepared for a geometric transformation consisting of two parts: translation and rotation. Translation transformation: Principle: a translation moves the pixels of the image along the X and Y axes; the amount of translation is determined by the previously calculated average displacement vector. Implementation: for each frame, the pixels are shifted in the corresponding directions according to the X and Y components of its average displacement vector, which can be realised with an affine transformation. Rotation transformation: Principle: a rotation turns the image about its centre point; the rotation angle is taken from the previous calculation. Implementation: for each frame, the image is rotated about its centre by the calculated angle, typically by constructing a rotation matrix. Applying the transformations: the above transformations are applied to every video frame with an image processing tool or library (e.g. OpenCV), performing the translation first and then the rotation, as in the sketch below. During the transformation, the image edges may need to be appropriately cropped or filled to avoid blank areas.
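A minimal sketch of the geometric compensation, assuming OpenCV; for brevity it folds the rotation and the cancelling translation into a single affine warp rather than two separate passes, which is an implementation choice, not the patent's wording.

import cv2
import numpy as np

def stabilize_frame(frame, dx, dy, dtheta_rad):
    h, w = frame.shape[:2]
    # Rotation about the image centre, opposite in sign to the estimated rotation.
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), np.degrees(-dtheta_rad), 1.0)
    # Add a translation that cancels the estimated average displacement.
    m[0, 2] -= dx
    m[1, 2] -= dy
    return cv2.warpAffine(frame, m, (w, h), flags=cv2.INTER_LINEAR)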
Boundary distortion or black edges in the stabilized video sequence are then detected and removed, and frames with incoherent frame rates are processed with a frame interpolation technique. Boundary distortion handling: Problem identification: after the rotation and translation of the stabilization process, black edges or distortion may appear at the image borders. Solution: these artifacts are handled by cropping and filling; the black-edge or distorted regions are first identified and the image is cropped to remove them, and if cropping would lose important information, image filling techniques (such as mirror filling or content-aware filling) can be used to complete the edges. Frame-rate incoherence handling: Problem identification: the geometric transformations may cause unnatural jumps between video frames, affecting the smoothness of the video. Solution: a frame interpolation technique is used to smooth the video by generating intermediate frames between consecutive frames so that playback is smoother; the interpolated frames can be generated by averaging neighbouring frames or with more advanced dynamic interpolation techniques, as in the sketch below.
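A minimal sketch of the boundary clean-up and the simple averaging form of frame interpolation mentioned above, assuming OpenCV and NumPy; the brightness threshold of 8 for detecting black borders is an illustrative assumption.

import cv2
import numpy as np

def crop_black_borders(frame, threshold=8):
    # Keep only the bounding box of pixels brighter than the threshold.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > threshold)
    if ys.size == 0:
        return frame
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def interpolate_frame(frame_a, frame_b):
    # Intermediate frame as the average of two neighbouring frames.
    return cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0)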
In some embodiments: the stabilized video is inspected to determine if there are boundary distortions and frame rate discontinuities. For boundary distortion, black edges or distorted regions are identified and cropped, or image filling techniques are applied. For example, if a black border appears below the video, the region may be cropped or filled with neighboring pixels. For the problem of frame rate discontinuities, the video is processed using intra interpolation software or tools. For example, if the transition of the video from frame a to frame B is not smooth, one or more intermediate frames may be generated to make the transition more natural.
In some embodiments, the step S32: inputting the formed feature vector into a trained convolutional neural network to process the video image comprises inputting the preprocessed video image and the formed feature vector into the convolutional neural network as two independent input channels; the convolution and pooling layers of the network extract the spatial features of the video image and the abstract features of the feature vector respectively; the spatial features of the video image and the abstract features of the feature vector are fused in an intermediate layer of the network to form comprehensive features; and the comprehensive features are processed by the fully connected layers of the network. Structural design of the convolutional neural network: Two input channels: the CNN model is designed with two independent input channels, one receiving the preprocessed video image and the other receiving the feature vector. Convolution and pooling: multiple convolution and pooling layers are used on the video image channel to extract the spatial features of the image; for the feature vector channel, a series of fully connected layers may be needed to process and extract the abstract information in the feature vector. Feature fusion and network processing: Intermediate-layer fusion: in an intermediate layer of the network, the spatial features from the video image and the abstract features of the feature vector are fused, which can be realised by concatenating the outputs of the two channels. Fully connected layer processing: the fused features are passed to the fully connected layers, which further process the comprehensive features and output the final prediction, such as the classification of the disaster extent or an estimate of the degree of damage; a sketch of such a two-channel network appears below. Examples: Network training: the network is trained with historical unmanned aerial vehicle videos and the corresponding disaster assessment data; for example, the training data may include images of farmland before and after a flood together with the corresponding damage reports. Data input: in practical application, a new video image shot by the unmanned aerial vehicle and the extracted feature vector are input into the trained CNN model. Result analysis: the network output is used to identify disaster areas, evaluate the degree of damage, and possibly give the specific disaster type (e.g. flood, drought, etc.).
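A minimal sketch of such a two-channel network, assuming TensorFlow/Keras; the layer sizes, the eight-element feature vector and the four output classes are illustrative assumptions rather than the patent's trained architecture.

from tensorflow.keras import layers, Model

def build_two_channel_cnn(img_shape=(1080, 1920, 3), vec_len=8, n_classes=4):
    # Channel 1: video frame -> convolution and pooling -> spatial features.
    img_in = layers.Input(shape=img_shape, name="video_frame")
    x = layers.Conv2D(16, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D(4)(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D(4)(x)
    x = layers.GlobalAveragePooling2D()(x)
    # Channel 2: feature vector -> fully connected layer -> abstract features.
    vec_in = layers.Input(shape=(vec_len,), name="feature_vector")
    v = layers.Dense(32, activation="relu")(vec_in)
    # Fusion in an intermediate layer, then fully connected processing.
    merged = layers.Concatenate()([x, v])
    hidden = layers.Dense(64, activation="relu")(merged)
    out = layers.Dense(n_classes, activation="softmax")(hidden)
    return Model(inputs=[img_in, vec_in], outputs=out)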
The application also provides a natural disaster insurance processing system based on unmanned aerial vehicle video data; as shown in fig. 2, the main hardware components of the system are as follows. Unmanned aerial vehicle video acquisition module: hardware: an unmanned aerial vehicle equipped with a high-definition camera, such as the DJI Phantom or Mavic series; function: performing aerial photography over the affected area after a natural disaster occurs and collecting high-definition video images. Wireless transmission module: hardware: wireless network equipment, including the Wi-Fi or 4G/5G module on the unmanned aerial vehicle and the network receiver of the ground receiving station; function: transmitting the aerial video images wirelessly and in real time to the video image processing unit. Preprocessing module: hardware: a high-performance computer or server equipped with a high-speed processor and sufficient memory; function: performing preprocessing operations such as video format conversion, resolution adjustment and image denoising. Canny edge detector module: hardware: like the preprocessing module, a computer or server with relatively high computing power; function: processing the video images to separate water areas, farmland, buildings and roads according to colour characteristics. Damage degree determining module and key feature extraction module: hardware: a high-performance computer or server with image processing and machine learning capabilities; function: analysing the image data, determining the degree of damage caused by the disaster, and extracting the key features of the farmland areas. Convolutional neural network processing module: hardware: a server equipped with high-speed GPUs for handling the heavy convolutional neural network computation; function: training and running the convolutional neural network to process the feature vectors and the video image data. Convolutional neural network output module: hardware: the same as the convolutional neural network processing module; function: outputting the network processing results, including information such as the position of the disaster-affected farmland and the type and area of the affected crops. Insurance automated claim settlement processing module: hardware: the claim settlement system server of the insurance company, which needs sufficient database and network processing capacity; function: integrating the recognition results and automatically handling the insurance claim settlement process. The natural disaster insurance processing system therefore requires a series of high-performance hardware devices, including an unmanned aerial vehicle equipped with a high-definition camera, high-speed computers or servers, a GPU-accelerated deep learning server and the claim settlement system server of the insurance company. These devices work together to ensure that the entire process, from data collection and processing to insurance claim settlement, runs efficiently and accurately.
The unmanned aerial vehicle video acquisition module is used for performing aerial photography on the affected area after the natural disasters occur, and collecting high-definition aerial photography video images;
the wireless transmission module is used for transmitting the aerial video image to the video image processing unit in real time through a wireless network, and the video image processing unit extracts the range of the disaster area and the crop damage condition from the aerial video;
the preprocessing module comprises video format conversion, resolution adjustment and image denoising, so that subsequent analysis and processing are facilitated;
the Canny edge detector module separates water areas, farmlands, buildings and roads in the video frame images according to the color characteristics of the video images;
the damage degree determining module: determining a damage degree caused by the disaster from the image data provided from the video image processing unit;
the key feature extraction module: for the identified farmland area, extracting key characteristics including the color, size and growth density of crops, and forming a characteristic vector by combining seasons, real-time months and longitudes and latitudes;
the convolutional neural network processing module: inputting the formed feature vector into a trained convolutional neural network to process the video image;
the activation function f(x) adopted by the convolutional neural network is a modified tanh function defined in terms of: x, the element feature value input to the activation function; r, the video resolution of the unmanned aerial vehicle; e, the base of the natural logarithm; h, the flight altitude of the unmanned aerial vehicle; α, the resolution weight coefficient; β, the geographic-location weight coefficient; and g, the geographic-location parameter computed from φ, the latitude coordinate value of the video image capture location, and λ, the longitude coordinate value of the video image capture location;
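The expression of f(x) appears in the source only as an embedded image. Purely as an illustration of one form consistent with the variables defined above — an assumption, not the patented formula — a modified tanh of this kind could be written as:

f(x) \;=\; \tanh\!\left(x + \alpha\,\frac{r}{h} + \beta\,g\right) \;=\; \frac{e^{\,x+\alpha r/h+\beta g} - e^{-\left(x+\alpha r/h+\beta g\right)}}{e^{\,x+\alpha r/h+\beta g} + e^{-\left(x+\alpha r/h+\beta g\right)}}, \qquad g = g(\varphi, \lambda)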
the convolutional neural network output module: the convolutional neural network outputs the specific position of a disaster-affected farmland, the type and area of the affected crops, the crop yield loss and identifying disaster characteristics, wherein the disaster characteristics comprise flood, drought and insect attack;
and the insurance automated claim settlement processing module automatically integrates the recognition results into the insurance company's claim settlement system. When processing a large number of unmanned aerial vehicle video frames, feeding every frame directly into the convolutional neural network (CNN) can impose an enormous computational burden, so key frame extraction is performed: instead of processing every frame of the video, key frames are first extracted. Key frames are the frames that are representative of the video sequence or that contain important information; they are identified by analysing the differences between frames, for example a frame is treated as a key frame when a significant scene change is detected, as in the sketch below. Key frame preprocessing: the extracted key frames are preprocessed, including video format conversion, resolution adjustment and image denoising; in this way only a relatively small number of frames needs to be processed, reducing the computational requirements. Feature extraction and edge detection: the Canny edge detector is applied to the preprocessed key frames to identify water areas, farmland, buildings, roads and so on, and key features such as the colour, size and growth density of the crops are extracted from these frames. Convolutional neural network processing: the extracted feature vectors and the key frame images are taken as inputs and fed into the trained convolutional neural network; the network can be designed to handle both image features and abstract features, for example with multiple input channels, one processing the video image and another processing the extracted feature vectors. Comprehensive analysis and output: the network output is used to identify the disaster area, evaluate the degree of damage and possibly provide information on the disaster type (e.g. flood, drought, insect damage). In this way a large amount of video data can be processed efficiently while avoiding the huge computation required to process every frame.
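A minimal sketch of difference-based key frame extraction, assuming OpenCV and NumPy; the mean-absolute-difference threshold of 30 is an illustrative assumption.

import cv2
import numpy as np

def extract_key_frames(video_path, diff_threshold=30.0):
    cap = cv2.VideoCapture(video_path)
    key_frames, prev_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Keep a frame when it differs markedly from the last kept frame.
        if prev_gray is None or float(np.mean(cv2.absdiff(gray, prev_gray))) > diff_threshold:
            key_frames.append(frame)
            prev_gray = gray
    cap.release()
    return key_frames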
In some embodiments, a zoning process is used: the large photographed area is divided into a number of small regions, which reduces the amount of data processed in a single pass and makes processing more efficient; for each small region the video frames are extracted and analysed separately, which can be done on the basis of geographic coordinates or specific landmarks, as in the sketch below. Key frame extraction: within each small region, key frames are extracted from the video on the basis of the degree of change between frames, such as abrupt colour changes or significant motion; the key frames should represent the typical or important situations of the region, such as a particular disaster impact state. Preprocessing and feature extraction: the key frames of each small region are preprocessed, including format conversion, resolution adjustment and image denoising, and key features such as the colour, size and growth density of the crops are extracted and combined with the season, time and geographic location information of the region. Convolutional neural network processing: the preprocessed key frames and feature vectors are input into the convolutional neural network; the network should be trained to handle this type of data and be able to recognise disaster situations, and for each small region the network outputs its results, such as the extent and type of the disaster, independently.
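A minimal sketch of the zoning step, splitting one aerial frame into a grid of sub-regions that can then be processed independently; the 4x4 grid is an illustrative assumption.

def split_into_tiles(frame, rows=4, cols=4):
    # Divide the frame into rows x cols sub-regions (small areas).
    h, w = frame.shape[:2]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append(frame[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols])
    return tiles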
In some embodiments, the preprocessing module includes video format conversion, resolution adjustment, and image denoising to facilitate subsequent analysis processing, wherein the video format is converted into AVI format, and the original high-definition video is reduced in resolution, i.e. the original resolution 3840x2160 pixels of the aerial video is reduced to 1920x1080 pixels; and denoising by adopting Gaussian filtering.
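A minimal sketch of this preprocessing, assuming OpenCV; the XVID codec used for writing the AVI file and the 3x3 Gaussian kernel are illustrative assumptions.

import cv2

def preprocess_to_avi(src_path, dst_path="preprocessed.avi"):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"XVID"), fps, (1920, 1080))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downscale 3840x2160 aerial footage to 1920x1080.
        frame = cv2.resize(frame, (1920, 1080), interpolation=cv2.INTER_AREA)
        # Gaussian filtering for denoising.
        frame = cv2.GaussianBlur(frame, (3, 3), 0)
        writer.write(frame)
    cap.release()
    writer.release()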
In some embodiments, the Canny edge detector module separates waters, farms, buildings, roads in video frames according to video image color characteristics, comprising: firstly, identifying different areas according to a color range by adopting a color filter, wherein a water area is blue or dark gray, a farmland is green, and buildings and roads are gray or brown; secondly, applying a Canny edge detection algorithm to carry out edge detection; finally, using a template matching technology, matching in a template set for each detected edge contour to identify whether the shape belongs to a natural structure or an artificial structure, wherein the natural structure comprises farmlands and water areas, and the artificial structure comprises buildings and roads.
In some embodiments, the preprocessing module further includes performing an image stabilization process, including:
First, an optical flow algorithm is used to track pixel motion between successive frames of the video sequence in order to determine the motion pattern in the video, and the optical flow vector $\vec{v}$ is estimated as:

$$\vec{v} = \left(A^{T}A + \lambda I\right)^{-1} A^{T}\left(-\vec{b}\right)$$

wherein $A$ is a matrix composed of the gradients of the video frame image, representing the spatial gradients of the pixels within a window; $T$ denotes matrix transposition; $I$ is a matrix or value representing the ambient illumination intensity, used to adjust the optical flow estimate so that it reflects the effect of illumination changes; $\lambda$ is the weight coefficient of the illumination intensity; and $\vec{b}$ is the temporal gradient vector, representing the change of the pixel values within the window over time;
secondly, calculating offset and rotation quantity of each frame of image caused by unmanned aerial vehicle motion according to the optical flow data;
geometrically transforming each frame of images to compensate for these offsets and rotations, thereby stabilizing the image sequence;
detecting and removing boundary distortion or black edges in the stabilized video sequence; frames with incoherent frame rates are processed using a frame interpolation technique.
In some embodiments, the convolutional neural network processing module: the method comprises the steps of inputting the formed feature vector into a trained convolutional neural network to process a video image, inputting the preprocessed video image and the formed feature vector into the convolutional neural network as two independent input channels, processing a convolutional layer and a pooling layer in the convolutional neural network to extract the spatial feature of the video image and the abstract feature of the feature vector respectively, fusing the spatial feature of the video image and the abstract feature of the feature vector in the middle layer of the network to form a comprehensive feature, and processing the comprehensive feature through a full connection layer of the network.
In some embodiments, there are two types of input data. Video image: the preprocessed video frame, containing the visual information of the disaster area. Feature vector: the key information extracted from each video frame, such as the colour, size and growth density of the crops, combined with data such as the season, real-time month, longitude and latitude. Design of the convolutional neural network: a CNN model with two input channels is designed, one channel processing the video images and the other processing the feature vectors; the video image channel uses convolution and pooling layers to extract spatial features, and the feature vector channel is processed through a series of fully connected layers to extract abstract features. Feature fusion and processing: in an intermediate layer of the CNN, the outputs of the two channels are fused to form comprehensive features; the fusion can be accomplished with a simple concatenation operation or with more complex fusion techniques such as weighted summation or feature mapping. As an example, the unmanned aerial vehicle films a farmland affected by a flood: the video image shows part of the farmland being submerged, and the feature vector contains specific information about the affected area, such as the area of farmland covered by water, the crop type, the degree of damage, and the coordinates of the shooting time and place. The CNN processes both types of input, extracting the spatial details of the video image and the specific data in the feature vector, and the two kinds of features are merged in the intermediate layer of the CNN to form comprehensive features containing detailed disaster information. Finally, the CNN output includes the specific location of the affected farmland, the type and area of the affected crops, and the estimated degree of damage.
In some embodiments, employing video stabilization processing involves optical flow algorithm application: first, in the video stabilization module, an optical flow algorithm is applied to track pixel motion between successive video frames. This step aims at determining the motion pattern in the video and calculating the offset and rotation of each frame of image caused by the unmanned motion. The geometric transformation performs: a geometric transformation is performed on each frame of image to compensate for these offsets and rotations. This includes translating the image according to the calculated offset and rotating the image according to the rotation. Boundary processing: and (5) processing boundary distortion or black edges possibly occurring after geometric transformation, and ensuring the visual continuity and integrity of the video. Convolutional neural network processing: selecting key frames: a key frame is selected from the stabilized video. These frames should represent important content in the video, such as the most significant moments of disaster impact or the most varying scenes. Preprocessing key frames: the selected key frames are pre-processed, e.g., resized to meet the input requirements of the CNN, subjected to necessary color correction or enhancement, etc. Feature extraction: and extracting key features (such as the color, size and growth density of crops) of the identified farmland area, and forming feature vectors by combining seasons, real-time months and longitudes and latitudes. Inputting the feature vector and the image into the CNN: feature vectors are combined with the image: and taking the preprocessed key frames and the corresponding feature vectors as two independent inputs, and inputting the two independent inputs into a convolutional neural network. CNN treatment: the network processes the two inputs, extracts the space characteristics of the video image and the abstract characteristics of the characteristic vector, fuses the two inputs at the middle layer of the network, and finally obtains the final output through the full connection layer. Output analysis: damage assessment: the CNN output module interprets the output results of the network, such as the specific position of the disaster-stricken farmland, the type and area of the affected crops, the crop yield loss and the disaster characteristics.
The invention provides a natural disaster insurance processing method and system based on unmanned aerial vehicle video data, which can realize the following beneficial technical effects:
1. The method combines unmanned aerial vehicle technology with a convolutional neural network to automate and refine the traditional disaster loss assessment process. The convolutional neural network outputs the specific location of the disaster-affected farmland, the type and area of the affected crops, the crop yield loss and the identified disaster characteristics, where the disaster characteristics include flood, drought and insect damage; the automated claim settlement processing flow automatically integrates the recognition results into the insurance company's claim settlement system. Through this method, the efficiency of claim settlement processing is improved and the accuracy and reliability of the assessment are enhanced.
2. The convolutional neural network processing module inputs the formed feature vector into the trained convolutional neural network to process the video image. The activation function adopted by the convolutional neural network is an improved activation function f(x) obtained by modifying the tanh activation function: the longitude and latitude, the video resolution of the unmanned aerial vehicle and the flight altitude of the unmanned aerial vehicle are incorporated into the training process, which greatly improves the judgment accuracy. In f(x), x is the element feature value input to the activation function, r represents the video resolution of the unmanned aerial vehicle, e is the base of the natural logarithm, h is the flight altitude of the unmanned aerial vehicle, α is the resolution weight coefficient, β is the geographic-location weight coefficient, and g is the geographic-location parameter computed from φ, the latitude coordinate value of the video image capture location, and λ, the longitude coordinate value of the video image capture location. This greatly improves the degree of intelligence and the accuracy of judging the type and area of the affected crops and the crop yield loss.
3. The present application uses an optical flow algorithm to track pixel motion between successive frames of the video sequence in order to determine the motion pattern in the video and estimate the optical flow vector. In this calculation, the matrix or value I representing the ambient illumination intensity, the illumination-intensity weight coefficient λ and the temporal gradient vector b are incorporated into the optical flow computation, which greatly strengthens the stability judgment and makes the video frames smoother and more accurate.
4. The formed feature vector is input into the trained convolutional neural network to process the video image: the preprocessed video image and the formed feature vector are fed into the convolutional neural network as two independent input channels; the convolution and pooling layers of the network extract the spatial features of the video image and the abstract features of the feature vector respectively; the two are fused in an intermediate layer of the network to form comprehensive features; and the comprehensive features are processed by the fully connected layers of the network. This feature combination greatly improves the degree of feature fusion, maximises the features captured from the video frame images, and improves the judgment accuracy.
The above describes a natural disaster insurance processing method and system based on unmanned aerial vehicle video data in detail, and specific examples are applied to describe the principle and implementation of the invention, and the description of the above examples is only used for helping to understand the core idea of the invention; also, as will be apparent to those skilled in the art in light of the present teachings, the present disclosure should not be limited to the specific embodiments and applications described herein.

Claims (10)

1. The natural disaster insurance processing method based on the unmanned aerial vehicle video data is characterized by comprising the following steps of:
step S1: using an unmanned aerial vehicle to carry out aerial photography on an affected area after natural disasters occur, and collecting aerial photography video images;
step S2: the aerial video image is transmitted to a video image processing unit in real time through a wireless network, and the video image processing unit extracts the range of the disaster area and the crop damage condition from the aerial video;
step S21: preprocessing, including image denoising, video format conversion and resolution adjustment;
step S22: separating a water area, a farmland, a building and a road in the video frame image according to the color characteristics of the video image by using a Canny edge detector;
Step S3: determining a damage degree caused by the disaster from the image data provided from the video image processing unit;
step S31: for the identified farmland area, extracting characteristics including the color, size and growth density of crops, and forming characteristic vectors by combining seasons, real-time months and longitudes and latitudes;
step S32: inputting the formed feature vector into a trained convolutional neural network for processing;
the activation function f(x) adopted by the convolutional neural network is defined in terms of: x, the element feature value input to the activation function; r, the video resolution of the unmanned aerial vehicle; e, the base of the natural logarithm; h, the flight altitude of the unmanned aerial vehicle; α, the resolution weight coefficient; β, the geographic-location weight coefficient; and g, the geographic-location parameter computed from φ, the latitude coordinate value of the video image acquisition place, and λ, the longitude coordinate value of the video image acquisition place;
step S33: the convolutional neural network outputs the specific position of a disaster-affected farmland, the type and area of the affected crops, the crop yield loss and identifying disaster characteristics, wherein the disaster characteristics comprise flood, drought and insect attack;
step S4: and (3) automatically integrating the identification result into an insurance company claim settlement system by an automatic claim settlement processing flow.
2. The method for natural disaster insurance processing based on unmanned aerial vehicle video data according to claim 1, wherein the step S21: preprocessing, including denoising processing by Gaussian filtering, converting the video format into an AVI format, and reducing the resolution of the video image, namely reducing the original resolution 3840x2160 pixels of the aerial video to 1920x1080 pixels.
3. The method for natural disaster insurance processing based on unmanned aerial vehicle video data according to claim 1, wherein the step S22: separating waters, farmlands, buildings, roads in video frames from video image color features using a Canny edge detector, comprising: firstly, identifying different areas according to a color range by adopting a color filter, wherein a water area is blue or dark gray, a farmland is green, and buildings and roads are gray or brown; secondly, applying a Canny edge detection algorithm to carry out edge detection; finally, using a template matching technology, matching in a template set for each detected edge contour to identify whether the shape belongs to a natural structure or an artificial structure, wherein the natural structure comprises farmlands and water areas, and the artificial structure comprises buildings and roads.
4. The method for natural disaster insurance processing based on unmanned aerial vehicle video data according to claim 1, wherein the step S21: the preprocessing further comprises stabilizing the video image, and the method comprises the following steps:
first, an optical flow algorithm is used to track pixel motion between successive frames of the video sequence in order to determine the motion pattern in the video, and the optical flow vector $\vec{v}$ is estimated as:

$$\vec{v} = \left(A^{T}A + \lambda I\right)^{-1} A^{T}\left(-\vec{b}\right)$$

wherein $A$ is a matrix composed of the gradients of the video frame image, representing the spatial gradients of the pixels within a window; $T$ denotes matrix transposition; $I$ is a matrix or value representing the ambient illumination intensity, used to adjust the optical flow estimate so that it reflects the effect of illumination changes; $\lambda$ is the weight coefficient of the illumination intensity; and $\vec{b}$ is the temporal gradient vector, representing the change of the pixel values within the window over time;
secondly, calculating offset and rotation quantity of each frame of image caused by unmanned aerial vehicle motion according to the optical flow data;
geometrically transforming each frame of images to compensate for these offsets and rotations, thereby stabilizing the image sequence;
detecting and removing boundary distortion or black edges in the stabilized video sequence; frames with incoherent frame rates are processed using a frame interpolation technique.
5. The method for natural disaster insurance processing based on unmanned aerial vehicle video data according to claim 1, wherein the step S32 is: the method comprises the steps of inputting the formed feature vector into a trained convolutional neural network to process a video image, inputting the preprocessed video image and the formed feature vector into the convolutional neural network as two independent input channels, processing a convolutional layer and a pooling layer in the convolutional neural network to extract the spatial feature of the video image and the abstract feature of the feature vector respectively, fusing the spatial feature of the video image and the abstract feature of the feature vector in the middle layer of the network to form a comprehensive feature, and processing the comprehensive feature through a full connection layer of the network.
6. A natural disaster insurance processing system based on unmanned aerial vehicle video data, comprising:
the unmanned aerial vehicle video acquisition module is used for performing aerial photography on the affected area after the natural disasters occur, and collecting aerial photography video images;
the wireless transmission module is used for transmitting the aerial video image to the video image processing unit in real time through a wireless network, and the video image processing unit extracts the range of the disaster area and the crop damage condition from the aerial video;
The preprocessing module comprises image denoising, video format conversion and resolution adjustment;
the Canny edge detector module separates water areas, farmlands, buildings and roads in the video frame images according to the color characteristics of the video images;
the damage degree determining module: determining a damage degree caused by the disaster from the image data provided from the video image processing unit;
the key feature extraction module: for the identified farmland area, extracting key characteristics including the color, size and growth density of crops, and forming a characteristic vector by combining seasons, real-time months and longitudes and latitudes;
the convolutional neural network processing module: inputting the formed feature vector into a trained convolutional neural network for processing;
the activation function f(x) adopted by the convolutional neural network is defined in terms of: x, the element feature value input to the activation function; r, the video resolution of the unmanned aerial vehicle; e, the base of the natural logarithm; h, the flight altitude of the unmanned aerial vehicle; α, the resolution weight coefficient; β, the geographic-location weight coefficient; and g, the geographic-location parameter computed from φ, the latitude coordinate value of the video image capture location, and λ, the longitude coordinate value of the video image capture location;
the convolutional neural network output module: the convolutional neural network outputs the specific position of a disaster-affected farmland, the type and area of the affected crops, the crop yield loss and identifying disaster characteristics, wherein the disaster characteristics comprise flood, drought and insect attack;
And the insurance automation claim settlement processing flow module automatically integrates the identification result into an insurance company claim settlement system.
7. The unmanned aerial vehicle-based video data natural disaster insurance processing system according to claim 6, wherein the preprocessing module comprises denoising processing by adopting gaussian filtering, converting a video format into an AVI format, and reducing the resolution of an original video image, namely reducing the original resolution 3840x2160 pixels of an aerial video to 1920x1080 pixels.
8. The unmanned aerial vehicle-based video data natural disaster insurance processing system according to claim 6, wherein said Canny edge detector module separates waters, farmlands, buildings, roads in video frames according to video image color characteristics, comprising: firstly, identifying different areas according to a color range by adopting a color filter, wherein a water area is blue or dark gray, a farmland is green, and buildings and roads are gray or brown; secondly, applying a Canny edge detection algorithm to carry out edge detection; finally, using a template matching technology, matching in a template set for each detected edge contour to identify whether the shape belongs to a natural structure or an artificial structure, wherein the natural structure comprises farmlands and water areas, and the artificial structure comprises buildings and roads.
9. The unmanned aerial vehicle video data-based natural disaster insurance processing system according to claim 6, wherein the preprocessing module further comprises performing image stabilization processing, comprising:
first, an optical flow algorithm is used to track pixel motion between successive frames of the video sequence in order to determine the motion pattern in the video, and the optical flow vector $\vec{v}$ is estimated as:

$$\vec{v} = \left(A^{T}A + \lambda I\right)^{-1} A^{T}\left(-\vec{b}\right)$$

wherein $A$ is a matrix composed of the gradients of the video frame image, representing the spatial gradients of the pixels within a window; $T$ denotes matrix transposition; $I$ is a matrix or value representing the ambient illumination intensity, used to adjust the optical flow estimate so that it reflects the effect of illumination changes; $\lambda$ is the weight coefficient of the illumination intensity; and $\vec{b}$ is the temporal gradient vector, representing the change of the pixel values within the window over time;
secondly, calculating offset and rotation quantity of each frame of image caused by unmanned aerial vehicle motion according to the optical flow data;
geometrically transforming each frame of images to compensate for these offsets and rotations, thereby stabilizing the image sequence;
detecting and removing boundary distortion or black edges in the stabilized video sequence; frames with incoherent frame rates are processed using a frame interpolation technique.
10. The unmanned aerial vehicle video data-based natural disaster insurance processing system according to claim 6, wherein the convolutional neural network processing module: the method comprises the steps of inputting the formed feature vector into a trained convolutional neural network to process a video image, inputting the preprocessed video image and the formed feature vector into the convolutional neural network as two independent input channels, processing a convolutional layer and a pooling layer in the convolutional neural network to extract the spatial feature of the video image and the abstract feature of the feature vector respectively, fusing the spatial feature of the video image and the abstract feature of the feature vector in the middle layer of the network to form a comprehensive feature, and processing the comprehensive feature through a full connection layer of the network.
CN202410156344.5A 2024-02-04 2024-02-04 Natural disaster insurance processing method and system based on unmanned aerial vehicle video data Active CN117689481B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410156344.5A CN117689481B (en) 2024-02-04 2024-02-04 Natural disaster insurance processing method and system based on unmanned aerial vehicle video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410156344.5A CN117689481B (en) 2024-02-04 2024-02-04 Natural disaster insurance processing method and system based on unmanned aerial vehicle video data

Publications (2)

Publication Number Publication Date
CN117689481A true CN117689481A (en) 2024-03-12
CN117689481B CN117689481B (en) 2024-04-19

Family

ID=90130499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410156344.5A Active CN117689481B (en) 2024-02-04 2024-02-04 Natural disaster insurance processing method and system based on unmanned aerial vehicle video data

Country Status (1)

Country Link
CN (1) CN117689481B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046909A (en) * 2015-06-17 2015-11-11 中国计量学院 Agricultural loss assessment assisting method based on small-sized unmanned aerial vehicle
CN107169018A (en) * 2017-04-06 2017-09-15 河南云保遥感科技有限公司 A kind of agricultural insurance is surveyed, loss assessment system and its implementation
CN108563986A (en) * 2018-03-02 2018-09-21 中国人民武装警察部队总医院 Earthquake region electric pole posture judgment method based on wide-long shot image and system
CN108764142A (en) * 2018-05-25 2018-11-06 北京工业大学 Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique
CN110348324A (en) * 2019-06-20 2019-10-18 武汉大学 A kind of flood based on remote sensing big data floods analysis method and system in real time
JP2020042640A (en) * 2018-09-12 2020-03-19 アメリカン インターナショナル グループ,インコーポレイテッド Insurance business support system, contractor specification device, method, and program
WO2021012898A1 (en) * 2019-07-23 2021-01-28 平安科技(深圳)有限公司 Artificial intelligence-based agricultural insurance surveying method and related device
CN112380917A (en) * 2020-10-23 2021-02-19 西安科锐盛创新科技有限公司 A unmanned aerial vehicle for crops plant diseases and insect pests detect
WO2023052570A1 (en) * 2021-09-29 2023-04-06 Swiss Reinsurance Company Ltd. Aerial and/or satellite imagery-based, optical sensory system and method for quantitative measurements and recognition of property damage after an occurred natural catastrophe event
CN116434088A (en) * 2023-04-17 2023-07-14 重庆邮电大学 Lane line detection and lane auxiliary keeping method based on unmanned aerial vehicle aerial image
WO2024000927A1 (en) * 2022-06-27 2024-01-04 中咨数据有限公司 Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image
CN117409339A (en) * 2023-10-13 2024-01-16 东南大学 Unmanned aerial vehicle crop state visual identification method for air-ground coordination

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046909A (en) * 2015-06-17 2015-11-11 中国计量学院 Agricultural loss assessment assisting method based on small-sized unmanned aerial vehicle
CN107169018A (en) * 2017-04-06 2017-09-15 河南云保遥感科技有限公司 A kind of agricultural insurance is surveyed, loss assessment system and its implementation
CN108563986A (en) * 2018-03-02 2018-09-21 中国人民武装警察部队总医院 Earthquake region electric pole posture judgment method based on wide-long shot image and system
CN108764142A (en) * 2018-05-25 2018-11-06 北京工业大学 Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique
JP2020042640A (en) * 2018-09-12 2020-03-19 アメリカン インターナショナル グループ,インコーポレイテッド Insurance business support system, contractor specification device, method, and program
CN110348324A (en) * 2019-06-20 2019-10-18 武汉大学 A kind of flood based on remote sensing big data floods analysis method and system in real time
WO2021012898A1 (en) * 2019-07-23 2021-01-28 平安科技(深圳)有限公司 Artificial intelligence-based agricultural insurance surveying method and related device
CN112380917A (en) * 2020-10-23 2021-02-19 西安科锐盛创新科技有限公司 A unmanned aerial vehicle for crops plant diseases and insect pests detect
WO2023052570A1 (en) * 2021-09-29 2023-04-06 Swiss Reinsurance Company Ltd. Aerial and/or satellite imagery-based, optical sensory system and method for quantitative measurements and recognition of property damage after an occurred natural catastrophe event
US20240020969A1 (en) * 2021-09-29 2024-01-18 Swiss Reinsurance Company Ltd. Aerial and/or Satellite Imagery-based, Optical Sensory System and Method for Quantitative Measurements and Recognition of Property Damage After An Occurred Natural Catastrophe Event
WO2024000927A1 (en) * 2022-06-27 2024-01-04 中咨数据有限公司 Deep learning based method for automatic geological disaster extraction from unmanned aerial vehicle image
CN116434088A (en) * 2023-04-17 2023-07-14 重庆邮电大学 Lane line detection and lane auxiliary keeping method based on unmanned aerial vehicle aerial image
CN117409339A (en) * 2023-10-13 2024-01-16 东南大学 Unmanned aerial vehicle crop state visual identification method for air-ground coordination

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李广; 张立元; 宋朝阳; 彭曼曼; 张瑜; 韩文霆: "Extraction method of wheat lodging information based on UAV multi-temporal remote sensing", 农业机械学报 (Transactions of the Chinese Society for Agricultural Machinery), no. 04, 21 January 2019 (2019-01-21), pages 218 - 227 *
罗顶林; 李菁: "Design and application of a precision service system for agricultural planting insurance based on GIS and remote sensing machine learning", 电脑知识与技术 (Computer Knowledge and Technology), no. 14, 15 May 2020 (2020-05-15), pages 114 - 115 *

Also Published As

Publication number Publication date
CN117689481B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
CN108109385B (en) System and method for identifying and judging dangerous behaviors of power transmission line anti-external damage vehicle
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN108573276A (en) A kind of change detecting method based on high-resolution remote sensing image
CN109801282A (en) Pavement behavior detection method, processing method, apparatus and system
CN106373088B (en) The quick joining method of low Duplication aerial image is tilted greatly
CN114973028B (en) Aerial video image real-time change detection method and system
CN112560623B (en) Unmanned aerial vehicle-based rapid mangrove plant species identification method
CN113160053B (en) Pose information-based underwater video image restoration and splicing method
CN110569797A (en) earth stationary orbit satellite image forest fire detection method, system and storage medium thereof
CN117409339A (en) Unmanned aerial vehicle crop state visual identification method for air-ground coordination
CN114581307A (en) Multi-image stitching method, system, device and medium for target tracking identification
CN112529498B (en) Warehouse logistics management method and system
CN117689481B (en) Natural disaster insurance processing method and system based on unmanned aerial vehicle video data
CN116630828A (en) Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation
CN116205879A (en) Unmanned aerial vehicle image and deep learning-based wheat lodging area estimation method
Majidi et al. Real time aerial natural image interpretation for autonomous ranger drone navigation
CN116311218A (en) Noise plant point cloud semantic segmentation method and system based on self-attention feature fusion
CN112991425B (en) Water area water level extraction method and system and storage medium
Luo et al. An Evolutionary Shadow Correction Network and A Benchmark UAV Dataset for Remote Sensing Images
CN115249357A (en) Bagged citrus detection method based on semi-supervised SPM-YOLOv5
CN115311520A (en) Passion fruit maturity detection and positioning method based on visual identification
CN111832508B (en) DIE _ GA-based low-illumination target detection method
CN114119606A (en) Intelligent tree obstacle hidden danger analysis method based on visible light photo power line coloring
CN115457378A (en) Method, device, equipment and storage medium for detecting base station sky surface information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant