CN114299379A - Shadow area vegetation coverage extraction method based on high dynamic image - Google Patents


Info

Publication number: CN114299379A
Application number: CN202111281212.8A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: vegetation, high dynamic image, shadow area
Inventors: 陈伟 (Chen Wei), 王哲 (Wang Zhe), 张学鹏 (Zhang Xuepeng)
Current and original assignee: China University of Mining and Technology Beijing (CUMTB)
Application filed 2021-11-01 by China University of Mining and Technology Beijing; priority to CN202111281212.8A; published as CN114299379A on 2022-04-08.

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for extracting vegetation coverage in shadow areas based on high dynamic range (HDR) images, addressing the problem that vegetation features in shadow areas are displayed indistinctly in normally exposed images, which degrades the accuracy of vegetation coverage estimates. The method synthesizes three images of shadow-area vegetation (low exposure, normal exposure and overexposure) into one HDR image, and performs semantic segmentation on the HDR image with a U-shaped neural network. The HDR images and their labels are cut into small samples and divided into a training set, a validation set and a test set, from which an HDR-DL model is trained. The validation set is used to judge whether the model is overfitted; the model then predicts the test set, the binary sub-images are stitched together, and the vegetation coverage is finally estimated. Built on HDR imagery, the method offers a new technical route for extracting vegetation coverage in shadow areas more accurately, and provides important support for ground-based estimation of vegetation coverage in the validation of remote sensing products and algorithms.

Description

Shadow area vegetation coverage extraction method based on high dynamic image
Technical Field
The invention relates to the technical fields of agricultural remote sensing, ecological remote sensing and computer vision, and in particular to a method for extracting vegetation coverage in shadow areas based on high dynamic range images.
Background
Vegetation is an important component of the terrestrial ecosystem and plays an important role in maintaining ecosystem balance, conserving water sources, and retaining soil and water. Vegetation coverage is generally defined as the percentage of a statistical area occupied by the vertical projection of the above-ground parts of vegetation (leaves, stems and branches). It is an important index for describing the distribution of ground vegetation, and can be used to analyze the factors influencing that distribution and to evaluate the regional ecological environment. Ground-based estimation of vegetation coverage is essential for validating remote sensing algorithms and products, since it provides ground truth for remote sensing monitoring results.
Methods for obtaining vegetation coverage fall mainly into ground measurement and remote sensing monitoring. Ground measurement includes visual estimation, sampling, instrument-based and photographic methods. Visual estimation, sampling and instrument-based methods depend on manual field surveys; constrained by labor and material resources, they suffer from low efficiency, high cost and poor timeliness, and are unsuitable for extracting vegetation coverage over large areas. By contrast, the photographic method with a digital camera can conveniently and quickly acquire vertical (nadir) images of ground vegetation, greatly improving the efficiency and precision of ground measurement; estimating vegetation coverage from digital images is typically both efficient and accurate. However, owing to terrain, cloud cover, illumination angle, vegetation density and other factors, shadows may fall on vegetated areas in the acquired ground images, and shadowed vegetation increases classification errors, which greatly limits the accuracy of vegetation coverage estimation.
Within a scene, tall trees and plants, and even grass or other low vegetation, may cast shadows. A single image therefore suffers from natural illumination variation and large intensity differences, and the mixture of vegetation and shadow in the image degrades the estimation of vegetation coverage. A high dynamic range (HDR) image mitigates these illumination and intensity problems and renders ground objects in shadow areas more clearly. Compared with an ordinary image, an HDR image provides a wider dynamic range and more image detail; the final HDR image is synthesized from low dynamic range images taken with different exposure times, and displays the vegetation in the shadowed parts of the scene much better.
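By way of illustration, such an exposure bracket can be fused in a few lines with OpenCV. The sketch below uses Mertens exposure fusion, which needs neither exposure times nor the camera response curve; it is only an assumed stand-in for the camera's built-in HDR mode relied on in this patent, and the file names are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical file names for the three bracketed shots of the same scene.
paths = ["low_exposure.jpg", "normal_exposure.jpg", "over_exposure.jpg"]
exposures = [cv2.imread(p) for p in paths]

# Mertens exposure fusion blends the bracket into one display-ready image,
# recovering detail in shadowed vegetation without the camera response curve.
merge_mertens = cv2.createMergeMertens()
fused = merge_mertens.process(exposures)            # float32, roughly in [0, 1]
hdr_like = np.clip(fused * 255, 0, 255).astype("uint8")
cv2.imwrite("hdr_fused.png", hdr_like)
```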
Methods for estimating vegetation coverage from digital images generally fall into two categories. The first is cluster analysis based on training samples, such as supervised classification, unsupervised classification and object-based image analysis. The second is thresholding based on a vegetation index, such as the excess green index or other color indices. Index-threshold methods segment shadowed vegetation poorly, and their performance depends strongly on the vegetation type. A U-shaped neural network has an encoder-decoder structure with skip connections and can fuse features at different levels; shadowed vegetation images generally have a fixed structure and are available only in small samples, so a deep learning method based on the U-shaped network is better suited to segmenting the vegetation, while a high dynamic range image better displays the details of vegetation in shadow areas. A method for extracting shadow-area vegetation coverage based on HDR images is therefore needed.
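For contrast, the index-threshold baseline criticized above can be sketched as follows; the excess green index with Otsu thresholding is one common variant (the specific index and threshold rule are assumptions, since the text names the index family but fixes no formula).

```python
import cv2
import numpy as np

def exg_segmentation(bgr):
    """Excess green (ExG = 2g - r - b) index with Otsu thresholding.

    The classic vegetation-index baseline: it works on sunlit canopies
    but tends to miss vegetation lying in shadow.
    """
    img = bgr.astype(np.float32)
    total = img.sum(axis=2) + 1e-6                  # avoid division by zero
    b, g, r = (img[:, :, i] / total for i in range(3))
    exg = 2 * g - r - b                             # chromatic excess green
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask                                     # 255 = vegetation

mask = exg_segmentation(cv2.imread("hdr_fused.png"))
```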
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art and to provide a calculation method suitable for extracting vegetation coverage in shadow areas. The method addresses two problems: vegetation information in shadow areas is displayed incompletely in normally exposed images, and segmentation by threshold methods performs poorly there. By using HDR images, the vegetation information of shadow areas is displayed more completely. Vegetation-index thresholding places strong demands on vegetation type and state and is therefore of limited applicability; the U-shaped neural network, with its skip-connected encoder and decoder, can fuse features of the HDR image at different levels (a reduced sketch of such a network follows). The deep learning method trains on the training set and judges overfitting of the trained model against the validation set, so that the trained HDR-DL model can be applied to a wider range of vegetation types. Its vegetation coverage estimates are further compared with those obtained by the same deep learning method on normally exposed images, which makes the estimation results of the U-shaped-network-based method more convincing.
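The patent does not disclose the exact network configuration, so the PyTorch sketch below only illustrates the skip-connected encoder-decoder structure referred to above; the depth, channel widths and class count are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    """Reduced U-shaped network: an encoder-decoder whose skip connections
    concatenate encoder features into the decoder at matching resolutions."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 64)
        self.enc2 = conv_block(64, 128)
        self.enc3 = conv_block(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # 512 x 512
        e2 = self.enc2(self.pool(e1))                          # 256 x 256
        e3 = self.enc3(self.pool(e2))                          # 128 x 128 bottleneck
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.head(d1)                                   # per-pixel class logits
```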
The purpose of the invention is realized by the following technical scheme: a shadow area vegetation coverage extraction method based on a high dynamic image comprises the following steps:
(1) shooting low-exposure, normal-exposure and overexposed images with the HDR mode of a digital camera or smartphone and synthesizing an HDR image automatically, the captured image being a nadir (orthographic) HDR image;
(2) labeling the captured HDR images of shadowed vegetation with Labelme software, dividing each image into vegetation (including shadowed and sunlit vegetation) and background (soil, straw, etc.);
(3) cutting the HDR images and labels into 512 × 512 sub-images with a purpose-written program and dividing them into a training set, a validation set and a test set, as sketched below;
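A minimal sketch of step (3), assuming aligned image/label files and a 70/15/15 split (the patent does not state its split ratios; the tile naming and the decision to drop incomplete edge tiles are also assumptions):

```python
import random
from pathlib import Path
import numpy as np
from PIL import Image

def tile_pair(image_path, label_path, out_dir, tile=512):
    """Cut an HDR image and its label mask into aligned 512x512 tiles.

    Incomplete edge tiles are dropped; returns the tile stems written."""
    img = np.array(Image.open(image_path))
    lab = np.array(Image.open(label_path))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    h, w = lab.shape[:2]
    stems = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            stem = f"{Path(image_path).stem}_{y}_{x}"
            Image.fromarray(img[y:y + tile, x:x + tile]).save(out / f"{stem}_img.png")
            Image.fromarray(lab[y:y + tile, x:x + tile]).save(out / f"{stem}_lab.png")
            stems.append(stem)
    return stems

stems = tile_pair("hdr_fused.png", "hdr_label.png", "tiles")
random.shuffle(stems)
n = len(stems)
train = stems[: int(0.7 * n)]
val = stems[int(0.7 * n): int(0.85 * n)]
test = stems[int(0.85 * n):]
```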
(4) training the training set with a U-shaped neural network in deep learning, the cross-entropy loss function of the network following equation (1):
crossentropy = -∑x p_i(x) log q_i(x)    (1)
where q_i(x) denotes the class probability output by the U-shaped neural network for the class whose true label p_i(x) equals 1;
(5) in the U-shaped neural network, back-propagating the loss value with the Adam optimizer, whose update rule (2) adjusts the learning rate automatically (a training sketch follows the equation):
W_{t+1} = W_t - η·m̂_t / (√v̂_t + ε)    (2)

where W_t is the weight at iteration t, η is the step-size (learning-rate) parameter, m_t and v_t are the first- and second-order moment terms, m̂_t and v̂_t are their bias-corrected values, and ε is a very small number that prevents the denominator from being zero;
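A hedged sketch of how equations (1) and (2) are typically wired together in PyTorch, reusing the MiniUNet sketch above; the hyperparameter values are assumptions, and the plateau scheduler is one plausible reading of the automatically adjusted learning rate (Adam's bias-corrected moment estimates already adapt the effective step size per parameter).

```python
import torch
import torch.nn as nn

model = MiniUNet(n_classes=2)                       # sketch defined earlier
criterion = nn.CrossEntropyLoss()                   # per-pixel loss, equation (1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Optionally lower the base rate when the validation loss plateaus.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=3)

def train_step(images, labels):
    """One optimization step: forward pass, cross-entropy, Adam update (2).

    images: (B, 3, 512, 512) float tensor; labels: (B, 512, 512) long tensor."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)                          # (B, 2, 512, 512)
    loss = criterion(logits, labels)
    loss.backward()                                 # back-propagate the loss value
    optimizer.step()                                # Adam weight update
    return loss.item()
```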
(6) in the U-shaped neural network, after the HDR-DL model has been trained on the training set, judging from the validation set whether the model is overfitted; if the model is overfitted, modifying the U-shaped network parameters and retraining (see the sketch after this step);
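One simple way to realize step (6), continuing the training sketch above: watch the validation loss each epoch and stop (keeping the best weights) once it stops improving, since a validation loss that rises while the training loss keeps falling signals overfitting. The epoch budget and patience are assumed values.

```python
def fit(train_loader, val_loader, max_epochs=100, patience=10):
    """Train while monitoring validation loss; keep the best checkpoint."""
    best_val, best_state, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_loss = sum(train_step(x, y) for x, y in train_loader) / len(train_loader)
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x), y).item()
                           for x, y in val_loader) / len(val_loader)
        scheduler.step(val_loss)
        if val_loss < best_val:                     # still generalizing
            best_val, stale = val_loss, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:                                       # validation no longer improves
            stale += 1
            if stale >= patience:
                break
    model.load_state_dict(best_state)
    return model
```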
(7) predicting the RGB images of the test set with the trained HDR-DL model and segmenting vegetation from background to obtain binary images;
(8) stitching the 512 × 512 binary images into images of the same size as before cutting and estimating the vegetation coverage, as sketched below;
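A minimal sketch of step (8), assuming the tiles are kept in row-major order; the vegetation coverage is simply the fraction of vegetation pixels in the stitched mask.

```python
import numpy as np

def stitch_tiles(tiles, n_rows, n_cols, tile=512):
    """Reassemble 512x512 binary tiles (row-major order) into the full mask."""
    mosaic = np.zeros((n_rows * tile, n_cols * tile), dtype=np.uint8)
    for idx, t in enumerate(tiles):
        r, c = divmod(idx, n_cols)
        mosaic[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = t
    return mosaic

def vegetation_coverage(mask):
    """FVC estimate: vegetation pixels (value 1) over all pixels."""
    return float((mask == 1).sum()) / mask.size
```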
(9) comparing the segmentation of the shadow-area vegetation with the result of visual interpretation and evaluating the segmentation performance. The segmentation of shadowed vegetation is evaluated quantitatively on the basis of the confusion matrix; quantitative analysis of the segmentation of the shadowed vegetation images uses the Kappa coefficient, F1 score, recall and mean intersection-over-union (MIOU), which can be calculated by the following formulas (a computational sketch follows):
κ = (N∑_k X_kk - ∑_k X_k+ X_+k) / (N² - ∑_k X_k+ X_+k)    (3)

Recall = TP / (TP + FN)    (4)

F1 = 2TP / (2TP + FP + FN)    (5)

MIOU = [TP / (TP + FP + FN) + TN / (TN + FN + FP)] / 2    (6)

where κ denotes the Kappa coefficient, N is the total number of pixels, X_kk are the diagonal values of the confusion matrix and X_k+ and X_+k its row and column sums, Recall denotes the recall, F1 the F1 score, and MIOU the mean intersection-over-union; TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
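These four metrics follow directly from the binary confusion matrix; the sketch below computes them for a predicted mask against a reference mask (variable names are assumptions).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Kappa, F1, recall and MIoU for binary masks (1 = vegetation)."""
    tp = int(np.sum((pred == 1) & (truth == 1)))
    tn = int(np.sum((pred == 0) & (truth == 0)))
    fp = int(np.sum((pred == 1) & (truth == 0)))
    fn = int(np.sum((pred == 0) & (truth == 1)))
    n = tp + tn + fp + fn

    po = (tp + tn) / n                              # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (po - pe) / (1 - pe)                    # equation (3)

    recall = tp / (tp + fn)                         # equation (4)
    f1 = 2 * tp / (2 * tp + fp + fn)                # equation (5)
    miou = (tp / (tp + fp + fn) + tn / (tn + fn + fp)) / 2   # equation (6)
    return kappa, f1, recall, miou
```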
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention provides a shadow area vegetation coverage extraction method based on a high dynamic image, which is an improvement on the basis of dividing defects of shadow area vegetation by a normal exposure image and a threshold method. The invention considers the problems of incomplete vegetation information display of shadow areas in normal exposure images and poor vegetation segmentation effect by adopting a threshold method. The invention combines the characteristics of high dynamic images and better displays the vegetation information of the shadow area. The method adopts the U-shaped neural network with a jump connection coding and decoding structure to fuse the characteristics of different levels of the HDR image of the shadow vegetation, trains the training set, judges the overfitting condition of the training model by combining the verification set, trains the optimal HDR-DL model, applies the optimal HDR-DL model to the test set, and can realize the segmentation of the vegetation in the shadow area.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 shows the sub-sample vegetation coverage estimated by the HDR-image-based shadow-area vegetation coverage extraction method against the visually interpreted true vegetation coverage.
FIG. 3 shows the sub-sample vegetation coverage estimated by the same extraction method applied to normally exposed images against the visually interpreted true vegetation coverage.
FIG. 4 shows the Kappa coefficient, F1 score, recall and mean intersection-over-union of the sub-sample results obtained with the extraction method on normally exposed images and on HDR images.
FIG. 5 shows the vegetation coverage estimates after stitching the sub-samples segmented by the extraction method on normally exposed images and on HDR images.
FIG. 6 shows the Kappa coefficient, F1 score, recall and mean intersection-over-union of the stitched sub-sample results obtained with the extraction method on normally exposed images and on HDR images.
Detailed Description
The present invention is described in further detail below by way of examples and figures, but the embodiments of the present invention are not limited thereto.
This example explains in detail the procedure of the proposed HDR-image-based shadow-area vegetation coverage extraction method, taking as an example HDR images captured at the remote sensing comprehensive experiment station in Huailai County, Hebei Province, from May 1 to May 4, 2021. Following the deep learning flowchart of Fig. 1, the method comprises the following steps:
1: HDR images of shadowed vegetation are captured at the Huailai remote sensing comprehensive experiment station from May 1 to May 4, 2021, through the HDR mode of a smartphone.
2: The HDR images are labeled with Labelme software and divided into the two classes of vegetation and background; a purpose-written program cuts the HDR images and labels into 512 × 512 sub-images, which are divided into a training set, a validation set and a test set.
3: The training set is trained with a U-shaped neural network in deep learning, using the cross-entropy loss function of equation (1); the loss value is back-propagated with the Adam optimizer of equation (2), so that the learning rate is adjusted automatically.
4: training an HDR-DL model, judging whether the model is over-fit or not by combining verification set data, if the model is over-fit, modifying U-shaped neural network parameters, re-training the model, predicting RGB images of a test set, segmenting vegetation and a background to obtain binary images with the size of 512 multiplied by 512, estimating vegetation coverage, making a scatter diagram between visual interpretation truth values, wherein the result is shown in figure 2 and is compared with the result of a depth learning method of a normally exposed image, and the result is shown in figure 3, and according to the results shown in figures 2 and 3, the correlation between the FVC calculated by the depth learning method based on the HDR image and the FVC truth values visually interpreted by the visual interpretation is stronger than the result of the depth learning method based on the normally exposed image, the root mean square error is reduced by about 82.2% and the deviation is reduced by about 46.6%. It is shown that for the subsample size of 512 × 512, the result of FVC estimation for the HDR image-based depth learning method is better than the result for the normally exposed image, which is closer to the true value of visual interpretation.
5: as a result of calculating the Kappa coefficient, F1 score, recall rate, and average cross-over ratio of the binarized image having a size of 512 × 512 using equations (3), (4), (5), and (6), as shown in fig. 4, it was found from fig. 4 that the Kappa coefficient, F1 score, recall rate, and average cross-over ratio were respectively increased by about 36%, 7%, and 34% in the HDR image compared to the depth learning method based on the normal exposure image. It is demonstrated that for a subsample size of 512 × 512, the HDR image-based depth learning method is superior to the normally exposed image.
6: the binary images with the size of 512 × 512 are spliced to obtain an image with the size consistent with the size before cutting, and the vegetation coverage is estimated, and a specific result is shown in fig. 5, and it is found from fig. 5 that the result of estimating the vegetation coverage by the depth learning method based on the HDR image is closer to the true value of visual interpretation.
7: for the spliced binary image, the Kappa coefficient, F1 score, recall rate, and average cross-over ratio were calculated using the equations (3), (4), and (5), and the specific results are shown in fig. 6. According to fig. 6, it is found that the Kappa coefficient, F1 score, recall rate and average cross-over ratio of the HDR image-based deep learning method are all the highest for the spliced binarized image, and are superior to the normal exposure image-based deep learning method.
The above example is a preferred embodiment of the present invention, but the invention is not limited to it; any change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the invention shall be regarded as an equivalent replacement and falls within the scope of the invention.

Claims (8)

1. A method for extracting vegetation coverage in shadow areas based on high dynamic range images, characterized by comprising the following steps:
A. synthesizing a high dynamic range (HDR) image from the low-exposure, normal-exposure and overexposed images of shadow-area vegetation captured by the camera of a digital camera or smartphone;
B. labeling the vegetation in the HDR image, cutting the HDR image and label of the shadow-area vegetation with a purpose-written program, and dividing the result into a training set, a validation set and a test set;
C. training the training set with a U-shaped neural network in deep learning to obtain an HDR-DL model;
D. judging from the validation set whether the HDR-DL model is overfitted and, if so, modifying the parameters of the U-shaped neural network and retraining;
E. predicting the HDR images of the test set with the trained HDR-DL model, and segmenting vegetation from background to obtain binary images;
F. stitching the 512 × 512 binary images into images of the same size as before cutting, and estimating the vegetation coverage;
G. comparing the segmentation of the shadow-area vegetation with the result of visual interpretation, and evaluating the segmentation performance.
2. The method for extracting vegetation coverage in shadow areas based on high dynamic range images of claim 1, wherein step A specifically comprises:
A1: obtaining the HDR images of the shadow-area vegetation mainly with the camera of a digital camera or smartphone; the captured images are nadir (ortho) images, and the camera's HDR mode can capture low-exposure, normal-exposure and overexposed images and synthesize the HDR image automatically.
3. The method for extracting vegetation coverage in shadow areas based on high dynamic range images of claim 1, wherein step B specifically comprises:
B1: labeling the captured HDR images of shadow-area vegetation, which are mainly divided into vegetation (including shadowed and sunlit vegetation) and background (soil, straw, etc.);
B2: with a purpose-written program, cutting the HDR images and labels of the shadow-area vegetation into 512 × 512 sub-images and dividing them into a training set, a validation set and a test set.
4. The method for extracting vegetation coverage in shadow areas based on high dynamic range images of claim 1, wherein step C specifically comprises:
C1: training the training set with a U-shaped neural network in deep learning, the cross-entropy loss function of the network being:
crossentropy = -∑x p_i(x) log q_i(x);
where q_i(x) denotes the class probability output by the U-shaped neural network for the class whose true label p_i(x) equals 1;
C2: in the U-shaped neural network, back-propagating the loss value with the Adam optimizer, the learning rate being adjusted automatically:
W_{t+1} = W_t - η·m̂_t / (√v̂_t + ε);
where W_t is the weight at iteration t, η is the step-size (learning-rate) parameter, m_t and v_t are the first- and second-order moment terms, m̂_t and v̂_t are their bias-corrected values, and ε is a very small number that prevents the denominator from being zero.
5. The method for extracting vegetation coverage in shadow areas based on high dynamic range images of claim 1, wherein step D specifically comprises:
D1: in the U-shaped neural network, after the HDR-DL model has been trained on the training set, judging from the validation set whether the model is overfitted; if the model is overfitted, modifying the parameters of the U-shaped neural network and retraining.
6. The method for extracting vegetation coverage in shadow areas based on high dynamic range images of claim 1, wherein step E specifically comprises:
E1: predicting the RGB images of the test set with the trained HDR-DL model, and segmenting vegetation from background to obtain binary images.
7. The method for extracting vegetation coverage in shadow areas based on high dynamic range images of claim 1, wherein step F specifically comprises:
F1: stitching the 512 × 512 binary images into an image of the same size as before cutting, and estimating the vegetation coverage.
8. The method for extracting vegetation coverage in shadow areas based on high dynamic range images of claim 1, wherein step G specifically comprises:
G1: comparing the segmentation of the shadowed vegetation with the result of visual interpretation and evaluating the segmentation performance; the segmentation of shadowed vegetation is evaluated quantitatively on the basis of the confusion matrix, using the Kappa coefficient, F1 score, recall and mean intersection-over-union, the combination of which helps evaluate the segmentation of shadowed vegetation images more comprehensively:
κ = (N∑_k X_kk - ∑_k X_k+ X_+k) / (N² - ∑_k X_k+ X_+k);
Recall = TP / (TP + FN);
F1 = 2TP / (2TP + FP + FN);
MIOU = [TP / (TP + FP + FN) + TN / (TN + FN + FP)] / 2;
where κ denotes the Kappa coefficient, N is the total number of pixels, X_kk are the diagonal values of the confusion matrix and X_k+ and X_+k its row and column sums, Recall denotes the recall, F1 the F1 score, and MIOU the mean intersection-over-union; TP is true positive, TN is true negative, FP is false positive, and FN is false negative.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111281212.8A | 2021-11-01 | 2021-11-01 | Shadow area vegetation coverage extraction method based on high dynamic image

Publications (1)

Publication Number | Publication Date
CN114299379A | 2022-04-08

Family ID: 80964320

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111281212.8A | Shadow area vegetation coverage extraction method based on high dynamic image | 2021-11-01 | 2021-11-01

Country Status (1)

Country | Link
CN | CN114299379A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN109583378A * | 2018-11-30 | 2019-04-05 | Northeastern University | A vegetation coverage extraction method and system
US20210142559A1 * | 2019-11-08 | 2021-05-13 | General Electric Company | System and method for vegetation modeling using satellite imagery and/or aerial imagery
CN110889394A * | 2019-12-11 | 2020-03-17 | Anhui University | Rice lodging recognition method based on the deep learning UNet network
CN111669514A * | 2020-06-08 | 2020-09-15 | Peking University | High dynamic range imaging method and apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party

Fabien H. Wagner et al.: "Using the U-net convolutional network to map forest types and disturbance in the Atlantic rainforest with very high resolution images", Remote Sensing in Ecology and Conservation *
Hyun K. Suh et al.: "Improved vegetation segmentation with ground shadow removal using an HDR camera", Precision Agriculture *
Samuel E. Cox et al.: "Shadow attenuation with high dynamic range images", Environmental Monitoring and Assessment *
Cang Sheng (苍圣): "Research on forest classification of hyperspectral remote sensing images based on compressed sensing and sparse representation", China Doctoral Dissertations Full-text Database (electronic journal) *

Cited By (2)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN114742855A * | 2022-04-11 | 2022-07-12 | University of Electronic Science and Technology of China | Semi-automatic image labeling method fusing threshold segmentation and image superposition technology
CN114742855B * | 2022-04-11 | 2023-06-30 | University of Electronic Science and Technology of China | Semi-automatic image labeling method integrating threshold segmentation and image superposition technologies

Similar Documents

Publication | Title
Sadeghi-Tehran et al. DeepCount: in-field automatic quantification of wheat spikes using simple linear iterative clustering and deep convolutional neural networks
Xiong et al. Panicle-SEG: a robust image segmentation method for rice panicles in the field based on deep learning and superpixel optimization
CN110619632B Mango instance adversarial segmentation method based on Mask R-CNN
Deng et al. Deep learning-based automatic detection of productive tillers in rice
CN111461052A (en) Migration learning-based method for identifying lodging regions of wheat in multiple growth periods
CN110569747A (en) method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
Ruiz-Ruiz et al. Testing different color spaces based on hue for the environmentally adaptive segmentation algorithm (EASA)
CN109086826B (en) Wheat drought identification method based on image deep learning
CN110310241B (en) Method for defogging traffic image with large air-light value by fusing depth region segmentation
CN111340141A (en) Crop seedling and weed detection method and system based on deep learning
CN113780296A (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN109063754A (en) A kind of remote sensing image multiple features combining classification method based on OpenStreetMap
CN111814563B (en) Method and device for classifying planting structures
CN113111947B (en) Image processing method, apparatus and computer readable storage medium
CN115223063A (en) Unmanned aerial vehicle remote sensing wheat new variety lodging area extraction method and system based on deep learning
CN111291818B (en) Non-uniform class sample equalization method for cloud mask
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN114299379A (en) Shadow area vegetation coverage extraction method based on high dynamic image
CN117576195A (en) Plant leaf morphology recognition method
Lin et al. A novel approach for estimating the flowering rate of litchi based on deep learning and UAV images
CN111832480B (en) Remote sensing identification method for rape planting area based on spectral characteristics
CN111007474A (en) Weather radar echo classification method based on multiple features
CN114120359A (en) Method for measuring body size of group-fed pigs based on stacked hourglass network
CN112861869A (en) Sorghum lodging image segmentation method based on lightweight convolutional neural network
Song et al. Real-time determination of flowering period for field wheat based on improved YOLOv5s model

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2022-04-08