CN113095279B - Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium - Google Patents

Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium

Info

Publication number
CN113095279B
CN113095279B (application CN202110467162.6A)
Authority
CN
China
Prior art keywords
flower
fruit tree
convolution
flowers
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110467162.6A
Other languages
Chinese (zh)
Other versions
CN113095279A (en)
Inventor
熊俊涛
刘柏林
杨洲
丁允贺
霍钊威
谢志明
焦镜棉
郑镇辉
钟灼
翁健豪
陈淑绵
李洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN202110467162.6A priority Critical patent/CN113095279B/en
Publication of CN113095279A publication Critical patent/CN113095279A/en
Application granted granted Critical
Publication of CN113095279B publication Critical patent/CN113095279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an intelligent visual recognition method, device and system for the flower quantity of a fruit tree, and a storage medium. The method comprises the following steps: acquiring a plurality of fruit tree flower images; denoising and standardizing each fruit tree flower image; extracting a fruit tree flower and leaf feature map from each processed fruit tree flower image by using a trained deep convolutional neural network; generating a fruit tree flower and leaf prediction map from the fruit tree flower and leaf feature map, so as to obtain a fruit tree flower and leaf segmentation map; counting the number of pixels belonging to flowers and the number of pixels belonging to leaves in the fruit tree flower and leaf segmentation map; calculating the flower density from the number of flower pixels and the number of leaf pixels; and converting the flower density in each fruit tree flower image into an analog quantity. The application can provide visual support for flower thinning equipment and assist fruit farmers in flowering phase management.

Description

Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium
Technical Field
The application relates to an intelligent visual recognition method, device and system for flower quantity of fruit trees and a storage medium, and belongs to the technical field of agricultural equipment.
Background
With the development of artificial intelligence, machine vision technology is playing an increasingly important role in agriculture, providing visual support for intelligent operations. The yield and quality of fruit are affected by the quantity of flowers during the flowering phase, so fruit farmers need to observe the flower quantity during this period and perform flower thinning according to the observation results. For a large-scale orchard, manual observation and flower thinning are inefficient and labor costs are high. At present, robot vision systems are mostly applied to fruit detection and picking (for example, Chinese patent application No. 201711015161.8, "Vision system of a strawberry picking robot"), and a vision system for flowering phase management of fruit trees is lacking.
Disclosure of Invention
In view of the above, the application provides an intelligent visual recognition method, device and system for flower quantity of fruit trees and a storage medium, which can provide visual support for flower thinning equipment and assist fruit farmers in flowering phase management.
The first object of the application is to provide an intelligent visual recognition method for the flower quantity of fruit trees.
The second object of the application is to provide an intelligent visual recognition device for the flower quantity of fruit trees.
The third object of the application is to provide an intelligent visual recognition system for the flower quantity of fruit trees.
A fourth object of the present application is to provide a computer-readable storage medium.
The first object of the present application can be achieved by adopting the following technical scheme:
an intelligent visual recognition method for flower quantity of fruit trees, which comprises the following steps:
acquiring a plurality of fruit tree flower images;
denoising and standardizing each fruit tree flower image;
extracting a fruit tree flower and leaf feature map in the processed fruit tree flower image by using the trained deep convolutional neural network;
generating a fruit tree flower and leaf prediction map from the fruit tree flower and leaf feature map, so as to obtain a fruit tree flower and leaf segmentation map;
counting the number of pixels belonging to flowers and the number of pixels belonging to leaves according to the flower and leaf segmentation map of the fruit tree;
calculating the density of flowers according to the number of pixels of the flowers and the number of pixels of the leaves;
and converting the densities of flowers in each fruit tree flower image into analog quantities.
Further, the deep convolutional neural network comprises a backbone network and a pyramid structure, wherein the backbone network comprises a transition feature extraction part and a backbone feature extraction part, and the transition feature extraction part, the backbone feature extraction part and the pyramid structure are sequentially connected;
the transition feature extraction part comprises a first convolution layer, a second convolution layer and a third convolution layer which are sequentially connected, wherein the first, second and third convolution layers are all convolution layers with 3×3 convolution kernels;
the backbone feature extraction part comprises four sequentially connected stages, namely stage 1, stage 2, stage 3 and stage 4, with 6, 8, 12 and 6 convolution blocks respectively from stage 1 to stage 4; every two consecutive convolution blocks form a residual module, and each residual module linearly fuses the output features of its two convolution blocks with the identity mapping of its input features, the result serving as the input of the subsequent convolution block; the number of output features of the convolution blocks within a stage is the same as the number of features initially input to that stage, the number of features differs across stages and is adjusted by a convolution layer with a 1×1 convolution kernel, and each convolution is followed by an activation function and a normalization layer; in each convolution block, features are processed with a 3×3 convolution kernel, and each convolution is followed by an activation function and a normalization layer.
Furthermore, a dilated (hole) convolution structure is added to the convolution blocks of stage 3 and stage 4, and an attention module is added between every two adjacent stages.
Further, the denoising and standardization processing for the fruit tree flower image specifically comprises:
denoising the fruit tree flower image by using median filtering;
calculating the mean value and standard deviation of three RGB channel components of the denoised fruit tree flower image;
and normalizing the denoised fruit tree flower image according to the mean value and the standard deviation.
Further, generating the fruit tree flower and leaf prediction map according to the fruit tree flower and leaf feature map, thereby obtaining the fruit tree flower and leaf segmentation map, specifically comprises:
fusing the fruit tree flower and leaf feature maps through a first convolution block to obtain a first feature map; wherein the first convolution block has a kernel size of 3×3;
inputting the first feature map into a second convolution block to obtain a second feature map; wherein the kernel size of the second convolution block is 1×1;
mapping the second feature map into three probability prediction maps through a full convolution layer, wherein the three probability prediction maps respectively represent fruit tree flowers, leaves and background;
taking the maximum of the three probability prediction maps along the feature dimension to obtain the fruit tree flower and leaf prediction map;
and superposing the fruit tree flower and leaf prediction map and the fruit tree flower and leaf feature map according to preset weights to obtain the fruit tree flower and leaf segmentation map.
Further, the calculating the flower density according to the number of the pixels of the flower and the number of the pixels of the leaves specifically includes:
summing the number of pixels of the flower and the number of pixels of the leaf to obtain a total number of pixels;
the number of pixels of the flower is divided by the total number of pixels to obtain the flower density.
Further, the converting the density of flowers in each flower image of the fruit tree into analog quantity specifically includes:
according to the flower densities in all the fruit tree flower images, a maximum value and a minimum value are obtained and used as the upper and lower limits for the conversion of flower density to analog quantity;
according to the density of flowers and the upper limit and the lower limit of analog quantity conversion, using a maximum and minimum normalization method to scale the density range of flowers to between 0 and 1, and enabling the analog quantity range to be 4 to 20 milliamperes;
constructing a mapping relation between the density range and the analog range of the flower by using linear scaling;
and converting the flower density in each fruit tree flower image into analog quantity according to the mapping relation between the flower density range and the analog quantity range.
The second object of the application can be achieved by adopting the following technical scheme:
an intelligent visual recognition device for flower quantity of fruit trees, which comprises:
the acquisition unit is used for acquiring a plurality of fruit tree flower images;
the processing unit is used for denoising and standardizing each fruit tree flower image;
the extraction unit is used for extracting the characteristic diagrams of the flowers and leaves of the fruit trees in the processed flower images of the fruit trees by using the trained deep convolutional neural network;
the generation unit is used for generating a fruit tree flower and leaf prediction map according to the fruit tree flower and leaf feature map, so as to obtain a fruit tree flower and leaf segmentation map;
the statistical unit is used for counting the number of pixels belonging to flowers and the number of pixels belonging to leaves according to the flower and leaf segmentation map of the fruit tree;
a calculation unit for calculating a density of flowers based on the number of pixels of the flowers and the number of pixels of the leaves;
and the conversion unit is used for converting the densities of flowers in each fruit tree flower image into analog quantities.
The third object of the present application can be achieved by adopting the following technical scheme:
the intelligent visual recognition system for the flower quantity of the fruit tree comprises a camera and a processor, wherein the camera is connected with the processor, the camera is set at a horizontal viewing angle, and the field of view of the camera covers the whole flower thinning working range;
the camera is used for shooting a fruit tree flower image;
the processor is used for executing the intelligent visual recognition method for the flower quantity of the fruit tree.
The fourth object of the present application can be achieved by adopting the following technical scheme:
a computer readable storage medium storing a program which, when executed by a processor, implements the intelligent visual recognition method for flower quantity of fruit trees.
Compared with the prior art, the application has the following beneficial effects:
according to actual flower thinning equipment and field flower thinning scenes, a camera and a processor are adopted to form an intelligent visual recognition system for the flower quantity of the fruit tree, the system can be mounted on flower thinning equipment, a She Tezheng chart of the flower of the fruit tree is extracted through a trained deep convolutional neural network, the flower area of the fruit tree can be accurately segmented in real time, the flower density is calculated, the flower density is converted into an analog quantity, an analog quantity signal is transmitted to the flower thinning equipment, and visual support is provided for automatic flower thinning; a plurality of dimension stabilizing components can be arranged, so that the whole system works more stably.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of an intelligent visual recognition system for flower quantity of fruit trees according to embodiment 1 of the present application.
Fig. 2 is a simple flow chart of the intelligent visual recognition method for flower quantity of fruit trees in embodiment 1 of the application.
Fig. 3 is a detailed flowchart of the intelligent visual recognition method for flower quantity of fruit trees in embodiment 1 of the application.
Fig. 4 is a feature extraction flow chart of embodiment 1 of the present application.
Fig. 5 shows the flower density calculation and analog quantity conversion process of embodiment 1 of the present application.
Fig. 6 is a block diagram of the intelligent visual recognition device for flower quantity of fruit tree in embodiment 2 of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present application are within the scope of protection of the present application.
Example 1:
as shown in fig. 1, the present embodiment provides an intelligent visual recognition system for the flower quantity of fruit trees, which is mounted on flower thinning equipment 101 and includes a camera 102 and a processor 103, wherein the camera 102 is connected with the processor 103 and is mounted at the same height as the flower thinning equipment 101. The flower thinning equipment 101 itself does not belong to the protection scope of the present application and is not described in detail here; it is mentioned only to show that the system of this embodiment can be mounted on such equipment.
The camera 102 is a color camera mounted perpendicular to the ground with a horizontal viewing angle; its field of view covers the whole flower thinning working range so that complete image information can be collected, and it is mainly used to capture fruit tree flower images.
Further, in order to keep the height of the camera 102 consistent with the height of the flower thinning equipment 101, the intelligent visual recognition system for the flower quantity of fruit trees of this embodiment further includes an adjustable support rod 104; the height of the camera 102 can be adjusted by the adjustable support rod 104 to ensure the shooting field of view.
Further, in order to keep the camera 102 stable, the system of this embodiment further includes a stabilizer 105; the camera 102 is fixed on the upper portion of the stabilizer 105 and the lower portion of the stabilizer 105 is connected with the adjustable support rod 104, that is, the adjustable support rod 104 adjusts the height of the stabilizer 105 and thereby the height of the camera 102. The stabilizer 105 can attenuate the shaking and vibration caused by the movement of the flower thinning equipment 101.
Further, in order to reduce the influence of shock and noise from the flower thinning equipment 101 on the processor 103, the system of this embodiment further includes a cushioning platform 106; the cushioning platform 106 is installed on the flower thinning equipment 101, the adjustable support rod 104 is installed on the cushioning platform 106, and the processor 103 is placed in the cushioning platform 106, so that the cushioning platform 106 can ensure that the processor 103 works normally.
The processor 103 is an NVIDIA Xavier processor connected to the camera 102 through a USB data cable. It mainly acquires the image information captured by the camera 102, processes it to obtain the segmentation map, the flower density and the analog quantity, and transmits the result to the flower thinning equipment 101 for operation.
As shown in fig. 1 to 3, the present embodiment provides an intelligent visual recognition method for flower quantity of a fruit tree, which is mainly implemented by a processor 103, and specifically includes the following steps:
s201, acquiring a plurality of fruit tree flower images.
The fruit tree flowers in this embodiment are litchi flowers. After the viewing angle of the camera 102 is fixed, its field of view covers the whole flower thinning working range, and the camera 102 captures RGB color litchi flower images within the flower thinning range. Because the flower thinning equipment 101 is moving, automatic shooting at a fixed time interval needs to be set, with the interval determined by the moving speed and flower thinning speed of the flower thinning equipment 101; the images are then transmitted to the processor 103 through the USB data cable, so that the processor 103 can acquire the litchi flower images.
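As an illustrative, non-limiting sketch of this acquisition step, the following Python code captures frames at a fixed interval with OpenCV; the camera index, interval and frame count are assumptions for illustration and are not specified by this embodiment.

```python
import time

import cv2  # OpenCV; assumed available on the processor


def capture_flower_images(camera_index=0, interval_s=2.0, num_frames=10):
    """Capture RGB flower images at a fixed time interval; the interval would be
    chosen from the travel and thinning speed of the equipment (values here are
    illustrative)."""
    cap = cv2.VideoCapture(camera_index)
    images = []
    try:
        for _ in range(num_frames):
            ok, frame = cap.read()  # OpenCV returns frames in BGR order
            if ok:
                images.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            time.sleep(interval_s)  # wait while the equipment advances
    finally:
        cap.release()
    return images
```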
S202, denoising and standardizing each fruit tree flower image.
Further, the step S202 specifically includes:
S2021, denoising the fruit tree flower image by using median filtering.
S2022, calculating the mean value and standard deviation of the RGB three channel components of the denoised fruit tree flower image.
S2023, normalizing the denoised fruit tree flower image according to the mean value and standard deviation, so that the values of the resulting image fall in the range 0 to 1, which effectively reduces the influence of transformations during feature extraction.
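Steps S2021 to S2023 can be sketched as follows, assuming OpenCV and NumPy; the kernel size and the final rescaling into the 0 to 1 range are one plausible reading of this embodiment rather than its exact procedure.

```python
import cv2
import numpy as np


def preprocess(image_rgb, ksize=3):
    """Median-filter the image, standardize each RGB channel with its own mean
    and standard deviation, then rescale the result to the 0-1 range."""
    denoised = cv2.medianBlur(image_rgb, ksize)           # S2021: median filtering
    x = denoised.astype(np.float32)
    mean = x.mean(axis=(0, 1), keepdims=True)             # S2022: per-channel mean
    std = x.std(axis=(0, 1), keepdims=True) + 1e-6        # S2022: per-channel std
    z = (x - mean) / std                                  # S2023: standardization
    return (z - z.min()) / (z.max() - z.min() + 1e-6)     # map values into [0, 1]
```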
S203, extracting a fruit tree flower and leaf feature map in the processed fruit tree flower image by using the trained deep convolutional neural network.
Since the fruit tree flowers of this embodiment are litchi flowers, the fruit tree flower and leaf feature map of this embodiment is a litchi flower and leaf feature map. As shown in fig. 4, the deep convolutional neural network of this embodiment includes a backbone network and a pyramid structure, where the backbone network consists of two parts, a transition feature extraction part (part 1 in the figure) and a backbone feature extraction part (part 2 in the figure); the transition feature extraction part, the backbone feature extraction part and the pyramid structure are connected in sequence.
The transition feature extraction part is a shallow network comprising a first convolution layer, a second convolution layer and a third convolution layer connected in sequence; it mainly learns general features such as textures and corner points. In this embodiment, the first, second and third convolution layers are all convolution layers with 3×3 convolution kernels.
The backbone feature extraction part is a deep network built mainly from 32 consecutively stacked "convolution-activation-normalization" convolution blocks and is divided into four sequentially connected stages, namely stage 1, stage 2, stage 3 and stage 4, with 6, 8, 12 and 6 convolution blocks respectively. Every two consecutive convolution blocks form a residual module that learns the task-specific features; each residual module linearly fuses the output features of its two convolution blocks with the identity mapping of its input features, and the result serves as the input of the subsequent convolution block. The number of output features of the convolution blocks within a stage is the same as the number of features initially input to that stage, while the number of features differs across stages and is adjusted by a convolution layer with a 1×1 convolution kernel; each convolution is followed by an activation function and a normalization layer. Within each convolution block, the convolution processes the features with a 3×3 convolution kernel, the result is passed to the activation function, and the output is finally processed by the normalization layer.
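The convolution block and residual module described above can be sketched in PyTorch as follows; the choice of ReLU as the activation function and batch normalization as the normalization layer is an assumption, since only "an activation function and a normalization layer" are specified.

```python
import torch.nn as nn


class ConvBlock(nn.Module):
    """One 'convolution-activation-normalization' block with a 3x3 kernel."""

    def __init__(self, channels, dilation=1):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.ReLU(inplace=True)      # activation function (assumed ReLU)
        self.norm = nn.BatchNorm2d(channels)  # normalization layer (assumed BatchNorm)

    def forward(self, x):
        return self.norm(self.act(self.conv(x)))


class ResidualModule(nn.Module):
    """Two consecutive convolution blocks whose output is linearly fused with
    the identity mapping of the module input."""

    def __init__(self, channels, dilation=1):
        super().__init__()
        self.block1 = ConvBlock(channels, dilation)
        self.block2 = ConvBlock(channels, dilation)

    def forward(self, x):
        return x + self.block2(self.block1(x))  # identity mapping + block output
```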
Further, a dilated (hole) convolution structure is added to the convolution blocks of stages 3 and 4. Dilated convolution inserts regularly spaced zero values into the convolution kernel to enlarge its effective size, which expands the receptive field of deep features without increasing the amount of computation; meanwhile, max pooling is replaced by adjusting the sliding stride of the filtering operation, which reduces the loss of feature information. Classical dilated convolution uses a large dilation rate to extract features of large objects, whereas in this embodiment a small dilation rate is more suitable for extracting litchi flower features.
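The following small PyTorch example illustrates the two ideas in this paragraph: a 3×3 kernel with dilation 2 covers a 5×5 receptive field without extra weights, and a strided convolution takes the place of max pooling; the channel counts are arbitrary.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)

# 3x3 kernel with dilation 2: 5x5 receptive field, still only 9 weights per filter.
dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

# Strided convolution used instead of max pooling to halve the spatial size.
downsample = nn.Conv2d(64, 64, kernel_size=3, padding=1, stride=2)

print(dilated(x).shape)     # torch.Size([1, 64, 32, 32])
print(downsample(x).shape)  # torch.Size([1, 64, 16, 16])
```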
Further, an attention module is added between every two adjacent stages. The attention module compresses each w×h feature map to 1×1 and feeds the result into a learnable linear connection layer to obtain a weight vector over the channels; each weight measures the importance of the corresponding input feature. The weights are broadcast to the size of the input features and multiplied with the original input features, thereby screening the features.
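The attention module reads like a squeeze-and-excitation style channel gate; the sketch below follows that reading, with average pooling and a sigmoid gate assumed where this embodiment leaves the details open.

```python
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Compress each w x h feature map to 1 x 1, pass it through a learnable
    linear connection layer to get per-channel weights, and rescale the input."""

    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # w x h -> 1 x 1 (pooling type assumed)
        self.fc = nn.Linear(channels, channels)  # learnable linear connection layer
        self.gate = nn.Sigmoid()                 # maps weights to (0, 1); assumed

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.gate(self.fc(self.pool(x).view(b, c)))  # importance of each channel
        return x * w.view(b, c, 1, 1)                    # broadcast and screen features
```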
To increase the richness of the features learned by the convolution layers, dense feature connections are applied selectively according to the roles of the deep and shallow convolution layers, and in particular to the deep features; the connection scheme is shown in fig. 4. Dense feature connection means that the input features of a layer are the output features of all preceding layers. As described above, the shallow network learns general features such as textures and corner points, while the deep network learns task-specific features; the features of layers that are too shallow and too deep differ greatly, so the range of dense connections is adjusted to ensure the richness and consistency of feature extraction.
After the backbone features are extracted, they are processed by the pyramid structure to enhance the network's ability to learn multi-scale features. In the pyramid structure, four convolution layers with different dilation rates and filtering strides process the input intermediate features in parallel; the input features and the parallel-processed features are then concatenated along the feature dimension, and finally fused and refined by a 3×3 convolution layer to obtain the multi-scale litchi flower and leaf feature maps, whose total size is 1/16 of the input image.
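One possible reading of the pyramid structure is the ASPP-like module sketched below; for simplicity the four parallel branches differ only in dilation rate (the rates shown are illustrative), since branches with different strides would additionally require resampling before concatenation.

```python
import torch
import torch.nn as nn


class PyramidModule(nn.Module):
    """Four parallel 3x3 convolutions with different dilation rates, concatenated
    with the input along the feature dimension and fused by a 3x3 convolution."""

    def __init__(self, channels, rates=(1, 2, 4, 8)):  # rates are illustrative
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates]
        )
        self.fuse = nn.Conv2d(channels * (len(rates) + 1), channels, 3, padding=1)

    def forward(self, x):
        feats = [x] + [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))  # splice along feature dim, then fuse
```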
S204, generating a fruit tree flower and leaf prediction map according to the fruit tree flower and leaf feature map, thereby obtaining a fruit tree flower and leaf segmentation map.
Since the fruit tree flower and leaf feature map of this embodiment is a litchi flower and leaf feature map, the fruit tree flower and leaf prediction map of this embodiment is a litchi flower and leaf prediction map, and the fruit tree flower and leaf segmentation map is a litchi flower and leaf segmentation map. As shown in fig. 5, step S204 is implemented by constructing a prediction layer comprising two convolution blocks, where the first convolution block has a kernel size of 3×3 and the second convolution block has a kernel size of 1×1. Step S204 specifically includes:
S2041, fusing the fruit tree flower and leaf feature maps through the first convolution block to obtain a first feature map.
S2042, inputting the first feature map into the second convolution block to obtain a second feature map.
In steps S2041 and S2042, the 1280 litchi flower and leaf feature maps are fused by the 3×3 convolution block to obtain the first feature map, which is then input to the second convolution block to obtain 256 feature maps, namely the second feature map; the spatial size of the features remains unchanged.
S2043, mapping the second feature map into three probability prediction maps through a full convolution layer, wherein the three probability prediction maps respectively represent litchi flowers, leaves and background.
S2044, taking the maximum of the three probability prediction maps along the feature dimension to obtain the fruit tree flower and leaf prediction map, which carries flower labels and leaf labels.
S2045, superposing the fruit tree flower and leaf prediction map and the fruit tree flower and leaf feature map according to preset weights to obtain the fruit tree flower and leaf segmentation map.
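Steps S2041 to S2044 can be sketched as the prediction head below; the intermediate channel count after the 3×3 block and the order of the class indices are assumptions, and the weighted superposition of S2045 is omitted because its weights are not given here.

```python
import torch.nn as nn


class PredictionHead(nn.Module):
    """Fuse the flower-and-leaf feature maps with a 3x3 block, reduce to 256 maps
    with a 1x1 block, map to three class maps (flower / leaf / background) with a
    full convolution layer, then take the per-pixel maximum along the feature
    dimension to obtain the prediction map."""

    def __init__(self, in_channels=1280, mid_channels=256, num_classes=3):
        super().__init__()
        self.block_3x3 = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(mid_channels),
        )
        self.block_1x1 = nn.Sequential(
            nn.Conv2d(mid_channels, mid_channels, 1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(mid_channels),
        )
        self.classifier = nn.Conv2d(mid_channels, num_classes, 1)

    def forward(self, features):
        scores = self.classifier(self.block_1x1(self.block_3x3(features)))
        return scores.argmax(dim=1)  # per-pixel class label (index order assumed)
```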
S205, counting the number of pixels belonging to flowers and the number of pixels belonging to leaves according to the flower and leaf segmentation map of the fruit tree.
S206, calculating the density of flowers according to the number of pixels of the flowers and the number of pixels of the leaves.
Further, the step S206 specifically includes:
S2061, summing the number of pixels of the flower and the number of pixels of the leaf to obtain the total number of pixels.
S2062, dividing the number of pixels of the flower by the total number of pixels to obtain the flower density.
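Steps S205 and S206 amount to the pixel counting and ratio sketched below; the numeric labels assigned to the flower and leaf classes are assumptions.

```python
import numpy as np

FLOWER, LEAF = 1, 2  # label values in the segmentation map (assignment assumed)


def flower_density(segmentation):
    """Count flower and leaf pixels and return flower pixels divided by their
    sum; background pixels are ignored."""
    flower_px = int(np.count_nonzero(segmentation == FLOWER))
    leaf_px = int(np.count_nonzero(segmentation == LEAF))
    total = flower_px + leaf_px
    return flower_px / total if total else 0.0
```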
S207, converting the densities of flowers in each fruit tree flower image into analog quantities.
Further, the step S207 specifically includes:
S2071, obtaining the maximum value and the minimum value of the flower densities in all the fruit tree flower images, and taking them as the upper and lower limits for the conversion of flower density to analog quantity.
In this embodiment, the maximum value and the minimum value may be set to the upper limit and the lower limit, respectively, but in order to secure reliability, the upper limit may be set to 1.1 times the maximum value and the lower limit may be set to 0.
S2072, scaling the density range of flowers to between 0 and 1 by using a maximum and minimum normalization method according to the density of flowers and the upper limit and the lower limit of analog quantity conversion, and enabling the analog quantity range to be 4 to 20 milliamperes.
S2073, constructing the mapping relation between the density range and the analog range of the flower by using linear scaling.
S2074, converting the flower density in each fruit tree flower image into analog quantity according to the mapping relation between the flower density range and the analog quantity range.
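Steps S2071 to S2074 can be sketched as follows; the clamping to the 0 to 1 range and the 1.1-times upper bound follow the reliability note above, and the example densities are invented for illustration only.

```python
def density_to_analog(density, d_min, d_max):
    """Min-max normalize a flower density to [0, 1] using the conversion bounds,
    then linearly scale it to a 4-20 mA analog signal."""
    span = max(d_max - d_min, 1e-9)
    normalized = min(max((density - d_min) / span, 0.0), 1.0)  # clamp to [0, 1]
    return 4.0 + 16.0 * normalized  # milliamperes


# Example with invented densities: bounds taken from one batch of images.
densities = [0.12, 0.30, 0.45]
lower, upper = 0.0, 1.1 * max(densities)
signals = [density_to_analog(d, lower, upper) for d in densities]
print(signals)
```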
The processor 103 of the present embodiment transmits the converted analog quantity signal to the flower thinning apparatus 101 to realize automatic control of flower thinning.
It should be noted that although the method operations of the above embodiments are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order or that all illustrated operations be performed in order to achieve desirable results. Rather, the depicted steps may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
Example 2:
as shown in fig. 6, this embodiment provides an intelligent visual recognition device for flower quantity of fruit trees, which can be applied to the processor of embodiment 1, and includes an acquisition unit 601, a processing unit 602, an extraction unit 603, a generation unit 604, a statistics unit 605, a calculation unit 606 and a conversion unit 607, where specific functions of the units are as follows:
an acquiring unit 601 is configured to acquire a plurality of fruit tree flower images.
And the processing unit 602 is used for denoising and normalizing each fruit tree flower image.
And the extracting unit 603 is used for extracting the feature map of the flowers and leaves of the fruit tree in the processed flower image of the fruit tree by using the trained deep convolutional neural network.
And the generating unit 604 is used for generating a fruit tree flower and leaf prediction map according to the fruit tree flower and leaf feature map, so as to obtain a fruit tree flower and leaf segmentation map.
The statistics unit 605 is configured to count the number of pixels belonging to flowers and the number of pixels belonging to leaves according to the flower-leaf segmentation map of the fruit tree.
A calculating unit 606 for calculating the density of flowers based on the number of pixels of the flowers and the number of pixels of the leaves.
A conversion unit 607, configured to convert the density of flowers in each flower image of the fruit tree into an analog quantity.
Specific implementation of each unit in this embodiment may refer to the intelligent visual recognition method for flower quantity of fruit tree in embodiment 1, and will not be described in detail here; it should be noted that, the apparatus provided in this embodiment is only exemplified by the division of the above functional units, and in practical application, the above functions may be allocated to different functional units as needed to complete, that is, the internal structure may be divided into different functional units to complete all or part of the functions described above.
Example 3:
the present embodiment provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the intelligent visual recognition method for flower quantity of fruit trees of the above embodiment 1, as follows:
acquiring a plurality of fruit tree flower images;
denoising and standardizing each fruit tree flower image;
extracting a fruit tree flower and leaf feature map in the processed fruit tree flower image by using the trained deep convolutional neural network;
generating a fruit tree flower and leaf prediction map according to the fruit tree flower and leaf feature map, so as to obtain a fruit tree flower and leaf segmentation map;
counting the number of pixels belonging to flowers and the number of pixels belonging to leaves according to the flower and leaf segmentation map of the fruit tree;
calculating the density of flowers according to the number of pixels of the flowers and the number of pixels of the leaves;
and converting the densities of flowers in each fruit tree flower image into analog quantities.
The computer readable storage medium of the present embodiment may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In summary, according to the actual flower thinning equipment and the field flower thinning scene, a camera and a processor form an intelligent visual recognition system for the flower quantity of the fruit tree, which can be mounted on flower thinning equipment. The fruit tree flower and leaf feature map is extracted by a trained deep convolutional neural network, the fruit tree flower area can be accurately segmented in real time, the flower density is calculated and converted into an analog quantity, and the analog signal is transmitted to the flower thinning equipment, providing visual support for automatic flower thinning. Several stabilizing components can be added so that the whole system works more stably.
The above-mentioned embodiments are only preferred embodiments of the present application, but the protection scope of the present application is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art according to the technical solution and inventive concept of the present application, within the scope disclosed in this patent, falls within the protection scope of the present application.

Claims (8)

1. An intelligent visual recognition method for flower quantity of fruit trees is characterized by comprising the following steps:
acquiring a plurality of fruit tree flower images;
denoising and standardizing each fruit tree flower image;
extracting a fruit tree flower and leaf feature map in the processed fruit tree flower image by using the trained deep convolutional neural network;
generating a fruit tree flower and leaf prediction map according to the fruit tree flower and leaf feature map, so as to obtain a fruit tree flower and leaf segmentation map;
counting the number of pixels belonging to flowers and the number of pixels belonging to leaves according to the flower and leaf segmentation map of the fruit tree;
calculating the density of flowers according to the number of pixels of the flowers and the number of pixels of the leaves;
converting the flower density in each fruit tree flower image into analog quantity;
the deep convolutional neural network comprises a backbone network and a pyramid structure, wherein the backbone network comprises a transition feature extraction part and a backbone feature extraction part, and the transition feature extraction part, the backbone feature extraction part and the pyramid structure are sequentially connected;
the transition feature extraction part comprises a first convolution layer, a second convolution layer and a third convolution layer which are sequentially connected, wherein the first, second and third convolution layers are all convolution layers with 3×3 convolution kernels;
the backbone feature extraction part comprises four sequentially connected stages, namely stage 1, stage 2, stage 3 and stage 4, with 6, 8, 12 and 6 convolution blocks respectively from stage 1 to stage 4; every two consecutive convolution blocks form a residual module, and each residual module linearly fuses the output features of its two convolution blocks with the identity mapping of its input features, the result serving as the input of the subsequent convolution block; the number of output features of the convolution blocks within a stage is the same as the number of features initially input to that stage, the number of features differs across stages and is adjusted by a convolution layer with a 1×1 convolution kernel, and each convolution is followed by an activation function and a normalization layer; in each convolution block, features are processed with a 3×3 convolution kernel, and each convolution is followed by an activation function and a normalization layer;
the method for converting the flower density in each fruit tree flower image into analog quantity specifically comprises the following steps:
according to the flower densities in all the fruit tree flower images, a maximum value and a minimum value are obtained and used as the upper and lower limits for the conversion of flower density to analog quantity;
according to the density of flowers and the upper limit and the lower limit of analog quantity conversion, using a maximum and minimum normalization method to scale the density range of flowers to between 0 and 1, and enabling the analog quantity range to be 4 to 20 milliamperes;
constructing a mapping relation between the density range and the analog range of the flower by using linear scaling;
and converting the flower density in each fruit tree flower image into analog quantity according to the mapping relation between the flower density range and the analog quantity range.
2. The intelligent visual recognition method for the flower quantity of the fruit tree according to claim 1, wherein a dilated (hole) convolution structure is added in the convolution blocks of stage 3 and stage 4, and an attention module is added between every two adjacent stages.
3. The intelligent visual recognition method for the flower quantity of the fruit tree according to any one of claims 1 to 2, wherein the denoising and standardization process is carried out on the flower image of the fruit tree, and the method specifically comprises the following steps:
denoising the fruit tree flower image by using median filtering;
calculating the mean value and standard deviation of three RGB channel components of the denoised fruit tree flower image;
and (3) carrying out standardized calculation on the denoised fruit tree flower image according to the mean value and the standard deviation.
4. The intelligent visual recognition method for the flower quantity of the fruit tree according to any one of claims 1-2, wherein the generating a fruit tree flower and leaf prediction map according to the fruit tree flower and leaf feature map, thereby obtaining a fruit tree flower and leaf segmentation map, specifically comprises:
fusing the fruit tree flower and leaf feature maps through a first convolution block to obtain a first feature map; wherein the first convolution block has a kernel size of 3×3;
inputting the first feature map into a second convolution block to obtain a second feature map; wherein the kernel size of the second convolution block is 1×1;
mapping the second feature map into three probability prediction maps through a full convolution layer, wherein the three probability prediction maps respectively represent fruit tree flowers, leaves and background;
taking the maximum of the three probability prediction maps along the feature dimension to obtain a fruit tree flower and leaf prediction map;
and superposing the fruit tree flower and leaf prediction map and the fruit tree flower and leaf feature map according to preset weights to obtain a fruit tree flower and leaf segmentation map.
5. The intelligent visual recognition method for flower quantity of fruit tree according to any one of claims 1-2, wherein the calculating the flower density according to the number of pixels of flower and the number of pixels of leaf specifically comprises:
summing the number of pixels of the flower and the number of pixels of the leaf to obtain a total number of pixels;
the number of pixels of the flower is divided by the total number of pixels to obtain the flower density.
6. An intelligent visual recognition device for flower quantity of fruit trees, which is characterized by comprising:
the acquisition unit is used for acquiring a plurality of fruit tree flower images;
the processing unit is used for denoising and standardizing each fruit tree flower image;
the extraction unit is used for extracting the characteristic diagrams of the flowers and leaves of the fruit trees in the processed flower images of the fruit trees by using the trained deep convolutional neural network;
the generation unit is used for generating a fruit tree flower and leaf prediction map according to the fruit tree flower and leaf feature map, so as to obtain a fruit tree flower and leaf segmentation map;
the statistical unit is used for counting the number of pixels belonging to flowers and the number of pixels belonging to leaves according to the flower and leaf segmentation map of the fruit tree;
a calculation unit for calculating a density of flowers based on the number of pixels of the flowers and the number of pixels of the leaves;
the conversion unit is used for converting the densities of flowers in each fruit tree flower image into analog quantities;
the deep convolutional neural network comprises a backbone network and a pyramid structure, wherein the backbone network comprises a transition feature extraction part and a backbone feature extraction part, and the transition feature extraction part, the backbone feature extraction part and the pyramid structure are sequentially connected;
the transition feature extraction part comprises a first convolution layer, a second convolution layer and a third convolution layer which are sequentially connected, wherein the first, second and third convolution layers are all convolution layers with 3×3 convolution kernels;
the backbone feature extraction part comprises four sequentially connected stages, namely stage 1, stage 2, stage 3 and stage 4, with 6, 8, 12 and 6 convolution blocks respectively from stage 1 to stage 4; every two consecutive convolution blocks form a residual module, and each residual module linearly fuses the output features of its two convolution blocks with the identity mapping of its input features, the result serving as the input of the subsequent convolution block; the number of output features of the convolution blocks within a stage is the same as the number of features initially input to that stage, the number of features differs across stages and is adjusted by a convolution layer with a 1×1 convolution kernel, and each convolution is followed by an activation function and a normalization layer; in each convolution block, features are processed with a 3×3 convolution kernel, and each convolution is followed by an activation function and a normalization layer;
the method for converting the flower density in each fruit tree flower image into analog quantity specifically comprises the following steps:
according to the flower densities in all the fruit tree flower images, a maximum value and a minimum value are obtained and used as the upper and lower limits for the conversion of flower density to analog quantity;
according to the density of flowers and the upper limit and the lower limit of analog quantity conversion, using a maximum and minimum normalization method to scale the density range of flowers to between 0 and 1, and enabling the analog quantity range to be 4 to 20 milliamperes;
constructing a mapping relation between the density range and the analog range of the flower by using linear scaling;
and converting the flower density in each fruit tree flower image into analog quantity according to the mapping relation between the flower density range and the analog quantity range.
7. The intelligent visual recognition system for the flower quantity of the fruit tree is characterized by comprising a camera and a processor, wherein the camera is connected with the processor, the camera is set at a horizontal viewing angle, and the field of view of the camera covers the whole flower thinning working range;
the camera is used for shooting a fruit tree flower image;
the processor is used for executing the intelligent visual recognition method for the flower quantity of the fruit tree according to any one of claims 1-5.
8. A computer-readable storage medium storing a program, wherein the program, when executed by a processor, implements the intelligent visual recognition method for flower quantity of fruit trees according to any one of claims 1 to 5.
CN202110467162.6A 2021-04-28 2021-04-28 Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium Active CN113095279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110467162.6A CN113095279B (en) 2021-04-28 2021-04-28 Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110467162.6A CN113095279B (en) 2021-04-28 2021-04-28 Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium

Publications (2)

Publication Number Publication Date
CN113095279A CN113095279A (en) 2021-07-09
CN113095279B true CN113095279B (en) 2023-10-24

Family

ID=76681302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110467162.6A Active CN113095279B (en) 2021-04-28 2021-04-28 Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium

Country Status (1)

Country Link
CN (1) CN113095279B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511849B (en) * 2021-12-30 2024-05-17 广西慧云信息技术有限公司 Grape thinning identification method based on graph attention network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2948499A1 (en) * 2016-11-16 2018-05-16 The Governing Council Of The University Of Toronto System and method for classifying and segmenting microscopy images with deep multiple instance learning
CN109919025A (en) * 2019-01-30 2019-06-21 华南理工大学 Video scene Method for text detection, system, equipment and medium based on deep learning
CN109978077A (en) * 2019-04-08 2019-07-05 南京旷云科技有限公司 Visual identity methods, devices and systems and storage medium
CN111523546A (en) * 2020-04-16 2020-08-11 湖南大学 Image semantic segmentation method, system and computer storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Single-image rain removal method based on a multi-channel multi-scale convolutional neural network; 柳长源, 王琪, 毕晓君; Journal of Electronics & Information Technology, No. 09, pp. 224-231 *
Citrus recognition method in night-time environments based on an improved YOLO v3 network; 熊俊涛 et al.; Transactions of the Chinese Society for Agricultural Machinery, Vol. 51, No. 4, pp. 199-207 *

Also Published As

Publication number Publication date
CN113095279A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN109886155B (en) Single-plant rice detection and positioning method, system, equipment and medium based on deep learning
CN108710863A (en) Unmanned plane Scene Semantics dividing method based on deep learning and system
CN110222787A (en) Multiscale target detection method, device, computer equipment and storage medium
CN111797712A (en) Remote sensing image cloud and cloud shadow detection method based on multi-scale feature fusion network
CN107918776A (en) A kind of plan for land method, system and electronic equipment based on machine vision
CN107016403A (en) A kind of method that completed region of the city threshold value is extracted based on nighttime light data
CN110322509B (en) Target positioning method, system and computer equipment based on hierarchical class activation graph
CN114092833A (en) Remote sensing image classification method and device, computer equipment and storage medium
CN113095279B (en) Intelligent visual recognition method, device and system for flower quantity of fruit tree and storage medium
CN112597920A (en) Real-time object detection system based on YOLOv3 pruning network
CN114898359B (en) Litchi plant diseases and insect pests detection method based on improvement EFFICIENTDET
Peng et al. Litchi detection in the field using an improved YOLOv3 model
CN114596274A (en) Natural background citrus greening disease detection method based on improved Cascade RCNN network
CN113569772A (en) Remote sensing image farmland instance mask extraction method, system, equipment and storage medium
CN116091350B (en) Image rain removing method and system based on multi-cascade progressive convolution structure
CN116740337A (en) Safflower picking point identification positioning method and safflower picking system
CN112084815A (en) Target detection method based on camera focal length conversion, storage medium and processor
CN112082475B (en) Living stumpage species identification method and volume measurement method
CN114782360A (en) Real-time tomato posture detection method based on DCT-YOLOv5 model
CN114419443A (en) Automatic remote-sensing image cultivated land block extraction method and system
CN113920427A (en) Real-time tomato detection system based on YOLOv5CA attention model
CN114419559B (en) Attention mechanism-based method for identifying climbing hidden danger of vines of towers of distribution network line
CN118154855B (en) Method for detecting camouflage target of green fruit
CN117523550B (en) Apple pest detection method, apple pest detection device, electronic equipment and storage medium
CN116416527A (en) Remote sensing image-based power transmission line corridor object identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant