CN109784345A - Agricultural pest detection method based on a scale-free deep network - Google Patents
Agricultural pest detection method based on a scale-free deep network — Download PDF | Info
- Publication number
- CN109784345A (application CN201811587707.1A)
- Authority
- CN
- China
- Prior art keywords
- pest
- scale free
- target
- image
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Catching Or Destruction (AREA)
Abstract
The present invention relates to an agricultural pest detection method based on a scale-free deep network, comprising the following steps: (1) preprocess the given pest image training data; (2) construct a scale-free pest object detector; (3) extract scale-free features from the pest image under test and predict the confidence and position of the pest targets; (4) post-optimize the obtained confidence and position of the pest targets; (5) determine the position and number of pest targets in the pest image under test. By extracting and encoding scale-free pest image features, the present invention overcomes the drawback of manually set target reference boxes, adapts to pests of different scales, improves the recognition and detection of small pest targets, and improves the accuracy and robustness of agricultural pest detection.
Description
Technical field
The present invention relates to the field of pest target detection for precision agriculture, and in particular to an agricultural pest detection method based on a scale-free deep network.
Background technique
China is a large agricultural country, and agricultural production accounts for a large proportion of the national economy; however, pest infestations cause crop losses and greatly damage the quality of agricultural products. Monitoring the types and numbers of agricultural pests is the premise and key to forecasting and preventing them, and has important application value and significance.
Traditional agricultural pest detection relies mainly on plant-protection experts performing manual identification based on pest features; detection accuracy is affected by the experts' knowledge, experience, and subjective judgment, so the approach has certain subjectivity and limitations. Moreover, agricultural pests are of many categories and large in number, so detecting them consumes substantial human, material, and financial resources.
With the continuous development of computer vision technology, agricultural pest detection using deep learning networks has become more and more widely applied. Existing methods handle pests of different scales by setting recommendation (anchor) boxes of different sizes. However, the scales of manually set anchor boxes are limited: they cannot cover targets of all scales well, nor can they capture and exploit the scale-free features of pest targets, so robustness is poor. Meanwhile, because a large number of heavily overlapping candidate boxes must be considered, detection time increases, and detection accuracy for small targets is especially low.
Summary of the invention
The purpose of the present invention is to provide an agricultural pest detection method based on a scale-free deep network that overcomes the deficiencies of the prior art and improves the accuracy and efficiency of agricultural pest detection.
To achieve the above object, the invention adopts the following technical scheme:
An agricultural pest detection method based on a scale-free deep network, the method comprising the following steps:
(1) Preprocess the given pest image training data.
(2) Construct the scale-free pest object detector.
(3) Extract scale-free features from the pest image under test and predict the confidence and position of the pest targets.
(4) Post-optimize the obtained confidence and position of the pest targets.
(5) Determine the position and number of pest targets in the pest image under test.
Further, "preprocess the given pest image training data" in step (1) specifically includes the following steps:
(11) Resize all pest training images to a fixed size.
(12) Mark the minimum rectangular box containing each pest target in every image, obtaining the coordinate information of the true pest target positions.
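Steps (11)–(12) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 512×512 target size is an assumption (the patent only requires a fixed size), and a plain NumPy nearest-neighbour resize stands in for the MATLAB/OpenCV resize function mentioned later in the text.

```python
import numpy as np

def preprocess(image, boxes, size=(512, 512)):
    """Resize an image to a fixed (width, height) and rescale its ground-truth
    pest boxes (x, y, w, h, upper-left origin) to match (steps 11-12).
    Target size and nearest-neighbour interpolation are assumptions."""
    h, w = image.shape[:2]
    sx, sy = size[0] / w, size[1] / h
    # nearest-neighbour resize in plain NumPy so the sketch is self-contained
    xs = (np.arange(size[0]) / sx).astype(int).clip(0, w - 1)
    ys = (np.arange(size[1]) / sy).astype(int).clip(0, h - 1)
    resized = image[np.ix_(ys, xs)]
    # scale each annotated minimum rectangle by the same factors
    scaled = [(x * sx, y * sy, bw * sx, bh * sy) for x, y, bw, bh in boxes]
    return resized, scaled
```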
Further, "construct the scale-free pest object detector" in step (2) specifically includes the following steps:
(21) Randomly initialize the weights and biases (W, B) of the deep neural network of the scale-free pest object detector. The weights W comprise the weights of the deep feature encoder and of the detection head; the deep feature encoder consists of N1 forward convolution layers, N2 deconvolution layers, and 1 scale-free feature layer, while the detection head is a single-layer fully connected network whose output layer contains 6 neurons.
(22) Obtain scale-free positive and negative samples with the center drop-point method: slide a window of size n × n over the output feature map of the deconvolution module; at each sliding position, map the center of the window back into the original image and draw a circle of radius R around the mapped center point. If the center point of a ground-truth box falls within this circle, the window is regarded as a positive sample and labeled 1; otherwise it is regarded as a negative sample and labeled 0.
The "original image" refers to the input pest image after the resize transformation, i.e., the pest image of step (11); the "ground-truth box" refers to the minimum rectangular box of the pest target marked in step (12).
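The center drop-point labeling of step (22) can be sketched as below. The feature-map-to-image stride (the mapping from a sliding position to an original-image coordinate) is an assumption of this sketch; the patent leaves that mapping implicit.

```python
import numpy as np

def label_windows(feat_h, feat_w, stride, R, gt_centers):
    """Center drop-point method (step 22): each n-by-n sliding-window position
    on the deconvolution feature map is mapped back to the original image; it
    is positive (1) iff some ground-truth box center lies within a circle of
    radius R around the mapped window center, otherwise negative (0)."""
    labels = np.zeros((feat_h, feat_w), dtype=int)
    for i in range(feat_h):
        for j in range(feat_w):
            # window center mapped into original-image coordinates
            cy, cx = (i + 0.5) * stride, (j + 0.5) * stride
            for gx, gy in gt_centers:
                if (gx - cx) ** 2 + (gy - cy) ** 2 <= R ** 2:
                    labels[i, j] = 1
                    break
    return labels
```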
(23) Using the positive and negative sample set obtained in step (22), define the scale-free deep network learning loss function L.
Here k is the sample index, Nc is the batch size, Nr equals the number of samples, α is a balance coefficient, pk is the predicted probability of a pest target, and pk* is the corresponding sample label (1 for positive samples, 0 for negative samples). tk* = {tx*, ty*, tw*, th*} parameterizes the ground-truth bounding box of a sample and tk = {tx, ty, tw, th} the predicted bounding box, with tx = x/Winput, ty = y/Hinput, tw = log(w/Winput), th = log(h/Hinput). Correspondingly, x, y, w, h are the abscissa and ordinate of the upper-left corner, the width, and the height of the predicted bounding box; x*, y*, w*, h* are those of the ground-truth box; Winput and Hinput are the width and height of the input pest image.
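The loss formula itself appears only as a figure image in the patent and is not reproduced in this text. A hedged reconstruction, consistent with the symbols defined above (Nc, Nr, α, pk, pk*, tk, tk*) and with the anchor-style classification-plus-regression pattern that the parameterization suggests, would be:

```latex
L = \frac{1}{N_c}\sum_{k} L_{cls}\!\left(p_k,\, p_k^{*}\right)
  + \alpha \, \frac{1}{N_r}\sum_{k} p_k^{*}\, L_{reg}\!\left(t_k,\, t_k^{*}\right)
```

where the regression term is gated by pk* so that only positive samples contribute to the position loss; the exact forms of L_cls and L_reg are not recoverable from the text.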
(24) Train the weights and bias parameters of the deep neural network of the scale-free pest object detector with the BP algorithm for N iterations until the detector parameters are optimal.
Here l indexes the detector network layers, l = 1, 2, ..., N1+N2+2; Wl is the weight matrix of layer l, Bl its bias parameters, and η the learning rate.
(25) Obtain the optimal scale-free object detector; specifically, obtain the optimal feature-encoder parameters Wl, Bl, l = 1, 2, ..., N1+N2+1, and the optimal detection-head weights and bias parameters Wc1, Wc2, Wb1, Wb2, Wb3, Wb4, bc1, bc2, Bb1, Bb2, Bb3, Bb4, where Wc1, Wc2 are the confidence-regression weights with bias parameters bc1, bc2, and Wb1, Wb2, Wb3, Wb4 are the position-regression weights with bias parameters Bb1, Bb2, Bb3, Bb4.
Further, "extract scale-free features from the pest image under test and predict the confidence and position of the pest targets" in step (3) specifically includes the following steps:
(31) Using the scale-free object detector trained in step (25), obtain the scale-free feature Y of the pest image under test.
Here the weight and bias parameters of the scale-free feature module are applied to the output X of the deconvolution module, which is obtained by the recursion Xl = σ(Wl * Xl-1 + Bl), where Xl is the output feature of layer l, l indexes the detector network layers, l = 1, 2, ..., N1+N2, * denotes the convolution operation, σ(·) is the ReLU function, and X0 is the input pest image matrix.
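The recursion of step (31) can be illustrated as follows. This toy sketch uses a single-channel "same"-size convolution purely to make the recursion Xl = ReLU(Wl * Xl-1 + Bl) concrete; a real encoder would use multi-channel layers in a deep-learning framework.

```python
import numpy as np

def conv2d(x, w):
    """Minimal single-channel 'same'-size 2-D convolution, used only to
    illustrate the * operation in the recursion."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def encode(x0, weights, biases):
    """Feature recursion of step (31): X_l = ReLU(W_l * X_{l-1} + B_l),
    starting from the input pest image matrix X_0."""
    x = x0
    for w, b in zip(weights, biases):
        x = np.maximum(conv2d(x, w) + b, 0.0)  # sigma(.) = ReLU
    return x
```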
(32) Using the optimal detection head trained in step (25), compute the confidence c of each pest candidate box in the pest image under test with the following formula, where e ≈ 2.72 is the mathematical constant.
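The confidence formula of step (32) is also a figure image not reproduced in the text. Given that the text defines the constant e, the natural reading is a logistic (sigmoid) mapping of a confidence score z derived from Wc·Y + bc; that form is an assumption of this sketch.

```python
import math

def confidence(z):
    """Candidate-box confidence c = 1 / (1 + e^(-z)); the sigmoid form is an
    assumption, inferred from the constant e mentioned in step (32)."""
    return 1.0 / (1.0 + math.exp(-z))
```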
(33) Using the optimal detection head trained in step (25), obtain the pest target position as: upper-left abscissa x = wx × Winput, upper-left ordinate y = wy × Hinput, and the corresponding width and height,
where wx = σ(Wb1*Y + bb1), wy = σ(Wb2*Y + bb2), ww = σ(Wb3*Y + bb3), wh = σ(Wb4*Y + bb4).
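The width and height formulas of step (33) are figure images missing from this text. A sketch of the decoding follows; the x and y expressions match the text exactly, while the exponential width/height is an assumption obtained by inverting the parameterization tw = log(w/Winput), th = log(h/Hinput) from step (23).

```python
import math

def decode_box(wx, wy, ww, wh, W_input, H_input):
    """Decode detector outputs into an image-space box (step 33)."""
    x = wx * W_input              # upper-left abscissa, as given in the text
    y = wy * H_input              # upper-left ordinate, as given in the text
    w = W_input * math.exp(ww)    # assumption: inverse of t_w = log(w / W_input)
    h = H_input * math.exp(wh)    # assumption: inverse of t_h = log(h / H_input)
    return x, y, w, h
```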
Further, "post-optimize the obtained confidence and position of the pest targets" in step (4) specifically includes the following steps:
(41) Correct the Q and P values with an area-compensation strategy, where s is the area of a pest target candidate box, s1 is the preset area threshold, and λ is an adjustment factor with value range (0, 1]; then recompute the target confidence.
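The Q/P correction formulas of step (41) are figure images not reproduced in the text, so the exact piecewise form below is an assumption: one plausible reading, in which candidate boxes smaller than the area threshold s1 have their confidence up-weighted by the factor λ, consistent with the stated goal of balancing small and large targets.

```python
def area_compensate(c, s, s1, lam):
    """Area-compensation sketch (step 41). The exact correction is not
    recoverable from the text; this assumed form boosts the confidence c of
    small candidate boxes (area s < threshold s1) using lambda in (0, 1]."""
    if s < s1:
        return min(1.0, c / lam)  # up-weight small-target confidence
    return c
```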
(42) Post-optimize the positions of the pest targets:
(421) Sort the target candidate boxes by the pest-target confidence c and mark the box with the maximum confidence.
(422) Compute the intersection-over-union (IoU) between the candidate box with the highest confidence and each remaining candidate box.
(423) Remove the candidate boxes whose IoU exceeds the preset threshold Nt.
(424) Repeat steps (421), (422), and (423) on the retained candidate boxes until the last candidate box is reached; the iteration ends, and all marked candidate boxes are output.
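Steps (421)–(424) describe standard greedy non-maximum suppression, which can be sketched as:

```python
def iou(a, b):
    """Intersection over union of two boxes (x, y, w, h), upper-left origin."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, Nt):
    """Steps (421)-(424): repeatedly mark the highest-confidence box, then
    discard remaining boxes whose IoU with it exceeds the threshold Nt."""
    order = sorted(range(len(boxes)), key=lambda k: scores[k], reverse=True)
    keep = []
    while order:
        best = order.pop(0)   # mark the current maximum-confidence box
        keep.append(best)
        order = [k for k in order if iou(boxes[best], boxes[k]) <= Nt]
    return keep
```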
Further, "determine the position and number of pest targets in the pest image under test" in step (5) specifically includes the following steps: for all marked candidate boxes obtained in step (4), set a threshold ts and select the candidate boxes whose target confidence exceeds ts as the final pest detection results; their count is the total number of pests in the pest image under test.
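The final thresholding and counting of step (5) reduces to:

```python
def final_detections(boxes, scores, ts):
    """Step (5): keep marked candidate boxes whose confidence exceeds the
    threshold ts; the number kept is the pest count for the image."""
    kept = [(b, c) for b, c in zip(boxes, scores) if c > ts]
    return kept, len(kept)
```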
Compared with the prior art, the beneficial effects of the present invention are:
(1) The present invention obtains scale-free positive and negative samples with the center drop-point method and encodes scale-free pest image features with a deep neural network, avoiding manually set reference-box scales; it adapts to pests of different scales and improves the accuracy and flexibility of agricultural pest recognition.
(2) The present invention balances the confidence weights of small and large targets with the area-compensation strategy, which is particularly beneficial to the recognition and detection of smaller pest targets.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the present invention;
Fig. 2 is a structural diagram of the scale-free pest object detector of the present invention;
Fig. 3 is a schematic diagram of the center drop-point method;
Fig. 4 is a flowchart of the post-optimization of pest target confidence and position in the present invention.
Specific embodiment
The present invention will be further described below with reference to the accompanying drawings:
As shown in Fig. 1, an agricultural pest detection method based on a scale-free deep network includes the following steps:
S1. Preprocess the given pest image training data, specifically including the following steps:
S11. Resize all pest training images to a fixed size via the resize transformation; the resize function is a function in MATLAB or OpenCV for scaling an image to a specified size.
S12. Mark the minimum rectangular box containing each pest target in every image, obtaining the coordinate information of the true pest target positions.
S2. Construct the scale-free pest object detector, specifically including the following steps:
S21. Randomly initialize the weights and biases (W, B) of the deep neural network of the scale-free pest object detector. The weights W comprise the weights of the deep feature encoder and of the detection head; the deep feature encoder consists of N1 forward convolution layers, N2 deconvolution layers, and 1 scale-free feature layer. The scale-free feature module outputs k P-dimensional scale-free feature vectors; the detection head is a single-layer fully connected network whose output layer contains 6 neurons, of which 2 predict the target confidence and the other 4 predict the pest target position.
S22. As shown in Fig. 3, obtain scale-free positive and negative samples with the center drop-point method: slide a window of size n × n over the output feature map of the deconvolution module; at each sliding position, map the center of the window back into the original image and draw a circle of radius R around the mapped center point. If the center point of a ground-truth box falls within this circle, the window is regarded as a positive sample and labeled 1; otherwise it is regarded as a negative sample and labeled 0.
The "original image" refers to the input pest image after the resize transformation, i.e., the pest image of step S11; the "ground-truth box" refers to the minimum rectangular box of the pest target marked in step S12.
S23. Using the positive and negative sample set obtained in step S22, define the scale-free deep network learning loss function L.
Here k is the sample index, Nc is the batch size, Nr equals the number of samples, α is a balance coefficient, pk is the predicted probability of a pest target, and pk* is the corresponding sample label (1 for positive samples, 0 for negative samples). tk* = {tx*, ty*, tw*, th*} parameterizes the ground-truth bounding box of a sample and tk = {tx, ty, tw, th} the predicted bounding box, with tx = x/Winput, ty = y/Hinput, tw = log(w/Winput), th = log(h/Hinput). Correspondingly, x, y, w, h are the abscissa and ordinate of the upper-left corner, the width, and the height of the predicted bounding box; x*, y*, w*, h* are those of the ground-truth box; Winput and Hinput are the width and height of the input pest image.
S24. Train the weights and bias parameters of the deep neural network of the scale-free pest object detector with the BP algorithm for N iterations until the detector parameters are optimal.
Here l indexes the detector network layers, l = 1, 2, ..., N1+N2+2; Wl is the weight matrix of layer l, Bl its bias parameters, and η the learning rate.
S25. Obtain the optimal scale-free object detector; specifically, obtain the optimal feature-encoder parameters Wl, Bl, l = 1, 2, ..., N1+N2+1, and the optimal detection-head weights and bias parameters Wc1, Wc2, Wb1, Wb2, Wb3, Wb4, bc1, bc2, Bb1, Bb2, Bb3, Bb4, where Wc1, Wc2 are the confidence-regression weights with bias parameters bc1, bc2, and Wb1, Wb2, Wb3, Wb4 are the position-regression weights with bias parameters Bb1, Bb2, Bb3, Bb4.
S3. Extract the scale-free features of the pest image under test and predict the confidence and position of the pest targets, specifically including the following steps:
S31. Using the scale-free feature encoder trained in step S25, obtain the scale-free feature of the pest image under test.
Here the weight and bias parameters of the scale-free feature module are applied to the output of the deconvolution module, which is obtained by the recursion Xl = σ(Wl * Xl-1 + Bl), where Xl is the output feature of layer l, l indexes the detector network layers, l = 1, 2, ..., N1+N2, * denotes the convolution operation, σ(·) is the ReLU function, and X0 is the input pest image matrix.
S32. Using the optimal detection head trained in step S25, compute the confidence c of each pest candidate box in the pest image under test with the following formula, where e ≈ 2.72 is the mathematical constant.
S33. Using the optimal detection head trained in step S25, obtain the pest target position as: upper-left abscissa x = wx × Winput, upper-left ordinate y = wy × Hinput, and the corresponding width and height,
where wx = σ(Wb1*Y + bb1), wy = σ(Wb2*Y + bb2), ww = σ(Wb3*Y + bb3), wh = σ(Wb4*Y + bb4).
S4. Post-optimize the obtained confidence and position of the pest targets.
As shown in Fig. 4, this specifically includes the following steps:
S41. Correct the Q and P values with the area-compensation strategy, where s is the area of a pest target candidate box, s1 is the preset area threshold, and λ is an adjustment factor with value range (0, 1]; then recompute the target confidence.
S42. Post-optimize the positions of the pest targets:
S421. Sort the target candidate boxes by the pest-target confidence c and mark the box with the maximum confidence.
S422. Compute the intersection-over-union (IoU) between the candidate box with the highest confidence and each remaining candidate box.
S423. Remove the candidate boxes whose IoU exceeds the preset threshold Nt.
S424. Repeat steps S421, S422, and S423 on the retained candidate boxes until the last candidate box is reached; the iteration ends, and all marked candidate boxes are output.
S5. Determine the position and number of pest targets in the pest image under test, specifically including the following steps: for all marked candidate boxes obtained in step S4, set a threshold ts and select the candidate boxes whose target confidence exceeds ts as the final pest detection results; their count is the total number of pests in the pest image under test.
The embodiments described above only describe preferred modes of the invention and do not limit its scope; without departing from the spirit of the design of the present invention, all changes and improvements made by those of ordinary skill in the art to the technical scheme of the present invention shall fall within the protection scope determined by the claims of the present invention.
Claims (6)
1. An agricultural pest detection method based on a scale-free deep network, characterized in that the method includes the following steps:
(1) preprocessing the given pest image training data;
(2) constructing a scale-free pest object detector;
(3) extracting scale-free features from the pest image under test and predicting the confidence and position of the pest targets;
(4) post-optimizing the obtained confidence and position of the pest targets;
(5) determining the position and number of pest targets in the pest image under test.
2. The agricultural pest detection method based on a scale-free deep network according to claim 1, characterized in that "preprocessing the given pest image training data" in step (1) specifically includes the following steps:
(11) resizing all pest training images to a fixed size;
(12) marking the minimum rectangular box containing each pest target in every image to obtain the coordinate information of the true pest target positions.
3. The agricultural pest detection method based on a scale-free deep network according to claim 2, characterized in that "constructing the scale-free pest object detector" in step (2) specifically includes the following steps:
(21) randomly initializing the weights and biases (W, B) of the deep neural network of the scale-free pest object detector, the weights W comprising the weights of the deep feature encoder and of the detection head, the deep feature encoder consisting of N1 forward convolution layers, N2 deconvolution layers, and 1 scale-free feature layer, and the detection head being a single-layer fully connected network whose output layer contains 6 neurons;
(22) obtaining scale-free positive and negative samples with the center drop-point method: sliding a window of size n × n over the output feature map of the deconvolution module; at each sliding position, mapping the center of the window back into the original image and drawing a circle of radius R around the mapped center point; if the center point of a ground-truth box falls within this circle, the window is regarded as a positive sample and labeled 1; otherwise it is regarded as a negative sample and labeled 0;
(23) using the positive and negative sample set obtained in step (22) to obtain the scale-free deep network learning loss function L,
where k is the sample index, Nc is the batch size, Nr equals the number of samples, α is a balance coefficient, pk is the predicted probability of a pest target, and pk* is the corresponding sample label, which is 1 for positive samples and 0 for negative samples; tk* = {tx*, ty*, tw*, th*} parameterizes the ground-truth bounding box of a sample and tk = {tx, ty, tw, th} the predicted bounding box, with tx = x/Winput, ty = y/Hinput, tw = log(w/Winput), th = log(h/Hinput); correspondingly, x, y, w, h are the abscissa and ordinate of the upper-left corner, the width, and the height of the predicted bounding box, x*, y*, w*, h* are those of the ground-truth box, and Winput and Hinput are the width and height of the input pest image;
(24) training the weights and bias parameters of the deep neural network of the scale-free pest object detector with the BP algorithm for N iterations until the detector parameters are optimal, where l indexes the detector network layers, l = 1, 2, ..., N1+N2+2, Wl is the weight matrix of layer l, Bl its bias parameters, and η the learning rate;
(25) obtaining the optimal scale-free object detector, specifically the optimal feature-encoder parameters Wl, Bl, l = 1, 2, ..., N1+N2+1, and the optimal detection-head weights and bias parameters Wc1, Wc2, Wb1, Wb2, Wb3, Wb4, bc1, bc2, Bb1, Bb2, Bb3, Bb4, where Wc1, Wc2 are the confidence-regression weights with bias parameters bc1, bc2, and Wb1, Wb2, Wb3, Wb4 are the position-regression weights with bias parameters Bb1, Bb2, Bb3, Bb4.
4. The agricultural pest detection method based on a scale-free deep network according to claim 3, characterized in that "extracting scale-free features from the pest image under test and predicting the confidence and position of the pest targets" in step (3) specifically includes the following steps:
(31) using the scale-free object detector trained in step (25) to obtain the scale-free feature Y of the pest image under test, the weight and bias parameters of the scale-free feature module being applied to the output of the deconvolution module, which is obtained by the recursion Xl = σ(Wl * Xl-1 + Bl), where Xl is the output feature of layer l, l indexes the detector network layers, l = 1, 2, ..., N1+N2, * denotes the convolution operation, σ(·) is the ReLU function, and X0 is the input pest image matrix;
(32) using the optimal detection head trained in step (25) to compute the confidence c of each pest candidate box in the pest image under test, where e ≈ 2.72 is the mathematical constant;
(33) using the optimal detection head trained in step (25) to obtain the pest target position as: upper-left abscissa x = wx × Winput, upper-left ordinate y = wy × Hinput, and the corresponding width and height,
where wx = σ(Wb1*Y + bb1), wy = σ(Wb2*Y + bb2), ww = σ(Wb3*Y + bb3), wh = σ(Wb4*Y + bb4).
5. The agricultural pest detection method based on a scale-free deep network according to claim 4, characterized in that "post-optimizing the obtained confidence and position of the pest targets" in step (4) specifically includes the following steps:
(41) correcting the Q and P values with the area-compensation strategy, where s is the area of a pest target candidate box, s1 is the preset area threshold, and λ is an adjustment factor with value range (0, 1], and recomputing the target confidence;
(42) post-optimizing the positions of the pest targets:
(421) sorting the target candidate boxes by the pest-target confidence c and marking the box with the maximum confidence;
(422) computing the intersection-over-union (IoU) between the candidate box with the highest confidence and each remaining candidate box;
(423) removing the candidate boxes whose IoU exceeds the preset threshold Nt;
(424) repeating steps (421), (422), and (423) on the retained candidate boxes until the last candidate box is reached, whereupon the iteration ends and all marked candidate boxes are output.
6. The agricultural pest detection method based on a scale-free deep network according to claim 5, characterized in that "determining the position and number of pest targets in the pest image under test" in step (5) specifically includes the following steps: for all marked candidate boxes obtained in step (4), setting a threshold ts and selecting the candidate boxes whose target confidence exceeds ts as the final pest detection results, their count being the total number of pests in the pest image under test.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811587707.1A CN109784345B (en) | 2018-12-25 | 2018-12-25 | Agricultural pest detection method based on non-scale depth network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109784345A true CN109784345A (en) | 2019-05-21 |
CN109784345B CN109784345B (en) | 2022-10-28 |
Family
ID=66497571
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811587707.1A Active CN109784345B (en) | 2018-12-25 | 2018-12-25 | Agricultural pest detection method based on non-scale depth network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109784345B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110245604A (*) | 2019-06-12 | 2019-09-17 | Xidian University | Mosquito recognition method based on convolutional neural networks
CN110287993A (*) | 2019-05-22 | 2019-09-27 | Guangdong Jingdian Data Technology Co., Ltd. | Data preprocessing method and system based on image feature refinement
CN113191229A (*) | 2021-04-20 | 2021-07-30 | South China Agricultural University | Intelligent visual pest detection method
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107665355A (*) | 2017-09-27 | 2018-02-06 | Chongqing University of Posts and Telecommunications | Agricultural pest detection method based on region convolutional neural networks
WO2018137357A1 (*) | 2017-01-24 | 2018-08-02 | Peking University | Target detection performance optimization method
Non-Patent Citations (1)
Title |
---|
WEI Yang et al., "Agricultural pest detection method based on region convolutional neural networks", Computer Science *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |