CN109409365A - Method for recognizing and locating fruit to be picked based on deep object detection

Method for recognizing and locating fruit to be picked based on deep object detection

Info

Publication number
CN109409365A
CN109409365A (application CN201811249496.0A)
Authority
CN
China
Prior art keywords
fruit
training
image
prediction box
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811249496.0A
Other languages
Chinese (zh)
Inventor
邓杨敏
李�亨
吕继团
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu De Shao Mdt Infotech Ltd
Original Assignee
Jiangsu De Shao Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu De Shao Mdt Infotech Ltd filed Critical Jiangsu De Shao Mdt Infotech Ltd
Priority to CN201811249496.0A priority Critical patent/CN109409365A/en
Publication of CN109409365A publication Critical patent/CN109409365A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • AHUMAN NECESSITIES
    • A01AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01DHARVESTING; MOWING
    • A01D46/00Picking of fruits, vegetables, hops, or the like; Devices for shaking trees or shrubs
    • A01D46/30Robotic devices for individually picking crops
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Robotics (AREA)
  • Environmental Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a method for recognizing and locating fruit to be picked based on deep object detection, comprising the following steps: image acquisition; image labeling; data set preparation; feature extraction, in which a deep convolutional neural network (CNN) first extracts the fruit features in the image, with VGG16 used as the feature-extraction convolutional network; and model training, in which, for all training samples, the feature maps on each convolutional layer are first extracted, prediction boxes are generated on the extracted feature maps, the class of the target in each prediction box is computed, the distance between the prediction boxes and the ground-truth annotation boxes is computed, and the classification loss and the prediction-box offset loss are combined as the training objective. One complete pass of the above computation constitutes one training iteration; training terminates when the number of iterations reaches a predetermined threshold or the loss falls below a predetermined threshold.

Description

Method for recognizing and locating fruit to be picked based on deep object detection
Technical field
The present invention relates to a method for recognizing and locating fruit to be picked, and more particularly to a method for recognizing and locating fruit to be picked based on deep object detection.
Background art
The complexity of the natural environment in which fruit-picking operations take place means that fruit picking still relies on manual labour, and the labour required for the picking stage accounts for more than half of the labour invested in the entire production process. With the continuous decline of China's agricultural workforce and the continuous rise of labour costs, automated fruit picking is of great significance for alleviating labour shortages in the fruit industry, ensuring that fruit is picked at the proper time, and improving picking quality. Research on automatic fruit-picking technology is therefore extremely urgent.
Quickly and reliably finding fruit targets in the natural growing environment and determining the position of the fruit to be picked are key technologies for automatic picking. In recent years many fruit detection, recognition and localization algorithms have been proposed. Some enhance apple features with over-red images and the 2R-G-B colour difference and extract the apple region from the image with adaptive threshold segmentation. Others study segmentation based on region growing and colour features and recognize the extracted apple colour and shape features with a support vector machine, but the average error rate caused by leaf occlusion is relatively large. Still others detect candidate citrus-fruit pixels with transform-based features and block matching and build a support vector machine that achieves a high recognition accuracy, but perform poorly at recognizing occluded or unevenly illuminated fruit.
In the natural growing environment, the branches and leaves of the fruit tree and the fruits themselves form complex spatial relationships, so that fruits are randomly occluded by foliage or by other fruits. Existing algorithms mainly design and extract hand-engineered features such as the texture, colour and shape of the fruit to be picked. When the occlusion and illumination conditions in the field change from scene to scene, the corresponding features of the fruit to be picked change as well, so the robustness of fruit feature-extraction algorithms based on hand-designed features needs to be improved. Moreover, fruit images acquired under natural production conditions usually contain complex backgrounds such as occlusion or uneven illumination, and the recognition rate and localization accuracy of the above recognition-and-localization algorithms under natural growing conditions also need to be improved.
Summary of the invention
In view of the above problems, the present invention provides a method for recognizing and locating fruit to be picked based on deep object detection that achieves automatic feature extraction, automatic recognition and automatic localization of fruit in complex natural environments.
To solve the above problems, the present invention adopts the following technical solution: a method for recognizing and locating fruit to be picked based on deep object detection, characterized by comprising the following steps:
Step 1 Image Acquisition
Image acquisition is based on ordinary RGB images, and the acquisition targets are fruits to be picked in their actual natural growing environment. Images are captured with an ordinary single-lens reflex camera. The imaging simulates the viewpoint of manual picking: the camera height is kept roughly level with the fruit target, and the fruit target is kept at the centre of the image.
Step 2 image labeling
The raw data acquired in step 1 are labelled manually. Each fruit to be picked in an image is marked by hand with its minimum bounding rectangle, the class of the target fruit (e.g. apple) is recorded for each box, and the vertex coordinates of the upper-left and lower-right corners of the minimum bounding rectangle in the picture are recorded. The format of the annotation records is identical to the annotation format of the widely used ImageNet image data set.
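The following is a minimal sketch of what one such annotation record can look like, assuming the PASCAL VOC-style XML layout that is also used for ImageNet detection data; the file name, class and coordinates are illustrative only.

```python
# Minimal sketch of a single bounding-box annotation record (PASCAL VOC-style XML,
# as also used by the ImageNet detection data); all values are illustrative.
import xml.etree.ElementTree as ET

def make_annotation(filename, cls, xmin, ymin, xmax, ymax):
    root = ET.Element('annotation')
    ET.SubElement(root, 'filename').text = filename
    obj = ET.SubElement(root, 'object')
    ET.SubElement(obj, 'name').text = cls               # target fruit class, e.g. "apple"
    box = ET.SubElement(obj, 'bndbox')
    for tag, value in zip(('xmin', 'ymin', 'xmax', 'ymax'), (xmin, ymin, xmax, ymax)):
        ET.SubElement(box, tag).text = str(value)       # upper-left / lower-right vertices
    return ET.tostring(root)

print(make_annotation('apple_0001.jpg', 'apple', 120, 85, 260, 230))
```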
Step 3 Data Set Preparation
The original images from step 1 together with the corresponding label data from step 2 form one trainable sample each, and all trainable samples form the original data set. The original data are manually and randomly divided into 3 subsets in the ratio 5:3:2, which serve respectively as the training set, validation set and test set of the subsequent detection model. The training set, validation set and test set are all drawn from the original data set, and the three subsets are independently distributed over the image space and do not overlap.
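A minimal sketch of such a 5:3:2 random split is given below; in the sketch the split is done programmatically rather than by hand, and the image names are illustrative.

```python
# Minimal sketch of the 5:3:2 random split into training, validation and test sets;
# the split is done programmatically here and the image names are illustrative.
import random

def split_dataset(image_names, seed=0):
    names = list(image_names)
    random.Random(seed).shuffle(names)
    n = len(names)
    n_train, n_val = int(0.5 * n), int(0.3 * n)
    train = names[:n_train]
    val = names[n_train:n_train + n_val]
    test = names[n_train + n_val:]            # remaining ~20%
    return train, val, test

train, val, test = split_dataset(['img_%04d.jpg' % i for i in range(2000)])
print(len(train), len(val), len(test))        # 1000 600 400
```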
Step 4 network training
For the training set divided in step 3, a deep convolutional neural network (Convolutional Neural Network, CNN) is first used to extract the fruit features in the images. VGG16 is used as the feature-extraction convolutional network; the VGG16 network comprises 13 convolution (Conv) layers, 13 activation (ReLU) layers and 4 pooling layers. Border padding is designed so that the image border is zero-padded during the convolution computation, which ensures that the input and output matrices of each Conv layer are the same size. The original input image is convolved with kernels of size 3*3 to extract feature maps (Feature map), and the convolution computation is given by formula (1):

x_j^l = F\left( \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l \right)    (1)

where x_j^l denotes the j-th feature map of the l-th convolutional layer, x_i^{l-1} denotes the i-th feature map of the (l-1)-th convolutional layer, F(·) denotes the activation function, M_j denotes the set of input feature maps, i ∈ M_j indexes the input feature maps, k_{ij}^l is the convolution kernel and b_j^l is the bias term.
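Below is a minimal numpy sketch of formula (1) for a single 3*3 convolutional layer with zero padding, using ReLU as the activation F; it is illustrative only and not the implementation used by the invention.

```python
# Minimal numpy sketch of formula (1): a 3x3 convolution with zero padding so the output
# feature map keeps the input size, plus a bias term and a ReLU activation F(.).
import numpy as np

def conv3x3_same(feature_maps, kernels, bias):
    """feature_maps: (C_in, H, W); kernels: (C_out, C_in, 3, 3); bias: (C_out,)."""
    _, h, w = feature_maps.shape
    padded = np.pad(feature_maps, ((0, 0), (1, 1), (1, 1)))   # zero padding at the borders
    out = np.zeros((kernels.shape[0], h, w))
    for j in range(kernels.shape[0]):                         # j-th output feature map x_j^l
        for y in range(h):
            for x in range(w):
                patch = padded[:, y:y + 3, x:x + 3]           # inputs x_i^{l-1} under the kernel
                out[j, y, x] = np.sum(patch * kernels[j]) + bias[j]
    return np.maximum(out, 0)                                  # activation F(.)

maps = np.random.rand(3, 8, 8)
print(conv3x3_same(maps, np.random.rand(4, 3, 3, 3), np.zeros(4)).shape)   # (4, 8, 8)
```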
The SSD (single shot detector) detection network, built on a convolutional neural network (Convolutional Neural Network, CNN), is selected to train the final recognition-and-localization model for the fruit to be picked. The detection network constructed by the SSD algorithm uses the fully convolutional VGG16 as its base network to extract the target features, i.e. the feature extraction of step 4 is merged into the detection network as one of its parts. The SSD object-detection framework generates bounding boxes of fixed sizes on multiple feature maps, together with the confidence of each class within the boxes. The overall objective loss function of model training is given by formula (2):

L(x, c, l, g) = \frac{1}{N} \left( L_{conf}(x, c) + \alpha L_{loc}(x, l, g) \right)    (2)
In the formula, x indicates the class matching of the targets in the prediction boxes, c is the Softmax confidence for each class, N is the number of matched default boxes, and the weight term α is set to 1 by cross-validation. The localization loss L_{loc} is the smooth L1 loss between the prediction box l and the ground-truth box g, with m ranging over {cx, cy, w, h}, as shown in formula (3):

L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{k} \, smooth_{L1}\left( l_i^m - \hat{g}_j^m \right)    (3)
During training an offset regression is performed between the generated prediction boxes and the ground-truth label boxes, where box = {cx, cy, w, h} denotes the centre coordinates and the width and height of a prediction box. The translations (cx, cy) and the scale factors (w, h) are used to obtain the regression targets \hat{g} that approximate the ground-truth labels; l^{cx} and l^{cy} denote the coordinate values of the prediction box, and l^{w} and l^{h} denote its width and height, as shown in formula (4):

\hat{g}_j^{cx} = (g_j^{cx} - d_i^{cx}) / d_i^{w}, \quad \hat{g}_j^{cy} = (g_j^{cy} - d_i^{cy}) / d_i^{h}, \quad \hat{g}_j^{w} = \log( g_j^{w} / d_i^{w} ), \quad \hat{g}_j^{h} = \log( g_j^{h} / d_i^{h} )    (4)

where d_i^{cx}, d_i^{cy}, d_i^{w} and d_i^{h} are the centre coordinates, width and height of the matched default box.
The confidence loss is shown in formula (5):

L_{conf}(x, c) = - \sum_{i \in Pos}^{N} x_{ij}^{p} \log( \hat{c}_i^{p} ) - \sum_{i \in Neg} \log( \hat{c}_i^{0} )    (5)

where \hat{c}_i^{p} denotes the confidence of the i-th default box for class p, and i ∈ Pos and i ∈ Neg indicate that default box i belongs to the positive or the negative sample set, respectively; \hat{c}_i^{p} is computed as shown in formula (6):

\hat{c}_i^{p} = \exp( c_i^{p} ) / \sum_{p} \exp( c_i^{p} )    (6)
In the training stage, formula (2) is used as the regression objective for the prediction-box coordinate offsets, so that the prediction boxes come as close as possible to the annotation boxes. During training the two parameters, number of iterations and learning rate, keep the initial default values of the SSD network.
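The combination of the classification loss and the localization loss in formulas (2)-(6) can be illustrated by the following minimal numpy sketch; it is for illustration only (the actual training uses the SSD detection code) and omits details such as hard negative mining.

```python
# Minimal numpy sketch of the training objective in formulas (2)-(6); illustrative only
# (the actual training uses the SSD detection code) and hard negative mining is omitted.
import numpy as np

def smooth_l1(x):
    """Element-wise smooth L1 used in the localization loss of formula (3)."""
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def ssd_loss(cls_scores, loc_preds, cls_targets, loc_targets, alpha=1.0):
    """cls_scores: (num_boxes, num_classes) raw scores; cls_targets: (num_boxes,) class
    indices with 0 = background; loc_*: (num_boxes, 4) encoded (cx, cy, w, h) offsets."""
    # Softmax confidences c_hat, formula (6).
    e = np.exp(cls_scores - cls_scores.max(axis=1, keepdims=True))
    conf = e / e.sum(axis=1, keepdims=True)

    pos = cls_targets > 0
    n = max(pos.sum(), 1)                               # N in formula (2)

    # Confidence loss, formula (5): log-loss of the target class for every box.
    conf_loss = -np.log(conf[np.arange(len(cls_targets)), cls_targets]).sum()

    # Localization loss, formula (3): smooth L1 over the positive boxes only.
    loc_loss = smooth_l1(loc_preds[pos] - loc_targets[pos]).sum()

    # Overall objective, formula (2).
    return (conf_loss + alpha * loc_loss) / n
```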
Step 5 Model Testing
The image of the fruit to be recognized and located is input, and the trained fruit detection model is called for testing. The final output is the class information and position information of the fruit in the input image, and the fruit is marked with a rectangular box.
Compared with the prior art, the present invention has the following advantages. The present invention extracts the features of fruit targets in natural scenes based on a deep convolutional neural network, which overcomes the insufficient representativeness and lack of robustness of manually designed and extracted features when the target scene changes, and achieves effective extraction and representation of the features of fruit to be picked in natural production scenes. Based on the SSD deep detection framework, the present invention converts the recognition and localization of fruit into a regression problem: a position-prediction regression model is built for fruit localization, and the class confidence probability of the prediction boxes is computed for fruit recognition, achieving effective and reliable recognition and localization of fruit to be picked in natural production scenes.
Brief description of the drawings
Fig. 1 shows the SSD deep detection network framework.
Specific embodiment
Automatic recognition and localization of apples to be picked in a natural production scene is taken as an example.
Hardware device:
A. digital camera (Canon EOS 70D)
B. The processing platform is an AMAX PSC-HB1X deep-learning workstation with an Intel(R) E5-2600 v3 processor (2.1 GHz base frequency), 128 GB of memory, a 1 TB hard disk and a GeForce GTX Titan X graphics card. Operating environment: Ubuntu 16.04, Python 2.7.
Specific embodiment is as follows:
Step 1 Image Acquisition
Apple images in a natural growing scene are acquired with the Canon digital camera. During shooting the camera height is close to the standing height of an adult (about 1.8 m) and the lens angle is random; throughout sample shooting the apple targets are kept centred and clear in the acquired images, and the imaging focal length is kept constant.
Step2 image labeling
The apples in the 1000 training-set sample pictures are labelled manually: the minimum bounding rectangle of each real apple is marked, and the coordinates of the two vertices at the upper-left and lower-right corners of each rectangle are recorded.
Step 3 Data Set Preparation
From the 2000 apple pictures acquired in Step 1, 600 are manually selected at random as the validation set and 400 as the test set; the remaining 1000, with the label information added in Step 2, serve as the training set.
Step4 network training
Step 4-1 Network Deployment
The code of the SSD detection framework is deployed on the deep-learning workstation described in (b) (or any computer of sufficiently high performance); the SSD detection network structure is shown in Fig. 1. The SSD structure is based on the VGG16 convolutional network structure with modifications. During training the data pass through conv1_1, conv1_2, conv2_1, conv2_2, conv3_1, conv3_2, conv3_3, conv4_1, conv4_2, conv4_3, conv5_1, conv5_2 and conv5_3; fc6 passes through a 3*3*1024 convolution (fc6 in the original VGG16 is a fully connected layer, here it becomes a convolutional layer, and likewise the fc7 layer that follows), fc7 passes through a 1*1*1024 convolution, followed by conv6_1, conv6_2 (conv8_2 in the figure), conv7_1, conv7_2, conv8_1, conv8_2, conv9_1, conv9_2 and the loss layer. Then, for each of conv4_3 (4), fc7 (6), conv6_2 (6), conv7_2 (6), conv8_2 (4) and conv9_2 (4) (the number in brackets is the number of default-box (default box) types chosen for that layer), two 3*3 convolution kernels are applied; 8732 prediction boxes are finally generated for each class, and the final prediction boxes are output after non-maximum suppression.
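The figure of 8732 prediction boxes follows from the default-box types listed above together with the spatial size of each prediction layer; the minimal sketch below reproduces the count, assuming the standard SSD (300*300 input) feature-map sizes, which are not stated explicitly in the text.

```python
# Minimal sketch reproducing the 8732 prediction boxes, assuming the standard SSD300
# feature-map sizes for the six prediction layers named in the text.
feature_maps = {
    'conv4_3': (38, 4),   # (feature-map side length, default-box types per location)
    'fc7':     (19, 6),
    'conv6_2': (10, 6),
    'conv7_2': (5, 6),
    'conv8_2': (3, 4),
    'conv9_2': (1, 4),
}

total = sum(size * size * boxes for size, boxes in feature_maps.values())
print(total)   # 38*38*4 + 19*19*6 + 10*10*6 + 5*5*6 + 3*3*4 + 1*1*4 = 8732
```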
An Appledetection folder is created under the VOCdevkit directory of the saved SSD code, and three folders are created inside Appledetection: Annotations, ImageSets and JPEGImages. The xml-format training-set annotation files generated in Step 2 are stored under the Annotations directory. The ImageSets directory contains a Main folder holding four txt files: test.txt (the picture names of the test-set samples), train.txt (the picture names of the training-set samples), trainval.txt (the picture names of all samples in the training and validation sets) and val.txt (the picture names of the validation-set samples).
All data pictures are stored under JPEGImages catalogue.
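A minimal sketch of creating this directory layout is given below; the root path is illustrative and assumes the SSD code keeps its data under a VOCdevkit directory as described above.

```python
# Minimal sketch of the dataset layout described above; the root path is illustrative.
import os

root = os.path.expanduser('~/caffe/data/VOCdevkit/Appledetection')
for sub in ('Annotations', 'ImageSets/Main', 'JPEGImages'):
    path = os.path.join(root, sub)
    if not os.path.isdir(path):
        os.makedirs(path)

# The four image-list files kept under ImageSets/Main.
for name in ('test.txt', 'train.txt', 'trainval.txt', 'val.txt'):
    open(os.path.join(root, 'ImageSets', 'Main', name), 'a').close()
```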
Step 4-2 Network Parameter Adjustment
The classes to be discriminated by the network are modified: the class information to be recognized and located is the apple (apple). The prediction-box class confidence threshold of the network is modified to 0.4, the minimum prediction-box size is modified to 16*16 (pixels), and the remaining network hyper-parameters keep their default values.
Step 4-3 Network Training
The pre-trained VGGNet model from the SSD demo is downloaded and stored under caffe/models/VGGNet (if there is no VGGNet folder, create one). The number of test pictures in the code is modified according to the number of test-set samples in Step 3, and the number of classes to be recognized is modified (set to 1 + number of classes, here apple + background = 2). The GPU numbers in the code are modified according to the local hardware environment (if there is only one GPU, gpus="0,1,2,3" is changed to "0", and so on). After the modifications are complete, the training code is run to carry out network training.
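A hedged sketch of these edits is shown below, using the variable names found in the public SSD-Caffe example training script (examples/ssd/ssd_pascal.py); the names and paths may differ in other versions of the code and are shown only to illustrate the changes.

```python
# Hedged sketch of the edits described above, following the variable names of the public
# SSD-Caffe example script (examples/ssd/ssd_pascal.py); names and paths are illustrative.

gpus = "0"                # only one GPU available, so "0,1,2,3" becomes "0"
num_classes = 2           # 1 class to recognize (apple) + background
num_test_image = 400      # number of test-set pictures prepared in Step 3

# Data sources pointing at the Appledetection set prepared in Step 4-1 (illustrative paths).
train_data = "data/VOCdevkit/Appledetection/lmdb/Appledetection_trainval_lmdb"
test_data = "data/VOCdevkit/Appledetection/lmdb/Appledetection_test_lmdb"
```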
In the training process, the feature maps on each convolutional layer are first extracted for all training samples (formula 1); prediction boxes are generated on the extracted feature maps (prediction boxes are generated centred on each feature point of a feature map, at fixed aspect-ratio scales); the class of the target in each prediction box is computed (formula 5); the distance between the prediction boxes and the ground-truth annotation boxes is computed (formula 3); and during training the classification loss and the prediction-box offset loss are combined as the training objective (formula 2). One complete pass of the above computation constitutes one training iteration; training terminates when the number of iterations reaches a predetermined threshold or the loss falls below a predetermined threshold.
Step 5 Model Testing
Before running the test code, the storage path of the label information in the test code, the storage path of the trained detection model, and the storage path of the test pictures are modified; the above paths are set according to the storage addresses of one's own data.
The SSD deep detection network is an end-to-end deep network in which feature extraction and model training are integrated in one network. At test time, the sample image to be tested is input, and the class information (apple) and the localization result (the coordinate information of the minimum-bounding-rectangle prediction box) of the apples in the test image are output; the predicted position boxes of the apples are also visualized on the final test sample.
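A minimal inference sketch is given below; it assumes the standard SSD-Caffe deploy model and its detection_out output layout ([image_id, label, confidence, xmin, ymin, xmax, ymax] with coordinates normalized to [0, 1]), and the file paths and threshold are illustrative.

```python
# Minimal inference sketch, assuming the standard SSD-Caffe deploy model and its
# 'detection_out' layout: [image_id, label, confidence, xmin, ymin, xmax, ymax],
# with coordinates normalized to [0, 1]. File paths are illustrative.
import caffe
import cv2

net = caffe.Net('deploy.prototxt', 'apple_ssd_final.caffemodel', caffe.TEST)

image = cv2.imread('test_apple.jpg')
h, w = image.shape[:2]
blob = cv2.resize(image, (300, 300)).astype('float32')
blob -= (104.0, 117.0, 123.0)                 # per-channel mean used by VGG-based SSD
blob = blob.transpose(2, 0, 1)[None, ...]     # HWC -> NCHW

net.blobs['data'].reshape(*blob.shape)
net.blobs['data'].data[...] = blob
detections = net.forward()['detection_out'][0, 0]

for _, label, conf, xmin, ymin, xmax, ymax in detections:
    if conf >= 0.4:                           # confidence threshold set in Step 4-2
        cv2.rectangle(image, (int(xmin * w), int(ymin * h)),
                      (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)
cv2.imwrite('result.jpg', image)
```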
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection claimed by the present invention.

Claims (1)

1. A method for recognizing and locating fruit to be picked based on deep object detection, characterized by comprising the following steps:
Step 1 Image Acquisition
Image acquisition is based on ordinary RGB images, and the acquisition targets are fruits to be picked in their actual natural growing environment; images are captured with an ordinary single-lens reflex camera, the imaging simulates the viewpoint of manual picking, the camera height is kept roughly level with the fruit target, and the fruit target is kept at the centre of the image;
Step 2 Image Labeling
The raw data acquired in step 1 are labelled manually; each fruit to be picked in an image is marked by hand with its minimum bounding rectangle, the class of the target fruit is recorded for each box, and the vertex coordinates of the upper-left and lower-right corners of the minimum bounding rectangle in the picture are recorded; the format of the annotation records is identical to the annotation format of the widely used ImageNet image data set;
Step 3 Data Set Preparation
The original images from step 1 together with the corresponding label data from step 2 form one trainable sample each, and all trainable samples form the original data set; the original data are manually and randomly divided into 3 subsets in the ratio 5:3:2, which serve respectively as the training set, validation set and test set of the subsequent detection model; the training set, validation set and test set are all drawn from the original data set, and the three subsets are independently distributed over the image space and do not overlap;
Step 4 Network Training
For the training set divided in step 3, a deep convolutional neural network (Convolutional Neural Network, CNN) is first used to extract the fruit features in the images; VGG16 is used as the feature-extraction convolutional network, and the VGG16 network comprises 13 convolution (Conv) layers, 13 activation (ReLU) layers and 4 pooling layers; border padding is designed so that the image border is zero-padded during the convolution computation, which ensures that the input and output matrices of each Conv layer are the same size; the original input image is convolved with kernels of size 3*3 to extract feature maps (Feature map), and the convolution computation is given by formula (1):

x_j^l = F\left( \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l \right)    (1)

where x_j^l denotes the j-th feature map of the l-th convolutional layer, x_i^{l-1} denotes the i-th feature map of the (l-1)-th convolutional layer, F(·) denotes the activation function, M_j denotes the set of input feature maps, i ∈ M_j indexes the input feature maps, k_{ij}^l is the convolution kernel and b_j^l is the bias term;
The SSD (single shot detector) detection network, built on a convolutional neural network (Convolutional Neural Network, CNN), is selected to train the final recognition-and-localization model for the fruit to be picked; the detection network constructed by the SSD algorithm uses the fully convolutional VGG16 as its base network to extract the target features, i.e. the feature extraction of step 4 is merged into the detection network as one of its parts; the SSD object-detection framework generates bounding boxes of fixed sizes on multiple feature maps, together with the confidence of each class within the boxes; the overall objective loss function of model training is given by formula (2):

L(x, c, l, g) = \frac{1}{N} \left( L_{conf}(x, c) + \alpha L_{loc}(x, l, g) \right)    (2)

in which x indicates the class matching of the targets in the prediction boxes, c is the Softmax confidence for each class, N is the number of matched default boxes, and the weight term α is set to 1 by cross-validation; the localization loss L_{loc} is the smooth L1 loss between the prediction box l and the ground-truth box g, with m ranging over {cx, cy, w, h}, as shown in formula (3):

L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{k} \, smooth_{L1}\left( l_i^m - \hat{g}_j^m \right)    (3)

during training an offset regression is performed between the generated prediction boxes and the ground-truth label boxes, where box = {cx, cy, w, h} denotes the centre coordinates and the width and height of a prediction box; the translations (cx, cy) and the scale factors (w, h) are used to obtain the regression targets \hat{g} that approximate the ground-truth labels, l^{cx} and l^{cy} denote the coordinate values of the prediction box, and l^{w} and l^{h} denote its width and height, as shown in formula (4):

\hat{g}_j^{cx} = (g_j^{cx} - d_i^{cx}) / d_i^{w}, \quad \hat{g}_j^{cy} = (g_j^{cy} - d_i^{cy}) / d_i^{h}, \quad \hat{g}_j^{w} = \log( g_j^{w} / d_i^{w} ), \quad \hat{g}_j^{h} = \log( g_j^{h} / d_i^{h} )    (4)

the confidence loss is shown in formula (5):

L_{conf}(x, c) = - \sum_{i \in Pos}^{N} x_{ij}^{p} \log( \hat{c}_i^{p} ) - \sum_{i \in Neg} \log( \hat{c}_i^{0} )    (5)

where \hat{c}_i^{p} denotes the confidence of the i-th default box for class p, i ∈ Pos and i ∈ Neg indicate that default box i belongs to the positive or the negative sample set respectively, and \hat{c}_i^{p} is computed as shown in formula (6):

\hat{c}_i^{p} = \exp( c_i^{p} ) / \sum_{p} \exp( c_i^{p} )    (6)

in the training stage formula (2) is used as the regression objective for the prediction-box coordinate offsets, so that the prediction boxes come as close as possible to the annotation boxes; during training the two parameters, number of iterations and learning rate, keep the initial default values of the SSD network;
Step 5 Model Testing
The image of the fruit to be recognized and located is input, and the trained fruit detection model is called for testing; the final output is the class information and position information of the fruit in the input image, and the fruit is marked with a rectangular box.
CN201811249496.0A 2018-10-25 2018-10-25 Method for recognizing and locating fruit to be picked based on deep object detection Pending CN109409365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811249496.0A CN109409365A (en) 2018-10-25 2018-10-25 Method for recognizing and locating fruit to be picked based on deep object detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811249496.0A CN109409365A (en) 2018-10-25 2018-10-25 Method for recognizing and locating fruit to be picked based on deep object detection

Publications (1)

Publication Number Publication Date
CN109409365A true CN109409365A (en) 2019-03-01

Family

ID=65469223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811249496.0A Pending CN109409365A (en) 2018-10-25 2018-10-25 Method for recognizing and locating fruit to be picked based on deep object detection

Country Status (1)

Country Link
CN (1) CN109409365A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451602A (en) * 2017-07-06 2017-12-08 浙江工业大学 A kind of fruits and vegetables detection method based on deep learning
CN107423760A (en) * 2017-07-21 2017-12-01 西安电子科技大学 Based on pre-segmentation and the deep learning object detection method returned

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周俊宇 et al., "卷积神经网络在图像分类和目标检测应用综述" [Survey of the application of convolutional neural networks in image classification and object detection], 《计算机工程与应用》 (Computer Engineering and Applications) *
彭红星 et al., "自然环境下多类水果采摘目标识别的通用改进SSD模型" [General improved SSD model for recognizing multiple classes of fruit-picking targets in natural environments], 《农业工程学报》 (Transactions of the Chinese Society of Agricultural Engineering) *

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858569A (en) * 2019-03-07 2019-06-07 中国科学院自动化研究所 Multi-tag object detecting method, system, device based on target detection network
CN109919930A (en) * 2019-03-07 2019-06-21 浙江大学 The statistical method of fruit number on tree based on convolutional neural networks YOLO V3
CN110009628A (en) * 2019-04-12 2019-07-12 南京大学 A kind of automatic testing method for polymorphic target in continuous two dimensional image
CN110070530A (en) * 2019-04-19 2019-07-30 山东大学 A kind of powerline ice-covering detection method based on deep neural network
CN110223349A (en) * 2019-05-05 2019-09-10 华南农业大学 A kind of picking independent positioning method
CN110569703A (en) * 2019-05-10 2019-12-13 阿里巴巴集团控股有限公司 computer-implemented method and device for identifying damage from picture
US10885625B2 (en) 2019-05-10 2021-01-05 Advanced New Technologies Co., Ltd. Recognizing damage through image analysis
CN110223352A (en) * 2019-06-14 2019-09-10 浙江明峰智能医疗科技有限公司 A kind of medical image scanning automatic positioning method based on deep learning
CN110223352B (en) * 2019-06-14 2021-07-02 浙江明峰智能医疗科技有限公司 Medical image scanning automatic positioning method based on deep learning
CN110363122A (en) * 2019-07-03 2019-10-22 昆明理工大学 A kind of cross-domain object detection method based on multilayer feature alignment
CN110363122B (en) * 2019-07-03 2022-10-11 昆明理工大学 Cross-domain target detection method based on multi-layer feature alignment
CN110472529A (en) * 2019-07-29 2019-11-19 深圳大学 Target identification navigation methods and systems
CN110796135A (en) * 2019-09-20 2020-02-14 平安科技(深圳)有限公司 Target positioning method and device, computer equipment and computer storage medium
CN110689081A (en) * 2019-09-30 2020-01-14 中国科学院大学 Weak supervision target classification and positioning method based on bifurcation learning
CN110717534A (en) * 2019-09-30 2020-01-21 中国科学院大学 Target classification and positioning method based on network supervision
CN110717534B (en) * 2019-09-30 2020-09-15 中国科学院大学 Target classification and positioning method based on network supervision
CN110648366A (en) * 2019-10-14 2020-01-03 阿丘机器人科技(苏州)有限公司 Orange detection system and orange sectioning method based on deep learning
CN111626279B (en) * 2019-10-15 2023-06-02 西安网算数据科技有限公司 Negative sample labeling training method and highly-automatic bill identification method
CN110736709A (en) * 2019-10-26 2020-01-31 苏州大学 blueberry maturity nondestructive testing method based on deep convolutional neural network
CN110889464A (en) * 2019-12-10 2020-03-17 北京市商汤科技开发有限公司 Neural network training method and device and target object detection method and device
CN110889464B (en) * 2019-12-10 2021-09-14 北京市商汤科技开发有限公司 Neural network training method for detecting target object, and target object detection method and device
CN111222480A (en) * 2020-01-13 2020-06-02 佛山科学技术学院 Grape weight online estimation method and detection device based on deep learning
CN111222480B (en) * 2020-01-13 2023-05-26 佛山科学技术学院 Online grape weight estimation method and detection device based on deep learning
CN111275906A (en) * 2020-02-17 2020-06-12 重庆邮电大学 Intelligent weighing fruit identification system based on SSD algorithm
CN111311683B (en) * 2020-03-26 2023-08-15 深圳市商汤科技有限公司 Method and device for detecting pick-up point of object, robot, equipment and medium
CN111311683A (en) * 2020-03-26 2020-06-19 深圳市商汤科技有限公司 Method and device for detecting picked point of object, robot, equipment and medium
CN111582345A (en) * 2020-04-29 2020-08-25 中国科学院重庆绿色智能技术研究院 Target identification method for complex environment under small sample
CN111738070A (en) * 2020-05-14 2020-10-02 华南理工大学 Automatic accurate detection method for multiple small targets
CN111767962A (en) * 2020-07-03 2020-10-13 中国科学院自动化研究所 One-stage target detection method, system and device based on generation countermeasure network
CN111767962B (en) * 2020-07-03 2022-11-08 中国科学院自动化研究所 One-stage target detection method, system and device based on generation countermeasure network
CN112149501A (en) * 2020-08-19 2020-12-29 北京豆牛网络科技有限公司 Method and device for identifying packaged fruits and vegetables, electronic equipment and computer readable medium
CN112136505A (en) * 2020-09-07 2020-12-29 华南农业大学 Fruit picking sequence planning method based on visual attention selection mechanism
CN112115885A (en) * 2020-09-22 2020-12-22 中国农业科学院农业信息研究所 Fruit tree bearing branch shearing point positioning method for picking based on deep convolutional neural network
CN112115885B (en) * 2020-09-22 2023-08-11 中国农业科学院农业信息研究所 Fruit tree fruiting branch shearing point positioning method based on deep convolutional neural network
CN112163517A (en) * 2020-09-27 2021-01-01 广东海洋大学 Underwater imaging fish net damage identification method and system based on deep learning
CN112712128A (en) * 2021-01-11 2021-04-27 中南民族大学 Intelligent picking method, equipment, storage medium and device based on neural network
CN112819796A (en) * 2021-02-05 2021-05-18 杭州天宸建筑科技有限公司 Tobacco shred foreign matter identification method and equipment
CN113076819A (en) * 2021-03-17 2021-07-06 山东师范大学 Fruit identification method and device under homochromatic background and fruit picking robot
CN112926495A (en) * 2021-03-19 2021-06-08 高新兴科技集团股份有限公司 Vehicle detection method based on multistage convolution characteristic cascade
CN114155453A (en) * 2022-02-10 2022-03-08 深圳爱莫科技有限公司 Training method for ice chest commodity image recognition, model and occupancy calculation method
CN114494441A (en) * 2022-04-01 2022-05-13 广东机电职业技术学院 Grape and picking point synchronous identification and positioning method and device based on deep learning
CN114494441B (en) * 2022-04-01 2022-06-17 广东机电职业技术学院 Grape and picking point synchronous identification and positioning method and device based on deep learning
CN115035192A (en) * 2022-06-21 2022-09-09 北京远舢智能科技有限公司 Method and device for determining positions of tobacco leaf distributing vehicle and conveying belt
CN115035192B (en) * 2022-06-21 2023-04-14 北京远舢智能科技有限公司 Method and device for determining positions of tobacco leaf distributing vehicle and conveying belt

Similar Documents

Publication Publication Date Title
CN109409365A (en) Method for recognizing and locating fruit to be picked based on deep object detection
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN109785337B (en) In-column mammal counting method based on an instance segmentation algorithm
CN113128405A (en) Plant identification and model construction method combining semantic segmentation and point cloud processing
CN110310259A (en) Wood knot flaw detection method based on an improved YOLOv3 algorithm
CN110472575B (en) Method for detecting ripeness of tomatoes stringing together based on deep learning and computer vision
CN109325504A (en) Underwater sea cucumber recognition method and system
CN109658412A (en) Rapid recognition and segmentation method for packing cases oriented to de-stacking and sorting
CN110569747A (en) method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
CN109685045A (en) Moving target tracking method and system based on video streams
CN114387520A (en) Precision detection method and system for intensive plums picked by robot
Rong et al. Pest identification and counting of yellow plate in field based on improved mask r-cnn
CN114140665A (en) Dense small target detection method based on improved YOLOv5
CN110517228A (en) Trunk image rapid detection method based on convolutional neural networks and transfer learning
CN111027538A (en) Container detection method based on instance segmentation model
CN113822844A (en) Unmanned aerial vehicle inspection defect detection method and device for blades of wind turbine generator system and storage medium
CN109816634A (en) Detection method, model training method, device and equipment
CN109615610A (en) Medical band-aid flaw detection method based on YOLO v2-tiny
CN112966698A (en) Freshwater fish image real-time identification method based on lightweight convolutional network
CN113191334A (en) Plant canopy dense leaf counting method based on improved CenterNet
CN116740758A (en) Bird image recognition method and system for preventing misjudgment
CN116597438A (en) Improved fruit identification method and system based on Yolov5
CN115995017A (en) Fruit identification and positioning method, device and medium
CN115619719A (en) Pine wood nematode infected wood detection method based on improved Yolo v3 network model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190301

WD01 Invention patent application deemed withdrawn after publication