WO2017178666A1 - Autonomous set of devices and method for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals - Google Patents
- Publication number
- WO2017178666A1 (application PCT/ES2016/070655, ES2016070655W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- plant species
- detection
- identification
- crop
- agricultural crop
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01C—PLANTING; SOWING; FERTILISING
- A01C21/00—Methods of fertilising, sowing or planting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Definitions
- the present invention relates to application technologies in the agro-industrial field. In particular, it is an autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals.
- the set consists of multiple cameras that must be arranged on the wing or boom arm of, for example, a spraying machine; a device for the detection and identification of plant species; an electronic circuit responsible for managing the opening and closing of the agrochemical sprinkler nozzles; and an ultrasound sensor for each camera in the set.
- the processing device is able to reliably detect, segment and identify the different plant species found in the processed image scene.
- the device sends a signal to the electronic circuit responsible for managing the opening and closing of the different solenoid valves of the agrochemical sprinkler nozzles, opening a specific nozzle for a predetermined period of time so that a defined dose of agrochemical falls over the desired plant.
- the processing device is able to diagnose the agrochemical to use based on a correspondence table comprising the different plant species, the specific agrochemical for treating each one, and the recommended dose to use.
- Artificial vision is a subfield of artificial intelligence whose purpose is to program a computer to "understand" the characteristics of an image.
- the typical objectives of artificial vision include: detection, segmentation, location and recognition of certain objects in images; evaluation of results such as segmentation and registration; registration of different images of the same scene or object, that is, matching the same object across different images; tracking an object in a sequence of images; mapping a scene to generate a three-dimensional model of it, which could be used by a robot to navigate the scene; estimation of the three-dimensional positions of humans; and search of digital images by content.
- continuous-signal images are reproduced by analog electronic devices that record image data accurately using several methods, such as a sequence of fluctuations of an electrical signal or changes in the chemical nature of a film emulsion, which vary continuously across different aspects of the image.
- a continuous-signal or analog image must first be converted to a digital format understandable by the computer. This process applies to all images regardless of their origin, their complexity, and whether they are black and white, grayscale, or full color.
- a digital image is composed of a rectangular or square matrix of pixels representing a series of intensity values ordered in an (x, y) coordinate system.
- This type of network is a variation of the multilayer perceptron, but its operation makes it much more effective for artificial vision tasks, especially image classification. A perceptron is understood as an artificial neuron, the basic unit of inference in the form of a linear discriminator: an algorithm capable of generating a criterion to select a subgroup from a larger group.
- multithreading allows multiple threads to execute efficiently at the same time on the same GPU, processing several algorithms concurrently and thus exploiting the full potential of the processor in a shorter time, sharing logical and/or physical system resources as needed.
- convolutional neural networks consist of multiple layers with different purposes. At the beginning is the feature extraction phase, composed of convolutional and downsampling neurons. At the end of the network are simple perceptron neurons that perform the final classification on the extracted features.
- the feature extraction phase resembles the stimulation process in the cells of the visual cortex. This phase consists of alternating layers of convolutional neurons and downsampling neurons. As the data progresses through this phase, its dimensionality is reduced, so neurons in deeper layers are much less sensitive to perturbations in the input data, while at the same time being activated by increasingly complex features.
- the simple neurons of a perceptron are replaced by matrix processors that perform an operation on the 2D image data passing through them, instead of on a single numerical value.
- the convolution operator has the effect of filtering the input image with a previously trained kernel. This transforms the data in such a way that certain characteristics, determined by the shape of the kernel, become more dominant in the output image, receiving a higher numerical value in the pixels that represent them.
- these kernels have specific image processing abilities, such as edge detection, which can be performed with kernels that highlight a gradient in a particular direction.
- the kernels trained by a convolutional neural network are generally more complex, in order to extract other more abstract and non-trivial features.
- neural networks have some tolerance to small perturbations in the input data. For example, if two almost identical images, differing only by a lateral shift of a few pixels, are analyzed with a neural network, the result should be essentially the same. This is achieved, in part, by the downsampling that occurs within a convolutional neural network: by reducing the resolution, the same features correspond to a larger activation field in the input image.
- convolutional neural networks use a subsampling process to carry out this operation.
- other operations, such as max-pooling, are much more effective at summarizing features over a region.
- this type of operation is similar to how the visual cortex summarizes information internally.
- the max-pooling operation finds the maximum value within a sample window and passes this value on as a summary of the features of that area. As a result, the size of the data is reduced by a factor equal to the size of the sample window being operated on.
- after one or more feature extraction phases, the data finally reaches the classification phase. By then, the data has been distilled down to a series of features unique to the input image, and it is now the work of this last phase to classify those features into one label or another, depending on the training objectives.
- convolutional neural networks are being used for image recognition and classification.
- in a recognition process using a classifier based on a convolutional neural network, an image is fed to the network and, after several repetitions of convolution, max-pooling and fully connected operations, an accurate classification of the image and a confidence level for the result are extracted as the recognition output.
- object tracking is a process that allows estimating over time the location of one or more moving objects using a camera.
- the rapid improvements in the quality and resolution of image sensors, together with the impressive increase in computing power achieved in the last decade, have favored the creation of new object tracking algorithms and applications.
- object tracking can be a slow process due to the large amount of data contained in a video, whose complexity can increase further given the possible need to use object recognition techniques for tracking.
- video cameras capture information about objects of interest in the form of a set of pixels.
- an object tracker estimates the location of this object over time.
- the relationship between an object and the projection of its image is very complex and may depend on more factors than just the object's position, which makes object tracking a difficult goal to achieve.
- the main challenges to take into account in the design of an object tracker are related to the similarity in appearance between the object of interest and other objects in the scene, as well as the variation in appearance of the object itself. Since the appearance of both other objects and the background can be similar to that of the object of interest, they may interfere with its observation. In that case, the features extracted from those unwanted areas can be difficult to distinguish from those the object of interest is expected to generate. This phenomenon is known as "clutter".
- in a tracking scenario, an object can be defined as anything of interest for later analysis.
- objects can be represented through their shapes and appearances, generally: points, primitive geometric shapes, object silhouette and contour, articulated shape models, and skeletal models.
- the most desired visual feature is uniqueness, because it allows objects to be easily distinguished in feature space.
- the most common features are the following: color, edges, optical flow, and texture.
- each tracking method requires an object detection mechanism, either in each frame or when the object first appears in the video.
- a common method for object detection is the use of single-frame information.
- some object detection methods make use of temporal information computed from a sequence of images to reduce the number of false detections. This temporal information is generally computed using the "frame differencing" technique, which shows the regions that change between consecutive frames. Once the object regions in the image are obtained, it is then the task of the tracker to perform object correspondence from one frame to the next to generate the track.
- the most popular methods in the context of object tracking are: point detectors, background subtraction, and segmentation.
- point detectors are used to find points of interest in images that have an expressive texture at their respective locations. Points of interest have long been used in the context of motion and tracking problems. A desirable feature of points of interest is their invariance to changes in illumination and in camera viewpoint.
- object detection can be achieved by building a representation of the scene called a background model and then finding deviations from the model for each incoming frame. Any significant change in an image region relative to the background model represents a moving object. The pixels that make up the changing regions are marked for later processing. In general, a connected-component algorithm is applied to obtain connected regions corresponding to the objects. This process is known as background subtraction.
- image segmentation algorithms divide the image into perceptually similar regions.
- every segmentation algorithm addresses two problems: the criteria for a good partition and the method for computing the partition efficiently.
- 2D motion models are simple, but less realistic. As a consequence, 3D segmentation systems are the most used in practice. Within three-dimensional methods, two different algorithms can be distinguished: structure from motion (SFM) and parametric algorithms.
- SFM generally handles 3D scenes that contain relevant depth information, while parametric methods do not assume this depth. Another important difference between the two algorithms is that SFM assumes rigid motion, while parametric algorithms only assume rigidity of motion in parts of the scene.
- object tracking is a very important task within the field of video processing. The main objective of object tracking techniques is to generate the trajectory of an object through time, positioning it within the image. The techniques can be classified into three large groups: point tracking, kernel tracking, and silhouette tracking.
- in point tracking techniques, the objects detected in consecutive images are each represented by one or several points, and their association is based on the state of the object in the previous image, which may include position and motion.
- an external mechanism is required to detect the objects in each frame. This technique may present problems in scenarios where the object undergoes occlusions, or during its entrances and exits.
- point tracking techniques can also be classified into two broad categories: deterministic and statistical.
- kernel tracking techniques compute the motion of the object, which is represented by an initial region, from one image to the next.
- the motion of the object is generally expressed in the form of a parametric motion (translation, rotation, affine, etc.) or through the flow field computed in the following frames.
- silhouette tracking techniques are performed by estimating the object region in each image using the information it contains.
- this information can be in the form of appearance density or shape models, which are generally presented as edge maps. There are two methods: shape matching and contour tracking.
- tracking objects of interest in video is the basis of many applications, ranging from video production to remote surveillance, and from robotics to interactive games.
- object trackers are used to improve the understanding of certain video data sets in medical and security applications; to increase productivity by reducing the amount of labor necessary to complete a task; and to enable natural interaction with machines.
- optical flow is the pattern of apparent motion of objects, surfaces and edges in a scene caused by the relative motion between an observer's eye or a camera and the scene.
- a second, more refined definition defines the term "affordances" as the action possibilities that a user is aware of being able to perform.
- applications of optical flow, such as motion detection, object segmentation, time-to-collision and focus-of-expansion calculation, motion-compensated coding, and stereoscopic disparity measurement, use this motion of the surfaces and edges of objects.
- US Patent 6038337 A refers to a hybrid neural network system for object recognition that exhibits local image sampling, a self-organizing map neural network, and a hybrid convolutional neural network.
- the self-organizing map provides quantization of the image samples into a topological space where inputs that are close in the original space are also close in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample; the hybrid convolutional neural network provides partial invariance to translation, rotation, scale, and deformation.
- the hybrid convolutional network extracts successively larger features in a set of hierarchical layers. Alternative embodiments are described using the Karhunen-Loève transform in place of the self-organizing map, and a multilayer perceptron in place of the convolutional network.
- an autonomous vehicle carries the chemical application device and is, in part, controlled by the processing requirements of the device's vision component, which is responsible for detecting and assigning lists of targets to the chemical ejectors that aim at these target points as the device moves through the field or a natural environment.
- the device of patent application US20150245565A1 is only able to detect the presence of a plant, but is unable to identify which plant species it is. It can only distinguish two absolutely different characteristics, such as soil versus plants, and determine only with a certain probability whether or not something is a crop.
- this system is unable to distinguish which plant species is present in order to apply the specific herbicide and thus eliminate that species.
- it is not a system that works on all kinds of terrain without a given pattern to maintain its trajectory; it cannot identify the species in question, and it cannot make an intelligent application of the necessary agrochemical, with the great cost savings in agrochemicals and unbeatable efficiency in weed management that this would bring.
- the purpose of this invention is an autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals, said set comprising: a chemical application device comprising at least one agrochemical container linked in fluid communication with a plurality of sprinkler nozzles through a valve; a plurality of cameras arranged on the autonomous vehicle and focused on the crop, where each camera has an associated ultrasound sensor for measuring the height to the crop in real time, and where each camera is tilted forward at 45 degrees from the normal; a plant species detection and identification device connected to the cameras to receive video information from them; and an electronic circuit in charge of managing the opening and closing of the valves of the agrochemical sprinkler nozzles, connected to the detection device, which manages through said circuit the opening and closing of the valves of the sprinkler nozzles; and where the set of devices is mounted on a transport vehicle.
- the transport vehicle is a self-propelled vehicle or a towed vehicle.
- the self-propelled vehicle is a fumigation vehicle with side arms arranged perpendicular to it (a "mosquito" sprayer).
- the detection and identification device consists of a processor.
- the processor comprises a tool based on computer software developed in the C++ language, an artificial vision framework, and a convolutional neural network framework.
- step a) of the method allows the crop to be distinguished from weeds.
- step a) of the method allows plant species to be identified in order to determine the agrochemical to apply.
- in step c) of the method, the dose of agrochemical is sprayed by opening a solenoid valve.
- the plant species correspond to the crop and the weeds.
- the agrochemical is a herbicide, a foliar fertilizer, an insecticide, a fungicide, or a protective compound.
- step b) also allows the state of the crop to be determined.
- step c) of the method comprises selecting a specific herbicide from a set of herbicides for each weed identified in step a) with respect to the crop.
- step c) of the method comprises selecting a specific foliar fertilizer from a set of foliar fertilizers for the crop identified in step a), according to its state.
- step c) of the method comprises selecting a specific insecticide from a set of insecticides for the crop identified in step a), according to its state of deterioration.
- step c) of the method comprises selecting a specific fungicide from a set of fungicides for the crop identified in step a), according to its state of deterioration.
- step c) of the method comprises selecting a specific protective compound from a set of protective compounds for the crop identified in step a), according to its state.
- the method for the detection and identification of plant species in an agricultural crop comprises the steps of: a) obtaining a real-time video stream from a plurality of cameras positioned along the wings of the autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals; b) processing each of the frames obtained; c) converting the frame to a numerical matrix with the RGB (Red-Green-Blue) color representation of each pixel in the image; d) trimming the matrix to select the area of the frame to be processed; e) assigning an area of the image to the corresponding sprinklers, so that if weeds are detected in that area, the opening order is sent to the corresponding sprinkler; f) applying 4 filters to obtain a mask of the predominant colors of the plant species to be identified; g) identifying the contours of the image on the color mask, saving the position information of each one; h) estimating the travel speed from the positions of the contours found in the current frame and the positions of those same contours in the previous frame.
- step f) of the above method separates foreign elements such as earth, dry plant residue and stones from the plant species, where: a first filter transforms the matrix to the YCbCr color format; a second filter subtracts two channels in the RGB format depending on the color to filter; a third filter is a logical (bitwise) AND operation between the results of the first and second filters; and a fourth filter applies a Gaussian blur to the previous result, converting the image to black and white and removing the noise.
- FIG. 1a schematically represents the way the data passes through different types of tests in order to make a decision in a three-layer network.
- FIG. 1b schematically represents the way in which the input layers of the network contain neurons that encode the input pixel values.
- FIG. 1c schematically represents a possible architecture, with rectangles denoting the subnetworks, in order to show how convolutional neural networks work.
- FIG. 2a schematically represents a rapid detection for classifying plants that allows the crop to be distinguished from weeds.
- FIG. 2b schematically represents the area and perimeter analysis performed by the system once weeds are detected.
- FIG. 2c schematically represents the precise and accurate spraying of weeds that the system performs once the plant species and its size are detected.
- FIG. 3 represents a frame of a real-time video stream obtained from a camera.
- FIG. 4 represents a table of equivalences between the number of shutter actuations per second and the speed of the moving vehicle that carries the cameras.
- FIG. 5 represents the frame of FIG. 3 converted to a numerical matrix with the RGB (Red-Green-Blue) color representation of each image pixel.
- FIG. 6 represents an ideal size for the central horizontal strip of the image to be processed from the frame of [FIG. 3].
- FIG. 7a represents the transformation performed by a first filter on the matrix in the YCbCr color format.
- FIG. 7b represents the transformation performed by a second filter by subtracting two channels in the RGB format depending on the color to filter.
- FIG. 7c represents the logical (bitwise) AND transformation between the results of the two previous filters, as shown in [FIG. 7a] and [FIG. 7b], performed by a third filter.
- FIG. 7d1 represents the transformation of a fourth filter that applies a Gaussian blur to the previous result of [FIG. 7c].
- FIG. 7d2 represents the conversion of the image from [FIG. 7d1] to black and white.
- FIG. 7d3 represents the removal of the noise from [FIG. 7d2] corresponding to the scattered white points.
- FIG. 8 represents the identification of the contours of the image on the color mask.
- FIG. 9 represents the image cropped into small squares of approximately the same size containing the contours of [FIG. 8].
- FIG. 10 represents one of the squares cropped from [FIG. 9], with the image resized to a preferred size of 256 x 256 pixels.
- FIG. 11a represents another of the squares cut out of [FIG. 9], corresponding to a weed present in the crop.
- FIG. 11b represents a sequence where the square of [FIG. 11a], a 256 x 256 pixel image, is sent to the first layer or input layer of the previously trained convolutional neural network for analysis and categorization until it reaches a last layer.
- FIG. 12 represents a result, as the average success value for each of the categories, obtained from the last layer or output layer of the convolutional neural network according to the sequence of [FIG. 11a].
- FIG. 13 is a complete representation of the processed frame of the main video stream according to the sequence of [FIG. 11a], where the identified unwanted plant species are framed in red in order to apply the necessary agrochemical to each one of the weeds.
- FIG. 14 represents the AlexNet model, which consists of 5 convolutional layers, according to the architecture chosen for training the Caffe network.
- FIG. 15 schematically represents a preferred embodiment of the autonomous set of devices for the detection and identification of plant species according to the present invention, showing how boards with a microcontroller and an IDE (Integrated Development Environment), with analog and digital inputs and outputs, are connected to the CPU (Central Processing Unit) through a USB (Universal Serial Bus) port.
- FIG. 16 represents a detail of the end of a side arm of a spraying unit, where the cameras can be observed tilted forward at an angle of approximately 45 degrees with respect to the lower vertical axis, together with a plurality of associated sprinklers.
- FIG. 17 schematically represents a camera tilted forward at an angle θ of approximately 45 degrees from the lower vertical axis, installed on a side arm of a spraying unit, showing the image and approximate size of the scene to be processed with that inclination.
- the present invention mainly consists of an autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals.
- the set consists of multiple cameras that must be arranged on the wing or boom arm of, for example, a spraying machine; a device for the detection and identification of plant species; an electronic circuit responsible for managing the opening and closing of the agrochemical sprinkler nozzles; and an ultrasound sensor for each camera in the set.
- an autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals, said set comprising: a chemical application device comprising at least one agrochemical container linked in fluid communication with a plurality of sprinkler nozzles through a valve; a plurality of cameras arranged on the autonomous vehicle and focused on the crop, where each camera has an associated ultrasound sensor for measuring the height to the crop in real time, and where each camera is tilted forward at 45 degrees from the normal; a plant species detection and identification device connected to the cameras to receive video information from them; and an electronic circuit in charge of managing the opening and closing of the valves of the agrochemical sprinkler nozzles, connected to the detection device, which manages through said circuit the opening and closing of the valves of the sprinkler nozzles; and where the set of devices is mounted on a transport vehicle.
- [FIG. 15], [FIG. 16] and [FIG. 17] show diagrams of the autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals, according to a preferred embodiment of the present invention.
- the transport vehicle is a self-propelled vehicle or a towed vehicle.
- the self-propelled vehicle is a fumigation vehicle with side arms arranged perpendicular to it (a "mosquito" sprayer).
- the detection and identification device consists of a processor that comprises a tool based on computer software developed in the C++ language, using the OpenCV artificial vision framework and the "Caffe" convolutional neural network framework by the Berkeley Vision and Learning Center. By using this tool, recognition of the different plant species can be achieved with 96% effectiveness.
- the network is trained to perform a certain type of processing. Once an adequate training level is reached, it moves to the operation phase, where the network is used to carry out the task for which it was trained.
- during training, the convolutional neural network is supplied with a data set of a minimum of 50,000 photographs of the different plant species to be identified, taking into account the specific region of the planet and the predominant species in that place. These photographs are loaded through different folders or directories that represent the category to which each one belongs. The photographs are supplied, for example, in JPEG format at a minimum size of 80 x 80 pixels, preferably at a recommended size of 256 x 256 pixels, and include, for each species, different situations of the seedling, namely loose leaves, partial leaves, whole plant, flowers, plant in context, etc.
- the architecture chosen for training the Caffe network is the AlexNet model, consisting of 5 convolutional layers [see FIG. 14].
- once the learning or training phase is finished, the network generates a "deploy.prototxt" file, which is basically the learning model; in this way it can already be used to perform the task for which it was trained.
- One of the main advantages of this model is that the network learns the relationship between the data, acquiring the ability to generalize concepts. In this way, a convolutional neural network can operate with information that was not presented during the training phase.
- the convolutional-neural-network-based classifier comprises: a plurality of feature mapping layers, at least one feature map in at least one of the feature mapping layers being divided into a plurality of regions; and a plurality of convolutional templates corresponding respectively to the plurality of regions, each convolutional template being used to obtain the response value of a neuron in the corresponding region.
- FIG. 1a is an example illustrating how data passes through different types of tests in order to make a decision in a three-layer network.
- FIG.1b is an example that illustrates how network input layers contain neurons that encode the values of the input pixels.
- FIG. 1c is an example illustrating a possible architecture, with rectangles denoting the subnets. This is not meant to be a realistic approach to the problem of detecting and identifying plant species; it is given only by way of example, to aid understanding of how convolutional neural networks work.
- the set of devices makes it possible to determine the state of the crop; the agrochemical being a herbicide, a foliar fertilizer, an insecticide, a fungicide, or a protective compound, the appropriate agrochemical can be applied according to each circumstance.
- the method makes it possible to select a specific herbicide from a set of herbicides for each weed identified with respect to the crop; or a specific foliar fertilizer from a set of foliar fertilizers for the identified crop according to its state; or a specific insecticide from a set of insecticides for the identified crop according to its state of deterioration; or a specific fungicide from a set of fungicides for the identified crop according to its state of deterioration; and/or a specific protective compound from a set of protective compounds for the identified crop according to its condition.
- the step-by-step process flow for the detection and identification of the plant varieties of interest is as follows: 1) A real-time video stream [FIG. 3] is obtained from one or several cameras positioned along the wings or arms of, for example, a spraying machine. This stage operates at 60 frames (shutter actuations) per second [FIG. 4]; the number of frames per second depends on the speed of the moving vehicle. 2) Each of the frames obtained is processed. 3) The frame is converted to a numerical matrix with the RGB (Red-Green-Blue) color representation of each pixel in the image [FIG. 5]. Each pixel has red, green and blue components.
- Each of these components has a range of 0 to 255, which gives a total of 256³ (approximately 16.7 million) different possible colors.
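As a quick check of this arithmetic (a sketch, not part of the claimed device), each component contributes 256 values and a pixel can be packed into 24 bits:

```cpp
#include <cstdint>

// Each pixel component (R, G, B) ranges over 0..255, i.e. 256 values, so
// the number of representable colors is 256^3 = 16,777,216.
std::uint32_t rgb_color_count() {
    return 256u * 256u * 256u;
}

// Pack one pixel's components into a single 24-bit value, one of the
// 16,777,216 possibilities.
std::uint32_t pack_rgb(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    return (static_cast<std::uint32_t>(r) << 16) |
           (static_cast<std::uint32_t>(g) << 8) |
            static_cast<std::uint32_t>(b);
}
```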
- the matrix is cropped to select the area of the frame to be processed [FIG. 6].
- a horizontal strip of the image to be processed is determined. This area is chosen according to the subsequent ability to open the sprinkler so that the agrochemical is applied exactly on the specific area.
- the area to be processed is exactly the middle strip, since it maintains an optimal balance between distance to the camera, low image distortion, the time that will elapse between processing and the subsequent application of the agrochemical, and accuracy of the shot on the plant.
- An area of the image is assigned to each corresponding sprinkler, so that if weeds are detected in that area, the opening order is sent to the corresponding sprinkler.
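A minimal sketch of this area-to-sprinkler assignment, assuming equal zones across the frame width (the frame width and nozzle count used below are illustrative, not disclosed values):

```cpp
// Hypothetical sketch: divide the processed strip into equal zones, one per
// sprinkler nozzle, and map a detection's horizontal pixel position to the
// nozzle that covers it.
int nozzle_for_detection(int x_pixel, int frame_width, int nozzle_count) {
    if (x_pixel < 0) x_pixel = 0;
    if (x_pixel >= frame_width) x_pixel = frame_width - 1;
    return (x_pixel * nozzle_count) / frame_width;  // 0 .. nozzle_count-1
}
```

With a 1280-pixel frame and 4 nozzles, a weed contour centered at x = 640 would map to the third nozzle (index 2).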
- the first filter transforms the matrix to the YCbCr color format and performs a logical operation between the channels, depending on the color to be filtered [FIG. 7a].
- the second filter subtracts two channels in the RGB format, depending on the color to be filtered [FIG.7b].
- the third filter is a logical AND (bitwise) operation between the results of the two previous filters [FIG.7c].
- the fourth filter applies a blur [FIG. 7d1], or Gaussian blur, to the previous result; the image is converted to black and white [FIG. 7d2], and the noise, i.e. the scattered white points, is eliminated [FIG. 7d3]. 7)
- the contours of the image on the color mask are identified [FIG. 8] and the position information of each one is saved. 8)
- An estimate of the velocity is made from the positions of the contours found in the current frame and their positions in a previous frame. A speed in pixels per frame is obtained, and a pixel-to-meter ratio and the frame-to-second ratio are used to convert this speed to meters per second.
- The 256 x 256 pixel image squares [FIG. 11a] are sent to the first, or input, layer of the previously trained convolutional neural network for analysis and categorization [FIG. 11b]. 12) Each square is processed within the neural network, which can process several at a time. The processing is carried out in parallel and executed internally on the GPU rather than the CPU, to achieve high performance in arithmetic operations. The square is taken through the previously trained network in a parallel forward pass, and the result is sent to the output layer of the neural network. 13) The result of the last, or output, layer of the convolutional neural network is obtained.
- the result of the output layer gives an average success value for each of the categories, the highest value indicating the category of plant species to which the image belongs [FIG. 12].
- among the categories of the neural network is the NO-plant-species category, which covers unknown plant species and/or elements not to be taken into account: earth, sky, etc. If the result is this category, it means that in the analysis of this picture square there is no known or pre-trained plant species category. 14)
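The category decision described above reduces to an argmax over the output-layer scores. In this sketch, placing the NO-plant-species category at index 0 is an illustrative convention, not a disclosed detail:

```cpp
#include <vector>
#include <cstddef>

// Pick the category with the highest output-layer score.
std::size_t best_category(const std::vector<double>& scores) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < scores.size(); ++i)
        if (scores[i] > scores[best]) best = i;
    return best;
}

// Assumed convention for this sketch: index 0 is "NO plant species"
// (earth, sky, unknown elements). Any other index is a trained species.
bool is_known_species(const std::vector<double>& scores) {
    return best_category(scores) != 0;
}
```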
- the agrochemical to be used is determined.
- the system contains a table with the possible plant species to be identified and their relationship with the agrochemical to be used according to the diagnosis, if applicable.
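Such a correspondence table can be sketched as a simple map from identified species to product and dose. The entries below are purely illustrative, not values disclosed in the patent:

```cpp
#include <string>
#include <map>

// Hypothetical correspondence table relating an identified plant species
// to the agrochemical and dose to apply. Entries are illustrative only.
struct Treatment { std::string product; double liters_per_hectare; };

std::map<std::string, Treatment> make_table() {
    return {
        {"amaranthus",       {"glyphosate", 1.4}},        // weed -> herbicide
        {"soybean_stressed", {"foliar_fertilizer", 2.0}}, // crop by state
    };
}

// Returns true and fills `out` when the species has an associated
// treatment; a healthy crop with no entry triggers no application.
bool lookup(const std::map<std::string, Treatment>& table,
            const std::string& species, Treatment& out) {
    auto it = table.find(species);
    if (it == table.end()) return false;
    out = it->second;
    return true;
}
```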
- the device for the detection and identification of plant species already has a complete representation of the identified plant species and the agrochemical required to be applied in each of the plants that appear in the processed frame that was obtained from the main video stream [FIG. 13].
- the mathematical calculation of the exact moment of activation of the electromechanical order is made, taking into account the speed of the spray vehicle and the distance from the camera to the ground. Depending on the area of the processed frame, only the electromechanical valve corresponding to the specific field of action is activated.
- the valve corresponding to the specific agrochemical to be used is activated. This allows multiple tanks of agrochemical product to be administered according to the specific need.
- a field test of the complete autonomous set of devices for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals was carried out in the town of Las Rosas, province of Santa Fe, on a 20-hectare lot planted with soy.
- the herbicide selected to be applied was glyphosate (RoundUp) at approximately 1.4 liters per hectare on average.
- A self-propelled "mosquito"-type sprayer (Pla, MAP II 3250 model) was used, composed of a 3250-liter tank, whose side arms included a line of TeeJet brand sprayer nozzles commanded by solenoid valves connected directly to the computer that controls the herbicide dose application.
- the height of the nozzles and sensors with respect to the ground was 1 meter.
- the propulsion speed of the sprayer was approximately 16 km/h during the whole application.
- 12 cameras with sensors were used, distributed evenly along the 28-meter-long boom wing.
- the lot chosen for the trial was cultivated with 4-week post-emergence soybeans and had a low overall percentage of weeds with a high concentration of "staining", that is, weeds randomly distributed in patches.
Abstract
The invention relates to an autonomous set of devices for detecting and identifying wild and cultivated plant species on a farm, using software which, by obtaining video in real time, can detect, isolate and identify different wild and cultivated plant species by using convolutional neural networks able to distinguish distinctive aspects of the morphology, taxonomy and phyllotaxy of the plants. By previously training the convolutional neural networks on the characteristics that distinguish one species from another, the system allows the particular identification of each species. By means of a system of video cameras mounted along a transport vehicle, and with the data being obtained in real time, the computer system can determine the agrochemical to be applied according to the plant identified and electronically or mechanically actuate the opening of the valve of a spray nozzle. In this way, the plant receives the exact dose and the specific agrochemical according to the necessary treatment.
Description
The present invention relates to application technologies in the agro-industrial field. In particular, it is an autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals. The set consists of multiple cameras that must be arranged on the wing or boom arm of, for example, a spraying machine; a device for the detection and identification of plant species; an electronic circuit responsible for managing the opening and closing of the agrochemical-product sprayer nozzles; and an ultrasound sensor for each camera in the set.
More particularly, digital images provided by the set of cameras are sent automatically to the processing device, which is able to detect, segment and positively identify the different plant species found in the scene of the processed image. If it identifies a plant species previously designated as a species to be eliminated, i.e. a weed, the device sends a signal to the electronic circuit responsible for managing the opening and closing of the various solenoid valves of the agrochemical-product sprayer nozzles, so that a specific nozzle opens for a predetermined period of time, thereby dropping a defined dose of agrochemical product onto the desired plant. More particularly still, the processing device is able to diagnose the agrochemical to be used based on a correspondence table comprising the different plant species, their specific treatment agrochemical, and the recommended dose to be used.
There are currently several technological advances that have allowed the agro-industrial sector to incorporate new tools, techniques and machinery able to increase its efficiency and achieve better yields. This incorporation and application of technologies in the agro-industrial sector came to be defined as "Precision Agriculture". This new definition encompasses various areas, such as satellite geo-positioning technology, management software, and electronics and robotics software, to name a few.
It should also be noted that there is a relatively new field in the computer industry called artificial vision, machine vision, or computer vision, developed over the last 10 years, which has only recently begun to gain real significance and bear fruit in terms of efficiency. All this is thanks to advances in, market availability of, and affordable prices for high-definition digital video cameras capable of a large number of frames per second, together with computers with faster processors (CPUs) and the incorporation of coprocessors dedicated solely to graphics computation, called graphics cards (GPUs), making real efficiency achievable in real-time video management and analysis.
Artificial vision is a subfield of artificial intelligence whose purpose is to program a computer to "understand" the characteristics of an image.
The typical objectives of artificial vision include: detection, segmentation, localization and recognition of certain objects in images; evaluation of results, such as segmentation and registration; registration of different images of the same scene or object, that is, matching the same object across different images; tracking an object in a sequence of images; mapping a scene to generate a three-dimensional model of it, which could be used by a robot to navigate the scene; estimation of the three-dimensional poses of humans; and searching digital images by their content.
These objectives are achieved by means of pattern recognition, statistical learning, projective geometry, image processing, graph theory and other techniques.
Continuous-signal images are reproduced by analog electronic devices that record the image data accurately using various methods, such as a sequence of fluctuations in an electrical signal or changes in the chemical nature of a film emulsion, which vary continuously across the different aspects of the image. To process or display a continuous signal or analog image on a computer, it must first be converted to a digital format the computer can understand. This process applies to all images regardless of their origin and complexity, and whether they are black and white, grayscale, or full color. A digital image is composed of a rectangular or square matrix of pixels representing a series of intensity values ordered in a coordinate system in an (x, y) plane.
In the field of artificial vision, only the first steps toward application in agriculture are being taken. There are several research projects in progress, carried out by government entities and/or universities, but so far none has moved beyond the laboratory research and development stage, nor reached prototype adaptation in the field of action, so to date there is no commercial availability whatsoever for the acquisition of this technology by consumers in the agro-industrial field. However, it should be clarified that these new inventions using artificial vision are more focused on applications to provide metal-mechanical robots with visual guidance systems through the field.
Another of the latest technological advances available is Artificial Intelligence, applied through the use of convolutional neural networks. These are a type of artificial neural network in which the neurons correspond to receptive fields in a manner very similar to the neurons in the primary visual cortex (V1) of a biological brain. This type of network is a variation of a multilayer perceptron, but its operation makes it much more effective for artificial vision tasks, especially image classification. A perceptron is understood to be an artificial neuron and basic unit of inference in the form of a linear discriminator, that is, an algorithm capable of generating a criterion for selecting a subgroup from a larger group of components.
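The linear discriminator described above can be sketched in a few lines; the weights and bias used in the test are illustrative values, not learned parameters:

```cpp
#include <vector>
#include <cstddef>

// A perceptron as described above: a linear discriminator that separates a
// subgroup from a larger group. It fires when the bias plus the weighted
// sum of its inputs exceeds zero.
bool perceptron(const std::vector<double>& inputs,
                const std::vector<double>& weights, double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size() && i < weights.size(); ++i)
        sum += inputs[i] * weights[i];
    return sum > 0.0;
}
```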
The fundamentals of convolutional neural networks are based on the Neocognitron, a concept introduced by Kunihiko Fukushima in 1980 and later improved by Yann LeCun et al. in 1998 with the introduction of a backpropagation-based learning method to train the system correctly. In 2012 these networks were refined by Dan Ciresan et al. and implemented on a GPU, achieving results never before imagined.
Likewise, multithreaded parallel processing makes it possible to efficiently execute multiple threads at the same time on the same GPU, processing several algorithms concurrently and thereby extracting the maximum potential of the processor in a shorter span of time, while sharing the same logical and/or physical system resources as needed.
Convolutional neural networks consist of multiple layers with different purposes. At the beginning is the feature extraction phase, composed of convolutional and downsampling neurons. At the end of the network are simple perceptron neurons that perform the final classification on the extracted features.
The feature extraction phase resembles the stimulation process in the cells of the visual cortex. This phase consists of alternating layers of convolutional neurons and downsampling neurons. As the data progresses through this phase, its dimensionality decreases, with neurons in deeper layers being much less sensitive to perturbations in the input data while at the same time being activated by increasingly complex features.
In the feature extraction phase, the simple neurons of a perceptron are replaced by matrix processors that perform an operation on the 2D image data passing through them, rather than on a single numerical value.
The convolution operator has the effect of filtering the input image with a previously trained kernel. This transforms the data in such a way that certain features, determined by the shape of the kernel, become more dominant in the output image by having a higher numerical value assigned to the pixels that represent them. These kernels have specific image processing abilities; for example, edge detection can be performed with kernels that highlight a gradient in a particular direction. However, the kernels trained by a convolutional neural network are generally more complex, so as to be able to extract other, more abstract and non-trivial features.
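The operator described above can be sketched as a valid-mode 2D convolution (strictly, a cross-correlation, the form most CNN frameworks use). A horizontal-gradient kernel then highlights vertical edges, as a simple example:

```cpp
#include <vector>
#include <cstddef>

// Valid-mode 2D convolution of an image with a small kernel: slide the
// kernel over the image and accumulate the element-wise products.
std::vector<std::vector<double>>
convolve2d(const std::vector<std::vector<double>>& img,
           const std::vector<std::vector<double>>& k) {
    std::size_t kh = k.size(), kw = k[0].size();
    std::size_t oh = img.size() - kh + 1, ow = img[0].size() - kw + 1;
    std::vector<std::vector<double>> out(oh, std::vector<double>(ow, 0.0));
    for (std::size_t y = 0; y < oh; ++y)
        for (std::size_t x = 0; x < ow; ++x)
            for (std::size_t i = 0; i < kh; ++i)
                for (std::size_t j = 0; j < kw; ++j)
                    out[y][x] += img[y + i][x + j] * k[i][j];
    return out;
}
```

With a Prewitt-style kernel {{-1,0,1},{-1,0,1},{-1,0,1}}, a step from dark to bright columns produces a strong positive response, while a flat region produces zero, which is the gradient-highlighting behavior described above.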
Neural networks have a certain tolerance to small perturbations in the input data. For example, if two almost identical images, differing only by a lateral shift of a few pixels, are analyzed with a neural network, the result should be essentially the same. This is obtained, in part, thanks to the downsampling that occurs within a convolutional neural network. By reducing the resolution, the same features correspond to a larger activation field in the input image.
Originally, convolutional neural networks used a subsampling process to carry out this operation. However, recent studies have shown that other operations, such as max-pooling, are much more effective at summarizing features over a region. In addition, there is evidence that this type of operation is similar to how the visual cortex may summarize information internally.
The max-pooling operation finds the maximum value within a sample window and passes this value on as a summary of the features in that area. As a result, the size of the data is reduced by a factor equal to the size of the sample window over which it operates.
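Max-pooling over non-overlapping windows, as just described, can be sketched as:

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Max-pooling: take the maximum over each non-overlapping `win` x `win`
// window, shrinking each dimension of the input by a factor of `win`.
std::vector<std::vector<double>>
max_pool(const std::vector<std::vector<double>>& in, std::size_t win) {
    std::size_t oh = in.size() / win, ow = in[0].size() / win;
    std::vector<std::vector<double>> out(oh, std::vector<double>(ow));
    for (std::size_t y = 0; y < oh; ++y)
        for (std::size_t x = 0; x < ow; ++x) {
            double m = in[y * win][x * win];
            for (std::size_t i = 0; i < win; ++i)
                for (std::size_t j = 0; j < win; ++j)
                    m = std::max(m, in[y * win + i][x * win + j]);
            out[y][x] = m;
        }
    return out;
}
```

A 4x4 map pooled with a 2x2 window yields a 2x2 map, each output cell keeping only the strongest activation in its window, which is the data-size reduction by the window-size factor described above.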
After one or more feature extraction phases, the data finally reaches the classification phase. By then, the data has been distilled down to a series of features unique to the input image, and it is now the job of this last phase to classify these features toward one label or another, according to the training objectives.
Given the nature of the convolutions within convolutional neural networks, they are well suited to learning to classify all kinds of data that are distributed continuously across the input map and are statistically similar anywhere on that map. For this reason, they are especially effective at classifying images, for example for automatic image tagging.
Another important point to bear in mind is that convolutional neural networks have proven successful in many applications, due to their ability to solve certain problems with relative ease of application. Problems have been solved without the need to understand or learn their analytical and statistical properties, or the steps of the solution. Research on convolutional neural networks has resulted in a wide variety of models and learning algorithms, but only in the last two years has there been an exceptional paradigm shift.
Progress has been made in the field of artificial convolutional neural networks, and similar methodologies are beginning to be used for the identification of plant species; but all of them are based on ideal image-acquisition conditions: mainly a white background, isolation from other plants, a still image, a complete specimen, etc. This does not work in real environments and/or in crops, where plants interfere with one another, their leaf mass overlapping; where lighting conditions are adverse and the plants are moved by the wind; and, crucially, where the images must be acquired in video format at speeds of up to 20 km/h, with the consequent need to process up to 7 meters of ground per second.
Regarding the method used to train a convolutional neural network, reference may be made to patent US 7747070 B2, granted on 29 June 2010 to Microsoft Corp. That patent relates to a convolutional neural network implemented on a graphics processing unit, which is then trained through a series of forward and backward passes, with the convolutional kernels and bias matrices modified on each backward pass according to the gradient of an error function. The application exploits the parallel processing capabilities of the pixel shader units of a GPU and uses a set of formulas, from start to finish, to program the computations in the pixel shaders. Input to and output from the program is handled through textures, and a multi-pass summation process is employed when sums across pixel shader unit registers are required.
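The forward-and-backward training loop described in that patent can be illustrated in miniature. The sketch below is our own toy illustration in Python, not the patented GPU implementation: it trains a single 2x2 convolution kernel by gradient descent, moving the kernel against the gradient of a squared-error function on each backward pass.

```python
def conv(img, k):
    """'Valid' convolution of a grayscale image with a 2x2 kernel."""
    n = len(img) - 1  # output side for a 2x2 kernel
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(2) for b in range(2))
             for j in range(n)] for i in range(n)]

def loss(out, tgt):
    """Squared-error function between output and target feature maps."""
    return sum((out[i][j] - tgt[i][j]) ** 2
               for i in range(len(out)) for j in range(len(out[0])))

def train_step(img, k, tgt, lr=0.05):
    """One forward pass plus one backward pass: the kernel is moved
    against the gradient of the error function."""
    out = conv(img, k)
    grad = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(len(out)):
        for j in range(len(out[0])):
            err = 2.0 * (out[i][j] - tgt[i][j])  # d(loss)/d(out[i][j])
            for a in range(2):
                for b in range(2):
                    grad[a][b] += err * img[i + a][j + b]
    return [[k[a][b] - lr * grad[a][b] for b in range(2)] for a in range(2)]

# Recover a known kernel from the feature map it produces:
img = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]]
target = conv(img, [[1.0, 0.0], [0.0, -1.0]])
kernel = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(2000):
    kernel = train_step(img, kernel, target)
```

After the loop, the learned kernel reproduces the target feature map to within a small residual error, which is the essence of the gradient-based training the patent describes.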
In this way, convolutional neural networks are being used for image recognition and classification. In a recognition process using a classifier based on a convolutional neural network, an image is fed into the network and, after several repetitions of convolution, max-pooling and fully connected operations, the result extracted from the recognition is an accurate classification of the image together with a confidence level for that result.
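The pipeline just described (convolution, max-pooling, full connection, and a class label with a confidence level) can be sketched in a deliberately tiny form. The kernel and class weights below are illustrative assumptions, not a trained network:

```python
import math

def convolve2d(img, kernel):
    """'Valid' 2-D convolution of a grayscale image (list of lists)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keeps the strongest response per cell."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

def classify(features, weights):
    """Fully connected layer plus softmax: returns (class index, confidence)."""
    flat = [v for row in features for v in row]
    scores = [sum(w * x for w, x in zip(ws, flat)) for ws in weights]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    probs = [e / sum(exps) for e in exps]
    best = probs.index(max(probs))
    return best, probs[best]

# A 6x6 horizontal-gradient image, a vertical-edge kernel and two
# illustrative output classes:
image = [[j / 5.0 for j in range(6)] for _ in range(6)]
edge_kernel = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]
fmap = convolve2d(image, edge_kernel)   # 4x4 feature map
pooled = max_pool(fmap)                 # 2x2 after pooling
label, confidence = classify(pooled, [[0.5] * 4, [-0.5] * 4])
```

A real network stacks many such convolution/pooling stages with learned kernels, but the flow of data — image in, label and confidence out — is the same.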
There are also two commercially available products of similar technological conception, known as "Trimble WeedSeeker" and "PSB-Weedit". Both systems claim, as a commercial argument, to be weed detectors, but owing to their characteristics and functionalities they are only capable of recognising the presence of chlorophyll through their infrared light sensors. This is useful only when a field is fallow, that is, an uncultivated field awaiting a new sowing, or in pre-emergence if it has been sown; since they cannot distinguish between the specific crop plant and the plants to be treated as weeds, they are not useful in any other situation.
Object tracking is a process for estimating, over time, the location of one or more moving objects using a camera. The rapid improvements in the quality and resolution of image sensors, together with the impressive increase in computing power achieved over the last decade, have favoured the creation of new algorithms and applications based on object tracking.
Object tracking can be a slow process because of the large amount of data a video contains, and its complexity may increase further if object recognition techniques are needed to perform the tracking.
Video cameras capture information about the objects of interest in the form of sets of pixels. By modelling the relationship between the appearance of the object of interest and the values of the corresponding pixels, an object tracker estimates the location of that object over time. The relationship between the object and the projection of its image is highly complex and may depend on factors other than the object's position alone, which makes object tracking a difficult goal to achieve.
The main challenges to be taken into account in the design of an object tracker relate to the similarity in appearance between the object of interest and the other objects in the scene, as well as to variations in the appearance of the object itself. Since the appearance of both the other objects and the background may be similar to that of the object of interest, they can interfere with its observation. In that case, the features extracted from those unwanted areas can be difficult to distinguish from those the object of interest is expected to generate. This phenomenon is known as "clutter".
In addition to the tracking challenge posed by clutter, changes in the appearance of the object in the image plane make tracking more difficult when caused by one or more of the following factors: changes in position, ambient lighting, noise, and occlusions.
In a tracking scenario, an object can be defined as anything that is of interest for subsequent analysis. Objects can be represented by their shapes and appearances, generally: points, primitive geometric shapes, the object's silhouette and contour, articulated shape models, and skeletal models.
There are also several ways of representing the appearance characteristics of objects. It should be borne in mind that shape representations can also be combined with appearance representations in order to perform tracking. Some of the most common appearance representations are: the probability density of the objects' appearance, templates, active appearance models, and multi-view appearance models.
Selecting the appropriate features plays a fundamental role in tracking. In general, the most desirable visual feature is uniqueness, because it allows objects to be distinguished easily in feature space. The most common features are the following: colour, edges, optical flow, and texture.
Every tracking method requires an object detection mechanism, applied either in every frame or when the object first appears in the video. A common approach to object detection is to use the information from a single frame. However, some object detection methods make use of temporal information computed from a sequence of images in order to reduce the number of false detections. This temporal information is generally computed with the "frame differencing" technique, which highlights the regions that change between consecutive frames. Once the regions of the object in the image have been identified, it is then the tracker's task to establish the object correspondence from one frame to the next in order to generate the track. The most popular methods in the context of object tracking are: point detectors, background subtraction, and segmentation.
Point detectors are used to find points of interest in images that have an expressive texture in their respective neighbourhoods. Points of interest have long been used in the context of motion and tracking problems. A desirable property of points of interest is their invariance to changes in illumination and in the camera viewpoint.
Object detection can also be achieved by building a representation of the scene, called the background model, and then finding deviations from that model in each incoming frame. Any significant change in a region of the image with respect to the background model indicates a moving object. The pixels that make up the changing regions are marked for further processing. In general, a connected-components algorithm is applied to obtain the connected regions that correspond to the objects. This process is known as background subtraction.
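The two stages just described — deviation from a background model, then connected-component labelling — can be sketched as follows (our own minimal illustration, using a static background model and 4-connectivity):

```python
from collections import deque

def subtract_background(frame, background, threshold=0.1):
    """Mark pixels that deviate significantly from the background model."""
    return [[1 if abs(frame[i][j] - background[i][j]) > threshold else 0
             for j in range(len(frame[0]))]
            for i in range(len(frame))]

def connected_components(mask):
    """Label 4-connected regions of a binary mask by breadth-first search;
    returns the label map and the number of regions (candidate objects)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                count += 1
                queue = deque([(i, j)])
                labels[i][j] = count
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

# An empty background and a frame containing two separate moving regions:
background = [[0.0] * 5 for _ in range(5)]
frame = [[0.0] * 5 for _ in range(5)]
frame[0][0] = frame[0][1] = 1.0   # first object
frame[3][3] = frame[4][3] = 1.0   # second object
mask = subtract_background(frame, background)
labels, n_objects = connected_components(mask)
```

In practice the background model is usually adaptive (e.g. a running average or a mixture of Gaussians per pixel) rather than a fixed image, but the structure of the process is the same.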
The goal of image segmentation algorithms is to divide the image into perceptually similar regions. Every segmentation algorithm addresses two problems: the criteria for a good partition, and the method for achieving that partition efficiently. The various techniques for segmenting moving objects can be separated into two large groups: those based on motion, and those based on spatio-temporal features.
The motion-based techniques rely mainly on motion information. Within this group two types can be distinguished: those that work with motion in two dimensions (2D) and those that work in three (3D). The two-dimensional techniques include techniques based on discontinuities in the optical flow and techniques based on change detection.
2D motion models are simple but less realistic. As a consequence, 3D segmentation systems are the most widely used in practice. Within the three-dimensional methods, two different algorithms can be distinguished: structure from motion (SFM) and parametric algorithms.
SFM generally handles 3D scenes that contain relevant depth information, whereas parametric methods do not assume such depth. Another important difference between the two algorithms is that SFM assumes rigid motion, while parametric algorithms assume rigidity of motion only in parts of the scene.
As for the spatio-temporal techniques, segmentation methods based solely on motion are sensitive to inaccuracies in the motion estimation. To overcome these problems, spatio-temporal methods propose complementing the motion with the use of spatial information. There are two dominant approaches: boundary-based and region-based.
Object tracking is a very important task within the field of video processing. The main objective of object tracking techniques is to generate the trajectory of an object through time by locating it within the image. The techniques can be classified into three large groups: point tracking, kernel tracking, and silhouette tracking.
In point tracking techniques, the objects detected in consecutive images are each represented by one or several points, and the association between them is based on the state of the object in the previous image, which may include its position and motion. An external mechanism is required to detect the objects in every frame. This technique can run into problems in scenarios where the object undergoes occlusions, and when objects enter or leave the scene. Point tracking techniques can in turn be classified into two broad categories: deterministic and statistical.
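A deterministic point-tracking step, in its simplest form, associates each point from the previous frame with the nearest detection in the current frame. The greedy nearest-neighbour sketch below is our own minimal illustration of that idea (real deterministic trackers add motion constraints and solve the assignment globally):

```python
def associate_points(prev_pts, curr_pts):
    """Greedy deterministic association: each point from the previous
    frame is matched to the nearest unused detection in the current
    frame (squared Euclidean distance)."""
    matches, used = {}, set()
    for i, (px, py) in enumerate(prev_pts):
        best, best_d = None, float("inf")
        for j, (cx, cy) in enumerate(curr_pts):
            if j in used:
                continue
            d = (px - cx) ** 2 + (py - cy) ** 2
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

# Two tracked points; the detections arrive in a different order:
previous = [(0.0, 0.0), (10.0, 10.0)]
current = [(10.0, 11.0), (1.0, 0.0)]
matches = associate_points(previous, current)
```

Here `matches` maps each previous-frame point index to its current-frame counterpart, extending each trajectory by one frame.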
Kernel tracking techniques compute the motion of the object, represented by an initial region, from one image to the next. The motion of the object is generally expressed as a parametric motion (translation, rotation, affine, etc.) or by means of the flow field computed over the following frames. Two categories can be distinguished: tracking using templates and appearance models based on probability density, and tracking based on multi-view models.
Silhouette tracking techniques work by estimating the object region in each image using the information it contains. This information may take the form of appearance density or of shape models, which are generally represented as edge maps. Two methods are available: shape matching and contour tracking.
The tracking of objects of interest in video underlies many applications, ranging from video production to remote surveillance and from robotics to interactive games. Object trackers are used to improve the understanding of certain video data sets in medical and security applications, to increase productivity by reducing the amount of labour needed to complete a task, and to enable natural interaction with machines.
Optical flow is the pattern of apparent motion of the objects, surfaces and edges in a scene caused by the relative motion between an observer's eye, or a camera, and the scene.
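A classical way to estimate optical flow solves the brightness-constancy constraint Ix·vx + Iy·vy + It = 0 in the least-squares sense over a patch, assuming one common motion for all pixels in it (the Lucas-Kanade assumption). The sketch below is our own minimal single-patch illustration, not the method used by the invention:

```python
def lucas_kanade_patch(prev, curr):
    """Estimate a single flow vector (vx, vy) for a whole patch by
    solving Ix*vx + Iy*vy + It = 0 in the least-squares sense."""
    h, w = len(prev), len(prev[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            ix = (prev[i][j + 1] - prev[i][j - 1]) / 2.0  # spatial gradient x
            iy = (prev[i + 1][j] - prev[i - 1][j]) / 2.0  # spatial gradient y
            it = curr[i][j] - prev[i][j]                  # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        return 0.0, 0.0  # aperture problem: flow not recoverable
    vx = (-sxt * syy + syt * sxy) / det
    vy = (-syt * sxx + sxt * sxy) / det
    return vx, vy

# A textured surface translated one pixel to the right between frames:
prev = [[float(j * j + i * i) for j in range(8)] for i in range(8)]
curr = [[float((j - 1) ** 2 + i * i) for j in range(8)] for i in range(8)]
vx, vy = lucas_kanade_patch(prev, curr)
```

The recovered vector approximates (1, 0), i.e. one pixel of rightward motion; the small residual error comes from the finite-difference approximation of the gradients.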
The concept of optical flow was first studied in the 1940s and was eventually published by the American psychologist James J. Gibson in his 1977 article on the theory of affordances, in which he describes all the possibilities for action that are materially possible in a given context, that is, the quality of an object or environment that allows an individual to perform an action.
The concept of "affordances" can be interpreted through the notion of availabilities or offers. There is no generally accepted Spanish translation of the meaning of this concept, which has been rendered with various meanings such as "permissiveness", "enablement", "environmental opportunities" and even "invitations to use".
In 1988, Donald Norman used the term "affordances" in the context of HCI (Human-Computer Interaction) to refer to those possibilities for action that are immediately perceived by the user. In his book "The Design of Everyday Things" he frames the concept of affordances not only in terms of the user's physical capabilities, but also in terms of the user's ability to draw on past experiences, goals, plans and estimates by comparison with other experiences.
A second, more refined definition therefore describes "affordances" as the possibilities for action that a user is aware of being able to perform. Applications of optical flow such as motion detection, object segmentation, time-to-collision and focus-of-expansion calculation, motion-compensated coding, and stereoscopic disparity measurement all make use of this motion of the surfaces and edges of objects.
These technologies can be found applied in, for example, US patent 6038337 A, which relates to a hybrid neural-network system for object recognition that combines local image sampling, a self-organising map neural network and a hybrid convolutional neural network. The self-organising map provides a quantisation of the image samples into a topological space in which inputs that are close in the original space are also close in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, while the hybrid convolutional neural network provides partial invariance to translation, rotation, scale and deformation. The hybrid convolutional network extracts successively larger features through a set of hierarchical layers. Alternative embodiments are described that use the Karhunen-Loève transform in place of the self-organising map, and a multilayer perceptron in place of the convolutional network.
For its part, US patent application 20150245565 A1 relates to a device and method for applying chemicals to plants and parts of plants in specific natural environments as well as in crop fields. In a preferred embodiment, an autonomous vehicle carries the chemical-application device and is, in part, controlled by the processing requirements of the device's machine-vision component, which is responsible for detection and for assigning lists of targets to chemical ejectors that aim at these target points as the apparatus moves through the field or a natural environment.
However, the device of patent application US 20150245565 A1 is only capable of detecting the presence of a plant; it is unable to identify which plant species it is. It can only distinguish between two absolutely different features, such as the soil and the plants, and can determine only with a certain probability whether something is crop or not.
Such a system can distinguish the crop only when certain parameters are met (paragraph [0024]), and with a high margin of error, which translates into poor performance in managing the agrochemical to be applied. It does so by using computer vision to identify the main crop line at all times, so that it is basically capable of applying herbicide, or any other agrochemical, to any plant lying outside that main line. Therefore, for the device of the application in question, only those plants located within the crop row are treated as crop plants; if they lie outside the row, they are considered weeds. If a plant is within the row but its leaf shape is somewhat different, it is likewise considered a weed; and if it is in the row and its leaf shape is similar but the spacing between plants is not the one usually used when sowing that crop, it is also considered a weed. In short, this system is unable to distinguish between different plant species; it can only tell whether something is a plant or soil.
The system proposed in patent application US 20150245565 A1 has the major limitation of only being able to work on crops sown in continuous rows. Agricultural crops are most often sown in continuous rows, but it is practically impossible to keep the rows uniform across the whole plot, mainly because the vehicle that previously carried out the sowing makes several deviations in its trajectory, some of them unavoidable. This eventually causes failures when the agrochemical applicator vehicle tries to maintain its trajectory by following the main line.
In addition, this system is unable to distinguish which plant species is present in order to apply the specific herbicide and thereby eliminate that species. It is therefore not a system that works on all types of terrain without needing a given pattern to maintain its trajectory; it cannot identify the species involved, nor can it perform an intelligent application of the required agrochemical with large cost savings in agrochemicals and outstanding effectiveness in weed management.
On the other hand, where that application mentions the use of a multispectral camera (paragraph [0031]), it clearly specifies that such a camera is not currently viable for real-time use because it only works with still photographs, not live video feeds. Moreover, it requires normalization and/or calibration of the light source before each analysis, which is completely impracticable in an environment with uncontrolled lighting, such as an outdoor site.
Therefore, there is still a need for agricultural machinery that serves not only to apply a herbicide but also to specifically identify which plants are weeds, differentiating them from the crop, and to direct the application of the herbicide only onto the weeds, dosing it according to their identification and behavior in order to achieve the best possible result. In other words, there is a need for agricultural equipment that makes it possible to eliminate weeds effectively and selectively from among the crop plants, with the precise dose specific to the identified weed, thereby protecting both the development of the crop and the environment.
Therefore, the object of the present invention is an autonomous set of devices for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals, said set comprising: a chemical application device comprising at least one agrochemical container in fluid communication with a plurality of spray nozzles through a valve; a plurality of cameras arranged on the autonomous vehicle and focused on the crop, where each camera has an associated ultrasonic sensor for real-time measurement of the height above the crop, and where each camera is tilted forward at 45 degrees from the normal; a device for detecting and identifying plant species connected to the cameras to receive video information from them; and an electronic circuit in charge of managing the opening and closing of the valves of the agrochemical spray nozzles, connected to the detection device, which manages the opening and closing of the spray-nozzle valves through said circuit; and where the set of devices is mounted on a transport vehicle.
Preferably, the transport vehicle is a self-propelled vehicle or a towed vehicle.
More preferably, the self-propelled vehicle is a spraying vehicle with lateral booms arranged perpendicular to it (a "mosquito" sprayer).
More preferably still, the detection and identification device comprises a processor.
Even more preferably, the processor comprises a tool based on computer software developed in C++, a computer vision framework, and a convolutional neural network framework.
Another object of the present invention is a method for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals with the autonomous set of devices described above, said method comprising the steps of:
a) detecting and classifying plant species in an agricultural crop as the autonomous set of devices travels through the crop; b) analyzing the information obtained in a), determining the area and perimeter of the weeds; and c) spraying the appropriate dose of an agrochemical in the right place, taking into account the forward speed of the equipment.
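As a rough illustration of step c), the valve-open time for a detected weed can be derived from the weed's extent along the travel direction and the measured forward speed, and the dose from the weed's detected area. The sketch below is a minimal Python illustration with hypothetical margin and application-rate values; the patent specifies no formulas, and the actual implementation is described as C++ software.

```python
def valve_open_seconds(weed_length_m, speed_mps, margin_m=0.05):
    """Time the solenoid valve must stay open so the spray covers the
    weed's extent along the travel direction, plus a safety margin on
    each side, at the current forward speed."""
    if speed_mps <= 0:
        raise ValueError("forward speed must be positive")
    return (weed_length_m + 2 * margin_m) / speed_mps


def dose_ml(weed_area_m2, rate_ml_per_m2=40.0):
    """Dose proportional to the weed's detected area; the rate is a
    hypothetical application rate, not a value from the patent."""
    return weed_area_m2 * rate_ml_per_m2


# Example: a 20 cm weed at 3 m/s forward speed -> about 0.1 s open time
print(valve_open_seconds(0.2, 3.0))
```

The margin term compensates for latency between detection and actuation; tuning it against real valve response times would be part of field calibration.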
Preferably, step a) of the method makes it possible to distinguish the crop from the weeds.
More preferably, step a) of the method makes it possible to identify the plant species in order to determine the agrochemical to apply.
Preferably, in step c) of the method the dose of agrochemical is sprayed by opening a solenoid valve.
Also preferably, the plant species correspond to the crop and to the weeds.
More preferably still, the agrochemical is a herbicide, a foliar fertilizer, an insecticide, a fungicide, or a protective compound.
Even more preferably, step b) also makes it possible to determine the state of the crop.
Additionally, step c) of the method comprises selecting a specific herbicide from a set of herbicides for each weed identified in step a) with respect to the crop.
Also additionally, step c) of the method comprises selecting a specific foliar fertilizer from a set of foliar fertilizers for the crop identified in step a) according to its state.
Additionally, step c) of the method comprises selecting a specific insecticide from a set of insecticides for the crop identified in step a) according to its state of deterioration.
Still additionally, step c) of the method comprises selecting a specific fungicide from a set of fungicides for the crop identified in step a) according to its state of deterioration.
In addition, step c) of the method comprises selecting a specific protective compound from a set of protective compounds for the crop identified in step a) according to its state.
More specifically, the method for detecting and identifying plant species in an agricultural crop comprises the steps of: a) obtaining a real-time video stream from a plurality of cameras positioned along the booms of the autonomous set of devices for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals; b) processing each of the frames obtained; c) converting the frame to a numerical matrix with the RGB (Red-Green-Blue) color representation of each pixel of the image; d) cropping the matrix to select the area of the frame to be processed; e) assigning an area of the image to the corresponding spray nozzles, so that if a weed is detected in that area, the opening command is sent to the corresponding nozzle; f) applying 4 filters to obtain a mask of the predominant colors of the plant species to be identified; g) identifying the contours of the image on the color mask, storing the position information of each one; h) estimating the travel speed from the positions of the contours found in the current frame and the positions of those same contours in a previous frame; i) obtaining a speed in pixels per frame and converting that speed to meters per second using a pixel-to-meters ratio and a frames-to-seconds ratio; j) cropping the image into small squares containing the contours; k) resizing each of the squares cropped from part of the image to a preferred size; l) sending the image squares to the first layer (input layer) of the previously trained convolutional neural network for analysis and categorization; m) processing each square within the previously trained neural network, where the image is taken, a forward pass is performed in parallel, and the result is sent to the output layer of the neural network; n) obtaining the result of the last layer (output layer) of the convolutional neural network; ñ) determining the agrochemical to use based on the numerical value of the identified plant species; o) correlating the complete representation of the identified plant species with the agrochemical that needs to be applied to each of the plants appearing in the processed frame obtained from the main video stream; and p) sending the command, according to the species identified for each image processed by the neural network, to the spray nozzle associated with the area where the species is located, so that the spray lands exactly on the plant species to be treated.
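Steps h) and i) above reduce to a unit conversion: an average contour displacement in pixels per frame, multiplied by a pixel-to-meters ratio and a frames-per-second rate, yields meters per second. A minimal Python sketch, where the helper names and the already-matched contour centers are assumptions, not details from the patent:

```python
def displacement_px(prev_centers, curr_centers):
    """Average displacement, in pixels, of matched contour centers
    between the previous frame and the current one (step h)."""
    deltas = [abs(c - p) for p, c in zip(prev_centers, curr_centers)]
    return sum(deltas) / len(deltas)


def speed_mps(px_per_frame, meters_per_pixel, frames_per_second):
    """Convert a pixel-per-frame displacement to meters per second
    using the pixel-meter and frame-second ratios (step i)."""
    return px_per_frame * meters_per_pixel * frames_per_second
```

For example, a 10 px/frame displacement at 0.002 m/pixel and 30 frames/s corresponds to a forward speed of about 0.6 m/s.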
Preferably, in step f) of the above method, foreign elements such as soil, dry plant residue, and stones are separated from the plant species, where: a first filter transforms the matrix to the YCbCr color format; a second filter subtracts two channels in the RGB format, depending on the color to be filtered; a third filter is a logical AND operation (bitwise) between the results of the first and second filters; and a fourth filter applies a Gaussian blur to the previous result, converting the image to black and white and removing the noise.
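The first three filters can be sketched per pixel as follows. This is a simplified Python illustration with illustrative thresholds (the patent gives no threshold values), and it omits the fourth filter's Gaussian blur and binarization, which operate on the whole image rather than on single pixels:

```python
def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 conversion; filter 1 transforms the frame to YCbCr.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr


def vegetation_mask(pixel):
    """Per-pixel mask combining a YCbCr chroma test (filter 1) with an
    excess-green channel subtraction in RGB (filter 2) through a
    logical AND (filter 3). Thresholds are illustrative only."""
    r, g, b = pixel
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    green_chroma = cb < 120 and cr < 120           # filter 1: chroma test
    excess_green = (g - r) > 30 and (g - b) > 30   # filter 2: channel subtraction
    return green_chroma and excess_green           # filter 3: AND of both masks
```

A saturated green pixel such as (0, 200, 0) passes both tests, while brownish soil such as (120, 90, 60) fails the chroma test, which is the separation the combined mask is meant to achieve.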
[Fig.1a] schematically represents the way data passes through different types of tests in order to make a decision in a three-layer network.
[Fig.1b] schematically represents the way the input layers of the network contain neurons that encode the values of the input pixels.
[Fig.1c] schematically represents a possible architecture, with rectangles denoting the subnetworks, in order to show how convolutional neural networks work.
[Fig.2a] schematically represents a rapid detection for classifying plants that makes it possible to distinguish the crop from the weeds.
[Fig.2b] schematically represents the area and perimeter analysis that the system performs once a weed has been detected.
[Fig.2c] schematically represents the precise and accurate spraying that the system performs on the weed once the plant species and its size have been detected.
[Fig.3] represents a frame of a real-time video stream obtained from a camera.
[Fig.4] represents a table of equivalences between the number of shutter actuations per second and the speed of the moving vehicle that carries the cameras.
[Fig.5] represents the frame of [FIG.3] converted to a numerical matrix with the RGB (Red-Green-Blue) color representation of each pixel of the image.
[Fig.6] represents an ideal size for the central horizontal strip of the image, taken from the frame of [FIG.3], that is to be processed.
[Fig.7a] represents the transformation performed by a first filter, which converts the matrix to the YCbCr color format.
[Fig.7b] represents the transformation performed by a second filter, which subtracts two channels in the RGB format depending on the color to be filtered.
[Fig.7c] represents a logical AND (bitwise) transformation, performed by a third filter, between the results of the two previous filters shown in [FIG.7a] and [FIG.7b].
[Fig.7d1] represents the transformation of a fourth filter, which applies a Gaussian blur to the previous result of [FIG.7c].
[Fig.7d2] represents the conversion of the image of [FIG.7d1] to black and white.
[Fig.7d3] represents the removal of the noise of [FIG.7d2], which corresponds to the scattered white dots.
[Fig.8] represents the identification of the contours of the image on the color mask.
[Fig.9] represents the image cropped into small squares of approximately the same size that contain the contours of [FIG.8].
[Fig.10] represents one of the squares cropped from [FIG.9], with the image resized to a preferred size of 256 x 256 pixels.
[Fig.11a] represents another of the squares cropped from [FIG.9], corresponding to a weed present in the crop.
[Fig.11b] represents a sequence in which the square of [FIG.11a], a 256 x 256 pixel image, is sent to the first layer, or input layer, of the previously trained convolutional neural network for analysis and categorization until it reaches a last layer.
[Fig.12] represents a result, as an average success value for each of the categories, obtained from the last layer, or output layer, of the convolutional neural network according to the sequence of [FIG.11a].
[Fig.13] is a complete representation of the processed frame from the main video stream according to the sequence of [FIG.11a], in which the identified unwanted plant species are shown framed in red, so that the necessary agrochemical can be applied to each of the weeds.
[Fig.14] represents an AlexNet model consisting of 5 convolutional layers, according to the architecture chosen for training the Caffe network.
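Each convolutional layer of a model such as AlexNet slides small learned kernels across the image and applies a ReLU nonlinearity to the result. A toy single-channel Python version of that operation (real frameworks such as Caffe implement it as optimized cross-correlation over many channels at once):

```python
def conv2d_relu(image, kernel):
    """Valid-mode single-channel convolution (implemented, as in most
    CNN frameworks, as cross-correlation) followed by a ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = max(acc, 0.0)  # ReLU nonlinearity
    return out
```

Stacking five such layers, interleaved with pooling and followed by fully connected layers, is what the AlexNet architecture of [FIG.14] does at scale.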
[Fig.15] schematically represents a preferred embodiment of the autonomous set of devices for detecting and identifying plant species according to the present invention, showing how the Arduino boards, each with a microcontroller and an IDE (Integrated Development Environment) and with analog and digital inputs and outputs, are linked to the CPU (Central Processing Unit) through a USB (Universal Serial Bus) port. The cameras, the ultrasonic sensors, and the spray nozzles are connected to the Arduino boards.
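For the CPU-to-Arduino link over USB serial shown in [Fig.15], the patent does not specify a wire protocol. A hypothetical framing for an "open valve" command, with a start byte, a nozzle index, a 16-bit spray duration, and an additive checksum, might look like this in Python (every field of this frame layout is an assumption for illustration only):

```python
def open_command(nozzle_index, duration_ms):
    """Encode a hypothetical 'open valve' frame for the Arduino:
    start byte 0xA5, nozzle index, 16-bit duration (big-endian),
    and a one-byte additive checksum. The real wire protocol is
    not specified in the patent."""
    if not (0 <= nozzle_index < 256 and 0 <= duration_ms < 65536):
        raise ValueError("field out of range")
    payload = bytes([0xA5, nozzle_index, duration_ms >> 8, duration_ms & 0xFF])
    return payload + bytes([sum(payload) & 0xFF])
```

On the Arduino side, the firmware would read five bytes, verify the checksum, and energize the solenoid valve for the requested duration.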
[Fig.16] represents a detail of the end of a lateral boom of a spraying unit, where the cameras can be seen installed, tilted forward at an angle of approximately 45 degrees with respect to the lower vertical axis, together with a plurality of associated spray nozzles.
[Fig.17] schematically represents a camera tilted forward at an angle α of approximately 45 degrees with respect to the lower vertical axis, installed on a lateral boom of a spraying unit, showing the image capture and the approximate size of the scene to be processed at that inclination.
The present invention consists mainly of an autonomous set of devices for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals. The set consists of multiple cameras, which must be arranged on the wing or arm of the boom of, for example, a spraying machine; a device for detecting and identifying plant species; an electronic circuit in charge of managing the opening and closing of the agrochemical spray nozzles; and an ultrasonic sensor for each camera in the set.
More specifically, the object of the present invention is an autonomous set of devices for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals, said set comprising: a chemical application device comprising at least one agrochemical container in fluid communication with a plurality of spray nozzles through a valve; a plurality of cameras arranged on the autonomous vehicle and focused on the crop, where each camera has an associated ultrasonic sensor for real-time measurement of the height above the crop, and where each camera is tilted forward at 45 degrees from the normal; a device for detecting and identifying plant species connected to the cameras to receive video information from them; and an electronic circuit in charge of managing the opening and closing of the valves of the agrochemical spray nozzles, connected to the detection device, which manages the opening and closing of the spray-nozzle valves through said circuit; and where the set of devices is mounted on a transport vehicle.
[FIG. 15], [FIG. 16] and [FIG. 17] show diagrams and schematics of the autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals according to a preferred embodiment of the present invention.
Likewise, the transport vehicle is a self-propelled vehicle or a towed vehicle. In particular, the self-propelled vehicle is a spraying vehicle with side booms arranged perpendicular to it (a "mosquito" sprayer).
The detection and identification device consists of a processor comprising a software tool developed in the C++ language, using the OpenCV computer vision framework and the "Caffe" convolutional neural network framework from the Berkeley Vision and Learning Center. By using this tool, recognition of the different plant species can be achieved with 96% effectiveness.
In the operation of a neural network, two phases or modes of operation can be clearly distinguished: (i) a learning or training phase and (ii) an operation or execution phase.
During the first phase, the learning phase, the network is trained to perform a certain type of processing. Once an adequate level of training has been reached, the network passes to the operation phase, where it is used to carry out the task for which it was trained.
Training or Learning of the Convolutional Neural Network:
The following details the architecture used for the convolutional neural network, and how it is trained for the recognition and classification of the different plant species known in agricultural fields.
The Convolutional Neural Network is supplied with a data set of a minimum of 50,000 photographs of the different plant species to be identified during training, taking into account the specific region of the planet and the species predominant in that place. These photographs are loaded through different folders or directories that represent the category to which each of them belongs. The photographs are supplied, for example, in JPEG format and at a minimum size of 80 x 80 pixels, preferably at a recommended size of 256 x 256 pixels, and for each species they include different situations of the seedling, namely loose leaves, partial leaves, the whole plant, flowers, the plant in its context, etc. The architecture chosen for training the Caffe network is the AlexNet model, which consists of 5 convolutional layers [See FIG. 14].
For the operation and training of the convolutional neural network it is necessary to have a computer (CPU) with x86 architecture and an Nvidia graphics card with CUDA 7.0 support or later, with at least 2500 CUDA cores of processing capacity, which makes it possible to execute arithmetic operations in parallel and to increase computing power through the use of the GPU (graphics processing unit). Once training has been executed, the system returns a "deploy.prototxt" file that will be used from then on within the plant species detection and identification processing software.
Once the learning or training phase is finished, the network generates a "deploy.prototxt" file, which is basically the learning model, and in this way it can already be used to perform the task for which it was trained. One of the main advantages of this model is that the network learns the relationships existing in the data, acquiring the ability to generalize concepts. In this way, a convolutional neural network can operate on information that was not presented to it during the training phase.
Regarding the operation of the classification system, the teachings of patent application US 2015036920 A1, published on February 5, 2015 and filed by FUJITSU LTD., are incorporated here by way of reference. That application refers to a classifier based on convolutional neural networks, a classification method using a classifier based on convolutional neural networks, and a method for training the classifier based on convolutional neural networks. The convolutional neural network based classifier comprises: a plurality of feature mapping layers, at least one feature map in at least one of the plurality of feature mapping layers being divided into a plurality of regions; and a plurality of convolutional templates corresponding to the plurality of regions, respectively, each of the convolutional templates being used to obtain a response value of a neuron in the corresponding region.
According to the present invention, [FIG. 1a] is an example that illustrates how data passes through different types of tests in order to make a decision in a three-layer network.
[FIG. 1b] is an example that illustrates how the input layers of the network contain neurons that encode the values of the input pixels. The data for this neural network are formed from images of 28 x 28 pixels, so the input layer contains 784 = 28 × 28 neurons.
[FIG. 1c] is an example that illustrates a possible architecture, with rectangles denoting the subnetworks. This is not meant to be a realistic approach to solving the problem of detection and identification of plant species; it is given only by way of example, to understand how convolutional neural networks work.
Example of a Quick Outline of the Detection and Identification Process used in the Selective Spraying of Herbicide on Weeds in a Soybean Plantation.
Detection: The rapid classification of plants makes it possible to distinguish the crop from the weeds [FIG. 2a].
Analysis: The set of devices detects the weed. Its area and perimeter are calculated in order to discharge the exact agrochemical dose in the correct place [FIG. 2b].
Spraying: Once the plant species and its size have been detected by the system, taking into account the forward speed of the equipment, the solenoid valve opens to let the agrochemical dose pass, spraying precisely and accurately on the plant species [FIG. 2c].
Additionally, the set of devices makes it possible to determine the state of the crop and, the agrochemical being a herbicide, a foliar fertilizer, an insecticide, a fungicide, or a protective compound, the appropriate agrochemical can be applied according to each circumstance.
For example, the method makes it possible to select a specific herbicide from a set of herbicides for each weed identified with respect to the crop; or a specific foliar fertilizer from a set of foliar fertilizers for the identified crop according to its state; or a specific insecticide from a set of insecticides for the identified crop according to its state of deterioration; or a specific fungicide from a set of fungicides for the identified crop according to its state of deterioration; and/or a specific protective compound from a set of protective compounds for the identified crop according to its state.
Step-by-Step Processing Flow in the Weed Detection and Identification Process
According to the present invention, the step-by-step processing flow in the process of detection and identification of the plant varieties of interest is as follows:

1) A real-time video stream [FIG. 3] is obtained from one or several cameras positioned along the wings or booms of, for example, a spraying machine. This stage operates at 60 frames per second, or n shutter actuations per second [FIG. 4]. The number of shutter actuations per second depends on the speed of the moving vehicle.

2) Each of the frames obtained is processed.

3) The frame is converted to a numerical matrix with the RGB (Red-Green-Blue) color representation of each pixel of the image [FIG. 5]. Each pixel has blue, green and red components. Each of these components has a range of 0 to 255, which gives a total of 256³ = 16,777,216 different possible colors.

4) The matrix is cropped to select the area of the frame to be processed [FIG. 6]. A horizontal strip of the image to be processed is determined. This area is chosen as a function of the subsequent ability to open the nozzle so that the agrochemical is applied exactly on the specific area. The area to be processed is exactly the middle strip, since it maintains an optimal relationship between distance to the camera, low image distortion, the time that will elapse between processing and the subsequent application of the agrochemical, and accuracy of the shot on the plant.

5) An area of the image is assigned to the corresponding nozzles, so that if a weed is detected in that area, the opening order is sent to the corresponding nozzle.

6) Four filters are applied to obtain a mask of the predominant colors of the plant species to be identified. In this way, for example, the soil, dry plant residue, stones, etc. are separated out. The first filter transforms the matrix to the YCbCr color format and performs a logical operation between the channels, depending on the color to be filtered [FIG. 7a]. The second filter subtracts two channels in the RGB format, depending on the color to be filtered [FIG. 7b]. The third filter is a bitwise logical AND operation between the results of the two previous filters [FIG. 7c]. Finally, the fourth filter applies a Gaussian blur to the previous result [FIG. 7d1], the image is converted to black and white [FIG. 7d2], and the noise, consisting of scattered white points, is removed [FIG. 7d3].

7) The contours of the image are identified on the color mask [FIG. 8] and the position information of each one is saved.

8) An estimate of the speed is calculated from the positions of the contours found in the current frame and the positions of those same contours in a previous frame. A speed in pixels per frame is obtained, and a pixel-to-meter ratio and the frame-to-second ratio are used to convert the speed to meters per second. With that speed and the real distance from the nozzles to the plants seen in the strip of the image being processed, the exact time at which the nozzles must be instructed to spray, should a weed then be detected, is determined.

9) The image is cropped into small squares that contain these contours [FIG. 9]. If a contour is very wide, it is split in two horizontally; if a contour is very tall, it is split in two vertically; and if a contour is very large, it is split into four squares. Squares of approximately the same size are thus obtained.

10) Each of these squares cropped from a part of the image is resized to a preferred size of 256 x 256 pixels, which is the size that the convolutional neural network works with internally [FIG. 10].

11) The 256 x 256 pixel image squares [FIG. 11a] are sent to the first layer, or input layer, of the previously trained convolutional neural network for analysis and categorization [FIG. 11b].

12) Each square is processed within the neural network, which can process several at a time. The processing is performed in parallel and is executed internally on the GPU rather than on the CPU, to achieve high performance in the arithmetic operations. Within the previously trained network, the image is taken, a forward pass is performed in parallel, and the result is sent to the output layer of the neural network.

13) The result of the last layer, or output layer, of the convolutional neural network is obtained. The output layer gives an average confidence value for each of the categories, the highest value being the category of plant species to which the image belongs [FIG. 12]. One of the categories of the neural network is the NON-plant-species category, which covers unknown plant species and/or elements not to be taken into account: soil, sky, etc. If the result is this category, it means that the analysis of this image square found no known or pre-trained category of plant species.

14) The agrochemical to be used is determined as a function of the numerical value of the identified plant species. The system contains a table with the possible plant species to be identified and their relationship with the agrochemical to be used according to diagnosis, where its application is appropriate.

15) The plant species detection and identification device now has a complete representation of the identified plant species and the agrochemical that must be applied to each of the plants appearing in the processed frame obtained from the main video stream [FIG. 13].

16) For each image processed by the neural network, which is associated with a nozzle by the area in which it is located, the order is sent according to the identified species, so that the spray lands exactly on the plant species to be treated. The mathematical calculation of the exact moment of actuation of the electromechanical order is made, taking into account the speed of the spraying vehicle and the distance from the camera to the ground. Depending on the area of the processed frame, only the electromechanical valve corresponding to the specific field of action is actuated. In this way, spraying of agrochemical product in places where it does not belong is avoided, and it is applied only on the previously identified plant. Depending on the agrochemical required, the valve corresponding to the specific agrochemical to be used is actuated. This makes it possible to manage multiple tanks of agrochemical product according to a specific need.
A field trial of the autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals was carried out in the town of Las Rosas, Province of Santa Fe, on a plot of 20 hectares planted with soybeans.
The herbicide selected to be applied was glyphosate (RoundUp) at approximately 1.4 liters per hectare on average.
A self-propelled "mosquito"-type sprayer (Pla, MAP II 3250 model) was used, composed of a 3250-liter tank, whose side booms carried a mounted line of TeeJet brand spray tips, commanded by solenoid valves connected directly to the computer that controls the application of the herbicide dose.
The height of the nozzles and sensors above the ground was 1 meter.
The propulsion speed of the sprayer was approximately 16 km/h throughout the application.
The application of the herbicide on the weeds of the soybean crop was carried out at 10 o'clock in the morning.
A total of 12 cameras with sensors was used, distributed uniformly along the wing of the 28-meter-long boom.
The plot chosen for the trial was cultivated with soybeans at 4 weeks post-emergence and had a low overall percentage of weeds but a high concentration of "patching" ("manchoneo"), that is, weeds randomly distributed in patches.
From the field trial carried out using the autonomous set of devices according to the present invention, it was possible to detect the presence of weeds in the crop with a certainty percentage that varied between 76% and 92%, where the level of success depended on the movements of the machinery within the crop and on the incidence of the lighting (light and shadow) on the detector sensor.
The percentage of savings obtained in the application of the herbicide, since it was applied only on the detected weeds, reached 86% of the product compared with what would have been a full-coverage application at 2 liters per hectare.
Claims (19)
- An autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals, said set CHARACTERIZED IN THAT it comprises: a chemical application device comprising at least one agrochemical container in fluid communication with a plurality of spray nozzles through a valve; a plurality of cameras arranged on the autonomous vehicle and focused on the crop, wherein each camera has an associated ultrasound sensor for measuring the height to the crop in real time, and wherein each camera is tilted forward at 45 degrees from the normal; a plant species detection and identification device connected to the cameras to receive video information from them; an electronic circuit in charge of managing the opening and closing of the valves of the agrochemical spray nozzles, connected to the detection device, which manages through said circuit the opening and closing of the nozzle valves; and wherein the set of devices is mounted on a transport vehicle.
- The autonomous set of claim 1, CHARACTERIZED IN THAT the transport vehicle is a self-propelled vehicle or a towed vehicle.
- The autonomous set of devices of claim 2, CHARACTERIZED IN THAT the self-propelled vehicle is a spraying vehicle with lateral booms arranged perpendicular to it (a "mosquito" sprayer).
- The autonomous set of devices of any of claims 1 to 3, CHARACTERIZED IN THAT the detection and identification device comprises a processor.
- The autonomous set of devices of claim 4, CHARACTERIZED IN THAT the processor comprises a software-based tool developed in the C++ language, a computer-vision framework, and a convolutional neural network framework.
- A method for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals with the autonomous set of devices of any of claims 1 to 5, said method CHARACTERIZED IN THAT it comprises the steps of: a) detecting and classifying plant species in an agricultural crop as the autonomous set of devices travels through the crop; b) analyzing the information obtained in a), determining the area and perimeter of the weeds; and c) spraying the appropriate dose of an agrochemical in the right place, taking into account the forward speed of the equipment.
- The method for the detection and identification of plant species in an agricultural crop of claim 6, CHARACTERIZED IN THAT step a) makes it possible to distinguish the crop from the weeds.
- The method for the detection and identification of plant species in an agricultural crop of claim 6 or 7, CHARACTERIZED IN THAT step a) makes it possible to identify the plant species in order to determine the agrochemical to be applied.
- The method for the detection and identification of plant species in an agricultural crop of any of claims 6 to 8, CHARACTERIZED IN THAT in step c) the agrochemical dose is sprayed by opening a solenoid valve.
- The method for the detection and identification of plant species in an agricultural crop of any of claims 6 to 9, CHARACTERIZED IN THAT the plant species correspond to the crop and the weeds.
- The method for the detection and identification of plant species in an agricultural crop of any of claims 6 to 9, CHARACTERIZED IN THAT the agrochemical is a herbicide, a foliar fertilizer, an insecticide, a fungicide, or a protective compound.
- The method for the detection and identification of plant species in an agricultural crop of any of claims 6 to 11, CHARACTERIZED IN THAT step b) also makes it possible to determine the state of the crop.
- The method for the detection and identification of plant species in an agricultural crop of claim 11 or 12, CHARACTERIZED IN THAT step c) further comprises selecting a specific herbicide from a set of herbicides for each weed identified in step a) relative to the crop.
- The method for the detection and identification of plant species in an agricultural crop of claim 11 or 12, CHARACTERIZED IN THAT step c) further comprises selecting a specific foliar fertilizer from a set of foliar fertilizers for the crop identified in step a) according to its state.
- The method for the detection and identification of plant species in an agricultural crop of claim 11 or 12, CHARACTERIZED IN THAT step c) further comprises selecting a specific insecticide from a set of insecticides for the crop identified in step a) according to its state of deterioration.
- The method for the detection and identification of plant species in an agricultural crop of claim 11 or 12, CHARACTERIZED IN THAT step c) further comprises selecting a specific fungicide from a set of fungicides for the crop identified in step a) according to its state of deterioration.
- The method for the detection and identification of plant species in an agricultural crop of claim 11 or 12, CHARACTERIZED IN THAT step c) further comprises selecting a specific protective compound from a set of protective compounds for the crop identified in step a) according to its state.
- The method for the detection and identification of plant species in an agricultural crop of claim 6, CHARACTERIZED IN THAT it comprises the steps of: a) obtaining a real-time video stream from a plurality of cameras positioned along the booms of the autonomous set of devices for the detection and identification of plant species in an agricultural crop for the selective application of agrochemicals; b) processing each of the frames obtained; c) converting the frame into a numerical matrix with the RGB (Red-Green-Blue) color representation of each pixel of the image; d) cropping the matrix to select the area of the frame to be processed; e) assigning an area of the image to the corresponding sprayers, so that if a weed is detected in that area, the opening command is sent to the corresponding sprayer; f) applying four filters to obtain a mask of the predominant colors of the plant species to be identified; g) identifying the contours of the image on the color mask, saving the position information of each one; h) estimating the travel speed from the positions of the contours found in the current frame and the positions of those same contours in a previous frame; i) obtaining a speed in pixels per frame and converting that speed to meters per second using a pixel-to-meter ratio and a frames-to-seconds ratio; j) cropping the image into small squares containing the contours; k) resizing each of the squares cropped from a part of the image to a preferred size; l) sending the image squares to the first layer (input layer) of the previously trained convolutional neural network for analysis and categorization; m) processing each square within the previously trained neural network, where the image is taken, a forward pass is performed in parallel, and the result is sent to the output layer of the neural network; n) obtaining the result of the last layer (output layer) of the convolutional neural network; ñ) determining the agrochemical to be used as a function of the numerical value of the identified plant species; o) correlating the complete representation of the identified plant species with the agrochemical to be applied to each of the plants appearing in the processed frame obtained from the main video stream; and p) sending the command, according to the species identified for each image processed by the neural network, to the sprayer associated with the area where the species is located, so that the spray lands exactly on the plant species to be treated.
- The method for the detection and identification of plant species in an agricultural crop of claim 18, CHARACTERIZED IN THAT in step f) foreign elements such as soil, dry plant residue, and stones are separated from the plant species, wherein: a first filter transforms the matrix into the YCbCr color format; a second filter subtracts two channels in the RGB format depending on the color to be filtered; a third filter is a bitwise logical AND between the results of the preceding first and second filters; and a fourth filter applies a Gaussian blur to the preceding result, converting the image to black and white and eliminating noise.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
ARP160100983A AR104234A1 (en) | 2016-04-12 | 2016-04-12 | AUTONOMOUS SET OF DEVICES AND METHOD FOR THE DETECTION AND IDENTIFICATION OF VEGETABLE SPECIES IN AN AGRICULTURAL CULTURE FOR THE APPLICATION OF AGROCHEMICALS IN A SELECTIVE FORM |
AR20160100983 | 2016-04-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017178666A1 true WO2017178666A1 (en) | 2017-10-19 |
Family
ID=59487587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/ES2016/070655 WO2017178666A1 (en) | 2016-04-12 | 2016-09-20 | Autonomous set of devices and method for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals |
Country Status (2)
Country | Link |
---|---|
AR (1) | AR104234A1 (en) |
WO (1) | WO2017178666A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1000540A1 (en) * | 1998-11-16 | 2000-05-17 | McLoughlin, Daniel | Image processing |
US6714662B1 (en) * | 2000-07-10 | 2004-03-30 | Case Corporation | Method and apparatus for determining the quality of an image of an agricultural field using a plurality of fuzzy logic input membership functions |
JP2004180554A (en) * | 2002-12-02 | 2004-07-02 | National Agriculture & Bio-Oriented Research Organization | Method and apparatus for selectively harvesting fruit vegetables |
US20150245565A1 (en) * | 2014-02-20 | 2015-09-03 | Bob Pilgrim | Device and Method for Applying Chemicals to Specific Locations on Plants |
2016
- 2016-04-12 AR ARP160100983A patent/AR104234A1/en unknown
- 2016-09-20 WO PCT/ES2016/070655 patent/WO2017178666A1/en active Application Filing
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019094266A1 (en) * | 2017-11-07 | 2019-05-16 | University Of Florida Research Foundation | Detection and management of target vegetation using machine vision |
US11468670B2 (en) | 2017-11-07 | 2022-10-11 | University Of Florida Research Foundation, Incorporated | Detection and management of target vegetation using machine vision |
US11514671B2 (en) | 2018-05-24 | 2022-11-29 | Blue River Technology Inc. | Semantic segmentation to identify and treat plants in a field and verify the plant treatments |
WO2019226869A1 (en) * | 2018-05-24 | 2019-11-28 | Blue River Technology Inc. | Semantic segmentation to identify and treat plants in a field and verify the plant treatments |
US10713484B2 (en) | 2018-05-24 | 2020-07-14 | Blue River Technology Inc. | Semantic segmentation to identify and treat plants in a field and verify the plant treatments |
CN108898059A (en) * | 2018-05-30 | 2018-11-27 | 上海应用技术大学 | Flowers recognition methods and its equipment |
US10748042B2 (en) | 2018-06-22 | 2020-08-18 | Cnh Industrial Canada, Ltd. | Measuring crop residue from imagery using a machine-learned convolutional neural network |
CN110070101A (en) * | 2019-03-12 | 2019-07-30 | 平安科技(深圳)有限公司 | Floristic recognition methods and device, storage medium, computer equipment |
RU2763438C2 (en) * | 2019-06-20 | 2021-12-29 | ФГБОУ ВО "Оренбургский государственный аграрный университет" | Stand for setting up contactless sensors |
US11823388B2 (en) | 2019-08-19 | 2023-11-21 | Blue River Technology Inc. | Plant group identification |
US11580718B2 (en) | 2019-08-19 | 2023-02-14 | Blue River Technology Inc. | Plant group identification |
CN111325240A (en) * | 2020-01-23 | 2020-06-23 | 杭州睿琪软件有限公司 | Weed-related computer-executable method and computer system |
US20210244010A1 (en) * | 2020-02-12 | 2021-08-12 | Martin Perry Heard | Ultrasound controlled spot sprayer for row crops |
WO2021176254A1 (en) | 2020-03-05 | 2021-09-10 | Plantium S.A. | System and method of detection and identification of crops and weeds |
EP4245135A1 (en) | 2022-03-16 | 2023-09-20 | Bayer AG | Conducting and documenting an application of plant protection agents |
WO2023174827A1 (en) | 2022-03-16 | 2023-09-21 | Bayer Aktiengesellschaft | Carrying out and documenting the application of crop protection products |
CN115349340A (en) * | 2022-09-19 | 2022-11-18 | 沈阳农业大学 | Artificial intelligence-based sorghum fertilization control method and system |
CN115349340B (en) * | 2022-09-19 | 2023-05-19 | 沈阳农业大学 | Sorghum fertilization control method and system based on artificial intelligence |
Also Published As
Publication number | Publication date |
---|---|
AR104234A1 (en) | 2017-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017178666A1 (en) | Autonomous set of devices and method for detecting and identifying plant species in an agricultural crop for the selective application of agrochemicals | |
US10614562B2 (en) | Inventory, growth, and risk prediction using image processing | |
Di Cicco et al. | Automatic model based dataset generation for fast and accurate crop and weeds detection | |
Kamilaris et al. | Deep learning in agriculture: A survey | |
Gao et al. | A spraying path planning algorithm based on colour-depth fusion segmentation in peach orchards | |
Ajayi et al. | Effect of varying training epochs of a faster region-based convolutional neural network on the accuracy of an automatic weed classification scheme | |
Grondin et al. | Tree detection and diameter estimation based on deep learning | |
Ahn et al. | An overview of perception methods for horticultural robots: From pollination to harvest | |
Zhang et al. | Feasibility assessment of tree-level flower intensity quantification from UAV RGB imagery: a triennial study in an apple orchard | |
Olsen | Improving the accuracy of weed species detection for robotic weed control in complex real-time environments | |
US11544920B2 (en) | Using empirical evidence to generate synthetic training data for plant detection | |
Bulanon et al. | Machine vision system for orchard management | |
Cruz Ulloa et al. | Trend technologies for robotic fertilization process in row crops | |
Valicharla | Weed recognition in agriculture: A mask R-CNN approach | |
Negrete | Artificial vision in mexican agriculture for identification of diseases, pests and invasive plants | |
Chen | Estimating plant phenotypic traits from RGB imagery | |
Garibaldi-Márquez et al. | Segmentation and Classification Networks for Corn/Weed Detection Under Excessive Field Variabilities | |
Nirunsin et al. | Size Estimation of Mango Using Mask-RCNN Object Detection and Stereo Camera for Agricultural Robotics | |
US20220383042A1 (en) | Generating labeled synthetic training data | |
Lim et al. | Classification and Detection of Obstacles for Rover Navigation | |
Han et al. | Deep learning-based path detection in citrus orchard | |
Charitha et al. | Detection of Weed Plants Using Image Processing and Deep Learning Techniques | |
Silvestre | Computer vision techniques for greenness identification and obstacle detection in maize fields | |
Mejia | Autonomous mobile robotics in open environments | |
Adaptable | Building an aerial–ground robotics system for precision farming |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16898531 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.02.2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16898531 Country of ref document: EP Kind code of ref document: A1 |