US12038969B2 - Predictive classification of insects - Google Patents
- Publication number
- US12038969B2 (application US16/859,397)
- Authority
- US
- United States
- Prior art keywords
- classification
- insect
- image
- images
- location
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/55—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0014—Image feed-back for automatic industrial control, e.g. robot with camera
Definitions
- insects may be classified as male or female and selectively sterilized before being released into the wild.
- Such programs may be implemented to minimize or eliminate insect-borne diseases and/or to manage insect populations in certain areas.
- classification and sterilization may be performed at one or more stages of insect development.
- a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
- One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- One general aspect includes a system, including: an imaging device configured to capture images of insects and a computing device in communication with the imaging device, and configured to at least: instruct the imaging device to capture an image that depicts at least a portion of the insect.
- the computing device is also configured to receive, from the imaging device, the image.
- the computing device is also configured to determine, using an industrial vision classifier: (i) a first classification of the image into at least one category based at least in part on features extracted from the image, and (ii) a first confidence measure corresponding to the first classification.
- the computing device is also configured to determine, using a machine learning classifier (i) a second classification of the image into the at least one category based at least in part on the image, and (ii) a second confidence measure corresponding to the second classification.
- the computing device is also configured to determine a third classification of the image based at least in part on the first confidence measure and the second confidence measure.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
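The claims leave the exact rule for deriving the third classification from the first and second confidence measures open. As a minimal, hypothetical sketch (the function name, the agreement-based combination rule, and the example confidences are all illustrative assumptions, not taken from the patent):

```python
def combine_classifications(label_a, conf_a, label_b, conf_b):
    """Derive a third classification from two classifier outputs.

    Illustrative rule: if the classifiers agree, keep the label and
    boost confidence (the chance that not both are simultaneously
    wrong); if they disagree, keep the more confident label with its
    confidence discounted by the other classifier's confidence.
    """
    if label_a == label_b:
        return label_a, 1.0 - (1.0 - conf_a) * (1.0 - conf_b)
    if conf_a >= conf_b:
        return label_a, conf_a * (1.0 - conf_b)
    return label_b, conf_b * (1.0 - conf_a)

# Example: industrial vision says "female" (0.8); the ML model agrees (0.9).
label, conf = combine_classifications("female", 0.8, "female", 0.9)
```

When the two classifiers agree, the combined confidence exceeds either one alone; when they disagree, the disagreement discounts the winner.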
- One general aspect includes a computer-implemented method, including: receiving a sequence of images that each depict at least a portion of an insect.
- the computer-implemented method also includes determining, using an industrial vision classifier, a first classification of the sequence of images into at least one category.
- the computer-implemented method also includes determining a first confidence measure corresponding to the first classification.
- the computer-implemented method also includes, in the event the first confidence measure falls below a threshold, determining, using a machine learning classifier, a second classification of the sequence of images into the at least one category.
- the computer-implemented method also includes determining a second confidence measure corresponding to the second classification.
- the computer-implemented method also includes generating classification information relating to the insect based on the second confidence measure.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
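The threshold-gated cascade in the method above — run the industrial vision classifier first, and invoke the machine learning classifier only when the first confidence falls below a threshold — can be pictured as follows. The threshold value, function names, and stub classifiers are illustrative assumptions:

```python
THRESHOLD = 0.95  # illustrative cutoff; the patent leaves the value open

def classify_with_fallback(images, industrial_classifier, ml_classifier,
                           threshold=THRESHOLD):
    """Run the fast industrial vision classifier first; fall back to
    the machine learning classifier only when the first confidence
    measure is below the threshold."""
    label, confidence = industrial_classifier(images)
    if confidence >= threshold:
        return {"label": label, "confidence": confidence, "source": "industrial"}
    label, confidence = ml_classifier(images)
    return {"label": label, "confidence": confidence, "source": "ml"}

# Stub classifiers standing in for the real systems:
industrial = lambda imgs: ("female", 0.7)   # low confidence -> falls through
ml = lambda imgs: ("female", 0.97)
result = classify_with_fallback(["frame1.png"], industrial, ml)
```

Using only one classifier for confident cases conserves computing resources, which matches the throughput rationale discussed later in the description.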
- One general aspect includes a system, including: an imaging device configured to capture images of insects, and a computing device in communication with the imaging device, and configured to at least instruct the imaging device to capture a sequence of images depicting at least a portion of an insect.
- the computing device is also configured to use a first predictive model to determine a first output corresponding to a first classification of a first image of the sequence of images, the first output including a confidence measure of the first classification.
- the computing device is also configured to generate classification information based at least in part on the first output.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- One general aspect includes one or more non-transitory computer-readable media including computer-executable instructions that, when executed by one or more computing systems, cause the one or more computing systems to receive a set of images depicting at least a portion of an insect.
- the instructions further cause the one or more computer systems to determine a first set of classifications for the set of images by at least inputting the set of images into a first predictive model, the first predictive model including a core deep neural network model.
- the instructions further cause the one or more computer systems to determine a first set of confidence measures for the first set of classifications.
- the instructions further cause the one or more computer systems to determine a second set of classifications for the set of images by at least inputting the first set of confidence measures into a second predictive model, the second predictive model including a recurrent neural network model.
- the instructions further cause the one or more computer systems to determine a second set of confidence measures for the second set of classifications.
- the instructions further cause the one or more computer systems to generate classification information based at least in part on the second set of confidence measures.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
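A hypothetical sketch of the data flow in this aspect: per-image confidence measures from the first (core deep neural network) model are folded, in sequence order, into a second, recurrent model. A real implementation would use a trained LSTM or GRU; the hand-written exponential-smoothing recurrence below stands in for it purely to show the sequence-level aggregation (all names and the smoothing constant are assumptions):

```python
def recurrent_aggregate(frame_confidences, alpha=0.5):
    """Fold a sequence of per-frame confidence measures into one
    sequence-level confidence, mimicking the CNN -> RNN data flow.
    A trained LSTM/GRU would replace this hand-written recurrence."""
    state = frame_confidences[0]
    history = [state]
    for c in frame_confidences[1:]:
        state = alpha * state + (1.0 - alpha) * c  # recurrent update
        history.append(state)
    return history[-1], history

# Six frames of a video: confidence rises as the insect comes into view.
per_frame = [0.4, 0.5, 0.6, 0.8, 0.9, 0.9]
final_conf, trace = recurrent_aggregate(per_frame)
```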
- One general aspect includes a computer-implemented method, including receiving a set of images depicting at least a portion of an insect.
- the computer-implemented method also includes determining a first classification for the set of images by at least inputting the set of images into a first predictive model, the first predictive model including a core deep neural network model.
- the computer-implemented method also includes determining a set of features corresponding to the first classification.
- the computer-implemented method also includes determining a second classification for the set of images by at least inputting the set of features into a second predictive model, the second predictive model including a recurrent neural network model.
- the computer-implemented method also includes determining a second confidence measure for the second classification.
- the computer-implemented method also includes generating classification information based at least in part on the second confidence measure.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- FIG. 1 illustrates a block diagram and a corresponding flowchart illustrating a process for classifying insects, according to at least one example.
- FIG. 2 illustrates an example system for separating insects based on a determined classification, according to at least one example.
- FIG. 3 illustrates an example flowchart illustrating a process for classifying insects, according to at least one example.
- FIG. 4 illustrates an example device including a classification module for classifying insects, according to at least one example.
- FIG. 5 illustrates an example diagram for implementing two predictive models for classifying insects, according to at least one example.
- FIG. 6 illustrates a flow chart depicting an example process for classifying insects, according to at least one example.
- FIG. 7 illustrates a flow chart depicting an example process for classifying insects, according to at least one example.
- FIG. 8 illustrates a flow chart depicting an example process for classifying insects, according to at least one example.
- FIG. 9 illustrates a flow chart depicting an example process for classifying insects, according to at least one example.
- FIG. 10 illustrates a flow chart depicting an example process for classifying insects, according to at least one example.
- FIG. 11 illustrates an example system for implementing techniques relating to classifying insects, according to at least one example.
- FIG. 12 illustrates example images of pairs of insects, according to at least one example.
- an insect sorting system includes an imaging device and a classification module.
- the imaging device is used to capture images of the insects.
- the classification module classifies these images into at least one category (e.g., sex categories, species categories, etc.). Based on this classification, the insect can be sorted, e.g., the insect can be loaded into one of two compartments such as a non-female compartment (e.g., male, intersex, or gynandromorph) or a female compartment. This approach can be repeated to sort additional insects into one of the two compartments.
- the classification module can classify the images and output instructions for sorting the insects in about real time.
- the insect sorting system includes an onramp from an insect holding area connected to a narrow imaging path disposed within a field of view of the imaging device.
- the imaging path is connected to the two compartments.
- the insect sorting system also includes device(s) for singulating the insects (e.g., blowers, vacuums, trap doors, etc.) along the imaging path and sortation devices (e.g., blowers, vacuums, trap doors, etc.) to move insects from the imaging path to one of the compartments.
- the imaging device captures one or more images, which are then processed by the classification module to predict whether the insect on the path is female, male, intersex, or gynandromorph.
- one or more sortation devices are used to sort the insect into the appropriate compartment.
- additional labels may be applied to the images such as, for example, number of mosquitoes or species of mosquito.
- the classification module can perform additional functionality beyond image classification.
- the additional functionality of the classification module can be used to validate a previous classification performed by the classification functionality of the classification module. This can be performed at some later time, e.g., as part of quality control measure. For example, after the insect from above has been loaded into a compartment or at some other later time, the classification module can process the one or more images to confirm the earlier prediction.
- the classification module can run on the same electronics as the imaging device (e.g., a phone, custom board, or other device) or it can run on a centralized set of servers that receive sequences of images from many imaging devices simultaneously and process the images at about the same time.
- the classification module includes both an industrial vision classifier and a machine learning classifier.
- the industrial vision classifier is used to perform a first classification of an image of an insect.
- the machine learning classifier, which may include one or more predictive models, is used to perform a second classification of the image.
- the second classification may be based on the first classification or may be performed without regard to the first classification.
- the industrial vision classifier performs the first classification at a first time and the machine learning classifier performs the second classification shortly thereafter such that the two classifications are used to make a single prediction.
- the industrial vision classifier performs the first classification, from which a prediction is made, and the machine learning classifier validates the prediction as a quality control measure. For those classifications that do not meet some pre-determined level of confidence after the first and second classifications, the images are provided to one or more people for manual classification.
- the first predictive model is a core deep neural network model and the second predictive model is a recurrent neural network model, with the two connected in series.
- a sequence of images of an insect can be input into the first predictive model, and its output (e.g., features of the image and/or confidence measures for predictions) can be input into the second predictive model, which outputs second confidence measures for the sequence of images.
- This approach for classifying the sequence of images using the two predictive models may result in improved precision as compared to a single predictive model or a single industrial vision classifier. This approach also provides increased recall because the earlier classifiers can be somewhat more lenient, which is balanced by a later check on a different feature set.
- the system utilizes three different approaches for classification that each home in on different types of features.
- the industrial vision classifier focuses on human-understandable morphological features (e.g., an antenna shape),
- the core deep neural network model focuses on machine-understandable, statistically discriminatory static features, and
- the recurrent neural network focuses on motion and changes through time.
- FIG. 1 illustrates a system 102 and a corresponding flowchart illustrating a process 100 for classifying insects, according to at least one example.
- the system 102 includes an imaging device 104 and a computing device 106 .
- the imaging device 104 , which may include any suitable combination of image sensors, lenses, computer hardware, and/or software, may be in network communication with the computing device 106 .
- the imaging device 104 in this example captures images of insects 116 moving through an environment 118 , such as a pathway, and outputs image data to the computing device 106 , which receives the image data and performs an industrial vision technique to determine a classification for an insect.
- the imaging device 104 is configured to capture images at any suitable rate.
- the imaging device 104 captures images at a rate of about 5-10 frames per second. In other examples, the imaging device 104 captures images at a slower rate (e.g., fewer than 5 frames per second) or at a faster rate (e.g., more than 10 frames per second).
- the computing device 106 is any suitable electronic device (e.g., personal computer, hand-held device, server computer, server cluster, virtual computer, etc.) configured to execute computer-executable instructions to perform operations such as those described herein.
- the computing device 106 can include a classification module, among other modules/components, that is configured to host an industrial vision classifier 110 and a machine learning classifier 112 , and includes the functionality to perform the processes described herein.
- the system 102 also includes a sortation system 113 .
- the sortation system 113 is a system for sorting insects based on a classification made by the computing device 106 based on images captured by the imaging device 104 .
- the imaging device 104 and the environment 118 are part of the sortation system 113 . In this manner, insects can be imaged, classified, and sorted all within the sortation system 113 .
- the components of the system 102 are connected via one or more communication links with the network 111 .
- the network 111 includes any suitable combination of wired, wireless, cellular, personal area, local area, enterprise, virtual, or other suitable network.
- the process 100 illustrated in FIG. 1 provides an overview of how the system 102 may be employed to classify insects 116 moving through the environment 118 .
- the computing device 106 instructs the imaging device 104 to capture a set of images 108 ( 1 )- 108 (N) of an insect 116 .
- the imaging device 104 also generates metadata associated with each of the images 108 such as a timestamp of when the image 108 was taken and a unique image identifier.
- the ellipsis between the image 108 ( 1 ) and the image 108 (N) is used to designate that any suitable number of images may be included in this image set. Ellipses are used similarly throughout the figures.
- the images 108 ( 1 -N) depict the insect 116 within the environment 118 .
- the insect 116 may be located within a pathway of an insect sortation system.
- the imaging device 104 captures the images 108 in a different manner, e.g., according to a fixed schedule (e.g., every five minutes), based on a trigger (e.g., after detecting movement in an insect population), or in any other suitable manner.
- the set of images 108 ( 1 -N) can also include a sequence of images (e.g., images taken of the same insect 116 at some fixed capture rate).
- the computing device 106 determines a first classification of the set of images 108 using the industrial vision classifier 110 .
- the industrial vision classifier includes a computer vision system that is configured to classify objects present in images by extracting a set of features of the object from the image and applying a set of rules upon those features.
- the first classification of the set of images 108 represents a determination by the industrial vision classifier 110 that the insect 116 depicted in the images 108 belongs to one of one or more categories, e.g., a female category or non-female category (e.g., male, intersex, or gynandromorph).
- the first classification also includes a confidence measure, e.g., a value between 0 and 1, that represents a likelihood that the classification is correct.
- the computing device 106 determines a second classification of the set of images 108 using the machine learning classifier 112 .
- the machine learning classifier 112 includes one or more predictive models operating in parallel or in series that are trained to classify images of insects into one or more categories (e.g., female, male, intersex, and/or gynandromorph).
- the second classification of the set of images 108 represents a determination by the machine learning classifier 112 that the insect 116 depicted in the images 108 belongs to one of one or more categories.
- the second classification also includes a confidence measure, e.g., a value between 0 and 1, that represents a likelihood that the second classification is correct.
- when the machine learning classifier 112 includes at least two predictive models, the confidence measures from these models may be combined to achieve higher confidence.
- the computing device 106 determines classification information based on the first and second classifications. In some examples, this includes combining the confidence measures from the two classifications to determine a composite classification. Thus, the classification information may include a composite confidence measure that is based on the confidence measures corresponding to the first classification and/or the second classification. In some examples, the classification information also includes other information output from the industrial vision classifier (e.g., features relied upon for the first classification) and/or the machine learning classifier (e.g., features relied upon for the second classification).
- the computing device 106 causes, based on the classification information, an action with respect to the insect to be performed. If the process 100 is performed to identify a category into which the insect 116 should be classified, the action can include causing the insect 116 to be physically moved into a container corresponding to the identified category. For example, as described in further detail with respect to FIG. 2 , a sortation device such as an air nozzle or a trap door can be actuated to physically perform the action, e.g., move the insect into the container.
- the action can include releasing the insect 116 from the container if the process 100 determines a different category.
- FIG. 2 illustrates a system 200 including an insect sortation system 201 (e.g., the sortation system 113 ) for separating insects based on a determined classification, according to at least one example.
- the insect sortation system 201 , which is illustrated in a simplified form, can be used to singulate, count, classify, and sort a population of insects.
- the population of insects may include male insects, female insects, intersex insects, and gynandromorph insects.
- the insect sortation system 201 includes any suitable combination of chambers, paths, doors, and any other mechanical or electrical means to singulate the population of insects in a manner that enables counting, classifying, and sorting (e.g., physically moving to an appropriate container based on the classifying).
- These components of the insect sortation system 201 may be actuated by instructions provided by the computing device 206 over the network 211 . In this manner, the computing device 206 may control the operation of the insect sortation system 201 .
- the insect sortation system 201 includes a holding chamber 236 for holding an insect population.
- the insect population in the holding chamber 236 may include those of various sexes, various species, various sizes, and have any other varying characteristic.
- the techniques described herein may be used to classify insects of the insect population based on any one, or a combination of more than one, of these characteristics.
- Insects 216 move in a route (identified by the directional arrows) from the holding chamber 236 to an onboarding ramp 228 , onto a singulation pathway 230 , and into one of a plurality of chambers 234 .
- the insects 216 are singulated along this route so that only a single-file line of insects may move down the singulation pathway 230 .
- one or more mechanical singulation devices 232 are provided to physically move the insects 216 along their route.
- such movement devices 232 may include blowers, vacuums, vibrators, agitators, conveyors, and the like that are configured to move the insects 216 along the route.
- the insect sortation system 201 also includes one or more imaging devices 204 ( 1 )- 204 (N) in communication with, and in some examples, under the control of a computing device 206 .
- the computing device 206 receives the images, e.g., as discrete images or as a continuous video feed having a defined frame rate, and uses the techniques described herein to classify the insect 216 ( 2 ). For example, this can include using the classification module to analyze the images to predict a category to which the insect 216 ( 2 ) should be assigned. Based on this classification, the insect 216 ( 2 ) will then be assigned to one of chambers 234 .
- the insect 216 ( 2 ) has moved further along the singulation pathway 230 .
- the computing device 206 has classified the insect 216 ( 2 ) based on images received from one or more of the imaging devices 204 . Based on this classification, the insect 216 ( 2 ) can be moved into one of the chambers 234 .
- the chamber 234 ( 2 ) may include all insects 216 identified as female or for which a classification could not be determined and the chamber 234 ( 1 ) may include all insects 216 identified as non-female.
- Such classification and sortation may be desirable for a SIT program that seeks to avoid releasing female insects.
- Other examples, however, may sort based on different characteristics, e.g., insect health, deformities, species, etc.
- the insect sortation system 201 also includes sortation devices 236 to cause the insects 216 to be loaded into one of the chambers 234 .
- sortation devices 236 can include blowers, sliding doors or floors, etc.
- the system 200 also includes a remote server 238 connected to the computing device 206 and/or the insect sortation system 201 via the network 211 .
- the remote server 238 is communicably connected to one or more user terminals 240 ( 1 )-(N).
- the user terminals 240 are used by users to validate classifications of images captured by the imaging device 204 .
- the user terminals 240 are operated by a third-party service or may be personal user terminals, by which users access the images for validation.
- the user terminals 240 are used to label training images for training the predictive models described herein.
- FIG. 3 illustrates an example flowchart illustrating a process 300 for classifying insects, according to at least one example.
- the primary portion 312 in this example includes automated classification functionality, while the optional portion 314 includes manual classification operations that may be performed if the primary portion 312 is unable to arrive at a classification having a confidence level above a predetermined threshold.
- Each of the blocks 304 - 310 uses the images captured by the imaging device at block 302 . Because each block 304 - 310 applies a different approach for classifying, the overall accuracy of a classification will likely increase. Depending on the implementation, a first set of the blocks 304 - 310 are performed for classification and a second set of the blocks 304 - 310 are performed for quality control, such as to validate the classification(s).
- the primary portion 312 is used to make a classification.
- This can include, for example, an imaging device (e.g., 104 ) capturing images at block 302 , an industrial vision classifier (e.g., 110 ) classifying the images at block 304 , and a machine learning classifier (e.g., 112 ) classifying the images at block 306 .
- Both the industrial vision classifier 110 and machine learning classifier 112 run in real-time from a sequence of images, e.g., a video captured by an imaging device.
- the two confidence scores are combined to create an overall classification that may be used for an immediate decision for those examples with a high confidence.
- the two classifiers 110 and 112 are built independently. As such, they are likely to make different mistakes.
- the industrial vision classifier 110 may include any suitable system that can classify objects visually by extracting a set of features of the object and then applying a set of rules upon those features.
- the industrial vision classifier 110 may include a feature detection algorithm that processes image pixels to determine whether there is a feature present at that pixel.
- Image features can include, for example, edges, corners, blobs, ridges and other similar features.
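As a toy illustration of this kind of rule-based pipeline — compute a per-pixel gradient, then apply a threshold rule to decide which pixels count as edge features — the sketch below operates on a small grayscale patch. The threshold value and function names are illustrative assumptions; a production industrial vision system would use an optimized vision library rather than hand-written loops:

```python
def edge_strength(img):
    """Crude horizontal/vertical gradient magnitude per pixel — a toy
    stand-in for the edge features an industrial vision system
    extracts before applying hand-written classification rules."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            out[y][x] = abs(gx) + abs(gy)
    return out

def count_edges(img, threshold=50):
    """Rule layer: a pixel is an 'edge feature' if its gradient
    magnitude exceeds a threshold (value illustrative)."""
    grad = edge_strength(img)
    return sum(v > threshold for row in grad for v in row)

# A 4x4 patch with a sharp vertical boundary between dark and bright.
patch = [[0, 0, 255, 255]] * 4
n_edges = count_edges(patch)
```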
- the machine learning classifier 112 may be any suitable system that uses a large number of example images to statistically learn how to classify an image.
- the machine learning classifier 112 may include a core deep neural network and/or a recurrent neural network.
- the more labeled images that are available and can be used to train the classifier 112 the more accurate such a system is likely to be.
- Because machine learning classifiers 112 operate in a fundamentally different manner from the industrial vision classifier 110 , they are less likely to have correlated errors. This allows the two systems to be used together to produce an overall system that is more accurate.
- different combinations of the blocks 302 - 306 can be repeated if any one confidence measure or a composite confidence measure does not meet or exceed some predefined threshold. For example, if a first confidence measure output by the industrial vision classifier at the block 304 meets or exceeds the threshold, the process 300 may end. If the first confidence measure does not meet the threshold, a second confidence measure may be output by the machine learning classifier at the block 306 . This second confidence measure can be combined with the first confidence measure to arrive at a composite confidence measure. If this second confidence measure alone or if the composite confidence measure does not meet the threshold, one or more blocks 308 or 310 of the optional portion 314 may be performed.
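The escalation flow described above can be pictured as follows. The threshold, the combination rule, and the stub classifiers are illustrative assumptions; the patent does not fix these values:

```python
def process_300(images, industrial, ml, threshold=0.95):
    """Decision flow of the primary portion: stop as soon as a
    confidence (single or composite) clears the threshold, otherwise
    escalate to human verification (threshold value illustrative)."""
    label, c1 = industrial(images)
    if c1 >= threshold:
        return label, c1, "industrial"
    label2, c2 = ml(images)
    composite = 1.0 - (1.0 - c1) * (1.0 - c2)  # illustrative combination
    if c2 >= threshold or composite >= threshold:
        best = label2 if c2 >= c1 else label
        return best, max(c2, composite), "combined"
    return label2, composite, "needs_human_verification"

# Neither classifier alone clears 0.95, but the composite does.
outcome = process_300(["frame.png"],
                      lambda i: ("female", 0.7),
                      lambda i: ("female", 0.9))
```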
- a classification may depend on output from both an industrial vision classifier and a machine learning classifier.
- a cascade of classifiers can be used in which the subject image needs to pass both classifiers independently.
- a Naive Bayesian Optimal Classifier is used to combine the classification scores and uncertainties such that the combined classifier can trade off information from the industrial vision classifier and the machine learning classifier based on their confidences. This approach may be desirable because the system only needs to combine a small number of classifiers (e.g., 2 or 3).
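For a two-class problem (e.g., female vs. non-female), one common way to realize such a combiner is to treat each classifier's confidence as an independent posterior and multiply odds ratios. The patent does not give the formula, so the sketch below is an assumption-laden illustration, not the claimed implementation:

```python
def naive_bayes_combine(confidences, prior=0.5):
    """Combine independent classifier confidences for the same class
    (e.g., 'female') by multiplying their odds ratios — a naive
    Bayes combiner for a two-class problem. `prior` is the assumed
    base rate of the class."""
    prior_odds = prior / (1.0 - prior)
    odds = prior_odds
    for p in confidences:
        p = min(max(p, 1e-9), 1.0 - 1e-9)  # guard against 0/1 extremes
        odds *= (p / (1.0 - p)) / prior_odds
    return odds / (1.0 + odds)

# Two classifiers at 0.8 each reinforce one another:
combined = naive_bayes_combine([0.8, 0.8])
```

Under the independence assumption, two agreeing classifiers at 0.8 yield a combined posterior above either alone, which is exactly the trade-off behavior the combiner is meant to provide.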
- block 302 and block 304 are used to classify and sort an insect.
- a classification may depend on output from just an industrial vision classifier, or on just a machine learning classifier. In some examples, using just one classifier may conserve computing resources and increase throughput as decisions about sorting can be made quicker than if two classifiers are running. If the classification depends only on the industrial vision classifier, the machine learning classifier and, in some examples, one or more blocks from the optional portion 314 can be used to validate the earlier classification and sortation.
- the optional portion 314 can include non-expert verification at 308 and expert verification at 310 .
- the optional portion 314 may be considered “optional” because it is typically performed after and, in some examples, only when the primary portion 312 is unable to determine a classification with a confidence level above a predetermined threshold. For example, if the system is sufficiently confident after performing the primary portion 312 , the optional portion 314 of the process 300 may be disregarded. If, however, the system is not sufficiently confident, additional verification may be obtained using the block 308 and/or the block 310 .
- the images are sent for human evaluation (e.g., non-expert verification at the block 308 and/or expert verification at the block 310 ).
- the images can be sent to the remote server 238 and made available to the user terminals 240 .
- the image can be sent to a panel of non-experts, e.g., over the network 211 to the remote server 238 and/or directly to a user terminal 240 .
- These non-experts may be sourced using a micro task platform such as Amazon Mechanical Turk® (or other set of organized or unorganized human users).
- the remote server 238 may be configured to host the micro task platform.
- workers are paid a small amount for each classification performed. Results from these systems are usually available within a short period of time relative to expert review because there is a large pool of workers, some of whom will always be available.
- each worker is presented with simplified instructions explaining how to differentiate the classes of objects present in the images.
- the workers are then shown examples of images and asked to categorize each image into one of the desired categories.
- Images with known classifications (e.g., labels) are also mixed in with the uncertain examples in a process known as salting. If the worker incorrectly identifies these salted images too often, then the worker is deemed to be of too low a quality to use and is prevented from doing further tasks.
- Each uncertain image is examined by multiple workers in this example. If all of the workers agree, then the image may be determined to have that label. If one or more of the workers disagree (or a threshold number of workers disagree), the image is sent to one or more experts to review, at the block 310 .
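The salting and agreement rules above can be sketched as follows (the function names, the 20% error-rate cutoff, and the unanimity rule as coded are illustrative assumptions):

```python
def worker_passes_salting(responses, salted_labels, max_error_rate=0.2):
    """Compare a worker's answers on the salted images (known labels);
    workers who miss too many salts are barred from further tasks."""
    wrong = sum(1 for image_id, truth in salted_labels.items()
                if responses.get(image_id) != truth)
    return wrong / len(salted_labels) <= max_error_rate

def resolve_votes(votes):
    """Unanimous workers yield a label; any disagreement escalates
    the image to expert review. Returns (label, needs_expert)."""
    if len(set(votes)) == 1:
        return votes[0], False
    return None, True
```

For example, `resolve_votes(["male", "male", "male"])` yields a final label, while any split vote routes the image to the expert step at block 310.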
- the expert(s) may definitively identify the classification for the image.
- the experts that perform the expert verification may include those that are sourced based on qualifications and/or may be employed by the same entity that operates the insect sortation system 200 .
- the experts may use any suitable means to classify the image (e.g., visual inspection including magnification of the images).
- FIG. 4 illustrates an example device 400 including a classification module 436 for classifying insects, according to at least one example.
- the device 400 includes any suitable combination of hardware, software, and/or firmware configured to implement the functionality described with reference to the classification module.
- the classification module 436 is configured to perform insect image classification, as described herein.
- the classification module 436 includes an image capture component 438 , the industrial vision classifier 410 , the machine learning classifier 412 , a singulation and sortation control component 444 , and a validation component 446 .
- the machine learning classifier 412 includes at least one of a core deep neural network model 440 and a recurrent neural network model 442
- the image capture component 438 is configured to control the function of the imaging device 104 . This may include instructing the imaging device 104 regarding when to capture images, how frequently, and the like.
- the image capture component 438 may store information (e.g., in the form of metadata) in association with the images. Such information can include timestamps, location data, build and version data, unique image identifiers, and the like.
- the industrial vision classifier 410 is configured to access the images captured by the image capture component 438 (e.g., from memory and/or streaming from the imaging device 104 ). As introduced herein, the industrial vision classifier 410 may be configured to use object detection techniques to classify images including insects.
- the object may be segmented from the background by subtracting an image taken by the imaging device shortly before the object was present.
- features can be extracted from the segmented object.
- these features may include, but are not limited to: size of the object, shape of the object, visual similarity to a known example, color of the object, texture of the object, the same type of features extracted from sub-regions of the object, and the same type of features extracted from successive images of the same object.
- the features may be combined together using a manually designed set of rules, or they can be combined using a decision tree (e.g., Bayesian or boosted).
- Decision trees are lightweight machine learning algorithms that require less data to train to their maximum potential than a full machine learning classifier but can often achieve better performance than hand-selected rules.
- the resulting trained decision tree may then be implemented as a sequence of if/then statements in any coding platform.
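For instance, a trained tree might compile down to something like the following (the feature names, thresholds, and function name are hypothetical placeholders, not splits from an actual trained tree):

```python
def classify_from_features(size_mm, antenna_bushiness, clasper_score):
    """A small decision tree rendered as a sequence of if/then
    statements. All feature names and thresholds are hypothetical."""
    if size_mm > 4.5:              # larger bodies suggest female
        return "female"
    if antenna_bushiness > 0.7:    # bushy antennae suggest male
        return "male"
    if clasper_score > 0.5:        # clasper-like structures suggest male
        return "male"
    return "female"
```

Rendering the tree as plain conditionals keeps the classifier fast and portable to any coding platform, as the text notes.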
- a mosquito walks in front of an imaging device (e.g., the imaging device 104 ) running at 5-10 frames per second. As described with respect to images 1200 (A)- 1200 (H) of FIG. 12 , for each frame, the system looks for the mosquito's body 1202 . If the body 1202 is found and the mosquito is too large, it is rejected, as females are larger than males.
- the claspers 1206 (C) and 1206 (D) of the males are duller and, in some examples, look like two distal structures as compared to the claspers 1206 (G) and 1206 (H) of the females. If the claspers are positively identified, the image is classified as male. If both antennae 1204 are found, then the image is also classified as male. For an insect to be classified as male in this example, all frames with a valid body 1202 found must be identified as male, and at least three images must be used before the mosquito reaches a specific point along the lane. If not enough images are acquired, the mosquito is pushed back with air in order to acquire more images.
- the industrial vision classifier 410 then outputs a classification and a confidence. In other examples, different techniques, requirements, or thresholds may be used. For example, rather than requiring all frames to have an identifiable body, a lower threshold may be used. Similarly, a different threshold number of images than three may be employed.
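The per-frame aggregation rule described above (every frame with a valid body must read male, a minimum number of frames is required, and otherwise the mosquito is pushed back for more images) could be sketched as follows; the function and label names are illustrative:

```python
def aggregate_frames(frame_results, min_frames=3):
    """frame_results: one entry per captured frame, either None
    (no valid body 1202 found) or a per-frame label 'male'/'female'.
    Returns 'male', 'reject', or 'acquire_more' (push back with air)."""
    valid = [r for r in frame_results if r is not None]
    if len(valid) < min_frames:
        return "acquire_more"
    if all(r == "male" for r in valid):
        return "male"
    return "reject"
```

As the text notes, the all-frames requirement and the three-image minimum are just one operating point; either threshold can be relaxed.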
- the machine learning classifier 412 is configured to access the images captured by the image capture component 438 (e.g., from memory and/or streaming from the imaging device 104 ). As introduced herein, the machine learning classifier 412 may be configured to utilize one or more machine learning models to classify the images of the insects.
- the core deep neural network model 440 is one of the machine learning models implemented by the machine learning classifier 412 .
- the core deep neural network model 440 is an object classification model for a single image.
- the core deep neural network model 440 uses the structure from a generic object classification model (e.g., Inception-v2) that processes a single still image.
- the core deep neural network model 440 takes as input a single still image and outputs a classification and a corresponding confidence measure and/or one or more features from the image that triggered the classification. This output from the core deep neural network model 440 can be used to classify the insect shown in the images, as described herein.
- before it may be used to classify insects, the machine learning classifier 412 must be trained. In order to train the core deep neural network model 440 of this example, a large number of training images are generated and labeled. These images are generated by having individual insects pass in front of an imaging device. As the insects pass in front, many images are taken and annotated as coming from the same insect. One of those images is labeled by humans and the label is copied to all other images in the sequence for training. This multiplies the number of training examples for the amount of labeling required.
- the core deep neural network model 440 is trained using stochastic gradient descent and backpropagation in this example.
- the training examples may be predominantly male because pupae may be mechanically sex-separated by size, removing most of the females before imaging of the adults would occur. This can bias the core deep neural network model 440 towards predicting the male class more regularly, which may not be desired. So, the loss for every example can be reweighted to be proportional to the inverse of the label frequency.
- when the core deep neural network model 440 is being trained, if the machine learning classifier 412 makes a mistake, a penalty proportional to the loss can be applied and, from this, that penalty can be back-propagated through the core deep neural network model 440 so that the core deep neural network model 440 changes its internal weights (i.e., its model/understanding) a little bit to be more likely to get that example correct in the future.
- the core deep neural network model 440 can get very good at correctly classifying the common examples but not so good at the rarer examples. To account for this, the loss for the rarer examples is changed so that these losses are penalized more and thus have more of an impact on the core deep neural network model 440 .
- the inverse frequency is used for this, e.g., if there are 1000 male images and 100 female images, then the loss weight for getting a male wrong is 1/1000 and the loss weight for getting a female wrong is 1/100.
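A minimal sketch of this inverse-frequency reweighting (variable and function names are illustrative):

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-example loss weights proportional to the inverse of each
    label's frequency, so rare classes carry more of the loss."""
    values, counts = np.unique(labels, return_counts=True)
    weight_of = {v: 1.0 / c for v, c in zip(values, counts)}
    return np.array([weight_of[label] for label in labels])

labels = ["male"] * 1000 + ["female"] * 100
weights = inverse_frequency_weights(labels)
# each male example weighs 1/1000; each female example weighs 1/100
```

With these weights, each class contributes the same total weight to the loss regardless of how many examples it has, counteracting the male-heavy skew described above.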
- the images that form the training data can be sent to a third-party service for labelling by human users. Once labelled, the images are received back and used as training data.
- the core deep neural network model 440 can be initially trained as described above using dedicated training data (e.g., images that have been labeled by human users).
- the core deep neural network model 440 , in some examples, is re-trained using data for the insects actually being classified. For example, images that have been validated as part of the optional portion 314 of the process 300 may be fed back into the core deep neural network model 440 as training data.
- the machine learning classifier 412 also includes the recurrent neural network model 442 (e.g., a long short-term memory (LSTM) network).
- the recurrent neural network model 442 may function on a sequence of inputs and contain some memory. It should be understood, however, that the techniques described herein may be implemented with any one of the recurrent neural network model 442 or the core deep neural network model 440 . Combining the recurrent neural network model 442 with the core deep neural network model 440 may enable the classification module 436 to classify images with higher confidence measures.
- the recurrent neural network model 442 is trained on sequences of images and can therefore accept sequences of images for insect classification.
- the training sequences may have been captured for training the core deep neural network 440 .
- Many training examples can be generated from a single sequence by using subsets of the sequence. Each subset keeps the same ordering of images; by removing some images, the recurrent neural network model 442 learns to be more robust to latency issues in production.
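One way to sketch this subset-based augmentation (the drop probability, counts, and names are assumptions):

```python
import random

def subsequence_examples(sequence, n_examples=10, drop_prob=0.3, seed=0):
    """Generate extra training sequences by randomly dropping frames
    while preserving the original ordering, which simulates frames
    lost to latency in production."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n_examples):
        kept = [frame for frame in sequence if rng.random() > drop_prob]
        if kept:                      # skip the rare empty draw
            examples.append(kept)
    return examples

frames = ["f0", "f1", "f2", "f3", "f4"]
augmented = subsequence_examples(frames)
```

Each augmented sequence is a strictly ordered subset of the original, so the model sees realistic gappy inputs without any relabeling effort.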
- a regularization term is added to the loss function equal to the sum of the uncertainty of the model at each step. This encourages the recurrent neural network model 442 to prefer a representation that allows it to make a determination more quickly.
- At least one of the core deep neural network model 440 or the recurrent neural network model 442 may be trained using an iterative bad label mining process. This process may operate on the property that a learning process will produce different versions of the predictive model in each run based on the images used for learning and on pure randomness. So, the predictive model can be trained many different times. Each time, a subset of the data can be selected and used to train the predictive model, and the model can be evaluated using the remaining data (e.g., the data that is left over after the subset is selected). Any examples that were incorrectly labeled are then re-examined by a human user and the label changed if appropriate. This approach can be iteratively repeated many times.
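A sketch of this bad-label mining loop, assuming a caller-supplied `train_fn` that returns a predictive model (all names, the round count, and the holdout fraction are illustrative):

```python
import random

def mine_suspect_labels(dataset, train_fn, n_rounds=10,
                        holdout_frac=0.3, seed=0):
    """dataset: list of (example, label) pairs. Each round trains on a
    random subset and counts how often each held-out example is
    predicted differently from its label; frequently flagged examples
    are candidates for human re-labeling."""
    rng = random.Random(seed)
    flags = [0] * len(dataset)
    for _ in range(n_rounds):
        order = list(range(len(dataset)))
        rng.shuffle(order)
        split = int(len(order) * (1 - holdout_frac))
        model = train_fn([dataset[i] for i in order[:split]])
        for i in order[split:]:
            example, label = dataset[i]
            if model(example) != label:
                flags[i] += 1
    return flags
```

Examples that accumulate flags across many rounds are the ones sent back to a human reviewer; iterating this loop progressively cleans the dataset as described above.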
- in each iteration, a different subset of the data can be selected and used to train the predictive model, and the remaining data (i.e., the data outside that subset) used to evaluate it.
- the resulting dataset will have a higher degree of accuracy and can be used to more tightly bound the error in the final model trained.
- the singulation and sortation component 444 is configured to control aspects of the system 200 relating to singulation and sortation.
- the singulation and sortation component 444 may control the operation of the singulation devices 232 and the sortation devices 236 .
- Such control can include determining when to operate any one of the devices 232 , 236 based on feedback from the imaging device 204 and/or other sensors present in the system 200 (e.g., proximity sensors, weight sensors, and the like configured to output information useable to determine a location of an insect within the system 200 ).
- image data from the imaging device 204 can be used to determine which of the singulation devices 232 or the sortation devices 236 to actuate.
- the validation component 446 is configured to use output from one or more other components of the classification module 436 to generate a composite classification for one or more images.
- the validation component 446 combines output from the industrial vision classifier 410 (e.g., features and/or confidence measures) and output from the machine learning classifier 412 (e.g., confidence measures) to determine a composite confidence measure for a particular image or sequence of images.
- the Naive Bayesian Optimal Classifier can be used to determine the likelihood distribution. This provides both a classification (e.g., maximum likelihood) and a confidence (e.g., spread of the distribution).
- the validation component 446 is also configured to validate one or more classifications determined by the industrial vision classifier 410 and/or the machine learning classifier 412 , in addition to validating the composite classification.
- the validation component 446 is configured to provide images to remote users for validation (e.g., to the remote server 238 ).
- FIG. 5 illustrates an example diagram 500 implementing two predictive models for classifying insects, according to at least one example.
- the core deep neural network model 540 can be configured to incorporate multiple images of the same individual insect in two ways.
- a first way is to combine the estimated confidence measures using a Bayesian approach assuming that each measurement is independent and combining the confidence measures directly. For example, this can be achieved using Recursive Bayesian Estimation with no process model. Such an approach is equivalent to the Bayesian Optimal Classifier described herein but with an undefined number of measurements feeding the final prediction. Using this approach, the more measurements (i.e., images) that are sampled, the more confident the model becomes.
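The first approach can be sketched as a recursive Bayesian update with no process model (the class count, probabilities, and names below are illustrative):

```python
import numpy as np

def recursive_bayes(prior, per_image_probs):
    """Fold each image's class probabilities into a running posterior,
    assuming independent measurements; with no process model the
    update is just multiply-and-renormalize at every step."""
    posterior = np.asarray(prior, dtype=float)
    trajectory = []
    for probs in per_image_probs:
        posterior = posterior * np.asarray(probs, dtype=float)
        posterior = posterior / posterior.sum()
        trajectory.append(posterior)
    return trajectory

# three mildly male-leaning measurements progressively sharpen the posterior
steps = recursive_bayes([0.5, 0.5], [[0.7, 0.3]] * 3)
```

This exhibits the stated property that confidence grows with the number of sampled images: each consistent measurement pushes the posterior further toward the agreed class.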
- a second way is described herein. This second approach includes feeding the output of the core deep neural network model 540 into the recurrent neural network model 542 .
- the output from the core deep neural network model 540 for a single image provides the input for a single step in the recurrent neural network model 542 .
- Sequences of images can be input into the recurrent neural network model 542 with their corresponding timestamps and outputs from the core deep neural network model 540 .
- the recurrent neural network model 542 is reset when a new insect is seen. The system determines that a new insect is seen by, for example, detecting that the prior insect left one edge of the image and an insect appeared at the other edge. If multiple cameras are used, once an insect is visible in a second camera, an insect detected by the first camera is determined to be a new insect.
- Output from the recurrent neural network model 542 is a classification prediction 550 including a corresponding confidence measure, e.g., a value between 0 and 1.
- the diagram 500 graphically depicts a classification path 552 being traversed at successive times using different images.
- the classification path 552 ( 1 ) represents the classification path 552 being traversed at a first time for a first image 508 ( 1 )
- the second classification path 552 ( 2 ) represents the classification path 552 being traversed at a second time for a second image 508 ( 2 )
- an N-th classification path represents the classification path 552 being traversed for the N-th time for an N-th image 508 (N).
- the images 508 ( 1 )- 508 (N) represent a sequence of images of the same insect. In some examples, the images 508 are not identical to each other.
- the first image 508 ( 1 ) of the insect is input into the core deep neural network model 540 .
- Output from the core deep neural network model 540 is used as input to the recurrent neural network model 542 .
- the recurrent neural network model 542 uses this input and/or the image 508 ( 1 ) to make a first prediction 550 ( 1 ) for the image 508 ( 1 ).
- the first prediction 550 ( 1 ) corresponds to a first predicted classification for the image 508 ( 1 ) and may be based on the classifications by the core deep neural network model 540 and the recurrent neural network model 542 .
- the second classification path 552 ( 2 ) includes the second image 508 ( 2 ) input into the core deep neural network model 540 .
- Output from the core deep neural network model 540 based on the second image 508 ( 2 ) is used as input to the recurrent neural network model 542 (e.g., at a different layer of the recurrent neural network model 542 than was used in the first classification path 552 ( 1 )).
- output from the recurrent neural network model 542 when it evaluated the first image 508 ( 1 ) is also used as input to the recurrent neural network model 542 in the second classification path 552 ( 2 ).
- Second prediction 550 ( 2 ) corresponds to a second predicted classification for the image 508 ( 2 ) and may be based on the classifications by the core deep neural network model 540 and the recurrent neural network model 542 . As shown with respect to the N-th classification path 552 (N), this same process can be repeated for other images 508 .
- the predictions 550 can be combined and/or compared in any suitable manner to determine whether the insect has been classified with a confidence level above a predetermined threshold and/or whether additional evaluation should be performed.
- FIGS. 6 - 10 illustrate example flow diagrams showing processes 600 , 700 , 800 , 900 , and 1000 , according to at least a few examples. These processes, and any other processes described herein (e.g., the process 100 ), are illustrated as logical flow diagrams, each operation of which represents a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof.
- the operations may represent computer-executable instructions stored on one or more non-transitory computer-readable storage media that, when executed by one or more processors, perform the recited operations.
- computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types.
- the order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
- any, or all of the processes described herein may be performed under the control of one or more computer systems configured with specific executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof.
- the code may be stored on a non-transitory computer readable storage medium, for example, in the form of a computer program including a plurality of instructions executable by one or more processors.
- FIG. 6 illustrates an example flow chart depicting the process 600 for classifying insects, according to at least one example.
- the process 600 is performed by the classification module 436 ( FIG. 4 ) executing in the computing device 106 ( FIG. 1 ).
- the process 600 in particular corresponds to using outputs from an industrial vision classifier and a machine learning classifier to classify an image.
- the process 600 begins at block 602 by the computing device 106 instructing an imaging device (e.g., the imaging device 104 ) to capture an image that depicts at least a portion of an insect.
- the image capture component 438 ( FIG. 4 ) executing in the computing device 106 instructs the imaging device (e.g., the imaging device 104 ) to capture the image.
- the insect may be located at a first location within an environment when the imaging device captures the image. For example, the insect may be located on the singulation pathway 230 ( FIG. 2 ).
- the process 600 includes the computing device 106 receiving the image.
- the image capture component 438 executing in the computing device 106 receives and/or accesses the image from a memory device configured to store images.
- receiving the image includes receiving a stream of images from the imaging device 104 (e.g., as a video stream).
- the process 600 includes the computing device 106 determining, using an industrial vision classifier (e.g., the industrial vision classifier 410 ( FIG. 4 )), a first classification of the image.
- using the industrial vision classifier includes inputting the image into the industrial vision classifier and receiving the first classification as an output from the industrial vision classifier.
- the first classification may be a classification of the image into at least one category based at least in part on features extracted from the image by the industrial vision classifier.
- the process 600 includes the computing device 106 determining, using the industrial vision classifier, a first confidence measure corresponding to the first classification.
- the industrial vision classifier 410 executing in the computing device performs the block 608 .
- the first confidence measure may indicate a likelihood that the first classification is correct.
- the process 600 also includes instructing movement of the insect to a second location based at least in part on the first classification and the first confidence measure.
- the insect can be moved from the first location (e.g., on the singulation pathway 230 ( FIG. 2 )) to the second location (e.g., within one of the chambers 234 ( FIG. 2 )).
- the process 600 includes the computing device 106 determining, using a machine learning classifier (e.g., the machine learning classifier 412 ( FIG. 4 )), a second classification of the image.
- using the machine learning classifier includes inputting the image and/or output from the industrial vision classifier into the machine learning classifier.
- the second classification may be a classification of the image into at least one category.
- the block 610 is performed after the insect is located at the second location, as described above.
- the machine learning classifier can be used to validate that the insect was correctly classified and sorted (e.g., moved to the second location).
- the process 600 includes the computing device 106 determining, using the machine learning classifier, a second confidence measure corresponding to the second classification.
- the second confidence measure may indicate a likelihood that the second classification is correct.
- the process 600 includes the computing device 106 determining a third classification of the image based at least in part on the first confidence measure and the second confidence measure. In some examples, this may be performed by the validation component 446 ( FIG. 4 ) executing in the computing device 106 . This may include validating the first classification and the second classification.
- the process 600 further includes the computing device 106 instructing movement of the insect from the second location to a third location based at least in part on the third classification. For example, if the insect was misclassified and loaded into the incorrect chamber 234 , the computing device 106 can generate an instruction to inform a user to remove the insect from the chamber 234 and/or empty the entire chamber 234 .
- the first location can be within an insect sortation device (e.g., the pathway 230 )
- the second location can be within an insect transportation device (e.g., the chamber 234 )
- the third location is outside of the insect transportation device (e.g., released from the chamber 234 ).
- the insect is located at the first location while the imaging device captures the image and while the third classification of the image is determined.
- the process 600 may further include the computing device 106 instructing movement of the insect to a second location based at least in part on the third classification.
- the movement from the pathway 230 (e.g., first location) into the chamber 234 (e.g., second location) may be based on the third classification
- instructing movement may include sending an instruction to a mechanical device to perform the movement.
- the instruction may activate a blower for blowing the insect into the second location or open a trap door for the insect to fall in to the second location.
- the image includes a sequence of images, e.g., a video
- the first confidence measure includes a plurality of first confidence measures corresponding to individual images of the sequence of images.
- the second confidence measure includes a plurality of second confidence measures corresponding to the individual images of the sequence of images.
- FIG. 7 illustrates an example flow chart depicting the process 700 for classifying insects, according to at least one example.
- the process 700 is performed by the classification module 436 ( FIG. 4 ) executing in the computing device 106 ( FIG. 1 ).
- the process 700 in particular corresponds to using a machine learning classifier to classify a sequence of images when an industrial vision classifier fails to confidently classify the images.
- the process 700 begins at block 702 by the computing device 106 receiving a sequence of images that each depict at least a portion of an insect.
- the image capture component 438 ( FIG. 4 ) executing in the computing device 106 may receive the sequence of images generally as discussed above with respect to block 604 .
- the process 700 includes the computing device 106 determining, using an industrial vision classifier, a first classification of the sequence of images into at least one category.
- the industrial vision classifier 410 ( FIG. 4 ) may determine the first classification generally as discussed above with respect to block 606 .
- the process 700 includes the computing device 106 determining a first confidence measure corresponding to the first classification.
- the industrial vision classifier 410 may determine the first confidence measure generally as discussed above with respect to block 608 .
- the process 700 includes the computing device 106 determining whether the first confidence measure exceeds a confidence threshold.
- the validation component 446 ( FIG. 4 ) executing in the computing device 106 may determine whether the first confidence measure exceeds the confidence threshold.
- the confidence threshold may be a fixed number (e.g., 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and any other suitable number not explicitly listed herein), selected based on heuristics, and/or determined in any other way (e.g., arbitrarily selected by a human user).
- the confidence threshold increases proportionally with respect to the number of images in the sequence. For example, if the sequence includes a single image, the threshold may be 80%. If the sequence includes two images, the threshold may be increased to 82%.
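That proportional threshold could be expressed as follows (the base, step, and cap values are illustrative assumptions drawn from the 80%/82% example):

```python
def confidence_threshold(n_images, base=0.80, step=0.02, cap=0.99):
    """Confidence threshold that grows with the number of images in
    the sequence, capped so it never reaches certainty."""
    return min(base + step * (n_images - 1), cap)
```

Raising the bar as more evidence accumulates makes the single-classifier shortcut harder to take when a long sequence still has not produced a clear answer.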
- if the first confidence measure meets or exceeds the confidence threshold, the process 700 proceeds to block 710 , at which the process 700 includes the computing device 106 generating classification information relating to the insect based on the first confidence measure.
- the validation component 446 executing in the computing device 106 may generate the classification information.
- the classification information, which may include a determined classification for the images, any corresponding confidence measure(s), and any other information relating to the classification, may be generated based solely on output from the industrial vision classifier.
- if the first confidence measure does not meet the confidence threshold, the process 700 proceeds to block 712 , at which the process 700 includes the computing device 106 determining, using a machine learning classifier, a second classification of the sequence of images into the at least one category.
- the machine learning classifier 412 may determine the second classification generally as discussed above with respect to block 610 .
- the process 700 proceeds to block 712 if the first confidence measure falls below the threshold.
- the process 700 includes the computing device 106 determining a second confidence measure corresponding to the second classification.
- the machine learning classifier 412 may determine the second confidence measure generally as discussed above with respect to block 612 .
- the second confidence measure is based on output from a first predictive model and/or a second predictive model, e.g., the core deep neural network model 440 and/or the recurrent neural network model 442 .
- the first predictive model outputs a first output that includes at least one of a first set of confidence measures corresponding to the sequence of images or a first set of features corresponding to the sequence of images.
- the second predictive model takes as input at least one of the first set of confidence measures or the first set of features, and outputs a second set of confidence measures corresponding to the sequence of images.
- determining the second confidence measure includes determining the second confidence measure based at least in part on the second set of confidence measures.
- the process 700 includes the computing device 106 generating classification information relating to the insect based on the second confidence measure.
- the validation component 446 executing in the computing device 106 may determine the classification information.
- generation of the classification information is also based at least in part on the first confidence measure.
- the classification information is associated with the insect and includes an instruction to either move the insect or refrain from moving the insect from a particular location.
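The validation step described above, in which classification information is generated from the first and second confidence measures, can be sketched as follows. This is a minimal illustration only: the function name, the min-combination rule, and the 0.9 threshold are assumptions, not details taken from the disclosure.

```python
def generate_classification_info(category, first_confidence,
                                 second_confidence, threshold=0.9):
    """Combine the first and second confidence measures into a
    classification record for the depicted insect."""
    # Conservative rule: act on the classification only when the weaker
    # of the two confidence measures clears the threshold.
    combined = min(first_confidence, second_confidence)
    return {
        "category": category,
        "first_confidence": first_confidence,
        "second_confidence": second_confidence,
        "actionable": combined >= threshold,
    }

info = generate_classification_info("female", 0.97, 0.93)
```

Other combination rules (averaging, or weighting each measure by its classifier's historical reliability) would fit the same interface.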
- the process 700 may further include providing the sequence of images to a plurality of user devices for classification of the sequence of images by a plurality of non-expert users associated with the plurality of user devices. This approach is described further with respect to block 308 ( FIG. 3 ) of the process 300 .
- the process 700 may further include providing the sequence of images to at least one user device for classification of the sequence of images by at least one expert user associated with the at least one user device. This approach is described further with respect to block 310 ( FIG. 3 ) of the process 300 .
- FIG. 8 illustrates an example flow chart depicting the process 800 for classifying insects, according to at least one example.
- the process 800 is performed by the classification module 436 ( FIG. 4 ) executing in the computing device 106 ( FIG. 1 ).
- the process 800 in particular corresponds to using a machine learning classifier to classify a sequence of images.
- the process 800 begins at block 802 by the computing device 106 instructing an imaging device to capture a sequence of images depicting at least a portion of an insect.
- the image capture component 438 ( FIG. 4 ) executing in the computing device 106 instructs the imaging device to capture the sequence of images generally as discussed above with respect to block 602 .
- the process 800 includes the computing device 106 using a first predictive model to determine a first output corresponding to a first classification of a first image of the sequence of images.
- the machine learning classifier 412 may determine the first output.
- the predictive model may be the core deep neural network model 440 .
- the predictive model may be the recurrent neural network model 442 .
- the first output includes a confidence measure of the first classification.
- the process 800 includes the computing device 106 generating classification information based at least in part on the first output.
- the validation component 446 ( FIG. 4 ) may generate the classification information generally as discussed above.
- the process 800 further includes the computing device 106 using a second predictive model to determine a second output corresponding to a second classification of the sequence of images based at least in part on the first output from the first predictive model.
- the output from the first predictive model may be input into the second predictive model.
- the second predictive model, in this example, may be the recurrent neural network model 442 ( FIG. 4 ).
- generating the classification information may further be based at least in part on the second output.
- the second predictive model is trained using multiple subsets of multiple sequences of labeled images.
- each sequence of the multiple sequences may depict a different insect.
- the sequence of images may include a set of chronological images of the insect.
- the process 800 further includes the computing device 106 using the first predictive model to determine a second output corresponding to a second classification of a second image of the sequence of images.
- the second output may include a second confidence measure of the second classification.
- the process 800 further includes the computing device 106 using a second predictive model to determine a set of third outputs corresponding to a third classification of the sequence of images based at least in part on the first output and the second output from the first predictive model.
- generating the classification information may be further based at least in part on the set of third outputs.
- the classification information identifies a category to which the first classification corresponds.
- the process 800 further includes the computing device 106 instructing the imaging device to capture an additional set of images when the confidence measure fails to meet a confidence threshold for the category, using the first predictive model to determine a second output corresponding to a second classification of one or more images of the additional set of images, the second output including an additional confidence measure of the second classification, and generating updated classification information based at least in part on the second output.
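The capture-and-reclassify loop described in this block can be sketched as a runnable loop. Here `capture_fn` and `classify_fn` are hypothetical stand-ins for the imaging device and the first predictive model, and the threshold and round limit are illustrative assumptions.

```python
def classify_with_recapture(capture_fn, classify_fn, threshold=0.9,
                            max_rounds=3):
    """Capture images, classify them, and request additional images while
    the confidence for the predicted category misses the threshold."""
    images = capture_fn()
    category, confidence = classify_fn(images)
    rounds = 1
    while confidence < threshold and rounds < max_rounds:
        images = images + capture_fn()  # capture an additional set of images
        category, confidence = classify_fn(images)  # reclassify with more data
        rounds += 1
    return {"category": category, "confidence": confidence, "rounds": rounds}

# toy stand-ins: confidence grows with the number of captured images
def fake_capture():
    return ["frame"]

def fake_classify(images):
    return ("female", min(1.0, 0.4 * len(images)))

result = classify_with_recapture(fake_capture, fake_classify)
```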
- FIG. 9 illustrates an example flow chart depicting the process 900 for classifying insects, according to at least one example.
- the process 900 is performed by the classification module 436 ( FIG. 4 ) executing in the computing device 106 ( FIG. 1 ).
- the process 900 in particular corresponds to using two predictive models to classify a set of images.
- the process 900 begins at block 902 by the computing device 106 receiving a set of images depicting at least a portion of an insect.
- the image capture component 438 ( FIG. 4 ) executing in the computing device 106 may receive the set of images, generally as discussed above with respect to block 602 .
- the process 900 includes the computing device 106 determining a first set of classifications for the set of images by at least inputting the set of images into a first predictive model.
- the machine learning classifier 412 ( FIG. 4 ) using the core deep neural network model 440 ( FIG. 4 ) may determine the first set of classifications.
- the first predictive model may be the core deep neural network model 440 .
- the first predictive model may be trained by at least the iterative relabeling procedure of steps (i) through (v) set out in the Description below.
- the process 900 includes the computing device 106 determining a first set of confidence measures for the first set of classifications.
- the machine learning classifier 412 may determine the first set of confidence measures by taking the images as inputs to the machine learning model (e.g., the core deep neural network model 440).
- the process 900 includes the computing device 106 determining a second set of classifications for the set of images by at least inputting the first set of confidence measures into a second predictive model.
- the machine learning classifier 412 using the recurrent neural network model 442 may determine the second set of classifications.
- the second predictive model at block 908 may be the recurrent neural network model 442 .
- each confidence measure of the first set of confidence measures is input into a different layer of the recurrent neural network.
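A minimal hand-rolled recurrence, unrolled so that each per-image confidence measure enters at its own step, loosely mirrors the description above. The linear cell and its weights are illustrative stand-ins for the recurrent neural network model 442, not the actual architecture.

```python
def recurrent_aggregate(confidences, w_in=0.6, w_state=0.4):
    """Fold a sequence of per-image confidence measures into a second set
    of confidence measures via a simple linear recurrence."""
    state = 0.0
    per_step = []
    for c in confidences:       # one recurrence step per image confidence
        state = w_in * c + w_state * state
        per_step.append(state)
    return per_step             # the second set of confidence measures

steps = recurrent_aggregate([1.0, 1.0, 1.0])
```

In a real recurrent model the scalar state would be a learned hidden vector, but the data flow (one input per step, state carried forward) is the same.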
- the process 900 includes the computing device 106 determining a second set of confidence measures for the second set of classifications.
- the machine learning classifier 412 may determine the second set of confidence measures, generally as described above.
- the process 900 includes the computing device 106 generating classification information based at least in part on the second set of confidence measures.
- the validation component 446 ( FIG. 4 ) executing in the computing device 106 may determine the classification information.
- the classification information may include a prediction that the insect depicted in the set of images is at least one of a male insect, a female insect, an intersex insect, or a gynandromorph insect.
- the classification information may include an instruction for moving the insect from a particular location or to refrain from moving the insect from the particular location.
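The move/refrain instruction might be derived from the prediction as sketched below. The policy of which categories are moved is purely an assumption for illustration (a sorting line might, for instance, move only one sex); the disclosure does not fix one here.

```python
def movement_instruction(prediction, movable=frozenset({"male"})):
    """Map a predicted category to an instruction to move the insect from
    a particular location or to refrain from moving it.

    `movable` is a hypothetical policy parameter, not from the patent."""
    if prediction in movable:
        return "move insect from location"
    return "refrain from moving insect from location"

instruction = movement_instruction("female")
```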
- FIG. 10 illustrates an example flow chart depicting the process 1000 for classifying insects, according to at least one example.
- the process 1000 is performed by the classification module 436 ( FIG. 4 ) executing in the computing device 106 ( FIG. 1 ).
- the process 1000 in particular corresponds to using two predictive models to classify a set of images.
- the process 1000 begins at block 1002 by the computing device 106 receiving a set of images depicting at least a portion of an insect.
- the image capture component 438 ( FIG. 4 ) may receive the set of images, generally as discussed above with respect to block 602 .
- the insect is an adult stage insect.
- the process 1000 includes the computing device 106 determining a first classification for the set of images by at least inputting the set of images into a first predictive model.
- the machine learning classifier 412 ( FIG. 4 ) using the core deep neural network model 440 ( FIG. 4 ) may determine the first classification.
- the first predictive model is the core deep neural network model 440 .
- the process 1000 includes the computing device 106 determining a set of features corresponding to the first classification.
- the machine learning classifier 412 may determine the set of features.
- the process 1000 includes the computing device 106 determining a second classification for the set of images by at least inputting the set of features into a second predictive model.
- the machine learning classifier 412 using the recurrent neural network model 442 may determine the second classification.
- the second predictive model is the recurrent neural network model 442 .
- the process 1000 includes the computing device 106 determining a second confidence measure for the second classification.
- the machine learning classifier 412 may determine the second confidence measure.
- the process 1000 includes the computing device 106 generating classification information based at least in part on the second confidence measure.
- the validation component 446 ( FIG. 4 ) executing in the computing device 106 may generate the classification information.
- the classification information includes a prediction that the insect depicted in the set of images is at least one of a male insect, a female insect, an intersex insect, a number of insects, a species of an insect, or a gynandromorph insect.
- the process 1000 further includes classifying, using an industrial vision classifier, the set of images prior to determining the first classification. This may be performed by the industrial vision classifier 410 ( FIG. 4 ).
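The resulting cascade, in which the industrial vision classifier runs first and the predictive models are consulted only when its confidence is low, can be sketched as follows; both classifier functions are illustrative placeholders, not the components 410 or 412 themselves.

```python
def cascade_classify(images, industrial_fn, ml_fn, threshold=0.9):
    """Run the fast industrial vision classifier first; fall back to the
    machine learning classifier when its confidence misses the threshold."""
    category, confidence = industrial_fn(images)
    if confidence >= threshold:
        return {"category": category, "confidence": confidence,
                "source": "industrial"}
    category, confidence = ml_fn(images)
    return {"category": category, "confidence": confidence, "source": "ml"}

# toy stand-ins: the industrial classifier is unsure, the ML model is not
result = cascade_classify(["img"],
                          industrial_fn=lambda images: ("male", 0.6),
                          ml_fn=lambda images: ("female", 0.95))
```

This ordering keeps the cheaper classifier on the hot path and reserves the slower predictive models for ambiguous cases.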
- FIG. 11 illustrates examples of components of a computer system 1100 , according to at least one example.
- the computer system 1100 may be a single computer such as a user computing device and/or can represent a distributed computing system such as one or more server computing devices.
- the computer system 1100 is an example of the computing device 106 .
- the computer system 1100 may include at least a processor 1102 , a memory 1104 , a storage device 1106 , input/output peripherals (I/O) 1108 , communication peripherals 1110 , and an interface bus 1112 .
- the interface bus 1112 is configured to communicate, transmit, and transfer data, controls, and commands among the various components of the computer system 1100 .
- the memory 1104 and the storage device 1106 include computer-readable storage media, such as random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), hard drives, CD-ROMs, optical storage devices, magnetic storage devices, electronic non-volatile computer storage, for example Flash® memory, and other tangible storage media.
- Any of such computer-readable storage media can be configured to store instructions or program codes embodying aspects of the disclosure.
- the memory 1104 and the storage device 1106 also include computer-readable signal media.
- a computer-readable signal medium includes a propagated data signal with computer-readable program code embodied therein. Such a propagated signal takes any of a variety of forms including, but not limited to, electromagnetic, optical, or any combination thereof.
- a computer-readable signal medium includes any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use in connection with the computer system 1100 .
- the memory 1104 includes an operating system, programs, and applications.
- the processor 1102 is configured to execute the stored instructions and includes, for example, a logical processing unit, a microprocessor, a digital signal processor, and other processors.
- the memory 1104 and/or the processor 1102 can be virtualized and can be hosted within another computing system of, for example, a cloud network or a data center.
- the I/O peripherals 1108 include user interfaces, such as a keyboard, screen (e.g., a touch screen), microphone, speaker, other input/output devices, and computing components, such as graphical processing units, serial ports, parallel ports, universal serial buses, and other input/output peripherals.
- the I/O peripherals 1108 are connected to the processor 1102 through any of the ports coupled to the interface bus 1112 .
- the communication peripherals 1110 are configured to facilitate communication between the computer system 1100 and other computing devices over a communications network and include, for example, a network interface controller, modem, wireless and wired interface cards, antenna, and other communication peripherals.
- a computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs.
- Suitable computing devices include multipurpose microprocessor-based computing systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
- Embodiments of the methods disclosed herein may be performed in the operation of such computing devices.
- the order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
- use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited.
- use of “based at least in part on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based at least in part on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Robotics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Analysis (AREA)
- Catching Or Destruction (AREA)
Abstract
Description
- (i) identifying a subset of labeled training images;
- (ii) using the subset of the labeled training images to train a version of the first predictive model;
- (iii) classifying remaining labeled training images into at least one category;
- (iv) identifying which labeled training images were misclassified; and
- (v) updating labeling of the misclassified labeled training images;
- iteratively repeating (i) through (v) for different subsets of labeled training images and different remaining labeled training images.
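Steps (i) through (v) can be sketched as a runnable loop. `train_fn` and the toy relabeling rule (adopting the model's prediction) are stand-ins used only to show the control flow, since the disclosure does not specify here how updated labels are chosen.

```python
def iterative_train(examples, train_fn, subsets):
    """examples: list of (image, label) pairs; subsets: one index set per
    iteration. Returns the final model and the relabeled examples."""
    examples = list(examples)
    model = None
    for subset in subsets:
        train_set = [examples[i] for i in subset]       # (i) pick a subset
        model = train_fn(train_set)                     # (ii) train on it
        for i, (image, label) in enumerate(examples):
            if i in subset:
                continue
            predicted = model(image)                    # (iii) classify the rest
            if predicted != label:                      # (iv) find mislabels
                examples[i] = (image, predicted)        # (v) update labeling
    return model, examples

# toy demo: "images" are numbers, the true label is their sign
def toy_train(train_set):
    return lambda x: "pos" if x > 0 else "neg"

model, cleaned = iterative_train(
    [(1, "pos"), (2, "neg"), (-1, "neg")], toy_train, subsets=[{0}])
```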
Claims (35)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/859,397 US12038969B2 (en) | 2019-05-03 | 2020-04-27 | Predictive classification of insects |
| US18/651,983 US20240281466A1 (en) | 2019-05-03 | 2024-05-01 | Insect sortation system including sortation devices for directing mosquitos to an outlet after passing a detector |
| US18/740,326 US20240330360A1 (en) | 2019-05-03 | 2024-06-11 | Generating insect classifications using predictive models based on sequences of images |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201962843080P | 2019-05-03 | 2019-05-03 | |
| US16/859,397 US12038969B2 (en) | 2019-05-03 | 2020-04-27 | Predictive classification of insects |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/651,983 Continuation US20240281466A1 (en) | 2019-05-03 | 2024-05-01 | Insect sortation system including sortation devices for directing mosquitos to an outlet after passing a detector |
| US18/740,326 Division US20240330360A1 (en) | 2019-05-03 | 2024-06-11 | Generating insect classifications using predictive models based on sequences of images |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200349668A1 US20200349668A1 (en) | 2020-11-05 |
| US12038969B2 true US12038969B2 (en) | 2024-07-16 |
Family
ID=70775495
Family Applications (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/859,397 Active 2041-04-03 US12038969B2 (en) | 2019-05-03 | 2020-04-27 | Predictive classification of insects |
| US18/651,983 Pending US20240281466A1 (en) | 2019-05-03 | 2024-05-01 | Insect sortation system including sortation devices for directing mosquitos to an outlet after passing a detector |
| US18/740,326 Pending US20240330360A1 (en) | 2019-05-03 | 2024-06-11 | Generating insect classifications using predictive models based on sequences of images |
Family Applications After (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/651,983 Pending US20240281466A1 (en) | 2019-05-03 | 2024-05-01 | Insect sortation system including sortation devices for directing mosquitos to an outlet after passing a detector |
| US18/740,326 Pending US20240330360A1 (en) | 2019-05-03 | 2024-06-11 | Generating insect classifications using predictive models based on sequences of images |
Country Status (4)
| Country | Link |
|---|---|
| US (3) | US12038969B2 (en) |
| EP (1) | EP3963476A1 (en) |
| SG (1) | SG11202112149QA (en) |
| WO (1) | WO2020226934A1 (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240037914A1 (en) * | 2020-12-03 | 2024-02-01 | Kansas State University Research Foundation | Machine learning method and computing device for art authentication |
| US12310348B2 (en) * | 2022-01-03 | 2025-05-27 | Feng Chia University | Intelligent Forcipomyia taiwana monitoring and management system |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110430751B (en) * | 2017-01-22 | 2022-07-08 | 塞纳科有限公司 | Apparatus and method for automatically treating and sorting insects for growth and release |
| WO2020226933A2 (en) | 2019-05-03 | 2020-11-12 | Verily Life Sciences Llc | Insect singulation and classification |
| US20220104474A1 (en) * | 2020-10-07 | 2022-04-07 | University Of South Florida | Smart mosquito trap for mosquito classification |
| CN112270681B (en) * | 2020-11-26 | 2022-11-15 | 华南农业大学 | A method and system for depth detection and counting of yellow board pests |
| CN112508012B (en) * | 2020-12-01 | 2022-09-06 | 浙大宁波理工学院 | Orchard pest intelligent positioning and identifying method suitable for small target sample |
| CN112580714B (en) * | 2020-12-15 | 2023-05-30 | 电子科技大学中山学院 | Article identification method for dynamically optimizing loss function in error-cause reinforcement mode |
| CN114463581A (en) * | 2022-01-24 | 2022-05-10 | 中国人民解放军军事科学院军事医学研究院 | A method and system for identifying mosquito species based on deep learning |
| JP7400035B1 (en) | 2022-07-27 | 2023-12-18 | 日本農薬株式会社 | Programs and pest inspection equipment for pest inspection |
| CN117252833A (en) * | 2023-09-21 | 2023-12-19 | 南通锐越信息科技有限公司 | An insect detection method, device, equipment and storage medium based on imagga |
Citations (33)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2300480A (en) | 1995-05-05 | 1996-11-06 | Truetzschler Gmbh & Co Kg | Detecting and separating coloured and metallic foreign matter from fibre material |
| US5594654A (en) | 1995-02-17 | 1997-01-14 | The United States Of America As Represented By The Secretary Of Agriculture | Beneficial insect counting and packaging device |
| US7496228B2 (en) | 2003-06-13 | 2009-02-24 | Landwehr Val R | Method and system for detecting and classifying objects in images, such as insects and other arthropods |
| US7849032B1 (en) | 2002-05-24 | 2010-12-07 | Oracle International Corporation | Intelligent sampling for neural network data mining models |
| US8025027B1 (en) | 2009-08-05 | 2011-09-27 | The United States Of America As Represented By The Secretary Of Agriculture | Automated insect separation system |
| US8269842B2 (en) | 2008-06-11 | 2012-09-18 | Nokia Corporation | Camera gestures for user interface control |
| US8478052B1 (en) | 2009-07-17 | 2013-07-02 | Google Inc. | Image classification |
| US20140289323A1 (en) * | 2011-10-14 | 2014-09-25 | Cyber Ai Entertainment Inc. | Knowledge-information-processing server system having image recognition system |
| US20150030255A1 (en) * | 2013-07-25 | 2015-01-29 | Canon Kabushiki Kaisha | Method and apparatus for classifying pixels in an input image and image processing system |
| CN104850836A (en) | 2015-05-15 | 2015-08-19 | 浙江大学 | Automatic insect image identification method based on depth convolutional neural network |
| US9633306B2 (en) | 2015-05-07 | 2017-04-25 | Siemens Healthcare Gmbh | Method and system for approximating deep neural networks for anatomical object detection |
| CN106733701A (en) | 2016-12-23 | 2017-05-31 | 郑州轻工业学院 | Potato multi-stage combination dysnusia detecting system and detection method |
| US9668699B2 (en) | 2013-10-17 | 2017-06-06 | Siemens Healthcare Gmbh | Method and system for anatomical object detection using marginal space deep neural networks |
| CN106997475A (en) | 2017-02-24 | 2017-08-01 | 中国科学院合肥物质科学研究院 | A kind of insect image-recognizing method based on parallel-convolution neutral net |
| US9730643B2 (en) | 2013-10-17 | 2017-08-15 | Siemens Healthcare Gmbh | Method and system for anatomical object detection using marginal space deep neural networks |
| US20170273290A1 (en) | 2016-03-22 | 2017-09-28 | Matthew Jay | Remote insect monitoring systems and methods |
| US20170273291A1 (en) | 2014-12-12 | 2017-09-28 | E-Tnd Co., Ltd. | Insect capturing device having imaging function for harmful insect information management |
| US9786270B2 (en) | 2015-07-09 | 2017-10-10 | Google Inc. | Generating acoustic models |
| US20170316281A1 (en) | 2016-04-28 | 2017-11-02 | Microsoft Technology Licensing, Llc | Neural network image classifier |
| WO2017201540A1 (en) | 2016-05-20 | 2017-11-23 | Techcyte, Inc. | Machine learning classification of particles or substances in digital microscopy images |
| US9830526B1 (en) | 2016-05-26 | 2017-11-28 | Adobe Systems Incorporated | Generating image features based on robust feature-learning |
| US20180084772A1 (en) * | 2016-09-23 | 2018-03-29 | Verily Life Sciences Llc | Specialized trap for ground truthing an insect recognition system |
| US20180114334A1 (en) * | 2016-10-24 | 2018-04-26 | International Business Machines Corporation | Edge-based adaptive machine learning for object recognition |
| US20180121764A1 (en) | 2016-10-28 | 2018-05-03 | Verily Life Sciences Llc | Predictive models for visually classifying insects |
| US10019654B1 (en) | 2017-06-28 | 2018-07-10 | Accenture Global Solutions Limited | Image object recognition |
| US20180206473A1 (en) | 2017-01-23 | 2018-07-26 | Verily Life Sciences Llc | Insect singulator system |
| WO2018134829A1 (en) | 2017-01-22 | 2018-07-26 | Senecio Ltd. | Method for sex sorting of mosquitoes and apparatus therefor |
| US20180279598A1 (en) | 2015-04-13 | 2018-10-04 | University Of Florida Research Foundation, Incorporated | Wireless smart mosquito and insect trap device, network and method of counting a population of the mosquitoes or insects |
| WO2019008591A2 (en) | 2017-07-06 | 2019-01-10 | Senecio Ltd. | Sex sorting of mosquitoes |
| US20190104719A1 (en) | 2017-10-11 | 2019-04-11 | Tony Guo | Insect vacuum and trap attachment systems |
| US10278368B1 (en) | 2016-10-05 | 2019-05-07 | Verily Life Sciences Llc | Automated flying insect separator |
| US20200219262A1 (en) * | 2019-01-03 | 2020-07-09 | The Regents Of The University Of California | Automated selection of an optimal image from a series of images |
| US20200349397A1 (en) | 2019-05-03 | 2020-11-05 | Verily Life Sciences Llc | Insect singulation and classification |
-
2020
- 2020-04-27 WO PCT/US2020/030128 patent/WO2020226934A1/en not_active Ceased
- 2020-04-27 EP EP20727012.5A patent/EP3963476A1/en active Pending
- 2020-04-27 SG SG11202112149QA patent/SG11202112149QA/en unknown
- 2020-04-27 US US16/859,397 patent/US12038969B2/en active Active
-
2024
- 2024-05-01 US US18/651,983 patent/US20240281466A1/en active Pending
- 2024-06-11 US US18/740,326 patent/US20240330360A1/en active Pending
Patent Citations (37)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5594654A (en) | 1995-02-17 | 1997-01-14 | The United States Of America As Represented By The Secretary Of Agriculture | Beneficial insect counting and packaging device |
| GB2300480A (en) | 1995-05-05 | 1996-11-06 | Truetzschler Gmbh & Co Kg | Detecting and separating coloured and metallic foreign matter from fibre material |
| US7849032B1 (en) | 2002-05-24 | 2010-12-07 | Oracle International Corporation | Intelligent sampling for neural network data mining models |
| US7496228B2 (en) | 2003-06-13 | 2009-02-24 | Landwehr Val R | Method and system for detecting and classifying objects in images, such as insects and other arthropods |
| US8269842B2 (en) | 2008-06-11 | 2012-09-18 | Nokia Corporation | Camera gestures for user interface control |
| US8478052B1 (en) | 2009-07-17 | 2013-07-02 | Google Inc. | Image classification |
| US8025027B1 (en) | 2009-08-05 | 2011-09-27 | The United States Of America As Represented By The Secretary Of Agriculture | Automated insect separation system |
| US20140289323A1 (en) * | 2011-10-14 | 2014-09-25 | Cyber Ai Entertainment Inc. | Knowledge-information-processing server system having image recognition system |
| US20150030255A1 (en) * | 2013-07-25 | 2015-01-29 | Canon Kabushiki Kaisha | Method and apparatus for classifying pixels in an input image and image processing system |
| US9668699B2 (en) | 2013-10-17 | 2017-06-06 | Siemens Healthcare Gmbh | Method and system for anatomical object detection using marginal space deep neural networks |
| US9730643B2 (en) | 2013-10-17 | 2017-08-15 | Siemens Healthcare Gmbh | Method and system for anatomical object detection using marginal space deep neural networks |
| US20170273291A1 (en) | 2014-12-12 | 2017-09-28 | E-Tnd Co., Ltd. | Insect capturing device having imaging function for harmful insect information management |
| US20180279598A1 (en) | 2015-04-13 | 2018-10-04 | University Of Florida Research Foundation, Incorporated | Wireless smart mosquito and insect trap device, network and method of counting a population of the mosquitoes or insects |
| US9633306B2 (en) | 2015-05-07 | 2017-04-25 | Siemens Healthcare Gmbh | Method and system for approximating deep neural networks for anatomical object detection |
| CN104850836A (en) | 2015-05-15 | 2015-08-19 | 浙江大学 | Automatic insect image identification method based on depth convolutional neural network |
| US9786270B2 (en) | 2015-07-09 | 2017-10-10 | Google Inc. | Generating acoustic models |
| US20170273290A1 (en) | 2016-03-22 | 2017-09-28 | Matthew Jay | Remote insect monitoring systems and methods |
| US20170316281A1 (en) | 2016-04-28 | 2017-11-02 | Microsoft Technology Licensing, Llc | Neural network image classifier |
| US10007866B2 (en) | 2016-04-28 | 2018-06-26 | Microsoft Technology Licensing, Llc | Neural network image classifier |
| WO2017201540A1 (en) | 2016-05-20 | 2017-11-23 | Techcyte, Inc. | Machine learning classification of particles or substances in digital microscopy images |
| US9830526B1 (en) | 2016-05-26 | 2017-11-28 | Adobe Systems Incorporated | Generating image features based on robust feature-learning |
| US9990558B2 (en) | 2016-05-26 | 2018-06-05 | Adobe Systems Incorporated | Generating image features based on robust feature-learning |
| US20180084772A1 (en) * | 2016-09-23 | 2018-03-29 | Verily Life Sciences Llc | Specialized trap for ground truthing an insect recognition system |
| US10278368B1 (en) | 2016-10-05 | 2019-05-07 | Verily Life Sciences Llc | Automated flying insect separator |
| US20180114334A1 (en) * | 2016-10-24 | 2018-04-26 | International Business Machines Corporation | Edge-based adaptive machine learning for object recognition |
| US20180121764A1 (en) | 2016-10-28 | 2018-05-03 | Verily Life Sciences Llc | Predictive models for visually classifying insects |
| CN106733701A (en) | 2016-12-23 | 2017-05-31 | 郑州轻工业学院 | Potato multi-stage combination dysnusia detecting system and detection method |
| WO2018134829A1 (en) | 2017-01-22 | 2018-07-26 | Senecio Ltd. | Method for sex sorting of mosquitoes and apparatus therefor |
| US20180206473A1 (en) | 2017-01-23 | 2018-07-26 | Verily Life Sciences Llc | Insect singulator system |
| CN106997475A (en) | 2017-02-24 | 2017-08-01 | 中国科学院合肥物质科学研究院 | A kind of insect image-recognizing method based on parallel-convolution neutral net |
| US10019654B1 (en) | 2017-06-28 | 2018-07-10 | Accenture Global Solutions Limited | Image object recognition |
| WO2019008591A2 (en) | 2017-07-06 | 2019-01-10 | Senecio Ltd. | Sex sorting of mosquitoes |
| US20200281164A1 (en) * | 2017-07-06 | 2020-09-10 | Senecio Ltd. | Method and apparatus for sex sorting of mosquitoes |
| US20190104719A1 (en) | 2017-10-11 | 2019-04-11 | Tony Guo | Insect vacuum and trap attachment systems |
| US20200219262A1 (en) * | 2019-01-03 | 2020-07-09 | The Regents Of The University Of California | Automated selection of an optimal image from a series of images |
| US20200349397A1 (en) | 2019-05-03 | 2020-11-05 | Verily Life Sciences Llc | Insect singulation and classification |
| US11794214B2 (en) | 2019-05-03 | 2023-10-24 | Verily Life Sciences Llc | Insect singulation and classification |
Non-Patent Citations (22)
| Title |
|---|
| Application No. DOP2021-0186, Office Action, mailed Nov. 24, 2022, 8 pages. |
| AU2020268184, "Second Examination Report", Oct. 26, 2022, 4 pages. |
| Australian Application No. 2020268184, "First Examination Report", Apr. 1, 2022, 3 pages. |
| Cheng et al., "3D Tracking Targets Via Kinematic Model Weighted Particle Filter", 2016 IEEE International Conference on Multimedia and Expo (ICME), available online at https://www.researchgate.net/publication/307436501_3D_tracking_targets_via_kinematic_model_weighted_particle_filter, Jul. 11, 2016, pp. 1-6. |
| Chinese Application No. 202090000536.2, Office Action, mailed Mar. 16, 2022, 4 pages. |
| Ding, "Automatic moth detection from trap images for pest management", Computers and Electronics in Agriculture, vol. 123, Apr. 2016, pp. 17-28. (Year: 2016). * |
| EP Appl. No. 20726615.6, Office Action, Jul. 24, 2023, 3 pages. |
| EP Appl. No. 20727012.5, Office Action, Dec. 5, 2023, 10 pages. |
| International Application No. PCT/US2020/030127, "Invitation to Pay Additional Fees and, Where Applicable, Protest Fee", Sep. 4, 2020, 12 pages. |
| International Application No. PCT/US2020/030127, International Search Report and Written Opinion, mailed Nov. 20, 2020, 17 pages. |
| International Application No. PCT/US2020/030128, "Invitation to Pay Additional Fees and, Where Applicable, Protest Fee", Jul. 14, 2020, 18 pages. |
| International Application No. PCT/US2020/030128, International Search Report and Written Opinion, mailed Oct. 20, 2020, 24 pages. |
| Kumar et al., "Robust Insect Classification Applied to Real Time Greenhouse Infestation Monitoring", available online at https://www.semanticscholar.org/paper/Robust-Insect-Classification-Applied-to-Real-Time-Kumar-Martin/71f9c50ec4bdf66f5b6365fd158ce541ede4f2fd?p2df, Dec. 31, 2010, pp. 1-4. |
| Landwehr et al., "Logistic Model Trees", Machine Learning, Kluwer Academic Publishers-Plenum Publishers, 2005, pp. 161-205. |
| Larios et al., "Automated Insect Identification through Concatenated Histograms of Local Appearance Features: Feature Vector Generation and Region Detection for Deformable Objects", Machine Vision and Applications, vol. 19, No. 2, 2008, pp. 105-123. |
| Rustia et al., "A Real-time Multi-Class Insect Pest Identification Method Using Cascaded Convolutional Neural Networks", 9th International Symposium on Machinery and Mechatronics for Agriculture and Biosystems Engineering (ISMAB), May 28, 2018, pp. 1-6. |
| Singapore Appl. No. 11202112149Q, Written Opinion, May 15, 2024, 11 pages. |
| Singapore Application No. SG11202109762S, Written Opinion, mailed Apr. 21, 2022, 7 pages. |
| U.S. Appl. No. 16/859,405, Notice of Allowance, mailed Feb. 9, 2022, 5 pages. |
| U.S. Appl. No. 16/859,405, Non-Final Office Action, mailed Jul. 27, 2021, 18 pages. |
| U.S. Appl. No. 17/805,583, Non-Final Office Action, mailed Feb. 27, 2023, 18 pages. |
| U.S. Appl. No. 17/805,583, Notice of Allowance, Jun. 21, 2023, 9 pages. |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240037914A1 (en) * | 2020-12-03 | 2024-02-01 | Kansas State University Research Foundation | Machine learning method and computing device for art authentication |
| US12310348B2 (en) * | 2022-01-03 | 2025-05-27 | Feng Chia University | Intelligent Forcipomyia taiwana monitoring and management system |
Also Published As
| Publication number | Publication date |
|---|---|
| US20200349668A1 (en) | 2020-11-05 |
| US20240330360A1 (en) | 2024-10-03 |
| SG11202112149QA (en) | 2021-11-29 |
| WO2020226934A1 (en) | 2020-11-12 |
| US20240281466A1 (en) | 2024-08-22 |
| EP3963476A1 (en) | 2022-03-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20240330360A1 (en) | Generating insect classifications using predictive models based on sequences of images | |
| Bianco et al. | Combination of video change detection algorithms by genetic programming | |
| CN111868780B (en) | Learning data generation device and method, model generation system and program | |
| AU2023201944B2 (en) | Insect singulation and classification | |
| US11151423B2 (en) | Predictive models for visually classifying insects | |
| AU2018253501B2 (en) | Method and apparatus for automated delineation of structure shape for image guided treatment planning | |
| Deng et al. | Structure inference machines: Recurrent neural networks for analyzing relations in group activity recognition | |
| JP6941123B2 (en) | Cell annotation method and annotation system using adaptive additional learning | |
| EP3092619B1 (en) | Information processing apparatus and information processing method | |
| CN110349187A (en) | Method for tracking target, device and storage medium based on TSK Fuzzy Classifier | |
| CN105095908B (en) | Group behavior characteristic processing method and apparatus in video image | |
| US20160070972A1 (en) | System and method for determining a pet breed from an image | |
| Mulero-Pázmány et al. | Addressing significant challenges for animal detection in camera trap images: a novel deep learning-based approach | |
| CN112183336A (en) | Expression recognition model training method and device, terminal equipment and storage medium | |
| Ciampi et al. | A deep learning-based pipeline for whitefly pest abundance estimation on chromotropic sticky traps | |
| CN108710761A (en) | A robust model fitting method based on spectral clustering for outlier removal | |
| US20250318507A1 (en) | Smart insect control device via artificial intelligence in real time | |
| Johnsen et al. | SafetyCage: A misclassification detector for feed-forward neural networks | |
| CN119203036B (en) | A multi-agent credible collaborative target recognition method based on uncertainty quantification in an open environment | |
| CN103778642B (en) | Object tracking method and apparatus | |
| Wilfing et al. | Comparing Classification Methods for the Building Plan Components | |
| Jeon et al. | Improving Out-of-Distribution Detection in Echocardiographic View Classification through Enhancing Semantic Features | |
| Rhinelander et al. | Tracking a moving hypothesis for visual data with explicit switch detection | |
| CN118887446A (en) | Method and device for image classification based on image class spatial relationship model | |
| Patten et al. | Automatic Heliothis zea classification using image analysis |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
| AS | Assignment |
Owner name: VERILY LIFE SCIENCES LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DESNOYER, MARK;CRISWELL, VICTOR;LIVNI, JOSH;AND OTHERS;SIGNING DATES FROM 20200605 TO 20210520;REEL/FRAME:056331/0340 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| ZAAA | Notice of allowance and fees due |
Free format text: ORIGINAL CODE: NOA |
|
| ZAAB | Notice of allowance mailed |
Free format text: ORIGINAL CODE: MN/=. |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| AS | Assignment |
Owner name: VERILY LIFE SCIENCES LLC, TEXAS Free format text: CHANGE OF ADDRESS;ASSIGNOR:VERILY LIFE SCIENCES LLC;REEL/FRAME:069390/0656 Effective date: 20240924 |