EP4661663A1 - Détection d'une déficience fonctionnelle de pièges à insectes surveillés par caméra - Google Patents

Détection d'une déficience fonctionnelle de pièges à insectes surveillés par caméra

Info

Publication number
EP4661663A1
Authority
EP
European Patent Office
Prior art keywords
machine learning
learning model
insect trap
functional impairment
image recording
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP24702571.1A
Other languages
German (de)
English (en)
Inventor
Matthias Tempel
Fabian Christian BORN
Martijn Diederik POLMAN
Sven Meyer Zu Eissen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bayer AG
Original Assignee
Bayer AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bayer AG
Publication of EP4661663A1
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A01 - AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01M - CATCHING, TRAPPING OR SCARING OF ANIMALS; APPARATUS FOR THE DESTRUCTION OF NOXIOUS ANIMALS OR NOXIOUS PLANTS
    • A01M 1/00 - Stationary means for catching or killing insects
    • A01M 1/02 - Stationary means for catching or killing insects with devices or substances, e.g. food, pheromones, attracting the insects
    • A01M 1/026 - Stationary means for catching or killing insects combined with devices for monitoring insect presence, e.g. termites
    • A01M 1/04 - Attracting insects by using illumination or colours
    • A01M 1/10 - Catching insects by using Traps
    • A01M 1/14 - Catching by adhesive surfaces
    • A01M 1/20 - Poisoning, narcotising, or burning insects
    • A01M 1/2022 - Poisoning or narcotising insects by vaporising an insecticide
    • A01M 1/2027 - Poisoning or narcotising insects by vaporising an insecticide without heating
    • A01M 1/2044 - Holders or dispensers for liquid insecticide, e.g. using wicks

Definitions

  • the systems, methods and computer programs disclosed herein relate to the automated detection of functional impairments in camera-monitored insect traps using machine learning methods.
  • WO2018054767A1 discloses an insect trap that is visited by a user to check whether insects have entered the insect trap. Using a smartphone, the user creates an image of the insects in the trap. Using a computer program, the insects depicted in the image are automatically detected, counted and/or identified.
  • WO2020058175A1 and WO2020058170A1 disclose an insect trap equipped with a camera.
  • the camera automatically takes images of a collection area of the insect trap in which insects accumulate.
  • the images are transmitted via a transmitter unit to a separate computer system where they are examined by a user or analyzed using image recognition algorithms to determine the number of insects in the collection area and/or to identify the insects.
  • the insect traps disclosed in WO2020058175A1 and WO2020058170A1 have the advantage over the insect trap disclosed in WO2018054767A1 that the user does not have to visit the insect traps to check whether insects have entered them.
  • a camera-monitored insect trap may deteriorate over time.
  • a liquid used in the insect trap to immobilize insects evaporates over time. It is possible that the liquid level in the insect trap drops to a level where insects are no longer immobilized.
  • Another possible functional impairment may be contamination of the collection area and/or optical elements of the camera (e.g. a lens). Other possible functional impairments are listed further down in the description.
  • a functional impairment leads to the function intended to be performed by the insect trap no longer being performed or no longer being performed sufficiently.
  • the present disclosure describes means by which a functional impairment of a camera-monitored insect trap can be detected at an early stage.
  • a first subject of the present disclosure is a computer-implemented method for detecting a functional impairment in a camera-monitored insect trap.
  • the detection method comprises:
  • a further subject of the present disclosure is a computer system comprising: an input unit, a control and computing unit and an output unit, wherein the control and computing unit is configured to cause the input unit to receive an image recording, wherein the image recording shows at least part of a collection area of an insect trap, to feed the received image recording to a machine learning model, wherein the machine learning model is configured and has been trained on the basis of training data to distinguish image recordings of insect traps with a functional impairment from image recordings of insect traps without a functional impairment, to receive information from the machine learning model, wherein the information indicates whether the received image recording shows an insect trap with a functional impairment, and to cause the output unit to output a message, wherein the message indicates that the insect trap shown at least partially in the received image recording has a functional impairment if the information output by the machine learning model indicates that the received image recording shows an insect trap with a functional impairment.
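For illustration, this detection workflow could be sketched in Python as follows. The function and parameter names (detect_impairment, notify, threshold) are invented for the sketch and are not part of the disclosure; a real implementation would depend on the concrete model and trap hardware.

```python
from typing import Callable
import numpy as np

def detect_impairment(image: np.ndarray,
                      model: Callable[[np.ndarray], float],
                      notify: Callable[[str], None] = print,
                      threshold: float = 0.5) -> bool:
    """Feed an image recording of the collection area to a trained model and
    output a message if the model indicates a functional impairment."""
    probability = model(image)          # information received from the model
    impaired = probability >= threshold
    if impaired:
        notify(f"Insect trap shows a functional impairment (p = {probability:.2f}).")
    return impaired

# Usage with a dummy image and a dummy model that always reports p = 0.9:
if __name__ == "__main__":
    dummy_image = np.zeros((224, 224, 3), dtype=np.uint8)
    detect_impairment(dummy_image, model=lambda img: 0.9)
```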
  • Another subject of the present disclosure is a non-transitory computer-readable storage medium having stored thereon software instructions that, when executed by a processor of a computer system, cause the computer system to perform the following steps:
  • Another subject of the present disclosure is a system comprising:
  • control unit is configured to cause the camera to generate one or more images of a collection area of the insect trap
  • analysis unit is configured to feed the one or more generated images to a machine learning model, wherein the machine learning model is configured and has been trained on the basis of training data to distinguish images of insect traps with a functional impairment from images of insect traps without a functional impairment
  • the analysis unit is configured to receive information from the machine learning model, wherein the information indicates whether the one or more images show an insect trap with a functional impairment
  • the transmission unit is configured to transmit the information and/or the one or more images to a separate computer system
  • output unit is configured to output the information.
  • kits comprising an insect trap and a computer program product, wherein the insect trap comprises a camera or means for receiving a camera, wherein the computer program product comprises program instructions, wherein the program instructions can be loaded into a working memory of a computer system and cause the computer system to carry out the following steps:
  • Fig. 1 shows schematically and as an example the training of a machine learning model.
  • Fig. 2 schematically shows the use of a trained machine learning model to detect a functional impairment in an insect trap.
  • Fig. 3 schematically shows another example of training a machine learning model.
  • Fig. 4 schematically shows another example of using a trained machine learning model to detect a functional impairment in an insect trap.
  • Fig. 5 shows, by way of example and schematically, a computer-implemented method for training a machine learning model in the form of a flow chart.
  • Fig. 6 shows, by way of example and schematically, a computer-implemented method for detecting a functional impairment in a camera-monitored insect trap in the form of a flow chart.
  • Fig. 7 shows an exemplary and schematic illustration of an embodiment of a computer system of the present disclosure.
  • Fig. 8 shows an exemplary and schematic illustration of another embodiment of a computer system of the present disclosure.
  • the present disclosure describes means for detecting a malfunction in a camera-monitored insect trap early and automatically.
  • An insect trap is a device that is visited by insects randomly or deliberately and that allows a user to detect whether insects are present in an area.
  • the area can be, for example, a field or a greenhouse for growing crops, a building (e.g. a storage room for storing food and/or animal feed, a hospital, a nursing home, a school and/or the like), a room in a building (e.g. a living room, a bedroom, a canteen, a sick bay and/or the like) or another area.
  • the term "insect" includes all stages from the larva (e.g. caterpillar, grub) to the adult stage.
  • the term "insect trap" should not be understood to mean that only insects can get into the insect trap.
  • the term "insect trap" was chosen because such traps are mainly used to check whether insects are present in an area.
  • the insect trap can also be used to detect the presence of other arthropods (lat. Arthropoda), e.g. the presence of spiders. It is also conceivable that the insect trap is used to detect the presence of a defined species in an area and is accordingly prepared to attract and/or immobilize and/or detect this defined species.
  • Examples of such defined species are: codling moth, aphid, thrips, fruit pod moth, Colorado potato beetle, cherry fruit fly, cockchafer, European corn borer, plum moth, rhododendron cicada, seed moth, scale insect, gypsy moth, spider mite, grape moth, walnut fruit fly, whitefly, large rapeseed stem weevil, spotted cabbage stem weevil, rapeseed beetle, cabbage pod weevil, cabbage pod midge or rapeseed flea, or a forest pest such as aphid, blue pine jewel beetle, bark beetle, oak jewel beetle, oak processionary moth, oak tortrix, spruce web sawfly, common woodworm, large brown barkeater, pine bushhorn sawfly, pine owl, pine moth, small spruce sawfly, nun moth, horse chestnut leaf miner, gypsy moth,
  • the insect trap may comprise means for immobilising insects.
  • the insect trap may, for example, comprise a bowl filled with a liquid (e.g. water or an aqueous solution). It is conceivable that the liquid comprises a surfactant to reduce surface tension and/or comprises an agent against algae formation and/or comprises an attractant to attract insects. Insects that enter the liquid may, for example, drown in the liquid or be held by the liquid.
  • the insect trap may also comprise a surface provided with glue or another adhesive to which insects stick.
  • an insect trap does not have to comprise means for immobilising insects; it may be sufficient for the intended use of the insect trap that insects enter the collection area and remain there for a period of time.
  • the insect trap may include means for attracting insects. Some insects (e.g. rapeseed pests such as the large rapeseed stem weevil) are attracted by a yellow colour, for example. Some insects (e.g. male food moths) can be attracted by a pheromone. It is also possible to use food to attract the insects. Some insects are attracted by electromagnetic radiation of a defined wavelength range.
  • the insect trap may be equipped with a source of electromagnetic radiation that emits electromagnetic radiation of a defined wavelength range (or several wavelength ranges). However, the insect trap does not have to have any attractants; it may be sufficient for the intended use of the insect trap that insects accidentally stray into the trap or come into contact with it.
  • the insect trap comprises a collection area.
  • the collection area is an area that can be visited by insects (or other arthropods). This can be a flat surface of a board or card or the like. It can also be the bottom of a container. It is possible that the insect trap comprises several collection areas. It is also conceivable that the insect trap has different collection areas, for example a collection area for (specific) pests and another collection area for (specific) beneficial organisms.
  • the collection area preferably comprises a flat, smooth or structured surface with a round, oval, elliptical, angular (triangular, square, pentagonal, hexagonal or generally n-sided, with n as a whole number that is greater than or equal to three) cross-section.
  • the cross-section is preferably round or rectangular (in particular square).
  • Walls can extend upwards from the surface to form a container.
  • the container can be cylindrical, conical or box-shaped, for example. It preferably has a round or angular cross-section and the walls extend conically upwards from the base, with the base surface and wall surface preferably running at an angle of more than 90° and less than 120° to one another. In the case of an angular cross-section, the corners can be rounded.
  • the bottom of the container may have markings and/or a structure that allows automated focusing of the camera and/or that provides a reference, for example to determine the size of an insect.
  • the bottom of the container can have depressions, as described for example in WO2022243150A1, in order to achieve an isolation of insects in the collection area. Isolating facilitates the automated detection, counting and/or identification of insects in the collection area.
  • the collection area can be part of a pest trapping device, such as a yellow trap tray or an optionally glued color board.
  • the insect trap is a trap as described in WO2020058175A1, WO2020058170A1, WO2021213824A1 or WO2022243150A1.
  • the insect trap is a camera-monitored insect trap. This means that a camera is positioned and aligned in such a way that it can produce images of a collection area of the insect trap.
  • a camera is a device that can create images in digital form and store them and/or make them available via an interface.
  • a camera usually comprises an image sensor and optical elements. The image sensor can be, for example, a CCD (charge-coupled device) sensor or a CMOS (complementary metal-oxide-semiconductor) sensor.
  • the optical elements serve to create the sharpest possible image of the object from which a digital image is to be created on the image sensor.
  • the camera can, for example, be part of a smartphone or tablet computer.
  • the camera is used to generate digital images of the collection area or part thereof.
  • the generated images can be used (i) to detect whether one or more insects are present in the collection area (insect detection), (ii) to count insects in the collection area and/or (iii) to identify insects, i.e. to determine which insect (subclass, superorder, order, suborder, family, genus, species, stage and/or sex) it is.
  • a light source is required to illuminate the collection area so that light (electromagnetic radiation in the infrared, visible and/or ultraviolet range of the spectrum) is scattered/reflected from the illuminated collection area towards the camera.
  • Daylight can be used for this purpose.
  • However, a lighting unit can also be used that provides a defined illumination independent of daylight. This is preferably installed to the side of the camera so that the camera does not cast a shadow on the collection area.
  • The terms "light" and "illumination" should not be understood to mean that the spectral range is limited to visible light (approximately 380 nm to approximately 780 nm). It is also conceivable that electromagnetic radiation with a wavelength below 380 nm (ultraviolet light: 100 nm to 380 nm) or above 780 nm (infrared light: 780 nm to 1000 µm) is used for illumination.
  • the image sensor and the optical elements are usually adapted to the electromagnetic radiation used.
  • the camera-monitored insect trap comprises a control unit.
  • the control unit can be a component of the camera or a separate device.
  • the control unit is configured to cause the camera to produce one or more image recordings of a collection area or a part thereof.
  • the control unit can be a computer system as described later in the description.
  • the camera can be a component of such a computer system.
  • the one or more image recordings can be individual image recordings or sequences of image recordings (e.g. video recordings).
  • the control unit can be configured to cause the camera to take images at defined times (e.g. once a day at 12 noon) and/or repeatedly at defined time periods (e.g. every hour between 7 a.m. and 8 p.m.) and/or when a defined event occurs (e.g. after sunrise when a defined brightness is reached or as a result of an insect being detected by a sensor) and/or as a result of a command from a user.
  • the control unit can be part of the camera or a device independent of the camera that can communicate with the camera via a wired or wireless connection (e.g. Bluetooth).
  • the insect trap also includes a transmitting unit. Images and/or information can be transmitted to a separate computer system via the transmitting unit. Transmission preferably takes place via a radio network, for example via a mobile network.
  • the transmitting unit can be a component of the control unit or a unit independent of the control unit.
  • the insect trap may include a receiving unit to receive commands from a separate computer system.
  • the transmitting unit and/or the receiving unit can be components of a computer system as described later in the description.
  • the images generated by the camera are usually analyzed automatically to detect, count and/or identify the insects located in the collection area or part thereof.
  • This analysis can be carried out by an analysis unit that can be part of the insect trap; however, this analysis can also be carried out by an analysis unit that can be part of a separate computer system to which the images are transmitted by means of the transmission unit of the insect trap.
  • the analysis unit can be part of a computer system as described further down in the description.
  • the analysis unit can comprise a trained machine learning model that is configured and trained to detect, count and/or identify insects depicted in images. Details on the automated detection, counting and/or identification of insects in images are described in publications on this topic (see e.g. D. C. K. Amarathunga et al.).
  • the images generated by the camera can also be used to detect any malfunction of the camera-monitored insect trap.
  • a functional impairment describes a condition of the insect trap that affects one or more components of the insect trap or the insect trap as a whole in such a way that one or more functions are no longer performed or are no longer performed sufficiently or optimally. For example, a function may no longer be performed sufficiently or optimally if the impairment slows the function down, makes it more difficult, or makes its result inferior or faulty.
  • Functions typically to be performed by the insect trap are attracting insects and/or immobilizing insects and/or isolating insects in the collection area and/or illuminating the collection area with one or more light sources and/or generating images of the collection area and/or sending images and/or other/further information to a separate computer system and/or other/further functions.
  • a functional impairment can be an impairment that currently affects the function or will affect the function in the near future if no measures are taken to maintain the functions.
  • Liquid level has fallen below a lower threshold: If the collection area of the insect trap is designed as a container, a liquid (e.g. water or an aqueous solution) in the container can be used to immobilize insects. If the amount of liquid in the container falls below a threshold, insects may no longer be immobilized. This represents a functional impairment.
  • insects are only partially covered by liquid and the partial covering of the insects with liquid makes automated detection, counting and/or identification more difficult.
  • the contours of insects may not be clearly visible in an image due to the partial covering and/or unwanted reflections may occur at the transition points between liquid and insect.
  • It is also possible for the insect trap to be provided with a storage container from which liquid automatically flows into the collection area when the liquid level in the collection area has fallen below a defined threshold value (as described, for example, in WO2021213824A1). It is possible for the amount of liquid still contained in the storage container to be shown in an image recording. If the amount of liquid contained in the storage container has fallen below a defined residual amount, it is possible that in the near future no more liquid will be able to flow from the storage container into the collection area; a functional impairment is imminent.
  • Liquid level has risen above an upper threshold: It is possible that, for example, as a result of rainwater, the liquid level in the collection area has risen above an upper threshold. It is possible that liquid flows uncontrollably over a wall of the container that forms the collection area, removing insects floating on the liquid from the collection area.
  • Collection area is dirty: It is possible that contaminants have accumulated in the collection area, making it difficult to detect, count and/or identify insects. Contaminants can completely or partially cover insects or clump with them (form agglomerates). Contaminants can be leaves and/or other plant parts, dust, excretions from insects and/or other animals and/or the like.
  • Collection area is exhausted: It is possible that a large number of insects have already collected in the collection area, which are completely or partially overlapping and/or aggregating into clusters. This can impair the automated detection, counting and/or identification of the insects.
  • Algae formation in the collection area: It is possible for algae to form in a liquid in the collection area. The algae can make it difficult to automatically detect, count and/or identify insects in the collection area.
  • Foam in the collection area: If a collecting tray is filled with liquid, foam can form on the liquid over time. This foam can completely or partially block the view of insects in the collection area. It is also possible that insects are no longer immobilized by the liquid.
  • Ice formation or icing: If a trap is filled with liquid, the liquid can freeze at low temperatures. This means that insects may no longer be immobilized by the liquid.
  • Spider webs in the insect trap can mean that the camera no longer has a clear view of the collection area. Spiders sitting in front of a camera lens can also pose a problem, as can insect constructs (e.g. pupae of larvae) and plant parts (e.g. twigs, leaves, roots).
  • Camera and/or optical elements are dirty: Deposits on a camera lens can cause impairment. It is possible that as a result of a deposit, the camera's field of view is restricted and the entire original collection area is no longer recorded. It is conceivable that deposits lead to blurred or partially blurred images. It is possible that water (e.g. rainwater) and/or another liquid (e.g. a liquid from the collection area) gets onto a lens and restricts the field of view and/or leads to blurred images.
  • Illumination source(s) defective and/or dirty: If the insect trap is equipped with one or more illumination sources, it is possible that one or more of these illumination sources emits no or less electromagnetic radiation and/or that, due to contamination of one or more illumination sources, not enough electromagnetic radiation reaches and illuminates the collection area. The lack of or reduced illumination can lead to a loss of contrast and/or increased noise in the images, which in turn can make the automated detection, counting and/or identification of insects more difficult.
  • Unwanted reflections: It is possible that reflections can be observed in the images at certain times, caused for example by sunlight falling within a defined angle range. It is possible that sunlight enters the collection area at certain times of the day and/or year and causes unwanted reflections. It is possible that such unwanted reflections from sunlight were not observed when the insect trap was set up and/or only appeared later. It is possible that the insect trap was moved by wind and/or by an animal and/or precipitation from its original position and/or orientation to another position and/or orientation where the reflections occur.
  • Changes in position and/or location of components of the insect trap and/or the insect trap as a whole: It is conceivable that over time there will be a change in the position and/or location and/or orientation of components of the insect trap and/or the insect trap as a whole. Such changes can be the result of weather influences (e.g. precipitation, wind, sunlight), interactions with animals and/or people, and/or tremors in the earth (e.g. earthquakes, falling trees, vehicles driving past).
  • As a result of such a change in position or orientation, it is possible that insects accumulate in one or more places in the collection container.
  • Camera is defective: It is possible that the camera is defective and the images produced are not suitable for automated detection, counting and/or identification of insects in the collection area. It is possible, for example, that the images produced are noisy and/or have a color cast and/or have a low contrast range and/or are completely black or white.
  • Camera does not produce images of the collection area: It is possible that the camera produces images during maintenance of the insect trap and/or when assembling the insect trap that do not show the collection area of the insect trap, but for example other components of the insect trap and/or the surroundings of the insect trap. Such images may be unsuitable for automated detection, counting and/or identification of insects in the collection area of the insect trap. Such images can also be identified and, for example, sorted out using the means described in this description. Sorting out can mean that a sorted out image is not automatically analyzed to detect, count and/or identify insects in the collection area.
  • the functional impairments and/or their effects are captured in images generated by the camera in the camera-monitored insect trap.
  • the images are used to train a machine learning model to automatically detect such functional impairments.
  • the term "automatically” means without human intervention.
  • Such a “machine learning model” can be understood as a computer-implemented data processing architecture.
  • the model can receive input data and provide output data based on this input data and model parameters.
  • the model can learn a relationship between the input data and the output data through training. During training, model parameters can be adjusted to produce a desired output for a given input.
  • When training such a model, the model is presented with training data from which it can learn.
  • the trained machine learning model is the result of the training process.
  • the training data includes the correct output data (target data) that the model should generate based on the input data.
  • patterns are recognized that map the input data to the target data.
  • the input data of the training data is fed into the model and the model generates output data.
  • the output data is compared with the target data.
  • Model parameters are changed so that the deviations between the output data and the target data are reduced to a (defined) minimum.
  • An optimization method such as a gradient method can be used to modify the model parameters with a view to reducing the deviations.
  • the deviations can be quantified using a loss function.
  • a loss function can be used to calculate an error (loss) for a given pair of output data and target data.
  • the goal of the training process can be to change (adjust) the parameters of the machine learning model so that the error is reduced to a (defined) minimum for all pairs of the training data set.
  • If the output data and the target data are single numbers, the error function can be, for example, the absolute difference between those numbers.
  • a high absolute error may mean that one or more model parameters need to be changed significantly.
  • difference metrics between vectors such as the mean square error, a cosine distance, a norm of the difference vector such as a Euclidean distance, a Chebyshev distance, an Lp norm of a difference vector, a weighted norm, or another type of difference metric of two vectors can be chosen as the error function.
  • an element-wise difference metric can be used.
  • the output data can be transformed, e.g. into a one-dimensional vector, before calculating an error value.
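As an illustration of such error functions, the following sketch computes a mean squared error and a cosine distance between an output vector and a target vector; it is a generic example, not the specific loss used in the disclosure.

```python
import numpy as np

def mean_squared_error(output: np.ndarray, target: np.ndarray) -> float:
    """Element-wise difference metric: mean of the squared deviations."""
    return float(np.mean((output - target) ** 2))

def cosine_distance(output: np.ndarray, target: np.ndarray) -> float:
    """1 - cosine similarity of the (flattened) output and target vectors."""
    o, t = output.ravel(), target.ravel()   # transform to one-dimensional vectors
    return 1.0 - float(np.dot(o, t) / (np.linalg.norm(o) * np.linalg.norm(t)))

output = np.array([0.2, 0.7, 0.1])
target = np.array([0.0, 1.0, 0.0])
print(mean_squared_error(output, target))   # error to be reduced during training
print(cosine_distance(output, target))
```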
  • the machine learning model receives one or more image recordings as input data.
  • the model can be trained to output, for the one or more image recordings, information as to whether they show an insect trap with a functional impairment or an insect trap without a functional impairment.
  • in other words, the machine learning model can be trained to distinguish image recordings of insect traps with one or more functional impairments from image recordings of insect traps without functional impairments.
  • the machine learning model can be trained to assign the one or more image recordings to one of at least two classes, wherein at least a first class represents image recordings of insect traps that do not exhibit functional impairments and at least a second class represents image recordings of insect traps that exhibit functional impairments.
  • the machine learning model is trained on the basis of training data.
  • the training data includes a large number of images of one or more insect traps.
  • the term “multiplicity” means more than 10, preferably more than 100.
  • the images act as input data. Some of the images may show the one or more insect traps without any functional impairments, i.e. in a state in which they function properly. Another part of the images may show the one or more insect traps with a functional impairment.
  • the training data may also include target data.
  • the target data may indicate for each image whether the insect trap depicted in the image has a functional impairment or whether it has no functional impairment.
  • the target data may also include information about which functional impairment is present in the individual case and/or how severe it is and/or what its degree of severity is.
  • the machine learning model can be trained to assign each image to exactly one of two classes, where exactly one class represents images of insect traps that do not have any functional impairments and the other class represents images of insect traps that have one functional impairment or multiple functional impairments.
  • the machine learning model can be trained to perform binary classification. In such a case, it is sufficient that for each of the individual images in the training data there is information as to whether the image shows an insect trap with a functional impairment or whether the image shows an insect trap without any functional impairment. In such a case, the machine learning model can be trained to recognize insect traps with one (or more) functional impairments, regardless of which functional impairment(s) are involved.
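A minimal sketch of how training data for such a binary classification could be labelled; the file names and labels below are invented for illustration only.

```python
# Hypothetical training records for binary classification:
# 1 = image shows an insect trap with a functional impairment, 0 = without.
training_data = [
    ("trap_001_day_03.jpg", 0),   # trap functioning properly
    ("trap_001_day_17.jpg", 1),   # some impairment present (which one is irrelevant here)
    ("trap_002_day_05.jpg", 0),
    ("trap_002_day_21.jpg", 1),
]

images = [path for path, _ in training_data]     # input data
targets = [label for _, label in training_data]  # target data
```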
  • the machine learning model can be trained to generate similar compressed representations for images that do not show an insect trap with a functional impairment. If an image of an insect trap with a functional impairment is fed to the trained machine learning model, the trained machine learning model generates a compressed representation of the image of the insect trap with the functional impairment that can be distinguished from the compressed representations of the images of insect traps without a functional impairment.
  • Such training, in which the machine learning model is only trained to recognize whether or not there is a functional impairment, may be useful if a user is only interested in knowing whether the insect trap is working properly or whether intervention is required to eliminate a functional impairment (whatever it is).
  • the machine learning model can also be trained to recognize a specific functional impairment.
  • the specific functional impairment can be one of the functional impairments described earlier in this description. It is possible that a user is only interested in knowing whether the specific functional impairment is present (e.g. the liquid level is too low).
  • the machine learning model can be trained to assign each image capture to one of two classes, where one class represents images of insect traps that have the specific functional impairment and the other class represents images of insect traps that do not have the specific functional impairment, i.e. that either have no functional impairment at all or have a functional impairment other than the specific impairment.
  • the training data for each image capture includes information about whether or not the specific functional impairment is present in the insect trap depicted.
  • the machine learning model can also be trained to assign each image to one of more than two classes, where the classes may, for example, reflect the severity of the specific functional impairment.
  • a first class can represent images of insect traps in which the specific functional impairment does not occur (e.g. no contamination); a second class can represent images of insect traps in which the specific functional impairment occurs slightly (e.g. slight contamination); a third class can represent images of insect traps in which the specific functional impairment occurs clearly (e.g. significant contamination).
  • a slight functional impairment can mean that the insect trap is still sufficiently functional, but that maintenance is necessary in the future to avoid further functional impairment.
  • a significant or severe functional impairment can mean that immediate maintenance is required. More than the three levels mentioned are also conceivable, e.g. four (e.g.
  • the training data for each image capture includes information on whether the functional impairment is present in the imaged insect trap and, if so, how severe it is and/or with what severity it occurs.
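One possible, purely illustrative encoding of such severity levels for a specific impairment (here: contamination) as integer class labels; the three-level scheme below is only one example.

```python
from enum import IntEnum

class ContaminationSeverity(IntEnum):
    NONE = 0         # no contamination of the collection area
    SLIGHT = 1       # still sufficiently functional, maintenance needed soon
    SIGNIFICANT = 2  # immediate maintenance required

def maintenance_advice(severity: ContaminationSeverity) -> str:
    if severity is ContaminationSeverity.NONE:
        return "No action required."
    if severity is ContaminationSeverity.SLIGHT:
        return "Schedule maintenance to avoid further impairment."
    return "Immediate maintenance required."

print(maintenance_advice(ContaminationSeverity.SLIGHT))
```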
  • the machine learning model can also be trained to recognize more than one specific functional impairment, i.e. to distinguish different functional impairments from one another.
  • the machine learning model can be trained to recognize a number n of specific functional impairments, where n is an integer greater than 1.
  • the machine learning model can be trained to assign each image to one of at least n+1 classes, where a first class represents images of insect traps that do not exhibit a functional impairment, and each of the at least n remaining classes represents images of insect traps that show one of the n specific functional impairments. It is also possible for the machine learning model to be additionally trained to recognize two or more degrees of severity of one or more of the n specific functional impairments.
  • the machine learning model can be trained to recognize for one or more of the n specific functional impairments how severe it is and/or with what degree of severity it occurs.
  • the training data includes, for each image acquisition, information on whether a functional impairment is present, if a functional impairment is present, which specific functional impairment is present, and for one or more of the specific functional impairments, how severe it is and/or with what severity it occurs.
  • Camera-monitored insect traps can be used to generate the training data described in this description.
  • Camera-monitored insect traps can be operated for a period of time.
  • the images generated by the camera-monitored insect traps can be analyzed by one or more experts.
  • the one or more experts can provide each image with one of the pieces of information (annotations) required for training, which are then used as target data.
  • the one or more experts can view the images and provide each image with information as to whether the respective image shows an insect trap with no functional impairment or with a functional impairment. If necessary for training the machine learning model, each image showing an insect trap with a functional impairment can be provided with information as to how severe the functional impairment is and/or with what severity it occurs. If necessary for training the machine learning model, each image showing an insect trap with a functional impairment can be annotated with information about which specific functional impairment is present.
  • the machine learning model can be configured to assign each image recording to one of at least two classes.
  • the class assignment can be output by the machine learning model, for example, in the form of a number.
  • the number 0 can represent images of insect traps that show no functional impairment
  • the number 1 can represent images of insect traps that show a first specific functional impairment
  • the number 2 can represent images of insect traps that show a second specific functional impairment, etc.
  • the machine learning model can be configured to output a vector for each image recording, wherein the vector includes a number for each functional impairment at a coordinate of the vector, which number indicates whether the respective functional impairment is shown in the image recording (i.e. is present in the insect trap shown) or is not shown (i.e. is not present in the insect trap shown).
  • the number 0 can indicate that a specific functional impairment is not present and the number 1 can indicate that the specific functional impairment is present.
  • the location in the vector (coordinate) at which the respective number occurs can provide information about which specific functional impairment is involved in each case.
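A hedged sketch of how such an output vector could be decoded; the impairment names and their coordinate order are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical coordinate order of the output vector; one entry per impairment.
IMPAIRMENTS = ["liquid_level_low", "collection_area_dirty", "lens_dirty", "foam"]

def decode_output(vector: np.ndarray, threshold: float = 0.5) -> dict:
    """Map each coordinate of the model output to 'present' / 'not present'."""
    return {name: bool(value >= threshold)
            for name, value in zip(IMPAIRMENTS, vector)}

print(decode_output(np.array([0.1, 0.8, 0.0, 0.3])))
# e.g. {'liquid_level_low': False, 'collection_area_dirty': True, ...}
```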
  • the machine learning model is configured to specify a probability for one or more (specific) functional impairments that the (specific) functional impairment occurs in the respective insect trap depicted.
  • the probability can, for example, be specified as a value in the range from 0 to 1, with the probability being greater the larger the value.
  • the machine learning model is configured to output a severity level for one or more (specific) functional impairments with which the (specific) functional impairment occurs in the respective insect trap depicted.
  • the output (output data) produced by the machine learning model based on an input image can be compared with the target data.
  • deviations between the output data and the target data can be quantified.
  • the deviations can be reduced by modifying model parameters. If the deviations reach a (predefined) minimum or a plateau, training can be terminated.
  • the trained machine learning model can be used to detect one or more functional impairments and optionally their severity in an insect trap.
  • a new image of a collection area of an insect trap can be fed to the machine learning model.
  • the term “new” means that the corresponding image has not already been used to train the machine learning model.
  • the trained machine learning model assigns the new image to one of the at least two classes that were used to train the machine learning model.
  • the trained machine learning model outputs information about which class the machine learning model has assigned the image to. It is possible that the trained machine learning model outputs information about the probability that one or more functional impairments are present and/or how serious they are and/or with what degree of severity they occur.
  • the output of the machine learning model may be displayed on a screen, printed on a printer, stored in a repository, and/or transmitted to a separate computer system (e.g., over a network).
  • the output of the machine learning model can be used to automatically discard images of insect traps with one or more functional impairments.
  • a message can be issued to a user.
  • Such a message can inform the user that an insect trap has a functional impairment.
  • the message can inform the user that an image will not be analyzed to detect, count, and/or identify insects because the insect trap that is at least partially depicted in the image has a functional impairment.
  • a message to a user can include the following information: which insect trap is affected (e.g. a location of the insect trap can be specified), what functional impairment exists, how severe the functional impairment is, what measures can be taken to restore the full functionality of the insect trap, and when the measures should be taken to avoid further functional impairment. It is also possible that the image recording in which the trained machine learning model has detected a functional impairment is also displayed to the user so that the user can form his or her own picture of the functional impairment.
  • If the output of the trained machine learning model is a probability value for the presence of a functional impairment, this probability value can be compared to a predefined threshold value. If the probability value is greater than or equal to the threshold value, a notification can be issued to a user about the presence of a functional impairment in an insect trap. If the probability value is less than the threshold value, the image recording can be subjected to analysis to detect, count and/or identify insects in the collection area of the insect trap.
  • It is also possible that there is more than one threshold value with which the probability value is compared. For example, it is possible that there is an upper threshold value and a lower threshold value. If the probability value is below the lower threshold value, the probability of a functional impairment is so low that the user does not need to be informed.
  • the image recording can be submitted to an analysis to detect, count and/or identify insects in the collection area of the insect trap. If the probability value is above the upper threshold value, the probability of a functional impairment is so high that a message is issued to the user about the presence of a functional impairment. It is possible that the image recording is not submitted to an analysis to detect, count and/or identify insects.
  • If the probability value is in the range from the lower threshold to the upper threshold, there is a certain degree of uncertainty as to whether or not there is a functional impairment. This uncertainty may result from the fact that the image recording is of comparatively low quality. It is possible that a command is sent to the control unit of the insect trap to generate another image recording in order to also feed this additional image recording to the trained machine learning model for detecting a functional impairment. It is possible that when generating the additional image recording, parameters are changed in order to increase the quality of the image recording. For example, the exposure time can be increased and/or the lighting of the collection area can be increased by one or more lighting units and/or filters (color filters, polarization filters and/or the like) can be used.
  • the additional image recording can then provide clarity as to whether or not a functional impairment is present.
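A minimal sketch of the two-threshold decision logic described above; the threshold values and the returned action labels are illustrative assumptions.

```python
def decide(probability: float,
           lower: float = 0.2,
           upper: float = 0.8) -> str:
    """Decide how to handle an image based on the impairment probability."""
    if probability < lower:
        return "analyze"          # analyze image to detect/count/identify insects
    if probability >= upper:
        return "notify_user"      # issue a message about a functional impairment
    return "acquire_new_image"    # uncertain: request another image recording,
                                  # e.g. with longer exposure or extra lighting

for p in (0.05, 0.5, 0.95):
    print(p, "->", decide(p))
```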
  • It is also possible that a functional impairment is only just becoming apparent, i.e. that there is only a comparatively minor functional impairment (e.g. slight contamination).
  • In such a case, the control unit of the insect trap can be prompted by a command to reduce the time interval between two consecutive image recordings. Images are then taken at shorter intervals in order to detect a further impairment of function and/or an increase in the severity of the impairment at an early stage.
  • Threshold values can be set by an expert based on his or her experience. However, they can also be set by a user. It is possible for the user to decide for themselves whether they would like to be informed when there is a low probability of a functional impairment, or whether they would rather be informed when the probability of a functional impairment is comparatively high.
  • the machine learning model of the present disclosure may be or comprise an artificial neural network.
  • An "artificial neural network” comprises at least three layers of processing elements: a first layer with input neurons (nodes), a k-th layer with at least one output neuron (node), and k-2 inner layers, where k is a natural number and greater than 2.
  • the input neurons are used to receive the input representations. Typically, there is one input neuron for each pixel of an image that is input to the artificial neural network. There may be additional input neurons for additional input values (e.g. information about the image capture, the insect trap depicted, camera parameters, weather conditions when the image capture was generated, and/or the like).
  • the output neurons can be used to output information about which class the input image was assigned to and/or the probability with which it was assigned to the class.
  • the processing elements of the layers between the input neurons and the output neurons are connected in a predetermined pattern with predetermined connection weights.
  • the neural network can be trained using a backpropagation method, for example.
  • the aim is to map the input data to the target data as reliably as possible for the network.
  • the quality of the prediction is described by an error function.
  • the aim is to minimize the error function.
  • the backpropagation method is used to train an artificial neural network by changing the connection weights.
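For illustration, a minimal PyTorch training loop with backpropagation might look as follows; the tiny network, the dummy data and the hyperparameters are placeholders, not the architecture of the disclosure.

```python
import torch
import torch.nn as nn

# Tiny fully connected network for illustration (input: flattened 64x64 grayscale image).
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 2))
loss_fn = nn.CrossEntropyLoss()                  # error function to be minimized
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

images = torch.rand(8, 1, 64, 64)                # dummy batch of image recordings
targets = torch.randint(0, 2, (8,))              # 0 = no impairment, 1 = impairment

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()                              # backpropagation of the error
    optimizer.step()                             # adjust the connection weights
```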
  • After training, the connection weights between the processing elements contain information regarding the relationship between image recordings and functional impairments of the insect traps depicted in the image recordings; this information can be used to detect a functional impairment of an insect trap at an early stage.
  • new means that the new image was not already used to train the artificial neural network.
  • a cross-validation method can be used to split the data into training and validation sets.
  • the training set is used in backpropagation training of the network weights.
  • the validation set is used to check the prediction accuracy of the trained network when applied to unknown (new) data.
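Splitting annotated images into a training set and a validation set could, for example, be sketched with scikit-learn; the file names and labels below are invented.

```python
from sklearn.model_selection import train_test_split

image_paths = [f"trap_{i:03d}.jpg" for i in range(100)]   # annotated image recordings
labels = [i % 2 for i in range(100)]                       # 0/1 impairment labels

train_x, val_x, train_y, val_y = train_test_split(
    image_paths, labels, test_size=0.2, stratify=labels, random_state=42)

print(len(train_x), "training images,", len(val_x), "validation images")
```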
  • the artificial neural network can be a so-called convolutional neural network (CNN for short) or it can include one.
  • a convolutional neural network (“CNN”) is able to process input data in the form of a matrix. This makes it possible to use images presented as a matrix (e.g. width x height x color channels) as input data.
  • a neural network, e.g. in the form of a multi-layer perceptron (MLP), on the other hand, requires a vector as input; i.e. in order to use an image as input, the image elements (pixels) of the image would have to be rolled out one after the other in a long chain.
  • a CNN usually consists essentially of filters (convolutional layers) and aggregation layers (pooling layers), which alternate repeatedly, followed at the end by one or more layers of fully connected neurons (dense / fully connected layers).
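A small illustrative CNN of this kind (alternating convolution and pooling layers followed by fully connected layers) could be sketched in PyTorch as follows; the layer sizes and the assumed 224x224 input are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class TrapCNN(nn.Module):
    """Convolution + pooling blocks followed by fully connected layers."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 64), nn.ReLU(),   # assumes 224x224 input images
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = TrapCNN()(torch.rand(1, 3, 224, 224))   # width x height x colour channels
print(logits.shape)                              # torch.Size([1, 2])
```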
  • the scientific literature describes numerous architectures of artificial neural networks that are used to assign an image to a class (image classification). Examples are Xception (see e.g. F. Chollet), EfficientNet (see e.g. M. Tan et al.: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks, arXiv:1905.11946v5), DenseNet (see e.g. G. Wang et al.: Study on Image Classification Algorithm Based on Improved DenseNet, Journal of Physics: Conference Series, 2021, 1952, 022011) and Inception (see e.g. J. Bankar et al.).
  • the machine learning model of the present disclosure may have such or a similar architecture.
  • the machine learning model of the present disclosure may be or include a transformer.
  • a transformer is a model that can translate a sequence of characters into another sequence of characters, taking into account dependencies between distant characters. Such a model can be used, for example, to translate text from one language to another.
  • a transformer includes encoders connected in series and decoders connected in series. Transformers have already been used successfully to classify images (see, for example: A. Dosovitskiy et al.: An image is worth 16 x 16 Words: Transformers for image recognition at scale, arXiv:2010.11929v2; A. Khan et al. : Transformers in Vision: A Survey, arXiv:2101.01169v5).
  • the machine learning model of the present disclosure may have a hybrid architecture in which, for example, elements of a CNN are combined with elements of a transformer.
  • the machine learning model of the present disclosure can be initialized using standard methods (random initialization, He initialization, Xavier initialization, etc.). However, it can also be pre-trained on the basis of publicly available, already annotated images (see e.g. https://www.image-net.org). The training of the machine learning model can therefore be based on initialization or pre-training and can also include transfer learning, so that only parts of the weights/parameters of the machine learning model are retrained.
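Pre-training and transfer learning could, for example, be sketched as follows with torchvision: a backbone pre-trained on ImageNet is loaded, its weights are frozen, and only a new classification head is retrained. The choice of ResNet-18 is an assumption for this sketch, not a requirement of the disclosure.

```python
import torch.nn as nn
from torchvision import models

# Start from weights pre-trained on publicly available annotated images (ImageNet).
# Requires torchvision >= 0.13 for the weights enum.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():   # transfer learning: freeze the pre-trained weights
    param.requires_grad = False

# Retrain only a new classification head: impairment vs. no impairment.
model.fc = nn.Linear(model.fc.in_features, 2)
```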
  • the machine learning model may have an autoencoder architecture.
  • An “autoencoder” is an artificial neural network that can be used to learn efficient data encodings in an unsupervised learning process. In general, the task of an autoencoder is to learn a compressed representation for a dataset and thus extract essential features. This allows it to be used for dimensionality reduction by training the network to ignore "noise”.
  • An autoencoder includes an encoder, a decoder, and a layer between the encoder and the decoder that has a smaller dimension than the encoder's input layer and the decoder's output layer.
  • This layer forces the encoder to produce a compressed representation of the input data that minimizes noise and is sufficient for the decoder to reconstruct the input data.
  • the autoencoder can therefore be trained to generate a compressed representation of an input image.
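A minimal autoencoder sketch with an encoder, a low-dimensional bottleneck and a decoder that reconstructs the input; the dimensions are illustrative only.

```python
import torch
import torch.nn as nn

class TrapAutoencoder(nn.Module):
    """Encoder -> low-dimensional bottleneck -> decoder reconstructing the input."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),            # compressed representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Sigmoid(),
            nn.Unflatten(1, (1, 64, 64)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

x = torch.rand(4, 1, 64, 64)                       # grayscale images, illustrative size
reconstruction = TrapAutoencoder()(x)
loss = nn.functional.mse_loss(reconstruction, x)   # reconstruction error to minimize
```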
  • the autoencoder can be trained exclusively on the basis of images of insect traps that do not exhibit any functional impairment. However, the autoencoder can also be trained on the basis of images that show insect traps with and without functional impairment.
  • After training, the decoder can be discarded and the encoder can be used to generate a compressed representation for each input image. If a new insect trap is installed, a first image of the insect trap can be generated after installation.
  • This first image, which shows the insect trap without any functional impairment, can be used as a reference.
  • a compressed representation of the first image can be generated using the encoder of the trained autoencoder. This compressed representation is the reference representation.
  • During operation, compressed representations can be generated from images taken by the insect trap camera using the encoder. The more similar a compressed representation is to the reference representation, the less likely it is that there is a functional impairment. The more a compressed representation differs from the reference representation, the more likely it is that there is a functional impairment.
  • the similarity of representations can be quantified using a similarity or distance measure. Examples of such similarity or distance measures are cosine similarity, Manhattan distance, Euclidean distance, Minkowski distance, Lp norm, Chebyshev distance. If a distance measure exceeds or falls below a predefined threshold that can be set by an expert or specified by a user, a message can be issued that there is a functional impairment and/or the image can be discarded.
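Comparing the compressed representation of a new image with the reference representation could, for example, use a Euclidean distance and a threshold, as in the following sketch (the threshold value and vector size are arbitrary assumptions).

```python
import numpy as np

def is_impaired(representation: np.ndarray,
                reference: np.ndarray,
                max_distance: float = 5.0) -> bool:
    """Flag a functional impairment if the compressed representation of a new
    image deviates too much from the reference representation."""
    distance = float(np.linalg.norm(representation - reference))  # Euclidean distance
    return distance > max_distance

reference = np.zeros(32)                # representation of the first (reference) image
new_rep = np.full(32, 1.2)              # representation of a later image recording
print(is_impaired(new_rep, reference))  # True if the distance exceeds the threshold
```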
  • An example of an autoencoder architecture is the U-Net (see e.g. O. Ronneberger et al.: U-Net: Convolutional networks for biomedical image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241, Springer, 2015, https://doi.org/10.1007/978-3-319-24574-4_28).
  • Such an autoencoder can also have, in addition to an encoder and a decoder, a projection head that generates an output based on the compressed representation produced by the encoder, indicating whether the image shows an insect trap with a functional impairment or without a functional impairment.
  • the autoencoder is not only trained to generate a compressed representation of the input data and to reconstruct the input data based on the compressed representation, but the autoencoder is also trained to distinguish images of insect traps with functional impairment from images of insect traps without functional impairment (contrastive reconstruction).
  • the encoder can be used to generate a compressed reference representation for a first image of a newly installed insect trap. This compressed reference representation is compared with compressed representations of images taken during operation of the insect trap, and in the event of a defined deviation, a message is issued indicating that a functional impairment has occurred. It is also possible to use the encoder together with the projection operator (projection head) directly for classification.
  • Fig. 1 shows schematically and by way of example the training of a machine learning model.
  • the machine learning model is trained using training data TD.
  • the training data TD comprise a large number of images.
  • Each image I shows a collection area of an insect trap (not shown in Fig. 1).
  • the training data TD further comprise, for each image I, information A as to whether the insect trap depicted in the image I has a functional impairment or whether it has no functional impairment.
  • the information A comprises information as to which specific functional impairment is present in the individual case and/or how serious the functional impairment is and/or what degree of severity the functional impairment has.
  • Fig. 1 only shows one training data set comprising an image I with information A; however, the training data comprise a large number of such training data sets.
  • the image recording I represents input data for the machine learning model MLM.
  • the information A represents target data for the machine learning model MLM.
  • the image recording I is fed to the machine learning model MLM.
  • the machine learning model MLM assigns the image recording to one of at least two classes. The assignment is made on the basis of the image recording I and on the basis of model parameters MP.
  • the machine learning model MLM outputs information O that indicates which class the image recording was assigned to and/or the probability with which the image recording was assigned to one or more of the at least two classes.
  • the output information O is compared with the information A.
  • An error function LF is used to quantify the deviations between the information O (output) and the information A (target data).
  • An error value LV can be calculated for each pair of information A and information O.
  • the error value LV can be reduced in an optimization process (e.g. a gradient method) by modifying model parameters MP.
  • the goal of the training can be to reduce the error value for all image recordings to a predefined minimum. Once the predefined minimum is reached, the training can be terminated.
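  • a minimal sketch of such a supervised training loop, written in Python with PyTorch, is shown below; the small convolutional classifier, the image size, the dummy data and the hyperparameters are illustrative assumptions and not details taken from Fig. 1.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # illustrative stand-in for the training data TD: image recordings I (inputs)
    # and class assignments A (targets), e.g. 0 = no impairment, 1 = impairment
    images = torch.rand(64, 3, 128, 128)
    labels = torch.randint(0, 2, (64,))
    loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

    # small convolutional classifier standing in for the machine learning model MLM
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 2),  # two classes: without / with functional impairment
    )

    loss_fn = nn.CrossEntropyLoss()                            # error function LF
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # gradient method

    for epoch in range(5):
        for batch_images, batch_labels in loader:
            output = model(batch_images)           # information O (class scores)
            loss = loss_fn(output, batch_labels)   # error value LV (O vs. target data A)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                       # modify the model parameters MP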
  • Fig. 2 shows schematically the use of a trained machine learning model to detect a functional impairment in an insect trap.
  • the trained machine learning model MLM 1 may have been trained using a training method as described in relation to Fig. 1.
  • the trained machine learning model MLM 1 is fed a new image I*.
  • the new image I* shows a collection area of an insect trap.
  • the model assigns the new image I* to one of the at least two classes for which the trained machine learning model MLM 1 was trained.
  • the trained machine learning model MLM 1 outputs information O which indicates which class the image was assigned to and/or with what probability the image was assigned to one or more of the at least two classes.
  • the information O can be output to a user.
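  • applying the trained classifier to a new image recording I* can look as follows; this Python/PyTorch sketch reuses the illustrative network layout from the training sketch above, and in practice the trained weights of the model MLM 1 would be loaded from storage.

    import torch
    import torch.nn as nn

    # placeholder model with the same layout as in the training sketch
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
    )
    model.eval()

    new_image = torch.rand(1, 3, 128, 128)            # new image recording I*

    with torch.no_grad():
        scores = model(new_image)
        probabilities = torch.softmax(scores, dim=1)  # probability per class (information O)
        predicted_class = int(probabilities.argmax(dim=1))

    print(f"class: {predicted_class}, "
          f"P(no impairment) = {probabilities[0, 0].item():.2f}, "
          f"P(impairment) = {probabilities[0, 1].item():.2f}")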
  • Fig. 3 schematically shows another example of training a machine learning model.
  • the machine learning model has an autoencoder architecture.
  • the autoencoder AE comprises an encoder E and a decoder D.
  • the encoder E is configured to generate a compressed representation CR for an image recording I based on model parameters MP.
  • the decoder D is configured to generate, on the basis of the compressed representation CR and model parameters MP, a reconstructed image recording RI that is as close as possible to the image recording I.
  • An error function LF can be used to quantify deviations between the image recording I and the reconstructed image recording RI. The deviations can be minimized in an optimization process (e.g. in a gradient process) by modifying model parameters MP.
  • the autoencoder AE is usually trained on the basis of a large number of image recordings in an unsupervised learning process. In Fig. 3, only one image recording I of the large number of image recordings is shown.
  • Each image of the plurality of image recordings shows a collection area, or a portion thereof, of one or more insect traps.
  • the one or more insect traps may have one or more functional impairments or may be free of functional impairments.
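  • such an unsupervised training of an autoencoder can be sketched in Python with PyTorch as shown below; the encoder/decoder layout, image size and dummy data are illustrative assumptions and not details of Fig. 3.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class ConvAutoencoder(nn.Module):
        """Autoencoder AE with encoder E and decoder D."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(   # E: image recording I -> compressed representation CR
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(   # D: CR -> reconstructed image recording RI
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    images = torch.rand(64, 3, 128, 128)    # dummy image recordings of collection areas
    loader = DataLoader(TensorDataset(images), batch_size=8, shuffle=True)

    autoencoder = ConvAutoencoder()
    loss_fn = nn.MSELoss()                  # error function LF: deviation between I and RI
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

    for epoch in range(5):
        for (batch,) in loader:
            reconstruction = autoencoder(batch)
            loss = loss_fn(reconstruction, batch)   # unsupervised: the target is the input itself
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()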
  • Fig. 4 schematically shows another example of using a trained machine learning model to detect a functional impairment in an insect trap.
  • the trained machine learning model can have been trained in a training process as described in relation to Fig. 3.
  • the trained machine learning model can be an encoder E of an autoencoder.
  • the encoder E is shown twice in Fig. 4; however, it is the same encoder in both cases; it is only shown twice to illustrate the detection process.
  • a first image recording I1* of a collection area of an insect trap is fed to the encoder E.
  • the asterisk * indicates that the image recording was not used to train the machine learning model.
  • the image recording I1* is preferably an image recording of an insect trap without functional impairment, which can have been created, for example, after the insect trap was installed.
  • the encoder E is configured to generate a first compressed representation CR1 for the first image recording I1*.
  • the first compressed representation CR1 can be used as a reference representation. It can be stored in a data memory. While the insect trap is operating, further images of the collection area of the insect trap are generated. Fig. 4 shows one of these further images, the image recording I2*.
  • the image recording I2* is also fed to the encoder E.
  • the encoder E generates a second compressed representation CR2 for the image recording I2*.
  • the first representation CR1 and the second representation CR2 are compared with each other in a next step. During this comparison, a distance measure D is calculated that quantifies the differences between the first representation CR1 and the second representation CR2.
  • If the distance measure D exceeds (or, in the case of a similarity measure, falls below) a predefined threshold value, a message M is issued.
  • the message M includes information that the insect trap shown in the image recording I2* has a functional impairment.
  • the image recording I2* is fed to an analysis DCI(I2*) in order to detect insects in the collection area of the insect trap and/or to count and/or identify the insects located in the collection area.
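  • the workflow of Fig. 4 can be sketched in Python with PyTorch as follows; the encoder layout mirrors the autoencoder sketch after Fig. 3, the trained encoder E would in practice be loaded from storage, and the image sizes and the threshold are illustrative assumptions.

    import torch
    import torch.nn as nn

    # illustrative encoder E (same layout as in the autoencoder sketch above)
    encoder = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    )
    encoder.eval()

    first_image = torch.rand(1, 3, 128, 128)    # I1*: reference image after installation
    current_image = torch.rand(1, 3, 128, 128)  # I2*: image taken during operation

    with torch.no_grad():
        cr1 = encoder(first_image).flatten()    # reference representation CR1
        cr2 = encoder(current_image).flatten()  # compressed representation CR2

    distance = torch.linalg.vector_norm(cr1 - cr2).item()  # Euclidean distance D
    THRESHOLD = 5.0   # illustrative value; set by an expert or specified by a user

    if distance > THRESHOLD:
        print("Message M: possible functional impairment of the insect trap.")
    else:
        print("No impairment suspected; image passed on to insect detection/counting.")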
  • Fig. 5 shows, schematically and by way of example, a computer-implemented method for training a machine learning model in the form of a flow chart.
  • the training procedure (100) includes the following steps:
  • (110) providing training data, wherein the training data comprise input data and target data,
  • the input data comprise a plurality of image recordings of one or more insect traps, wherein each image recording shows at least part of a collection area of an insect trap
  • the target data comprise a class assignment for each image recording, wherein the class assignment indicates which class of at least two classes the image recording is assigned to, wherein at least a first class represents image recordings of insect traps that do not have a functional impairment and at least a second class represents image recordings of insect traps that have a functional impairment
  • (120) providing a machine learning model, wherein the machine learning model is configured to assign the image recording to one of the at least two classes based on an image recording and on the basis of model parameters,
  • (130) training the machine learning model on the basis of the training data,
  • (140) storing and/or outputting the trained machine learning model and/or using the trained machine learning model to detect a functional impairment in a camera-monitored insect trap.
  • Fig. 6 shows, schematically and by way of example, a computer-implemented method for detecting a functional impairment in a camera-monitored insect trap.
  • the detection method (200) comprises the following steps:
  • (220) providing a trained machine learning model, wherein the machine learning model is configured and trained on the basis of training data to assign image recordings to one of at least two classes, wherein the training data comprises input data and target data,
  • the input data comprises a plurality of images of one or more insect traps, each image showing at least a portion of a collection area of an insect trap
  • the target data for each image recording comprises a class assignment, wherein the class assignment indicates which class of at least two classes the image recording is assigned to, wherein at least a first class represents images of insect traps that do not have a functional impairment and at least a second class represents images of insect traps that have a functional impairment
  • a “computer system” is an electronic data processing system that processes data using programmable calculation rules. Such a system usually includes a “computer”, the unit that includes a processor for carrying out logical operations, and peripherals.
  • peripherals refers to all devices that are connected to the computer and are used to control the computer and/or as input and output devices. Examples of these are monitors (screens), printers, scanners, mice, keyboards, drives, cameras, microphones, speakers, etc. Internal connections and expansion cards are also considered peripherals in computer technology.
  • Today's computer systems are often divided into desktop PCs, portable PCs, laptops, notebooks, netbooks and tablet PCs as well as so-called handhelds (e.g. smartphones); all of these systems can be used to carry out the invention.
  • the term "computer” should be interpreted broadly to include any type of electronic device with data processing capabilities, including, as non-limiting examples, personal computers, servers, embedded cores, communications devices, processors (e.g., digital signal processors (DSP), microcontrollers, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), etc.), and other electronic computing devices.
  • the term "processing" includes any kind of computation, manipulation or transformation of data that is represented as physical (e.g. electronic) phenomena and that may occur or be stored, e.g. in registers and/or memories of at least one computer or processor.
  • the term "processor" includes a single processing unit or a plurality of distributed or remote such units.
  • Fig. 7 shows an exemplary and schematic embodiment of a computer system of the present disclosure.
  • the computer system (1) comprises an input unit (10), a control and computing unit (20) and an output unit (30).
  • the control and computing unit (20) is configured to cause the input unit to receive an image recording, wherein the received image recording shows a collection area of an insect trap, to feed the received image recording to a trained machine learning model, wherein the machine learning model is configured and has been trained on the basis of training data to assign image recordings to one of at least two classes, wherein the training data comprises input data and target data,
    o wherein the input data comprises a plurality of image recordings of one or more insect traps, wherein each image recording shows at least part of a collection area of an insect trap,
    o wherein the target data for each image recording comprises a class assignment, wherein the class assignment indicates which class of at least two classes the image recording is assigned to, wherein at least a first class represents image recordings of insect traps that do not have a functional impairment and at least a second class represents image recordings of insect traps that have a functional impairment,
    to receive information from the machine learning model as to which class of the at least two classes the image recording was assigned to and/or with what probability the image recording is assigned to one or more of the at least two classes, and to output this information, e.g. via the output unit (30).
  • Fig. 8 shows an exemplary and schematic illustration of another embodiment of a computer system of the present disclosure.
  • the computer system (1) comprises a processing unit (20) connected to a memory (50).
  • the processing unit (20) may comprise one or more processors alone or in combination with one or more memories.
  • the processing unit (20) may be ordinary computer hardware capable of processing information such as digital images, computer programs and/or other digital information.
  • the processing unit (20) typically consists of an arrangement of electronic circuits, some of which may be implemented as an integrated circuit or as multiple interconnected integrated circuits (an integrated circuit is sometimes referred to as a "chip").
  • the processing unit (20) may be configured to execute computer programs that may be stored in a main memory of the processing unit (20) or in the memory (50) of the same or another computer system.
  • the memory (50) may be ordinary computer hardware capable of storing information such as digital images (e.g. representations of the examination area), data, computer programs and/or other digital information either temporarily and/or permanently.
  • the memory (50) may comprise volatile and/or non-volatile memory and may be permanently installed or removable. Examples of suitable memories are RAM (Random Access Memory), ROM (Read-Only Memory), a hard disk, flash memory, a removable computer diskette, an optical disc, a magnetic tape, or a combination of the above.
  • Optical discs may include read-only compact discs (CD-ROM), read/write compact discs (CD-R/W), DVDs, Blu-ray discs, and the like.
  • the processing unit (20) may also be connected to one or more interfaces (11, 12, 30, 41, 42) in order to display, transmit and/or receive information.
  • the interfaces may comprise one or more communication interfaces (41, 42) and/or one or more user interfaces (11, 12, 30).
  • the one or more communication interfaces (41, 42) may be configured to send and/or receive information, e.g. to and/or from an MRI scanner, a CT scanner, an ultrasound camera, other computer systems, networks, data storage, or the like.
  • the one or more communication interfaces (41, 42) may be configured to transmit and/or receive information via physical (wired) and/or wireless communication links.
  • the one or more communication interfaces (41, 42) may include one or more interfaces for connecting to a network, e.g. using technologies such as cellular, Wi-Fi, satellite, cable, DSL, fiber optic, and/or the like.
  • the one or more communication interfaces (41, 42) may include one or more short-range communication interfaces configured to connect devices with short-range communication technologies such as NFC, RFID, Bluetooth, Bluetooth LE, ZigBee, infrared (e.g., IrDA), or the like.
  • the user interfaces (11, 12, 30) may include a display (30).
  • a display (30) may be configured to display information to a user. Suitable examples include a liquid crystal display (LCD), a light emitting diode display (LED), a plasma display panel (PDP), or the like.
  • the user input interface(s) (11, 12) may be wired or wireless and may be configured to receive information from a user into the computer system (1), e.g. for processing, storage, and/or display. Suitable examples of user input interfaces (11, 12) include a microphone, an image or video capture device (e.g. a camera), a keyboard or keypad, a joystick, a touch-sensitive surface (separate from or integrated into a touchscreen), or the like.
  • the user interfaces may include automatic identification and data capture (AIDC) technology for machine-readable information. These may include barcodes, radio frequency identification (RFID), magnetic stripes, optical character recognition (OCR), integrated circuit cards (ICC), and the like.
  • the user interfaces may further include one or more interfaces for communicating with peripheral devices such as printers and the like.
  • One or more computer programs (60) may be stored in the memory (50) and executed by the processing unit (20), which is thereby programmed to perform the functions described in this description.
  • the retrieval, loading and execution of instructions of the computer program (60) may be carried out sequentially, so that one instruction is retrieved, loaded and executed at a time. However, the retrieval, loading and/or execution may also be carried out in parallel.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Pest Control & Pesticides (AREA)
  • Engineering & Computer Science (AREA)
  • Insects & Arthropods (AREA)
  • Wood Science & Technology (AREA)
  • Zoology (AREA)
  • Environmental Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Catching Or Destruction (AREA)

Abstract

The systems, methods and computer programs described in the invention relate to the automated detection of a functional impairment of camera-monitored insect traps by means of machine learning methods.
EP24702571.1A 2023-02-06 2024-02-02 Détection d'une déficience fonctionnelle de pièges à insectes surveillés par caméra Pending EP4661663A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP23155179 2023-02-06
PCT/EP2024/052592 WO2024165430A1 (fr) 2023-02-06 2024-02-02 Détection d'une déficience fonctionnelle de pièges à insectes surveillés par caméra

Publications (1)

Publication Number Publication Date
EP4661663A1 true EP4661663A1 (fr) 2025-12-17

Family

ID=85175686

Family Applications (1)

Application Number Title Priority Date Filing Date
EP24702571.1A Pending EP4661663A1 (fr) 2023-02-06 2024-02-02 Détection d'une déficience fonctionnelle de pièges à insectes surveillés par caméra

Country Status (3)

Country Link
EP (1) EP4661663A1 (fr)
CN (1) CN120614984A (fr)
WO (1) WO2024165430A1 (fr)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190075781A1 (en) * 2015-11-05 2019-03-14 John Michael Redmayne A trap or dispensing device
ES2908475T3 (es) 2016-09-22 2022-04-29 Basf Agro Trademarks Gmbh Control de organismos nocivos
EP3482630B1 (fr) * 2017-11-13 2021-06-30 EFOS d.o.o. Procédé, système et programme informatique permettant d'effectuer une prévision des infestations de ravageurs
EP3852520A1 (fr) 2018-09-21 2021-07-28 Bayer Aktiengesellschaft Détection d'arthropodes
EP3900530A1 (fr) 2020-04-23 2021-10-27 Bayer AG Gestion du liquide pour un dispositif d'arrêt
DE102020128032A1 (de) * 2020-10-24 2022-04-28 Jürgen Buchstaller Vorrichtung zur Bekämpfung und/oder Überwachung von Schädlingen
KR20240023381A (ko) * 2021-02-09 2024-02-21 렌토킬 이니셜 1927 피엘씨 적응가능한 미끼 스테이션
EP4091444A1 (fr) 2021-05-21 2022-11-23 Bayer AG Bac de récupération pour organismes nuisibles aux végétaux

Also Published As

Publication number Publication date
CN120614984A (zh) 2025-09-09
WO2024165430A1 (fr) 2024-08-15

Similar Documents

Publication Publication Date Title
EP4025047B1 (fr) Système et procédé d'identification de mauvaises herbes
Yalcin Plant phenology recognition using deep learning: Deep-Pheno
Rupanagudi et al. A novel cloud computing based smart farming system for early detection of borer insects in tomatoes
DE69217047T2 (de) Verbesserungen in neuronalnetzen
DE112018000349T5 (de) Visuelles Analysesystem für auf einem konvolutionalen neuronalen Netz basierte Klassifizierer
Roldán-Serrato et al. Automatic pest detection on bean and potato crops by applying neural classifiers
DE112017001311T5 (de) System und Verfahren zum Trainieren eines Objektklassifikators durch maschinelles Lernen
WO2018065308A1 (fr) Identification d'organismes utiles et/ou de substances nocives dans un champ de plantes cultivées
EP2353146A1 (fr) Procédé pour mesurer la croissance de disques de feuille de plantes ainsi qu'un dispositif approprié à cet effet
EP3528609A1 (fr) Prévisions de rendement pour un champ de blé
EP3626077A1 (fr) Contrôle de la présence des organismes nuisibles
DE112022003791T5 (de) Automatisches erzeugen eines oder mehrerer bildverarbeitungsaufträge basierend auf bereichen von interesse (rois) digitaler bilder
EP4064818B1 (fr) Procédé de traitement de plantes dans un champ
Dandekar et al. Weed plant detection from agricultural field images using YOLOv3 algorithm
CN110569858A (zh) 一种基于深度学习算法的烟叶病虫害识别方法
EP3979214A1 (fr) Lutte contre des organismes nuisibles
DE102023122228A1 (de) Verfahren, auswertesystem, auswerteeinrichtung und system oder vorrichtung zum auswerten von spektraldaten
DE102023113704A1 (de) Vorrichtung, auswerteeinrichtung, auswertesystem und verfahren für eine zustandsanalyse einer pflanze
DE102017217258A1 (de) Verfahren zum Klassifizieren von Pflanzen
Kandalkar et al. Classification of agricultural pests using dwt and back propagation neural networks
WO2023208619A1 (fr) Prédiction de structures de dépôt d'agents phytosanitaires et/ou de substances nutritives sur des parties de plantes
DE102019131858A1 (de) System zur automatischen Erfassung und Bestimmung von sich bewegenden Objekten
EP4661663A1 (fr) Détection d'une déficience fonctionnelle de pièges à insectes surveillés par caméra
DE202025102548U1 (de) Ein datenzentriertes System für die Analyse landwirtschaftlicher Kulturen mit Hilfe von künstlicher Intelligenz und maschinellem Lernen
EP4672961A1 (fr) Prédiction d'exigences de maintenance pour un piège à insectes

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20250908

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR