WO2024010467A2 - Devices and methods for bivalve harvest optimization - Google Patents

Devices and methods for bivalve harvest optimization

Info

Publication number
WO2024010467A2
WO2024010467A2 (PCT/NZ2023/050093)
Authority
WO
WIPO (PCT)
Prior art keywords
bivalve
photographs
obtaining
meat
metrics
Prior art date
Application number
PCT/NZ2023/050093
Other languages
French (fr)
Other versions
WO2024010467A3 (en)
Inventor
Crispin David LOVELL-SMITH
Julian Roscoe MACLAREN
Nicola Ann HAWES
Reno Harley Bolstad HOLMES
Original Assignee
Harvest Hub Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harvest Hub Limited filed Critical Harvest Hub Limited
Publication of WO2024010467A2 publication Critical patent/WO2024010467A2/en
Publication of WO2024010467A3 publication Critical patent/WO2024010467A3/en

Classifications

    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G06V 10/17: Image acquisition using hand-held instruments
    • G06V 10/225: Image preprocessing by selection of a specific region containing or referencing a pattern, based on a marking or identifier characterising the area
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 20/68: Scenes; type of objects; food, e.g. fruit or vegetables
    • G06V 20/70: Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30128: Industrial image inspection; food products


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A method comprising: obtaining one or more photographs of a bivalve; and determining, based on the one or more photographs, one or more metrics associated with the bivalve.

Description

DEVICES AND METHODS FOR BIVALVE HARVEST OPTIMIZATION
FIELD
[0001] This relates to devices and methods for bivalve harvest optimization.
BACKGROUND
[0002] Bivalves, including mussels, oysters, scallops, and clams, are harvested for consumption or other use. In some cases, they may be wild caught, but in other cases they may be farmed. When farmed, this may occur on one or more dropper lines. When the bivalves are mature, they can then be harvested. Harvesting bivalves before they are mature may result in decreased yield or quality.
SUMMARY
[0003] In a first example, there is provided a method comprising: obtaining one or more photographs of a bivalve; and determining, based on the one or more photographs, one or more metrics associated with the bivalve.
[0004] Further examples are set out in the claims.
BRIEF DESCRIPTION
[0005] The description is framed by way of example with reference to the drawings which show certain embodiments. However, these drawings are provided for illustration only, and do not exhaustively set out all embodiments.
[0006] Figure 1 shows an example method for optimizing bivalve harvest.
[0007] Figure 2 shows an example approach for obtaining one or more photographs of a bivalve.
[0008] Figure 3 shows a first example method for processing one or more photographs.
[0009] Figure 4 shows a second example method of processing one or more photographs.
[0010] Figure 5 shows an example system.
[0011] Figure 6 shows a first example tray.
[0012] Figure 7 shows a second example tray comprising indicia.
DETAILED DESCRIPTION
[0013] In an embodiment, there is provided an approach for optimizing bivalve harvest. Optimizing does not necessarily mean obtaining an optimal outcome. Instead, optimizing means providing a potentially improved outcome compared to conventional human approaches.
[0014] Figure 1 shows an example method for optimizing bivalve harvest.
[0015] At step 110, the system obtains one or more photographs of each of one or more bivalves.
[0016] The bivalves may be obtained as part of a sampling approach. For example, the bivalves may comprise one or more bivalves from each of one or more dropper lines. In a further example, the bivalves are obtained from a range of areas of dropper line, for example from the top, middle, and bottom of each dropper line.
[0017] The bivalves may be raw or cooked. For a given species or implementation, one approach may provide a higher quality output which can be identified during calibration of the system.
[0018] The one or more photographs may be obtained in accordance with the method shown in Figure 2.
[0019] At step 120, the system processes the one or more photographs.
[0020] The processing may comprise transforming the photographs to improve their suitability for subsequent input. As a result, the photographs are output in a relatively standardized, comparable format.
[0021] The processing may occur in accordance with the method shown in Figure 3 or 4.
[0022] At step 130, the one or more photographs are provided as an input to a trained model.
[0023] The trained models are configured to receive an input comprising imagery (such as photographs) of a bivalve and provide an output. The appropriate trained model is selected based on the desired output. Examples of how the model may be trained are described below.
[0024] At step 140, the trained model provides an output.
[0025] In some cases, the output may be data, and may comprise one or more metrics. The metrics may relate to readiness for harvest.
[0026] In a first example, a metric may be a condition score. The condition score indicates readiness for harvest. This may be calculated in the manner shown in Figure 2. Based on the condition score, the system may indicate whether the bivalves are ready for harvest. In cases where the photographed bivalves are a sample, this may indicate that the entire dropper line or farm is ready for harvest.
[0027] In a second example, a metric may be a biofouling score. The biofouling score indicates the amount of the bivalve affected by another species. In some cases, the other species may be one or more invasive species identified by a list of invasive species. The list of invasive species may be provided by a government department, a regulator, or another entity. The biofouling score may indicate that some intervention is required to reduce the amount of biofouling.
[0028] In a third example, a metric may be one or more measurements of the bivalve. For example, this may comprise one or more of a sex, length, width, thickness, shell volume, shell damage, meat pigmentation, and meat-to-shell ratio. These measurements may indicate whether the bivalves are ready for harvest. In cases where the photographed bivalves are a sample, this may indicate that the entire dropper line or farm is ready for harvest. Additionally or alternatively, the measurements may indicate the suitability of the bivalve for breeding.
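By way of illustration, one way such measurements might be derived from a segmentation mask is sketched below, assuming OpenCV and a millimetre-per-pixel scale recovered from tray indicia of known size (see the checkerboard sketch later in this document). The function name and mask conventions are illustrative, not taken from the patent.

```python
import cv2
import numpy as np

def shell_dimensions(mask: np.ndarray, mm_per_px: float) -> dict:
    """Estimate bivalve dimensions from a binary segmentation mask.

    `mask` is a single-channel uint8 image in which shell pixels are
    non-zero; `mm_per_px` is assumed to come from tray indicia of
    known physical size.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no bivalve found in mask")
    largest = max(contours, key=cv2.contourArea)
    # Minimum-area rotated rectangle: its long side approximates shell
    # length, its short side approximates shell width.
    _, (w, h), _ = cv2.minAreaRect(largest)
    length_px, width_px = max(w, h), min(w, h)
    return {
        "length_mm": length_px * mm_per_px,
        "width_mm": width_px * mm_per_px,
        "area_mm2": cv2.contourArea(largest) * mm_per_px ** 2,
    }
```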
[0029] In some cases, the metrics may further be used to indicate the suitability of the bivalve (or genetically related bivalves) for breeding. Where the preceding steps are not destructive (for example, the bivalve is not opened), the same bivalve that is photographed might be suitable for breeding. In a more general case, such as where the bivalve is opened, this may indicate that other individuals of the same family might be suitable for breeding. This could then form part of a selective breeding programme.
[0030] The output may be displayed to a user, for example on a user interface on a mobile device.
[0031] In some embodiments, the output may be provided to control one or more apparatus.
[0032] In a first example, the output may be an instruction to a harvesting apparatus. The harvesting apparatus may determine, based on the output, whether or not to harvest the bivalves.
[0033] In a second example, the output may be an instruction to a sampling system. The sampling system may determine, based on the output, whether or not to continue sampling the bivalves.
[0034] In a third example, the output may be an instruction to a processing system. The processing system may determine, based on the output, how to process the bivalves which are harvested. For example, bivalves with certain characteristics may be used for oil, whereas bivalves with other characteristics may be used as meat.
[0035] This provides an optimized approach for the harvest of bivalves which may result in improved yield and/or quality compared with conventional approaches.
Photographs
[0036] Figure 2 shows an example approach for obtaining one or more photographs of a bivalve. The method of Figure 2 may be performed using a system comprising a camera. In some cases, the system may be a mobile device such as a smartphone.
[0037] At step 210, the bivalve is opened. This results in the meat of the bivalve remaining attached to one half of the shell. The other half of the shell is typically free of meat, and therefore empty.
[0038] At step 220, the bivalve is located on a tray.
[0039] The purpose of the tray may be to at least hold the bivalves in place. In preferred embodiments, the bivalves are located such that the meat-containing shell is located face up on the tray and the empty shell is located face down on the tray. In this way, both shells are on the same tray.
[0040] Examples of such trays, and their usage, are shown in Figures 6 or 7.
[0041] At step 230, the system determines that the camera is in the appropriate position. This may be based on whether the entire tray is in the field of view of the camera and/or whether sufficient lighting is provided.
[0042] When the entire tray is not in the field of view of the camera, the system may provide feedback to an operator to adjust the camera and/or the tray, or may adjust the camera and/or the tray automatically. When sufficient lighting is not provided, the system may provide feedback to an operator to adjust the lighting or may adjust the lighting automatically.
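A minimal sketch of such a check is shown below, assuming OpenCV, a QR fiducial on the tray as a proxy for the whole tray being in view, and illustrative exposure thresholds; none of these specifics are prescribed by the patent.

```python
import cv2
import numpy as np

def frame_ok(frame_bgr: np.ndarray,
             min_mean_luma: float = 60.0,
             max_clipped_frac: float = 0.05) -> tuple[bool, str]:
    """Return (usable, operator feedback) for a camera preview frame.

    Thresholds are illustrative defaults, not values from the patent.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if gray.mean() < min_mean_luma:
        return False, "increase lighting"
    if (gray >= 250).mean() > max_clipped_frac:
        return False, "reduce glare or lighting"
    # A visible tray fiducial is used as a proxy for correct framing.
    found, _ = cv2.QRCodeDetector().detect(frame_bgr)
    if not found:
        return False, "move camera so the whole tray is in view"
    return True, "ok"
```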
[0043] In some cases, step 230 may be omitted, for example where precise positioning is not necessary.
[0044] At step 240, a camera takes one or more photographs of the tray.
[0045] The one or more photographs of the tray may comprise still photographs and/or a video.
[0046] The one or more photographs may be taken from different positions. These may be taken by multiple cameras which take photographs from different positions and/or by one or more cameras which move location.
[0047] The one or more photographs may be taken with a time delay between each photograph. In a preferred embodiment, this is about 1/10 of a second. This results in images that show the tray in slightly different positions over time.
[0048] The one or more photographs may be taken under different lighting conditions. This may occur as the result of alternating the lighting provided by a lighting module of the system. For example, this may comprise turning illumination on or off and/or changing the colour of the illumination between white, red, infrared, or other colours.
[0049] In a second example, the photographs may comprise LIDAR ("light detection and ranging") images of the tray. This may allow for improved capture of the bivalve's geometry.
[0050] As the result of the method of Figure 2, one or more photographs of the bivalve are provided.
Processing Photographs
[0051] Figure 3 shows a first example method for processing one or more photographs.
[0052] At step 310, a plurality of photographs is merged to obtain a merged image. The plurality of photographs relate to the same bivalve. The merger may occur on the basis of indicia on a tray in the photograph. Additionally or alternatively, the merger may occur on the basis of scale-invariant feature transform (SIFT) features.
[0053] The merger may reduce or eliminate reflections. For example, where the plurality of photographs comprise photographs of substantially the same content from different positions, each photograph will tend to have reflection (if any) at a different part of the photograph. By combining these appropriately, the reflection may be eliminated in the merged image. This may be particularly beneficial in use, as the environment in which bivalves are harvested is typically wet, which increases the likelihood of reflection.
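A sketch of one possible merge is shown below, assuming OpenCV: a burst of photos is aligned to the first frame using SIFT features and RANSAC homographies, then combined with a per-pixel median. Because specular reflections move between viewpoints, the median tends to suppress them. Indicia-based alignment would replace the feature-matching step.

```python
import cv2
import numpy as np

def merge_photos(photos: list[np.ndarray]) -> np.ndarray:
    """Align photos of the same tray to photos[0], then median-merge."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    ref = photos[0]
    kp_ref, des_ref = sift.detectAndCompute(
        cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY), None)
    h, w = ref.shape[:2]
    aligned = [ref]
    for img in photos[1:]:
        kp, des = sift.detectAndCompute(
            cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None)
        # Lowe's ratio test keeps only distinctive matches.
        matches = [m for m, n in matcher.knnMatch(des, des_ref, k=2)
                   if m.distance < 0.75 * n.distance]
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        aligned.append(cv2.warpPerspective(img, H, (w, h)))
    # Per-pixel median suppresses reflections that move between shots.
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)
```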
[0054] At step 320, the bivalve within the merged image is isolated. In this case, isolating may mean providing cropped photographs in which features other than the bivalve are omitted.
[0055] In some cases, this may occur through semantic segmentation. The semantic segmentation may occur through the use of a trained model, for example a convolutional neural network such as a U-Net. In some cases, different parts of the bivalve may be isolated. For example, the meat and shell may be isolated.
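For concreteness, a minimal U-Net-style segmenter in PyTorch is sketched below. The three-class layout (background, shell, meat) and all layer sizes are assumptions made for illustration; the patent states only that a U-Net may be used.

```python
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Small U-Net producing per-pixel class logits."""
    def __init__(self, n_classes: int = 3):  # background / shell / meat
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                 # skip connection 1
        e2 = self.enc2(self.pool(e1))     # skip connection 2
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: per-pixel class labels for a 256x256 photograph.
# labels = TinyUNet()(torch.randn(1, 3, 256, 256)).argmax(dim=1)
```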
[0056] At step 330, one or more processed images are provided. These may then be used for further analysis.
[0057] Figure 4 shows a second example method of processing one or more photographs.
[0058] At step 410, the bivalve within each of one or more photographs is isolated. In this case, isolating may mean providing cropped photographs in which features other than the bivalve are omitted.
[0059] In some cases, this may occur through semantic segmentation. The semantic segmentation may occur through the use of a trained model, for example a convolutional neural network such as a U-Net. In some cases, different parts of the bivalve may be isolated. For example, the meat and shell may be isolated.
[0060] At step 420, one or more processed images are provided. These may then be used for further analysis. Figure 4 therefore differs from Figure 3 in that no merging occurs, so multiple processed images of the same bivalve are obtained.
Training the Model
[0061] At step 130 noted above, one or more photographs are provided as an input to a trained model. There are multiple approaches for training an appropriate model, based on the preferred output.
[0062] In a first example, a model is trained for use in calculating a condition score. A condition score indicates the readiness of the bivalve for harvest. A training set is provided where images of a shell containing meat are provided along with a condition score. In one case, the condition score in the training set is based on the meat-to-shell ratio, obtained through cooking and weighing bivalves in the training set. In another case, the condition score in the training set is based on scores provided by human evaluators.
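A sketch of such training, using a standard torchvision backbone regressed onto a scalar score, is shown below. The backbone, loss, and the meat-to-shell definition of the target are assumptions: the patent permits either weighed ratios or human-assigned scores as ground truth.

```python
import torch
import torch.nn as nn
from torchvision import models

# Regress a single condition score from a photograph of the
# meat-containing half shell.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_epoch(loader):
    """`loader` yields (images, scores); the score here is assumed to be
    a meat-to-shell ratio obtained by cooking and weighing, e.g. cooked
    meat weight divided by total weight."""
    model.train()
    for images, scores in loader:
        optimizer.zero_grad()
        pred = model(images).squeeze(1)
        loss = loss_fn(pred, scores.float())
        loss.backward()
        optimizer.step()
```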
[0063] In a second example, a model is trained for use in calculating a biofouling score. The biofouling score indicates the amount of the bivalve affected by another species. In some cases, the other species may be one or more invasive species identified by a list of invasive species. The list of invasive species may be provided by a government department, a regulator, or another entity. A training set is provided where images of an empty shell are provided along with a biofouling score. The biofouling score in the training set may be based on weighing the amount of other species.
[0064] In a third example, a model is trained for use in calculating parameters relating to the meat of a bivalve based on its outer appearance. The parameters may comprise a condition score. A training set is provided where images of the outside of a shell are provided along with determined parameters.
[0065] Further models may be provided for various parameters of the bivalves, such as sex, length, width, thickness, shell damage, meat pigmentation, and meat-to-shell ratio.
[0066] The bivalves in the training set may be exclusively raw, exclusively cooked, or a mixture. In some embodiments, the training set includes both data relating to the same bivalve individual in both a raw and a cooked state. This can provide additional ground truth data and additional consistency checks over time. In addition, this may allow properties of a cooked bivalve to be derived from photographs of a raw bivalve (and vice versa).
[0067] Training the models uses artificial intelligence techniques. In preferred embodiments, supervised training of a neural network is used.
System
[0068] Figure 5 shows an example system for performing the methods noted above.
[0069] In the system, edge device 510 (such as a smart phone) may be provided, in use, proximate the bivalves 540 on a tray 541. Edge device 510 may comprise one or more cameras 511.
[0070] The edge device 510 is configured to perform the method of Figure 1, steps 230 and 240 of Figure 2, Figure 3, and Figure 4 through an app. That is, the edge device 510 preferably performs all steps other than training models. This may be useful in cases where network access is limited: the edge device 510 can perform the necessary analysis in real-time to enable sampling on site.
[0071] Server 520 is accessible by the edge device 510 via network 530 (such as the Internet). In use, edge device 510 sends data to the server 520 when the network 530 is available. This may occur substantially after processing occurs on the edge device 510. This enables record keeping at server 520, for example via farm management applications.
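One plausible shape for this store-and-forward behaviour is sketched below, assuming a local SQLite queue on the edge device and a hypothetical JSON endpoint at server 520; the patent does not specify the transport or storage.

```python
import json
import sqlite3
import urllib.request

DB = sqlite3.connect("results_queue.db")
DB.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)")

def record(metrics: dict) -> None:
    """Persist results locally first; the vessel may have no coverage."""
    DB.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(metrics),))
    DB.commit()

def flush(server_url: str) -> None:
    """Push queued results when the network is reachable.
    `server_url` is a placeholder for a farm-management endpoint."""
    for row_id, payload in DB.execute("SELECT id, payload FROM queue").fetchall():
        req = urllib.request.Request(server_url, data=payload.encode(),
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=10)
        except OSError:
            return  # still offline; retry on the next flush
        DB.execute("DELETE FROM queue WHERE id = ?", (row_id,))
        DB.commit()
```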
Tray
[0072] As described above, bivalves may be located on a tray when photographs are taken.
[0073] Figure 6 shows a first example tray 600 which may be used for this purpose.
[0074] Tray 600 comprises a substantially flat surface onto which the bivalves 610 are placed. This may provide a reference point for subsequent photographs to allow the photographs to be comparable, despite non-standardised cameras. The tray may be made of a robust and/or waterproof material, such as plastic. The substantially flat surface may be bounded by walls 601 to reduce the chance of a shell falling off the tray 600 in use.
[0075] Figure 7 shows a second example tray 700, which is the same as the tray 600 with the addition of one or more indicia.
[0076] In a first example, the indicia may comprise a regular geometric pattern 710, such as a checkerboard. This may aid in providing known geometry, such as corner points or other reference points.
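A sketch of how such a pattern might yield a millimetre-per-pixel scale is shown below, assuming OpenCV; the inner-corner count and square size are illustrative values for the tray artwork, not taken from the patent.

```python
import cv2
import numpy as np

def tray_scale(image_bgr: np.ndarray,
               pattern: tuple = (7, 5),    # inner corners (cols, rows)
               square_mm: float = 10.0):   # printed square size
    """Return mm per pixel from a checkerboard on the tray, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    # Mean spacing between horizontally adjacent corners, in pixels.
    pts = corners.reshape(pattern[1], pattern[0], 2)
    spacing_px = np.mean(np.linalg.norm(np.diff(pts, axis=1), axis=2))
    return square_mm / spacing_px
```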
[0077] In a second example, the indicia may comprise a fiducial marker 720, such as a quick response (QR) code. The fiducial marker may encode or correspond to one or more of a location of harvest and/or a time of harvest.
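Reading such a marker is straightforward with OpenCV's QR detector; the payload shown in the comment is a guess, since the patent does not specify an encoding.

```python
import cv2

detector = cv2.QRCodeDetector()
payload, points, _ = detector.detectAndDecode(cv2.imread("tray.jpg"))
if points is not None and payload:
    # e.g. payload == "line=12;position=top;time=2023-07-08T09:30"
    print("fiducial payload:", payload)
```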
[0078] In a third example, the indicia may comprise a colour chart 730. The colour chart, optionally in conjunction with a white background, may allow for colour correction, in order to accurately account for differences in camera image sensor properties and camera settings. For example, in the case of mussels, a creamy white colour in a male or a bright orange colour in a female indicates that the mussel is ready for harvest.
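A minimal sketch of chart-based correction is shown below: a 3x3 matrix mapping measured patch colours to their known reference values is fitted by least squares. Gamma linearisation and patch sampling are omitted for brevity, and a linear fit is a simplification of what a production pipeline might do.

```python
import numpy as np

def colour_correction_matrix(measured: np.ndarray,
                             reference: np.ndarray) -> np.ndarray:
    """Fit M such that measured @ M approximates reference.

    `measured` and `reference` are (n_patches, 3) RGB arrays sampled
    from the chart in the photograph and from its known values.
    """
    M, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

# Apply to a whole image: corrected = image.reshape(-1, 3) @ M
```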
[0079] The indicia may be duplicated and distributed across the tray. This reduces the chance that a located bivalve obscures the indicia.
[0080] In some cases, the indicia may be printed directly on the tray. In other cases, the indicia may be separate pieces, such as printed card or other material which can be placed or attached to the tray.
Interpretation
[0081] A number of methods have been described above. Any of these methods may be embodied in a series of instructions, which may form a computer program. These instructions, or this computer program, may be stored on a computer readable medium, which may be non-transitory. When executed, these instructions or this program cause a processor to perform the described methods.
[0082] The steps of the methods have been described in a particular order for ease of understanding. However, the steps can be performed in a different order from that specified, or with steps being performed in parallel. This is the case in all methods except where one step is dependent on another having been performed.
[0083] The term "comprises" and its other grammatical forms are intended to have an inclusive meaning unless otherwise noted. That is, they should be taken to mean an inclusion of the listed components, and possibly of other non-specified components or elements.
[0084] While the present invention has been explained by the description of certain embodiments, the invention is not restricted to these embodiments. It is possible to modify these embodiments without departing from the spirit or scope of the invention.

Claims

1. A method comprising: obtaining one or more photographs of a bivalve; and determining, based on the one or more photographs, one or more metrics associated with the bivalve.
2. The method of claim 1, further comprising: locating the bivalve on a surface of known geometry.
3. The method of claim 2, where the known geometry is flat.
4. The method of claim 2 or 3, wherein the surface comprises one or more indicia.
5. The method of claim 4, wherein the indicia comprises one or more of: a regular geometric pattern; a fiducial marker; and a colour chart.
6. The method of any one of claims 2 to 5, wherein locating the bivalve on a surface of known geometry comprises: locating the half of the bivalve containing meat face up; and locating the other half of the bivalve face down.
7. The method of any one of claims 2 to 6, wherein the bivalve is cooked.
8. The method of any one of claims 1 to 7, wherein obtaining one or more photographs of a bivalve comprises: using the camera of a mobile device to obtain one or more photographs of a bivalve.
9. The method of any one of claims 1 to 8, wherein obtaining one or more photographs of a bivalve comprises: obtaining a plurality of photographs of the bivalve, wherein the photographs differ in location and/or time.
10. The method of any one of claims 1 to 9, wherein obtaining one or more photographs of a bivalve comprises: obtaining a video of the bivalve.
11. The method of any one of claims 1 to 10, wherein the metrics comprise one or more of: a condition score indicating the readiness to harvest; a biofouling score indicating the amount of the bivalve affected by another species; sex; length; width; thickness; shell volume; shell damage; meat pigmentation; and meat-to-shell ratio.
12. The method of any one of claims 1 to 11, further comprising: processing the one or more photographs.
13. The method of claim 12, wherein processing the one or more photographs comprises: applying semantic segmentation to the one or more photographs.
14. The method of claim 12 or 13, wherein processing the one or more photographs comprises: merging two or more photographs to form a merged image.
15. The method of any one of claims 1 to 14, wherein determining, based on the one or more photographs, one or more metrics associated with the bivalve comprises: providing the one or more photographs as an input to a trained model; and obtaining the one or more metrics as an output from the trained model.
16. The method of any one of claims 1 to 15, further comprising: displaying the one or more metrics on a user interface.
17. A system configured to perform the method of any one of claims 1 to 16.
18. The system of claim 17, wherein the system is a mobile device.
PCT/NZ2023/050093 2022-07-08 2023-09-08 Devices and methods for bivalve harvest optimization WO2024010467A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NZ790159 2022-07-08
NZ79015922 2022-07-08

Publications (2)

Publication Number Publication Date
WO2024010467A2 true WO2024010467A2 (en) 2024-01-11
WO2024010467A3 WO2024010467A3 (en) 2024-03-07

Family

ID=89453897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NZ2023/050093 WO2024010467A2 (en) 2022-07-08 2023-09-08 Devices and methods for bivalve harvest optimization

Country Status (1)

Country Link
WO (1) WO2024010467A2 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060005460A1 (en) * 2004-06-28 2006-01-12 Bittrick Mark E Fish dimension recording towel
DE102011051279A1 (en) * 2011-03-07 2012-09-13 André Meißner Arrangement for detection and documentation of count, direction, speed, size and type of aquatic organisms i.e. fishes, in fish control station, has lights working in range of infrared light, and image formed on reference surface
AU2021422850A1 (en) * 2021-01-29 2023-08-17 Running Tide Technologies, Inc. System and method for grading and counting aquatic animals
DE202021104009U1 (en) * 2021-07-28 2021-08-17 Ecosoph Gmbh Large-scale ecotoxicological early warning system with a freely movable bio-indicator and a compact electronic measuring and transmitting device

Also Published As

Publication number Publication date
WO2024010467A3 (en) 2024-03-07

Similar Documents

Publication Publication Date Title
An et al. Application of computer vision in fish intelligent feeding system—A review
Descamps et al. An automatic counter for aerial images of aggregations of large birds
Navotas et al. Fish identification and freshness classification through image processing using artificial neural network
CN112232978B (en) Aquatic product length and weight detection method, terminal equipment and storage medium
US11282199B2 (en) Methods and systems for identifying internal conditions in juvenile fish through non-invasive means
Navarro et al. IMAFISH_ML: A fully-automated image analysis software for assessing fish morphometric traits on gilthead seabream (Sparus aurata L.), meagre (Argyrosomus regius) and red porgy (Pagrus pagrus)
TWI718572B (en) A computer-stereo-vision-based automatic measurement system and its approaches for aquatic creatures
CN112634202A (en) Method, device and system for detecting behavior of polyculture fish shoal based on YOLOv3-Lite
CN112232977A (en) Aquatic product cultivation evaluation method, terminal device and storage medium
Costa et al. Preliminary evidences of colour differences in European sea bass reared under organic protocols
Huang et al. Pork primal cuts recognition method via computer vision
CN115512215A (en) Underwater biological monitoring method and device and storage medium
CN114612397A (en) Fry sorting method and system, electronic device and storage medium
Stien et al. Rapid estimation of fat content in salmon fillets by colour image analysis
CN110414369B (en) Cow face training method and device
CN115861721A (en) Livestock and poultry breeding spraying equipment state identification method based on image data
JP2021107991A (en) Information processing device, computer program and information processing method
WO2024010467A2 (en) Devices and methods for bivalve harvest optimization
Kaewtapee et al. Objective scoring of footpad dermatitis in broiler chickens using image segmentation and a deep learning approach: camera-based scoring system
Yorzinski et al. A songbird can detect the eyes of conspecifics under daylight and artificial nighttime lighting
Cao et al. A computer vision program that identifies and classifies fish species
CN111951233B (en) Fishbone residue detection method and system
Taparhudee et al. Optimizing Convolutional Neural Networks, XGBoost, and Hybrid CNN-XGBoost for Precise Red Tilapia (Oreochromis niloticus Linn.) Weight Estimation in River Cage Culture with Aerial Imagery
Taparhudee et al. Weight estimation of Nile tilapia (Oreochromis niloticus Linn.) using image analysis with and without fins and tail
CN113052114A (en) Dead shrimp identification method, terminal device and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23835914

Country of ref document: EP

Kind code of ref document: A2