WO2019198701A1 - Dispositif d'analyse et procédé d'analyse - Google Patents

Dispositif d'analyse et procédé d'analyse Download PDF

Info

Publication number
WO2019198701A1
Authority
WO
WIPO (PCT)
Prior art keywords
captured image
analysis
unit
fish
image
Prior art date
Application number
PCT/JP2019/015417
Other languages
English (en)
Japanese (ja)
Inventor
Jun Kobayashi
Mamiko Fumoto
Original Assignee
NEC Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation
Priority to JP2020513402A priority Critical patent/JP7006776B2/ja
Publication of WO2019198701A1 publication Critical patent/WO2019198701A1/fr

Links

Images

Classifications

    • A - HUMAN NECESSITIES
    • A01 - AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K - ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K61/00 - Culture of aquatic animals
    • A01K61/90 - Sorting, grading, counting or marking live aquatic animals, e.g. sex determination
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02 - Agriculture; Fishing; Forestry; Mining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis

Definitions

  • the present invention relates to an aquatic organism analysis apparatus and analysis method.
  • Patent Document 1 discloses a method for monitoring aquatic organisms, which accurately measures the three-dimensional position of aquatic organisms such as fish moving in a tank and monitors their behavioral state. That is, the shape and size of each part of a fish (head, trunk, tail fin, etc.) are estimated based on images of the fish's back (or ventral) side taken from above (or below) the tank, images taken from the side of the tank, and a front-side image of the fish. Further, the shape and size of each part of the fish are estimated using a plurality of template images prepared for each part.
  • Patent Document 2 discloses an image discrimination device for moving objects (fish), applied, for example, to surveys of the amount of fish in the sea. That is, underwater fish are photographed by a moving-image camera and a still-image camera, and fish silhouettes are observed based on the moving images and still images. Note that the size of a fish is estimated from its image size (number of pixels).
  • the color and brightness of a photographed image change according to photographing conditions such as water quality and weather, so there is a possibility that the feature points of the aquatic organisms appearing in the photographed image cannot be fully recognized.
  • the size of the underwater creature estimated based on the captured image can be provided to the user as useful information related to the breeding state of the fish.
  • the user who receives information regarding the breeding state of the fish needs to know the specific degree of the aquatic organisms in the photographed image (for example, the specific number of aquatic organisms) before paying for the information providing service.
  • it is difficult to accurately determine the number and type of aquatic organisms based on the captured image, and it is therefore difficult to provide users with accurate information regarding the growth state of the aquatic organisms.
  • the present invention has been made to solve the above-described problems, and an object thereof is to provide an analysis apparatus and an analysis method capable of providing accurate information on aquatic organisms.
  • the analysis device includes a captured image acquisition unit that acquires a captured image of an aquatic organism, and a preliminary analysis unit that generates determination material information for starting analysis based on the degree of identification of the aquatic organism in the captured image.
  • the analysis method acquires a photographed image of aquatic organisms, and generates determination material information for starting analysis based on the specific degree of the aquatic organisms in the photographed image.
  • the storage medium stores a computer program that causes a computer to execute a process of acquiring a photographed image of an aquatic organism and a process of generating determination material information for starting analysis based on the specific degree of the aquatic organism in the photographed image.
  • the underwater organism monitoring system includes an imaging device that captures an image of an underwater organism, and an analysis device that performs automatic recognition processing for a plurality of feature points indicating shape features of the underwater organism reflected in the captured image of the imaging device, calculates the specific degree of the aquatic organisms, and generates determination material information for starting analysis. When receiving an analysis start instruction according to the determination material information, the analysis device calculates statistical information about the size of the aquatic organisms and generates monitoring report information including that statistical information.
  • according to the present invention, since a photographed image of an underwater organism is acquired and determination material information for starting analysis is generated based on the specific degree of the underwater organism in the photographed image, a user (or an operator) can know the specific degree of the organism before starting the main analysis.
  • FIG. 1 is a system configuration diagram showing an underwater organism monitoring system provided with an analysis device according to an embodiment of the present invention. FIG. 2 is a hardware configuration diagram of the analysis device according to the embodiment. FIG. 3 is a functional block diagram of the analysis device according to the embodiment. FIG. 4 shows an example of a captured image.
  • FIG. 1 is a system configuration diagram showing an underwater organism monitoring system 100 including an analysis apparatus 1 according to an embodiment of the present invention.
  • the underwater organism monitoring system 100 includes an analysis device 1, a stereo camera 2, and a terminal 3.
  • the stereo camera 2 is installed at a position where it can photograph the underwater creatures raised in the fish cage 4 installed in the sea.
  • the stereo camera 2 is installed at a corner of the rectangular parallelepiped fish cage 4, with its shooting direction directed toward the center of the cage 4.
  • the function and operation of the underwater organism monitoring system 100 will be described assuming that fish are raised in the fish cage 4.
  • the stereo camera 2 installed in the water of the fish cage 4 is communicatively connected to the terminal 3.
  • the stereo camera 2 captures an image in the capturing direction and transmits the captured image to the terminal 3.
  • the terminal 3 is communicatively connected to the analysis device 1.
  • the terminal 3 transmits the captured image received from the stereo camera 2 to the analysis device 1.
  • the analysis device 1 is a server device connected to a communication network such as the Internet, for example. Further, the analysis apparatus 1 is communicatively connected to a service providing destination terminal 5 (hereinafter referred to as “terminal 5”).
  • the analysis apparatus 1 performs machine learning based on the captured image received from the stereo camera 2 via the terminal 3 and the feature points for specifying the shape characteristics of the aquatic life reflected in the captured image.
  • the analysis apparatus 1 performs automatic recognition processing using learning data generated by machine learning, and estimates feature points that specify shape characteristics of aquatic organisms in a captured image.
  • the analysis device 1 sends a report on the size of the underwater organism to the terminal 5 based on the feature points of the underwater organism reflected in the captured image.
  • the analysis apparatus 1 estimates the fish size based on the feature points of the fish raised in the fish cage 4.
  • the analysis device 1 generates a monitoring report including statistical information on fish size.
  • the analysis apparatus 1 performs a pre-analysis and, in the pre-analysis, generates determination material information including the specific degree of the aquatic organisms in the captured image.
  • the analyzer 1 sends the judgment material information to the terminal 5.
  • the user (or worker) of the terminal 5 confirms the specific degree of the aquatic life (fish) included in the determination material information and decides whether to instruct the start of the main analysis.
  • the operator operates the terminal 5 to transmit a main analysis start instruction indicating whether to start the main analysis to the analysis apparatus 1.
  • the analyzer 1 starts the main analysis according to the main analysis start instruction received from the terminal 5.
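The pre-analysis and main-analysis handshake described above can be sketched as follows. The patent does not fix a formula for the specific degree, so the metric here (the fraction of detected fish whose four feature points P1 to P4 were all recognized) and the start threshold are illustrative assumptions:

```python
def identification_degree(detections, expected_points=4):
    """Hypothetical metric for the specific degree: the fraction of
    detected fish for which all four feature points were recognized."""
    if not detections:
        return 0.0
    complete = sum(1 for points in detections if len(points) == expected_points)
    return complete / len(detections)

def pre_analysis(detections, threshold=0.5):
    """Generate determination material information and, standing in for
    the user at the terminal 5, decide whether to start the main analysis."""
    material = {"specific_degree": identification_degree(detections),
                "num_fish": len(detections)}
    start_main_analysis = material["specific_degree"] >= threshold
    return material, start_main_analysis
```

In the real system the decision is made by a human at the terminal 5; the threshold here only stands in for that judgment.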
  • FIG. 2 is a hardware configuration diagram of the analysis apparatus 1.
  • the analysis apparatus 1 includes a CPU (Central Processing Unit) 101, a ROM (Read Only Memory) 102, a RAM (Random Access Memory) 103, a database 104, and a communication module 105.
  • the analysis device 1 communicates with the terminal 3 via the communication module 105. Note that the terminals 3 and 5 also have the same hardware configuration as the analysis apparatus 1.
  • FIG. 3 is a functional block diagram of the analysis apparatus 1.
  • the CPU 101 executes a program stored in advance in a storage unit such as the ROM 102, thereby realizing the functional units shown in FIG. 3.
  • the captured image acquisition unit 11 and the feature designation reception unit 12 are mounted on the analysis apparatus 1 by executing an information acquisition program stored in advance in the storage unit.
  • the learning unit 13 is mounted on the analysis device 1 by executing a machine learning program stored in advance in the storage unit after the analysis device 1 is activated.
  • by executing a feature estimation program stored in advance in the storage unit, the analysis apparatus 1 is equipped with the learning data acquisition unit 14, the pre-analysis unit 101, the feature point estimation unit 15, the same individual specifying unit 16, the data discarding unit 17, the size estimating unit 18, the report information generating unit 102, and the output unit 19.
  • the captured image acquisition unit 11 acquires a captured image from the stereo camera 2 via the terminal 3.
  • the feature designation accepting unit 12 accepts input of a rectangular range in which the fish body shown in the photographed image is accommodated and a plurality of feature points in the fish body.
  • the learning unit 13 performs machine learning based on the captured image received from the stereo camera 2 and the feature points for specifying the shape characteristics of the underwater creatures reflected in the captured image. Machine learning will be described later.
  • the learning data acquisition unit 14 acquires the learning data generated by the learning unit 13.
  • the feature point estimation unit 15 estimates a feature point that identifies the shape feature of the fish that appears in the captured image by automatic recognition processing using the learning data.
  • the prior analysis unit 101 generates determination material information for starting analysis based on the degree of fish specification in the captured image, and sends the determination material information to the terminal 5.
  • the same individual specifying unit 16 specifies the fish of the same individual shown in each of the two captured images obtained from the stereo camera 2.
  • the data discarding unit 17 discards the estimation result when the relationship between the plurality of feature points of the fish estimated by the automatic recognition processing is abnormal.
  • the size estimation unit 18 estimates the size of the fish based on the fish feature points in the captured image. In the present embodiment, the size of the fish is the fish body length, body height, weight, and the like.
  • the report information generation unit 102 generates monitoring report information using statistical information on the fork length, body height, weight, and other numerical values of the fish specified from the captured images.
  • the output unit 19 generates output information based on the fish size estimated by the size estimation unit 18 and sends the output information to a predetermined output destination.
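The size estimation in the size estimation unit 18 can be sketched as below, assuming the feature points have already been triangulated into 3D coordinates in meters. The reading of P1 as the mouth tip, P2 as the tail fork, and P3/P4 as fin roots follows the feature-point description elsewhere in this document, and the length-weight coefficient is a purely hypothetical allometric relation:

```python
import math

def euclidean(p, q):
    """3D Euclidean distance between two triangulated feature points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def estimate_fish_size(p1, p2, p3, p4):
    """Fork length as the distance from the mouth tip (P1) to the tail-fin
    fork (P2); body height as the distance between the fin roots (P3, P4);
    weight from a hypothetical allometric length-weight relation."""
    fork_length = euclidean(p1, p2)
    body_height = euclidean(p3, p4)
    weight = 0.02 * fork_length ** 3  # coefficients are illustrative only
    return {"fork_length": fork_length,
            "body_height": body_height,
            "weight": weight}
```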
  • FIG. 4 shows an example of an image taken by the stereo camera 2.
  • the stereo camera 2 includes two lenses 21 and 22 arranged at a predetermined interval.
  • the stereo camera 2 captures two images at the same timing by receiving the light incident on the left and right lenses 21 and 22 with image sensors.
  • the stereo camera 2 captures images at a predetermined time interval.
  • a first photographed image is generated corresponding to the right lens 21 and a second photographed image is generated corresponding to the left lens 22.
  • FIG. 4 shows one of the first captured image and the second captured image.
  • the position at which the same fish individual appears in the first photographed image and the second photographed image differs slightly, depending on the positions of the lenses 21 and 22.
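That slight positional difference is the stereo disparity from which depth can be recovered. A minimal pinhole-model sketch, where the focal length in pixels and the baseline in meters are illustrative parameters and lens distortion is ignored:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Pinhole stereo model: the same feature point appears at slightly
    different horizontal pixel positions in the two captured images,
    and its depth follows from that disparity."""
    disparity = x_left - x_right  # in pixels
    if disparity <= 0:
        raise ValueError("the feature must appear further left in the left image")
    return focal_px * baseline_m / disparity

# e.g. a 20-pixel disparity with a 1000-pixel focal length and a 0.1 m
# baseline puts the feature point 5 m from the camera.
```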
  • the stereo camera 2 generates several or several tens of captured images per second.
  • the stereo camera 2 sequentially transmits captured images to the analysis device 1.
  • the analysis apparatus 1 associates the acquisition time of the captured image, the captured time, the first captured image, and the second captured image and sequentially records them in the database 104.
  • FIG. 5 is a flowchart showing the information acquisition process of the analyzer 1 (steps S101 to S106).
  • FIG. 6 shows an example of the first input image and the second input image.
  • the analysis apparatus 1 sequentially acquires captured images from the stereo camera 2 via the terminal 3 (S101).
  • the captured image acquisition unit 11 sequentially acquires a combination of the first captured image and the second captured image captured by the stereo camera 2 at the same time.
  • the analysis apparatus 1 sequentially acquires enough photographed images to generate learning data capable of automatically recognizing, in a newly input photographed image, the first rectangular range A1 containing the fish body and the feature points P1, P2, P3, and P4.
  • the captured image acquisition unit 11 gives identification information (ID) to each of the first captured image and the second captured image.
  • the photographed image acquisition unit 11 associates each ID with its photographed image, links the first photographed image and the second photographed image generated at the same time, and records them in the database 104 (S102).
  • the feature designation receiving unit 12 starts processing according to the operation of the worker.
  • the feature designation accepting unit 12 accepts the input of the first rectangular range A1 containing the fish body reflected in the captured image obtained from the stereo camera 2, and of the plurality of feature points P1, P2, P3, and P4 on the fish body (S103).
  • the feature designation receiving unit 12 generates an input application screen including a first input image G1 and a second input image G2 for receiving inputs of the first rectangular range A1 and the feature points P1, P2, P3, and P4 in the captured image designated by the operator, and displays it on the monitor (S104).
  • the feature designation accepting unit 12 may generate, for each of the first photographed image and the second photographed image photographed by the left and right lenses 21 and 22 of the stereo camera 2, an input application screen for receiving inputs of the first rectangular range A1 and the feature points P1, P2, P3, and P4, and display it on the monitor.
  • the feature designation receiving unit 12 displays a first input image G1 indicating a captured image designated by the worker on the input application screen on the monitor.
  • the operator designates the first rectangular range A1 by using an input device such as a mouse so that a fish body is included in the first input image G1.
  • the feature designation receiving unit 12 generates an input application screen showing the second input image G2 in which the first rectangular range A1 is enlarged and displays it on the monitor.
  • the operator designates feature points P1, P2, P3, and P4 for specifying the shape feature of the fish in the second input image G2.
  • the feature points P1, P2, P3, and P4 may be a predetermined circular range including a plurality of pixels.
  • the feature point P1 is a circular range indicating the tip position of the fish mouth.
  • the feature point P2 is a circular range indicating the position of the outer edge of the central notch where the fish's tail fin forks in two.
  • the feature point P3 is a circular range indicating the root position at the front of the fish's dorsal fin.
  • the feature point P4 is a circular range indicating the root position at the front of the fish's ventral (belly) fin.
  • the feature designation receiving unit 12 temporarily stores, in a storage unit such as the RAM 103, the coordinates indicating the first rectangular range A1 designated in the first input image G1 according to the position of the mouse pointer on the input application screen and the operator's click operations, together with the coordinates indicating the circular ranges of the feature points P1, P2, P3, and P4. These coordinates may be determined using a reference position of the captured image (for example, the pixel at the upper left corner of the captured image's rectangular range) as the origin.
  • the feature designation accepting unit 12 links the coordinates of the first rectangular range A1 designated on the input application screen, the coordinates of the circular ranges of the feature points P1, P2, P3, and P4, the ID of the photographed image, and a fish ID for identifying information about the fish body, and records them in the database 104 (S105).
  • the feature designation receiving unit 12 may perform the above-described processing for each of the first captured image and the second captured image.
  • the feature designation receiving unit 12 records in the database 104 a combination of the fish ID identifying information about the fish, the first captured image's ID, and the first rectangular range A1 and feature points P1, P2, P3, and P4 in the first captured image, and likewise a combination of the fish ID identifying information about the same fish, the second captured image's ID, and the first rectangular range A1 and feature points P1, P2, P3, and P4 in the second captured image.
  • a plurality of fish may appear in a photographed image.
  • among the plurality of fish appearing in one captured image, the operator designates the first rectangular range A1 and the feature points P1, P2, P3, and P4 for a fish whose entire body is visible, whereby the feature designation receiving unit 12 acquires those pieces of information and records them in the database 104.
  • the feature designation receiving unit 12 determines whether or not the designation of the photographed image by the operator has been completed (S106). When the operator designates the next photographed image, the feature designation receiving unit 12 repeats the above steps S103 to S105.
  • FIG. 7 is a flowchart showing the learning process of the analysis apparatus 1 (steps S201 to S205).
  • the learning unit 13 starts the learning process in response to the operator's operation (S201).
  • the learning unit 13 selects one fish ID recorded in the database 104 and acquires information associated with the fish ID (S202).
  • This information includes the captured image, the coordinates of the first rectangular range A1, and the coordinates of the circular ranges of the feature points P1, P2, P3, and P4.
  • the learning unit 13 uses the pixel values at the coordinates in the first rectangular range A1 of the captured image and the pixel values at the coordinates in the circular ranges of the feature points P1, P2, P3, and P4 as correct data, and performs machine learning using a convolutional neural network such as AlexNet (S203).
  • the learning unit 13 performs machine learning based on the positions of the feature points P1, P2, P3, and P4 within the first rectangular range A1, the positional relationship among those feature points, the pixel values at the coordinates in the circular ranges of the feature points, the pixel values at the coordinates in the first rectangular range A1, and the like.
  • the learning unit 13 determines whether or not information associated with the next fish ID is recorded in the database 104 (S204). When the next fish ID exists, the learning unit 13 repeats steps S202 to S203 for the fish ID.
  • the learning unit 13 generates first learning data for automatically specifying a rectangular range in which the fish body reflected in the captured image is accommodated. Further, the learning unit 13 generates second learning data for automatically specifying the fish-integrated feature points P1, P2, P3, and P4 shown in the captured image.
  • the first learning data is, for example, data for determining a neural network that outputs a determination result as to whether or not a rectangular range set in a newly acquired captured image is a rectangular range containing only a fish body.
  • the second learning data is, for example, data for determining a neural network that outputs a determination result indicating whether a range set in the captured image contains the feature point P1, contains the feature point P2, contains the feature point P3, contains the feature point P4, or contains none of the feature points P1, P2, P3, and P4.
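One hedged way to picture how the second learning data is applied: candidate ranges within the rectangle are each classified as containing P1, P2, P3, P4, or none of them, and the most confident candidate is kept per feature point. The score format below is an assumption; the patent only specifies the five-way determination result:

```python
def estimate_feature_points(candidates):
    """candidates: list of (center, scores) pairs, where scores maps each
    label in {"P1", "P2", "P3", "P4", "none"} to a confidence value
    (a stand-in for the network determined by the second learning data)."""
    best = {}
    for center, scores in candidates:
        label = max(scores, key=scores.get)
        if label == "none":
            continue  # this range contains no feature point
        if label not in best or scores[label] > best[label][1]:
            best[label] = (center, scores[label])
    # return the best-scoring center per feature point label
    return {label: pair[0] for label, pair in best.items()}
```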
  • the learning unit 13 records the first learning data and the second learning data in the database 104 (S205).
  • the analysis apparatus 1 can thus generate learning data for automatically recognizing the first rectangular range A1 containing the fish body reflected in the photographed image and the plurality of feature points P1, P2, P3, and P4 on the fish body.
  • the learning unit 13 may perform augmentation processing (Data Augmentation) on the captured images that are the correct answer data recorded in the database 104, and generate the first learning data and the second learning data using the many augmented correct answer data. A known method can be used for augmenting the correct data: for example, the Random Crop method, the Horizontal Flip method, the first Color Augmentation method, the second Color Augmentation method, the third Color Augmentation method, or the like.
  • In the Random Crop method, the learning unit 13 resizes a captured image to 256 × 256 pixels and randomly extracts a plurality of 224 × 224 pixel images from the resized image to form new captured images.
  • the learning unit 13 performs the machine learning process described above using a new captured image.
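A sketch of the Random Crop step, with the image represented as a nested list of pixels for illustration; the 256-pixel resize itself is elided (the input is assumed already resized):

```python
import random

def random_crop_images(image, n_crops, resized=256, crop=224, seed=0):
    """Extract random 224x224 sub-images from a 256x256 captured image;
    each sub-image serves as a new captured image for the learning process."""
    rng = random.Random(seed)  # seeded for reproducibility
    crops = []
    for _ in range(n_crops):
        top = rng.randint(0, resized - crop)
        left = rng.randint(0, resized - crop)
        crops.append([row[left:left + crop] for row in image[top:top + crop]])
    return crops
```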
  • In the Horizontal Flip method, the learning unit 13 flips the pixels of the captured image horizontally to obtain a new captured image.
  • the learning unit 13 performs the machine learning process described above using a new captured image.
  • In the first Color Augmentation method, the RGB values of the pixels in a captured image are regarded as a set of three-dimensional vectors, and the analysis apparatus 1 performs principal component analysis (PCA) on those three-dimensional vectors.
  • the learning unit 13 generates noise from a Gaussian distribution and creates a new image by adding, to the pixels of the captured image, noise along the eigenvector directions of the RGB three-dimensional vectors obtained by the principal component analysis.
  • the learning unit 13 performs machine learning processing using a new image.
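The first Color Augmentation method is essentially an AlexNet-style "fancy PCA" color jitter. A sketch, assuming the eigenvectors and eigenvalues of the RGB covariance have already been obtained by principal component analysis (computing them is elided):

```python
import random

def pca_color_augment(pixels, eigvecs, eigvals, sigma=0.1, seed=0):
    """Add Gaussian noise along the RGB eigenvector directions; the same
    shift is applied to every pixel of the captured image."""
    rng = random.Random(seed)
    alphas = [rng.gauss(0.0, sigma) for _ in eigvals]
    shift = [sum(a * lam * vec[ch]
                 for a, lam, vec in zip(alphas, eigvals, eigvecs))
             for ch in range(3)]
    return [tuple(p[ch] + shift[ch] for ch in range(3)) for p in pixels]
```

The noise magnitude sigma = 0.1 mirrors the AlexNet paper's choice but is otherwise arbitrary here.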
  • that is, the learning unit 13 changes the color information of the captured image along the direction (axis) in the color space, determined by principal component analysis of the image's color information, along which the variance of the principal component is maximized.
  • In the second Color Augmentation method, the learning unit 13 randomly changes the contrast, brightness, and RGB values of the pixels of the captured image within a range of, for example, 0.5 to 1.5 times. Thereafter, the learning unit 13 generates a new image by a method similar to the first Color Augmentation method, and performs machine learning processing using the new image.
  • In the third Color Augmentation method, the learning unit 13 corrects captured images of different colors, captured under different imaging environment conditions, to the color of a captured image under reference imaging conditions. Then, the learning unit 13 performs machine learning processing so as to generate the first learning data and the second learning data based on the first rectangular range and the plurality of feature points in the corrected captured images.
  • the color of the captured image may change depending on the shooting location, water quality, season, and weather.
  • when the correct answer data are color images, it is assumed that the analysis device 1 cannot accurately recognize the feature points using the learning data if that learning data is generated based on captured images having different colors.
  • the learning unit 13 acquires, as correct answer data, captured images that are captured under various shooting conditions regarding the shooting location, water quality, season, weather, and the like. Then, when performing learning processing using these captured images, the learning unit 13 performs color correction on the entire captured image so that the water colors in the captured images of all the correct answer data are the same color.
  • the feature point estimation unit 15 of the analysis apparatus 1 stores information relating to color correction (for example, a color correction coefficient) together with imaging conditions. Thereafter, when the feature point estimation unit 15 recognizes a rectangular range including a fish body or a feature point of a fish body from a new photographed image, the feature point estimation unit 15 acquires a combination of a photographing condition and color correction information.
  • the feature point estimation unit 15 selects a shooting condition closest to the shot image from a plurality of shooting conditions, and performs color correction on the shot image using color correction information corresponding to the shooting condition.
  • the feature point estimation unit 15 performs automatic recognition processing using the color-corrected captured image.
  • by unifying the colors of captured images shot under different shooting conditions, the learning unit 13 can virtually unify the shooting conditions of the correct answer data, and can appropriately perform the learning process using that data. For this reason, the analyzer 1 can improve the precision of the automatic recognition processing by the learning data obtained through the learning process.
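The form of the color correction information is not specified in detail; one simple assumed form is a per-channel gain that maps the average water color of a captured image onto the water color under the reference shooting conditions:

```python
def water_color_correction(image, water_rgb, reference_rgb):
    """Scale each RGB channel so this image's water color matches the
    reference water color; the gains are the stored correction info.
    `image` is a flat list of (R, G, B) pixels (an assumption)."""
    gains = [ref / max(cur, 1e-9) for cur, ref in zip(water_rgb, reference_rgb)]
    corrected = [tuple(min(255.0, px[ch] * gains[ch]) for ch in range(3))
                 for px in image]
    return corrected, gains
```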
  • the learning unit 13 may use one of the plurality of augmentation processing methods for the captured image, or may use a plurality of them in combination.
  • rather than combining all of the plurality of augmentation methods at once, the operator evaluates the learning data generated while gradually increasing the number of combined methods, for example one method, then two, then three. If adding an augmentation method does not improve the recognition accuracy for the first rectangular range A1 or the feature points of the photographed image, the worker cancels the adoption of the learning data generated by that combination of methods.
  • the learning unit 13 may store a plurality of photographed images produced by augmenting the correct answer data and, when that set contains images similar to one another, exclude highly similar photographed images from the learning process. For example, the learning unit 13 generates a score (for example, a scalar value or a vector value) for each augmented correct answer image and compares the scores between the captured images. The learning unit 13 determines that one of two captured images having close scores is an unnecessary image. To capture the tendency of the captured images determined to be unnecessary, the learning unit 13 performs principal component analysis on the RGB values of their pixels.
  • the learning unit 13 stores a principal component (eigenvector) calculated by principal component analysis and its threshold value.
  • the learning unit 13 obtains a principal component score (the inner product of the eigenvector and the RGB value) for each pixel of a photographed image newly generated by the augmentation process, and totals the scores.
  • the total value is compared with the threshold value, and if the total exceeds the threshold, it is determined that the newly generated captured image will not be used for the learning process.
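The similarity filter above can be sketched directly from the description: sum the principal component score (inner product of the stored eigenvector with each pixel's RGB value) over an image, and drop any newly generated image whose total exceeds the stored threshold:

```python
def dedup_by_pc_score(images, eigvec, threshold):
    """Keep only augmented images whose total principal-component score
    does not exceed the threshold; the rest are judged too similar."""
    def total_score(img):
        return sum(sum(e * v for e, v in zip(eigvec, px)) for px in img)
    return [img for img in images if total_score(img) <= threshold]
```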
  • FIG. 8 is a flowchart showing the pre-analysis process of the analysis apparatus 1 (steps S801 to S815).
  • the analyzer 1 performs a pre-analysis process before generating statistical information on the size of the fish raised in the fish cage 4.
  • the analysis apparatus 1 receives captured image data generated by the stereo camera 2 during a predetermined time (S801).
  • the analysis apparatus 1 sequentially acquires captured images captured at predetermined time intervals included in the captured image data.
  • the captured image acquisition unit 11 acquires the first captured image and the second captured image captured at the same time.
  • the captured image acquisition unit 11 assigns identification information (ID) to the first captured image and the second captured image, respectively.
  • the captured image acquisition unit 11 associates the first captured image with its ID and the second captured image with its ID, associates the first captured image with the second captured image, and records them in the database 104 as new captured images for the automatic recognition process (S802).
  • the stereo camera 2 ends the shooting after a predetermined shooting time has elapsed since the start of shooting.
  • the predetermined shooting time may be, for example, the time taken for one individual to make one full circuit of the fish cage 4 when the fish to be imaged continuously swim in one direction around the center of the fish cage 4. Note that the predetermined shooting time may be determined in advance.
  • the captured image acquisition unit 11 stops the captured image acquisition process when reception of the captured image data is stopped.
  • the photographed image data may be a photographed image that constitutes moving image data, or may be a photographed image that constitutes still image data.
  • when the captured image acquisition unit 11 acquires moving image data corresponding to the left and right lenses 21 and 22 of the stereo camera 2, it may sequentially acquire, from the plurality of captured images constituting the moving image data, the captured images at predetermined time intervals as targets for automatic recognition of fish feature points.
  • the predetermined time interval may be, for example, a time during which the fish passes from one end to the other end of the rectangular captured image.
  • the analysis apparatus 1 uses the captured images acquired at predetermined time intervals to estimate the feature points of one or a plurality of fish that appear in the captured image.
  • the pre-analysis unit 101 starts the pre-analysis process (S803).
  • the prior analysis unit 101 instructs the learning data acquisition unit 14 to acquire learning data.
  • the learning data acquisition unit 14 acquires the first learning data and the second learning data recorded in the database 104 and sends them to the pre-analysis unit 101.
  • the pre-analysis unit 101 acquires the first pair of a first captured image and a second captured image from the database 104 according to their image IDs (S804).
  • the pre-analysis unit 101 starts automatic recognition processing on the captured image using the neural network specified based on the first learning data, and specifies the second rectangular range A2 (see FIG. 9) that includes a fish body in the captured image (S805).
  • the pre-analysis unit 101 starts automatic recognition processing using the pixels in the second rectangular range A2 and the neural network specified based on the second learning data, and specifies the feature points P1, P2, P3, and P4 in the second rectangular range A2 as circular ranges (S806).
  • the pre-analysis unit 101 may set a third rectangular range A3 by expanding the second rectangular range A2 by, for example, several pixels to several tens of pixels vertically and horizontally with reference to its center coordinates, or by enlarging the size of the second rectangular range A2 by several tens of percent, and may perform the automatic recognition processing using the pixels in the third rectangular range A3 and the neural network specified based on the second learning data.
  • FIG. 9 shows an example of a captured image that has been subjected to the automatic recognition process described above.
  • the pre-analysis unit 101 specifies a second rectangular range A2 that surrounds any of the plurality of fishes shown in the captured image, or a third rectangular range A3 that is an enlargement of the second rectangular range A2.
  • the pre-analysis unit 101 also specifies feature points by estimation processing for a captured image in which a fish head, tail fin, or the like is cut off at the top, bottom, left, or right ends of the captured image.
  • the data discarding unit 17 may detect, based on the coordinates of the feature points, an estimation result that includes a feature point estimated outside the edge of the captured image, and discard the data relating to that estimation result.
  • the pre-analysis unit 101 identifies the circular ranges of the feature points P1, P2, P3, and P4 for each of the first captured image and the second captured image captured at the same time. The first learning data is learning data generated by the learning unit 13 and adjusted by machine learning so that the fish shown in the first captured image and the fish shown in the second captured image are recognized as the same individual. This makes it possible to generate learning data for specifying the second rectangular range A2 indicating the same fish body in the two captured images acquired from the stereo camera 2. Further, the pre-analysis unit 101 specifies the second rectangular range A2 that surrounds the same individual fish shown in each of the first captured image and the second captured image.
  • the pre-analysis unit 101 generates a fish ID for the fish included in the second rectangular range A2 specified in each of the first captured image and the second captured image, and records the fish ID and the representative coordinates (for example, the center coordinates) of the circular ranges of the feature points P1, P2, P3, and P4 specified in the captured images in the database 104 as the result of automatic recognition of fish feature points (S807).
  • the pre-analysis unit 101 determines whether the second rectangular range A2 including other fish bodies or the third rectangular range A3 obtained by enlarging the second rectangular range A2 can be specified in the same captured image (S808). If the second rectangular range A2 or the third rectangular range A3 including other fish can be specified in the same captured image, the pre-analysis unit 101 repeats steps S805 to S807 described above. When the second rectangular range A2 or the third rectangular range A3 including another fish cannot be specified, the pre-analysis unit 101 determines whether the processing of the number of photographed images necessary for the pre-analysis process has been completed (S809).
  • the number of captured images required for the pre-analysis process may be, for example, a predetermined ratio of the number of captured images included in the captured image data acquired in step S801, or the number of captured images included in a predetermined amount of data.
  • for example, the pre-analysis unit 101 identifies the captured images included in the first 5 minutes of the moving image data as the targets of the pre-analysis process.
  • the pre-analysis unit 101 repeats steps S804 to S808 when the processing of the number of photographed images necessary for the pre-analysis process has not been completed.
  • the feature point estimation unit 15 starts generation of determination material information for starting the main analysis when the processing of the number of captured images necessary for the pre-analysis processing is completed (S810).
  • the pre-analysis unit 101 acquires the information of the automatic recognition processing result repeatedly recorded in the database 104 in step S807 when generating the judgment material information (S811).
  • the pre-analysis unit 101 counts the number of distinct fish IDs included in the information of the automatic recognition processing results.
  • the pre-analysis unit 101 may input the number of captured images used for the pre-analysis process and the number of fish IDs indicating the fish identified from those captured images into a specific-degree calculation formula, and calculate the specific degree of fish in the captured images.
  • the pre-analysis unit 101 generates determination material information for starting the main analysis, including a specific degree (or specific frequency) indicating the number of fish identified according to the number of fish IDs (S812).
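The specific-degree calculation formula itself is not given in the text; one plausible minimal reading, assumed here for illustration, is the average number of identified individuals per captured image:

```python
def specific_degree(num_images, num_fish_ids):
    """Hypothetical specific-degree formula: average number of fish
    individuals identified per captured image used in the pre-analysis.
    The patent leaves the concrete formula unspecified."""
    if num_images == 0:
        return 0.0
    return num_fish_ids / num_images
```

The resulting value would then be packaged into the determination material information sent to the terminal 5 in step S813.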
  • the pre-analysis unit 101 transmits the determination material information to the service providing destination terminal 5 predetermined in correspondence with the stereo camera 2 that is the transmission source of the captured image data received in step S801 (S813).
  • the user who monitors the fish swimming in the fish cage 4 using the service providing destination terminal 5 determines whether to request the start of the main analysis based on the specific degree, such as the number of fish identified, included in the determination material information.
  • the user inputs a main analysis start instruction to the terminal 5 when requesting the start of the main analysis.
  • the terminal 5 transmits this analysis start instruction to the analysis apparatus 1.
  • upon receiving the main analysis start instruction, the pre-analysis unit 101 of the analysis apparatus 1 instructs the feature point estimation unit 15 to start the main analysis (S815).
  • the main analysis start instruction includes information on the stereo camera 2 that is the transmission source of the captured images subjected to the pre-analysis, and identification information (ID) of the user who monitors the fish of the fish cage 4 in which the stereo camera 2 is installed.
  • the user of the terminal 5 can thus know, before starting the main analysis, the specific degree of fish bodies in the captured images of aquatic life such as the fish raised in the fish cage 4.
  • the user instructs the analysis device 1 to start the main analysis according to the specific degree, and the analysis device 1 can generate monitoring report information including statistical information on the size of aquatic organisms such as the fish in the fish cage 4.
  • instead of the number of fish IDs, the pre-analysis unit 101 may determine a statistical value of the color of the captured image from the RGB values of its pixels, and calculate a numerical value indicating the specific degree of fish bodies based on the coordinates, in the color space, of that RGB statistical value. For example, when the specific-degree value calculated from the RGB statistics of the captured image falls within a predetermined range indicating that the water quality or the brightness of the captured image is sufficient, it can be determined that the environment allows the morphological characteristics of aquatic life such as fish to be sufficiently specified from the captured image. In this case, the pre-analysis unit 101 can input statistical values of the RGB pixel values of one or more captured images into a specific-degree calculation formula to calculate a numerical value indicating the specific degree of fish bodies.
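A minimal sketch of this color-based variant, assuming the statistic is the per-channel mean of the pixel RGB values and the "predetermined range" is a simple per-channel interval (both are assumptions; the patent leaves the formula open):

```python
import numpy as np

def rgb_statistic(images):
    """Mean RGB value over the pixels of one or more captured images
    (one plausible choice of 'statistical value of the color')."""
    stacked = np.concatenate([im.reshape(-1, 3) for im in images])
    return stacked.mean(axis=0)

def within_acceptable_range(mean_rgb, lo, hi):
    """True when every channel's statistic falls inside the range taken
    to indicate sufficient water quality / brightness. The bounds lo
    and hi are placeholders, not values from the patent."""
    return bool(np.all((mean_rgb >= lo) & (mean_rgb <= hi)))
```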
  • FIG. 10 is a flowchart showing the main analysis process of the analyzer 1 (steps S901 to S914).
  • when the feature point estimation unit 15 receives the main analysis start instruction from the pre-analysis unit 101, it instructs the learning data acquisition unit 14 to acquire learning data.
  • the learning data acquisition unit 14 acquires the first learning data and the second learning data recorded in the database 104 and sends them to the feature point estimation unit 15.
  • the feature point estimation unit 15 acquires the first pair of a first captured image and a second captured image from the database 104 according to the image ID, the ID of the stereo camera 2, and the user identification information (user ID) (S901).
  • the feature point estimation unit 15 starts automatic recognition processing using the neural network specified based on the first learning data for the captured image, and specifies the second rectangular range A2 including the fish body in the captured image. (S902).
  • the processing of the feature point estimation unit 15 will be described on the assumption that the learning unit 13 has performed the learning process using captured images corrected, by the third Color Augmentation method, to the colors under the standard imaging condition.
  • the feature point estimation unit 15 corrects the captured image acquired in step S901 in the same manner as the third Color Augmentation method, and estimates the feature points of the aquatic life shown in the corrected captured image. This improves the accuracy of automatic recognition of the feature points of aquatic organisms.
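The correction step might look like the following sketch. The actual "third Color Augmentation method" transform is defined elsewhere in the patent; a per-channel gain that maps the image's mean RGB to the mean observed under the standard imaging condition is assumed here purely for illustration.

```python
import numpy as np

def correct_to_standard(image, standard_mean):
    """Scale each RGB channel so the image's per-channel mean matches
    the mean under the standard imaging condition. standard_mean is a
    length-3 array; this gain-based transform is an assumption, not the
    patent's actual correction."""
    img = image.astype(float)
    current_mean = img.reshape(-1, 3).mean(axis=0)
    gains = standard_mean / np.maximum(current_mean, 1e-9)  # avoid /0
    return np.clip(img * gains, 0.0, 255.0)
```

The key point is only that the inference-time image is brought into the same color distribution the network was trained on.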
  • the feature point estimation unit 15 starts automatic recognition processing using the pixels in the second rectangular range A2 and the neural network specified based on the second learning data, and specifies the circular ranges of the feature points P1, P2, P3, and P4 in the second rectangular range A2 (S903).
  • the third rectangular range A3 may be set by expanding the second rectangular range A2 by several pixels to several tens of pixels vertically and horizontally with reference to its center coordinates, or by enlarging its size by several tens of percent. That is, the feature point estimation unit 15 may identify the circular ranges of the feature points P1, P2, P3, and P4 by performing automatic recognition processing on the pixels in the third rectangular range A3. Using the third rectangular range A3 can improve the recognition accuracy of the circular ranges of the feature points P1, P2, P3, and P4.
  • the feature point estimation unit 15 specifies the circular range of the feature points P1, P2, P3, and P4 for each of the first captured image and the second captured image captured at the same time.
  • the first learning data is learning data generated by the learning unit 13 and adjusted by machine learning so that the fish shown in the first captured image and the fish shown in the second captured image indicate the same individual.
  • the feature point estimation unit 15 specifies the second rectangular range A2 containing the fish body of the same individual shown in each of the first captured image and the second captured image.
  • the feature point estimation unit 15 attaches a fish ID to the fish included in the second rectangular range A2 specified for each of the first captured image and the second captured image, and records the fish ID and the representative coordinates of the feature points P1, P2, P3, and P4 identified in the captured images in the database 104 as the result of the automatic recognition processing of fish feature points (S904).
  • the feature point estimation unit 15 determines whether the second rectangular range A2 or the third rectangular range A3 including other fish can be specified in the same captured image (S905).
  • the feature point estimation unit 15 repeats steps S902 to S904 when the second rectangular range A2 or the third rectangular range A3 including other fish can be specified.
  • the feature point estimation unit 15 determines whether the image ID of an unprocessed captured image to be used for the next automatic recognition process is recorded in the database 104 (S906). If such an image ID is recorded in the database 104, the feature point estimation unit 15 repeats steps S901 to S905. Otherwise, the feature point estimation unit 15 ends the automatic recognition process.
  • FIG. 11 shows an example of an automatic recognition image based on the result of automatic recognition processing.
  • the feature point estimation unit 15 specifies the second rectangular range A2-R1 or the third rectangular range A3-R1 in the first captured image (for example, an image captured by the right lens 21).
  • the feature point estimation unit 15 identifies the feature points P1-R1, P2-R1, P3-R1, and P4-R1 in the second rectangular range A2-R1 or the third rectangular range A3-R1.
  • the feature point estimation unit 15 specifies the second rectangular range A2-L1 or the third rectangular range A3-L1 in the second captured image (for example, an image captured by the left lens 22).
  • the feature point estimation unit 15 specifies the feature points P1-L1, P2-L1, P3-L1, and P4-L1 in the second rectangular range A2-L1 or the third rectangular range A3-L1.
  • FIG. 12 shows another example of the automatic recognition image based on the result of the automatic recognition processing.
  • the feature point estimation unit 15 also identifies feature points of other fish that appear in the first captured image. Specifically, the feature point estimation unit 15 specifies another second rectangular range A2-R2 or another third rectangular range A3-R2 in the first captured image. The feature point estimation unit 15 identifies the feature points P1-R2, P2-R2, P3-R2, and P4-R2 in the second rectangular range A2-R2 or the third rectangular range A3-R2. The feature point estimating unit 15 further specifies the second rectangular range A2-R3 or another third rectangular range A3-R3 in the first captured image. The feature point estimation unit 15 identifies the feature points P1-R3, P2-R3, P3-R3, and P4-R3 in the second rectangular range A2-R3 or the third rectangular range A3-R3.
  • the feature point estimation unit 15 also specifies feature points of other fish that are reflected in the second captured image. Specifically, the feature point estimation unit 15 specifies another second rectangular range A2-L2 or another third rectangular range A3-L2 in the second captured image. The feature point estimation unit 15 specifies the feature points P1-L2, P2-L2, P3-L2, and P4-L2 in the second rectangular range A2-L2 or the third rectangular range A3-L2. In addition, the feature point estimation unit 15 further specifies the second rectangular range A2-L3 or another third rectangular range A3-L3 in the second captured image. The feature point estimation unit 15 identifies the feature points P1-L3, P2-L3, P3-L3, and P4-L3 in the second rectangular range A2-L3 or the third rectangular range A3-L3.
  • the feature point estimation unit 15 records the feature points of the fish included in the captured image and the information related to the second rectangular range A2 and the third rectangular range A3 in the database 104 in association with the fish ID.
  • the output unit 19 may display an automatic recognition image (FIG. 12) based on the result of the automatic recognition processing on the monitor of the terminal 3 (or terminal 5) used by the operator. In this case, in the first captured image and the second captured image corresponding to the image ID selected by the operator, the second rectangular range A2 and the third rectangular range A3, each including the corresponding fish body, and the feature points P1, P2, P3, and P4 are displayed on the monitor.
  • for example, the output unit 19 may set the frame color of the second rectangular range A2 and the third rectangular range A3 containing fish bodies of the same individual to the same color, or may set a different color for each individual fish, and display them on the monitor.
  • the feature point estimator 15 instructs the size estimator 18 to start the fish size estimation process when the automatic recognition process of the fish feature points has been completed for all the images to be automatically recognized.
  • the size estimation unit 18 reads, from the result of the automatic recognition process of fish feature points, the representative coordinates of the feature points P1, P2, P3, and P4 extracted from the first captured image associated with an unselected fish ID, and the representative coordinates of the feature points P1, P2, P3, and P4 extracted from the second captured image (S907).
  • the size estimation unit 18 calculates the three-dimensional coordinates in three-dimensional space corresponding to the feature points P1, P2, P3, and P4 using a known three-dimensional coordinate conversion method such as the DLT (Direct Linear Transformation) method (S908).
  • based on the three-dimensional coordinates of the feature points P1, P2, P3, and P4, the size estimation unit 18 calculates the fork length, which connects the three-dimensional coordinates corresponding to the feature point P1 and those corresponding to the feature point P2, and the body height, which connects the three-dimensional coordinates corresponding to the feature point P3 and those corresponding to the feature point P4 (S909). The size estimation unit 18 then calculates the weight of the fish by substituting the fork length and the body height into a weight calculation formula that takes the fork length and the body height as variables (S910).
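Steps S909 and S910 can be sketched as below. The Euclidean distances between the triangulated 3-D feature points give the fork length (P1-P2) and body height (P3-P4); the weight calculation formula is assumed here to be a simple power-law-style fit in those two variables with a placeholder coefficient `k`, since the patent does not disclose the actual formula.

```python
import math

def distance3d(p, q):
    """Euclidean distance between two 3-D points."""
    return math.dist(p, q)

def estimate_size(p1, p2, p3, p4, k=0.02):
    """Fork length joins P1 (mouth) and P2 (tail-fin fork); body height
    joins P3 (back) and P4 (belly). The weight formula and the
    coefficient k are assumptions, not the patent's actual fit."""
    fork_length = distance3d(p1, p2)
    body_height = distance3d(p3, p4)
    weight = k * fork_length * fork_length * body_height  # assumed form
    return fork_length, body_height, weight
```

A production system would calibrate the weight formula against weighed specimens of the target species.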
  • the size estimation unit 18 determines whether all fish IDs in the result of the automatic recognition processing of fish feature points have been selected and their fish sizes calculated (S911). If not, the size estimation unit 18 repeats steps S907 to S910.
  • the report information generation unit 102 calculates statistical information on the fish raised in the fish cage 4 based on the fork length, body height, and weight corresponding to each fish ID (S912).
  • the report information generation unit 102 generates monitoring report information indicating the fork length, body height, weight, and statistical information corresponding to each fish ID (S913).
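The statistical information of step S912 could be computed as in this sketch. Which statistics appear in the monitoring report is not specified, so mean, standard deviation, minimum, and maximum are assumed here.

```python
import statistics

def summarize(values):
    """Summary statistics per measured quantity (fork length, body
    height, or weight) across all fish IDs. The choice of statistics
    is an assumption; the patent only says 'statistical information'."""
    return {
        "mean": statistics.mean(values),
        "stdev": statistics.pstdev(values),  # population std deviation
        "min": min(values),
        "max": max(values),
    }
```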
  • the output unit 19 transmits the monitoring report information to the service providing destination terminal 5 (S914).
  • the user of the service providing destination terminal 5 checks the fork length, body height, weight, and statistical information included in the monitoring report information, confirms the state of the fish raised in the fish cage 4, and determines the shipping time.
  • a feature point that identifies the shape feature of an aquatic organism such as a fish reflected in a captured image is estimated by automatic recognition processing using the first learning data and the second learning data.
  • because the analysis device 1 generates the first learning data and the second learning data in advance, it can perform automatic recognition processing using the learning data and identify fish feature points with high accuracy, without recording template images of the many fish subject to recognition processing in the database.
  • the analysis apparatus 1 can generate statistical information on the size of the fish based on the characteristics of a large number of fish reflected in the captured image. Thereby, the user who receives a service for providing statistical information on aquatic organisms such as fish can know the growing state of the aquatic organisms to be monitored every time the statistical information is acquired.
  • the pre-analysis process may be performed by providing the stereo camera 2 or the terminal 3 with the pre-analysis processing unit 101.
  • the determination material information for starting the main analysis generated by the pre-analysis process is provided to the user before charging for the statistical information providing service. For this reason, the user may receive the determination material information several times free of charge.
  • the same individual specifying unit 16 recognizes the fish of the same individual shown in the first captured image and the second captured image. Specifically, in response to a request from the feature point estimation unit 15, the same individual specifying unit 16 acquires from the feature point estimation unit 15 the coordinates of the second rectangular range A2 specified in each of the first captured image and the second captured image.
  • the same individual specifying unit 16 determines whether the range over which the second rectangular range A2 specified from the first captured image and the second rectangular range A2 specified from the second captured image overlap each other is a predetermined threshold (for example, 70%) or more.
  • the same individual specifying unit 16 may specify the combination of second rectangular ranges A2 with the widest overlapping range between the first captured image and the second captured image, and determine that the fish bodies shown in the second rectangular ranges A2 of that combination are the same individual.
  • the same individual specifying unit 16 may also determine that the fish bodies shown in the second rectangular ranges A2 of the first captured image and the second captured image are the same individual based on the positional deviation between the feature points identified from the first captured image and those identified from the second captured image. Specifically, the same individual specifying unit 16 calculates the positional deviation of each feature point specified from the second rectangular range A2 of the first captured image from the corresponding feature point specified from the second rectangular range A2 of the second captured image. When the positional deviation is less than a predetermined value, the same individual specifying unit 16 determines that the fish shown in the two second rectangular ranges A2 are the same individual.
  • alternatively, the same individual specifying unit 16 calculates the area of the fish body occupying the second rectangular range A2 selected in each of the first captured image and the second captured image. If the difference between the areas of the fish occupying the two second rectangular ranges A2 is within a predetermined threshold (for example, 10%), the same individual specifying unit 16 determines that the fish shown in the second rectangular ranges A2 are the same individual.
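The overlap-based criterion can be sketched as follows, assuming the A2 ranges are axis-aligned boxes given as (x1, y1, x2, y2) and the overlap is measured relative to one box's area; the patent states only a threshold "for example, 70%" and does not fix the exact ratio definition.

```python
def overlap_ratio(box_a, box_b):
    """Ratio of the intersection area to box_a's area. Boxes are
    axis-aligned (x1, y1, x2, y2) tuples; the choice of normalising by
    box_a's area (rather than, say, the union) is an assumption."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    iy = max(0, min(ay2, by2) - max(ay1, by1))   # intersection height
    area_a = (ax2 - ax1) * (ay2 - ay1)
    return (ix * iy) / area_a if area_a else 0.0

def same_individual(box_a, box_b, threshold=0.7):
    """Two A2 ranges are taken to show the same individual when their
    overlap meets the threshold (70% in the patent's example)."""
    return overlap_ratio(box_a, box_b) >= threshold
```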
  • the processing of the analysis apparatus 1 has been described above for the case of estimating the feature points of fish.
  • the aquatic organisms are not limited to fish and may be other aquatic organisms (for example, squid, dolphins, jellyfish, etc.). That is, the analyzer 1 may estimate feature points according to the predetermined aquatic organism.
  • the data discarding unit 17 may discard the estimated values when the estimated values of fork length, body height, and weight calculated by the size estimating unit 18 meet a predetermined condition under which they can be determined to be inaccurate. For example, when an estimated value is not included in the range of "mean of the estimated values ± 2 × standard deviation", the data discarding unit 17 may determine that the estimated value is not accurate.
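The mean ± 2σ rule reads as a standard outlier test; a minimal sketch (using the population standard deviation, which is an assumption):

```python
import statistics

def is_outlier(value, values):
    """True when the estimate falls outside mean ± 2 × standard
    deviation of the batch of estimates, the condition under which the
    data discarding unit judges it inaccurate."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return not (mu - 2 * sigma <= value <= mu + 2 * sigma)
```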
  • when the positional relationship of the feature points P1, P2, P3, and P4 obtained as a result of automatic recognition processing of an aquatic organism such as a fish deviates significantly from the average positional relationship of the feature points, or from reference positional relationship information registered in advance, the data discarding unit 17 may discard the information on the feature points P1, P2, P3, and P4. For example, when the belly-fin feature point P4 is positioned above the fork-length line connecting the feature points P1 and P2, the data discarding unit 17 discards the information on the feature points P1, P2, P3, and P4 obtained as a result of the automatic recognition processing. Likewise, when the ratio between the fork length and the body height deviates by more than 20% from the average value or the reference value, the data discarding unit 17 discards the information on the feature points P1, P2, P3, and P4.
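The two geometric checks can be sketched as below, assuming image coordinates with y increasing downward (so "above the fork-length line" means a smaller y at P4's x) and a 20% tolerance on the fork-length-to-body-height ratio; the reference ratio is a placeholder, not a value from the patent.

```python
def belly_fin_above_fork_line(p1, p2, p4):
    """True when the belly-fin point P4 lies above the fork-length line
    P1-P2 in image coordinates (y grows downward). Points are (x, y)
    tuples; assumes p1 and p2 do not share the same x."""
    x1, y1 = p1
    x2, y2 = p2
    x4, y4 = p4
    y_line = y1 + (y2 - y1) * (x4 - x1) / (x2 - x1)  # line's y at x4
    return y4 < y_line

def ratio_deviates(fork_length, body_height, reference_ratio, limit=0.2):
    """True when the fork-length / body-height ratio deviates by more
    than 20% from the reference ratio, triggering a discard."""
    ratio = fork_length / body_height
    return abs(ratio - reference_ratio) / reference_ratio > limit
```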
  • the data discarding unit 17 may store a reference score value for a predetermined condition under which information can be determined to be inaccurate, calculate a score value for the result of automatic recognition processing according to that condition, and automatically discard the information on the feature points P1, P2, P3, and P4 when the score value is equal to or above (or equal to or below) the reference score value.
  • the data discarding unit 17 may display on the monitor confirmation information including the result of the automatic recognition processing for which data discarding has been determined, and discard that result when it receives an operation accepting the data discarding from the operator.
  • FIG. 13 shows the minimum configuration of the analyzer 1.
  • the analysis device 1 includes a captured image acquisition unit 11 and a pre-analysis unit 101.
  • the captured image acquisition unit 11 acquires a captured image of aquatic organisms from an imaging device such as the stereo camera 2, and the pre-analysis unit 101 generates analysis start determination material information based on the specific degree of the aquatic organisms in the captured image.
  • the analysis apparatus 1 may include a learning data acquisition unit 14 and a feature point estimation unit 15.
  • the learning data acquisition unit 14 acquires learning data generated by machine learning based on a captured image of aquatic organisms and feature points for specifying the shape features of the aquatic organisms shown in the captured image.
  • the feature point estimation unit 15 estimates a feature point that identifies the shape feature of the aquatic organism reflected in the captured image by automatic recognition processing using the learning data.
  • the analysis apparatus 1 has a computer system inside. The above-described processing steps are stored as a computer program in a computer-readable storage medium, and the computer reads out and executes the computer program to realize the above-described processing.
  • the computer-readable storage medium means a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, a semiconductor memory, or the like.
  • the computer program may be distributed to the computer via a communication line so that the computer executes the computer program.
  • the above computer program may realize part of the functions of the analysis device 1 described above. It may also be a difference file (difference program) that realizes the above-described functions in combination with a program already recorded in the computer system.
  • the present invention analyzes and monitors the shape characteristics of aquatic organisms such as fish raised in a fish cage or the like and provides a monitoring report to the user; however, the aquatic organisms are not limited to fish and may be other aquatic organisms.
  • the monitoring target is not limited to marine products in a fish cage; for example, it is also possible to analyze and monitor the shape characteristics of aquatic organisms in the open sea.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Environmental Sciences (AREA)
  • Animal Husbandry (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Zoology (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Mining & Mineral Resources (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Agronomy & Crop Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Farming Of Fish And Shellfish (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns an analysis device that acquires a captured image of aquatic life and generates analysis-start decision material information based on the occupancy ratio of the aquatic life in the captured image. The analysis device further generates learning data by performing learning processing on multiple feature points that specify shape features of the aquatic life in the captured image, and, by automatic recognition processing using the learning data, estimates multiple feature points that specify shape features of the aquatic life in the captured image.
PCT/JP2019/015417 2018-04-13 2019-04-09 Analysis device and analysis method WO2019198701A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020513402A JP7006776B2 (ja) 2018-04-13 2019-04-09 Analysis device, analysis method, program, and underwater organism monitoring system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-077853 2018-04-13
JP2018077853 2018-04-13

Publications (1)

Publication Number Publication Date
WO2019198701A1 true WO2019198701A1 (fr) 2019-10-17

Family

ID=68164180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/015417 WO2019198701A1 (fr) 2018-04-13 2019-04-09 Analysis device and analysis method

Country Status (2)

Country Link
JP (1) JP7006776B2 (fr)
WO (1) WO2019198701A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021112798A1 (fr) * 2019-12-03 2021-06-10 Yonga Teknoloji Mikroelektronik Arge Tic. Ltd. Sti. Fish counting machine and system
WO2021149816A1 (fr) * 2020-01-23 2021-07-29 ソフトバンク株式会社 Estimation program, estimation method, and information processing device
JP2021125057A (ja) * 2020-02-07 2021-08-30 株式会社電通 Fish quality determination system
JP2021158994A (ja) * 2020-03-31 2021-10-11 中国電力株式会社 Server for managing plankton distribution information, user terminal, and system comprising the server and the user terminal
GR1010400B (el) * 2022-04-06 2023-02-03 Ελληνικο Κεντρο Θαλασσιων Ερευνων (Ελ.Κε.Θε.), Method and system for non-invasive measurement of farmed fish size

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001242158A (ja) * 2000-02-28 2001-09-07 Toshiba Corp Water quality monitoring method and monitoring device
JP2015099559A (ja) * 2013-11-20 2015-05-28 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP2016162072A (ja) * 2015-02-27 2016-09-05 株式会社東芝 Feature quantity extraction device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021112798A1 (fr) * 2019-12-03 2021-06-10 Yonga Teknoloji Mikroelektronik Arge Tic. Ltd. Sti. Fish counting machine and system
WO2021149816A1 (fr) * 2020-01-23 2021-07-29 ソフトバンク株式会社 Estimation program, estimation method, and information processing device
JP2021117590A (ja) * 2020-01-23 2021-08-10 ソフトバンク株式会社 Estimation program, estimation method, and information processing device
JP2021125057A (ja) * 2020-02-07 2021-08-30 株式会社電通 Fish quality determination system
JP7337002B2 (ja) 2020-02-07 2023-09-01 株式会社電通 Fish quality determination system
EP4101313A4 (fr) * 2020-02-07 2024-03-13 Dentsu Inc. Fish quality determination system
JP2021158994A (ja) * 2020-03-31 2021-10-11 中国電力株式会社 Server for managing plankton distribution information, user terminal, and system comprising the server and the user terminal
JP7560044B2 (ja) 2020-03-31 2024-10-02 中国電力株式会社 Server for managing plankton distribution information, user terminal, and system comprising the server and the user terminal
GR1010400B (el) * 2022-04-06 2023-02-03 Ελληνικο Κεντρο Θαλασσιων Ερευνων (Ελ.Κε.Θε.), Method and system for non-invasive measurement of farmed fish size

Also Published As

Publication number Publication date
JPWO2019198701A1 (ja) 2021-05-13
JP7006776B2 (ja) 2022-01-24

Similar Documents

Publication Publication Date Title
WO2019198701A1 (fr) Analysis device and analysis method
WO2019198611A1 (fr) Feature estimation device and feature estimation method
CN110493527B (zh) Subject focusing method and device, electronic device, and storage medium
EP3496383A1 (fr) Image processing method, apparatus and device
JP5762211B2 (ja) Image processing apparatus, image processing method, and program
JP2015197745A (ja) Image processing apparatus, imaging apparatus, image processing method, and program
EP2662833B1 (fr) Light source data processing device, method and program
JP7207561B2 (ja) Size estimation device, size estimation method, and size estimation program
CN108124142A (zh) Image target recognition system and method based on an RGB depth camera and a hyperspectral camera
JP2020194454A (ja) Image processing apparatus, image processing method, program, and storage medium
JP2006301962A (ja) Image processing method and image processing apparatus
US20160044295A1 (en) Three-dimensional shape measurement device, three-dimensional shape measurement method, and three-dimensional shape measurement program
JP7116925B2 (ja) Operation method of observation device, observation device, and program
CN110738698A (zh) Floating-type seabed data measurement method and device, and electronic device
JP7057086B2 (ja) Image processing apparatus, image processing method, and program
US8538142B2 (en) Face-detection processing methods, image processing devices, and articles of manufacture
WO2022171267A1 (fr) System, method and computer-executable code for quantifying organisms
JP5616743B2 (ja) Imaging device and image processing method
JP2023019521A (ja) Learning method, program, and image processing device
CN113066121A (zh) Image analysis system and method for identifying duplicated cells
JP7338726B1 (ja) Moving object measurement method
JP6776532B2 (ja) Image processing device, imaging device, electronic apparatus, and image processing program
JP6521763B2 (ja) Image processing apparatus, imaging apparatus, image processing method, program, and recording medium
WO2022213288A1 (fr) Depth image processing method and apparatus, and storage medium
JP2019045993A (ja) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19785514

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020513402

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19785514

Country of ref document: EP

Kind code of ref document: A1