WO2022137748A1 - Target Detection Device, Machine Learning Execution Device, Target Detection Program, and Machine Learning Execution Program
- Publication number: WO2022137748A1 (application PCT/JP2021/037947)
- Authority: WIPO (PCT)
- Prior art keywords: image, subject, learning, light, wavelength
Classifications
- G01N 21/27 — Colour; spectral properties, using photo-electric detection; circuits for computing concentration
- G06V 10/143 — Sensing or illuminating at different wavelengths
- A22C 17/008 — Contactless quality measurement of portioned meat, e.g. to determine further processing
- A22C 21/00 — Processing poultry
- G01N 21/6456 — Spatially resolved fluorescence measurements; imaging
- G01N 21/8851 — Scan or image signal processing for detecting flaws or contamination
- G01N 21/94 — Investigating contamination, e.g. dust
- G01N 23/04 — Imaging by transmitting wave or particle radiation, e.g. X-rays, through the material
- G06T 7/0004 — Industrial image inspection
- G06V 10/60 — Extraction of image features relating to illumination properties
- G06V 10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V 20/52 — Surveillance or monitoring of activities
- G06V 20/68 — Food, e.g. fruit or vegetables
- G01N 2021/1765 — Method using an image detector and processing of image signal
- G01N 2021/845 — Objects on a conveyor
- G01N 2021/8883 — Signal processing involving the calculation of gauges, generating models
- G01N 2021/8887 — Signal processing based on image processing techniques
- G01N 2021/945 — Liquid or solid deposits of macroscopic size on surfaces
- G06T 2207/10024 — Color image
- G06T 2207/10152 — Varying illumination
- G06T 2207/20081 — Training; learning
- G06T 2207/20084 — Artificial neural networks [ANN]
- G06T 2207/30128 — Food products (industrial image inspection)
- G06V 2201/07 — Target detection
Definitions
- The present invention relates to a target detection device, a machine learning execution device, a target detection program, and a machine learning execution program.
- When handling food in factories and similar facilities, a technique may be required for detecting specific parts contained in the food, foreign substances mixed into the food, and the like.
- As an example of such a technique, the apparatus disclosed in Patent Document 1 can be mentioned.
- This device inspects an object or article. It comprises means for irradiating a line across the object or article, and observation means that observes all or an extension of the line and detects the narrow-band radiation emitted, under that excitation, from a region of the object or article. The observation system includes a narrow band-pass filter that blocks virtually everything except a narrow frequency band within a particular angle of incidence.
- Patent Document 1 discloses that this device can be applied to inspection (using ultraviolet rays) of the freshness of chopped fish or the presence of bones.
- However, because the above-mentioned device irradiates an article or the like with only one type of light having a specific spectrum, such as ultraviolet rays, it may not be able to appropriately detect a specific part contained in the article or a foreign substance mixed into the article.
- For example, when an article is irradiated with X-rays to take an X-ray image in an attempt to detect an object contained in the article, if the difference between the density of the object and the density of its surroundings is small, the brightness contrast between the region where the object is depicted and the surrounding region of the X-ray image becomes small, and the object may not be detected properly.
- Similarly, when an article is irradiated with excitation light to take a fluorescence image in an attempt to detect an object contained in the article, if the article also contains something other than the object that emits fluorescence, the object may not be detected accurately.
- The present invention has been made in view of the above circumstances, and aims to provide a target detection device, a machine learning execution device, a target detection program, and a machine learning execution program capable of accurately detecting an object contained in a subject.
- One aspect of the present invention is a target detection device comprising: a first image acquisition unit that acquires a first image generated by irradiating a subject with light belonging to a first wavelength group and depicting the subject; a second image acquisition unit that acquires a second image generated by irradiating the subject with light that belongs to a second wavelength group different from the first wavelength group and has a wavelength different from the light irradiated to the subject to generate the first image, the second image depicting the subject; and a target detection unit that detects a target contained in the subject using the first image and the second image.
- One aspect of the present invention is the above target detection device, wherein the target detection unit detects the target by detecting a first element depicted in the first image with a brightness exceeding a predetermined first brightness, and a second element, different from the first element, depicted in the second image with a brightness exceeding a predetermined second brightness.
- One aspect of the present invention is the above target detection device, wherein the first image acquisition unit acquires the first image generated by irradiating the subject with light having a wavelength at which the reflectance when the first element is irradiated exceeds a predetermined first reflectance.
- One aspect of the present invention is the above target detection device, wherein the second image acquisition unit acquires the second image generated by irradiating the subject with light having a wavelength at which the reflectance when the second element is irradiated exceeds a predetermined second reflectance.
- One aspect of the present invention is the above target detection device, wherein the target detection unit selects a first point based on a predetermined first rule for each region in which the first element is depicted in the first image, selects a second point based on a predetermined second rule for each region in which the second element is depicted in the second image, and detects as the target the combination of the first element and the second element giving a first point and a second point for which the length of the line segment connecting the two points is less than a predetermined length.
- One aspect of the present invention is the above target detection device, wherein the first image acquisition unit acquires, as the first image, an image in which each pixel represents a value calculated based on the brightness of each pixel included in a first main image, taken by detecting light that has a first main wavelength belonging to the first wavelength group, is irradiated onto the subject, and is reflected by the subject, the first main image depicting the subject, and the brightness of each pixel included in a first sub-image, taken by detecting light that has a first sub-wavelength belonging to the first wavelength group, is irradiated onto the subject, and is reflected by the subject, the first sub-image depicting the subject.
- One aspect of the present invention is the above target detection device, wherein the second image acquisition unit acquires, as the second image, an image in which each pixel represents a value calculated based on the brightness of each pixel included in a second main image, taken by detecting light that has a second main wavelength belonging to the second wavelength group, is irradiated onto the subject, and is reflected by the subject, the second main image depicting the subject, and the brightness of each pixel included in a second sub-image, taken by detecting light that has a second sub-wavelength belonging to the second wavelength group, is irradiated onto the subject, and is reflected by the subject, the second sub-image depicting the subject.
- One aspect of the present invention is the above target detection device, wherein the first image acquisition unit acquires, as the first image, an image in which each pixel represents a value calculated based on the brightness of each pixel included in a first main image, taken by detecting light that has the first main wavelength and is emitted from the subject when the subject is irradiated with light belonging to the first wavelength group, the first main image depicting the subject, and the brightness of each pixel included in a first sub-image, taken by detecting light that has the first sub-wavelength and is emitted from the subject when the subject is irradiated with light belonging to the first wavelength group, the first sub-image depicting the subject.
- One aspect of the present invention is the above target detection device, wherein the second image acquisition unit acquires, as the second image, an image in which each pixel represents a value calculated based on the brightness of each pixel included in a second main image, taken by detecting light that has the second main wavelength and is emitted from the subject when the subject is irradiated with light belonging to the second wavelength group, the second main image depicting the subject, and the brightness of each pixel included in a second sub-image, taken by detecting light that has the second sub-wavelength and is emitted from the subject when the subject is irradiated with light belonging to the second wavelength group, the second sub-image depicting the subject.
- One aspect of the present invention is a machine learning execution device comprising: a teacher data acquisition unit that acquires teacher data in which the problem is a learning multi-channel image generated using a first learning image, generated by irradiating a learning subject with light belonging to the first wavelength group and depicting the learning subject, and a second learning image, generated by irradiating the learning subject with light that belongs to a second wavelength group different from the first wavelength group and has a wavelength different from the light irradiated to the learning subject to generate the first learning image, the second learning image depicting the learning subject, and the answer is the position of the region in which the learning target contained in the learning subject depicted in the learning multi-channel image is depicted; and a machine learning execution unit that inputs the teacher data into a machine learning model and trains the machine learning model.
- One aspect of the present invention is an object detection device comprising: a first inference image acquisition unit that acquires a first inference image generated by irradiating an inference subject with light belonging to the first wavelength group and depicting the inference subject; a second inference image acquisition unit that acquires a second inference image generated by irradiating the inference subject with light that belongs to a second wavelength group different from the first wavelength group and has a wavelength different from the light irradiated to the inference subject to generate the first inference image, the second inference image depicting the inference subject; an inference image generation unit that generates an inference multi-channel image using the first inference image and the second inference image; and an inference target detection unit that inputs the inference multi-channel image into a machine learning model trained using teacher data in which the problem is a learning multi-channel image generated using a first learning image, generated by irradiating a learning subject with light belonging to the first wavelength group and depicting the learning subject, and a second learning image, generated by irradiating the learning subject with light that belongs to the second wavelength group and has a wavelength different from the light irradiated to the learning subject to generate the first learning image, the second learning image depicting the learning subject, and the answer is the position of the region in which the learning target contained in the learning subject is depicted, and causes the machine learning model to detect an inference target contained in the inference subject.
- One aspect of the present invention is a target detection program that causes a computer to realize: a first image acquisition function that acquires a first image generated by irradiating a subject with light belonging to the first wavelength group and depicting the subject; a second image acquisition function that acquires a second image generated by irradiating the subject with light that belongs to a second wavelength group different from the first wavelength group and has a wavelength different from the light irradiated to the subject to generate the first image, the second image depicting the subject; and a target detection function that detects a target contained in the subject using the first image and the second image.
- One aspect of the present invention is a machine learning execution program that causes a computer to realize: a teacher data acquisition function that acquires teacher data in which the problem is a learning multi-channel image generated using a first learning image, generated by irradiating a learning subject with light belonging to the first wavelength group and depicting the learning subject, and a second learning image, generated by irradiating the learning subject with light that belongs to a second wavelength group different from the first wavelength group and has a wavelength different from the light irradiated to the learning subject to generate the first learning image, the second learning image depicting the learning subject, and the answer is the position of the region in which the learning target contained in the learning subject depicted in the learning multi-channel image is depicted; and a machine learning execution function that inputs the teacher data into a machine learning model and trains the machine learning model.
- One aspect of the present invention is an object detection program that causes a computer to realize: a first inference image acquisition function that acquires a first inference image generated by irradiating an inference subject with light belonging to the first wavelength group and depicting the inference subject; a second inference image acquisition function that acquires a second inference image generated by irradiating the inference subject with light that belongs to a second wavelength group different from the first wavelength group and has a wavelength different from the light irradiated to the inference subject to generate the first inference image, the second inference image depicting the inference subject; an inference image generation function that generates an inference multi-channel image using the first inference image and the second inference image; and an inference target detection function that inputs the inference multi-channel image into a machine learning model trained using teacher data in which the problem is a learning multi-channel image generated using a first learning image, generated by irradiating a learning subject with light belonging to the first wavelength group and depicting the learning subject, and a second learning image, generated by irradiating the learning subject with light that belongs to the second wavelength group and has a wavelength different from the light irradiated to the learning subject to generate the first learning image, the second learning image depicting the learning subject, and the answer is the position of the region in which the learning target contained in the learning subject is depicted, and causes the machine learning model to detect an inference target contained in the inference subject.
- Hereinafter, the target detection device and the target detection program according to the first embodiment will be described with reference to FIGS. 1 to 17. In the first embodiment, the target to be detected is the knee cartilage portion of a chicken thigh.
- The knee cartilage portion is composed of a combination of cartilage and fat, and is cut out from the chicken thigh using, for example, a knife attached to the tip of an articulated robot.
- FIG. 1 is a diagram showing an example of the hardware configuration of the target detection device according to the first embodiment.
- The target detection device 10 includes a processor 11, a main storage device 12, a communication interface 13, an auxiliary storage device 14, an input/output device 15, and a bus 16.
- The processor 11 is, for example, a CPU (Central Processing Unit); it reads and executes the target detection program to realize each function of the target detection device 10. The processor 11 may also read and execute programs other than the target detection program to realize functions needed to support each function of the target detection device 10.
- The main storage device 12 is, for example, a RAM (Random Access Memory), and stores in advance the target detection program and other programs read and executed by the processor 11.
- The communication interface 13 is an interface circuit for communicating with the first photographing device 155, the second photographing device 157, the control device 200, and other devices via a network.
- The network referred to here is, for example, a WAN (Wide Area Network), a LAN (Local Area Network), the Internet, or an intranet.
- The auxiliary storage device 14 is, for example, a hard disk drive (HDD), a solid-state drive (SSD), flash memory, or a ROM (Read Only Memory).
- The input/output device 15 is, for example, an input/output port.
- The input/output device 15 is connected to the mouse 151, the keyboard 152, and the display 153 shown in FIG. 1.
- FIG. 2 is a diagram showing an example of the target detection device, the first light emitting device, the first photographing device, the second light emitting device, the second photographing device, and the food processing line according to the first embodiment.
- The input/output device 15 is also connected to, for example, the first photographing device 155, the second photographing device 157, and the control device 200 shown in FIGS. 1 and 2. The control device 200 is connected to the first light emitting device 154 and the second light emitting device 156, and controls both of them.
- The mouse 151 and the keyboard 152 are used, for example, to input data necessary for operating the target detection device 10.
- The display 153 is, for example, a liquid crystal display.
- The display 153 displays, for example, a graphical user interface (GUI) of the target detection device 10.
- The display 153 also displays, for example, at least one of the images shown in FIGS. 4, 5, 7, 8, 10, 11, 12, 13, 14, 15, and 16.
- The first light emitting device 154 is a device that irradiates a subject with light belonging to the first wavelength group, and includes, for example, an LED (light emitting diode), a halogen lamp, a tungsten lamp, or a laser.
- The subject is, for example, the chicken thigh M shown in FIG. 2.
- The chicken thigh M is conveyed by the belt conveyor 201 constituting the food processing line 20, and is detected in sequence by the photoelectric sensor 202 and the photoelectric sensor 203.
- The belt conveyor 201, the photoelectric sensor 202, and the photoelectric sensor 203 are controlled by the control device 200.
- As shown in FIG. 2, the first light emitting device 154 is installed at a position from which it can irradiate the chicken thigh M conveyed by the belt conveyor 201 with light from above.
- The above-mentioned first wavelength group includes at least one wavelength belonging to a predetermined wavelength region.
- For example, the first wavelength group includes a wavelength of 365 nm or a wavelength of 340 nm.
- The wavelength of 460 nm is an example of the first main wavelength described later.
- The wavelength of 520 nm is an example of the first sub-wavelength described later.
- FIG. 3 is a diagram showing an example of the reflection spectrum of chicken thigh cartilage in the visible light region, the reflection spectrum of fat in the visible light region, and the reflection spectrum of lean meat in the visible light region, according to the first embodiment.
- The solid line shown in FIG. 3 shows the reflection spectrum of the cartilage of the chicken thigh M in the visible light region.
- The dashed line shown in FIG. 3 shows the reflection spectrum of the fat of the chicken thigh M in the visible light region.
- The dash-dot line shown in FIG. 3 shows the reflection spectrum of the lean meat of the chicken thigh M.
- The wavelength of 460 nm is a wavelength that gives a peak of the reflection spectrum of the cartilage.
- Light having a wavelength of 460 nm is merely one example of light having a wavelength at which the reflectance when irradiating the cartilage, which constitutes the knee cartilage portion and is an example of the first element, exceeds the predetermined first reflectance.
- For example, after the chicken thigh M is detected by the photoelectric sensor 202 and before it is detected by the photoelectric sensor 203, the first light emitting device 154 irradiates the chicken thigh M with light having a center wavelength of 365 nm, a shortest wavelength of 355 nm, and a longest wavelength of 375 nm.
- During the same interval, the first light emitting device 154 also irradiates the chicken thigh M with light having a center wavelength of 340 nm, a shortest wavelength of 330 nm, and a longest wavelength of 350 nm. These lights excite fluorescence with wavelengths of 460 nm and 520 nm at the surface of the chicken thigh M, and belong to the first wavelength group.
- The first photographing device 155 is a camera provided with a light receiving element capable of detecting the light emitted from the chicken thigh M.
- The light emitted from the chicken thigh M referred to here is fluorescence having a wavelength of 460 nm and fluorescence having a wavelength of 520 nm, both emitted by the chicken thigh M itself.
- The wavelength of 460 nm is an example of the first main wavelength.
- The wavelength of 520 nm is an example of the first sub-wavelength.
- The light emitted from the chicken thigh M may instead be light reflected at the surface of the chicken thigh M.
- For example, the first photographing device 155 is equipped with a light receiving element that uses silicon (Si), gallium phosphide (GaP), or gallium arsenide phosphide (GaAsP) as a semiconductor. Further, as shown in FIG. 2, the first photographing device 155 is installed so that the chicken thigh M located between the photoelectric sensor 202 and the photoelectric sensor 203 can be photographed from above.
- The first photographing device 155 photographs the subject illuminated by the first light emitting device 154 to generate a first main image.
- FIG. 4 is a diagram showing an example of a first main image taken by detecting light having a first main wavelength emitted from chicken thigh meat according to the first embodiment.
- When the chicken thigh M is irradiated with light by the first light emitting device 154 and fluorescence having a wavelength of 460 nm and fluorescence having a wavelength of 520 nm are emitted from the chicken thigh M, the first photographing device 155 detects the fluorescence having a wavelength of 460 nm from above and generates the first main image shown in FIG. 4.
- The first main image shown in FIG. 4 depicts the chicken thigh M, and the knee cartilage portion contained in the chicken thigh M is depicted in the region indicated by the circle C4.
- FIG. 5 is a diagram showing an example of a first sub-image taken by detecting light having a first sub-wavelength emitted from chicken thigh according to the first embodiment.
- Similarly, when the chicken thigh M is irradiated with light by the first light emitting device 154 and fluorescence having a wavelength of 460 nm and fluorescence having a wavelength of 520 nm are emitted from the chicken thigh M, the first photographing device 155 detects the fluorescence having a wavelength of 520 nm from above and generates the first sub-image shown in FIG. 5.
- The first sub-image shown in FIG. 5 depicts the chicken thigh M.
- The second light emitting device 156 is a device that irradiates the subject with light that belongs to a second wavelength group different from the first wavelength group and has a wavelength different from the light irradiated to the subject to capture the first image; it includes, for example, an LED, a halogen lamp, a tungsten lamp, or a laser. As shown in FIG. 2, the second light emitting device 156 is installed at a position from which it can irradiate the chicken thigh M conveyed by the belt conveyor 201 with light from above.
- The above-mentioned second wavelength group includes at least one wavelength belonging to a predetermined wavelength region.
- For example, the second wavelength group includes a wavelength of 1211 nm and a wavelength of 1287 nm, both belonging to the near-infrared region from 700 nm to 2500 nm.
- The wavelength of 1211 nm is an example of the second sub-wavelength described later.
- The wavelength of 1287 nm is an example of the second main wavelength described later.
- A part of the wavelengths included in the second wavelength group may be included in the first wavelength group.
- FIG. 6 is a diagram showing an example of a reflection spectrum in the near-infrared region of chicken thigh cartilage, a reflection spectrum in the near-infrared region of fat, and a reflection spectrum in the near-infrared region of lean meat according to the first embodiment.
- The solid line shown in FIG. 6 shows the reflection spectrum of the cartilage of the chicken thigh M in the near-infrared region.
- The dashed line shown in FIG. 6 shows the reflection spectrum of the fat of the chicken thigh M in the near-infrared region.
- The dash-dot line shown in FIG. 6 shows the reflection spectrum of the lean meat of the chicken thigh M in the near-infrared region.
- As indicated by the arrow in FIG. 6, the above-mentioned wavelength of 1287 nm is a wavelength that gives a peak of the reflection spectrum of the fat. Light having a wavelength of 1287 nm is merely one example of light having a wavelength at which the reflectance when irradiating the fat, which constitutes the knee cartilage portion and is an example of the second element, exceeds the predetermined second reflectance.
- For example, the second light emitting device 156 irradiates the chicken thigh M with light having a wavelength of 1287 nm and light having a wavelength of 1211 nm after the chicken thigh M is detected by the photoelectric sensor 203.
- The second photographing device 157 is a camera provided with a light receiving element capable of detecting the light emitted from the chicken thigh M.
- The light emitted from the chicken thigh M referred to here is light having a wavelength of 1211 nm reflected at the surface of the chicken thigh M and light having a wavelength of 1287 nm reflected at the surface of the chicken thigh M. The wavelength of 1211 nm is an example of the second sub-wavelength, while the wavelength of 1287 nm is an example of the second main wavelength.
- The light emitted from the chicken thigh M may instead be light emitted by the chicken thigh M itself.
- For example, the second photographing device 157 is equipped with a light receiving element that uses indium gallium arsenide (InGaAs), lead sulfide (PbS), or lead selenide (PbSe) as a semiconductor. Further, as shown in FIG. 2, the second photographing device 157 is installed above the belt conveyor 201 so that the chicken thigh M located downstream of the photoelectric sensor 203 can be photographed from above.
- The second photographing device 157 photographs the subject illuminated by the second light emitting device 156 to generate a second main image.
- FIG. 7 is a diagram showing an example of a second sub-image taken by irradiating the chicken thigh according to the first embodiment with light having a second sub-wavelength.
- When the chicken thigh M is irradiated with light having a wavelength of 1211 nm and light having a wavelength of 1287 nm and these lights are reflected at the surface of the chicken thigh M, the second photographing device 157 detects the reflected light having a wavelength of 1211 nm, photographs the chicken thigh M, and generates the second sub-image shown in FIG. 7.
- The second sub-image shown in FIG. 7 depicts the chicken thigh M, and the knee cartilage portion contained in the chicken thigh M is depicted in the region indicated by the circle C7.
- FIG. 8 is a diagram showing an example of a second main image taken by irradiating the chicken thigh according to the first embodiment with light having a second main wavelength.
- When the chicken thigh M is irradiated with light having a wavelength of 1211 nm and light having a wavelength of 1287 nm and these lights are reflected at the surface of the chicken thigh M, the second photographing device 157 detects the reflected light having a wavelength of 1287 nm, photographs the chicken thigh M, and generates the second main image shown in FIG. 8.
- The second main image shown in FIG. 8 depicts the chicken thigh M.
- The bus 16 connects the processor 11, the main storage device 12, the communication interface 13, the auxiliary storage device 14, and the input/output device 15 so that they can exchange data with one another.
- FIG. 9 is a diagram showing an example of a functional configuration of the target detection device according to the first embodiment.
- The target detection device 10 includes a first image generation unit 101, a first image acquisition unit 102, a second image generation unit 103, a second image acquisition unit 104, and a target detection unit 105.
- The first image generation unit 101 generates the first image using the first main image and the first sub-image. Specifically, the first image generation unit 101 generates, as the first image, an image in which each pixel represents a value calculated based on the brightness of each pixel included in the first main image and the brightness of each pixel included in the first sub-image.
- FIG. 10 is a diagram showing an example of the first image generated using the first main image shown in FIG. 4 and the first sub-image shown in FIG. 5.
- For example, the first image generation unit 101 executes, for all coordinates, a process of calculating the ratio of the luminances represented by the two pixels located at the same coordinates in the first main image shown in FIG. 4 and the first sub-image shown in FIG. 5, and generates the first image shown in FIG. 10.
- Since each pixel of the first image shown in FIG. 10 represents the value obtained by dividing the brightness of the corresponding pixel of the first main image by the brightness of the corresponding pixel of the first sub-image, the image is less affected by variations in how the light strikes the surface caused by the unevenness of the surface of the chicken thigh M.
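- As a minimal sketch of this ratio computation (assuming the two captures are already aligned, single-channel arrays; the function name and the epsilon guard are illustrative, not from the patent):

```python
import numpy as np

def ratio_image(main_img: np.ndarray, sub_img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Divide the main image by the sub-image pixel-wise.

    Both inputs are single-channel luminance arrays of the same shape,
    e.g. the 460 nm and 520 nm fluorescence captures. eps guards
    against division by zero in dark pixels.
    """
    main = main_img.astype(np.float32)
    sub = sub_img.astype(np.float32)
    return main / (sub + eps)
```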
- The first image acquisition unit 102 acquires the first image that is generated by irradiating the subject with light belonging to the first wavelength group and that depicts the subject.
- The first image acquisition unit 102 acquires, for example, the first image shown in FIG. 10.
- The second image generation unit 103 generates the second image using the second main image and the second sub-image. Specifically, the second image generation unit 103 generates, as the second image, an image in which each pixel represents a value calculated based on the brightness of each pixel included in the second main image and the brightness of each pixel included in the second sub-image.
- FIG. 11 is a diagram showing an example of the second image generated using the second sub-image shown in FIG. 7 and the second main image shown in FIG. 8.
- For example, the second image generation unit 103 executes, for all coordinates, a process of calculating the difference between the luminances represented by the two pixels located at the same coordinates in the second sub-image shown in FIG. 7 and the second main image shown in FIG. 8, and generates the second image shown in FIG. 11. Since each pixel of the second image shown in FIG. 11 represents the value obtained by subtracting the brightness of the corresponding pixel of the second sub-image from the brightness of the corresponding pixel of the second main image, the influence of noise on the image is reduced.
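- A corresponding sketch of the difference computation (same assumptions as above; clipping at zero is an illustrative choice, not specified in the patent):

```python
import numpy as np

def difference_image(main_img: np.ndarray, sub_img: np.ndarray) -> np.ndarray:
    """Subtract the sub-image from the main image pixel-wise.

    E.g. the 1287 nm reflection capture minus the 1211 nm capture;
    negative values are clipped to zero so dark pixels stay dark.
    """
    diff = main_img.astype(np.float32) - sub_img.astype(np.float32)
    return np.clip(diff, 0.0, None)
```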
- The second image acquisition unit 104 acquires the second image that is generated by irradiating the subject with light that belongs to the second wavelength group different from the first wavelength group and has a wavelength different from the light irradiated to the subject to generate the first image, and that depicts the subject.
- The second image acquisition unit 104 acquires, for example, the second image shown in FIG. 11.
- The target detection unit 105 detects the target contained in the subject using the first image and the second image. Specifically, the target detection unit 105 detects the knee cartilage portion contained in the chicken thigh M shown in FIG. 2 using the first image shown in FIG. 10 and the second image shown in FIG. 11. More specifically, the target detection unit 105 detects the target by detecting the first element, depicted in the first image with a brightness exceeding the predetermined first brightness, and the second element, different from the first element and depicted in the second image with a brightness exceeding the predetermined second brightness.
- FIG. 12 is a diagram showing an example of a region depicted with a luminance exceeding a predetermined first luminance in the first image according to the first embodiment.
- For example, the target detection unit 105 extracts the regions R1, R21, R22, R23, R24, R31, R32, and R33 shown in FIG. 12 based on the predetermined first brightness.
- These regions may depict the cartilage constituting the knee cartilage portion contained in the chicken thigh M, which is an example of the above-mentioned first element. However, these regions may instead depict an object other than the cartilage that emits fluorescence at a wavelength similar to that of the cartilage, or an object having a reflectance comparable to that of the cartilage.
- Therefore, the target detection unit 105 excludes, from the above-mentioned regions, regions that are relatively unlikely to depict the first element.
- Specifically, the target detection unit 105 excludes, from the regions depicted in the first image with a luminance exceeding the predetermined first luminance, at least one of regions whose area is less than a predetermined first lower threshold and regions whose area exceeds a predetermined first upper threshold. For example, the target detection unit 105 excludes the regions R23 and R24, whose areas are less than the predetermined first lower threshold, from the regions shown in FIG. 12. Because the regions R23 and R24 are small, they are likely to depict noise rather than the first element.
- Further, the target detection unit 105 excludes, from the regions depicted in the first image with a brightness exceeding the predetermined first luminance, at least one of regions for which the area of a circumscribed figure is less than a predetermined first lower threshold and regions for which the area of the circumscribed figure exceeds a predetermined first upper threshold.
- For example, the target detection unit 105 excludes the regions R31, R32, and R33, for which the area of the circumscribed circle exceeds the predetermined first upper threshold, from the regions shown in FIG. 12.
- The regions R31, R32, and R33 are all elongated regions whose circumscribed circles have long radii, and are likely to depict muscle or fascia contained in the chicken thigh M.
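- A minimal sketch of this area-based screening, using OpenCV connected components (all threshold values are placeholders; the patent does not disclose concrete numbers):

```python
import cv2
import numpy as np

def candidate_regions(image, brightness_thr, area_min, area_max, circ_area_max):
    """Extract bright regions and drop those whose own area, or whose
    minimum-enclosing-circle area, falls outside the allowed range."""
    mask = (image > brightness_thr).astype(np.uint8)
    n_labels, labels = cv2.connectedComponents(mask)
    kept = []
    for i in range(1, n_labels):  # label 0 is the background
        region = (labels == i).astype(np.uint8)
        area = int(region.sum())
        if area < area_min or area > area_max:
            continue  # too small (likely noise) or too large
        points = cv2.findNonZero(region)
        _, radius = cv2.minEnclosingCircle(points)
        if np.pi * radius ** 2 > circ_area_max:
            continue  # elongated region, e.g. muscle or fascia
        kept.append(region)
    return kept
```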
- FIG. 13 is a diagram showing an example of a region in which the first element according to the first embodiment may be depicted.
- As shown in FIG. 13, by applying the above-mentioned narrowing conditions, the target detection unit 105 extracts the regions R1, R21, and R22 as regions that may depict the cartilage constituting the knee cartilage portion.
- FIG. 14 is a diagram showing an example of a region depicted with a brightness exceeding a predetermined second brightness in the second image according to the first embodiment.
- For example, the target detection unit 105 extracts the regions R3, R40, R51, R52, R60, R71, and R72 shown in FIG. 14 based on the predetermined second brightness.
- These regions may depict the fat constituting the knee cartilage portion contained in the chicken thigh M, which is an example of the above-mentioned second element.
- However, these regions may instead depict an object other than the fat having a reflectance similar to that of the fat.
- These regions may also depict an object that emits fluorescence at a wavelength comparable to that of the fat.
- Therefore, the target detection unit 105 excludes, from the above-mentioned regions, regions that are relatively unlikely to depict the second element.
- Specifically, the target detection unit 105 excludes, from the regions depicted in the second image with a luminance exceeding the predetermined second luminance, at least one of regions whose area is less than a predetermined second lower threshold and regions whose area exceeds a predetermined second upper threshold. For example, the target detection unit 105 excludes the regions R51 and R52, whose areas are less than the predetermined second lower threshold, from the regions shown in FIG. 14. Because the regions R51 and R52 are relatively small, they are likely to depict noise rather than the second element.
- Further, the target detection unit 105 excludes the region R72, whose area exceeds the predetermined second upper threshold, from the regions shown in FIG. 14. Because the region R72 is relatively large, it is likely to depict the chicken skin contained in the chicken thigh M rather than the second element.
- Further, the target detection unit 105 excludes, from the regions depicted in the second image with a brightness exceeding the predetermined second luminance, at least one of regions for which the value obtained by dividing the length of the contour by the area is less than a predetermined second lower threshold and regions for which that value exceeds a predetermined second upper threshold.
- For example, the target detection unit 105 excludes the region R60, for which the value obtained by dividing the length of the contour by the area exceeds the predetermined second upper threshold, from the regions shown in FIG. 14.
- The region R60 is likely to depict fat that does not constitute the knee cartilage portion, rather than the second element.
- Further, the target detection unit 105 excludes, from the regions depicted in the second image with a brightness exceeding the predetermined second luminance, regions whose distance from the contour of the subject is less than a predetermined threshold.
- For example, the target detection unit 105 excludes the regions R71 and R72, whose distances from the contour of the subject are less than the predetermined threshold, from the regions shown in FIG. 14. Because the regions R71 and R72 are located relatively close to the contour of the chicken thigh M, they are likely to depict the chicken skin contained in the chicken thigh M rather than the second element.
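- The contour-ratio and border-distance screens could look like the following sketch (`subject_mask` is assumed to be a binary mask of the whole chicken thigh; thresholds are again placeholders):

```python
import cv2

def keep_region(region, subject_mask, ratio_max, border_dist_min):
    """Return False if the region's contour-length/area ratio is too high
    (thin, sprawling fat) or if it lies too close to the subject's
    contour (likely chicken skin)."""
    contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    perimeter = sum(cv2.arcLength(c, True) for c in contours)
    area = float(region.sum())
    if perimeter / area > ratio_max:
        return False
    # distance of each subject pixel to the nearest background pixel,
    # i.e. to the subject's contour
    dist = cv2.distanceTransform(subject_mask, cv2.DIST_L2, 3)
    if dist[region.astype(bool)].min() < border_dist_min:
        return False
    return True
```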
- FIG. 15 is a diagram showing an example of a region in which the second element according to the first embodiment may be depicted.
- As shown in FIG. 15, by applying the above-mentioned narrowing conditions, the target detection unit 105 extracts the regions R3 and R40 as regions that may depict the fat constituting the knee cartilage portion.
- FIG. 16 is a diagram showing an example of a multi-channel image according to the first embodiment.
- the target detection unit 105 executes a synthesis process for synthesizing the first image shown in FIG. 13 and the second image shown in FIG. 15 to generate the multi-channel image shown in FIG.
- the multi-channel image shown in FIG. 16 depicts a region R1, a region R21 and a region R22 also shown in FIG. 13, and a region R3 and a region R40 also shown in FIG.
- the synthesis process includes at least one of an enlargement process, a reduction process, a resolution matching process, a distortion removing process, and an alignment process.
- The enlargement process is a process of enlarging at least one of the first image and the second image in at least one direction so that the dimensions of the subject depicted in the first image match the dimensions of the subject depicted in the second image.
- The reduction process is a process of reducing at least one of the first image and the second image in at least one direction so that the dimensions of the subject depicted in the first image match the dimensions of the subject depicted in the second image.
- The resolution matching process is a process of adjusting at least one of the resolution of the first image and the resolution of the second image so that the resolution of the first image matches the resolution of the second image.
- The distortion removal process includes at least one of a process of removing distortion of the first image caused by distortion of an optical component, such as the lens used to capture the first image, and a process of removing distortion of the second image caused by distortion of an optical component, such as the lens used to capture the second image.
- The alignment process is a process of adjusting at least one of the first image and the second image so that a reference point set at a specific part of the subject depicted in the first image and a reference point set at the same part of the subject depicted in the second image coincide with each other.
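As a sketch of how the compositing process could stack the two single-band images after resolution matching and alignment, assuming the reference points are given in the first image's pixel grid (the patent does not fix the implementation):

```python
import cv2
import numpy as np

def composite_multichannel(first_img, second_img, ref_pt1, ref_pt2):
    """Stack two grayscale images into one 2-channel image.

    ref_pt1 / ref_pt2 are (x, y) reference points on the same part of the
    subject in each image; both are assumptions for this sketch.
    """
    # Resolution matching: resample the second image onto the first image's grid.
    h, w = first_img.shape
    second_img = cv2.resize(second_img, (w, h), interpolation=cv2.INTER_LINEAR)
    # Alignment: translate the second image so the reference points coincide.
    dx, dy = np.subtract(ref_pt1, ref_pt2)
    m = np.float32([[1, 0, dx], [0, 1, dy]])
    second_img = cv2.warpAffine(second_img, m, (w, h))
    # Channel-wise stack: channel 0 <- first image, channel 1 <- second image.
    return np.dstack([first_img, second_img])
```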
- The target detection unit 105 selects a first point based on a predetermined first rule for each region in which the first element is depicted in the first image. For example, the target detection unit 105 calculates the position of the center of gravity of each of the regions R1, R21, and R22 shown in FIG. 16 and selects these centers of gravity as the first points. Further, the target detection unit 105 selects a second point based on a predetermined second rule for each region in which the second element is depicted in the second image. For example, the target detection unit 105 calculates the position of the center of gravity of each of the regions R3 and R40 shown in FIG. 16 and selects these centers of gravity as the second points.
- The target detection unit 105 detects, as the target, a combination of the first element and the second element that gives a first point and a second point such that the length of the line segment connecting the first point and the second point is less than a predetermined length.
- For example, when the first element depicted in the region R1 and the second element depicted in the region R3 give such a pair of points, the combination of the first element and the second element is detected as the knee cartilage portion contained in the chicken thigh M.
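The point-pairing step could look as follows. This is a minimal sketch in which regions are given as arrays of (x, y) pixel coordinates and `max_dist` stands in for the predetermined length:

```python
import numpy as np

def pair_elements(first_regions, second_regions, max_dist):
    """Pair first/second element regions whose centroids lie close together."""
    first_pts = [np.mean(r, axis=0) for r in first_regions]    # centers of gravity
    second_pts = [np.mean(r, axis=0) for r in second_regions]
    targets = []
    for i, p1 in enumerate(first_pts):
        for j, p2 in enumerate(second_pts):
            if np.linalg.norm(p1 - p2) < max_dist:
                targets.append((i, j))      # candidate knee cartilage portion
    return targets
```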
- FIG. 17 is a flowchart showing an example of processing executed by the target detection device according to the first embodiment.
- In step S11, the first image generation unit 101 generates the first image using the first main image and the first sub-image.
- In step S12, the first image acquisition unit 102 acquires the first image.
- In step S13, the second image generation unit 103 generates the second image using the second main image and the second sub-image.
- In step S14, the second image acquisition unit 104 acquires the second image.
- In step S15, the target detection unit 105 detects the target included in the subject by using the first image and the second image.
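Taken together, steps S11 to S15 form a short pipeline. The sketch below reuses the hypothetical helpers from the earlier sketches (`narrow_candidate_regions`, `pair_elements`) and a `band_math` helper sketched after the summary below; the parameter names are illustrative, not from the patent:

```python
def detect_target(first_main, first_sub, second_main, second_sub, params):
    """Steps S11-S15 as one pipeline (helper names and params are illustrative)."""
    first_img = band_math(first_main, first_sub)       # S11/S12: generate and acquire
    second_img = band_math(second_main, second_sub)    # S13/S14
    # S15: narrow the candidate regions in each image, then pair them.
    r1 = [c.reshape(-1, 2)
          for c in narrow_candidate_regions(first_img, **params["first"])]
    r2 = [c.reshape(-1, 2)
          for c in narrow_candidate_regions(second_img, **params["second"])]
    return pair_elements(r1, r2, params["max_dist"])
```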
- the target detection device 10 includes a first image acquisition unit 102, a second image acquisition unit 104, and a target detection unit 105.
- The first image acquisition unit 102 acquires a first image that is generated by irradiating a subject with light belonging to the first wavelength group and that depicts the subject.
- The second image acquisition unit 104 acquires a second image that is generated by irradiating the subject with light that belongs to a second wavelength group different from the first wavelength group and has a wavelength different from that of the light irradiated onto the subject to generate the first image, the second image depicting the subject.
- The target detection unit 105 detects the target included in the subject by using the first image and the second image.
- That is, the target detection device 10 detects the target included in the subject by using a plurality of lights having mutually different wavelengths. Therefore, the target detection device 10 can accurately detect the target included in the subject even when the target is composed of a plurality of elements whose reflection spectra differ from one another.
- The target detection device 10 detects the target by detecting a first element depicted in the first image with a luminance exceeding a predetermined first luminance and a second element, different from the first element, depicted in the second image with a luminance exceeding a predetermined second luminance.
- As a result, the target detection device 10 can detect the target with higher accuracy when the target includes the first element and the second element.
- The first element referred to here may be an element that emits light of a certain intensity or higher at a wavelength belonging to the first wavelength group, or an element having a relatively high reflectance at a wavelength belonging to the first wavelength group.
- Similarly, the second element referred to here may be an element that emits light of a certain intensity or higher at a wavelength belonging to the second wavelength group, or an element having a relatively high reflectance at a wavelength belonging to the second wavelength group.
- The target detection device 10 acquires the first image generated by irradiating the subject with light having a wavelength at which the reflectance when the light is irradiated onto the first element exceeds a predetermined first reflectance.
- As a result, the target detection device 10 can detect the target including the first element and the second element by using a first image in which the first element contained in the target is depicted more clearly.
- Likewise, the target detection device 10 acquires the second image generated by irradiating the subject with light having a wavelength at which the reflectance when the light is irradiated onto the second element exceeds a predetermined second reflectance.
- As a result, the target detection device 10 can detect the target including the first element and the second element by using a second image in which the second element contained in the target is depicted more clearly.
- The target detection device 10 selects a first point based on a predetermined first rule for each region in which the first element is depicted in the first image. Similarly, the target detection device 10 selects a second point based on a predetermined second rule for each region in which the second element is depicted in the second image. Then, the target detection device 10 detects, as the target, a combination of the first element and the second element that gives a first point and a second point such that the length of the line segment connecting the first point and the second point is less than a predetermined length.
- As a result, even when there are a plurality of regions in which the first element may be depicted and at least one region in which the second element may be depicted, the target detection device 10 can detect the target more reliably if it is known that the target includes the first element and the second element.
- The target detection device 10 acquires a first image in which each pixel represents a value calculated based on the luminance of each pixel included in the first main image and the luminance of each pixel included in the first sub-image. As a result, the target detection device 10 can detect the target included in the subject by using a first image that depicts the first element contained in the target more clearly.
- Similarly, the target detection device 10 acquires a second image in which each pixel represents a value calculated based on the luminance of each pixel included in the second main image and the luminance of each pixel included in the second sub-image. As a result, the target detection device 10 can detect the target included in the subject by using a second image that depicts the second element contained in the target more clearly.
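One plausible per-pixel calculation is a band ratio, which emphasizes pixels that are bright at the main wavelength relative to the sub wavelength. The formula below is an assumption for illustration; the patent does not fix it:

```python
import numpy as np

def band_math(main_img, sub_img, eps=1e-6):
    """Per-pixel value from the main-wavelength and sub-wavelength images.

    A band ratio is one plausible formula; it is an assumption here.
    """
    main = main_img.astype(np.float32)
    sub = sub_img.astype(np.float32)
    out = main / (sub + eps)            # bright where the main band dominates
    out -= out.min()                    # normalize to 0..255 for later thresholding
    return (255.0 * out / max(float(out.max()), eps)).astype(np.uint8)
```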
- In the first embodiment, the case where the target detection unit 105 excludes, from among the regions depicted in the first image with a luminance exceeding the predetermined first luminance, at least one of the regions whose circumscribed-figure area is less than a predetermined first lower limit threshold and the regions whose circumscribed-figure area exceeds a predetermined first upper limit threshold has been described as an example, but the present invention is not limited thereto.
- For example, the target detection unit 105 may exclude, from among the regions depicted in the second image with a luminance exceeding the predetermined second luminance, at least one of the regions whose circumscribed-figure area is less than a predetermined second lower limit threshold and the regions whose circumscribed-figure area exceeds a predetermined second upper limit threshold.
- Similarly, the case where the target detection unit 105 excludes, from among the regions depicted in the second image with a luminance exceeding the predetermined second luminance, at least one of the regions in which the value obtained by dividing the length of the contour by the area is less than a predetermined second lower limit threshold and the regions in which that value exceeds a predetermined second upper limit threshold has been described as an example, but the present invention is not limited thereto.
- For example, the target detection unit 105 may exclude, from among the regions depicted in the first image with a luminance exceeding the predetermined first luminance, at least one of the regions in which the value obtained by dividing the length of the contour by the area is less than a predetermined first lower limit threshold and the regions in which that value exceeds a predetermined first upper limit threshold.
- Likewise, the case where a region whose distance from the contour of the subject is less than a predetermined threshold is excluded from among the regions depicted in the second image has been described as an example, but the present invention is not limited thereto.
- For example, the target detection unit 105 may exclude, from among the regions depicted in the first image with a luminance exceeding the predetermined first luminance, the regions whose distance from the contour of the subject is less than a predetermined threshold.
- the machine learning execution device, the target detection device, the machine learning execution program, and the target detection program according to the second embodiment will be described with reference to FIGS. 18 to 21.
- the case where the knee cartilage portion contained in chicken thigh is detected will be described as an example.
- Unlike the target detection device and the target detection program according to the first embodiment, the machine learning execution device, the target detection device, the machine learning execution program, and the target detection program according to the second embodiment detect the target included in the subject by using machine learning. Therefore, the description of the second embodiment focuses on the parts that differ from the first embodiment, and descriptions overlapping the first embodiment are omitted as appropriate.
- FIG. 18 is a diagram showing an example of the hardware configuration of the machine learning execution device according to the second embodiment.
- the machine learning execution device 30 includes a processor 31, a main storage device 32, a communication interface 33, an auxiliary storage device 34, an input / output device 35, and a bus 36.
- the processor 31 is, for example, a CPU, reads and executes a machine learning execution program, and realizes each function of the machine learning execution device 30. Further, the processor 31 may read and execute a program other than the machine learning execution program to realize the functions necessary for realizing each function of the machine learning execution device 30.
- the main storage device 32 is, for example, a RAM, and stores in advance a machine learning execution program and other programs that are read and executed by the processor 31.
- the communication interface 33 is an interface circuit for executing communication with the machine learning device 400 and other devices via the network NW.
- The network NW is, for example, a WAN (Wide Area Network), a LAN (Local Area Network), the Internet, or an intranet.
- the auxiliary storage device 34 is, for example, a hard disk drive, a solid state drive, a flash memory, or a ROM.
- the input / output device 35 is, for example, an input / output port.
- the input / output device 35 is connected to, for example, the mouse 351 shown in FIG. 18, the keyboard 352, and the display 353.
- the mouse 351 and the keyboard 352 are used, for example, for inputting data necessary for operating the machine learning execution device 30.
- the display 353 is, for example, a liquid crystal display.
- the display 353 displays, for example, the graphical user interface of the machine learning execution device 30.
- FIG. 19 is a diagram showing an example of a functional configuration of the machine learning execution device according to the second embodiment.
- the machine learning execution device 30 includes a teacher data acquisition unit 301 and a machine learning execution unit 302.
- The teacher data acquisition unit 301 acquires teacher data in which the problem is a learning multi-channel image generated using the first learning image and the second learning image, and the answer is the position of the region in which the learning target included in the learning subject depicted in the learning multi-channel image is depicted.
- the teacher data acquisition unit 301 acquires teacher data via, for example, the communication interface 33.
- the first learning image is an image generated by irradiating the learning subject with light belonging to the first wavelength group and depicting the learning subject.
- the second learning image is an image generated by irradiating the learning subject with light belonging to the second wavelength group and depicting the learning subject.
- the learning subject referred to here is, for example, chicken thigh.
- FIG. 20 is a diagram showing an example of a learning multi-channel image according to the second embodiment.
- the learning multi-channel image is an image generated by subjecting the first learning image and the second learning image to a compositing process.
- the region shown by dot hatching in FIG. 20 is a region in which the first element is relatively likely to be depicted.
- the region shown by vertical line hatching in FIG. 20 is a region in which the second element is relatively likely to be drawn.
- the rectangle L shown in FIG. 20 indicates the size and position of the region in which the learning object included in the learning subject depicted in the learning multi-channel image is drawn.
- The size and position of the rectangle L may be determined, for example, by applying object recognition to the learning multi-channel image shown in FIG. 20.
- Alternatively, the size and position of the rectangle L may be determined based on data that a user inputs with the mouse 351 or the keyboard 352 while referring to the learning multi-channel image shown in FIG. 20.
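A single teacher-data record could therefore be represented as a problem/answer pair like the following; the field names and rectangle values are illustrative, not from the patent:

```python
import numpy as np

# Stand-in for a real learning multi-channel image (H x W x 2, as in FIG. 20).
multichannel_image = np.zeros((256, 256, 2), dtype=np.uint8)

# One teacher-data record: the image is the "problem", the rectangle L is the
# "answer", here given as pixel coordinates of its top-left corner and size.
sample = {
    "problem": multichannel_image,
    "answer": {"x": 118, "y": 64, "w": 42, "h": 30},   # placeholder rectangle L
}
```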
- the machine learning execution unit 302 inputs the teacher data into the machine learning model 400M mounted on the machine learning device 400, and trains the machine learning model 400M.
- the machine learning model 400M is, for example, a convolutional neural network (CNN).
- FIG. 21 is a flowchart showing an example of processing executed by the machine learning execution device according to the second embodiment.
- In step S31, the teacher data acquisition unit 301 acquires teacher data in which the problem is the learning multi-channel image and the answer is the position of the region in which the learning target included in the learning subject depicted in the learning multi-channel image is depicted.
- In step S32, the machine learning execution unit 302 inputs the teacher data into the machine learning model 400M and trains the machine learning model 400M.
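As an illustration of what training the model 400M in step S32 could look like, here is a minimal PyTorch sketch; the architecture and loss are assumptions, since the text only states that the model is, for example, a CNN:

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Minimal CNN regressing one bounding box from a 2-channel image.

    A sketch of the kind of model 400M could be; the patent does not
    specify the architecture beyond "a CNN".
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 4)   # (x, y, w, h)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, image, box):
    """Step S32: one update; image is 1x2xHxW, box is 1x4."""
    optimizer.zero_grad()
    loss = nn.functional.smooth_l1_loss(model(image), box)
    loss.backward()
    optimizer.step()
    return loss.item()
```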
- FIG. 22 is a diagram showing an example of the hardware configuration of the target detection device according to the second embodiment.
- the target detection device 50 includes a processor 51, a main storage device 52, a communication interface 53, an auxiliary storage device 54, an input / output device 55, and a bus 56.
- the processor 51 is, for example, a CPU, reads and executes a target detection program, and realizes each function of the target detection device 50. Further, the processor 51 may read and execute a program other than the target detection program to realize the functions necessary for realizing each function of the target detection device 50.
- the main storage device 52 is, for example, a RAM, and stores in advance a target detection program and other programs that are read and executed by the processor 51.
- the communication interface 53 is an interface circuit for executing communication with the machine learning device 400 and other devices via the network NW.
- the network NW is, for example, WAN, LAN, the Internet, or an intranet.
- the auxiliary storage device 54 is, for example, a hard disk drive, a solid state drive, a flash memory, or a ROM.
- the input / output device 55 is, for example, an input / output port.
- The input / output device 55 is connected to, for example, the mouse 551, the display 552, and the keyboard 553 shown in FIG. 22.
- the mouse 551 and the keyboard 553 are used, for example, for inputting data necessary for operating the target detection device 50.
- the display 552 is, for example, a liquid crystal display.
- the display 552 displays, for example, the graphical user interface of the target detection device 50.
- FIG. 23 is a diagram showing an example of a functional configuration of the target detection device according to the second embodiment.
- The target detection device 50 includes a first inference image acquisition unit 501, a second inference image acquisition unit 502, an inference image generation unit 503, and an inference target detection unit 504.
- the first inference image acquisition unit 501 acquires the first inference image that is generated by irradiating the inference subject with light belonging to the first wavelength group and depicting the inference subject.
- the second inference image acquisition unit 502 acquires a second inference image that is generated by irradiating the inference subject with light belonging to the second wavelength group and depicting the inference subject.
- the inference subject referred to here is, for example, chicken thigh.
- The inference image generation unit 503 generates an inference multi-channel image using the first inference image and the second inference image. For example, the inference image generation unit 503 generates the inference multi-channel image by performing the above-mentioned compositing process on the first inference image and the second inference image.
- the inference target detection unit 504 inputs an inference multi-channel image into the machine learning model 400M learned by the machine learning execution device 30, and causes the machine learning model to detect the inference target included in the inference subject.
- the inference target referred to here is the knee cartilage portion contained in chicken thigh.
- FIG. 24 is a flowchart showing an example of the processing executed by the target detection device according to the second embodiment.
- In step S51, the first inference image acquisition unit 501 acquires the first inference image.
- In step S52, the second inference image acquisition unit 502 acquires the second inference image.
- In step S53, the inference image generation unit 503 generates the inference multi-channel image using the first inference image and the second inference image.
- In step S54, the inference target detection unit 504 inputs the inference multi-channel image into the machine learning model and causes the machine learning model to detect the inference target included in the inference subject.
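Steps S51 to S54 could then be wired together as follows, reusing the hypothetical `composite_multichannel` helper and a trained model from the earlier sketches:

```python
import torch

def infer(model, first_inference_img, second_inference_img, ref1, ref2):
    """Steps S51-S54: build the inference multi-channel image and run the model."""
    mc = composite_multichannel(first_inference_img, second_inference_img,
                                ref1, ref2)                      # S53
    x = torch.from_numpy(mc).float().permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        box = model(x)[0]                                        # S54: (x, y, w, h)
    return box
```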
- the machine learning execution device, the target detection device, the machine learning execution program, and the target detection program according to the second embodiment have been described above.
- the machine learning execution device 30 includes a teacher data acquisition unit 301 and a machine learning execution unit 302.
- The teacher data acquisition unit 301 acquires teacher data in which the problem is a learning multi-channel image generated using the first learning image and the second learning image, and the answer is the position of the region in which the learning target included in the learning subject depicted in the learning multi-channel image is depicted.
- the machine learning execution unit 302 inputs the teacher data into the machine learning model 400M and trains the machine learning model 400M.
- the machine learning execution device 30 can generate the machine learning model 400M that executes the same processing as the target detection device 10 according to the first embodiment.
- the target detection device 50 includes a first inference image acquisition unit 501, a second inference image acquisition unit 502, an inference image generation unit 503, and an inference target detection unit 504.
- the first inference image acquisition unit 501 acquires a first inference image that is generated by irradiating the inference subject with light belonging to the first wavelength group and depicting the inference subject.
- The second inference image acquisition unit 502 acquires a second inference image that is generated by irradiating the inference subject with light belonging to the second wavelength group and that depicts the inference subject. The inference image generation unit 503 generates an inference multi-channel image using the first inference image and the second inference image.
- the inference target detection unit 504 inputs an inference multi-channel image to the machine learning model 400M learned by the machine learning execution device 30, and causes the machine learning model to detect the inference target included in the inference subject.
- As a result, the target detection device 50 can detect the inference target included in the inference subject depicted in the inference multi-channel image by using the machine learning model 400M, which executes processing equivalent to that of the target detection device 10 according to the first embodiment.
- In the above description, the case where the target detection device 10 shown in FIG. 1 is realized by the processor 11 reading and executing the target detection program has been described as an example, but the present invention is not limited to this.
- At least a part of the target detection device 10 shown in FIG. 1 may be realized by hardware including circuit units such as an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a GPU (Graphics Processing Unit). Alternatively, at least a part of the target detection device 10 shown in FIG. 1 may be realized by cooperation of software and hardware. Further, these pieces of hardware may be integrated into one, or may be divided into a plurality of pieces.
- Similarly, the case where the machine learning execution device 30 shown in FIG. 18 is realized by the processor 31 reading and executing the machine learning execution program has been described as an example, but the present invention is not limited to this. At least a part of the machine learning execution device 30 shown in FIG. 18 may be realized by hardware including circuit units such as an LSI, an ASIC, an FPGA, or a GPU. Alternatively, at least a part of the machine learning execution device 30 shown in FIG. 18 may be realized by cooperation of software and hardware. Further, these pieces of hardware may be integrated into one, or may be divided into a plurality of pieces.
- Likewise, the case where the target detection device 50 shown in FIG. 22 is realized by the processor 51 reading and executing the target detection program has been described as an example, but the present invention is not limited to this.
- At least a part of the target detection device 50 shown in FIG. 22 may be realized by hardware including circuit units such as an LSI, an ASIC, an FPGA, or a GPU.
- Alternatively, at least a part of the target detection device 50 shown in FIG. 22 may be realized by cooperation of software and hardware. Further, these pieces of hardware may be integrated into one, or may be divided into a plurality of pieces.
- In the first embodiment, the case where the first light emitting device 154 includes an LED or the like has been described as an example, but the present invention is not limited to this.
- For example, the first light emitting device 154 may include a light source and an optical filter that attenuates some of the wavelength components contained in the light emitted from the light source.
- Similarly, the case where the second light emitting device 156 includes an LED or the like has been described as an example, but the present invention is not limited to this.
- For example, the second light emitting device 156 may include a light source and an optical filter that attenuates some of the wavelength components contained in the light emitted from the light source. Examples of such a light source include a xenon lamp and a deuterium lamp.
- The first image acquisition unit 102 may acquire, as the first image, an image other than the first image generated using the first main image and the first sub-image. That is, the first image acquisition unit 102 does not necessarily have to acquire a first image in which each pixel represents a value calculated based on the luminance of each pixel included in the first main image and the luminance of each pixel included in the first sub-image.
- For example, the first image acquisition unit 102 may acquire, as the first image, an image captured by irradiating the subject with light having a wavelength belonging to the first wavelength group.
- Alternatively, the first image acquisition unit 102 may acquire a first image in which each pixel represents a value calculated based on the luminance of each pixel included in a first main image other than the above-mentioned first main image and the luminance of each pixel included in a first sub-image other than the above-mentioned first sub-image.
- The first main image other than the above-mentioned first main image is, for example, a first main image that depicts the subject and is captured by detecting light that has a first main wavelength belonging to the first wavelength group, is irradiated onto the subject, and is reflected by the subject.
- The first sub-image other than the above-mentioned first sub-image is, for example, a first sub-image that depicts the subject and is captured by detecting light that has a first sub-wavelength belonging to the first wavelength group, is irradiated onto the subject, and is reflected by the subject.
- Similarly, the second image acquisition unit 104 may acquire, as the second image, an image other than the second image generated using the second main image and the second sub-image. That is, the second image acquisition unit 104 does not necessarily have to acquire a second image in which each pixel represents a value calculated based on the luminance of each pixel included in the second main image and the luminance of each pixel included in the second sub-image.
- For example, the second image acquisition unit 104 may acquire, as the second image, an image captured by irradiating the subject with light having a wavelength belonging to the second wavelength group.
- Alternatively, the second image acquisition unit 104 may acquire a second image in which each pixel represents a value calculated based on the luminance of each pixel included in a second main image other than the above-mentioned second main image and the luminance of each pixel included in a second sub-image other than the above-mentioned second sub-image.
- The second main image other than the above-mentioned second main image is, for example, a second main image that depicts the subject and is captured by detecting light that has the second main wavelength and is emitted from the subject when the subject is irradiated with light belonging to the second wavelength group.
- The second sub-image other than the above-mentioned second sub-image is, for example, a second sub-image that depicts the subject and is captured by detecting light that has the second sub-wavelength and is emitted from the subject when the subject is irradiated with light belonging to the second wavelength group.
- Further, the first image acquisition unit 102 does not necessarily have to acquire a first image generated by detecting, with a light receiving element, the light reflected by the subject.
- For example, the first image acquisition unit 102 may acquire a first image generated by detecting, with a light receiving element, the light transmitted through the subject.
- Similarly, the second image acquisition unit 104 does not necessarily have to acquire a second image generated by detecting, with a light receiving element, the light reflected by the subject.
- For example, the second image acquisition unit 104 may acquire a second image generated by detecting, with a light receiving element, the light transmitted through the subject.
- In the first embodiment and the second embodiment, the case where the subject is chicken thigh and the detection target is the knee cartilage portion has been described as an example, but the present invention is not limited to this.
- For example, the subject may be a package of food or the like, and the detection target may be characters, a pattern, or the like printed on the package.
- In this case, it is preferable that both the first wavelength group and the second wavelength group include wavelengths belonging to the visible light region.
- Alternatively, the subject may be food, and the detection target may be a hair mixed into the food.
- Since hair fluoresces when it receives light belonging to the ultraviolet region and has a characteristic absorption peak in the infrared region, it is preferable that the first wavelength group or the second wavelength group include a wavelength belonging to the ultraviolet region or the infrared region.
- Alternatively, the subject may be food, and the detection target may be a portion whose frozen state differs from that of the other portions.
- Since the wavelength at which the absorption peak appears in the infrared region changes depending on the frozen state, it is preferable that both the first wavelength group and the second wavelength group include wavelengths belonging to the infrared region.
- Alternatively, the subject may be a hamburger steak, and the detection target may be the front surface or the back surface of the hamburger steak.
- In this case, it is preferable that one of the first wavelength group and the second wavelength group include a wavelength of 1200 nm, which belongs to the near-infrared region, and that the other include a wavelength of 600 nm, which belongs to the visible light region.
- Alternatively, the subject may be dumplings, and the detection target may be the bean-paste filling of the dumplings.
- In this case, it is preferable that a wavelength belonging to the near-infrared region be included in the first wavelength group or the second wavelength group.
- Further, since the absorption peak due to the bean paste of the dumplings exists in the visible light region, it is preferable that a wavelength belonging to the visible light region be included in the first wavelength group or the second wavelength group.
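Summarizing the application examples above as a configuration: only the 600 nm and 1200 nm values come from the text; every other wavelength below is a placeholder assumption.

```python
# Illustrative wavelength-group choices per application example.
WAVELENGTH_GROUPS_NM = {
    "printed_package":   {"first": [450, 550], "second": [600, 650]},  # visible only
    "hair_in_food":      {"first": [365], "second": [1450]},           # UV / IR
    "frozen_state":      {"first": [1200], "second": [1450]},          # both IR
    "hamburger_surface": {"first": [1200], "second": [600]},           # NIR + visible
}
```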
- The effects of the present invention described in the above embodiments are merely examples. Therefore, the present invention may exhibit, in addition to the effects described above, other effects that those skilled in the art can recognize from the description of the above embodiments.
Abstract
Description
This application claims priority based on Japanese Patent Application No. 2020-213342 filed in Japan on December 23, 2020, the contents of which are incorporated herein by reference.
The target detection device and the target detection program according to the first embodiment will be described with reference to FIGS. 1 to 17. In the description of the first embodiment, the case where the target detection device detects the knee cartilage portion contained in chicken thigh will be described as an example. The knee cartilage portion is a part composed of a combination of cartilage and fat, and is cut out from the chicken thigh using, for example, a knife attached to the tip of an articulated robot.
The machine learning execution device, the target detection device, the machine learning execution program, and the target detection program according to the second embodiment will be described with reference to FIGS. 18 to 21. In the description of the second embodiment, the case where the knee cartilage portion contained in chicken thigh is detected will be described as an example. Unlike the target detection device and the target detection program according to the first embodiment, the machine learning execution device, the target detection device, the machine learning execution program, and the target detection program according to the second embodiment detect the target included in the subject by using machine learning. Therefore, the description of the second embodiment focuses on the parts that differ from the first embodiment, and descriptions overlapping the first embodiment are omitted as appropriate.
Claims (14)
1. A target detection device comprising: a first image acquisition unit that acquires a first image that is generated by irradiating a subject with light belonging to a first wavelength group and that depicts the subject; a second image acquisition unit that acquires a second image that is generated by irradiating the subject with light that belongs to a second wavelength group different from the first wavelength group and that has a wavelength different from that of the light irradiated onto the subject to generate the first image, the second image depicting the subject; and a target detection unit that detects a target included in the subject by using the first image and the second image.
2. The target detection device according to claim 1, wherein the target detection unit detects the target by detecting a first element depicted in the first image with a luminance exceeding a predetermined first luminance and a second element, different from the first element, depicted in the second image with a luminance exceeding a predetermined second luminance.
3. The target detection device according to claim 2, wherein the first image acquisition unit acquires the first image generated by irradiating the subject with light having a wavelength at which the reflectance when the light is irradiated onto the first element exceeds a predetermined first reflectance.
4. The target detection device according to claim 2, wherein the second image acquisition unit acquires the second image generated by irradiating the subject with light having a wavelength at which the reflectance when the light is irradiated onto the second element exceeds a predetermined second reflectance.
5. The target detection device according to claim 2, wherein the target detection unit selects a first point based on a predetermined first rule for each region in which the first element is depicted in the first image, selects a second point based on a predetermined second rule for each region in which the second element is depicted in the second image, and detects, as the target, a combination of the first element and the second element that gives a first point and a second point such that the length of the line segment connecting the first point and the second point is less than a predetermined length.
6. The target detection device according to claim 1, wherein the first image acquisition unit acquires the first image in which each pixel represents a value calculated based on the luminance of each pixel included in a first main image that depicts the subject and is captured by detecting light that has a first main wavelength belonging to the first wavelength group, is irradiated onto the subject, and is reflected by the subject, and the luminance of each pixel included in a first sub-image that depicts the subject and is captured by detecting light that has a first sub-wavelength belonging to the first wavelength group, is irradiated onto the subject, and is reflected by the subject.
7. The target detection device according to claim 1, wherein the second image acquisition unit acquires the second image in which each pixel represents a value calculated based on the luminance of each pixel included in a second main image that depicts the subject and is captured by detecting light that has a second main wavelength belonging to the second wavelength group, is irradiated onto the subject, and is reflected by the subject, and the luminance of each pixel included in a second sub-image that depicts the subject and is captured by detecting light that has a second sub-wavelength belonging to the second wavelength group, is irradiated onto the subject, and is reflected by the subject.
8. The target detection device according to claim 1, wherein the first image acquisition unit acquires the first image in which each pixel represents a value calculated based on the luminance of each pixel included in a first main image that depicts the subject and is captured by detecting light that has a first main wavelength and is emitted from the subject when the subject is irradiated with light belonging to the first wavelength group, and the luminance of each pixel included in a first sub-image that depicts the subject and is captured by detecting light that has a first sub-wavelength and is emitted from the subject when the subject is irradiated with light belonging to the first wavelength group.
9. The target detection device according to claim 1, wherein the second image acquisition unit acquires the second image in which each pixel represents a value calculated based on the luminance of each pixel included in a second main image that depicts the subject and is captured by detecting light that has a second main wavelength and is emitted from the subject when the subject is irradiated with light belonging to the second wavelength group, and the luminance of each pixel included in a second sub-image that depicts the subject and is captured by detecting light that has a second sub-wavelength and is emitted from the subject when the subject is irradiated with light belonging to the second wavelength group.
10. A machine learning execution device comprising: a teacher data acquisition unit that acquires teacher data in which the problem is a learning multi-channel image generated using a first learning image that is generated by irradiating a learning subject with light belonging to a first wavelength group and that depicts the learning subject, and a second learning image that is generated by irradiating the learning subject with light that belongs to a second wavelength group different from the first wavelength group and that has a wavelength different from that of the light irradiated onto the learning subject to generate the first learning image, the second learning image depicting the learning subject, and in which the answer is the position of a region in which a learning target included in the learning subject depicted in the learning multi-channel image is depicted; and a machine learning execution unit that inputs the teacher data into a machine learning model and trains the machine learning model.
11. A target detection device comprising: a first inference image acquisition unit that acquires a first inference image that is generated by irradiating an inference subject with light belonging to a first wavelength group and that depicts the inference subject; a second inference image acquisition unit that acquires a second inference image that is generated by irradiating the inference subject with light that belongs to a second wavelength group different from the first wavelength group and that has a wavelength different from that of the light irradiated onto the inference subject to generate the first inference image, the second inference image depicting the inference subject; an inference image generation unit that generates an inference multi-channel image using the first inference image and the second inference image; and an inference target detection unit that inputs the inference multi-channel image into a machine learning model trained using teacher data in which the problem is a learning multi-channel image generated using a first learning image that is generated by irradiating a learning subject with light belonging to the first wavelength group and that depicts the learning subject, and a second learning image that is generated by irradiating the learning subject with light that belongs to the second wavelength group and that has a wavelength different from that of the light irradiated onto the learning subject to generate the first learning image, the second learning image depicting the learning subject, and in which the answer is the position of a region in which a learning target included in the learning subject depicted in the learning multi-channel image is depicted, and that causes the machine learning model to detect an inference target included in the inference subject.
12. A target detection program causing a computer to realize: a first image acquisition function of acquiring a first image that is generated by irradiating a subject with light belonging to a first wavelength group and that depicts the subject; a second image acquisition function of acquiring a second image that is generated by irradiating the subject with light that belongs to a second wavelength group different from the first wavelength group and that has a wavelength different from that of the light irradiated onto the subject to generate the first image, the second image depicting the subject; and a target detection function of detecting a target included in the subject by using the first image and the second image.
13. A machine learning execution program causing a computer to realize: a teacher data acquisition function of acquiring teacher data in which the problem is a learning multi-channel image generated using a first learning image that is generated by irradiating a learning subject with light belonging to a first wavelength group and that depicts the learning subject, and a second learning image that is generated by irradiating the learning subject with light that belongs to a second wavelength group different from the first wavelength group and that has a wavelength different from that of the light irradiated onto the learning subject to generate the first learning image, the second learning image depicting the learning subject, and in which the answer is the position of a region in which a learning target included in the learning subject depicted in the learning multi-channel image is depicted; and a machine learning execution function of inputting the teacher data into a machine learning model and training the machine learning model.
14. A target detection program causing a computer to realize: a first inference image acquisition function of acquiring a first inference image that is generated by irradiating an inference subject with light belonging to a first wavelength group and that depicts the inference subject; a second inference image acquisition function of acquiring a second inference image that is generated by irradiating the inference subject with light that belongs to a second wavelength group different from the first wavelength group and that has a wavelength different from that of the light irradiated onto the inference subject to generate the first inference image, the second inference image depicting the inference subject; an inference image generation function of generating an inference multi-channel image using the first inference image and the second inference image; and an inference target detection function of inputting the inference multi-channel image into a machine learning model trained using teacher data in which the problem is a learning multi-channel image generated using a first learning image that is generated by irradiating a learning subject with light belonging to the first wavelength group and that depicts the learning subject, and a second learning image that is generated by irradiating the learning subject with light that belongs to the second wavelength group and that has a wavelength different from that of the light irradiated onto the learning subject to generate the first learning image, the second learning image depicting the learning subject, and in which the answer is the position of a region in which a learning target included in the learning subject depicted in the learning multi-channel image is depicted, and of causing the machine learning model to detect an inference target included in the inference subject.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/254,007 US20240005624A1 (en) | 2020-12-23 | 2021-10-13 | Target detection device, machine learning implementation device, target detection program, and machine learning implementation program |
EP21909906.6A EP4270303A1 (en) | 2020-12-23 | 2021-10-13 | Object detection device, machine learning implementation device, object detection program, and machine learning implementation program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020213342A JP2022099535A (ja) | 2020-12-23 | 2020-12-23 | 対象検出装置、機械学習実行装置、対象検出プログラム及び機械学習実行プログラム |
JP2020-213342 | 2020-12-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022137748A1 true WO2022137748A1 (ja) | 2022-06-30 |
Family
ID=82158954
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/037947 WO2022137748A1 (ja) | 2020-12-23 | 2021-10-13 | 対象検出装置、機械学習実行装置、対象検出プログラム及び機械学習実行プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240005624A1 (ja) |
EP (1) | EP4270303A1 (ja) |
JP (1) | JP2022099535A (ja) |
TW (1) | TWI805061B (ja) |
WO (1) | WO2022137748A1 (ja) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0285750A (ja) | 1988-05-06 | 1990-03-27 | Gersan Etab | 狭周波数バンドの放射線および宝石の検知 |
JP2007024510A (ja) * | 2005-07-12 | 2007-02-01 | Ckd Corp | 基板の検査装置 |
WO2008102143A1 (en) * | 2007-02-22 | 2008-08-28 | Enfis Limited | Quality control of meat products and the like |
JP2012112688A (ja) * | 2010-11-22 | 2012-06-14 | Seiko Epson Corp | 検査装置 |
JP2015190898A (ja) * | 2014-03-28 | 2015-11-02 | セーレン株式会社 | 欠陥検出装置及び欠陥検出方法 |
JP2018066649A (ja) * | 2016-10-19 | 2018-04-26 | 株式会社前川製作所 | 食肉の骨部判別装置及び食肉の骨部判別方法 |
JP2019015654A (ja) * | 2017-07-10 | 2019-01-31 | ファナック株式会社 | 機械学習装置、検査装置及び機械学習方法 |
JP2019032283A (ja) * | 2017-08-09 | 2019-02-28 | シスメックス株式会社 | 試料処理装置、試料処理システム、および測定時間の算出方法 |
WO2019059011A1 (ja) * | 2017-09-19 | 2019-03-28 | 富士フイルム株式会社 | 教師データ作成方法及び装置並びに欠陥検査方法及び装置 |
Also Published As
Publication number | Publication date |
---|---|
US20240005624A1 (en) | 2024-01-04 |
JP2022099535A (ja) | 2022-07-05 |
TW202240536A (zh) | 2022-10-16 |
TWI805061B (zh) | 2023-06-11 |
EP4270303A1 (en) | 2023-11-01 |