WO2022158060A1 - Machining surface determination device, machining surface determination program, machining surface determination method, machining system, inference device, and machine learning device - Google Patents

Machining surface determination device, machining surface determination program, machining surface determination method, machining system, inference device, and machine learning device

Info

Publication number
WO2022158060A1
WO2022158060A1
Authority
WO
WIPO (PCT)
Prior art keywords
determination
image
learning
classification
processing
Prior art date
Application number
PCT/JP2021/038549
Other languages
French (fr)
Japanese (ja)
Inventor
Tomoyuki Uchimura (内村 知行)
Tomoya Sakai (坂井 智哉)
Kentaro Oda (織田 健太郎)
Original Assignee
Ebara Corporation (株式会社荏原製作所)
Priority date
Filing date
Publication date
Application filed by Ebara Corporation (株式会社荏原製作所)
Priority to CN202180091239.2A (published as CN116724224A)
Publication of WO2022158060A1


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B23 MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23B TURNING; BORING
    • B23B 27/00 Tools for turning or boring machines; Tools of a similar kind in general; Accessories therefor
    • B23B 27/14 Cutting tools of which the bits or tips or cutting inserts are of special material
    • B23B 27/18 Cutting tools of which the bits or tips or cutting inserts are of special material with cutting bits or tips or cutting inserts rigidly mounted, e.g. by brazing
    • B23B 27/20 Cutting tools of which the bits or tips or cutting inserts are of special material with cutting bits or tips or cutting inserts rigidly mounted, e.g. by brazing, with diamond bits or cutting inserts
    • B24 GRINDING; POLISHING
    • B24B MACHINES, DEVICES, OR PROCESSES FOR GRINDING OR POLISHING; DRESSING OR CONDITIONING OF ABRADING SURFACES; FEEDING OF GRINDING, POLISHING, OR LAPPING AGENTS
    • B24B 49/00 Measuring or gauging equipment for controlling the feed movement of the grinding tool or work; Arrangements of indicating or measuring equipment, e.g. for indicating the start of the grinding operation
    • B24B 49/12 Measuring or gauging equipment for controlling the feed movement of the grinding tool or work, involving optical means
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis

Definitions

  • The present invention relates to a machined surface determination device, a machined surface determination program, a machined surface determination method, a machining system, an inference device, and a machine learning device.
  • Patent Literature 1 discloses an inspection apparatus that inspects the outer shape of an impeller, which is the inspection object, by performing a binarization process or the like on an image of the impeller.
  • One indicator for judging product quality is, for example, the state of the machined surface after a machining process such as polishing, grinding, cutting, or casting has been performed.
  • The state of the machined surface involves various judgment items such as roughness, unevenness, undulation, warp, pattern, streaks, and waviness.
  • However, because the apparatus of Patent Literature 1 inspects the outer shape of the inspection object, it cannot determine the state of the machined surface of the inspection object.
  • When workers judge the state of the machined surface, the judgment depends on their skill and experience (including tacit knowledge); individual differences between workers therefore grow large, and it is difficult to guarantee product quality.
  • The present invention therefore aims to provide a machined surface determination device, a machined surface determination program, a machined surface determination method, a machining system, an inference device, and a machine learning device that enable automatic determination of the state of a machined surface of an object to be determined.
  • A machined surface determination device according to one aspect determines the state of the machined surface based on a determination image in which the machined surface of the object to be determined is captured, and includes: a classification result acquisition unit that obtains, for each of a plurality of small image regions obtained by dividing the determination image region of the determination image, a classification result classifying the state of the machined surface into one of a plurality of machining states; and a determination result inference unit that infers the determination result for the determination image by inputting the classification results for the plurality of small image regions into a determination learning model that has machine-learned the correlation between the classification results for a plurality of learning image regions corresponding to the plurality of small image regions and the determination result obtained when the state of the machined surface in those learning image regions is determined.
  • With this configuration, the determination result inference unit infers the determination result for the determination image by dividing the determination image region of the determination image into a plurality of small image regions and inputting the classification result for each small image region into the determination learning model. The state of the machined surface of the object to be determined can therefore be determined automatically.
  • FIG. 1 is a schematic configuration diagram showing an example of a machining system 1 including a machined surface determination device 7 according to a first embodiment.
  • FIG. 2 is a hardware configuration diagram showing an example of a computer 200 that constitutes a machine learning device 6 and the machined surface determination device 7.
  • FIG. 3 is a block diagram showing an example of the machine learning device 6 according to the first embodiment.
  • FIG. 4 is a data configuration diagram showing an example of first classification learning data.
  • FIG. 5 is a data configuration diagram showing an example of determination learning data.
  • FIG. 6 is a schematic diagram showing an example of an inference model 20A applied to a first classification learning model 2A.
  • FIG. 7 is a schematic diagram showing an example of an inference model 20 applied to a determination learning model 2.
  • FIG. 8 is a block diagram showing an example of the machined surface determination device 7 according to the first embodiment.
  • FIG. 9 is a functional explanatory diagram showing an example of classification result acquisition processing by a classification result acquisition unit 70A.
  • FIG. 10 is a functional explanatory diagram showing an example of determination result inference processing by a determination result inference unit 71.
  • FIG. 11 is a flowchart showing an example of a machined surface determination method by the machined surface determination device 7 according to the first embodiment.
  • FIG. 12 is a block diagram showing an example of the machine learning device 6 according to a second embodiment.
  • FIG. 13 is a data configuration diagram showing an example of second classification learning data.
  • FIG. 14 is a data configuration diagram showing another example of the second classification learning data.
  • FIG. 15 is a schematic diagram showing an example of an inference model 20B applied to a second classification learning model 2B.
  • FIG. 16 is a block diagram showing an example of the machined surface determination device 7 according to the second embodiment.
  • FIG. 17 is a functional explanatory diagram showing an example of classification result acquisition processing by a classification result acquisition unit 70B.
  • FIG. 18 is a flowchart showing an example of a machined surface determination method by the machined surface determination device 7 according to the second embodiment.
  • FIG. 1 is a schematic configuration diagram showing an example of a machining system 1 including a machined surface determination device 7 according to the first embodiment.
  • The machining system 1 includes: a machining unit 3 that machines the determination object 10; an imaging unit 4 that captures an image of the machined surface 100 of the determination object 10; a machined surface determination device 7 that determines the state of the machined surface 100 of the determination object 10 using the first classification learning model 2A and the determination learning model 2; and a control device 5 that controls the machining unit 3, the imaging unit 4, and the machined surface determination device 7.
  • The machining system 1 also includes, as an additional component, a machine learning device 6 that generates the first classification learning model 2A and the determination learning model 2.
  • The determination object 10 is any article to be machined by the machining unit 3, made of any material such as metal, resin, or ceramics.
  • As a specific example, the determination object 10 is a fluid machine or a fluid component that constitutes a fluid machine.
  • The three-dimensional shape, surface properties, color, size, and the like of the determination object 10 are not particularly limited.
  • The machined surface 100 is, for example, the surface of the determination object 10 that is machined by the machining unit 3.
  • The machined surface 100 may be any surface of the determination object 10, and may be the entire surface of the determination object 10 or a portion of it.
  • The machining unit 3 is composed of, for example, a robot manipulator that operates using electric power, fluid pressure, or the like as a drive source, a machining mechanism of a machine tool, or the like.
  • The machining unit 3 performs a machining process such as polishing, grinding, cutting, or casting based on control commands from the control device 5.
  • The machining unit 3 may perform any machining process that machines or forms the surface of the determination object 10, and may perform a combination of a plurality of machining processes.
  • In the present embodiment, the machining unit 3 is composed of a robot manipulator with a replaceable grindstone attached to its tip, and performs a grinding process.
  • The determination object 10 is an impeller having a plurality of blades as a fluid component constituting a pump, and the machined surface 100 is the surface of each blade machined by the grinding process of the machining unit 3.
  • The imaging unit 4 is a camera that images the machined surface 100, and is composed of an image sensor such as a CMOS sensor or a CCD sensor.
  • The imaging unit 4 is attached at a predetermined position from which the machined surface 100 can be imaged.
  • When the machining unit 3 is composed of a robot manipulator, the imaging unit 4 may be attached to the tip of the robot manipulator, or may be fixed above a mounting table (including a movable table) on which the determination object 10 is placed.
  • When the machining unit 3 is composed of, for example, a machining mechanism of a machine tool, the imaging unit 4 may be attached inside the safety cover of the machine tool, or may be fixed above a workbench separate from the machine tool.
  • The imaging unit 4 is attached at the predetermined position as described above, and its position and orientation are adjusted so that the machined surface 100 fits within its angle of view.
  • Separate imaging units 4 may be provided, one connected to the machine learning device 6 and one connected to the machined surface determination device 7, or a single imaging unit 4 may be connected to and shared by both devices.
  • The imaging unit 4 may have pan, tilt, and zoom functions.
  • The imaging unit 4 is not limited to imaging the machined surface 100 with one camera, and may image it with a plurality of cameras.
  • The control device 5 includes, for example, a control panel 50 composed of a general-purpose or dedicated computer (see FIG. 2 described later), a microcontroller, or the like, and an operation display panel 51 composed of a touch panel display, switches, buttons, and the like.
  • The control panel 50 is connected to the actuators and sensors (both not shown) of the machining unit 3, and controls the machining process of the machining unit 3 by sending control commands to the actuators according to machining operation parameters for carrying out the machining process and to the detection signals of the sensors.
  • The control panel 50 sends an imaging command to the imaging unit 4 and, as a result, receives the captured image captured by the imaging unit 4.
  • The control panel 50 sends the captured image as a determination image to the machined surface determination device 7 and, as a result, receives the state of the machined surface 100 determined by the machined surface determination device 7.
  • The control panel 50 may also send the captured image to the machine learning device 6.
  • The operation display panel 51 accepts operator operations and outputs various information by display and sound.
  • The machine learning device 6 operates as the subject of the learning phase in machine learning.
  • The machine learning device 6 acquires learning data based on the captured images captured by the imaging unit 4, and generates the first classification learning model 2A and the determination learning model 2 based on the learning data.
  • The machine learning device 6 provides the trained first classification learning model 2A and determination learning model 2 to the machined surface determination device 7 via an arbitrary communication network, recording medium, or the like. Details of the machine learning device 6 will be described later.
  • The machined surface determination device 7 operates as the subject of the inference phase in machine learning.
  • Using the first classification learning model 2A and the determination learning model 2 generated by the machine learning device 6, the machined surface determination device 7 takes the image of the machined surface 100 captured by the imaging unit 4 as the determination image and determines the state of the machined surface 100 of the determination object 10. Details of the machined surface determination device 7 will be described later.
  • The components of the machining system 1 may be incorporated into one housing and configured as, for example, a single machine tool; in that case, the machine learning device 6 and the machined surface determination device 7 may be incorporated into the control device 5.
  • Alternatively, the components of the machining system 1 may be divided into a machining device including the machining unit 3 and an inspection device including the imaging unit 4 and the machined surface determination device 7; in that case, the functions of the control device 5 may be distributed between the machining device and the inspection device.
  • Because the components of the machining system 1 are connected by a wireless or wired network, at least one of the machine learning device 6 and the machined surface determination device 7 may be installed at a place away from the work site where the machining unit 3 and the imaging unit 4 are installed; in that case, the control device 5 may be installed at the work site or elsewhere.
  • FIG. 2 is a hardware configuration diagram showing an example of the computer 200 that constitutes the machine learning device 6 and the machined surface determination device 7.
  • Each of the machine learning device 6 and the machined surface determination device 7 is configured by a general-purpose or dedicated computer 200.
  • The computer 200 includes, as its main components, a bus 210, a processor 212, a memory 214, an input device 216, a display device 218, a storage device 220, a communication I/F (interface) section 222, an external device I/F section 224, an I/O (input/output) device I/F section 226, and a media input/output section 228.
  • These components may be omitted as appropriate depending on the application in which the computer 200 is used.
  • The processor 212 is composed of one or more arithmetic processing units (CPU, MPU, GPU, DSP, etc.) and operates as a control unit that controls the computer 200 as a whole.
  • The memory 214 stores various data and a program 230, and is composed of, for example, a volatile memory (DRAM, SRAM, etc.) functioning as a main memory and a non-volatile memory (ROM, flash memory, etc.).
  • The input device 216 is composed of, for example, a keyboard, a mouse, a numeric keypad, an electronic pen, and the like.
  • The display device 218 is composed of, for example, a liquid crystal display, an organic EL display, electronic paper, a projector, or the like.
  • The input device 216 and the display device 218 may be configured integrally, like a touch panel display.
  • The storage device 220 is composed of, for example, an HDD or SSD, and stores various data necessary for executing the operating system and the program 230.
  • The communication I/F section 222 is connected, by wire or wirelessly, to a network 240 such as the Internet or an intranet, and transmits and receives data to and from other computers according to a predetermined communication standard.
  • The external device I/F section 224 is connected, by wire or wirelessly, to an external device 250 such as a printer or a scanner, and transmits and receives data to and from the external device 250 according to a predetermined communication standard.
  • The I/O device I/F section 226 is connected to I/O devices 260 such as various sensors and actuators, and exchanges various signals and data with the I/O devices 260, for example detection signals from the sensors and control signals to the actuators.
  • The media input/output section 228 is composed of, for example, a drive device such as a DVD drive or a CD drive, and reads and writes data from and to media 270 such as DVDs and CDs.
  • The processor 212 calls the program 230 into the work area of the memory 214, executes it, and controls each part of the computer 200 via the bus 210.
  • The program 230 may be stored in the storage device 220 instead of the memory 214.
  • The program 230 may be recorded on a non-transitory recording medium such as a CD or DVD in an installable or executable file format and provided to the computer 200 via the media input/output section 228.
  • The program 230 may also be provided to the computer 200 by downloading it over the network 240 via the communication I/F section 222.
  • The various functions realized by the processor 212 executing the program 230 may instead be implemented by hardware such as an FPGA or an ASIC.
  • The computer 200 may be, for example, a stationary or portable computer, or any other form of electronic equipment.
  • The computer 200 may be a client computer, a server computer, or a cloud computer.
  • The computer 200 may also be applied to devices other than the machine learning device 6 and the machined surface determination device 7.
  • FIG. 3 is a block diagram showing an example of the machine learning device 6 according to the first embodiment.
  • The machine learning device 6 includes a learning data acquisition unit 60, a learning data storage unit 61, a machine learning unit 62, and a trained model storage unit 63.
  • The machine learning device 6 is composed of, for example, the computer 200 shown in FIG. 2; the learning data acquisition unit 60 is composed of the communication I/F section 222 or the I/O device I/F section 226 together with the processor 212, the machine learning unit 62 is composed of the processor 212, and the learning data storage unit 61 and the trained model storage unit 63 are composed of the storage device 220.
  • The learning data acquisition unit 60 is an interface unit that is connected to various external devices via a communication network and acquires learning data in which input data and output data are associated.
  • The external devices are, for example, the imaging unit 4, the machined surface determination device 7, and the worker terminal 8 used by the operator.
  • The learning data storage unit 61 is a database that stores a plurality of sets of the learning data acquired by the learning data acquisition unit 60.
  • The learning data includes the first classification learning data for generating the first classification learning model 2A and the determination learning data for generating the determination learning model 2. The specific configuration of the database constituting the learning data storage unit 61 may be designed as appropriate.
  • The machine learning unit 62 performs machine learning using the learning data stored in the learning data storage unit 61. That is, the machine learning unit 62 generates the first classification learning model 2A by inputting a plurality of sets of the first classification learning data into it and causing it to machine-learn the correlation between the input data and the output data included in that learning data. Likewise, the machine learning unit 62 generates the determination learning model 2 by inputting a plurality of sets of the determination learning data into it and causing it to machine-learn the correlation between the input data and the output data included in that learning data.
  • The trained model storage unit 63 is a database that stores the first classification learning model 2A and the determination learning model 2 generated by the machine learning unit 62.
  • The first classification learning model 2A and the determination learning model 2 stored in the trained model storage unit 63 are provided to an actual system (for example, the machined surface determination device 7) via an arbitrary communication network, recording medium, or the like.
  • The first classification learning model 2A and the determination learning model 2 may also be provided to an external computer (for example, a server computer or a cloud computer) and stored in a storage unit of that computer.
  • Although the learning data storage unit 61 and the trained model storage unit 63 are shown as separate storage units in FIG. 3, they may be configured as a single storage unit.
  • FIG. 4 is a data configuration diagram showing an example of the first classification learning data.
  • The first classification learning data includes the learning image 41 as input data and, as output data, the classification result obtained by classifying the state of the machined surface 100 included in the learning image 41 into one of a plurality of machining states; these input data and output data are associated with each other.
  • Each learning image 41 as input data is one of a plurality of images generated by dividing, into learning image regions 410, a captured image 40 having a predetermined captured image region 400 in which the machined surface 100 of the determination object 10 is captured by the imaging unit 4.
  • The captured image region 400 of the captured image 40 is the region captured by the imaging unit 4 and is determined by the angle of view of the imaging unit 4.
  • The captured image region 400 shown in FIG. 4 is set so as to include a portion of one blade of the impeller, which is the determination object 10.
  • In the captured image 40 shown in FIG. 4, not only the machined surface 100 but also the background 110 is captured; however, the captured image region 400 may be set so that the background 110 is not captured.
  • The learning image regions 410 of the learning images 41 are obtained by dividing the captured image region 400 of the captured image 40 into a grid so that each learning image region 410 is square.
  • The number, shape, size, and aspect ratio of the learning image regions 410 may be changed as appropriate; the shape may be rectangular or any other shape, and the method of dividing the captured image region 400 into the learning image regions 410 may also be changed as appropriate.
  • The classification result as output data is what is called teacher data or a correct label in supervised learning. If, for example, two classes, "good" and "bad", are adopted as the plurality of machining states, the classification result is represented by either "good" or "bad"; if three classes, "good", "fair", and "bad", are adopted, the classification result is represented by one of those three. The plurality of machining states into which the state of the machined surface 100 is classified are not limited to these classes: for example, four or more classes may be used, or the classification may be made from other viewpoints.
  • When the edge of the machined surface 100 or the background 110 other than the machined surface 100 exists in the learning image region 410, a class of "not subject to determination" may be added for classification; the classification result in that case is "not subject to determination".
  • When both the machined surface 100 and the background 110 are captured in the learning image 41, the image may be classified into the "not subject to determination" class, for example, when the ratio of the background 110 exceeds a predetermined ratio; it need not always be classified into that class.
  • FIG. 5 is a data configuration diagram showing an example of determination learning data.
  • The determination learning data includes, as input data, the classification results obtained by classifying the state of the machined surface 100 into one of the plurality of machining states for each of the plurality of learning image regions 410, and, as output data, the determination result obtained when the state of the machined surface 100 in the plurality of learning image regions 410 is determined based on those classification results; these input data and output data are associated with each other.
  • The classification results for the plurality of learning image regions 410 as input data are represented, for example, by the integer values "0", "1", "2", and "3" when the state of the machined surface 100 is classified into "good", "fair", "bad", or "not subject to determination".
  • The determination result as output data is what is called teacher data or a correct label in supervised learning.
  • The determination result is obtained by determining the state of the machined surface 100 as a whole for the plurality of learning image regions 410, that is, for the captured image region 400 before being divided into the plurality of learning image regions 410.
  • The determination result determines, as the state of the machined surface 100, at least one of: the necessity of re-machining, in which the same machining process as that applied to the machined surface 100 is performed again; the necessity of separate machining, in which a machining process different from that applied to the machined surface 100 is performed; the necessity of finish machining, in which an operator performs finishing on the machined surface 100; and the machining range over which re-machining, separate machining, or finish machining is to be performed on the machined surface 100. Instead of or in addition to the above, the determination result may be one of a plurality of machining states including at least "good" and "bad" for the machined surface 100 as a whole.
  • The learning data acquisition unit 60 can employ various methods for acquiring the first classification learning data and the determination learning data. For example, the learning data acquisition unit 60 acquires a captured image 40 of the determination object 10 captured by the imaging unit 4 after the machining process has been performed by the machining unit 3, and generates a plurality of learning images 41 by dividing that captured image 40. Next, the learning data acquisition unit 60 superimposes frame lines delimiting each learning image region 410 on the captured image 40, so that the plurality of learning images 41 are displayed distinguishably on the display screen of the worker terminal 8.
  • The operator inputs, via the worker terminal 8, the result of classifying the state of the machined surface 100 included in each learning image 41 (the classification result) and the result of determining the state of the machined surface 100 included in the captured image 40 (the determination result).
  • The learning data acquisition unit 60 accepts the operator's input operations and acquires a plurality of sets of the first classification learning data by associating each learning image 41 (input data) with the classification result input for that learning image 41 (output data).
  • The learning data acquisition unit 60 also acquires the determination learning data by associating the classification results for the plurality of learning image regions 410 of the learning images 41 (input data) with the determination result (output data).
  • In this manner, the learning data acquisition unit 60 can acquire a number of sets of the first classification learning data corresponding to the number of divisions when one captured image 40 is divided into a plurality of learning images 41, and a desired number of sets can be acquired by repeating the above operations. In addition, the learning data acquisition unit 60 can acquire the determination learning data in conjunction with acquiring the first classification learning data. The first classification learning data and the determination learning data can therefore be collected easily.
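  • The data-collection flow just described can be illustrated with a short sketch. The following Python fragment is a minimal illustration, not code from the patent: the tile size, the label encoding, the helper names, and the placeholder image and labels are all assumptions.

```python
import numpy as np

# Hypothetical integer encoding matching the class labels described in the text
CLASS_LABELS = {"good": 0, "fair": 1, "bad": 2, "not_subject_to_determination": 3}

def split_into_tiles(image: np.ndarray, tile: int) -> list:
    """Divide the captured image region 400 into square learning image regions 410."""
    h, w = image.shape[:2]
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h - h % tile, tile)
            for c in range(0, w - w % tile, tile)]

captured_image_40 = np.zeros((480, 640, 3), dtype=np.uint8)       # placeholder image
learning_images_41 = split_into_tiles(captured_image_40, tile=64)

# Per-tile classification results and the whole-surface determination result
# would be entered by the operator via the worker terminal 8; placeholders here.
tile_labels = [CLASS_LABELS["good"]] * len(learning_images_41)

# One captured image yields many first-classification samples ...
first_classification_data = list(zip(learning_images_41, tile_labels))

# ... and one determination-learning sample (per-tile labels in, judgment out).
determination_learning_sample = (tile_labels, {"re_machining_required": 1.0})
```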
  • FIG. 6 is a schematic diagram showing an example of an inference model 20A applied to the first classification learning model 2A.
  • The inference model 20A employs a convolutional neural network (CNN) as the specific machine learning method.
  • The inference model 20A includes an input layer 21, an intermediate layer 22, and an output layer 23.
  • The input layer 21 has a number of neurons corresponding to the number of pixels of the learning image 41 as input data, and the pixel value of each pixel is input to the corresponding neuron.
  • The intermediate layer 22 is composed of convolutional layers 22a, pooling layers 22b, and a fully connected layer 22c.
  • The convolutional layers 22a and the pooling layers 22b are provided alternately.
  • The convolutional layers 22a and the pooling layers 22b extract features from the image input via the input layer 21.
  • The fully connected layer 22c converts the features extracted from the image by the convolutional layers 22a and the pooling layers 22b, for example by an activation function, and outputs them as a feature vector.
  • A plurality of fully connected layers 22c may be provided.
  • The output layer 23 outputs output data including the classification result based on the feature vector output from the fully connected layer 22c.
  • The output data may include, for example, a score indicating the reliability of the classification result in addition to the classification result.
  • Synapses connecting the neurons are set between the layers, and a weight is associated with each synapse of the convolutional layers 22a and the fully connected layer 22c of the intermediate layer 22.
  • The machine learning unit 62 inputs the first classification learning data into the inference model 20A and causes the inference model 20A to machine-learn the correlation between the learning image 41 and the classification result. Specifically, the machine learning unit 62 inputs the learning image 41 constituting the first classification learning data into the input layer 21 of the inference model 20A as input data. When doing so, the machine learning unit 62 may apply predetermined image adjustments (e.g., image format, image size, image filters, image masks, etc.) to the learning image 41 as preprocessing.
  • The machine learning unit 62 evaluates an error function that compares the classification result (inference result) indicated by the output data output from the output layer 23 with the classification result (teacher data) constituting the first classification learning data, and repeatedly adjusts the weight associated with each synapse by backpropagation so that the evaluation value of the error function becomes smaller. When the machine learning unit 62 determines that a predetermined learning end condition is satisfied, for example that the series of processes has been repeated a predetermined number of times or that the evaluation value of the error function has become smaller than an allowable value, it terminates the machine learning and stores the inference model 20A at that time (all of the weights associated with the synapses) in the trained model storage unit 63 as the first classification learning model 2A.
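  • As a rough illustration of such a network and training step, the following PyTorch sketch mirrors the structure described above (alternating convolutional and pooling layers, a fully connected layer, and weight adjustment by backpropagation against an error function). The layer sizes, the 64x64 tile input, and the four-class output are assumptions for illustration, not values taken from the patent.

```python
import torch
import torch.nn as nn

class InferenceModel20A(nn.Module):
    """CNN sketch: input layer 21 -> conv 22a / pooling 22b -> fully connected 22c -> output 23."""
    def __init__(self, num_classes: int = 4):   # good / fair / bad / not subject
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input tiles

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = InferenceModel20A()
error_function = nn.CrossEntropyLoss()        # compares inference result with teacher data
optimizer = torch.optim.Adam(model.parameters())

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One iteration: evaluate the error function and adjust the weights by backpropagation."""
    optimizer.zero_grad()
    loss = error_function(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```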
  • FIG. 7 is a schematic diagram showing an example of the inference model 20 applied to the determination learning model 2.
  • Like the inference model 20A shown in FIG. 6, the inference model 20 employs a convolutional neural network as the specific machine learning method. In the following, the inference model 20 is described with a focus on its differences from the inference model 20A.
  • The input layer 21 has a number of neurons corresponding to the number of divisions when the captured image region 400 is divided into the plurality of learning image regions 410, and the classification result for each learning image region 410 (for example, an integer value of 0, 1, 2, or 3) is input to the corresponding neuron.
  • The output layer 23 outputs output data including the determination result based on the feature vector output from the fully connected layer 22c.
  • The output data may include, for example, a score indicating the reliability of the determination result.
  • The machine learning unit 62 inputs the determination learning data into the inference model 20 and causes the inference model 20 to machine-learn the correlation between the classification results for the plurality of learning image regions 410 and the determination result. Specifically, the machine learning unit 62 inputs the classification results for the plurality of learning image regions 410 constituting the determination learning data into the input layer 21 of the inference model 20 as input data.
  • The machine learning unit 62 evaluates an error function that compares the determination result (inference result) indicated by the output data output from the output layer 23 with the determination result (teacher data) constituting the determination learning data, and repeatedly adjusts the weight associated with each synapse by backpropagation so that the evaluation value of the error function becomes smaller. When the machine learning unit 62 determines that a predetermined learning end condition is satisfied, for example that the series of processes has been repeated a predetermined number of times or that the evaluation value of the error function has become smaller than an allowable value, it terminates the machine learning and stores the inference model 20 at that time (all of the weights associated with the synapses) in the trained model storage unit 63 as the determination learning model 2.
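  • The determination-side model can be sketched in the same way. The patent describes the inference model 20 as CNN-like; for brevity the sketch below uses a plain fully connected network over the K per-region class values, and the hidden width, the choice of K = 60, and the three-value output (necessity of re-machining, separate machining, and finish machining) are assumptions for illustration.

```python
import torch
import torch.nn as nn

K = 60  # number of learning image regions 410 into which the captured image region 400 is divided

determination_model = nn.Sequential(
    nn.Linear(K, 128), nn.ReLU(),
    nn.Linear(128, 3), nn.Sigmoid(),  # each output in 0..1; closer to 1 means "required"
)

region_classes = torch.randint(0, 4, (1, K)).float()  # per-region class values 0..3
necessity = determination_model(region_classes)       # [re-machining, separate, finish]
```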
  • FIG. 8 is a block diagram showing an example of the machined surface determination device 7 according to the first embodiment.
  • The machined surface determination device 7 includes a classification result acquisition unit 70A, a determination result inference unit 71, a trained model storage unit 72, and an output processing unit 73.
  • The machined surface determination device 7 is composed of, for example, the computer 200 shown in FIG. 2; the classification result acquisition unit 70A is composed of the communication I/F section 222 or the I/O device I/F section 226 together with the processor 212, the determination result inference unit 71 and the output processing unit 73 are composed of the processor 212, and the trained model storage unit 72 is composed of the storage device 220.
  • The classification result acquisition unit 70A performs classification result acquisition processing (see FIG. 9 described later) that acquires, in units of small image regions 430, the classification result obtained when the state of the machined surface 100 is classified into one of the plurality of machining states for each of the plurality of small image regions 430 obtained by dividing the determination image region 420 of the determination image 42.
  • The classification result acquisition unit 70A includes: an image acquisition unit 700 that is connected to the imaging unit 4 and acquires, as the determination image 42 having the determination image region 420, the captured image of the machined surface 100 of the determination object 10 captured by the imaging unit 4; a small image generation unit 701 that generates a plurality of small images 43 from the determination image 42 by dividing the determination image region 420 into a plurality of small image regions 430; and a first classification result inference unit 702A that infers the classification results for the plurality of small image regions 430 by inputting the small images 43 into the first classification learning model 2A in units of small image regions 430.
  • The first classification result inference unit 702A adds, to each small image 43, additional information indicating the positional relationship of its small image region 430 with respect to the determination image region 420, so that the determination image 42 before division can be reconstructed from the plurality of small images 43.
  • The determination result inference unit 71 performs determination result inference processing (see FIG. 10 described later) that infers the determination result for the determination image region 420 by inputting the classification results for the plurality of small image regions 430 acquired by the classification result acquisition unit 70A into the determination learning model 2.
  • The determination result inferred by the determination result inference unit 71 determines at least one of: the necessity of re-machining; the necessity of separate machining; the necessity of finish machining; and the machining range of the machined surface 100 that is to be subjected to re-machining, separate machining, or finish machining. Instead of or in addition to the above, the determination result may be one of a plurality of machining states including at least "good" and "bad" for the machined surface 100 as a whole.
  • Part or all of the classification result acquisition unit 70A and the determination result inference unit 71 may be replaced by a processor of an external computer (for example, a server computer or a cloud computer); in that case, part or all of the classification result acquisition processing by the classification result acquisition unit 70A and the determination result inference processing by the determination result inference unit 71 may be executed by the external computer.
  • The trained model storage unit 72 is a database that stores the trained first classification learning model 2A used in the inference processing of the classification result acquisition unit 70A and the trained determination learning model 2 used in the inference processing of the determination result inference unit 71.
  • The number of first classification learning models 2A and determination learning models 2 stored in the trained model storage unit 72 is not limited to one each; a plurality of trained models generated under different conditions, such as different determination objects 10, may be stored and used selectively.
  • The trained model storage unit 72 may be replaced by a storage unit of an external computer (for example, a server computer or a cloud computer); in that case, the classification result acquisition unit 70A and the determination result inference unit 71 may perform the classification result acquisition processing and the determination result inference processing described above by accessing that external computer.
  • The output processing unit 73 performs output processing for outputting the determination result inferred by the determination result inference unit 71.
  • Various specific output means can be adopted for outputting the determination result.
  • For example, the output processing unit 73 may transmit an operation command for re-machining or separate machining to the machining unit 3 via the control panel 50, may notify the operator of the execution of finish machining by display or sound via the operation display panel 51 or the worker terminal 8, or may store the determination result in the storage means of the control panel 50 as the operation history of the machining unit 3.
  • The output processing unit 73 may simply output (transmit, notify, or store) the determination result from the determination result inference unit 71, or may additionally output (transmit, notify, or store) the classification results for the plurality of small image regions 430 from the classification result acquisition unit 70A.
  • FIG. 9 is a functional explanatory diagram showing an example of the classification result acquisition processing by the classification result acquisition unit 70A.
  • The determination image region 420 of the determination image 42 is the region captured by the imaging unit 4 and is determined by the angle of view of the imaging unit 4.
  • The determination image region 420 shown in FIG. 9 is set to include a portion of one blade of the impeller, which is the determination object 10, similarly to the captured image region 400 shown in FIG. 4. The determination image region 420 may, however, be set at a position different from that of the captured image region 400, and the number, shape, size, and aspect ratio of the two regions may differ.
  • The small image regions 430 of the small images 43 are obtained by dividing the determination image region 420 of the determination image 42 into a grid so that each small image region 430 is square.
  • Each small image region 430 of a small image 43 corresponds to a learning image region 410 of a learning image 41 used when the machine learning device 6 generated the first classification learning model 2A, and its shape, size, and aspect ratio are preferably the same as or comparable to those of the learning image region 410.
  • The method of dividing the determination image region 420 into the small image regions 430 may be changed as appropriate; for example, the division may be performed in a staggered pattern or according to other criteria, and the dividing method may be the same as or different from the method of dividing the captured image region 400 into the learning image regions 410.
  • The first classification learning model 2A has been machine-learned by the machine learning device 6 on the correlation between the learning image 41, which has a learning image region 410 corresponding to the small image region 430, and the classification result obtained by classifying the state of the machined surface 100 included in the learning image 41 into one of the plurality of machining states. By inputting the plurality of small images 43 into the first classification learning model 2A in units of small image regions 430, the first classification result inference unit 702A therefore functions as a classifier that classifies the state of the machined surface 100 in each small image region 430 into one of the plurality of machining states.
  • When two classes (good, bad) are adopted as the plurality of machining states, the classification result is represented by one of those two classes; when three classes (good, fair, bad) are adopted, the classification result is represented by one of those three classes.
  • The first classification learning model 2A may also have been machine-learned by the machine learning device 6 on the correlation between a learning image 41, in which at least one of the machined surface 100 and the background 110 other than the machined surface 100 is captured, and a classification result in which the state of the machined surface 100 captured in the learning image 41 is classified into one of the plurality of machining states, or in which the image is classified as "not subject to determination" because the edge of the machined surface 100 or the background 110 exists in the learning image region 410.
  • In that case, by inputting the plurality of small images 43 into the first classification learning model 2A in units of small image regions 430, the first classification result inference unit 702A functions as a classifier that either classifies the state of the machined surface 100 into one of the plurality of machining states or classifies the small image region 430 as "not subject to determination" because the edge of the machined surface 100 or the background 110 exists in it. The classification result thus covers the plurality of machining states as well as the "not subject to determination" class.
  • The classification result for the small image region 430 may include a score (reliability) for each class. For example, the scores for a specific small image region 430 for the classes "good", "fair", "bad", and "not subject to determination" might be "0.02", "0.10", "0.95", and "0.31", respectively. Any method may be adopted for using the scores: the class with the highest score (in the above example, "bad" with the score "0.95") may be adopted as the classification result, or a class may be adopted as the classification result when its score exceeds a predetermined score reference value (in the above example, when the score "0.95" of the "bad" class exceeds the score reference value "0.80").
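  • The two score-usage rules described above can be written compactly. This Python fragment reuses the example scores from the text; the variable names are illustrative only.

```python
scores = {"good": 0.02, "fair": 0.10, "bad": 0.95, "not_subject": 0.31}

# (a) adopt the class with the highest score
classification = max(scores, key=scores.get)                 # -> "bad"

# (b) adopt a class only if its score exceeds a reference value
SCORE_REFERENCE = 0.80
classification_b = "bad" if scores["bad"] > SCORE_REFERENCE else None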
  • The classification results for the small image regions 430 are preferably stored in the trained model storage unit 72 or another storage device (not shown); the past classification results can then be used, for example, as first classification learning data for online learning or re-learning of the trained first classification learning model 2A.
  • FIG. 10 is a functional explanatory diagram showing an example of the determination result inference processing by the determination result inference unit 71.
  • In the following description, it is assumed that one determination image 42 is divided into 60 small images 43, as shown in FIG. 10, by dividing the determination image region 420 into 60 small image regions 430.
  • The determination learning model 2 is the result of machine learning of the correlation between the classification results for the plurality of learning image regions 410 corresponding to the plurality of small image regions 430 and the determination result obtained when the state of the machined surface 100 in those learning image regions 410 is determined based on those classification results. By inputting the classification results for the plurality of small image regions 430 acquired by the classification result acquisition unit 70A into the determination learning model 2, the determination result inference unit 71 therefore infers the determination result for the state of the machined surface 100 in the plurality of small image regions 430, that is, for the machined surface 100 in the determination image region 420.
  • The determination result regarding the necessity of re-machining, separate machining, or finish machining is, for example, a real value in the range 0 to 1; the closer the value is to 0, the more the machining is "not required", and the closer it is to 1, the more it is "required".
  • When the machining range is adopted as the state of the machined surface 100, for example a range including at least the small image regions 430 classified as "bad" among the plurality of small image regions 430 is determined as the machining range.
  • The determination result inference unit 71 may apply predetermined post-processing to the determination result inferred by the determination learning model 2. For example, as post-processing, the determination result inference unit 71 may compare the value of the determination result regarding the necessity of re-machining, the value regarding the necessity of separate machining, and the value regarding the necessity of finish machining, and select the machining with the largest value as the final determination result, as in the sketch below.
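  • A minimal sketch of that post-processing, with illustrative necessity values that are not taken from the patent:

```python
determination = {
    "re_machining": 0.72,        # necessity of re-machining
    "separate_machining": 0.18,  # necessity of separate machining
    "finish_machining": 0.41,    # necessity of finish machining
}
final_result = max(determination, key=determination.get)  # -> "re_machining"
```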
  • FIG. 11 is a flowchart showing an example of the machined surface determination method by the machined surface determination device 7 according to the first embodiment. The processing shown in FIG. 11 is repeatedly executed by the machined surface determination device 7 at predetermined timings.
  • The predetermined timing may be any timing: for example, after the machining process by the machining unit 3 is completed, during the machining process, or when a predetermined event occurs (an operator operation, an instruction from a production control system, etc.).
  • A case is described below in which the machined surface determination method is executed on the determination object 10 after the machining process by the machining unit 3 is completed.
  • In step S100, when the machining process by the machining unit 3 is completed, the machined surface 100 of the determination object 10 machined by the machining process is imaged by the imaging unit 4, and the captured image is sent to the machined surface determination device 7 via the control device 5; the image acquisition unit 700 of the classification result acquisition unit 70A thereby acquires the captured image as the determination image 42.
  • In step S110, as preprocessing of the determination image 42, the small image generation unit 701 generates a plurality of small images 43 from the determination image 42 by dividing the determination image region 420 into a plurality of small image regions 430.
  • At this time, the first classification result inference unit 702A assigns serial numbers n (1 ≤ n ≤ K) to the plurality of small images 43, where K is the number of divisions, and executes a loop process while incrementing a variable i from "1" to "K".
  • In step S120, the first classification result inference unit 702A initializes the variable i to "1".
  • In step S122, the first classification result inference unit 702A selects the i-th small image 43 and inputs it to the input layer 21 of the first classification learning model 2A, thereby inferring the classification result output from the output layer 23 of the first classification learning model 2A.
  • In step S126, the variable i is incremented, and in step S128 it is determined whether the variable i exceeds the number of divisions K. The first classification result inference unit 702A obtains the classification results for the plurality of small image regions 430 by repeating steps S122 to S126 until the variable i exceeds the number of divisions K.
  • In step S130, the determination result inference unit 71 inputs the classification results for the plurality of small image regions 430 into the input layer 21 of the determination learning model 2, and infers the determination result output from the output layer 23 of the determination learning model 2 (for example, the necessity of re-machining, the necessity of separate machining, the necessity of finish machining, the machining range, etc.).
  • In step S140, the output processing unit 73 outputs information corresponding to the determination result inferred by the determination result inference unit 71 to output means (e.g., the control device 5, the worker terminal 8, etc.). The series of steps of the machined surface determination method shown in FIG. 11 then ends; the overall flow is sketched after the step correspondences below.
  • Step S100 corresponds to an image acquisition step, steps S100 to S128 correspond to a classification result acquisition step, step S130 corresponds to a determination result inference step, and step S140 corresponds to an output processing step.
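  • The overall flow of FIG. 11 can be summarized in a short sketch. The function names below are hypothetical stand-ins for the units described above (image acquisition unit 700, small image generation unit 701, first classification result inference unit 702A, determination result inference unit 71, and output processing unit 73).

```python
def determine_machined_surface(capture, split, classify_tile, judge, output):
    """Mirror of FIG. 11: S100 acquire, S110 divide, S120-S128 classify, S130 judge, S140 output."""
    determination_image_42 = capture()                      # S100: image acquisition step
    small_images_43 = split(determination_image_42)         # S110: divide into K small images
    classification_results = [classify_tile(img)            # S120-S128: loop i = 1..K through
                              for img in small_images_43]   # the first classification model 2A
    determination_result = judge(classification_results)    # S130: determination learning model 2
    output(determination_result)                            # S140: output processing step
    return determination_result
```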
  • As described above, in the machined surface determination device 7 according to the first embodiment, the classification result acquisition unit 70A divides the determination image region 420 into the small image regions 430 to divide the determination image 42 into the small images 43, and infers the classification results for the plurality of small image regions 430; the determination result inference unit 71 then infers the state of the machined surface 100 as the determination result by inputting the classification results for the plurality of small image regions 430 into the determination learning model 2.
  • That is, the classification result of the first classification learning model 2A is inferred in units of small image regions 430 by inputting each of the plurality of small images 43 into which the determination image 42 is divided, and the state of the machined surface 100 included in the determination image 42 is determined by inputting the classification results for the plurality of small image regions 430 obtained by the first classification learning model 2A into the determination learning model 2. The state of the machined surface 100 of the determination object 10 can therefore be determined automatically.
  • FIG. 12 is a block diagram showing an example of the machine learning device 6 according to the second embodiment.
  • the machine learning device 6 includes a learning data acquisition unit 60, a learning data storage unit 61, a machine learning unit 62, and a trained model storage unit 63, as in the first embodiment.
  • the learning data acquisition unit 60 is an interface unit that is connected to various external devices via a communication network and acquires learning data.
  • the learning data storage unit 61 is a database that stores a plurality of sets of learning data acquired by the learning data acquisition unit 60.
  • the learning data includes second classification learning data for generating the second classification learning model 2B and determination learning data similar to that in the first embodiment.
  • the machine learning unit 62 generates the second classification learning model 2B by causing it to machine-learn the correlation between the input data and the output data included in the second classification learning data. Further, the machine learning unit 62 generates the determination learning model 2 using the determination learning data, as in the first embodiment.
  • the learned model storage unit 63 is a database that stores the second classification learning model 2B and the determination learning model 2 generated by the machine learning unit 62.
  • FIG. 13 is a data configuration diagram showing an example of the second classification learning data.
  • the second classification learning data includes, as input data, the pixel classification results for the plurality of learning pixel regions 411 acquired from the learning image 41, and, as output data, the classification result obtained when the state of the processing surface 100 included in the learning image 41 is classified into one of a plurality of processing states; these input data and output data are associated with each other.
  • the pixel classification results serving as input data are obtained in units of learning pixel regions 411: for each of the plurality of learning pixel regions 411 forming the learning image 41, a pixel classification result indicating the classification result for that region is obtained based on the pixel values in the region.
  • the learning pixel region 411 is a region corresponding to one pixel, and the pixel value in the learning pixel region 411 is represented by, for example, an RGB value, a grayscale value, a luminance value, or the like.
  • the pixel classification result is obtained, for example, by comparing the pixel value in the learning pixel region 411 with three predetermined thresholds (third threshold < second threshold < first threshold): a pixel value equal to or greater than the first threshold is assigned the classification result "good" (0); a pixel value less than the first threshold and equal to or greater than the second threshold is assigned "acceptable" (1); a pixel value less than the second threshold and equal to or greater than the third threshold is assigned "defective" (2); and a pixel value less than the third threshold is assigned "out of determination" (3).
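The three-threshold rule above maps each pixel value to one of four integer-coded classes. A minimal sketch, assuming grayscale pixel values; the concrete threshold values are illustrative assumptions, as the present disclosure does not fix them.

```python
# Pixel classification by three thresholds (third < second < first).
# The values 200, 120, and 40 are illustrative assumptions only.

FIRST_THRESHOLD, SECOND_THRESHOLD, THIRD_THRESHOLD = 200, 120, 40

def classify_pixel(value: float) -> int:
    """Return 0=good, 1=acceptable, 2=defective, 3=out of determination."""
    if value >= FIRST_THRESHOLD:
        return 0    # "good"
    if value >= SECOND_THRESHOLD:
        return 1    # "acceptable"
    if value >= THIRD_THRESHOLD:
        return 2    # "defective"
    return 3        # "out of determination"
```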
  • the classification results as output data are, for example, one of "good", "acceptable", "bad", and "not subject to determination", as shown in FIG. 13.
  • the learning data acquisition unit 60 can employ various methods for acquiring the second classification learning data and the determination learning data. For example, as in the first embodiment, the learning data acquisition unit 60 acquires a captured image 40 of the determination object 10 captured by the imaging unit 4 after the processing step has been performed by the processing unit 3, generates a plurality of learning images 41 by dividing the captured image 40, and displays the plurality of learning images 41 on the display screen of the operator terminal 8.
  • the operator visually checks each learning image 41 on the display screen, inputs the result of classifying the state of the processing surface 100 included in each learning image 41 (the classification result), and inputs the result of determining the state of the processing surface 100 included in the captured image 40 (the determination result) via the operator terminal 8.
  • the learning data acquisition unit 60 accepts the operator's input operations and acquires a plurality of sets of second classification learning data by associating the pixel classification results for the plurality of learning pixel regions 411 acquired from each learning image 41 (input data) with the classification result input for that learning image 41 (output data).
  • the learning data acquisition unit 60 also acquires the determination learning data by associating the classification results for the plurality of learning image regions 410 of the learning images 41 (input data) with the determination result input for the captured image 40 (output data).
  • the learning data acquisition unit 60 can acquire, from one captured image 40, a number of sets of second classification learning data corresponding to the number of divisions into learning images 41; by repeating the above operation, a desired number of sets can be acquired. In addition, the learning data acquisition unit 60 can acquire the determination learning data in conjunction with acquiring the second classification learning data. Therefore, the second classification learning data and the determination learning data can be collected easily.
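Since one captured image 40 yields one second-classification pair per learning image 41 plus one determination pair per captured image, the collection step reduces to pairing the operator's labels with the pixel classification grids. A sketch under the assumption that the labels arrive as plain Python lists; the storage format is left open by the present disclosure.

```python
# Assemble learning data from one captured image 40 (illustrative only).
# `pixel_grids`: per-learning-image pixel classification results (input data);
# `class_labels`: the operator's class per learning image 41 (output data);
# `judgment_label`: the operator's judgment of the whole captured image 40.

def assemble_learning_data(pixel_grids, class_labels, judgment_label):
    if len(pixel_grids) != len(class_labels):
        raise ValueError("expected one class label per learning image")
    # One (input, output) pair per learning image 41.
    second_classification_data = list(zip(pixel_grids, class_labels))
    # One pair per captured image 40: all class labels -> judgment result.
    determination_data = (list(class_labels), judgment_label)
    return second_classification_data, determination_data
```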
  • FIG. 14 is a schematic diagram showing an example of the inference model 20B applied to the second classification learning model 2B.
  • the inference model 20B employs a convolutional neural network as its specific machine learning method, similar to the inference model 20 shown in FIG. 6. In the following, the inference model 20B will be described, focusing on the differences from the inference model 20 shown in FIG. 6.
  • the input layer 21 has a number of neurons corresponding to the number of pixels in the learning image 41 as input data, and pixel classification results for a plurality of learning pixel regions 411 are input to each neuron.
  • the output layer 23 outputs output data including classification results based on the feature vectors output from the fully connected layer 22c.
  • the output data may include, for example, a score indicating the reliability of the classification result in addition to the classification result.
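The present disclosure does not specify layer sizes or a framework for the inference model 20B; the following PyTorch sketch shows one plausible shape, assuming 32x32 single-channel inputs (the per-pixel classification grid) and four output classes, with softmax scores standing in for the reliability score mentioned above.

```python
import torch
import torch.nn as nn

class InferenceModel20B(nn.Module):
    """Illustrative CNN: two convolutional stages, a fully connected layer,
    and a 4-class output layer. All sizes are assumptions."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),   # fully connected layer
            nn.Linear(64, num_classes),             # output layer
        )

    def forward(self, x):
        logits = self.classifier(self.features(x))
        return logits, torch.softmax(logits, dim=1)  # class scores as reliability
```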
  • the machine learning unit 62 inputs the second classification learning data to the inference model 20B, and causes the inference model 20B to machine-learn the correlation between the pixel classification results for the plurality of learning pixel regions 411 and the classification results. Specifically, the machine learning unit 62 inputs the pixel classification results for the plurality of learning pixel regions 411 constituting the second classification learning data to the input layer 21 of the inference model 20B as input data.
  • the machine learning unit 62 uses an error function that compares the classification result (inference result) indicated by the output data output from the output layer 23 with the classification result (teacher data) constituting the second classification learning data, and adjusts the weight associated with each synapse (backpropagation) so that the evaluation value of the error function becomes smaller. When the machine learning unit 62 determines that a predetermined learning termination condition is satisfied, such as the series of processes described above having been repeated a predetermined number of times or the evaluation value of the error function having become smaller than an allowable value, it terminates the machine learning and stores the inference model 20B at that time (all the weights associated with the synapses) in the trained model storage unit 63 as the second classification learning model 2B.
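The training procedure just described (error function, backpropagation, and a termination condition of either a fixed iteration count or a sufficiently small error) can be sketched as follows. This assumes the illustrative PyTorch model above and in-memory tensors; the loss function and hyperparameters are assumptions, not taken from the present disclosure.

```python
import torch
import torch.nn as nn

def train_classifier(model, inputs, targets,
                     max_epochs: int = 100, allowance: float = 1e-3):
    """Adjust the weights by backpropagation until the error function's
    evaluation value is below the allowance or the epoch budget runs out."""
    criterion = nn.CrossEntropyLoss()         # error function (assumed choice)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(max_epochs):               # termination: fixed count ...
        optimizer.zero_grad()
        logits, _ = model(inputs)
        loss = criterion(logits, targets)     # inference result vs. teacher data
        loss.backward()                       # backpropagation
        optimizer.step()
        if loss.item() < allowance:           # ... or error below the allowance
            break
    return model                              # store as the trained model 2B
```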
  • FIG. 15 is a block diagram showing an example of the machined surface determination device 7 according to the second embodiment.
  • the machined surface determination device 7 includes a classification result acquisition unit 70B, a determination result inference unit 71, a learned model storage unit 72, and an output processing unit 73, as in the first embodiment.
  • the classification result acquisition unit 70B performs a classification result acquisition process (see FIG. 16 described later) for acquiring, in units of small image regions 430, the classification result obtained when the state of the processing surface 100 is classified into one of a plurality of processing states for the plurality of small image regions 430 obtained by dividing the determination image region 420 included in the determination image 42.
  • the classification result acquisition unit 70B includes an image acquisition unit 700 and a small image generation unit 701 similar to those in the first embodiment, a pixel classification result acquisition unit 703 that acquires, in units of pixel regions, the pixel classification results indicating the classification results for the plurality of pixel regions 431 forming each of the plurality of small images 43, based on the pixel values in those regions, and a second classification result inference unit 702B that infers the classification results for the plurality of small image regions 430 by inputting the pixel classification results to the second classification learning model 2B.
  • the determination result inference unit 71 performs a determination result inference process for inferring the determination result for the determination image region 420 by inputting the classification results for the plurality of small image regions 430 acquired by the classification result acquisition unit 70B to the determination learning model 2.
  • the trained model storage unit 72 is a database that stores the trained second classification learning model 2B used in the inference processing of the classification result acquisition unit 70B and the trained determination learning model 2 used in the inference processing of the determination result inference unit 71.
  • FIG. 16 is a functional explanatory diagram showing an example of the classification result acquisition process by the classification result acquisition section 70B.
  • the determination image region 420 of the determination image 42 is the region captured by the imaging unit 4, as in the first embodiment, and the small image regions 430 of the small images 43 are obtained by dividing the determination image region 420 of the determination image 42 into a grid.
  • a small image region 430 of a small image 43 corresponds to a learning image region 410 of a learning image 41, and the plurality of pixel regions 431 forming a small image 43 correspond to the plurality of learning pixel regions 411 forming a learning image 41.
  • the second classification learning model 2B has machine-learned the correlation between the pixel classification results for the plurality of learning pixel regions 411 corresponding to the plurality of pixel regions 431 and the classification result obtained when the state of the processing surface 100 in the plurality of learning pixel regions 411 is classified into one of a plurality of machining states based on those pixel classification results. Therefore, by inputting the pixel classification results for the plurality of pixel regions 431 forming each of the plurality of small images 43 to the second classification learning model 2B in units of small image regions 430, the second classification result inference unit 702B functions as a classifier that classifies the state of the processing surface 100 in each small image region 430 into one of the plurality of processing states.
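For one small image 43, the second embodiment's classification therefore has two stages: threshold-based pixel classification, then model inference. A sketch reusing the hypothetical `classify_pixel` rule and the illustrative PyTorch model from the earlier blocks:

```python
import numpy as np
import torch

def classify_small_image(model, small_image: np.ndarray) -> int:
    """Classify one small image region 430 via its pixel classification results.

    `model` is assumed to be the illustrative PyTorch module sketched above;
    `classify_pixel` is the hypothetical three-threshold rule."""
    pixel_classes = np.vectorize(classify_pixel)(small_image)  # per-pixel 0..3
    x = torch.tensor(pixel_classes, dtype=torch.float32)
    x = x.unsqueeze(0).unsqueeze(0)        # shape (batch=1, channel=1, H, W)
    with torch.no_grad():
        logits, _scores = model(x)
    return int(logits.argmax(dim=1))       # class code for this region
```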
  • FIG. 17 is a flowchart showing an example of a machined surface determination method by the machined surface determination device 7 according to the second embodiment.
  • step S100 the image acquisition unit 700 of the classification result acquisition unit 70B acquires the determination image 42.
  • step S110 the small image generation unit 701 divides the determination image region 420 of the determination image 42 into a plurality of small image regions 430 as preprocessing of the determination image 42, thereby generating a plurality of small images 43.
  • step S112 the pixel classification result acquisition unit 703 acquires, in units of pixel regions 431, the pixel classification results for the plurality of pixel regions 431 forming each of the plurality of small images 43, based on the pixel values in those pixel regions 431.
  • the second classification result inference unit 702B assigns a serial number n (1 ≤ n ≤ K) to each of the plurality of small images 43, where K is the number of small images 43 obtained by the division.
  • the loop process is executed by incrementing the variable i from "1" to "K".
  • step S120 the second classification result inference unit 702B initializes the variable i to "1".
  • step S124 the second classification result inference unit 702B selects the i-th small image 43 and inputs the pixel classification results for the plurality of pixel regions 431 forming that small image 43 to the input layer 21 of the second classification learning model 2B, thereby inferring the classification result output from the output layer 23 of the second classification learning model 2B.
  • step S126 the variable i is incremented, and in step S128, it is determined whether or not the variable i exceeds the division number K. Then, the second classification result inference unit 702B acquires the classification results for the plurality of small image regions 430 by repeating steps S124 and S126 until the variable i exceeds the number of divisions K.
  • step S130 the determination result inference unit 71 inputs the classification results for the plurality of small image regions 430 to the input layer 21 of the determination learning model 2, and infers the determination result output from the output layer 23 of the determination learning model 2 (for example, the necessity of reprocessing, the necessity of different processing, the necessity of finishing, the processing range, and the like).
  • step S140 the output processing unit 73 outputs information corresponding to the determination result inferred by the determination result inference unit 71 to output means (for example, the control device 5, the worker terminal 8, etc.). Then, the series of steps of the machined surface determination method shown in FIG. 17 ends.
  • step S100 corresponds to an image obtaining step
  • steps S100 to S128 correspond to a classification result obtaining step
  • step S130 corresponds to a determination result inference step
  • step S140 corresponds to an output processing step.
  • the classification result acquisition unit 70B divides the determination image region 420 into the plurality of small image regions 430, thereby generating the plurality of small images 43 from the determination image 42, and infers the classification results for the plurality of small image regions 430 by inputting the pixel classification results for the plurality of pixel regions 431 constituting each of the plurality of small images 43 to the second classification learning model 2B.
  • the determination result inference unit 71 inputs the classification results for the plurality of small image regions 430 to the determination learning model 2, thereby inferring the state of the machined surface 100 as the determination result.
  • the classification result by the second classification learning model 2B is inferred in units of small image regions 430 by inputting each of the plurality of small images 43 into which the judgment image 42 is divided.
  • the state of the machined surface 100 included in the determination image 42 is determined by inputting the classification results for the plurality of small image regions 430 obtained by the second classification learning model 2B to the determination learning model 2. Therefore, the state of the processing surface 100 of the determination target 10 can be determined automatically.
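The full second-embodiment flow of FIG. 17 then differs from the first embodiment only by the extra pixel classification stage. A compact sketch built from the hypothetical helpers introduced above:

```python
def determine_machined_surface_v2(image, model_2b, judge):
    """Steps S100 to S130 of FIG. 17, using the earlier sketches as parts.

    `model_2b` stands in for the second classification learning model 2B and
    `judge` for the determination learning model 2; both are assumptions."""
    small_images = split_into_small_images(image)            # step S110
    region_classes = [classify_small_image(model_2b, s)      # steps S112-S128
                      for s in small_images]
    return judge(region_classes)                             # step S130
```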
  • the judgment image area 420 is set to include a part of one blade of the impeller of the judgment object 10 as the processing surface 100 to be judged.
  • the determination image area 420 may be set so as to include the plurality of blades of the impeller as the plurality of processing surfaces 100 to be determined, covering the entire impeller. That is, when the determination object 10 has a plurality of processed surfaces 100 processed through different processing steps by the processing unit 3, the determination image region 420 is set to include the plurality of processed surfaces 100.
  • the classification result acquisition unit 70B acquires the determination image 42 in which the plurality of processed surfaces 100 are captured, and sets a determination image region 420 for each processed surface so that the determination image 42 is separated at the boundaries of the plurality of processed surfaces 100. The boundaries of the processed surfaces 100 may be set in advance, or may be set by image processing on the determination image 42.
  • the classification result acquisition unit 70B acquires the classification result for each small image area 430 for a plurality of small image areas 430 obtained by dividing the determination image area 420 for each processing surface.
  • the determination result inference unit 71 inputs the classification results for the plurality of small image regions 430 to the determination learning model 2 for each processed surface, thereby inferring the determination result for the determination image 42 for each processed surface.
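When the object has several machined surfaces, as with all blades of an impeller, the same routine simply runs once per surface after the determination image has been split at the surface boundaries. A sketch, assuming the boundaries are supplied as precomputed crop boxes (they could equally come from image processing):

```python
def determine_each_surface(image, surface_boxes, model_2b, judge):
    """Run the determination once per machined surface 100.

    `surface_boxes` is an assumed list of (top, left, bottom, right) crops,
    one per processed surface; nothing in the present disclosure fixes this
    representation."""
    results = {}
    for idx, (top, left, bottom, right) in enumerate(surface_boxes):
        surface_image = image[top:bottom, left:right]  # one image area 420
        results[idx] = determine_machined_surface_v2(surface_image,
                                                     model_2b, judge)
    return results
```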
  • the machine learning unit 62 may employ any other machine learning method.
  • other machine learning methods include, for example, tree types such as decision trees and regression trees; ensemble learning such as bagging and boosting; neural network types (including deep learning) such as recurrent neural networks and convolutional neural networks; clustering types such as hierarchical clustering, non-hierarchical clustering, the k-nearest neighbor method, and the k-means method; multivariate analysis such as principal component analysis, factor analysis, and logistic regression; and support vector machines.
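Because the determination stage consumes a fixed-length vector of region class codes, swapping the neural network for one of the listed alternatives is straightforward. As one hedged example (not prescribed by the present disclosure), a scikit-learn decision tree could stand in for the determination learning model:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data only: each row holds the class codes (0-3) of four image regions,
# and each label is an operator judgment for the whole surface.
X = [[0, 0, 1, 0],
     [0, 1, 1, 2],
     [2, 2, 1, 2],
     [0, 0, 0, 0]]
y = ["no rework", "rework", "rework", "no rework"]

judgment_tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(judgment_tree.predict([[0, 1, 0, 0]]))  # e.g. ['no rework']
```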
  • the present invention can be provided in the form of a program (machined surface determination program) 230 that causes the computer 200 shown in FIG. 2 to function as each unit included in the machined surface determination device 7 according to the above embodiment.
  • the present invention can also be provided in the form of a program (machined surface determination program) 230 for causing the computer 200 shown in FIG. 2 to execute each step included in the machined surface determination method according to the above embodiment.
  • the present invention can be provided not only in the aspect of the machined surface determination device 7 (machined surface determination method or machined surface determination program) according to the above embodiments, but also in the aspect of an inference device (inference method or inference program) used for determining the state of the machined surface 100.
  • the inference device may include a memory and a processor, and the processor may execute a series of processes.
  • the series of processes includes a classification result acquisition process for acquiring, in units of small image regions 430, the classification results obtained when the state of the processed surface 100 is classified into one of a plurality of processing states for the plurality of small image regions 430 obtained by dividing the determination image region 420 included in the determination image 42, and a determination result inference process for inferring, as the determination result for the determination image 42, the state of the machined surface 100 included in the determination image 42.
  • in the form of an inference device (inference method or inference program), the present invention can be applied to various devices more easily than when the machined surface determination device 7 is implemented.
  • when the inference device (inference method or inference program) is implemented, the inference method performed by the determination result inference unit 71 of the machined surface determination device 7, using the trained determination learning model 2 generated by the machine learning device 6 according to the above embodiments, may be applied, as should be understood by those skilled in the art.
  • the present invention can be used for a machined surface determination device, a machined surface determination program, a machined surface determination method, a machining system, an inference device, and a machine learning device.
  • 70A, 70B: Classification result acquisition unit
  • 71: Determination result inference unit
  • 72: Trained model storage unit
  • 73: Output processing unit
  • 100: Machining surface
  • 110: Background
  • 200: Computer
  • 400: Captured image area
  • 420: Determination image area
  • 430: Small image area
  • 431: Pixel area
  • 700: Image acquisition unit
  • 701: Small image generation unit
  • 702A: First classification result inference unit
  • 702B: Second classification result inference unit
  • 703: Pixel classification result acquisition unit

Abstract

A machining surface determination device (7) provided with: a classification result acquisition unit (70A) that, regarding multiple subimage regions (430) obtained by dividing a determination image region (420) contained in a determination image (42), acquires for each subimage region (430) a classification result in which the state of a machining surface (100) is classified as one of multiple machining states; and a determination result inference unit (71) that infers a determination result for the determination image (42) by inputting the classification results for the multiple subimage regions (430) into a determination learning model (2) that has machine-learned relationships between classification results for multiple learning image regions (410) corresponding to the multiple subimage regions (430) and determination results in which the states of machining surfaces (100) in the multiple learning image regions (410) are determined on the basis of the classification results.

Description

Machined surface determination device, machined surface determination program, machined surface determination method, machining system, inference device, and machine learning device
The present invention relates to a machined surface determination device, a machined surface determination program, a machined surface determination method, a machining system, an inference device, and a machine learning device.
In recent years, in the manufacturing processes for various products, devices that automatically determine product quality using various sensors, instead of relying on workers' visual inspection, have been under development. For example, Patent Literature 1 discloses an inspection apparatus that inspects the outer shape of an impeller, which is an inspection object, by performing binarization processing or the like on a captured image of the impeller.
JP 2008-51664 A
One of the indicators for judging product quality is, for example, the state of the machined surface after various machining processes such as polishing, grinding, cutting, or casting have been performed. The state of the machined surface includes various judgment items such as roughness, unevenness, undulation, warp, pattern, streaks, and waviness.
However, although the inspection apparatus disclosed in Patent Literature 1 inspects the outer shape of an inspection object, it cannot determine the state of the machined surface of the inspection object. Moreover, if workers were to judge the state of the machined surface, the result would depend on each worker's skill and experience (including tacit knowledge); individual differences between workers would therefore be large, making it difficult to guarantee product quality.
In view of the problems described above, an object of the present invention is to provide a machined surface determination device, a machined surface determination program, a machined surface determination method, a machining system, an inference device, and a machine learning device that make it possible to automatically determine the state of a machined surface of an object to be determined.
In order to achieve the above object, a machined surface determination device according to one aspect of the present invention is a machined surface determination device that determines the state of a machined surface of an object to be determined based on a determination image in which the machined surface is captured, the device including: a classification result acquisition unit that acquires, in units of small image areas, a classification result obtained when the state of the machined surface is classified into one of a plurality of machining states for a plurality of small image areas obtained by dividing a determination image area of the determination image; and a determination result inference unit that infers the determination result for the determination image by inputting the classification results for the plurality of small image areas into a determination learning model that has machine-learned the correlation between classification results for a plurality of learning image areas corresponding to the plurality of small image areas and determination results obtained when the states of the machined surfaces in the plurality of learning image areas are determined based on those classification results.
According to the machined surface determination device of the present invention, the determination result inference unit infers the determination result for the determination image by inputting, into the determination learning model, the classification results for the small image areas obtained by dividing the determination image area of the determination image. Therefore, the state of the machined surface of the object to be determined can be determined automatically.
Problems, configurations, and effects other than those described above will be clarified in the embodiments for carrying out the invention described below.
FIG. 1 is a schematic configuration diagram showing an example of a machining system 1 including a machined surface determination device 7 according to a first embodiment.
FIG. 2 is a hardware configuration diagram showing an example of a computer 200 constituting a machine learning device 6 and the machined surface determination device 7.
FIG. 3 is a block diagram showing an example of the machine learning device 6 according to the first embodiment.
FIG. 4 is a data configuration diagram showing an example of first classification learning data.
FIG. 5 is a data configuration diagram showing an example of determination learning data.
FIG. 6 is a schematic diagram showing an example of an inference model 20 applied to a first classification learning model 2A.
FIG. 7 is a schematic diagram showing an example of the inference model 20 applied to a determination learning model 2.
FIG. 8 is a block diagram showing an example of the machined surface determination device 7 according to the first embodiment.
FIG. 9 is a functional explanatory diagram showing an example of classification result acquisition processing by a classification result acquisition unit 70A.
FIG. 10 is a functional explanatory diagram showing an example of determination result inference processing by a determination result inference unit 71.
FIG. 11 is a flowchart showing an example of a machined surface determination method by the machined surface determination device 7 according to the first embodiment.
FIG. 12 is a block diagram showing an example of the machine learning device 6 according to a second embodiment.
FIG. 13 is a data configuration diagram showing an example of second classification learning data.
FIG. 14 is a schematic diagram showing an example of an inference model 20B applied to a second classification learning model 2B.
FIG. 15 is a block diagram showing an example of the machined surface determination device 7 according to the second embodiment.
FIG. 16 is a functional explanatory diagram showing an example of classification result acquisition processing by a classification result acquisition unit 70B.
FIG. 17 is a flowchart showing an example of a machined surface determination method by the machined surface determination device 7 according to the second embodiment.
Hereinafter, embodiments for carrying out the present invention will be described with reference to the drawings. In the following, the scope necessary for achieving the object of the present invention is shown schematically, the scope necessary for describing the relevant parts of the present invention is mainly described, and matters whose description is omitted are assumed to rely on known techniques.
(First embodiment)
FIG. 1 is a schematic configuration diagram showing an example of the machining system 1 including the machined surface determination device 7 according to the first embodiment.
The machining system 1 includes a processing unit 3 that processes a determination object 10, an imaging unit 4 that images a processing surface 100 of the determination object 10, the machined surface determination device 7 that determines the state of the processing surface 100 of the determination object 10 using the first classification learning model 2A and the determination learning model 2, and a control device 5 that controls the processing unit 3, the imaging unit 4, and the machined surface determination device 7. The machining system 1 further includes, as an additional component, the machine learning device 6 that generates the first classification learning model 2A and the determination learning model 2.
The determination object 10 is an arbitrary article formed of an arbitrary material such as metal, resin, or ceramics and to be processed by the processing unit 3. As a specific example, the determination object 10 is a fluid machine or a fluid component constituting a fluid machine. The three-dimensional shape, surface properties, color, size, and the like of the determination object 10 are not particularly limited.
The processing surface 100 is, for example, the surface of the determination object 10 after the determination object 10 has been processed by the processing unit 3. The processing surface 100 may be any surface of the determination object 10, and may be the entire surface of the determination object 10 or a portion thereof.
The processing unit 3 is composed of various robot manipulators that operate using electric power, fluid pressure, or the like as a drive source, processing mechanism units of machine tools, and the like. The processing unit 3 performs processing steps such as polishing, grinding, cutting, or casting based on control commands from the control device 5. The processing unit 3 may perform any processing step as long as it processes or forms the surface of the determination object 10, and may perform a combination of a plurality of processing steps.
In the machining system 1 shown in FIG. 1, the processing unit 3 is composed of a robot manipulator with a replaceable grindstone attached to its tip and performs a grinding process. The determination object 10 is an impeller having a plurality of blades as a fluid component constituting a pump, and the processing surface 100 is the surface of each blade processed by the grinding process performed by the processing unit 3.
The imaging unit 4 is a camera that images the processing surface 100, and is composed of an image sensor such as a CMOS sensor or a CCD sensor. The imaging unit 4 is attached at a predetermined position from which the processing surface 100 can be imaged. When the processing unit 3 is composed of, for example, a robot manipulator, the imaging unit 4 may be attached to the tip of the robot manipulator, or may be fixed above a mounting table (including a movable one) on which the determination object 10 is placed. When the processing unit 3 is composed of, for example, a processing mechanism unit of a machine tool, the imaging unit 4 may be attached inside a safety cover of the machine tool, or may be fixed above a workbench separate from the machine tool.
The imaging unit 4 is attached at the predetermined position as described above, and its position and orientation are adjusted so that the processing surface 100 fits within the angle of view of the imaging unit 4. As shown in FIG. 1, an imaging unit 4 connected to the machine learning device 6 and an imaging unit 4 connected to the machined surface determination device 7 may be provided separately, or a single imaging unit 4 may be connected to and shared by both the machine learning device 6 and the machined surface determination device 7. The imaging unit 4 may also have pan, tilt, and zoom functions. Furthermore, the imaging unit 4 is not limited to imaging the processing surface 100 with one camera, and may image it with a plurality of cameras.
The control device 5 includes, for example, a control panel 50 composed of a general-purpose or dedicated computer (see FIG. 2 described later), a microcontroller, or the like, and an operation display panel 51 composed of a touch panel display, switches, buttons, and the like.
The control panel 50 is connected to actuators and sensors (neither shown) of the processing unit 3, and controls the processing steps performed by the processing unit 3 by sending control commands to the actuators according to processing operation parameters for carrying out the processing steps and detection signals from the sensors. The control panel 50 sends an imaging command to the imaging unit 4 and, as a result, receives a captured image captured by the imaging unit 4. The control panel 50 sends the captured image as a determination image to the machined surface determination device 7 and, as a result, receives the state of the processing surface 100 determined by the machined surface determination device 7. The control panel 50 may also send the captured image to the machine learning device 6.
The operation display panel 51 accepts operators' operations and outputs various kinds of information by display and sound.
The machine learning device 6 operates as the main body of the learning phase in machine learning. The machine learning device 6 acquires learning data based on captured images captured by the imaging unit 4, and generates the first classification learning model 2A and the determination learning model 2 based on the learning data. The machine learning device 6 provides the trained first classification learning model 2A and determination learning model 2 to the machined surface determination device 7 via an arbitrary communication network, recording medium, or the like. Details of the machine learning device 6 will be described later.
The machined surface determination device 7 operates as the main body of the inference phase in machine learning. Using the trained first classification learning model 2A and determination learning model 2 generated by the machine learning device 6, the machined surface determination device 7 determines the state of the processing surface 100 of the determination object 10, using an image of the processing surface 100 captured by the imaging unit 4 as a determination image. Details of the machined surface determination device 7 will be described later.
The components of the machining system 1 may be incorporated into one housing and configured as, for example, one machine tool; in that case, at least one of the machine learning device 6 and the machined surface determination device 7 may be incorporated into the control device 5. Alternatively, the components of the machining system 1 may be divided into a processing apparatus including the processing unit 3 and an inspection apparatus including the imaging unit 4 and the machined surface determination device 7; in that case, the functions of the control device 5 may be distributed between the processing apparatus and the inspection apparatus. Furthermore, the components of the machining system 1 may be connected by a wireless or wired network, so that at least one of the machine learning device 6 and the machined surface determination device 7 is installed at a location away from the processing site where the processing unit 3 and the imaging unit 4 are installed; in that case, the control device 5 may be installed at the processing site or at another location.
FIG. 2 is a hardware configuration diagram showing an example of the computer 200 constituting the machine learning device 6 and the machined surface determination device 7.
Each of the machine learning device 6 and the machined surface determination device 7 is configured by a general-purpose or dedicated computer 200. As shown in FIG. 2, the computer 200 includes, as its main components, a bus 210, a processor 212, a memory 214, an input device 216, a display device 218, a storage device 220, a communication I/F (interface) unit 222, an external device I/F unit 224, an I/O (input/output) device I/F unit 226, and a media input/output unit 228. These components may be omitted as appropriate depending on the application for which the computer 200 is used.
The processor 212 is composed of one or more arithmetic processing units (CPU, MPU, GPU, DSP, etc.) and operates as a control unit that controls the computer 200 as a whole. The memory 214 stores various data and a program 230, and is composed of, for example, a volatile memory (DRAM, SRAM, etc.) functioning as a main memory and a non-volatile memory (ROM, flash memory, etc.).
The input device 216 is composed of, for example, a keyboard, a mouse, a numeric keypad, an electronic pen, or the like. The display device 218 is composed of, for example, a liquid crystal display, an organic EL display, electronic paper, a projector, or the like. The input device 216 and the display device 218 may be configured integrally, like a touch panel display. The storage device 220 is composed of, for example, an HDD, an SSD, or the like, and stores the operating system and various data necessary for executing the program 230.
The communication I/F unit 222 is connected by wire or wirelessly to a network 240 such as the Internet or an intranet, and transmits and receives data to and from other computers according to a predetermined communication standard. The external device I/F unit 224 is connected by wire or wirelessly to an external device 250 such as a printer or a scanner, and transmits and receives data to and from the external device 250 according to a predetermined communication standard. The I/O device I/F unit 226 is connected to I/O devices 260 such as various sensors and actuators, and exchanges various signals and data with the I/O devices 260, such as detection signals from sensors and control signals to actuators. The media input/output unit 228 is composed of, for example, a drive device such as a DVD drive or a CD drive, and reads and writes data from and to media 270 such as DVDs and CDs.
In the computer 200 having the above configuration, the processor 212 loads the program 230 into the work memory area of the memory 214 and executes it, and controls each part of the computer 200 via the bus 210. The program 230 may be stored in the storage device 220 instead of the memory 214. The program 230 may be recorded on a non-transitory recording medium such as a CD or DVD in an installable or executable file format and provided to the computer 200 via the media input/output unit 228. The program 230 may also be provided to the computer 200 by being downloaded over the network 240 via the communication I/F unit 222. The computer 200 may also realize, with hardware such as an FPGA or an ASIC, the various functions realized by the processor 212 executing the program 230.
The computer 200 is composed of, for example, a stationary computer or a portable computer, and is an electronic device of any form. The computer 200 may be a client computer, a server computer, or a cloud computer. The computer 200 may also be applied to devices other than the machine learning device 6 and the machined surface determination device 7.
(Machine learning device 6)
FIG. 3 is a block diagram showing an example of the machine learning device 6 according to the first embodiment.
The machine learning device 6 includes a learning data acquisition unit 60, a learning data storage unit 61, a machine learning unit 62, and a trained model storage unit 63. The machine learning device 6 is composed of, for example, the computer 200 shown in FIG. 2. In that case, the learning data acquisition unit 60 is composed of the communication I/F unit 222 or the I/O device I/F unit 226 and the processor 212, the machine learning unit 62 is composed of the processor 212, and the learning data storage unit 61 and the trained model storage unit 63 are composed of the storage device 220.
The learning data acquisition unit 60 is an interface unit that is connected to various external devices via a communication network and acquires learning data in which input data and output data are associated. The external devices are, for example, the imaging unit 4, the machined surface determination device 7, and a worker terminal 8 used by a worker.
The learning data storage unit 61 is a database that stores a plurality of sets of the learning data acquired by the learning data acquisition unit 60. The learning data includes first classification learning data for generating the first classification learning model 2A and determination learning data for generating the determination learning model 2. The specific configuration of the database constituting the learning data storage unit 61 may be designed as appropriate.
The machine learning unit 62 performs machine learning using the learning data stored in the learning data storage unit 61. That is, the machine learning unit 62 generates the first classification learning model 2A by inputting a plurality of sets of the first classification learning data into the first classification learning model 2A and causing it to machine-learn the correlation between the input data and the output data included in the first classification learning data. Similarly, the machine learning unit 62 generates the determination learning model 2 by inputting a plurality of sets of the determination learning data into the determination learning model 2 and causing it to machine-learn the correlation between the input data and the output data included in the determination learning data.
The trained model storage unit 63 is a database that stores the first classification learning model 2A and the determination learning model 2 generated by the machine learning unit 62. The first classification learning model 2A and the determination learning model 2 stored in the trained model storage unit 63 are provided to a real system (for example, the machined surface determination device 7) via an arbitrary communication network, recording medium, or the like. The first classification learning model 2A and the determination learning model 2 may also be provided to an external computer (for example, a server computer or a cloud computer) and stored in the storage unit of that external computer. Although the learning data storage unit 61 and the trained model storage unit 63 are shown as separate storage units in FIG. 3, they may be configured as a single storage unit.
FIG. 4 is a data configuration diagram showing an example of the first classification learning data.
The first classification learning data includes a learning image 41 as input data and, as output data, a classification result obtained by classifying the state of the processing surface 100 included in the learning image 41 into one of a plurality of machining states; the input data and output data are associated with each other.
The learning image 41 as input data is each of a plurality of images generated by dividing a captured image 40, which has a predetermined captured image area 400 in which the processing surface 100 of the determination object 10 is imaged by the imaging unit 4, into learning image areas 410.
The captured image area 400 of the captured image 40 is the area imaged by the imaging unit 4 and is determined by the angle of view of the imaging unit 4. The captured image area 400 shown in FIG. 4 is set so as to include a portion of one blade of the impeller that is the determination object 10. In the captured image 40 shown in FIG. 4, not only the processing surface 100 but also the background 110 is imaged; however, the captured image area 400 may be set so that the background 110 is not imaged.
As shown in FIG. 4, the learning image areas 410 of the learning images 41 are obtained by dividing the captured image area 400 of the captured image 40 into a grid so that each learning image area 410 is square. The number, shape, size, and aspect ratio of the learning image areas 410 may be changed as appropriate; for example, they may be rectangular or have other shapes. The method of dividing the captured image area 400 into the learning image areas 410 may also be changed as appropriate; for example, the areas may be divided in a staggered pattern or according to other criteria.
The classification result as output data is called, for example, teacher data or a correct label in supervised learning. When, for example, two classes, "good" and "bad", are adopted as the plurality of machining states, the classification result is expressed as either "good" or "bad". When three classes, "good", "acceptable", and "bad", are adopted, the classification result is expressed as one of "good", "acceptable", and "bad". The plurality of machining states used when classifying the state of the processing surface 100 are not limited to the above classes; for example, the states may be classified into four or more classes, or classified from other viewpoints.
Furthermore, when an edge of the processing surface 100 or the background 110 other than the processing surface 100 exists in a learning image area 410, it is also possible to add a class of "not subject to determination" for classification. When an area is classified as "not subject to determination" because an edge of the processing surface 100 or the background 110 exists in the learning image area 410, the classification result in the two-class example above is expressed as one of "good", "bad", and "not subject to determination", and in the three-class example above, as shown in FIG. 4, as one of "good", "acceptable", "bad", and "not subject to determination". When both the processing surface 100 and the background 110 are imaged in a learning image 41, the image may, for example, be classified into the "not subject to determination" class only when the proportion of the background 110 is higher than a predetermined proportion, or may never be classified into that class.
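The background-ratio rule at the end of the preceding paragraph lends itself to a short check. A minimal sketch, assuming the background pixels of a learning image are identified by a boolean mask and using an illustrative 50% threshold; neither the mask source nor the threshold is specified in the present disclosure.

```python
import numpy as np

def assign_learning_class(operator_class: int, background_mask: np.ndarray,
                          max_background_ratio: float = 0.5) -> int:
    """Override the class with "not subject to determination" (coded 3) when
    the background occupies too large a share of the learning image area.

    `background_mask` is an assumed per-pixel boolean array (True = background);
    the 0.5 ratio is an illustrative assumption."""
    ratio = float(np.mean(background_mask))
    return 3 if ratio > max_background_ratio else operator_class
```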
FIG. 5 is a data configuration diagram showing an example of the determination learning data.
The determination learning data includes, as input data, the classification results obtained when the state of the processing surface 100 is classified into one of the plurality of machining states for each of the plurality of learning image areas 410, and, as output data, the determination result obtained when the state of the processing surface 100 in the plurality of learning image areas 410 is determined based on those classification results; the input data and output data are associated with each other.
The classification results for the plurality of learning image areas 410 as input data are expressed as the integer values "0", "1", "2", and "3" when the state of the processing surface 100 is classified as, for example, "good", "acceptable", "bad", or "not subject to determination".
 出力データとしての判定結果は、教師あり学習において、例えば、教師データや正解ラベルと称される。判定結果は、複数の学習用画像領域410、すなわち、複数の学習用画像領域410に分割する前の撮像画像領域400を対象として、加工面100全体の状態を判定したものである。 The judgment results as output data are called, for example, teacher data or correct labels in supervised learning. The determination result is obtained by determining the state of the entire processing surface 100 with respect to a plurality of learning image regions 410, that is, the captured image region 400 before being divided into a plurality of learning image regions 410. FIG.
 判定結果は、加工面100の状態として、加工面100を加工したときと同一の加工工程を再度行う再加工の要否、加工面100を加工したときと異なる加工工程を行う別加工の要否、加工面100に対して作業者が仕上げ加工を行う仕上げ加工の要否、及び、加工面100のうち再加工、別加工又は仕上げ加工を行う対象とする加工範囲の少なくとも1つを判定したものである。なお、判定結果は、上記に代えて又は加えて、加工面100全体に対して、少なくとも「良」及び「不良」を含む複数の加工状態のいずれかであるかを判定したものでもよい。 The determination result is, as the state of the machined surface 100, the necessity of re-machining in which the same machining process as when the machined surface 100 was machined, or the necessity of another machining in which the machining process different from when the machined surface 100 was machined is required. , at least one of the necessity of finish machining in which an operator performs finish machining on the machined surface 100, and the machining range for which re-machining, another machining, or finishing machining is performed on the machined surface 100 is determined. is. Alternatively or additionally, the determination result may be one of a plurality of machining states including at least "good" and "bad" with respect to the entire machined surface 100.
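 For concreteness, one set of determination learning data might be laid out as sketched below. This is a minimal illustrative sketch: the Python representation, the field names, and the 3x4 grid of regions are assumptions (the embodiment of FIG. 10, for example, uses 60 regions per image), not part of the specification.

    # One determination-learning sample: per-region classification results as
    # input data, and the judgment for the whole captured image region as
    # output data (0 = good, 1 = acceptable, 2 = bad, 3 = not subject to
    # determination; judgment values: 1.0 = required, 0.0 = not required).
    sample = {
        "input": [0, 0, 1, 2,
                  0, 2, 2, 1,
                  3, 3, 0, 0],                    # classification results
        "output": {
            "rework": 1.0,                        # re-machining required
            "separate": 0.0,                      # separate machining not required
            "finishing": 0.0,                     # finishing not required
            "range": [(0, 3), (1, 1), (1, 2)],    # (row, col) regions to machine
        },
    }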
 The learning data acquisition unit 60 can employ various methods to acquire the first classification learning data and the determination learning data. For example, the learning data acquisition unit 60 acquires a captured image 40 of the determination object 10 taken by the imaging unit 4 after the machining process has been performed by the machining unit 3, and generates a plurality of learning images 41 by dividing the captured image 40. Next, the learning data acquisition unit 60 displays the plurality of learning images 41 in a mutually distinguishable state on the display screen of the operator terminal 8, for example by superimposing on the captured image 40 the frame lines that delimit each learning image region 410.
 The operator visually inspects each learning image 41 on the display screen, enters the result of classifying the state of the machined surface 100 contained in each of the plurality of learning images 41 into one of the plurality of machining states (classes) (the classification result), and also enters, via the operator terminal 8, the result of determining the state of the machined surface 100 contained in the captured image 40 (the determination result). The learning data acquisition unit 60 then accepts the operator's input operations and acquires a plurality of sets of first classification learning data by associating each learning image 41 (input data) with the classification result entered for that learning image 41 (output data). The learning data acquisition unit 60 also acquires the determination learning data by associating the classification results (input data) for the plurality of learning image regions 410 of each learning image 41 with the determination result (output data) entered for the captured image 40.
 The learning data acquisition unit 60 can therefore acquire a number of sets of first classification learning data equal to the number of divisions when one captured image 40 is divided into a plurality of learning images 41, and can acquire any desired number of sets of first classification learning data by repeating the above operation. In addition, the learning data acquisition unit 60 can acquire the determination learning data at the same time as it acquires the first classification learning data. The first classification learning data and the determination learning data can thus be collected easily.
 FIG. 6 is a schematic diagram showing an example of the inference model 20A applied to the first classification learning model 2A.
 The inference model 20A employs a convolutional neural network (CNN) as its specific machine learning technique. The inference model 20A comprises an input layer 21, an intermediate layer 22, and an output layer 23.
 The input layer 21 has a number of neurons corresponding to the number of pixels of the learning image 41 serving as input data, and the pixel value of each pixel is input to the corresponding neuron.
 The intermediate layer 22 is composed of convolution layers 22a, pooling layers 22b, and a fully connected layer 22c. For example, a plurality of convolution layers 22a and pooling layers 22b are provided alternately. The convolution layers 22a and pooling layers 22b extract features from the image input via the input layer 21. The fully connected layer 22c converts the features extracted from the image by the convolution layers 22a and pooling layers 22b, for example by means of an activation function, and outputs them as a feature vector. A plurality of fully connected layers 22c may be provided.
 The output layer 23 outputs output data including the classification result on the basis of the feature vector output from the fully connected layer 22c. In addition to the classification result, the output data may include, for example, a score indicating the reliability of the classification result.
 Between the layers of the inference model 20A, synapses connect the neurons of adjacent layers, and a weight is associated with each synapse of the convolution layers 22a and the fully connected layer 22c of the intermediate layer 22.
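 As a rough, non-authoritative sketch, a network with this input/intermediate/output structure could be written as follows; the layer counts, channel widths, 64x64 grayscale tile size, and the use of PyTorch are assumptions not taken from the specification.

    import torch
    import torch.nn as nn

    class TileClassifier(nn.Module):
        # Sketch of inference model 20A: alternating convolution layers (22a)
        # and pooling layers (22b) that extract features, followed by fully
        # connected layers (22c) and an output layer (23) emitting one score
        # per class.
        def __init__(self, num_classes: int = 4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                       # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                       # 32x32 -> 16x16
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
                nn.Linear(128, num_classes),           # output layer 23
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))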
 The machine learning unit 62 inputs the first classification learning data into the inference model 20A and causes the inference model 20A to machine-learn the correlation between the learning images 41 and the classification results. Specifically, the machine learning unit 62 inputs the learning image 41 constituting the first classification learning data into the input layer 21 of the inference model 20A as input data. As preprocessing before inputting the learning image 41 into the input layer 21, the machine learning unit 62 may apply predetermined image adjustments (for example, image format, image size, image filters, image masks, and the like) to the learning image 41.
 Using an error function that compares the classification result indicated by the output data output from the output layer 23 (the inference result) with the classification result constituting the first classification learning data (the teacher data), the machine learning unit 62 repeatedly adjusts the weights associated with the synapses by backpropagation so that the evaluation value of the error function becomes smaller. When the machine learning unit 62 judges that a predetermined learning end condition is satisfied, such as the above series of processes having been repeated a predetermined number of times or the evaluation value of the error function having become smaller than an allowable value, it ends the machine learning and stores the inference model 20A at that time (all the weights associated with the synapses) in the trained model storage unit 63 as the first classification learning model 2A.
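 The training procedure just described might be sketched as follows for the illustrative model above; the cross-entropy error function, the Adam optimizer, and all hyperparameter values are assumptions.

    import torch
    import torch.nn as nn

    def train(model, loader, max_epochs=50, tolerance=1e-3):
        # Sketch of the machine learning unit 62: minimize an error function
        # comparing inference results with teacher data by backpropagation,
        # and stop after a fixed number of iterations or once the average
        # loss falls below an allowable value (the learning end condition).
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(max_epochs):
            total = 0.0
            for images, labels in loader:     # first classification learning data
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()               # backpropagation
                optimizer.step()              # adjust the synapse weights
                total += loss.item()
            if total / len(loader) < tolerance:
                break
        return model                          # stored as learning model 2A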
 FIG. 7 is a schematic diagram showing an example of the inference model 20 applied to the determination learning model 2.
 Like the inference model 20A shown in FIG. 6, the inference model 20 employs a convolutional neural network as its specific machine learning technique. The inference model 20 is described below with a focus on the points that differ from the inference model 20A shown in FIG. 6.
 The input layer 21 has a number of neurons corresponding to the number of divisions when the captured image region 400 is divided into the plurality of learning image regions 410, and the classification result for each learning image region 410 (for example, an integer value of 0, 1, 2, or 3) is input to the corresponding neuron.
 The output layer 23 outputs output data including the determination result on the basis of the feature vector output from the fully connected layer 22c. In addition to the determination result, the output data may include, for example, a score indicating the reliability of the determination result.
 The machine learning unit 62 inputs the determination learning data into the inference model 20 and causes the inference model 20 to machine-learn the correlation between the classification results for the plurality of learning image regions 410 and the determination results. Specifically, the machine learning unit 62 inputs the classification results for the plurality of learning image regions 410, which constitute the determination learning data, into the input layer 21 of the inference model 20 as input data.
 Using an error function that compares the determination result indicated by the output data output from the output layer 23 (the inference result) with the determination result constituting the determination learning data (the teacher data), the machine learning unit 62 repeatedly adjusts the weights associated with the synapses by backpropagation so that the evaluation value of the error function becomes smaller. When the machine learning unit 62 judges that a predetermined learning end condition is satisfied, such as the above series of processes having been repeated a predetermined number of times or the evaluation value of the error function having become smaller than an allowable value, it ends the machine learning and stores the inference model 20 at that time (all the weights associated with the synapses) in the trained model storage unit 63 as the determination learning model 2.
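 A minimal sketch of such a determination model follows. For brevity it flattens the grid of classification results into a vector and uses fully connected layers only, rather than the convolutional arrangement of FIG. 7, and the layer sizes, the 60-region input, and the sigmoid output encoding (0 = not required, 1 = required) are all assumptions.

    import torch
    import torch.nn as nn

    class SurfaceJudge(nn.Module):
        # Sketch of inference model 20: the input layer receives one class
        # index per small image region, and the output layer emits real
        # values in [0, 1] for the necessity of re-machining, separate
        # machining, and finishing, respectively.
        def __init__(self, num_regions: int = 60, num_outputs: int = 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_regions, 64), nn.ReLU(),
                nn.Linear(64, num_outputs),
                nn.Sigmoid(),
            )

        def forward(self, class_indices: torch.Tensor) -> torch.Tensor:
            return self.net(class_indices.float())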
(Machined surface determination device 7)
 FIG. 8 is a block diagram showing an example of the machined surface determination device 7 according to the first embodiment.
 The machined surface determination device 7 comprises a classification result acquisition unit 70A, a determination result inference unit 71, a trained model storage unit 72, and an output processing unit 73. The machined surface determination device 7 is composed of, for example, the computer 200 shown in FIG. 2. In that case, the classification result acquisition unit 70A is composed of the communication I/F unit 222 or the I/O device I/F unit 226 together with the processor 212, the determination result inference unit 71 and the output processing unit 73 are composed of the processor 212, and the trained model storage unit 72 is composed of the storage device 220.
 The classification result acquisition unit 70A performs classification result acquisition processing (see FIG. 9, described later) that acquires, for each of a plurality of small image regions 430 into which the determination image region 420 of the determination image 42 is divided, the classification result obtained when the state of the machined surface 100 is classified into one of the plurality of machining states, in units of small image regions 430.
 As a specific configuration, the classification result acquisition unit 70A comprises: an image acquisition unit 700 that is connected to the imaging unit 4 and acquires, as a determination image 42 having a determination image region 420, a captured image in which the machined surface 100 of the determination object 10 has been imaged by the imaging unit 4; a small image generation unit 701 that generates a plurality of small images 43 from the determination image 42 by dividing the determination image region 420 into a plurality of small image regions 430; and a first classification result inference unit 702A that infers the classification results for the plurality of small image regions 430 by inputting the plurality of small images 43 into the first classification learning model 2A in units of small image regions 430.
 So that the determination image 42 before division can be reconstructed from the plurality of small images 43, the first classification result inference unit 702A records the positional relationship of each small image region 430 with respect to the determination image region 420, for example as additional information attached to each small image 43.
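 This division and bookkeeping might be sketched as follows; the 64-pixel tile size, the NumPy array representation, and the dictionary layout of the additional information are assumptions.

    import numpy as np

    def split_into_tiles(image: np.ndarray, tile: int = 64):
        # Sketch of the small image generation unit 701: divide the
        # determination image region into a grid of square tiles, recording
        # each tile's (row, column) position as additional information so
        # that the original image can be reconstructed.
        h, w = image.shape[:2]
        tiles = []
        for r in range(h // tile):
            for c in range(w // tile):
                patch = image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
                tiles.append({"position": (r, c), "pixels": patch})
        return tiles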
 The determination result inference unit 71 performs determination result inference processing (see FIG. 10, described later) that infers the determination result for the determination image region 420 by inputting the classification results for the plurality of small image regions 430 acquired by the classification result acquisition unit 70A into the determination learning model 2.
 The determination result inferred by the determination result inference unit 71 indicates at least one of: whether re-machining is required; whether separate machining is required; whether finishing is required; and the machining range, that is, the portion of the machined surface 100 to be subjected to re-machining, separate machining, or finishing. Instead of or in addition to the above, the determination result may indicate which of a plurality of machining states, including at least "good" and "bad", applies to the machined surface 100 as a whole.
 Note that some or all of the classification result acquisition unit 70A and the determination result inference unit 71 may be replaced by the processor of an external computer (for example, a server computer or a cloud computer), and some or all of the classification result acquisition processing by the classification result acquisition unit 70A and the determination result inference processing by the determination result inference unit 71 may be executed on an external computer.
 The trained model storage unit 72 is a database that stores the trained first classification learning model 2A used in the inference processing of the classification result acquisition unit 70A and the trained determination learning model 2 used in the inference processing of the determination result inference unit 71. The number of first classification learning models 2A and determination learning models 2 stored in the trained model storage unit 72 is not limited to one; for example, a plurality of trained models with different conditions, such as the machine learning technique, the machining process performed by the machining unit 3, or the determination object 10, may be stored and used selectively. The trained model storage unit 72 may also be replaced by the storage unit of an external computer (for example, a server computer or a cloud computer); in that case, the classification result acquisition unit 70A and the determination result inference unit 71 may perform the above-described classification result acquisition processing and determination result inference processing by accessing that external computer.
 The output processing unit 73 performs output processing for outputting the determination result inferred by the determination result inference unit 71. Various concrete output means for outputting the determination result may be adopted. For example, depending on the determination result, the output processing unit 73 may transmit an operation command for re-machining or separate machining to the machining unit 3 via the control panel 50, may notify the operator by display or sound via the operation display panel 51 or the operator terminal 8 that finishing is to be performed, or may store the determination result in the storage means of the control panel 50 as the operation history of the machining unit 3. The output processing unit 73 may simply output (transmit, notify, store) the determination result from the determination result inference unit 71, or may additionally output (transmit, notify, store) the classification results for the plurality of small image regions 430 from the classification result acquisition unit 70A.
 FIG. 9 is a functional explanatory diagram showing an example of the classification result acquisition processing by the classification result acquisition unit 70A.
 The determination image region 420 of the determination image 42 is the region imaged by the imaging unit 4 and is defined by the angle of view of the imaging unit 4. Like the captured image region 400 shown in FIG. 4, the determination image region 420 shown in FIG. 9 is set so as to include a portion of one blade of the impeller that is the determination object 10. The determination image region 420 may be set at a position different from that of the captured image region 400, and the two may differ in number of images, shape, size, and aspect ratio.
 As shown in FIG. 9, the small image regions 430 of the small images 43 are obtained by dividing the determination image region 420 of the determination image 42 into a grid so that each small image region 430 is square. The small image region 430 of a small image 43 corresponds to the learning image region 410 of a learning image 41 used when the machine learning device 6 generated the first classification learning model 2A, and the two are preferably identical or comparable in number of images, shape, size, and aspect ratio.
 Therefore, as long as the number of images, shape, size, and aspect ratio of the small image regions 430 correspond to those of the learning image regions 410, the method of dividing the determination image region 420 into the small image regions 430 may be changed as appropriate; for example, the division may be staggered, or may follow other criteria. The method of dividing the determination image region 420 into the small image regions 430 may be the same as or different from the method of dividing the captured image region 400 into the learning image regions 410.
 Here, the first classification learning model 2A is a model in which the machine learning device 6 has machine-learned the correlation between a learning image 41 having a learning image region 410 corresponding to a small image region 430 and the classification result obtained by classifying the state of the machined surface 100 contained in that learning image 41 into one of the plurality of machining states. The first classification result inference unit 702A therefore functions, by inputting the plurality of small images 43 into the first classification learning model 2A in units of small image regions 430, as a classifier that classifies the state of the machined surface 100 within each small image region 430 into one of the plurality of machining states. When two classes (good, bad) are adopted as the plurality of machining states, the classification result is expressed in two classes (good, bad); when three classes (good, acceptable, bad) are adopted, the classification result is expressed in three classes (good, acceptable, bad).
 Alternatively, the first classification learning model 2A may be a model in which the machine learning device 6 has machine-learned the correlation between a learning image 41, in which at least one of the machined surface 100 and the background 110 other than the machined surface 100 is imaged, and the classification result obtained by either classifying the state of the machined surface 100 imaged in that learning image 41 into one of the plurality of machining states, or classifying the image as not subject to determination on the grounds that an edge of the machined surface 100 or background 110 other than the machined surface 100 is present within the small image region 430 of that learning image 41. In this case, the first classification result inference unit 702A of the classification result acquisition unit 70A functions, by inputting the plurality of small images 43 into the first classification learning model 2A in units of small image regions 430, as a classifier that either classifies the state of the machined surface 100 within a small image region 430 into one of the plurality of machining states, or classifies the region as not subject to determination on the grounds that an edge of the machined surface 100 or background 110 other than the machined surface 100 is present within the small image region 430. Since "not subject to determination" is added to the plurality of machining states, the classification result in the above examples is expressed in three classes (good, bad, not subject to determination) or four classes (good, acceptable, bad, not subject to determination).
 The classification result for a small image region 430 may include a score (reliability) for each class. In this case, assuming the classification result is expressed in four classes (good, acceptable, bad, not subject to determination), the per-class scores for a particular small image region 430 are output, for example, as "0.02", "0.10", "0.95", and "0.31". Any method of using the scores may be adopted: for example, the class with the highest score (in the above example, "bad" with the score "0.95") may be taken as the classification result, or, when the score of a given class exceeds a predetermined score reference value (in the above example, when the score "0.95" of the "bad" class exceeds the score reference value "0.80"), that class may be taken as the classification result.
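 The two score-handling policies above can be combined in a short sketch: assign a designated class whenever its score exceeds the reference value, and otherwise fall back to the highest-scoring class. The function name, the label strings, and the 0.80 reference value applied to the "bad" class are illustrative.

    def classify_from_scores(scores,
                             labels=("good", "acceptable", "bad", "excluded"),
                             threshold=0.80, preferred="bad"):
        # scores: one reliability score per class, in the order of labels.
        by_label = dict(zip(labels, scores))
        if by_label[preferred] > threshold:     # e.g. 0.95 > 0.80 -> "bad"
            return preferred
        return max(by_label, key=by_label.get)  # otherwise highest score wins

    print(classify_from_scores([0.02, 0.10, 0.95, 0.31]))  # -> bad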
 また、小画像領域430に対する分類結果は、学習済みモデル記憶部72や他の記憶装置(不図示)に記憶することが好ましく、過去の分類結果は、例えば、学習済みの第1の分類用学習モデル2Aの推論精度の更なる向上のため、オンライン学習や再学習に用いられる第1の分類学習用データとして利用することが可能である。 Further, the classification results for the small image region 430 are preferably stored in the learned model storage unit 72 or another storage device (not shown), and the past classification results are stored in the learned first classification learning, for example. In order to further improve the inference accuracy of the model 2A, it can be used as first classification learning data used for online learning and re-learning.
 図10は、判定結果推論部71による判定結果推論処理の一例を示す機能説明図である。以下では、判定用画像領域420を60個の小画像領域430に分割することで、1枚の判定用画像42が、図10に示すように、60枚の小画像43に分割された場合を想定して説明する。 FIG. 10 is a functional explanatory diagram showing an example of determination result inference processing by the determination result inference unit 71. FIG. In the following description, it is assumed that one judgment image 42 is divided into 60 small images 43 as shown in FIG. 10 by dividing the judgment image region 420 into 60 small image regions 430. I will assume and explain.
 判定用学習モデル2は、複数の小画像領域430に相当する複数の学習用画像領域410に対する分類結果と当該分類結果に基づいて複数の学習用画像領域410内の加工面100の状態を判定したときの判定結果との相関関係を機械学習させたものである。したがって、判定結果推論部71は、分類結果取得部70Aにより取得された複数の小画像領域430に対する分類結果を判定用学習モデル2に入力することにより、複数の小画像領域430内の加工面100の状態、すなわち、判定用画像領域420内の加工面100に対する判定結果を推論する。 The determination learning model 2 determines the state of the processing surface 100 in the plurality of learning image regions 410 based on the classification results for the plurality of learning image regions 410 corresponding to the plurality of small image regions 430 and the classification results. This is a result of machine learning of the correlation with the judgment result of time. Therefore, the determination result inference unit 71 inputs the classification results for the plurality of small image regions 430 acquired by the classification result acquisition unit 70A to the learning model 2 for determination, thereby processing the processing surface 100 in the plurality of small image regions 430. , that is, the determination result for the processing surface 100 in the determination image area 420 is inferred.
 加工面100の状態として、再加工の要否、別加工の要否、及び、仕上げ加工の要否を採用する場合には、判定結果は、例えば、0~1の値域を有する実数値として、「0」に近づくほど「否」、1に近づくほど「要」として表される。さらに、加工面100の状態として、加工範囲を採用する場合には、複数の小画像領域430に対する分類結果として、例えば、「不良」に分類された小画像領域430を少なくとも含む範囲が加工範囲として判定される。 When adopting the necessity of remachining, the necessity of another machining, and the necessity of finish machining as the state of the machined surface 100, the determination result is, for example, a real value having a value range of 0 to 1, The closer to "0", the more "no", and the closer to 1, the more "required". Furthermore, when a processing range is adopted as the state of the processing surface 100, a range including at least the small image regions 430 classified as "defective" as a classification result for the plurality of small image regions 430, for example, is used as the processing range. be judged.
 なお、判定結果推論部71は、上記のように、判定用学習モデル2により推論された判定結果に対して所定の後処理を行うようにしてもよい。例えば、判定結果推論部71は、後処理として、再加工の要否に対する判定結果の値、別加工の要否に対する判定結果の値、及び、仕上げ加工の要否に対する判定結果の値を比較し、それらの中で判定結果の値が最も大きな加工を選択し、最終的な判定結果としてもよい。 Note that the determination result inference unit 71 may perform predetermined post-processing on the determination result inferred by the determination learning model 2 as described above. For example, as post-processing, the determination result inference unit 71 compares the value of the determination result regarding the necessity of reprocessing, the value of the determination result regarding the necessity of separate processing, and the value of the determination result regarding the necessity of finishing processing. , the machining with the largest determination result value may be selected as the final determination result.
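 This post-processing, together with deriving the machining range from the regions classified as "bad", might be sketched as follows; the data shapes, the class index 2 for "bad", and the action names are assumptions.

    def postprocess(judgment, tile_classes):
        # judgment: real values in [0, 1] for re-machining, separate
        # machining, and finishing; tile_classes: class index per (row, col).
        rework, separate, finishing = judgment
        actions = {"rework": rework, "separate": separate, "finishing": finishing}
        chosen = max(actions, key=actions.get)   # largest determination value
        machining_range = [pos for pos, cls in tile_classes.items() if cls == 2]
        return chosen, machining_range

    print(postprocess((0.9, 0.2, 0.4), {(0, 0): 0, (0, 1): 2, (1, 0): 2}))
    # -> ('rework', [(0, 1), (1, 0)])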
(Machined surface determination method)
 FIG. 11 is a flowchart showing an example of a machined surface determination method performed by the machined surface determination device 7 according to the first embodiment. The series of steps shown in FIG. 11 is executed repeatedly by the machined surface determination device 7 at predetermined timing. The predetermined timing may be any timing; for example, it may be after the machining process by the machining unit 3 has finished, during the machining process, or upon occurrence of a predetermined event (an operation by the operator, an instruction from a production management system, or the like). The following describes the case in which the machined surface determination method is executed, after the machining process by the machining unit 3 has finished, on the determination object 10 machined by that process.
 First, in step S100, when the machining process by the machining unit 3 finishes, the machined surface 100 of the determination object 10 machined by that process is imaged by the imaging unit 4, and the captured image is sent via the control device 5 to the machined surface determination device 7, whereupon the image acquisition unit 700 of the classification result acquisition unit 70A acquires the captured image as the determination image 42.
 Next, in step S110, as preprocessing of the determination image 42, the small image generation unit 701 generates a plurality of small images 43 from the determination image 42 by dividing the determination image region 420 of the determination image 42 into a plurality of small image regions 430.
 Next, in steps S120 to S128, the first classification result inference unit 702A executes loop processing by incrementing a variable i from "1" to "K", where K is the number of divisions of the plurality of small images 43 and serial numbers n (1 ≦ n ≦ K) have been assigned to the plurality of small images 43.
 Specifically, in step S120, the first classification result inference unit 702A initializes the variable i to "1". Next, in step S122, the first classification result inference unit 702A selects the i-th small image 43 and inputs it into the input layer 21 of the first classification learning model 2A, thereby inferring the classification result output from the output layer 23 of the first classification learning model 2A.
 Next, in step S126, the variable i is incremented, and in step S128 it is determined whether the variable i has exceeded the number of divisions K. The first classification result inference unit 702A acquires the classification results for the plurality of small image regions 430 by repeating steps S122 and S126 until the variable i exceeds the number of divisions K.
 Next, in step S130, the determination result inference unit 71 inputs the classification results for the plurality of small image regions 430 into the input layer 21 of the determination learning model 2, thereby inferring the determination result output from the output layer 23 of the determination learning model 2 (for example, the necessity of re-machining, the necessity of separate machining, the necessity of finishing, the machining range, and the like).
 Next, in step S140, the output processing unit 73 outputs information corresponding to the determination result inferred by the determination result inference unit 71 to output means (for example, the control device 5, the operator terminal 8, and the like). The series of steps of the machined surface determination method shown in FIG. 11 then ends. In the machined surface determination method, step S100 corresponds to an image acquisition step, steps S100 to S128 correspond to a classification result acquisition step, step S130 corresponds to a determination result inference step, and step S140 corresponds to an output processing step.
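 Putting steps S100 to S130 together, the inference flow of FIG. 11 might be sketched as follows, reusing the illustrative split_into_tiles, TileClassifier, and SurfaceJudge sketches given earlier; the tensor shapes, the tile size, and the argmax decoding of the class scores are all assumptions rather than the specification's implementation.

    import torch

    def judge_surface(image, classifier, judge, tile=64):
        # Sketch of the flow of FIG. 11: acquire the determination image (S100),
        # divide it into K small images (S110), classify each small image with
        # the first classification learning model 2A (S120 to S128), and input
        # the K class indices into the determination learning model 2 (S130).
        tiles = split_into_tiles(image, tile)          # illustrative sketch above
        classes = []
        for t in tiles:                                # loop over i = 1 .. K
            x = torch.from_numpy(t["pixels"]).float()[None, None]  # 1x1xHxW
            scores = classifier(x)                     # inference model 20A
            classes.append(int(scores.argmax()))       # class index 0 to 3
        judgment = judge(torch.tensor(classes)[None])  # inference model 20
        return classes, judgment                       # handed to output (S140)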
 As described above, according to the machined surface determination device 7 and the machined surface determination method of this embodiment, the classification result acquisition unit 70A infers the classification results for the plurality of small image regions 430 by inputting each of the plurality of small images 43, generated from the determination image 42 by dividing the determination image region 420 into the small image regions 430, into the first classification learning model 2A. The determination result inference unit 71 then infers the state of the machined surface 100 as the determination result by inputting the classification results for the plurality of small image regions 430 into the determination learning model 2.
 Because the classification results of the first classification learning model 2A are inferred in units of small image regions 430 by inputting each of the plurality of small images 43 into which the determination image 42 has been divided, the learning data needed for machine learning is easier to collect, and the accuracy of the first classification learning model 2A can be improved, compared with the case where one whole determination image 42 is input into the first classification learning model 2A. The state of the machined surface 100 contained in the determination image 42 is then determined by inputting the classification results of the first classification learning model 2A for the plurality of small image regions 430 into the determination learning model 2. The state of the machined surface 100 of the determination object 10 can therefore be determined automatically.
(Second embodiment)
 In the machining system 1 according to the first embodiment, the case was described in which the first classification learning model 2A and the determination learning model 2 are employed in the learning phase and inference phase of machine learning. For the machining system 1 according to the second embodiment, the case in which a second classification learning model 2B and the determination learning model 2 are employed is described instead. Since the basic configuration and operation of the machining system 1 according to the second embodiment are the same as in the first embodiment, the following description centers on the parts related to the second classification learning model 2B, which differ from the first embodiment.
(Machine learning device 6)
 FIG. 12 is a block diagram showing an example of the machine learning device 6 according to the second embodiment.
 As in the first embodiment, the machine learning device 6 comprises a learning data acquisition unit 60, a learning data storage unit 61, a machine learning unit 62, and a trained model storage unit 63.
 The learning data acquisition unit 60 is an interface unit that is connected to various external devices via a communication network and acquires learning data. The learning data storage unit 61 is a database that stores a plurality of sets of the learning data acquired by the learning data acquisition unit 60. The learning data includes second classification learning data for generating the second classification learning model 2B and determination learning data similar to that of the first embodiment.
 The machine learning unit 62 generates the second classification learning model 2B by inputting a plurality of sets of second classification learning data into the second classification learning model 2B and causing the second classification learning model 2B to machine-learn the correlation between the input data and the output data contained in the second classification learning data. As in the first embodiment, the machine learning unit 62 also generates the determination learning model 2 using the determination learning data.
 The trained model storage unit 63 is a database that stores the second classification learning model 2B and the determination learning model 2 generated by the machine learning unit 62.
 FIG. 13 is a data configuration diagram showing an example of the second classification learning data.
 The second classification learning data includes, as input data, the pixel classification results for a plurality of learning pixel regions 411 acquired from a learning image 41, and, as output data, the classification result obtained by classifying the state of the machined surface 100 contained in that learning image 41 into one of the plurality of machining states; the input data and output data are stored in association with each other.
 The pixel classification results for the plurality of learning pixel regions 411 serving as input data are pixel classification results, each indicating the classification result for a learning pixel region 411 based on the pixel values within that learning pixel region 411, acquired in units of learning pixel regions 411 for the plurality of learning pixel regions 411 constituting the learning image 41.
 A learning pixel region 411 is a region corresponding to one pixel, and the pixel value within the learning pixel region 411 is expressed, for example, as an RGB value, a grayscale value, a luminance value, or the like. When, for example, the four classes "good", "acceptable", "bad", and "not subject to determination" are adopted as the plurality of machining states, the pixel classification result is obtained, for example, by comparing the pixel value within the learning pixel region 411 with three predetermined thresholds (third threshold < second threshold < first threshold): when the pixel value is equal to or greater than the first threshold, the classification result "good" (0) is assigned; when the pixel value is less than the first threshold and equal to or greater than the second threshold, the classification result "acceptable" (1) is assigned; when the pixel value is less than the second threshold and equal to or greater than the third threshold, the classification result "bad" (2) is assigned; and when the pixel value is less than the third threshold, the classification result "not subject to determination" (3) is assigned.
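 This threshold rule maps directly to code; in the sketch below the concrete threshold values and the 8-bit grayscale input are assumptions.

    import numpy as np

    def pixel_classes(gray: np.ndarray, t1=200, t2=128, t3=64) -> np.ndarray:
        # Compare every pixel value against three thresholds (t3 < t2 < t1)
        # and assign 0 = good (v >= t1), 1 = acceptable (t2 <= v < t1),
        # 2 = bad (t3 <= v < t2), 3 = not subject to determination (v < t3).
        out = np.full(gray.shape, 3, dtype=np.uint8)
        out[gray >= t3] = 2
        out[gray >= t2] = 1
        out[gray >= t1] = 0
        return out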
 As in the first embodiment, the classification result as output data is the classification result for the machined surface 100 within the learning image region 410, expressed, for example, as one of "good", "acceptable", "bad", and "not subject to determination", as shown in FIG. 13.
 The learning data acquisition unit 60 can employ various methods to acquire the second classification learning data and the determination learning data. For example, as in the first embodiment, the learning data acquisition unit 60 acquires a captured image 40 of the determination object 10 taken by the imaging unit 4 after the machining process has been performed by the machining unit 3, generates a plurality of learning images 41 by dividing the captured image 40, and displays the plurality of learning images 41 on the display screen of the operator terminal 8.
 The operator visually inspects each learning image 41 on the display screen, enters the result of classifying the state of the machined surface 100 contained in each of the plurality of learning images 41 into one of the plurality of machining states (classes) (the classification result), and also enters, via the operator terminal 8, the result of determining the state of the machined surface 100 contained in the captured image 40 (the determination result). The learning data acquisition unit 60 then accepts the operator's input operations and acquires a plurality of sets of second classification learning data by associating the pixel classification results (input data) for the plurality of learning pixel regions 411 acquired from each learning image 41 with the classification result (output data) entered for that learning image 41. The learning data acquisition unit 60 also acquires the determination learning data by associating the classification results (input data) for the plurality of learning image regions 410 of each learning image 41 with the determination result (output data) entered for the captured image 40.
 The learning data acquisition unit 60 can therefore acquire a number of sets of second classification learning data equal to the number of divisions when one captured image 40 is divided into a plurality of learning images 41, and can acquire any desired number of sets of second classification learning data by repeating the above operation. In addition, the learning data acquisition unit 60 can acquire the determination learning data at the same time as it acquires the second classification learning data. The second classification learning data and the determination learning data can thus be collected easily.
 FIG. 14 is a schematic diagram showing an example of the inference model 20B applied to the second classification learning model 2B.
 Like the inference model 20A shown in FIG. 6, the inference model 20B employs a convolutional neural network as its specific machine learning technique. The inference model 20B is described below with a focus on the points that differ from the inference model 20A shown in FIG. 6.
 The input layer 21 has a number of neurons corresponding to the number of pixels of the learning image 41 serving as input data, and the pixel classification result for each learning pixel region 411 is input to the corresponding neuron.
 The output layer 23 outputs output data including the classification result on the basis of the feature vector output from the fully connected layer 22c. In addition to the classification result, the output data may include, for example, a score indicating the reliability of the classification result.
 The machine learning unit 62 inputs the second classification learning data into the inference model 20B and causes the inference model 20B to machine-learn the correlation between the pixel classification results for the plurality of learning pixel regions 411 and the classification results. Specifically, the machine learning unit 62 inputs the pixel classification results for the plurality of learning pixel regions 411, which constitute the second classification learning data, into the input layer 21 of the inference model 20B as input data.
 Using an error function that compares the classification result indicated by the output data output from the output layer 23 (the inference result) with the classification result constituting the second classification learning data (the teacher data), the machine learning unit 62 repeatedly adjusts the weights associated with the synapses by backpropagation so that the evaluation value of the error function becomes smaller. When the machine learning unit 62 judges that a predetermined learning end condition is satisfied, such as the above series of processes having been repeated a predetermined number of times or the evaluation value of the error function having become smaller than an allowable value, it ends the machine learning and stores the inference model 20B at that time (all the weights associated with the synapses) in the trained model storage unit 63 as the second classification learning model 2B.
(Machined surface determination device 7)
 FIG. 15 is a block diagram showing an example of the machined surface determination device 7 according to the second embodiment.
 As in the first embodiment, the machined surface determination device 7 comprises a classification result acquisition unit 70B, a determination result inference unit 71, a trained model storage unit 72, and an output processing unit 73.
 The classification result acquisition unit 70B performs classification result acquisition processing (see FIG. 16, described later) that acquires, for each of a plurality of small image regions 430 into which the determination image region 420 of the determination image 42 is divided, the classification result obtained when the state of the machined surface 100 is classified into one of the plurality of machining states, in units of small image regions 430.
 The classification result acquisition unit 70B comprises: an image acquisition unit 700 and a small image generation unit 701 similar to those of the first embodiment; a pixel classification result acquisition unit 703 that acquires, for the plurality of pixel regions constituting each of the plurality of small images 43, pixel classification results indicating the classification result for each pixel region based on the pixel values within that pixel region, in units of pixel regions; and a second classification result inference unit 702B that infers the classification results for the plurality of small image regions 430 by inputting the pixel classification results for the plurality of pixel regions into the second classification learning model 2B in units of small image regions 430.
 The determination result inference unit 71 performs determination result inference processing that infers the determination result for the determination image region 420 by inputting the classification results for the plurality of small image regions 430 acquired by the classification result acquisition unit 70B into the determination learning model 2.
 The trained model storage unit 72 is a database that stores the trained second classification learning model 2B used in the inference processing of the classification result acquisition unit 70B and the trained determination learning model 2 used in the inference processing of the determination result inference unit 71.
 FIG. 16 is a functional explanatory diagram showing an example of the classification result acquisition processing by the classification result acquisition unit 70B.
 As in the first embodiment, the determination image region 420 of the determination image 42 is the region imaged by the imaging unit 4, and the small image regions 430 of the small images 43 are obtained by dividing the determination image region 420 of the determination image 42 into a grid. The small image region 430 of a small image 43 corresponds to the learning image region 410 of a learning image 41, and the plurality of pixel regions 431 constituting a small image 43 correspond to the plurality of learning pixel regions 411 constituting a learning image 41.
 Here, the second classification learning model 2B is a model that has machine-learned the correlation between the pixel classification results for the plurality of learning pixel regions 411 corresponding to the plurality of pixel regions 431 and the classification result obtained when the state of the machined surface 100 within the plurality of learning pixel regions 411 was classified into one of the plurality of machining states on the basis of those pixel classification results. The second classification result inference unit 702B therefore functions, by inputting the pixel classification results for the plurality of pixel regions 431 constituting each of the plurality of small images 43 into the second classification learning model 2B in units of small image regions 430, as a classifier that classifies the state of the machined surface 100 within each small image region 430 into one of the plurality of machining states.
(Machined surface determination method)
 FIG. 17 is a flowchart showing an example of a machined surface determination method performed by the machined surface determination device 7 according to the second embodiment.
 まず、ステップS100において、分類結果取得部70Bの画像取得部700が、判定用画像42を取得する。 First, in step S100, the image acquisition unit 700 of the classification result acquisition unit 70B acquires the determination image 42.
 次に、ステップS110において、小画像生成部701は、判定用画像42に対する前処理として、判定用画像42の判定用画像領域420を複数の小画像領域430に分割することで判定用画像42から複数の小画像43を生成する。 Next, in step S110, the small image generating unit 701 divides the judgment image region 420 of the judgment image 42 into a plurality of small image regions 430 as preprocessing for the judgment image 42, thereby dividing the judgment image 42 into a plurality of small image regions 430. A plurality of small images 43 are generated.
 そして、ステップS112において、画素分類結果取得部703は、複数の小画像43の各々を構成する複数の画素領域431について、画素領域431内の画素値に基づいて画素領域431に対する画素分類結果を画素領域431単位で取得する。 Then, in step S112, the pixel classification result acquisition unit 703 acquires the pixel classification result for the pixel regions 431 based on the pixel values in the pixel regions 431 for the plurality of pixel regions 431 forming each of the plurality of small images 43. Acquired in units of 431 areas.
 Next, in steps S120 to S128, with the division count of the small images 43 denoted by K and a serial number n (1 ≤ n ≤ K) assigned to each of the small images 43, the second classification result inference unit 702B executes a loop by incrementing a variable i from 1 to K.
 Specifically, in step S120, the second classification result inference unit 702B initializes the variable i to 1. Next, in step S124, the second classification result inference unit 702B selects the i-th small image 43 and inputs the pixel classification results for the plurality of pixel areas 431 constituting that small image 43 into the input layer 21 of the second classification learning model 2B, thereby inferring the classification result output from the output layer 23 of the second classification learning model 2B.
 Next, in step S126, the variable i is incremented, and in step S128, it is determined whether the variable i exceeds the division count K. By repeating steps S124 and S126 until the variable i exceeds the division count K, the second classification result inference unit 702B acquires classification results for all of the plurality of small image areas 430.
 Next, in step S130, the determination result inference unit 71 inputs the classification results for the plurality of small image areas 430 into the input layer 21 of the determination learning model 2, thereby inferring the determination result output from the output layer 23 of the determination learning model 2 (for example, whether reworking is required, whether separate machining is required, whether finishing is required, and the machining range).
 Next, in step S140, the output processing unit 73 outputs information corresponding to the determination result inferred by the determination result inference unit 71 to an output destination (for example, the control device 5 or the worker terminal 8). The series of steps of the machined surface determination method shown in FIG. 17 then ends. In this method, step S100 corresponds to an image acquisition step, steps S100 to S128 to a classification result acquisition step, step S130 to a determination result inference step, and step S140 to an output processing step.
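 The flow of steps S100 to S140 can be summarized in a short sketch. The helper names, grid size, and feature encodings are assumptions; the patent specifies the order of processing, not an implementation (here the pixel classification results are fed as raw index grids, though any encoding such as the one-hot of the previous sketch would do).

```python
import numpy as np

def split_into_grid(image: np.ndarray, rows: int, cols: int) -> list:
    """S110: divide the determination image area 420 into rows*cols
    small image areas 430 (assumes the image divides evenly)."""
    h, w = image.shape[:2]
    return [image[r * h // rows:(r + 1) * h // rows,
                  c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def judge_machined_surface(image, pixel_classifier, cls_model_2b,
                           judgment_model_2, rows=8, cols=8):
    """Hedged sketch of FIG. 17 (steps S100-S140); names illustrative."""
    # S100: the determination image 42 arrives here as `image`.
    # S110: generate the K = rows*cols small images 43.
    small_images = split_into_grid(image, rows, cols)
    # S112: pixel classification results per pixel area 431
    # (pixel_classifier is an assumed callable returning an index grid).
    pixel_results = [pixel_classifier(im) for im in small_images]
    # S120-S128: loop over the K small images; one classification
    # result per small image area 430 from the second model 2B.
    classifications = [int(cls_model_2b.predict(p.reshape(1, -1))[0])
                       for p in pixel_results]
    # S130: input all K small-area results to the determination model 2.
    judgment = judgment_model_2.predict(
        np.asarray(classifications, dtype=float).reshape(1, -1))[0]
    # S140: the caller forwards `judgment` (e.g. reworking required,
    # machining range) to the control device 5 or worker terminal 8.
    return judgment
```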
 As described above, according to the machined surface determination device 7 and the machined surface determination method of the present embodiment, the classification result acquisition unit 70B generates a plurality of small images 43 from the determination image 42 by dividing the determination image area 420 into small image areas 430, and infers classification results for the plurality of small image areas 430 by inputting the pixel classification results for the plurality of pixel areas 431 constituting each of the small images 43 into the second classification learning model 2B. The determination result inference unit 71 then infers the state of the machined surface 100 as the determination result by inputting the classification results for the plurality of small image areas 430 into the determination learning model 2.
 Because the classification results of the second classification learning model 2B are inferred in units of small image areas 430, with each of the plurality of small images 43 obtained by dividing the determination image 42 input separately, collecting the learning data required for machine learning is easier and the accuracy of the second classification learning model 2B can be improved, compared with inputting a single determination image 42 into the model. The state of the machined surface 100 contained in the determination image 42 is then determined by inputting the classification results of the second classification learning model 2B for the plurality of small image areas 430 into the determination learning model 2. The state of the machined surface 100 of the determination target object 10 can therefore be determined automatically.
(Other embodiments)
 The present invention is not limited to the embodiments described above, and various modifications can be made without departing from the gist of the present invention; all such modifications are included in the technical idea of the present invention.
 For example, in the above embodiments, the determination image area 420 is set so as to contain, as the machined surface 100 to be determined, part of one blade of the impeller of the determination target object 10. Alternatively, the determination image area 420 may be widened to cover the entire impeller and set so as to contain the plurality of blades of the impeller as a plurality of machined surfaces 100 to be determined. That is, when the determination target object 10 has a plurality of machined surfaces 100 each machined through a different machining step by the machining unit 3, the determination image area 420 may be set so as to contain the plurality of machined surfaces 100.
 In this case, the classification result acquisition unit 70B acquires a determination image 42 in which the plurality of machined surfaces 100 are captured and sets a determination image area 420 for each machined surface so that the determination image 42 is separated at the boundaries between the machined surfaces 100. The boundaries of the machined surfaces 100 may be set in advance or may be set by image processing applied to the determination image 42. The classification result acquisition unit 70B then acquires classification results in units of small image areas 430 for the plurality of small image areas 430 obtained by dividing the determination image area 420 of each machined surface. Next, the determination result inference unit 71 infers a determination result for the determination image 42 of each machined surface by inputting the classification results for the plurality of small image areas 430 into the determination learning model 2 for each machined surface.
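 Under the same assumptions as the sketch above, handling a plurality of machined surfaces 100 only adds an outer loop over per-surface regions; the (row_slice, col_slice) representation of the surface boundaries is a further assumption.

```python
def judge_multiple_surfaces(image, surface_regions, pixel_classifier,
                            cls_model_2b, judgment_model_2):
    """surface_regions: assumed list of (row_slice, col_slice) pairs,
    one determination image area 420 per machined surface 100, obtained
    either from a prior setup step or from image processing."""
    return {i: judge_machined_surface(image[rs, cs], pixel_classifier,
                                      cls_model_2b, judgment_model_2)
            for i, (rs, cs) in enumerate(surface_regions)}
```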
 In the above embodiments, a CNN (see FIGS. 6 and 7) was adopted as the specific machine learning technique used by the machine learning unit 62, but the machine learning unit 62 may adopt any other machine learning technique. Examples of other techniques include tree-based methods such as decision trees and regression trees; ensemble learning such as bagging and boosting; neural network methods (including deep learning) such as recurrent neural networks and convolutional neural networks; clustering methods such as hierarchical clustering, non-hierarchical clustering, the k-nearest neighbor method, and the k-means method; multivariate analysis such as principal component analysis, factor analysis, and logistic regression; and support vector machines.
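 As one hedged illustration of swapping in an alternative technique, a support vector machine could stand in for the classification model; scikit-learn and the dummy training arrays below are assumptions, not part of the patent.

```python
import numpy as np
from sklearn.svm import SVC

# Dummy stand-in data; in practice these would be the pixel
# classification results and small-area labels of the learning data.
rng = np.random.default_rng(0)
train_features = rng.random((100, 64))
train_labels = rng.integers(0, 3, size=100)

cls_model_2b = SVC().fit(train_features, train_labels)  # drop-in for model 2B
```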
(Machined surface determination program)
 The present invention can be provided in the form of a program (machined surface determination program) 230 that causes the computer 200 shown in FIG. 2 to function as each unit of the machined surface determination device 7 according to the above embodiments. The present invention can also be provided in the form of a program (machined surface determination program) 230 that causes the computer 200 shown in FIG. 2 to execute each step of the machined surface determination method according to the above embodiments.
(Inference device, inference method, and inference program)
 The present invention can be provided not only in the form of the machined surface determination device 7 (or the machined surface determination method or program) according to the above embodiments, but also in the form of an inference device (or an inference method or inference program) used to determine the state of the machined surface 100. In that case, the inference device (inference method or inference program) includes a memory and a processor, and the processor executes a series of processes: a classification result acquisition process (classification result acquisition step) of acquiring, for a plurality of small image areas 430 obtained by dividing the determination image area 420 of the determination image 42, a classification result in units of small image areas 430 when the state of the machined surface 100 is classified into one of a plurality of machining states; and a determination result inference process (determination result inference step) of inferring, once the classification results for the plurality of small image areas 430 have been acquired in the classification result acquisition process, the state of the machined surface 100 contained in the determination image 42 as the determination result for the determination image 42.
 Providing the invention in the form of an inference device (inference method or inference program) makes it easier to apply to various devices than implementing the full machined surface determination device 7. Those skilled in the art will naturally understand that, when the inference device (inference method or inference program) infers the state of the machined surface 100, it may apply the inference technique performed by the determination result inference unit 71 of the machined surface determination device 7, using the trained determination learning model 2 generated by the machine learning device 6 according to the above embodiments.
 The present invention is applicable to a machined surface determination device, a machined surface determination program, a machined surface determination method, a machining system, an inference device, and a machine learning device.
1…machining system, 2…determination learning model,
2A…first classification learning model, 2B…second classification learning model,
3…machining unit, 4…imaging unit, 5…control device, 6…machine learning device,
7…machined surface determination device, 8…worker terminal, 10…determination target object,
20, 20A, 20B…inference model, 21…input layer, 22…intermediate layer,
22a…convolution layer, 22b…pooling layer, 22c…fully connected layer, 23…output layer,
40…captured image, 41…learning image, 42…determination image, 43…small image,
50…control panel, 51…operation display panel,
60…learning data acquisition unit, 61…learning data storage unit,
62…machine learning unit, 63…trained model storage unit,
70A, 70B…classification result acquisition unit, 71…determination result inference unit,
72…trained model storage unit, 73…output processing unit,
100…machined surface, 110…background,
200…computer,
400…captured image area, 410…learning image area, 411…learning pixel area,
420…determination image area, 430…small image area, 431…pixel area,
700…image acquisition unit, 701…small image generation unit,
702A…first classification result inference unit, 702B…second classification result inference unit,
703…pixel classification result acquisition unit

Claims (13)

  1.  A machined surface determination device that determines a state of a machined surface of a determination target object based on a determination image in which the machined surface is captured, the device comprising:
     a classification result acquisition unit that acquires, for a plurality of small image areas obtained by dividing a determination image area of the determination image, a classification result in units of the small image areas when the state of the machined surface is classified into one of a plurality of machining states; and
     a determination result inference unit that infers a determination result for the determination image by inputting the classification results for the plurality of small image areas into a determination learning model trained by machine learning on a correlation between classification results for a plurality of learning image areas corresponding to the plurality of small image areas and a determination result obtained when the state of the machined surface within the plurality of learning image areas is determined based on those classification results.
  2.  The machined surface determination device according to claim 1, wherein the classification result acquisition unit comprises:
     an image acquisition unit that acquires the determination image having the determination image area;
     a small image generation unit that generates a plurality of small images from the determination image by dividing the determination image area into the plurality of small image areas; and
     a first classification result inference unit that infers the classification results for the plurality of small image areas by inputting the plurality of small images, in units of the small image areas, into a first classification learning model trained by machine learning on a correlation between a learning image having the learning image area and the classification result obtained when the state of the machined surface contained in the learning image is classified into one of the plurality of machining states.
  3.  The machined surface determination device according to claim 1, wherein the classification result acquisition unit comprises:
     an image acquisition unit that acquires the determination image having the determination image area;
     a small image generation unit that generates a plurality of small images from the determination image by dividing the determination image area into the plurality of small image areas;
     a pixel classification result acquisition unit that acquires, for a plurality of pixel areas constituting each of the plurality of small images and in units of the pixel areas, a pixel classification result indicating the classification result for the pixel area based on pixel values within the pixel area; and
     a second classification result inference unit that infers the classification results for the plurality of small image areas by inputting the pixel classification results for the plurality of pixel areas, in units of the small image areas, into a second classification learning model trained by machine learning on a correlation between pixel classification results for a plurality of learning pixel areas corresponding to the plurality of pixel areas and the classification result obtained when the state of the machined surface within the plurality of learning pixel areas is classified into one of the plurality of machining states based on those pixel classification results.
  4.  The machined surface determination device according to any one of claims 1 to 3, wherein
     the classification result acquisition unit acquires the classification results in units of the small image areas for a plurality of small image areas obtained by dividing the determination image area of each machined surface in a determination image in which a plurality of the machined surfaces are captured, and
     the determination result inference unit infers the determination result for the determination image of each machined surface by inputting the classification results for the plurality of small image areas into the determination learning model for each machined surface.
  5.  The machined surface determination device according to any one of claims 1 to 4, wherein, for each of the plurality of small image areas, the classification result acquisition unit acquires, in units of the small image areas, the classification result obtained when the state of the machined surface within the small image area is classified into one of a plurality of machining states including at least good and defective, or when the small image area is classified as excluded from determination because an edge of the machined surface or a background other than the machined surface is present within the small image area.
  6.  The machined surface determination device according to any one of claims 1 to 5, wherein the determination result inference unit infers, as the determination result, at least one of:
     whether reworking, in which the same machining step used to machine the machined surface is performed again, is required;
     whether separate machining, in which a machining step different from that used to machine the machined surface is performed, is required;
     whether finishing, in which a worker performs finishing work on the machined surface, is required; and
     a machining range of the machined surface to be subjected to the reworking, the separate machining, or the finishing.
  7.  The machined surface determination device according to any one of claims 1 to 6, wherein the machined surface is a surface of the determination target object obtained when the determination target object is machined by a polishing step, a grinding step, a cutting step, or a casting step.
  8.  The machined surface determination device according to any one of claims 1 to 7, wherein the determination target object is a fluid machine or a fluid component constituting the fluid machine.
  9.  A machined surface determination program that causes a computer to function as the machined surface determination device according to any one of claims 1 to 8.
  10.  A machined surface determination method for determining a state of a machined surface of a determination target object based on a determination image in which the machined surface is captured, the method comprising:
     a classification result acquisition step of acquiring, for a plurality of small image areas obtained by dividing a determination image area of the determination image, a classification result in units of the small image areas when the state of the machined surface is classified into one of a plurality of machining states; and
     a determination result inference step of inferring a determination result for the determination image by inputting the classification results for the plurality of small image areas into a determination learning model trained by machine learning on a correlation between classification results for a plurality of learning image areas corresponding to the plurality of small image areas and a determination result obtained when the state of the machined surface within the plurality of learning image areas is determined based on those classification results.
  11.  A machining system comprising:
     the machined surface determination device according to any one of claims 1 to 8;
     a machining unit that machines the determination target object;
     an imaging unit that images the machined surface of the determination target object; and
     a control unit that controls the machined surface determination device, the machining unit, and the imaging unit.
  12.  An inference device used to determine a state of a machined surface of a determination target object based on a determination image in which the machined surface is captured, wherein
     the inference device comprises a memory and a processor, and
     the processor executes:
     a classification result acquisition process of acquiring, for a plurality of small image areas obtained by dividing a determination image area of the determination image, a classification result in units of the small image areas when the state of the machined surface is classified into one of a plurality of machining states; and
     a determination result inference process of inferring, when the classification results for the plurality of small image areas have been acquired in the classification result acquisition process, the state of the machined surface contained in the determination image as a determination result for the determination image.
  13.  A machine learning device that generates a determination learning model used in a machined surface determination device that determines a state of a machined surface of a determination target object based on a determination image in which the machined surface is captured, the machine learning device comprising:
     a learning data storage unit that stores a plurality of sets of learning data, each set including, as input data, a classification result obtained when the state of the machined surface is classified into one of a plurality of machining states for each of a plurality of learning image areas corresponding to a plurality of small image areas obtained by dividing a determination image area of the determination image, and, as output data, a determination result obtained when the state of the machined surface within the plurality of learning image areas is determined based on those classification results;
     a machine learning unit that learns, upon input of the plurality of sets of learning data, the determination learning model that infers a correlation between the input data and the output data; and
     a trained model storage unit that stores the determination learning model learned by the machine learning unit.

PCT/JP2021/038549 2021-01-25 2021-10-19 Machining surface determination device, machining surface determination program, machining surface determination method, machining system, inference device, and machine learning device WO2022158060A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202180091239.2A CN116724224A (en) 2021-01-25 2021-10-19 Machining surface determination device, machining surface determination program, machining surface determination method, machining system, inference device, and machine learning device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021009520A JP2022113345A (en) 2021-01-25 2021-01-25 Machining surface determination device, machining surface determination program, machining surface determination method, machining system, inference device, and machine learning device
JP2021-009520 2021-01-25

Publications (1)

Publication Number Publication Date
WO2022158060A1 true WO2022158060A1 (en) 2022-07-28

Family

ID=82548689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/038549 WO2022158060A1 (en) 2021-01-25 2021-10-19 Machining surface determination device, machining surface determination program, machining surface determination method, machining system, inference device, and machine learning device

Country Status (3)

Country Link
JP (1) JP2022113345A (en)
CN (1) CN116724224A (en)
WO (1) WO2022158060A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7460857B1 (en) 2023-07-25 2024-04-02 ファナック株式会社 Abnormal area identification device and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010139317A (en) * 2008-12-10 2010-06-24 Mitsubishi Materials Corp Method and device for inspecting defect on surface of shaft-like tool
JP2016017838A (en) * 2014-07-08 2016-02-01 アズビル株式会社 Image inspection device and image inspection method
US20190362480A1 (en) * 2018-05-22 2019-11-28 Midea Group Co., Ltd. Methods and system for improved quality inspection
WO2020071234A1 (en) * 2018-10-05 2020-04-09 日本電産株式会社 Image processing device, image processing method, appearance inspection system, and computer program

Also Published As

Publication number Publication date
CN116724224A (en) 2023-09-08
JP2022113345A (en) 2022-08-04

Similar Documents

Publication Publication Date Title
US11084225B2 (en) Systems, methods, and media for artificial intelligence process control in additive manufacturing
Li et al. Geometrical defect detection for additive manufacturing with machine learning models
JP6921241B2 (en) Display screen quality inspection methods, equipment, electronic devices and storage media
JP7255919B2 (en) Systems, methods and media for artificial intelligence process control in additive manufacturing
JP7408653B2 (en) Automatic analysis of unsteady mechanical performance
JP2018142097A (en) Information processing device, information processing method, and program
JP2019162712A (en) Control device, machine learning device and system
US11762679B2 (en) Information processing device, information processing method, and non-transitory computer-readable storage medium
JP6993483B2 (en) How to detect abnormalities in robot devices
CN107944563B (en) Machine learning device and machine learning method
JP7088871B2 (en) Inspection equipment, inspection system, and user interface
US20220366244A1 (en) Modeling Human Behavior in Work Environment Using Neural Networks
WO2022158060A1 (en) Machining surface determination device, machining surface determination program, machining surface determination method, machining system, inference device, and machine learning device
KR20220117194A (en) Inference computing device, model training device, and inference computing system
Jyeniskhan et al. Integrating machine learning model and digital twin system for additive manufacturing
JP2021135977A (en) Apparatus and method for processing information
JP7450517B2 (en) Machining surface determination device, machining surface determination program, machining surface determination method, and machining system
JP2021026599A (en) Image processing system
TWI801820B (en) Systems and methods for manufacturing processes
KR102233109B1 (en) Mechanical diagnostic system based on image learning and method for mechanical diagnosis using the same
JP2023034745A (en) Rock fall prediction device, machine learning device, rock fall prediction method, and machine learning method
WO2022138545A1 (en) Machine learning device and machine learning method
TWI768092B (en) Inspection-guided critical site selection for critical dimension measurement
US20230342908A1 (en) Distortion prediction for additive manufacturing using image analysis
WO2021124417A1 (en) Foreground extraction device, foreground extraction method, and recording medium

Legal Events

Code  Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21921163; Country of ref document: EP; Kind code of ref document: A1)
WWE   WIPO information: entry into national phase (Ref document number: 202180091239.2; Country of ref document: CN)
NENP  Non-entry into the national phase (Ref country code: DE)
122   EP: PCT application non-entry in European phase (Ref document number: 21921163; Country of ref document: EP; Kind code of ref document: A1)