WO2019159425A1 - Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium - Google Patents
- Publication number
- WO2019159425A1 (PCT/JP2018/037252)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- evaluation
- image
- region
- captured image
- unit
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/89—Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
- G01N21/892—Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles characterised by the flaw, defect or object feature examined
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
Definitions
- the present disclosure relates to an evaluation system, an evaluation apparatus, an evaluation method, an evaluation program, and a recording medium.
- Patent Document 1 discloses an inspection apparatus that inputs an image obtained by imaging an inspection object to a neural network and determines the presence and type of a defect.
- Patent Document 2 discloses a measuring apparatus that irradiates light to a steel material and measures a scale from reflected light intensity at a specific wavelength.
- Patent Document 1: JP 2012-26982 A; Patent Document 2: JP H08-254502 A
- the imaging condition of the captured image may be different for each captured image.
- For example, the image may be captured from a position far from the object or from a position near it, and at an arbitrary magnification.
- the degree of rust is evaluated by how far rust has spread over the actual object. For this reason, if the captured image of the object is used as it is, sufficient evaluation accuracy may not be obtained.
- the evaluation system is a system that performs an evaluation on the degree of rust of an evaluation object using a captured image of the evaluation object.
- the evaluation system includes an image acquisition unit that acquires a captured image, a correction unit that generates an evaluation image by correcting the captured image, an evaluation unit that performs evaluation based on the evaluation image, and an output unit that outputs an evaluation result obtained by the evaluation unit.
- the correction unit extracts an evaluation area that is an image having a predetermined area on the surface of the evaluation object from the captured image, and generates an evaluation image based on the evaluation area.
- An evaluation apparatus is an apparatus that performs an evaluation on the degree of rust of an evaluation object using a captured image of the evaluation object.
- the evaluation device includes an image acquisition unit that acquires a captured image, a correction unit that generates an evaluation image by correcting the captured image, an evaluation unit that performs evaluation based on the evaluation image, and an output unit that outputs an evaluation result obtained by the evaluation unit.
- the correction unit extracts an evaluation area that is an image having a predetermined area on the surface of the evaluation object from the captured image, and generates an evaluation image based on the evaluation area.
- the evaluation method is a method for performing an evaluation on the degree of rust of the evaluation object using a captured image of the evaluation object.
- the evaluation method includes a step of acquiring a captured image, a step of generating an evaluation image by correcting the captured image, a step of performing an evaluation based on the evaluation image, and a step of outputting an evaluation result of the evaluation.
- In the step of generating the evaluation image, an evaluation region that is an image in a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
- An evaluation program causes a computer to execute a step of acquiring a captured image of an evaluation object, a step of generating an evaluation image by correcting the captured image, a step of performing an evaluation on the degree of rust of the evaluation object based on the evaluation image, and a step of outputting an evaluation result.
- In the step of generating the evaluation image, an evaluation region that is an image in a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
- A recording medium is a computer-readable recording medium recording an evaluation program that causes a computer to execute a step of acquiring a captured image of an evaluation object, a step of generating an evaluation image by correcting the captured image, a step of performing an evaluation on the degree of rust of the evaluation object based on the evaluation image, and a step of outputting an evaluation result of the evaluation.
- In the step of generating the evaluation image, an evaluation region that is an image in a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
- an evaluation area is extracted from a captured image of an evaluation object, and an evaluation image is generated based on the evaluation area. Then, evaluation regarding the degree of rust is performed based on the evaluation image, and an evaluation result is output.
- the evaluation area is an image of a range having a predetermined area on the surface of the evaluation object. Therefore, regardless of the conditions under which the captured image was captured (the distance between the evaluation object and the imaging device, the magnification, etc.), the degree of rust is assessed over a range of the same area on the surface of the evaluation object. As a result, it is possible to improve the evaluation accuracy regarding the degree of rust.
- the captured image may include a marker area that is an image of a marker attached to the evaluation object.
- the correction unit may extract the evaluation area based on the marker area.
- the range in the captured image is determined based on the size of the marker region, and the determined range can be extracted as the evaluation region. For this reason, an evaluation area can be extracted without using other information. As a result, the evaluation system can be reduced in size.
- the marker may have a shape that can identify the orientation of the marker.
- the orientation in the captured image can be identified by the marker area.
- the marker may have a non-directional (orientation-independent) shape. In this case, since the marker shape is simple, the marker can be created easily.
- the marker may have an opening.
- the area of the opening may be equal to the predetermined area described above.
- the correction unit may extract an image of the surface of the evaluation object exposed through the opening as an evaluation region. In this case, extraction of the evaluation area can be simplified.
- the marker may be surrounded by a frame.
- a gap may be provided between the marker and the frame, and the color of the gap may be different from the color of the outer edge portion of the marker.
- a gap having a color different from the color of the outer edge portion of the marker exists between the frame and the marker. For this reason, even if the color around the marker area is similar to the color of the outer edge portion of the marker area, the outer edge of the marker area can be detected, so that the detection accuracy of the marker area is improved. As a result, it is possible to further improve the evaluation accuracy regarding the degree of rust.
- the frame may have a shape in which the frame line is interrupted.
- In edge detection or the like, it is possible to reduce the possibility that the region surrounded by the frame is detected as the marker region, so the detection accuracy of the marker region is improved. As a result, it is possible to further improve the evaluation accuracy regarding the degree of rust.
- the evaluation system may further include a distance acquisition unit that acquires the distance between the imaging device that captured the captured image and the evaluation object.
- the correction unit may extract the evaluation region based on the distance. Since a range having a predetermined area on the surface of the evaluation object can be specified based on the distance between the imaging device and the evaluation object, the evaluation region can be extracted more accurately.
- the correction unit may correct the color of the evaluation area based on the color of the reference area included in the captured image.
- the reference area may be an image of a reference body with a specific color.
- the color tone of the captured image may change depending on the color tone of the light source used for photographing.
- the brightness of the captured image may vary depending on the amount of light irradiation.
- the influence of light can be reduced by correcting the color of the evaluation region so that the color of the reference region becomes a specific color (for example, the original color). Thereby, it becomes possible to further improve the evaluation accuracy regarding the degree of rust.
- the correction unit may remove specular reflection from the evaluation area.
- When specular reflection occurs, the captured image may be overexposed (whiteout), and color information is lost in the overexposed areas. Removing the specular reflection (whiteout) allows the color information to be restored. Thereby, it becomes possible to further improve the evaluation accuracy regarding the degree of rust.
- the evaluation unit may perform evaluation using a neural network. By learning the neural network, it is possible to further improve the evaluation accuracy regarding the degree of rust.
- the correction unit may adjust the size of the evaluation image to the size of the reference image used for learning of the neural network by expanding or reducing the evaluation region. In this case, the evaluation by the neural network can be appropriately performed.
- the evaluation system may further include an extraction unit that extracts a candidate area where rust is generated from the captured image.
- the correcting unit may generate the evaluation image by correcting the candidate area. In this case, since it is not necessary to evaluate the entire surface of the evaluation object, it is possible to improve the evaluation efficiency related to the degree of rust.
- The evaluation may include evaluation of the degree of rust. In this case, it becomes possible to improve the evaluation accuracy of the rust degree.
- Evaluation may include evaluation of the degree of rust removal. In this case, it becomes possible to improve the evaluation accuracy of the degree of rust removal.
- the evaluation accuracy regarding the degree of rust can be improved.
- FIG. 1 is a configuration diagram schematically showing an evaluation system including an evaluation apparatus according to the first embodiment.
- FIG. 2 is a hardware configuration diagram of the user terminal shown in FIG.
- FIG. 3 is a hardware configuration diagram of the evaluation apparatus shown in FIG.
- FIG. 4 is a sequence diagram showing an evaluation method performed by the evaluation system shown in FIG. 1.
- FIG. 5 is a flowchart showing in detail the correction process shown in FIG. 4. FIGS. 6A to 6F are diagrams showing examples of markers.
- FIG. 7 is a diagram for explaining distortion correction.
- FIG. 8 is a diagram for explaining extraction of an evaluation area using a marker.
- FIG. 9A is a diagram for explaining a method of calculating the actual size of the imaged range, and FIG. 9B is a diagram showing the relationship between the focal length and the 35 mm equivalent focal length.
- 10A and 10B are diagrams for explaining the color correction.
- FIG. 11 is a diagram illustrating an example of a neural network.
- FIG. 12 is a diagram illustrating an example of the evaluation result.
- FIG. 13 is a diagram illustrating a display example of the evaluation result.
- FIG. 14 is a diagram illustrating an example of the evaluation result of the degree of rust removal.
- FIGS. 15A and 15B are diagrams showing examples of correction of an evaluation result.
- FIG. 16 is a configuration diagram schematically showing an evaluation system including an evaluation apparatus according to the second embodiment.
- FIG. 17 is a sequence diagram showing an evaluation method performed by the evaluation system shown in FIG. 16.
- FIG. 18 is a configuration diagram schematically showing an evaluation system including an evaluation apparatus according to the third embodiment.
- FIG. 19 is a flowchart showing an evaluation method performed by the evaluation system shown in FIG. 18.
- FIG. 20 is a configuration diagram schematically illustrating an evaluation system according to a modification.
- FIG. 21A is a diagram for explaining detection of a candidate region, and FIG. 21B is a diagram for explaining extraction of a candidate region.
- FIGS. 22A to 22D are diagrams showing modifications of the marker.
- FIG. 23 is a diagram for describing a modification of the evaluation region extraction method.
- FIG. 24 is a diagram for explaining a modification of the evaluation region extraction method.
- FIG. 1 is a configuration diagram schematically showing an evaluation system including an evaluation apparatus according to the first embodiment.
- An evaluation system 1 shown in FIG. 1 is a system that performs an evaluation on the degree of rust of an evaluation object.
- the evaluation regarding the degree of rust includes the degree of rust and the degree of rust removal. That is, the evaluation system 1 evaluates the degree of rust and the degree of rust removal of the evaluation object.
- Examples of the evaluation object include steel materials.
- the rust degree is an index indicating the degree of rust generated on the surface of the evaluation object before the rust removal treatment.
- For example, when a steel material is heat-treated at a high temperature and then cooled, a thin black oxide film (mill scale) forms on its surface.
- the mill scale corresponds to black rust before red rust occurs in the steel material.
- the degree of rust includes the degree (progression) of mill scale, red rust, and pitting.
- the degree of rust removal is an index indicating the degree of finishing of the rust removal treatment.
- the degree of rust removal is an index indicating the degree of deposits such as rust remaining on the surface of the evaluation object after the rust removal treatment.
- Examples of the rust removal treatment include blast treatment, water jet treatment, and flame cleaning.
- an electric tool, a file, a brush, a spatula, or the like may be used.
- the evaluation system 1 includes one or a plurality of user terminals 10 and an evaluation device 20.
- the user terminal 10 and the evaluation device 20 are connected to be communicable with each other via a network NW.
- the network NW may be wired or wireless. Examples of the network NW include the Internet, a mobile communication network, and a WAN (Wide Area Network).
- the user terminal 10 is a terminal device used by a user.
- the user terminal 10 captures the evaluation object to generate a captured image of the evaluation object, and transmits the captured image to the evaluation device 20.
- the user terminal 10 receives the evaluation result from the evaluation device 20 and outputs the evaluation result to the user.
- A portable terminal incorporating an imaging device, or a device capable of communicating with an imaging device, may be used as the user terminal 10. In the present embodiment, the user terminal 10 is described as a portable terminal incorporating an imaging device. Examples of portable terminals include smartphones, tablet terminals, and notebook PCs (Personal Computers).
- FIG. 2 is a hardware configuration diagram of the user terminal shown in FIG.
- the user terminal 10 is physically configured as a computer including hardware such as one or more processors 101, a main storage device 102, an auxiliary storage device 103, a communication device 104, an input device 105, an output device 106, an imaging device 107, and a measurement device 108.
- As the processor 101, a processor having a high processing speed is used. Examples of the processor 101 include a GPU (Graphics Processing Unit) and a CPU (Central Processing Unit).
- the main storage device 102 includes a RAM (Random Access Memory) and a ROM (Read Only Memory). Examples of the auxiliary storage device 103 include a semiconductor memory and a hard disk device.
- the communication device 104 is a device that transmits and receives data to and from other devices via the network NW.
- An example of the communication device 104 is a network card. Encryption may be used for data transmission / reception via the network NW. That is, the communication device 104 may encrypt the data and transmit the encrypted data to another device. The communication device 104 may receive the encrypted data from another device and decrypt the encrypted data.
- For the encryption, a common-key cryptosystem such as Triple DES (Data Encryption Standard) or Rijndael, or a public-key cryptosystem such as RSA or ElGamal may be used.
- the input device 105 is a device used when the user operates the user terminal 10. Examples of the input device 105 include a touch panel, a keyboard, and a mouse.
- the output device 106 is a device that outputs various types of information to the user of the user terminal 10. Examples of the output device 106 include a display, a speaker, and a vibrator.
- the imaging device 107 is a device for capturing images (photographing).
- the imaging device 107 is, for example, a camera module.
- the imaging device 107 includes optical system components such as a lens Cl and an imaging element Cs (see FIG. 9A), control system circuits that drive and control these components, and a signal processing circuit unit that converts the electrical signal representing the captured image generated by the imaging element Cs into an image signal, which is a digital signal.
- the measuring device 108 is a device that measures a distance. Examples of the measuring device 108 include a millimeter wave radar, an ultrasonic sensor, a LIDAR (Light Detection and Ranging), a motion stereo, and a stereo camera.
- Each function of the user terminal 10 illustrated in FIG. 1 is realized by loading one or more predetermined computer programs into hardware such as the main storage device 102 under the control of the one or more processors 101, operating that hardware, and reading and writing data in the main storage device 102 and the auxiliary storage device 103.
- the user terminal 10 includes an image acquisition unit 11, a distance acquisition unit 12, a correction unit 13, a transmission unit 14, a reception unit 15, an output unit 16, and a correction information acquisition unit 17.
- the image acquisition unit 11 is a part for acquiring a captured image including an evaluation object.
- the image acquisition unit 11 is realized by the imaging device 107, for example.
- the captured image may be a still image or a moving image.
- the captured image is acquired as, for example, image data indicating the value of each pixel, but is referred to as a captured image for convenience of explanation.
- Alternatively, the image acquisition unit 11 may acquire a captured image by receiving, from another device (for example, a terminal having a camera function), a captured image captured by that device.
- In this case, the portion that performs the reception processing of the captured image (such as the communication device 104 in FIG. 2) functions as the image acquisition unit 11.
- the image acquisition unit 11 outputs the captured image to the correction unit 13.
- the distance acquisition unit 12 is a part for acquiring the distance D1 between the imaging device 107 (specifically, the lens Cl) that generated the captured image and the evaluation object.
- the distance acquisition unit 12 is realized by the measurement device 108, for example.
- When the imaging device 107 is a depth camera, the distance D1 is obtained together with the captured image, so the distance acquisition unit 12 may be realized by the imaging device 107.
- Alternatively, the distance acquisition unit 12 may acquire the distance D1 by receiving, from another device (for example, a terminal having a measurement function), the distance D1 measured by that device.
- the distance acquisition unit 12 when the distance acquisition unit 12 receives the distance D1 from another device via the network NW, the part that performs the reception process of the distance D1 (such as the communication device 104 in FIG. 2) functions as the distance acquisition unit 12.
- the user may measure the distance D1 using a ruler or a tape measure, and input the distance D1 to the user terminal 10.
- a portion that receives an input of the distance D1 (such as the input device 105 in FIG. 2) functions as the distance acquisition unit 12.
- the distance acquisition unit 12 outputs the distance D1 to the correction unit 13.
- the correction unit 13 is a part for generating an evaluation image by correcting the captured image.
- the correction unit 13 extracts an evaluation area from the captured image, and generates an evaluation image based on the evaluation area.
- the evaluation region is an image (image region) in a range having a predetermined area on the surface of the evaluation object.
- the correction unit 13 performs size correction, distortion correction, color correction, specular reflection removal, noise removal, and blur correction on the captured image. Details of each correction process will be described later.
- the correction unit 13 outputs the evaluation image to the transmission unit 14.
- the transmission unit 14 is a part for transmitting the evaluation image to the evaluation device 20.
- the transmission unit 14 transmits the evaluation image to the evaluation device 20 via the network NW.
- the transmission unit 14 further transmits the correction information acquired by the correction information acquisition unit 17 to the evaluation device 20.
- the transmission unit 14 is realized by the communication device 104, for example.
- the receiving unit 15 is a part for receiving an evaluation result from the evaluation device 20.
- the receiving unit 15 receives the evaluation result from the evaluation device 20 via the network NW.
- the receiving unit 15 is realized by the communication device 104, for example.
- the output unit 16 is a part for outputting the evaluation result.
- the output unit 16 is realized by the output device 106, for example.
- the output unit 16 transmits the evaluation result to the other device via the network NW, for example.
- a part (such as the communication device 104 in FIG. 2) that performs the evaluation result transmission process functions as the output unit 16.
- the correction information acquisition unit 17 is a part for acquiring correction information of the evaluation result. For example, the user may correct the evaluation result using the input device 105 after confirming the evaluation result output by the output unit 16. At this time, the correction information acquisition unit 17 acquires the corrected evaluation result as correction information. The correction information acquisition unit 17 outputs the correction information to the transmission unit 14.
- the evaluation apparatus 20 is an apparatus that performs an evaluation on the degree of rust of the evaluation object using a captured image (evaluation image) of the evaluation object.
- the evaluation device 20 is configured by an information processing device (server device) such as a computer, for example.
- FIG. 3 is a hardware configuration diagram of the evaluation apparatus shown in FIG.
- the evaluation device 20 may be physically configured as a computer including hardware such as one or more processors 201, a main storage device 202, an auxiliary storage device 203, and a communication device 204.
- As the processor 201, a processor having a high processing speed is used. Examples of the processor 201 include a GPU and a CPU.
- the main storage device 202 includes a RAM, a ROM, and the like. Examples of the auxiliary storage device 203 include a semiconductor memory and a hard disk device.
- the communication device 204 is a device that transmits and receives data to and from other devices via the network NW.
- An example of the communication device 204 is a network card. Encryption may be used for data transmission / reception via the network NW. In other words, the communication device 204 may encrypt the data and transmit the encrypted data to another device. The communication device 204 may receive the encrypted data from another device and decrypt the encrypted data.
- a common key cryptosystem such as Triple DES and Rijndael
- a public key cryptosystem such as RSA and ElGamal
- the communication device 204 may perform user authentication for determining whether the user of the user terminal 10 is an authorized user.
- the evaluation device 20 may evaluate the degree of rust when the user is an authorized user, and need not evaluate the degree of rust when the user is not an authorized user.
- For user authentication, for example, a user ID (identifier) and a password registered in advance are used.
- a one-time pad (one-time password) may be used for user authentication.
- Each function of the evaluation device 20 shown in FIG. 1 is realized by loading one or more predetermined computer programs into hardware such as the main storage device 202 under the control of the one or more processors 201, operating that hardware, and reading and writing data in the main storage device 202 and the auxiliary storage device 203.
- the evaluation device 20 includes a reception unit 21, an evaluation unit 22, and a transmission unit 23.
- the receiving unit 21 is a part for receiving an evaluation image from the user terminal 10.
- the receiving unit 21 receives an evaluation image from the user terminal 10 via the network NW.
- the receiving unit 21 further receives correction information from the user terminal 10.
- the receiving unit 21 is realized by the communication device 204, for example.
- the receiving unit 21 outputs the evaluation image and the correction information to the evaluation unit 22.
- the evaluation unit 22 is a part for performing an evaluation on the degree of rust of the evaluation object based on the evaluation image.
- the evaluation unit 22 performs an evaluation on the degree of rust of the evaluation object using a neural network.
- the neural network may be a convolutional neural network (CNN) or a recurrent neural network (RNN).
- the transmission unit 23 is a part for transmitting the evaluation result to the user terminal 10.
- the transmission unit 23 transmits the evaluation result to the user terminal 10 via the network NW.
- the transmission unit 23 is realized by the communication device 204, for example. Since the transmission unit 23 outputs (transmits) the evaluation result to the user terminal 10, it can be regarded as an output unit.
- the series of processes of the evaluation method illustrated in FIG. 4 is started, for example, when the user of the user terminal 10 photographs the evaluation object using the imaging device 107.
- the image acquisition unit 11 acquires a captured image of the evaluation object (step S01).
- the image acquisition unit 11 acquires an image of the evaluation object generated by the imaging device 107 as a captured image.
- the image acquisition unit 11 outputs the acquired captured image to the correction unit 13.
- the marker MK may be attached to the evaluation object before acquiring the captured image of the evaluation object.
- the marker MK is used for correcting a captured image in image processing described later.
- the marker MK has a shape that can specify the direction of the marker MK.
- the marker MK is asymmetric in at least one of the vertical direction and the width direction. Specifically, as shown in FIGS. 6A to 6F, the marker MK includes a white area Rw and a black area Rb. In order to facilitate image processing to be described later, the marker MK has a square edge F1.
- the edge F1 is an edge of the region Rb.
- the marker MK may be surrounded by a frame F2, and a gap Rgap may be provided between the frame F2 and the region Rb.
- the marker MK is drawn on the sheet-like member.
- the user of the user terminal 10 directly attaches the sheet-like member including the marker MK to the evaluation object.
- the user may attach a sheet-like member including the marker MK to the evaluation object using a UAV (Unmanned Aerial Vehicle) or a telescopic rod.
- the marker MK only needs to be composed of two or more regions with different colors.
- the color assigned to the region Rw need not be white and may be gray or the like.
- the color assigned to the region Rb need not be black and may be a chromatic color.
- the correction unit 13 corrects the captured image (step S02).
- the correction unit 13 performs distortion correction in order to correct distortion of the captured image (step S21).
- the captured image may be distorted as compared to an image obtained by photographing the evaluation object from the front.
- the correction unit 13 performs distortion correction by converting the captured image into an image obtained by photographing the evaluation object from the front based on the distance between the imaging device 107 and each position of the evaluation object.
- the correction unit 13 may further perform an aspect correction as a distortion correction.
- the correction unit 13 may perform distortion correction using the marker MK.
- the captured image of the evaluation object to which the marker MK is attached includes a marker region Rm that is an image (image region) of the marker MK.
- the correction unit 13 first extracts the marker region Rm from the captured image.
- the correcting unit 13 extracts the marker region Rm by performing object detection processing or edge detection processing on the captured image, for example.
- edge detection processing may be used because edge detection processing has higher detection accuracy and processing speed than object detection processing.
- the correction unit 13 confirms whether or not the extracted marker region Rm is an image of the marker MK. For example, the correction unit 13 performs binarization processing on the marker region Rm after performing histogram averaging processing on the marker region Rm. Then, the correction unit 13 compares the binarized marker region Rm and the marker MK, and determines that the marker region Rm is an image of the marker MK when the two match. Thereby, the vertex coordinates of the marker MK in the captured image are acquired. The correction unit 13 determines that the marker region Rm is not an image of the marker MK when the two do not match, and extracts the marker region Rm again.
- the correction unit 13 calculates the orientation of the marker MK in the captured image using the marker region Rm. Since the marker MK is asymmetric in at least one of the vertical direction and the width direction, the orientation of the marker MK in the captured image can be calculated. Then, as illustrated in FIG. 7, the correction unit 13 performs projective transformation on the captured image so that the original shape of the marker MK is restored from the vertex coordinates and orientation of the marker MK in the captured image.
- The captured image is thereby converted into an image equivalent to one obtained by photographing the evaluation object from the front.
- the correcting unit 13 sets the vertex Pm1 as the origin, the direction from the vertex Pm1 to the vertex Pm2 as the X1 axis direction, and the direction from the vertex Pm1 toward the vertex Pm4 as the Y1 axis direction. Then, the correction unit 13 restores the shape of the marker MK by converting the X1-Y1 coordinate system to the XY orthogonal coordinate system. Thereby, distortion correction is performed.
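- As a concrete illustration of this projective transformation, a minimal sketch using OpenCV is shown below. The detected vertex coordinates, the input file name, and the restored marker size are hypothetical values, not taken from the embodiment.

```python
import cv2
import numpy as np

# Hypothetical vertex coordinates of the marker region Rm detected in the
# captured image, ordered Pm1 (top-left), Pm2, Pm3, Pm4 clockwise.
src_pts = np.float32([[412, 230], [655, 241], [648, 478], [405, 465]])
side_px = 200  # assumed edge length of the restored square marker, in pixels

# Destination: the marker restored to a square in an X-Y orthogonal frame.
dst_pts = np.float32([[0, 0], [side_px, 0], [side_px, side_px], [0, side_px]])

captured = cv2.imread("captured.jpg")  # hypothetical input image
M = cv2.getPerspectiveTransform(src_pts, dst_pts)
# Warp the whole image so the marker, and with it the surrounding surface,
# appears as if photographed from the front.
corrected = cv2.warpPerspective(captured, M, (captured.shape[1], captured.shape[0]))
```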
- the correction unit 13 extracts the evaluation area Re from the captured image (step S22).
- Evaluation of the rust degree and the rust removal degree needs to be performed on a range (evaluation range) occupying a predetermined area of the surface of the evaluation object.
- the correction unit 13 determines the evaluation range in the captured image and extracts it as the evaluation region Re. That is, the evaluation area Re is an image area corresponding to the evaluation range in the captured image.
- Depending on the imaging conditions, the size of the evaluation region Re in the captured image differs.
- the correction unit 13 extracts the evaluation area Re from the captured image based on the marker area Rm.
- the evaluation range is a size obtained by extending the length of each side of the marker MK by a predetermined magnification.
- the correction unit 13 calculates the length of each side of the marker region Rm from the coordinates of the vertices Pm1 to Pm4 of the marker region Rm.
- Then, the correction unit 13 specifies the size of the evaluation range in the captured image based on the calculated side lengths and the predetermined magnification.
- the correcting unit 13 extracts an area having the specified evaluation range size as an evaluation area Re from the captured image.
- the evaluation region Re may or may not include the marker region Rm.
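- A minimal sketch of this marker-based extraction follows; the magnification value and the assumption that the evaluation range is a square centered on the marker are illustrative choices, not values from the embodiment.

```python
import numpy as np

MAGNIFICATION = 3.0  # hypothetical predetermined magnification of the marker side

def extract_evaluation_region(img: np.ndarray, marker_pts: np.ndarray) -> np.ndarray:
    """Crop a square evaluation region Re around the marker region Rm."""
    # Side length of the marker region, from the vertex coordinates Pm1..Pm4.
    side = np.linalg.norm(marker_pts[1] - marker_pts[0])
    half = int(side * MAGNIFICATION / 2)
    cx, cy = marker_pts.mean(axis=0).astype(int)  # marker center
    # Clamp the crop to the image bounds.
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1 = min(cx + half, img.shape[1])
    y1 = min(cy + half, img.shape[0])
    return img[y0:y1, x0:x1]
```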
- the correcting unit 13 may extract the evaluation region Re from the captured image based on the distance D1 without using the marker MK.
- the distance acquisition unit 12 acquires the distance D1.
- the correction unit 13 calculates the actual width W using formula (1), based on the distance D1, the focal length f1, and the width w1 of the imaging element Cs included in the imaging device 107.
- the width W is the horizontal length of the evaluation object imaged by the imaging device 107. When the entire evaluation object is not included in the captured image, the width W is the horizontal length of the evaluation object in the imaging range.
- the focal length f1 is a distance from the image sensor Cs to the lens Cl.
- the width w1 of the imaging element Cs of the imaging device 107 is the horizontal size (length) of the imaging element Cs and is determined in advance.
- the correction unit 13 acquires the actual focal length f1 and width w1 from EXIF (Exchangeable Image File Format) information attached to the captured image.
- Alternatively, the correction unit 13 acquires the name of the imaging device 107 and searches a database using that name, thereby acquiring the focal length f1 and the width w1.
- the user may input the focal length f1 and the width w1 into the user terminal 10 using the input device 105.
- a width w2 in terms of 35 mm is obtained instead of the width w1 of the image sensor Cs.
- the correction unit 13 calculates the width w1 based on the focal length f1, the focal length f2 converted to 35 mm, and the width w2.
- Since the imaging field angle θ is equal to the 35 mm equivalent field angle, relational expression (2) is established, from which formula (3) is obtained.
- the correcting unit 13 calculates the width w1 using Expression (3).
- the correcting unit 13 may directly calculate the width W from the focal length f2 and the width w2 using Expression (4).
- the correction unit 13 acquires the focal length f2 from the EXIF information.
- the correction unit 13 acquires the name of the imaging device 107 and searches the database using the name of the imaging device 107, thereby acquiring the focal length f2.
- the user may input the focal length f2 to the user terminal 10 using the input device 105.
- the correction unit 13 calculates the width wp per pixel by dividing the width W by the number of pixels Npw in the width direction (horizontal direction) of the captured image, as shown in Expression (5).
- the correction unit 13 acquires the pixel number Npw from the EXIF information.
- the correction unit 13 may acquire the pixel number Npw by counting the pixels in the width direction in the captured image.
- the correction unit 13 calculates the height H of the imaged portion in the same manner as the width W.
- the height H is the length in the vertical direction of the evaluation object imaged by the imaging device 107. When the entire evaluation object is not included in the captured image, the height H is the vertical length of the evaluation object in the imaging range.
- the correction unit 13 calculates the height hp per pixel by dividing the height H by the number of pixels Nph in the height direction (vertical direction) of the captured image, as shown in Expression (6).
- the correction unit 13 acquires the pixel number Nph from the EXIF information.
- the correction unit 13 may acquire the number of pixels Nph by counting the pixels in the height direction in the captured image.
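- Expressions (1) to (6) referenced above are not reproduced in this text. A consistent reconstruction from the description (the lens Cl and the imaging element Cs form similar triangles with the object plane) is:

```latex
\begin{aligned}
W   &= \frac{D_1\, w_1}{f_1}                                   &&\text{(1)}\\
\tan\frac{\theta}{2} &= \frac{w_1}{2 f_1} = \frac{w_2}{2 f_2}  &&\text{(2)}\\
w_1 &= \frac{f_1\, w_2}{f_2}                                   &&\text{(3)}\\
W   &= \frac{D_1\, w_2}{f_2}                                   &&\text{(4)}\\
w_p &= \frac{W}{N_{pw}}, \qquad h_p = \frac{H}{N_{ph}}         &&\text{(5), (6)}
\end{aligned}
```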
- the correction unit 13 determines the evaluation area Re in the captured image using the width wp and the height hp per pixel, and extracts the evaluation area Re. For example, the correction unit 13 calculates the number of pixels that the evaluation range in the captured image has in the width direction and the height direction. The correction unit 13 divides the width of the evaluation range by the width wp, and sets the division result as the number of pixels in the width direction. Similarly, the correction unit 13 divides the height of the evaluation range by the height hp, and sets the division result as the number of pixels in the height direction. Then, the correction unit 13 extracts an area having the number of pixels in the width direction and the number of pixels in the height direction as an evaluation area Re from the captured image.
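- Putting the formulas together, a short numeric sketch of the distance-based extraction follows; every value (distance, focal length, sensor size, pixel counts, evaluation range) is a hypothetical example such as might be read from EXIF data.

```python
D1 = 500.0               # distance between lens Cl and the object, mm (assumed)
f1 = 4.2                 # actual focal length, mm (assumed)
w1, h1 = 6.17, 4.55      # imaging element width/height, mm (assumed)
Npw, Nph = 4032, 3024    # captured-image pixel counts (width, height)
EVAL_W = EVAL_H = 100.0  # evaluation range on the surface, mm (assumed)

W = D1 * w1 / f1         # actual imaged width, mm   ... formula (1)
H = D1 * h1 / f1         # actual imaged height, mm
wp = W / Npw             # width per pixel, mm/px    ... formula (5)
hp = H / Nph             # height per pixel, mm/px   ... formula (6)

# Pixel extent of the evaluation region Re in the captured image.
re_px_w = round(EVAL_W / wp)
re_px_h = round(EVAL_H / hp)
```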
- the correction unit 13 corrects the size of the evaluation area Re (step S23).
- the size of the evaluation region Re can change according to the distance D1.
- Therefore, the correction unit 13 performs enlargement or reduction processing on the evaluation region Re so that it matches a predetermined evaluation size.
- the evaluation size is the size of a reference image (teacher data) used for learning of the neural network NN.
- the correction unit 13 compares the size of the evaluation region Re with the evaluation size, and determines which of the enlargement process and the reduction process is performed.
- the correction unit 13 performs enlargement processing when the size of the evaluation region Re is smaller than the evaluation size, and performs reduction processing when the size of the evaluation region Re is larger than the evaluation size. That is, the correction unit 13 enlarges or reduces the evaluation area Re to adjust the size of the evaluation image to the evaluation size.
- the enlargement process for example, a bilinear interpolation method is used.
- the reduction processing for example, an average pixel method is used.
- Other enlargement and reduction algorithms may be used, but it is desirable that the image quality be maintained when the enlargement or reduction is performed.
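- In OpenCV terms, bilinear interpolation and the average pixel method correspond to the INTER_LINEAR and INTER_AREA flags; a sketch, with the evaluation size as a hypothetical value:

```python
import cv2

EVAL_SIZE = (224, 224)  # hypothetical size of the reference images used for training

def fit_to_evaluation_size(region):
    h, w = region.shape[:2]
    if w < EVAL_SIZE[0] or h < EVAL_SIZE[1]:
        interp = cv2.INTER_LINEAR  # bilinear interpolation for enlargement
    else:
        interp = cv2.INTER_AREA    # average pixel (area) method for reduction
    return cv2.resize(region, EVAL_SIZE, interpolation=interp)
```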
- the correction unit 13 performs color correction of the evaluation area Re (step S24). Even for the same evaluation object, the brightness of the image may change depending on the shooting environment. Further, when the color of the light source used for photographing is different, the color of the image may be different. Color correction is performed to reduce the influence of the shooting environment.
- the correcting unit 13 corrects the color of the evaluation area Re based on the color of the reference area included in the captured image.
- the reference area is an image (image area) of a reference body with a specific color.
- the region Rw of the marker MK can be used as the reference body.
- the color of the region Rw of the marker MK is measured in advance with a color meter or the like, and a reference value indicating the measured color is stored in a memory (not shown).
- a value indicating a color an RGB value, an HSV value, or the like is used.
- the correction unit 13 acquires the color value of the region Rw in the marker region Rm included in the captured image (evaluation region Re), and compares the acquired value with a reference value. Then, color correction is performed so that these differences are small (for example, zero).
- For the color correction, gamma correction or the like is used.
- a difference may be added to each pixel value (offset processing).
- the marker MK may not be used as the reference body. In this case, a sample whose color has been measured in advance (for example, a gray board) may be used as the reference body and photographed together with the evaluation object, and the color correction of the evaluation region Re may be performed in the same manner as when the marker MK is used.
- the correction unit 13 may perform color correction based on the gray world assumption.
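- A minimal sketch of reference-based gamma correction follows; mapping the mean of the reference region onto the stored reference value is one concrete way to make the difference small, and the default reference value shown is hypothetical.

```python
import numpy as np

def color_correct(region, ref_patch, ref_value=(250, 250, 250)):
    """Gamma-correct each channel so the reference patch matches ref_value.

    region:    evaluation region Re, uint8 image
    ref_patch: pixels of the reference body (e.g. the white region Rw)
    ref_value: pre-measured color of the reference body (hypothetical default)
    """
    img = region.astype(np.float32) / 255.0
    mean = ref_patch.astype(np.float32).mean(axis=(0, 1)) / 255.0
    target = np.asarray(ref_value, np.float32) / 255.0
    gamma = np.log(target) / np.log(mean)  # maps the observed mean onto the target
    return (np.clip(img ** gamma, 0.0, 1.0) * 255).astype(np.uint8)
```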
- the correction unit 13 removes the specular reflection from the evaluation region Re (step S25).
- Specular reflection may occur when the evaluation object has a metallic luster, or depending on the state of the coating film of the evaluation object. In the image, a portion where specular reflection occurred usually appears as strong white; that is, the portion where specular reflection occurs causes whiteout in the image.
- the specular reflection portion can be detected as a white portion, so the correction unit 13 removes the specular reflection using the color corrected image (evaluation region Re).
- the correction unit 13 specifies the specular reflection part based on the pixel value of each pixel included in the evaluation region Re. For example, the correction unit 13 determines that the pixel is a part of the specular reflection portion when all of the RGB pixel values are larger than a predetermined threshold value.
- Alternatively, the correction unit 13 may convert the pixel values into HSV and specify the specular reflection portion by performing similar threshold processing on the value (V), or on both the value (V) and the saturation (S).
- the correction unit 13 removes the specular reflection from the specular reflection portion and restores the original image information (pixel value).
- the correction unit 13 automatically interpolates (restores) the image information of the specular reflection portion from the image information in its vicinity by, for example, a Navier-Stokes based method or the fast marching method of Alexander Telea.
- the correction unit 13 may restore the image information of the specular reflection part by learning various rust images in advance by machine learning. For machine learning, for example, GAN (Generative Adversarial Network) is used. Note that the correction unit 13 may restore the image information for a region in which the outer edge of the specular reflection portion is expanded (that is, a region that includes the specular reflection portion and is larger than the specular reflection portion).
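- A sketch combining the RGB thresholding, the expansion of the outer edge, and the inpainting described above; the threshold and dilation size are assumptions. OpenCV's cv2.INPAINT_NS flag selects the Navier-Stokes based method, and cv2.INPAINT_TELEA the fast marching method of Alexander Telea.

```python
import cv2
import numpy as np

def remove_specular(region, thresh=240, dilate_px=3):
    """Inpaint overexposed (specular) pixels of the evaluation region Re."""
    # A pixel is treated as specular when all of its RGB values exceed the threshold.
    mask = np.all(region > thresh, axis=2).astype(np.uint8) * 255
    # Expand the mask slightly so the outer edge of the highlight is included.
    mask = cv2.dilate(mask, np.ones((dilate_px, dilate_px), np.uint8))
    return cv2.inpaint(region, mask, inpaintRadius=3, flags=cv2.INPAINT_NS)
```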
- the correction unit 13 removes noise from the evaluation region Re (step S26).
- the correction unit 13 removes noise from the evaluation region Re using, for example, a denoising filter (denoising function) such as a Gaussian filter and a low-pass filter.
- the correction unit 13 performs blur correction on the evaluation area Re (step S27).
- blurring such as camera shake may occur.
- the correction unit 13 performs image blur correction using, for example, a Wiener filter or a blind deconvolution algorithm.
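- A sketch of steps S26 and S27 follows. The Gaussian kernel size is an assumption, and the Wiener deconvolution is shown with an assumed known blur kernel (PSF); the blind deconvolution mentioned above would instead estimate the PSF.

```python
import cv2
import numpy as np
from skimage.restoration import wiener

def denoise_and_deblur(region):
    # Noise removal (step S26): a small Gaussian denoising filter.
    denoised = cv2.GaussianBlur(region, (3, 3), 0)
    # Blur correction (step S27): Wiener deconvolution on the grayscale image.
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY).astype(np.float64) / 255.0
    psf = np.ones((5, 5)) / 25.0  # assumed uniform 5x5 blur kernel
    return wiener(gray, psf, balance=0.1)
```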
- the correction process in FIG. 5 is an example, and the correction process performed by the correction unit 13 is not limited to this. Some or all of steps S21 and S23 to S27 may be omitted. Steps S21 to S27 may be performed in any order. As described above, when the specular reflection removal is performed after the color correction, the specular reflection portion appears as strong white in the image, so that the specular reflection portion specifying accuracy is improved.
- Note that the correction unit 13 may regard the marker region Rm as being composed of a plurality of blocks arranged in a grid, and obtain the vertex coordinates of each block from the coordinates of the four vertices of the marker region Rm (marker MK). Thereby, the accuracy of the correction can be improved.
- the correction unit 13 outputs the captured image corrected by the correction process in step S02 to the transmission unit 14 as an evaluation image, and the transmission unit 14 transmits the evaluation image to the evaluation device 20 via the network NW.
- the transmission unit 14 transmits the evaluation image to the evaluation device 20 together with a terminal ID that can uniquely identify the user terminal 10. For example, an IP (Internet Protocol) address may be used as the terminal ID.
- the reception unit 21 receives the evaluation image transmitted from the user terminal 10 and outputs the evaluation image to the evaluation unit 22.
- the correction unit 13 may not output the evaluation image to the transmission unit 14 when the evaluation image is not clear.
- the transmission unit 14 may encrypt the evaluation image and transmit the encrypted evaluation image to the evaluation device 20. In this case, the receiving unit 21 receives the encrypted evaluation image from the user terminal 10, decrypts the encrypted evaluation image, and outputs the evaluation image to the evaluation unit 22.
- the evaluation unit 22 performs an evaluation on the degree of rust of the evaluation object based on the evaluation image (step S04).
- the evaluation unit 22 uses the neural network NN shown in FIG. 11 to evaluate the rust degree and the rust removal degree of the evaluation object.
- the evaluation unit 22 assigns an image ID that can uniquely identify the evaluation image to the evaluation image.
- The neural network NN receives an evaluation image as input and outputs a matching rate for each category.
- As the rust grade, for example, grades A to D defined in JIS (Japanese Industrial Standards) Z0313 are used.
- In the order of grades A, B, C, and D, the degree of rust generated on the evaluation object increases.
- As the grades of the degree of rust removal, for example, grades St2, St3, Sa1, Sa2, Sa2.5, and Sa3 defined in JIS Z0313 are used.
- In JIS Z0313, no grade is defined for the state in which the rust removal treatment has not been performed; in the following description, this state is referred to as grade RAW for convenience.
- Grades defined in other standards may be used as the rust grade and the rust removal grade.
- As the rust grade, a grade defined in standards such as ISO (International Organization for Standardization), SIS (Swedish Standard), SSPC (Steel Structures Painting Council), SPSS (Standard for the Preparation of Steel Surface prior to Painting), and NACE (National Association of Corrosion Engineers) may be used.
- As the grade of the degree of rust removal, a grade defined in standards such as ISO, SSPC, SPSS, and NACE may be used.
- a combination of a rust grade and a rust removal grade is used as a category.
- the matching rate indicates the probability that the degree of rust of the evaluation object belongs to the category (grade).
- Grade A + Grade St2, Grade A + Grade St3, Grade A + Grade Sa1, and Grade A + Grade Sa2 do not exist in any of the standards, but here, for the sake of convenience, these combinations will be described. These combinations may be set by the user or may be omitted.
- the evaluation unit 22 may separate the evaluation image into one or a plurality of channels and use the image information (pixel value) of each channel as an input to the neural network NN. For example, the evaluation unit 22 separates the evaluation image into each component of the color space. When the RGB color space is used as the color space, the evaluation unit 22 separates the evaluation image into an R component pixel value, a G component pixel value, and a B component pixel value. When the HSV color space is used as the color space, the evaluation unit 22 separates the evaluation image into an H component pixel value, an S component pixel value, and a V component pixel value. The evaluation unit 22 may convert the evaluation image into a gray scale and use the converted image as an input to the neural network NN.
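- A sketch of this channel separation, with a hypothetical file name; each flattened channel supplies part of the input values x_i:

```python
import cv2
import numpy as np

img = cv2.imread("evaluation.png")  # hypothetical evaluation image
b, g, r = cv2.split(img)            # separate into the B, G, R components
# For the HSV color space instead:
# h, s, v = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))
# Flatten the channels into the input vector x_1 ... x_M of the neural network NN.
x = np.concatenate([c.flatten() for c in (r, g, b)]).astype(np.float32) / 255.0
```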
- the neural network NN includes an input layer L1, an intermediate layer L2, and an output layer L3.
- the input layer L1 is located at the entrance of the neural network NN, and M input values x_i (i is an integer from 1 to M) are input to the input layer L1.
- the input layer L1 includes a plurality of neurons 41.
- the neurons 41 are provided corresponding to the input values x_i, and the number of neurons 41 is equal to the total number M of the input values x_i. That is, the number of neurons 41 is equal to the total number of pixels included in each channel of the evaluation image.
- the i-th neuron 41 outputs the input value x_i to each neuron 421 of the first intermediate layer L21 of the intermediate layer L2.
- the input layer L1 includes a node 41b.
- the node 41b outputs a bias value b_j (j is an integer from 1 to M1) to each neuron 421.
- the intermediate layer L2 is located between the input layer L1 and the output layer L3.
- the intermediate layer L2 is also called a hidden layer because it is hidden from the outside of the neural network NN.
- the intermediate layer L2 includes one or more layers.
- the intermediate layer L2 includes a first intermediate layer L21 and a second intermediate layer L22.
- the first intermediate layer L21 has M1 neurons 421.
- the j-th neuron 421 adds the bias value b_j to the sum of the input values x_i weighted by the weighting coefficients w_ij to obtain a calculated value z_j.
- the neuron 421 sequentially performs, for example, convolution, calculation using an activation function, and pooling. In this case, for example, a ReLU function is used as the activation function.
- the j-th neuron 421 outputs the calculated value z_j to each neuron 422 of the second intermediate layer L22.
- the first intermediate layer L21 includes a node 421b.
- the node 421b outputs a bias value to each neuron 422.
- each neuron performs the same calculation as that of the neuron 421 and outputs the calculated value to each neuron in the subsequent layer.
- the final-stage neuron (here, the neuron 422) of the intermediate layer L2 outputs the calculated value to each neuron 43 of the output layer L3.
- the output layer L3 is located at the exit of the neural network NN and outputs output values y_k (k is an integer from 1 to N).
- each output value y_k is assigned to a category and is a value corresponding to the matching rate of that category.
- the output layer L3 includes a plurality of neurons 43.
- the neurons 43 are provided corresponding to the output values y_k, and the number of neurons 43 is equal to the total number N of output values y_k. That is, the number of neurons 43 is equal to the number of categories indicating the degree of rust.
- Each neuron 43 performs a calculation similar to that of the neuron 421 and applies an activation function to the calculation result to obtain an output value y_k.
- Examples of the activation function include a softmax function, a ReLU function, a hyperbolic tangent function, a sigmoid function, an identity function, and a step function.
- In the present embodiment, a softmax function is used. For this reason, the output values are normalized so that the sum of the N output values y_k is 1; the matching rate (%) is obtained by multiplying the output value y_k by 100.
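- The forward pass described above can be sketched in a few lines of NumPy; the layer sizes and random parameters are placeholders for the learned weights and biases.

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def softmax(a):
    e = np.exp(a - a.max())  # subtract the max for numerical stability
    return e / e.sum()       # the N outputs sum to 1

rng = np.random.default_rng(0)
M, M1, N = 12, 8, 24  # hypothetical input, hidden, and category counts
W1, b1 = rng.normal(size=(M1, M)), rng.normal(size=M1)
W2, b2 = rng.normal(size=(N, M1)), rng.normal(size=N)

x = rng.random(M)         # input values x_i
z = relu(W1 @ x + b1)     # hidden layer: z_j = f(sum_i(w_ij * x_i) + b_j)
y = softmax(W2 @ z + b2)  # output values y_k; y_k * 100 gives the matching rate (%)
```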
- the evaluation unit 22 outputs the N output values y_k, together with the image ID of the evaluation image, to the transmission unit 23 as the evaluation result of the evaluation image.
- An array of the N output values y_k is determined in advance, and each output value y_k is associated with one of the N categories.
- Alternatively, the evaluation unit 22 may use the largest of the N output values y_k, together with the category name or index corresponding to that output value (corresponding to the "number" shown in FIG. 12), as the evaluation result.
- In the present embodiment, an array of output values corresponding to the matching rates shown in FIG. 12 is output to the transmission unit 23 as the evaluation result.
- Thereby, the user terminal 10 can determine how to present the evaluation result to the user.
- the transmission unit 23 transmits the evaluation result to the user terminal 10 via the network NW (step S05).
- the transmission unit 23 identifies the destination user terminal 10 based on the terminal ID transmitted from the user terminal 10 together with the evaluation image, and transmits the evaluation result to the user terminal 10.
- the reception unit 15 receives the evaluation result transmitted from the evaluation device 20 and outputs the evaluation result to the output unit 16.
- the transmitting unit 23 may encrypt the evaluation result and transmit the encrypted evaluation result to the user terminal 10 as described above. In this case, the receiving unit 15 receives the encrypted evaluation result from the evaluation device 20, decrypts the encrypted evaluation result, and outputs the evaluation result to the output unit 16.
- the output unit 16 generates output information for notifying the user of the evaluation result, and outputs the evaluation result to the user based on the output information (step S06).
- for the output information, for example, a scatter diagram is used.
- the vertical axis of the scatter diagram shows the grade before the rust removal treatment (rust degree), and the horizontal axis shows the grade after the rust removal treatment (degree of rust removal).
- the color bar indicates the color corresponding to each matching rate.
- the matching rate of a category is indicated by a point plotted at the intersection of the grade before the rust removal treatment and the grade after the rust removal treatment.
- the color and size of a point are set according to the matching rate of the category corresponding to that point: the larger the matching rate, the larger the size of the point, and the color of the point is set to the color corresponding to the matching rate.
- Point P1 is located at the intersection of grade B and grade RAW and has a color corresponding to 15%. That is, point P1 indicates that the matching rate of the category whose grade before the rust removal treatment is grade B and whose grade after the rust removal treatment is grade RAW (a state in which no rust removal treatment has been performed) is 15%. Similarly, point P2 indicates that the matching rate of the category whose grade before the rust removal treatment is grade B and whose grade after the rust removal treatment is grade Sa1 is 5%, and point P3 indicates that the matching rate of the category whose grade before the rust removal treatment is grade D and whose grade after the rust removal treatment is grade Sa1 is 85%.
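As an illustration only, such a scatter diagram could be drawn with matplotlib as follows (a minimal sketch; the grade names other than B, D, RAW, and Sa1, the point values, and the styling are assumptions, not taken from the specification):

```python
import matplotlib.pyplot as plt

pre_grades = ["A", "B", "C", "D"]             # grades before rust removal (vertical axis)
post_grades = ["RAW", "St2", "Sa1", "Sa2.5"]  # grades after rust removal (horizontal axis)

# (grade before, grade after, matching rate in %) -- e.g. the points P1 to P3 above.
points = [("B", "RAW", 15), ("B", "Sa1", 5), ("D", "Sa1", 85)]

xs = [post_grades.index(after) for _, after, _ in points]
ys = [pre_grades.index(before) for before, _, _ in points]
rates = [rate for _, _, rate in points]

# Point size and color both follow the matching rate of the category.
sc = plt.scatter(xs, ys, s=[10 * r for r in rates], c=rates,
                 cmap="viridis", vmin=0, vmax=100)
plt.xticks(range(len(post_grades)), post_grades)
plt.yticks(range(len(pre_grades)), pre_grades)
plt.xlabel("grade after rust removal treatment")
plt.ylabel("grade before rust removal treatment")
plt.colorbar(sc, label="matching rate (%)")
plt.show()
```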
- when the grade after the rust removal treatment of the category having the highest matching rate is grade RAW, the grade before the rust removal treatment of that category is used as the rust degree.
- when the grade after the rust removal treatment of the category having the highest matching rate is other than grade RAW, the grade after the rust removal treatment of that category is used as the degree of rust removal.
- the output unit 16 may integrate the matching rates for each grade after the rust removal treatment and display the integrated matching rate for each grade after the rust removal treatment.
- the output unit 16 may quantify the evaluation result by multiplying the matching rate for each grade after the rust removal treatment by a classification coefficient assigned to each grade and summing the products.
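A minimal sketch of this quantification, assuming hypothetical grades, matching rates, classification coefficients, and acceptance threshold (none of these values come from the specification):

```python
# Matching rates integrated per grade after the rust removal treatment (illustrative).
matching_rates = {"RAW": 0.10, "Sa1": 0.80, "Sa2": 0.10}

# Hypothetical classification coefficient assigned to each grade.
coefficients = {"RAW": 0.0, "Sa1": 1.0, "Sa2": 2.0}

# Multiply each matching rate by its grade coefficient and sum the products.
score = sum(matching_rates[grade] * coefficients[grade] for grade in matching_rates)

# Hypothetical pass/fail decision for the rust removal treatment.
print(f"score={score:.2f}, acceptable={score >= 0.9}")
```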
- the output unit 16 may notify the user whether the rust removal treatment is acceptable or not, using such a quantified evaluation result.
- similarly, the output unit 16 may integrate the matching rates for each grade before the rust removal treatment and display the integrated matching rate for each grade before the rust removal treatment.
- the output unit 16 may display the evaluation result as text. For example, the output unit 16 displays the grade name of the category having the highest matching rate together with its matching rate in text, such as “result: [before rust removal treatment] grade D → [after rust removal treatment] grade Sa1 (80%)”. The output unit 16 may also display the grade names of all categories and their matching rates in text.
- the output unit 16 may output the evaluation result by voice or may output the evaluation result by vibration.
- the form of output by the output unit 16 may be set by the user.
- the correction information acquisition unit 17 determines whether or not an evaluation result correction operation has been performed by the user. For example, after confirming the evaluation result output by the output unit 16, the user operates the input device 105 to display a screen for correcting the evaluation result.
- the user operates the input device 105 and designates a category on the scatter diagram using the pointer MP. That is, the user determines the category by visually inspecting the evaluation object, and a point Pu is plotted at the intersection of the grade before the rust removal treatment and the grade after the rust removal treatment corresponding to the category determined by the user.
- alternatively, the output unit 16 may display the grade names of the categories together with radio buttons, and the category determined by the user is selected with the corresponding radio button. An object such as a drop-down list or a slider may also be used for the user to select a category.
- when the correction information acquisition unit 17 determines that no correction operation has been performed, the series of processes of the evaluation method by the evaluation system 1 ends. On the other hand, when the correction information acquisition unit 17 determines that a correction operation has been performed via the input device 105, it acquires information indicating the corrected category, together with the image ID of the evaluation image on which the correction operation was performed, as correction information (step S07).
- the correction information acquisition unit 17 outputs the correction information to the transmission unit 14, and the transmission unit 14 transmits the correction information to the evaluation device 20 via the network NW (Step S08).
- the receiving unit 21 receives the correction information transmitted from the user terminal 10 and outputs the correction information to the evaluation unit 22.
- the transmission unit 14 may encrypt the correction information and transmit the encrypted correction information to the evaluation device 20.
- the reception unit 21 receives the encrypted correction information from the user terminal 10, decrypts the encrypted correction information, and outputs the correction information to the evaluation unit 22.
- the evaluation unit 22 performs learning based on the correction information (step S09). Specifically, the evaluation unit 22 uses a set of the corrected category and the evaluation image as teacher data.
- the evaluation unit 22 may learn the neural network NN by any of online learning, mini-batch learning, and batch learning. Online learning is a method in which learning is performed using new teacher data each time new teacher data is acquired. Mini-batch learning is a method in which a certain amount of teacher data is taken as one unit and learning is performed using one unit of teacher data. Batch learning is a method of performing learning using all teacher data. For the learning, an algorithm such as back propagation is used. Note that learning of the neural network NN means updating the weighting coefficient and bias value used in the neural network NN to more optimal values.
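As one possible illustration of mini-batch learning with backpropagation, a PyTorch-style loop might look as follows (the specification does not prescribe a framework; the network, data, and hyperparameters below are placeholders):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder teacher data: evaluation images paired with corrected category indices.
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 10, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

# Stand-in for the neural network NN (10 categories assumed).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for batch_images, batch_labels in loader:   # one unit (mini-batch) of teacher data
    optimizer.zero_grad()
    loss = criterion(model(batch_images), batch_labels)
    loss.backward()                         # backpropagation
    optimizer.step()                        # update weights and biases toward more optimal values
```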
- each functional unit in the user terminal 10 and the evaluation device 20 is realized by executing a program module for realizing each function on a computer constituting the user terminal 10 and the evaluation device 20.
- the evaluation program including these program modules is provided by a computer-readable recording medium such as a ROM or a semiconductor memory.
- the evaluation program may be provided as a data signal via a network.
- the evaluation area Re is extracted from the captured image of the evaluation object, and an evaluation image is generated based on the evaluation area Re. Then, evaluation regarding the degree of rust is performed based on the evaluation image, and an evaluation result is output.
- the evaluation region Re is an image of a range having a predetermined area on the surface of the evaluation object. For this reason, regardless of the conditions under which the captured image was captured (the distance between the evaluation object and the imaging device 107, the magnification, and the like), a range having the same area on the surface of the evaluation object is targeted, and the degree of rust occurring within that range is evaluated.
- as a result, it is possible to improve the evaluation accuracy regarding the degree of rust. Specifically, since the rust degree and the degree of rust removal are evaluated based on the evaluation image, the evaluation accuracy of the rust degree and the degree of rust removal can be improved.
- a general-purpose machine can be used as the imaging device 107, and the distance D1 between the imaging device 107 and the evaluation object can be freely changed.
- the marker MK has a shape that can specify the orientation of the marker MK. Therefore, a range having a predetermined area on the surface of the evaluation object can be determined in the captured image based on the orientation and size of the marker region Rm included in the captured image, and the determined range can be extracted as the evaluation region Re. Thereby, the evaluation region Re can be extracted without using other information, so the evaluation system 1 can be reduced in size.
- further, based on the distance D1 between the imaging device 107 and the evaluation object, a range having a predetermined area on the surface of the evaluation object can be specified, so the evaluation region Re can be extracted more accurately.
- the color tone of the captured image may change depending on the color tone of the light source used for shooting.
- the brightness of the captured image may vary depending on the amount of light irradiation.
- the color of the evaluation region Re is corrected based on the color of the reference region (for example, the region Rw in the marker region Rm) included in the captured image.
- the color of the evaluation region Re is corrected so that the color of the region Rw in the marker region Rm becomes the color of the region Rw in the marker MK.
- thereby, the influence of light can be reduced. As a result, it becomes possible to further improve the evaluation accuracy of the rust degree and the degree of rust removal.
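A minimal sketch of such a color correction, assuming the region Rw is known to be white in the original marker MK and that its pixel coordinates in the captured image have already been located (the file paths and coordinates are illustrative):

```python
import cv2
import numpy as np

image = cv2.imread("captured.jpg")     # captured image (illustrative path)
rw = image[10:30, 10:30]               # pixels of the reference region Rw (coordinates assumed known)

# Per-channel gain mapping the observed color of Rw to its true color (white here).
true_color = np.array([255.0, 255.0, 255.0])            # BGR
observed = np.maximum(rw.reshape(-1, 3).mean(axis=0), 1e-6)
gain = true_color / observed

corrected = np.clip(image.astype(np.float64) * gain, 0, 255).astype(np.uint8)
cv2.imwrite("corrected.jpg", corrected)
```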
- the rust degree and the degree of rust removal are evaluated using the neural network NN.
- the rust pattern before the rust removal treatment and the rust pattern after the rust removal treatment are indefinite. For this reason, it is difficult for general object detection to specify the position and state of such irregular objects, and pattern recognition is not suited to recognizing a myriad of patterns.
- in contrast, by training the neural network NN, the rust degree and the degree of rust removal can be evaluated, and their evaluation accuracy can be further improved.
- the size of the evaluation image is adjusted to the size of the reference image (teacher data) used for learning of the neural network NN. For this reason, evaluation by the neural network NN can be performed appropriately.
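For example, with OpenCV the extracted evaluation region could be enlarged or reduced to the reference-image size as follows (the 224x224 size is an assumption):

```python
import cv2

evaluation_region = cv2.imread("evaluation_region.png")  # extracted region Re (illustrative path)
REFERENCE_SIZE = (224, 224)  # size of the reference images used for training (assumed)

# Enlarge or reduce the region so the evaluation image matches the network's input size.
evaluation_image = cv2.resize(evaluation_region, REFERENCE_SIZE,
                              interpolation=cv2.INTER_AREA)
```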
- FIG. 16 is a configuration diagram schematically showing an evaluation system including an evaluation apparatus according to the second embodiment.
- An evaluation system 1A shown in FIG. 16 is mainly different from the evaluation system 1 in that a user terminal 10A is provided instead of the user terminal 10 and an evaluation device 20A is provided instead of the evaluation device 20.
- the user terminal 10A is mainly different from the user terminal 10 in that the correction unit 13 is not provided and the captured image and the distance D1 are transmitted to the evaluation device 20A instead of the evaluation image.
- the image acquisition unit 11 outputs the captured image to the transmission unit 14.
- the distance acquisition unit 12 outputs the distance D1 to the transmission unit 14.
- the transmission unit 14 transmits the captured image and the distance D1 to the evaluation device 20A.
- the evaluation device 20A is mainly different from the evaluation device 20 in that the captured image and the distance D1 are received from the user terminal 10A instead of the evaluation image, and the correction unit 24 is further provided.
- the receiving unit 21 receives the captured image and the distance D1 from the user terminal 10A, and outputs the captured image and the distance D1 to the correction unit 24.
- since the receiving unit 21 acquires the captured image from the user terminal 10A, it can also be regarded as an image acquisition unit.
- the correction unit 24 has the same function as the correction unit 13. That is, the correction unit 24 extracts an evaluation area from the captured image, and generates an evaluation image based on the evaluation area. Then, the correction unit 24 outputs the evaluation image to the evaluation unit 22.
- FIG. 17 is a sequence diagram showing an evaluation method performed by the evaluation system shown in FIG.
- the image acquisition unit 11 acquires a captured image of the evaluation object (step S31). For example, as in step S01, the image acquisition unit 11 acquires an image of the evaluation target generated by the imaging device 107 as a captured image.
- the image acquisition unit 11 outputs the acquired captured image to the transmission unit 14, and the transmission unit 14 transmits the captured image to the evaluation device 20A via the network NW (step S32).
- the transmission unit 14 transmits the captured image to the evaluation apparatus 20A together with a terminal ID that can uniquely identify the user terminal 10A.
- the reception unit 21 receives the captured image transmitted from the user terminal 10A and outputs the captured image to the correction unit 24.
- the transmission unit 14 may encrypt the captured image and transmit the encrypted captured image to the evaluation device 20A.
- the reception unit 21 receives the encrypted captured image from the user terminal 10A, decrypts the encrypted captured image, and outputs the captured image to the correction unit 24.
- the correction unit 24 corrects the captured image (step S33). Since the process of step S33 is the same as the process of step S02, the detailed description thereof is omitted.
- the correction unit 24 outputs the captured image corrected by the correction process in step S33 to the evaluation unit 22 as an evaluation image. Thereafter, the processing from step S34 to step S39 is the same as the processing from step S04 to step S09, and therefore detailed description thereof is omitted. As described above, a series of processes of the evaluation method by the evaluation system 1A is completed.
- each functional unit in the user terminal 10A and the evaluation device 20A is realized by executing a program module for realizing each function in a computer constituting the user terminal 10A and the evaluation device 20A.
- the evaluation program including these program modules is provided by a computer-readable recording medium such as a ROM or a semiconductor memory.
- the evaluation program may be provided as a data signal via a network.
- the evaluation system 1A, the evaluation device 20A, the evaluation method, the evaluation program, and the recording medium according to the second embodiment produce the same effects as the evaluation system 1, the evaluation device 20, the evaluation method, the evaluation program, and the recording medium according to the first embodiment. Further, since the user terminal 10A does not include the correction unit 13, the processing load on the user terminal 10A can be reduced. In addition, since the captured images and the distances D1 are accumulated in the evaluation device 20A, it becomes easier to train the neural network NN more effectively.
- FIG. 18 is a configuration diagram schematically showing an evaluation system including an evaluation apparatus according to the third embodiment.
- the evaluation system 1B shown in FIG. 18 is mainly different from the evaluation system 1 in that the user terminal 10B is provided instead of the user terminal 10 and the evaluation device 20 is not provided.
- the user terminal 10B is mainly different from the user terminal 10 in that the evaluation unit 18 is further provided, and the transmission unit 14 and the reception unit 15 are not provided. In this case, the user terminal 10B is also a stand-alone evaluation device.
- the correction unit 13 outputs an evaluation image to the evaluation unit 18.
- the correction information acquisition unit 17 outputs the correction information to the evaluation unit 18.
- the evaluation unit 18 has the same function as the evaluation unit 22. That is, the evaluation unit 18 performs an evaluation on the degree of rust of the evaluation object based on the evaluation image. Then, the evaluation unit 18 outputs the evaluation result to the output unit 16.
- FIG. 19 is a flowchart showing an evaluation method performed by the evaluation system shown in FIG.
- the image acquisition unit 11 acquires a captured image of the evaluation object as in step S01 (step S41). Then, the image acquisition unit 11 outputs the captured image to the correction unit 13. Subsequently, the correction unit 13 corrects the captured image (step S42). Since the process of step S42 is the same as the process of step S02, the detailed description thereof is omitted. Then, the correction unit 13 outputs the captured image corrected by the correction process in step S42 to the evaluation unit 18 as an evaluation image.
- step S43 the evaluation unit 18 performs an evaluation on the degree of rust of the evaluation object based on the evaluation image. Since the process of step S43 is the same as the process of step S04, its detailed description is omitted. Then, the evaluation unit 18 outputs the evaluation result to the output unit 16. Subsequently, the output unit 16 generates output information for notifying the user of the evaluation result, and outputs the evaluation result to the user based on the output information (step S44). Since the process of step S44 is the same as the process of step S06, its detailed description is omitted.
- the correction information acquisition unit 17 determines whether or not an evaluation result correction operation has been performed by the user (step S45).
- when no correction operation has been performed (step S45: NO), the series of processes of the evaluation method by the evaluation system 1B ends.
- when a correction operation has been performed (step S45: YES), the correction information acquisition unit 17 acquires information indicating the corrected category, together with the image ID of the evaluation image on which the correction operation was performed, as correction information. Then, the correction information acquisition unit 17 outputs the correction information to the evaluation unit 18.
- step S46 the evaluation unit 18 performs learning based on the correction information. Since the process of step S46 is the same as the process of step S09, detailed description thereof is omitted. As described above, a series of processes of the evaluation method by the evaluation system 1B is completed.
- each functional unit in the user terminal 10B is realized by executing a program module for realizing each function in a computer constituting the user terminal 10B.
- the evaluation program including these program modules is provided by a computer-readable recording medium such as a ROM or a semiconductor memory.
- the evaluation program may be provided as a data signal via a network.
- the evaluation system 1B, the user terminal 10B, the evaluation method, the evaluation program, and the recording medium according to the third embodiment produce the same effects as those of the first embodiment. In addition, since it is not necessary to transmit and receive data via the network NW, there is no time lag associated with communication over the network NW, and the response speed can be improved. Furthermore, the traffic and communication charges of the network NW can be reduced.
- the evaluation system, the evaluation device, the evaluation method, the evaluation program, and the recording medium according to the present disclosure are not limited to the above embodiments.
- the user terminals 10, 10A, and 10B may not include the distance acquisition unit 12.
- the user terminals 10, 10A, and 10B may not include the correction information acquisition unit 17.
- in the neural network NN, batch normalization may be used. Batch normalization is a process of converting the output values of each layer so that their variance is constant. In this case, since bias values are not necessary, the nodes that output bias values (the node 41b, the node 421b, and the like) can be omitted.
- the evaluation units 18 and 22 may perform an evaluation on the degree of rust based on the evaluation image using a method other than the neural network.
- the output unit 16 may output the evaluation result to a memory (storage device) (not shown) and store the evaluation result in the memory.
- for example, the output unit 16 may create management data in which a management number that can uniquely identify the evaluation result, the date on which the evaluation was performed, and the evaluation result are associated with one another, and store the management data in the memory.
- FIG. 20 is a configuration diagram schematically illustrating an evaluation system according to a modification.
- rust is generated under the coating film due to moisture passing through the coating film, whereby the coating film is lifted and the coating film is peeled off.
- coating film peeling is a state in which the coating film (paint film) applied to the base (metal substrate) is peeled off in a scaly manner. When the coating film peels off, the base is exposed, and rust is more likely to occur.
- Steel materials such as weathering steel can be used without painting.
- the evaluation system 1 of the modified example shown in FIG. 20 extracts a region in which rust occurs (candidate region Rc) from a captured image obtained by imaging the whole or a part of the evaluation object, and performs the evaluation on the extracted region.
- the evaluation related to the degree of rust includes the degree of rust distribution in the evaluation object.
- the modified evaluation system 1 further includes an extraction unit 19.
- the extraction unit 19 is a part for extracting the candidate region Rc from the captured image G. As shown in FIG. 21A, the extraction unit 19 screens the captured image G, and detects a region including coating film peeling (rust) as a candidate region Rc.
- the extraction unit 19 detects the candidate region Rc, for example, by performing object detection processing. For the object detection processing, an image recognition technique such as a method using a cascade classifier, a support vector machine (SVM), and a neural network is used.
- the extraction unit 19 detects the candidate region Rc by distinguishing between dirt and coating film peeling (rust). That is, the extraction unit 19 does not detect a region with dirt as the candidate region Rc.
- the extraction unit 19 extracts a region Rex including the candidate region Rc, and outputs the region Rex to the correction unit 13. If the candidate area Rc is larger than the predetermined size, the extraction unit 19 divides the candidate area Rc into a plurality of areas, and outputs a plurality of areas Rex including the divided areas to the correction unit 13. The correction unit 13 generates the evaluation image by performing the above-described correction on the region Rex.
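A rough sketch of such screening with a sliding window and a placeholder color-based classifier (the window size, stride, and decision rule are assumptions; the specification names cascade classifiers, SVMs, and neural networks as options for the actual detector):

```python
import numpy as np

def is_rust_candidate(patch: np.ndarray) -> bool:
    """Placeholder classifier: flags patches whose mean color is reddish brown."""
    b, g, r = (patch[..., i].mean() for i in range(3))  # BGR channel means
    return r > 1.3 * g and r > 1.3 * b

def extract_candidates(image: np.ndarray, win: int = 64, stride: int = 32):
    """Screen the captured image G and collect windows judged to contain rust."""
    candidates = []
    height, width = image.shape[:2]
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            if is_rust_candidate(image[y:y + win, x:x + win]):
                candidates.append((x, y, win, win))  # candidate region Rc as (x, y, w, h)
    return candidates
```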
- a combination of a grade indicating the magnitude of coating film peeling and a grade indicating the degree of coating film peeling can be used.
- “overall”, “spot”, and “pinpoint” defined in the SSPC standard are used as the grade of the magnitude of coating film peeling.
- as the grade indicating the degree of coating film peeling, for example, “no abnormality” and “starting to peel” are used.
- the same effect as the evaluation system 1 according to the first embodiment described above is exhibited.
- a candidate area Rc in which rust is generated is extracted from the captured image G, and an evaluation regarding the degree of rust is performed using the candidate area Rc. For this reason, since it is not necessary to evaluate the whole surface of an evaluation target object, it becomes possible to improve the evaluation efficiency regarding the degree of rust.
- the shape of the marker MK is not limited to a square.
- the shape of the marker MK may be a rectangle.
- the marker MK has a shape that can specify the orientation of the marker MK, but the shape of the marker MK is not limited to a shape having directivity.
- the shape of the marker MK may be an omnidirectional shape.
- the shape of the region Rb may be a square
- the shape of the region Rw may be a square that is slightly smaller than the region Rb.
- the center point of the region Rb and the center point of the region Rw may be overlapped, and each side of the region Rb and each side of the region Rw may be arranged in parallel to each other.
- the marker MK may have an opening Hm.
- the opening Hm is a through hole that penetrates the sheet-like member on which the marker MK is drawn.
- the opening area of the opening Hm is equal to the area of the evaluation range.
- the correction units 13 and 24 extract an image of the surface of the evaluation object exposed through the opening Hm as the evaluation region Re. Thereby, the extraction of the evaluation area Re can be simplified.
- the opening area of the opening Hm may be larger than the area of the evaluation range.
- alternatively, as preprocessing for extracting the evaluation region Re, the correction units 13 and 24 may first extract the region exposed through the opening Hm from the captured image, and then extract the evaluation region Re from the extracted region.
- in this way, regardless of the shape of the marker MK, a range having a predetermined area on the surface of the evaluation object can be determined in the captured image based on the size of the marker region Rm included in the captured image, and the determined range can be extracted as the evaluation region Re.
- thereby, the evaluation region Re can be extracted without using other information. Therefore, the evaluation systems 1, 1A, and 1B can be reduced in size.
- the marker MK when the marker MK has an omnidirectional shape, the marker MK has a simple shape, so that the creation of the marker MK can be facilitated. Further, since the orientation of the marker MK is not important, the user can easily photograph the evaluation object.
- the boundary between the marker region Rm and the region to be evaluated may become unclear due to light reflection or the like.
- the edge may not be detected by the edge detection process.
- in object detection, if the determination threshold is too small, false detections increase, and if the determination threshold is too large, detection omissions increase.
- in addition, the direction (angle) of the marker region Rm cannot be obtained by object detection alone.
- when the edge enhancement process is performed after the marker region Rm is extracted by the object detection process, and the edge detection process is further performed, the detection accuracy is improved; however, when the color of the outer edge portion of the marker region Rm hardly differs from the color around the marker region Rm, detection omissions may still occur.
- therefore, the marker MK is surrounded by the frame F2, and a gap Rgap is provided between the frame F2 and the region Rb.
- the gap Rgap surrounds the region Rb along the edge F1.
- the color of the gap Rgap is different from the color of the outer edge portion (that is, the region Rb) of the marker MK. Therefore, even if the color around the marker region Rm (outside the frame F2) is similar to the color of the outer edge portion (region Rb) of the marker region Rm, the outer edge (edge F1) of the marker region Rm remains clear, so the outer edge of the marker region Rm can be detected.
- when the edge enhancement process is performed after the marker region Rm is extracted by the object detection process, and the edge detection process is further performed, the vertices (vertices Pm1 to Pm4) of the region Rb can be detected more reliably. Therefore, the marker region Rm can be extracted quickly and accurately. As a result, it is possible to further improve the evaluation accuracy regarding the degree of rust.
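A rough OpenCV sketch of this two-stage flow: a coarse box from a prior object-detection stage is refined by edge enhancement and edge detection to recover the four vertices (the detector producing the coarse box, the Canny thresholds, and the unsharp-mask parameters are assumptions):

```python
import cv2
import numpy as np

def refine_marker(image: np.ndarray, rough_box):
    """Refine a coarsely detected marker region Rm to its four vertices Pm1-Pm4."""
    x, y, w, h = rough_box                          # box from the object-detection stage
    roi = image[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    sharp = cv2.addWeighted(gray, 1.5, blur, -0.5, 0)   # edge enhancement (unsharp mask)
    edges = cv2.Canny(sharp, 50, 150)                   # edge detection (thresholds assumed)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                     # detection omission
    quad = max(contours, key=cv2.contourArea)
    corners = cv2.approxPolyDP(quad, 0.02 * cv2.arcLength(quad, True), True)
    return corners.reshape(-1, 2) + np.array([x, y])    # vertices in image coordinates
```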
- the distance between the frame F2 and the region Rb may be, for example, one tenth or more of one side of the marker MK in order to secure the gap Rgap.
- the distance between the frame F2 and the region Rb (the width of the gap Rgap) may be, for example, less than half of one side of the marker MK in consideration of ease of use of the marker MK.
- the frame F2 may not be a frame that completely surrounds the marker MK. That is, the missing part Fgap may be provided in the frame F2.
- the frame F2 is not limited to a solid line and may be a broken line. In this case, the frame F2 has a shape in which the frame line of the frame F2 is interrupted.
- by providing the missing part Fgap in the frame F2, the possibility that the region surrounded by the frame F2 is itself detected as the marker region Rm in edge detection or the like can be reduced.
- when the correction units 13 and 24 extract the evaluation region Re from the captured image G, they may randomly determine the evaluation region Re in the captured image G and extract the determined evaluation region Re. In this case, first, the correction units 13 and 24 obtain the maximum values of the coordinates that the reference point Pr of the evaluation region Re can take.
- the reference point Pr is one of the four vertices of the evaluation area Re, and here is the vertex closest to the origin of the XY coordinates among the four vertices of the evaluation area Re.
- the maximum value x_crop_max of the X coordinate and the maximum value y_crop_max of the Y coordinate that the reference point Pr can take are expressed by the following equation (8). Note that the vertex Pg1 of the captured image G is located at the origin (0, 0), the vertex Pg2 at (X_g, 0), the vertex Pg3 at (X_g, Y_g), and the vertex Pg4 at (0, Y_g).
- the correction units 13 and 24 randomly determine the coordinates (x_crop, y_crop) of the reference point of the evaluation region Re using equation (9).
- the function random (minimum value, maximum value) is a function that returns an arbitrary value included in the range from the minimum value to the maximum value.
- the correction units 13 and 24 may determine the coordinates of the reference point of the evaluation area Re again when the determined evaluation area Re and the marker area Rm overlap.
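A sketch of equations (8) and (9) in code, assuming a square evaluation region of side crop_size aligned with the image axes and a bounding box for the marker region Rm (these assumptions, and the retry loop, are illustrative):

```python
import random

def overlaps(a, b):
    """Axis-aligned overlap test; rectangles are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def random_evaluation_region(img_w, img_h, crop_size, marker_box, max_tries=100):
    # Equation (8): largest coordinates the reference point Pr can take.
    x_crop_max = img_w - crop_size
    y_crop_max = img_h - crop_size
    for _ in range(max_tries):
        # Equation (9): random(minimum value, maximum value).
        x_crop = random.uniform(0, x_crop_max)
        y_crop = random.uniform(0, y_crop_max)
        region = (x_crop, y_crop, crop_size, crop_size)
        if not overlaps(region, marker_box):  # redraw when Re overlaps the marker region Rm
            return region
    return None
```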
- the correction units 13 and 24 may specify the extraction direction for the marker region Rm and extract the evaluation region Re from the captured image G.
- for example, the correction units 13 and 24 calculate the coordinates (x_cg, y_cg) of the center position Cg of the captured image G and the coordinates (x_cm, y_cm) of the center position Cm of the marker region Rm.
- then, the correction units 13 and 24 calculate the vector V from the center position Cm toward the center position Cg, as shown in equation (10).
- the correction units 13 and 24 determine the position of the evaluation region Re in the direction indicated by the vector V from the marker region Rm. For example, the correction units 13 and 24 determine the position of the evaluation region Re so that the reference point Pr of the evaluation region Re is located in the direction indicated by the vector V from the center position Cm.
- the reference point Pr is the vertex closest to the marker region Rm among the four vertices of the evaluation region Re.
- the correction units 13 and 24 determine the position of the evaluation region Re so as not to overlap with the marker region Rm.
- specifically, among the coordinates that the reference point Pr can take, the correction units 13 and 24 calculate the coordinates (x_crop_max, y_crop_max) of the reference point Pr_max farthest from the marker region Rm and the coordinates (x_crop_min, y_crop_min) of the reference point Pr_min closest to the marker region Rm.
- the correcting units 13 and 24 determine the position of the evaluation region Re so that the reference point Pr is positioned on the line segment between these two points.
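A sketch of this placement, assuming equation (10) defines the vector V = Cg - Cm and that the reference point Pr is taken on the segment between Pr_min and Pr_max, which lie along that direction (the interpolation parameter t is an assumption):

```python
import numpy as np

def place_reference_point(center_g, center_m, pr_min, pr_max, t=0.5):
    """Place the reference point Pr in the direction from Cm toward Cg.

    center_g, center_m: centers of the captured image G and the marker region Rm.
    pr_min, pr_max: admissible reference points closest to / farthest from Rm,
                    both lying along the direction of V.
    t: position on the segment (0 -> Pr_min, 1 -> Pr_max; 0.5 assumed).
    """
    v = np.asarray(center_g, float) - np.asarray(center_m, float)  # equation (10)
    direction = v / np.linalg.norm(v)        # extraction direction from the marker
    pr = (1 - t) * np.asarray(pr_min, float) + t * np.asarray(pr_max, float)
    return pr, direction
```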
Abstract
This evaluation system performs an evaluation regarding the degree of rust of an evaluation object using a captured image of the evaluation object. The evaluation system is provided with: an image acquisition unit that acquires the captured image; a correction unit that corrects the captured image to generate an evaluation image; an evaluation unit that performs the evaluation on the basis of the evaluation image; and an output unit that outputs the evaluation result from the evaluation unit. The correction unit extracts, from the captured image, an evaluation region that is an image of a range having a predetermined area on the surface of the evaluation object, and generates the evaluation image on the basis of the evaluation region.
Description
The present disclosure relates to an evaluation system, an evaluation apparatus, an evaluation method, an evaluation program, and a recording medium.
An inspection apparatus that performs inspection using an image of an inspection object is known. For example, Patent Document 1 discloses an inspection apparatus that inputs an image obtained by imaging an inspection object to a neural network and determines the presence and type of a defect.
It is also known to evaluate the degree of rust generated in steel materials. For example, Patent Document 2 discloses a measuring apparatus that irradiates light to a steel material and measures a scale from reflected light intensity at a specific wavelength.
In order to evaluate the degree of rust generated on the surface of an object such as steel, it is conceivable to use an image of the object as described in Patent Document 1. In this case, the imaging condition of the captured image may be different for each captured image. For example, the image may be taken from a position away from the object, or the image may be taken near the object. An arbitrary magnification is set and the image is taken. However, the degree of rust can be evaluated by the extent of rust spreading in an actual object. For this reason, when the captured image of the object is used as it is, there is a possibility that sufficient evaluation accuracy cannot be obtained.
本技術分野では、さびの程度に関する評価精度を向上させることが望まれている。
In this technical field, it is desired to improve the evaluation accuracy regarding the degree of rust.
The evaluation system according to one aspect of the present disclosure is a system that performs an evaluation on the degree of rust of an evaluation object using a captured image of the evaluation object. The evaluation system includes an image acquisition unit that acquires a captured image, a correction unit that generates an evaluation image by correcting the captured image, an evaluation unit that performs evaluation based on the evaluation image, and an evaluation result by the evaluation unit. An output unit for outputting. The correction unit extracts an evaluation area that is an image having a predetermined area on the surface of the evaluation object from the captured image, and generates an evaluation image based on the evaluation area.
An evaluation apparatus according to another aspect of the present disclosure is an apparatus that performs an evaluation on the degree of rust of an evaluation object using a captured image of the evaluation object. The evaluation device includes an image acquisition unit that acquires a captured image, a correction unit that generates an evaluation image by correcting the captured image, an evaluation unit that performs evaluation based on the evaluation image, and an evaluation result by the evaluation unit. An output unit for outputting. The correction unit extracts an evaluation area that is an image having a predetermined area on the surface of the evaluation object from the captured image, and generates an evaluation image based on the evaluation area.
The evaluation method according to still another aspect of the present disclosure is a method for performing an evaluation regarding the degree of rust of an evaluation object using a captured image of the evaluation object. The evaluation method includes: a step of acquiring a captured image; a step of generating an evaluation image by correcting the captured image; a step of performing an evaluation based on the evaluation image; and a step of outputting an evaluation result of the step of performing the evaluation. In the step of generating the evaluation image, an evaluation region that is an image of a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
An evaluation program according to still another aspect of the present disclosure is a program for causing a computer to execute: a step of acquiring a captured image of an evaluation object; a step of generating an evaluation image by correcting the captured image; a step of performing an evaluation regarding the degree of rust of the evaluation object based on the evaluation image; and a step of outputting an evaluation result of the step of performing the evaluation. In the step of generating the evaluation image, an evaluation region that is an image of a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
A recording medium according to still another aspect of the present disclosure is a computer-readable recording medium recording an evaluation program for causing a computer to execute: a step of acquiring a captured image of an evaluation object; a step of generating an evaluation image by correcting the captured image; a step of performing an evaluation regarding the degree of rust of the evaluation object based on the evaluation image; and a step of outputting an evaluation result of the step of performing the evaluation. In the step of generating the evaluation image, an evaluation region that is an image of a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
In these evaluation systems, evaluation devices, evaluation methods, evaluation programs, and recording media, an evaluation region is extracted from a captured image of an evaluation object, and an evaluation image is generated based on the evaluation region. Then, an evaluation regarding the degree of rust is performed based on the evaluation image, and an evaluation result is output. The evaluation region is an image of a range having a predetermined area on the surface of the evaluation object. Therefore, regardless of the conditions under which the captured image was captured (the distance between the evaluation object and the imaging device, the magnification, and the like), a range having the same area on the surface of the evaluation object is targeted, and the degree of rust occurring within that range is evaluated. As a result, it is possible to improve the evaluation accuracy regarding the degree of rust.
The captured image may include a marker area that is an image of a marker attached to the evaluation object. The correction unit may extract the evaluation area based on the marker area. In this case, the range in the captured image is determined based on the size of the marker region, and the determined range can be extracted as the evaluation region. For this reason, an evaluation area can be extracted without using other information. As a result, the evaluation system can be reduced in size.
The marker may have a shape that can identify the orientation of the marker. In this case, the orientation in the captured image can be identified by the marker area.
The marker may have an omnidirectional shape. In this case, since the marker shape is simple, the creation of the marker can be facilitated.
The marker may have an opening. The opening area of the opening may be equal to the above area. The correction unit may extract an image of the surface of the evaluation object exposed through the opening as an evaluation region. In this case, extraction of the evaluation area can be simplified.
The marker may be surrounded by a frame. A gap may be provided between the marker and the frame, and the color of the gap may be different from the color of the outer edge portion of the marker. In this case, a gap having a color different from the color of the outer edge portion of the marker exists between the frame and the marker. For this reason, even if the color around the marker area is similar to the color of the outer edge portion of the marker area, the outer edge of the marker area can be detected, so that the detection accuracy of the marker area is improved. As a result, it is possible to further improve the evaluation accuracy regarding the degree of rust.
The frame may have a shape in which the frame line is interrupted. In this case, in edge detection or the like, it is possible to reduce the possibility that a region surrounded by a frame is detected as a marker region, so that the detection accuracy of the marker region is improved. As a result, it is possible to further improve the evaluation accuracy regarding the degree of rust.
The evaluation system may further include a distance acquisition unit that acquires the distance between the imaging device that captured the captured image and the evaluation object. The correction unit may extract the evaluation region based on the distance. Since a range having a predetermined area on the surface of the evaluation object can be specified based on the distance between the imaging device and the evaluation object, the evaluation region can be extracted more accurately.
The correction unit may correct the color of the evaluation area based on the color of the reference area included in the captured image. The reference area may be an image of a reference body with a specific color. Even for the same evaluation object, the color tone of the captured image may change depending on the color tone of the light source used for photographing. Moreover, even for the same evaluation object, the brightness of the captured image may vary depending on the amount of light irradiation. In the above configuration, when the color of the reference region is different from the specific color, it is considered that the color in the captured image is affected by light. Therefore, for example, the influence of light can be reduced by correcting the color of the evaluation region so that the color of the reference region becomes a specific color (for example, the original color). Thereby, it becomes possible to further improve the evaluation accuracy regarding the degree of rust.
The correction unit may remove specular reflection from the evaluation area. When the evaluation object is irradiated with strong light, specular reflection may occur. When the evaluation object is imaged in this state, the captured image may be overexposed. Color information is lost in areas where whiteout occurs. For this reason, the color information can be restored by removing the specular reflection (whiteout). Thereby, it becomes possible to further improve the evaluation accuracy regarding the degree of rust.
The evaluation unit may perform evaluation using a neural network. By learning the neural network, it is possible to further improve the evaluation accuracy regarding the degree of rust.
The correction unit may adjust the size of the evaluation image to the size of the reference image used for learning of the neural network by expanding or reducing the evaluation region. In this case, the evaluation by the neural network can be appropriately performed.
The evaluation system may further include an extraction unit that extracts a candidate area where rust is generated from the captured image. The correcting unit may generate the evaluation image by correcting the candidate area. In this case, since it is not necessary to evaluate the entire surface of the evaluation object, it is possible to improve the evaluation efficiency related to the degree of rust.
The evaluation may include evaluation of the rust degree. In this case, it becomes possible to improve the evaluation accuracy of the rust degree.
Evaluation may include evaluation of the degree of rust removal. In this case, it becomes possible to improve the evaluation accuracy of the degree of rust removal.
According to each aspect and each embodiment of the present disclosure, the evaluation accuracy regarding the degree of rust can be improved.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. In the description of the drawings, the same elements are denoted by the same reference numerals, and redundant descriptions are omitted.
(First embodiment)
FIG. 1 is a configuration diagram schematically showing an evaluation system including an evaluation apparatus according to the first embodiment. The evaluation system 1 shown in FIG. 1 is a system that performs an evaluation regarding the degree of rust of an evaluation object. The evaluation regarding the degree of rust includes the rust degree and the degree of rust removal. That is, the evaluation system 1 evaluates the rust degree and the degree of rust removal of the evaluation object. Examples of the evaluation object include steel materials.
The rust degree is an index indicating the degree of rust generated on the surface of the evaluation object before the rust removal treatment. A thin black oxide film (mill scale) is formed on the surface of the steel material as it is heat-treated at a high temperature and cooled. The mill scale corresponds to black rust before red rust occurs in the steel material. When the mill scale peels off, red rust is generated in the steel base material, and further the red rust progresses, the base material is gradually eroded, and pitting corrosion that corrodes while forming holes is generated in the base material. The degree of rust includes mill scale, red rust, and the degree of pitting (progression). The degree of rust removal is an index indicating the degree of finishing of the rust removal treatment. In other words, the degree of rust removal is an index indicating the degree of deposits such as rust remaining on the surface of the evaluation object after the rust removal treatment. Examples of the rust removal treatment include blast treatment, water jet treatment, and flame cleaning. As the rust removal treatment, an electric tool, a file, a brush, a spatula, or the like may be used.
The evaluation system 1 includes one or a plurality of user terminals 10 and an evaluation device 20. The user terminal 10 and the evaluation device 20 are connected to be communicable with each other via a network NW. The network NW may be configured by either wired or wireless. Examples of the network NW include the Internet, a mobile communication network, and a WAN (Wide Area Network).
The user terminal 10 is a terminal device used by a user. The user terminal 10 images the evaluation object to generate a captured image of the evaluation object, and transmits the captured image to the evaluation device 20. The user terminal 10 receives the evaluation result from the evaluation device 20 and outputs the evaluation result to the user. The user terminal 10 may be a portable terminal incorporating an imaging device, or a device capable of communicating with an imaging device. In the present embodiment, the user terminal 10 is described as a portable terminal incorporating an imaging device. Examples of the portable terminal include a smartphone, a tablet terminal, and a notebook PC (Personal Computer).
FIG. 2 is a hardware configuration diagram of the user terminal shown in FIG. As shown in FIG. 2, the user terminal 10 physically includes one or more processors 101, a main storage device 102, an auxiliary storage device 103, a communication device 104, an input device 105, an output device 106, and an imaging device 107. , And a computer including hardware such as the measurement device 108. As the processor 101, a processor having a high processing speed is used. Examples of the processor 101 include a GPU (Graphics Processing Unit) and a CPU (Central Processing Unit). The main storage device 102 includes a RAM (Random Access Memory) and a ROM (Read Only Memory). Examples of the auxiliary storage device 103 include a semiconductor memory and a hard disk device.
The communication device 104 is a device that transmits and receives data to and from other devices via the network NW. An example of the communication device 104 is a network card. Encryption may be used for data transmission / reception via the network NW. That is, the communication device 104 may encrypt the data and transmit the encrypted data to another device. The communication device 104 may receive the encrypted data from another device and decrypt the encrypted data. For encryption, a common key cryptosystem such as Triple DES (Data Encryption Standard) and Rijndael, or a public key cryptosystem such as RSA and ElGamal may be used.
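As an illustration only, an authenticated symmetric scheme could protect data in transit; the sketch below uses AES-GCM from the Python cryptography package, which is an assumption on top of the ciphers the specification names (Triple DES, Rijndael, RSA, ElGamal):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # shared key (common key cryptosystem)
aesgcm = AESGCM(key)

nonce = os.urandom(12)
payload = b"...captured image data..."                # placeholder payload
ciphertext = aesgcm.encrypt(nonce, payload, None)     # sender side (e.g. user terminal)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)   # receiver side (e.g. evaluation device)
assert plaintext == payload
```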
The input device 105 is a device used when the user operates the user terminal 10. Examples of the input device 105 include a touch panel, a keyboard, and a mouse. The output device 106 is a device that outputs various types of information to the user of the user terminal 10. Examples of the output device 106 include a display, a speaker, and a vibrator.
The imaging device 107 is a device for imaging. The imaging device 107 is, for example, a camera module. Specifically, the imaging device 107 includes a plurality of optical system components such as a lens Cl and an imaging element Cs (see FIG. 9(a)), a plurality of control system circuits that drive and control these components, and a signal processing circuit unit that converts the electrical signal representing the captured image generated by the imaging element Cs into an image signal, which is a digital signal. The measuring device 108 is a device that measures a distance. Examples of the measuring device 108 include a millimeter wave radar, an ultrasonic sensor, LIDAR (Light Detection and Ranging), motion stereo, and a stereo camera.
ユーザ端末10の図1に示される各機能は、主記憶装置102等のハードウェアに1又は複数の所定のコンピュータプログラムを読み込ませることにより、1又は複数のプロセッサ101の制御のもとで各ハードウェアを動作させるとともに、主記憶装置102及び補助記憶装置103におけるデータの読み出し及び書き込みを行うことで実現される。
Each function illustrated in FIG. 1 of the user terminal 10 is performed by causing each hardware device such as the main storage device 102 to read one or more predetermined computer programs under the control of one or more processors 101. This is realized by operating the hardware and reading and writing data in the main storage device 102 and the auxiliary storage device 103.
ユーザ端末10は、機能的には、画像取得部11と、距離取得部12と、補正部13と、送信部14と、受信部15と、出力部16と、修正情報取得部17と、を備えている。
Functionally, the user terminal 10 includes an image acquisition unit 11, a distance acquisition unit 12, a correction unit 13, a transmission unit 14, a reception unit 15, an output unit 16, and a correction information acquisition unit 17. I have.
The image acquisition unit 11 is a part for acquiring a captured image including the evaluation object, and is realized by, for example, the imaging device 107. The captured image may be a still image or a moving image. The captured image is acquired as, for example, image data indicating the pixel value of each pixel, but for convenience of explanation it is referred to as a captured image. When the user terminal 10 does not include the imaging device 107, the image acquisition unit 11 acquires the captured image by, for example, receiving an image captured by another device (for example, a terminal having a camera function) from that device. For example, when the image acquisition unit 11 receives the captured image from another device via the network NW, the part that performs the reception processing of the captured image (such as the communication device 104 in FIG. 2) functions as the image acquisition unit 11. The image acquisition unit 11 outputs the captured image to the correction unit 13.
The distance acquisition unit 12 is a part for acquiring the distance D1 between the evaluation object and the imaging device 107 (specifically, the lens Cl) that generated the captured image, and is realized by, for example, the measurement device 108. When the imaging device 107 is a depth camera, the distance D1 is obtained together with the captured image, so the distance acquisition unit 12 is realized by the imaging device 107. When the user terminal 10 does not have a measurement function, the distance acquisition unit 12 acquires the distance D1 by, for example, receiving a distance D1 measured by another device (for example, a terminal having a measurement function) from that device. For example, when the distance acquisition unit 12 receives the distance D1 from another device via the network NW, the part that performs the reception processing of the distance D1 (such as the communication device 104 in FIG. 2) functions as the distance acquisition unit 12. Alternatively, the user may measure the distance D1 with a scale or a tape measure and input it to the user terminal 10; in this case, the part that accepts the input of the distance D1 (such as the input device 105 in FIG. 2) functions as the distance acquisition unit 12. The distance acquisition unit 12 outputs the distance D1 to the correction unit 13.
The correction unit 13 is a part for generating an evaluation image by correcting the captured image. The correction unit 13 extracts an evaluation region from the captured image and generates the evaluation image based on the evaluation region. The evaluation region is the image region of a range having a predetermined area on the surface of the evaluation object. The correction unit 13 performs, for example, size correction, distortion correction, color correction, specular reflection removal, noise removal, and blur correction on the captured image; each correction process is detailed later. The correction unit 13 outputs the evaluation image to the transmission unit 14.
The transmission unit 14 is a part for transmitting the evaluation image to the evaluation device 20 via the network NW. The transmission unit 14 also transmits the correction information acquired by the correction information acquisition unit 17 to the evaluation device 20. The transmission unit 14 is realized by, for example, the communication device 104. The reception unit 15 is a part for receiving the evaluation result from the evaluation device 20 via the network NW, and is likewise realized by, for example, the communication device 104.

The output unit 16 is a part for outputting the evaluation result, and is realized by, for example, the output device 106. When the evaluation result is output on an output device of another device, such as its display, the output unit 16 transmits the evaluation result to that device via the network NW, for example; in this case, the part that performs the transmission processing of the evaluation result (such as the communication device 104 in FIG. 2) functions as the output unit 16.
The correction information acquisition unit 17 is a part for acquiring correction information on the evaluation result. For example, after confirming the evaluation result output by the output unit 16, the user may correct the evaluation result using the input device 105. In that case, the correction information acquisition unit 17 acquires the corrected evaluation result as correction information and outputs it to the transmission unit 14.

The evaluation device 20 is a device that performs the evaluation on the degree of rust of the evaluation object using a captured image (evaluation image) of the evaluation object. The evaluation device 20 is configured by, for example, an information processing device (server device) such as a computer.
FIG. 3 is a hardware configuration diagram of the evaluation device shown in FIG. 1. As shown in FIG. 3, the evaluation device 20 may be physically configured as a computer including hardware such as one or more processors 201, a main storage device 202, an auxiliary storage device 203, and a communication device 204. A processor with a high processing speed is used as the processor 201; examples include a GPU and a CPU. The main storage device 202 includes a RAM, a ROM, and the like. Examples of the auxiliary storage device 203 include a semiconductor memory and a hard disk device.

The communication device 204 is a device that transmits and receives data to and from other devices via the network NW; an example is a network card. Encryption may be used for data transmitted and received via the network NW. That is, the communication device 204 may encrypt data and transmit the encrypted data to another device, and may receive encrypted data from another device and decrypt it. For the encryption, a common-key cryptosystem such as Triple DES or Rijndael, or a public-key cryptosystem such as RSA or ElGamal may be used.

The communication device 204 may also perform user authentication to determine whether the user of the user terminal 10 is an authorized user or an unauthorized user. In this case, the evaluation device 20 performs the evaluation on the degree of rust when the user is an authorized user, and need not perform the evaluation when the user is an unauthorized user. For the user authentication, for example, a pre-registered user ID (identifier) and password are used; a one-time pad (one-time password) may also be used.
Each function of the evaluation device 20 shown in FIG. 1 is realized by loading one or more predetermined computer programs into hardware such as the main storage device 202, operating each piece of hardware under the control of the one or more processors 201, and reading and writing data in the main storage device 202 and the auxiliary storage device 203.

Functionally, the evaluation device 20 includes a reception unit 21, an evaluation unit 22, and a transmission unit 23.
The reception unit 21 is a part for receiving the evaluation image from the user terminal 10 via the network NW. The reception unit 21 also receives the correction information from the user terminal 10. The reception unit 21 is realized by, for example, the communication device 204, and outputs the evaluation image and the correction information to the evaluation unit 22.

The evaluation unit 22 is a part for performing the evaluation on the degree of rust of the evaluation object based on the evaluation image. The evaluation unit 22 performs this evaluation using a neural network, which may be a convolutional neural network (CNN) or a recurrent neural network (RNN). The evaluation unit 22 outputs the evaluation result to the transmission unit 23.

The transmission unit 23 is a part for transmitting the evaluation result to the user terminal 10 via the network NW, and is realized by, for example, the communication device 204. Since the transmission unit 23 outputs (transmits) the evaluation result to the user terminal 10, it can also be regarded as an output unit.
Next, an evaluation method performed by the evaluation system 1 will be described with reference to FIG. 4 to FIG. 15(b). FIG. 4 is a sequence diagram showing the evaluation method performed by the evaluation system shown in FIG. 1. FIG. 5 is a flowchart showing the correction process of FIG. 4 in detail. FIGS. 6(a) to 6(f) are diagrams showing examples of markers. FIG. 7 is a diagram for explaining the distortion correction. FIG. 8 is a diagram for explaining extraction of the evaluation region using the marker. FIG. 9(a) is a diagram for explaining a method of calculating the actual size of the evaluation object, and FIG. 9(b) is a diagram showing the relationship between the focal length and the 35 mm equivalent focal length. FIGS. 10(a) and 10(b) are diagrams for explaining the color correction. FIG. 11 is a diagram showing an example of a neural network. FIG. 12 is a diagram showing an example of the evaluation result. FIG. 13 is a diagram showing a display example of the evaluation result. FIG. 14 is a diagram showing an example of the evaluation result of the rust-removal degree. FIGS. 15(a) and 15(b) are diagrams showing examples of correcting the evaluation result.
The series of processes of the evaluation method shown in FIG. 4 is started, for example, when the user of the user terminal 10 photographs the evaluation object with the imaging device 107. First, the image acquisition unit 11 acquires a captured image of the evaluation object (step S01). For example, the image acquisition unit 11 acquires the image of the evaluation object generated by the imaging device 107 as the captured image, and outputs the acquired captured image to the correction unit 13.
Before the captured image of the evaluation object is acquired, a marker MK may be attached to the evaluation object. The marker MK is used to correct the captured image in the image processing described later. The marker MK has a shape from which its orientation can be identified: it is asymmetric in at least one of the vertical direction and the width direction. Specifically, as shown in FIGS. 6(a) to 6(f), the marker MK includes a white region Rw and a black region Rb. To facilitate the image processing described later, the marker MK has a square edge F1, which is the edge of the region Rb. As shown in FIGS. 6(b) to 6(f), the marker MK may be surrounded by a frame F2, with a gap Rgap provided between the frame F2 and the region Rb.

The marker MK is drawn on a sheet-like member. For example, the user of the user terminal 10 attaches the sheet-like member including the marker MK directly to the evaluation object. The user may also attach the sheet-like member to the evaluation object using a UAV (Unmanned Aerial Vehicle), a telescopic rod, or the like.

The marker MK only needs to be composed of two or more regions with mutually different colors. For example, the color of the region Rw need not be white and may be gray or the like, and the color of the region Rb need not be black and may be a chromatic color. In the present embodiment, the marker MK shown in FIG. 6(a) is used.
Subsequently, the correction unit 13 corrects the captured image (step S02). As shown in FIG. 5, in the correction process of step S02, the correction unit 13 first performs distortion correction to correct distortion of the captured image (step S21). The captured image may be distorted compared with an image obtained by photographing the evaluation object from the front. For example, when the imaging device 107 is a depth camera, the distance between the imaging device 107 and each position on the evaluation object is obtained; in this case, the correction unit 13 performs the distortion correction by converting the captured image, based on those distances, into the image that would be obtained by photographing the evaluation object from the front. When the evaluation object is a structure having a curved surface, such as a pipe, the correction unit 13 may further perform curved-surface correction as part of the distortion correction.
The correction unit 13 may perform the distortion correction using the marker MK. The captured image of the evaluation object to which the marker MK is attached includes a marker region Rm, which is the image region of the marker MK. In this case, the correction unit 13 first extracts the marker region Rm from the captured image, for example by performing object detection processing or edge detection processing on the captured image. When the marker MK has a simple shape, edge detection processing may be used, because it can offer higher detection accuracy and a faster processing speed than object detection processing.

The correction unit 13 then confirms whether the extracted marker region Rm is an image of the marker MK. For example, the correction unit 13 performs histogram equalization on the marker region Rm and then binarizes it. The correction unit 13 compares the binarized marker region Rm with the marker MK and, when the two match, determines that the marker region Rm is an image of the marker MK; the vertex coordinates of the marker MK in the captured image are thereby obtained. When the two do not match, the correction unit 13 determines that the marker region Rm is not an image of the marker MK and extracts a marker region Rm again.
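As a non-limiting illustration, the following Python sketch (assuming OpenCV 4; the helper names find_marker_region and verify_marker are hypothetical) shows one way the extraction and confirmation just described could be realized: a quadrilateral candidate is found by edge detection, then equalized, binarized, and compared with the known marker pattern.

    # Sketch: marker-region extraction and verification (assumed OpenCV 4 API).
    import cv2
    import numpy as np

    def find_marker_region(image_bgr):
        """Return the 4 vertex coordinates of a quadrilateral marker candidate."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)                   # edge detection
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
            approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
            if len(approx) == 4:                           # square edge F1
                return approx.reshape(4, 2)
        return None

    def verify_marker(candidate_gray, reference_pattern):
        """Equalize, binarize, and compare with the known marker pattern
        (reference_pattern: a 0/255 array of the same size)."""
        eq = cv2.equalizeHist(candidate_gray)
        _, binary = cv2.threshold(eq, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return np.mean(binary == reference_pattern) > 0.95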
The correction unit 13 then calculates the orientation of the marker MK in the captured image using the marker region Rm; since the marker MK is asymmetric in at least one of the vertical direction and the width direction, its orientation in the captured image can be calculated. Then, as shown in FIG. 7, the correction unit 13 converts the captured image into the image that would be obtained by photographing the evaluation object from the front, by applying a projective transformation that restores the original shape of the marker MK from its vertex coordinates and orientation in the captured image. Specifically, the correction unit 13 takes the vertex Pm1 as the origin, the direction from the vertex Pm1 toward the vertex Pm2 as the X1-axis direction, and the direction from the vertex Pm1 toward the vertex Pm4 as the Y1-axis direction, and restores the shape of the marker MK by converting the X1-Y1 coordinate system into the X-Y orthogonal coordinate system. The distortion correction is performed in this way.
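The restoration of the marker shape by projective transformation can be sketched as follows, again assuming OpenCV; the output side length is an arbitrary choice of this sketch. The four detected vertices Pm1 to Pm4 are mapped onto the corners of a square, which rectifies the whole image as if it had been shot from the front.

    # Sketch: projective (perspective) transformation restoring the marker
    # to a square. marker_pts holds the detected vertices Pm1..Pm4 in order;
    # side_px is an arbitrary output side length.
    import cv2
    import numpy as np

    def rectify(image_bgr, marker_pts, side_px=200):
        src = np.float32(marker_pts)                  # Pm1, Pm2, Pm3, Pm4
        dst = np.float32([[0, 0], [side_px, 0],
                          [side_px, side_px], [0, side_px]])
        h_mat = cv2.getPerspectiveTransform(src, dst) # 3x3 homography
        h, w = image_bgr.shape[:2]
        return cv2.warpPerspective(image_bgr, h_mat, (w, h))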
Subsequently, the correction unit 13 extracts the evaluation region Re from the captured image (step S22). The evaluation of the rust degree and the rust-removal degree needs to be performed on a range (evaluation range) that occupies a predetermined area of the surface of the evaluation object. The correction unit 13 therefore determines this evaluation range in the captured image and extracts it as the evaluation region Re. That is, the evaluation region Re is the image region corresponding to the evaluation range in the captured image. The size of the evaluation region Re in the captured image varies with the distance D1.

The correction unit 13 extracts the evaluation region Re from the captured image based on the marker region Rm. Suppose, for example, that the evaluation range is the size obtained by extending each side of the marker MK by a predetermined factor. In this case, as shown in FIG. 8, the correction unit 13 calculates the length of each side of the marker region Rm from the coordinates of its vertices Pm1 to Pm4, and specifies the size of the evaluation range in the captured image by extending each side of the marker region Rm by the predetermined factor (here, five times). The correction unit 13 extracts a region having the specified size from the captured image as the evaluation region Re. The evaluation region Re may or may not include the marker region Rm.
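If the marker has already been rectified to a square as above, cropping the evaluation region reduces to simple array slicing. The following sketch is hypothetical in all of its names and assumes the five-fold extension used in the example above.

    # Sketch: crop the evaluation region Re from a rectified image, assuming
    # the evaluation range is the marker extended by a fixed factor (here 5x).
    def crop_evaluation_region(image, marker_top_left, marker_side_px, factor=5):
        x0, y0 = marker_top_left            # marker vertex Pm1 after rectification
        side = int(marker_side_px * factor) # side of the evaluation range in pixels
        return image[y0:y0 + side, x0:x0 + side]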
The correction unit 13 may also extract the evaluation region Re from the captured image based on the distance D1, without using the marker MK. In this case, the distance acquisition unit 12 acquires the distance D1. Then, as shown in FIG. 9(a), the correction unit 13 calculates the actual width W using expression (1), based on the distance D1, the focal length f1, and the width w1 of the imaging element Cs of the imaging device 107. The width W is the horizontal length of the evaluation object as captured by the imaging device 107; when the entire evaluation object is not contained in the captured image, the width W is the horizontal length of the evaluation object within the imaging range. The focal length f1 is the distance from the imaging element Cs to the lens Cl. The width w1 of the imaging element Cs of the imaging device 107 is the horizontal size (length) of the imaging element Cs and is determined in advance.

The correction unit 13 acquires the actual focal length f1 and the width w1 from EXIF (Exchangeable Image File Format) information attached to the captured image. When f1 and w1 cannot be obtained from the EXIF information, the correction unit 13 acquires the name of the imaging device 107 and searches a database with that name to obtain f1 and w1. The user may also input f1 and w1 into the user terminal 10 with the input device 105.

In some cases, the 35 mm equivalent width w2 is available instead of the width w1 of the imaging element Cs. In this case, the correction unit 13 calculates the width w1 based on the focal length f1, the 35 mm equivalent focal length f2, and the width w2. As shown in FIG. 9(b), the imaging angle of view θ is equal to the 35 mm equivalent angle of view, so relational expression (2) holds. Rearranging expression (2) yields expression (3), and the correction unit 13 calculates the width w1 using expression (3).

The correction unit 13 may also calculate the width W directly from the focal length f2 and the width w2, using expression (4).
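Expressions (1) to (4) are referenced but not reproduced in this excerpt. Under the usual pinhole-camera model they follow from similar triangles (W / D1 = w1 / f1) and from the equality of the angles of view (w1 / f1 = w2 / f2), which is what the following sketch assumes; the 36.0 mm default for w2 (the 35 mm-format frame width) is likewise an assumption.

    # Sketch of expressions (1)-(4) under an assumed pinhole-camera model.
    def actual_width(d1_mm, f1_mm, w1_mm):
        """Expression (1): real-world width W of the imaged portion."""
        return d1_mm * w1_mm / f1_mm

    def sensor_width_from_35mm(f1_mm, f2_mm, w2_mm=36.0):
        """Expression (3): sensor width w1 from 35 mm equivalent values."""
        return w2_mm * f1_mm / f2_mm

    def actual_width_from_35mm(d1_mm, f2_mm, w2_mm=36.0):
        """Expression (4): W computed directly from 35 mm equivalent values."""
        return d1_mm * w2_mm / f2_mm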
The correction unit 13 acquires the focal length f2 from the EXIF information. When f2 cannot be obtained from the EXIF information, the correction unit 13 acquires the name of the imaging device 107 and searches a database with that name to obtain f2. The user may also input f2 into the user terminal 10 with the input device 105.

As shown in expression (5), the correction unit 13 calculates the width wp per pixel by dividing the width W by the number of pixels Npw in the width direction (horizontal direction) of the captured image. The correction unit 13 acquires the number of pixels Npw from the EXIF information, or may acquire it by counting the pixels in the width direction of the captured image.

The correction unit 13 calculates the height H of the imaged portion in the same manner as the width W. The height H is the vertical length of the evaluation object as captured by the imaging device 107; when the entire evaluation object is not contained in the captured image, the height H is the vertical length of the evaluation object within the imaging range. As shown in expression (6), the correction unit 13 calculates the height hp per pixel by dividing the height H by the number of pixels Nph in the height direction (vertical direction) of the captured image. The correction unit 13 acquires the number of pixels Nph from the EXIF information, or may acquire it by counting the pixels in the height direction of the captured image.

The correction unit 13 determines the evaluation region Re in the captured image using the per-pixel width wp and height hp, and extracts it. For example, the correction unit 13 calculates the numbers of pixels that the evaluation range occupies in the width and height directions of the captured image: it divides the width of the evaluation range by wp to obtain the number of pixels in the width direction, and likewise divides the height of the evaluation range by hp to obtain the number of pixels in the height direction. The correction unit 13 then extracts a region having those numbers of pixels in the width and height directions from the captured image as the evaluation region Re.
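The per-pixel sizes of expressions (5) and (6) and the resulting pixel counts can be sketched as follows (the helper name is hypothetical; units are arbitrary as long as they are consistent).

    # Sketch of expressions (5)-(6) and the pixel counts of the evaluation range.
    def evaluation_range_pixels(width_W, height_H, npw, nph, eval_w, eval_h):
        wp = width_W / npw     # expression (5): real-world width per pixel
        hp = height_H / nph    # expression (6): real-world height per pixel
        return round(eval_w / wp), round(eval_h / hp)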
Subsequently, the correction unit 13 corrects the size of the evaluation region Re (step S23). The size of the evaluation region Re can vary with the distance D1, so in the size correction the correction unit 13 scales the evaluation region Re to match a predetermined evaluation size. The evaluation size is the size of the reference images (teacher data) used for training the neural network NN. In the scaling process, the correction unit 13 first compares the size of the evaluation region Re with the evaluation size and decides whether to perform enlargement or reduction.

The correction unit 13 enlarges the evaluation region Re when it is smaller than the evaluation size and reduces it when it is larger; that is, the correction unit 13 matches the size of the evaluation image to the evaluation size by enlarging or reducing the evaluation region Re. For the enlargement, for example, bilinear interpolation is used; for the reduction, for example, the area-averaging (average pixel) method is used. Other scaling algorithms may be used, but it is desirable that the state of the image be preserved through the scaling.
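A minimal sketch of this scaling step, assuming OpenCV, chooses bilinear interpolation for enlargement and area averaging for reduction, as described above:

    # Sketch: scale the evaluation region to the evaluation size (OpenCV).
    import cv2

    def resize_to_evaluation_size(region, eval_w, eval_h):
        h, w = region.shape[:2]
        interp = cv2.INTER_LINEAR if (w < eval_w or h < eval_h) else cv2.INTER_AREA
        return cv2.resize(region, (eval_w, eval_h), interpolation=interp)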
Subsequently, the correction unit 13 performs color correction of the evaluation region Re (step S24). Even for the same evaluation object, the brightness of the image may change depending on the shooting environment, and if the color of the light source used for shooting differs, the colors of the image may also differ. The color correction is performed to reduce the influence of the shooting environment. The correction unit 13 corrects the color of the evaluation region Re based on the color of a reference region included in the captured image. The reference region is the image region of a reference body bearing a specific color.

As shown in FIG. 10(a), the region Rw of the marker MK can be used as the reference body. In this case, the color of the region Rw of the marker MK is measured in advance with a color meter or the like, and a reference value indicating the measured color is stored in a memory (not shown). RGB values, HSV values, or the like are used as values indicating the color. As shown in FIG. 10(b), the correction unit 13 acquires the color value of the region Rw within the marker region Rm included in the captured image (evaluation region Re), compares the acquired value with the reference value, and performs the color correction so that the difference between them becomes small (for example, zero). Gamma correction or the like is used for the color correction; alternatively, the difference may be added to each pixel value (offset processing).
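One possible realization of this reference-based correction is gamma matching, sketched below: a per-channel gamma is chosen so that the observed mean color of the white region Rw maps onto its stored reference value. The function name and the [0, 1] value convention are assumptions of this sketch.

    # Sketch: gamma-based color correction against the reference region Rw.
    import numpy as np

    def gamma_color_correct(image01, observed_rw_mean, reference_rw_value):
        # image01: H x W x 3 array scaled to [0, 1]; the two color arguments
        # are per-channel values in (0, 1). Solving obs ** gamma == ref gives:
        gamma = np.log(reference_rw_value) / np.log(observed_rw_mean)
        return np.clip(image01 ** gamma, 0.0, 1.0)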
The marker MK need not be used as the reference body. In that case, a sample whose color has been measured in advance (for example, a gray board) may be photographed together with the evaluation object as the reference body, and the color correction of the evaluation region Re may be performed in the same manner as when the marker MK is used. The correction unit 13 may also perform the color correction based on the gray-world hypothesis.

Subsequently, the correction unit 13 removes specular reflection from the evaluation region Re (step S25). Specular reflection may occur when the evaluation object has a metallic luster, and may also be caused by the state of the coating film of the evaluation object. In the image, a portion that caused specular reflection usually appears as strong white; that is, such a portion produces blown-out highlights in the image. After the color correction, a portion that caused specular reflection can be detected as a white portion, so the correction unit 13 removes the specular reflection using the color-corrected image (evaluation region Re).

The correction unit 13 therefore identifies the specular reflection portion based on the pixel values of the pixels included in the evaluation region Re. For example, the correction unit 13 determines that a pixel is part of the specular reflection portion when all of its RGB pixel values are larger than a predetermined threshold. Alternatively, the correction unit 13 may convert the pixel values to HSV and identify the specular reflection portion by applying similar threshold processing to the brightness (V), or to both the brightness (V) and the saturation (S).

The correction unit 13 then removes the specular reflection from the identified portion and restores the original image information (pixel values). For example, the correction unit 13 automatically interpolates (restores) the image information of the specular reflection portion from the image information in its vicinity by the Navier-Stokes-based method, the fast marching method of Alexandru Telea, or the like. The correction unit 13 may instead restore the image information of the specular reflection portion by learning various rust images in advance through machine learning; for such machine learning, a GAN (Generative Adversarial Network) is used, for example. The correction unit 13 may restore the image information over a region obtained by expanding the outer edge of the specular reflection portion (that is, a region that contains the specular reflection portion and is larger than it).
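Both inpainting methods named above are exposed by OpenCV, so the detection-and-restoration step can be sketched compactly (the threshold and dilation width are arbitrary choices of this sketch):

    # Sketch: specular-highlight removal. cv2.INPAINT_NS corresponds to the
    # Navier-Stokes-based method and cv2.INPAINT_TELEA to Telea's fast
    # marching method named in the text.
    import cv2
    import numpy as np

    def remove_specular(image_bgr, rgb_threshold=240, dilate_px=3):
        # Mask pixels whose B, G, and R values all exceed the threshold.
        mask = np.all(image_bgr > rgb_threshold, axis=2).astype(np.uint8) * 255
        # Expand the outer edge of the specular portion, as the text allows.
        mask = cv2.dilate(mask, np.ones((dilate_px, dilate_px), np.uint8))
        return cv2.inpaint(image_bgr, mask, 3, cv2.INPAINT_TELEA)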
Subsequently, the correction unit 13 removes noise from the evaluation region Re (step S26), using, for example, a denoising filter (denoising function) such as a Gaussian filter or a low-pass filter.

Subsequently, the correction unit 13 performs blur correction on the evaluation region Re (step S27). When the user shoots with the user terminal 10, blur such as camera shake may occur. The correction unit 13 performs the blur correction using, for example, a Wiener filter or a blind deconvolution algorithm.
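Steps S26 and S27 together can be sketched as follows, assuming OpenCV for the Gaussian denoising and scikit-image for the Wiener deconvolution; the point-spread function below is a guessed uniform blur kernel, since the true blur is unknown in practice.

    # Sketch of steps S26 (denoise) and S27 (deblur).
    import cv2
    import numpy as np
    from skimage.restoration import wiener

    def denoise_and_deblur(region_gray_u8):
        denoised = cv2.GaussianBlur(region_gray_u8, (5, 5), 0)      # step S26
        psf = np.ones((5, 5)) / 25.0                                # assumed PSF
        deblurred = wiener(denoised.astype(float) / 255.0, psf, balance=0.1)
        return np.clip(deblurred * 255.0, 0, 255).astype(np.uint8)  # step S27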
The correction process of FIG. 5 is an example, and the correction processes performed by the correction unit 13 are not limited to it. Some or all of steps S21 and S23 to S27 may be omitted, and steps S21 to S27 may be performed in any order. As described above, when the specular reflection removal is performed after the color correction, the specular reflection portion appears as strong white in the image, which improves the accuracy of identifying the specular reflection portion.

Further, as shown in FIG. 7, the correction unit 13 may regard the marker region Rm as being composed of a plurality of blocks arranged in a grid and obtain the vertex coordinates of each block from the coordinates of the four vertices of the marker region Rm (marker MK). This allows the correction unit 13 to divide the marker region Rm into the blocks and handle them individually. For example, the correction unit 13 may use the blocks to determine whether the marker region Rm is an image of the marker MK, or may use one of the blocks as the reference region for the color correction. Furthermore, the correction unit 13 may calculate the degree of distortion of the captured image from the coordinates of the blocks and calibrate the imaging device 107.
Subsequently, the correction unit 13 outputs the captured image corrected by the correction process of step S02 to the transmission unit 14 as the evaluation image, and the transmission unit 14 transmits the evaluation image to the evaluation device 20 via the network NW (step S03). At this time, the transmission unit 14 transmits the evaluation image together with a terminal ID that uniquely identifies the user terminal 10; an IP (Internet Protocol) address, for example, may be used as the terminal ID. The reception unit 21 receives the evaluation image transmitted from the user terminal 10 and outputs it to the evaluation unit 22. When the evaluation image lacks sufficient sharpness, the correction unit 13 need not output it to the transmission unit 14. As described above, the transmission unit 14 may also encrypt the evaluation image and transmit the encrypted image to the evaluation device 20; in this case, the reception unit 21 receives the encrypted evaluation image from the user terminal 10, decrypts it, and outputs the evaluation image to the evaluation unit 22.
Subsequently, the evaluation unit 22 performs the evaluation on the degree of rust of the evaluation object based on the evaluation image (step S04). In this example, the evaluation unit 22 evaluates the rust degree and the rust-removal degree of the evaluation object using the neural network NN shown in FIG. 11. Upon receiving the evaluation image, the evaluation unit 22 assigns to it an image ID that uniquely identifies the evaluation image.

The neural network NN takes the evaluation image as input and outputs a precision for each category. As the categories, rust-degree grades, rust-removal-degree grades, and combinations thereof can be used. As the rust-degree grades, for example, grades A to D defined in JIS (Japanese Industrial Standards) Z0313 are used; the degree of rust on the evaluation object increases in the order of grades A, B, C, and D. As the rust-removal-degree grades, for example, grades St2, St3, Sa1, Sa2, Sa2.5, and Sa3 defined in JIS Z0313 are used. JIS Z0313 defines no grade corresponding to the state before any rust-removal treatment, so in the following description this state is referred to as grade RAW for convenience. Other grades may be used as the rust-degree and rust-removal-degree grades. For example, grades defined in standards such as ISO (International Organization for Standardization), SIS (Swedish Standard), SSPC (Steel Structures Painting Council), SPSS (Standard for the Preparation of Steel Surface prior to Painting), and NACE (the National Association of Corrosion Engineers) may be used as rust-degree grades, and grades defined in standards such as ISO, SSPC, SPSS, and NACE may be used as rust-removal-degree grades.
As shown in FIG. 12, in the present embodiment, combinations of a rust-degree grade and a rust-removal-degree grade are used as the categories. The precision represents the probability that the degree of rust of the evaluation object belongs to that category (grade); the larger the precision, the more likely it is that the degree of rust of the evaluation object belongs to that category. Grade A + grade St2, grade A + grade St3, grade A + grade Sa1, and grade A + grade Sa2 exist in no standard, but for convenience these combinations are also included in the description here; they may be set by the user or omitted.

The evaluation unit 22 may separate the evaluation image into one or more channels and use the image information (pixel values) of each channel as the input to the neural network NN. For example, the evaluation unit 22 separates the evaluation image into the components of a color space. When the RGB color space is used, the evaluation unit 22 separates the evaluation image into R-component, G-component, and B-component pixel values; when the HSV color space is used, into H-component, S-component, and V-component pixel values. The evaluation unit 22 may also convert the evaluation image to grayscale and use the converted image as the input to the neural network NN.
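The channel separation can be sketched as follows, assuming OpenCV (which stores color images in BGR order):

    # Sketch: separate the evaluation image into network input channels.
    import cv2

    def to_channels(eval_image_bgr, color_space="RGB"):
        if color_space == "RGB":
            b, g, r = cv2.split(eval_image_bgr)
            return [r, g, b]
        if color_space == "HSV":
            hsv = cv2.cvtColor(eval_image_bgr, cv2.COLOR_BGR2HSV)
            return list(cv2.split(hsv))
        return [cv2.cvtColor(eval_image_bgr, cv2.COLOR_BGR2GRAY)]  # grayscale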
As shown in FIG. 11, the neural network NN has an input layer L1, an intermediate layer L2, and an output layer L3. The input layer L1 is located at the entrance of the neural network NN, and M input values x_i (i is an integer from 1 to M) are input to it. The input layer L1 has a plurality of neurons 41, one for each input value x_i, so the number of neurons 41 equals the total number M of input values x_i; that is, it equals the total number of pixels contained in the channels of the evaluation image. The i-th neuron 41 outputs the input value x_i to each neuron 421 of the first intermediate layer L21 of the intermediate layer L2. The input layer L1 also includes a node 41b, which outputs a bias value b_j (j is an integer from 1 to M1) to each neuron 421.

The intermediate layer L2 is located between the input layer L1 and the output layer L3; it is also called a hidden layer because it is hidden from the outside of the neural network NN. The intermediate layer L2 includes one or more layers; in the example shown in FIG. 11, it includes a first intermediate layer L21 and a second intermediate layer L22. The first intermediate layer L21 has M1 neurons 421. As shown in expression (7), the j-th neuron 421 obtains a calculated value z_j by adding the bias value b_j to the sum of the input values x_i weighted by the weight coefficients w_ij. When the neural network NN is a convolutional neural network, the neuron 421 performs, for example, convolution, computation with an activation function, and pooling in that order; in that case, a ReLU function, for example, is used as the activation function.
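Expression (7), as described in the text, computes z_j as the bias b_j plus the sum of the inputs x_i weighted by w_ij. Vectorized with NumPy and followed by the ReLU activation mentioned above, this is:

    # Sketch of expression (7) for the whole first intermediate layer L21.
    import numpy as np

    def hidden_layer(x, weights, bias):
        # x: (M,) inputs; weights: (M1, M) coefficients w_ij; bias: (M1,) b_j
        z = weights @ x + bias     # expression (7) for all j at once
        return np.maximum(z, 0.0)  # ReLU activation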
The j-th neuron 421 then outputs the calculated value z_j to each neuron 422 of the second intermediate layer L22. The first intermediate layer L21 includes a node 421b, which outputs a bias value to each neuron 422. Thereafter, each neuron performs the same calculation as the neuron 421 and outputs its calculated value to each neuron of the following layer. The neurons of the final stage of the intermediate layer L2 (here, the neurons 422) output their calculated values to each neuron 43 of the output layer L3.

The output layer L3 is located at the exit of the neural network NN and outputs output values y_k (k is an integer from 1 to N). Each output value y_k is assigned to a category and corresponds to the precision of that category. The output layer L3 has a plurality of neurons 43, one for each output value y_k, so the number of neurons 43 equals the total number N of output values y_k; that is, it equals the number of categories indicating the degree of rust. Each neuron 43 performs the same calculation as the neuron 421 and obtains the output value y_k by evaluating an activation function with that calculation result as its argument. Examples of the activation function include the softmax function, the ReLU function, the hyperbolic (tanh) function, the sigmoid function, the identity function, and the step function. In the present embodiment, the softmax function is used, so the output values y_k are normalized such that the N output values sum to 1; multiplying an output value y_k by 100 therefore gives the precision in percent.
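A minimal sketch of this softmax output stage, with the max-subtraction added purely for numerical stability:

    # Sketch: softmax normalization of the N output values y_k; the values
    # sum to 1, and multiplying by 100 yields each category's precision in %.
    import numpy as np

    def softmax_precisions(logits):
        e = np.exp(logits - np.max(logits))  # subtract max for stability
        y = e / np.sum(e)                    # the y_k sum to 1
        return y * 100.0                     # precision (%) per category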
Subsequently, the evaluation unit 22 outputs, for example, the N output values y_k together with the image ID of the evaluation image to the transmission unit 23 as the evaluation result. The arrangement of the N output values y_k is determined in advance, and each output value y_k is associated with one of the N categories. Alternatively, the evaluation unit 22 may take the largest of the N output values, together with the category name or index corresponding to it (corresponding to the "number" shown in FIG. 12), as the evaluation result. Here, the array of output values corresponding to the precisions shown in FIG. 12 is output to the transmission unit 23 as the evaluation result; in this case, the user terminal 10 can decide how to present the result to the user.

The transmission unit 23 then transmits the evaluation result to the user terminal 10 via the network NW (step S05). At this time, the transmission unit 23 identifies the destination user terminal 10 based on the terminal ID transmitted from the user terminal 10 together with the evaluation image, and transmits the evaluation result to that user terminal 10. The reception unit 15 receives the evaluation result transmitted from the evaluation device 20 and outputs it to the output unit 16. As described above, the transmission unit 23 may encrypt the evaluation result and transmit the encrypted result to the user terminal 10; in this case, the reception unit 15 receives the encrypted evaluation result from the evaluation device 20, decrypts it, and outputs the evaluation result to the output unit 16.
Subsequently, the output unit 16 generates output information for notifying the user of the evaluation result and outputs the evaluation result to the user based on the output information (step S06).

The display example of FIG. 13 uses a scatter diagram. The vertical axis of the scatter diagram shows the grade before the rust-removal treatment (the rust-degree grade), and the horizontal axis shows the grade after the rust-removal treatment (the rust-removal-degree grade). A color bar indicates the color corresponding to each precision. In the scatter diagram, the precision of a category is indicated by a point plotted at the intersection of its grade before the treatment and its grade after the treatment. The color and size of each point are set according to the precision of the corresponding category: the larger the precision, the larger the point, and the color of the point is set to the color corresponding to that precision.
点P1は、グレードBとグレードRAWとの交点に位置し、15%に対応する色を有している。つまり、点P1は、除錆処理前のグレードがグレードBであり、除錆処理後のグレードがグレードRAW(除錆処理を施していない状態)であるカテゴリの適合率が15%であることを示している。同様に、点P2は、除錆処理前のグレードがグレードBであり、除錆処理後のグレードがグレードSa1であるカテゴリの適合率が5%であることを示している。同様に、点P3は、除錆処理前のグレードがグレードDであり、除錆処理後のグレードがグレードSa1であるカテゴリの適合率が85%であることを示している。
Point P1 is located at the intersection of grade B and grade RAW and has a color corresponding to 15%. That is, the point P1 indicates that the grade of the grade before the rust removal treatment is grade B, and the conformity rate of the category where the grade after the rust removal treatment is grade RAW (a state where the rust removal treatment is not performed) is 15% Show. Similarly, the point P2 indicates that the grade of the grade before the rust removal treatment is grade B and the conformity rate of the category where the grade after the rust removal treatment is grade Sa1 is 5%. Similarly, the point P3 indicates that the grade of the grade before the rust removal treatment is grade D and the conformity rate of the category where the grade after the rust removal treatment is grade Sa1 is 85%.
この表示例では、最も大きい適合率を有するカテゴリの除錆処理後のグレードがグレードRAWである場合には、そのカテゴリの除錆処理前のグレードがさび度として用いられる。一方、最も大きい適合率を有するカテゴリの除錆処理後のグレードがグレードRAW以外である場合には、そのカテゴリの除錆処理後のグレードが除錆度として用いられる。
In this display example, when the grade after the rust removal treatment of the category having the highest precision is grade RAW, the grade before the rust removal treatment of that category is used as the rust degree. On the other hand, when the grade after the rust removal treatment of the category having the highest matching rate is other than the grade RAW, the grade after the rust removal treatment of that category is used as the degree of rust removal.
In the display example of FIG. 13, categories with the same post-treatment grade are still separated by their pre-treatment grade. When evaluating the degree of rust removal of the evaluation object, however, what matters is how much rust has been removed. For this reason, the output unit 16 may aggregate the precision values per post-treatment grade and display the precision for each grade after the rust removal treatment. For example, as shown in FIG. 14, the precision of grade RAW is the sum of the precisions of the categories grade A + grade RAW, grade B + grade RAW, grade C + grade RAW, and grade D + grade RAW, i.e., 15% (= 0 + 15 + 0 + 0). The precisions of grades St2, St3, Sa2, Sa2.5, and Sa3 are all 0%, and the precision of grade Sa1 is 85% (= 0 + 5 + 0 + 80). The output unit 16 may also quantify the evaluation result by multiplying the precision of each post-treatment grade by a classification coefficient for that grade and summing the products, as in the sketch below.
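A minimal sketch of this aggregation and quantification, using the numbers of the FIG. 14 example; the classification coefficients are assumed values, since the disclosure does not specify them.

```python
# Sum precision per post-treatment grade, then form one numeric score.
category_precision = {   # (pre-grade, post-grade) -> precision in %
    ("B", "RAW"): 15, ("B", "Sa1"): 5, ("D", "Sa1"): 80,
}

per_post_grade = {}
for (_pre, post), p in category_precision.items():
    per_post_grade[post] = per_post_grade.get(post, 0) + p
# -> {"RAW": 15, "Sa1": 85}, matching the FIG. 14 example

coeff = {"RAW": 0.0, "St2": 0.4, "St3": 0.6, "Sa1": 0.5,
         "Sa2": 0.8, "Sa2.5": 0.9, "Sa3": 1.0}   # hypothetical coefficients
score = sum(p * coeff[g] for g, p in per_post_grade.items())
print(per_post_grade, score)
```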
The output unit 16 may also use the evaluation result to notify the user whether the rust removal treatment passes or fails.
Similarly, when evaluating the rust degree of the evaluation object, the output unit 16 may aggregate the precision values per pre-treatment grade and display the precision for each grade before the rust removal treatment.
The output unit 16 may display the evaluation result as text. For example, the output unit 16 displays the grade name of the category with the highest precision together with that precision, e.g., "Result: [before rust removal] grade D → [after rust removal] grade Sa1 (80%)". The output unit 16 may also display the grade names of all categories and their precisions as text.
The output unit 16 may output the evaluation result by sound or by vibration. The form of output used by the output unit 16 may be set by the user.
Subsequently, the correction information acquisition unit 17 determines whether the user has performed an operation to correct the evaluation result. For example, after confirming the evaluation result output by the output unit 16, the user operates the input device 105 to display a screen for correcting the evaluation result.
For example, as shown in FIG. 15(a), the user operates the input device 105 and designates a category on the scatter plot using the pointer MP. That is, the user determines the category by visually inspecting the evaluation object, and a point Pu is plotted at the intersection of the pre-treatment grade and the post-treatment grade corresponding to the category determined by the user.
As shown in FIG. 15(b), the output unit 16 may display the grade name of each category together with a radio button. In this case, the user operates the input device 105 and checks a radio button with the pointer MP, thereby selecting the category determined by the user. An object such as a drop-down list or a slider may also be used for selecting the category.
When the correction information acquisition unit 17 determines that no correction operation has been performed, the series of processes of the evaluation method by the evaluation system 1 ends. When it determines that a correction operation has been performed with the input device 105, the correction information acquisition unit 17 acquires information indicating the corrected category, together with the image ID of the evaluation image on which the correction operation was performed, as correction information (step S07).
The correction information acquisition unit 17 then outputs the correction information to the transmission unit 14, and the transmission unit 14 transmits the correction information to the evaluation device 20 via the network NW (step S08). The reception unit 21 receives the correction information transmitted from the user terminal 10 and outputs it to the evaluation unit 22. As described above, the transmission unit 14 may encrypt the correction information and transmit the encrypted correction information to the evaluation device 20. In this case, the reception unit 21 receives the encrypted correction information from the user terminal 10, decrypts it, and outputs the correction information to the evaluation unit 22.
Subsequently, the evaluation unit 22 performs learning based on the correction information (step S09). Specifically, the evaluation unit 22 uses pairs of the corrected category and the evaluation image as teacher data. The evaluation unit 22 may train the neural network NN by online learning, mini-batch learning, or batch learning. Online learning performs an update each time a new item of teacher data is acquired. Mini-batch learning treats a fixed amount of teacher data as one unit and performs an update per unit. Batch learning performs an update using all of the teacher data. An algorithm such as backpropagation is used for the learning. Here, training the neural network NN means updating the weighting coefficients and bias values used in the neural network NN to more suitable values.
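As a rough illustration, the three learning schedules named above differ only in how the teacher data is grouped per update; the sketch below assumes a hypothetical train_step() that runs one backpropagation update on a batch of (evaluation image, corrected category) pairs.

```python
# Sketch of the three schedules; train_step and the data layout are assumptions.
def fit(teacher_data, train_step, mode="mini-batch", batch_size=32):
    if mode == "online":          # one update per newly acquired sample
        for sample in teacher_data:
            train_step([sample])
    elif mode == "mini-batch":    # one update per fixed-size unit of samples
        for i in range(0, len(teacher_data), batch_size):
            train_step(teacher_data[i:i + batch_size])
    elif mode == "batch":         # one update over all teacher data
        train_step(teacher_data)
```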
With the above, the series of processes of the evaluation method by the evaluation system 1 is completed.
Each functional unit in the user terminal 10 and the evaluation device 20 is realized by executing, on the computers constituting the user terminal 10 and the evaluation device 20, program modules for realizing the respective functions. An evaluation program including these program modules is provided, for example, on a computer-readable recording medium such as a ROM or a semiconductor memory. The evaluation program may also be provided as a data signal via a network.
In the evaluation system 1, evaluation device 20, evaluation method, evaluation program, and recording medium described above, the evaluation region Re is extracted from the captured image of the evaluation object, and an evaluation image is generated based on the evaluation region Re. The evaluation of the degree of rust is then performed based on the evaluation image, and the evaluation result is output. The evaluation region Re is an image of a range having a predetermined area on the surface of the evaluation object. Therefore, regardless of the conditions under which the captured image was taken (the distance between the evaluation object and the imaging device 107, the magnification, and so on), the evaluation targets a range of the same area on the surface of the evaluation object and assesses the rust occurring within that range. As a result, the evaluation accuracy regarding the degree of rust can be improved. Specifically, the rust degree and the degree of rust removal are evaluated based on the evaluation image, so the accuracy of both evaluations can be improved. Consequently, a general-purpose device can be used as the imaging device 107, and the distance D1 between the imaging device 107 and the evaluation object can be changed freely.
The marker MK has a shape from which the orientation of the marker MK can be determined. Therefore, a range having the predetermined area on the surface of the evaluation object can be determined in the captured image with the orientation and size of the marker region Rm included in the captured image as a reference, and the determined range can be extracted as the evaluation region Re. This makes it possible to extract the evaluation region Re without using any other information, so the evaluation system 1 can be made compact.
Since a range having the predetermined area on the surface of the evaluation object can be specified based on the distance D1 between the imaging device 107 that generated the captured image and the evaluation object, the evaluation region Re can be extracted more accurately.
Even for the same evaluation object, the color tone of the captured image may vary with the color tone of the light source used during shooting, and the brightness of the captured image may vary with the amount of light. For this reason, the color of the evaluation region Re is corrected based on the color of a reference region included in the captured image (for example, the region Rw within the marker region Rm). When the color of the region Rw in the marker region Rm differs from the color of the region Rw on the marker MK itself (white), the colors in the captured image are considered to be affected by the lighting. The color of the evaluation region Re is therefore corrected so that the color of the region Rw in the marker region Rm matches the color of the region Rw on the marker MK. This reduces the influence of the lighting and, as a result, further improves the evaluation accuracy of the rust degree and the degree of rust removal.
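A minimal sketch of one such correction, assuming a simple per-channel gain that maps the imaged white region Rw back to white; the function and array names are assumptions.

```python
import numpy as np

def correct_colors(evaluation_region: np.ndarray, rw_pixels: np.ndarray) -> np.ndarray:
    """evaluation_region, rw_pixels: uint8 RGB arrays of shape (H, W, 3).

    Scales each channel so the mean color of the imaged Rw becomes white,
    then applies the same gains to the evaluation region.
    """
    gains = 255.0 / rw_pixels.reshape(-1, 3).mean(axis=0)  # per-channel gain
    corrected = evaluation_region.astype(np.float32) * gains
    return np.clip(corrected, 0, 255).astype(np.uint8)
```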
When the evaluation object is irradiated with strong light, specular reflection may occur, and if the evaluation object is imaged in that state, whiteout (blown-out highlights) may appear in the captured image. Color information is lost in the regions where whiteout occurs. By removing the specular reflection (whiteout) from the evaluation region Re, the color information can be restored, which further improves the evaluation accuracy of the rust degree and the degree of rust removal.
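One plausible realization of this removal, assuming OpenCV: blown-out pixels are masked by a brightness threshold (the value 250 is an assumption) and their color is restored from the surrounding pixels by inpainting.

```python
import cv2
import numpy as np

def remove_specular(region_bgr: np.ndarray, threshold: int = 250) -> np.ndarray:
    """region_bgr: uint8 BGR image of the evaluation region."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    mask = (gray >= threshold).astype(np.uint8) * 255   # blown-out pixels
    # Fill the masked pixels from their neighborhood (radius 3).
    return cv2.inpaint(region_bgr, mask, 3, cv2.INPAINT_TELEA)
```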
The rust degree and the degree of rust removal are evaluated using the neural network NN. Rust patterns before and after the rust removal treatment are irregular in shape, and general object detection has difficulty identifying the position and state of irregularly shaped objects; pattern recognition is likewise unsuited to recognizing patterns that exist in endless variety. By contrast, training the neural network NN makes it possible to evaluate the rust degree and the degree of rust removal and to further improve the accuracy of those evaluations.
By enlarging or reducing the evaluation region Re, the size of the evaluation image is matched to the size of the reference images (teacher data) used for training the neural network NN. Evaluation by the neural network NN can therefore be performed properly.
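A one-function sketch of this size adjustment, assuming OpenCV and an assumed reference-image size of 224x224 pixels (the disclosure does not specify the size).

```python
import cv2
import numpy as np

def to_reference_size(evaluation_region: np.ndarray, size=(224, 224)) -> np.ndarray:
    # INTER_AREA gives cleaner results when shrinking; INTER_LINEAR when enlarging.
    interp = cv2.INTER_AREA if evaluation_region.shape[0] > size[1] else cv2.INTER_LINEAR
    return cv2.resize(evaluation_region, size, interpolation=interp)
```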
(Second Embodiment)
FIG. 16 is a configuration diagram schematically showing an evaluation system including an evaluation device according to the second embodiment. The evaluation system 1A shown in FIG. 16 differs from the evaluation system 1 mainly in that it includes a user terminal 10A instead of the user terminal 10 and an evaluation device 20A instead of the evaluation device 20.
The user terminal 10A differs from the user terminal 10 mainly in that it does not include the correction unit 13 and in that it transmits the captured image and the distance D1, instead of the evaluation image, to the evaluation device 20A. In the user terminal 10A, the image acquisition unit 11 outputs the captured image to the transmission unit 14, the distance acquisition unit 12 outputs the distance D1 to the transmission unit 14, and the transmission unit 14 transmits the captured image and the distance D1 to the evaluation device 20A.
The evaluation device 20A differs from the evaluation device 20 mainly in that it receives the captured image and the distance D1, instead of the evaluation image, from the user terminal 10A, and in that it further includes a correction unit 24. The reception unit 21 receives the captured image and the distance D1 from the user terminal 10A and outputs them to the correction unit 24. Since the reception unit 21 acquires the captured image from the user terminal 10A, it can be regarded as an image acquisition unit. The correction unit 24 has the same functions as the correction unit 13: it extracts the evaluation region from the captured image, generates the evaluation image based on the evaluation region, and outputs the evaluation image to the evaluation unit 22.
Next, an evaluation method performed by the evaluation system 1A will be described with reference to FIG. 17, which is a sequence diagram of the evaluation method performed by the evaluation system shown in FIG. 16. First, the image acquisition unit 11 acquires a captured image of the evaluation object (step S31). For example, as in step S01, the image acquisition unit 11 acquires the image of the evaluation object generated by the imaging device 107 as the captured image.
The image acquisition unit 11 then outputs the acquired captured image to the transmission unit 14, and the transmission unit 14 transmits the captured image to the evaluation device 20A via the network NW (step S32). At this time, the transmission unit 14 transmits the captured image together with a terminal ID that uniquely identifies the user terminal 10A. The reception unit 21 receives the captured image transmitted from the user terminal 10A and outputs it to the correction unit 24. As described above, the transmission unit 14 may encrypt the captured image and transmit the encrypted captured image to the evaluation device 20A. In this case, the reception unit 21 receives the encrypted captured image from the user terminal 10A, decrypts it, and outputs the captured image to the correction unit 24.
Subsequently, the correction unit 24 corrects the captured image (step S33). Since the processing in step S33 is the same as that in step S02, its detailed description is omitted. The correction unit 24 outputs the captured image corrected in step S33 to the evaluation unit 22 as the evaluation image. The processing of steps S34 to S39 is the same as that of steps S04 to S09, so its detailed description is likewise omitted. With the above, the series of processes of the evaluation method by the evaluation system 1A is completed.
Each functional unit in the user terminal 10A and the evaluation device 20A is realized by executing, on the computers constituting the user terminal 10A and the evaluation device 20A, program modules for realizing the respective functions. An evaluation program including these program modules is provided, for example, on a computer-readable recording medium such as a ROM or a semiconductor memory. The evaluation program may also be provided as a data signal via a network.
The evaluation system 1A, evaluation device 20A, evaluation method, evaluation program, and recording medium according to the second embodiment provide the same effects as those of the first embodiment. In addition, since the user terminal 10A does not include the correction unit 13, the processing load on the user terminal 10A can be reduced. Furthermore, because the captured images and distances D1 accumulate in the evaluation device 20A, data augmentation can be performed easily on the evaluation device 20A, allowing the neural network NN to be trained more effectively.
(Third Embodiment)
FIG. 18 is a configuration diagram schematically showing an evaluation system including an evaluation device according to the third embodiment. The evaluation system 1B shown in FIG. 18 differs from the evaluation system 1 mainly in that it includes a user terminal 10B instead of the user terminal 10 and does not include the evaluation device 20. The user terminal 10B differs from the user terminal 10 mainly in that it further includes an evaluation unit 18 and does not include the transmission unit 14 or the reception unit 15. In this case, the user terminal 10B is also a stand-alone evaluation device.
In the user terminal 10B, the correction unit 13 outputs the evaluation image to the evaluation unit 18, and the correction information acquisition unit 17 outputs the correction information to the evaluation unit 18. The evaluation unit 18 has the same functions as the evaluation unit 22: it evaluates the degree of rust of the evaluation object based on the evaluation image and outputs the evaluation result to the output unit 16.
Next, an evaluation method performed by the evaluation system 1B (the user terminal 10B) will be described with reference to FIG. 19, which is a flowchart of the evaluation method performed by the evaluation system shown in FIG. 18.
First, the image acquisition unit 11 acquires a captured image of the evaluation object as in step S01 (step S41) and outputs it to the correction unit 13. Subsequently, the correction unit 13 corrects the captured image (step S42). Since the processing in step S42 is the same as that in step S02, its detailed description is omitted. The correction unit 13 then outputs the captured image corrected in step S42 to the evaluation unit 18 as the evaluation image.
Subsequently, the evaluation unit 18 evaluates the degree of rust of the evaluation object based on the evaluation image (step S43). Since the processing in step S43 is the same as that in step S04, its detailed description is omitted. The evaluation unit 18 then outputs the evaluation result to the output unit 16. The output unit 16 generates output information for notifying the user of the evaluation result and outputs the evaluation result to the user based on that information (step S44). Since the processing in step S44 is the same as that in step S06, its detailed description is omitted.
Subsequently, the correction information acquisition unit 17 determines whether the user has performed an operation to correct the evaluation result (step S45). When the correction information acquisition unit 17 determines that no correction operation has been performed (step S45: NO), the series of processes of the evaluation method by the evaluation system 1B ends. When it determines that a correction operation has been performed (step S45: YES), the correction information acquisition unit 17 acquires information indicating the corrected category, together with the image ID of the evaluation image on which the correction operation was performed, as correction information and outputs the correction information to the evaluation unit 18.
Subsequently, the evaluation unit 18 performs learning based on the correction information (step S46). Since the processing in step S46 is the same as that in step S09, its detailed description is omitted. With the above, the series of processes of the evaluation method by the evaluation system 1B is completed.
Each functional unit in the user terminal 10B is realized by executing, on the computer constituting the user terminal 10B, program modules for realizing the respective functions. An evaluation program including these program modules is provided, for example, on a computer-readable recording medium such as a ROM or a semiconductor memory. The evaluation program may also be provided as a data signal via a network.
The evaluation system 1B, user terminal 10B, evaluation method, evaluation program, and recording medium according to the third embodiment provide the same effects as those of the first embodiment. In addition, since no data needs to be sent or received via the network NW, there is no time lag due to communication over the network NW, and the response speed can be improved. Network traffic and communication charges can also be reduced.
The evaluation system, evaluation device, evaluation method, evaluation program, and recording medium according to the present disclosure are not limited to the above embodiments.
For example, when the correction units 13 and 24 do not use the distance D1 in the correction processing, the user terminals 10, 10A, and 10B need not include the distance acquisition unit 12.
Likewise, when the evaluation result is not corrected by the user, the user terminals 10, 10A, and 10B need not include the correction information acquisition unit 17.
Batch normalization may also be performed in the neural network NN. Batch normalization is a process that transforms the output values of each layer so that their variance is constant. In this case, since no bias values need to be used, the nodes that output bias values (such as the node 41b and the node 421b) can be omitted.
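A minimal numpy sketch of batch normalization as described above; gamma and beta are the usual learnable scale and shift parameters, which the text does not name and which are therefore assumptions.

```python
import numpy as np

def batch_norm(z: np.ndarray, gamma=1.0, beta=0.0, eps=1e-5) -> np.ndarray:
    """z: (batch, features) outputs of one layer.

    Normalizes each feature to zero mean and unit variance over the batch;
    the beta shift plays the role otherwise served by a bias node.
    """
    mean = z.mean(axis=0)
    var = z.var(axis=0)
    return gamma * (z - mean) / np.sqrt(var + eps) + beta
```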
The evaluation units 18 and 22 may also evaluate the degree of rust based on the evaluation image using a method other than a neural network.
The output unit 16 may also output the evaluation result to a memory (storage device, not shown) and save the evaluation result there. For example, the output unit 16 creates management data that associates the evaluation result with a management number uniquely identifying it, the date on which the evaluation was performed, and so on, and saves the management data.
The evaluation systems 1, 1A, and 1B may also be used as diagnostic tools. A modification in which the evaluation system 1 is used as a diagnostic tool is described here. FIG. 20 is a configuration diagram schematically showing the evaluation system according to this modification. In a painted steel material, for example, moisture passing through the coating film can cause rust to form under the film, lifting the film and causing it to peel. Coating peeling is a state in which the film of paint applied to the base (the metal substrate) flakes off in a scaly manner. Once peeling occurs, the base is exposed and rust forms even more readily. Steel materials such as weathering steel can be used unpainted; in such unpainted steel there is no coating film and hence no rust-induced peeling, but rust may still form locally on the material. When the evaluation object is, for example, a steel bridge, evaluating rust over the entire surface of the object is inefficient.
For this reason, the evaluation system 1 of the modification shown in FIG. 20 extracts regions where rust has formed (candidate regions Rc) from a captured image of the entire evaluation object or of a part of it, and then performs the evaluation. In the evaluation system 1 of this modification, the evaluation of the degree of rust includes the distribution of rust over the evaluation object.
The evaluation system 1 of the modification further includes an extraction unit 19, which extracts candidate regions Rc from the captured image G. As shown in FIG. 21(a), the extraction unit 19 screens the captured image G and detects regions containing coating peeling (rust) as candidate regions Rc. The extraction unit 19 detects the candidate regions Rc by, for example, performing object detection. Image recognition techniques such as cascade classifiers, support vector machines (SVMs), and neural-network-based methods can be used for the object detection. The extraction unit 19 distinguishes, for example, between dirt and coating peeling (rust); that is, it does not detect regions to which dirt adheres as candidate regions Rc.
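A hypothetical sketch of such a screening pass, using a sliding window and a stand-in classifier is_rust() in place of whichever detector (cascade classifier, SVM, or neural network) is actually used; the window size, stride, and all names are assumptions, not real APIs.

```python
import numpy as np

def find_candidate_regions(g: np.ndarray, is_rust, win=128, stride=64):
    """g: captured image G as an (H, W, 3) array; is_rust: window -> bool."""
    candidates = []
    for y in range(0, g.shape[0] - win + 1, stride):
        for x in range(0, g.shape[1] - win + 1, stride):
            window = g[y:y + win, x:x + win]
            if is_rust(window):                  # True where peeling/rust is seen
                candidates.append((x, y, win, win))
    return candidates                            # Rc as (x, y, w, h) boxes
```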
As shown in FIG. 21(b), the extraction unit 19 extracts a region Rex containing the candidate region Rc and outputs the region Rex to the correction unit 13. When the candidate region Rc is larger than a predetermined size, the extraction unit 19 divides it into a plurality of regions and outputs a plurality of regions Rex, each containing one of the divided regions, to the correction unit 13. The correction unit 13 generates the evaluation image by applying the correction described above to the region Rex.
As the categories used in the evaluation unit 22, combinations of a grade indicating the extent of coating peeling and a grade indicating the degree of coating peeling can be used, for example. For the extent of peeling, grades such as "overall", "spot", and "pinpoint" defined in the SSPC standard may be used; for the degree of peeling, grades such as "no abnormality" and "onset of peeling" may be used.
The evaluation system 1 of this modification also provides the same effects as the evaluation system 1 according to the first embodiment described above. In the modification, candidate regions Rc where rust has formed are extracted from the captured image G, and the evaluation of the degree of rust is performed using the candidate regions Rc. Since the entire surface of the evaluation object does not need to be evaluated, the efficiency of the rust evaluation can be improved.
The shape of the marker MK is not limited to a square; it may be a rectangle.
In the above embodiments, the marker MK has a shape from which its orientation can be determined, but the shape of the marker MK is not limited to a directional one; it may be omnidirectional. For example, as shown in FIG. 22(a), the region Rb may be a square and the region Rw a square slightly smaller than the region Rb, arranged so that the center points of the regions Rb and Rw coincide and each side of the region Rb is parallel to the corresponding side of the region Rw.
As shown in FIG. 22(b), the marker MK may have an opening Hm, which is a through hole penetrating the sheet-like member on which the marker MK is drawn. The opening area of the opening Hm is equal to the area of the evaluation range. When this marker MK is attached to the evaluation object, the surface of the evaluation object is exposed through the opening Hm. Since the opening area of the opening Hm equals the area of the evaluation range, the correction units 13 and 24 extract the image of the surface of the evaluation object exposed through the opening Hm as the evaluation region Re. This simplifies the extraction of the evaluation region Re.
The opening area of the opening Hm may also be larger than the area of the evaluation range. In this case, as preprocessing for extracting the evaluation region Re, the correction units 13 and 24 may extract from the captured image the region exposed through the opening Hm, and then extract the evaluation region Re from that region.
In this way, regardless of the shape of the marker MK, a range having the predetermined area on the surface of the evaluation object can be determined in the captured image with the size of the marker region Rm included in the captured image as a reference, and extracted as the evaluation region Re. This makes it possible to extract the evaluation region Re without using any other information, so the evaluation systems 1, 1A, and 1B can be made compact.
As shown in FIG. 22(a), when the marker MK has an omnidirectional shape, the marker is simple, which makes it easy to produce. Moreover, since the orientation of the marker MK does not matter, the user can photograph the evaluation object easily.
When a marker MK not surrounded by a frame F2 is used, the boundary between the marker region Rm and the region of the evaluation object may become unclear due to light reflection and the like. In such cases, edge detection may fail to find the edge. In object detection, making the decision threshold too small increases false detections, while making it too large increases missed detections; moreover, object detection by itself cannot provide the orientation (angle) of the marker region Rm. Even when the marker region Rm is extracted by object detection, followed by edge enhancement and then edge detection, detection accuracy improves but detections can still be missed when the color of the outer edge of the marker region Rm barely differs from the color of its surroundings.
By contrast, in the markers MK shown in FIGS. 6(b) to 6(f) and FIGS. 22(c) and 22(d), the marker MK is surrounded by a frame F2, and a gap Rgap is provided between the frame F2 and the region Rb. The gap Rgap surrounds the region Rb along the edge F1, and its color differs from the color of the outer edge portion of the marker MK (that is, the region Rb). Therefore, even if the color around the marker region Rm (outside the frame F2) resembles the color of the outer edge portion (region Rb) of the marker region Rm, the outer edge (edge F1) of the marker region Rm remains distinct and can be detected. For example, when the marker region Rm is extracted by object detection, followed by edge enhancement and then edge detection, the vertices of the region Rb (vertices Pm1 to Pm4) can be detected more reliably. The marker region Rm can thus be extracted quickly and accurately, which further improves the evaluation accuracy regarding the degree of rust. The distance between the frame F2 and the region Rb (the width of the gap Rgap) may be, for example, at least one tenth of one side of the marker MK in order to secure the gap Rgap, and at most half of one side of the marker MK in view of ease of use.
As shown in FIGS. 22(c) and 22(d), the frame F2 need not completely surround the marker MK; a missing portion Fgap may be provided in the frame F2. For example, the frame F2 is not limited to a solid line and may be a broken line, in which case the frame line of the frame F2 is interrupted partway. When the missing portion Fgap is provided, the possibility that the region enclosed by the frame F2 is detected as the marker region Rm by edge detection or the like is reduced, improving the detection accuracy of the marker region Rm. That is, since the possibility of detecting the vertices of the frame F2 is reduced, the vertices of the marker region Rm (region Rb) can be detected even more reliably, which further improves the evaluation accuracy regarding the degree of rust.
As shown in FIG. 23, when extracting the evaluation region Re from the captured image G, the correction units 13 and 24 may determine the evaluation region Re at a random position in the captured image G and extract the determined region. In this case, the correction units 13 and 24 first obtain the maximum coordinates that the reference point Pr of the evaluation region Re can take. The reference point Pr is one of the four vertices of the evaluation region Re; here it is the vertex closest to the origin of the X-Y coordinate system. For example, when one side of the evaluation region Re is 100 pixels long, the maximum X coordinate x_crop_max and the maximum Y coordinate y_crop_max of the reference point Pr are given by equation (8): x_crop_max = X_g - 100, y_crop_max = Y_g - 100. Here, the vertex Pg1 of the captured image G is located at the origin (0, 0), the vertex Pg2 at (X_g, 0), the vertex Pg3 at (X_g, Y_g), and the vertex Pg4 at (0, Y_g).
The correction units 13 and 24 randomly determine the coordinates (x_crop, y_crop) of the reference point of the evaluation region Re using equation (9): (x_crop, y_crop) = (random(0, x_crop_max), random(0, y_crop_max)). The function random(minimum, maximum) returns an arbitrary value within the range from the minimum to the maximum.
When the determined evaluation region Re overlaps the marker region Rm, the correction units 13 and 24 may determine the coordinates of the reference point of the evaluation region Re again, as in the sketch below.
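A sketch of this random extraction, covering equations (8) and (9) and the re-draw on overlap with the marker region Rm; the 100-pixel side length follows the example above, while the box representation and function names are assumptions.

```python
import random

def random_crop_origin(xg, yg, marker_box, side=100):
    """xg, yg: width and height of G; marker_box: Rm as (x, y, w, h)."""
    x_max, y_max = xg - side, yg - side                      # equation (8)
    while True:
        x = random.randint(0, x_max)                         # equation (9)
        y = random.randint(0, y_max)
        if not overlaps((x, y, side, side), marker_box):     # re-draw on overlap
            return x, y

def overlaps(a, b):
    """Axis-aligned box intersection test on (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```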
As shown in FIG. 24, the correction units 13 and 24 may extract the evaluation region Re from the captured image G by designating an extraction direction relative to the marker region Rm. In this case, the correction units 13 and 24 first calculate the coordinates (x_cg, y_cg) of the center position Cg of the captured image G and the coordinates (x_cm, y_cm) of the center position Cm of the marker region Rm. They then calculate the vector V from the center position Cm toward the center position Cg as in equation (10): V = (x_cg - x_cm, y_cg - y_cm).
The correction units 13 and 24 determine the position of the evaluation region Re in the direction indicated by the vector V from the marker region Rm. For example, they determine the position of the evaluation region Re so that its reference point Pr lies in the direction of the vector V from the center position Cm. Here, the reference point Pr is the vertex of the evaluation region Re closest to the marker region Rm. The correction units 13 and 24 determine the position of the evaluation region Re so that, for example, it does not overlap the marker region Rm. Specifically, among the coordinates that the reference point Pr can take, they calculate the coordinates (x_crop_max, y_crop_max) of the point Pr_max farthest from the marker region Rm and the coordinates (x_crop_min, y_crop_min) of the point Pr_min closest to the marker region Rm, and then determine the position of the evaluation region Re so that the reference point Pr lies on the line segment between these two points.
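A sketch of this direction-constrained placement; it assumes Pr_min and Pr_max have already been computed and places Pr at a random position on the segment between them, which by construction lies along the vector V of equation (10). All names are assumptions.

```python
import random

def place_reference_point(pr_min, pr_max):
    """pr_min, pr_max: (x, y) endpoints of the admissible segment for Pr.

    Both endpoints lie on the ray from Cm along V = Cg - Cm (equation (10)),
    so any convex combination keeps Pr in the direction V from the marker.
    """
    t = random.random()                      # position along the segment, 0..1
    x = pr_min[0] + t * (pr_max[0] - pr_min[0])
    y = pr_min[1] + t * (pr_max[1] - pr_min[1])
    return x, y
```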
Reference Signs List
1, 1A, 1B: evaluation system; 10, 10A, 10B: user terminal; 11: image acquisition unit; 13, 24: correction unit; 16: output unit; 17: correction information acquisition unit; 18, 22: evaluation unit; 19: extraction unit; 20, 20A: evaluation device; 21: reception unit (image acquisition unit); 23: transmission unit (output unit); D1: distance; G: captured image; MK: marker; NN: neural network; Rc: candidate region; Re: evaluation region; Rm: marker region.
Claims (19)
- 評価対象物の撮像画像を用いて、前記評価対象物のさびの程度に関する評価を行う評価システムであって、
前記撮像画像を取得する画像取得部と、
前記撮像画像を補正することで評価用画像を生成する補正部と、
前記評価用画像に基づいて前記評価を行う評価部と、
前記評価部による評価結果を出力する出力部と、
を備え、
前記補正部は、前記撮像画像から前記評価対象物の表面のうちの所定の面積を有する範囲の画像である評価領域を抽出し、前記評価領域に基づいて前記評価用画像を生成する、評価システム。 An evaluation system for performing an evaluation on the degree of rust of the evaluation object using a captured image of the evaluation object,
An image acquisition unit for acquiring the captured image;
A correction unit that generates an evaluation image by correcting the captured image;
An evaluation unit that performs the evaluation based on the evaluation image;
An output unit for outputting an evaluation result by the evaluation unit;
With
The correction unit extracts, from the captured image, an evaluation region that is an image having a predetermined area on the surface of the evaluation object, and generates the evaluation image based on the evaluation region. . - 前記撮像画像は、前記評価対象物に付されたマーカの画像であるマーカ領域を含み、
前記補正部は、前記マーカ領域に基づいて前記評価領域を抽出する、請求項1に記載の評価システム。 The captured image includes a marker region that is an image of a marker attached to the evaluation object,
The evaluation system according to claim 1, wherein the correction unit extracts the evaluation region based on the marker region. - 前記マーカは、前記マーカの向きを特定可能な形状を有する、請求項2に記載の評価システム。 The evaluation system according to claim 2, wherein the marker has a shape capable of specifying the orientation of the marker.
- 前記マーカは、無指向性の形状を有する、請求項2に記載の評価システム。 The evaluation system according to claim 2, wherein the marker has an omnidirectional shape.
- 前記マーカは、開口部を有し、
前記開口部の開口面積は、前記面積と等しく、
前記補正部は、前記開口部を介して露出する前記評価対象物の表面の画像を前記評価領域として抽出する、請求項2~請求項4のいずれか一項に記載の評価システム。 The marker has an opening,
The opening area of the opening is equal to the area,
The evaluation system according to any one of claims 2 to 4, wherein the correction unit extracts an image of the surface of the evaluation object exposed through the opening as the evaluation region. - 前記マーカは、枠によって囲まれており、
前記マーカと前記枠との間に間隙が設けられ、
前記間隙の色は前記マーカの外縁部分の色とは異なる、請求項2~請求項5のいずれか一項に記載の評価システム。 The marker is surrounded by a frame;
A gap is provided between the marker and the frame;
The evaluation system according to any one of claims 2 to 5, wherein a color of the gap is different from a color of an outer edge portion of the marker. - 前記枠は、枠線が途切れた形状を有する、請求項6に記載の評価システム。 The evaluation system according to claim 6, wherein the frame has a shape in which a frame line is interrupted.
- 前記撮像画像を撮像した撮像装置と前記評価対象物との距離を取得する距離取得部をさらに備え、
前記補正部は、前記距離に基づいて前記評価領域を抽出する、請求項1に記載の評価システム。 A distance acquisition unit that acquires a distance between the imaging device that captured the captured image and the evaluation object;
The evaluation system according to claim 1, wherein the correction unit extracts the evaluation region based on the distance. - 前記補正部は、前記撮像画像に含まれる参照領域の色に基づいて、前記評価領域の色を補正し、
前記参照領域は、特定の色が付された参照体の画像である、請求項1~請求項8のいずれか一項に記載の評価システム。 The correction unit corrects the color of the evaluation area based on the color of the reference area included in the captured image,
The evaluation system according to any one of claims 1 to 8, wherein the reference region is an image of a reference body with a specific color. - 前記補正部は、前記評価領域から鏡面反射を除去する、請求項1~請求項9のいずれか一項に記載の評価システム。 The evaluation system according to any one of claims 1 to 9, wherein the correction unit removes specular reflection from the evaluation region.
- 前記評価部は、ニューラルネットワークを用いて前記評価を行う、請求項1~請求項10のいずれか一項に記載の評価システム。 The evaluation system according to any one of claims 1 to 10, wherein the evaluation unit performs the evaluation using a neural network.
- The evaluation system according to claim 11, wherein the correction unit adjusts the size of the evaluation image to the size of a reference image used for training the neural network by enlarging or reducing the evaluation region.
- The evaluation system according to any one of claims 1 to 12, further comprising an extraction unit that extracts, from the captured image, a candidate region in which rust has occurred, wherein the correction unit generates the evaluation image by correcting the candidate region.
- The evaluation system according to any one of claims 1 to 13, wherein the evaluation includes an evaluation of a degree of rust.
- The evaluation system according to any one of claims 1 to 14, wherein the evaluation includes an evaluation of a degree of rust removal.
- An evaluation device that performs an evaluation of the degree of rust of an evaluation object using a captured image of the evaluation object, the evaluation device comprising: an image acquisition unit that acquires the captured image; a correction unit that generates an evaluation image by correcting the captured image; an evaluation unit that performs the evaluation based on the evaluation image; and an output unit that outputs an evaluation result from the evaluation unit, wherein the correction unit extracts, from the captured image, an evaluation region that is an image of a range having a predetermined area on the surface of the evaluation object, and generates the evaluation image based on the evaluation region.
- An evaluation method for performing an evaluation of the degree of rust of an evaluation object using a captured image of the evaluation object, the evaluation method comprising: a step of acquiring the captured image; a step of generating an evaluation image by correcting the captured image; a step of performing the evaluation based on the evaluation image; and a step of outputting an evaluation result of the step of performing the evaluation, wherein, in the step of generating the evaluation image, an evaluation region that is an image of a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
- An evaluation program that causes a computer to execute: a step of acquiring a captured image of an evaluation object; a step of generating an evaluation image by correcting the captured image; a step of performing an evaluation of the degree of rust of the evaluation object based on the evaluation image; and a step of outputting an evaluation result of the step of performing the evaluation, wherein, in the step of generating the evaluation image, an evaluation region that is an image of a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
- A computer-readable recording medium on which is recorded an evaluation program that causes a computer to execute: a step of acquiring a captured image of an evaluation object; a step of generating an evaluation image by correcting the captured image; a step of performing an evaluation of the degree of rust of the evaluation object based on the evaluation image; and a step of outputting an evaluation result of the step of performing the evaluation, wherein, in the step of generating the evaluation image, an evaluation region that is an image of a range having a predetermined area on the surface of the evaluation object is extracted from the captured image, and the evaluation image is generated based on the evaluation region.
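The correction recited in the claims above can be made concrete with a short sketch. The following Python code is a minimal illustration, not the patented implementation: it crops an evaluation region corresponding to a predetermined physical area on the object's surface using an assumed image scale (mm per pixel), enlarges or reduces that region to an assumed reference-image size, and extracts rust-colored pixels as a rough candidate region by HSV thresholding. All constants, file names, function names, and thresholds here are illustrative assumptions.

```python
import numpy as np
import cv2  # OpenCV; assumed to be installed

# Illustrative constants -- assumptions, not values from the patent.
TARGET_AREA_MM2 = 2500.0     # predetermined surface area, e.g. 50 mm x 50 mm
REFERENCE_SIZE = (224, 224)  # assumed size of the reference (training) images

def extract_evaluation_region(captured, mm_per_px, center_xy):
    """Crop a square whose physical area on the object equals TARGET_AREA_MM2.

    `mm_per_px` stands in for the imaging conditions (distance,
    magnification); here it is simply supplied by the caller.
    """
    side_px = int(round((TARGET_AREA_MM2 ** 0.5) / mm_per_px))
    cx, cy = center_xy
    half = side_px // 2
    y0, y1 = max(cy - half, 0), min(cy + half, captured.shape[0])
    x0, x1 = max(cx - half, 0), min(cx + half, captured.shape[1])
    return captured[y0:y1, x0:x1]

def to_evaluation_image(region):
    """Enlarge or reduce the evaluation region to the reference-image size."""
    return cv2.resize(region, REFERENCE_SIZE, interpolation=cv2.INTER_AREA)

def rust_candidate_mask(image_bgr):
    """Rough rust-candidate extraction by HSV color thresholding.

    The reddish-brown hue/saturation bounds are guesses for illustration,
    not thresholds taken from the patent.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 60, 40], dtype=np.uint8)
    upper = np.array([25, 255, 200], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)

if __name__ == "__main__":
    captured = cv2.imread("captured.jpg")  # hypothetical input image
    if captured is None:
        raise SystemExit("captured.jpg not found")
    center = (captured.shape[1] // 2, captured.shape[0] // 2)
    region = extract_evaluation_region(captured, mm_per_px=0.2, center_xy=center)
    evaluation_image = to_evaluation_image(region)
    mask = rust_candidate_mask(evaluation_image)
    rust_ratio = float(np.count_nonzero(mask)) / mask.size
    print(f"rust-colored pixel ratio: {rust_ratio:.3f}")
```

Because the crop is fixed in physical units before resizing, photographs taken at different distances or magnifications yield evaluation images covering the same real-world area, which is the stated purpose of the correction.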
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-026099 | 2018-02-16 | ||
JP2018026099A JP6881347B2 (en) | 2018-02-16 | 2018-02-16 | Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019159425A1 (en) | 2019-08-22 |
Family
ID=67620052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/037252 WO2019159425A1 (en) | 2018-02-16 | 2018-10-04 | Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP6881347B2 (en) |
TW (1) | TW201935320A (en) |
WO (1) | WO2019159425A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7361613B2 (en) * | 2020-01-08 | 2023-10-16 | Canon Medical Systems Corporation | Winding inspection method, winding inspection device, and manufacturing method of superconducting coil device |
JP7440823B2 (en) * | 2020-02-21 | 2024-02-29 | Omron Corporation | Information processing device, information processing method and program |
2018
- 2018-02-16 JP JP2018026099A patent/JP6881347B2/en active Active
- 2018-10-04 WO PCT/JP2018/037252 patent/WO2019159425A1/en active Application Filing
- 2018-10-31 TW TW107138630A patent/TW201935320A/en unknown
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001266121A (en) * | 2000-03-16 | 2001-09-28 | Tomoe Corp | Method for diagnosing deterioration of coating on coated steel product |
JP2009053001A (en) * | 2007-08-27 | 2009-03-12 | Akinobu Morita | Structure inspection device |
JP2010538258A * | 2007-09-03 | 2010-12-09 | Korea Expressway Corporation | Steel bridge coating film inspection system using image processing method and processing method thereof |
JP2009118585A * | 2007-11-02 | 2009-05-28 | The Chugoku Electric Power Co., Inc. | Deterioration diagnosis device of power distribution equipment |
JP2014178328A (en) * | 2010-05-31 | 2014-09-25 | Tohoku Electric Power Co Inc | Steel pipe internal corrosion analyzer and steel pipe internal corrosion analysis method |
JP2012026820A (en) * | 2010-07-22 | 2012-02-09 | Nippon Sharyo Seizo Kaisha Ltd | Blast derusting grade inspection system |
JP2012185161A (en) * | 2011-03-03 | 2012-09-27 | Sang Mo Bae | Method for measuring real dimension of object by using camera included in portable terminal |
JP2012189523A (en) * | 2011-03-14 | 2012-10-04 | Hitachi Ltd | Facility degradation diagnostic device, facility degradation diagnostic method, and facility degradation diagnostic program |
JP2012207948A (en) * | 2011-03-29 | 2012-10-25 | Hitachi Ltd | Equipment abnormality over-time change determination device, equipment abnormality change determination method, and program |
JP2015060493A * | 2013-09-20 | 2015-03-30 | Dai Nippon Printing Co., Ltd. | Pattern inspection apparatus and pattern inspection method |
JP2016024100A * | 2014-07-23 | 2016-02-08 | Mazda Motor Corporation | Vehicle rust detector, vehicle rust detection system, and vehicle rust detection method |
JP2017009528A * | 2015-06-25 | 2017-01-12 | Daihatsu Motor Co., Ltd. | Acceptance/denial determination method for trouble |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11941796B2 (en) | 2018-02-16 | 2024-03-26 | Sintokogio, Ltd. | Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium |
JP2021056117A (en) * | 2019-09-30 | 2021-04-08 | Hitachi Zosen Corporation | Evaluation device, evaluation system, control program, and method for evaluation |
CN111160265A (en) * | 2019-12-30 | 2020-05-15 | Oppo (Chongqing) Intelligent Technology Co., Ltd. | File conversion method and device, storage medium and electronic equipment |
US20220084181A1 (en) * | 2020-09-17 | 2022-03-17 | Evonik Operations Gmbh | Qualitative or quantitative characterization of a coating surface |
WO2024142405A1 (en) * | 2022-12-28 | 2024-07-04 | Nippon Telegraph and Telephone Corporation | Inspection device, inspection method, and program |
Also Published As
Publication number | Publication date |
---|---|
JP2019144013A (en) | 2019-08-29 |
TW201935320A (en) | 2019-09-01 |
JP6881347B2 (en) | 2021-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019159425A1 (en) | Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium | |
WO2019159424A1 (en) | Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium | |
US20180225527A1 (en) | Method, apparatus, storage medium and device for modeling lane line identification, and method, apparatus, storage medium and device for identifying lane line | |
US9576375B1 (en) | Methods and systems for detecting moving objects in a sequence of image frames produced by sensors with inconsistent gain, offset, and dead pixels | |
US11841421B2 (en) | Synthetic aperture radar image analysis system, synthetic aperture radar image analysis method, and synthetic aperture radar image analysis program | |
JP2005292132A (en) | Radiometric calibration from single image | |
CN107564020B (en) | Image area determination method and device | |
CN116228780B (en) | Silicon wafer defect detection method and system based on computer vision | |
Panetta et al. | Logarithmic Edge Detection with Applications. | |
US11410459B2 (en) | Face detection and recognition method using light field camera system | |
Wharton et al. | Logarithmic edge detection with applications | |
JP5704909B2 (en) | Attention area detection method, attention area detection apparatus, and program | |
JP2018124963A (en) | Image processing device, image recognition device, image processing program, and image recognition program | |
CN108710881B (en) | Neural network model, candidate target area generation method and model training method | |
CN113989336A (en) | Visible light image and infrared image registration method and device | |
CN111222446B (en) | Face recognition method, face recognition device and mobile terminal | |
JPH11312243A (en) | Facial region detector | |
CN105913427B (en) | Machine learning-based noise image saliency detecting method | |
CN116129069A (en) | Method and device for calculating area of planar area, electronic equipment and storage medium | |
JP2007004721A (en) | Object detecting device and object detecting method | |
JP2010113562A (en) | Apparatus, method and program for detecting and tracking object | |
CN115115653A (en) | Refined temperature calibration method for cold and hot impact test box | |
EP3570250A1 (en) | Identification device and electronic apparatus | |
Zhou et al. | A self-adaptive learning method for motion blur kernel estimation of the single image | |
JP7527532B1 (en) | IMAGE POINT CLOUD DATA PROCESSING APPARATUS, IMAGE POINT CLOUD DATA PROCESSING METHOD, AND IMAGE POINT CLOUD DATA PROCESSING PROGRAM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18906539; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 18906539; Country of ref document: EP; Kind code of ref document: A1 |