CN110186802B - Measuring apparatus - Google Patents

Measuring apparatus

Info

Publication number: CN110186802B
Application number: CN201910126155.2A
Authority: CN (China)
Prior art keywords: area, correction, measurement, processor, processing
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN110186802A
Inventors: 安久幸也, 金田幸三
Current Assignee: Seiren Co Ltd
Original Assignee: Seiren Co Ltd
Application filed by Seiren Co Ltd
Publication of application CN110186802A
Application granted; publication of granted patent CN110186802B


Classifications

    • G PHYSICS → G01 MEASURING; TESTING → G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/35 — Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry, using infrared light
    • G01N 21/84 — Systems specially adapted for particular applications
    • G01N 5/00 — Analysing materials by weighing, e.g. weighing small particles separated from a gas or liquid
    • G01N 2021/8411 — Application to online plant, process monitoring
    • G01N 2021/8444 — Fibrous material

Landscapes

  • Physics & Mathematics (AREA)
  • Biochemistry (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

Provided is a measuring device capable of measuring and outputting the mass per unit area of a cloth piece. The measuring device includes a mounting table, a mass meter, a camera, an output unit, and a processor. The output unit outputs the mass per unit area of the cloth piece. The processor executes mass processing, measurement imaging processing, measurement area processing, calculation processing, and output processing. In the mass processing, the mass of the cloth piece measured by the mass meter is acquired. In the measurement imaging processing, imaging data for measurement is acquired; this data includes an image portion of the cloth piece captured by the camera. In the measurement area processing, the measurement area of the region corresponding to the cloth piece is acquired from that image portion. In the calculation processing, the mass per unit area is calculated from the mass and the measurement area. In the output processing, the mass per unit area is output by the output unit.

Description

Measuring apparatus
Technical Field
The present invention relates to a measuring apparatus that measures and outputs the mass per unit area of a cloth piece.
Background
The applicant proposed a line inspection method and a line inspection apparatus in patent document 1. In that method and apparatus, texture features in the two-dimensional space of a textile are extracted, and the line crossing angle, the number of warp threads, and the number of weft threads are obtained mathematically. The extraction algorithm for the texture features consists of the following sequence (1) to (5): (1) digital image data of the textile is captured with a digital still camera; (2) window function processing is applied to the digital image data; (3) the windowed digital image data is subjected to a fast Fourier transform to obtain a Fourier spectrum; (4) the peaks of the Fourier spectrum are extracted; (5) the information of each peak (angle, frequency) is associated with the texture feature quantities of the textile (line crossing angle, number of warp threads, number of weft threads). Besides textiles, the line inspection method and apparatus take as inspection objects films, sheets, screens, or plates on which mesh-like lines are formed by printing or etching, for example an electromagnetic wave shielding filter for a plasma display, a separation filter for biochemistry, a printing screen gauze, or a screen window.
Patent document 1: Japanese Patent No. 4520794
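For illustration only, the sequence (1) to (5) can be sketched in Python; this is a minimal reconstruction under our own assumptions (a Hann window, magnitude-peak picking), not the implementation of patent document 1, and all function and parameter names are ours.

    import numpy as np

    def texture_features(gray: np.ndarray, n_peaks: int = 4):
        """(1) gray: 2-D array of a textile image captured by a digital still camera."""
        h, w = gray.shape
        window = np.outer(np.hanning(h), np.hanning(w))                 # (2) window function
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray * window)))  # (3) Fourier spectrum
        spectrum[h // 2, w // 2] = 0.0                                  # suppress the DC term
        idx = np.argsort(spectrum, axis=None)[::-1][:n_peaks]           # (4) strongest peaks
        features = []
        for py, px in np.column_stack(np.unravel_index(idx, spectrum.shape)):
            fy, fx = py - h / 2, px - w / 2
            angle = np.degrees(np.arctan2(fy, fx))   # (5) peak angle ~ line crossing angle
            freq = np.hypot(fx, fy)                  #     peak radius ~ threads per image
            features.append((angle, freq))
        return features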
In the production of a fabric or of products made from fabric, the quality of the product and of the fabric used as material is managed at the manufacturing plant. For example, the plant manages the mass per unit area of the fabric, i.e., the mass of the fabric per unit of its area. The inventors therefore studied a measuring apparatus that can smoothly measure the mass per unit area of a cloth piece regardless of its size (area) and output the result.
Disclosure of Invention
The invention aims to provide a measuring device capable of measuring and outputting the mass per unit area of cloth.
One aspect of the present invention is a measurement device including: a mounting table on which a cloth piece is placed; a mass meter that measures the mass of the cloth piece placed on the mounting table; a camera that photographs the cloth piece placed on the mounting table; an outputter that outputs a mass per unit area indicating the mass per unit area of the cloth piece; and a processor that executes: a mass process of acquiring the mass of the cloth piece measured by the mass meter; a measurement imaging process of acquiring measurement imaging data including an image portion of the cloth piece captured by the camera; a measurement area process of acquiring the measurement area of the region corresponding to the cloth piece from the image portion of the cloth piece included in the measurement imaging data acquired in the measurement imaging process; a calculation process of calculating the mass per unit area from the mass of the cloth piece acquired in the mass process and the measurement area acquired in the measurement area process; and an output process of outputting, by the outputter, the mass per unit area obtained in the calculation process.
According to this measuring apparatus, the mass per unit area of the cloth piece can be measured and output as a measurement result. Because the measurement area of the region corresponding to the cloth piece is acquired from the image portion of the cloth piece included in the measurement imaging data, the mass per unit area can be measured regardless of the shape of the cloth piece; there is no need to cut the cloth into a predetermined shape for the measurement. For example, in the production of a fabric or of products made from fabric, the mass per unit area of the product or of the fabric used as material can be measured smoothly at the manufacturing plant.
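Expressed as a minimal sketch (our own illustrative Python with assumed callables, not the patent's implementation), the claimed chain of processes is:

    def measure_mass_per_unit_area(read_mass, capture_image, area_from_image, output):
        """All four arguments are assumed callables standing in for the hardware."""
        mass_g = read_mass()                # mass process: mass M from the mass meter
        image = capture_image()             # measurement imaging process
        area_mm2 = area_from_image(image)   # measurement area process: area A
        n = mass_g / (area_mm2 * 1e-6)      # calculation process (1 mm^2 = 1e-6 m^2)
        output(n)                           # output process: N in g/m^2 via the outputter
        return n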
The measurement device may further include an irradiator that irradiates the imaging range of the camera with infrared rays; the camera includes an image sensor having sensitivity to infrared rays and performs imaging while infrared rays are irradiated from the irradiator, and the measurement imaging process may be a process of acquiring measurement imaging data including an image portion of the cloth piece formed by infrared rays reflected by the cloth piece. In this case, the measurement device may include an infrared transmission filter between the mounting table and the camera. In the measuring apparatus, the camera may be an infrared camera.
With each of the above configurations, the image portion of the cloth piece can be detected stably, and the accuracy of the measurement area can be improved. Consider a cloth piece cut from a cloth with the following pattern: a plurality of figures of a predetermined shape, in the same color as the placement surface (the surface of the mounting table on which the cloth piece is placed), arranged over the entire fabric. A polka-dot (water-drop) pattern, in which a plurality of dots of the same color as the placement surface are arranged appropriately, is one example. Before the mass per unit area is measured, the fabric with the polka-dot pattern is cut to form the cloth piece, and the cutting position may pass through a dot. If visible light were used to image such a cloth piece, the background color (the color of the placement surface) would be the same as the color of the cut dot portions at the outer edge of the cloth piece in the visible-light image corresponding to the captured data. As a result, the image portion of the cloth piece might not be detected appropriately, or special image analysis would be required for the detection. In contrast, in measurement imaging data captured with infrared rays, the influence of the polka-dot pattern on the image portion of the cloth piece is eliminated or reduced. The boundary between the background (placement surface) and the image portion of the cloth piece therefore becomes stable, and the image portion of the cloth piece can be distinguished from the background, without any need to change the mounting table according to the pattern of the cloth.
In the measuring apparatus, a reference sample having an area set to a reference value may be placed on the mounting table, the camera may capture an image of the reference sample placed on the mounting table, and the processor may execute: a calibration imaging process of acquiring calibration imaging data including an image portion of the reference sample captured by the camera; a calibration area process of acquiring the calibration area of the region corresponding to the reference sample from the image portion of the reference sample included in the calibration imaging data acquired in the calibration imaging process; and a correction coefficient process of calculating a correction coefficient corresponding to the difference between the reference value and the calibration area acquired in the calibration area process. In that case, the calculation process calculates the mass per unit area by dividing the mass of the cloth piece acquired in the mass process by a corrected area, which is obtained by correcting the measurement area acquired in the measurement area process with the correction coefficient calculated in the correction coefficient process.
With this configuration, the measurement area can be corrected by the correction coefficient, improving the measurement accuracy of the mass per unit area.
According to the present invention, a measuring apparatus capable of measuring and outputting the mass per unit area of a fabric can be obtained.
Drawings
Fig. 1 is a perspective view showing an example of a schematic configuration of a measuring apparatus.
Fig. 2 is a diagram showing an example of a schematic configuration of a main screen serving as an operation screen. The upper part shows the state before measurement of the mass per unit area, with no cloth piece placed on the mounting table. The lower part shows the state after measurement, with the cloth piece placed on the mounting table.
Fig. 3 is a diagram showing an example of a schematic configuration of a sub-screen serving as an operation screen, with the reference sample placed on the mounting table.
Fig. 4 is a flowchart of the main process.
Fig. 5 is a flow chart of a first part of the calibration process.
Fig. 6 is a flow chart of a second part of the calibration process.
Fig. 7 is a flowchart of the area measurement processing.
Fig. 8 is a flowchart of the first correction area processing.
Fig. 9 is a flowchart of the second correction area processing.
Fig. 10 is a flowchart of the third correction area processing.
Fig. 11 is a flowchart of the fourth correction area processing.
Fig. 12 is a flowchart of the fifth correction area processing.
Fig. 13 is a flowchart of the actual measurement process.
Fig. 14 is a flowchart of the correction coefficient processing.
Fig. 15 is a diagram showing measurement imaging data. The upper part shows the first mode, in which infrared rays are used for imaging. The lower part shows the second mode, in which visible light is used for imaging.
Detailed Description
Embodiments for carrying out the present invention will be described with reference to the drawings. The present invention is not limited to the following configurations, and various configurations can be adopted within the same technical idea. For example, some of the following structures may be omitted or replaced with other structures, and other configurations may be added.
< measuring device >
The measurement device 10 will be described with reference to fig. 1. Fig. 1 schematically shows a measurement apparatus 10. In fig. 1, the broken line is a hidden line, the dot-dash line is a center line (reference line), and the two-dot chain line is an imaginary line.
The measuring apparatus 10 is an apparatus for measuring the mass per unit area N. In the embodiment, the object whose mass per unit area N is measured is the cloth piece 15. The cloth piece 15 is, for example, a piece cut from a cloth; the cloth is used as the material of a predetermined product. The mass per unit area N is the mass per unit area. In the embodiment, the unit area is set to 1 m². However, the unit area may be an area different from 1 m²; it is determined appropriately in consideration of various conditions. The measuring apparatus 10 is used, for example, in a plant manufacturing the fabric or a plant manufacturing a predetermined product from the fabric, where the quality of the cloth is controlled with the measuring device 10. The measurement device 10 includes: a mounting table 20, a mass meter 30, a camera 40, a housing chamber 45, an irradiator 50, an output unit 60, and a controller 70.
The cloth piece 15 is placed on the mounting table 20. In the embodiment, the surface of the mounting table 20 on which the cloth piece 15 is placed is referred to as the "mounting surface 22". Any or all of the infrared reflectance, infrared absorptance, and infrared transmittance of the mounting surface 22 may differ from those of the cloth piece 15. For example, the infrared reflectance of the mounting surface 22 may be set lower than that of the cloth piece 15, the infrared absorptance of the mounting surface 22 may be set higher than that of the cloth piece 15, and the infrared transmittance of the mounting surface 22 may be set higher than that of the cloth piece 15. In general, a cloth has the property of reflecting infrared rays. In the measuring apparatus 10, the following table is used as the mounting table 20: the mounting table 20 is made of acrylic, and the mounting surface 22 has a matte, single-color finish, colored black.
The mass meter 30 measures the mass M of the object placed on the mounting table 20 and outputs the measured mass M. In the embodiment, when the cloth piece 15 is placed on the mounting table 20, the mass meter 30 outputs the mass M of the cloth piece 15. A reference sample 17 (see fig. 3), described later, may also be placed on the mounting table 20. The mass meter 30 is, for example, an electronic balance. A known mass meter can be used as the mass meter 30, so further description of the mass meter 30 is omitted.
The camera 40 is a digital camera including an image sensor that is sensitive to infrared rays; the camera 40 can therefore capture infrared images. The image sensor is also sensitive to visible light, so the camera 40 can capture visible light as well. The camera 40 is disposed above the mounting surface 22, facing it, so the imaging direction of the camera 40 is perpendicular to the mounting surface 22. The imaging range R1, i.e., the range photographed by the camera 40, includes the mounting surface 22. In fig. 1, the inner of the 2 rectangular frames indicated by two-dot chain lines on the mounting surface 22 indicates the imaging range R1. In the embodiment, the entire cloth piece 15 is included in the imaging range R1; that is, the cloth piece 15 has an arbitrary shape smaller than the imaging range R1, and the camera 40 images the entire surface of the cloth piece 15 placed on the mounting table 20. Likewise, the entire reference sample 17 is included in the imaging range R1; the reference sample 17 has a predetermined shape smaller than the imaging range R1, and the camera 40 images its entire surface on the mounting table 20.
The housing chamber 45 houses the camera 40. An infrared transmission filter 47 is provided on the bottom surface of the housing chamber 45; apart from the filter 47 on the bottom surface, the housing chamber 45 is shielded from light. With the camera 40 housed in the housing chamber 45, the infrared transmission filter 47 lies between the mounting table 20 and the camera 40. The infrared transmission filter 47 absorbs ultraviolet rays and visible light and transmits infrared rays. Therefore, visible light and the like in the light reflected by the cloth piece 15 are absorbed by the infrared transmission filter 47 and do not reach the camera 40, while the infrared component passes through the filter 47 and reaches the camera 40. In the measurement device 10, a known infrared transmission filter that absorbs light having a wavelength of 800 nm or less can be used as the infrared transmission filter 47, so further description of it is omitted.
The irradiator 50 irradiates infrared rays. An infrared LED having a peak wavelength of 850 nm is an example of an irradiator 50 usable in the measurement apparatus 10. The irradiator 50 irradiates the imaging range R1 with infrared rays from above the mounting surface 22. In the measuring apparatus 10, the irradiation direction of the infrared rays is inclined with respect to the mounting surface 22; the irradiator 50 therefore irradiates the imaging range R1 from obliquely above the placement surface 22. In the embodiment, the measurement apparatus 10 has 2 irradiators 50, disposed on both sides of the camera 40. The range on the mounting surface 22 irradiated with infrared rays is referred to as the "illumination range R2", and the illumination range R2 includes the imaging range R1. In the measurement apparatus 10, each of the 2 irradiators 50 is set so that its illumination range on the placement surface 22 covers the entire illumination range R2. In fig. 1, the outer of the 2 rectangular frames indicated by two-dot chain lines on the placement surface 22 indicates the illumination range R2.
However, the irradiation direction of the infrared rays and the number and arrangement of the irradiators 50 described above are merely examples; they are determined appropriately in consideration of various conditions. For example, the number of irradiators 50 may be 1, with that single irradiator 50 illuminating the illumination range R2. Alternatively, the ranges on the mounting surface 22 illuminated by a plurality of irradiators 50 may be set to different ranges so that the plurality of irradiators 50 together illuminate the entire illumination range R2. The irradiator 50 may also be disposed on the mounting table 20 close to the mounting surface 22 so as to illuminate the entire illumination range R2 from the outer periphery of the imaging range R1; in this case, a plurality of irradiators 50 may be arranged in a ring at that position, or a ring-shaped irradiator 50 may be used.
The output unit 60 is a device that outputs the mass per unit area N as the measurement result. In the measurement device 10, the output unit 60 is a display, for example a liquid crystal display, although a different type of display may be employed. An operation screen including the mass per unit area N is displayed on the display serving as the output unit 60. The operation screen is not shown in fig. 1 and is described later.
The operation unit 65 receives input of various instructions to the measurement device 10. In the measurement device 10, the operation unit 65 includes a touch panel; together with the display serving as the output unit 60, it forms a touch screen. The operator of the measurement device 10 performs a predetermined operation, such as a tap, on the touch panel while the operation screen is displayed on the output unit 60. The operation unit 65 receives the instruction input corresponding to the operation and outputs the received instruction. However, the operation unit 65 may instead include predetermined hardware keys, or may be a keyboard and a mouse.
The controller 70 includes a processor 71, a memory 72, a memory 73, and a connection interface 74. In the embodiment, the connection interface 74 is described as "connection I/F74". The processor 71 executes arithmetic processing to control the measuring apparatus 10. The processor 71 is, for example, a CPU.
The memory 72 is a flash memory, though it may be another storage medium such as a hard disk. The memory 72 stores programs, including the program of the main process (see fig. 4), which in turn includes the programs of the processes shown in figs. 5 to 14. The program of the main process is stored in the memory 72 in advance.
The memory 73 provides the working storage area used when the processor 71 executes the programs stored in the memory 72; predetermined data is stored in predetermined storage areas of the memory 73 during execution of the processing. The memory 73 is, for example, a RAM. In the measuring apparatus 10, the processor 71 of the controller 70 executes the programs stored in the memory 72, whereby the measurement device 10 performs the various processes and realizes the corresponding functions.
The mass meter 30, the camera 40, and the operation unit 65, for example, are connected to the connection I/F 74. The mass M output from the mass meter 30 is input to the controller 70 via the connection I/F 74 and passed to the processor 71; the processor 71 thus acquires the mass M via the connection I/F 74. An imaging signal corresponding to an image captured by the camera 40 is likewise input to the controller 70 via the connection I/F 74 and passed to the processor 71. The processor 71 acquires the imaging signal via the connection I/F 74 and thereby obtains imaging data corresponding to the captured image; in other words, the processor 71 processes the acquired imaging signal to generate imaging data. An instruction output from the operation unit 65 is input to the controller 70 via the connection I/F 74 and passed to the processor 71, which acquires the instruction via the connection I/F 74. The irradiator 50 and the output unit 60 are also connected to the connection I/F 74. In fig. 1, the electric wiring connecting the mass meter 30, the camera 40, the irradiator 50, and the output unit 60 to the connection I/F 74 is omitted.
< operation Picture >
The operation screen will be described with reference to figs. 2 and 3. In the measurement device 10, the operation screen includes a main screen 80 and a sub-screen 90. In the embodiment, when the main screen 80 and the sub-screen 90 are not distinguished from each other, or when they are referred to collectively, they are called the "operation screen". The main screen 80 is displayed on the output unit 60 when the main process starts. It is the operation screen for measuring the mass per unit area N and is displayed on the output unit 60 when the mass per unit area N of the cloth piece 15 is measured. The sub-screen 90 is displayed while the calibration process (see figs. 5 and 6) is performed.
The calibration process is performed using a reference sample 17 whose area is set to a reference value. In the embodiment, five reference samples 17 are exemplified: a first sample, a second sample, a third sample, a fourth sample, and a fifth sample, whose areas are set to 5 different reference values. The first sample is the reference sample 17 whose area is set to a first reference value of 400 mm²; a square with sides of 20 mm is exemplified as a shape having this area. The second sample is the reference sample 17 whose area is set to a second reference value of 1600 mm²; a square with sides of 40 mm is exemplified. The third sample is the reference sample 17 whose area is set to a third reference value of 3600 mm²; a square with sides of 60 mm is exemplified. The fourth sample is the reference sample 17 whose area is set to a fourth reference value of 6400 mm²; a square with sides of 80 mm is exemplified. The fifth sample is the reference sample 17 whose area is set to a fifth reference value of 10000 mm²; a square with sides of 100 mm is exemplified. In the embodiment, when the first to fifth samples are not distinguished from one another, or when they are referred to collectively, they are called the reference sample 17. The values given above for the first to fifth reference values are merely examples; the reference values of the reference sample 17 are determined appropriately in consideration of various conditions. The shape of the reference sample 17 may be a rectangle, a polygon other than a square or rectangle, a circle, an ellipse, or any other shape; it too is determined appropriately in consideration of various conditions.
Like the cloth piece 15, the reference sample 17 may differ from the mounting surface 22 in any or all of infrared reflectance, infrared absorptance, and infrared transmittance. For example, the reference sample 17 may have an infrared reflectance higher than that of the mounting surface 22, an infrared absorptance lower than that of the mounting surface 22, or an infrared transmittance lower than that of the mounting surface 22; like the cloth piece 15, it may be formed of a material that reflects infrared rays more strongly than the mounting surface 22. The material forming the reference sample 17 is preferably one that does not, or is unlikely to, wrinkle or warp. When the reference sample 17 is formed by cutting it from a material whose area is larger than the reference value, a material that can easily be cut into a shape having exactly the reference area may be selected; for example, a material that does not fray, or is unlikely to fray, at the cut position and has low elasticity. For example, the reference sample 17 can be made of paper, in which case its surface color may be white.
The main screen 80 includes a preview area 81, a measurement result area 82, and a calibration button 83 (see fig. 2). The preview area 81 is an area in which a captured image corresponding to the imaging range R1 is displayed; in the measurement device 10, this captured image is a moving image. The measurement result area 82 is an area showing the mass per unit area N (g/m²) of the cloth piece 15, the area (mm²) of the cloth piece 15, and the mass (g). The calibration button 83 is an operation button that instructs execution of the calibration process (see S17 of fig. 4, fig. 5, and fig. 6); it corresponds to the calibration instruction, i.e., the instruction to execute the calibration process. When the operation unit 65 accepts a tap on the calibration button 83, the processor 71 acquires the calibration instruction and starts the calibration process.
The sub-screen 90 includes a preview area 91, a first sample button 92, a second sample button 93, a third sample button 94, a fourth sample button 95, a fifth sample button 96, a correction coefficient button 97, and a return button 98 (see fig. 3). The preview area 91 is an area in which a captured image corresponding to the imaging range R1 is displayed; in the measurement device 10, this captured image is a moving image.
The first sample button 92 is an operation button that instructs execution of the first correction imaging process (see S35 of fig. 5); it corresponds to the first sample instruction, i.e., the instruction to execute the first correction imaging process. The operator places the first sample on the mounting table 20 and then taps the first sample button 92. When the tap is accepted by the operation unit 65, the processor 71 acquires the first sample instruction and starts the first correction imaging process.
The second sample button 93 is an operation button that instructs execution of the second correction imaging process (see S41 of fig. 5); it corresponds to the second sample instruction, i.e., the instruction to execute the second correction imaging process. The operator places the second sample on the mounting table 20 and then taps the second sample button 93. When the tap is accepted by the operation unit 65, the processor 71 acquires the second sample instruction and starts the second correction imaging process.
The third sample button 94 is an operation button that instructs execution of the third correction imaging process (see S47 of fig. 5); it corresponds to the third sample instruction, i.e., the instruction to execute the third correction imaging process. The operator places the third sample on the mounting table 20 and then taps the third sample button 94. When the tap is accepted by the operation unit 65, the processor 71 acquires the third sample instruction and starts the third correction imaging process.
The fourth sample button 95 is an operation button that instructs execution of the fourth correction imaging process (see S53 of fig. 6); it corresponds to the fourth sample instruction, i.e., the instruction to execute the fourth correction imaging process. The operator places the fourth sample on the mounting table 20 and then taps the fourth sample button 95. When the tap is accepted by the operation unit 65, the processor 71 acquires the fourth sample instruction and starts the fourth correction imaging process.
The fifth sample button 96 is an operation button that instructs execution of the fifth correction imaging process (see S59 of fig. 6); it corresponds to the fifth sample instruction, i.e., the instruction to execute the fifth correction imaging process. The operator places the fifth sample on the mounting table 20 and then taps the fifth sample button 96. When the tap is accepted by the operation unit 65, the processor 71 acquires the fifth sample instruction and starts the fifth correction imaging process.
The correction coefficient button 97 is an operation button that instructs execution of the correction coefficient processing (see S65 of fig. 6 and fig. 14); it corresponds to the correction coefficient instruction, i.e., the instruction to execute the correction coefficient processing. The operator taps the correction coefficient button 97 after tapping, for example, each of the first to fifth sample buttons 92 to 96. When the tap is accepted by the operation unit 65, the processor 71 acquires the correction coefficient instruction and starts the correction coefficient processing.
The return button 98 is an operation button that instructs the end of the calibration process; it corresponds to the return instruction, i.e., the instruction to end the calibration process. The operator taps the return button 98 when, for example, finishing calibration. When the tap is accepted by the operation unit 65, the processor 71 acquires the return instruction and ends the calibration process.
< Main processing >
The main processing will be described with reference to fig. 4. The operator turns on the power supply of the measuring apparatus 10. Accordingly, the processor 71 starts the program of the main process stored in the memory 72 to start the main process. The irradiator 50 starts irradiation of infrared rays.
The processor 71 which starts the main process displays the main screen 80 (S11). The processor 71 outputs a display instruction of the home screen 80 to the output unit 60. The output unit 60 displays a main screen 80 (see the upper layer of fig. 2) in accordance with the display instruction. The processor 71 activates the camera 40 simultaneously with the execution of S11 (S13). The processor 71 outputs a start instruction to the camera 40. The camera 40 is activated in response to the activation instruction to start shooting. The processor 71 acquires shot data corresponding to a shot image shot by the camera 40. The processor 71 causes the captured image corresponding to the captured data to be included in the preview area 81 of the main screen 80. In the main screen 80, a captured image that matches the capturing range R1 is output in the preview area 81 (see fig. 2). Regarding the order of S11 and S13, S11 may be executed after S13 is executed.
Next, the processor 71 determines whether a calibration instruction has been acquired (S15). When the operator taps the calibration button 83 and the tap is accepted by the operation unit 65, the processor 71 acquires the calibration instruction from the operation unit 65. When the calibration instruction is acquired (yes in S15), the processor 71 executes the calibration process (S17). The calibration process obtains a correction coefficient; both are described later. During the calibration process, the operation screen is switched from the main screen 80 to the sub-screen 90. After executing S17, the processor 71 displays the main screen 80 (S19); S19 is performed in the same manner as S11, so further description is omitted. After executing S19, the processor 71 returns the process to S15 and repeats the processing from S15 onward.
When the calibration instruction is not acquired (S15: no), the processor 71 performs the mass processing (S21). The mass processing acquires the mass M of the cloth piece 15 measured by the mass meter 30. The operator places the cloth piece 15 on the mounting table 20, and the mass meter 30 measures its mass M. In S21, the processor 71 acquires the mass M measured by the mass meter 30 and stores it in the memory 73. The processor 71 adopts the value from the mass meter 30 as the mass M when the value is greater than 0 and remains constant for a set time. The set time is determined appropriately in consideration of various conditions; for example, it may be set, based on a prior experiment, to the time required for the reading of the mass meter 30 to stabilize after the cloth piece 15 is placed on the mounting table 20.
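The acquisition rule of S21 (adopt the reading once it is positive and constant for the set time) can be sketched as follows; the polling interval, tolerance, and function names are our assumptions, not specifics of the patent.

    import time

    def acquire_stable_mass(read_mass, settle_s=2.0, poll_s=0.1, tol_g=0.0):
        """read_mass: assumed callable returning the current reading in grams."""
        stable_since = None
        last = None
        while True:
            value = read_mass()
            if value > 0 and last is not None and abs(value - last) <= tol_g:
                if stable_since is None:
                    stable_since = time.monotonic()
                elif time.monotonic() - stable_since >= settle_s:
                    return value  # constant for the set time: adopt as mass M
            else:
                stable_since = None  # reading changed or is zero: restart the clock
            last = value
            time.sleep(poll_s)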
Next, the processor 71 executes the measurement imaging process (S23). The measurement imaging process acquires imaging data for measurement, referred to in the embodiment as "measurement imaging data". The measurement imaging data is image data including an image portion of the cloth piece 15 captured by the camera 40; it corresponds to an infrared image formed by infrared rays from the irradiator 50 reflected by the cloth piece 15 on the mounting table 20. In the measurement device 10, the measurement imaging data is image data of a still image. In S23, the processor 71 outputs an imaging instruction to the camera 40; the camera 40 photographs the imaging range R1 in response, and the processor 71 acquires the measurement imaging data corresponding to the captured image and stores it in the memory 73. Regarding the order of S21 and S23, S21 may be executed after S23; however, the processor 71 may execute S23 on the condition that the value from the mass meter 30 is greater than 0 and remains constant for the set time.
After executing S23, the processor 71 executes the measurement area processing (S25). The measurement area processing acquires the measurement area A of the region corresponding to the cloth piece 15 from the image portion of the cloth piece 15 included in the measurement imaging data acquired in S23. The measurement area processing is described later.
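Although the patent defers the details of this processing to fig. 7, a plausible sketch is to binarize the infrared image (the cloth piece 15 reflects infrared rays, while the black matte mounting surface 22 does not) and convert the foreground pixel count to mm². The threshold and the pixel-to-area scale below are assumptions, the latter being a value one would calibrate.

    import numpy as np

    def measurement_area_mm2(ir_image: np.ndarray,
                             threshold: int = 60,
                             mm2_per_pixel: float = 0.04) -> float:
        """ir_image: 2-D uint8 infrared capture of the imaging range R1 (assumed)."""
        foreground = ir_image > threshold            # bright pixels = cloth piece
        return float(foreground.sum()) * mm2_per_pixel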
Next, the processor 71 executes the calculation process (S27). The calculation process calculates the mass per unit area N from the mass M acquired in S21 and the measurement area A acquired in S25. In the calculation process, the measurement area A is corrected using a polynomial approximation curve containing the correction coefficients (see S163 of fig. 14), and the mass M acquired in S21 is divided by the corrected area B to calculate the mass per unit area N. In the embodiment, the polynomial approximation curve has degree 4.
The calculation process executed in S27 is explained further. The processor 71 obtains the measurement area A and the correction coefficients from the memory 73 and calculates the correction rate J using equation (1). The measurement area A is stored in the memory 73 in S73 of fig. 7, described later; the correction coefficients are stored in the memory 73 in S163 of fig. 14, described later. The processor 71 then calculates the corrected area B using equation (2) and obtains the mass per unit area N using equation (3).
J = Coeff4 × A⁴ + Coeff3 × A³ + Coeff2 × A² + Coeff1 × A + Coeff0 ··· (1)
B = (J + 1) × A ··· (2)
N = M / B ··· (3)
If the calibration process (see figs. 5 and 6) has not been executed in S17, the corrected area B in S27 has the same value as the measurement area A, so equation (3) effectively becomes "N = M/A". The correction rate J, or the correction coefficients stored in the memory 73 in S163 of fig. 14 (described later), may also be registered in the program of the main processing. If, after the start of the main processing currently being executed, S17 has not been executed and no correction coefficients are stored in the memory 73 (S163 of fig. 14 not executed), the processor 71 uses the registered correction rate J or correction coefficients in S27. The initial values of the correction rate J and of the correction coefficients (Coeff0, Coeff1, Coeff2, Coeff3, Coeff4) registered in the program of the main processing are 0. When the correction rate J is calculated by equation (1), the processor 71 updates the correction rate J registered in the program of the main processing to the newly calculated value.
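Equations (1) to (3) translate directly into code. In this sketch the coefficient order (Coeff0 ... Coeff4) follows the text above; converting B from mm² to m² so that N comes out in g/m² is our addition, since the patent leaves the unit handling implicit.

    def mass_per_unit_area(mass_g, area_mm2, coeff=(0.0, 0.0, 0.0, 0.0, 0.0)):
        """coeff = (Coeff0, ..., Coeff4); the all-zero default leaves B equal to A."""
        c0, c1, c2, c3, c4 = coeff
        a = area_mm2
        j = c4 * a**4 + c3 * a**3 + c2 * a**2 + c1 * a + c0   # equation (1)
        b = (j + 1.0) * a                                     # equation (2): corrected area B
        return mass_g / (b * 1e-6)                            # equation (3), mm^2 -> m^2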
Next, the processor 71 executes the output process (S29). The output process outputs, by the output unit 60, the mass per unit area N obtained in S27; in the embodiment, the corrected area B and the mass M are output together with it. The processor 71 outputs to the output unit 60 an output command for the mass per unit area N, the corrected area B, and the mass M, and the output unit 60 displays them as the measurement result in the measurement result area 82 of the main screen 80 (see the lower part of fig. 2). After executing S29, the processor 71 returns the process to S15 and repeats the processing from S15 onward. The main processing ends when the power of the measurement device 10 is turned off; the irradiation of infrared rays from the irradiator 50 continues until the power supply is cut off.
< calibration processing >
The calibration process executed in S17 of fig. 4 will be described with reference to fig. 5 and 6. The processor 71 that starts the calibration process displays the sub-screen 90 (S31). The processor 71 outputs a display instruction of the sub-screen 90 to the output unit 60. The output unit 60 displays a sub-screen 90 (see fig. 3) in response to the display instruction.
Next, the processor 71 determines whether the first sample instruction has been acquired (S33). The operator taps the first sample button 92; if the first sample is not yet placed on the mounting table 20 (and therefore not displayed in the preview area 91), the operator places it on the mounting table 20 before tapping. When the tap of the first sample button 92 is accepted by the operation unit 65, the processor 71 acquires the first sample instruction from the operation unit 65.
When the first sample instruction is not acquired (S33: no), the processor 71 moves the process to S39. When the first sample instruction is acquired (yes in S33), the processor 71 executes the first correction imaging process (S35). This process acquires correction imaging data referred to in the embodiment as "first correction imaging data": image data including an image portion of the first sample captured by the camera 40, corresponding to an infrared image formed by infrared rays from the irradiator 50 reflected by the first sample on the mounting table 20. In the measurement device 10, the first correction imaging data is image data of a still image. In S35, the processor 71 outputs an imaging instruction to the camera 40; the camera 40 photographs the imaging range R1 in response, and the processor 71 acquires the first correction imaging data corresponding to the captured image and stores it in the memory 73.
Next, the processor 71 executes the first correction area processing (S37). The first correction area processing acquires the area of the region corresponding to the first sample from the image portion of the first sample included in the first correction imaging data acquired in S35. In the embodiment, the area obtained by the first correction area processing is referred to as the "first correction area X1". The first correction area processing is described later. After executing S37, the processor 71 returns the process to S33 and repeats the processing from S33 onward.
At S39, the processor 71 determines whether the second sample instruction has been acquired. The operator taps the second sample button 93; if the second sample is not yet placed on the mounting table 20 (and therefore not displayed in the preview area 91), the operator places it on the mounting table 20 before tapping. When the tap of the second sample button 93 is accepted by the operation unit 65, the processor 71 acquires the second sample instruction from the operation unit 65.
When the second sample instruction is not acquired (S39: no), the processor 71 moves the process to S45. When the second sample instruction is acquired (yes in S39), the processor 71 executes the second correction imaging process (S41). This process acquires correction imaging data referred to in the embodiment as "second correction imaging data": image data including an image portion of the second sample captured by the camera 40, corresponding to an infrared image formed by infrared rays from the irradiator 50 reflected by the second sample on the mounting table 20. In the measurement device 10, the second correction imaging data is image data of a still image. In S41, the processor 71 outputs an imaging instruction to the camera 40; the camera 40 photographs the imaging range R1 in response, and the processor 71 acquires the second correction imaging data corresponding to the captured image and stores it in the memory 73.
Next, the processor 71 executes the second correction area processing (S43). The second correction area processing acquires the area of the region corresponding to the second sample from the image portion of the second sample included in the second correction imaging data acquired in S41. In the embodiment, the area obtained by the second correction area processing is referred to as the "second correction area X2". The second correction area processing is described later. After executing S43, the processor 71 returns the process to S33 and repeats the processing from S33 onward.
At S45, the processor 71 determines whether the third sample instruction has been acquired. The operator taps the third sample button 94; if the third sample is not yet placed on the mounting table 20 (and therefore not displayed in the preview area 91), the operator places it on the mounting table 20 before tapping. When the tap of the third sample button 94 is accepted by the operation unit 65, the processor 71 acquires the third sample instruction from the operation unit 65.
When the third sample instruction is not acquired (S45: no), the processor 71 moves the process to S51 of fig. 6. When the third sample instruction is acquired (yes in S45), the processor 71 executes the third correction imaging process (S47). This process acquires correction imaging data referred to in the embodiment as "third correction imaging data": image data including an image portion of the third sample captured by the camera 40, corresponding to an infrared image formed by infrared rays from the irradiator 50 reflected by the third sample on the mounting table 20. In the measurement device 10, the third correction imaging data is image data of a still image. In S47, the processor 71 outputs an imaging instruction to the camera 40; the camera 40 photographs the imaging range R1 in response, and the processor 71 acquires the third correction imaging data corresponding to the captured image and stores it in the memory 73.
Next, the processor 71 executes the third correction area processing (S49). The third correction area processing acquires the area of the region corresponding to the third sample from the image portion of the third sample included in the third correction imaging data acquired in S47. In the embodiment, the area obtained by the third correction area processing is referred to as the "third correction area X3". The third correction area processing is described later. After executing S49, the processor 71 returns the process to S33 and repeats the processing from S33 onward.
At S51 of fig. 6, the processor 71 determines whether the fourth sample instruction has been acquired. The operator taps the fourth sample button 95; if the fourth sample is not yet placed on the mounting table 20 (and therefore not displayed in the preview area 91), the operator places it on the mounting table 20 before tapping. When the tap of the fourth sample button 95 is accepted by the operation unit 65, the processor 71 acquires the fourth sample instruction from the operation unit 65.
When the fourth sample instruction is not acquired (S51: no), the processor 71 moves the process to S57. When the fourth sample instruction is acquired (yes in S51), the processor 71 executes the fourth correction imaging process (S53). This process acquires correction imaging data referred to in the embodiment as "fourth correction imaging data": image data including an image portion of the fourth sample captured by the camera 40, corresponding to an infrared image formed by infrared rays from the irradiator 50 reflected by the fourth sample on the mounting table 20. In the measurement device 10, the fourth correction imaging data is image data of a still image. In S53, the processor 71 outputs an imaging instruction to the camera 40; the camera 40 photographs the imaging range R1 in response, and the processor 71 acquires the fourth correction imaging data corresponding to the captured image and stores it in the memory 73.
Next, the processor 71 executes the fourth correction area processing (S55). The fourth correction area processing acquires the area of the region corresponding to the fourth sample from the image portion of the fourth sample included in the fourth correction imaging data acquired in S53. In the embodiment, the area obtained by the fourth correction area processing is referred to as the "fourth correction area X4". The fourth correction area processing is described later. After executing S55, the processor 71 returns the process to S33 of fig. 5 and repeats the processing from S33 onward.
At S57, the processor 71 determines whether a fifth sample instruction has been acquired. The operator taps the fifth sample button 96. However, if the fifth sample is not placed on the mounting table 20, the fifth sample is not displayed in the preview area 91. In this case, the operator places the fifth sample on the mounting table 20 before tapping the fifth sample button 96. A tap of the fifth sample button 96 by the operator 65 is accepted. In this case, the processor 71 acquires the fifth sample instruction from the operator 65.
If the fifth sample instruction is not acquired (S57: no), the processor 71 moves the process to S63. When the fifth sample instruction is acquired (yes in S57), the processor 71 executes the fifth correction imaging process (S59). The fifth correction imaging process is a process of acquiring the captured data for correction described below, referred to in the embodiment as "fifth correction captured data". The fifth correction captured data is image data including an image portion of the fifth sample captured by the camera 40, and corresponds to an infrared image formed by infrared rays from the irradiator 50 reflected by the fifth sample on the mounting table 20. In the measurement device 10, the fifth correction captured data is image data of a still image. In S59, the processor 71 outputs an imaging instruction to the camera 40. The camera 40 images the imaging range R1 according to the imaging instruction. The processor 71 acquires the fifth correction captured data corresponding to the image captured by the camera 40 and causes it to be stored in the memory 73.
Next, the processor 71 executes the fifth correction area processing (S61). The fifth correction area processing is processing for acquiring the area of the region corresponding to the fifth sample from the image portion of the fifth sample included in the fifth correction captured data acquired in S59. In the embodiment, the area obtained by the fifth correction area processing is referred to as "fifth correction area X5". The fifth correction area processing will be described later. After executing S61, the processor 71 returns the process to S33 of fig. 5. After that, the processor 71 repeatedly executes the processing from S33 onward.
At S63, the processor 71 determines whether a correction coefficient instruction has been acquired. The operator taps the correction coefficient button 97. A tap of the correction coefficient button 97 by the operator 65 is accepted. In this case, the processor 71 acquires the correction coefficient instruction from the operator 65.
If the correction coefficient instruction is not acquired (no in S63), the processor 71 moves the process to S67. When the correction coefficient instruction is acquired (yes in S63), the processor 71 executes the correction coefficient processing (S65). The correction coefficient processing is processing for calculating correction coefficients corresponding to the first difference, the second difference, the third difference, the fourth difference, and the fifth difference. The first difference is the difference between the first reference value (400 mm²) and the first correction area X1. The second difference is the difference between the second reference value (1600 mm²) and the second correction area X2. The third difference is the difference between the third reference value (3600 mm²) and the third correction area X3. The fourth difference is the difference between the fourth reference value (6400 mm²) and the fourth correction area X4. The fifth difference is the difference between the fifth reference value (10000 mm²) and the fifth correction area X5. In the correction coefficient processing, the coefficients of the fourth-degree polynomial used in S27 of fig. 4 are obtained by the Gaussian elimination method (see the above equation (1)). The correction coefficient processing will be described later. After that, the processor 71 ends the calibration process.
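To make the role of these five differences concrete, the linear system that S65 effectively solves can be written out. The following is a sketch only: the Vandermonde arrangement of the matrix is an assumption (the text names only the Gaussian elimination method and a fourth-degree polynomial), and the correction values Y1 to Y5 are defined by equations (4) to (8) below.

```latex
% Degree-4 correction polynomial J(A) = Coeff0 + Coeff1*A + ... + Coeff4*A^4
% fitted through the five calibration points (X_i, Y_i).
\begin{pmatrix}
1 & X_1 & X_1^2 & X_1^3 & X_1^4\\
1 & X_2 & X_2^2 & X_2^3 & X_2^4\\
1 & X_3 & X_3^2 & X_3^3 & X_3^4\\
1 & X_4 & X_4^2 & X_4^3 & X_4^4\\
1 & X_5 & X_5^2 & X_5^3 & X_5^4
\end{pmatrix}
\begin{pmatrix} \mathrm{Coeff}_0\\ \mathrm{Coeff}_1\\ \mathrm{Coeff}_2\\ \mathrm{Coeff}_3\\ \mathrm{Coeff}_4 \end{pmatrix}
=
\begin{pmatrix} Y_1\\ Y_2\\ Y_3\\ Y_4\\ Y_5 \end{pmatrix}
```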
At S67, the processor 71 determines whether a return instruction has been acquired. If the return instruction is not acquired (S67: no), the processor 71 returns the process to S33 of fig. 5 and repeatedly executes the processing from S33 onward. When the return instruction is acquired (yes in S67), the processor 71 ends the calibration process.
< area processing for measurement >
The measurement area processing executed in S25 of fig. 4 will be described with reference to fig. 7. The processor 71, having started the measurement area processing, executes the actual measurement processing (S71). The measurement area A obtained in the measurement area processing is acquired by the actual measurement processing executed in S71. The actual measurement processing will be described later. After executing S71, the processor 71 causes the measurement area A acquired in the actual measurement processing to be stored in the memory 73 (S73). After that, the processor 71 ends the measurement area processing.
< area processing for first correction >
With reference to fig. 8, the first correction area processing executed in S37 of fig. 5 will be described. The processor 71, having started the first correction area processing, executes the actual measurement processing (S81). The first correction area X1 obtained in the first correction area processing is acquired by the actual measurement processing executed in S81. The actual measurement processing will be described later. After executing S81, the processor 71 causes the memory 73 to store the first correction area X1 acquired in the actual measurement processing (S83). After that, the processor 71 ends the first correction area processing.
< area processing for second correction >
The second correction area processing executed in S43 in fig. 5 will be described with reference to fig. 9. The processor 71 which starts the second correction area processing executes the actual measurement processing (S91). The second correction area X2 obtained in the second correction area processing is obtained by the actual measurement processing executed in S91. The actual measurement process will be described later. After execution of S91, the processor 71 causes the memory 73 to store the second correction area X2 acquired in the actual measurement process (S93). After that, the processor 71 ends the second correction area processing.
< area processing for third correction >
The third correction area processing executed in S49 of fig. 5 will be described with reference to fig. 10. The processor 71 which starts the third correction area processing executes actual measurement processing (S101). The third correction area X3 obtained in the third correction area processing is obtained by the actual measurement processing executed in S101. The actual measurement process will be described later. After executing S101, the processor 71 causes the memory 73 to store the third correction area X3 acquired in the actual measurement process (S103). After that, the processor 71 ends the third area processing for correction.
< area processing for fourth correction >
The fourth correction area processing executed in S55 of fig. 6 will be described with reference to fig. 11. The processor 71 which starts the fourth area processing for correction executes actual measurement processing (S111). The fourth correction area X4 obtained in the fourth correction area processing is obtained by the actual measurement processing executed in S111. The actual measurement process will be described later. After executing S111, the processor 71 causes the memory 73 to store the fourth correction area X4 acquired in the actual measurement process (S113). After that, the processor 71 ends the fourth area processing for correction.
< area processing for fifth correction >
With reference to fig. 12, the fifth correction area processing executed in S61 of fig. 6 will be described. The processor 71, having started the fifth correction area processing, executes the actual measurement processing (S121). The fifth correction area X5 obtained in the fifth correction area processing is acquired by the actual measurement processing executed in S121. The actual measurement processing will be described later. After executing S121, the processor 71 causes the memory 73 to store the fifth correction area X5 acquired in the actual measurement processing (S123). After that, the processor 71 ends the fifth correction area processing.
< actual measurement processing >
With reference to fig. 13, the actual measurement processing executed at S71 in fig. 7, S81 in fig. 8, S91 in fig. 9, S101 in fig. 10, S111 in fig. 11, and S121 in fig. 12 will be described. In the embodiment, when the measurement captured data, the first correction captured data, the second correction captured data, the third correction captured data, the fourth correction captured data, and the fifth correction captured data are not distinguished from each other, or when they are collectively referred to, they are simply called "captured data". When the cloth piece 15 and the first to fifth samples serving as the reference sample 17 are not distinguished from each other, or when they are collectively referred to, they are called the "detected object". When the measurement area A, the first correction area X1, the second correction area X2, the third correction area X3, the fourth correction area X4, and the fifth correction area X5 are not distinguished from each other, or when they are collectively referred to, they are simply called the "area" or the "area of the detected object".
Accordingly, when the actual measurement processing is executed in S71 of fig. 7, the captured data in the description below is the measurement captured data; the region detected in the actual measurement processing is the image portion of the cloth piece 15 included in the measurement captured data, and the area is the measurement area A. When the actual measurement processing is executed in S81 of fig. 8, the captured data is the first correction captured data; the detected region is the image portion of the first sample serving as the reference sample 17, and the area is the first correction area X1. When it is executed in S91 of fig. 9, the captured data is the second correction captured data; the detected region is the image portion of the second sample serving as the reference sample 17, and the area is the second correction area X2. When it is executed in S101 of fig. 10, the captured data is the third correction captured data; the detected region is the image portion of the third sample serving as the reference sample 17, and the area is the third correction area X3. When it is executed in S111 of fig. 11, the captured data is the fourth correction captured data; the detected region is the image portion of the fourth sample serving as the reference sample 17, and the area is the fourth correction area X4. When it is executed in S121 of fig. 12, the captured data is the fifth correction captured data; the detected region is the image portion of the fifth sample serving as the reference sample 17, and the area is the fifth correction area X5.
The processor 71, having started the actual measurement processing, executes the binarization processing on the captured data (S131). The binarization processing binarizes the captured data, converting it into image data of two gradations, white and black. The binarization processing is well-known image processing in practical use. Therefore, further description of S131 is omitted.
Next, the processor 71 executes the closing processing on the captured data processed in S131 (S133). The closing processing is processing that performs a dilation process a predetermined number of times and then repeats an erosion process the same number of times. The dilation process converts black pixels adjacent to the periphery (top, bottom, left, and right) of white pixels into white pixels. The erosion process converts white pixels adjacent to the periphery (top, bottom, left, and right) of black pixels into black pixels. The closing, dilation, and erosion processes are well-known image processing in practical use. Therefore, further description of S133 is omitted.
Next, the processor 71 executes the labeling processing on the captured data processed in S133 (S135). The labeling processing assigns different labels to the black pixels and the white pixels. By the labeling processing, among the pixels forming the captured image corresponding to the captured data, the first pixels described below are given a label different from that of the second pixels described below. The first pixels are the pixels forming the image portion of the detected object included in the captured data. In the case where the detected object is the cloth piece 15, the first pixels form the black image portion in the measurement captured data after the labeling processing of the first aspect shown at the upper right of fig. 15 (see the portion indicated by reference numeral "15"). The second pixels are the pixels forming the image portions other than the detected object included in the captured data; in other words, the second pixels are the pixels other than the first pixels among the pixels forming the captured image. For example, when the detected object is a material shaped to have the reference-value area, the second pixels are the pixels forming the image portion of the mounting surface 22 in the following range: the range of the mounting surface 22 within the imaging range R1 that is exposed without being covered by the detected object. The labeling processing is well-known image processing in practical use. Therefore, further description of S135 is omitted.
After executing S135, the processor 71 acquires the area of the detected object (S137). Although omitted from the description above, the area value of each pixel is registered in the actual measurement processing program included in the main processing program. In S137, the processor 71 counts the number of the first pixels described above and acquires the area of the detected object as the product of this number and the area value. For example, suppose the number of first pixels is C and the area value of each pixel is D mm²/pixel. In this case, the processor 71 acquires the value "C × D" as the area of the detected object. A sketch of this whole sequence is given below. After that, the processor 71 ends the actual measurement processing.
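The following is a minimal Python/OpenCV sketch of S131 to S137. The threshold method (Otsu), the 4-neighbourhood kernel, the repetition count n, the file name, and the per-pixel area value are illustrative assumptions; the text specifies none of them.

```python
import cv2
import numpy as np

AREA_PER_PIXEL = 0.04  # hypothetical area value D [mm^2/pixel]

# S131: binarization into two gradations. Otsu's threshold is an assumption;
# use cv2.THRESH_BINARY_INV instead if the detected object appears dark.
img = cv2.imread("captured.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# S133: closing = dilation a predetermined number of times, then erosion
# the same number of times, using the 4-neighbourhood (up/down/left/right).
kernel = np.array([[0, 1, 0],
                   [1, 1, 1],
                   [0, 1, 0]], dtype=np.uint8)
n = 3  # hypothetical "predetermined number" of repetitions
closed = cv2.erode(cv2.dilate(binary, kernel, iterations=n),
                   kernel, iterations=n)

# S135: labeling; label 0 is the background (second pixels), and the
# detected object (first pixels) forms a foreground component.
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(closed)

# S137: area = number of first pixels C multiplied by the area value D.
count_c = int(stats[1:, cv2.CC_STAT_AREA].max())  # largest foreground component
print(f"area = {count_c * AREA_PER_PIXEL:.1f} mm^2")
```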
< correction coefficient processing >
The correction coefficient processing executed in S65 of fig. 6 will be described with reference to fig. 14. The processor 71, having started the correction coefficient processing, acquires the first correction area X1 from the memory 73 (S141). The first correction area X1 was stored in the memory 73 in S83 of fig. 8. Next, the processor 71 acquires the first correction value Y1 (S143). The first correction value Y1 is calculated by the following equation (4). In equation (4), the value "400" corresponds to the first reference value of 400 mm². The processor 71 acquires the calculated value as the first correction value Y1 and causes it to be stored in the memory 73.
Y1=(400-X1)/X1···(4)
After executing S143, the processor 71 acquires the second correction area X2 from the memory 73 (S145). The second correction area X2 was stored in the memory 73 in S93 of fig. 9. Next, the processor 71 acquires the second correction value Y2 (S147). The second correction value Y2 is calculated by the following equation (5). In equation (5), the value "1600" corresponds to the second reference value of 1600 mm². The processor 71 acquires the calculated value as the second correction value Y2 and causes it to be stored in the memory 73.
Y2=(1600-X2)/X2···(5)
After executing S147, the processor 71 acquires the third correction area X3 from the memory 73 (S149). The third correction area X3 was stored in the memory 73 in S103 of fig. 10. Next, the processor 71 acquires the third correction value Y3 (S151). The third correction value Y3 is calculated by the following equation (6). In equation (6), the value "3600" corresponds to the third reference value of 3600 mm². The processor 71 acquires the calculated value as the third correction value Y3 and causes it to be stored in the memory 73.
Y3=(3600-X3)/X3···(6)
After executing S151, the processor 71 acquires the fourth correction area X4 from the memory 73 (S153). The fourth correction area X4 was stored in the memory 73 in S113 of fig. 11. Next, the processor 71 acquires the fourth correction value Y4 (S155). The fourth correction value Y4 is calculated by the following equation (7). In equation (7), the value "6400" corresponds to the fourth reference value of 6400 mm². The processor 71 acquires the calculated value as the fourth correction value Y4 and causes it to be stored in the memory 73.
Y4=(6400-X4)/X4···(7)
After executing S155, the processor 71 acquires the fifth correction area X5 from the memory 73 (S157). The fifth correction area X5 was stored in the memory 73 in S123 of fig. 12. Next, the processor 71 acquires the fifth correction value Y5 (S159). The fifth correction value Y5 is calculated by the following equation (8). In equation (8), the value "10000" corresponds to the fifth reference value of 10000 mm². The processor 71 acquires the calculated value as the fifth correction value Y5 and causes it to be stored in the memory 73.
Y5=(10000-X5)/X5···(8)
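Written generally, equations (4) to (8) share one form: writing Rᵢ for the i-th reference value, each correction value is the relative error of the measured correction area, and the relation can be inverted to recover the reference value.

```latex
Y_i = \frac{R_i - X_i}{X_i}
\quad\Longleftrightarrow\quad
R_i = X_i\,(1 + Y_i), \qquad i = 1,\dots,5
```

This inversion is what appears to motivate applying the correction rate J to a measured area in the calculation process described later.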
Next, the processor 71 calculates the correction coefficients (S161). The correction coefficients are calculated using the Gaussian elimination method, with the first correction area X1, the second correction area X2, the third correction area X3, the fourth correction area X4, and the fifth correction area X5 as the input values of the matrix X, and the first correction value Y1, the second correction value Y2, the third correction value Y3, the fourth correction value Y4, and the fifth correction value Y5 as the input values of the matrix Y. Here, in S161, a predetermined function of a known program is used as appropriate. In the embodiment, the coefficients of the fourth-degree polynomial are calculated as the correction coefficients by the Gaussian elimination method. That is, in S161, correction coefficients of degree 0 through degree 4 are calculated.
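As a sketch of S161, the solve step can be reproduced with a generic linear solver standing in for the named Gaussian elimination. The Vandermonde arrangement of the matrix X is an assumption, and the numerical values are taken from the example described later in this document.

```python
import numpy as np

# Correction areas X1..X5 and reference values (example values from the text).
x = np.array([389.6, 1588.9, 3599.5, 6374.8, 9975.9])
refs = np.array([400.0, 1600.0, 3600.0, 6400.0, 10000.0])
y = (refs - x) / x  # correction values Y1..Y5 per equations (4)-(8)

# Matrix X as a Vandermonde matrix (degree 0..4 columns) -- an assumption.
V = np.vander(x, 5, increasing=True)
coeff = np.linalg.solve(V, y)  # [Coeff0, Coeff1, Coeff2, Coeff3, Coeff4]
```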
Next, the processor 71 causes the memory 73 to store the correction coefficients calculated in S161 as "Coeff[5] = xx[5]" (S163). Coeff[5] holds [Coeff0, Coeff1, Coeff2, Coeff3, Coeff4] (degree 0, 1, 2, 3, and 4) as the correction coefficients. The array xx[5] stores [xx0, xx1, xx2, xx3, xx4] (degree 0, 1, 2, 3, and 4) as the values corresponding to [Coeff0, Coeff1, Coeff2, Coeff3, Coeff4]. After executing S163, the processor 71 ends the correction coefficient processing.
< example >
This time, the inventors performed the above-described calibration process using a measurement device corresponding to the measurement device 10, and this is described below. The reference samples 17 were formed as the first, second, third, fourth, and fifth samples described above.
In the first correction area processing (see fig. 8) targeting the first sample (first reference value: 400 mm²), the first correction area X1 of "389.6 mm²" was acquired by the actual measurement processing of S81 (see S137 in fig. 13), and this value was stored as the first correction area X1 (see S83 in fig. 8).
In the second correction area processing (see fig. 9) targeting the second sample (second reference value: 1600 mm²), the second correction area X2 of "1588.9 mm²" was acquired by the actual measurement processing of S91 (see S137 in fig. 13), and this value was stored as the second correction area X2 (see S93 in fig. 9).
In the third correction area processing (see fig. 10) targeting the third sample (third reference value: 3600 mm²), the third correction area X3 of "3599.5 mm²" was acquired by the actual measurement processing of S101 (see S137 in fig. 13), and this value was stored as the third correction area X3 (see S103 in fig. 10).
In the fourth correction area processing (see fig. 11) targeting the fourth sample (fourth reference value: 6400 mm²), the fourth correction area X4 of "6374.8 mm²" was acquired by the actual measurement processing of S111 (see S137 in fig. 13), and this value was stored as the fourth correction area X4 (see S113 in fig. 11).
In the fifth correction area processing (see fig. 12) targeting the fifth sample (fifth reference value: 10000 mm²), the fifth correction area X5 of "9975.9 mm²" was acquired by the actual measurement processing of S121 (see S137 in fig. 13), and this value was stored as the fifth correction area X5 (see S123 in fig. 12).
In the correction coefficient processing (fig. 14), the following values were acquired as the first correction value Y1, the second correction value Y2, the third correction value Y3, the fourth correction value Y4, and the fifth correction value Y5. That is, in S143, the first correction value Y1 "0.026694045" was obtained from equation (4); the first difference (400 − X1) is 10.4 mm². In S147, the second correction value Y2 "0.006985965" was obtained from equation (5); the second difference (1600 − X2) is 11.1 mm². In S151, the third correction value Y3 "0.000138908" was obtained from equation (6); the third difference (3600 − X3) is 0.5 mm². In S155, the fourth correction value Y4 "0.003953065" was obtained from equation (7); the fourth difference (6400 − X4) is 25.2 mm². In S159, the fifth correction value Y5 "0.002415822" was obtained from equation (8); the fifth difference (10000 − X5) is 24.1 mm².
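As a quick check, the first correction value follows directly from equation (4) and the stored first correction area:

```latex
Y_1 = \frac{400 - 389.6}{389.6} = \frac{10.4}{389.6} \approx 0.026694
```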
In the correction coefficient processing, the following correction coefficients were calculated from these values by the Gaussian elimination method and stored (see S161 and S163 in fig. 14). In this case, the above equation (1) becomes the following equation (9).
Coeff0=0.0372790078891326000000000
Coeff1=0.0000303485836542386000000
Coeff2=0.0000000085293095598340900
Coeff3=0.0000000000009581917945669
Coeff4=0.0000000000000000373938979
J = 0.0000000000000000373938979 × A⁴ + 0.0000000000009581917945669 × A³ + 0.0000000085293095598340900 × A² + 0.0000303485836542386000000 × A + 0.0372790078891326000000000 ··· (9)
< effects of the embodiment >
According to the embodiments, the following effects can be obtained.
(1) The measurement device 10 includes the mounting table 20, the mass meter 30, the camera 40, the outputter 60, and the controller 70 (see fig. 1). The controller 70 includes the processor 71. In the measurement device 10, the processor 71 executes the main process (see fig. 4). In the main process, the processor 71 executes the mass processing, the measurement imaging processing, the measurement area processing, the calculation processing, and the output processing (see S21 to S29 in fig. 4). That is, in the mass processing, the processor 71 acquires the mass M of the fabric sheet 15 measured by the mass meter 30. In the measurement imaging processing, the processor 71 acquires the measurement captured data including an image portion of the fabric sheet 15 captured by the camera 40. In the measurement area processing, the processor 71 acquires the measurement area A of the region corresponding to the fabric sheet 15 from the image portion of the fabric sheet 15 included in the measurement captured data (see S71 in fig. 7 and S137 in fig. 13). In the calculation processing, the processor 71 calculates the mass per unit area N from the mass M and the measurement area A. In the output processing, the processor 71 outputs the mass per unit area N using the outputter 60.
Therefore, the mass per unit area N of the fabric sheet 15 can be measured and output as the measurement result. That is, the outputter 60 can display the mass per unit area N in the measurement result region 82 of the main screen 80 (see the lower part of fig. 2). Since the measurement area A of the region corresponding to the fabric sheet 15 is acquired from the image portion of the fabric sheet 15 included in the measurement captured data, the mass per unit area N can be measured regardless of the shape of the fabric sheet 15. For example, when measuring the mass per unit area N, it is not necessary to cut the fabric sheet 15 from the fabric into a predetermined shape. Thus, in the manufacture of a fabric or of a predetermined product of the fabric, the mass per unit area N of the product or of the fabric as a material can be measured smoothly in a manufacturing plant.
(2) The measurement device 10 includes the housing chamber 45 and the irradiator 50 (see fig. 1). The housing chamber 45 houses the camera 40, and the infrared transmission filter 47 is provided on the bottom surface of the housing chamber 45. In the state where the camera 40 is housed in the housing chamber 45, the infrared transmission filter 47 is positioned between the mounting table 20 and the camera 40. The irradiator 50 irradiates the imaging range R1 with infrared rays. The camera 40 includes an image sensor having sensitivity to infrared rays, and images the cloth sheet 15 in a state where infrared rays from the irradiator 50 are irradiated onto and reflected by the cloth sheet 15. In this case, the processor 71 acquires the measurement captured data including an image portion of the cloth sheet 15 formed by the infrared rays reflected by the cloth sheet 15 (see S23 in fig. 4).
Therefore, the image portion of the cloth sheet 15 can be detected stably, and the accuracy of the measurement area A can be improved. Consider, unlike the fabric sheet 15 of the embodiment, a fabric sheet that is part of a fabric with the following pattern: a pattern in which a plurality of figures of a predetermined shape, in the same color as the mounting surface 22, are arranged over the entire fabric. An example of such a pattern is a polka-dot pattern in which a plurality of dots of the same color as the placement surface 22 are arranged appropriately. Take this polka-dot pattern as an example. Before the mass per unit area N is measured, the polka-dot fabric is cut to form a fabric sheet, and the cutting position may fall on a dot. If visible light were used to image this fabric sheet, then in the visible-light image corresponding to the captured data, the background color (the color of the placement surface 22) would be the same as the color of the cut dot portions at the outer edge of the fabric sheet. As a result, the image portion of the fabric sheet could not be detected appropriately, or special image analysis would be required for the detection. On the other hand, in measurement captured data obtained by imaging the fabric sheet with infrared rays, the influence of the polka-dot pattern on the image portion of the fabric sheet is eliminated or reduced. As a result, the boundary between the background (placement surface 22) and the image portion of the fabric sheet is stable, and the image portion of the fabric sheet can be distinguished from the background. The mounting table 20 does not need to be changed according to the pattern of the fabric. By forming the housing chamber 45 so that the bottom surface other than the infrared transmission filter 47 is shielded from light, infrared rays can be captured by the camera 40 without being affected by visible light or the like in the environment where the measurement device 10 is installed.
The inventors compared the image portions of the cloth piece 15 specified by the labeling processing of S135 in fig. 13 under the following first and second aspects. In the first aspect, the irradiator 50 irradiates the imaging range R1 with infrared rays; the measurement device of the first aspect corresponds to the measurement device 10, and the measurement captured data includes an image portion of the cloth piece 15 formed by infrared rays reflected by the cloth piece 15 (see "before measurement area processing" shown at the upper left of fig. 15). In the second aspect, a visible-light LED is used as the irradiator, which irradiates the imaging range R1 with visible light; in the measurement device of the second aspect, the infrared transmission filter 47 is omitted, and the measurement captured data includes an image portion of the cloth piece 15 formed by visible light reflected by the cloth piece 15 (see "before measurement area processing" shown at the lower left of fig. 15). The other conditions are the same in the first and second aspects.
In the first aspect, the outer edge of the image portion of the cloth piece 15 specified by the labeling processing matches the outer edge of the image portion of the cloth piece 15 included in the measurement captured data before the measurement area processing (see fig. 7) (see the upper part of fig. 15). That is, in the first aspect, an image portion of the cloth piece 15 equal or close to the actual shape of the cloth piece 15 can be detected. In the second aspect, compared with the first aspect, the outer edge of the image portion of the cloth piece 15 specified by the labeling processing differs from the outer edge of the image portion of the cloth piece 15 included in the measurement captured data before the measurement area processing (see the lower part of fig. 15). From these results, the inventors drew the following conclusions for choosing the specification of the measuring apparatus: the measuring apparatus 10, corresponding to the measurement device of the first aspect, is preferable when the mass per unit area N of a product or of a fabric as a material must be measured with high accuracy in a manufacturing plant, whereas the measurement device of the second aspect can be adopted when, for example, the measurement of the mass per unit area N in the manufacturing plant may be comparatively simple. In fig. 15, the rectangular frames indicated by one-dot chain lines are explanatory: each encloses the measurement captured data after the labeling processing of the first aspect or of the second aspect, corresponds to the illustrated range of the measurement captured data before the measurement area processing of that aspect, and serves as the boundary line between the background of the frame and the placement surface 22 included in the measurement captured data.
(3) In the measurement device 10, the processor 71 executes a calibration process (see fig. 5 and 6). When the calibration process is executed, the reference sample 17 is placed on the stage 20, and the camera 40 images the reference sample 17 (see fig. 3). In the calibration process, the processor 71 executes a first shooting process for correction and a first area process for correction, a second shooting process for correction and a second area process for correction, a third shooting process for correction and a third area process for correction, a fourth shooting process for correction and a fourth area process for correction, a fifth shooting process for correction and a fifth area process for correction, and a correction coefficient process (refer to S35, S37, S41, S43, S47, S49 of fig. 5, and S53, S55, S59, S61, S65 of fig. 6).
In the first calibration imaging process, the processor 71 acquires first calibration imaging data including an image portion of the first sample as the reference sample 17 imaged by the camera 40. In the first correction area processing, the processor 71 acquires the first correction area X1 of the region corresponding to the first sample from the image part of the first sample included in the first correction captured data (see S81 of fig. 8 and S137 of fig. 13). In the second correction imaging process, the processor 71 acquires second correction imaging data including an image portion of the second specimen as the reference specimen 17 imaged by the camera 40. In the second correction area processing, the processor 71 acquires a second correction area X2 of an area corresponding to the second specimen from the image portion of the second specimen included in the second correction captured data (see S91 in fig. 9 and S137 in fig. 13). In the third calibration imaging process, the processor 71 acquires third calibration imaging data including an image portion of a third sample, which is the reference sample 17, imaged by the camera 40. In the third correction area processing, the processor 71 acquires a third correction area X3 of a region corresponding to the third sample from the image portion of the third sample included in the third correction captured data (see S101 in fig. 10 and S137 in fig. 13). In the fourth calibration imaging process, the processor 71 acquires fourth calibration imaging data including an image portion of a fourth sample, which is the reference sample 17, imaged by the camera 40. In the fourth correction area processing, the processor 71 acquires a fourth correction area X4 of a region corresponding to the fourth sample from an image portion of the fourth sample included in the fourth correction captured data (see S111 in fig. 11 and S137 in fig. 13). In the fifth correction imaging process, the processor 71 acquires fifth correction imaging data including an image portion of a fifth sample, which is the reference sample 17, imaged by the camera 40. In the fifth correction area processing, the processor 71 acquires a fifth correction area X5 of a region corresponding to the fifth sample from an image portion of the fifth sample included in the fifth correction captured data (see S121 in fig. 12 and S137 in fig. 13).
In the correction coefficient processing, the processor 71 calculates the correction coefficients corresponding to the first difference, the second difference, the third difference, the fourth difference, and the fifth difference (see S161 of fig. 14). The first difference is the difference between the first reference value of the first sample (400 mm²) and the first correction area X1. The second difference is the difference between the second reference value of the second sample (1600 mm²) and the second correction area X2. The third difference is the difference between the third reference value of the third sample (3600 mm²) and the third correction area X3. The fourth difference is the difference between the fourth reference value of the fourth sample (6400 mm²) and the fourth correction area X4. The fifth difference is the difference between the fifth reference value of the fifth sample (10000 mm²) and the fifth correction area X5. That is, in the correction coefficient processing, the processor 71 acquires the first correction value Y1 corresponding to the first difference, the second correction value Y2 corresponding to the second difference, the third correction value Y3 corresponding to the third difference, the fourth correction value Y4 corresponding to the fourth difference, and the fifth correction value Y5 corresponding to the fifth difference (see S143, S147, S151, S155, and S159 in fig. 14). Next, the processor 71 calculates the correction coefficients by the Gaussian elimination method, with the first correction area X1, the second correction area X2, the third correction area X3, the fourth correction area X4, and the fifth correction area X5 as the input values of the matrix X, and the first correction value Y1, the second correction value Y2, the third correction value Y3, the fourth correction value Y4, and the fifth correction value Y5 as the input values of the matrix Y (see S161 in fig. 14).
Therefore, the measurement area A can be corrected using the correction coefficients. In the calculation process of the main process (see S27 in fig. 4), the processor 71 calculates the mass per unit area N by dividing the mass M by the corrected area B. The corrected area B is the area obtained by correcting the measurement area A with the correction rate J, which includes the correction coefficients calculated by the Gaussian elimination method. In the measuring apparatus 10, the correction rate J is a fourth-degree polynomial approximation curve. The measurement device 10 can thereby improve the measurement accuracy of the mass per unit area N. The same holds for the measurement device of the second aspect: by executing the calibration process in the measurement device of the second aspect, the above-described effect can be obtained there as well.
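A sketch of this corrected calculation follows, under the assumption (suggested by the definition Y = (reference − X)/X above, but not stated verbatim in the text) that the corrected area is B = A · (1 + J(A)):

```python
import numpy as np

def mass_per_unit_area(mass_m: float, area_a: float, coeff: np.ndarray) -> float:
    """N = M / B; B = A * (1 + J(A)) is an assumption, not a quoted formula."""
    # coeff holds [Coeff0..Coeff4]; np.polyval expects highest degree first.
    j = np.polyval(coeff[::-1], area_a)  # correction rate J per equation (1)
    area_b = area_a * (1.0 + j)          # corrected area B
    return mass_m / area_b
```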
< modification example >
Embodiments can also be formed as follows. Several configurations in the modifications shown below can also be combined and employed as appropriate. Hereinafter, points different from the above are described, and description of the same points is appropriately omitted.
(1) The measuring apparatus 10 includes a housing chamber 45 (see fig. 1). The camera 40 is accommodated in the accommodation chamber 45. An infrared transmission filter 47 is provided on the bottom surface of the housing chamber 45. In the measuring apparatus, the housing chamber 45 may be omitted. The infrared transmission filter may be directly attached to the lens portion of the camera 40. With such a configuration, the infrared transmission filter can be provided between the mounting table 20 and the camera 40 in the measuring apparatus. As with the measurement device 10 described above, infrared rays can be captured by the camera 40 without being affected by visible light or the like in the environment where the measurement device is installed.
(2) The measurement device 10 includes a camera 40 and an infrared transmission filter 47 (see fig. 1). In the measuring apparatus, an infrared camera may be used as the camera. The infrared camera that can be used in the measuring device is a camera also called a night vision camera, and does not include a camera called a thermal imaging camera for temperature measurement. In this case, the infrared transmission filter 47 provided separately from the camera 40 in the measurement device 10 may be omitted.
(3) In the measuring apparatus 10, a display is used as the outputter 60 (see fig. 1). The outputter may be a device other than a display. For example, the outputter may be a printer; in this case, in the output processing executed in S29 of the main processing (see fig. 4), the mass per unit area N is printed on a predetermined sheet. The outputter may also be a speaker; in this case, in the output processing, the mass per unit area N is output as sound. Further, the outputter may include two or more of a display, a printer, and a speaker.
(4) As the reference sample 17, a first sample, a second sample, a third sample, a fourth sample, and a fifth sample are exemplified. In this case, the sub-screen 90 includes a first sample button 92, a second sample button 93, a third sample button 94, a fourth sample button 95, and a fifth sample button 96 (see fig. 3). The calibration process is performed in a format corresponding to the 5 types of reference samples 17 described above (see fig. 5 and 6). That is, the calibration process includes S33 to S37, S39 to S43, S45 to S49, S51 to S55, and S57 to S61, and the correction coefficient process performed in S65 of fig. 6 includes S141 and S143, S145 and S147, S149 and S151, S153 and S155, S157 and S159 (see fig. 14). S33 to S37, S141 and S143 correspond to the first sample. S39 to S43, S145 and S147 correspond to the second sample. S45 to S49, S149 and S151 correspond to the third sample. S51-S55, S153, and S155 correspond to the fourth sample. S57 to S61, S157 and S159 correspond to the fifth sample. In S161 of fig. 14, a gaussian elimination method is performed in which the first correction area X1, the second correction area X2, the third correction area X3, the fourth correction area X4, and the fifth correction area X5 are set as input values of the matrix X, and the first correction value Y1, the second correction value Y2, the third correction value Y3, the fourth correction value Y4, and the fifth correction value Y5 are set as input values of the matrix Y.
A sample different from the first sample, the second sample, the third sample, the fourth sample, and the fifth sample may be used as the reference sample 17. The sample of the reference sample 17 may be formed of 3 or 4 types, or 6 or more types, for example. In the calibration process, a series of processes identical to each of S33 to S37, S39 to S43, S45 to S49, S51 to S55, and S57 to S61 is repeated the number of times corresponding to the number of the reference samples 17. In the correction coefficient processing, a series of processes identical to each of S141 and S143, S145 and S147, S149 and S151, S153 and S155, S157 and S159 is repeated the number of times corresponding to the number of reference samples 17. In the gaussian elimination method executed in S161, the correction area of the number corresponding to the number of reference samples 17 is set as the input value of the matrix X, and the correction value of the number corresponding to the number of reference samples 17 is set as the input value of the matrix Y.
The case where the reference sample 17 has 3 types, i.e., the first sample, the second sample, and the third sample, will be described as an example. In the calibration process, S51 to S61 are omitted. If the determination of S45 is negative (see S45 in fig. 5: no), the processor 71 moves the process to S63 of fig. 6. In the correction coefficient processing, S153 to S159 are omitted. In S161, the processor 71 calculates the correction coefficients by performing the Gaussian elimination method with the first correction area X1, the second correction area X2, and the third correction area X3 as the input values of the matrix X, and the first correction value Y1, the second correction value Y2, and the third correction value Y3 as the input values of the matrix Y. In this case as well, correction coefficients can be calculated by the Gaussian elimination method. The correction coefficients need not be of degree 4; the degree of the correction coefficients is determined appropriately in consideration of various conditions.
(5) In the measurement device 10, when measuring the mass per unit area N, foreign matter other than the measurement object may be present on the mounting table 20 within the imaging range R1 together with the fabric sheet 15. In this case, the measurement captured data includes the image portion of the foreign matter together with the image portion of the fabric sheet 15. In the measurement area processing executed in S25 of the main processing (see fig. 4), the following processing may then be executed. That is, in the actual measurement processing (see fig. 13) executed in S71 of fig. 7, the labeling processing of S135 assigns different labels to the pixels forming the image portion of the fabric sheet 15 and the pixels forming the image portion of the foreign matter. Here the foreign matter is assumed to be smaller than the cloth piece 15; if it were larger, the operator could be expected to notice it. Therefore, in S137, the processor 71 counts the pixels assigned the label with the largest number of pixels (the "first pixels" described above), as sketched below. This enables the mass per unit area N of the cloth piece 15 to be measured smoothly. In this case, the pixels forming the image portion of the foreign matter become the second pixels described above.
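A sketch of this selection, with a synthetic binary image standing in for the captured data (the large square plays the cloth piece 15, the small one the foreign matter):

```python
import cv2
import numpy as np

# Synthetic stand-in for the binarized-and-closed captured data.
closed = np.zeros((100, 100), dtype=np.uint8)
closed[10:60, 10:60] = 255   # cloth piece (larger component)
closed[80:85, 80:85] = 255   # foreign matter (smaller component)

num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
areas = stats[1:, cv2.CC_STAT_AREA]      # skip background label 0
cloth_label = 1 + int(np.argmax(areas))  # label with the most pixels
count_c = int(areas[cloth_label - 1])    # first-pixel count C; foreign matter ignored
```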
(6) As the reference sample 17, a sample in which the infrared-reflecting material itself is shaped to have an area equal to the reference value can be used (see fig. 2). Alternatively, the reference sample 17 may be a figure whose area is set to the reference value, represented on a material having an area larger than the reference value. Printing is an example of the method of representing such a figure serving as the reference sample 17. In this case, within the imaging range R1, the region of the reference sample 17 where the figure is represented and the other regions should differ in any or all of infrared reflectance, infrared absorptance, and infrared transmittance, as with the placement surface 22. For example, the portion of the reference sample 17 bearing the figure may reflect infrared rays while the region without the figure, arranged within the imaging range R1, does not. When the area of the material bearing the figure is larger than the imaging range R1, the region without the figure is the region of that material where the figure is not represented. When the area of the material bearing the figure is smaller than the imaging range R1, the region without the figure is the region of that material where the figure is not represented together with the region including the placement surface 22. In the case where the reference sample 17 is a figure with the reference-value area represented on a larger material, the processor 71 treats each figure, whose areas are set to the first reference value, the second reference value, the third reference value, the fourth reference value, and the fifth reference value, as the detected object in the actual measurement processing (see fig. 13) executed in S81 of fig. 8, S91 of fig. 9, S101 of fig. 10, S111 of fig. 11, and S121 of fig. 12, and acquires the areas in the same manner as described above (see S137 of fig. 13).
(7) The measuring apparatus 10 measures the mass per unit area N. The main screen 80 includes the measurement result region 82, and the mass per unit area N, the corrected area B, and the mass M are output to the measurement result region 82 (see the lower part of fig. 2). In the measuring apparatus 10, the crossing angle of the warp and the weft of the cloth piece 15 may also be measured and output, as may the number of weft threads and the number of warp threads per unit length of the cloth piece 15. The processing algorithm disclosed in patent document 1 can be used to measure the thread crossing angle, the number of warp threads, and the number of weft threads; its description is therefore omitted.
[ description of reference numerals ]
10-measuring device, 15-cloth piece, 17-reference sample, 20-mounting table, 22-mounting surface, 30-mass meter, 40-camera, 45-housing chamber, 47-infrared transmission filter, 50-irradiator, 60-outputter, 65-operator, 70-controller, 71-processor, 72-memory, 73-memory, 74-connection interface (connection I/F), 80-main screen, 81-preview area, 82-measurement result area, 83-calibration button, 90-sub-screen, 91-preview area, 92-first sample button, 93-second sample button, 94-third sample button, 95-fourth sample button, 96-fifth sample button, 97-correction coefficient button, 98-return button, A-measurement area, B-corrected area, M-mass, N-mass per unit area, R1-imaging range, R2-irradiation range, X1-first correction area, X2-second correction area, X3-third correction area, X4-fourth correction area, X5-fifth correction area.

Claims (4)

1. A measurement device is characterized by comprising:
a placing table on which a cloth sheet is placed;
a mass meter that measures the mass of the cloth sheet placed on the placing table;
a camera for shooting the cloth sheet placed on the placing table;
an outputter that outputs a mass per unit area representing a mass per unit area of the cloth sheet; and
a processor for processing the received data, wherein the processor is used for processing the received data,
the processor performs the following processing:
a mass processing step of acquiring the mass of the cloth piece measured by the mass meter;
a measurement imaging process of acquiring measurement imaging data including an image portion of the cloth piece imaged by the camera;
a measurement area process of acquiring a measurement area of a region corresponding to the fabric piece from an image portion of the fabric piece included in the measurement imaging data acquired in the measurement imaging process;
a calculation process of calculating the mass per unit area from the mass of the cloth piece acquired in the mass process and the area for measurement acquired in the area for measurement process; and
an output process of outputting the mass per unit area obtained in the calculation process through the outputter,
the measurement device further comprises an irradiator that irradiates infrared rays toward an imaging range of the camera from the side of the mounting surface of the mounting table on which the cloth sheet is placed,
the camera includes an image sensor having sensitivity to infrared rays,
the camera performs imaging in a state where infrared rays are irradiated from the irradiator, and
the measurement imaging process is a process of acquiring the measurement imaging data including an image portion of the cloth sheet formed by infrared rays reflected by the cloth sheet.
2. The measurement device according to claim 1, wherein
an infrared transmission filter is provided between the mounting table and the camera.
3. The measurement device according to claim 1, wherein
the camera is an infrared camera.
4. The measurement device according to any one of claims 1 to 3,
a reference sample having an area set to a reference value is placed on the placing table,
the camera shoots the reference sample loaded on the loading platform,
the processor performs the following processing:
a correction imaging process of acquiring correction imaging data including an image portion of the reference sample formed by infrared rays reflected by the reference sample captured by the camera;
a correction area processing of acquiring a correction area of a region corresponding to the reference sample from an image portion of the reference sample included in the correction imaging data acquired in the correction imaging processing; and
a correction coefficient process of calculating a correction coefficient corresponding to a difference between the reference value and the correction area obtained in the correction area process,
the calculation process is a process of calculating the mass per unit area obtained by dividing the mass of the fabric sheet obtained in the mass process by the correction area obtained by correcting the area for measurement obtained in the area for measurement process by the correction coefficient calculated in the correction coefficient process.
CN201910126155.2A 2018-02-21 2019-02-20 Measuring apparatus Active CN110186802B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-028437 2018-02-21
JP2018028437A JP7054634B2 (en) 2018-02-21 2018-02-21 measuring device

Publications (2)

Publication Number Publication Date
CN110186802A CN110186802A (en) 2019-08-30
CN110186802B true CN110186802B (en) 2022-07-19

Family

ID=67713616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910126155.2A Active CN110186802B (en) 2018-02-21 2019-02-20 Measuring apparatus

Country Status (2)

Country Link
JP (1) JP7054634B2 (en)
CN (1) CN110186802B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113447104B (en) * 2021-06-25 2023-03-31 成都理工大学 Method for measuring fresh weight of miniature aquatic organisms, miniature device and application

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003214827A (en) * 2002-01-29 2003-07-30 Ohbayashi Corp Method for measuring craze by image processing
JP2015045587A (en) * 2013-08-28 2015-03-12 株式会社キーエンス Three-dimensional image processor, method of determining change in state of three-dimensional image processor, program for determining change in state of three-dimensional image processor, computer readable recording medium, and apparatus having the program recorded therein
WO2015052842A1 (en) * 2013-10-11 2015-04-16 株式会社島津製作所 Mass spectrometry data analysis device
CN204944457U (en) * 2015-08-24 2016-01-06 杨雪莲 A kind of definite quantity measuring apparatus of cigarette paper

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2259760A (en) * 1991-09-17 1993-03-24 Richard Edward Davies Measuring weight per unit length of fibrous material
JP3531160B2 (en) 2000-02-18 2004-05-24 日本製紙株式会社 How to inspect the quality of paper cuts
JP2006521544A (en) 2003-03-27 2006-09-21 マーロ ゲーエムベーハ ウント ツェーオー. カーゲー Method for inspecting the quality criteria of a flat fabric structure embodied in a multi-layer format according to contours
US20050227563A1 (en) 2004-01-30 2005-10-13 Bond Eric B Shaped fiber fabrics
JP4520794B2 (en) * 2004-08-20 2010-08-11 セーレン株式会社 Line inspection method and apparatus
JP5228439B2 (en) * 2007-10-22 2013-07-03 三菱電機株式会社 Operation input device
CN201532241U (en) 2009-09-24 2010-07-21 常州市开天纺织印染有限公司 Full-automatic textile weighing instrument
JP5592937B2 (en) 2010-03-30 2014-09-17 三井化学株式会社 Non-woven
CN202617249U (en) * 2012-06-04 2012-12-19 美细耐斯(上海)电子有限公司 Camera module for small size electronic equipment
US20160175751A1 (en) 2014-12-19 2016-06-23 The Procter & Gamble Company Composite filter substrate comprising a mixture of fibers
JP6299668B2 (en) * 2015-05-13 2018-03-28 信越半導体株式会社 How to evaluate haze
US11725309B2 (en) 2015-06-03 2023-08-15 The Procter & Gamble Company Coforming processes and forming boxes used therein
DE102015225962A1 (en) * 2015-12-18 2017-06-22 Voith Patent Gmbh Method and device for determining the basis weight of a fibrous web

Also Published As

Publication number Publication date
JP2019144105A (en) 2019-08-29
JP7054634B2 (en) 2022-04-14
CN110186802A (en) 2019-08-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant