WO2012153649A1 - Internal inspection method for glass-melting furnace, operation method for glass-melting furnace, and internal inspection system for glass-melting furnace


Info

Publication number
WO2012153649A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
background
melting furnace
observation data
glass melting
Application number
PCT/JP2012/061252
Other languages
French (fr)
Japanese (ja)
Inventor
博信 黒石
鈴木 俊彦
信 楜澤
亮介 赤木
Original Assignee
旭硝子株式会社
Application filed by 旭硝子株式会社
Priority to JP2013513979A (JP5928451B2)
Priority to CN201280012165.XA (CN103415476B)
Priority to KR1020137023603A (KR101923239B1)
Publication of WO2012153649A1

Classifications

    • C CHEMISTRY; METALLURGY
    • C03 GLASS; MINERAL OR SLAG WOOL
    • C03B MANUFACTURE, SHAPING, OR SUPPLEMENTARY PROCESSES
    • C03B 5/00 Melting in furnaces; Furnaces so far as specially adapted for glass manufacture
    • C03B 5/04 Melting in furnaces; Furnaces so far as specially adapted for glass manufacture in tank furnaces
    • C03B 5/16 Special features of the melting process; Auxiliary means specially adapted for glass-melting furnaces
    • C03B 5/24 Automatically regulating the melting process
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach

Definitions

  • The present invention relates to a glass melting furnace monitoring method, a glass melting furnace operating method, a glass melting furnace monitoring system, and a glass article manufacturing method.
  • In the glass manufacturing process, there is a step in which a glass raw material is charged into a glass melting furnace and melted in the furnace.
  • The raw material charged into the glass melting furnace is a solid and gradually melts in the glass melting furnace.
  • The raw material that has been charged into and accumulated in the glass melting furnace is called a batch pile (also referred to below as a batch mountain).
  • The batch pile gradually moves along the flow of the molten glass, that is, the melted raw material (in other words, from the upstream side toward the downstream side of the glass melting furnace).
  • The batch piles also gradually become smaller because they are melted by heat.
  • Because the behavior of the batch piles serves as a guideline for operating the glass melting furnace, the batch piles in the furnace have conventionally been observed visually or sketched through an observation window provided in the glass melting furnace. When observing a batch pile, the portion above the surface (that is, the liquid level) of the molten glass is the object of observation.
  • In Non-Patent Document 1, the Hough transform, which can detect straight lines, is used to determine the monitoring area.
  • Non-Patent Document 1 also describes obtaining the occupancy ratio of a batch mountain.
  • Patent Document 1 describes photographing a batch mountain and comparing, at each photographing time, the position and shape of the boundary line between the batch mountain and the liquid surface, or the most downstream position of the batch mountain.
  • Patent Document 2 describes a method of scanning the liquid surface in the furnace to capture an image, obtaining a position-luminance characteristic line from the image, and determining the location of the batch mountain based on that characteristic line.
  • Patent Document 3 describes a method for measuring and adjusting parameters relating to the raw material melted in a glass melting furnace.
  • As a general image-processing technique, there is a method of binarizing pixels.
  • For binarization, for example, there is a method of identifying a valley in the histogram of pixel luminance values and dividing the pixels into two classes.
  • The mode method, the discriminant analysis binarization method, and the like are known as methods for specifying a valley in the luminance-value histogram of pixels.
  • The mode method is described in Non-Patent Documents 2 and 3.
  • The discriminant analysis binarization method is described in Non-Patent Document 3.
  • In the discriminant analysis binarization method, the threshold value is determined so that the separation between the two classes is best. Specifically, a threshold value is determined that maximizes the ratio of the between-class variance to the within-class variance for the background region and the specific-object region in the image.
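For illustration, the discriminant analysis (Otsu) binarization described above can be sketched in a few lines of NumPy; maximizing the between-class variance is equivalent to maximizing the ratio of between-class to within-class variance. The function name and structure below are illustrative assumptions, not part of the patent.

    import numpy as np

    def otsu_threshold(gray):
        # gray: 2-D array of 8-bit luminance values
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        prob = hist / hist.sum()
        levels = np.arange(256)
        best_t, best_between = 0, -1.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0.0 or w1 == 0.0:
                continue
            mu0 = (levels[:t] * prob[:t]).sum() / w0
            mu1 = (levels[t:] * prob[t:]).sum() / w1
            between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
            if between > best_between:
                best_t, best_between = t, between
        return best_t

    # Example use: binary = (gray >= otsu_threshold(gray)).astype(np.uint8)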
  • When a camera is fixed near the observation window and repeatedly photographs the inside of the furnace, the camera position and orientation may shift during maintenance work such as cleaning the observation window. The shooting range of the camera then also shifts. When the position and orientation of the camera change in this way, the accuracy of evaluating the change over time in the state of the batch mountain decreases.
  • Bubbles are generated on the liquid surface of the melted raw material when the raw material is heated. Therefore, when a batch mountain in the glass melting furnace is photographed, an image of the batch mountain against a background of bubbles is obtained. In order to accurately monitor the state of the batch mountain, it is preferable to separate the bubbles and the batch mountain in the image and extract the batch mountain portion from the image.
  • Accordingly, an object of the present invention is to provide a glass melting furnace monitoring method and a glass melting furnace monitoring system capable of satisfactorily continuing observation of a certain area in the glass melting furnace. Another object is to provide a glass article manufacturing method for manufacturing a glass article while carrying out such monitoring.
  • A further object of the present invention is to provide a glass melting furnace operating method that makes it clear which operating parameter of the glass melting furnace should be adjusted according to the monitored state of the batch mountain.
  • In the glass melting furnace monitoring method of the present invention, an image photographing means photographs an image including a reference pattern provided in the glass melting furnace and a certain range on the liquid surface of the glass raw material melted in the glass melting furnace.
  • The method includes a background-excluded-image generation step of generating a background-excluded image in which the background is excluded from an extracted image showing the batch mountain and the background, and an observation data calculation step of calculating observation data related to the batch mountain based on the background-excluded image.
  • In the background image creation step, the number of pixels having each luminance value may be counted for each corresponding pixel or corresponding area of the plurality of extracted images, and the luminance value representing the background may be determined based on the count result for each luminance value.
  • A method of creating the background image by determining the luminance value representing the background in this way may be used.
  • In the background-excluded-image generation step, the process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the extracted image (extracted from the captured image as the area corresponding to the certain range) may be performed for each pixel.
  • A method of generating the background-excluded image by binarizing each subtraction result may be used.
  • In the observation data calculation step, the observation data may be calculated based on the background-excluded image generated in the background-excluded-image generation step.
  • The method may include a background-excluded-image conversion step of converting the background-excluded image into an image as if the certain range were observed from directly above, facing the liquid surface.
  • In that case, a method of calculating the observation data based on the background-excluded image after conversion in the background-excluded-image conversion step may be used.
  • A method may be used that includes a preprocessing step of calculating a quantity representing the light-dark contrast in the image and selecting images that satisfy a predetermined condition for that quantity.
  • In the preprocessing step, the number of edges in the image may be calculated as the quantity representing contrast, a plurality of images satisfying the condition that the number of edges is equal to or greater than a predetermined threshold may be selected, and the image from which the area corresponding to the certain range is extracted may be generated based on the selected images.
  • The glass melting furnace operating method of the present invention includes deriving the degree of influence of the operating parameters of the glass melting furnace on the observation data calculated in the observation data calculation step of the above-described glass melting furnace monitoring method.
  • The glass melting furnace monitoring system of the present invention includes: an image photographing means for photographing an image including a reference pattern provided in the glass melting furnace and a certain range on the liquid surface of the glass raw material melted in the glass melting furnace; an image calibration means for extracting an area corresponding to the certain range from the captured image according to the posture of the image photographing means calculated using the positional deviation of the reference pattern captured in the image; a background image creating means for creating, from a plurality of extracted images, a background image serving as the background of the batch mountain, which is the glass raw material charged into and accumulated in the glass melting furnace; a difference calculating means for generating a background-excluded image in which the background is excluded from an extracted image showing the batch mountain and the background; and an observation data calculating means for calculating observation data related to the batch mountain based on the background-excluded image.
  • The background image creating means may count the number of pixels having each luminance value for each corresponding pixel or corresponding area of the plurality of extracted images, and may create the background image by determining the luminance value representing the background based on the count result for each luminance value.
  • The difference calculating means may be configured to perform, for each pixel, the process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the extracted image (extracted from the photographed image as the area corresponding to the certain range), and to generate the background-excluded image by binarizing the subtraction results.
  • The image calibration means may convert the background image into an image as if the certain range were observed from directly above, facing the liquid surface, and may likewise convert the extracted image, extracted as the area corresponding to the certain range, into an image as if the certain range were observed from directly above. The difference calculating means may then subtract the luminance value of the corresponding pixel in the background image after conversion by the image calibration means from the luminance value of each pixel of the extracted image after conversion by the image calibration means, and the observation data calculating means may calculate the observation data based on the background-excluded image generated by the difference calculating means.
  • Alternatively, the image calibration means may convert the background-excluded image generated by the difference calculating means into an image as if the certain range were observed from directly above, facing the liquid surface, and the observation data calculating means may calculate the observation data based on the background-excluded image after conversion by the image calibration means.
  • A preprocessing means may also be provided that calculates a quantity representing the light-dark contrast in the image and selects images satisfying a predetermined condition for that quantity.
  • The preprocessing means may calculate the number of edges in the image as the quantity representing contrast, select a plurality of images satisfying the condition that the number of edges is equal to or greater than a predetermined threshold, and generate, based on the selected images, the image from which the area corresponding to the certain range is extracted.
  • The system may be configured to include an observation data analysis means for deriving the degree of influence of the operating parameters of the glass melting furnace on the observation data calculated by the observation data calculating means.
  • The system may also be configured to include a melting furnace control means that, when the observation data satisfies a predetermined condition, changes an operating parameter whose degree of influence on the observation data has an absolute value equal to or greater than a predetermined value.
  • The method for producing a glass article according to the present invention includes a glass melting step of producing molten glass in a glass melting furnace, a clarification step of removing bubbles from the molten glass in a clarification tank, a molding step of molding the molten glass from which bubbles have been removed, and a slow cooling step of gradually cooling the molded molten glass.
  • In the glass melting step, the image photographing means photographs an image including the reference pattern provided in the glass melting furnace and a certain range on the liquid surface of the glass raw material melted in the glass melting furnace; the method includes an area extraction step of extracting the area corresponding to the certain range from the photographed image, a background image creation step of creating, from a plurality of extracted images extracted from a plurality of images as areas corresponding to the certain range, a background image serving as the background of the batch mountain (the glass raw material charged into and accumulated in the glass melting furnace), a background-excluded-image generation step of generating a background-excluded image in which the background is excluded from an extracted image showing the batch mountain and the background by performing, for each pixel, the process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of the pixel of the extracted image, and an observation data calculation step of calculating observation data regarding the batch mountain based on the background-excluded image.
  • According to the glass melting furnace monitoring method and the glass melting furnace monitoring system of the present invention, it is possible to continue observing a certain area in the glass melting furnace and to monitor the state of the batch mountain in that area well. Moreover, according to the glass article manufacturing method, a glass article can be manufactured while such a good monitoring state is maintained.
  • A plan view showing an example of a glass melting furnace to which the glass melting furnace monitoring system of the present invention is applied.
  • A block diagram showing a configuration example of the glass melting furnace monitoring system of the first embodiment of the present invention.
  • An explanatory diagram showing an example of an image captured by the camera 11a.
  • An explanatory diagram showing an example of a reference pattern image and of matching using the reference pattern.
  • A flowchart showing an example of the posture estimation operation performed by the posture specifying means 14.
  • A schematic diagram in which the range corresponding to the liquid surface of the melted raw material has been extracted from the image captured by the camera 11a.
  • An explanatory diagram showing an example of the result of converting the image so that the viewpoint is directly above the certain area.
  • A flowchart showing an example of the processing flow of the camera posture determination process.
  • A flowchart showing an example of the processing flow up to the derivation of observation data.
  • A flowchart showing an example of the processing flow of the background image creation process (step S11).
  • A histogram obtained as a result of step S24.
  • An explanatory diagram showing an example of the image after the conversion in step S13.
  • An explanatory diagram showing an example of the background image after the conversion in step S12.
  • An explanatory diagram showing an example of the image obtained by performing the process of step S14.
  • An explanatory diagram showing an example of the image after the binarization process.
  • A graph showing changes in which the correlation between observation data and quality data disappears or newly appears.
  • A schematic diagram showing an example of a glass article production line used in the glass article manufacturing method of the third embodiment.
  • A flowchart showing an example of the glass article manufacturing method of the third embodiment.
  • FIG. 1 is a plan view showing an example of such a glass melting furnace.
  • The glass melting furnace 1 melts a glass raw material by heat in a space surrounded by a bottom surface, an upstream wall 7, side walls 6, a downstream wall 8, and a ceiling (not shown).
  • The upstream wall 7 is provided with inlets 3a and 3b for introducing the raw material, and the downstream wall 8 is provided with an outlet 4 for discharging the melted glass raw material.
  • The side walls 6 are each provided with observation windows 2 and burners 5.
  • Although FIG. 1 shows the case where two inlets 3a and 3b are provided, the number of inlets is not limited to two.
  • A solid glass raw material is introduced from the inlets 3a and 3b. Since the inside of the glass melting furnace is heated by the flames blown from the burners 5, this raw material gradually melts, and the melted raw material gradually moves downstream and is discharged from the outlet 4.
  • The batch mountain 10 is raw material accumulated in a solid state in the glass melting furnace 1. The batch mountain 10 melts while moving downstream as time passes.
  • The glass melting furnace monitoring system of the present invention includes cameras 11a and 11b and monitors certain areas 9a and 9b of the liquid surface in the glass melting furnace.
  • FIG. 1 shows, as an example, the case where two certain areas 9a and 9b are defined so that the region of the liquid surface in the furnace between the side walls, directly in front of each camera, is covered by the two certain areas 9a and 9b.
  • The camera 11a photographs the certain area 9a on the right side as viewed from the upstream side (hereinafter simply referred to as the certain area 9a), and the camera 11b photographs the certain area 9b on the left side as viewed from the upstream side (hereinafter simply referred to as the certain area 9b).
  • The case where the glass melting furnace monitoring system includes two cameras 11a and 11b will be described as an example, but the number of cameras included in the glass melting furnace monitoring system is not limited to two.
  • The certain areas 9a and 9b are set away from the vicinity of the inlets 3a and 3b.
  • This is because, near the inlets, the entire portion of the captured image corresponding to the certain area would be batch mountain and there is a high possibility that bubbles serving as the background would not appear, so that data related to the batch mountain could not be calculated.
  • FIG. 2 is a block diagram illustrating a configuration example of the glass melting furnace monitoring system according to the first embodiment of the present invention.
  • The glass melting furnace monitoring system according to the first embodiment includes the cameras 11a and 11b and an image processing device 13.
  • The glass melting furnace monitoring system performs the same processing on the images taken by the cameras 11a and 11b. Therefore, the description below refers to the camera 11a, and the corresponding description for the camera 11b is omitted.
  • The camera 11a repeatedly photographs an image of the certain area 9a of the liquid surface through an observation window 2 of the glass melting furnace (see FIG. 1). Each image is a still image.
  • The camera 11b likewise repeatedly takes a still image of the certain area 9b of the liquid surface through an observation window 2 of the glass melting furnace (see FIG. 1). The imaging interval of the cameras 11a and 11b may be determined in advance.
  • The shooting range (field of view) of the camera 11a contains not only the certain area 9a but also the liquid surface in the vicinity of the certain area 9a and the side wall facing the camera 11a. Therefore, the image captured by the camera 11a shows the certain area 9a, the liquid surface in its vicinity, and the opposing side wall. The same applies to the camera 11b.
  • The images taken by the cameras 11a and 11b are input to the image processing device 13.
  • The image processing device 13 performs image processing on the images captured by the camera 11a and calculates various data relating to the batch mountain in the certain area 9a (for example, data relating to its position and movement). Similarly, the image processing device 13 performs image processing on the images captured by the camera 11b and calculates various data relating to the batch mountain in the certain area 9b.
  • The batch mountain data calculated based on the images taken by the cameras 11a and 11b is hereinafter referred to as observation data.
  • The image processing device 13 includes a preprocessing means 19, an image storage means 12, a posture specifying means 14, a background image creating means 15, an image calibration means 16, a difference calculating means 17, and an observation data calculating means 18.
  • The preprocessing means 19 generates, based on the images taken by the camera 11a, an image in which the influence of raw material powder and flames (the flames blown out from the burners 5) is suppressed.
  • When floating raw material powder or flames are captured, the image of the batch mountain becomes unclear.
  • The preprocessing means 19 performs the same processing on the images taken by the camera 11b. Generating an image from which the influence of the raw material powder and the flames is removed in this way is referred to as preprocessing.
  • An image generated by the preprocessing means 19 from a plurality of images taken by a camera may hereinafter be referred to as a preprocessed image.
  • Note that the preprocessed image is the same as the image captured by each camera except that the influence of the raw material powder and the flames has been removed to make the batch mountain clearer. Therefore, it may also be referred to as a photographed image, in the same way as the image photographed by the camera itself.
  • The preprocessing means 19 stores the preprocessed images obtained from the camera 11a and the preprocessed images obtained from the camera 11b in the image storage means 12.
  • Depending on the state of the images, the preprocessing may not be necessary at all, or only part of the preprocessing may be necessary.
  • In such a case, the unnecessary preprocessing need not be performed.
  • In that case, the image processing device 13 may store the images input from the cameras 11a and 11b in the image storage means 12 as they are.
  • The image storage means 12 is a storage device that stores images. As described above, when the preprocessing means 19 performs preprocessing on the images input from the cameras 11a and 11b, the preprocessed images obtained by the preprocessing are stored. When the preprocessing is not performed, the images input from the cameras 11a and 11b are stored as they are.
  • Hereinafter, the case where the preprocessing means 19 performs preprocessing and the image storage means 12 stores the preprocessed images will be described as an example.
  • The posture specifying means 14 specifies the posture of the camera 11a from the image captured by the camera 11a (in this example, the preprocessed image).
  • Here, the posture means the position and orientation of the camera.
  • The posture specifying means 14 performs the same processing for the camera 11b.
  • The captured image is an image taken in the direction of the certain area 9a.
  • In the image captured by the camera 11a, in addition to the part of the batch mountain 10 above the liquid surface 25, part of the opposing side wall 6 and an observation window 2 are also shown.
  • The images of the side wall 6 and the observation window 2 are used to specify the orientation and position of the camera (the camera posture). That is, the boundary lines (joints) between the bricks forming the side wall 6, the intersections of those boundary lines, and the corners of the observation window 2 appear as characteristic patterns in the captured image, and such a characteristic pattern is used as a reference pattern.
  • The reference pattern needs to be a pattern for which no similar pattern exists elsewhere in the same captured image. For example, if a combination of shapes such as a window corner, a line, or a point forms a characteristic pattern, such a combination may be used as a reference pattern. Further, as will be described later, the posture specifying means 14 may sequentially update the image stored as the reference pattern image. If the posture of the camera does not change, the reference pattern appears at a substantially constant position (coordinates) in the captured image. On the other hand, when the posture of the camera changes during cleaning or the like, the position of the reference pattern in the captured image also changes.
  • The posture specifying means 14 determines whether the posture of the camera 11a has shifted, based on the position of the reference pattern in the image captured by the camera 11a.
  • That is, the reference pattern is used to determine whether or not a shift in the camera posture has occurred.
  • The coordinates representing a position in an image are hereinafter referred to as image coordinates.
  • It is preferable that a plurality of reference patterns exist in the image, from the viewpoint of increasing the reliability of the determination of camera posture deviation.
  • The posture specifying means 14 stores the image of each reference pattern and the image coordinates of the reference pattern in the captured image.
  • The image coordinates of the reference pattern may be, for example, the image coordinates of the center position of the reference pattern.
  • For example, the posture specifying means 14 may store an image of the point 21a at the corner of the observation window 2 and its surroundings as a reference pattern image, and store the image coordinates of that position.
  • FIG. 4A shows an example of a reference pattern image.
  • FIG. 4B shows an example of a captured image that is matched against the reference pattern.
  • In the captured image illustrated in FIG. 4B, elements that are the same as those shown in FIG. 3 are denoted by the same reference signs.
  • The posture specifying means 14 performs pattern matching between the captured image and the stored image of each reference pattern, and specifies the image coordinates of the portion in the captured image corresponding to each stored reference pattern image.
  • The posture specifying means 14 then determines whether a deviation has occurred in the posture of the camera 11a by comparing the specified image coordinates with the stored image coordinates. In the pattern matching, a similarity, which is an index value of how similar two image regions are, is calculated.
  • For example, the posture specifying means 14 performs pattern matching between the reference pattern image illustrated in FIG. 4A and the captured image illustrated in FIG. 4B, specifies the portion 81 corresponding to the reference pattern (see FIG. 4B), and specifies the image coordinates of the portion 81 (for example, the center coordinates of the portion 81 in the captured image). Then, the posture specifying means 14 may determine whether a deviation has occurred in the posture of the camera 11a by comparing those coordinates with the image coordinates stored in advance.
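For illustration only, such pattern matching and the check for a posture shift could be sketched with OpenCV template matching as below; the SQDIFF matching method, the variable names, and the pixel threshold are assumptions of this sketch, not details given in the patent.

    import cv2
    import numpy as np

    def posture_shift_detected(captured, ref_pattern, stored_xy, threshold_px=5.0):
        # Slide the stored reference pattern image over the captured image and
        # take the location with the smallest squared-difference score (best match).
        result = cv2.matchTemplate(captured, ref_pattern, cv2.TM_SQDIFF)
        _, _, min_loc, _ = cv2.minMaxLoc(result)
        h, w = ref_pattern.shape[:2]
        center = np.array([min_loc[0] + w / 2.0, min_loc[1] + h / 2.0])
        # Compare the matched center with the stored image coordinates of the pattern.
        distance = np.linalg.norm(center - np.asarray(stored_xy, dtype=float))
        return distance >= threshold_px, center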
  • Hereinafter, a characteristic point used for camera posture estimation is referred to as a reference point.
  • The reference points may include a point within a reference pattern (for example, the point 21a at the corner of the observation window 2).
  • FIG. 3 illustrates the case where the points 21a to 21e are used as reference points.
  • The posture specifying means 14 stores, as information on each reference point, the image coordinates of the reference point and the three-dimensional coordinates of the reference point in real space. Since the posture specifying means 14 stores "the image of the reference pattern and its image coordinates" and "the image coordinates and three-dimensional coordinates of the reference points", the relative positional relationship between the reference pattern and the reference points on the image can be determined.
  • The process in which the camera 11a captures an image including the reference pattern and the certain area 9a, and the process in which the camera 11b captures an image including the reference pattern and the certain area 9b, correspond to the image photographing step.
  • FIG. 5 is a flowchart illustrating an example of the posture estimation operation performed by the posture specifying means 14.
  • As described above, the posture specifying means 14 compares the image coordinates of the reference pattern in the captured image with the stored image coordinates, and when it determines that a shift has occurred in the posture of the camera 11a, it uses those image coordinates to calculate the amount of posture deviation (step S51). That is, the posture specifying means 14 calculates how far the reference pattern has shifted in the captured image.
  • Next, the posture specifying means 14 reflects the amount of deviation of the reference pattern in the captured image in the stored image coordinates of the reference points (step S52). That is, the posture specifying means 14 shifts the image coordinates of each reference point (changes the coordinate values of the reference points) by the amount by which the image coordinates of the reference pattern in the captured image have shifted due to the deviation in the posture of the camera 11a.
  • Then, the posture specifying means 14 performs a camera calibration process using the image coordinates of the reference points and the three-dimensional coordinates of the reference points in real space, and estimates the posture of the camera 11a. Specifically, the posture specifying means 14 calculates, for various candidate postures of the camera 11a, the image coordinates of each reference point from the three-dimensional coordinates of each reference point in real space (step S53). Then, the posture specifying means 14 determines, as the posture of the camera 11a, the posture for which the image coordinates calculated from the three-dimensional coordinates of the reference points are closest to the image coordinates of the reference points shifted in accordance with the shift of the reference pattern as described above (step S54).
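The camera calibration described above amounts to finding the posture whose projected reference points best match the observed image coordinates. One common way to solve this reprojection problem is OpenCV's solvePnP; the following is only a hedged sketch that assumes the intrinsic camera parameters are already known.

    import cv2
    import numpy as np

    # object_points: (N, 3) real-space coordinates of the reference points
    # image_points:  (N, 2) shifted image coordinates of the reference points (step S52)
    # camera_matrix: assumed known 3x3 intrinsic matrix; dist_coeffs: lens distortion
    def estimate_camera_posture(object_points, image_points, camera_matrix, dist_coeffs):
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(object_points, dtype=np.float64),
            np.asarray(image_points, dtype=np.float64),
            camera_matrix, dist_coeffs)
        if not ok:
            raise RuntimeError("pose estimation failed")
        return rvec, tvec   # rotation and translation describing the camera posture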
  • Although the camera 11a has been described here as an example, the posture specifying means 14 similarly performs the determination of whether a posture deviation has occurred and the posture estimation for the camera 11b.
  • The image calibration means 16 specifies the range corresponding to the certain area 9a in the captured image (in this example, the preprocessed image) according to the posture of the camera 11a specified by the posture specifying means 14. FIG. 6 is a schematic diagram in which the range corresponding to the liquid surface 25 of the melted raw material has been extracted from the image taken by the camera 11a.
  • The right side and the left side of FIG. 6 correspond to the upstream side and the downstream side of the glass melting furnace, respectively.
  • The range 31a enclosed by the thick solid line corresponds to the certain area 9a in real space.
  • The image calibration means 16 specifies and extracts the range 31a corresponding to the certain area 9a according to the posture of the camera 11a.
  • Here, the liquid level in the glass melting furnace is assumed to be constant.
  • The height of the certain area 9a is predetermined. That is, the certain area 9a (its position) is defined in advance as a region in a plane of constant height in real space. Therefore, once the posture of the camera 11a is specified, the range corresponding to the certain area 9a in the image captured by the camera 11a can also be determined. In other words, the image calibration means 16 may specify the range 31a in the image obtained when the certain area 9a, located at a constant height in real space, is projected onto the captured image of the camera 11a whose posture is known.
  • Further, by investigating how many millimeters in real space correspond to a shift of one pixel in the photographed image, the pixel resolution (mm/pixel) of the photographed image can be obtained.
  • The image calibration means 16 performs a viewpoint conversion process on the range 31a corresponding to the certain area 9a in the image, changing the viewpoint to a position directly above the certain area 9a, and generates an image as observed from that viewpoint.
  • What the image calibration means 16 converts into an image as observed from directly above the certain area 9a is not limited to the range 31a extracted from the image captured by the camera 11a.
  • The image calibration means 16 also performs the same conversion on images obtained by image processing (for example, by the background image creation process described later).
  • The image calibration means 16 performs the same processing on the captured images (in this example, the preprocessed images) from the camera 11b.
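The viewpoint conversion can be realized as a perspective (homography) warp. The following is a minimal sketch assuming that the four corners of the certain area are known both in the captured image and on a top-down grid; the output size and function name are illustrative assumptions.

    import cv2
    import numpy as np

    def top_down_view(image, corners_in_image, width_px=800, height_px=400):
        # corners_in_image: four pixel coordinates of the certain area (e.g. 9a),
        # ordered top-left, top-right, bottom-right, bottom-left.
        src = np.float32(corners_in_image)
        dst = np.float32([[0, 0], [width_px, 0], [width_px, height_px], [0, height_px]])
        homography = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(image, homography, (width_px, height_px))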
  • The background image creating means 15 creates an image of the liquid surface as it would appear if no batch mountain were present (the background image creation process), using the ranges 31a (the ranges corresponding to the certain area 9a) extracted by the image calibration means 16 from the plurality of preprocessed images sequentially generated by the preprocessing means 19.
  • The range 31a is the image portion corresponding to the certain area 9a, and shows the batch mountain against a background of bubbles. Further, since the moving speed and the melting speed of the batch mountain are moderate, a batch mountain is always (or very frequently) shown in the range 31a. It is therefore difficult to directly capture an image in which only bubbles (the background) are shown in the range 31a corresponding to the certain area 9a. For this reason, the background image creating means 15 uses the ranges 31a extracted from a plurality of images to create a background image in which no batch mountain is present.
  • Bubbles are present on the liquid surface where no batch mountain exists.
  • The batch mountain melts while gradually moving in the downstream direction. Therefore, a pixel that corresponds to the batch mountain in the range 31a extracted from one image may represent bubbles in the range 31a extracted from another image.
  • The background image creating means 15 therefore specifies, for each set of corresponding pixels in the ranges 31a extracted from the plurality of images (in other words, pixels corresponding to the same position in the certain area 9a), the luminance corresponding to the bubbles, and thereby creates an image in which no batch mountain is present and only the background of the batch mountain is represented.
  • That is, for each corresponding pixel or corresponding area, the luminance corresponding to the liquid surface on which bubbles are generated may be specified.
  • Here, an area means a region formed by a group of contiguous pixels.
  • The background image creating means 15 performs the same processing on the captured images (in this example, the preprocessed images) from the camera 11b.
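As one possible way to realize the background image creation described above, the following NumPy sketch takes, for each pixel, the most frequently observed luminance (the mode) over the extracted ranges as the bubble (background) luminance. The function name and the per-pixel loop are assumptions of this sketch, not the patent's implementation.

    import numpy as np

    def create_background_image(extracted_ranges):
        # extracted_ranges: list of equally sized 2-D uint8 arrays (the ranges 31a)
        stack = np.stack(extracted_ranges, axis=0)           # shape (N, H, W)
        background = np.empty(stack.shape[1:], dtype=np.uint8)
        for i in range(stack.shape[1]):
            for j in range(stack.shape[2]):
                counts = np.bincount(stack[:, i, j], minlength=256)
                background[i, j] = np.argmax(counts)         # most frequent luminance
        return background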
  • The difference calculating means 17 calculates the difference between corresponding pixels of two images. Specifically, the luminance value of the corresponding pixel in the background image is subtracted from the luminance value of each pixel of the image showing the batch mountain. By this subtraction process, an image in which the background portion has been removed from the image showing the batch mountain is obtained. However, the brightness of the bubbles varies somewhat, so the result of subtracting the luminance value of the corresponding pixel in the background image from the luminance of a pixel corresponding to a bubble in the image showing the batch mountain is not always zero.
  • Therefore, the difference calculating means 17 preferably subtracts the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the image showing the batch mountain and then performs a process of converting the subtraction result for each pixel into a value of "0" or "1". In this binarization process, the difference calculating means 17 may round the subtraction result for a pixel up to "1" if the result is equal to or greater than a predetermined value, and round it down to "0" if the result is less than the predetermined value. By performing this binarization process, the region corresponding to the batch mountain (the region with luminance value "1") and the region corresponding to the background (the region with luminance value "0") can be distinguished more clearly.
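A minimal sketch of the subtraction and binarization just described is shown below; the threshold of 30 is an assumed example of the "predetermined value", not a figure from the patent.

    import numpy as np

    def background_excluded_image(extracted, background, threshold=30):
        # extracted, background: equally sized 2-D uint8 arrays
        diff = extracted.astype(np.int16) - background.astype(np.int16)
        return (diff >= threshold).astype(np.uint8)   # 1 = batch mountain, 0 = background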
  • The observation data calculating means 18 calculates the observation data of the batch mountain from the image in which the background portion has been removed and the portion corresponding to the batch mountain remains.
  • Examples of the observation data include the position of the tip of the batch mountain, the movement speed of the batch mountain, the melting rate of the batch mountain (the rate of decrease of the batch mountain), and the occupancy ratio of the batch mountain in each of the certain areas 9a and 9b. Further, for these observation data, the difference between the value in the certain area 9a and the value in the certain area 9b may be calculated and used as observation data.
  • Further, the certain area 9a may be divided into two parts, a region on the side wall side and a region at the center in the width direction of the glass melting furnace, and the ratio of the batch mountain occupancy ratios in the two regions (hereinafter referred to as the inside/outside ratio) may be calculated as observation data.
  • The same ratio of the batch mountain occupancy ratios in the two regions (the inside/outside ratio) may also be calculated as observation data for the certain area 9b.
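As an illustration of how simple observation data could be read off the binarized background-excluded image, the following sketch computes an occupancy ratio and a tip position; the assumption that the downstream direction corresponds to increasing column index is made only for this example.

    import numpy as np

    def batch_observation_data(binary):
        # binary: 2-D array from the background-excluded image (1 = batch mountain, 0 = background)
        occupancy = float(binary.mean())                     # occupancy ratio of the batch mountain
        cols = np.where(binary.any(axis=0))[0]               # columns containing batch mountain pixels
        tip_column = int(cols.max()) if cols.size else None  # tip position, assuming downstream = larger column index
        return occupancy, tip_column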
  • The preprocessing means 19, the posture specifying means 14, the background image creating means 15, the image calibration means 16, the difference calculating means 17, and the observation data calculating means 18 are realized, for example, by the CPU of a computer operating according to a program.
  • For example, the CPU may read a program stored in a program storage device (not shown) of the computer and, in accordance with the program, operate as the preprocessing means 19, the posture specifying means 14, the background image creating means 15, the image calibration means 16, the difference calculating means 17, and the observation data calculating means 18.
  • The camera 11a periodically photographs the direction of the certain area 9a and sequentially inputs the images to the preprocessing means 19.
  • The preprocessing means 19 generates, for each fixed cycle (for example, a cycle of a few seconds), a preprocessed image based on the plurality of images input from the camera 11a within that cycle.
  • The preprocessing means 19 counts the number of edges in the image for each image input within one cycle. Here, an edge is a line that appears in an image.
  • The region in which the number of edges is counted may be limited to the region corresponding to the wall surface and the region corresponding to the certain area 9a.
  • Since the processing cycle of the preprocessing means 19 is short, in many cases the batch mountains shown in the images input from the camera 11a within one cycle do not change. If the batch mountains shown in the images do not change, the number of edges should remain at a roughly constant level as long as there is no influence from flames or raw material powder.
  • The preprocessing means 19 determines that the number of edges in an image is large when the counted number of edges satisfies the condition of being equal to or greater than a predetermined threshold, and selects such images. Conversely, when the counted number of edges is less than the threshold, the preprocessing means 19 determines that the number of edges in the image is small and does not select that image. Alternatively, the judgment criterion for the number of edges may be changed according to the edge-count results of the individual input images.
  • The case where the preprocessing means 19 selects a plurality of consecutive images has been described here as an example, but the plurality of images selected by the preprocessing means 19 need not be consecutive.
  • In general, the preprocessing means 19 may calculate a quantity representing the light-dark contrast in the image and select images satisfying a predetermined condition for that quantity.
  • The number of edges described above is an example of a quantity representing the light-dark contrast in an image.
  • The condition that the number of edges is equal to or greater than the threshold is an example of a condition predetermined for a quantity representing the light-dark contrast.
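For illustration only, edge-based selection could be sketched with OpenCV's Canny detector, counting edge pixels as a proxy for the number of edges; the Canny thresholds and the edge-count threshold below are assumed values chosen for the example, not values from the patent.

    import cv2
    import numpy as np

    def select_high_contrast_images(images, edge_count_threshold=5000):
        selected = []
        for img in images:                      # img: grayscale uint8 frame from one cycle
            edges = cv2.Canny(img, 50, 150)     # assumed Canny thresholds
            if int(np.count_nonzero(edges)) >= edge_count_threshold:
                selected.append(img)            # keep images in which many edges appear
        return selected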
  • An example in which the preprocessing means 19 selects images by a method other than the selection method based on the number of edges is described below.
  • The preprocessing means 19 may calculate, for each image input from the camera 11a, the standard deviation of the luminance values as the quantity representing the light-dark contrast of the image.
  • For example, the preprocessing means 19 may calculate the standard deviation of the luminance values of all the pixels in the entire image.
  • Alternatively, an area in which the boundary lines between the bricks are shown may be determined in advance, and the preprocessing means 19 may calculate the standard deviation of the luminance values within that area of the image.
  • The condition for selecting images may be that images generated from the occurrence of an event in which the contrast quantity of an image falls below that of the preceding image by a certain value or more, until a certain time has elapsed, are excluded, and the images remaining without being excluded are selected.
  • That is, when the standard deviation of the luminance values of a certain image is lower than the standard deviation of the luminance values of the preceding image by a certain value or more, the preprocessing means 19 may exclude the images generated until a certain period has elapsed from that point from the subsequent processing targets, and select the images remaining without being excluded. Then, the preprocessing means 19 generates a preprocessed image from the selected plurality of images. Note that the fact that the quantity representing the light-dark contrast in the image has decreased by a certain value or more means that the contrast has suddenly dropped, and it can be considered that a phenomenon such as raw material powder rising has occurred.
  • In the following description, the case where the preprocessing means 19 selects images based on the number of edges in the image is used.
  • The preprocessing means 19 generates the preprocessed image by determining the luminance value of each pixel in the preprocessed image using the selected plurality of images. Among the selected images, attention is paid to corresponding pixels (pixels having the same image coordinates), and the minimum luminance value among those pixels is specified. Then, the preprocessing means 19 adopts that luminance value as the luminance value of the corresponding pixel in the preprocessed image. For example, the preprocessing means 19 reads the luminance value at the image coordinates (x1, y1) of each selected image and specifies the minimum of those luminance values at the image coordinates (x1, y1).
  • The preprocessing means 19 determines that minimum luminance value as the luminance value at the image coordinates (x1, y1) of the preprocessed image. The preprocessing means 19 performs this processing for every pixel, and stores the generated preprocessed image in the image storage means 12. The preprocessing means 19 repeats this processing at a fixed cycle. In this way, preprocessed images generated based on the images captured by the camera 11a are sequentially accumulated in the image storage means 12.
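A minimal sketch of this per-pixel minimum composition over the selected images (the function name is illustrative):

    import numpy as np

    def make_preprocessed_image(selected_images):
        # selected_images: list of equally sized grayscale frames chosen as described above
        stack = np.stack(selected_images, axis=0)
        return stack.min(axis=0)   # per-pixel minimum suppresses bright flames and floating powder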
  • the "plurality of continuous images are kept count result many state of the edge" other images may be ignored.
  • the camera 11 a is explained an exemplary case of using the image taken, the camera 11 b also periodically capturing a predetermined region 9 b direction, and inputs the image sequence, the pre-processing means 19 .
  • the preprocessing unit 19 similarly generates a preprocessed image from the image captured by the camera 11 b and stores it in the image storage unit 12.
  • a plurality of continuous images maintaining a state in which the edge count result is large are images in which the frame and the raw material powder are not shown so much. This is because, in an image in which many frames and floating raw material powder are shown, batch peaks and side walls become unclear and the number of edges in the image decreases.
  • the luminance value of a portion corresponding to the frame in the image is a high value. Accordingly, as described above, by selecting a plurality of images in which the frame and the raw material powder are not shown so much, and by specifying the minimum luminance value among the corresponding pixels in those images, the frame and the raw material powder are not shown. The luminance value in the state image can be selected.
  • the preprocessing means 19 Since determining the preprocessed image as an image having such a brightness value, a part of the image the camera 11 a is taken, even captured the raw material powder and a frame suspended in the furnace, such raw material powder and the frame Can be generated. That is, it is possible to obtain an image that clearly shows the batch mountain to be monitored.
  • the operation in which the preprocessing means 19 generates a preprocessed image corresponds to a preprocessing step.
  • the image processing device 13 may store the images taken by the cameras 11 a and 11 b in the image storage unit 12 as they are.
  • FIG. 8 is a flowchart illustrating an example of the processing flow of the camera posture determination process.
  • The case where the posture specifying means 14 stores images of a plurality of reference patterns and their image coordinates will be described as an example.
  • The preprocessing means 19 generates, at every fixed cycle (for example, a cycle of a few seconds), a preprocessed image from the images taken by the camera 11a and stores the image in the image storage means 12.
  • The posture specifying means 14 periodically reads a plurality of captured images stored in the image storage means 12 (in this example, preprocessed images generated based on images captured by the camera 11a) and performs a process of determining whether a deviation has occurred in the posture of the camera 11a.
  • The processing cycle of the posture specifying means 14 is longer than that of the preprocessing means 19.
  • For example, whereas the processing cycle of the preprocessing means 19 is a few seconds, the processing cycle of the posture specifying means 14 may be several hours.
  • When the processing start timing comes, the posture specifying means 14 reads the latest predetermined number of captured images (preprocessed images generated based on images captured by the camera 11a) stored in the image storage means 12.
  • The predetermined number may be determined in advance.
  • The posture specifying means 14 generates an average image from the predetermined number of captured images (preprocessed images) that were read (step S1). Specifically, the posture specifying means 14 may calculate the average luminance value for each corresponding pixel over the predetermined number of read captured images, generate an image that uses those average values as its luminance values, and use that image as the average image. In this example the case where an average image is generated is illustrated, but an intermediate (median) luminance value may instead be calculated for each corresponding pixel, and an image (intermediate-value image) having those values as luminance values may be generated.
  • Although the case where an average image is generated from a plurality of images in step S1 is exemplified here, the processing from step S2 onward may be performed on a single image stored in the image storage means 12. That is, the process of step S1 may be omitted.
  • Next, the posture specifying means 14 performs pattern matching between the average image generated in step S1 and each of the plurality of reference patterns it stores in advance (step S2).
  • In step S2, the posture specifying means 14 calculates the similarity between each reference pattern image stored in advance and each location in the average image. Then, the location in the image with the highest degree of similarity (in this example, the location where the similarity value is smallest) is specified. For example, if the reference pattern image illustrated in FIG. 4A and its image coordinates are stored in advance, the posture specifying means 14 specifies the location in the average image whose similarity value with respect to the reference pattern image illustrated in FIG. 4A is smallest.
  • The posture specifying means 14 then specifies, for example, the image coordinates of the central pixel of the specified location. That is, the posture specifying means 14 finds in the average image the portion most similar to the reference pattern image illustrated in FIG. 4A and specifies, for example, the image coordinates of its central pixel.
  • The posture specifying means 14 performs this process for each reference pattern image stored in advance.
  • The similarity may be calculated by a known method.
  • Examples of the similarity include the SSD (Sum of Squared Differences) and the SAD (Sum of Absolute Differences).
  • The SSD is the sum of the squared differences in luminance value between corresponding pixels of the pair of images subjected to the similarity calculation. Therefore, the posture specifying means 14 may calculate the SSD by computing, for each pair of corresponding pixels in the two images, the square of the difference between their luminance values and then summing these values.
  • The SAD is the sum of the absolute differences in luminance value between corresponding pixels of the pair of images subjected to the similarity calculation.
  • The posture specifying means 14 may calculate the SAD by computing the absolute difference in luminance value for each pair of corresponding pixels in the two images and then summing these values.
  • Alternatively, the posture specifying means 14 may compute an XOR (exclusive OR) for each pair of corresponding pixels in the two images, sum the results, and use the total as the similarity.
  • The SSD, the SAD, and the total of the XOR values over the pixel pairs are similarities whose values decrease as the images become more similar.
  • Alternatively, the posture specifying means 14 may calculate a normalized cross-correlation (NCC) as the similarity.
  • The normalized cross-correlation approaches 1 as the images become more similar. Therefore, when the normalized cross-correlation is used as the similarity, the posture specifying means 14 may specify the location where the similarity (normalized cross-correlation) is closest to 1.
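For illustration, minimal NumPy versions of these similarity measures are shown below, under the assumption that both inputs are equally sized grayscale patches:

    import numpy as np

    def ssd(a, b):
        d = a.astype(float) - b.astype(float)
        return float((d * d).sum())                 # smaller value = more similar

    def sad(a, b):
        return float(np.abs(a.astype(float) - b.astype(float)).sum())

    def ncc(a, b):
        a = a.astype(float) - a.mean()
        b = b.astype(float) - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))  # closer to 1 = more similar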
  • Next, for each reference pattern, the posture specifying means 14 compares the image coordinates of the portion with the highest degree of similarity specified in step S2 (in this example, the portion with the smallest similarity value) with the image coordinates stored in advance for that reference pattern.
  • That is, the posture specifying means 14 calculates the difference between the two sets of image coordinates, in other words, the distance between them.
  • The posture specifying means 14 compares the distance between the image coordinates specified in step S2 and the image coordinates stored in advance with a threshold value; if the distance between the coordinates is equal to or greater than the threshold value, it determines that the posture of the camera has shifted.
  • When the posture specifying means 14 stores a plurality of reference patterns in advance, it calculates the inter-coordinate distance (the difference between the image coordinates specified in step S2 and the image coordinates stored in advance) for each reference pattern.
  • The criterion for comparing the plurality of distances with the threshold value and determining whether the posture of the camera has shifted is not particularly limited.
  • For example, it may be determined that a deviation has occurred in the posture of the camera on the condition that a predetermined number or more of the inter-coordinate distances calculated for the respective reference patterns are equal to or greater than the threshold value.
  • Alternatively, it may be determined that a deviation has occurred in the posture of the camera on the condition that all the inter-coordinate distances are equal to or greater than the threshold value.
  • Here, two criteria have been illustrated, but whether a deviation has occurred in the posture of the camera may be determined according to other criteria.
  • Next, the posture specifying means 14 replaces the image coordinates stored in advance in each set of reference pattern image and image coordinates with the image coordinates specified in step S2, thereby updating the stored image coordinates (step S4). That is, the posture specifying means 14 updates the stored image coordinates, which are held in combination with the reference pattern image, to the image coordinates of the portion specified in the average image as corresponding to the reference pattern (in the above example, the image coordinates of the central pixel of that portion). By the process of step S4, the coordinates (image coordinates) of the reference pattern in the average image are updated in accordance with the deviation of the camera posture. However, the posture specifying means 14 also uses the image coordinates before the update in the process of step S5, so the image coordinates before the update are retained until they are used in step S5.
  • the posture specifying means 14 estimates the posture of the camera 11a using the reference point (step S5).
  • Specifically, the posture specifying means 14 may perform the following processing. Based on the image coordinates of the reference pattern before the update (the image coordinates stored in advance) and the image coordinates of the reference pattern after the update, the posture specifying unit 14 calculates by how much, and in which direction, the reference pattern has shifted in the image. When there are a plurality of reference patterns, for example, the deviation amounts and deviation directions of the individual reference patterns may be averaged, and those averages may be used as the deviation amount and deviation direction of the reference pattern. Alternatively, the deviation amount and deviation direction of the reference pattern may be determined on another basis.
  • Next, the posture specifying means 14 shifts the image coordinates of the reference points stored in advance according to the deviation direction and deviation amount of the reference pattern. That is, the coordinate values of the image coordinates of the reference points are updated in accordance with the deviation of the reference pattern before and after the update. Then, for various candidate postures of the camera 11a, the posture specifying section 14 calculates the image coordinates of the individual reference points from the three-dimensional coordinates of each reference point in real space. The posture specifying section 14 then identifies the posture for which the image coordinates calculated from the three-dimensional coordinates of the reference points are closest to the updated image coordinates of the reference points, and determines that posture to be the posture of the camera 11a. The posture estimation process (that is, the process of step S5) then ends.
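The posture estimation in step S5 is described as searching over candidate camera postures for the one whose projected reference-point coordinates best match the updated image coordinates. Below is a minimal brute-force sketch of such a search; the pinhole projection model, the candidate posture set, and the squared-distance criterion are our assumptions, not details taken from the patent.

```python
import numpy as np

def project(points_3d, rotation, translation, fx, fy, cx, cy):
    """Project 3-D reference points into the image with a simple pinhole model."""
    cam = points_3d @ rotation.T + translation      # world -> camera coordinates
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

def estimate_posture(candidates, ref_points_3d, updated_image_coords, fx, fy, cx, cy):
    """Pick the candidate posture (R, t) whose projected reference points are
    closest (in total squared distance) to the updated image coordinates."""
    best, best_err = None, np.inf
    for rotation, translation in candidates:
        proj = project(ref_points_3d, rotation, translation, fx, fy, cx, cy)
        err = float(np.sum((proj - updated_image_coords) ** 2))
        if err < best_err:
            best, best_err = (rotation, translation), err
    return best
```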
  • Next, the posture specifying unit 14 replaces the stored reference pattern image with the image of the portion of the average image identified in step S2 as corresponding to the reference pattern (step S6). That is, the image of the portion specified in step S2 as corresponding to the reference pattern is extracted from the average image, and that image is stored as the new reference pattern image.
  • The posture specifying means 14 performs this process for each reference pattern. By the processing in step S6, the reference pattern image in each set of a reference pattern image and image coordinates stored in advance by the posture specifying means 14 is updated.
  • The state of the side wall in the glass melting furnace gradually changes, so the degree of similarity between the portion corresponding to the reference pattern in the image and the reference pattern image stored in the posture specifying means 14 may decrease.
  • For example, raw material powder gradually adheres to the corner portion, so the appearance of the reference pattern portion in the photographed image gradually changes.
  • Even in such a case, the next pattern matching can be performed accurately because the stored reference pattern image is updated based on the result of pattern matching in the average image.
  • For example, the reference pattern image illustrated in FIG. 4 and stored in advance can be gradually updated to a reference pattern image with rounded corners.
  • As a result, the next pattern matching can be performed accurately, and the posture determination of the camera can also be performed accurately.
  • The posture specifying means 14 may perform the processes from step S1 onward at regular intervals on the preprocessed images that are generated based on the images taken by the camera 11a and stored in the image storage means 12. Similarly, the processes from step S1 onward may be performed at regular intervals on the preprocessed images generated based on the images taken by the camera 11b and stored in the image storage unit 12.
  • Alternatively, the posture specifying unit 14 may perform the processes from step S1 onward at regular intervals directly on the images captured by the camera 11a, and likewise on the images captured by the camera 11b.
  • FIG. 9 is a flowchart showing an example of the processing progress of this operation.
  • In the following, the case where the image processing apparatus 13 processes the preprocessed image generated based on an image captured by the camera 11a is described as an example; the same processing is performed on the preprocessed image generated based on an image captured by the camera 11b.
  • First, the image calibration unit 16 reads a plurality of captured images of the camera 11a (in this example, preprocessed images) stored in the image storage unit 12, in order from the newest. The number of captured images to be read at this time may be determined in advance. Then, the image calibration unit 16 extracts, from each of the captured images, the range 31a corresponding to the fixed region 9a in real space (see FIG. 6) (step S10). The image shown in the extracted range 31a (hereinafter referred to as an extracted image) shows the batch mountain against a background of bubbles.
  • The image calibration unit 16 may extract the range 31a corresponding to the fixed region 9a in real space from each captured image based on the posture of the camera 11a at the time the image was captured.
  • Step S10 corresponds to a region extraction step.
  • Next, the background image creating means 15 creates, based on the extracted images extracted from the plurality of photographed images, an image of what would be seen if no batch mountain were present, that is, a background image serving as the background of the batch mountain (step S11). In step S11, a background image is created whose pixels have the same image coordinates as those of the extracted image from the latest photographed image and whose luminance values represent the bubbles. Step S11 corresponds to a background image creation step.
  • FIG. 10 is a flowchart showing an example of processing progress of the background image creation processing in step S11.
  • The background image creation means 15 selects each pixel of the extracted image extracted from the latest photographed image in turn and determines, based on the luminance value of the selected pixel and the luminance values of the corresponding pixels in the other extracted images, a luminance value representing the background at that pixel. As a result, a background image of the scene with no batch mountain present is obtained.
  • This process is described with reference to FIG. 10. Here, the case where the luminance value representing the background is determined for each pixel is described as an example, but the background image creating unit 15 may instead determine the luminance value representing the background for each area of the extracted image.
  • First, the background image creating means 15 selects one pixel from the pixels of the extracted image extracted from the latest photographed image (step S21). Then, from the extracted images extracted from the other photographed images in step S10 (see FIG. 9), the background image creating unit 15 extracts the pixel corresponding to the selected pixel (that is, the pixel corresponding to the same position in the fixed region 9a) (step S22).
  • Next, for the pixel selected in step S21 and the corresponding pixels in the other extracted images (that is, the pixels obtained in step S22), the background image creating means 15 counts, for each luminance value, the number of pixels having that luminance value (step S24). The process of step S24 can be regarded as a histogram creation process.
  • Next, the background image creating means 15 evaluates the variation of the luminance values within the range of luminance values in which the pixel count (frequency) is large (step S25).
  • The range of luminance values with a large count is, for example, a continuous range of luminance values whose counts are equal to or greater than a threshold value determined for the count.
  • FIGS. 11 and 12 are examples of histograms obtained as a result of step S24. In the example shown in FIG. 11, the range of luminance values with a large pixel count is k1 to k2. In the example shown in FIG. 12, the range of luminance values with a large pixel count is k3 to k4.
  • As the evaluation value of the variation, the standard deviation or the variance of the luminance values of the pixels counted within such a range may be used.
  • Alternatively, the width of the range of luminance values with a large pixel count may be used as the evaluation value.
  • When the standard deviation, the variance, or the width of the range with a large pixel count is used as the evaluation value, a smaller evaluation value means a smaller variation in luminance value.
  • Another index value may also be used as the evaluation value of the variation.
  • Next, based on the evaluation value calculated in step S25, the background image creating means 15 determines whether or not the variation of the luminance values within the range of luminance values with a large pixel count is large (step S26). In step S26, it suffices to determine whether the variation is large by comparing the evaluation value with a predetermined threshold value (a threshold value for the evaluation value of variation). For example, when the standard deviation of the luminance values is calculated as the evaluation value, it can be determined that the variation is large if the evaluation value is equal to or greater than the threshold value, and that the variation is small if the evaluation value is less than the threshold value.
  • the threshold value may be determined in advance according to an index value (standard deviation, variance, etc.) adopted as the evaluation value.
  • When it is determined that the variation of the luminance values is small, the background image creating unit 15 determines the most frequent luminance value within the range of luminance values with a large count (step S28).
  • FIG. 11 is an example of a histogram in which the variation of the luminance values is small. Taking FIG. 11 as an example, the range of luminance values with a large count is k1 to k2, and the most frequent luminance value within this range (the luminance value with the maximum pixel count) is S. Therefore, in step S28, the background image creation means 15 specifies the value S and determines S as the luminance value of the pixel at the coordinates selected in step S21.
  • In this way, the mode luminance value S can be determined as the luminance value of the bubbles that form the background.
  • Alternatively, the average of the luminance values of the pixels falling in the range k1 to k2 with a large count may be calculated, and that average may be determined as the luminance value representing the background.
  • Alternatively, the median of the luminance value range k1 to k2 may be determined as the luminance value representing the background.
  • On the other hand, when it is determined that the variation of the luminance values is large, the background image creation unit 15 calculates the average of the luminance values of the pixels corresponding to luminance values larger than the discrimination reference value within the range of luminance values with a large count (step S27).
  • FIG. 12 is an example of a histogram in which the variation of the luminance values is large. Taking FIG. 12 as an example, the range of luminance values with a large count is k3 to k4, and the discrimination reference value is assumed to be T. In this case, the background image creating means 15 calculates the average of the luminance values of the pixels whose luminance values fall in the range from T to k4, that is, the values greater than T.
  • The background image creating means 15 determines this average as the luminance value of the pixel at the coordinates selected in step S21. When the variation of the luminance values at the selected coordinates is large, it can be said that a batch mountain appears at those coordinates in some images and background bubbles appear in others, and the luminance value of a bubble is larger than the luminance value of a batch mountain. Therefore, the average of the luminance values of the pixels in the range above the discrimination reference value can be determined as the luminance value of the bubbles serving as the background.
  • Instead of calculating the average value as described above, the most frequent luminance value within the range above the discrimination reference value (the range from T to k4 in the example shown in FIG. 12) may be determined, and that mode may be used as the luminance value of the pixel at the selected coordinates.
  • Alternatively, the median of the range from T to k4 may be determined as the luminance value of the pixel at the selected coordinates.
  • The discrimination reference value is a threshold value for separating the range with a large variation (here, a range of luminance values) into two, and corresponds to the threshold value in the discriminant analysis binarization method described in Non-Patent Document 3. Therefore, the threshold value that maximizes the ratio of the between-class variance to the within-class variance for the background region and the batch mountain region may be used as the discrimination reference value T.
  • An example in which the luminance value range k3 to k4 is divided into two classes by the discriminant analysis binarization method has been shown here, but the range k3 to k4 may be divided into two classes by another method.
  • For example, the luminance value range k3 to k4 may be divided into two classes by the mode method, by fitting two normal distributions, or the like. Then, the luminance value of the pixel at the selected coordinates may be determined from the class with the higher luminance values in the same manner as described above. A per-pixel sketch of this background determination is given below.
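Steps S21 to S28 determine, for each pixel, a background (bubble) luminance from the histogram of that pixel's values across the extracted images: the mode is used when the variation is small, and when the variation is large only the values above a discrimination reference value (here realized with an Otsu-style threshold) are averaged. The sketch below follows that flow; the bin counts, threshold values, and the specific use of Otsu's method are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold maximizing the between-class variance (discriminant analysis binarization)."""
    hist, edges = np.histogram(values, bins=bins)
    total = hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    mean_all = np.sum(hist * centers) / total
    best_t, best_var = edges[0], -1.0
    cum, cum_mean = 0.0, 0.0
    for i in range(bins):
        cum += hist[i]
        cum_mean += hist[i] * centers[i]
        if cum == 0 or cum == total:
            continue
        w0, w1 = cum / total, 1.0 - cum / total
        m0 = cum_mean / cum
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[i + 1]
    return best_t

def background_luminance(pixel_values, count_threshold=3, spread_threshold=10.0):
    """Background luminance for one pixel from its values over many extracted images."""
    values = np.asarray(pixel_values, dtype=np.float64)
    hist, edges = np.histogram(values, bins=64)
    busy = hist >= count_threshold                     # bins with a large count
    if not busy.any():
        busy = hist > 0
    lo = edges[np.argmax(busy)]
    hi = edges[len(busy) - np.argmax(busy[::-1])]
    in_range = values[(values >= lo) & (values <= hi)]
    if in_range.std() < spread_threshold:              # small variation -> mode (step S28)
        h, e = np.histogram(in_range, bins=64)
        return (e[np.argmax(h)] + e[np.argmax(h) + 1]) / 2
    t = otsu_threshold(in_range)                       # large variation -> average above T (step S27)
    above = in_range[in_range > t]
    return above.mean() if above.size else in_range.mean()
```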
  • The background image creating unit 15 performs the processing described above with reference to the flowchart of FIG. 10 for each pixel and determines the luminance value obtained in step S27 or step S28 as the luminance value of the pixel of the background image corresponding to the pixel selected in step S21. As a result, an image in which the batch mountain has been removed from the extracted image extracted from the latest photographed image is obtained. This image is the background image as seen from the viewpoint of the camera 11a.
  • the background image creating means 15 may determine a luminance value representing the background for each area obtained by dividing the extracted image.
  • the background image creating means 15 selects one area from the extracted image extracted from the latest photographed image. There is no particular limitation on how to define the area.
  • In that case, in step S22, the background image creating unit 15 extracts, from the extracted images extracted from the other captured images, the area corresponding to the selected area (the area corresponding to the same portion of the fixed region 9a).
  • In step S24 and the subsequent steps, a histogram is created for the pixels belonging to the area selected in step S21 and to the corresponding areas (the areas obtained in step S22), the evaluation value of the variation of the luminance values is calculated, and the luminance value is determined according to whether or not the variation is large (steps S24 to S28).
  • The background image creating means 15 performs this processing for each area obtained by dividing the extracted image and determines the luminance value obtained in step S27 or step S28 as the luminance value of every pixel of the background image in the area corresponding to the area selected in step S21.
  • Next, the image calibration unit 16 converts the background image obtained by the background image creation process (step S11) into an image as observed from directly above the fixed region 9a (step S12). That is, for the background image obtained in step S11, a viewpoint conversion process is performed that changes the viewpoint from the position of the camera 11a to a position directly above the fixed region 9a, and a background image as seen from that viewpoint is created. As a result, an image of the fixed region 9a as observed from directly above, with no batch mountain present, is obtained.
  • Step S12 corresponds to a background image conversion step.
  • Next, the extracted image extracted from the latest captured image in step S10 is converted into an image as observed from directly above the fixed region 9a (step S13). That is, for the extracted image extracted from the latest captured image, a viewpoint conversion process is performed that changes the viewpoint from the position of the camera 11a to a position directly above the fixed region 9a, converting it into an image as seen from that viewpoint.
  • This converted image shows a batch mountain and background.
  • the conversion process in steps S12 and S13 is a similar conversion process.
  • Step S13 corresponds to an extracted image conversion step.
  • the image calibration unit 16 may perform correction so that the sizes of the images after the conversion in steps S12 and S13 are made uniform.
  • step S14 described later may be executed using the converted image obtained in steps S12 and S13 as it is.
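Steps S12 and S13 convert the background image and the extracted image to a viewpoint directly above the fixed region. The patent does not specify how this conversion is implemented; since the observed liquid surface is roughly planar, one common realization is a planar homography, as in the OpenCV-based sketch below, where the four corner correspondences are assumed to be known from the estimated camera posture.

```python
import cv2
import numpy as np

def to_overhead_view(image, corners_in_image, out_size=(400, 300)):
    """Warp an extracted (or background) image so the fixed region is seen from directly above.

    corners_in_image: 4x2 array of the region's corners in the camera image,
    ordered top-left, top-right, bottom-right, bottom-left (assumed known from the
    estimated camera posture)."""
    w, h = out_size
    src = np.asarray(corners_in_image, dtype=np.float32)
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    homography = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, homography, (w, h))
```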
  • Each time the image processing apparatus 13 executes the processing from step S10 to step S13, the image calibration unit 16 may store the image obtained in step S12 (the background image of the fixed region 9a as observed from directly above) and the image obtained in step S13 (the image of the fixed region 9a as observed from directly above), accumulating a plurality of each. Then, each time step S12 is executed, the image calibration means 16 may select the latest predetermined number of images obtained in step S12 and synthesize them (for example, generate an average image), and may likewise select and synthesize the latest predetermined number of images obtained in step S13 each time that step is executed.
  • In that case, step S14, described later, may be executed using the composite of the background images obtained in the executions of step S12 and the composite of the images obtained in the executions of step S13, rather than the individual images obtained in steps S12 and S13.
  • When synthesizing the plurality of images obtained in the executions of step S13, the image calibration unit 16 may, for example, calculate the average of the luminance values of the corresponding pixels in the plurality of images and use that average as the luminance value of the corresponding pixel in the composite image.
  • Alternatively, the minimum of the luminance values of the corresponding pixels may be identified, and that minimum may be used as the luminance value of the corresponding pixel in the composite image.
  • the image calibration unit 16 may generate a composite image by performing the same process when combining a plurality of images obtained each time step S12 is executed.
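As described above, a composite of the latest several converted images may be formed either by averaging the corresponding pixels or by taking their minimum. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def composite(images, mode="average"):
    """Combine corresponding pixels of equally sized images into one composite image."""
    stack = np.stack([img.astype(np.float64) for img in images], axis=0)
    if mode == "average":
        return stack.mean(axis=0)
    if mode == "minimum":
        return stack.min(axis=0)
    raise ValueError("mode must be 'average' or 'minimum'")
```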
  • Alternatively, the processing from step S14 onward may be performed using the images obtained in steps S12 and S13 as they are, without generating the composite images described above. Further, when calculating the movement speed of the batch mountain, the processing from step S10 onward is performed using the images themselves taken by the cameras 11a and 11b.
  • the difference calculation means 17 calculates a difference in luminance value between corresponding pixels between the image after the conversion at Step S13 and the background image after the conversion at Step S12 (Step S14).
  • the image after the conversion in step S13 may be one image obtained in step S13 or a composite image of a plurality of images obtained each time step S13 is executed.
  • the background image after the conversion in step S12 may be one image obtained in step S12 or a composite image of a plurality of images obtained each time step S12 is executed. .
  • In step S14, the difference calculation means 17 subtracts the luminance value of each pixel of the background image after the conversion in step S12 from the luminance value of the corresponding pixel of the image after the conversion in step S13 (the image showing the batch mountain and the background).
  • the difference calculation means 17 performs this subtraction process for each pair of corresponding pixels.
  • FIG. 13 shows an example of the image after the conversion in step S13.
  • the background and the batch mountain 10 are shown.
  • FIG. 14 shows an example of the background image after the conversion in step S12.
  • FIG. 15 shows an example of an image obtained as a result of performing the process of step S14 on these two images.
  • the brightness value of the pixel corresponding to the background is not always 0 after the process of step S14.
  • the difference calculation means 17 performs a binarization process on the image (see FIG. 15) obtained in step S14 (step S15). That is, for each pixel in the image, the difference calculation means 17 performs a process of replacing a luminance value equal to or higher than a predetermined threshold for binarization processing with “1” and replacing a luminance value less than the threshold with “0”. . Since the luminance value of the pixel corresponding to the background has become a value near 0 by the subtraction process in step S14, it becomes “0” by the binarization process. Further, the luminance value of the pixel corresponding to the batch crest 10 is set to “1” by the binarization process because the value is not greatly reduced by the subtraction process in step S14.
  • The binarized image, created based on the extracted image extracted from the latest captured image, represents the position of the batch mountain in the fixed region 9a. Note that this image shows the fixed region 9a as observed from directly above and contains no information on the height of the batch mountain.
  • the difference calculation means 17 stores the image generated in step S15 (hereinafter, binarized image). Steps S14 and S15 correspond to a background excluded image generation step.
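Steps S14 and S15 subtract the converted background image from the converted extracted image pixel by pixel and then binarize the result with a predetermined threshold. A sketch under the assumption that both images are aligned, equally sized grayscale arrays and with an assumed example threshold:

```python
import numpy as np

def background_excluded_image(extracted, background, threshold=30):
    """Steps S14-S15 sketch: per-pixel difference followed by binarization.

    Pixels whose luminance is not greatly reduced by the subtraction become 1
    (batch mountain); pixels whose difference stays near 0 become 0 (background).
    The threshold value is an assumed example."""
    diff = extracted.astype(np.float64) - background.astype(np.float64)
    return (diff >= threshold).astype(np.uint8)
```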
  • Next, the observation data calculating unit 18 calculates observation data on the batch mountain present in the fixed region 9a using the binarized image generated in step S15 (step S16).
  • The observation data may be calculated using not only the most recently generated binarized image but also binarized images continuing from the past. The generation of the binarized image has been described here for the fixed region 9a, but the image processing apparatus 13 also generates a binarized image for the fixed region 9b based on the images captured by the camera 11b.
  • the observation data calculation means 18 may calculate observation data based on the binarized images of the fixed areas 9 a and 9 b . Step S16 corresponds to an observation data calculation step.
  • FIG. 17 is an explanatory diagram showing the fixed regions 9a and 9b each divided into two equal parts: a region on the side wall 6 side and a region on the center side of the glass melting furnace. Elements similar to those shown in FIG. 1 are given the same reference numerals as in FIG. 1. Regions 51 and 52 bisect the fixed region 9a into a region on the side wall 6 side and a region on the center side; the region 51 is the region on the side wall 6 side, and the region 52 is the region on the center side.
  • Similarly, regions 41 and 42 bisect the fixed region 9b into a region on the side wall 6 side and a region on the center side; the region 41 is the region on the side wall 6 side, and the region 42 is the region on the center side.
  • As the inside/outside ratio for the fixed region 9a, the observation data calculating means 18 may calculate an evaluation value representing the ratio between the occupancy of the batch mountain in the region 51 and the occupancy of the batch mountain in the region 52.
  • Similarly, as the inside/outside ratio for the fixed region 9b, it may calculate an evaluation value representing the ratio between the occupancy of the batch mountain in the region 41 and the occupancy of the batch mountain in the region 42.
  • the observation data calculation means 18 may calculate the evaluation value represented by the following formula (1) as the internal / external ratio.
  • In formula (1), Q and R are expressed as percentages and each take values in the range of 0 to 100.
  • α is a constant; for example, α may be 100.
  • the inside / outside ratio is a value in the range of ⁇ 0.5 to 0.5.
  • the observation data calculation means 18 may calculate the inside / outside ratio for each of the fixed regions 9 a and 9 b .
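Formula (1) itself is not reproduced in this text. The sketch below is only an illustrative computation of an inside/outside ratio from a binarized overhead image, under the assumption that Q and R are the batch-mountain occupancies (in percent) of the side-wall-side half and the center-side half and that the evaluation value is scaled by the constant α so that it lies in the range −0.5 to 0.5. The concrete expression (Q − R) / (2α) is our assumption, chosen only because it is consistent with the stated ranges; it is not taken from the patent.

```python
import numpy as np

def inside_outside_ratio(binary_region, alpha=100.0):
    """Illustrative inside/outside ratio for a binarized fixed region (1 = batch mountain).

    The region is split into a side-wall-side half and a center-side half along axis 1;
    Q and R are the batch-mountain occupancies of the two halves in percent.
    The combination (Q - R) / (2 * alpha) is an assumed stand-in for formula (1)."""
    half = binary_region.shape[1] // 2
    wall_side, center_side = binary_region[:, :half], binary_region[:, half:]
    q = 100.0 * wall_side.mean()     # occupancy of the side-wall-side half, percent
    r = 100.0 * center_side.mean()   # occupancy of the center-side half, percent
    return (q - r) / (2.0 * alpha)   # lies in [-0.5, 0.5] when alpha = 100
```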
  • the raw material in the solid state may flow out of the glass melting furnace without being melted, and in this case, the quality of the glass is deteriorated. It is possible to confirm whether or not the solid state raw material is too close to the side wall 6 by the inside / outside ratio.
  • the glass melting furnace may be operated so that the batch mountain approaches the center.
  • observation data calculation means 18 may calculate the occupancy ratio of the batch mountain in each of the constant regions 9 a and 9 b .
  • observation data calculation means 18 may calculate the tip position of the batch mountain (for example, the coordinates of the tip position of the batch mountain) in each of the fixed regions 9 a and 9 b .
  • For example, the observation data calculating means 18 divides the fixed region 9a in the direction perpendicular to the traveling direction of the melting raw material and calculates the area of the batch mountain in each divided region. Then, assuming that the change in the batch mountain area from an upstream divided region toward a downstream divided region is linear, it calculates the position where the batch mountain area becomes 0, and that position may be determined as the tip position of the batch mountain. The same applies to the fixed region 9b.
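A sketch of the tip-position estimate described above: the fixed region is cut into strips perpendicular to the flow direction, the batch-mountain area in each strip is measured, and the position where a linear fit of the area reaches zero is taken as the tip. The strip count and the use of a least-squares fit are our assumptions.

```python
import numpy as np

def batch_tip_position(binary_region, n_strips=10):
    """Estimate the downstream tip position (in pixels along axis 0) of the batch mountain.

    Axis 0 is assumed to point downstream; the region is cut into n_strips strips
    perpendicular to the flow, and the strip areas are extrapolated linearly to zero."""
    strips = np.array_split(binary_region, n_strips, axis=0)
    centers, areas, row = [], [], 0
    for s in strips:
        centers.append(row + s.shape[0] / 2.0)
        areas.append(float(s.sum()))
        row += s.shape[0]
    slope, intercept = np.polyfit(centers, areas, 1)
    if slope >= 0:                       # area does not decrease downstream; no tip found
        return None
    return -intercept / slope            # position where the fitted area becomes zero
```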
  • the glass melting furnace may be operated so that the tip position of the batch peak returns to the upstream side.
  • The observation data calculating means 18 may also calculate, as observation data, the difference between the value of the observation data in the fixed region 9a on the right side as viewed from the upstream side and the value of the observation data in the fixed region 9b on the left side as viewed from the upstream side.
  • For example, the observation data calculating means 18 may calculate the difference between the occupancy of the batch mountain in the fixed region 9a and the occupancy of the batch mountain in the fixed region 9b.
  • Likewise, it may calculate the difference between the tip position of the batch mountain in the fixed region 9a and the tip position of the batch mountain in the fixed region 9b.
  • the difference in the values of the observation data in the fixed regions 9 a and 9 b is referred to as a left / right difference.
  • By using this left/right difference as one of the observation data, it is possible to confirm whether the state of the solid raw material is biased between the right side and the left side as viewed from the upstream side. For example, a situation in which melting progresses only on one of the right and left sides as viewed from the upstream side while being delayed on the other side can be confirmed, and how to operate the glass melting furnace can be judged according to that situation.
  • For example, when it is determined from the observation data calculated in step S16 that melting of the raw material is delayed in one of the fixed regions, an operation such as increasing the amount of fuel supplied to the burner on the side wall near that region (that is, increasing the burner's heating power) may be performed.
  • the observation data calculation unit 18 may calculate the left-right difference regarding other observation data.
  • The occupancy of the batch mountain, the tip position, and the left/right differences between them may be calculated from the most recent binarized image, or they may be calculated from a composite of the most recent binarized images.
  • In the latter case, it is preferable to synthesize the images obtained in the executions of step S12 and the images obtained in the executions of step S13, and to generate the binarized image by performing the processing from step S14 onward using those composite images.
  • the observation data calculation means 18 may calculate the movement speed of the entire batch mountain based on the position of the same batch mountain in a plurality of continuous binarized images and the photographing interval of the camera. Since the movement of the entire batch mountain is slow, there is little change in the position of the same batch mountain in a plurality of continuous binarized images. Therefore, the observation data calculation unit 18 may determine that the batch mountains having the closest position coordinates are the same batch mountain in a plurality of continuous binarized images. Then, the movement distance of the batch mountain may be calculated from the change in the coordinates of the same batch mountain, and the movement speed of the entire batch mountain may be calculated from the movement distance and the photographing interval. In this example, the movement speed of one batch mountain is regarded as the movement speed of the entire batch mountain.
  • When calculating the movement speed, the processing from step S10 onward is performed using the images themselves taken by the cameras 11a and 11b.
  • observation data calculation means 18 may calculate the movement direction of the batch mountain based on the same batch mountain position in a plurality of continuous binarized images.
  • The observation data calculation means 18 may also calculate the reduction rate of the batch mountain from a plurality of consecutive binarized images. For example, the observation data calculation unit 18 may identify the same batch mountain in consecutive binarized images and calculate the reduction rate of the area or length of that batch mountain across the binarized images. When calculating the reduction rate of the length, the reduction rate may be calculated based on the length along the flow direction of the raw material or based on the length along the direction perpendicular to the flow direction of the raw material.
  • When calculating the movement speed of the entire batch mountain, the movement direction of the batch mountain, the reduction rate of the batch mountain, and the like, the observation data calculation means 18 uses a plurality of binarized images, preferably a plurality of consecutive binarized images.
  • This batch mountain reduction rate is considered to have a correlation with the batch mountain height reduction rate, and the batch mountain height can be determined from the batch mountain reduction rate. If the batch crest is too high, it takes time to dissolve and the tip position is extended.
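The movement speed and the reduction rate described above can be computed from the positions and areas of the batch mountain in consecutive binarized images. In the sketch below, the whole foreground of each frame is treated as one batch mountain and its centroid is tracked between frames; this simplification, the pixel-to-metre scale, and the frame interval are assumptions of the illustration.

```python
import numpy as np

def centroid_and_area(binary):
    """Centroid (row, col) and pixel area of the foreground of a binarized image."""
    ys, xs = np.nonzero(binary)
    if len(ys) == 0:
        return None, 0
    return np.array([ys.mean(), xs.mean()]), len(ys)

def speed_and_reduction(binary_prev, binary_next, interval_s, metres_per_pixel):
    """Movement speed (m/s) and fractional area reduction between two consecutive frames."""
    c0, a0 = centroid_and_area(binary_prev)
    c1, a1 = centroid_and_area(binary_next)
    if c0 is None or c1 is None or a0 == 0:
        return None, None
    distance = np.linalg.norm(c1 - c0) * metres_per_pixel
    speed = distance / interval_s
    reduction_rate = (a0 - a1) / a0          # fraction of area lost over the interval
    return speed, reduction_rate
```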
  • observation data calculation means 18 may calculate the direction of each batch mountain (the direction in which the batch mountain extends) from the binarized image. Such a direction may be expressed in advance by determining a reference direction and an angle formed with the reference direction.
  • observation data calculation means 18 may calculate the size of each batch mountain from the binarized image.
  • the observation data calculation means 18 may calculate an evaluation value for evaluating the gas blowing state in the batch mountain based on the binarized image and the image obtained in step S13.
  • For example, the observation data calculation means 18 may determine the region corresponding to the batch mountain in the image obtained in step S13 using the binarized image, calculate the standard deviation of the luminance values in that region, and use that standard deviation as the evaluation value of the gas blowing state.
  • the portion depressed by the gas blowout is observed as a black region. Therefore, the observation data calculation means 18 determines a region corresponding to the batch mountain from the images obtained in step S13 using the binarized image, counts the total number of black pixels in the region, and counts the count. The result may be an evaluation value of the gas blowing state.
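Two evaluation values for the gas blowing state are mentioned: the standard deviation of the luminance inside the batch-mountain region, and the count of black pixels in that region. A sketch, with the "black" threshold as an assumed parameter:

```python
import numpy as np

def gas_blowing_evaluation(overhead_image, batch_mask, black_threshold=40):
    """Evaluate the gas blowing state inside the batch mountain (batch_mask: 1 = batch mountain)."""
    region = overhead_image[batch_mask.astype(bool)]
    if region.size == 0:
        return 0.0, 0
    luminance_std = float(region.std())                       # larger spread -> more blow-out pits
    black_pixels = int(np.count_nonzero(region < black_threshold))
    return luminance_std, black_pixels
```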
  • Non-Patent Document 1 and Patent Document 1 it is described that the occupancy ratio of the batch mountain and the tip position (the most downstream position) of the batch mountain are evaluated.
  • In contrast, in the present embodiment, various observation data can be calculated in addition to the occupancy and the tip position, such as the inside/outside ratio, the left/right difference, the movement speed and movement direction of the batch mountain, the reduction rate of the batch mountain, the direction and size of individual batch mountains, and the evaluation value of the gas blowing state in the batch mountain, so that the state of the batch mountain can be evaluated quantitatively and stably.
  • high-quality glass can be produced by appropriately operating the glass melting furnace.
  • As described above, in the present embodiment, the posture specifying unit 14 performs pattern matching of the reference pattern on the captured image (more specifically, on the average image of captured images) and determines, based on the image coordinates of the reference pattern in the captured image, whether or not the posture of the camera has shifted. If it is determined that a shift has occurred, the posture (position and orientation) of the camera is specified using the amount of the shift. Then, the image calibration unit 16 extracts, from the captured image, the ranges corresponding to the fixed regions 9a and 9b in real space based on the posture of the camera.
  • the background image creating unit 15 creates a background image from the extracted image
  • the image calibration unit 16 performs viewpoint conversion processing for changing the viewpoint of the extracted image and the background image from the position of the camera to a position directly above a certain region.
  • the difference calculation means 17 calculates the difference between the two luminance values. Therefore, even if the posture of the camera changes during cleaning or the like, observation of a certain region in the glass melting furnace can be continued well.
  • In the present embodiment, as preprocessing, the preprocessing means 19 selects, from among the plurality of images input from the camera, a plurality of consecutive images that maintain a state in which the edge count result is large. Then, the preprocessing unit 19 focuses on the corresponding pixels in the selected plurality of images, specifies the minimum luminance value among those pixels, and determines it as the luminance value of the corresponding pixel in the preprocessed image. The preprocessing means 19 performs this processing for each corresponding pixel. Depending on the image taken by the camera, raw material powder floating in the furnace may appear, the flame may appear, and the background and batch mountains may be blurred, but by performing the preprocessing described above, an image that is less affected by disturbances such as the flame or raw material powder can be created. By performing the processing from step S10 onward (see FIG. 9) using such an image, a good background image unaffected by disturbances and an image showing only the batch mountain can be obtained, and the state of the batch mountain in the fixed region can be monitored accurately.
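The preprocessing described above selects frames in which an edge count stays high (that is, the scene is not washed out by flame or floating raw-material powder) and then takes the per-pixel minimum of the selected frames. The edge measure below (a simple gradient-magnitude count) and the selection rule are our assumptions about one way this could be realized.

```python
import numpy as np

def edge_count(image, grad_threshold=20):
    """Count pixels whose luminance gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(image.astype(np.float64))
    return int(np.count_nonzero(np.hypot(gx, gy) > grad_threshold))

def preprocess(frames, count_threshold=5000):
    """Select frames with a high edge count and take the per-pixel minimum luminance."""
    selected = [f for f in frames if edge_count(f) >= count_threshold]
    if not selected:
        return None
    return np.minimum.reduce([f.astype(np.float64) for f in selected])
```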
  • Alternatively, the processing from step S10 onward may be performed using the images themselves generated by the camera photographing the inside of the furnace.
  • the posture specifying unit 14 performs pattern matching of a plurality of reference patterns on the captured image, and specifies the posture of the camera.
  • a plurality of reference patterns are used, the reliability of the determination of the posture deviation of the camera is increased.
  • FIG. 18 is a block diagram showing a configuration example of the monitoring system in the glass melting furnace in such a modification of the first embodiment.
  • Each means shown in FIG. 18 is the same as each means shown in FIG. 2, and is denoted by the same reference numerals as in FIG.
  • FIG. 19 is a flowchart illustrating an example of a processing progress until observation data calculation in the modification of the first embodiment.
  • the same processes as those described in the first embodiment are denoted by the same reference numerals as those in FIG.
  • In this modification, the difference calculation means 17 calculates the difference in luminance value between corresponding pixels of the extracted image extracted from the latest photographed image and the background image created in step S11 (step S31). At this time, the difference calculating means 17 subtracts the luminance value of each pixel of the background image from the luminance value of the corresponding pixel of the extracted image (the image showing the batch mountain and the background) extracted from the latest photographed image. The difference calculation means 17 performs this subtraction for each pair of corresponding pixels. As a result, an image of the fixed region viewed from the viewpoint of the camera with the background removed is obtained. In this subtraction result, however, the luminance value of the pixels corresponding to the background is not always exactly 0.
  • After step S31, the difference calculation means 17 performs binarization processing on the image obtained in step S31 (step S32).
  • As a result, a binarized image is obtained: an image of the fixed region viewed from the viewpoint of the camera in which the luminance value of the pixels corresponding to the background is "0" and the luminance value of the pixels corresponding to the batch mountain 10 is "1".
  • Steps S31 and S32 correspond to a background excluded image generation step.
  • After step S32, the image calibration unit 16 performs, on the binarized image generated in step S32, a viewpoint conversion process that changes the viewpoint from the position of the camera to a position directly above the fixed region (step S33).
  • step S33 a binarized image similar to the binarized image obtained in step S15 (see FIG. 9) already described is obtained.
  • Step S33 corresponds to a background excluded image conversion step.
  • Next, the observation data calculation means 18 calculates the observation data of the batch mountain present in the fixed region using the binarized image after the conversion process of step S33 (step S16). This process is the same as the process of step S16 already described.
  • FIG. 20 is a block diagram illustrating a configuration example of the glass melting furnace monitoring system according to the second embodiment of the present invention.
  • the same components as those in the first embodiment are denoted by the same reference numerals as those in FIG.
  • The glass melting furnace monitoring system of the second embodiment includes a camera 11a, a camera 11b, and an image processing apparatus 13a.
  • In addition to the preprocessing means 19, the image storage means 12, the posture specifying means 14, the background image creation means 15, the image calibration means 16, the difference calculation means 17, and the observation data calculation means 18, the image processing apparatus 13a includes observation data analysis means 61 and melting furnace control means 62.
  • The image processing apparatus 13a may be configured by adding the observation data analysis means 61 and the melting furnace control means 62 to the image processing apparatus of the glass melting furnace monitoring system shown in FIG. 18.
  • the observation data analysis means 61 determines the degree of correlation between various observation data whose values are calculated by the observation data calculation means 18 and various operating parameters of the glass melting furnace. In other words, the observation data analysis means 61 derives the degree of influence of various operating parameters of the glass melting furnace on various observation data whose values are calculated by the observation data calculation means 18.
  • Examples of the observation data include the occupancy of the batch mountain in each of the fixed regions 9a and 9b, the tip position of the batch mountain, the left/right differences between those observation data, the inside/outside ratios of the fixed regions 9a and 9b, the movement speed of the batch mountain, the reduction rate of the batch mountain, and the like, but the observation data are not limited to these.
  • the operation parameters include burner fuel combustion conditions (for example, combustion amount), raw material input conditions (for example, input amount), batch / cullet ratio, etc., but the operation parameters are not limited to these.
  • The observation data analysis means 61 determines the degree of correlation between the observation data and the operation parameters by, for example, principal component analysis and multivariate analysis (for example, multiple regression analysis). For example, the observation data analysis means 61 performs principal component analysis to obtain principal components and then performs multivariate analysis using those principal components. The observation data analysis means 61 derives the degree of influence of each parameter from the coefficients used in this process. Here, the degree of influence of a parameter is the degree of influence that the operation parameter has on the observation data.
  • the process in which the observation data analysis means 61 derives the parameter influence degree corresponds to an influence degree derivation step.
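The degree of influence is derived by, for example, principal component analysis followed by multivariate (multiple regression) analysis. The sketch below regresses one observation data series on the principal-component scores of the operation parameters and maps the regression coefficients back to per-parameter influences; the specific use of scikit-learn's PCA and linear regression is our assumption of one concrete realization, not the patent's prescribed procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def parameter_influence(operation_params, observation, n_components=3):
    """Estimate the signed influence of each operation parameter on one observation data series.

    operation_params: array of shape (n_samples, n_parameters)
    observation:      array of shape (n_samples,)
    Returns one influence value per operation parameter; positive means positive correlation."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(operation_params)          # principal component scores
    reg = LinearRegression().fit(scores, observation)     # multivariate regression on the scores
    # Map the coefficients on the components back to the original parameters.
    return reg.coef_ @ pca.components_
```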
  • FIG. 21 is a graph showing an example of the result of calculating the influence of the operation parameter on one observation data (here, observation data A).
  • FIG. 21 shows the correlation between the observation data A and the operation parameters: the input condition A (the raw material input amount), the input condition B, and the combustion parameters A to D.
  • the vertical axis in FIG. 21 represents the degree of influence of each operation parameter.
  • the combustion parameters A to D are the amount of combustion in the burner at each location. If the influence value of the operation parameter is positive, there is a positive correlation with the observation data, and if the influence value of the operation parameter is negative, there is a negative correlation with the observation data. In addition, the larger the absolute value of the influence value, the stronger the degree of correlation between the operation parameter and the observation data.
  • For example, if the input condition A (the raw material input amount) is increased, the value of the observation data A also increases.
  • If the combustion parameter A is increased, the value of the observation data A decreases.
  • The melting furnace control means 62 refers to the observation data calculated by the observation data calculation means 18 and, if the observation data has reached a value at which the operating state of the glass melting furnace should be changed, changes an operation parameter that has a correlation with that observation data.
  • Here, an operation parameter having a correlation with the observation data is, for example, an operation parameter whose degree of influence on the observation data has an absolute value equal to or greater than a predetermined value. For example, if the value of the observation data exceeds its upper limit and is too high, the value of an operation parameter having a positive correlation with the observation data is decreased, or the value of an operation parameter having a negative correlation with it is increased.
  • the furnace control means 62 may operate the glass melting furnace so as to increase the furnace temperature. That is, the burner's heating power may be increased.
  • the process in which the melting furnace control means 62 changes the operation parameter corresponds to a melting furnace control step.
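A minimal sketch of the control rule described above: when an observation value leaves its allowed band, an operation parameter strongly correlated with it is nudged in the direction indicated by the sign of its influence. The thresholds, step size, and data structures are placeholders of our own, not details from the patent.

```python
def adjust_operation_parameter(value, lower, upper, params, influences, step=0.05, min_influence=0.1):
    """Nudge the most strongly correlated parameter so the observation returns into [lower, upper].

    params:     dict of parameter name -> current value
    influences: dict of parameter name -> signed influence on the observation."""
    if lower <= value <= upper:
        return params                                    # observation within band: nothing to do
    name, infl = max(influences.items(), key=lambda kv: abs(kv[1]))
    if abs(infl) < min_influence:
        return params                                    # no sufficiently correlated parameter
    direction = -1.0 if value > upper else 1.0           # too high -> push value down, too low -> up
    adjusted = dict(params)
    adjusted[name] += direction * (1.0 if infl > 0 else -1.0) * step * abs(adjusted[name])
    return adjusted
```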
  • the melting furnace control means 62 may output an alarm when the observation data value exceeds the upper limit value or falls below the lower limit value.
  • the operator may change the operating parameters of the glass melting furnace.
  • the melting furnace control means 62 may not be provided.
  • In that case, the operator may refer to the observation data calculated by the observation data calculation means 18 and to the degrees of influence between the observation data and the operation parameters calculated by the observation data analysis means 61, and judge how to change the operation parameters of the glass melting furnace.
  • Since the observation data analysis means 61 calculates the degree of influence indicating the degree of correlation of each operation parameter with the observation data, it can be clarified which operation parameter of the glass melting furnace needs to be adjusted in accordance with the monitored state of the batch mountain.
  • the glass melting furnace can be automatically controlled to an appropriate state without depending on the operator.
  • In the above description, the observation data analysis unit 61 calculates the influence of the operation parameters on the observation data.
  • In addition, the observation data analysis means 61 may calculate degrees of influence indicating the degree of correlation of the observation data and the operation parameters with quality data (for example, the number of bubbles). This degree of influence may also be derived by, for example, principal component analysis and multivariate analysis. Note that a large number of bubbles means that the state of the furnace is poor.
  • FIG. 22 is a graph showing the result of calculating the influence of observation data A and B and the operating parameters temperatures A to D on the number of bubbles as one quality data.
  • the observation data A and B are data calculated by the observation data calculation unit 18 using a binarized image generated based on the captured image.
  • the temperatures A to D are values obtained by measuring the temperature at each location of the glass melting furnace. Also in the example shown in FIG. 22, if the influence value is positive, there is a positive correlation between the observation data and temperature and the quality data, and if the influence value is negative, the observation data and temperature And quality data have a negative correlation. In addition, the larger the absolute value of the influence value, the greater the degree of correlation.
  • FIG. 23 is a graph showing changes in the situation in which the correlation between the observation data and the quality data is lost or newly appears.
  • the left vertical axis shown in FIG. 23 indicates the value of the observation data.
  • the vertical axis on the right side shows the value of quality data (here, the number of bubbles).
  • the horizontal axis represents the passage of time.
  • a correlation was observed between the observation data A and the quality data until the middle of the measurement period, but the correlation was lost in the latter half.
  • there was no correlation between the observation data B and the quality data until the middle of the measurement period but a correlation between the observation data B and the quality data was recognized in the latter half.
  • It is therefore preferable that the observation data analysis means 61 repeatedly calculates the degree of influence between the observation data and the quality data.
  • an operation parameter that has a correlation with observation data is determined based on the degree of influence calculated by the observation data analysis unit 61, and the operation parameter is changed according to the observation data. .
  • If the operator can determine which operation parameter to adjust by referring to the binarized image, the operator may increase or decrease that operation parameter by referring to the binarized image. For example, when it is determined from the binarized image that melting of the batch mountain on the right side as viewed from the upstream side is slow, the operator may increase the heating power of the burner on the right side as viewed from the upstream side.
  • The camera 11a may be arranged at a position from which the fixed region 9a is photographed from directly above, and the camera 11b may be arranged at a position from which the fixed region 9b is photographed from directly above.
  • characteristic objects (for example, side walls, burners, etc.)
  • In that case, the viewpoint conversion processing that changes the viewpoint to a position directly above the fixed region 9a or the fixed region 9b need not be performed. That is, the viewpoint conversion processes in steps S12 and S13 (see FIG. 9) need not be performed. Further, in the processing shown as the modification of the embodiment (see FIG. 19), the viewpoint conversion process in step S33 need not be performed.
  • FIG. 24 is a schematic diagram illustrating an example of a production line for glass articles used in the method for producing a glass article of the present embodiment.
  • In FIG. 24, the cameras 11a and 11b and the image processing device 13 are not shown, but the cameras 11a and 11b are disposed in the vicinity of the glass melting furnace 1, and the image processing device 13 is also arranged.
  • The arrangement position of the image processing apparatus 13 is not limited. The image processing apparatus 13a described in the second embodiment may be arranged instead.
  • a glass melting furnace 1 and a clarification tank 30 are provided in a production line for glass articles.
  • the kind of clarification tank 30 is not limited.
  • the clarification tank 30 may be a depressurization type clarification tank in which the inside of the tank is depressurized to remove bubbles.
  • the clarification tank 30 may be a high-temperature type clarification tank in which the inside of the tank is heated to remove bubbles.
  • the glass melting furnace 1 melts a glass raw material and changes it into a molten glass 71.
  • the illustration of the batch mountain is omitted.
  • the clarification tank 30 removes bubbles generated in the molten glass 71.
  • the molten glass from which the bubbles have been removed moves to a forming step and a slow cooling step.
  • FIG. 25 is a flowchart showing an example of a method for manufacturing a glass article of the present embodiment.
  • a glass raw material is charged into the glass melting furnace 1.
  • the glass melting furnace 1 includes a burner 5 (see FIG. 1), and maintains the interior of the glass melting furnace 1 at a high temperature.
  • the molten glass 71 is manufactured by heating the raw material of glass in the glass melting furnace 1 (step S91, glass melting step).
  • The image processing apparatus 13 performs the same processing as in the first embodiment on the images captured by the cameras. That is, processes such as steps S51 to S54 (see FIG. 5), steps S1 to S6 (see FIG. 8), steps S10 to S16 (see FIG. 9 or FIG. 19), and steps S21 to S28 (see FIG. 10) are performed. By this processing, observation data is obtained, and the inside of the glass melting furnace 1 can be monitored well.
  • the molten glass 71 manufactured in step S91 is caused to flow into the clarification tank 30. Bubbles exist in the molten glass 71, and a bubble layer (not shown) is generated on the surface of the molten glass 71. Inside the clarification tank 30, bubbles of the molten glass 71 are removed (step S92, clarification step).
  • the molten glass from which bubbles have been removed is formed (step S93, forming step).
  • the molten glass may be formed by a float process. Specifically, the molten glass 71 from which bubbles have been removed is floated on molten tin (not shown) and is advanced in the conveying direction to form a continuous plate-like glass ribbon. At this time, in order to form a glass ribbon having a predetermined plate thickness, a rotating roll is pressed against both side portions of the glass ribbon, and the glass ribbon is stretched outward in the width direction (direction perpendicular to the conveying direction).
  • Next, the glass ribbon formed in step S93 is slowly cooled (step S94, slow cooling step).
  • the glass ribbon is pulled out from the molten tin, and the glass ribbon is gradually cooled inside a slow cooling furnace (not shown). Even after transporting to the outside of the slow cooling furnace, the glass ribbon is gradually cooled to near normal temperature.
  • Then, the slowly cooled glass is processed (step S95, processing step). Examples of the processing in step S95 include cutting and polishing.
  • the present invention is not limited to cutting and polishing, and other processing may be performed.
  • the glass article can be manufactured while observing a certain region in the glass melting furnace well.
  • Further, as in the second embodiment, the image processing apparatus 13a may determine the degree of correlation between the observation data and the operating parameters of the glass melting furnace 1 and change the operating parameters of the glass melting furnace 1, so that a glass article can be manufactured while operating the glass melting furnace 1 with operation parameters appropriate to the observation results of the inside of the furnace.
  • the present invention is suitably applied to a glass melting furnace monitoring system for monitoring batch hills in a glass melting furnace.

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Materials Engineering (AREA)
  • Organic Chemistry (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Waste-Gas Treatment And Other Accessory Devices For Furnaces (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

 Provided is an internal inspection system for glass-melting furnaces that enables the continuous observation of a specific region inside a glass-melting furnace. An image comprising a reference pattern provided inside a glass-melting furnace and a specific range of the liquid surface of a molten glass base material is recorded. A region corresponding to the specific range is extracted from the recorded image on the basis of the reference pattern pictured in the image. Then, a background image which is to act as the background of a batch pile is generated on the basis of a plurality of extracted images extracted from a plurality of images. Background exclusion images, in which the background is removed from the extracted images in which the batch pile and the background are pictured, are generated by carrying out a process on each pixel whereby the brightness value in the background image is subtracted from the brightness value of the corresponding pixel in the extracted image. Observation data relating to the batch pile is then calculated on the basis of the background exclusion images.

Description

Glass melting furnace monitoring method, glass melting furnace operating method, glass melting furnace monitoring system
 The present invention relates to a glass melting furnace monitoring method, a glass melting furnace operating method, a glass melting furnace monitoring system, and a glass article manufacturing method.
 In the glass manufacturing process, there is a process in which a glass raw material is charged into a glass melting furnace and the raw material is melted in the glass melting furnace. The raw material charged into the glass melting furnace is a solid and gradually melts in the glass melting furnace. The raw material that has been charged and accumulated in the glass melting furnace is called a batch pile. The batch pile gradually moves along the flow of the molten glass that is the melted raw material (that is, from the upstream side to the downstream side of the glass melting furnace). In addition, the batch pile gradually becomes smaller as it is melted by heat. Because the behavior of the batch pile is a guideline for the operation of the glass melting furnace, the batch pile in the glass melting furnace has conventionally been observed visually or sketched through an observation window provided in the glass melting furnace. When observing a batch pile, the part above the surface (that is, the liquid level) of the molten glass is the observation target.
 Various methods have also been proposed for monitoring batch piles by placing a camera at an observation window of the glass melting furnace, without relying on visual observation or sketching.
 For example, the technique described in Non-Patent Document 1 uses the Hough transform, which can detect straight lines, to determine the monitoring region. Non-Patent Document 1 also describes obtaining the occupancy ratio of batch piles.
 Patent Document 1 describes photographing batch piles and comparing, between photographing times, the position and shape of the boundary line between a batch pile and the liquid surface, or its most downstream position.
 Patent Document 2 describes a method of scanning the liquid surface in the furnace to capture an image, obtaining a position-versus-luminance characteristic line from the image, and determining where batch piles are present based on that characteristic line.
 Patent Document 3 describes a method for measuring and adjusting parameters relating to the raw material melted in a glass melting furnace.
 A basic method of extracting a specific object from an image is to binarize the pixels. There are various binarization techniques; for example, a histogram of the pixels by luminance value can be computed and its valley identified so as to divide the pixels into two classes. Known methods for identifying the valley of such a histogram include the mode method and the discriminant analysis binarization method. The mode method is described in Non-Patent Documents 2 and 3, and the discriminant analysis binarization method is described in Non-Patent Document 3. In the discriminant analysis binarization method, the threshold is determined so that, when the histogram is divided into two classes, the separation between the two classes is best. Specifically, the threshold that maximizes the ratio of the between-class variance to the within-class variance for the background region and the specific-object region of the image is selected.
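 As a rough illustration only, and not part of the patent disclosure, the discriminant analysis binarization described above can be sketched in Python/NumPy as follows; the function and variable names are assumptions introduced here, and the search simply maximizes the between-class variance, which is equivalent to maximizing the variance ratio mentioned above.

    import numpy as np

    def discriminant_threshold(gray_image):
        """Return the luminance threshold that best separates two classes
        by maximizing the between-class variance (Otsu-style discriminant
        analysis binarization)."""
        hist, _ = np.histogram(gray_image.ravel(), bins=256, range=(0, 256))
        probs = hist / hist.sum()
        global_mean = np.dot(np.arange(256), probs)
        best_t, best_score = 0, -1.0
        w0 = 0.0        # cumulative class-0 probability
        mean0 = 0.0     # cumulative class-0 first moment
        for t in range(256):
            w0 += probs[t]
            mean0 += t * probs[t]
            w1 = 1.0 - w0
            if w0 == 0 or w1 == 0:
                continue
            mu0 = mean0 / w0
            mu1 = (global_mean - mean0) / w1
            score = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
            if score > best_score:
                best_score, best_t = score, t
        return best_t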
Japanese Unexamined Patent Publication No. 2009-161396; Japanese Unexamined Patent Publication No. 59-44606; US Patent Application Publication No. 2004/0079113
 When a camera is placed at an observation window of a glass melting furnace to monitor batch piles, it is preferable to keep observing a fixed region inside the furnace so that the state of the batch piles in that region can be monitored accurately.
 However, the position and orientation of the camera may shift during maintenance work such as cleaning of the observation window, and the imaging range of the camera then shifts as well. When the position or orientation of the camera changes in this way, the accuracy with which changes in the batch-pile state over time can be evaluated is degraded.
 In addition, bubbles form on the surface of the melted raw material as it is heated. Therefore, when a batch pile in the glass melting furnace is photographed, an image of the batch pile against a background of bubbles is obtained. To monitor the state of the batch pile accurately, it is preferable to separate the bubbles from the batch pile in the image and to extract the batch-pile portion from the image.
 It is also preferable, when batch piles are monitored, to operate the glass melting furnace with a proper understanding of which operating parameters of the furnace should be adjusted according to the monitoring result.
 Accordingly, an object of the present invention is to provide a glass melting furnace monitoring method and a glass melting furnace monitoring system capable of satisfactorily continuing observation of a fixed region in a glass melting furnace. Another object is to provide a method of manufacturing a glass article while realizing such a favorable observation state.
 A further object of the present invention is to provide a glass melting furnace operating method that makes clear which operating parameter of the glass melting furnace should be adjusted according to the state of the monitored batch piles.
 The glass melting furnace monitoring method according to the present invention includes: an image capturing step in which an image capturing means captures an image including a reference pattern provided in the glass melting furnace and a fixed range on the liquid surface of the glass raw material melted in the furnace; a region extraction step of extracting, from the captured image, the region corresponding to the fixed range according to the attitude of the image capturing means, which is calculated using the positional shift of the reference pattern captured in the image; a background image creation step of creating, based on a plurality of extracted images extracted from a plurality of images as regions corresponding to the fixed range, a background image serving as the background of the batch piles, i.e. the glass raw material accumulated in the furnace; a background-excluded image generation step of generating a background-excluded image, in which the background is removed from an extracted image showing a batch pile and the background, by subtracting, pixel by pixel, the luminance value of the corresponding pixel in the background image from the luminance value of the pixel in the extracted image; and an observation data calculation step of calculating observation data relating to the batch piles based on the background-excluded image.
 In the background image creation step, the background image may be created by counting, for each corresponding pixel or each corresponding area of the plurality of extracted images, the number of pixels having each luminance value, and determining the luminance value representing the background based on the count results for each luminance value.
 In the background-excluded image generation step, the background-excluded image may be generated by performing, for each pixel, the process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of the pixel in the extracted image extracted from the captured image as the region corresponding to the fixed range, and binarizing the per-pixel subtraction results.
 The method may include a background image conversion step of converting the background image into an image of the fixed range as observed from above, facing the liquid surface, and an extracted image conversion step of converting the extracted image, extracted as the region corresponding to the fixed range, into an image of that fixed range as observed from above, facing the liquid surface; in the background-excluded image generation step, the luminance value of the corresponding pixel in the background image converted in the background image conversion step is subtracted from the luminance value of the extracted image converted in the extracted image conversion step, and in the observation data calculation step, the observation data is calculated based on the background-excluded image generated in the background-excluded image generation step.
 The method may include a background-excluded image conversion step of converting the background-excluded image into an image of the fixed range as observed from above, facing the liquid surface, and in the observation data calculation step, the observation data may be calculated based on the background-excluded image converted in the background-excluded image conversion step.
 The method may include a preprocessing step of calculating, for each image obtained in the image capturing step, a quantity representing the light-dark contrast in the image, and selecting images that satisfy a predetermined condition with respect to that quantity.
 In the preprocessing step, the number of edges in the image may be calculated as the quantity representing contrast, a plurality of images satisfying the condition that the number of edges is equal to or greater than a predetermined threshold may be selected, and an image from which the region corresponding to the fixed range is to be extracted may be generated based on the selected images.
 The glass melting furnace operating method according to the present invention includes an influence-degree derivation step of deriving the degree of influence that the operating parameters of the glass melting furnace have on the observation data calculated in the observation data calculation step of the above monitoring method, and a melting furnace control step of changing, when the observation data satisfies a predetermined condition, an operating parameter whose degree of influence on that observation data has an absolute value equal to or greater than a predetermined value.
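 The text quoted here does not prescribe how the degree of influence is computed. One plausible sketch, offered purely as an assumption, is to regress logged observation data on logged operating parameters and treat the standardized coefficients as influence degrees, flagging parameters whose absolute coefficient reaches a threshold; the following Python/NumPy code and its names are illustrative only.

    import numpy as np

    def influence_degrees(parameter_log, observation_log):
        """parameter_log: (n_samples, n_params) operating parameters over time.
        observation_log: (n_samples,) one observation quantity, e.g. occupancy.
        Returns one standardized least-squares coefficient per parameter,
        used here as a rough 'degree of influence'."""
        std = parameter_log.std(axis=0)
        std[std == 0] = 1.0                       # avoid division by zero
        X = (parameter_log - parameter_log.mean(axis=0)) / std
        y = (observation_log - observation_log.mean()) / observation_log.std()
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def parameters_to_adjust(coeffs, threshold):
        """Indices of parameters whose influence magnitude reaches the threshold."""
        return [i for i, c in enumerate(coeffs) if abs(c) >= threshold]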
 The glass melting furnace monitoring system according to the present invention includes: an image capturing means for capturing an image including a reference pattern provided in the glass melting furnace and a fixed range on the liquid surface of the glass raw material melted in the furnace; an image calibration means for extracting, from the captured image, the region corresponding to the fixed range according to the attitude of the image capturing means, which is calculated using the positional shift of the reference pattern captured in the image; a background image creation means for creating, based on a plurality of extracted images extracted from a plurality of images as regions corresponding to the fixed range, a background image serving as the background of the batch piles, i.e. the glass raw material accumulated in the furnace; a difference calculation means for generating a background-excluded image, in which the background is removed from an extracted image showing a batch pile and the background, by subtracting, pixel by pixel, the luminance value of the corresponding pixel in the background image from the luminance value of the pixel in the extracted image; and an observation data calculation means for calculating observation data relating to the batch piles based on the background-excluded image.
 The background image creation means may create the background image by counting, for each corresponding pixel or each corresponding area of the plurality of extracted images, the number of pixels having each luminance value, and determining the luminance value representing the background based on the count results for each luminance value.
 The difference calculation means may generate the background-excluded image by performing, for each pixel, the process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of the pixel in the extracted image extracted from the captured image as the region corresponding to the fixed range, and binarizing the per-pixel subtraction results.
 The image calibration means may convert the background image into an image of the fixed range as observed from above, facing the liquid surface, and convert the extracted image, extracted as the region corresponding to the fixed range, into an image of that fixed range as observed from above, facing the liquid surface; the difference calculation means may subtract the luminance value of the corresponding pixel in the background image converted by the image calibration means from the luminance value of the extracted image converted by the image calibration means; and the observation data calculation means may calculate the observation data based on the background-excluded image generated by the difference calculation means.
 The image calibration means may convert the background-excluded image generated by the difference calculation means into an image of the fixed range as observed from above, facing the liquid surface, and the observation data calculation means may calculate the observation data based on the background-excluded image converted by the image calibration means.
 The system may include a preprocessing means for calculating, for each image obtained by the image capturing means, a quantity representing the light-dark contrast in the image, and selecting images that satisfy a predetermined condition with respect to that quantity.
 The preprocessing means may calculate the number of edges in the image as the quantity representing contrast, select a plurality of images satisfying the condition that the number of edges is equal to or greater than a predetermined threshold, and generate, based on the selected images, an image from which the region corresponding to the fixed range is to be extracted.
 The system may include an observation data analysis means for deriving the degree of influence that the operating parameters of the glass melting furnace have on the observation data calculated by the observation data calculation means.
 The system may include a melting furnace control means for changing, when the observation data satisfies a predetermined condition, an operating parameter whose degree of influence on that observation data has an absolute value equal to or greater than a predetermined value.
 The method of manufacturing a glass article according to the present invention includes a glass melting step of producing molten glass in a glass melting furnace, a clarification step of removing bubbles from the molten glass in a clarification vessel, a forming step of forming the molten glass from which the bubbles have been removed, and a slow cooling step of gradually cooling the formed molten glass. It further includes: an image capturing step in which an image capturing means captures an image including a reference pattern provided in the glass melting furnace and a fixed range on the liquid surface of the glass raw material melted in the furnace; a region extraction step of extracting, from the captured image, the region corresponding to the fixed range according to the attitude of the image capturing means, which is calculated using the positional shift of the reference pattern captured in the image; a background image creation step of creating, based on a plurality of extracted images extracted from a plurality of images as regions corresponding to the fixed range, a background image serving as the background of the batch piles, i.e. the glass raw material accumulated in the furnace; a background-excluded image generation step of generating a background-excluded image, in which the background is removed from an extracted image showing a batch pile and the background, by subtracting, pixel by pixel, the luminance value of the corresponding pixel in the background image from the luminance value of the pixel in the extracted image; and an observation data calculation step of calculating observation data relating to the batch piles based on the background-excluded image.
 According to the glass melting furnace monitoring method and the glass melting furnace monitoring system of the present invention, observation of a fixed region in the glass melting furnace can be continued and the state of the batch piles in that region can be monitored well. According to the method of manufacturing a glass article, a glass article can be manufactured while realizing such a good monitoring state.
 According to the glass melting furnace operating method of the present invention, it can be made clear which operating parameter of the glass melting furnace should be adjusted according to the state of the monitored batch piles.
A plan view showing an example of a glass melting furnace to which the glass melting furnace monitoring system of the present invention is applied.
A block diagram showing a configuration example of the glass melting furnace monitoring system of the first embodiment of the present invention.
An explanatory diagram showing an example of an image captured by the camera 11a.
An explanatory diagram showing an example of a reference pattern image and an example of matching using the reference pattern.
A flowchart showing an example of the posture estimation operation performed by the posture specifying means 14.
A schematic diagram in which the range corresponding to the liquid surface of the melted raw material has been extracted from an image captured by the camera 11a.
An explanatory diagram showing an example of the result of converting the viewpoint to directly above the fixed region 9a.
A flowchart showing an example of the course of the camera posture determination process.
A flowchart showing an example of the course of processing up to the derivation of observation data.
A flowchart showing an example of the course of the background image creation process (step S11).
A histogram obtained as a result of step S24.
A histogram obtained as a result of step S24.
An explanatory diagram showing an example of the image after the conversion in step S13.
An explanatory diagram showing an example of the background image after the conversion in step S12.
An explanatory diagram showing an example of the image obtained as a result of the processing in step S14.
An explanatory diagram showing an example of the image after the binarization process.
An explanatory diagram showing the fixed regions 9a and 9b each divided into two equal parts: a region on the side wall 6 side and a region on the center side of the glass melting furnace.
A block diagram showing a configuration example of the glass melting furnace monitoring system in a modification of the first embodiment.
A flowchart showing an example of the course of processing up to the derivation of observation data in the modification of the first embodiment.
A block diagram showing a configuration example of the glass melting furnace monitoring system of the second embodiment of the present invention.
A graph showing an example of the result of calculating the degree of influence of operating parameters on one item of observation data.
A graph showing the result of calculating the degree of influence of observation data A and B and temperatures A to D on one item of quality data.
A graph showing changes in situations in which the correlation between observation data and quality data is lost or newly appears.
A schematic diagram showing an example of a glass article production line used in the glass article manufacturing method of the third embodiment.
A flowchart showing an example of the glass article manufacturing method of the third embodiment.
 Embodiments of the present invention will be described below with reference to the drawings.
 First, an example of a glass melting furnace to which the glass melting furnace monitoring system of the present invention is applied will be described. FIG. 1 is a plan view showing an example of such a glass melting furnace. The glass melting furnace 1 melts glass raw material by heat in a space surrounded by a bottom surface, an upstream wall 7, side walls 6, a downstream wall 8, and a ceiling (not shown). The upstream wall 7 is provided with inlets 3a and 3b through which the raw material is charged, and the downstream wall 8 is provided with an outlet 4 through which the melted glass raw material is discharged. The side walls 6 are each provided with observation windows 2 and burners 5. Although FIG. 1 shows a case in which two inlets 3a and 3b are provided, the number of inlets is not limited to two.
 Solid glass raw material is charged through the inlets 3a and 3b. Because the interior of the glass melting furnace is heated by the flames blown from the burners 5, the raw material melts gradually, and the melted raw material gradually moves downstream and is discharged from the outlet 4. The raw material accumulated in a solid state in the glass melting furnace 1 forms batch piles 10. A batch pile 10 melts while moving downstream as time passes.
 The glass melting furnace monitoring system of the present invention includes cameras 11a and 11b and monitors fixed regions 9a and 9b of the liquid surface in the furnace. FIG. 1 illustrates a case in which the two fixed regions 9a and 9b are set so that, of the liquid surface in the furnace, the area between the side walls in the frontal direction of each camera is covered. The camera 11a photographs the fixed region 9a on the right side as viewed from the upstream side (hereinafter simply referred to as the fixed region 9a), and the camera 11b photographs the fixed region 9b on the left side as viewed from the upstream side (hereinafter simply referred to as the fixed region 9b). In the present invention, a case in which the monitoring system includes two cameras 11a and 11b is described as an example, but the number of cameras in the monitoring system is not limited to two.
 The fixed regions 9a and 9b are set away from the vicinity of the inlets 3a and 3b. If a region immediately adjacent to the inlets 3a and 3b were photographed as the fixed region, the entire portion corresponding to the fixed region in the captured image would be batch pile, and there is a high possibility that no bubbles would appear as the background; in that case, data relating to the batch piles could not be calculated.
[Embodiment 1]
 FIG. 2 is a block diagram showing a configuration example of the glass melting furnace monitoring system of the first embodiment of the present invention. The monitoring system of the first embodiment includes a camera 11a, a camera 11b, and an image processing device 13. The monitoring system performs the same processing on the images captured by the cameras 11a and 11b. Therefore, the camera 11a is described below, and the description for the camera 11b is omitted as appropriate.
 The camera 11a repeatedly captures images of the fixed region 9a of the liquid surface through an observation window 2 of the glass melting furnace (see FIG. 1). These images are still images. Similarly, the camera 11b repeatedly captures still images of the fixed region 9b of the liquid surface through an observation window 2 (see FIG. 1). The capture interval of the cameras 11a and 11b may be determined in advance.
 The imaging range (field of view) of the camera 11a covers not only the fixed region 9a but also the liquid surface around the fixed region 9a and the side wall facing the camera 11a. Therefore, the image captured by the camera 11a also shows the fixed region 9a, the liquid surface around it, and the facing side wall. The same applies to the camera 11b.
 The images captured by the cameras 11a and 11b are input to the image processing device 13.
 The image processing device 13 performs image processing on the images captured by the camera 11a and calculates various data relating to the batch piles in the fixed region 9a (for example, data relating to their arrangement and movement). Similarly, the image processing device 13 performs image processing on the images captured by the camera 11b and calculates various data relating to the batch piles in the fixed region 9b. The batch-pile data calculated based on the images captured by the cameras 11a and 11b is hereinafter referred to as observation data.
 The image processing device 13 includes a preprocessing means 19, an image storage means 12, a posture specifying means 14, a background image creation means 15, an image calibration means 16, a difference calculation means 17, and an observation data calculation means 18.
 The preprocessing means 19 generates, based on the images captured by the camera 11a, images in which raw material powder and flames (flames blown from the burners 5) do not appear. If raw material powder or flames floating in the glass melting furnace appear in an image, the image of the batch piles becomes unclear. The preprocessing means 19 uses a plurality of images captured by the camera 11a to generate an image in which the batch piles appear clearly without being affected by disturbances such as raw material powder and flames. The preprocessing means 19 performs the same processing on the images captured by the camera 11b. Generating an image from which the influence of raw material powder and flames has been removed in this way is referred to as preprocessing. An image generated by the preprocessing means 19 from a plurality of images captured by a camera may be referred to below as a preprocessed image. However, a preprocessed image is the same as an image captured by the camera, except that the influence of raw material powder and flames has been removed to make the batch piles clearer, so a preprocessed image may also simply be called a captured image, in the same way as the image captured by the camera itself. The preprocessing means 19 stores the preprocessed images obtained from the camera 11a and the preprocessed images obtained from the camera 11b in the image storage means 12.
 Depending on the glass melting furnace, preprocessing may not be necessary at all, or part of it may not be necessary. For example, in a glass melting furnace in which the influence of flames is small or little raw material powder floats, the preprocessing need not be performed. In that case, the image processing device 13 may store the images input from the cameras 11a and 11b in the image storage means 12 as they are.
 The image storage means 12 is a storage device that stores images. As described above, when the preprocessing means 19 has performed preprocessing on the images input from the cameras 11a and 11b, the preprocessed images obtained by that preprocessing are stored. When preprocessing is not performed, the images input from the cameras 11a and 11b are stored as they are.
 The following description takes as an example the case where the preprocessing means 19 performs preprocessing and the image storage means 12 stores the preprocessed images.
 The posture specifying means 14 specifies the posture of the camera 11a from an image captured by the camera 11a (in this example, a preprocessed image). Here, the posture means the position and orientation of the camera. The posture specifying means 14 performs the same processing for the camera 11b.
 FIG. 3 is an explanatory diagram showing an example of an image captured by the camera 11a (in this example, a preprocessed image generated based on images captured by the camera 11a). This captured image is taken in the direction of the fixed region 9a. In addition to the portion of a batch pile 10 above the liquid surface 25, the image captured by the camera 11a also shows the facing side wall 6 and part of an observation window 2. The images of the side wall 6 and the observation window 2 are used to specify the orientation and position of the camera (the camera posture). That is, the boundary lines (grooves) between the bricks forming the side wall 6, the intersections of those boundary lines, and the corners of the observation window 2 appear as characteristic patterns in the captured image. Such a characteristic pattern is hereinafter referred to as a reference pattern. A reference pattern must be a pattern for which no similar pattern exists elsewhere in the same image when captured. For example, if a combination of the shape of a corner of a window or the like, lines, points, and so on forms a characteristic pattern, such a combination may be used as a reference pattern. As described later, the posture specifying means 14 may also successively update the image stored as the reference pattern image. If the camera posture does not change, the reference pattern appears at a substantially constant position (coordinates) in the captured image. On the other hand, if the camera posture changes, for example during cleaning, the position of the reference pattern in the captured image also changes. The posture specifying means 14 determines whether the posture of the camera 11a has shifted based on the position of the reference pattern in the image captured by the camera 11a. In other words, the reference pattern is used to determine whether a shift in the camera posture has occurred. Coordinates representing a position in an image are hereinafter referred to as image coordinates.
 From the viewpoint of increasing the reliability of the determination of camera posture shift, it is preferable that a plurality of reference patterns exist in the image.
 The posture specifying means 14 stores the image of each reference pattern and the image coordinates of that reference pattern in the captured image. The image coordinates of a reference pattern may be, for example, the image coordinates of its center position. For example, the posture specifying means 14 stores an image of the point 21a at a corner of the observation window 2 and its surroundings as a reference pattern image, together with the image coordinates of that position. An example of the reference pattern image in this case and an example of matching using the reference pattern are shown in FIG. 4. FIG. 4(a) shows an example of a reference pattern image. FIG. 4(b) shows an example of a captured image to be matched against the reference pattern, illustrating a captured image similar to FIG. 3. In FIG. 4(b), elements identical to those shown in FIG. 3 are given the same reference numerals and their description is omitted. In FIG. 4(a), the reference pattern is drawn larger than in the captured image for clarity. The posture specifying means 14 performs pattern matching between the captured image and each stored reference pattern image, and specifies the image coordinates of the portion of the captured image corresponding to each stored reference pattern image. The posture specifying means 14 compares those image coordinates with the stored image coordinates to determine whether the posture of the camera 11a has shifted. In the pattern matching, a similarity, which is an index value of the degree of resemblance, is calculated.
 For example, the posture specifying means 14 performs pattern matching between the reference pattern image illustrated in FIG. 4(a) and the captured image shown in FIG. 4(b), specifies the portion 81 in the captured image (see FIG. 4(b)), and specifies the image coordinates of that portion 81 (for example, the center coordinates of the portion 81 in the captured image). The posture specifying means 14 then compares those coordinates with the image coordinates stored in advance to determine whether the posture of the camera 11a has shifted.
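 As a hedged illustration of this matching step (the text here does not specify a particular matching algorithm), normalized cross-correlation template matching as provided by OpenCV could be used to locate the stored reference pattern and detect a posture shift; the tolerance value and names below are assumptions introduced for illustration.

    import cv2
    import numpy as np

    def detect_pose_shift(captured_gray, pattern_gray, stored_xy, tolerance_px=2.0):
        """Locate the reference pattern in the captured image by template matching
        and report whether it has moved from its stored image coordinates."""
        result = cv2.matchTemplate(captured_gray, pattern_gray, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)     # best-match similarity and location
        h, w = pattern_gray.shape
        found_xy = (max_loc[0] + w / 2.0, max_loc[1] + h / 2.0)   # pattern center
        shift = np.hypot(found_xy[0] - stored_xy[0], found_xy[1] - stored_xy[1])
        return shift > tolerance_px, found_xy, max_val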
 A characteristic point used for estimating the camera posture is referred to as a reference point. The group of reference points may include points within reference patterns (for example, the point 21a at a corner of the observation window 2). FIG. 3 illustrates a case in which the points 21a to 21e are used as reference points. The posture specifying means 14 stores, as information on each reference point, the image coordinates of the reference point and the three-dimensional coordinates of the reference point in real space. Since the posture specifying means 14 stores the reference pattern images and their image coordinates as well as the image coordinates and three-dimensional coordinates of the reference points, it can determine the relative positional relationship between the reference patterns and the reference points on the image.
 The processes in which the camera 11a captures an image including a reference pattern and the fixed region 9a, and the camera 11b captures an image including a reference pattern and the fixed region 9b, correspond to the image capturing step.
 FIG. 5 is a flowchart showing an example of the posture estimation operation performed by the posture specifying means 14. When the posture specifying means 14 compares the image coordinates of the reference pattern in the captured image with the stored image coordinates as described above and determines that the posture of the camera 11a has shifted, it uses those image coordinates to calculate the amount of the shift (step S51). That is, the posture specifying means 14 calculates how far the reference pattern has shifted within the captured image.
 The posture specifying means 14 then reflects the amount by which the reference pattern has shifted in the captured image in the stored image coordinates of the reference points (step S52). That is, the posture specifying means 14 shifts the image coordinates of each reference point (changes the values of the image coordinates of the reference points) by the amount by which the image coordinates of the reference pattern in the captured image have shifted due to the change in the posture of the camera 11a.
 The posture specifying means 14 then performs a camera calibration process using the image coordinates of the reference points and the three-dimensional coordinates of the reference points in real space, and estimates the posture of the camera 11a. Specifically, the posture specifying means 14 calculates the image coordinates of the individual reference points for various postures of the camera 11a from the three-dimensional coordinates of each reference point in real space (step S53). The posture specifying means 14 then determines that the posture of the camera 11a is the posture for which the image coordinates calculated from the three-dimensional coordinates of the reference points come closest to the image coordinates of the reference points shifted in accordance with the shift of the reference pattern as described above (step S54).
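 A minimal sketch of this calibration-style search is shown below, under the assumption of a simple pinhole camera model and a precomputed set of candidate postures; neither the camera model nor the search strategy is specified in the text quoted here, and a nonlinear solver or a routine such as OpenCV's solvePnP could equally be used.

    import numpy as np

    def project_points(points_3d, rvec_deg, tvec, focal_px, principal_point):
        """Project 3-D reference points to image coordinates with a simple
        pinhole model (rotation given as yaw/pitch/roll in degrees)."""
        yaw, pitch, roll = np.radians(rvec_deg)
        Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                       [np.sin(yaw),  np.cos(yaw), 0],
                       [0, 0, 1]])
        Ry = np.array([[np.cos(pitch), 0, np.sin(pitch)],
                       [0, 1, 0],
                       [-np.sin(pitch), 0, np.cos(pitch)]])
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(roll), -np.sin(roll)],
                       [0, np.sin(roll),  np.cos(roll)]])
        cam = (Rz @ Ry @ Rx @ points_3d.T).T + tvec
        u = focal_px * cam[:, 0] / cam[:, 2] + principal_point[0]
        v = focal_px * cam[:, 1] / cam[:, 2] + principal_point[1]
        return np.stack([u, v], axis=1)

    def estimate_pose_by_search(points_3d, observed_uv, candidate_poses,
                                focal_px, principal_point):
        """Choose, among candidate (rvec_deg, tvec) pairs, the pose whose projected
        reference points lie closest to the observed (shifted) image coordinates."""
        best_pose, best_err = None, np.inf
        for rvec_deg, tvec in candidate_poses:
            projected = project_points(points_3d, rvec_deg, tvec,
                                       focal_px, principal_point)
            err = np.sum((projected - observed_uv) ** 2)
            if err < best_err:
                best_err, best_pose = err, (rvec_deg, tvec)
        return best_pose, best_err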
 Although the camera 11a has been used as an example here, the posture specifying means 14 likewise determines whether a posture shift has occurred and estimates the posture for the camera 11b.
 The image calibration means 16 specifies the range corresponding to the fixed region 9a within the captured image (in this example, the preprocessed image) according to the posture of the camera 11a specified by the posture specifying means 14. FIG. 6 is a schematic diagram in which the range corresponding to the liquid surface 25 of the melted raw material has been extracted from an image captured by the camera 11a. The right and left sides of FIG. 6 correspond to the upstream and downstream sides of the glass melting furnace, respectively. In the image of the liquid surface 25, the range 31a enclosed by the thick solid line corresponds to the fixed region 9a in real space. The image calibration means 16 specifies and extracts the range 31a corresponding to the fixed region 9a according to the posture of the camera 11a.
 It is assumed here that the height of the liquid surface in the glass melting furnace is constant. The range of the fixed region 9a at this height is determined in advance; that is, the range (position) of the fixed region 9a is defined in advance as the position of a region within a plane at a constant height in real space. Therefore, once the posture of the camera 11a is specified, the range corresponding to the fixed region 9a in the image captured by the camera 11a can also be determined. In other words, the image calibration means 16 may specify the range 31a in the image obtained when the fixed region 9a at the constant height in real space is projected onto the captured image of the camera 11a whose posture is now known.
 If the height of the liquid surface in the glass melting furnace is assumed to be constant, the pixel resolution (mm/pixel) of the captured image can be determined by investigating how many millimeters in real space a shift of one pixel in the captured image corresponds to.
 In addition to the process of specifying the range 31a corresponding to the fixed region 9a in the image, the image calibration means 16 also performs a process of converting the image of that range 31a into an image as observed from directly above the fixed region 9a (in other words, from above, facing the liquid surface). That is, the image illustrated in FIG. 6 is an image of the fixed region 9a observed from the viewpoint of the camera 11a (a direction oblique to the liquid surface); the range 31a in the image is converted into the image obtained when the viewpoint is moved to directly above the fixed region 9a. An example of this conversion result is illustrated in FIG. 7. In this way, the image calibration means 16 may perform, for the range 31a corresponding to the fixed region 9a in the image, a viewpoint conversion process that moves the viewpoint to directly above the fixed region 9a and generate the image observed from that viewpoint.
 The target that the image calibration means 16 converts into an image observed from directly above the fixed region 9a is not limited to the range 31a extracted from an image captured by the camera 11a. For example, the image calibration means 16 performs the same conversion on images obtained by image processing (for example, the background image creation process described later).
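 This viewpoint conversion can be realized, for example, as a planar perspective (homography) warp that maps the four corners of the extracted range 31a to a rectangle. The following sketch uses OpenCV; the corner ordering and output size are illustrative assumptions rather than values taken from the text.

    import cv2
    import numpy as np

    def to_top_down_view(image, region_corners_px, out_size=(400, 300)):
        """Warp the quadrilateral covering the fixed region (corners given in the
        captured image, ordered upstream-left, upstream-right, downstream-right,
        downstream-left) to a rectangular top-down view."""
        w, h = out_size
        src = np.float32(region_corners_px)
        dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
        homography = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(image, homography, (w, h))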
 The image calibration means 16 performs the same processing on images captured by the camera 11b (in this example, preprocessed images).
 The background image creation means 15 creates an image of the liquid surface as it would appear with no batch pile present (background image creation process), using the images of the range 31a (the range corresponding to the fixed region 9a) extracted by the image calibration means 16 from the plurality of preprocessed images successively generated by the preprocessing means 19. Since the range 31a corresponds to the fixed region 9a, it is an image of batch piles against a background of bubbles. Moreover, because batch piles move and melt slowly, a batch pile appears in the range 31a at all times (or with high frequency). It is therefore difficult to directly capture, as the range 31a corresponding to the fixed region 9a, an image showing only bubbles (the background). The background image creation means 15 therefore creates a background image containing no batch pile using the ranges 31a extracted from a plurality of images.
 Bubbles are present at locations on the liquid surface where no batch pile exists, and batch piles melt while gradually moving downstream. Therefore, a pixel that corresponds to a batch pile in the range 31a extracted from one image will represent bubbles in the range 31a extracted from another image. The background image creation means 15 creates an image representing only the background of the batch piles, with no batch pile present, by specifying the luminance corresponding to bubbles for each set of corresponding pixels in the ranges 31a extracted from a plurality of images (in other words, for each set of pixels corresponding to the same position in the fixed region 9a). Although the case of processing each set of corresponding pixels in the ranges 31a extracted from a plurality of images is taken as an example here, the luminance corresponding to bubbles may instead be specified for each corresponding area in the ranges 31a extracted from the plurality of images. An area is a region formed by a group of contiguous pixels.
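 A minimal sketch of this background creation, assuming 8-bit grayscale extracted images of identical size and taking the per-pixel mode of luminance as the value corresponding to bubbles (one possible reading of the count-based determination described above), could look like this:

    import numpy as np

    def build_background(extracted_images):
        """extracted_images: list of HxW uint8 arrays, each the range corresponding
        to the fixed region. Returns an HxW uint8 background image whose pixel
        values are the per-pixel mode of luminance over the stack."""
        stack = np.stack(extracted_images, axis=0)          # shape (N, H, W)
        background = np.empty(stack.shape[1:], dtype=np.uint8)
        for row in range(stack.shape[1]):
            for col in range(stack.shape[2]):
                counts = np.bincount(stack[:, row, col], minlength=256)
                background[row, col] = np.argmax(counts)    # most frequent luminance
        return background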
 The background image creation means 15 performs the same processing on images captured by the camera 11b (in this example, preprocessed images).
 The difference calculation means 17 calculates the difference between corresponding pixels of two images. Specifically, it subtracts the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of an image showing a batch pile. This subtraction yields an image in which the background portion has been removed from the image showing the batch pile. However, the luminance of the bubbles also varies somewhat, so the result of subtracting the luminance value of the corresponding pixel in the background image from the luminance of a pixel corresponding to bubbles in the image showing the batch pile is not necessarily zero. It is therefore preferable for the difference calculation means 17 to binarize the per-pixel subtraction results to "0" or "1" after subtracting the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the image showing the batch pile. In this binarization process, for each pixel, the difference calculation means 17 may round the subtraction result up to "1" if it is equal to or greater than a predetermined value, and round it down to "0" if it is less than that value. By performing this binarization, the region corresponding to the batch pile (the region with luminance value "1") and the region corresponding to the background (the region with luminance value "0") can be distinguished more clearly.
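 A minimal sketch of this subtraction and binarization, with an illustrative threshold value that is not taken from the text, could look like this:

    import numpy as np

    def background_excluded_image(extracted_image, background_image, threshold=20):
        """Subtract the background luminance pixel by pixel and binarize the result:
        1 where the difference reaches the threshold (treated as batch pile),
        0 elsewhere (treated as background)."""
        diff = extracted_image.astype(np.int16) - background_image.astype(np.int16)
        diff = np.clip(diff, 0, 255)                 # negative differences -> 0
        return (diff >= threshold).astype(np.uint8)  # binary batch-pile mask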
 The observation data calculation means 18 calculates observation data on the batch piles from the image in which the background portion has been removed and the portion corresponding to the batch piles remains. Examples of observation data include the tip position of a batch pile, the moving speed of a batch pile, the melting rate of a batch pile (the rate of decrease of the batch pile), and the occupancy ratio of batch piles in each of the fixed regions 9a and 9b. For these observation data, the difference between the value in the fixed region 9a and the value in the fixed region 9b may also be calculated and used as observation data.
 また、一定領域9を側壁側の領域と、ガラス溶融炉の幅方向の中央側の領域とに二等分し、その二つ領域におけるバッチ山の占有率の比(以下、内外比と記す。)を観察データとして計算してもよい。同様に、一定領域9に関しても、側壁側の領域と、ガラス溶融炉の内側の領域とに二等分し、その二つ領域におけるバッチ山の占有率の比(内外比)を観察データとして計算してもよい。 Further, the constant region 9a is divided into two parts, a side wall region and a central region in the width direction of the glass melting furnace, and the ratio of batch occupancy ratios in the two regions (hereinafter referred to as an internal / external ratio). .) May be calculated as observation data. Likewise, for certain regions 9 b, and side wall regions, bisected into an inner region of the glass melting furnace, the ratio of occupancy of the batch mountain in the two regions (inside and outside ratio) as the observation data You may calculate.
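A rough sketch of how the occupancy ratio and the inner/outer ratio could be derived from such a binarized arrangement image is given below. It assumes, purely for illustration, that the mask covers the fixed region as viewed from directly above and that the half of the columns nearer the side wall is the first half; which region is taken as the numerator of the ratio is likewise an assumption of this example.

```python
import numpy as np

def occupancy_ratio(mask: np.ndarray) -> float:
    """Fraction of pixels in the fixed region occupied by batch piles."""
    return float(mask.mean())

def inner_outer_ratio(mask: np.ndarray) -> float:
    """Ratio of batch-pile occupancy between the side-wall half and the
    furnace-center half of the fixed region (the 'inner/outer ratio')."""
    half = mask.shape[1] // 2
    wall_side = mask[:, :half]     # assumed: columns nearer the side wall
    center_side = mask[:, half:]   # assumed: columns nearer the furnace center
    return occupancy_ratio(wall_side) / max(occupancy_ratio(center_side), 1e-9)
```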
The preprocessing means 19, the posture specifying means 14, the background image creation means 15, the image calibration means 16, the difference calculation means 17, and the observation data calculation means 18 are realized, for example, by the CPU of a computer operating in accordance with a program. In this case, for example, the CPU reads a program stored in a program storage device (not shown) of the computer and, in accordance with that program, operates as the preprocessing means 19, the posture specifying means 14, the background image creation means 15, the image calibration means 16, the difference calculation means 17, and the observation data calculation means 18.
Next, the operation will be described.
First, the preprocessing by the preprocessing means 19 will be described. The camera 11a periodically photographs the direction of the fixed region 9a and sequentially inputs the captured images to the preprocessing means 19. At every fixed cycle (for example, a cycle of several seconds), the preprocessing means 19 generates a preprocessed image based on the plurality of images input from the camera 11a within that cycle. Specifically, for each image input within one cycle, the preprocessing means 19 counts the number of edges in the image. An edge is a line that appears in an image. The region within the image in which the edges are counted may be limited, for example, to the region corresponding to the wall surface and the region corresponding to the fixed region 9a. The processing cycle of the preprocessing means 19 is short, and in many cases the number of batch piles appearing in the images input from the camera 11a does not change within that cycle. If the number of batch piles appearing in the images does not change, the number of edges should also remain roughly constant as long as there is no influence of flames or raw material powder. Using this fact, the preprocessing means 19 selects, from among the plurality of images input from the camera 11a within one cycle, a plurality of consecutive images in which the edge count remains high. As a criterion for judging whether the edge count of an image is high or low, for example, a predetermined threshold value may be used. Specifically, when the edge count obtained for an image satisfies the condition of being equal to or greater than a threshold value predetermined for the number of edges, the preprocessing means 19 judges that the number of edges in the image is large and selects that image; when the edge count obtained for an image is less than the threshold value, the preprocessing means 19 judges that the number of edges in the image is small and does not select that image. Alternatively, the criterion for judging whether the number of edges is high or low may be varied according to the edge counts obtained for the input images.
In the above description, the case where the preprocessing means 19 selects a plurality of consecutive images has been described as an example; however, the plurality of images selected by the preprocessing means 19 need not be consecutive.
The preprocessing means 19 may also calculate a quantity representing the contrast between light and dark in an image and select images that satisfy a predetermined condition regarding that quantity. The number of edges described above is one example of a quantity representing the contrast between light and dark in an image, and the condition that the number of edges is equal to or greater than a threshold value is one example of a predetermined condition regarding a quantity representing the contrast. An example in which the preprocessing means 19 selects images by a method other than selection based on the number of edges is given below. For example, for each image input from the camera 11a, the preprocessing means 19 may calculate the standard deviation of the luminance values as a quantity representing the contrast of the image. In this case, the preprocessing means 19 may calculate the standard deviation of the luminance values of the pixels in the entire image. Alternatively, a region in which the boundaries between bricks appear may be defined in advance within the image, and the preprocessing means 19 may calculate the standard deviation of the luminance values in that region of the image. One example of a condition for selecting images is to exclude the images generated from the occurrence of an event in which the quantity representing the contrast of an image drops by a certain value or more relative to that of the preceding image until a certain time has elapsed, and to select the images that remain without being excluded. For example, when this condition is adopted and the standard deviation of the luminance values is calculated as the quantity representing the contrast between light and dark, if the standard deviation of the luminance values of a certain image drops by a certain value or more relative to the standard deviation of the luminance values of the preceding image, the preprocessing means 19 excludes the images generated from that point until a certain period has elapsed from the targets of subsequent processing and selects the images that remain without being excluded. The preprocessing means 19 then generates a preprocessed image from the selected plurality of images. Note that a drop of a certain value or more in the quantity representing the contrast between light and dark in an image means that the contrast has dropped suddenly, which can be regarded as indicating that a phenomenon such as raw material powder being stirred up has occurred.
In the following description, the case where the preprocessing means 19 selects images based on the number of edges in each image is taken as an example.
The preprocessing means 19 generates the preprocessed image by determining the luminance value of each pixel of the preprocessed image using the selected plurality of images. Focusing on corresponding pixels (pixels with the same image coordinates) in the selected plurality of images, the preprocessing means 19 identifies the minimum luminance value among those pixels and sets that luminance value as the luminance value of the corresponding pixel in the preprocessed image. For example, the preprocessing means 19 reads the luminance value at the image coordinates (x1, y1) of each selected image, identifies the minimum of those luminance values, and sets that minimum luminance value as the luminance value at the image coordinates (x1, y1) of the preprocessed image. The preprocessing means 19 performs this processing for every pixel. The preprocessing means 19 then stores the generated preprocessed image in the image storage means 12. The preprocessing means 19 repeats this processing at a fixed cycle. Accordingly, preprocessed images generated based on the images captured by the camera 11a are successively accumulated in the image storage means 12.
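A compact sketch of this preprocessing (selecting the images whose edge count stays high and then taking the per-pixel minimum luminance) is shown below. The edge counting uses a simple gradient-magnitude criterion as one possible stand-in for the unspecified edge detector, and the function names and threshold values are illustrative assumptions.

```python
import numpy as np

def edge_count(image: np.ndarray, edge_threshold: float = 20.0) -> int:
    """Count edge pixels with a simple gradient-magnitude criterion
    (one possible stand-in for the edge detector, which is not specified)."""
    gy, gx = np.gradient(image.astype(np.float32))
    return int((np.hypot(gx, gy) >= edge_threshold).sum())

def make_preprocessed_image(images, count_threshold):
    """Select the images whose edge count is at least count_threshold and
    take the per-pixel minimum luminance over the selected images."""
    selected = [img for img in images if edge_count(img) >= count_threshold]
    if not selected:
        return None  # no usable image in this cycle
    return np.minimum.reduce(selected)
```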
In the preprocessing, among the plurality of images input from the camera 11a, the images other than the "plurality of consecutive images in which the edge count remains high" may be ignored.
Although the case of using images captured by the camera 11a has been described here as an example, the camera 11b also periodically photographs the direction of the fixed region 9b and sequentially inputs its images to the preprocessing means 19. The preprocessing means 19 likewise generates preprocessed images from the images captured by the camera 11b and stores them in the image storage means 12.
A plurality of consecutive images in which the edge count remains high can be said to be images in which flames and raw material powder hardly appear. This is because, in an image in which flames or floating raw material powder appear prominently, the batch piles and the side walls become unclear and the number of edges in the image decreases. In addition, when a flame appears in an image, the luminance values of the portion corresponding to the flame are high. Therefore, by selecting a plurality of images in which flames and raw material powder hardly appear, as described above, and further identifying the minimum luminance value among the corresponding pixels of those images, luminance values corresponding to a state in which neither flames nor raw material powder appear can be selected. Since the preprocessed image is defined as an image having such luminance values, even if raw material powder floating in the furnace or flames appear in some of the images captured by the camera 11a, a preprocessed image from which such raw material powder and flames have been eliminated can be generated. That is, an image in which the batch piles to be monitored appear clearly can be obtained. The operation in which the preprocessing means 19 generates the preprocessed image corresponds to the preprocessing step.
As already explained, in a glass melting furnace in which the influence of flames is small or little raw material powder floats, it is not necessary to perform the preprocessing described above. In that case, the image processing device 13 may store the images captured by the cameras 11a and 11b in the image storage means 12 as they are.
Next, the operation in which the posture specifying means 14 judges the posture of a camera will be described. Here, the case of judging the posture of the camera 11a is taken as an example, but the posture judgment processing for the camera 11b is the same. FIG. 8 is a flowchart showing an example of the progress of the camera posture judgment processing. In this example, the case where the posture specifying means 14 stores images of a plurality of reference patterns and their image coordinates is described.
As described above, the preprocessing means 19 generates a preprocessed image from the images captured by the camera 11a at every fixed cycle (for example, a cycle of several seconds) and stores that image in the image storage means 12.
The posture specifying means 14 reads a plurality of captured images stored in the image storage means 12 (in this example, preprocessed images generated based on the images captured by the camera 11a) and periodically performs processing for judging whether or not a shift has occurred in the posture of the camera 11a. However, whereas the processing cycle of the preprocessing means 19 is, for example, several seconds, the processing cycle of the posture specifying means 14 is longer than the processing cycle of the preprocessing means 19. For example, the processing cycle of the posture specifying means 14 may be several hours.
When the posture specifying means 14 judges that the processing start timing has arrived, it reads the most recent predetermined number of captured images (preprocessed images generated based on the images captured by the camera 11a) stored in the image storage means 12. This predetermined number may be determined in advance. The posture specifying means 14 generates an average image of the most recent predetermined number of captured images (preprocessed images) that have been read (step S1). Specifically, for the predetermined number of captured images that have been read, the posture specifying means 14 calculates the average of the luminance values of each set of corresponding pixels, generates an image whose luminance values are those averages, and takes that image as the average image. In this example, the case where an average image is generated is illustrated, but an intermediate value of the luminance values may instead be calculated for each set of corresponding pixels, and an image whose luminance values are those intermediate values (an intermediate-value image) may be generated.
Furthermore, in this example, the case where an average image is generated from a plurality of images in step S1 is illustrated, but the processing from step S2 onward may instead be performed on a single image stored in the image storage means 12. That is, the processing of step S1 may be omitted.
The posture specifying means 14 performs pattern matching between the average image generated in step S1 and the plurality of reference patterns that the posture specifying means 14 stores in advance (step S2). In this example, the case where the calculated similarity value becomes smaller as the degree of similarity between images becomes higher is described. In step S2, the posture specifying means 14 calculates the similarity between the stored image of a reference pattern and each part of the average image, and identifies the position in the average image at which the degree of similarity is highest (in this example, at which the similarity value is smallest). For example, if the image of the reference pattern illustrated in FIG. 4 and its image coordinates have been stored in advance, the posture specifying means 14 identifies, within the average image, the locations at which the similarity value with respect to the image of the reference pattern illustrated in FIG. 4 is equal to or less than a threshold value, and further identifies, among those locations, the location at which the similarity value is smallest. This location is the portion of the average image that corresponds to the reference pattern. The posture specifying means 14 then identifies, for example, the image coordinates of the central pixel of the identified location. That is, the posture specifying means 14 identifies the location in the average image that is most similar to the image of the reference pattern illustrated in FIG. 4 and identifies, for example, the image coordinates of its central pixel. The posture specifying means 14 performs this processing for each of the reference pattern images stored in advance.
The similarity may be calculated by a known method. Examples of the similarity include the SSD (Sum of Squared Differences) and the SAD (Sum of Absolute Differences). The SSD is the sum of the squares of the differences between the luminance values of corresponding pixels in the pair of images for which the similarity is calculated. Accordingly, the posture specifying means 14 may calculate the SSD by computing the square of the difference in luminance value for each pair of corresponding pixels in the two images and then summing those values. The SAD is the sum of the absolute values of the differences between the luminance values of corresponding pixels in the pair of images for which the similarity is calculated. Accordingly, the posture specifying means 14 may calculate the SAD by computing the absolute value of the difference in luminance value for each pair of corresponding pixels in the two images and then summing those values. When the images for which the similarity is calculated are binary images, the posture specifying means 14 may calculate the XOR (exclusive OR) for each pair of corresponding pixels in the two images, sum those values, and use the result as the similarity. The SSD, the SAD, and the sum of the XOR values over the pixel pairs are all similarities whose values become smaller as the degree of similarity between images becomes higher.
Note that this example describes the case of using a similarity whose value becomes smaller as the degree of similarity between images becomes higher, but other similarities may be used. For example, the posture specifying means 14 may calculate the normalized cross-correlation (NCC) as the similarity. The normalized cross-correlation approaches 1 as the degree of similarity between images becomes higher. Therefore, when calculating the normalized cross-correlation as the similarity, the posture specifying means 14 may identify the location at which the value of the similarity (normalized cross-correlation) is closest to 1.
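The similarity measures mentioned above can be written compactly. The sketch below shows the SSD, the SAD, and one common zero-mean form of the normalized cross-correlation for a pair of equally sized grayscale patches; the function names are chosen for illustration only.

```python
import numpy as np

def ssd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of squared differences: smaller means more similar."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float((d * d).sum())

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences: smaller means more similar."""
    return float(np.abs(a.astype(np.float64) - b.astype(np.float64)).sum())

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation: closer to 1 means more similar."""
    a0 = a.astype(np.float64) - a.mean()
    b0 = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a0 * a0).sum() * (b0 * b0).sum())
    return float((a0 * b0).sum() / denom) if denom > 0 else 0.0
```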
Next, for each reference pattern, the posture specifying means 14 calculates the difference (that is, the distance) between the image coordinates of the location with the highest degree of similarity identified in step S2 (in this example, the location with the smallest similarity value) and the image coordinates of that reference pattern stored in advance, and judges, based on that distance, whether or not a shift has occurred in the posture of the camera 11a (step S3). The posture specifying means 14 compares the distance between the image coordinates identified in step S2 and the stored image coordinates with a threshold value; if the distance between the coordinates is equal to or greater than the threshold value, it judges that a shift has occurred in the posture of the camera, and if the distance between the coordinates is less than the threshold value, it judges that no shift has occurred in the posture of the camera. Since the posture specifying means 14 stores a plurality of reference patterns in advance, it calculates the distance between coordinates (the difference between the image coordinates identified in step S2 and the stored image coordinates) for each reference pattern. The criterion for comparing this plurality of distances with the threshold value to judge whether or not a shift has occurred in the posture of the camera is not particularly limited. For example, it may be judged that a shift has occurred in the posture of the camera on the condition that a predetermined number or more of the inter-coordinate distances obtained for the respective reference patterns are equal to or greater than the threshold value. Alternatively, it may be judged that a shift has occurred in the posture of the camera on the condition that all the inter-coordinate distances are equal to or greater than the threshold value. Although two criteria are illustrated here, whether or not a shift has occurred in the posture of the camera may be judged according to other criteria.
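The judgment of step S3 might be sketched as follows, assuming that the newly matched coordinates and the stored coordinates are given as lists of (x, y) pairs in the same order. The decision rule shown, declaring a shift when a given number of reference patterns have moved by at least the threshold distance, is one of the criteria mentioned above; the names and parameters are illustrative.

```python
import math

def posture_shifted(matched_coords, stored_coords,
                    distance_threshold: float,
                    min_shifted_patterns: int) -> bool:
    """Judge that the camera posture has shifted when at least
    min_shifted_patterns reference patterns have moved by
    distance_threshold or more between the stored image coordinates
    and the newly matched image coordinates."""
    shifted = 0
    for (x1, y1), (x0, y0) in zip(matched_coords, stored_coords):
        if math.hypot(x1 - x0, y1 - y0) >= distance_threshold:
            shifted += 1
    return shifted >= min_shifted_patterns
```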
When it is judged that a shift has occurred in the posture of the camera (Yes in step S3), the posture specifying means 14 updates the image coordinates in the stored set of the reference pattern image and its image coordinates by replacing the stored image coordinates with the image coordinates identified in step S2 (step S4). That is, the posture specifying means 14 updates the stored image coordinates so that the image coordinates of the location identified in the average image as corresponding to the reference pattern (in the above example, the image coordinates of the central pixel of that location) are paired with the image of that reference pattern. By the processing of step S4, the coordinates (image coordinates) of the reference pattern in the average image are updated in accordance with the shift in the posture of the camera. However, the posture specifying means 14 also uses the pre-update image coordinates in the processing of step S5, and therefore keeps them stored until they are used in step S5.
After step S4, the posture specifying means 14 estimates the posture of the camera 11a using the reference points (step S5). In step S5, the posture specifying means 14 may perform the following processing. Based on the image coordinates of the reference patterns before the update (the image coordinates of the reference patterns stored in advance) and the image coordinates of the reference patterns after the update, the posture specifying means 14 calculates how far and in which direction the reference patterns have shifted within the image. When there are a plurality of reference patterns, for example, the average of the shift amounts and the average of the shift directions of the respective reference patterns may be calculated, and those averages may be taken as the shift amount and the shift direction of the reference patterns. Alternatively, the shift amount and the shift direction of the reference patterns before and after the update may be determined according to other criteria. The posture specifying means 14 shifts the stored image coordinates of the reference points in accordance with the shift direction and the shift amount of the reference patterns. That is, the coordinate values of the image coordinates of the reference points are updated in accordance with the shift of the reference patterns before and after the update. The posture specifying means 14 then calculates, for various postures of the camera 11a, the image coordinates of the individual reference points from the three-dimensional coordinates of the reference points in real space. The posture specifying means 14 then identifies the posture for which the image coordinates calculated from the three-dimensional coordinates of the reference points are closest to the updated image coordinates of the reference points, and judges that this posture is the posture of the camera 11a. The posture estimation processing (that is, the processing of step S5) then ends.
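Step S5 amounts to choosing, from among candidate camera postures, the one whose projection of the real-space reference points lies closest to the shifted image coordinates. The following sketch assumes a simple pinhole projection with a known intrinsic matrix and a discrete set of candidate rotation/translation pairs; these assumptions, and all names used, are introduced only for illustration and are not specified in the embodiment.

```python
import numpy as np

def project(points_3d: np.ndarray, K: np.ndarray,
            R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 real-space reference points into image coordinates
    with a pinhole model (assumed intrinsics K, rotation R, translation t)."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                    # camera -> homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division

def estimate_posture(points_3d, observed_uv, K, candidate_postures):
    """Return the (R, t) candidate whose projected reference points are
    closest, in total squared distance, to the updated image coordinates."""
    best, best_err = None, float("inf")
    for R, t in candidate_postures:
        err = float(((project(points_3d, K, R, t) - observed_uv) ** 2).sum())
        if err < best_err:
            best, best_err = (R, t), err
    return best
```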
On the other hand, when it is judged that no shift has occurred in the posture of the camera (No in step S3), the posture specifying means 14 updates the stored image of each reference pattern with the image of the location identified in step S2 as corresponding to that reference pattern in the average image (step S6). That is, the posture specifying means 14 extracts the image of the portion identified in step S2 as corresponding to the reference pattern in the average image and stores that image as the new reference pattern image. The posture specifying means 14 performs this processing for each reference pattern. By the processing of step S6, the reference pattern image in the set of the reference pattern image and its image coordinates stored in advance by the posture specifying means 14 is updated.
The state of the side walls in the glass melting furnace changes gradually, and the degree of similarity between the location corresponding to a reference pattern in the image and the reference pattern image stored by the posture specifying means 14 may decrease. For example, even when a pattern near a corner of the observation window is used as the reference pattern as shown in FIG. 4, raw material powder gradually adheres to the corner portion, so that the image of the reference pattern portion in the captured image may gradually change from an image of a right-angled corner to an image of a rounded corner. If the stored reference pattern image were not updated, this change would eventually become large, and when a new image was captured, matching between that image and the reference pattern image illustrated in FIG. 4 could no longer be performed. However, by updating the stored reference pattern image in step S6 based on the result of the pattern matching on the average image, the next pattern matching can be performed accurately. For example, the reference pattern image illustrated in FIG. 4 that was stored in advance can be gradually updated to a reference pattern image with a rounded corner. As a result, the next pattern matching can be performed accurately, and the posture of the camera can also be judged accurately.
The posture specifying means 14 may perform the processing from step S1 onward at fixed intervals on the preprocessed images that are generated based on the images captured by the camera 11a and stored in the image storage means 12. Similarly, the processing from step S1 onward may be performed at fixed intervals on the preprocessed images that are generated based on the images captured by the camera 11b and stored in the image storage means 12.
Even when no preprocessing is performed and the images captured by the cameras 11a and 11b are stored in the image storage means 12 as they are, the posture specifying means 14 may perform the processing from step S1 onward at fixed intervals on the images captured by the camera 11a, and likewise perform the processing from step S1 onward at fixed intervals on the images captured by the camera 11b.
Next, the operation of creating an arrangement image of the batch piles in a fixed region as observed from directly above (see FIG. 7) and calculating observation data will be described. FIG. 9 is a flowchart showing an example of the progress of this operation. Here, the case where the image processing device 13 processes the images captured by the camera 11a (in this example, the preprocessed images generated based on the images captured by the camera 11a) is described as an example, but the image processing device 13 performs the same processing on the images captured by the camera 11b (in this example, the preprocessed images generated based on the images captured by the camera 11b).
First, the image calibration means 16 reads a plurality of the images captured by the camera 11a (in this example, preprocessed images) stored in the image storage means 12, in order from the newest. The number of captured images to be read at this time may be determined in advance. The image calibration means 16 then extracts, from each of these captured images, the range 31a (see FIG. 6) corresponding to the fixed region 9a in real space (step S10). The image indicated by the extracted range 31a (hereinafter referred to as the extracted image) is an image of batch piles with bubbles as the background. For convenience, the case where the posture of the camera 11a has not changed is described here; when the posture of the camera 11a has changed, the image calibration means 16 may extract the range 31a corresponding to the fixed region 9a in real space from each captured image based on the posture of the camera 11a at the time the image was captured.
When no preprocessing is performed and the images captured by the cameras 11a and 11b are stored in the image storage means 12 as they are, the image calibration means 16 may, in step S10, read a plurality of the captured images taken by the camera 11a and stored as they are in the image storage means 12, in order from the newest, and extract an extracted image from each captured image. The same applies to the captured images taken by the camera 11b and stored as they are in the image storage means 12. In other respects, the processing is the same whether or not the preprocessing is performed. Step S10 corresponds to the region extraction step.
Next, the background image creation means 15 creates an image of the state in which no batch pile exists, based on the extracted images extracted from the plurality of captured images. That is, it creates a background image that serves as the background of the batch piles (step S11). In step S11, a background image is created that has pixels with the same image coordinates as the extracted image extracted from the latest captured image and whose pixel luminance values represent bubbles. Step S11 corresponds to the background image creation step.
FIG. 10 is a flowchart showing an example of the progress of the background image creation processing in step S11.
In the background image creation processing, the background image creation means 15 selects each individual pixel of the extracted image extracted from the latest captured image and determines the luminance value representing the background at the selected pixel based on the luminance values of the selected pixel and of the pixels corresponding to it in the other extracted images. As a result, a background image corresponding to the state in which no batch pile exists is obtained. This processing will be described below with reference to FIG. 10. Here, the case where the luminance value representing the background is determined for each pixel is described as an example, but the background image creation means 15 may instead determine the luminance value representing the background for each individual area of the extracted image.
The background image creation means 15 selects one pixel from among the pixels of the extracted image extracted from the latest captured image (step S21). Next, the background image creation means 15 extracts, from each of the extracted images extracted from the other captured images in step S10 (see FIG. 9), the pixel corresponding to the selected pixel (that is, the pixel corresponding to the same position within the fixed region 9a) (step S22).
Next, targeting the pixel selected in step S21 and the pixels corresponding to it in the other extracted images (that is, the pixels obtained in step S22), the background image creation means 15 counts, for each luminance value, the number of pixels having that luminance value (step S24). The processing of step S24 can be described as histogram creation processing.
Subsequently, the background image creation means 15 evaluates the variation of the luminance values within the range of luminance values in which the pixel count (frequency) is high (step S25). The range of luminance values in which the count is high is, for example, a range over which luminance values whose counts are equal to or greater than a threshold value (a threshold value defined for the count) occur consecutively. FIGS. 11 and 12 are histograms obtained as a result of step S24. In the example shown in FIG. 11, the range of luminance values in which the pixel count is high is k1 to k2. In the example shown in FIG. 12, the range of luminance values in which the pixel count is high is k3 to k4. As an evaluation value for evaluating the variation, for example, the standard deviation or the variance of the luminance values of the pixels counted within such a range may be used. Alternatively, the width of the range of luminance values in which the pixel count is high may be used as the evaluation value. In step S25, such an evaluation value may be calculated. When the standard deviation, the variance, or the width of the range of luminance values in which the pixel count is high is calculated as the evaluation value as illustrated, a smaller evaluation value means a smaller variation of the luminance values. Other index values may also be used as the evaluation value of the variation.
After step S25, the background image creation means 15 judges, based on the evaluation value calculated in step S25, whether or not the variation of the luminance values within the range of luminance values in which the pixel count is high is large (step S26). In step S26, whether or not the variation is large may be judged by comparing the evaluation value with a predetermined threshold value (a threshold value for the evaluation value of the variation). For example, when the standard deviation of the luminance values is calculated as the evaluation value, it may be judged that the variation is large if the evaluation value is equal to or greater than the threshold value (the threshold value defined for the evaluation value), and that the variation is small if the evaluation value is less than the threshold value. The threshold value may be determined in advance according to the index value (standard deviation, variance, or the like) adopted as the evaluation value.
When it is judged that the variation of the luminance values is small (No in step S26), the background image creation means 15 determines the mode luminance value within the range of luminance values in which the count is high (step S28). FIG. 11 is an example of a histogram in the case where the variation of the luminance values is small. Taking FIG. 11 as an example, the range of luminance values in which the count is high is k1 to k2, and the mode luminance value within this range (the luminance value at which the pixel count is maximum) is S. Accordingly, in step S28, the background image creation means 15 identifies the value of S and determines that value as the luminance value of the pixel at the coordinates selected in step S21. A small variation of the luminance values at the selected coordinates means that no batch pile appeared at those coordinates and that the background continued to appear there. Therefore, when the variation is small, the mode luminance value S can be determined as the luminance value of the bubbles serving as the background, as described above. Instead of the mode luminance value S, the average of the luminance values of the pixels falling within the range k1 to k2 in which the count is high may be calculated and determined as the luminance value representing the background. Alternatively, the median of the luminance value range k1 to k2 may be determined as the luminance value representing the background.
On the other hand, when it is judged that the variation of the luminance values is large (Yes in step S26), the background image creation means 15 calculates the average of the luminance values of the pixels whose luminance values are greater than a discrimination reference value within the range of luminance values in which the count is high (step S27). FIG. 12 is an example of a histogram in the case where the variation of the luminance values is large. Taking FIG. 12 as an example, the range of luminance values in which the count is high is k3 to k4, and the discrimination reference value is assumed to be T. In this case, the background image creation means 15 calculates the average of the luminance values of the pixels whose luminance values are greater than T and up to k4, and determines that average as the luminance value of the pixel at the coordinates selected in step S21. A large variation of the luminance values at the selected coordinates means that batch piles and background bubbles both appeared at those coordinates, and the luminance values of the bubbles are greater than those of the batch piles. Therefore, the average of the luminance values of the pixels falling in the range greater than the discrimination reference value can be determined as the luminance value of the bubbles serving as the background, as described above. Instead of calculating the average as described above, the mode luminance value within the range greater than the discrimination reference value in the range of luminance values in which the count is high (the range T to k4 in the example shown in FIG. 12) may be determined, and that mode luminance value may be determined as the luminance value of the pixel at the selected coordinates. Alternatively, the median in the range T to k4 may be determined as the luminance value of the pixel at the selected coordinates.
The discrimination reference value is a threshold value for separating a range with large variation (in this example, a range of luminance values) into two, and corresponds to the threshold value in the discriminant analysis binarization method described in Non-Patent Document 3. Accordingly, the threshold value that maximizes the ratio of the between-class variance to the within-class variance for the background region and the batch pile region may be used as the discrimination reference value T.
Here, the case where the luminance value range k3 to k4 is divided into two classes by the discriminant analysis binarization method has been described, but the luminance value range k3 to k4 may be divided into two classes by other methods. For example, the luminance value range k3 to k4 may be divided into two classes by the mode method, by fitting two normal distributions, or the like. Then, from the class with the higher luminance values, the luminance value of the pixel at the selected coordinates may be determined in the same manner as described above.
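The per-pixel decision of steps S24 to S28 could be sketched as follows for 8-bit luminance values. The spread is evaluated with the standard deviation, the consecutive high-count range is approximated by the span between the lowest and highest high-count values, and the split into two classes uses an Otsu-style discriminant criterion; the concrete threshold values and names are illustrative assumptions.

```python
import numpy as np

def background_luminance(values: np.ndarray,
                         count_threshold: int,
                         spread_threshold: float) -> float:
    """Determine the background (bubble) luminance for one pixel position
    from the luminance values observed there across the extracted images."""
    hist = np.bincount(values.astype(np.int64), minlength=256)
    busy = np.flatnonzero(hist >= count_threshold)   # high-count luminance values
    if busy.size == 0:
        busy = np.flatnonzero(hist > 0)              # fall back to all observed values
    lo, hi = int(busy.min()), int(busy.max())        # approximates k1..k2 / k3..k4
    in_range = values[(values >= lo) & (values <= hi)]
    if in_range.std() < spread_threshold:
        # Small spread: only the background appeared here; take the mode.
        return float(lo + int(np.argmax(hist[lo:hi + 1])))
    # Large spread: both batch piles and bubbles appeared; split the range
    # with an Otsu-style threshold and average the brighter (bubble) class.
    best_t, best_var = lo, -1.0
    for t in range(lo, hi):
        low, high = in_range[in_range <= t], in_range[in_range > t]
        if low.size == 0 or high.size == 0:
            continue
        between = low.size * high.size * (low.mean() - high.mean()) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return float(in_range[in_range > best_t].mean())
```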
The background image creation means 15 performs the processing described above with reference to the flowchart of FIG. 10 for each pixel, and determines the luminance value obtained in step S27 or step S28 as the luminance value of the pixel of the background image corresponding to the pixel selected in step S21. As a result, an image from which the batch piles have been removed is obtained for the extracted image extracted from the latest captured image. This image is the background image as observed from the viewpoint of the camera 11a.
The background image creation means 15 may also determine the luminance value representing the background for each individual area obtained by dividing the extracted image. In this case, in step S21 the background image creation means 15 selects one area from the extracted image extracted from the latest captured image. How the areas are defined is not particularly limited. Then, in step S22, the background image creation means 15 extracts, from each of the extracted images extracted from the other captured images, the area corresponding to the selected area (the area corresponding to the same part of the fixed region 9a). In step S24 and the subsequent steps, the background image creation means 15 creates a histogram for the pixels belonging to the area selected in step S21 and to the corresponding areas (the areas obtained in step S22), calculates the evaluation value of the variation of the luminance values, and calculates the luminance value according to whether or not the variation is large (steps S24 to S28). The background image creation means 15 performs this processing for each individual area obtained by dividing the extracted image, and may determine the luminance value obtained in step S27 or step S28 as the luminance value of each pixel within the area of the background image corresponding to the area selected in step S21.
After the background image creation processing, the processing proceeds to step S12 (see FIG. 9). In step S12, the image calibration means 16 converts the background image obtained by the background image creation processing (step S11) into an image as observed when the fixed region 9a is viewed from directly above (step S12). That is, for the background image obtained in step S11, viewpoint conversion processing is performed to change the viewpoint from the position of the camera 11a to a point directly above the fixed region 9a, and a background image as observed from that viewpoint is created. As a result, an image of the fixed region 9a as observed from directly above with no batch pile present is obtained. Step S12 corresponds to the background image conversion step.
Next, the image calibration means 16 converts the extracted image extracted from the latest captured image in step S10 into an image as observed when the fixed region 9a is viewed from directly above (step S13). That is, for the extracted image extracted from the latest captured image, viewpoint conversion processing is performed to change the viewpoint from the position of the camera 11a to a point directly above the fixed region 9a, and the extracted image is converted into an image as observed from that viewpoint. The batch piles and the background appear in this converted image. The conversion processing in steps S12 and S13 is the same conversion processing. Step S13 corresponds to the extracted image conversion step.
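Since the melt surface within the fixed region can be treated as approximately planar, the viewpoint conversion of steps S12 and S13 can be realized as a planar perspective (homography) warp. A minimal sketch using OpenCV is shown below; the four corner correspondences and the output size are assumptions made for this example, and in practice they would follow from the camera posture determined by the posture specifying means 14.

```python
import cv2
import numpy as np

def to_top_view(image: np.ndarray,
                corners_in_image: np.ndarray,
                width_px: int, height_px: int) -> np.ndarray:
    """Warp the extracted image (or the background image) so that the fixed
    region appears as if observed from directly above.

    corners_in_image: 4x2 array of the image coordinates of the fixed
    region's corners, ordered to match the output rectangle below."""
    dst = np.float32([[0, 0], [width_px, 0],
                      [width_px, height_px], [0, height_px]])
    H = cv2.getPerspectiveTransform(np.float32(corners_in_image), dst)
    return cv2.warpPerspective(image, H, (width_px, height_px))
```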
When the sizes of the images after the conversion in steps S12 and S13 differ, the image calibration means 16 may perform correction so that the sizes of the converted images of steps S12 and S13 are made equal.
The converted images obtained in steps S12 and S13 may be used as they are to execute the processing from step S14 onward, which is described later.
Alternatively, every time the latest captured image is detected, the image processing device 13 may execute the processing from step S10 to step S13, and the image calibration means 16 may store a plurality of the images obtained in step S12 (background images of the fixed region 9a as observed from directly above) and a plurality of the images obtained in step S13 (images of the fixed region 9a as observed from directly above). The image calibration means 16 may then select the most recent predetermined number of the images obtained in the executions of step S12 and composite the selected images (for example, generate an average image), and similarly select the most recent predetermined number of the images obtained in the executions of step S13 and composite the selected images. The processing from step S14 onward, which is described later, may then be executed using the composite of the images obtained in the executions of step S12 (the background image of the fixed region 9a as observed from directly above) and the composite of the images obtained in the executions of step S13 (the image of the fixed region 9a as observed from directly above).
Much of the raw material in the solid state exists below the liquid surface. Therefore, rather than performing the processing from step S14 onward using the single images obtained in steps S12 and S13, it is easier to grasp the overall state of the solid raw material from the obtained observation data if the processing from step S14 onward is performed using the composite of a plurality of images obtained in step S12 and the composite of a plurality of images obtained in step S13. Accordingly, it is preferable, as described above, to composite a plurality of images obtained in the executions of step S12, likewise composite a plurality of images obtained in the executions of step S13, and execute the processing from step S14 onward using those composite images.
 When combining a plurality of images obtained through repeated execution of step S13, the image calibration means 16 may, for example, compute the average of the luminance values of the corresponding pixels across the images and use that average as the luminance value of the corresponding pixel in the composite image. The composite image is generated by performing this processing pixel by pixel to determine each luminance value of the composite image. Instead of the average of the luminance values of the corresponding pixels, the minimum of those luminance values may be determined and used as the luminance value of the corresponding pixel in the composite image.
 The image calibration means 16 may generate a composite image by performing the same processing when combining a plurality of images obtained through repeated execution of step S12.
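 A minimal sketch of the compositing described above is given below, purely for illustration. It assumes the input images are already aligned, equally sized grayscale arrays (a simplified stand-in for the converted images of steps S12 and S13).

```python
import numpy as np

def composite(images, mode="mean"):
    """Combine aligned, equally sized grayscale images into one composite image.

    mode="mean": per-pixel average of luminance values
    mode="min":  per-pixel minimum of luminance values
    """
    stack = np.stack([np.asarray(img, dtype=np.float32) for img in images])
    if mode == "mean":
        return stack.mean(axis=0)
    if mode == "min":
        return stack.min(axis=0)
    raise ValueError("mode must be 'mean' or 'min'")

# e.g. background = composite(recent_backgrounds, mode="mean")
```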
 When the movement speed of the batch piles is calculated as observation data, the processing from step S14 onward may be performed using the individual images obtained in steps S12 and S13, without generating the composite images described above. Also, when calculating the movement speed of the batch piles, the processing from step S10 onward is performed using the images captured by the cameras 11a and 11b themselves.
 Next, the difference calculation means 17 calculates, between the converted image of step S13 and the converted background image of step S12, the difference in luminance value between corresponding pixels (step S14). Here, the converted image of step S13 may be a single image obtained in step S13 or a composite of a plurality of images obtained through repeated execution of step S13. Likewise, the converted background image of step S12 may be a single image obtained in step S12 or a composite of a plurality of images obtained through repeated execution of step S12.
 In step S14, the difference calculation means 17 subtracts the luminance value of a pixel of the converted background image of step S12 from the luminance value of the corresponding pixel of the converted image of step S13 (the image showing batch piles and the background). The difference calculation means 17 performs this subtraction for every pair of corresponding pixels.
 FIG. 13 shows an example of the converted image of step S13; the background and a batch pile 10 appear in this image. FIG. 14 shows an example of the converted background image of step S12. FIG. 15 shows an example of the image obtained by applying the processing of step S14 to these two images. As already explained, the luminance of the bubbles also varies somewhat, so the luminance values of pixels corresponding to the background do not necessarily become 0 after the processing of step S14.
 After step S14, the difference calculation means 17 binarizes the image obtained in step S14 (see FIG. 15) (step S15). That is, for each pixel in the image, the difference calculation means 17 replaces a luminance value equal to or greater than a threshold predetermined for binarization with "1" and replaces a luminance value below that threshold with "0". Since the luminance values of pixels corresponding to the background have been brought close to 0 by the subtraction in step S14, they become "0" after binarization. The luminance values of pixels corresponding to the batch pile 10 are not greatly reduced by the subtraction in step S14, so they become "1" after binarization. As a result, the luminance value of the pixels corresponding to the background is "0" and the luminance value of the pixels corresponding to the batch pile 10 is "1". FIG. 16 shows an example of the binarized image. The binarized image represents the positions of batch piles within the fixed region 9a, created on the basis of the extracted image extracted from the latest captured image. Note that this image shows the state as observed from a viewpoint directly above the fixed region 9a and does not contain information on the height of the batch piles. The difference calculation means 17 stores the image generated in step S15 (hereinafter, the binarized image). Steps S14 and S15 correspond to a background-excluded image generation step.
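 Steps S14 and S15 amount to a per-pixel subtraction followed by thresholding. A minimal sketch under the assumption of aligned 8-bit grayscale arrays and a user-chosen threshold (the threshold value shown is hypothetical):

```python
import numpy as np

def background_excluded_image(converted_image, converted_background, threshold):
    """Subtract the background image from the image showing batch piles (step S14)
    and binarize the result (step S15): 1 = batch pile, 0 = background."""
    diff = converted_image.astype(np.int16) - converted_background.astype(np.int16)
    diff = np.clip(diff, 0, None)          # negative differences treated as background
    return (diff >= threshold).astype(np.uint8)

# binary = background_excluded_image(img_s13, bg_s12, threshold=40)  # threshold is an assumed value
```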
 After step S15, the observation data calculation means 18 calculates observation data of the batch piles present within the fixed region 9a using the binarized image generated in step S15 (step S16). In step S16, however, the observation data may be calculated using not only the most recently generated binarized image but also binarized images going back in time. Although the generation of the binarized image for the fixed region 9a has been described here, the image processing device 13 also generates a binarized image for the fixed region 9b based on the image captured by the camera 11b. The observation data calculation means 18 may calculate observation data based on the binarized images of both fixed regions 9a and 9b. Step S16 corresponds to an observation data calculation step.
 Examples of the observation data calculated in step S16 are given below. One example of observation data is the inside/outside ratio of each of the fixed regions 9a and 9b. FIG. 17 is an explanatory diagram showing the fixed regions 9a and 9b each bisected into a region on the side wall 6 side and a region on the center side of the glass melting furnace. Elements similar to those shown in FIG. 1 are given the same reference numerals as in FIG. 1 and their description is omitted. Regions 51 and 52 are obtained by bisecting the fixed region 9a into a region on the side wall 6 side and a region on the center side: region 51 is on the side wall 6 side and region 52 is on the center side. Similarly, regions 41 and 42 are obtained by bisecting the fixed region 9b into a region on the side wall 6 side and a region on the center side: region 41 is on the side wall 6 side and region 42 is on the center side. As the inside/outside ratio for the fixed region 9a, the observation data calculation means 18 may compute an evaluation value representing the ratio between the batch-pile occupancy in region 51 and the batch-pile occupancy in region 52. Similarly, as the inside/outside ratio for the fixed region 9b, it may compute an evaluation value representing the ratio between the batch-pile occupancy in region 41 and the batch-pile occupancy in region 42.
 For example, letting Q be the batch-pile occupancy in the region on the side wall 6 side (that is, region 51 or region 41) and R be the batch-pile occupancy in the region on the center side (that is, region 52 or region 42), the observation data calculation means 18 may compute the evaluation value expressed by the following equation (1) as the inside/outside ratio. Here, Q and R are expressed as percentages, each taking values in the range of 0 to 100.
 Inside/outside ratio = (R - Q) / (R + Q + α)   Equation (1)
 In equation (1), α is a constant; for example, α = 100 may be used. In this case, the inside/outside ratio takes values in the range of -0.5 to 0.5. The observation data calculation means 18 may calculate the inside/outside ratio for each of the fixed regions 9a and 9b.
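 Equation (1) can be evaluated directly from the binarized image by splitting the fixed region into a side-wall half and a center half. The sketch below assumes, purely for illustration, that the image columns run from the side wall toward the furnace center; this layout is a hypothetical choice.

```python
import numpy as np

def inside_outside_ratio(binary_image, alpha=100.0):
    """Evaluate equation (1): (R - Q) / (R + Q + alpha),
    where Q and R are the batch-pile occupancy percentages of the side-wall half
    and the center half of the fixed region, respectively."""
    h, w = binary_image.shape
    wall_half = binary_image[:, : w // 2]      # assumed: side-wall side in the left half
    center_half = binary_image[:, w // 2 :]
    q = 100.0 * wall_half.mean()               # occupancy in percent
    r = 100.0 * center_half.mean()
    return (r - q) / (r + q + alpha)
```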
 If the solid-state raw material is drawn too close to the side wall 6, the raw material may flow out of the glass melting furnace without having melted, in which case the quality of the glass deteriorates. The inside/outside ratio makes it possible to check whether the solid-state raw material has drifted too close to the side wall 6. When it is judged that the solid-state raw material is too close to the side wall 6, the glass melting furnace may be operated so that the batch piles move toward the center.
 The observation data calculation means 18 may also calculate the batch-pile occupancy in each of the fixed regions 9a and 9b.
 The observation data calculation means 18 may also calculate the tip position of the batch piles (for example, the coordinates of the tip position) in each of the fixed regions 9a and 9b.
 Depending on the state of the batch piles, the spread of flames, and other factors, the tip position of a batch pile may not appear in the binarized image. In this case, the observation data calculation means 18 divides the fixed region 9a into strips perpendicular to the direction of travel of the melted raw material and calculates the batch-pile area in each divided strip. Then, on the assumption that the batch-pile area changes linearly from the upstream strips toward the downstream strips, the position at which the batch-pile area becomes zero is calculated, and that position may be judged to be the tip position of the batch pile. The same applies to the fixed region 9b.
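 When the tip of a batch pile lies outside the binarized image, the method above extrapolates linearly from the per-strip areas. A minimal sketch, assuming that the raw-material flow direction corresponds to increasing row index in the top-down image (an assumption about the image orientation):

```python
import numpy as np

def estimate_tip_position(binary_image, n_strips=10):
    """Divide the fixed region into strips perpendicular to the flow direction,
    fit a line to the batch-pile area per strip, and return the (extrapolated)
    strip index at which the area reaches zero."""
    strips = np.array_split(binary_image, n_strips, axis=0)  # axis 0 assumed = flow direction
    areas = np.array([s.sum() for s in strips], dtype=np.float64)
    x = np.arange(n_strips, dtype=np.float64)
    slope, intercept = np.polyfit(x, areas, 1)               # linear change assumed
    if slope >= 0:
        return None        # area does not decrease downstream; tip cannot be extrapolated
    return -intercept / slope  # strip index where the fitted area becomes zero
```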
 If the tip of a batch pile extends too far downstream, the raw material may flow out while still unmelted. When the batch-pile tip position calculated by the observation data calculation means 18 extends too far downstream, the glass melting furnace may be operated so that the tip position returns toward the upstream side.
 The observation data calculation means 18 may also calculate, as observation data, the difference between the value of an observation quantity in the fixed region 9a on the right side as viewed from upstream and the value of the same observation quantity in the fixed region 9b on the left side as viewed from upstream. For example, the observation data calculation means 18 may calculate the difference between the batch-pile occupancy in the fixed region 9a and the batch-pile occupancy in the fixed region 9b. It may also calculate the difference between the batch-pile tip position in the fixed region 9a and the batch-pile tip position in the fixed region 9b. Hereinafter, the difference between the values of an observation quantity in the fixed regions 9a and 9b is referred to as the left-right difference. By calculating this left-right difference as one item of observation data, it is possible to check whether the state of the solid raw material is biased between the right side and the left side as viewed from upstream. For example, a situation in which melting progresses on only one of the right and left sides as viewed from upstream while melting is delayed on the other can be identified, and the glass melting furnace can be operated according to that situation.
 For example, when the left-right difference calculated in step S16 indicates that melting of the raw material is delayed in one of the fixed regions 9a and 9b, an operation such as increasing the fuel supplied to the burners on the side wall near the fixed region where melting is delayed (that is, increasing the firing power of those burners) may be performed.
 Although the above example describes calculating the left-right difference of the batch-pile occupancy and the tip position, the observation data calculation means 18 may calculate the left-right difference of other observation data.
 The batch-pile occupancy, the tip position, and their left-right differences may be calculated from the single most recent binarized image, or from a composite of the most recent plurality of binarized images. As already explained, from the viewpoint of grasping the overall picture of the solid-state raw material, it is preferable to combine the images obtained in each execution of step S12, likewise combine the images obtained in each execution of step S13, and perform the processing from step S14 onward using those composite images to generate the binarized image.
 The observation data calculation means 18 may also calculate the movement speed of the batch piles as a whole based on the positions of the same batch pile in a plurality of consecutive binarized images and the capture interval of the camera. Since the batch piles as a whole move slowly, the position of the same batch pile changes little across consecutive binarized images. The observation data calculation means 18 may therefore judge that, in consecutive binarized images, the batch piles whose position coordinates are closest to each other are the same batch pile. The movement distance of that batch pile is then calculated from the change in its coordinates, and the movement speed of the batch piles as a whole is calculated from that movement distance and the capture interval. In this example, the movement speed of one batch pile is regarded as the movement speed of the batch piles as a whole. When the speed of the batch piles is calculated as observation data, the processing from step S10 onward is performed using the images captured by the cameras 11a and 11b themselves. Furthermore, the processing from step S14 onward is performed using the single image obtained in step S12 and the single image obtained in step S13.
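 The speed calculation described here pairs up the same batch pile across consecutive binarized images by nearest position and divides the displacement by the capture interval. A simplified sketch, assuming each pile has already been reduced to a representative coordinate (for example, a centroid) expressed in metres:

```python
import numpy as np

def batch_pile_speed(piles_prev, piles_curr, capture_interval_s):
    """piles_prev, piles_curr: lists of (x, y) pile centroids in metres, taken from
    two consecutive binarized images. Returns the speed of one pile, taken as
    representative of the batch piles as a whole."""
    prev = np.asarray(piles_prev, dtype=np.float64)
    curr = np.asarray(piles_curr, dtype=np.float64)
    # For each current pile, match the closest previous pile (movement is slow,
    # so the nearest position is assumed to identify the same pile).
    dists = np.linalg.norm(curr[:, None, :] - prev[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    return float(nearest[0]) / capture_interval_s  # speed of one pile [m/s]
```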
 The observation data calculation means 18 may also calculate the movement direction of a batch pile based on the positions of the same batch pile in a plurality of consecutive binarized images.
 The observation data calculation means 18 may also calculate the reduction rate of a batch pile from a plurality of consecutive binarized images. For example, the observation data calculation means 18 may identify the same batch pile in each of the consecutive binarized images and calculate the reduction rate of the area or length of that batch pile across the binarized images. When calculating the reduction rate of the length, the reduction rate may be calculated based on the length along the direction in which the raw material flows, or based on the length along the direction perpendicular to the direction in which the raw material flows.
 When calculating the movement speed of the batch piles as a whole, the movement direction of a batch pile, the reduction rate of a batch pile, and the like, the observation data calculation means 18 preferably uses a plurality of consecutive binarized images, but it may also use a plurality of non-consecutive binarized images.
 This reduction rate of a batch pile is considered to correlate with the reduction rate of the batch-pile height, so the height of a batch pile can be judged from its reduction rate. If a batch pile is too high, it takes a long time to melt and its tip position extends downstream.
 Furthermore, the observation data calculation means 18 may calculate the orientation of individual batch piles (the direction in which a batch pile extends) from the binarized image. Such a direction may be expressed as the angle formed with a reference direction determined in advance.
 The observation data calculation means 18 may also calculate the size of individual batch piles from the binarized image.
 The observation data calculation means 18 may also compute, based on the binarized image and the image obtained in step S13, an evaluation value for assessing the state of gas blowing out of the batch piles. When gas is blowing out of a batch pile, depressed holes are observed on the surface of the batch pile in the image, which therefore looks rough. The observation data calculation means 18 may therefore use the binarized image to identify the region corresponding to the batch piles in the image obtained in step S13, compute the standard deviation of the luminance values within that region, and use that standard deviation as the evaluation value of the gas blow-out state.
 Portions depressed by gas blowing out are observed as black areas. The observation data calculation means 18 may therefore use the binarized image to identify the region corresponding to the batch piles in the image obtained in step S13, count the total number of black pixels within that region, and use that count as the evaluation value of the gas blow-out state.
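 Both evaluation values for the gas blow-out state (the luminance standard deviation within the batch-pile region and the count of dark pixels in that region) can be computed as sketched below; the dark-pixel threshold is an assumed value, not one specified by this description.

```python
import numpy as np

def gas_blowout_metrics(top_down_image, binary_image, dark_threshold=30):
    """top_down_image: grayscale image from step S13; binary_image: mask from step S15.
    Returns (std of luminance within the batch-pile region, number of dark pixels there)."""
    pile_pixels = top_down_image[binary_image == 1]
    if pile_pixels.size == 0:
        return 0.0, 0
    return float(pile_pixels.std()), int((pile_pixels < dark_threshold).sum())
```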
 Non-Patent Document 1 and Patent Document 1 describe evaluating the batch-pile occupancy and the batch-pile tip position (the most downstream position). The present invention is not limited to these: by measuring various observation data such as the inside/outside ratio, the left-right difference, the speed and movement direction of the batch piles, the reduction rate of the batch piles, the orientation and size of individual batch piles, and the evaluation value of the gas blow-out state of the batch piles, quantitative evaluation of the batch piles can be performed stably. Based on those results, high-quality glass can be produced by operating the glass melting furnace appropriately.
 According to the present invention, the posture specifying means 14 performs pattern matching of the reference patterns on the captured image (more specifically, on the average image of captured images), determines from the image coordinates of the reference patterns in the captured image whether the posture of the camera has shifted, and, if it determines that a shift has occurred, specifies the posture (position and orientation) of the camera using the amount of the shift. The image calibration means 16 then extracts from the captured image the ranges corresponding to the fixed regions 9a and 9b in real space, based on the posture of the camera. Furthermore, the background image creation means 15 creates a background image from the extracted image, the image calibration means 16 applies to the extracted image and the background image the viewpoint conversion processing that shifts the viewpoint from the camera position to directly above the fixed region, and the difference calculation means 17 calculates the difference between their luminance values. Therefore, even if the posture of the camera changes during cleaning or the like, observation of the fixed regions in the glass melting furnace can be continued satisfactorily.
 As preprocessing, the preprocessing means 19 selects, from among the plurality of images input from the camera, consecutive images that maintain a high edge count. Then, for each set of corresponding pixels in the selected images, the preprocessing means 19 identifies the minimum luminance value among those pixels and sets that value as the luminance value of the corresponding pixel in the preprocessed image. The preprocessing means 19 performs this processing for every set of corresponding pixels. Depending on the image captured by the camera, raw material powder floating in the furnace or flames may appear and make the background and the batch piles indistinct; by performing the preprocessing described above, an image little affected by disturbances such as flames and raw material powder can be created. By then performing the processing from step S10 onward (see FIG. 9) using such an image, a good background image little affected by disturbances and a good image showing only the batch piles can be obtained, and the state of the batch piles in the fixed regions can be monitored accurately.
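 A rough sketch of this preprocessing follows: frames whose edge count is high (that is, frames not obscured by flames or floating raw-material powder) are selected, and the per-pixel minimum over the selected frames is taken. The use of Canny edges, the edge-count threshold, and the assumption of 8-bit grayscale frames are choices made only for illustration.

```python
import cv2
import numpy as np

def preprocess(frames, edge_count_threshold=5000):
    """Select frames with a high edge count and take their per-pixel minimum."""
    selected = [f for f in frames
                if cv2.Canny(f, 100, 200).sum() / 255 >= edge_count_threshold]
    if not selected:
        selected = frames                       # fall back to all frames
    return np.min(np.stack(selected), axis=0)   # minimum luminance per pixel
```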
 Thus, according to this embodiment, observation of the fixed regions in the glass melting furnace can be continued and the state of the batch piles in those regions can be monitored well. As already explained, when monitoring a glass melting furnace in which the influence of raw material powder and flames is small, the preprocessing need not be performed. In that case, the processing from step S10 onward (see FIG. 9) may be performed using the images themselves generated by the camera photographing the interior of the furnace.
 In this embodiment, the posture specifying means 14 performs pattern matching of a plurality of reference patterns on the captured image and specifies the posture of the camera. Because a plurality of reference patterns are used in this way, the reliability of the camera posture-shift determination increases.
 Next, a modification of the first embodiment will be described. The first embodiment described a case in which the conversion processing (steps S12 and S13; see FIG. 9) is performed on the background image and on the extracted image extracted from the latest captured image, after which the difference is calculated (step S14; see FIG. 9). The difference between pixels may instead be calculated first, followed by the conversion processing. FIG. 18 is a block diagram showing a configuration example of the glass melting furnace monitoring system in such a modification of the first embodiment. The means shown in FIG. 18 are the same as those shown in FIG. 2 and are denoted by the same reference numerals as in FIG. 2. In this modification, however, the flow of the various images differs in part from that of the first embodiment, so the arrows indicating the flow of the images differ from those in FIG. 2. FIG. 19 is a flowchart showing an example of the processing up to the calculation of observation data in this modification of the first embodiment. Processing similar to that described in the first embodiment is denoted by the same reference signs as in FIG. 9 and its description is omitted.
 In this modification, after steps S10 and S11, the difference calculation means 17 calculates the difference in luminance value between corresponding pixels of the extracted image extracted from the latest captured image and the background image created in step S11 (step S31). At this time, the difference calculation means 17 subtracts the luminance value of a pixel of the background image from the luminance value of the corresponding pixel of the extracted image extracted from the latest captured image (the image showing batch piles and the background). The difference calculation means 17 performs this subtraction for every pair of corresponding pixels. As a result, an image of the fixed region as seen from the viewpoint of the camera, with the background removed, is obtained. In this subtraction result, however, the luminance values of pixels corresponding to the background are not necessarily 0.
 Therefore, after step S31, the difference calculation means 17 binarizes the image obtained in step S31 (step S32). As a result, a binarized image of the fixed region as seen from the viewpoint of the camera is obtained in which the luminance value of pixels corresponding to the background is "0" and the luminance value of pixels corresponding to the batch pile 10 is "1". Steps S31 and S32 correspond to a background-excluded image generation step.
 After step S32, the image calibration means 16 applies to the binarized image generated in step S32 the viewpoint conversion processing that shifts the viewpoint from the camera position to directly above the fixed region (step S33). As a result, a binarized image similar to the binarized image obtained in step S15 (see FIG. 9) described above is obtained. Step S33 corresponds to a background-excluded image conversion step.
 After step S33, the observation data calculation means 18 calculates the observation data of the batch piles present in the fixed region using the binarized image after the conversion processing of step S33 (step S16). This processing is the same as the processing of step S16 already described.
[Embodiment 2]
 FIG. 20 is a block diagram showing a configuration example of the glass melting furnace monitoring system according to the second embodiment of the present invention. Components similar to those of the first embodiment are denoted by the same reference numerals as in FIG. 2 and their description is omitted. The glass melting furnace monitoring system of the second embodiment comprises a camera 11a, a camera 11b, and an image processing device 13a. In addition to the preprocessing means 19, the image storage means 12, the posture specifying means 14, the background image creation means 15, the image calibration means 16, the difference calculation means 17, and the observation data calculation means 18, the image processing device 13a comprises observation data analysis means 61 and melting furnace control means 62. The image processing device 13a may also have a configuration in which the observation data analysis means 61 and the melting furnace control means 62 are added to the image processing device of the glass melting furnace monitoring system shown in FIG. 18.
 The observation data analysis means 61 determines the degree of correlation between the various observation data whose values are calculated by the observation data calculation means 18 and various operating parameters of the glass melting furnace. In other words, the observation data analysis means 61 derives the degree of influence that the various operating parameters of the glass melting furnace exert on the various observation data whose values are calculated by the observation data calculation means 18. Examples of the observation data include the batch-pile occupancy in each of the fixed regions 9a and 9b, the batch-pile tip position, the left-right differences of those observation data, the inside/outside ratio in the fixed regions 9a and 9b, the movement speed of the batch piles, and the reduction rate of the batch piles, but the observation data are not limited to these. Examples of the operating parameters include the combustion conditions of the burner fuel (for example, the combustion amount), the charging conditions of the raw material (for example, the charged amount), and the batch-to-cullet ratio, but the operating parameters are likewise not limited to these.
 The observation data analysis means 61 determines the degree of correlation between the observation data and the operating parameters by, for example, principal component analysis and multivariate analysis (for example, multiple regression analysis). For example, the observation data analysis means 61 performs principal component analysis to obtain principal components and performs multivariate analysis using those principal components. The observation data analysis means 61 then derives the influence degree of each parameter by using the coefficients employed in this process. Specifically, the influence degree of a parameter is the degree of influence that the operating parameter exerts on the observation data. The processing by which the observation data analysis means 61 derives the influence degrees of the parameters corresponds to an influence degree derivation step.
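 One way to realize the principal component analysis followed by multiple regression is sketched below with scikit-learn; the regression coefficients mapped back to the original operating parameters serve as the influence degrees. The data shapes, the number of components, and the library choice are assumptions for illustration, not the specific procedure of this description.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def influence_degrees(X_params, y_observation, n_components=3):
    """X_params: (n_samples, n_parameters) matrix of operating-parameter values.
    y_observation: (n_samples,) values of one observation quantity.
    Returns one influence degree per operating parameter."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X_params)             # principal-component scores
    reg = LinearRegression().fit(scores, y_observation)
    # Map the regression coefficients in principal-component space
    # back onto the original operating parameters.
    return reg.coef_ @ pca.components_
```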
 FIG. 21 is a graph showing an example of the result of calculating the influence degrees of the operating parameters on one item of observation data (here, observation data A). FIG. 21 shows the correlation between observation data A and the operating parameters charging condition A (taken to be the raw-material charging amount), charging condition B, and combustion parameters A to D. The vertical axis in FIG. 21 is the influence degree of each operating parameter. Combustion parameters A to D are the combustion amounts of the burners at the respective locations. A positive influence degree of an operating parameter indicates a positive correlation with the observation data, and a negative influence degree indicates a negative correlation. The larger the absolute value of the influence degree, the stronger the correlation between the operating parameter and the observation data.
 For example, the result shown in FIG. 21 means that increasing charging condition A (the raw-material charging amount) increases the value of observation data A, and that increasing combustion parameter A decreases the value of observation data A.
 The melting furnace control means 62 refers to the observation data calculated by the observation data calculation means 18, and when an item of observation data has reached a value at which the operating state of the glass melting furnace should be changed, changes an operating parameter correlated with that observation data. Here, an operating parameter correlated with the observation data is, for example, an operating parameter whose absolute influence degree on the observation data is equal to or greater than a predetermined value. For example, when the value of the observation data exceeds its upper limit and has become too high, the value of an operating parameter positively correlated with that observation data is decreased, or the value of an operating parameter negatively correlated with it is increased. Conversely, when the value of the observation data has fallen below its lower limit and has become too low, the value of an operating parameter positively correlated with that observation data is increased, or the value of an operating parameter negatively correlated with it is decreased. As a specific example, when it is determined that there is a negative correlation between the batch-pile occupancy (observation data) and the furnace interior temperature (operating parameter), and the batch-pile occupancy exceeds its upper limit, the melting furnace control means 62 may operate the glass melting furnace so as to raise the furnace interior temperature, that is, increase the firing power of the burners. The processing by which the melting furnace control means 62 changes the operating parameters in this way corresponds to a melting furnace control step.
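 The control rule described above can be stated compactly: when an observation value leaves its allowed band, adjust an operating parameter whose influence degree is sufficiently large in absolute value, in the direction given by the sign of that influence. A hedged sketch follows; all thresholds, step sizes, and parameter names are hypothetical.

```python
def adjust_parameter(observation, lower, upper, influences, params,
                     step=0.05, min_influence=0.1):
    """observation: current value of one observation quantity.
    influences: dict {parameter_name: influence degree on this observation}.
    params:     dict {parameter_name: current set value}, modified in place."""
    if lower <= observation <= upper:
        return params                                   # within the allowed band
    # Pick the parameter with the strongest (sufficiently large) influence.
    name, infl = max(influences.items(), key=lambda kv: abs(kv[1]))
    if abs(infl) < min_influence:
        return params
    direction = -1.0 if observation > upper else 1.0    # bring the value back into band
    params[name] += direction * (1.0 if infl > 0 else -1.0) * step * abs(params[name])
    return params
```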
 The melting furnace control means 62 may also output an alarm when the value of the observation data exceeds its upper limit or falls below its lower limit.
 The operating parameters of the glass melting furnace may instead be changed by an operator. In this case, the melting furnace control means 62 need not be provided. In this case, the operator may decide which operating parameter to change, and how, by referring to the observation data calculated by the observation data calculation means 18 and to the influence degrees between the observation data and the operating parameters calculated by the observation data analysis means 61.
 According to this embodiment, the observation data analysis means 61 calculates the influence degrees indicating the degree of correlation of the operating parameters with the observation data, so it becomes clear which operating parameter of the glass melting furnace should be adjusted according to the monitored state of the batch piles.
 Furthermore, by providing the melting furnace control means 62, the glass melting furnace can be automatically controlled to an appropriate state without depending on an operator.
 The above description covers the case in which the observation data analysis means 61 calculates the influence degrees of the operating parameters on the observation data. In addition, when quality data representing the quality of the state of the raw material (for example, the number of bubbles) is available, the observation data analysis means 61 may calculate influence degrees representing the degree of correlation of the observation data and the operating parameters with the quality data. These influence degrees may also be obtained by, for example, principal component analysis and multivariate analysis. Note that the larger the number of bubbles, the worse the condition of the furnace.
 FIG. 22 is a graph showing the result of calculating the influence degrees of observation data A and B and of temperatures A to D (operating parameters) on the number of bubbles, which is one item of quality data. Observation data A and B are data calculated by the observation data calculation means 18 from binarized images generated based on captured images. Temperatures A to D are values obtained by measuring the temperature at various locations of the glass melting furnace. In the example shown in FIG. 22 as well, a positive influence degree indicates a positive correlation between the observation data or temperature and the quality data, and a negative influence degree indicates a negative correlation. The larger the absolute value of the influence degree, the greater the degree of correlation.
 For example, the results shown in FIG. 22 show that the larger the values of observation data A and B, the greater the number of bubbles (that is, the worse the quality), and that the lower the value of temperature A, the greater the number of bubbles.
 Note that even if a correlation is found between a certain item of observation data and the quality data under one set of conditions, a correlation may be found between another item of observation data and that quality data under different conditions. FIG. 23 is a graph showing how the correlation between observation data and quality data can disappear or newly appear as conditions change. The left vertical axis in FIG. 23 shows the value of the observation data, the right vertical axis shows the value of the quality data (here, the number of bubbles), and the horizontal axis represents the passage of time. In the example shown in FIG. 23, a correlation between observation data A and the quality data was observed up to partway through the measurement period but was lost in the latter half, whereas there was no correlation between observation data B and the quality data until partway through the measurement period but a correlation appeared in the latter half.
 It is therefore preferable that the observation data analysis means 61 repeatedly calculate the influence degrees between the observation data and the quality data.
 In the second embodiment, a case was described in which an operating parameter correlated with the observation data is identified based on the influence degrees calculated by the observation data analysis means 61, and that operating parameter is changed according to the observation data. When the operator can determine which operating parameter to adjust by referring to the binarized image, the operator may increase or decrease the operating parameter by referring to the binarized image. For example, when it is judged from the binarized image that melting of the batch piles on the right side as viewed from the upstream wall is slow, the operator may increase the firing power of the burners on the right side as viewed from the upstream wall.
 In each of the above embodiments, the camera 11a may be arranged at a position from which it photographs the fixed region 9a from directly above, and the camera 11b may be arranged at a position from which it photographs the fixed region 9b from directly above. In this case as well, characteristic objects (for example, the side walls and burners) are included in the photographing range, and the reference patterns and reference points are also photographed. When the camera 11a is arranged at a position from which it photographs the fixed region 9a from directly above and the camera 11b is arranged at a position from which it photographs the fixed region 9b from directly above, the viewpoint conversion processing that shifts the viewpoint to directly above the fixed region 9a or the fixed region 9b need not be performed. That is, the viewpoint conversion processing of steps S12 and S13 (see FIG. 9) need not be performed, and in the processing shown as the modification of the first embodiment (see FIG. 19), the viewpoint conversion processing of step S33 need not be performed.
[Embodiment 3]
 Next, as a third embodiment of the present invention, a method for manufacturing a glass article will be described. The glass melting furnace monitoring method described in the first embodiment is applied to the glass article manufacturing method of the present invention. Furthermore, the determination of the degree of correlation between the observation data and the operating parameters and the operating-parameter change processing described in the second embodiment may also be applied to the glass article manufacturing method of the present invention. FIG. 24 is a schematic diagram showing an example of a glass article production line used in the glass article manufacturing method of this embodiment. Although the cameras 11a and 11b and the image processing device 13 are omitted from FIG. 24, the cameras 11a and 11b are arranged in the vicinity of the glass melting furnace 1, and the image processing device 13 is also arranged; the arrangement position of the image processing device 13 is, however, not limited. The image processing device 13a described in the second embodiment may be arranged instead.
 The glass article production line is provided with the glass melting furnace 1 and a clarification tank 30. The type of the clarification tank 30 is not limited: it may be a reduced-pressure type clarification tank that removes bubbles by depressurizing the interior of the tank, or a high-temperature type clarification tank that removes bubbles by raising the interior of the tank to a high temperature.
 The glass melting furnace 1 (see FIGS. 24 and 1) melts the glass raw material into molten glass 71. The batch piles are not shown in FIG. 24. The clarification tank 30 removes bubbles generated in the molten glass 71. The molten glass from which the bubbles have been removed proceeds to a forming step and a slow cooling step.
 FIG. 25 is a flowchart showing an example of the glass article manufacturing method of this embodiment. First, the glass raw material is charged into the glass melting furnace 1. The glass melting furnace 1 is provided with burners 5 (see FIG. 1) and keeps the interior of the glass melting furnace 1 at a high temperature. Molten glass 71 is then produced by heating the glass raw material in the glass melting furnace 1 (step S91, glass melting step).
 In step S91, the cameras 11a and 11b photograph the interior of the glass melting furnace 1, and the image processing device 13 performs the same processing as in the first embodiment on the resulting images. That is, it performs the processing of steps S51 to S54 (see FIG. 5), steps S1 to S6 (see FIG. 8), steps S10 to S16 (see FIG. 9 or FIG. 19), steps S21 to S28 (see FIG. 10), and so on. Through this processing, observation data are obtained and the interior of the glass melting furnace 1 can be monitored well. In addition, as in the second embodiment, the image processing device 13a described in the second embodiment may determine the degree of correlation between the observation data and the operating parameters of the glass melting furnace 1 and change the operating parameters of the glass melting furnace 1.
 The molten glass 71 produced in step S91 flows into the clarification tank 30. Bubbles are present in this molten glass 71, and a bubble layer (not shown) forms on the surface of the molten glass 71. Inside the clarification tank 30, the bubbles are removed from the molten glass 71 (step S92, clarification step).
 After step S92, the molten glass from which the bubbles have been removed is formed (step S93, forming step). In the forming step, the molten glass may be formed by, for example, the float process. Specifically, the molten glass 71 from which the bubbles have been removed is floated on molten tin (not shown) and advanced in the conveying direction to form a continuous plate-like glass ribbon. At this time, in order to form a glass ribbon of a predetermined thickness, rotating rolls are pressed against both side portions of the glass ribbon, and the glass ribbon is stretched outward in the width direction (the direction perpendicular to the conveying direction).
 Next, the glass ribbon formed in step S93 is slowly cooled (step S94, slow cooling step). In the slow cooling step, the glass ribbon is drawn out of the molten tin and gradually cooled inside an annealing furnace (not shown). Even after being conveyed out of the annealing furnace, the glass ribbon is further cooled gradually to near room temperature.
 After the slow cooling step, the glass ribbon solidified in the slow cooling step is processed as necessary (step S95, processing step). Examples of the processing in step S95 include cutting and polishing; however, the processing is not limited to cutting and polishing, and other processing may be performed.
 According to the glass article manufacturing method of this embodiment, a glass article can be manufactured while observation of the fixed regions in the glass melting furnace is continued satisfactorily. In particular, if, as in the second embodiment, the image processing device 13a determines the degree of correlation between the observation data and the operating parameters of the glass melting furnace 1 and changes the operating parameters of the glass melting furnace 1, the glass melting furnace 1 can be operated with operating parameters appropriate to the observation results inside the furnace, and glass articles can thus be manufactured.
 Although this application has been described in detail and with reference to specific embodiments, it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention.
 This application is based on a Japanese patent application filed on May 6, 2011 (Japanese Patent Application No. 2011-103601), the contents of which are incorporated herein by reference.
Industrial Applicability
 The present invention is suitably applied to a glass melting furnace monitoring system for monitoring batch piles in a glass melting furnace.
 11a, 11b  camera
 12  image storage means
 13, 13a  image processing device
 14  attitude specifying means
 15  background image creation means
 16  image calibration means
 17  difference calculation means
 18  observation data calculation means
 19  preprocessing means
 61  observation data analysis means
 62  melting furnace control means

Claims (18)

  1.  A method for monitoring the interior of a glass melting furnace, the method comprising:
     an image capturing step in which an image capturing means captures an image including a reference pattern provided in the glass melting furnace and a certain range on the liquid surface of the glass raw material melted in the glass melting furnace;
     a region extraction step of extracting a region corresponding to the certain range from the captured image in accordance with the attitude of the image capturing means, the attitude being calculated using the positional shift of the reference pattern in the image;
     a background image creation step of creating, based on a plurality of extracted images extracted from a plurality of images as regions corresponding to the certain range, a background image that serves as the background of a batch pile, which is glass raw material accumulated in the glass melting furnace;
     a background-excluded image generation step of generating a background-excluded image, in which the background is excluded from an extracted image showing the batch pile and the background, by performing, pixel by pixel, a process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the extracted image extracted from a captured image as the region corresponding to the certain range; and
     an observation data calculation step of calculating observation data relating to the batch pile based on the background-excluded image.
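     For illustration only, the following Python sketch shows one possible realization of the background-subtraction and observation steps of claim 1, assuming 8-bit grayscale images that have already been cropped to the certain range. The function names, the fixed threshold of 30, and the area-fraction observation quantity are hypothetical and are not taken from the patent.

```python
import numpy as np

def background_excluded_image(extracted: np.ndarray,
                              background: np.ndarray,
                              threshold: int = 30) -> np.ndarray:
    """Per-pixel subtraction of the background image from an extracted image.

    Both inputs are assumed to be 8-bit grayscale arrays of identical shape,
    already cropped to the fixed observation range. Pixels whose luminance
    exceeds the background by more than `threshold` are kept as foreground
    (batch pile); everything else is treated as background.
    """
    diff = extracted.astype(np.int16) - background.astype(np.int16)
    return (diff > threshold).astype(np.uint8) * 255

def observation_data(mask: np.ndarray) -> float:
    """A simple observation quantity: fraction of the range covered by batch piles."""
    return float(np.count_nonzero(mask)) / mask.size
```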
  2.  The method for monitoring the interior of a glass melting furnace according to claim 1, wherein, in the background image creation step, the number of pixels having each luminance value is counted for each corresponding pixel or each corresponding area of the plurality of extracted images, and the background image is created by determining a luminance value representing the background based on the count results for each luminance value.
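     The counting described in claim 2 amounts, in the per-pixel case, to taking the most frequent luminance value at each pixel position over a stack of extracted images. The sketch below shows that idea with NumPy; the stack shape, the 8-bit luminance range, and the use of the per-pixel mode are assumptions made for illustration rather than details prescribed by the claim.

```python
import numpy as np

def build_background(frames: np.ndarray) -> np.ndarray:
    """Create a background image from a stack of extracted images.

    `frames` has shape (N, H, W) with 8-bit luminance values. For every pixel
    position, the number of frames showing each luminance value is counted and
    the most frequent value is taken as the background luminance, so that
    batch piles drifting through the scene are suppressed.
    """
    n, h, w = frames.shape
    flat = frames.reshape(n, -1)                                        # (N, H*W)
    # Count occurrences of each luminance value (0..255) at every pixel position.
    counts = np.apply_along_axis(np.bincount, 0, flat, minlength=256)   # (256, H*W)
    return counts.argmax(axis=0).astype(np.uint8).reshape(h, w)
```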
  3.  The method for monitoring the interior of a glass melting furnace according to claim 1 or 2, wherein, in the background-excluded image generation step, the process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the extracted image extracted from the captured image as the region corresponding to the certain range is performed pixel by pixel, and the background-excluded image is generated by binarizing the subtraction result for each pixel.
  4.  The method for monitoring the interior of a glass melting furnace according to any one of claims 1 to 3, further comprising:
     a background image conversion step of converting the background image into an image as observed when the certain range is viewed from above, facing the liquid surface; and
     an extracted image conversion step of converting the extracted image extracted as the region corresponding to the certain range into an image as observed when the certain range is viewed from above, facing the liquid surface,
     wherein, in the background-excluded image generation step, the luminance value of the corresponding pixel in the background image converted in the background image conversion step is subtracted from the luminance value of each pixel of the extracted image converted in the extracted image conversion step, and
     in the observation data calculation step, the observation data is calculated based on the background-excluded image generated in the background-excluded image generation step.
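     The conversion to a view seen from directly above the liquid surface, as in claim 4, corresponds in image-processing terms to a perspective (homography) warp. The sketch below shows one way to perform such a warp with OpenCV; the corner coordinates, the output size, and the choice of cv2.getPerspectiveTransform are illustrative assumptions, since the claim does not prescribe a particular transformation routine.

```python
import cv2
import numpy as np

def to_top_view(image: np.ndarray,
                corners_in_image: np.ndarray,
                out_size: tuple = (400, 300)) -> np.ndarray:
    """Warp the fixed observation range to a view seen from directly above.

    `corners_in_image` holds the four image coordinates (float32, shape (4, 2))
    of the corners of the certain range, e.g. as located from the reference
    pattern; they are mapped onto an axis-aligned rectangle so that areas in
    the warped image are proportional to areas on the melt surface.
    """
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    homography = cv2.getPerspectiveTransform(np.float32(corners_in_image), dst)
    return cv2.warpPerspective(image, homography, (w, h))
```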
  5.  The method for monitoring the interior of a glass melting furnace according to any one of claims 1 to 3, further comprising a background-excluded image conversion step of converting the background-excluded image into an image as observed when the certain range is viewed from above, facing the liquid surface, wherein, in the observation data calculation step, the observation data is calculated based on the background-excluded image converted in the background-excluded image conversion step.
  6.  The method for monitoring the interior of a glass melting furnace according to any one of claims 1 to 5, further comprising a preprocessing step of calculating, for each image obtained in the image capturing step, a quantity representing the light-dark contrast within the image, and selecting images that satisfy a predetermined condition with respect to the quantity representing the contrast.
  7.  The method for monitoring the interior of a glass melting furnace according to claim 6, wherein, in the preprocessing step, the number of edges in the image is calculated as the quantity representing the contrast, a plurality of images satisfying the condition that the number of edges is equal to or greater than a predetermined threshold are selected, and an image from which the region corresponding to the certain range is to be extracted is generated based on the selected plurality of images.
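     As a concrete picture of the preselection in claims 6 and 7, the sketch below counts Canny edge pixels as the contrast quantity and keeps only frames whose edge count reaches a threshold. The Canny thresholds, the edge-count threshold, and the averaging of the selected frames are placeholder choices, not values specified in the claims.

```python
import cv2
import numpy as np

def select_clear_frames(frames, edge_threshold=5000,
                        canny_low=50, canny_high=150):
    """Keep only frames whose edge count indicates sufficient contrast.

    Frames taken while flames or fumes blur the scene tend to contain few
    edges; counting Canny edge pixels and requiring at least `edge_threshold`
    of them is one simple way to realize the preselection.
    """
    selected = []
    for frame in frames:                       # iterable of 8-bit grayscale images
        edges = cv2.Canny(frame, canny_low, canny_high)
        if cv2.countNonZero(edges) >= edge_threshold:
            selected.append(frame)
    # e.g. average the selected frames into the image used for region extraction
    return np.mean(selected, axis=0).astype(np.uint8) if selected else None
```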
  8.  A method for operating a glass melting furnace, the method comprising:
     an influence degree derivation step of deriving the degree of influence that the operating parameters of the glass melting furnace have on the observation data calculated in the observation data calculation step of the method for monitoring the interior of a glass melting furnace according to any one of claims 1 to 7; and
     a melting furnace control step of changing, when the observation data satisfies a predetermined condition, an operating parameter for which the absolute value of the degree of influence on the observation data is equal to or greater than a predetermined value.
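     One way to picture the degree of influence in claim 8 is as the coefficients of a linear fit of the observation data to the logged operating parameters. The sketch below illustrates that reading with a least-squares fit in NumPy; the parameter names, sample data, and the threshold used to pick candidates for adjustment are illustrative assumptions, not part of the claim.

```python
import numpy as np

# Hypothetical logged data: rows are samples, columns are operating parameters.
param_names = ["fuel_flow", "bubbling_rate", "raw_feed_speed"]
X = np.array([[10.1, 3.0, 7.5],
              [10.3, 3.0, 7.4],
              [10.8, 3.1, 7.8],
              [10.6, 3.0, 7.7],
              [11.2, 3.1, 8.1],
              [11.5, 3.2, 8.4]])
y = np.array([0.42, 0.45, 0.50, 0.48, 0.55, 0.60])  # observation data (e.g. batch-pile coverage)

# Least-squares fit y ≈ X @ coef + intercept; the coefficients play the role
# of the degree of influence of each parameter on the observation data.
A = np.column_stack([X, np.ones(len(y))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
influence = dict(zip(param_names, coef[:-1]))
print(influence)

# Melting furnace control step: if the observation data violates its condition,
# adjust only parameters whose |influence| is at least a chosen value.
min_influence = 0.01
to_adjust = [n for n, c in influence.items() if abs(c) >= min_influence]
print("parameters to adjust:", to_adjust)
```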
  9.  A system for monitoring the interior of a glass melting furnace, the system comprising:
     an image capturing means for capturing an image including a reference pattern provided in the glass melting furnace and a certain range on the liquid surface of the glass raw material melted in the glass melting furnace;
     an image calibration means for extracting a region corresponding to the certain range from the captured image in accordance with the attitude of the image capturing means, the attitude being calculated using the positional shift of the reference pattern in the image;
     a background image creation means for creating, based on a plurality of extracted images extracted from a plurality of images as regions corresponding to the certain range, a background image that serves as the background of a batch pile, which is glass raw material accumulated in the glass melting furnace;
     a difference calculation means for generating a background-excluded image, in which the background is excluded from an extracted image showing the batch pile and the background, by performing, pixel by pixel, a process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the extracted image extracted from a captured image as the region corresponding to the certain range; and
     an observation data calculation means for calculating observation data relating to the batch pile based on the background-excluded image.
  10.  The system for monitoring the interior of a glass melting furnace according to claim 9, wherein the background image creation means counts, for each corresponding pixel or each corresponding area of the plurality of extracted images, the number of pixels having each luminance value, and creates the background image by determining a luminance value representing the background based on the count results for each luminance value.
  11.  The system for monitoring the interior of a glass melting furnace according to claim 9 or 10, wherein the difference calculation means performs, pixel by pixel, the process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the extracted image extracted from the captured image as the region corresponding to the certain range, and generates the background-excluded image by binarizing the subtraction result for each pixel.
  12.  The system for monitoring the interior of a glass melting furnace according to any one of claims 9 to 11, wherein the image calibration means converts the background image into an image as observed when the certain range is viewed from above, facing the liquid surface, and converts the extracted image extracted as the region corresponding to the certain range into an image as observed when the certain range is viewed from above, facing the liquid surface,
     the difference calculation means subtracts the luminance value of the corresponding pixel in the background image converted by the image calibration means from the luminance value of each pixel of the extracted image converted by the image calibration means, and
     the observation data calculation means calculates the observation data based on the background-excluded image generated by the difference calculation means.
  13.  The system for monitoring the interior of a glass melting furnace according to any one of claims 9 to 11, wherein the image calibration means converts the background-excluded image generated by the difference calculation means into an image as observed when the certain range is viewed from above, facing the liquid surface, and the observation data calculation means calculates the observation data based on the background-excluded image converted by the image calibration means.
  14.  The system for monitoring the interior of a glass melting furnace according to any one of claims 9 to 13, further comprising a preprocessing means for calculating, for each image obtained by the image capturing means, a quantity representing the light-dark contrast within the image, and selecting images that satisfy a predetermined condition with respect to the quantity representing the contrast.
  15.  The system for monitoring the interior of a glass melting furnace according to claim 14, wherein the preprocessing means calculates the number of edges in the image as the quantity representing the contrast, selects a plurality of images satisfying the condition that the number of edges is equal to or greater than a predetermined threshold, and generates, based on the selected plurality of images, an image from which the region corresponding to the certain range is to be extracted.
  16.  The system for monitoring the interior of a glass melting furnace according to any one of claims 9 to 15, further comprising an observation data analysis means for deriving the degree of influence that the operating parameters of the glass melting furnace have on the observation data calculated by the observation data calculation means.
  17.  The system for monitoring the interior of a glass melting furnace according to claim 16, further comprising a melting furnace control means for changing, when the observation data satisfies a predetermined condition, an operating parameter for which the absolute value of the degree of influence on the observation data is equal to or greater than a predetermined value.
  18.  A method for manufacturing a glass article, the method comprising:
     a glass melting step of producing molten glass in a glass melting furnace;
     a clarification step of removing bubbles from the molten glass in a clarification tank;
     a forming step of forming the molten glass from which the bubbles have been removed;
     a slow cooling step of gradually cooling the formed molten glass;
     an image capturing step in which an image capturing means captures an image including a reference pattern provided in the glass melting furnace and a certain range on the liquid surface of the glass raw material melted in the glass melting furnace;
     a region extraction step of extracting a region corresponding to the certain range from the captured image in accordance with the attitude of the image capturing means, the attitude being calculated using the positional shift of the reference pattern in the image;
     a background image creation step of creating, based on a plurality of extracted images extracted from a plurality of images as regions corresponding to the certain range, a background image that serves as the background of a batch pile, which is glass raw material accumulated in the glass melting furnace;
     a background-excluded image generation step of generating a background-excluded image, in which the background is excluded from an extracted image showing the batch pile and the background, by performing, pixel by pixel, a process of subtracting the luminance value of the corresponding pixel in the background image from the luminance value of each pixel of the extracted image extracted from a captured image as the region corresponding to the certain range; and
     an observation data calculation step of calculating observation data relating to the batch pile based on the background-excluded image.
PCT/JP2012/061252 2011-05-06 2012-04-26 Internal inspection method for glass-melting furnace, operation method for glass-melting furnance, and internal inspection system for glass-melting furnace WO2012153649A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2013513979A JP5928451B2 (en) 2011-05-06 2012-04-26 Glass melting furnace monitoring method, glass melting furnace operating method, glass melting furnace monitoring system
CN201280012165.XA CN103415476B (en) 2011-05-06 2012-04-26 System of supervision in supervision method, glass-melting furnace working method, glass-melting furnace in glass-melting furnace
KR1020137023603A KR101923239B1 (en) 2011-05-06 2012-04-26 Internal inspection method for glass-melting furnace, operation method for glass-melting furnace, and internal inspection system for glass-melting furnace

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011103601 2011-05-06
JP2011-103601 2011-05-06

Publications (1)

Publication Number Publication Date
WO2012153649A1 true WO2012153649A1 (en) 2012-11-15

Family

ID=47139133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/061252 WO2012153649A1 (en) 2011-05-06 2012-04-26 Internal inspection method for glass-melting furnace, operation method for glass-melting furnance, and internal inspection system for glass-melting furnace

Country Status (5)

Country Link
JP (1) JP5928451B2 (en)
KR (1) KR101923239B1 (en)
CN (1) CN103415476B (en)
TW (1) TWI522326B (en)
WO (1) WO2012153649A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160077161A * 2013-10-28 2016-07-01 KLA-Tencor Corporation Methods and apparatus for measuring semiconductor device overlay using x-ray metrology
CN110542311A (en) * 2019-08-29 2019-12-06 阿尔赛(苏州)无机材料有限公司 observable high-temperature experimental electric furnace
CN110876274A (en) * 2018-06-29 2020-03-10 法国圣戈班玻璃厂 Method for monitoring in real time the thermal time evolution of a furnace adapted to heat-soften flat glass articles
CN114387248A (en) * 2022-01-12 2022-04-22 苏州天准科技股份有限公司 Silicon material melting degree monitoring method, storage medium, terminal and crystal pulling equipment
US20230265004A1 (en) * 2020-08-31 2023-08-24 Brian M. Cooper Historically accurate simulated divided light glass unit and methods of making the same

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017002844A1 (en) * 2015-06-30 2017-01-05 AvanStrate株式会社 Glass substrate production method and glass substrate production device
US20240254029A1 (en) * 2021-05-19 2024-08-01 Glass Service, A.S. Method of control, control system and glass furnace, in particular for temperature/thermal control
WO2024215033A1 (en) * 2023-04-12 2024-10-17 한국수력원자력 주식회사 Glass melting furnace apparatus and method for operating same

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10160824A1 (en) * 2000-12-14 2003-05-08 Software & Tech Glas Gmbh Process for controlling the quality-determining parameters of a glass bath used in glass production in tank furnaces comprises optically measuring the mixture and adjusting by means of fuel supply or distribution
JP4714607B2 (en) 2006-03-14 2011-06-29 新日本製鐵株式会社 Blast furnace outflow measurement system, blast furnace outflow measurement method, and computer program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58142224A * 1982-02-16 1983-08-24 Owens-Illinois Inc. Method and device for estimating the relative quantities of two substances in the surface region of a mixture containing at least two substances
JPS5944606A (en) * 1982-09-07 1984-03-13 Toyo Glass Kk Method for discriminating position where batch pile exists in melting furnace for glass
JPH01122041U (en) * 1988-02-10 1989-08-18
JP2009161396A (en) * 2008-01-07 2009-07-23 Nippon Electric Glass Co Ltd Production method for glass article, glass article and molten glass face monitoring system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160077161A * 2013-10-28 2016-07-01 KLA-Tencor Corporation Methods and apparatus for measuring semiconductor device overlay using x-ray metrology
JP2016540970A * 2013-10-28 2016-12-28 KLA-Tencor Corporation Method and apparatus for measuring overlay of a semiconductor device using X-ray metrology
KR102152487B1 2013-10-28 2020-09-04 KLA Corporation Methods and apparatus for measuring semiconductor device overlay using x-ray metrology
CN110876274A (en) * 2018-06-29 2020-03-10 法国圣戈班玻璃厂 Method for monitoring in real time the thermal time evolution of a furnace adapted to heat-soften flat glass articles
CN110542311A (en) * 2019-08-29 2019-12-06 阿尔赛(苏州)无机材料有限公司 observable high-temperature experimental electric furnace
US20230265004A1 (en) * 2020-08-31 2023-08-24 Brian M. Cooper Historically accurate simulated divided light glass unit and methods of making the same
US11964897B2 (en) 2020-08-31 2024-04-23 The Cooper Group, Llc Historically accurate simulated divided light glass unit and methods of making the same
CN114387248A (en) * 2022-01-12 2022-04-22 苏州天准科技股份有限公司 Silicon material melting degree monitoring method, storage medium, terminal and crystal pulling equipment
CN114387248B (en) * 2022-01-12 2022-11-25 苏州天准科技股份有限公司 Silicon material melting degree monitoring method, storage medium, terminal and crystal pulling equipment

Also Published As

Publication number Publication date
KR20140015357A (en) 2014-02-06
KR101923239B1 (en) 2018-11-28
CN103415476A (en) 2013-11-27
JPWO2012153649A1 (en) 2014-07-31
JP5928451B2 (en) 2016-06-01
CN103415476B (en) 2015-08-05
TWI522326B (en) 2016-02-21
TW201247577A (en) 2012-12-01

Similar Documents

Publication Publication Date Title
JP5928451B2 (en) Glass melting furnace monitoring method, glass melting furnace operating method, glass melting furnace monitoring system
US20220143704A1 (en) Monitoring system and method of identification of anomalies in a 3d printing process
US20080314878A1 (en) Apparatus and method for controlling a machining system
CN102939513B (en) The manufacture method of shape measuring apparatus, process for measuring shape and glass plate
CN106791807B (en) A kind of method and apparatus of camera module dust detection
EP2490438B1 (en) Vision measuring device
CN109789484A (en) System and method for Z height measurement and adjustment in increasing material manufacturing
WO2009157530A1 (en) Embryo-monitoring apparatus
JP7394952B2 (en) Slag amount measuring device and slag amount measuring method
KR101481442B1 (en) Apparatus for island position detecting of ingot growth furnace and method for island position detecting
TWI574754B (en) Method for monitoring and controlling a rolling mill
WO2012081398A1 (en) Glass plate, method for inspecting glass plate, and method for manufacturing glass plate
JP5339070B2 (en) Displacement measuring apparatus and measuring method
JP5454392B2 (en) Ranging device and imaging device
Yu et al. A novel data-driven framework for enhancing the consistency of deposition contours and mechanical properties in metal additive manufacturing
CN103389040B (en) Method for detecting defects in optical films
CN107413679A (en) A kind of intelligent ore dressing device and method based on machine vision technique
KR101775057B1 (en) Apparatus and method for island position detecting of furnace
KR101780883B1 (en) Shape measuring device
WO2013100069A1 (en) Method of picking up image inside furnace, system for picking up image inside furnace, and method of manufacturing glass goods
JP7571618B2 (en) Method for detecting surface condition of raw material melt, method for producing single crystal, and CZ single crystal production apparatus
JP6048086B2 (en) Imaging apparatus and image processing program
RU2795303C1 (en) Method for automatic continuous surface quality control
KR100711403B1 (en) Apparatus for measuring tension of wire rod in wire rolling mill
Fabijańska et al. Edge detection with sub-pixel accuracy in images of molten metals

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12782557

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2013513979

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20137023603

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12782557

Country of ref document: EP

Kind code of ref document: A1