CN110233967B - Mold template image generation system and method

Mold template image generation system and method

Info

Publication number
CN110233967B
Authority
CN
China
Prior art keywords
template
image
time
module
acquisition
Prior art date
Legal status
Active
Application number
CN201910537845.7A
Other languages
Chinese (zh)
Other versions
CN110233967A (en)
Inventor
游旭新
Current Assignee
Zhangzhou Zhijue Intelligent Technology Co ltd
Original Assignee
Zhangzhou Zhijue Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhangzhou Zhijue Intelligent Technology Co ltd
Priority to CN201910537845.7A
Publication of CN110233967A
Application granted
Publication of CN110233967B
Legal status: Active
Anticipated expiration

Classifications

    • G01N 21/8851 - Investigating the presence of flaws or contamination: scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 21/93 - Detection standards; calibrating baseline adjustment, drift correction
    • G06F 18/22 - Pattern recognition: matching criteria, e.g. proximity measures
    • G06T 7/223 - Image analysis: analysis of motion using block-matching
    • H04N 23/80 - Cameras or camera modules comprising electronic image sensors: camera processing pipelines; components thereof
    • G01N 2021/8883 - Scan or image signal processing involving the calculation of gauges, generating models
    • G01N 2021/8887 - Scan or image signal processing based on image processing techniques


Abstract

The invention discloses a system and a method for generating a template image of a mold. The template image generation system comprises a time interval storage module, a template acquisition module, a template storage module, a template reading module and a template learning module. The time interval storage module is used for storing the template-image acquisition interval data of each time period; the template acquisition module is used for acquiring image information and taking the image acquired at each acquisition time point as the template image of that time point; the template storage module is used for storing the dynamic template video file formed from the template images of the individual time points; the template learning module is used for comparing the current image with the previously acquired template image between two adjacent acquisition time points and, if the difference is greater than a set threshold, adding the current time point as an acquisition time point outside the set regular acquisition time points and storing the current image as a template image. The invention improves detection accuracy and reduces false alarms without affecting detection efficiency.

Description

Mold template image generation system and method
Technical Field
The invention belongs to the technical field of injection molds, relates to an injection mold device, and particularly to a system and a method for generating mold template images based on a timeline sequence of multiple templates.
Background
As the most important molding equipment for processing injection-molded products, the injection mold directly determines product quality. The mold also accounts for a large share of an injection-molding enterprise's production cost, so its service life directly affects the cost of injection-molded products. Improving the quality of the injection mold, maintaining and servicing it well with the help of photoelectric technology, and extending its service life are therefore important subjects for injection-product processing enterprises seeking to reduce cost and improve efficiency. Because such enterprises handle many product varieties and change molds frequently, maintenance and real-time monitoring of the injection mold during a production cycle are critical: while the injection molding machine is running, the expensive mold can be damaged in any cycle by residual material or a misaligned slide, and a mold protector can prevent these situations from happening.
In a photoelectric automatic mold protector, effective and reliable identification of the plastic part and the mold cavity is the basic requirement for triggering the protective inspection. The injection molding machine runs continuously through the alternation of day and night, changes in the weather, unstable factory power, and cross-interference from various light sources in the workshop. A vision-based injection-molding protection device detects by comparing against a template image, and when the lighting changes too much, a video detection protector that uses a single template raises frequent false alarms. In particular, some injection molding machines stand in open areas where sunlight from a window or a transparent ceiling falls directly on the machine, so that strong, changing light casts moving shadows on the mold and the video protection device fails. The currently popular remedy is for an operator to update the template whenever a false alarm occurs; because weather and time of day vary, the template must be updated many times every day, which increases the workload, and the false alarms affect morale. Another approach accumulates a large number of reference templates, dozens or even hundreds, and compares each detection against all of them one by one; the computation is enormous, the computation time is long, working efficiency suffers, and the heavy computer load raises the temperature and shortens the working life of the device.
Patent CN102156990A is a method for detecting blur parameters of image frames, and patent CN101568908A concerns motion-blur parameter detection for blurred aerial remote-sensing images. Many other patents, such as CN101453556A and CN101454715A, are directed at motion-blur detection; their algorithms detect motion and then perform motion-blur correction, rather than handling blur in the general sense.
In view of the above, there is an urgent need to design a new monitoring method to overcome the above-mentioned drawbacks of the existing monitoring methods.
Disclosure of Invention
The invention provides a system and a method for generating a mold template image, which can generate the mold template image automatically and improve working efficiency; when the generated mold template image is used in a mold monitoring system, detection accuracy is improved and false alarms are reduced without affecting detection efficiency.
In order to solve the technical problem, according to one aspect of the present invention, the following technical solutions are adopted:
a mold template image generation system, comprising a time interval storage module, a template acquisition module, a template storage module, a template reading module and a template learning module;
the time interval storage module is used for storing template image acquisition interval data of each time period;
the template acquisition module is used for acquiring image information in real time or at set time intervals, and taking the image acquired at the corresponding acquisition time point as a template image of the corresponding time point;
the template storage module is used for storing template images acquired by the template acquisition module at various time points or/and storing dynamic template video files formed by the template images at various time points;
the template learning module is used for comparing the current image with the previously acquired template image between two adjacent acquisition time points, if the difference is greater than a set threshold value, adding the current time point outside the set regular acquisition time point as an acquisition time point, and storing the current image as the template image.
As an embodiment of the present invention, the mold template image generation system further includes:
the reference time determining module is used for determining reference time;
and the acquisition time point determining module is used for generating each acquisition time point by combining the acquisition interval data set by the time interval storage module on the basis of the reference time.
As an embodiment of the present invention, the template acquisition module is configured to snap-shoot a template image immediately after the reference time determination module generates the reference time; and then, capturing a template image at each acquisition time point determined by the acquisition time point determining module.
As an embodiment of the present invention, the mold template image generation system further includes:
the time interval generating module is used for setting acquisition interval data of each time period and storing the acquisition interval data in a time interval storage file;
and the clock trigger is used for reading the acquisition interval data corresponding to the current time from the time interval storage file.
As an embodiment of the present invention, after each time interval tmInters[n] is generated, the clock trigger continuously accumulates the time interval values on top of the reference time; in this way accurate trigger times are continuously generated on the basis of the reference time;
the clock trigger compares the trigger time with the current time, and immediately sends a trigger signal to the template acquisition module to snapshot a new template as soon as the current time reaches or passes the trigger time;
when the clock trigger fires, the current image is captured from the monitoring camera as the template image for that time point; the time interval value tmInters[n] is stored in the file Ft, and the current template image moldImage[n] is stored in the template video file Fm;
the captured template images are stored into a file as video, in the order of their time points, to generate a corresponding dynamic template video in which each image corresponds to one time point;
thus, for each time interval tmInters[n] in the Ft file there is a moldImage[n] in the Fm file, and the data in the two files are in one-to-one correspondence.
As an embodiment of the present invention, the mold monitoring system further includes:
the template acquisition module is used for acquiring two continuous template images from the template storage module;
the template comparison module is used for comparing the two acquired continuous template images, and if the difference between them is greater than a set threshold value, a new template image is considered necessary between the two continuous template images;
the template generation module is used for carrying out linear interpolation on the two continuous template images to generate a new template image;
the template generation module generates the new template image by linear interpolation: the pixel value f[mn′](x, y) at position (x, y) in the new template image is calculated from the pixel values at the corresponding positions of the preceding and following template images:
f[mn′](x, y) = f[m](x, y) * e + f[n](x, y) * (1 - e), where
e = (T[n] - t[mn′]) / (T[n] - T[m]);
when t[mn′] = T[m], e = 1 and f[mn′](x, y) = f[m](x, y), i.e. moldImage[mn′] = moldImage[m]; conversely, when t[mn′] = T[n], e = 0 and f[mn′](x, y) = f[n](x, y), i.e. moldImage[mn′] = moldImage[n];
wherein T[m] is the actual time of template m, f[m](x, y) is the pixel value at position (x, y) of the template image corresponding to time T[m], and t[mn′] is the time, lying between the two actual times, at which the new template image is needed;
the template generation module is connected with the motion estimation module, and the motion estimation module is used for carrying out motion estimation on a part with larger difference between the two templates, finding out a position with the minimum difference, and carrying out interpolation by using the position local image data to form local image data of the part with larger difference.
A mold template image generation method, the template image generation method comprising the steps of:
a time interval storage step of storing template image acquisition interval data of each time period;
a template acquisition step, namely acquiring image information in real time or at set time intervals, and taking an image acquired at a corresponding acquisition time point as a template image of the corresponding time point;
a template storage step, in which template images acquired by the template acquisition module at various time points are stored, or/and dynamic template video files formed by the template images at various time points are stored;
and a template learning step, namely comparing the current image with the previously acquired template image between two adjacent acquisition time points, if the difference is greater than a set threshold value, adding the current time point outside the set regular acquisition time point as an acquisition time point, and storing the current image as the template image.
As an embodiment of the present invention, the mold template image generation method further includes a template generation step: carrying out linear interpolation on two continuous template images acquired from the template storage module to generate a new template image;
the new template image is generated by linear interpolation: the pixel value f[mn′](x, y) at position (x, y) in the new template image is calculated from the pixel values at the corresponding positions of the preceding and following template images:
f[mn′](x, y) = f[m](x, y) * e + f[n](x, y) * (1 - e), where
e = (T[n] - t[mn′]) / (T[n] - T[m]);
when t[mn′] = T[m], e = 1 and f[mn′](x, y) = f[m](x, y), i.e. moldImage[mn′] = moldImage[m]; conversely, when t[mn′] = T[n], e = 0 and f[mn′](x, y) = f[n](x, y), i.e. moldImage[mn′] = moldImage[n];
before generating a new template image, the difference between moldImage[m] and moldImage[n] is compared, and motion estimation is used to reduce errors when the local difference is too large;
the template generation step comprises a motion estimation step, in which motion estimation is carried out on the part where the two templates differ greatly, the position of minimum difference is found, and the local image data at that position is used for interpolation to form the local image data of the differing part;
if the difference between two sequentially adjacent templates is larger than a set value, a new template image is generated between the two adjacent template images by linear interpolation; the template generation module performs linear interpolation on the two adjacent template images to generate the new template image, the linear coefficient being calculated from the time difference between the time point corresponding to the new template image and those of the template images; the time points corresponding to the template images are obtained by accumulating the time intervals on the basis of the reference time, and the time of the new template image is the actual clock t; if an alarm is still raised after the new template is generated and the operator judges it to be a false alarm, the current image can be inserted into the template sequence as a new template, and t - T is written into the file Ft as a new time interval value.
If the difference between two sequentially adjacent templates is larger than a set threshold value and the difference value is concentrated in only a partial area, starting a motion estimation module to improve the precision of generating a new template; the motion estimation module is used for carrying out motion estimation on the part with larger difference between the two templates, finding out the position with the minimum difference, and using the position local image as a local interpolation reference image of the part with larger difference.
A mold template image generation system, the template image generation system comprising:
the time interval storage module is used for storing template image acquisition interval data of each time period;
the template acquisition module is used for acquiring image information in real time or at set time intervals, and taking the image acquired at the corresponding acquisition time point as a template image of the corresponding time point;
and the template storage module is used for storing the template images acquired by the template acquisition module at all time points or/and storing dynamic template video files formed by the template images at all time points.
A mold template image generation method, the template image generation method comprising:
a time interval storage step of storing template image acquisition interval data of each time period;
a template acquisition step, namely acquiring image information in real time or at set time intervals, and taking an image acquired at a corresponding acquisition time point as a template image of the corresponding time point;
and a template storage step, in which template images acquired by the template acquisition module at various time points are stored, or/and dynamic template video files formed by the template images at various time points are stored.
The invention has the beneficial effects that the system and the method for generating the mold template image can generate the mold template image automatically and improve working efficiency. When the generated mold template image is used in a mold monitoring system, detection accuracy is improved and false alarms are reduced without affecting detection efficiency.
The invention provides a solution based on multiple templates, yet each detection is still compared with only one template, so working efficiency is not affected. Meanwhile, the template corresponding to the current time of day is called from the stored multi-template file; because this template is the most accurate reference image for the current working conditions and the image most similar to the actual scene, no false alarm is generated. The invention therefore effectively solves the two problems described above.
The invention detects image blur, specifically whether blur has occurred; it can effectively detect blur regardless of its cause, for example blur caused by lens defocus, dust on the lens, or dust and fog in the scene, and it is not very sensitive to noise.
Drawings
FIG. 1 is a schematic diagram of a system for generating a mold template image according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a timeline multi-template detection system according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of the operation of the time interval generator and the clock trigger according to an embodiment of the present invention.
Fig. 4 is a diagram illustrating a correspondence between a time point and multiple templates according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the generation of a new time interval in accordance with an embodiment of the present invention.
Fig. 6 is a schematic diagram of a high-precision image restoration method based on local motion estimation according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating an interpolation image effect of motion estimation on shadow changes according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of a template capture interval scheme according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of a storage policy of an Ft data file according to an embodiment of the present invention.
FIG. 10 is a schematic diagram of the timeline multi-template detection system according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of a method for generating a new template image according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
For a further understanding of the invention, reference will now be made to the preferred embodiments of the invention by way of example, and it is to be understood that the description is intended to further illustrate features and advantages of the invention, and not to limit the scope of the claims.
The description in this section covers only several exemplary embodiments, and the present invention is not limited to the scope of the embodiments described. Replacing some features of the embodiments with the same or similar prior-art means also falls within the scope of the present disclosure and protection.
The invention discloses a mold template image generation system, and FIG. 1 is a schematic diagram of the mold template image generation system according to an embodiment of the invention; referring to fig. 1, in an embodiment of the present invention, the mold template image generation system includes a time interval storage module 1, a template acquisition module 4 and a template storage module 2. The time interval storage module 1 is used for storing template image acquisition interval data of each time period; the template acquisition module 4 is used for acquiring image information in real time or at set time intervals, and taking the image acquired at the corresponding acquisition time point as the template image of that time point; the template storage module 2 is used for storing the template images acquired by the template acquisition module at the various time points or/and storing dynamic template video files formed by the template images at the various time points.
In an embodiment of the present invention, the mold template image generation system further comprises a template learning module 6. The template learning module 6 is configured to compare the current image with a previously acquired template image between two adjacent acquisition time points, and if the difference is greater than a set threshold, add the current time point outside the set regular acquisition time point as an acquisition time point, and store the current image as a template image.
In general, the difference between the template images corresponding to two adjacent acquisition time points is below the set threshold; at the same time, for any time point t[x] lying between two adjacent acquisition time points t[i] and t[i+1], the difference between the corresponding template image m[x] and the template images m[i] and m[i+1] of the adjacent acquisition time points t[i] and t[i+1] is also below the set threshold. If, however, the template image m[x] acquired at some time point t[x] between two adjacent acquisition time points t[i] and t[i+1] differs from the adjacent template images m[i] and m[i+1] by more than the set threshold, the current time point is added as acquisition time point t[x] outside the set regular acquisition time points and the current image is stored as template image m[x]; this improves detection accuracy and allows the detection system to perform detection at any time point.
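The patent does not pin down a particular difference measure or threshold. A minimal sketch of the learning decision, assuming grayscale images stored as flat buffers and a mean-absolute-difference metric (both are assumptions of this sketch, not statements of the patent):

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Grayscale image as a flat pixel buffer (width * height); the pixel
    // format is an assumption of this sketch, not something the patent fixes.
    using Image = std::vector<std::uint8_t>;

    // Mean absolute difference between two equally sized images.
    double meanAbsDiff(const Image& a, const Image& b) {
        long long sum = 0;
        for (std::size_t i = 0; i < a.size(); ++i)
            sum += std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i]));
        return static_cast<double>(sum) / static_cast<double>(a.size());
    }

    // Template-learning decision between two scheduled acquisition points:
    // if the current image differs from the previously acquired template by
    // more than the threshold, the current time point becomes an extra
    // acquisition point and the current image is stored as a new template.
    bool shouldAddTemplate(const Image& current, const Image& prevTemplate,
                           double threshold) {
        return meanAbsDiff(current, prevTemplate) > threshold;
    }

Any pixel-wise or block-wise difference measure could stand in for meanAbsDiff; the essential behaviour is only the threshold comparison.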
The template learning module addresses the situation in which a template image acquired at some time point between two set adjacent acquisition time points differs greatly from the template images of those two adjacent acquisition time points; by adding acquisition time points (and the corresponding template images) outside the set regular acquisition time points, false alarms are avoided as far as possible.
In an embodiment of the present invention, the mold template image generation system further includes: the device comprises a reference time determining module and an acquisition time point determining module. The reference time determining module is used for determining reference time (namely an acquisition time point for acquiring the template image for the first time); the acquisition time point determining module is used for generating each acquisition time point by combining the acquisition interval data set by the time interval storage module on the basis of the reference time.
In an embodiment of the present invention, the template acquisition module is configured to snap-shoot a template image immediately after the reference time determination module generates the reference time; and then, capturing a template image at each acquisition time point determined by the acquisition time point determining module.
In an embodiment of the present invention, the mold template image generation system further includes: a time interval generation module and a clock trigger. The time interval generation module is used for setting acquisition interval data of each time period and storing the acquisition interval data in a time interval storage file; the clock trigger is used for reading the acquisition interval data corresponding to the current time from the time interval storage file.
In an embodiment of the invention, after each time interval tmInters[n] is generated, the clock trigger continuously accumulates the time interval values on top of the reference time; in this way accurate trigger times are continuously generated on the basis of the reference time. The clock trigger compares the trigger time with the current time and immediately sends a trigger signal to the template acquisition module to snapshot a new template as soon as the current time reaches or passes the trigger time. When the clock trigger fires, the current image is captured from the monitoring camera as the template image for that time point; the time interval value tmInters[n] is stored in the file Ft, and the current template image moldImage[n] is stored in the template video file Fm. The captured template images are stored into a file as video, in the order of their time points, to generate a corresponding dynamic template video in which each image corresponds to one time point. Thus, for each time interval tmInters[n] in the Ft file there is a moldImage[n] in the Fm file, and the data in the two files are in one-to-one correspondence.
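As a concrete illustration of the loop the clock trigger runs, the sketch below accumulates the intervals on top of the reference time and snapshots a template once the clock catches up; Frame, grabFrame, appendToFm and now are placeholders assumed for this sketch, not names used in the patent:

    #include <chrono>
    #include <thread>
    #include <vector>

    // Placeholder types and helpers; none of these names appear in the patent.
    struct Frame {};                       // one captured image
    Frame grabFrame() { return {}; }       // stands in for the monitoring camera
    void appendToFm(const Frame&) {}       // stands in for appending to the Fm video file

    double now() {                         // current wall-clock time in seconds
        using namespace std::chrono;
        return duration<double>(system_clock::now().time_since_epoch()).count();
    }

    // Accumulate the intervals read from Ft on top of the reference time and
    // snapshot a template whenever the current clock reaches or passes the
    // trigger time, so tmInters[n] in Ft and moldImage[n] in Fm stay in
    // one-to-one correspondence.
    void runClockTrigger(double referenceTime, const std::vector<double>& tmInters) {
        double triggerTime = referenceTime;
        for (double interval : tmInters) {
            triggerTime += interval;                     // next trigger time T
            while (now() < triggerTime)                  // wait until the clock reaches T
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            appendToFm(grabFrame());                     // moldImage[n] for tmInters[n]
        }
    }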
In an embodiment of the present invention, the mold monitoring system further comprises: the template acquisition module, the template comparison module and the template generation module.
The template acquisition module is used for acquiring two continuous template images from the template storage module; the template comparison module is used for comparing the two obtained continuous template images, and if the difference between the two continuous template images is greater than a set threshold value, a new template image needs to be generated between the two continuous template images; the template generation module is used for carrying out linear interpolation on two continuous template images to generate a new template image.
The template generation module generates the new template image by linear interpolation: the pixel value f[mn′](x, y) at position (x, y) in the new template image is calculated from the pixel values at the corresponding positions of the preceding and following template images:
f[mn′](x, y) = f[m](x, y) * e + f[n](x, y) * (1 - e), where
e = (T[n] - t[mn′]) / (T[n] - T[m]);
when t[mn′] = T[m], e = 1 and f[mn′](x, y) = f[m](x, y), i.e. moldImage[mn′] = moldImage[m]; conversely, when t[mn′] = T[n], e = 0 and f[mn′](x, y) = f[n](x, y), i.e. moldImage[mn′] = moldImage[n].
Here T[m] is the actual time of template m, f[m](x, y) is the pixel value at position (x, y) of the template image corresponding to time T[m], and t[mn′] is the time, lying between the two actual times, at which the new template image is needed.
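In code, the interpolation above is a per-pixel blend with weight e; a sketch under the assumption of flat grayscale buffers (the Image type and function name are conveniences of this sketch, not the patent's own API):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    using Image = std::vector<std::uint8_t>;  // flat grayscale buffer, assumed layout

    // Generate moldImage[mn'] between moldImage[m] (taken at time Tm) and
    // moldImage[n] (taken at time Tn) for an intermediate time t in [Tm, Tn]:
    //   e = (Tn - t) / (Tn - Tm)
    //   f[mn'](x, y) = f[m](x, y) * e + f[n](x, y) * (1 - e)
    Image interpolateTemplate(const Image& m, const Image& n,
                              double Tm, double Tn, double t) {
        double e = (Tn - t) / (Tn - Tm);      // e = 1 at t = Tm, e = 0 at t = Tn
        Image out(m.size());
        for (std::size_t i = 0; i < m.size(); ++i)
            out[i] = static_cast<std::uint8_t>(m[i] * e + n[i] * (1.0 - e) + 0.5);
        return out;
    }

At t = Tm the output equals template m and at t = Tn it equals template n, matching the two limiting cases stated above.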
The template generation module is connected with the motion estimation module, and the motion estimation module is used for carrying out motion estimation on a part with larger difference between the two templates, finding out a position with the minimum difference, and carrying out interpolation by using the position local image data to form local image data of the part with larger difference.
If the difference between two sequentially adjacent templates is larger than a set value, a new template image is generated between the two adjacent template images by linear interpolation; the template generation module performs linear interpolation on the two adjacent template images to generate the new template image, the linear coefficient being calculated from the time difference between the time point corresponding to the new template image and those of the template images; the time points corresponding to the template images are obtained by accumulating the time intervals on the basis of the reference time, and the time of the new template image is the actual clock t. If an alarm is still raised after the new template is generated and the operator judges it to be a false alarm, the current image can be inserted into the template sequence as a new template, and t - T is written into the file Ft as a new time interval value.
In an embodiment of the invention, if the difference between two sequentially adjacent templates is greater than a set threshold value and the difference values are concentrated in only a partial region, a motion estimation module is started to improve the precision of generating a new template; the motion estimation module is used for carrying out motion estimation on the part with larger difference between the two templates, finding out the position with the minimum difference, and using the position local image as a local interpolation reference image of the part with larger difference.
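The patent leaves the motion-estimation search itself unspecified; the sketch below assumes an exhaustive block-matching search that minimizes the sum of absolute differences (SAD) inside a small window, which is one common way to realise the step described above:

    #include <cstdint>
    #include <cstdlib>
    #include <limits>
    #include <utility>
    #include <vector>

    struct Gray {                         // simple grayscale image, assumed layout
        int w, h;
        std::vector<std::uint8_t> px;
        int at(int x, int y) const { return px[y * w + x]; }
    };

    // Sum of absolute differences between a block of image a at (ax, ay) and
    // a block of image b at (bx, by), both of size blk x blk.
    long long sad(const Gray& a, int ax, int ay,
                  const Gray& b, int bx, int by, int blk) {
        long long s = 0;
        for (int dy = 0; dy < blk; ++dy)
            for (int dx = 0; dx < blk; ++dx)
                s += std::abs(a.at(ax + dx, ay + dy) - b.at(bx + dx, by + dy));
        return s;
    }

    // For a block of template m whose difference against template n is large,
    // search a +/-range window in n for the position of minimum difference;
    // the patch found there then serves as the local interpolation reference.
    std::pair<int, int> bestMatch(const Gray& m, const Gray& n,
                                  int bx, int by, int blk, int range) {
        long long best = std::numeric_limits<long long>::max();
        std::pair<int, int> pos{bx, by};
        for (int oy = -range; oy <= range; ++oy)
            for (int ox = -range; ox <= range; ++ox) {
                int cx = bx + ox, cy = by + oy;
                if (cx < 0 || cy < 0 || cx + blk > n.w || cy + blk > n.h) continue;
                long long s = sad(m, bx, by, n, cx, cy, blk);
                if (s < best) { best = s; pos = {cx, cy}; }
            }
        return pos;
    }

Faster search patterns (three-step, diamond) would fit the same interface; the choice of search strategy is not constrained by the patent.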
The invention discloses a method for generating a template image of a mold, which comprises the following steps:
a time interval storage step of storing template image acquisition interval data of each time period;
a template acquisition step, namely acquiring image information in real time or at set time intervals, and taking an image acquired at a corresponding acquisition time point as a template image of the corresponding time point;
and a template storage step, in which template images acquired by the template acquisition module at various time points are stored, or/and dynamic template video files formed by the template images at various time points are stored.
In an embodiment of the present invention, the method for generating an image of a mold template further includes: and a template learning step, namely comparing the current image with the previously acquired template image between two adjacent acquisition reference points, if the difference is greater than a set threshold value, adding the current time point outside the set regular acquisition reference point as an acquisition time point, and storing the current image as a template image.
In an embodiment of the present invention, the mold template image generation method further includes a template generation step: linear interpolation is carried out on two continuous template images acquired from the template storage module to generate a new template image.
The new template image is generated by linear interpolation: the pixel value f[mn′](x, y) at position (x, y) in the new template image is calculated from the pixel values at the corresponding positions of the preceding and following template images:
f[mn′](x, y) = f[m](x, y) * e + f[n](x, y) * (1 - e), where
e = (T[n] - t[mn′]) / (T[n] - T[m]);
when t[mn′] = T[m], e = 1 and f[mn′](x, y) = f[m](x, y), i.e. moldImage[mn′] = moldImage[m]; conversely, when t[mn′] = T[n], e = 0 and f[mn′](x, y) = f[n](x, y), i.e. moldImage[mn′] = moldImage[n].
Before generating a new template image, the difference between moldImage[m] and moldImage[n] is compared, and motion estimation is used to reduce errors when the local difference is too large.
The template generation step comprises a motion estimation step, in which motion estimation is carried out on the part where the two templates differ greatly, the position of minimum difference is found, and the local image data at that position is used for interpolation to form the local image data of the differing part.
If the difference between two sequentially adjacent templates is larger than a set value, a new template image is generated between the two adjacent template images by linear interpolation; the template generation module performs linear interpolation on the two adjacent template images to generate the new template image, the linear coefficient being calculated from the time difference between the time point corresponding to the new template image and those of the template images; the time points corresponding to the template images are obtained by accumulating the time intervals on the basis of the reference time, and the time of the new template image is the actual clock t; if an alarm is still raised after the new template is generated and the operator judges it to be a false alarm, the current image can be inserted into the template sequence as a new template, and t - T is written into the file Ft as a new time interval value.
If the difference between two sequentially adjacent templates is larger than a set threshold value and the difference is concentrated in only a partial area, the motion estimation module is started to improve the precision of the generated template; the motion estimation module performs motion estimation on the part where the two templates differ greatly, finds the position of minimum difference, and uses the local image at that position as the local interpolation reference image for the differing part.
FIG. 10 is a schematic diagram of a timeline multi-template detection system according to an embodiment of the present invention; referring to fig. 10, in an embodiment of the present invention, the mold monitoring system includes a time interval storage module 1, a template storage module 2, a template acquisition module 3, a template reading module 4 and a template difference comparison module 5.
The time interval storage module 1 is used for storing the acquisition interval data of each time period. The template storage module 2 is used for storing template images at various time points or/and storing dynamic template video files formed by the template images at various time points. The template acquisition module 3 is used for acquiring template images according to the acquisition interval data stored by the time interval storage module. The template reading module 4 is used for reading out corresponding template images from the template storage module, wherein each template image corresponds to a time point. The template difference comparison module 5 is used for comparing the template image obtained in real time with the template image at the corresponding time point in the template storage module to obtain a comparison result.
In an embodiment of the present invention, the template difference comparing module is configured to compare a difference between a template image obtained in real time and a template image at a corresponding time point in the template storage module, and if the difference is greater than a set threshold, it is determined that there is an abnormality.
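One way to pick the template that matches the current clock time during detection is to accumulate the intervals from Ft and take the latest time point not after the current time. A sketch, assuming moldImage[0] is the snapshot taken at the reference time itself and moldImage[i+1] is the one triggered after interval tmInters[i] (an indexing convention assumed here, not stated explicitly in the patent):

    #include <cstddef>
    #include <vector>

    // Given the reference time and the intervals read from Ft, return the index
    // of the template in Fm whose time point is the latest one not after the
    // current clock t; that moldImage is then compared with the live image.
    int templateIndexFor(double referenceTime,
                         const std::vector<double>& tmInters, double t) {
        double T = referenceTime;
        int index = 0;
        for (std::size_t i = 0; i < tmInters.size(); ++i) {
            if (T + tmInters[i] > t) break;   // next scheduled point is still in the future
            T += tmInters[i];
            index = static_cast<int>(i) + 1;  // template captured at this accumulated time
        }
        return index;
    }

The comparison itself can then reuse any of the difference measures sketched earlier; an abnormality is reported when the difference exceeds the set threshold.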
In an embodiment of the present invention, the mold monitoring system further includes a template generation module for linearly interpolating two consecutive template images to generate a new template image.
FIG. 11 is a diagram illustrating a method for generating a new template image according to an embodiment of the present invention; referring to fig. 11, in an embodiment of the present invention, the template generation module generates the new template image by linear interpolation: the pixel value f[mn′](x, y) at position (x, y) in the new template image is calculated from the pixel values at the corresponding positions of the preceding and following template images:
f[mn′](x, y) = f[m](x, y) * e + f[n](x, y) * (1 - e), where
e = (T[n] - t[mn′]) / (T[n] - T[m]);
when t[mn′] = T[m], e = 1 and f[mn′](x, y) = f[m](x, y), i.e. moldImage[mn′] = moldImage[m]; conversely, when t[mn′] = T[n], e = 0 and f[mn′](x, y) = f[n](x, y), i.e. moldImage[mn′] = moldImage[n].
T[m] is the actual time of template m, f[m](x, y) is the pixel value at position (x, y) of the template image corresponding to time T[m], and t[mn′] is the time, lying between the two actual times, at which the new template image is needed.
Before generating a new template image, the difference between moldImage[m] and moldImage[n] is compared, and motion estimation is used to reduce errors when the local difference is too large.
In an embodiment of the present invention, the template generating module is connected to the motion estimating module, and the motion estimating module is configured to perform motion estimation on a portion with a larger difference between two templates, find a minimum difference position, and interpolate local image data of the position to form local image data of the portion with the larger difference.
In an embodiment of the present invention, if the difference between two sequentially adjacent templates is greater than a set value, a new template image is generated between the two adjacent template images by linear interpolation; the template generation module performs linear interpolation on the two adjacent template images to generate the new template image, the linear coefficient being calculated from the time difference between the time point corresponding to the new template image and those of the template images; the time points corresponding to the template images are obtained by accumulating the time intervals on the basis of the reference time, and the time of the new template image is the actual clock t. If an alarm is still raised after the new template is generated and the operator judges it to be a false alarm, the current image can be inserted into the template sequence as a new template, and t - T is written into the file Ft as a new time interval value.
If the difference between two sequentially adjacent templates is larger than a set threshold value and the difference value is concentrated in only a partial area, starting a motion estimation module to improve the precision of generating a new template; the motion estimation module is used for carrying out motion estimation on the part with larger difference between the two templates, finding out the position with the minimum difference, and using the position local image as a local interpolation reference image of the part with larger difference.
In an embodiment of the present invention, the template reading module reads out at least two template images in sequence each time, the template difference comparing module is configured to compare differences between two template images read out in sequence by the template reading module, and if the differences are lower than a set threshold, the two template images read out are used for detection; and if the difference is larger than the set threshold value, activating the template generation module.
In an embodiment of the present invention, the template reading module may also read out only one template image at a time. Reading out two template images guards against errors in the learning process. If the learning process and the monitoring/detection process are independent, i.e. no detection is performed during learning and it is not known whether the learned template images meet the requirements, then two template images need to be read out during monitoring, and a new template is generated automatically when adjacent templates are found to differ too much, which reduces false alarms. If detection is performed during learning and a template image is added through manual intervention whenever the light change is large enough to raise an alarm, the changes between template images will not be too large, and the monitoring process does not need to read two template images.
In an embodiment of the present invention, the mold monitoring system further comprises: a time interval generation module and a clock trigger. The time interval generation module is used for setting acquisition interval data of each time period and storing the acquisition interval data in a time interval storage file. The clock trigger is used for reading the acquisition interval data corresponding to the current time from the time interval storage file.
FIG. 2 is a schematic diagram of the components of a timeline multi-template detection system in an embodiment of the present invention; referring to fig. 2, in an embodiment of the invention, a timeline multi-template detection system includes: the device comprises a time interval generation module S100, a clock trigger S200, a template acquisition module S300, a template sequence storage module S400, a template reading module S500, a template difference comparison module S600, a template generation module S700 and a motion estimation module S800.
The time interval generation module S100 automatically generates the acquisition intervals tmInters[n] according to the current time and the set adjustment parameters. The time interval can be generated in several ways: increased or decreased according to the current time, kept at a fixed interval, varied according to a linear rule, or varied according to a nonlinear function. The generated time intervals tmInters[n] (n = 0, 1, 2, …) are queued in time order and stored in the dedicated file Ft, whose format is: the header records the base time, followed by the time intervals in sequence. Referring to fig. 9, fig. 9 is a schematic diagram illustrating a storage policy of an Ft data file according to an embodiment of the present invention. All the time intervals are accumulated on top of the base time, and each time interval corresponds to a specific time point, which is the specific time at which the template is acquired. Referring to fig. 8, fig. 8 is a schematic diagram illustrating a template capture interval scheme according to an embodiment of the present invention.
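A minimal sketch of that Ft layout, assuming a plain-text encoding (the patent fixes only the ordering, base time first and then the intervals, not the on-disk format):

    #include <fstream>
    #include <string>
    #include <vector>

    // Write the Ft file: header = reference (base) time, then the queued
    // intervals tmInters[n] in time order. Plain-text encoding is an
    // assumption of this sketch; the patent only specifies the ordering.
    void writeFt(const std::string& path, double baseTime,
                 const std::vector<double>& tmInters) {
        std::ofstream ft(path);
        ft << baseTime << '\n';
        for (double interval : tmInters) ft << interval << '\n';
    }

    // Read it back: base time first, then the intervals one by one.
    bool readFt(const std::string& path, double& baseTime,
                std::vector<double>& tmInters) {
        std::ifstream ft(path);
        if (!(ft >> baseTime)) return false;
        for (double v; ft >> v; ) tmInters.push_back(v);
        return true;
    }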
The clock trigger S200 reads out the interval data tmInters[n] relevant to the current time from the interval storage file. The time intervals are continuously accumulated on the basis of the reference time to generate the corresponding time T, which is compared with the current actual clock t; if t is greater than or equal to T, i.e. the current actual time has reached or passed the generated time, the template acquisition time has arrived and a trigger signal is sent to the template acquisition module immediately. When the clock trigger fires, the template acquisition module S300 extracts the current image as the template image.
The template sequence storage module S400 is used for storing the template images; the template images are stored in a file as video. Before storage, the current template image is compared with the previous one. When a large change is found between the current template image and the previous one, the time interval cannot be adjusted and the image cannot be grabbed again, because the time interval has already been set; therefore, an image with large variation is stored as a forced key frame to preserve its definition, as shown in fig. 5.
The template reading module S500 is used to read templates out of the dynamic template video file one by one in sequence, reading at least two template images at a time. The template difference comparison module S600 is used to compare the difference between two adjacent template images; if the difference is small, they are used directly for detection.
The template generation module S700 is used to generate a new template. If the difference between two adjacent templates is too large, a new template image is generated between them by linear interpolation when necessary. Linear interpolation of the preceding and following template images produces the new template; the linear coefficient can be calculated from the difference between the accumulated time T and the actual clock t, and the time point corresponding to the new template is the actual time t. If an alarm is still raised after the new template is generated and the operator judges it to be a false alarm, the current image can be inserted into the template sequence as a new template, and t - T is written into the file Ft as a new time interval value.
If the difference between two sequentially adjacent templates is too large and the difference is concentrated in only a partial region, the motion estimation module S800 is started to improve the precision of the generated template. Motion estimation is performed on the part where the two templates differ greatly, the position of minimum difference is found, and the local image at that position is used as the local interpolation reference map for the differing part, as shown in fig. 6. FIG. 7 is a diagram illustrating the interpolated image effect of motion estimation on shadow changes according to an embodiment of the present invention.
The invention discloses a mold monitoring and protecting method, which comprises the following steps:
the time interval storage module stores the acquisition interval data of each time period; the template storage module stores template images of all time points or/and stores dynamic template video files formed by the template images of all time points;
the template acquisition module acquires template images according to the acquisition interval data stored by the time interval storage module;
the template reading module reads out corresponding template images from the template storage module, and each template image corresponds to a time point;
and the template difference comparison module compares the template image acquired in real time with the template image at the corresponding time point in the template storage module to obtain a comparison result.
As an embodiment of the present invention, the mold monitoring and protecting method includes: monitoring the detection process;
in the actual monitoring and detection phase, the clock trigger needs to read the time intervals out of Ft one by one, while the template images are read from Fm.
In the detection process, firstly reading reference time from a pre-stored file Ft, and then reading the next time interval tmInters [ n ] one by one; and calculating a time point T corresponding to each time interval on the basis of the reference time.
And the template reading module reads the current template image moldImage[n] from the dynamic template video file according to the corresponding time point.
Subsequently, the time trigger starts to operate.
The time trigger reads the reference time from the memory, then reads the time interval in sequence, and continuously accumulates on the basis of the reference time to generate accurate trigger time.
And simultaneously reading out the corresponding template image moldImage [ n ] from the template video file Fm.
At each actual time, the tmInters[n] and moldImage[n] of the next time point are read; each template image corresponds to exactly one time point. If the current time falls between two time points, and the difference between the two adjacent template images is judged to be too large or a false alarm occurs so that a new template image needs to be generated, the new template image is generated by interpolating between the preceding and following templates. Only the two images before and after the current actual time are read, and a new template can be generated from them as required.
Thus, the time trigger requires, for the first time, the reading of the next time interval and template image in addition to the reading of the reference time and corresponding template image.
And reading out one template from the video file of the dynamic template in sequence before generating each new template, and simultaneously reading out the template corresponding to the next time point in advance.
After reading two or more templates, the difference between the two template images before and after the actual time is compared.
A new template is generated using a plurality of template images as necessary. And when the difference value of the adjacent templates is larger, activating the template generation module, and generating a more accurate new template by using the front template and the rear template.
Sometimes, because the time interval between the templates is overlarge, a new template can be generated at any time for improving the precision.
When two difference template images only have large difference locally, a motion estimation module needs to be activated to find the optimal similar position of the local image.
The motion estimation module can improve the precision of generating the new template. And performing motion estimation on the part with larger difference between the two templates to find out the position with the minimum difference, and using the local image at the position as a local interpolation reference map of the part with larger difference.
As an embodiment of the present invention, the mold monitoring and protecting method includes: a template learning process;
in the template learning phase, a timeline sequence is first generated.
After the template learning starts, a reference time tmInters [0] is determined, and a time point T can be calculated by the time line sequence in an accumulation mode on the basis of the reference time.
The calculation of the timeline requires the time intervals tmInters[n] (n = 1, 2, 3, …). The time interval generation module can generate different time interval values according to different generation rules.
Referring to fig. 3, based on the reference time, the current time interval is generated according to the requirement of the current time point and the definition of the rule, and is stored in the file Ft after being arranged in a queue in sequence.
And after the reference time is generated, a template image is immediately captured, and then a template image is captured at the time point corresponding to each time interval.
After each time interval tmInters[n] is generated, the clock trigger keeps accumulating the interval values onto the reference time, thereby continually generating accurate trigger times based on that reference time.
The clock trigger compares the trigger time with the current time, and as soon as the current time reaches or exceeds the trigger time it immediately sends a trigger signal to the template acquisition module to snapshot a new template.
When the clock trigger fires a template acquisition, the current image is captured from the monitoring camera as the template image for that time point. The time interval tmInters[n] value is stored in the file Ft, and the current template image moldImage[n] is stored in the template video file Fm.
The captured template images are stored into the file as a video, in the order of their corresponding time points, generating a dynamic template video in which each image corresponds to one time point;
thus, each time interval tmInters[n] in the Ft file corresponds to one moldImage[n] in the Fm file, and the data in the two files form a one-to-one correspondence, as shown in FIG. 4.
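The bookkeeping described above, a reference time, trigger times accumulated from the intervals, the interval file Ft and the template video Fm, can be sketched as follows; the capture and file-writing callbacks are placeholders supplied by the caller, and only the triggering logic follows the text:

import time

def learn_templates(tm_inters, capture_frame, append_to_ft, append_to_fm):
    """tm_inters: the per-snapshot interval sequence in seconds; the
    reference time is taken when learning starts."""
    reference = time.time()            # reference time
    append_to_ft(0)                    # record the reference point (assumed convention)
    append_to_fm(capture_frame())      # first template, captured immediately
    trigger = reference
    for interval in tm_inters:
        trigger += interval            # accumulate the interval onto the reference time
        while time.time() < trigger:   # fire once the current time reaches the trigger time
            time.sleep(0.01)
        append_to_ft(interval)         # interval value -> Ft
        append_to_fm(capture_frame())  # template image -> Fm (dynamic template video)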
Detection can also be performed during the template learning stage. The first template is the snapshot taken at the reference time, and thereafter the template of each time point serves as the standard image; before the next snapshot, the current image is compared with the previous template image. When an alarm occurs and the operator judges it to be a false alarm, a new time interval tmInters[n] is generated immediately; that is, the current time point is added, outside the set rule, as an extra acquisition time point, and the current image is stored as a template image, see FIG. 5.
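The false-alarm branch during learning can be sketched in the same spirit; the operator confirmation and the file callbacks are placeholders, and the mean-absolute-difference test is only an assumed comparison:

import numpy as np

def handle_alarm(current_img, prev_template, prev_time, now, diff_threshold,
                 operator_confirms_false_alarm, append_to_ft, append_to_fm):
    """If the current image deviates from the previous template and the operator
    marks the alarm as false, add the current time as an extra acquisition point."""
    diff = np.abs(current_img.astype(np.int32) - prev_template.astype(np.int32)).mean()
    if diff > diff_threshold and operator_confirms_false_alarm():
        append_to_ft(now - prev_time)  # extra interval, outside the set rule
        append_to_fm(current_img)      # the current image becomes a new template
        return True
    return False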
In one embodiment of the invention, an injection molding machine in a certain factory stands by a south-facing window of the workshop; it is strongly affected by sunlight from 9 to 11 am and from 2 to 4 pm every day, with the sun shining directly on the machine during part of that time. The following dynamic template trigger interval scheme was determined from the field conditions (a data sketch of this schedule is given after the list below):
1) 6-8 am, night turning to day, with the light gradually strengthening: trigger every 120 seconds, i.e. tmInters[0] = 120;
2) 8-9 am, transition stage: trigger every 60 seconds, i.e. tmInters[1] = 60;
3) 9-11 am, direct sunlight stage; to keep eliminating the moving shadows, trigger every 5 seconds, i.e. tmInters[3] = 5;
4) 11-14, strong daytime light with little change: trigger every 300 seconds, i.e. tmInters[4] = 300;
5) 14-16 pm, afternoon direct sunlight stage; to keep up with the changing shadows and eliminate them, trigger every 5 seconds, i.e. tmInters[5] = 5;
6) 16-18, sunlight gradually weakening: trigger every 60 seconds, i.e. tmInters[6] = 60;
7) 18-20, day turning to night: trigger every 120 seconds, i.e. tmInters[7] = 120;
8) from 20 pm to 6 am the next day it is night; under artificial lighting nothing changes, a single template suffices and the trigger interval can be set very long, i.e. tmInters[8] = 1200.
The phase timing diagram of the above triggering scheme is shown in fig. 8.
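Written as data, the eight stages of this embodiment might look like the following sketch; the hour boundaries and interval values come from the list above, while the representation and the lookup helper are only illustrative:

# (start_hour, end_hour, interval_seconds) for each stage of the day
TRIGGER_SCHEDULE = [
    (6, 8, 120),    # dawn, light strengthening slowly
    (8, 9, 60),     # transition
    (9, 11, 5),     # direct morning sunlight, shadows move quickly
    (11, 14, 300),  # strong but stable daylight
    (14, 16, 5),    # direct afternoon sunlight
    (16, 18, 60),   # sunlight weakening
    (18, 20, 120),  # dusk
    (20, 30, 1200), # night under artificial lighting (20:00 to 06:00)
]

def interval_for_hour(hour):
    """Return the trigger interval (seconds) for an hour of the day (0-23)."""
    h = hour if hour >= 6 else hour + 24   # fold 0-5 am into the night stage
    for start, end, interval in TRIGGER_SCHEDULE:
        if start <= h < end:
            return interval
    return 1200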
Optimization strategy 1)
In this scheme tmInters[n] takes only 8 distinct values, so the Ft file does not need to record a tmInters[n] value for every captured template image; a data structure of the following form can be used instead:
StuInter[n] {
    int icount;
    int tmInter;
}
Note that the index of this data structure is the tmInters index, and each tmInters value in the structure has an associated use count. For example, for 6-8 am tmInters[0] = 120, i.e. a template snapshot is triggered every 120 seconds, giving 60 triggers in 2 hours, so the icount in StuInter[0] is 60, i.e.:
StuInter[0].icount = 60;
StuInter[0].tmInter = 120;
Similarly, for 9-11 am, tmInters[3] = 5 and 1440 triggers occur, so:
StuInter[3].icount = 1440;
StuInter[3].tmInter = 5;
In this way the amount of data recorded in the Ft file is greatly reduced; however, in both the template learning and the real-time detection stages every template still corresponds to a specific trigger time, so the actual number of time intervals and the number of templates are not reduced. This data storage method is applicable to all schemes of the invention.
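Optimization strategy 1 amounts to run-length encoding of the interval sequence: each distinct interval is stored once together with its use count, and the full per-snapshot sequence can be recovered on demand. A Python rendering of the C-like structure above (the counts mirror the first embodiment; the names are illustrative):

from dataclasses import dataclass

@dataclass
class StuInter:
    icount: int    # how many consecutive snapshots use this interval
    tm_inter: int  # the interval value in seconds

ft_records = [
    StuInter(icount=60,   tm_inter=120),  # 6-8 am: 2 h / 120 s = 60 triggers
    StuInter(icount=60,   tm_inter=60),   # 8-9 am
    StuInter(icount=1440, tm_inter=5),    # 9-11 am: 2 h / 5 s = 1440 triggers
    StuInter(icount=36,   tm_inter=300),  # 11-14
    StuInter(icount=1440, tm_inter=5),    # 14-16 pm
    StuInter(icount=120,  tm_inter=60),   # 16-18
    StuInter(icount=60,   tm_inter=120),  # 18-20
    StuInter(icount=30,   tm_inter=1200), # 20 pm - 6 am
]

def expand(records):
    """Recover the full per-snapshot interval sequence from the run-length records."""
    return [r.tm_inter for r in records for _ in range(r.icount)]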
Optimization strategy 2)
In this scheme the trigger interval changes in steps between stages. To link the data of adjacent stages, a gradual transition can be used. For example, tmInters[0] = 120 for 6-8 am and tmInters[1] = 60 for 8-9 am, and the following transition strategy can be adopted: between 7:40 and 8:00, over 20 minutes, tmInters is ramped down from 120 seconds to 60 seconds; the number of triggers in that window is roughly 1200/((120+60)/2), taken here as 15, so each trigger reduces the interval by (120-60)/15 = 4 seconds. Starting from 7:40, the interval at 7:42 is 116 seconds, the next trigger therefore falls at 7:43:56 with an interval of 112 seconds, and by 8:00 tmInters has reached 60 seconds. The other stage boundaries can be handled in the same way, giving a slow transition and a more natural change of the trigger interval. This transition method for the trigger interval is applicable to all schemes of the invention.
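The ramp described in this strategy is itself just linear interpolation of the interval value; a small sketch follows (the 15 steps and 4-second decrement come from the worked example above, the helper name is illustrative):

def transition_intervals(start_interval, end_interval, steps):
    """Return trigger intervals that ramp linearly from start_interval
    down to end_interval over `steps` triggers."""
    step = (start_interval - end_interval) / float(steps)
    return [round(start_interval - step * (i + 1)) for i in range(steps)]

# 7:40-8:00 example from the text: 120 s down to 60 s in 15 steps of 4 s each
ramp = transition_intervals(120, 60, 15)   # [116, 112, 108, ..., 64, 60]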
Optimization strategy 3)
In total the scheme above captures 60+60+1440+36+1440+120+60+30 = 3246 template images. Played back at 25 fps this is 129.84 seconds, i.e. a video of a little over two minutes is stored in all. With high-compression-ratio H.264 encoding the required disk space is small, under 500 MB; when higher precision is required, MJPEG compression takes a little more than 1 GB. This also shows that the multi-template solution is feasible in terms of storage space. This multi-template image storage method is applicable to all schemes of the invention.
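The storage estimate is plain frame-count arithmetic; in the sketch below the per-frame compressed sizes are rough assumptions chosen only to be consistent with the under-500 MB H.264 and roughly 1 GB MJPEG figures quoted above:

stage_counts = [60, 60, 1440, 36, 1440, 120, 60, 30]
total_frames = sum(stage_counts)           # 3246 template images
playback_seconds = total_frames / 25.0     # about 129.8 s at 25 fps

# Assumed average sizes per compressed frame, not measured values:
h264_mb = total_frames * 0.15              # about 487 MB with H.264
mjpeg_mb = total_frames * 0.33             # about 1071 MB with MJPEG
print(total_frames, round(playback_seconds, 2), round(h264_mb), round(mjpeg_mb))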
In another embodiment of the invention, a factory workshop has a large west-facing gate, and an injection molding machine stands close to it. After 2 pm every day sunlight is reflected from the ground into the workshop, and because of frequent movement through the gate and in the outdoor yard, the indoor light changes often and the influence is large. In addition, loaders in the yard outside the gate work until 10 pm, and neon lighting is switched on after 6 pm; this strong light source has a certain influence on the mold monitoring system, especially after dark. According to the field conditions, the following dynamic template trigger interval scheme was determined:
1) 6-8 am, night turning to day, with the light gradually strengthening: trigger every 120 seconds, i.e. tmInters[0] = 120;
2) 8-9 am, transition stage: trigger every 60 seconds, i.e. tmInters[1] = 60;
3) 9-13, because the workshop gate faces west the sun has no obvious influence on the workshop: trigger every 300 seconds, i.e. tmInters[2] = 300;
4) 13-14, the influence of the sun on the workshop gradually becomes obvious: trigger every 30 seconds, i.e. tmInters[3] = 30;
5) 14-17 pm, afternoon direct sunlight stage; to keep up with the changing shadows and eliminate them, trigger every 5 seconds, i.e. tmInters[4] = 5;
6) 17-18, the sunlight gradually weakens but the low western sun still shines in: trigger every 20 seconds, i.e. tmInters[5] = 20;
7) 18-22, day turning to night: trigger every 60 seconds, i.e. tmInters[6] = 60;
8) from 22 pm to 6 am the next day it is night; under artificial lighting nothing changes, a single template suffices and the trigger interval can be set long, i.e. tmInters[7] = 1200.
In summary, the mold template image generation system and method provided by the invention generate mold template images automatically, improving working efficiency; using the generated template images in a mold monitoring system improves detection accuracy and reduces false alarms without affecting detection efficiency.
The invention provides a multi-template solution, yet each detection is still compared against only one template, so working efficiency is unaffected. At the same time, the template corresponding to the current time of day is called from the stored multi-template file; since this template is the most accurate reference image for the current working conditions and the image most similar to the actual scene, no false alarm is generated. The invention thus effectively solves the two problems described above.
The invention also detects image blur: it can effectively determine whether blurring has occurred regardless of its cause, for example blur due to lens defocus, dust on the lens, or dust and fog in the scene, and it is not very sensitive to noise.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The description and applications of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and alternative and equivalent various components of the embodiments will be apparent to those skilled in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other components, materials, and parts, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.

Claims (8)

1. A mold template image generation system, the template image generation system comprising: a time interval storage module, a template acquisition module, a template storage module, a template reading module, and a template learning module;
the time interval storage module is used for storing template image acquisition interval data of each time period;
the template acquisition module is used for acquiring image information in real time or at set time intervals, and taking the image acquired at the corresponding acquisition time point as a template image of the corresponding time point;
the template storage module is used for storing template images acquired by the template acquisition module at various time points or/and storing dynamic template video files formed by the template images at various time points;
the template learning module is used for comparing a current image with a previously acquired template image between two adjacent acquisition time points, if the difference is greater than a set threshold value, adding the current time point outside a set regular acquisition time point as an acquisition time point, and storing the current image as a template image;
the mold monitoring system further comprises:
the template acquisition module is used for acquiring two continuous template images from the template storage module;
the template comparison module is used for comparing the two obtained continuous template images, and if the difference between the two continuous template images is greater than a set threshold value, a new template image is considered to be required to be generated between the two continuous template images;
the template generation module is used for carrying out linear interpolation on two continuous template images to generate a new template image;
the template generation module generates the new template image by linear interpolation: the pixel value f[mn'](x,y) at position (x,y) in the new template image is calculated from the pixel values at the corresponding positions of the front and rear template images:
f[mn'](x,y) = f[m](x,y) * e + f[n](x,y) * (1-e), wherein:
e = (T[n] - t[mn']) / (T[n] - T[m]);
when t[mn'] = T[m], e = 1 and f[mn'](x,y) = f[m](x,y), i.e. moldImage[mn'] = moldImage[m]; conversely, when t[mn'] = T[n], e = 0 and f[mn'](x,y) = f[n](x,y), i.e. moldImage[mn'] = moldImage[n];
wherein T[m] and T[n] are the actual acquisition times, f[m](x,y) is the pixel value at position (x,y) of the template image corresponding to time T[m], t[mn'] is the time, between the two actual times, at which the new template image is needed, and moldImage denotes a template image;
the template generation module is connected with a motion estimation module, and the motion estimation module is used for performing motion estimation on the region where the two templates differ most, finding the position with the smallest difference, and interpolating with the local image data at that position to form the local image data for the differing region.
2. The mold template image generation system of claim 1, wherein:
the mold template image generation system further comprises:
the reference time determining module is used for determining reference time;
and the acquisition time point determining module is used for generating each acquisition time point by combining the acquisition interval data set by the time interval storage module on the basis of the reference time.
3. The mold template image generation system of claim 2, wherein:
the template acquisition module is used for immediately capturing a template image after the reference time is generated by the reference time determination module; and then, capturing a template image at each acquisition time point determined by the acquisition time point determining module.
4. The mold template image generation system of claim 1, wherein:
the mold template image generation system further comprises:
the time interval generating module is used for setting acquisition interval data of each time period and storing the acquisition interval data in a time interval storage file;
and the clock trigger is used for reading the acquisition interval data corresponding to the current time from the time interval storage file.
5. The mold template image generation system of claim 4, wherein:
after each time interval tmInters[n] is generated, the clock trigger continuously accumulates the interval values onto the reference time, thereby continuously generating accurate trigger times based on the reference time;
the clock trigger compares the trigger time with the current time, and as soon as the current time reaches or exceeds the trigger time it immediately sends a trigger signal to the template acquisition module to snapshot a new template;
when the clock trigger triggers a template acquisition, the current image is captured from the monitoring camera as the template image corresponding to that time point; the time interval tmInters[n] value is stored in the file Ft, and the current template image moldImage[n] is stored in the template video file Fm;
the captured template images are stored into the file as a video, in the order of their corresponding time points, generating a dynamic template video in which each image corresponds to one time point;
thus each time interval tmInters[n] in the Ft file corresponds to one moldImage[n] in the Fm file, and the data in the two files form a one-to-one correspondence.
6. A method for generating a template image of a mold, the method comprising the steps of:
a time interval storage step of storing template image acquisition interval data of each time period;
a template acquisition step, namely acquiring image information in real time or at set time intervals, and taking an image acquired at a corresponding acquisition time point as a template image of the corresponding time point;
a template storage step, in which template images collected at each time point in the template collection step are stored, or/and dynamic template video files formed by the template images at each time point are stored;
a template learning step, comparing the current image with the previously acquired template image between two adjacent acquisition time points, if the difference is greater than a set threshold value, adding the current time point outside the set regular acquisition time point as an acquisition time point, and storing the current image as a template image;
the mold template image generation method further comprises a template generation step: linear interpolation is performed on two consecutive template images obtained from the template storage step to generate a new template image;
the new template image is generated by linear interpolation: the pixel value f[mn'](x,y) at position (x,y) in the new template image is calculated from the pixel values at the corresponding positions of the front and rear template images:
f[mn'](x,y) = f[m](x,y) * e + f[n](x,y) * (1-e), wherein:
e = (T[n] - t[mn']) / (T[n] - T[m]);
when t[mn'] = T[m], e = 1 and f[mn'](x,y) = f[m](x,y), i.e. moldImage[mn'] = moldImage[m]; conversely, when t[mn'] = T[n], e = 0 and f[mn'](x,y) = f[n](x,y), i.e. moldImage[mn'] = moldImage[n];
wherein T[m] and T[n] are the actual acquisition times, f[m](x,y) is the pixel value at position (x,y) of the template image corresponding to time T[m], t[mn'] is the time, between the two actual times, at which the new template image is needed, and moldImage denotes a template image;
before generating a new template image, the difference between moldImage[m] and moldImage[n] is compared, and motion estimation is used to reduce errors when the local difference is too large;
the template generation step comprises a motion estimation step, which performs motion estimation on the region where the two templates differ most, finds the position with the smallest difference, and interpolates with the local image data at that position to form the local image data for the differing region;
if the difference between two sequentially adjacent templates is larger than a set value, a new template image is generated between the two adjacent template images by linear interpolation; in the template generation step, linear interpolation is performed on the two adjacent template images to generate the new template image, the linear coefficient being calculated from the time difference between the time point of the new template image and the time points of the two template images; the time points of the template images are obtained by accumulating the time intervals onto the reference time, and the time of the new template image is the actual clock t; if an alarm is still raised after the new template is generated and the operator judges it to be a false alarm, the current image can be inserted into the template sequence as a new template, and t - T, the difference between the actual clock t and the preceding template time point T, is updated into the file Ft as a new time interval value;
if the difference between two sequentially adjacent templates is larger than a set threshold and the difference is concentrated in a local area, the motion estimation step is started to improve the precision of the new template; the motion estimation step performs motion estimation on the region where the two templates differ most, finds the position with the smallest difference, and uses the local image at that position as the local interpolation reference image for the differing region.
7. A mold template image generation system, the template image generation system comprising:
the time interval storage module is used for storing template image acquisition interval data of each time period;
the template acquisition module is used for acquiring image information in real time or at set time intervals, and taking the image acquired at the corresponding acquisition time point as a template image of the corresponding time point;
the template storage module is used for storing the template images acquired by the template acquisition module at various time points or/and storing dynamic template video files formed by the template images at various time points;
the mold template image generation system further comprises:
a template reading module for acquiring two consecutive template images from the template storage module;
a template comparison module for comparing the two acquired consecutive template images, wherein if the difference between them is greater than a set threshold value, it is considered that a new template image needs to be generated between them;
a template generation module for performing linear interpolation on the two consecutive template images to generate a new template image;
the template generation module generates the new template image by linear interpolation: the pixel value f[mn'](x,y) at position (x,y) in the new template image is calculated from the pixel values at the corresponding positions of the front and rear template images:
f[mn'](x,y) = f[m](x,y) * e + f[n](x,y) * (1-e), wherein:
e = (T[n] - t[mn']) / (T[n] - T[m]);
when t[mn'] = T[m], e = 1 and f[mn'](x,y) = f[m](x,y), i.e. moldImage[mn'] = moldImage[m]; conversely, when t[mn'] = T[n], e = 0 and f[mn'](x,y) = f[n](x,y), i.e. moldImage[mn'] = moldImage[n];
wherein T[m] and T[n] are the actual acquisition times, f[m](x,y) is the pixel value at position (x,y) of the template image corresponding to time T[m], t[mn'] is the time, between the two actual times, at which the new template image is needed, and moldImage denotes a template image;
the template generation module is connected with a motion estimation module, and the motion estimation module is used for performing motion estimation on the region where the two templates differ most, finding the position with the smallest difference, and interpolating with the local image data at that position to form the local image data for the differing region.
8. A mold template image generation method, characterized by comprising:
a time interval storage step of storing template image acquisition interval data of each time period;
a template acquisition step, namely acquiring image information in real time or at set time intervals, and taking an image acquired at a corresponding acquisition time point as a template image of the corresponding time point;
the template storage step, storing the template images collected at each time point in the template collection step, or/and storing dynamic template video files formed by the template images at each time point;
the mold template image generation method further comprises a template generation step: linear interpolation is performed on two consecutive template images obtained from the template storage step to generate a new template image;
the new template image is generated by linear interpolation: the pixel value f[mn'](x,y) at position (x,y) in the new template image is calculated from the pixel values at the corresponding positions of the front and rear template images:
f[mn'](x,y) = f[m](x,y) * e + f[n](x,y) * (1-e), wherein:
e = (T[n] - t[mn']) / (T[n] - T[m]);
when t[mn'] = T[m], e = 1 and f[mn'](x,y) = f[m](x,y), i.e. moldImage[mn'] = moldImage[m]; conversely, when t[mn'] = T[n], e = 0 and f[mn'](x,y) = f[n](x,y), i.e. moldImage[mn'] = moldImage[n];
wherein T[m] and T[n] are the actual acquisition times, f[m](x,y) is the pixel value at position (x,y) of the template image corresponding to time T[m], t[mn'] is the time, between the two actual times, at which the new template image is needed, and moldImage denotes a template image;
before generating a new template image, the difference between moldImage[m] and moldImage[n] is compared, and motion estimation is used to reduce errors when the local difference is too large;
the template generation step comprises a motion estimation step, which performs motion estimation on the region where the two templates differ most, finds the position with the smallest difference, and interpolates with the local image data at that position to form the local image data for the differing region;
if the difference between two sequentially adjacent templates is larger than a set value, a new template image is generated between the two adjacent template images by linear interpolation; in the template generation step, linear interpolation is performed on the two adjacent template images to generate the new template image, the linear coefficient being calculated from the time difference between the time point of the new template image and the time points of the two template images; the time points of the template images are obtained by accumulating the time intervals onto the reference time, and the time of the new template image is the actual clock t; if an alarm is still raised after the new template is generated and the operator judges it to be a false alarm, the current image can be inserted into the template sequence as a new template, and t - T, the difference between the actual clock t and the preceding template time point T, is updated into the file Ft as a new time interval value;
if the difference between two sequentially adjacent templates is larger than a set threshold and the difference is concentrated in a local area, the motion estimation step is started to improve the precision of the new template; the motion estimation step performs motion estimation on the region where the two templates differ most, finds the position with the smallest difference, and uses the local image at that position as the local interpolation reference image for the differing region.
CN201910537845.7A 2019-06-20 2019-06-20 Mold template image generation system and method Active CN110233967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910537845.7A CN110233967B (en) 2019-06-20 2019-06-20 Mold template image generation system and method

Publications (2)

Publication Number Publication Date
CN110233967A CN110233967A (en) 2019-09-13
CN110233967B true CN110233967B (en) 2021-10-01


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114683504B (en) * 2022-03-07 2023-10-31 佳睦拉索(上海)有限公司 Injection molding product molding control method and control equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0793485A (en) * 1993-09-22 1995-04-07 Toshiba Corp Image filing device
CN105799133A (en) * 2014-09-24 2016-07-27 上海智觉光电科技有限公司 Mold on-line monitoring and protecting method based on shape outline matching
CN106383131A (en) * 2016-09-20 2017-02-08 珠海格力电器股份有限公司 Visual detection method, device and system for printed matter
CN109360324A (en) * 2018-09-26 2019-02-19 深圳怡化电脑股份有限公司 Banknote detection method, banknote tester, finance device and computer readable storage medium
CN109636835A (en) * 2018-12-14 2019-04-16 中通服公众信息产业股份有限公司 Foreground target detection method based on template light stream

Also Published As

Publication number Publication date
CN110233967A (en) 2019-09-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant