KR101748768B1 - Image generating method and apparatus for machine vision test, method and apparatus for simulating machine vision test, and machine vision testing system - Google Patents


Info

Publication number
KR101748768B1
Authority
KR
South Korea
Prior art keywords
machine vision
image
vision inspection
virtual test
setting
Prior art date
Application number
KR1020150144656A
Other languages
Korean (ko)
Other versions
KR20170044933A (en)
Inventor
윤기욱
백민규
김형균
이석중
Original Assignee
라온피플 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 라온피플 주식회사
Priority to KR1020150144656A
Publication of KR20170044933A
Application granted
Publication of KR101748768B1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00 Features of devices classified in G01N21/00
    • G01N2201/12 Circuits of general importance; Signal processing
    • G01N2201/127 Calibration; base line adjustment; drift compensation
    • G01N2201/12707 Pre-test of apparatus, e.g. dark test, sensor test
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Abstract

The present invention relates to a method and apparatus for generating virtual test images for machine vision inspection, a method and apparatus for simulating machine vision inspection, and a machine vision test system. According to a first aspect of the present invention, there is provided a method for generating a plurality of virtual test images for machine vision inspection, comprising the steps of: obtaining a sample image for generating the plurality of virtual test images; setting at least one condition that can be applied to the sample image so as to reproduce the various environments in which machine vision inspection is performed; and generating the plurality of virtual test images using the set conditions. The machine vision inspection examines an attribute of an object using one or more algorithms among pattern matching, point, line, and face fitting, edge, color, and location, and the various environments are environments that can change the image used for the machine vision inspection.

Description

TECHNICAL FIELD [0001] The present invention relates to a virtual test image generation method and apparatus for machine vision inspection, a machine vision inspection simulation method and apparatus, and a machine vision test system.

The present invention relates to a virtual test image generation method and apparatus for machine vision inspection, a machine vision inspection simulation method and apparatus, and a machine vision test system. More particularly, the present invention relates to a method and apparatus for generating a plurality of virtual test images, and to a method, apparatus, and system for simulating machine vision inspection using the virtual test images.

In recent years, industrial automation has accelerated in fields such as automotive, electronics, display, semiconductor, and food and beverage by combining elements such as machine vision, robots, sensors, PLCs, and computers.

Here, machine vision means automating industry through cameras (visual perception), CPUs, and software to inspect or measure objects, in place of the conventional way in which people judge by the naked eye.

Machine vision is mostly used for industrial purposes rather than by end users. To maximize efficiency, machine vision must recognize smaller objects, recognize fast-moving objects, and be able to read multiple objects simultaneously rather than a single object. Therefore, a machine-vision-dedicated camera differs greatly in equipment characteristics from a consumer camera.

That is, a consumer camera senses the visible-light region, while a machine-vision-dedicated camera mainly senses the infrared region. In order to sense infrared rays, machine-vision-dedicated cameras use various types of infrared lighting, and use pinhole lenses or telecentric lenses in addition to consumer-grade lenses. Also, machine-vision-dedicated cameras use grayscale (black-and-white) cameras more often than color cameras.

The image extracted through the machine-vision-dedicated camera is interpreted through vision software. Vision software can match patterns of objects in the extracted image, fit points, lines, or faces, discriminate colors, measure objects (gauging), provide location information for robot guidance, read 1D or 2D barcodes displayed on an object, or perform optical character recognition (OCR). Through these processes, an object can be inspected, measured, or read, so that industrial automation can be realized.
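By way of illustration only (this sketch is not part of the patent), the pattern-matching step that such vision software performs can be reduced to a normalized cross-correlation search over the image; the function name and toy data below are assumptions for demonstration:

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the (row, col) of the best
    normalized-cross-correlation score, plus that score. Real vision
    software adds image pyramids, rotation tolerance, and sub-pixel
    refinement on top of this basic search."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            score = float((wz * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

img = np.zeros((20, 20))
img[5:9, 7:11] = 1.0           # a bright 4x4 square at row 5, col 7
tmpl = img[4:10, 6:12].copy()  # a patch containing the square and its border
pos, score = match_template(img, tmpl)
print(pos, round(score, 3))    # the patch is found where it was cut out
```

A perfect match yields a correlation score of 1.0 at the patch's original location.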

The problem is that machine vision is applied in a wide variety of industries, and the process environment for automating each product in each industry is very different. In addition, the various lights and lenses applied to machine-vision-dedicated cameras produce different optical environments. In other words, due to these varied process and optical environments, it is difficult to predict the image that a machine-vision-dedicated camera will extract. In practice, building an automation environment for a specific purpose and matching the machine-vision-dedicated camera to that environment takes more time than expected, and it frequently happens that the image extracted by the camera cannot be read by the vision software.

Therefore, it is difficult to apply one specific machine-vision-dedicated camera to all industrial sites. For this reason, before applying a machine-vision-dedicated camera to an industrial site, there is a need to pre-test a sample image from that site with the specific camera and vision software.

At this time, the sample image is likely to differ from the image that will actually be photographed by the machine-vision-dedicated camera. That is, since the conditions under which the sample image was photographed (photographing angle, distance, range, resolution, and the like) may differ from the conditions under which the actual camera operates, the sample image may be completely different from the image obtained by the actual machine-vision-dedicated camera.

For example, suppose a customer (a company that requires automation) provides a machine vision equipment vendor with a sample image of the object it wants to inspect and asks whether pattern matching is possible. The vendor may find that pattern matching succeeds when the vision software is run on the sample image. However, after a machine-vision-dedicated camera is installed on site, it is often the case that pattern matching fails on the images captured by that camera.

That is, there is a high possibility that the sample image provided by the customer does not fully reflect external variables of the actual automation process, such as high-speed movement or illumination.

Therefore, it is necessary to search for a technique for solving the above-mentioned problem.

Korean Patent Registration No. 10-1465684, a prior art document, proposes a vision system in which a lighting control device implemented as a rack type adjusts the brightness of the inspection object, thereby controlling the lighting so that the environment in which the vision system operates approaches an ideal one. However, since the brightness of the illumination is controlled only after the machine vision system is applied to the actual environment, effects beyond acquiring a brighter or darker image are difficult to expect. As a result, the prior art makes it difficult to precisely test the sample delivered by the customer before the actual environment is configured.

On the other hand, the background art described above is technical information that the inventors possessed for, or acquired in, the process of deriving the present invention, and is not necessarily known technology disclosed to the general public before the filing of the present application.

An embodiment of the present invention is directed to a virtual test image generation method and apparatus for machine vision inspection, a machine vision inspection simulation method and apparatus, and a machine vision test system.

According to a first aspect of the present invention, there is provided a method for generating a plurality of virtual test images for machine vision inspection, comprising the steps of: obtaining a sample image for generating the plurality of virtual test images; setting at least one condition applicable to the sample image so as to reproduce the various environments in which machine vision inspection is performed; and generating the plurality of virtual test images using the set conditions. The machine vision inspection examines an attribute of an object using one or more algorithms among pattern matching, point, line, and face fitting, edge, color, and location, and the various environments are environments that can change the image used in the machine vision inspection.

According to a second aspect of the present invention, there is provided a method for simulating machine vision inspection using a plurality of virtual test images, comprising the steps of: obtaining a sample image for generating the plurality of virtual test images; setting at least one condition applicable to the sample image so as to reproduce the various environments in which machine vision inspection is performed; generating the plurality of virtual test images using the set conditions; triggering a machine vision inspection for each of the plurality of virtual test images; and, when the triggering signal is input, performing the machine vision inspection and outputting the result.

According to a third aspect of the present invention, there is provided an apparatus for generating a plurality of virtual test images for machine vision inspection, comprising: an image obtaining unit for obtaining a sample image for generating the plurality of virtual test images; a setting unit for setting at least one condition applicable to the sample image so as to reproduce the various environments in which machine vision inspection is performed; and an image generation unit for generating the plurality of virtual test images using the set conditions. The machine vision inspection examines an attribute of an object using at least one algorithm among pattern matching, point, line, and face fitting, edge, color, and location, and the various environments are environments that can change the image used in the machine vision inspection.

According to a fourth aspect of the present invention, there is provided an apparatus for simulating machine vision inspection using a plurality of virtual test images, comprising: an operation request unit for triggering a machine vision inspection for each of the plurality of virtual test images; and a vision inspection unit for performing the machine vision inspection and outputting a result when the triggering signal is input.

According to a fifth aspect of the present invention, there is provided a machine vision test system comprising: an image generation apparatus for generating a plurality of virtual test images for machine vision inspection; and a simulation apparatus for simulating machine vision inspection using the plurality of virtual test images.

According to any one of the above-mentioned aspects, an embodiment of the present invention provides a virtual test image generation method and apparatus for machine vision inspection, a machine vision inspection simulation method and apparatus, and a machine vision test system.

Further, according to any one of the means for solving the problems of the present invention, it is possible to generate a large number of target images that may be distorted under various circumstances. This presents the variety of situations that machine vision device designers find difficult to predict, enabling the design of machine vision devices that operate more effectively.

Further, according to any one of the means for solving the problems of the present invention, defects of a machine vision apparatus can be found in advance through simulation of its operation. This allows defects to be discovered and repaired before the machine vision device is installed in a factory or the like, so that machine vision devices can be built and operated with minimal resources.

In addition, according to any one of the means for solving the problems of the present invention, the virtual test images can be used in machine vision inspections such as area, pattern matching, edge (position, angle, width, pitch), stain, blob, intensity, color, OCR, and barcode inspection, as well as in the Auto-Teach process for creating a golden template.

The effects obtainable by the present invention are not limited to the above-mentioned effects, and other effects not mentioned will be clearly understood by those skilled in the art from the following description.

FIGS. 1A to 1C are block diagrams of a machine vision test system according to an embodiment of the present invention.
FIGS. 2A and 2B are block diagrams illustrating a machine vision apparatus in accordance with an embodiment of the present invention.
FIG. 3 is a block diagram showing an image generating apparatus and a simulation apparatus according to an embodiment of the present invention.
FIGS. 4A and 4B are diagrams for explaining a user interface provided by the image generating apparatus and the simulation apparatus according to an embodiment of the present invention.
FIG. 5 is a flowchart illustrating an image generating method and a simulation method according to an embodiment of the present invention.
FIGS. 6 to 11 are diagrams for explaining an image generating method and a simulation method according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may readily practice them. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In order to clearly describe the present invention, parts not related to the description are omitted, and similar parts are denoted by like reference characters throughout the specification.

Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another element in between. Also, when a part is said to "comprise" an element, this means that it may include other elements as well, rather than excluding them, unless specifically stated otherwise.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

FIGS. 1A through 1C are block diagrams for explaining a machine vision test system according to an embodiment of the present invention.

Referring to FIG. 1A, a machine vision test system 100 according to an embodiment of the present invention includes a machine vision device 20, an image generation device 30, and a simulation device 40.

The machine vision apparatus 20 according to an embodiment of the present invention is an apparatus for performing machine vision inspection.

At this time, the machine vision inspection checks a property of the object using one or more algorithms such as pattern matching, point/line/face fitting, edge, color, and location.

An application (i.e., vision software) for machine vision inspection can be installed on the machine vision device 20, and the application can acquire a target image of the object and control the machine vision device 20 according to an algorithm.

The machine vision device 20 may also include a camera (not shown) for capturing an object, or may communicate with an externally located camera (not shown).

The machine vision device 20 described above will be explained in more detail below with reference to FIGS. 2A and 2B.

On the other hand, the image generating apparatus 30 is a device for generating virtual test images for simulating the operation of the machine vision apparatus 20, and such an image generating apparatus 30 will be described later in detail with reference to FIG. 3.

Meanwhile, the simulation apparatus 40 is an apparatus for simulating the operation of the machine vision apparatus 20 using the virtual test images generated by the image generating apparatus 30. Such a simulation apparatus 40 will also be described later in more detail with reference to FIG. 3.

In this regard, the machine vision apparatus 20, the image generating apparatus 30, and the simulation apparatus 40 may each include, or be included in, one another. For example, the image generating apparatus 30 including the simulation apparatus 40 may be included in the machine vision apparatus 20, or it may be located outside the machine vision apparatus 20 and communicate with it.

Further, the machine vision apparatus 20, the image generating apparatus 30, and the simulation apparatus 40 may be located apart from one another and communicate with each other via a network.

FIGS. 1B and 1C each illustrate a machine vision test system 100 in accordance with an embodiment of the present invention.

That is, FIG. 1B shows a test system implemented with a machine vision camera M and a PC (workstation) P. At this time, the image generating apparatus 30 and the simulation apparatus 40 may be included in the PC P, or may be driven under the operation of the vision software, and the machine vision device 20 may be included in the machine vision camera M.

On the other hand, FIG. 1C shows a test system implemented with a smart camera S. Here, a smart camera means a stand-alone system that can operate by itself without the help of a PC, although it can also operate together with a PC. At this time, the image generating device 30 and the simulation device 40 may be driven under the operation of the vision software installed in the smart camera S or the PC, and the machine vision apparatus 20 may be included in the smart camera S.

FIGS. 2A and 2B are block diagrams illustrating a machine vision apparatus 20 according to an embodiment of the present invention.

Referring to FIG. 2A, a machine vision apparatus 20 according to an embodiment of the present invention includes an object receiving unit 210, a preprocessing unit 220, a storage unit 230, and a post-processing unit 240.

The object receiving unit 210 acquires an image to be inspected for machine vision.

At this time, the object receiving unit 210 can acquire an image photographed by a camera (not shown) as a target image. When acquiring the photographed image, the object receiving unit 210 may be implemented by various sensors.

The object receiving unit 210 may acquire a target image by receiving the virtual test image generated by the image generating apparatus 30 according to an embodiment of the present invention.

Meanwhile, the preprocessing unit 220 performs preprocessing for the effective inspection of the target image before examining the target image.

For this, the preprocessing unit 220 may receive the target image from the object receiving unit 210, for example through a parallel interface, MIPI-CSI, LVDS, or the like.

The preprocessing unit 220 may preprocess the target image by filtering it, correcting noise, setting the image format, and so on.
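As an illustrative sketch only (the patent's preprocessing unit targets an FPGA, not Python), the noise-correction step could be a 3x3 median filter, which removes isolated salt-and-pepper pixels while preserving edges:

```python
import numpy as np

def preprocess(frame):
    """A minimal sketch of the preprocessing stage: a 3x3 median filter
    to suppress salt-and-pepper noise. `frame` is assumed to be a 2-D
    grayscale array; the real unit would pipeline filtering, noise
    correction, and format conversion in hardware."""
    h, w = frame.shape
    padded = np.pad(frame.astype(float), 1, mode="edge")
    # Stack the nine shifted views that make up each pixel's 3x3
    # neighborhood, then take the per-pixel median across the stack.
    stack = np.stack([padded[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0).astype(np.uint8)

noisy = np.full((8, 8), 100, dtype=np.uint8)
noisy[3, 3] = 255                 # a single salt-noise pixel
clean = preprocess(noisy)
print(int(clean[3, 3]), int(clean.max()))  # the outlier is removed
```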

For this, the preprocessor 220 may be implemented as a field programmable gate array (FPGA).

The storage unit 230 may store an image that has been pre-processed by the preprocessing unit 220 as described above.

The preprocessed image can then be transmitted to the post-processing unit 240 through a parallel interface, C2C, USB, or the like.

On the other hand, the post-processing unit 240 examines the preprocessed image according to an algorithm and generates a result. The result of the examination may be, for example, whether the preprocessed image satisfies a predetermined criterion.

To this end, the post-processing unit 240 may be implemented as an application processor (AP) to post-process the image. The AP in the post-processing unit 240 can output the test result using vision software installed on an embedded OS (Linux, Android, etc.).

Also, the post-processing unit 240 may transmit the inspection result through the transmission unit 250 so that it is output by vision software running on a user terminal (not shown).

Meanwhile, the machine vision apparatus 20 according to another embodiment of the present invention may further include a transmission unit 250.

The transmission unit 250 can communicate with a display unit (not shown). The display unit is a module for interacting with a user: the user can input a command to the machine vision apparatus 20 through the display unit, or confirm a response of the machine vision apparatus 20 on it.

The transmission unit 250 can additionally communicate with a user terminal (not shown).

A user terminal (not shown) is located outside the machine vision device 20; vision software mounted on the user terminal can receive post-processed images from the machine vision device 20 and generate the result of the inspection.

The user terminal (not shown) may additionally include a module for interacting with the user. The user may input a command to the machine vision device 20 through the user terminal, or confirm the response of the machine vision device 20 or the result of the inspection.

The user terminal (not shown) may be implemented as a computer, a portable terminal, a television, a wearable device, or the like. Here, the computer includes, for example, a desktop, a laptop, or a notebook equipped with a web browser. The portable terminal is, for example, a wireless communication apparatus with guaranteed portability and mobility, and may include all kinds of handheld wireless communication devices. In addition, the television may include an Internet Protocol Television (IPTV), an Internet TV, a terrestrial TV, a cable TV, and the like. Further, the wearable device is an information processing device of a type that can be worn directly on the human body, for example a watch, glasses, an accessory, a garment, or shoes, and can be connected to a remote server or another terminal via a network.

The transmission unit 250 as described above may be implemented by GigE (Gigabit Ethernet) or USB.

Meanwhile, the machine vision apparatus 20 according to another embodiment of the present invention may include an object receiving unit 210, a preprocessing unit 220, and a transmission unit 250, as shown in FIG. 2B. That is, the machine vision apparatus 20 may not include the storage unit 230 and the post-processing unit 240 of FIG. 2A.

In this case, the post-processing unit 240 of FIG. 2A may be implemented on a user terminal (not shown) located outside the machine vision device 20. Accordingly, the user terminal may receive the preprocessed image from the machine vision device 20 and perform the post-processing.

For this, the transmission unit 250 may be implemented with GigE, USB, Camera Link, or CoaXPress. Unlike with GigE or USB, when the transmission unit 250 is implemented with Camera Link or CoaXPress, the user terminal (not shown) may additionally be equipped with a frame grabber to receive information from the machine vision device 20.

FIG. 3 is a block diagram illustrating an image generating apparatus 30 according to an embodiment of the present invention.

The image generating apparatus 30 may include an image obtaining unit 310, a setting unit 320, and an image generating unit 330, as shown in FIG.

The image obtaining unit 310 obtains a sample image for generating a plurality of virtual test images. When the machine vision apparatus 20 performs machine vision inspection, the image used for the inspection is referred to as a 'sample image'. A sample image may be, for example, an image in which a customer (a manufacturer requiring automation) has photographed the object it wants to inspect.

The image acquiring unit 310 can acquire a sample image through communication with the machine vision apparatus 20.

The image acquiring unit 310 may acquire a sample image by receiving a sample image through an interface unit (not shown).

To this end, the image generating apparatus 30 according to an embodiment of the present invention may further include an interface unit (not shown). The interface unit is a module capable of interacting with a user: it can detect a user's input by sensing a click, a touch, a tap, and the like, and can display information to the user.

On the other hand, the setting unit 320 sets at least one condition applicable to the sample image so that various environments in which the machine vision inspection is performed can be reproduced.

At this time, 'diverse environment' means an environment that can change the image used for machine vision inspection.

This condition can be determined, for example, by comparing a sample image with a photographed image that could not be read normally even though the machine vision apparatus 20 performed a normal inspection in the past. For example, if Gaussian noise occurred in a photographed image of a specific object, so that the object could not be judged normal even though it was, Gaussian noise can be set as a condition applicable to the sample image.
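A minimal sketch of how such a Gaussian-noise condition could be applied to a sample image (the function name and parameters are illustrative assumptions, with `sigma` playing the role of the filter's Value 1 parameter):

```python
import numpy as np

def apply_gaussian_noise(sample, sigma, seed=None):
    """Apply a 'Gaussian noise' condition to a sample image, producing
    one virtual test image. `sample` is assumed to be an 8-bit grayscale
    array; `sigma` controls the noise strength."""
    rng = np.random.default_rng(seed)
    noisy = sample.astype(float) + rng.normal(0.0, sigma, sample.shape)
    # Clip back into the valid 8-bit range expected by the inspection.
    return np.clip(noisy, 0, 255).astype(np.uint8)

sample = np.full((16, 16), 128, dtype=np.uint8)   # a flat gray sample
test_image = apply_gaussian_noise(sample, sigma=20, seed=0)
print(test_image.shape, test_image.std() > 0)
```

The resulting image keeps the sample's shape and format but is no longer uniform, mimicking a noisy capture.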

In order to set such a condition, the setting unit 320 may provide an interface for condition setting to the user through the interface unit (not shown), and may receive conditions from the user.

According to an embodiment of the present invention, the setting unit 320 may provide the following five types of image generation modes as a user-friendly interface. However, the image generation modes according to an embodiment of the present invention are not limited to the five types described below, and fewer or more image generation options may be provided for the convenience of the user.

First, the setting unit 320 may provide a mode for automatically generating images (an 'automatic' image generation mode). That is, the setting unit 320 can automatically set the conditions to be applied to the sample image. For this, the setting unit 320 can receive only the number of images to be generated from the user and then generate the images. Further, the setting unit 320 may receive the number of images for each level, from level (Lv.) 1, which applies the smallest deformation to the sample image, to level 3, which applies the largest, and generate the corresponding totals of images for levels 1 to 3.

Secondly, the setting unit 320 may provide a mode for generating images for each environmental element (an 'environmental element' image generation mode). That is, the setting unit 320 may set the condition to be applied to the sample image as an environmental element, and the environmental element may be at least one of uneven illumination, camera position deviation, noise, vibration, and poor focus. An 'environmental factor' is extracted by combining filters such as brightness, contrast, shading, rotation, shift, zoom, perspective, Gaussian noise, blur, scratch, lens flare, and dust, and environmental factors other than the examples described above are possible. The setting unit 320 may receive a number of images corresponding to each of levels 1 to 3 for each environmental element and generate the images.

Thirdly, the setting unit 320 may provide a mode for generating images for each filter value (a 'filter' image generation mode). The filter value may be any one or more of, for example, brightness, contrast, shading, rotation, shift, zoom, perspective, Gaussian noise, blur, scratch, lens flare, and dust. The intensity of each filter can be adjusted, and intensity values can be expressed as levels 1 through 3.

Table 1 below illustrates the relationship between environmental factors and filter values. The level of each filter can be set by its filter value; the level of an environmental element can be set by combining the levels of several filters; and the level used when images are generated automatically can be adjusted by combining the levels of the environmental elements.

| Environmental factor | Filter | Value 1 | Value 2 |
|---|---|---|---|
| Uneven illumination | Brightness | Constant | - |
| | Contrast | Constant | - |
| | Radial Shading | Location (x, y) | Radius |
| | Linear Shading | Constant | - |
| Camera position deviation | Rotation | Phase | - |
| | Shift | x-offset | y-offset |
| | Zoom | Scaling factor | - |
| | Perspective | - | - |
| Noise | Gaussian Noise | Sigma | - |
| | Salt & Pepper Noise | Probability | - |
| Vibration and poor focus | Gaussian Blur | Sigma | Kernel size |
| | Motion Blur | Direction (0~3) | Line size |
| | Radial Blur | Location (x, y) | Radius |
| | Zoom Blur | Location (x, y) | Line size |
| Foreign body, damage | Scratch | Count | - |
| | Crack | Count | - |
| | Lens Flare | Location (x, y) | Scale |
| | Dust | Location (x, y) | - |
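As a hedged sketch of the level mechanism described above, levels 1 to 3 might map to increasingly severe filter parameters; the particular brightness and sigma values below are assumptions for illustration, not values from the patent:

```python
import numpy as np

# Hypothetical mapping from deformation level (Lv.1 = mildest,
# Lv.3 = most severe) to filter parameters.
LEVEL_PARAMS = {
    1: {"brightness": 10, "noise_sigma": 5},
    2: {"brightness": 30, "noise_sigma": 15},
    3: {"brightness": 60, "noise_sigma": 30},
}

def generate_by_level(sample, level, count, seed=0):
    """Generate `count` virtual test images at a given deformation level
    by combining a brightness offset with Gaussian noise."""
    rng = np.random.default_rng(seed)
    p = LEVEL_PARAMS[level]
    out = []
    for _ in range(count):
        img = sample.astype(float) + p["brightness"]
        img += rng.normal(0.0, p["noise_sigma"], sample.shape)
        out.append(np.clip(img, 0, 255).astype(np.uint8))
    return out

sample = np.full((8, 8), 100, dtype=np.uint8)
# Four images per level, as if the user had entered "4" for Lv.1 to Lv.3.
batch = [img for lv in (1, 2, 3) for img in generate_by_level(sample, lv, 4)]
print(len(batch))  # 12 virtual test images in total
```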

Fourth, the setting unit 320 may provide an image generation mode using templates (a 'template' image generation mode). That is, the setting unit 320 may set the condition to be applied to the sample image from a template, where a template is a set of conditions matching a machine vision inspection environment that has been stored in advance. Such templates may be provided in a tree structure, and are preferably provided in forms that users can apply widely, taking environmental factors, filters, and the like into consideration.

Fifth, the setting unit 320 may provide a user-defined image generation mode (a 'user defined' image generation mode). That is, the setting unit 320 can set the condition to be applied to the sample image from a user definition, and the user definition may store one or more user-customized settings among the automatic, filter, environmental element, and template settings. The user-defined generation mode allows the user to store settings that he or she wants to reuse while practicing the present invention, and to apply those settings in the future.

On the other hand, the image generating unit 330 generates a plurality of virtual test images using the set conditions.

To this end, the image generating unit 330 may generate a plurality of virtual test images by applying the conditions set by the setting unit 320 to the sample images.

At this time, there may be a plurality of conditions, and one or more virtual test images may be generated as a plurality of conditions are simultaneously or sequentially applied to the sample image.

In addition, if there are a plurality of generated virtual test images, the image generating unit 330 may synthesize the plurality of virtual test images to generate another virtual test image.

Meanwhile, according to a preferred embodiment of the present invention, the image generating apparatus 30 may further include a simulation apparatus 40.

As shown in FIG. 3, the simulation apparatus 40 may include an operation request unit 340 and a vision inspection unit 350, and such a simulation apparatus 40 may be included in the image generating apparatus 30.

The operation requesting unit 340 triggers the machine vision inspection for each of, or for some or all of, the plurality of virtual test images.

To this end, the operation requesting unit 340 may acquire a virtual test image from the image generating apparatus 30.

That is, the operation requesting unit 340 can request a virtual test image from the image generating apparatus 30 and acquire the virtual test image as the response, or can receive the virtual test image when the image generating apparatus 30 transmits it.

On the other hand, when the triggering signal is inputted, the vision inspection unit 350 can perform the machine vision inspection and output the result.

That is, the vision inspection unit 350 can request the operation of the machine vision apparatus 20 for the virtual test image.

For this, the operation request unit 340 may transmit the virtual test image to the machine vision apparatus 20 together with the operation request information.

The machine vision apparatus 20 can acquire the virtual test image as the target image and perform the machine vision inspection on it according to the algorithm set on the machine vision apparatus 20.

The vision inspection unit 350 can also determine whether an error has occurred in the operation of the machine vision apparatus.

That is, the vision inspection unit 350 may determine whether the result of the operation request differs from a predetermined result and, if so, determine that an error has occurred in the operation of the machine vision apparatus. For this, the vision inspection unit 350 can receive the operation result for the sample image from the machine vision apparatus 20 as the predetermined result, and can also receive the result of the operation request from the machine vision apparatus 20.
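The comparison performed by the vision inspection unit can be sketched as follows: the result for the original sample image serves as the predetermined (expected) result, and any differing result for a virtual test image is flagged as an operation error. The result structure below is an assumed example, not the apparatus's actual output format.

```python
# Sketch of the vision inspection unit's error check. The dict fields are
# hypothetical stand-ins for whatever result the machine vision apparatus returns.
def detect_error(expected_result, actual_result):
    """Return True when the inspection result deviates from the expected one."""
    return actual_result != expected_result

expected = {"pass": True, "defect_count": 0}   # result for the original sample image
actual = {"pass": False, "defect_count": 2}    # result for a virtual test image
error_occurred = detect_error(expected, actual)
```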

On the other hand, if it is determined that an error has occurred in the operation of the machine vision apparatus, the vision inspection unit 350 may extract the filter value or the like applied in generating the virtual test image that caused the error, and change the vision software installed on the machine vision apparatus 20 accordingly. The vision inspection unit 350 can also notify the designer that the machine vision apparatus 20 itself needs to be repaired.

Hereinafter, how the present invention can be implemented in a user interface environment will be described with reference to FIGS. 4A and 4B, which are diagrams for explaining a user interface provided by the image generating apparatus and the simulation apparatus according to an embodiment of the present invention.

The image acquiring unit 310 acquires a sample image (i.e., an original image) through an operation of loading an original image, for example via the 'File' toolbar menu or a 'Loading' operation, and displays the sample image in the 'original image' area. The sample image may be displayed full screen, enlarged, or reduced by a user's operation, and may be 'exported' to the 'object to be inspected' region so that the operation request unit 340 can temporarily store it.

The setting unit 320 is interlocked with the 'image generation mode' area and can set the image generation mode in various forms such as 'automatic', 'environmental element', 'filter', 'template', and 'user defined'. The description of each of these image generation modes is the same as that given above for the setting unit 320.

The image generating unit 330 may generate an image corresponding to the image setting mode of the setting unit 320 and display the image in the 'generated image' area. A user interface such as full screen, zoom, and screen division may be provided, and log information of the image setting mode, such as the environmental elements and filters used, may be displayed as text or as a table. FIGS. 4A and 4B illustrate the log information of the image setting mode recorded in the form of a table.

The operation requesting unit 340 may temporarily store the image to be inspected by the vision inspection unit 350 in the 'inspection target' area. The data stored at this time may be the original image or an image selected as the object to be inspected from among the generated images. The operation requesting unit 340 can trigger the machine vision inspection when the 'test execution' button is activated, and accordingly transmits the triggering signal to the vision inspection unit 350, which performs the machine vision inspection and outputs the result.

FIG. 5 is a flowchart illustrating an image generating method and a simulation method according to an embodiment of the present invention.

The image generating method according to the embodiment shown in FIG. 5 includes steps that are processed in a time-series manner in the image generating apparatus 30 and the simulation apparatus 40 shown in FIG. 3. Therefore, the contents described above with respect to the image generating apparatus 30 shown in FIG. 3 also apply to the image generating method and the simulation method according to the embodiment shown in FIG. 5.

FIGS. 6 to 11 are diagrams for explaining an image generating method and a simulation method according to an embodiment of the present invention, which will be described below with reference to those figures.

First, a method by which the image generating apparatus 30 generates an image will be described.

That is, the image generating apparatus 30 may acquire a sample image for generating a plurality of virtual test images (S510).

That is, the image generating apparatus 30 can acquire the sample image 600 as shown in FIG. 6.

Meanwhile, the image generating apparatus 30 may set one or more conditions to be applied to the acquired sample image (S520).

At this time, the image generating apparatus 30 may set the conditions to be applied to the sample image as filter values, set them as environmental elements, set them from a template, set them automatically, or set them from a user definition.

If the condition is set as described above, the image generating apparatus 30 can generate a virtual test image (S530).

That is, the image generating apparatus 30 may generate a plurality of virtual test images by applying conditions to the sample images.

As a result, the image generating apparatus 30 can generate virtual test images in which the conditions are applied to the sample image, as shown in FIGS. 7 to 11.

That is, assuming that the camera focus is bad or that the camera vibrates, the image generating apparatus 30 may set a filter value for at least one of 'Gaussian blur', 'motion blur', 'radial blur', and 'zoom blur'. For example, the image generating apparatus 30 can set the motion direction and the blur size as the values of 'motion blur'. Then, the image generating apparatus 30 applies the conditions set for each of 'Gaussian blur', 'motion blur', 'radial blur', and 'zoom blur' to the sample image, so that a plurality of virtual test images can be generated as shown in (b), (c), and (d) of FIG. 7.
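A motion blur driven by a direction and a line size, as in Table 1, can be sketched as a line-shaped convolution kernel. The 0-3 direction encoding (horizontal, vertical, two diagonals) and the naive convolution below are assumptions made for the sketch.

```python
import numpy as np

# Illustrative motion blur: build a line kernel from a direction index (0-3)
# and a line size, then convolve. The direction encoding is an assumption.
def motion_blur(img, direction, size):
    kernel = np.zeros((size, size), dtype=np.float32)
    if direction == 0:            # horizontal line
        kernel[size // 2, :] = 1
    elif direction == 1:          # vertical line
        kernel[:, size // 2] = 1
    elif direction == 2:          # main diagonal
        np.fill_diagonal(kernel, 1)
    else:                         # anti-diagonal
        np.fill_diagonal(np.fliplr(kernel), 1)
    kernel /= kernel.sum()
    # Naive 'same'-size convolution with edge padding.
    pad = size // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + size, x:x + size] * kernel)
    return out.astype(np.uint8)

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255                     # single bright pixel
blurred = motion_blur(img, direction=0, size=3)   # smears it horizontally
```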

In addition, the image generating apparatus 30 may set a filter value for at least one of 'brightness', 'contrast', 'radial shading', and 'linear shading', assuming that the camera illumination is uneven. For example, the image generating apparatus 30 can set the position and the radius size as the values of 'radial shading'. Then, the image generating apparatus 30 applies the set conditions to generate a plurality of virtual test images as shown in FIGS. 8 (b), 8 (c), and 8 (d).
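A radial shading parameterized by a center location (x, y) and a radius, as in Table 1, can be sketched by attenuating pixels according to their distance from the center. The linear falloff curve below is an assumption; the actual filter may shape the falloff differently.

```python
import numpy as np

# Illustrative 'radial shading' for uneven illumination: full brightness at
# the given center, falling off linearly to black at the given radius.
def radial_shading(img, cx, cy, radius):
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
    gain = np.clip(1.0 - dist / radius, 0.0, 1.0)   # 1 at center, 0 at/after radius
    return (img.astype(np.float32) * gain).astype(np.uint8)

img = np.full((5, 5), 200, dtype=np.uint8)
shaded = radial_shading(img, cx=2, cy=2, radius=4)  # corners come out darker
```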

In addition, the image generating apparatus 30 may set a filter value for at least one of 'rotation', 'shift', 'zoom', and 'perspective', assuming a positional deviation of the camera. For example, the image generating apparatus 30 may set the x-axis offset and the y-axis offset as the values of 'shift'. Then, the image generating apparatus 30 applies the values set for each of 'rotation', 'shift', 'zoom', and 'perspective' to the sample image, so that a plurality of virtual test images can be generated in the order shown in FIGS. 9 (a), 9 (b), 9 (c), and 9 (d).

Further, assuming that noise is applied to the sample image 600, the image generating apparatus 30 can set a filter value for 'Gaussian noise' or 'point noise'; for example, a sigma value can be set for the Gaussian noise. Then, the image generating apparatus 30 can apply the value set for the Gaussian noise to the sample image to generate a virtual test image as shown in FIG. 10.
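The Gaussian noise filter with its 'Sigma' value from Table 1 can be sketched as zero-mean normal noise added per pixel. The fixed seed below is only for reproducibility of the sketch, not part of the described method.

```python
import numpy as np

# Illustrative Gaussian noise: add zero-mean normal noise with the given sigma.
def add_gaussian_noise(img, sigma, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

sample = np.full((64, 64), 128, dtype=np.uint8)
noisy = add_gaussian_noise(sample, sigma=10.0)   # pixel spread roughly equals sigma
```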

Furthermore, the image generating apparatus 30 may transform the sample image into a virtual test image as shown in FIG. 11 (a), on the premise that the lens of the camera is scratched. In addition, the image generating apparatus 30 can generate virtual test images as shown in FIGS. 11 (b) and 11 (c), assuming that the lens of the camera has cracks or dust on it, and can generate a virtual test image as shown in FIG. 11 (d) on the premise that a lens flare of the camera occurs.

When a plurality of virtual test images have been generated, the image generating apparatus 30 may synthesize them to generate another virtual test image. At this time, the synthesis ratio can be set arbitrarily; for example, when two virtual test images are synthesized, the two images can be superimposed on each other, each with a transparency of 50%.
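The superimposition described above is a standard alpha blend; a minimal sketch, with alpha = 0.5 matching the 50%-transparency example, is:

```python
import numpy as np

# Blend two virtual test images at an arbitrary ratio (alpha = 0.5 gives the
# 50/50 overlay from the text). Computed in float to avoid uint8 overflow.
def composite(img_a, img_b, alpha=0.5):
    blended = alpha * img_a.astype(np.float32) + (1 - alpha) * img_b.astype(np.float32)
    return blended.astype(np.uint8)

a = np.full((3, 3), 200, dtype=np.uint8)
b = np.full((3, 3), 100, dtype=np.uint8)
mixed = composite(a, b)
```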

On the other hand, a method by which the simulation apparatus 40 simulates using one or more virtual test images generated as described above will be described below.

That is, the simulation apparatus 40 can receive one or more virtual test images generated by the image generating apparatus 30.

That is, the simulation apparatus 40 may acquire a virtual test image as a target image and cause the machine vision apparatus 20 to trigger a machine vision inspection for each of, or for some of, the plurality of virtual test images (S540). Accordingly, when the triggering signal is inputted, the simulation apparatus 40 can perform the machine vision inspection and output the result (S550).

At this time, if the inspection result produced by the machine vision apparatus 20 differs from the preset result, the simulation apparatus 40 can determine that an error has occurred in the operation of the machine vision apparatus 20. Then, the operation of the machine vision apparatus 20 can be changed, or the vision software installed on the machine vision apparatus 20 can be changed. For example, the simulation apparatus 40 can extract the filter value applied in generating the virtual test image that caused the error, and change the vision software installed on the machine vision apparatus 20 based on the extracted filter value.

On the other hand, since the image generating apparatus 30 can generate a plurality of virtual test images, the simulation apparatus 40 can cause the machine vision apparatus 20 to operate until the inspection of all of the virtual test images is completed. Thus, it is possible to simulate whether the machine vision inspection is performed correctly under the various environments in which machine vision inspection is carried out.
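Running the inspection over every virtual test image and recording which ones caused errors can be sketched as a simple loop. The `inspect` callable below is a toy stand-in for the real machine vision apparatus, which the system would drive remotely.

```python
# Sketch of the simulation loop: inspect every virtual test image and collect
# the indices of images whose result deviates from the expected one.
def simulate_all(virtual_images, inspect, expected):
    failures = []
    for idx, image in enumerate(virtual_images):
        result = inspect(image)
        if result != expected:
            failures.append(idx)   # record which generated conditions broke inspection
    return failures

# Toy stand-in: this 'inspection' passes only bright images.
images = [50, 200, 30, 220]        # stand-ins for virtual test images
failures = simulate_all(images, inspect=lambda im: im > 100, expected=True)
```

The failure list is what lets the simulation apparatus trace an error back to the filter values that generated the offending image.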

The term 'part' used in the present embodiment means software or a hardware component such as a field programmable gate array (FPGA) or an ASIC, and a 'part' performs certain roles. However, 'part' is not limited to software or hardware. A 'part' may be configured to reside on an addressable storage medium and may be configured to execute on one or more processors. Thus, by way of example, 'parts' may refer to components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

The functions provided within the components and 'parts' may be combined into a smaller number of components and 'parts' or further separated into additional components and 'parts'.

In addition, the components and 'parts' may be implemented to execute on one or more CPUs in a device or a secure multimedia card.

Each of the image generation method and the simulation method according to the embodiment described with reference to FIG. 5 may be implemented in the form of a recording medium including instructions executable by a computer such as a program module executed by a computer. Computer readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media. In addition, the computer-readable medium may include both computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Communication media typically includes any information delivery media, including computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transport mechanism.

Further, each of the image generation method and the simulation method according to an embodiment of the present invention may be implemented as a computer program (or computer program product) including instructions executable by a computer. A computer program includes programmable machine instructions that are processed by a processor and can be implemented in a high-level programming language, an object-oriented programming language, an assembly language, or a machine language. The computer program may also be recorded on a tangible computer-readable recording medium (e.g., memory, hard disk, magnetic/optical medium, or solid-state drive).

Thus, each of the image generation method and the simulation method according to the embodiment of the present invention can be implemented by the computer program as described above being executed by the computing device. The computing device may include a processor, a memory, a storage device, a high-speed interface connected to the memory and a high-speed expansion port, and a low-speed interface connected to the low-speed bus and the storage device. Each of these components is connected to each other using a variety of buses and can be mounted on a common motherboard or mounted in any other suitable manner.

Here, the processor may process instructions within the computing device, such as instructions stored in memory or in a storage device for displaying graphical information to provide a graphical user interface (GUI) on an external input/output device, such as a display connected to the high-speed interface. As another example, multiple processors and/or multiple buses may be used, as appropriate, with multiple memories and memory types. The processor may also be implemented as a chipset composed of chips including multiple independent analog and/or digital processors.

The memory also stores information within the computing device. In one example, the memory may comprise volatile memory units or a collection thereof. In another example, the memory may be comprised of non-volatile memory units or a collection thereof. The memory may also be another type of computer readable medium such as, for example, a magnetic or optical disk.

The storage device can provide a large amount of storage space to the computing device. The storage device may be a computer-readable medium or a configuration including such a medium, may include, for example, devices in a SAN (Storage Area Network) or other configurations, and may be a floppy disk device, a hard disk device, a tape device, flash memory, or another similar semiconductor memory device or device array.

It will be understood by those skilled in the art that the foregoing description of the present invention is for illustrative purposes only, and that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may also be implemented in a combined form.

The scope of the present invention is defined by the appended claims rather than by the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents are to be construed as being included within the scope of the present invention.

100: Simulation system
20: Machine vision device
30: Image generating device
310: image obtaining unit 320: setting unit
330: image generating unit
340: operation requesting unit 350: vision inspection unit

Claims (20)

A method for generating a plurality of virtual test images for machine vision inspection,
Obtaining a sample image for generating the plurality of virtual test images;
Setting at least one condition applicable to the sample image so as to reproduce various environments in which the machine vision inspection is performed; And
And generating the plurality of virtual test images by applying the set conditions to the sample images,
Wherein the machine vision inspection inspects an object using an algorithm based on at least one of pattern matching, point, line, face fitting, edge, color, and location, and
Wherein the various environments are environments in which changes can be made to the images used in the machine vision inspection.
The method according to claim 1,
Further comprising triggering a machine vision inspection for each or a portion of the plurality of virtual test images.
The method according to claim 1,
And when the triggering signal is inputted, performing the machine vision inspection and outputting the result.
The method according to claim 1,
Wherein the setting step comprises:
A condition to be applied to the sample image is set as a filter value,
Wherein the filter value is a value for at least one filter among brightness (Brightness), contrast (Contrast), shading (Shading), rotation (Rotation), shift (Shift), zoom (Zoom), perspective (Perspective), Gaussian noise (Gaussian Noise), point noise (Point Noise), blur (Blur), crack (Crack), scratch (Scratch), lens flare (Lens Flare), and dust (Dust).
The method according to claim 1,
Wherein the setting step comprises:
Setting a condition to be applied to the sample image as an environmental element,
Characterized in that the environmental element is at least one of illumination non-uniformity, camera position deviation, noise, vibration and poor focus.
6. The method of claim 5,
Wherein the environmental element is defined by combining and extracting at least one filter value from among the brightness, contrast, shading, rotation, shift, zoom, perspective, Gaussian noise, point noise, blur, crack, scratch, lens flare, and dust filters.
The method according to claim 1,
Wherein the setting step comprises:
A condition to be applied to the sample image is set as a template,
Wherein the template is a set of conditions matching the machine vision inspection environment that is stored in advance.
The method according to claim 1,
Wherein the setting step comprises:
Automatically setting a condition to be applied to the sample image,
Wherein the automatic setting automatically sets conditions in correspondence with an input number of images to be generated, or automatically sets conditions in correspondence with an input degree of deformation, graded in levels from a small degree of deformation of the sample image to a large degree.
The method according to claim 1,
Wherein the setting step comprises:
Setting a condition to be applied to the sample image as user-defined,
Wherein the user definition is one in which at least one of an automatic setting, a filter setting, an environment element setting, and a template setting is previously stored and defined by a user.
A method for simulating machine vision inspection using a plurality of virtual test images,
Obtaining a sample image for generating the plurality of virtual test images;
Setting at least one condition applicable to the sample image so as to match various environments in which machine vision inspection is performed;
Generating the plurality of virtual test images by applying the set conditions to the sample images;
Triggering a machine vision inspection for each or a portion of the plurality of virtual test images; And
And when the triggering signal is inputted, performing the machine vision inspection and outputting a result.
An apparatus for generating a plurality of virtual test images for machine vision inspection,
An image obtaining unit obtaining a sample image for generating the plurality of virtual test images;
A setting unit configured to set at least one condition applicable to the sample image so as to reproduce various environments in which the machine vision inspection is performed; And
And an image generator for generating the plurality of virtual test images by applying the set conditions to the sample images,
Wherein the machine vision inspection inspects an object using an algorithm based on at least one of pattern matching, point, line, face fitting, edge, color, and location, and
Wherein the various environments are environments in which changes can be made to the images used in the machine vision inspection.
12. The method of claim 11,
In addition,
A condition to be applied to the sample image is set as a filter value,
Wherein the filter value is a value for at least one filter among brightness (Brightness), contrast (Contrast), shading (Shading), rotation (Rotation), shift (Shift), zoom (Zoom), perspective (Perspective), Gaussian noise (Gaussian Noise), point noise (Point Noise), blur (Blur), crack (Crack), scratch (Scratch), lens flare (Lens Flare), and dust (Dust).
12. The method of claim 11,
In addition,
Setting a condition to be applied to the sample image as an environmental element,
Wherein the environmental element is at least one of illumination non-uniformity, camera position deviation, noise, vibration, and defective focus.
14. The method of claim 13,
Wherein the environmental element is defined by combining and extracting at least one filter value from among the brightness, contrast, shading, rotation, shift, zoom, perspective, Gaussian noise, point noise, blur, crack, scratch, lens flare, and dust filters.
12. The method of claim 11,
In addition,
A condition to be applied to the sample image is set as a template,
Wherein the template is a set of conditions matching the machine vision inspection environment that is stored in advance.
12. The method of claim 11,
In addition,
Automatically setting a condition to be applied to the sample image,
Wherein the automatic setting automatically sets conditions in correspondence with an input number of images to be generated, or automatically sets conditions in correspondence with an input degree of deformation, graded in levels from a small degree of deformation of the sample image to a large degree.
12. The method of claim 11,
In addition,
Setting a condition to be applied to the sample image as user-defined,
Wherein the user definition is one in which at least one of an automatic setting, a filter setting, an environment element setting, and a template setting has been stored and defined by the user.
An apparatus for communicating with the image generation apparatus of claim 11 and simulating machine vision inspection using the plurality of virtual test images,
An operation request unit for acquiring a plurality of virtual test images and triggering machine vision inspection for each or a part of the images of the plurality of virtual test images; And
And a vision inspection unit for performing a machine vision inspection and outputting a result when the triggering signal is inputted.
As a machine vision test system,
An image generation device for applying a condition for reproducing various environments in which machine vision inspection is performed to a sample image to generate a plurality of virtual test images for machine vision inspection; And
And a simulation apparatus for simulating machine vision inspection using the plurality of virtual test images.
20. The method of claim 19,
Further comprising a machine vision apparatus that performs the machine vision inspection to check properties of an object using one or more algorithms among pattern matching, point, line, face fitting, edge, color, and location,
Wherein the simulation apparatus is configured to cause the machine vision apparatus to perform a machine vision inspection on each or a portion of the images of the plurality of virtual test images.
KR1020150144656A 2015-10-16 2015-10-16 Image generating method and apparatus for machine vision test, method and apparatus for simulating machine vision test, and machine vision testing system KR101748768B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150144656A KR101748768B1 (en) 2015-10-16 2015-10-16 Image generating method and apparatus for machine vision test, method and apparatus for simulating machine vision test, and machine vision testing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150144656A KR101748768B1 (en) 2015-10-16 2015-10-16 Image generating method and apparatus for machine vision test, method and apparatus for simulating machine vision test, and machine vision testing system

Publications (2)

Publication Number Publication Date
KR20170044933A KR20170044933A (en) 2017-04-26
KR101748768B1 true KR101748768B1 (en) 2017-06-19

Family

ID=58705018

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150144656A KR101748768B1 (en) 2015-10-16 2015-10-16 Image generating method and apparatus for machine vision test, method and apparatus for simulating machine vision test, and machine vision testing system

Country Status (1)

Country Link
KR (1) KR101748768B1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102018896B1 (en) * 2017-11-20 2019-09-06 인곡산업 주식회사 Vision inspection equipment platform to check the quality of endmill
KR102135632B1 (en) 2018-09-28 2020-07-21 포항공과대학교 산학협력단 Neural processing device and operating method thereof
CN110296992A (en) * 2019-08-15 2019-10-01 西北农林科技大学 It is a kind of that rear wild cabbage inside and outside quality detection evaluating apparatus is adopted based on machine vision
KR20220162728A (en) * 2020-10-14 2022-12-08 엘지전자 주식회사 Artificial intelligence device and method for generating learning data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004294358A (en) * 2003-03-28 2004-10-21 Hitachi High-Technologies Corp Method and apparatus for inspecting defect
JP2005017159A (en) 2003-06-27 2005-01-20 Hitachi High-Technologies Corp Inspection recipe setting method in flaw inspection device and flaw inspecting method
JP2009192440A (en) 2008-02-16 2009-08-27 Mega Trade:Kk Storage structure of test program file of automatic inspection device
KR101182768B1 (en) * 2012-02-21 2012-09-13 주식회사 엠비젼 Examining apparatus and method for machine vision system


Also Published As

Publication number Publication date
KR20170044933A (en) 2017-04-26


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant