WO2013136592A1 - Area designation method and area designation device - Google Patents
- Publication number: WO2013136592A1 (application PCT/JP2012/079839)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- area
- image
- small
- designation
- small area
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
Definitions
- The present invention relates to a technique for assisting a user in the operation of designating a partial area of an image.
- Segmentation is digital image processing, performed by a computer, that separates an image into a part to be extracted (referred to as the foreground) and the remainder (referred to as the background).
- In area division, in order to improve separation accuracy and to divide the image according to the user's intention, a method may be adopted in which the user specifies, as an initial value, part of the area or pixels to be treated as foreground or background.
- As a user interface for specifying an area or pixels on an image, methods such as specifying a rectangular area by dragging the mouse, selecting a pixel by clicking the mouse, or outlining a group of pixels or an area by drawing a free curve or stroke with the mouse, as in drawing software, are used. Such methods make it possible to designate any pixel group on the image as foreground or background.
- While the conventional user interface is suitable for roughly designating an area or pixel group of arbitrary shape, erroneous designation, in which unintended pixels are selected, is likely to occur. It is therefore difficult to designate a narrow area or an area of complicated shape accurately.
- The present invention has been made in view of the above circumstances, and an object thereof is to provide a technique that enables the operation of designating a region in an image to be performed simply and as intended.
- the present invention adopts a user interface in which candidate small areas are superimposed on a target image and the user is allowed to select a desired small area from among them.
- the present invention is an area specifying method which allows a user to specify a part of the target image as a foreground or a background when performing region division processing for separating the target image into the foreground and the background.
- The method includes: a small area setting step in which a computer sets, in the target image, one or more small areas each larger than a pixel; a display step in which the computer displays, on a display device, a designation image in which the boundaries of the small areas are drawn on the target image; and a designation step in which the computer allows the user to select, using an input device, the area to be the foreground or background from among the one or more small areas on the designation image.
- The small area setting step may include a division step of forming a plurality of small areas by dividing the target image into a predetermined pattern (this division method is referred to as "pattern division"). Since a predetermined pattern is used, the processing is simple and the small areas can be set at high speed. Any pattern may be used for the division; for example, when a grid pattern is used, the small areas are regularly arranged, which makes them easy to select.
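As a minimal sketch (not taken from the patent; the function name and `cell` parameter are illustrative), pattern division with a grid can be expressed as a label map in which each pixel holds the index of the small area it belongs to:

```python
import numpy as np

def pattern_divide(height, width, cell):
    """Pattern division sketch: split a height x width image into a grid
    of square small areas of side `cell`. Returns an integer label map in
    which each pixel holds the index of its small area (row-major order)."""
    rows = np.arange(height) // cell     # grid row of each pixel
    cols = np.arange(width) // cell      # grid column of each pixel
    n_cols = -(-width // cell)           # ceil(width / cell)
    return rows[:, None] * n_cols + cols[None, :]
```

With such a label map, selecting a small area from a pointer position `(y, x)` reduces to looking up `labels[y, x]`.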
- Preferably, the small area setting step further includes an extraction step of extracting some of the plurality of small areas formed in the division step, and in the display step only the small areas extracted in the extraction step are drawn on the designation image.
- In the extraction step, for example, it is preferable to preferentially extract small areas whose color or luminance is uniform, or small areas that do not include an edge, because small areas lying at positions straddling the foreground and the background can then most likely be removed. It is also preferable to select the small areas to be extracted so that the variation in color or luminance features among them, or the variation in their positions in the target image, is maximized. By setting the candidate group of small areas in this manner, small areas can be set without omission in both the foreground portion and the background portion intended by the user.
- Alternatively, the small area setting step includes a division step of forming a plurality of small areas by grouping pixels based on at least one feature among color, luminance, and edges. In this case, small areas are formed whose shapes follow the shape, pattern, shading, and so on of the objects in the target image.
- Since each small area is formed of, for example, pixels with similar color or luminance characteristics, or of pixels delimited by an edge, it is unlikely that a small area includes both foreground and background pixels. Therefore, if the small areas formed in this way are used as candidates, even a narrow area or an area of complicated shape can be selected simply and as intended.
- Preferably, the small area setting step further includes an extraction step of extracting some of the plurality of small areas formed in the division step, and in the display step only the small areas extracted in the extraction step are drawn on the designation image.
- the extraction step a small area not including an edge, or a small area having a large size or width, or a small area having a high contrast at a boundary portion may be preferentially extracted.
- Also, in the extraction step, it is preferable to select the small areas to be extracted so that the variation in color or luminance features among them, or the variation in their positions in the target image, is as large as possible. By setting the candidate group of small areas in this manner, small areas can be set without omission at various positions in the target image and in both the foreground and background portions.
- Preferably, the small area selected by the user in the designation step as the area to be the foreground or background is highlighted. This makes it easy to distinguish the selected small areas from the others, preventing erroneous selection of small areas and improving usability.
- Preferably, the size of the small areas relative to the target image can be changed by the user. By adjusting the size of the small areas appropriately to the object, area designation is facilitated.
- Preferably, the method further includes an image updating step in which the computer updates the designation image displayed on the screen of the display device in accordance with an instruction from the user to enlarge, reduce, translate, or rotate the image.
- In this case, the small areas are preferably enlarged, reduced, translated, or rotated together with the target image. By enlarging the display, for example, the user can confirm in detail which pixels in the image a small area, or the outline of a small area, overlaps, which facilitates accurate selection even of a narrow area or a complicated shape.
- When the small areas are set by pattern division, it is also preferable that in the image updating step only the target image is enlarged, reduced, translated, or rotated, without changing the position and size of the small areas on the screen. For example, if a small area straddles the foreground and the background on the initially displayed screen, the display can be changed so that the small area lies entirely within the foreground or the background by enlarging, translating, or rotating the target image. This makes it easy to designate only the foreground or only the background accurately.
- Preferably, the input device has a move key and a select key, and the designation step includes a step of placing one of the small areas on the designation image in a selected state and changing the small area in the selected state in order each time an input of the move key is received from the user, and a step of designating the small area currently in the selected state as the area to be the foreground or background when an input of the select key is received from the user. With such a user interface, the intended small area can be selected reliably by simple operation of the move key and the select key. In this case, the small area currently in the selected state may be highlighted, which makes it easy to distinguish it from the other small areas, preventing erroneous selection and improving usability.
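The move-key/select-key interaction described above can be modeled as a small state update. This is an illustrative sketch only, with hypothetical key names (`"MOVE"`, `"SELECT"`); the toggle behavior mirrors the cancellation of an already-designated area described later for step S55:

```python
def handle_key(selected_idx, n_areas, key, chosen):
    """Sketch of the move-key / select-key interface: MOVE advances the
    small area in the selected state cyclically; SELECT toggles the
    current small area in the set of designated areas (so re-selecting
    an already-designated area cancels the designation)."""
    if key == "MOVE":
        return (selected_idx + 1) % n_areas, chosen
    if key == "SELECT":
        return selected_idx, chosen ^ {selected_idx}  # symmetric difference = toggle
    return selected_idx, chosen
```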
- Alternatively, the input device is a touch panel provided on the screen of the display device, and in the designation step the user selects the area to be the foreground or background by touching a small area on the designation image displayed on the screen. Such a user interface allows the intended small area to be selected even more intuitively.
- The present invention can also be understood as an area designation method including at least one of the above processes, or as an area division method in which area division of the target image is performed based on the areas designated by this area designation method. Furthermore, the present invention can be regarded as a program for causing a computer to execute the steps of these methods, or as a storage medium storing the program, and also as an area designation device or an area division device having at least one of the means for performing the above processing.
- FIG. 4 is a flowchart showing the flow of the process of setting an inspection area using the setting tool 103;
- FIG. 5 is a flowchart showing details of the process of step S43 of FIG. 4;
- FIG. 6(a) is a view showing an example in which the captured image is displayed on the setting screen;
- FIG. 6(b) is a view showing an example in which the extracted inspection area is displayed on the setting screen;
- FIG. 7 is a view for explaining the designation image by pattern division according to the first embodiment;
- FIG. 8 is a view for explaining the designation image by over-division according to the second embodiment;
- FIG. 9 is a view for explaining an example of the method of extracting small areas.
- The present invention relates to a method for allowing the user to designate, as initial values, an area to be treated as foreground or an area to be treated as background in a target image when performing the processing called segmentation, which separates the target image into foreground and background.
- The area designation method and the area division method according to the present invention can be applied to various fields, for example the process of extracting the area of an inspection object from an original image in image inspection, trimming processing for background composition in image editing, and the process of extracting only an organ or region to be diagnosed from a medical image.
- an example in which the area designation method according to the present invention is implemented in an inspection area setting function (setting tool) in an image inspection apparatus will be described.
- FIG. 1 schematically shows the configuration of the image inspection apparatus.
- the image inspection apparatus 1 is a system that performs an appearance inspection of an inspection object 2 conveyed on a conveyance path.
- the image inspection apparatus 1 includes hardware such as an apparatus main body 10, an image sensor 11, a display device 12, a storage device 13, and an input device 14.
- the image sensor 11 is a device for capturing a color or monochrome still image or moving image into the apparatus main body 10, and for example, a digital camera can be suitably used. However, when a special image (X-ray image, thermo image, etc.) other than the visible light image is used for inspection, a sensor in accordance with the image may be used.
- the display device 12 is a device for displaying an image captured by the image sensor 11, an inspection result, and a GUI screen related to inspection processing and setting processing. For example, a liquid crystal display or the like can be used.
- The storage device 13 is a device for storing various setting information (inspection area definition information, inspection logic, etc.) that the image inspection apparatus 1 refers to in inspection processing, as well as inspection results and the like; various storage devices can be used.
- the input device 14 is a device operated by the user to input an instruction to the apparatus body 10, and for example, a mouse, a keyboard, a touch panel, a dedicated console, etc. can be used.
- The apparatus main body 10 can be configured as a computer provided, as hardware, with a CPU (central processing unit), a main storage device (RAM), and an auxiliary storage device (ROM, HDD, SSD, etc.), and includes, as its functions, an inspection processing unit 101, an inspection area extraction unit 102, and a setting tool 103.
- The inspection processing unit 101 and the inspection area extraction unit 102 are functions related to inspection processing, while the setting tool 103 is a function that supports the user's work of setting the information necessary for inspection processing. These functions are realized by loading a computer program stored in the auxiliary storage device or the storage device 13 into the main storage device and executing it on the CPU. Note that FIG. 1 shows only one example of the configuration.
- the apparatus body 10 may be configured by a computer such as a personal computer or a slate type terminal, or may be configured by a dedicated chip or an on-board computer.
- FIG. 2 is a flowchart showing the flow of the inspection process
- FIG. 3 is a diagram for explaining the process of extracting the inspection area in the inspection process.
- the flow of the inspection process will be described by taking inspection of the panel surface of the casing part of the mobile phone (detection of flaws and color unevenness) as an example.
- In step S20, the inspection object 2 is photographed by the image sensor 11, and the image data is taken into the apparatus main body 10.
- the image (original image) captured here is displayed on the display device 12 as necessary.
- the upper part of FIG. 3 shows an example of the original image.
- the case part 2 to be inspected is shown at the center of the original image, and part of the case parts next to the conveyance path is shown on the left and right.
- the inspection area extraction unit 102 reads necessary setting information from the storage device 13.
- the setting information includes at least inspection area definition information and inspection logic.
- the examination area definition information is information defining the position and shape of the examination area to be extracted from the original image.
- the format of the inspection area definition information is arbitrary, and for example, a bit mask in which labels are changed between the inside and the outside of the inspection area, vector data in which the contour of the inspection area is expressed by Bezier curve or spline curve, etc. can be used.
- the inspection logic is information that defines the content of inspection processing, and corresponds to, for example, the type of feature amount used for inspection, a determination method, a parameter or threshold value used for feature amount extraction or determination processing, and the like.
- the inspection area extraction unit 102 extracts a portion to be an inspection area from the original image according to the inspection area definition information.
- the middle part of FIG. 3 shows a state in which the inspection area (shown by cross hatching) 30 defined by the inspection area definition information is superimposed on the original image. It can be seen that the inspection area 30 just overlaps the panel surface of the housing part 2.
- the lower part of FIG. 3 shows a state in which an image (inspection area image 31) of the portion of the inspection area 30 is extracted from the original image.
- In the inspection area image 31, the conveyance path and the adjacent parts shown around the casing part 2 have been deleted. The hinge portion 20 and the button portion 21, which are excluded from the target sites of the surface inspection, have also been deleted.
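The extraction of the inspection area image using a bit-mask style definition, as described above, can be sketched as follows (an illustrative sketch, not the patent's implementation; function and parameter names are assumptions):

```python
import numpy as np

def extract_inspection_area(original, mask, fill=0):
    """Cut the inspection area image out of the original image using a
    bit-mask style area definition (nonzero inside the inspection area,
    zero outside); pixels outside the area are replaced by `fill`."""
    inside = mask.astype(bool)
    out = np.full_like(original, fill)
    out[inside] = original[inside]
    return out
```

A vector-data definition (Bezier or spline contours) would first be rasterized into such a mask before extraction.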
- the inspection area image 31 obtained in this manner is delivered to the inspection processing unit 101.
- In step S23, the inspection processing unit 101 extracts the necessary feature quantities from the inspection area image 31 according to the inspection logic.
- In this example, the color of each pixel of the inspection area image 31 and its average value are extracted as feature quantities for inspecting surface flaws and color unevenness.
- In step S24, the inspection processing unit 101 determines the presence or absence of flaws and color unevenness according to the inspection logic. For example, when a pixel group is detected whose color difference with respect to the average value obtained in step S23 exceeds a threshold, that pixel group can be determined to be a flaw or color unevenness.
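The threshold determination described for step S24 can be illustrated as a minimal sketch, assuming RGB pixels and a Euclidean color distance (the names and the distance metric are assumptions for exposition, not details given in the patent):

```python
import numpy as np

def detect_color_defects(area_pixels, threshold):
    """Flag pixels whose color differs from the area's average color by
    more than `threshold` (Euclidean distance in color space), in the
    spirit of the determination of step S24."""
    mean_color = area_pixels.reshape(-1, area_pixels.shape[-1]).mean(axis=0)
    distance = np.linalg.norm(area_pixels - mean_color, axis=-1)
    return distance > threshold
```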
- In step S25, the inspection processing unit 101 displays the inspection result on the display device 12 and records it in the storage device 13. The inspection process for one inspection object 2 is thus completed.
- the processing of steps S20 to S25 in FIG. 2 is repeated.
- In the image inspection apparatus 1, a setting tool 103 is provided for creating the inspection area definition information used to cut out an accurate inspection area image.
- FIG. 4 is a flow chart showing a flow of processing of setting an inspection area using the setting tool 103
- FIG. 5 is a flow chart showing details of the processing of step S43 of FIG.
- the setting screen of FIG. 6A is displayed on the display device 12.
- the setting screen is provided with an image window 50, an image capture button 51, a division display button 52, a foreground / background toggle button 53, an area size adjustment slider 54, an area division button 55, and a determination button 56.
- Selection of a button, movement of a slider, selection of a small area, and the like can be performed by a predetermined operation (for example, clicking of a mouse, depression of a predetermined key, or the like) using the input device 14.
- this setting screen is merely an example, and any UI may be used as long as it is possible to perform input operations and image confirmation described below.
- the setting tool 103 captures a sample of the inspection object by the image sensor 11 (step S40). It is preferable to use a non-defective inspection object as a sample and to perform imaging in the same state (the relative position of the image sensor 11 and the sample, illumination, etc.) as in the case of actual inspection processing.
- the obtained sample image data is taken into the apparatus main body 10.
- Alternatively, the setting tool 103 may read the sample image data from the auxiliary storage device or the storage device 13.
- the sample image acquired in step S40 is displayed in the image window 50 of the setting screen as shown in FIG. 6A (step S41).
- When the object has a complicated shape, or when the difference in color or luminance between the foreground (the part to be extracted as the inspection area) and the background (the remainder) is not very large, it is difficult for a computer to interpret the image automatically and decide where the inspection area should be set. Therefore, in the present embodiment, the user teaches the computer, as initial values, the area to be the foreground and the area to be the background in the sample image. At this time, in order to make area designation simple and as intended, candidate areas that may be designated are presented (recommended) to the user, who selects the desired areas from among them.
- When the split display button 52 is pressed by the user, the setting tool 103 generates an image for area designation on which a grid pattern is superimposed (hereinafter simply referred to as the designation image), and displays it in the image window 50 (step S42).
- FIG. 7A shows a display example of the designation image. An equally spaced grid pattern is drawn over the original sample image, so that a plurality of rectangular small areas are set on the sample image.
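Rendering such a designation image amounts to overlaying grid lines on the sample image. The following is a minimal grayscale sketch (illustrative names; a real tool would draw on the color display buffer):

```python
import numpy as np

def draw_grid(image, cell, line_value=255):
    """Render a designation image in the style of FIG. 7(a): overlay
    equally spaced grid lines every `cell` pixels on a copy of a
    grayscale image, leaving the cell interiors untouched."""
    out = image.copy()
    out[::cell, :] = line_value   # horizontal grid lines
    out[:, ::cell] = line_value   # vertical grid lines
    return out
```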
- the split display button 52 When the split display button 52 is in a selected state and a designation image is displayed (this state is called a split display mode), the foreground / background toggle button 53 and the area size adjustment slider 54 are enabled.
- the user can use the input device 14 to specify an area to be a foreground and an area to be a background on the designation image (step S43).
- FIG. 5 shows input event processing in the split display mode.
- the setting tool 103 is in a standby state until an input event from the user occurs (step S50). If any input event occurs, the process proceeds to step S51.
- If the input event is a change of the foreground/background toggle button 53 (step S51; Y), the setting tool 103 switches between the foreground designation mode and the background designation mode according to the state of the toggle button 53 (step S52).
- If the input event is the selection of a small area (step S53; Y), the process proceeds to step S54.
- the small area can be selected, for example, by moving the mouse cursor over any small area of the designation image and clicking the mouse button.
- the display device 12 is a touch panel display, selection of a small area can be performed by an intuitive operation of touching the small area of the designation image.
- the setting tool 103 checks whether the small area is already specified (step S54). If it is a small area which has already been designated, the designation is canceled (step S55).
- Otherwise, the small area is designated as the foreground if the current mode is the foreground designation mode (step S56; Y, S57), or as the background if the current mode is the background designation mode (step S56; N, S58).
- It is preferable to change (emphasize) the color of the boundary and/or the interior of a small area designated as foreground or background, or to draw a mark on it, so that it can be distinguished from the other, undesignated small areas. It is further preferable to use different colors, highlighting methods, or marks so that foreground areas and background areas can be distinguished from each other.
- FIG. 7B shows an example in which two foreground regions (small regions indicated by cross hatching) and three background regions (small regions indicated by left hatching) are designated.
- the area size adjustment slider 54 is a UI for increasing or decreasing the size of a small area, that is, the interval of grids superimposed on the designation image.
- the designation image is updated in accordance with the area size changed by the slider 54.
- FIG. 7C shows an example in which the area size is reduced. In FIG. 7A, 108 small areas are formed in 9 rows and 12 columns, whereas in FIG. 7C, 192 small areas are formed in 12 rows and 16 columns. For example, when the small areas are too large relative to the object in the sample image and a small area straddles the foreground and the background, the area size can be reduced to enable finer area designation.
- If the input event is the pressing of the area division button 55 (step S60; Y), the split display mode ends.
- the split display mode may be ended even when the split display button 52 is pressed again or when the image capture button 51 is pressed. If the split display mode is to be continued, the process returns to step S50.
- In step S44, the setting tool 103 applies area division (segmentation) processing to the sample image, using the foreground and background designated in step S43 as initial values.
- The foreground portion obtained as a result of the area division processing is extracted as the inspection area. Many algorithms have been proposed for area division processing, and the setting tool 103 can use any of them, so a detailed description is omitted here.
- the inspection area extracted in step S44 is displayed in the image window 50 of the setting screen. The user can check whether the desired area is selected as the inspection area by looking at the inspection area displayed on the setting screen. At this time, it is preferable to overlay the inspection area (hatched portion) on the sample image as shown in FIG. 6B because comparison between the inspection object and the inspection area is facilitated.
- the setting tool 103 generates inspection area definition information for the extracted inspection area and stores the inspection area definition information in the storage device 13 (step S45). If the inspection area extracted in step S44 is not appropriate, the image acquisition (step S40) or the specification of the foreground / background (step S43) may be repeated.
- The size of the small areas can be adjusted appropriately in accordance with the size and shape of the foreground portion (or background portion) in the target image, which makes area designation easy.
- In the present embodiment, the division into small areas is performed using a grid pattern, but the present invention is not limited to this; a mesh pattern composed of elements such as triangles or hexagons, or of elements of arbitrary shape, may be used. Further, the shape and size of the small areas may be uniform or nonuniform, and their arrangement may be regular or irregular.
- Whereas in the first embodiment the designation image is generated by pattern division, in the second embodiment the designation image is generated by forming a plurality of small areas by grouping pixels based on features of the image; the two embodiments differ in this point. That is, only the content of the processing of step S42 in the flow of FIG. 4 is replaced, and the other configurations and processes are the same as in the first embodiment.
- The division method according to the present embodiment divides the image into areas smaller than those produced by the area division (separation into foreground and background) performed in the subsequent stage, and is therefore hereinafter referred to as "over-division".
- For over-division, methods such as superpixel segmentation, clustering, or labeling can be used.
- Since the purpose of the division into small areas is to make it easy to designate the foreground and background to be given as initial values for the area division processing in the subsequent stage, it suffices, when performing over-division, to decide whether to merge pixels based on at least one of color, luminance, and edge features. In the present embodiment described below, neighboring pixels with similar color or luminance characteristics are grouped together to form small areas.
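A minimal over-division sketch in this spirit groups 4-connected neighboring pixels whose luminance lies within a tolerance of a seed pixel. This is illustrative only (a production implementation would more likely use a superpixel method such as SLIC); the function name and `tol` parameter are assumptions:

```python
import numpy as np

def over_divide(gray, tol):
    """Over-division sketch: flood-fill 4-connected pixels whose
    luminance differs from the region's seed pixel by at most `tol`,
    assigning one label per grown region."""
    h, w = gray.shape
    labels = -np.ones((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] >= 0:
                continue
            seed = int(gray[sy, sx])
            labels[sy, sx] = next_label
            stack = [(sy, sx)]
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] < 0
                            and abs(int(gray[ny, nx]) - seed) <= tol):
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels
```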
- FIG. 8A shows an example of a designation image formed by over-division.
- In this case, the size and shape of the small areas are not uniform, and small areas are formed whose shapes follow the shape, pattern, shading, and so on of the objects in the target image. If the small areas formed by over-division are too small to select, it is preferable to change the conditions with the area size adjustment slider 54 as shown in FIG. 8B and recalculate the over-division. When each small area becomes larger, as in FIG. 8B, area designation with the mouse cursor or the touch panel becomes easy.
- FIG. 8C shows an example in which two foreground areas (small areas indicated by cross hatching) and two background areas (small areas indicated by left hatching) are designated in the designation image of FIG. 8B.
- The configuration of the second embodiment described above provides the following effects in addition to the same effects as the first embodiment. That is, since the small areas formed by over-division have shapes reflecting the shape, pattern, shading, and so on of the objects, even a narrow area or an area of complicated shape can be selected easily.
- Moreover, since the small areas formed by over-division consist of pixel groups with similar color or luminance characteristics, or pixel groups delimited by edges, the possibility that one small area contains both foreground and background pixels is low. There is therefore also the advantage that erroneous designation, such as selection of unintended pixels, is unlikely to occur.
- In the case of pattern division, small areas whose color or luminance is uniform, or which do not include an edge (high-contrast portion), should be extracted preferentially. This is because pattern division forms small areas regardless of the features of the image, so some small areas may lie at positions straddling the foreground and the background. Since designating such a small area as foreground or background is not appropriate, removing it from the options in advance is more user-friendly and eliminates the possibility of its being designated by mistake.
- a very narrow small area may also be formed.
- such a narrow area is undesirable because it is not only difficult to select but also degrades the visibility of the designation image. In the case of over-division, therefore, a method that preferentially extracts small areas of large size (area) or width is preferable. In addition, if there is almost no difference in color or luminance between the foreground and the background, a small area straddling the foreground and the background may be formed even with over-division.
- a method is also preferable in which the contrast inside each small area and the contrast along its boundary (outline) are evaluated, and small areas containing no edge, or small areas with high boundary contrast, are extracted preferentially. This makes it possible to eliminate small areas that contain both foreground and background pixels.
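This filtering rule can be sketched as follows. The function name, thresholds, and the one-pixel-ring estimate of boundary contrast are illustrative assumptions, not taken from the patent; the keep rule follows the text (no edge inside, or high contrast at the boundary).

```python
import numpy as np

def extract_by_contrast(gray, labels, interior_thresh=0.05, boundary_thresh=0.2):
    """Keep small areas that contain no edge (low interior contrast) or
    whose outline lies on a high-contrast boundary."""
    kept = []
    for i in np.unique(labels):
        mask = labels == i
        interior = gray[mask]
        # One-pixel dilation of the mask, minus the mask itself, gives the
        # ring of neighbouring pixels just outside the small area.
        p = np.pad(mask, 1)
        ring = (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]) & ~mask
        contrast = abs(gray[ring].mean() - interior.mean()) if ring.any() else 0.0
        if interior.std() < interior_thresh or contrast > boundary_thresh:
            kept.append(int(i))
    return kept

# Three vertical strips; the middle strip (label 1) straddles the
# luminance edge, so it has high interior contrast and is rejected.
gray = np.array([[0., 0., 1., 1.]] * 4)
labels = np.array([[0, 1, 1, 2]] * 4)
kept = extract_by_contrast(gray, labels)  # [0, 2]
```

The rejected strip is exactly the kind of small area that would contain both foreground and background pixels.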
- FIG. 9 shows an example of a method of extracting a small area.
- FIG. 9 is a graph in which all the small areas formed by pattern division are plotted, with the horizontal axis representing the average luminance inside each small area and the vertical axis representing the luminance variance inside it. That is, the horizontal axis expresses the diversity of luminance features among the small areas, and the vertical axis expresses the luminance uniformity within each small area. Small areas plotted low on the vertical axis may be extracted preferentially, from positions spread across the horizontal axis.
- the horizontal axis is divided into four luminance ranges A to D based on the distribution of the small areas along the horizontal axis, and the small area with the smallest variance is extracted from each of the luminance ranges A to D.
- the number extracted from each luminance range may be determined according to the number of small areas belonging to that range, or according to the variance values.
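The FIG. 9 extraction scheme can be sketched directly: compute each small area's mean and variance of luminance, split the mean-luminance axis into ranges, and take the smallest-variance area per range. This is an illustrative sketch; the function name and the equal-width split of the axis are assumptions (the patent only says the ranges are based on the distribution along the horizontal axis).

```python
import numpy as np

def extract_one_per_range(gray, labels, n_ranges=4):
    """For each luminance range along the horizontal axis of FIG. 9,
    extract the small area with the smallest luminance variance."""
    ids = np.unique(labels)
    means = np.array([gray[labels == i].mean() for i in ids])
    variances = np.array([gray[labels == i].var() for i in ids])
    # Split the mean-luminance axis into n_ranges ranges (A to D for 4).
    edges = np.linspace(means.min(), means.max(), n_ranges + 1)
    edges[-1] += 1e-9                      # include the maximum in the last range
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_range = (means >= lo) & (means < hi)
        if in_range.any():
            # The most uniform (smallest-variance) small area in this range.
            chosen.append(int(ids[in_range][np.argmin(variances[in_range])]))
    return chosen

# Four vertical strips: three uniform ones and one mixed strip (label 3)
# whose variance is high because it straddles dark and bright pixels.
gray = np.zeros((4, 8))
gray[:, 2:4] = 0.3
gray[:, 4:6] = 0.6
gray[:, 7] = 1.0                           # cols 6-7 mix 0.0 and 1.0
labels = np.array([[0, 0, 1, 1, 2, 2, 3, 3]] * 4)
chosen = extract_one_per_range(gray, labels)
```

The mixed strip shares a luminance range with a uniform one and loses by variance, so only uniform, well-spread small areas survive, matching the selection shown in FIG. 9.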
- FIG. 10A shows an example in which only the small areas (black circles) extracted in FIG. 9 are drawn on the designation image. It can be seen that, because the small areas are placed appropriately over both the foreground and background portions, and because they are small and separated from one another, area designation can be made easier than in the case of FIG. 7A.
- FIG. 10B shows an example of extracting small areas in the case of over-division. In this case as well, area designation is simplified.
- FIG. 11 shows a fourth embodiment of the present invention.
- the small area is selected using a mouse or a touch panel, but in the fourth embodiment, the small area is selected by an input device such as a keyboard or a keypad.
- the other configuration is the same as that of the other embodiments.
- the input device of the present embodiment is provided with a move key and a select key.
- any one of the small areas in the designation image is in the selected state (the state in which it has the focus).
- the small area in the third column from the left and the third row from the top is in the selected state, and a focus frame is drawn around it.
- when the user presses the move key, the focus frame moves one small area at a time.
- FIG. 11B shows a state in which the focus frame has moved to the right.
- the focus frame may be moved in any direction using the up, down, left, and right arrow keys, or it may be moved sequentially in one direction using a single move key such as the space key.
- the small area currently in the selected state (the small area with the focus frame) is designated as the foreground or the background (see FIG. 11C).
- whether the area becomes foreground or background may be determined according to the mode set by the foreground/background toggle button 53, or, for example, if a foreground select key and a background select key are provided separately, according to the type of select key that was pressed.
- the intended small area can be selected reliably by the simple operation of the move key and the select key. Further, in the present embodiment, since the small area in the selected state is highlighted with the focus frame, it is easy to distinguish the selected small area from the others, so erroneous selection of a small area can be prevented and usability can be improved.
- the highlighting method is not limited to the focus frame; any method may be used, such as changing the color of the small area's frame or interior.
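The move-key / select-key interaction of the fourth embodiment can be sketched as a small state machine. The class and method names are illustrative; actual key handling and drawing of the focus frame would be done by the surrounding GUI toolkit.

```python
class AreaSelector:
    """Move-key / select-key designation of small areas (a sketch of the
    fourth embodiment's interaction, not an implementation from the patent)."""

    def __init__(self, area_ids):
        self.area_ids = list(area_ids)
        self.focus = 0                      # index of the area with the focus frame
        self.foreground = set()
        self.background = set()

    def press_move(self):
        # Each press of the move key advances the focus frame to the next
        # small area, wrapping around at the end.
        self.focus = (self.focus + 1) % len(self.area_ids)

    def press_select(self, mode="foreground"):
        # The select key designates the focused area as foreground or
        # background, according to the current foreground/background mode.
        target = self.foreground if mode == "foreground" else self.background
        target.add(self.area_ids[self.focus])

sel = AreaSelector([10, 11, 12])
sel.press_move()                 # focus frame moves from area 10 to area 11
sel.press_select("foreground")   # area 11 is designated as foreground
```

Because exactly one area holds the focus at any time, the selection is unambiguous, which is the property the text credits for preventing erroneous selection.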
- FIG. 12 shows a fifth embodiment of the present invention.
- the user can enlarge, reduce, translate (scroll), or rotate the target image displayed in the image window 50.
- These operation instructions may be performed by, for example, a mouse drag or a wheel operation, or may be performed by a drag or pinch operation on the touch panel.
- FIG. 12B shows a state in which the image of FIG. 12A has been enlarged and translated.
- only the display magnification and display position of the target image change; the positions and sizes of the small areas superimposed on it do not change.
- This function can be used to align the small area to the desired area in the target image.
- the upper two of the three small areas are arranged at positions straddling the foreground and the background.
- by adjusting the image in this way, the small areas can be arranged so as not to straddle the foreground and the background. This function therefore has the advantage of making it easy to designate only the foreground or only the background accurately.
- FIG. 13 shows a sixth embodiment of the present invention.
- in the fifth embodiment, only the image is enlarged/reduced, translated, or rotated while the positions and sizes of the small areas remain unchanged, whereas the sixth embodiment differs in that the image and the small areas are enlarged and so on together. The operation instructions for enlargement and the like are the same as in the fifth embodiment.
- FIG. 13B shows a state in which the image of FIG. 13A is enlarged and translated.
- This function can be used, for example, to confirm the matching between the target image and the small area in detail.
- with the image at the standard magnification shown in FIG. 13A, it may be difficult to see how the small areas are formed in narrow areas or portions of the image with complicated shapes.
- with the enlarged image of FIG. 13B, it is possible to confirm in detail which pixels of the image each small area and its outline overlap. Using this function therefore facilitates accurate area selection.
- 1: image inspection device, 2: inspection object (housing part), 10: device body, 11: image sensor, 12: display device, 13: storage device, 14: input device, 30: inspection area, 31: inspection area image, 50: image window, 51: image capture button, 52: split display button, 53: foreground/background toggle button, 54: area size adjustment slider, 55: area division button, 56: confirm button, 101: inspection processing unit, 102: inspection area extraction unit, 103: setting tool
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
(Image Inspection Device)
FIG. 1 schematically shows the configuration of an image inspection device. This image inspection device 1 is a system that performs appearance inspection of an inspection object 2 conveyed on a conveyance path.
The operation of the image inspection device 1 relating to inspection processing will be described with reference to FIGS. 2 and 3. FIG. 2 is a flowchart showing the flow of the inspection processing, and FIG. 3 is a diagram for explaining the process of extracting the inspection area in the inspection processing. Here, for convenience of explanation, the flow of the inspection processing will be described taking as an example the inspection of the panel surface of a mobile phone housing part (detection of scratches and color unevenness).
The functions and operation of the setting tool 103 will be described along the flowcharts of FIGS. 4 and 5. FIG. 4 is a flowchart showing the flow of processing for setting an inspection area using the setting tool 103, and FIG. 5 is a flowchart showing the details of the processing of step S43 in FIG. 4. The example inspection area setting screens of FIGS. 6 and 7 will also be referred to as appropriate.
According to the configuration described above, a plurality of candidate small areas are recommended by the computer, and the user only has to select, from among those candidates, areas that satisfy the intended conditions, so intuitive and simple area designation becomes possible. In addition, because the boundaries of the small areas are shown explicitly and area designation is performed in units of small areas, the user's designation is constrained compared with the conventional method of freely inputting arbitrary areas or pixel groups with a mouse or the like. This constraint prevents erroneous designation such as selecting unintended pixels, which makes it easier to designate areas as intended.
A second embodiment of the present invention will be described with reference to FIG. 8. Whereas the first embodiment generates the designation image by pattern division, the second embodiment differs in that the designation image is generated by forming a plurality of small areas by grouping pixels together based on features of the image. That is, only the content of the processing of step S42 in the flow of FIG. 4 is replaced; the rest of the configuration and processing is the same as in the first embodiment.
Next, a third embodiment of the present invention will be described. Whereas the first and second embodiments display all the small areas on the designation image, the third embodiment differs in that only some of the small areas are displayed. That is, only the content of the processing of step S42 in the flow of FIG. 4 is replaced; the rest of the configuration and processing is the same as in the first embodiment.
FIG. 11 shows a fourth embodiment of the present invention. In each of the embodiments described above, a small area is selected using a mouse or a touch panel, whereas in the fourth embodiment a small area is selected with an input device such as a keyboard or a keypad. The rest of the configuration is the same as in the other embodiments.
FIG. 12 shows a fifth embodiment of the present invention. In this embodiment, the user can enlarge, reduce, translate (scroll), or rotate the target image displayed in the image window 50. These operation instructions may be given, for example, by mouse dragging or wheel operation, or by drag or pinch operations on a touch panel.
FIG. 13 shows a sixth embodiment of the present invention. In the fifth embodiment described above, only the image is enlarged/reduced, translated, or rotated while the positions and sizes of the small areas remain unchanged, whereas the sixth embodiment differs in that the image and the small areas are enlarged and so on together. The operation instructions for enlargement and the like are the same as in the fifth embodiment.
2: inspection object (housing part)
10: device body, 11: image sensor, 12: display device, 13: storage device, 14: input device
30: inspection area, 31: inspection area image
50: image window, 51: image capture button, 52: split display button, 53: foreground/background toggle button, 54: area size adjustment slider, 55: area division button, 56: confirm button
101: inspection processing unit, 102: inspection area extraction unit, 103: setting tool
Claims (18)
- An area designating method for causing a user to designate, when performing area division processing that separates a target image into a foreground and a background, a partial area of the target image as an area to be treated as the foreground or the background, the method comprising: a small area setting step in which a computer sets, in the target image, one or more small areas each larger than a pixel; a display step in which the computer displays, on a display device, a designation image in which the boundaries of the small areas are drawn on the target image; and a designation step in which the computer causes the user to select, using an input device, the area to be treated as the foreground or the background from among the one or more small areas on the designation image.
- The area designating method according to claim 1, wherein the small area setting step includes a division step of forming a plurality of small areas by dividing the target image into a predetermined pattern.
- The area designating method according to claim 2, wherein the small area setting step further includes an extraction step of extracting some of the plurality of small areas formed in the division step, and in the display step only the small areas extracted in the extraction step are drawn on the designation image.
- The area designating method according to claim 3, wherein in the extraction step, small areas whose color or luminance is uniform, or small areas containing no edge, are extracted preferentially.
- The area designating method according to claim 3, wherein in the extraction step, the small areas to be extracted are chosen so that the variation in color or luminance features among the extracted small areas, or the variation in position within the target image among the extracted small areas, becomes as large as possible.
- The area designating method according to claim 1, wherein the small area setting step includes a division step of forming a plurality of small areas by grouping pixels together based on at least one of color, luminance, and edge features.
- The area designating method according to claim 6, wherein the small area setting step further includes an extraction step of extracting some of the plurality of small areas formed in the division step, and in the display step only the small areas extracted in the extraction step are drawn on the designation image.
- The area designating method according to claim 7, wherein in the extraction step, small areas containing no edge, small areas of large size or width, or small areas with high contrast at their boundary portions are extracted preferentially.
- The area designating method according to claim 7, wherein in the extraction step, the small areas to be extracted are chosen so that the variation in color or luminance features among the extracted small areas, or the variation in position within the target image among the extracted small areas, becomes as large as possible.
- The area designating method according to any one of claims 1 to 9, wherein a small area selected by the user in the designation step as the area to be treated as the foreground or the background is highlighted.
- The area designating method according to any one of claims 1 to 10, wherein the size of the small areas relative to the target image can be changed by the user.
- The area designating method according to any one of claims 1 to 11, further comprising an image updating step in which the computer updates the designation image displayed on the screen of the display device in accordance with an enlargement, reduction, translation, or rotation instruction from the user, wherein in the image updating step the small areas are enlarged, reduced, translated, or rotated together with the target image.
- The area designating method according to any one of claims 1 to 5, further comprising an image updating step in which the computer updates the designation image displayed on the screen of the display device in accordance with an enlargement, reduction, translation, or rotation instruction from the user, wherein in the image updating step only the target image is enlarged, reduced, translated, or rotated, without changing the positions and sizes of the small areas on the screen.
- The area designating method according to any one of claims 1 to 11, wherein the input device has a move key and a select key, and the designation step includes: a step of putting one of the small areas on the designation image into a selected state; a step of changing, in order, which small area is in the selected state each time input of the move key is received from the user; and a step of selecting, when input of the select key is received from the user, the small area currently in the selected state as the area to be treated as the foreground or the background.
- The area designating method according to claim 14, wherein the small area currently in the selected state is highlighted.
- The area designating method according to any one of claims 1 to 13, wherein the input device is a touch panel provided on the screen of the display device, and in the designation step the area to be treated as the foreground or the background is selected by the user touching a small area on the designation image displayed on the screen of the display device.
- A program causing a computer to execute each step of the area designating method according to any one of claims 1 to 16.
- An area designating device for causing a user to designate, when performing area division processing that separates a target image into a foreground and a background, a partial area of the target image as an area to be treated as the foreground or the background, the device comprising: small area setting means for setting, in the target image, one or more small areas each larger than a pixel; display means for displaying, on a display device, a designation image in which the boundaries of the small areas are drawn on the target image; and designation means for causing the user to select, using an input device, the area to be treated as the foreground or the background from among the one or more small areas on the designation image.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020147024913A KR101707723B1 (ko) | 2012-03-14 | 2012-11-16 | 영역 지정 방법 및 영역 지정 장치 |
CN201280071295.0A CN104169972A (zh) | 2012-03-14 | 2012-11-16 | 区域指定方法和区域指定装置 |
US14/383,911 US20150043820A1 (en) | 2012-03-14 | 2012-11-16 | Area designating method and area designating device |
EP12871056.3A EP2827300A4 (en) | 2012-03-14 | 2012-11-16 | ZONE DESIGNATION METHOD AND DEVICE |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012057058A JP5867198B2 (ja) | 2012-03-14 | 2012-03-14 | 領域指定方法及び領域指定装置 |
JP2012-057058 | 2012-03-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013136592A1 true WO2013136592A1 (ja) | 2013-09-19 |
Family
ID=49160545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/079839 WO2013136592A1 (ja) | 2012-03-14 | 2012-11-16 | 領域指定方法及び領域指定装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20150043820A1 (ja) |
EP (1) | EP2827300A4 (ja) |
JP (1) | JP5867198B2 (ja) |
KR (1) | KR101707723B1 (ja) |
CN (1) | CN104169972A (ja) |
WO (1) | WO2013136592A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111429469A (zh) * | 2019-04-17 | 2020-07-17 | 杭州海康威视数字技术股份有限公司 | 泊位位置确定方法、装置、电子设备及存储介质 |
CN112154302A (zh) * | 2019-10-31 | 2020-12-29 | 深圳市大疆创新科技有限公司 | 航线规划方法、控制终端及计算机可读存储介质 |
CN113377077A (zh) * | 2021-07-08 | 2021-09-10 | 刘志程 | 一种智能制造数字化工厂系统 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9973677B2 (en) * | 2013-10-14 | 2018-05-15 | Qualcomm Incorporated | Refocusable images |
JP6364958B2 (ja) * | 2014-05-26 | 2018-08-01 | 富士ゼロックス株式会社 | 画像処理装置、画像形成装置及びプログラム |
JP6667195B2 (ja) * | 2014-06-20 | 2020-03-18 | 株式会社リコー | データ生成装置、データ生成方法及びデータ生成プログラム |
KR101644854B1 (ko) * | 2014-11-19 | 2016-08-12 | 주식회사 픽스 | 영역 지정 방법 |
JP6469483B2 (ja) * | 2015-03-09 | 2019-02-13 | 学校法人立命館 | 画像処理装置、画像処理方法、及びコンピュータプログラム |
CN208678394U (zh) * | 2015-06-30 | 2019-04-02 | 株式会社高永科技 | 物品检查装置 |
JP6613876B2 (ja) * | 2015-12-24 | 2019-12-04 | トヨタ自動車株式会社 | 姿勢推定装置、姿勢推定方法、およびプログラム |
CN106325673A (zh) * | 2016-08-18 | 2017-01-11 | 青岛海信医疗设备股份有限公司 | 一种用于医疗显示的光标移动方法、装置和医疗设备 |
US11361152B2 (en) * | 2020-07-20 | 2022-06-14 | Labelbox, Inc. | System and method for automated content labeling |
KR102308381B1 (ko) * | 2020-11-09 | 2021-10-06 | 인그래디언트 주식회사 | 유연한 슈퍼픽셀에 기초한 의료 영상 라벨링 방법 및 이를 위한 장치 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61233868A (ja) * | 1985-04-09 | 1986-10-18 | Fujitsu Ltd | 領域分割方式 |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7013021B2 (en) * | 1999-03-19 | 2006-03-14 | Digimarc Corporation | Watermark detection utilizing regions with higher probability of success |
US6895112B2 (en) * | 2001-02-13 | 2005-05-17 | Microsoft Corporation | Red-eye detection based on red region detection with eye confirmation |
US6829384B2 (en) * | 2001-02-28 | 2004-12-07 | Carnegie Mellon University | Object finder for photographic images |
US20030072477A1 (en) * | 2001-10-12 | 2003-04-17 | Ashwin Kotwaliwale | Karyotype processing methods and devices |
US7343028B2 (en) * | 2003-05-19 | 2008-03-11 | Fujifilm Corporation | Method and apparatus for red-eye detection |
JP4482796B2 (ja) * | 2004-03-26 | 2010-06-16 | ソニー株式会社 | 情報処理装置および方法、記録媒体、並びにプログラム |
US20100278405A1 (en) * | 2005-11-11 | 2010-11-04 | Kakadiaris Ioannis A | Scoring Method for Imaging-Based Detection of Vulnerable Patients |
US8494210B2 (en) * | 2007-03-30 | 2013-07-23 | Optosecurity Inc. | User interface for use in security screening providing image enhancement capabilities and apparatus for implementing same |
JP2008035457A (ja) * | 2006-08-01 | 2008-02-14 | Nikon Corp | 電子カメラおよび画像処理プログラム |
JP2008059081A (ja) * | 2006-08-29 | 2008-03-13 | Sony Corp | 画像処理装置及び画像処理方法、並びにコンピュータ・プログラム |
US8363909B2 (en) * | 2007-03-20 | 2013-01-29 | Ricoh Company, Limited | Image processing apparatus, image processing method, and computer program product |
US8411952B2 (en) * | 2007-04-04 | 2013-04-02 | Siemens Aktiengesellschaft | Method for segmenting an image using constrained graph partitioning of watershed adjacency graphs |
JP5194776B2 (ja) * | 2007-12-21 | 2013-05-08 | 株式会社リコー | 情報表示システム、情報表示方法およびプログラム |
US20090252392A1 (en) * | 2008-04-08 | 2009-10-08 | Goyaike S.A.A.C.I.Y.F | System and method for analyzing medical images |
KR20100037468A (ko) * | 2008-10-01 | 2010-04-09 | 엘지전자 주식회사 | 감시 시스템 및 그 동작 방법 |
DE102008052928A1 (de) * | 2008-10-23 | 2010-05-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung, Verfahren und Computerprogramm zur Erkennung einer Geste in einem Bild, sowie Vorrichtung, Verfahren und Computerprogramm zur Steuerung eines Geräts |
CN101447017B (zh) * | 2008-11-27 | 2010-12-08 | 浙江工业大学 | 一种基于版面分析的选票快速识别统计方法及系统 |
US8532346B2 (en) * | 2009-03-11 | 2013-09-10 | Sony Corporation | Device, method and computer program product |
CN103096786A (zh) * | 2010-05-03 | 2013-05-08 | 国际科学技术医疗系统有限责任公司 | 宫颈瘤变检测和诊断的图像分析 |
WO2012030869A2 (en) * | 2010-08-30 | 2012-03-08 | Apple Inc. | Multi-image face-based image processing |
US9087455B2 (en) * | 2011-08-11 | 2015-07-21 | Yahoo! Inc. | Method and system for providing map interactivity for a visually-impaired user |
WO2013104053A1 (en) * | 2012-01-11 | 2013-07-18 | Smart Technologies Ulc | Method of displaying input during a collaboration session and interactive board employing same |
-
2012
- 2012-03-14 JP JP2012057058A patent/JP5867198B2/ja active Active
- 2012-11-16 CN CN201280071295.0A patent/CN104169972A/zh active Pending
- 2012-11-16 US US14/383,911 patent/US20150043820A1/en not_active Abandoned
- 2012-11-16 KR KR1020147024913A patent/KR101707723B1/ko active IP Right Grant
- 2012-11-16 EP EP12871056.3A patent/EP2827300A4/en not_active Withdrawn
- 2012-11-16 WO PCT/JP2012/079839 patent/WO2013136592A1/ja active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61233868A (ja) * | 1985-04-09 | 1986-10-18 | Fujitsu Ltd | 領域分割方式 |
Non-Patent Citations (2)
Title |
---|
JIFENG NING; LEI ZHANG; DAVID ZHANG; CHENGKE WU: "Interactive image segmentation by maximal similarity based region merging", PATTERN RECOGNITION, vol. 43, 2010, pages 445 - 456 |
See also references of EP2827300A4 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111429469A (zh) * | 2019-04-17 | 2020-07-17 | 杭州海康威视数字技术股份有限公司 | 泊位位置确定方法、装置、电子设备及存储介质 |
CN111429469B (zh) * | 2019-04-17 | 2023-11-03 | 杭州海康威视数字技术股份有限公司 | 泊位位置确定方法、装置、电子设备及存储介质 |
CN112154302A (zh) * | 2019-10-31 | 2020-12-29 | 深圳市大疆创新科技有限公司 | 航线规划方法、控制终端及计算机可读存储介质 |
CN113377077A (zh) * | 2021-07-08 | 2021-09-10 | 刘志程 | 一种智能制造数字化工厂系统 |
Also Published As
Publication number | Publication date |
---|---|
CN104169972A (zh) | 2014-11-26 |
US20150043820A1 (en) | 2015-02-12 |
EP2827300A1 (en) | 2015-01-21 |
KR101707723B1 (ko) | 2017-02-27 |
JP5867198B2 (ja) | 2016-02-24 |
KR20140120370A (ko) | 2014-10-13 |
JP2013191036A (ja) | 2013-09-26 |
EP2827300A4 (en) | 2015-12-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013136592A1 (ja) | 領域指定方法及び領域指定装置 | |
US9891817B2 (en) | Processing an infrared (IR) image based on swipe gestures | |
JP6089886B2 (ja) | 領域分割方法および検査装置 | |
JP5858188B1 (ja) | 画像処理装置、画像処理方法、画像処理システムおよびプログラム | |
JP5880767B2 (ja) | 領域判定装置、領域判定方法およびプログラム | |
KR20150083651A (ko) | 전자 장치 및 그 데이터 표시 방법 | |
US11734805B2 (en) | Utilizing context-aware sensors and multi-dimensional gesture inputs to efficiently generate enhanced digital images | |
JP4560504B2 (ja) | 表示制御装置および表示制御方法およびプログラム | |
JP2014209327A (ja) | 画像処理装置、画像処理方法、及び画像処理プログラム | |
JPWO2012169190A1 (ja) | 文字入力装置及び表示変更方法 | |
JP2015165608A (ja) | 画像処理装置、画像処理方法、画像処理システムおよびプログラム | |
US20150170355A1 (en) | Wafer appearance inspection system and method of sensitivity threshold setting | |
JP6094691B2 (ja) | ソフトウェア実行操作補助装置及びソフトウェア実行操作補助方法 | |
JP7363235B2 (ja) | 情報処理装置及び情報処理プログラム | |
JP2014123260A (ja) | システム監視画面の画像作成装置 | |
WO2015107635A1 (ja) | 画像比較システム | |
US10628005B2 (en) | Image display device, image display method, and information storage medium | |
JP6930099B2 (ja) | 画像処理装置 | |
JP6142598B2 (ja) | 画像処理プログラム及び画像処理装置 | |
CN113228112A (zh) | 信息处理装置、信息处理方法和程序 | |
JP2011196816A (ja) | 分析データ解析装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12871056 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20147024913 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14383911 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012871056 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |