JP2009265998A - Image processing system, image processing method, and image processing program - Google Patents


Info

Publication number
JP2009265998A
Authority
JP
Japan
Prior art keywords
image
boundary
luminance information
axis
object image
Prior art date: 2008-04-25
Legal status: Pending
Application number
JP2008115639A
Other languages
Japanese (ja)
Inventor
Toru Takahashi
徹 高橋
Original Assignee
Lambda Systems Inc
株式会社ラムダシステムズ
Priority date: 2008-04-25
Filing date: 2008-04-25
Publication date: 2009-11-12
Application filed by Lambda Systems Inc (株式会社ラムダシステムズ)
Priority to JP2008115639A
Publication of JP2009265998A
Application status: Pending


Abstract

PROBLEM TO BE SOLVED: To provide an image processing system, an image processing method, and an image processing program capable of improving the efficiency of extraction processing for an object image by detecting the object image area corresponding to the object image when the object image is extracted from an image using an image processing technique.

SOLUTION: This image processing system detects a boundary demarcating the object image area corresponding to an object image in order to extract the object image from an image including it. The system acquires luminance information in a predetermined range on each of a plurality of mutually parallel axes, and compares the acquired luminance information between the axes to detect the boundary demarcating the object image area.

COPYRIGHT: (C)2010,JPO&INPIT

Description

  The present invention relates to an image processing system, an image processing method, and an image processing program for detecting a boundary that defines an object image area corresponding to an object image, in order to extract the object image from an image including the object image.

  In recent years, in statistical processing such as marketing analysis, rather than identifying and analyzing the target objects of a statistical survey by human eyes, an image of the target objects is captured and the objects are identified and analyzed by processing that image.

  For example, Patent Literature 1 describes a traffic situation survey system that uses an image processing technique. This system records traffic objects passing a specific point in a city with a camera, analyzes the recorded images to identify the traffic objects and track their movement, and thereby counts the traffic volume.

  According to this traffic situation survey system, the identification and analysis of the traffic objects under statistical observation are performed by image processing technology rather than by human visual inspection, so the survey time can be shortened. Furthermore, once an image recording the traffic objects has been acquired, the system can analyze it any number of times, which improves survey accuracy.

JP-A-2005-215909

  In such image processing techniques, object image extraction from an image is typically performed by registering feature points of the object in advance, extracting feature points from the image, and matching the two sets of feature points. However, the scale of the object image within the image, that is, the proportion of the image occupied by the object image, is often unknown. In that case the range subjected to the matching process must be switched sequentially, and it is difficult to reduce the processing time.

  The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image processing system, an image processing method, and an image processing program that, when extracting an object image from an image using an image processing technique, detect the object image area corresponding to the object image and thereby improve the efficiency of the extraction processing.

  In order to solve the above problems, an image processing system as an exemplary aspect of the present invention is an image processing system for detecting a boundary that demarcates an object image area corresponding to an object image, in order to extract the object image from an image including the object image. The system comprises luminance information acquisition means for acquiring luminance information in a predetermined range on each of a plurality of mutually parallel axes set on the image, and boundary detection means for detecting the boundary that demarcates the object image region by comparing, for each axis, the luminance information acquired by the luminance information acquisition means.

  The image processing system according to the present invention acquires, by the luminance information acquisition means, luminance information in a predetermined range on a plurality of mutually parallel axes set on the image, and the boundary detection means compares the luminance information between the axes. By this relative comparison of per-axis luminance information, the boundary detection means detects the boundary that defines the object image region within the predetermined range from which the luminance information was acquired. Once the boundary defining the object image area corresponding to the object image is detected, object image extraction by feature detection or the like can be performed within the defined object image area. Compared with extraction over the entire image, such extraction limits the range to be processed and shortens the processing time.

  Here, luminance information is numerical information on luminance, that is, the intensity of light energy expressed in the image; higher luminance information indicates a brighter state. Note that luminance in the present invention includes the concept of brightness, so that "high luminance" includes "high brightness".

  Detecting the boundary that defines the object image area means recognizing the difference between the luminance information inside the object image area and the luminance information at the boundary that defines it. By determining which of the two the luminance information on a given axis belongs to, or by comparing multiple axes to determine whose luminance information an axis is closer to, it can be determined whether a boundary defining the object image area lies on that axis.

  Here, the boundary defining the object image area indicates the outer periphery of the object image area; when a plurality of object image areas are arranged side by side, it also indicates the region between them. How the luminance inside the object image area differs from the luminance at the boundary depends on how the objects were recorded. For example, in an image in which a plurality of objects are placed close together and the gaps between them are recorded as shadows, the luminance information at the boundary portions is lower (darker) than in the object image areas. Conversely, when a plurality of objects are placed apart and the background is recorded with higher (brighter) luminance, the luminance information at the boundary portions is higher (brighter) than in the object image areas. In other words, the difference depends on the luminance of the objects themselves, the luminance of the image outside the objects, and the arrangement of the objects. It is therefore preferable that the criterion used by the boundary detection means be changed appropriately according to the luminance information in the object image area.

  The luminance information acquisition means acquires luminance information on a plurality of parallel axes, but not in the regions between those axes, so the luminance of those regions is not reflected in boundary detection. Two axes are in principle enough for a relative comparison of per-axis luminance information, but a larger number of axes (that is, a narrower spacing between the parallel axes) is desirable: increasing the number of axes shrinks the unsampled regions between them and brings luminance information from more of the image into the comparison, improving the detection accuracy of the boundary.
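
As a minimal illustration (not part of the patent), if each axis is taken to be one pixel column of a grayscale array, the spacing between axes can be modeled with a stride; the function name and NumPy usage are assumptions of the sketch:

```python
import numpy as np

def sample_axes(gray: np.ndarray, stride: int = 1) -> np.ndarray:
    """Take every `stride`-th column as one vertical axis of luminance samples.

    gray: 2-D luminance array (rows x cols). stride=1 samples every column;
    a larger stride leaves wider unsampled gaps between the axes.
    """
    return gray[:, ::stride]
```

With stride=1 the axes are maximally dense and no part of the image goes unsampled, matching the passage's preference for many closely spaced axes.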

  In addition, it is desirable that the boundary detection unit detects a boundary that demarcates the object image region based on a cumulative value of luminance information in a predetermined range acquired by the luminance information acquisition unit.

  The present invention determines whether a boundary defining the object image area lies on an axis by comparing the luminance information on that axis with the other axes; with too little luminance information per axis, the presence of a boundary may remain undetected. For example, if the luminance information of only one arbitrary point on the axis is used as the luminance information of that axis, then even when the axis crosses the boundary at some other point on the same axis, the crossing cannot be detected, because the sampled luminance deviates from the luminance information of the boundary region.

  Accordingly, the luminance information of an axis should reflect the luminance at as many points as possible, and it is desirable to acquire luminance at the minimum unit interval of the image along the axis (for example, at every adjacent pixel). It is further desirable to detect the boundary of the object image area based on the cumulative value of the luminance information at these points. Using the cumulative value on each axis reduces the influence of noise at any single point, makes the luminance characteristics of each axis easier to grasp, sharpens the relative comparison with other axes, and thus further improves boundary detection accuracy.
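
A minimal sketch of this per-axis accumulation, under the same column-as-axis assumption as above:

```python
import numpy as np

def axis_cumulative_luminance(gray: np.ndarray, stride: int = 1) -> np.ndarray:
    """Cumulative luminance per axis: the sum of every pixel on each sampled column.

    Summing all points on an axis damps single-pixel noise, so each axis
    can be compared with the others through one representative value.
    """
    return gray[:, ::stride].sum(axis=0)
```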

  Further, it is desirable that the boundary detection means detect the boundary that demarcates the object image region based only on the luminance information that falls in one of two ranges: at or above a predetermined value, or below it. This predetermined value is a luminance value set in advance. Restricting processing to the luminance information on one side of the predetermined value reduces the arithmetic processing and shortens the processing time.

  For example, the average of the per-axis luminance values acquired by the luminance information acquisition means is calculated, and, with the average as the branch point, the boundary is determined using only the range of luminance values on the side where boundaries are found. This effectively excludes the per-axis luminance values that cannot be boundaries, reduces the amount of data used for boundary detection, and speeds up the boundary detection process.
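
For example, with dark (shadowed) boundaries only below-average axes can contain a boundary; a sketch of this branch-point filtering (the helper name is an assumption):

```python
import numpy as np

def candidate_axes(cum: np.ndarray) -> np.ndarray:
    """Indices of axes whose cumulative luminance is below the average.

    With shadowed boundaries, a boundary must lie on one of these axes, so
    the above-average axes can be excluded from further processing.
    """
    return np.flatnonzero(cum < cum.mean())
```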

  In addition, it is desirable that the boundary detection unit detects a boundary that defines the object image region based on a relative change amount of luminance information in a predetermined range for each axis.

  As described above, the present invention acquires luminance information in a predetermined range on a plurality of axes and determines whether that luminance information belongs to the inside of the object image region or to the boundary defining it; the criterion for this determination varies with the arrangement of the objects. In many cases, however, the luminance changes sharply at the boundary compared with the interior of the object image region, for example where the boundary falls in the shadow between objects. That is, the change in luminance information between an axis on the boundary and an axis inside the object image region is usually much larger than the change between axes that both lie inside the object image region.

  Therefore, it is desirable to calculate the amount of change in luminance information between the parallel axes and detect the boundary based on that amount of change. This is particularly effective in improving detection accuracy when the luminance changes sharply between the object image region and its boundary. When detecting the boundary from the amount of change, the per-axis luminance information can be represented as a histogram and the change in the histogram used.
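
A sketch of change-based detection; the factor k is an illustrative assumption, not a value given in the patent:

```python
import numpy as np

def axes_with_large_change(cum: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Axes whose cumulative luminance differs sharply from the previous axis.

    delta[i] is the change from axis i to axis i+1; a change much larger
    than the mean absolute change suggests a boundary near axis i+1.
    """
    delta = np.abs(np.diff(cum))
    return np.flatnonzero(delta > k * delta.mean()) + 1
```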

  Further, it is desirable that the boundary detection unit obtains object information of the object and detects a boundary that defines the object image region based on the object information and the luminance information for each axis.

  The present invention acquires luminance information in a predetermined range on each axis and determines, by relative comparison between the axes, whether a boundary defining the object image area lies on an axis. That is, it determines whether a boundary exists within the predetermined range on the axis. When an axis and the boundary are parallel, the axis overlaps the boundary, and this arrangement also reveals the direction of the boundary. When the axis and the boundary are not parallel, however, the arrangement of the boundary may not be determined: an axis that intersects the boundary can be detected, but not at which position it intersects, nor whether the boundary extends across a plurality of axes. It is therefore preferable to acquire object information about the target object and use it in boundary detection.

  Here, object information is information that specifies the region of the object, for example its dimensions, aspect ratio, arrangement, and shape. For example, if the object information states that the objects are arranged horizontally, then with vertical axes the boundaries of the object image areas run vertically, parallel to the axes. Each boundary therefore does not span multiple axes but coincides with an axis, and can be detected from the change in luminance information between axes.

  Furthermore, object information makes it possible to verify the detected boundary. Specifically, if the distance between detected boundaries is smaller than the dimension of the object, no object image area can fit there and the detection is judged an error; likewise, if the aspect ratio of the object image area differs from that of the object by more than a predetermined amount, the objects are judged to overlap and the detection is an error. Verifying detected boundaries in this way further improves detection accuracy.
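
A sketch of the dimension check described here; the function name and the sorted-input assumption are illustrative:

```python
def boundaries_plausible(xs: list[int], min_width: int) -> bool:
    """Check that every gap between adjacent detected boundaries can hold an object.

    xs: detected boundary positions in ascending order.
    min_width: known width of the object, taken from the object information.
    Returns False when some object image area would be narrower than the object.
    """
    return all(b - a >= min_width for a, b in zip(xs, xs[1:]))
```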

  It is also preferable that the luminance information acquisition means acquire, in addition to the per-axis luminance information described above, luminance information in a predetermined range on each of a plurality of second axes that intersect the first axes and are parallel to one another, and that the boundary detection means detect the boundary defining the object image region based on the luminance information of both the first axes and the second axes.

  Detecting the boundary by acquiring luminance information on parallel axes and comparing it per axis can identify an axis that intersects the boundary, but it may not reveal where on the axis the boundary is crossed, or whether the boundary extends across several axes. By additionally acquiring luminance information on second axes that intersect (are not parallel to) the first axes and taking it into account, luminance information can be compared along at least two directions, and the detection accuracy of the object boundary increases.

  The second axes may be fixed in a predetermined direction in advance, or the luminance information of the second axes may be used only when the boundary cannot be detected from the first axes alone. Several sets of second axes at different intersection angles may also be prepared at predetermined angular steps; the choice should be made appropriately according to how boundary detection proceeds.
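
Under the column-as-axis convention used above, second axes orthogonal to the first are simply image rows; a minimal sketch:

```python
import numpy as np

def second_axis_cumulative_luminance(gray: np.ndarray) -> np.ndarray:
    """Cumulative luminance per second (horizontal) axis: one sum per image row.

    Combined with the per-column sums, this yields luminance profiles in
    two directions for locating the boundary.
    """
    return gray.sum(axis=1)
```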

  An image processing method as another exemplary aspect of the present invention is an image processing method for detecting a boundary that defines an object image region corresponding to an object image, in order to extract the object image from an image including the object image, comprising: a step of acquiring, for each axis, luminance information in a predetermined range on a plurality of mutually parallel axes on the image; and a step of detecting the boundary that demarcates the object image region by comparing the luminance information in the predetermined range for each axis.

  With this image processing method as well, luminance information in a predetermined range on a plurality of parallel axes is acquired for each axis, and the per-axis luminance information is compared relatively; the luminance inside the object image region and at its boundary is thus compared through the luminance on each axis, and the boundary defining the object image region is detected. Detecting the boundary in this way improves the efficiency of the processing that extracts the object image from the image.

  An image processing program as another exemplary aspect of the present invention is a computer-executable image processing program for detecting a boundary that defines an object image area corresponding to an object image, in order to extract the object image from an image including the object image. The program causes a computer to sequentially execute: a step of storing an image including the object in storage means; a step of acquiring, for each axis, luminance information in a predetermined range on a plurality of mutually parallel axes on the stored image; a step of detecting the boundary defining the object image region by comparing the acquired luminance information for each axis; and a step of outputting information on the detected boundary.

  With this image processing program as well, the per-axis luminance information in a predetermined range on a plurality of parallel axes is compared relatively, so that the boundary defining the object image region is detected from the luminance inside the region and at the boundary as reflected on each axis, and the efficiency of the extraction processing that extracts the object image from the image is improved.

  Further objects and other features of the present invention will become apparent from the preferred embodiments described below with reference to the accompanying drawings.

  According to the present invention, by comparing the relative levels of luminance information on the image, a boundary defining the object image region can be detected within the predetermined range from which the luminance information was acquired. Once the boundary of the object image area corresponding to the object image is detected, object image extraction by feature detection or the like can be performed within the defined area, which, compared with extraction over the entire image, limits the range to be processed and shortens the extraction processing time.

  Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a schematic block diagram showing the overall configuration of an image processing system according to an embodiment of the present invention. This image processing system extracts object images from an image in which the objects, the statistical subjects of a marketing analysis, are recorded. The present embodiment concerns statistical processing of beverage can containers as the objects: the image processing system detects, in an image containing beverage can container images (hereinafter, object images), the boundaries of the beverage can container image areas (hereinafter, object image areas) corresponding to those images. Detecting the boundaries of the object image regions is a preprocess for the extraction processing that extracts the object images from the image; performing the extraction within the detected object image areas shortens the extraction time.

  The image processing system 1 includes an image capturing device 2 that captures an image, and an image processing device 3 that performs image processing on the image acquired from the image capturing device 2. The image capturing device 2 may be any device that can capture an image including the objects and output the captured image to the image processing device 3; in this embodiment, a digital camera that stores the captured image as a digital signal and can output it externally is used.

  As shown in FIG. 1, the image processing apparatus 3 is built around a central processing unit (CPU) 31 and a bus (BUS) 32; connected to the BUS 32 are a storage device 33 that stores the images acquired from the image capturing device 2, a keyboard device (KB) 34 as input means, and a display (DISP) 35 as output means. The storage device stores an operating system (OS) and the image processing program. The image processing device 3 realizes the following functions by having the CPU sequentially read and execute the program via memory, operating on the image input from the image capturing device 2.

  The image processing apparatus 3 comprises luminance information acquisition means for acquiring luminance information of the image, recording means for storing the luminance information measured by the luminance information acquisition means in the storage device 33, and boundary detection means for detecting the boundary defining the object image region based on the luminance information stored in the storage device 33. Hereinafter, the image processing that detects the boundary of the object image region is described in detail with reference to the drawings.

  FIG. 2 is a flowchart showing an image processing procedure for detecting an object image area in the image processing apparatus 3, and FIGS. 3 to 6 are diagrams showing an example of a display screen of the display 35 of the image processing apparatus 3.

  First, the image processing device 3 acquires an image in which the objects are recorded from the image capturing device 2 (S.1). FIG. 3 shows the image acquired from the image capturing device 2 as displayed. The image is a photograph of a display shelf installed in a store; the display shelf consists of a plurality of shelves A1 to A3 stacked in the vertical direction, and different types of objects B1 to B15 are arranged on each shelf. In the figure, the horizontal direction along which the objects are arranged on the shelves is the X direction, and the vertical direction above and below the objects is the Y direction. The boundaries defining the object image areas include not only the outer edges of each object image area but also the regions between adjacent object image areas.

  Next, the image processing apparatus 3 extracts partition areas, each containing a plurality of object image areas corresponding to object images (S.2). In the present embodiment, the display shelves A1 to A3 are stacked in the vertical direction and the X positions of the object image boundaries differ from shelf to shelf, so the boundaries are detected per shelf. Even with shelves in multiple tiers, if the X positions of the boundaries coincide across the shelves, the boundaries can be detected for all shelves at once without extracting partition areas.

  The partition areas may be extracted by image processing based on the relative comparison of luminance information described later, or by a person entering the number of display shelves or designating ranges. In the present embodiment a person inputs the number of display shelves, so a screen accepting that input is shown on the display 35 of the image processing device 3 (not shown). On receiving the number of shelves, the image processing device 3 divides the image into that number of tiers in the Y direction and stores the divided image areas C1 to C3.

  Then the luminance information of the image of the partition area C3 extracted in S.2 is acquired (S.3). The image processing apparatus 3 sets a plurality of axes parallel to the Y direction and accumulates the luminance information in the partition area C3 for each axis (S.4). FIG. 4 shows the accumulated luminance values obtained in S.4 for the partition area C3, and FIG. 5 shows the accumulated luminance values of region D, a part of partition area C3. The vertical axis represents the accumulated luminance value and the horizontal axis the position in the X direction. Z1 to Z23 in FIG. 5 denote the axes parallel to the Y direction.

  The luminance information here is the brightness per unit area; the higher the value, the brighter. In the image of the present embodiment, the objects stand close together and cast shadows on one another, so the luminance information at the boundary portions is lower than that of the object image region portions. Moreover, since the boundaries of the object image areas are parallel to the axes, each boundary overlaps an axis. The boundary is therefore detected on the criterion that a boundary lies on the axis at the X position where the accumulated luminance value is smallest.
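
A minimal sketch of S.4 under the column-as-axis assumption used earlier (one axis per pixel column of the partition area):

```python
import numpy as np

def accumulate_luminance(region: np.ndarray) -> np.ndarray:
    """S.4: accumulated luminance for each axis parallel to the Y direction.

    region: 2-D luminance array of one partition area (e.g. C3).
    Returns one accumulated value per X position; with shadowed boundaries,
    the boundary axes appear as the smallest values (valleys).
    """
    return region.sum(axis=0)
```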

  Note that, depending on the arrangement of the objects and the luminance of the object images, the luminance information at the boundary portions may be higher than at the object image region portions. In that case, it is preferable to detect the boundary on the criterion that a boundary lies at the position where the accumulated luminance value is largest.

  Next, the average of the accumulated luminance values of the axes is calculated, and with the average as the branch point, only accumulated values at or below the average are retained (S.5). In this embodiment a boundary is judged to lie at the X position where the accumulated luminance value is smallest, so an axis whose accumulated value is at or above the average has high luminance along its length and is judged to contain no boundary. Retaining only below-average accumulated values thus extracts, from all the axes, only those with relatively small accumulated luminance. Furthermore, accumulated values at or above the average are replaced with the average, suppressing their influence on boundary detection. FIG. 6 shows the accumulated luminance values of partition area C3 of FIG. 4 after the average was calculated and above-average values were replaced with the average. Compared with extracting inflection points without this replacement, replacing above-average values with the average effectively removes noise inflection points in regions of large luminance fluctuation where several inflection points occur, improving boundary detection accuracy.
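
A sketch of the S.5 replacement; np.minimum is one way to express clipping above-average values to the average:

```python
import numpy as np

def clip_to_average(cum: np.ndarray) -> np.ndarray:
    """S.5: replace accumulated values at or above the average with the average.

    Flattening the bright regions removes their spurious inflection points,
    leaving only the dark valleys that can be boundaries.
    """
    return np.minimum(cum, cum.mean())
```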

  Based on the per-axis accumulated luminance values extracted in S.5, the change in accumulated luminance per unit distance between adjacent axes, that is, the slope of the graph shown in FIG. 5, is calculated (S.6). Calculating this amount of change allows relative changes between adjacent axes in the X direction to be compared. FIG. 7 shows the amount of change in the slope of the accumulated luminance values shown in FIG. 6; the plus and minus signs shown correspond to the slope.
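
A sketch of S.6 as a discrete difference between adjacent axes:

```python
import numpy as np

def slope_between_axes(cum: np.ndarray) -> np.ndarray:
    """S.6: change of accumulated luminance per unit distance between adjacent axes.

    Negative entries are descents toward a valley (a candidate boundary);
    positive entries are climbs away from one.
    """
    return np.diff(cum)
```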

  The average of the slopes calculated in S.6 is then computed, and with the absolute value of this average as the branch point, only the axes whose slope has an absolute value at or above the average are extracted (S.7). An axis judged to hold a boundary (a minimum of the accumulated luminance) sits at a valley (inflection point) of the graph. Since the valleys of the graph are the inflection points of interest, an inflection point cannot occur where the slope is positive; therefore only negative slope values are extracted and their average calculated. FIG. 8 shows, from the slope data of the accumulated luminance values in FIG. 7, only the negative values, with their average computed and only the values smaller than that average (larger in absolute value) displayed.
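
A sketch of S.7; returning no candidates when there are no descents is an assumption of the sketch:

```python
import numpy as np

def strong_descents(slope: np.ndarray) -> np.ndarray:
    """S.7: indices where the slope is negative and steeper than the average descent.

    Only the negative slopes are averaged; axes descending more steeply than
    that average are kept as inflection-point candidates.
    """
    neg = slope[slope < 0]
    if neg.size == 0:
        return np.array([], dtype=int)  # no descents, hence no candidates
    return np.flatnonzero(slope < neg.mean())
```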

  In this embodiment the boundary is detected on the criterion that it lies on the axis at the X position where the accumulated luminance value is smallest, so attention is paid to the valleys of the accumulated luminance graph and only negative slope values are processed. Under different image conditions, where the criterion is that the boundary lies at the position of the largest accumulated luminance, the inflection point may occur at a positive slope, and processing should instead attend to the peaks of the graph and the positive slope values. Both positive and negative values may also be processed together; the choice can be made appropriately according to the image conditions.

  Next, the inflection points are detected by comparing displacement amounts (scalars) based on the accumulated luminance values on the axes extracted in S.7 (S.8). In S.7, the amount of change per unit distance between adjacent axes was calculated and axes with relatively large change were extracted. In S.8, the inflection points are detected from the absolute displacement, that is, the length of the graph segments shown. For each axis extracted in S.7, the displacement is calculated and the average displacement computed; data smaller than this average and smaller than the adjacent displacement are deleted, and the remaining data are the inflection points.

  An inflection point is a point at which the slope of the accumulated luminance changes from minus to plus, and an axis holding such a point, at which the graph of the accumulated luminance values is nonzero (has changed), is detected as a boundary of the object image region (S.9). The axis with the inflection point detected here is axis Z13 shown in FIG. 5, and this axis is detected as a boundary. FIG. 9 shows the boundaries of the object image areas detected in partition area C3 in this way.

  The flow from measuring the luminance information in the image area to detecting the boundary is summarized as follows. The accumulated luminance of the axis along the Y direction at each X position is acquired (S.4). Axes without inflection points are removed by replacing the accumulated values at or above the average with the average (S.5), and then by deleting the slopes whose descent is shallower than the average of the negative slopes (S.7). The axes extracted in S.7 have low luminance and a large amount of change per unit distance. Inflection points are then detected by extracting the axes whose displacement is larger than that of the adjacent axes, and each axis with a detected inflection point is determined to be a boundary.
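
Putting S.4 through S.9 together, a compact end-to-end sketch under the assumptions above (dark shadow boundaries, one axis per pixel column); the valley test standing in for S.8/S.9 is a simplification of the displacement comparison:

```python
import numpy as np

def detect_boundary_axes(region: np.ndarray) -> np.ndarray:
    """Sketch of S.4-S.9: X positions of boundary axes in one partition area.

    region: 2-D luminance array (rows x cols) of a partition area such as C3.
    """
    cum = region.sum(axis=0)             # S.4: accumulate per vertical axis
    cum = np.minimum(cum, cum.mean())    # S.5: flatten above-average values
    slope = np.diff(cum)                 # S.6: change between adjacent axes
    neg = slope[slope < 0]
    if neg.size == 0:
        return np.array([], dtype=int)   # no descents, hence no boundaries
    strong = slope < neg.mean()          # S.7: steeper than the average descent
    # S.8/S.9 (simplified): a valley axis is reached by a strong descent
    # and followed by a rise (the slope changes from minus to plus).
    valleys = [i + 1 for i in range(len(slope) - 1)
               if strong[i] and slope[i + 1] > 0]
    return np.asarray(valleys, dtype=int)
```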

  As described above, according to the embodiment, the detected boundaries define an object image area containing an object, and object image extraction can be performed within that area. In the extraction processing, feature points of the image data are analyzed and the type of object is identified by matching against a plurality of object types. When object matching is performed without a defined object image area, the scale of the object is unknown, so the object's feature points must be matched while being enlarged and reduced in turn. Defining the object image area, by contrast, confines the extraction processing to the defined area and thus shortens the analysis processing time.

  In this embodiment the number of objects in the partition area C3, and hence the number of boundaries to detect, is unknown, so the inflection points were determined by comparing the changes in adjacent accumulated luminance values (S.8). When the number of object image areas (that is, of boundary lines) is known, however, the slope of the accumulated luminance graph over the whole partition area can be calculated and the axes with the largest slopes extracted, as many as there are boundary lines, to detect the boundaries from the extracted inflection points. In other words, the present invention can use the accumulated luminance values to extract their inflection points, minima, or maxima and thereby detect the boundaries of the object image areas, and it goes without saying that the configurations described above can be combined as appropriate.

  Furthermore, in addition to the image processing of the above embodiment, image processing can use the object information of the target object. The object information comprises the dimensions, the aspect ratio, and the arrangement of the object, and the detected boundaries can be verified against it. Specifically, after boundaries are detected, the distance between adjacent boundaries is compared with the dimensions in the object information, and a difference of at least a predetermined amount is judged an error; likewise, the aspect ratio of the object image area defined by the detected boundaries is compared with the aspect ratio in the object information, and a difference of at least a predetermined ratio is judged an error.
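
A sketch of the aspect-ratio check; the relative tolerance parameter is an illustrative assumption:

```python
def aspect_ratio_ok(region_w: float, region_h: float,
                    obj_w: float, obj_h: float,
                    tol: float = 0.2) -> bool:
    """Error check: compare the region's aspect ratio with the object's.

    Returns False (an error, e.g. overlapping objects) when the two
    height/width ratios differ by more than the relative tolerance tol.
    """
    region_ratio = region_h / region_w
    obj_ratio = obj_h / obj_w
    return abs(region_ratio - obj_ratio) <= tol * obj_ratio
```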

  Conditions judged to be errors include, for example, objects recorded overlapping one another, and bright spots caused by differing illumination of the individual objects. Performing error determination based on the object information makes it possible to prompt visual confirmation or to redo the boundary determination, improving the detection accuracy of the boundary.

  In the above embodiment, the boundaries are detected using the accumulated luminance values of axes in the Y direction, but the accumulated luminance values of second axes in a different direction may be used together with them. In the embodiment the objects are arranged along the X direction and the boundaries run along the Y direction, so the accumulated luminance on the axes changes sharply at the boundary positions and the boundaries can be detected. When the boundaries of the object image regions are not parallel to the axes along which luminance is accumulated, however, the inflection points of the accumulated luminance may not appear clearly at the boundary positions.

  The second axes intersect (are not parallel to) the axes parallel to the Y direction (hereinafter, the first axes); like the first axes, a plurality of second axes are arranged parallel to one another. Using the accumulated luminance values on both the first and second axes, the change in luminance information can be acquired in at least two directions.

  Based on the change in luminance information in the two directions, the shelf image areas of the above embodiment can be detected. Since the shelves A1 to A3 run along the X direction, the second axes are set as a plurality of axes parallel to the X direction, the accumulated luminance is acquired for each axis, and the accumulated values are compared per axis. The luminance information of the object images differs from that of the shelf images (the parts of the image showing the shelves), so comparing the change in luminance per axis detects, as in the embodiment, the boundaries defining the shelf image regions and hence the shelf image areas. Detecting the shelf image areas removes the need for a person to input the number of shelves and speeds up the processing.
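
A rough sketch of shelf-band detection from the second-axis (row) profile; the two-sigma threshold is an assumption, not a value from the patent:

```python
import numpy as np

def detect_shelf_boundaries(gray: np.ndarray) -> np.ndarray:
    """Row positions where the second-axis luminance profile jumps sharply.

    gray: 2-D luminance array of the whole image. Shelf hardware and the
    products differ in luminance, so large jumps between adjacent row sums
    mark candidate shelf boundaries.
    """
    row_cum = gray.sum(axis=1)           # accumulated luminance per row
    jump = np.abs(np.diff(row_cum))      # change between adjacent rows
    return np.flatnonzero(jump > jump.mean() + 2.0 * jump.std()) + 1
```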

  The configuration using the accumulated luminance of the second axes may be employed depending on the arrangement of the objects, used to compute the number of shelves when objects are arranged in multiple tiers as in the embodiment, or used for the error determination based on the object information; it may be combined with the processing described above as appropriate.

  In the present embodiment the image processing system includes an image capturing device, but the present invention is not limited to this configuration, as long as the luminance information of an image can be acquired.

  Although preferred embodiments of the present invention have been described above, the present invention is not limited to them, and various modifications and changes are possible within the scope of the gist of the invention.

FIG. 1 is a schematic block diagram showing the overall configuration of an image processing system according to an embodiment of the present invention.
FIG. 2 is a flowchart showing the image processing procedure for detecting an object image region.
FIG. 3 is an example of the display screen of the display of the image processing apparatus, showing the image acquired from the image capturing device.
FIG. 4 is an example of the display screen, showing the accumulated luminance value in the Y direction at each X position of partition area C3.
FIG. 5 is an example of the display screen, showing the accumulated luminance value in the Y direction at each X position of region D.
FIG. 6 is an example of the display screen, showing the accumulated luminance values of partition area C3 with values at or above the average replaced by the average.
FIG. 7 is an example of the display screen, showing the change per unit distance between adjacent axes of the accumulated luminance values of partition area C3.
FIG. 8 is an example of the display screen, showing only the negative values extracted from the slope data of the accumulated luminance values of partition area C3, with their average computed and only values smaller than that average displayed.
FIG. 9 is an example of the display screen, showing the boundaries of the object image areas of partition area C3.

Explanation of symbols

1 Image processing system
2 Image capturing device
3 Image processing device
31 Central processing unit (CPU)
32 Bus (BUS)
33 Storage device
34 Keyboard device (KB)
35 Display (DISP)
A1-A3 Display shelves
B1-B15 Objects
C1-C3 Partition areas

Claims (8)

  1. An image processing system for detecting a boundary defining an object image region corresponding to an object image, in order to extract the object image from an image including the object image, comprising:
    luminance information acquisition means for acquiring, for each axis, luminance information in a predetermined range on a plurality of mutually parallel axes on the image; and
    boundary detection means for detecting the boundary demarcating the object image region by comparing, for each axis, the luminance information in the predetermined range acquired by the luminance information acquisition means.
  2. The image processing system according to claim 1, wherein the boundary detection means detects the boundary defining the object image region based on a cumulative value of the luminance information in the predetermined range acquired by the luminance information acquisition means.
  3. The image processing system according to claim 1, wherein the boundary detection means detects the boundary demarcating the object image region based on the luminance information falling within either the range at or above a predetermined value or the range below the predetermined value, among the luminance information for each axis.
  4. The image processing system according to any one of claims 1 to 3, wherein the boundary detection means detects the boundary demarcating the object image region based on the relative amount of change, for each axis, of the luminance information in the predetermined range.
  5. The image processing system according to claim 4, wherein the boundary detection means acquires object information of the target object and detects the boundary defining the object image region based on the object information and the luminance information for each axis in the predetermined range.
  6. The image processing system according to any one of claims 1 to 5, wherein the luminance information acquisition means acquires, for each axis, the luminance information in the predetermined range on the axes and also acquires, for each second axis, luminance information in a predetermined range on a plurality of second axes that intersect the axes and are parallel to one another, and
    wherein the boundary detection means detects the boundary defining the object image region based on the luminance information of the axes and the luminance information of the second axes.
  7. An image processing method for detecting a boundary defining an object image region corresponding to an object image, in order to extract the object image from an image including the object image, comprising:
    a step of acquiring, for each axis, luminance information in a predetermined range on a plurality of mutually parallel axes on the image; and
    a step of detecting the boundary defining the object image region by comparing the luminance information in the predetermined range for each axis.
  8. An image processing program for detecting a boundary defining an object image area corresponding to an object image, in order to extract the object image from an image including the object image, the program causing a computer to sequentially execute:
    a step of storing an image including the object in storage means;
    a step of acquiring, for each axis, luminance information in a predetermined range on a plurality of mutually parallel axes on the stored image;
    a step of detecting the boundary defining the object image region by comparing the acquired luminance information for each axis; and
    a step of outputting information on the detected boundary.
JP2008115639A 2008-04-25 2008-04-25 Image processing system, image processing method, and image processing program Pending JP2009265998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008115639A JP2009265998A (en) 2008-04-25 2008-04-25 Image processing system, image processing method, and image processing program


Publications (1)

Publication Number Publication Date
JP2009265998A true JP2009265998A (en) 2009-11-12

Family

ID=41391765

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008115639A Pending JP2009265998A (en) 2008-04-25 2008-04-25 Image processing system, image processing method, and image processing program

Country Status (1)

Country Link
JP (1) JP2009265998A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002284309A (en) * 2001-03-22 2002-10-03 Ooku System:Kk Inventory system
JP2004272885A (en) * 2003-02-21 2004-09-30 Shinko Electric Ind Co Ltd Device and method for edge extraction
JP2009187482A (en) * 2008-02-08 2009-08-20 Nippon Sogo System Kk Shelf allocation reproducing method, shelf allocation reproduction program, shelf allocation evaluating method, shelf allocation evaluation program, and recording medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014229246A (en) * 2013-05-27 2014-12-08 日本電気株式会社 Detection device, method, and program
WO2016063484A1 (en) * 2014-10-23 2016-04-28 日本電気株式会社 Image processing apparatus, display control apparatus, image processing method and recording medium
US10438079B2 (en) 2014-10-23 2019-10-08 Nec Corporation Image processing apparatus, image processing method and recording medium
JP2016115349A (en) * 2014-12-10 2016-06-23 株式会社リコー Method, system and computer readable program for analyzing image including organized plural objects
US9811754B2 (en) 2014-12-10 2017-11-07 Ricoh Co., Ltd. Realogram scene analysis of images: shelf and label finding


Legal Events

A621: Written request for application examination (JAPANESE INTERMEDIATE CODE: A621); effective date: 2011-01-25
A977: Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007); effective date: 2012-03-08
A131: Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131); effective date: 2012-03-13
A02: Decision of refusal (JAPANESE INTERMEDIATE CODE: A02); effective date: 2012-10-02