WO2006019165A1 - Label image generation method and image processing system - Google Patents
Label image generation method and image processing system
- Publication number
- WO2006019165A1 (PCT/JP2005/015163)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- identification information
- pixel
- pixels
- labeling
- pixel block
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/457—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/955—Hardware or software architectures specially adapted for image or video understanding using specific electronic processors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/28—Indexing scheme for image data processing or generation, in general involving image processing hardware
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
Definitions
- the present invention relates to labeling used for extracting image elements and the like.
- Japanese Patent Application Laid-Open No. 7-192130 discloses performing a temporary labeling process in a labeling process using a one-dimensional SIMD (Single Instruction Stream Multiple Data Stream) type processor.
- the technique of this publication uses a one-dimensional SIMD type processor to execute temporary labeling processing in order on each line of an image.
- Japanese Patent Laid-Open No. 2002-230540 discloses that a plurality of PEs of a one-dimensional SIMD processor perform labeling in parallel on pixels along an oblique direction of the input image's pixel array. By processing pixels in an oblique direction in parallel, the adjacent pixels needed to determine whether the target pixel is connected are labeled before the target pixel. The parallel processing capability of SIMD processors can therefore be used effectively to increase processing speed. However, realizing this method requires a one-dimensional SIMD processor with several thousand PEs to scan in an oblique direction, even for an image of only about 200 DPI.
- One embodiment of the present invention is a method for generating a label image, which includes the following steps.
- a pixel block including a plurality of pixels adjacent to each other in multiple dimensions is input as one unit from data including a plurality of pixels for forming an image.
- based on the binarized pixels, identification information common to all the ON/OFF pixels to be grouped included in the pixel block is labeled.
- the pixel block is composed of four 2 × 2 pixels that are two-dimensionally adjacent to each other.
- the pixel block may also be composed of eight 2 × 2 × 2 pixels that are adjacent to each other in three dimensions. Each pixel included in such a pixel block is adjacent to the other pixels included in the same block. Therefore, when the pixels that make up an image are grouped by labeling common identification information for pixels connected in eight directions (8-connected) based on the binarized pixels, common identification information can be labeled for all pixels in the block that share one state or value to be grouped, i.e., ON or OFF ("1" or "0").
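As a quick sanity check of this property, the following Python sketch (our illustration, not part of the patent) verifies that all four pixel positions of a 2 × 2 block are pairwise 8-connected, so a single identifier always suffices for every ON pixel in one block:

```python
from itertools import combinations

def eight_connected(p, q):
    """Two distinct pixel coordinates are 8-connected when they differ
    by at most one step in each axis."""
    return p != q and abs(p[0] - q[0]) <= 1 and abs(p[1] - q[1]) <= 1

# All four positions of a 2 x 2 pixel block are pairwise 8-connected,
# so the ON pixels of one block can always share one identifier.
block = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert all(eight_connected(p, q) for p, q in combinations(block, 2))
```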
- Another embodiment of the present invention is an image processing system including the following.
- An interface configured to input, in parallel as one unit, a pixel block of pixels adjacent to each other in multiple dimensions from data including a plurality of pixels for forming an image.
- a labeling processor configured to label common identification information in parallel for all the ON / OFF pixels to be grouped included in the pixel block based on the binarized pixels.
- the image processing system preferably includes a processor with a processing area containing a plurality of processing elements and a plurality of data paths operated in parallel by those processing elements.
- the interface and labeling processor can be configured in the processing area of this processor, and can provide a processor capable of executing a process of inputting a plurality of pixels and a process of labeling a plurality of pixels in a pipeline manner.
- Another embodiment of the present invention is an image processing method including the following.
- a pixel block including a plurality of pixels adjacent to each other in multiple dimensions is input as one unit from data including a plurality of pixels for forming an image. Based on the binarized pixels, identification information common to all the ON/OFF pixels to be grouped included in the pixel block is labeled.
- the labeled image also makes it possible to distinguish image elements.
- the feature value includes the primary or secondary moment, area, perimeter, density, spread, etc. of the image element.
- the feature value of the image element includes volume, center of gravity, moment, and the like. Identifying image elements and determining their feature values is useful in many applications, including processes that require image recognition.
- for industrial robots that perform automatic mounting, the label image can be used to determine the position and tilt of supplied parts. In automated driving devices, label images are used to recognize roads or obstacles. In 3D CT scans, the label image is used in processing or preprocessing to obtain the basic characteristics of the imaged object.
- the process of generating a label image can be divided into a first stage that scans the image, labels temporary identification information indicating the relationship with neighboring pixels, and generates combined information among the pieces of temporary identification information, and a second stage that labels true identification information indicating the image elements based on the temporary identification information and its combined state.
- the input process and the labeling process can be applied to both the first stage and the second stage, and the processing speed of each stage can be improved.
- a first processing system that scans the image, labels temporary identification information, and generates combined information of the temporary identification information, and a second processing system that labels true identification information indicating image elements based on the combined information.
- the first processing system and the second processing system each include an interface and a labeling processor; the labeling processor of the first processing system labels temporary identification information as the common identification information, and the labeling processor of the second processing system labels true identification information as the common identification information.
- the image processing system preferably has a reconfigurable processor comprising a processing area and a control unit for reconfiguring the processing area.
- the interface and labeling processor included in the first processing system can be configured in the processing area, and the interface and labeling processor included in the second processing system can be configured there after the processing of the first processing system is completed. By reconfiguring the first processing system and the second processing system in the processing area at different timings, the hardware resources of the processor are used effectively, and a small, high-performance image processing system can be provided.
- a reconfigurable integrated circuit device such as an FPGA equipped with a plurality of processing units is one type of hardware capable of executing a large number of processes in parallel.
- the reconfigurable integrated circuit device described in the applicant's international publication WO02/095946 is suitable for an image processing system because its circuit configuration can be changed dynamically.
- temporary identification information is labeled in units of pixel blocks. Accordingly, provisional identification information can be selected in units of pixel blocks rather than in units of individual pixels.
- an adjacent pixel group, including pixels that touch the pixel block and have previously been labeled with temporary identification information, is input.
- the following processing is performed in units of pixel blocks.
- if the adjacent pixel group contains temporary identification information that can be inherited, that temporary identification information is inherited as the common identification information.
- if the adjacent pixel group also contains other inheritable temporary identification information, combined information linking the inherited temporary identification information and the uninherited temporary identification information is recorded.
- if there is no inheritable temporary identification information, new temporary identification information is assigned as the common identification information.
- the process of decoding the pixel block and the adjacent pixel group, and the process of selecting and labeling either the inheritable temporary identification information or new temporary identification information, can be configured to execute in a pipeline manner.
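The first-stage steps above can be sketched in software as follows. This is an illustrative Python model only (function and variable names are our own); an actual implementation would run the block decode and the identifier selection as pipelined hardware stages:

```python
def label_blocks(image):
    """First-stage provisional labeling in 2 x 2 block units (sketch).

    `image` is a 2D list of 0/1 values with even width and height.
    Returns the provisional label image and a list of (kept, merged)
    records, i.e. the combined information. Label 0 is background.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    merges = []                      # combined information between IDs
    next_id = 1
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            block = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
            on = [(y, x) for y, x in block if image[y][x]]
            if not on:
                continue
            # Adjacent pixel group: the row above and the column to the left.
            neigh = [(i - 1, x) for x in range(j - 1, j + 3)] + \
                    [(y, j - 1) for y in (i, i + 1)]
            # Inheritable IDs: labeled neighbours 8-adjacent to an ON pixel.
            ids = sorted({labels[y][x] for y, x in neigh
                          if 0 <= y < h and 0 <= x < w and labels[y][x]
                          and any(abs(y - by) <= 1 and abs(x - bx) <= 1
                                  for by, bx in on)})
            if ids:
                keep = ids[0]        # inherit one temporary identifier
                merges += [(keep, o) for o in ids[1:]]
            else:                    # nothing inheritable: new identifier
                keep, next_id = next_id, next_id + 1
            for y, x in on:          # label all ON block pixels at once
                labels[y][x] = keep
    return labels, merges
```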
- the second stage, which is executed after the first stage, labels the true identification information as the common identification information.
- this stage includes an input process and a labeling process that are independent of the first stage.
- based on the combined information, true identification information common to the pixel blocks in a combined relationship is labeled as the common identification information.
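The second stage can be sketched as a union-find pass over the combined information. Again this is an illustrative model with hypothetical names, not the patented circuit:

```python
def resolve(labels, merges):
    """Second stage (sketch): map every provisional identifier to a true
    identifier using the combined information from the first stage."""
    parent = {}

    def find(a):
        # Follow parent links to the representative (true) identifier.
        root = a
        while parent.get(root, root) != root:
            root = parent[root]
        parent[a] = root             # light path compression
        return root

    for kept, merged in merges:
        parent[find(merged)] = find(kept)    # union the two identifiers
    return [[find(v) if v else 0 for v in row] for row in labels]
```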
- to label identification information common to all the grouping target pixels included in a pixel block, it is not necessary to determine whether the grouping target pixels included in the pixel block are continuous (connected). In a pixel block of only 2 × 2 pixels, the pixels that receive the common identification information are always connected under 8-connectivity. By increasing the number of pixels included in the pixel block, or by labeling common identification information across related pixel blocks, common identification information can also be labeled for pixels that are not necessarily connected. This type of labeling enables rough grouping of the pixels included in high-resolution image data; in other words, even non-connected pixels can be grouped under a predetermined condition.
- since the same identification information can be attached to the pixels included in a pixel block by parallel batch processing, the labeling speed is improved.
- because labeling with this method does not involve converting the resolution of the image, roughly grouped identification information can be attached to high-resolution image data without degrading the accuracy of the image data.
- at least one pixel block and an adjacent pixel group including at least one pixel block adjacent to it are input. When both the pixel block and the adjacent pixel group contain grouping target pixels, the temporary identification information included in the adjacent pixel group can be inherited. If a pixel lies within the related range defined by the pixel blocks, the temporary identification information is inherited even if the pixels are not continuous (connected). Therefore, common identification information can be given to pixels that are related over a range beyond direct connection.
- an adjacent pixel group consisting of pixel blocks is input, and if both the large pixel block and the adjacent pixel group include pixels to be grouped, the temporary identification information included in the adjacent pixel group can be inherited.
- this large pixel block includes four pixel blocks and is composed of 16 pixels. The four pixel blocks and the six adjacent pixel blocks define the related range, and the pixels belonging to that range can be grouped by giving them common identification information. With this type of labeling, 16 pixels can be labeled in parallel, so 40 pixels, including the large pixel block and the adjacent pixel group, are processed in parallel. It is therefore a labeling method suited to implementation on hardware (a processor) having multiple processing elements that operate in parallel.
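Rough grouping with large pixel blocks can be modeled in software as follows. This is an illustrative sketch with hypothetical names: the large block is reduced to a single ON/OFF cell, the cells are labeled with an ordinary 8-connected flood fill, and each cell's identifier is broadcast back to the ON pixels it contains:

```python
from collections import deque

def rough_label(image, size=4):
    """Rough grouping (sketch): a size x size large pixel block is ON if
    any of its pixels is ON; 8-connected ON blocks form one group, and
    every ON pixel inside the group's blocks gets the same identifier."""
    h, w = len(image), len(image[0])
    ch, cw = h // size, w // size
    cells = [[any(image[i * size + y][j * size + x]
                  for y in range(size) for x in range(size))
              for j in range(cw)] for i in range(ch)]
    cell_id = [[0] * cw for _ in range(ch)]
    next_id = 1
    for i in range(ch):
        for j in range(cw):
            if cells[i][j] and not cell_id[i][j]:
                cell_id[i][j], next_id = next_id, next_id + 1
                q = deque([(i, j)])
                while q:                     # flood fill over ON blocks
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < ch and 0 <= nx < cw
                                    and cells[ny][nx] and not cell_id[ny][nx]):
                                cell_id[ny][nx] = cell_id[y][x]
                                q.append((ny, nx))
    # Broadcast each block's identifier to the ON pixels it contains.
    return [[cell_id[y // size][x // size] if image[y][x] else 0
             for x in range(w)] for y in range(h)]
```

Note how two ON pixels that are far apart at pixel level still receive one identifier as long as their large blocks are adjacent, which is exactly the rough grouping described above.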
- although the inheritance logic becomes complex, it is also possible to input multiple large pixel blocks and their related adjacent pixel groups and process them in parallel.
- based on the combined information, true identification information common to the large pixel blocks in a combined relationship is labeled.
- Another embodiment of the present invention is a method for analyzing an image, and includes the following steps.
- a pixel block including a plurality of pixels that are adjacent to each other in multiple dimensions is input as one unit.
- based on the binarized pixels, identification information common to all the ON/OFF pixels to be grouped included in the pixel block is labeled. The calculation is then repeated in units including at least one pixel block to calculate the feature value of each image element.
- the same identification information is given collectively, in pixel-block units, to the pixels included in a pixel block. Since each image element is therefore a set of pixel blocks, the feature value of each image element can be calculated by repeating the calculation in units including the pixel block. The image processing system likewise desirably has a first processor configured to repeat the operation in units including at least one pixel block and calculate the feature value of each image element. If the image processing system includes a reconfigurable processor, this first processor can also be reconfigured into the processing area at an appropriate timing after the processing of the first processing system is completed.
- a method that further includes a step of calculating, in units of labeled pixel blocks and in parallel with the labeling step, a block feature value contributing to the feature value of an image element is useful. Determining the feature value of each pixel block is meaningful as preprocessing for calculating and summing the feature values of the image elements grouped by the identification information.
- in the step of calculating the block feature value, a feature amount can be obtained using the binarized pixels, and a block feature value can also be calculated from the multi-valued pixels included in the pixel block to be labeled.
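Per-block feature accumulation can be sketched like this (illustrative names, not the patented circuit): each pixel block first yields a partial result, and the partials are then summed per identifier to give the area, grey-level sum, and centroid of each labeled image element:

```python
def block_features(gray, labels, size=2):
    """Sketch: per-block partial feature values are computed first, then
    summed per identifier. `gray` holds multi-valued pixels, `labels`
    the identifiers produced by block labeling (0 = background)."""
    totals = {}
    h, w = len(gray), len(gray[0])
    for i in range(0, h, size):
        for j in range(0, w, size):
            partial = {}             # one partial result per pixel block
            for y in range(i, i + size):
                for x in range(j, j + size):
                    lab = labels[y][x]
                    if lab:
                        a = partial.setdefault(lab, [0, 0, 0, 0])
                        a[0] += 1            # area
                        a[1] += gray[y][x]   # grey-level sum (density)
                        a[2] += x            # coordinate sums for the
                        a[3] += y            # first moment / centroid
            for lab, p in partial.items():   # accumulate block partials
                t = totals.setdefault(lab, [0, 0, 0, 0])
                for k in range(4):
                    t[k] += p[k]
    return {lab: {"area": a, "sum": s, "cx": sx / a, "cy": sy / a}
            for lab, (a, s, sx, sy) in totals.items()}
```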
- the image processing system desirably further has a second processor that is supplied with data including the pixel blocks from the interface in parallel with the labeling processor, and that is configured to calculate block feature values contributing to the feature values of the image elements in units of labeled pixel blocks.
- the second processor is preferably configured to calculate a value that contributes to the feature value of the image element from the multi-valued pixels included in the pixel block to be labeled.
- FIG. 1 shows how an image is scanned in units of pixel blocks.
- FIG. 2 (a) shows an enlarged configuration of a pixel block and a pixel configuration of an adjacent pixel group, and FIG. 2 (b) shows an array of temporary identifiers (temporary IDs).
- FIG. 3 shows combinations of the configuration of a pixel block and the configuration of an adjacent pixel group when a temporary identifier is selected.
- FIG. 4 is a table collectively showing combinations of pixel block configurations and adjacent pixel group configurations when a temporary identifier is selected.
- FIG. 5 shows how an image is scanned in units of large pixel blocks.
- FIG. 6 (a) shows an enlarged configuration of a large pixel block and the configuration of the pixel blocks of an adjacent pixel group, and FIG. 6 (b) shows an array of temporary identifiers (temporary IDs).
- FIGS. 7 (a) to 7 (d) show combinations of the configuration of a large pixel block and the configuration of adjacent pixel groups when selecting a temporary identifier.
- FIG. 8 is a flowchart showing an outline of image processing.
- FIG. 9 shows a schematic configuration of a reconfigurable processing apparatus suitable for image processing.
- FIG. 10 (a) to (c) show the configuration of an image processing apparatus using a reconfigurable processing apparatus.
- FIG. 11 shows a schematic configuration of a first stage interface and a labeling processor for labeling a temporary identifier.
- FIG. 12 shows a schematic configuration of the logic part of the labeling processor shown in FIG. 11.
- FIG. 13 shows a schematic configuration of a processor (second processor) for analyzing shading.
- FIG. 14 shows a schematic configuration of the threshold and value unit of the processor shown in FIG. 13.
- FIG. 15 shows an outline of grayscale data.
- FIG. 16 shows a schematic configuration of a second stage interface and a labeling processor for labeling a true identifier.
- FIG. 17 shows a schematic configuration of an analysis processor (first processor) that performs processing for extracting a maximum value in the Y direction.
- FIG. 1 shows the basic concept of block labeling. Consider a binarized two-dimensional image (binary image) 1 that is output (displayed, printed, etc.) in frame units.
- This image 1 is a two-dimensional array of a plurality of pixels 5 having a value of “0” (off) or “1” (on).
- the information included in the image data comprising these pixels 5 can be analyzed. From the information contained in image 1, image elements consisting of pixels 5 in a predetermined relationship are segmented or resolved, so that image 1 can be analyzed automatically, or specific components of image 1 can be extracted and shown to the user for further analysis.
- because the block labeling process enables rough grouping, pixels 5 can be judged to constitute one component even when they are not strictly continuous, as long as they lie within a certain range or distance relationship on the image.
- Rough grouping identifies pixels that are separated by a few pixels at most, including consecutive pixels, as the same group.
- a configuration (component) of pixels that are ON "1" can be regarded as an image element, and a configuration (component) of pixels that are OFF "0" can likewise be treated as an image element.
- in the following, an image element is composed of ON "1" pixels, and the identification information is block-labeled with the "1" pixels as the grouping target.
- FIGS. 1 and 2 show an example of block labeling for pixels belonging to an image element whose pixels are connected.
- to generate a label image in which identification information distinguishing image elements is labeled to the pixels, the connection state must be determined for a large number of pixels included in one image.
- in a two-dimensional image, an image element is a connected area extending in two dimensions. Searching for image elements directly in the two-dimensional direction requires a huge amount of memory and is usually inefficient because of the high likelihood of duplicated processing. Therefore, the image is first searched in a one-dimensional direction, labeling temporary identification information while determining whether each pixel is connected to a pixel previously labeled with temporary identification information.
- when labeling provisional identification information while scanning the image, if pieces of provisional identification information are later found to be connected, one of them is inherited and combined information is generated. When the scan of the image is complete and the combined information for the image has been collected, true identification information indicating each connected element is selected from the temporary identification information and the combined information, and the label image is generated again.
- This label image makes it possible to distinguish independent image elements, and can be used for various image processing.
- the pixels 5 are not processed one-dimensionally, one pixel at a time; instead, four pixels 5 that are vertically and horizontally adjacent are processed in parallel as one unit (pixel block) 2.
- this pixel block 2 is a 2 × 2 two-dimensional array, and the pixels 5 included in pixel block 2 are adjacent to each other. Therefore, based on 8-connectivity with connection directions in eight directions, if any of the plurality of pixels 5 included in one pixel block 2 is "1", no new logical operation is needed: all the "1" pixels 5 included in pixel block 2 are connected and always receive the same identification information, for example identification data (an identifier) such as a label.
- the scanning direction in units of pixel block 2 is not restricted to any particular one of up, down, left, or right.
- the adjacent pixel group 4, which is the target for determining the connection state of the pixels included in pixel block 2, includes six pixels 5 adjacent to the upper side and the left side of pixel block 2.
- the data temporarily identifying the four pixels P included in pixel block 2 is common, and it is labeled in parallel to the four pixels P included in pixel block 2.
- the four data items PID(i, j), PID(i, j+1), PID(i+1, j), and PID(i+1, j+1) (temporary identifiers, temporary IDs, or temporary labels) that temporarily identify the four pixels P(i, j), P(i, j+1), P(i+1, j), and P(i+1, j+1) are common. Therefore, the common identifier is labeled in parallel to a plurality of pixels.
- the temporary identifier for pixel block 2 is determined by referring to the temporary identifiers of the six pixels P(i−1, j−1), P(i−1, j), P(i−1, j+1), P(i−1, j+2), P(i, j−1), and P(i+1, j−1). This process is repeated while scanning the entire image 1 in units of pixel block 2.
- the pixels 5 included in pixel block 2 are referred to as pixels g0 to g3 in the above order, and the pixels 5 included in the adjacent pixel group 4 are referred to as pixels r0 to r5 in the above order.
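The coordinate layout of g0 to g3 and r0 to r5 can be written down directly. The following is a small illustrative helper (the function name is our own) matching the orders given above:

```python
def block_and_neighbours(i, j):
    """Pixel coordinates (row, col) of the 2 x 2 block whose upper-left
    pixel is P(i, j), plus its six previously scanned neighbour pixels.
    The list orders match g0..g3 and r0..r5 in the text."""
    g = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]         # g0..g3
    r = [(i - 1, j - 1), (i - 1, j), (i - 1, j + 1),
         (i - 1, j + 2), (i, j - 1), (i + 1, j - 1)]             # r0..r5
    return g, r
```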
- pixel block 2 inherits a temporary identifier included in the adjacent pixel group 4 according to the pixel states of the adjacent pixel group 4 and of pixel block 2.
- FIG. 3 shows examples in which the temporary identifier labeled on the pixels 5 of pixel block 2 is determined only by the state of the pixel g0 at the upper left of pixel block 2.
- when pixel g0 of pixel block 2 is "0", inheritance of a temporary identifier is not determined by pixel g0 alone. In the case shown, the temporary identifier contained in the adjacent pixel group 4 is not inherited regardless of the state of the adjacent pixel group 4, and pixel g3 is given a new temporary identifier.
- when pixel g0 of pixel block 2 is "1" and pixels r0 to r2, r4, and r5 of the adjacent pixel group 4 are "0", the adjacent pixel group 4 contains no temporary identifier to be inherited for pixel g0. However, depending on the states of pixel r3 of the adjacent pixel group 4 and pixel g1 of pixel block 2, pixel block 2 may still inherit a temporary identifier contained in the adjacent pixel group 4. If there is no inheritable temporary identifier, a new temporary identifier is given to the pixels of pixel block 2 including pixel g0.
- pixel g0 of pixel block 2 is "1".
- temporary identifiers are labeled on pixel r0 and pixel r2 of the adjacent pixel group 4. That is, pixel r0 and pixel r2 of the adjacent pixel group 4 in the figure are "1" and have previously been labeled with temporary identifiers.
- temporary identifiers are assigned to pixel r2 and pixel r5 of adjacent pixel group 4.
- the ON “1” pixels in the adjacent pixel group 4 are not continuous (connected), and the temporary identifiers labeled on these pixels may be different.
- the temporary label that pixel block 2 can inherit is not limited to one associated with pixel g0.
- pixel g0 of pixel block 2 is "1". Pixels r0 and r1 of the adjacent pixel group 4 are "1", and since these pixels r0 and r1 are connected, they are highly likely to carry the same temporary identifier. Pixel r4 of the adjacent pixel group 4 is also "1" and has a temporary identifier. One of these temporary identifiers is inherited for pixel g0.
- FIG. 4 shows the combinations of pixels g0 to g3 involved in determining the temporary identifier of pixel block 2 and the corresponding combinations of the adjacent pixel group 4.
- combinations #1 to #5 show the cases in which a temporary identifier given to the adjacent pixel group 4 is inherited and attached to pixel block 2.
- the combinations of states of the adjacent pixel group 4 shown in FIG. 4 are logical sums (ORs): if a temporary identifier is attached to a pixel indicated by "1", one of those temporary identifiers is inherited as the temporary identifier of pixel block 2, depending on the state of the pixels of pixel block 2.
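The inheritance test underlying the table in FIG. 4 can be sketched as follows. Note the adjacency sets below are derived from the 8-adjacency of the g0–g3 and r0–r5 coordinates rather than copied from the figure, so they are an assumption for illustration:

```python
# Neighbour pixels r0..r5 that are 8-adjacent to each block pixel g0..g3
# (derived from the coordinates; g3 touches no previously scanned pixel).
ADJ = {0: (0, 1, 2, 4, 5), 1: (1, 2, 3), 2: (4, 5), 3: ()}

def inheritable_ids(g_on, r_id):
    """Temporary identifiers of neighbour pixels that touch an ON block
    pixel. One of them is inherited by the whole block; any others are
    recorded as combined information. `g_on` is four 0/1 flags for
    g0..g3, `r_id` six identifiers for r0..r5 (0 = unlabeled)."""
    return sorted({r_id[r] for k in range(4) if g_on[k]
                   for r in ADJ[k] if r_id[r]})
```

For example, when only g0 is ON and only r3 carries an identifier, nothing is inheritable, matching the case described for FIG. 3.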
- the labeling described above extracts strictly continuous (connected) components.
- it is also meaningful to extract image elements by rough grouping, which identifies components that are intermittent within a range of one to several pixels. Because rough grouping does not require strict continuity, it can be used to extract components that are originally continuous but have become discontinuous over a range of one to several pixels when the image was digitized by a scanner or the like.
- because rough grouping extracts image elements made of related pixels at high speed and without reducing the accuracy of the image data, it can also be used as pre-processing before a full analysis using labeling that extracts strictly connected elements. For example, a high-resolution image may first be converted to a low-resolution image that is labeled to create a label image and provisionally determine boundary positions; the original high-resolution image is then labeled again in the areas near the boundaries, generating a high-resolution label image from which the boundary positions are determined.
- with that approach the labeling range for the high-resolution image can be limited, but the low-resolution image data serves only for provisional determination of the boundary positions, and its data is coarse.
- with block labeling, by contrast, the boundary positions can be provisionally determined at high speed without generating degraded image data.
- since block labeling does not change the data resolution, the same image data can be used both for high-precision grouping and for determining the feature values of the image elements.
- FIG. 5 and FIG. 6 show examples of rough grouping using block labeling.
- This grouping can identify pixels for image elements that do not necessarily have connected pixels.
- the pixels 5 to be grouped are the ON "1" pixels, as described above. Even in this rough grouping, it is not necessary to process each of the two-dimensionally arranged pixels 5 independently when identifying them.
- a pixel processing unit, that is, pixel block 2, is again assumed. In rough grouping, if at least one of the pixels 5 included in pixel block 2 is ON "1", pixel block 2 is treated as ON as a whole; and if two adjacent pixel blocks 2 both contain at least one ON pixel 5, all the ON pixels 5 included in those pixel blocks 2 are given the same identification information (identifier, ID, or label value).
- the pixels 5 included in pixel block 2 are two-dimensionally adjacent to each other. Therefore, if any of the plurality of pixels 5 included in one pixel block 2 is "1", no logical operation on the positional relationship between the pixels is needed: the "1" pixels 5 included in pixel block 2 are continuous (connected) and always receive the same identifier. Furthermore, if pixel block 2 includes at least one "1" pixel 5, pixel block 2 is ON, and if two adjacent pixel blocks 2 are both ON, a common identifier is given to all the ON pixels 5 included in those pixel blocks 2.
- all the pixels included in the pixel blocks 2 can thus be grouped by computing the positional relationship of the pixel blocks 2, without computing the positional relationships of the individual pixels 5 they contain. As a result, the number of pixels that can be labeled in parallel increases, and the processing time spent on labeling can be shortened.
- In the rough grouping shown in FIGS. 5 and 6, four pixel blocks 2 adjacent in the vertical and horizontal directions are labeled in parallel as one large pixel block 3, that is, as one pixel processing unit, and a label image is generated.
- The large pixel block 3 consists of four 2×2 pixel blocks 2 adjacent to each other in two dimensions. Therefore, if any of the pixel blocks 2 included in one large pixel block 3 is ON, no new logical operation is required: those pixel blocks 2 are ON, and a common identifier is labeled to the pixels 5 they contain. By performing the grouping process with the large pixel block 3 as a unit, the 16 pixels 5 (2 × 2 × 4) can thus be processed in parallel, and the logical operations on the relationships among these 16 pixels 5 can be omitted.
- The pixels 5 included in a large pixel block 3 lie within a distance of two pixel blocks 2 of one another, and can be regarded as identified as belonging to a group of pixels connected by that relationship. Further, when both a large pixel block 3 and a pixel block 2 adjacent to it include ON pixels 5, labeling a common identifier to the pixels 5 included in the large pixel block 3 and that pixel block 2 can be understood as a grouping over a distance of up to three pixel blocks 2.
- The scanning direction for large pixel blocks 3 may be top-to-bottom, bottom-to-top, left-to-right, or right-to-left; it does not affect the result.
- In this example, the image 1 shown in FIG. 5 is scanned from left to right (the Y direction). Therefore, the adjacent pixel group 4 whose relationship with one large pixel block 3 is determined consists of the six pixel blocks 2 adjacent to the top and left of the large pixel block 3.
- FIGS. 6A and 6B show the configuration of the pixel block 2 included in the large pixel block 3 and the adjacent pixel group 4.
- The large pixel block 3 is composed of pixel blocks BL5, BL6, BL8, and BL9 (hereinafter each pixel block 2 is denoted BL), and their temporary identifiers PID5, PID6, PID8, and PID9 are common.
- The temporary identification values PID0 to PID4 and PID7 of the six small pixel blocks BL0 to BL4 and BL7 included in the adjacent pixel group (adjacent pixel block group) 4 are referenced.
- FIGS. 7A to 7D show an algorithm that, from the ON/OFF states of the pixel blocks 2 of the adjacent pixel group 4 and of the large pixel block 3, either labels the large pixel block 3 by inheriting a temporary identifier contained in the adjacent pixel group 4 or labels it with a new temporary identifier.
- In FIG. 7A, all the pixel blocks 2 included in the large pixel block 3 are 0. That is, since the large pixel block 3 contains no ON pixels 5 to be grouped, no temporary identifier is labeled (NOP).
- In FIG. 7B, all the pixel blocks 2 included in the adjacent pixel group 4 are 0, and the large pixel block 3 includes ON pixels.
- The neighboring pixel group 4 contains no ON pixels 5 to be grouped, and thus no temporary identifier that can be inherited. Therefore, a new temporary identifier is given to all the pixels 5 of the pixel blocks 2 included in the large pixel block 3; that is, a common new temporary identifier is labeled to the ON pixels 5 included in the large pixel block 3.
- In FIG. 7C, non-adjacent pixel blocks 2 of the adjacent pixel group 4 are ON, and the large pixel block 3 includes ON pixels.
- The pixel blocks 2 of the large pixel block 3 inherit one of the plural temporary identifiers possessed by the adjacent pixel group 4, and the inherited temporary identifier is labeled to the pixels 5 of the large pixel block 3.
- The pixel blocks 2 of the adjacent pixel group 4 are then included in a common group via the large pixel block 3. A new connection relationship may therefore arise between the temporary identifiers of those pixel blocks 2, and the connection information for the temporary identifiers so connected is output.
- In FIG. 7D, an adjacent pixel block 2 is ON, and the large pixel block 3 includes ON pixels. Therefore, the temporary identifier of the adjacent pixel group 4 is commonly assigned to the pixels 5 of the pixel blocks 2 of the large pixel block 3. Since the adjacent pixel blocks 2 in the adjacent pixel group 4 are ON, the same temporary identifier has already been given to those pixel blocks 2, and no new connection relationship arises.
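- The four cases of FIGS. 7A to 7D reduce to one decision per large pixel block, given the temporary identifiers of the neighbour blocks and the large block's own ON state. The sketch below is illustrative (the names and return convention are assumptions), and choosing the smallest identifier to inherit is likewise an assumption, since the text does not specify which of several identifiers is inherited.

```python
def decide(neighbor_ids, block_is_on):
    """Decide the labeling action for one large pixel block, following
    the four cases of FIG. 7 (illustrative names, not the patent's).
    neighbor_ids: temporary IDs of the six neighbour blocks (0 = OFF)."""
    if not block_is_on:
        return ("NOP", None)              # (a) nothing to label
    ids = {i for i in neighbor_ids if i}  # IDs of ON neighbour blocks
    if not ids:
        return ("NEW", None)              # (b) no inheritable ID: issue a new one
    if len(ids) == 1:
        return ("INHERIT", ids.pop())     # (d) single ID: inherit it as-is
    chosen = min(ids)                     # (c) several IDs: inherit one and
    merges = {(chosen, other) for other in ids if other != chosen}
    return ("MERGE", (chosen, merges))    #     record the connection info
```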
- The algorithm shown in FIG. 7 thus performs grouping by assigning a common temporary identifier to pixel blocks 2 contained in the same large pixel block 3, treating them as belonging to the same group even when they are not adjacent.
- Accordingly, a common temporary identifier is assigned to the pixels 5 included within a range of up to three small pixel blocks 2.
- Instead of the algorithm shown in FIG. 7, it is also possible to adopt an algorithm that assigns a common temporary identifier only to pixel blocks 2 that are strictly adjacent.
- the algorithm is the same as that described above with reference to FIGS. 3 and 4 and labels a common temporary identifier for the pixels 5 included in the range of two pixel blocks 2 at the maximum.
- The condition of the large pixel block 3 reduces to whether or not it includes an ON pixel 5. Therefore, the state of the large pixel block 3 can be determined by computing the logical OR of the 16 pixels 5 it contains.
- The state of the adjacent pixel group 4 is determined by the ON states of the pixel blocks 2 it contains, and the state of each pixel block 2 can be determined by computing the logical OR of its four pixels 5. The process of labeling temporary identifiers can therefore be executed in a pipelined manner by hardware capable of performing logical ORs on multiple pixel data in parallel.
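- In software, this OR-based state determination can be sketched as follows (a sequential stand-in for the parallel hardware OR; function names are illustrative):

```python
def block_state(pixels):
    """State of one 2x2 pixel block: the logical OR of its four pixels."""
    state = 0
    for p in pixels:          # pixels: four 0/1 values
        state |= p
    return state

def large_block_state(blocks):
    """A large pixel block is ON if any of its four 2x2 blocks is ON,
    i.e. the logical OR of all 16 pixels it contains."""
    state = 0
    for block in blocks:      # blocks: four lists of four 0/1 values
        state |= block_state(block)
    return state
```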
- a pixel group included in a high-resolution image can be grouped at high speed without reducing the resolution of the high-resolution image.
- Software and an image processing apparatus that recognize image boundaries at high speed can thus be provided; and since image accuracy is maintained, the feature values of image elements can be obtained with high precision.
- FIG. 8 is a flowchart showing an example of processing for analyzing an image using block labeling.
- main data input / output is indicated by a one-dot chain line.
- The image processing 10 generates a label image and calculates the feature values of the image elements distinguished by means of the label image.
- The process of generating the label image 25 included in the image processing 10 comprises a first stage of scanning the image and attaching temporary identification information (temporary identifier, temporary ID, temporary label) to the pixel groups constituting image elements.
- The pixels grouped by block labeling are not limited to pixels connected to each other; they may be pixels that are not connected but have a predetermined relationship. Therefore, the image elements distinguished by the image processing 10 are not limited to those composed of connected pixels.
- the image processing 10 further includes an analysis stage 13 for extracting feature values of image elements formed by grouped pixel groups.
- The image processing 10 further includes, in order to extract feature values even from multi-valued or gradation-expression (grayscale) images rather than only binary images, a step 14 of calculating block feature values of pixel blocks composed of multi-valued pixels, alongside the first stage of labeling temporary identifiers.
- The first stage 11 of labeling temporary identifiers has an input process 100 that acquires, from the pixel data 29 constituting the image, the 16 pixel data included in the large pixel block 3 and the 24 pixel data of the adjacent pixel group 4, and supplies them to the labeling process 200 below. The first stage 11 further includes the labeling process 200, which labels a temporary identifier common to the 16 pixels 5 included in the large pixel block 3. In the input process 100, in step 101, the data of the pixels 5 included in the large pixel block 3 is input from the pixel data file 29.
- The multi-valued pixel data 5 acquired from the pixel data file 29 is binarized in step 103; this step is unnecessary if the pixel data in the file 29 is already binary. Further, the data of the pixels 5 of the adjacent pixel group 4, which were previously labeled with temporary identifiers and temporarily stored in the buffer (buffer memory) 28, and the data of the temporary identifiers labeled to those pixels 5, are acquired in step 104.
- In step 201, the conditions of the large pixel block 3 and the adjacent pixel group 4 are logically computed, and in step 202 the presence or absence of an inheritable temporary identifier is determined.
- The algorithm for inheriting the temporary identifier is as described with reference to FIGS. 7(a) to (d). If the adjacent pixel group 4 contains only one inheritable temporary identifier (condition d1), then in step 205 that temporary identifier is inherited, labeled as the common temporary identifier of the pixels 5 of the large pixel block 3, and output to the temporary label image file 27 in units of the large pixel block 3. In addition, the temporary identifier information of the adjacent pixel group 4 required for processing subsequent large pixel blocks 3 is temporarily stored, in units of the pixel block 2, in the buffer memory 28, which can be accessed at high speed.
- When the adjacent pixel group 4 includes a plurality of temporary identifiers that can or should be inherited (condition d2), combined information for those temporary identifiers is recorded in step 203. That is, the combination of the temporary identifier inherited by the pixels 5 of the large pixel block 3 and the other identifiers that were not inherited is output to the combined information file 26. Then, in step 205, the temporary identifier inherited by the pixels 5 of the large pixel block 3 is labeled and output to the temporary label image file 27.
- If there is no inheritable temporary identifier, a new temporary identifier is generated in step 204, and in step 205 the new identifier is labeled to the pixels 5 of the large pixel block 3 and output to the temporary label image file 27. In this manner, a temporary label image in which temporary identifiers are labeled to the pixels constituting the input image is generated.
- In the first stage 11 of labeling temporary identifiers, the data of the 40 pixels Pi included in the large pixel block 3 and the adjacent pixel group 4 are read in parallel.
- In the labeling process 200, a temporary identifier is labeled to the grouping target pixels (in this example, ON or “1” pixels) among the 16 pixels Pi included in the large pixel block 3.
- The input process 100 and the labeling process 200 can be executed in a pipelined manner by implementing them in hardware as a series of processes.
- Step 201, which decodes the input 40 pixels Pi, and step 205, which labels the temporary identifier thereby determined, are executed in a pipelined manner. These processes can be implemented in hardware as described. Therefore, the first stage 11, which labels temporary identifiers to the 16 pixels 5 included in the large pixel block 3, can be executed in substantially one clock.
- By configuring the hardware of the first stage 11 so that steps 203 and 204 are processed in parallel with step 201, which computes inheritance, and step 205, which performs labeling, the first stage 11 can be executed without stalling the pipeline that reads and labels 16 pixels.
- In addition, the multi-valued data of the pixels 5 of the large pixel block 3 to which the temporary identifier is labeled is analyzed, and shading information is calculated in units of the large pixel block 3.
- The block-level gray level information is compressed to the unit of the large pixel block 3 (in this example, to 1/16) as a block feature value and output to the block feature value file 22. Since the 16 pixels 5 included in a large pixel block 3 are labeled with the same temporary identifier, they are later labeled with the same true identifier and form part of the same image element.
- The grayscale information includes, for example, the maximum and minimum density, the average, and the like.
- By tallying the block feature values for each image element, the grayscale information of the image element can be obtained, and the processing time for analyzing the grayscale information can be shortened.
- In step 14, the pixels 5 included in the large pixel block 3 are input from the pixel data 29 being processed.
- By obtaining the grayscale information in units of the large pixel block 3 in parallel with the first stage 11, the process of accessing the pixel data file 29 again to calculate grayscale information can be omitted, and the processing time for analysis can be shortened.
- In step 15, the integration table 23 is generated from the combined information accumulated in the combined information file 26.
- In step 203, if the adjacent pixel group 4 includes pixels 5 labeled with different temporary identifiers, the pair consisting of the temporary identifier inherited by the pixels of the large pixel block 3 and the uninherited temporary identifier is recorded in the combined information file 26.
- The inherited temporary identifier and the uninherited temporary identifier are identification information indicating the same group (image element). Therefore, in the second stage 12, the identifier finally belonging to the same group (the true identifier) is labeled again to the pixels 5 carrying those temporary identifiers. For this purpose, the inherited and uninherited temporary identifiers must be integrated in advance, and the integration table 23 is generated in step 15.
- From the combined information 26 of temporary identifiers, a common true identifier (true label) is allocated to the temporary identifiers labeled to pixels belonging to the same group, and an integration table 23 showing the correspondence between temporary identifiers and true identifiers is generated. The integration table 23 can, for example, use a temporary identifier as an address and return the corresponding true identifier.
- With the integration table 23, a temporary identifier can be converted into a true identifier by referring to it as an address.
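- A minimal sketch of this lookup (the table contents below are illustrative, not from the patent):

```python
# Integration table sketch: the temporary identifier is used directly as
# an address (array index), and the stored value is the true identifier.
integration_table = [0, 1, 1, 2, 2]   # temp IDs 1,2 -> true 1; 3,4 -> true 2

def to_true_id(temp_id):
    """One table lookup converts a temporary ID to a true ID."""
    return integration_table[temp_id]
```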
- If the process extracts image elements whose pixels are connected, the combination of a plurality of temporary identifiers indicates that the pixels labeled with those temporary identifiers are connected.
- the combination of a plurality of temporary identifiers does not necessarily mean that pixels labeled with a plurality of temporary identifiers are connected. However, these pixels have a predetermined range of relevance.
- In the second stage 12, the true identifier is labeled to the pixel data stored in the temporary label image file 27 to generate a label image (true label data), which is output to the label image file 25.
- The temporary label image can also be recorded in bitmap format. Recording it in units of the pixel block 2, with its common temporary identifier, or further in units of the large pixel block 3, saves memory space and makes it easy to read out pixel data in units of the large pixel block 3 in the second stage 12.
- In step 121, the pixel data included in the temporary label image data 27 is input in parallel in units of the large pixel block 3.
- In step 122, if unprocessed pixels remain in the temporary label image data 27, the temporary identifier of the large pixel block 3 is converted into a true identifier in step 123 by referring to the integration table 23, and with that identifier as a common identifier, the pixels 5 included in the large pixel block 3 are labeled in parallel. As a result, label data is generated in which pixels 5 having a predetermined relationship are labeled with a true identifier identifying an independent image element, and this data is output to the label image file 25. In step 123 as well, a common true identifier is labeled in parallel to the grouping target pixels contained in each large pixel block 3.
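- The second stage can be sketched as follows, representing each large pixel block as a (temporary ID, 16-bit pixel mask) pair; a single table lookup then relabels all 16 pixels of the block at once. The pair representation and function name are assumptions for illustration.

```python
def relabel(temp_blocks, table):
    """Second-stage sketch: each element of temp_blocks is
    (temporary_id, 16-bit pixel mask) for one large pixel block.
    One lookup in the integration table converts the temporary ID,
    and the result applies to all 16 pixels of the block at once."""
    return [(table[tid], mask) for tid, mask in temp_blocks]
```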
- the analysis stage 13 is executed.
- analysis is performed in units of large pixel block 3, and the block feature value is calculated.
- In step 132, the block feature values of the large pixel blocks 3 having the same true identifier are tallied repeatedly, and the feature value of each image element is calculated.
- Feature values that can be calculated from binary pixels, that is, from binary data, can be computed in units of the pixel block or the large pixel block on the basis of the temporary label image data 27, in which binary pixels are labeled with temporary identifiers.
- For grayscale information, the block feature values of the large pixel blocks 3 are obtained in step 14 as described above; therefore, by summing them in step 133, feature values related to the shading of each image element can also be calculated.
- the feature value includes the area, the center of gravity, and the vertical and horizontal dimensions of the image element.
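- The tallying of such feature values from per-block statistics can be sketched as below; the tuple layout (pixel count, coordinate sums, and bounding-box extremes per block) and the function name are assumptions for illustration.

```python
def element_features(block_stats):
    """Tally per-block statistics into per-element feature values:
    area (pixel count), centre of gravity, and horizontal/vertical size.
    Each entry: (true_id, count, sum_x, sum_y, min_x, max_x, min_y, max_y)."""
    acc = {}
    for tid, n, sx, sy, x0, x1, y0, y1 in block_stats:
        a = acc.setdefault(tid, [0, 0, 0, x0, x1, y0, y1])
        a[0] += n; a[1] += sx; a[2] += sy          # area and coordinate sums
        a[3] = min(a[3], x0); a[4] = max(a[4], x1)  # bounding box in x
        a[5] = min(a[5], y0); a[6] = max(a[6], y1)  # bounding box in y
    return {tid: {"area": a[0],
                  "cog": (a[1] / a[0], a[2] / a[0]),
                  "size": (a[4] - a[3] + 1, a[6] - a[5] + 1)}
            for tid, a in acc.items()}
```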
- Rather than calculating feature values for each image element from the label image 25 labeled with true identifiers, the block feature values can be aggregated per image element by referring to the integration table 23. Therefore, if hardware resources are sufficient, the hardware can be configured to execute the analysis stage 13 in parallel with the second stage 12.
- the first stage 11 for labeling the temporary identifier and the second stage 12 for labeling the true identifier are executed in this order. These processes (steps) do not overlap for the same image.
- the analysis stage 13 may be executed after the second stage 12 or may be executed in parallel.
- the second stage 12 that labels the true identifier and the analysis stage 13 can be executed in parallel after step 15 of generating the integration table 23 is completed.
- Since the execution timings of the first stage 11 and the second stage 12 do not overlap, the image processing 10 can be executed on reconfigurable hardware while switching between the circuit that executes the first stage 11 and the circuit that executes the second stage 12, so that hardware resources are used efficiently.
- The image processing 10 can process a large amount of pixel data in parallel, thereby reducing processing time. The processing time can therefore be reduced by implementing the processing 10 on a processor that includes a plurality of processing elements and a processing area in which a plurality of data paths operated in parallel by those processing elements can be configured. The processing elements desirably have logic operation functions of a certain scale and are included in a reconfigurable integrated circuit device.
- the processing device 30 shown in FIG. 9 is an example of reconfigurable hardware, and includes an area where a circuit can be dynamically reconfigured.
- The processing device 30 includes a matrix area (processing area) 31 in which various data paths can be configured by connecting processing elements (hereinafter EXE) 32 having a certain level of arithmetic function, for example ALUs. The processing device 30 further includes a controller 33 that controls the connections of the EXEs 32 in the matrix 31 to dynamically configure the data paths, a RAM 34 in which the hardware information (configuration information) of the data paths configured in the matrix 31 is recorded, and a buffer 35 that temporarily records data processed by the circuits of the matrix 31. The processing device 30 is also provided with an interface for inputting and outputting data to and from the external memory 36.
- A processing device in which data paths operating in parallel can be configured by connecting multiple EXEs 32 is suited to processing multiple pixel data in parallel, and is a hardware resource suitable for the image processing 10.
- By reconfiguring the connections of the EXEs 32 in the matrix area (hereinafter, matrix) 31 of the processing device 30 so that the stages 11 to 13 of the image processing 10 are executed in order, the processing device 30 operates as dedicated hardware for performing the image processing 10.
- Below, an image processing system 50 that executes the image processing 10 using the processing device 30 is described. If hardware resources such as the EXEs 32 in the matrix 31 are sufficient, the processing device 30 can simultaneously execute processing other than the image processing related to labeling.
- FIGS. 10A to 10C show how the matrix 31 that is a processing area is reconfigured so that the processing device 30 functions as the image processing system 50.
- To make the processing device 30 function as the image processing system 50, in this example three types of configuration information 51 to 53 are prepared in advance and stored in the configuration RAM 34 of the processing device 30. The controller 33 then changes the configuration of the matrix 31 at appropriate timings and executes the image processing 10.
- FIG. 10(a) shows the state in which the matrix 31 is reconfigured by the first configuration information 51 so that the first stage 11 and the process 14, which analyzes multi-valued pixel data in units of the large pixel block 3, are executed in parallel.
- FIG. 10(b) shows the state in which the matrix 31 is reconfigured by the second configuration information 52 so as to execute the process of generating the integration table.
- FIG. 10C shows a state in which the matrix 31 is reconfigured so that the second stage 12 and the analysis stage 13 are executed in parallel by the third configuration information 53.
- By the first configuration information 51, an interface 54 having a configuration for executing the input process 100 of the first stage 11 is added to the matrix area 31 of the processing device 30.
- a labeling processor (labeling engine) 55 having a configuration for executing the labeling process 200 is configured.
- Further, by the first configuration information 51, an analysis processor (analysis engine, second processor) 56 having a configuration for executing step 14, which analyzes multi-valued pixel data, is configured in the matrix area 31, together with a peripheral circuit 57 including circuitry for supplying data from the interface 54 to the labeling processor 55 and the analysis processor 56.
- the interface 54 has a function of inputting pixel data included in the large pixel block 3 in parallel and a function of inputting temporary identifier data of the adjacent pixel group 4.
- The labeling processor 55 includes a function 55a for computing and determining the inheritance of the temporary identifier, a function 55b for labeling the temporary identifier, a function 55c for outputting the combined information of the inherited and uninherited temporary identifiers, and a function 55d for generating a new temporary identifier.
- The function 55b for labeling temporary identifiers uses the inherited temporary identifier or the new temporary identifier as a common temporary identifier and labels it in parallel to all the grouping target ON pixels 5 included in the large pixel block 3.
- FIG. 11 shows in more detail the outline of the circuit configured in the matrix 31 by the first configuration information 51.
- The interface 54 loads the pixel data included in the large pixel block 3 from the pixel data file 29 in the external memory 36, binarizes it with the binarization circuit 61, and supplies the binary data to the labeling processor 55.
- multi-value pixel data is supplied to the processor 56 for analysis.
- The temporary identifiers (temporary IDs) of the adjacent pixel group 4 are obtained from the buffer 28 and supplied to the labeling processor 55.
- the labeling processor 55 includes a logic circuit 65 that calculates a logical sum of data supplied from the interface 54, a look-up table (LUT) 66 that determines whether there is a temporary ID to be inherited based on the result of the logical sum, A selector 67 for selecting a temporary ID and a selector 68 for selecting combined information are provided.
- The logic circuit 65 generates a 10-bit address 79 from the logical OR of each of the 10 pixel blocks 2 (BL0 to BL9 in FIG. 6) corresponding to the large pixel block 3 and the adjacent pixel blocks 2. The LUT 66 takes the value 79 as its address input and outputs the microcode stored there as the ID control signal 71.
- the microcode 71 controls various logics including the selectors 67 and 68.
- the data generation circuit 69 that performs labeling labels the temporary IDs in parallel for the 16 pixels 5 included in the large pixel block 3.
- The data generation circuit 69 attaches the selected temporary ID 72 to the 16 binarized pixel data supplied from the interface circuit 54 and outputs 1-word (32-bit) block pixel data 73.
- The block pixel data 73 includes an ID 73d and 16 pixel data 73p. The labeling of the 16 pixel data contained in the large pixel block 3 is therefore batched as one word of data and processed in parallel.
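- The packing of one large pixel block into a single word can be sketched as follows. Placing the ID 73d in the upper 16 bits and the 16 pixel bits 73p in the lower 16 is an assumed layout; the text specifies only that one 32-bit word holds the ID together with the 16 pixel bits.

```python
def pack_block(temp_id, pixel_bits):
    """Pack one large pixel block into a 32-bit word: the temporary ID
    in the upper 16 bits and the 16 binarized pixels in the lower 16
    (the 16/16 split is an assumption, not stated in the text)."""
    assert 0 <= pixel_bits < (1 << 16) and 0 <= temp_id < (1 << 16)
    return (temp_id << 16) | pixel_bits

def unpack_block(word):
    """Recover (temporary ID, 16-bit pixel mask) from one word."""
    return word >> 16, word & 0xFFFF
```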
- the temporary label image data output to the temporary label image file 27 is composed of block pixel data 73.
- FIG. 12 shows a schematic circuit configuration until the labeling processor 55 generates and outputs block pixel data 73 from the supplied pixel data.
- In practice, the interface 54 cuts out, with a shift register and a mask circuit, the pixel data of the large pixel block 3 and its adjacent pixel group (adjacent pixel block group) 4 from the pixel data 29 transferred from the external memory 36 into the line buffer 35. For example, the pixel data 5 of lines Li0 to Li5 and columns Co0 to Co7 shown in FIGS. 5 and 6 are loaded. These 40 bits of pixel data can be read in one clock (one cycle) if a sufficient bus width is secured.
- The logic circuit 65 of the labeling processor 55 computes the logical OR of the pixel data 5 of the 0th line Li0 and the 1st line Li1 with the OR circuit 65a, and determines whether each of the blocks BL0 to BL3 is ON, that is, whether each block has at least one ON pixel.
- Similarly, the OR circuit 65b computes the logical OR of the pixel data 5 of the second line Li2 and the third line Li3, determining the ON states of the blocks BL4 to BL6.
- The OR circuit 65c computes the logical OR of the pixel data 5 of the fourth line Li4 and the fifth line Li5, determining the ON states of the blocks BL7 to BL9.
- The states of the adjacent pixel group 4 and the large pixel block 3 can be determined from the results of these OR circuits 65a, 65b, and 65c. The OR circuit 65d further combines their outputs, and the OR results of the 10 pixel blocks BL0 to BL9 are supplied to the LUT 66 as a 10-bit address input 79. As a result, the appropriate microcode is output from the LUT 66 as the ID control signal 71.
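- The OR reduction into the 10-bit address can be sketched as below. Treating BL0 as the least significant bit is an assumed ordering; the hardware computes the ten ORs in parallel, whereas this sketch is sequential.

```python
def lut_address(blocks):
    """Form the 10-bit LUT address: one OR-reduced bit per pixel block
    BL0..BL9 (the six neighbour blocks plus the four blocks of the large
    pixel block), with BL0 in the least significant bit (assumed order)."""
    addr = 0
    for i, pixels in enumerate(blocks):   # blocks: ten lists of 0/1 pixels
        bit = 0
        for p in pixels:
            bit |= p                      # OR of the block's pixels
        addr |= bit << i
    return addr
```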
- the LUT 66 can be configured using RAM elements provided in advance in the matrix area 31.
- With such a circuit configuration, the series of processes of loading the pixel data 5, computing the logical ORs in order, and outputting the ID control signal 71 proceeds sequentially without looping back.
- By configuring many parallel data paths using the numerous elements 32 arranged in the reconfigurable matrix 31, the pixel data of one or more large pixel blocks 3 can be processed in parallel and in a pipelined manner. Therefore, the temporary ID of at least one large pixel block 3, that is, of at least 16 pixels, can be determined in one clock (one cycle).
- The data generation circuit 69 generates the block pixel data 73, which contains the information of the 16 pixels of one large pixel block 3 together with the temporary ID commonly assigned to them and is one word (32 bits) long, and outputs it to the temporary label image file 27 as temporary label image data.
- The block pixel data 73 can further include the position information of the large pixel block 3, characteristic values of the large pixel block 3 calculated from the information of its 16 pixels, and the like.
- the data for the 16 pixels included in the large pixel block 3 and the provisional ID data 72 labeled on them are supplied to the data generation circuit 69.
- Since the provisional ID data 72 of the large pixel block 3 is supplied to the data generation circuit 69 under the ID control signal 71 of the LUT 66, a certain computation time is required after the pixel data of the large pixel block 3 is input.
- the data for 16 pixels loaded by the input interface 54 is supplied to the data generation circuit 69 via an appropriate delay circuit or pipeline register, thereby synchronizing with the temporary ID data 72 of the large pixel block 3. Can be supplied to the data generation circuit 69. Therefore, in the labeling processor 55, processing from loading the pixel data of the large pixel block 3 from the line buffer 35 to labeling and outputting the temporary ID to the pixel data can be executed in a pipeline manner.
- Accordingly, the temporary ID of at least one large pixel block 3, that is, of at least 16 pixels, is determined in substantially one clock, and temporary label image data labeled with the temporary ID can be output. The image processing system 50 can therefore group at least 16 pixels per cycle and perform image processing at a speed at least ten times that of labeling in units of a single pixel.
- Moreover, since the pixel data 73p retains the original resolution, the resolution of the analyzed image is not degraded.
- FIG. 13 shows a schematic configuration of the processor 56 that extracts the feature quantity in units of the large pixel block 3.
- The analysis processor 56 is supplied with the original data of the 16 pixels of one large pixel block 3 cut out from the line buffer 35 by the interface 54, that is, grayscale (multi-valued) pixel data. The threshold processing unit 62 judges for each pixel whether its data contributes to the maximum or minimum shade. From the 16 thresholded pixel data, the selectors 63a and 63b compute the maximum and minimum values. If there is no error in the computation of the maximum and minimum values, the density data 74 is output to the block feature value file 22 through the gate circuit 63d.
- FIG. 14 shows a circuit configuration for performing threshold processing for one pixel in the threshold processing unit 62.
- the pixel data 29p for one pixel is compared with the first threshold value 62b by the comparator 62a, and it is determined that the pixel data 29p is significant when the pixel data 29p is larger than the first threshold value 62b.
- the carry 62x is asserted, and the pixel data 29p is output as data for comparing the maximum values by the selector 62e.
- Otherwise, “0” is output from the selector 62e and is ignored in the maximum-value comparison.
- Similarly, the pixel data 29p is compared with the second threshold value 62d by the comparator 62c, and the pixel data 29p is judged significant when it is smaller than the second threshold value 62d. In that case, the carry 62y is asserted and the pixel data 29p is output by the selector 62f as a candidate for the minimum-value comparison; otherwise, "FF" is output from the selector 62f and is ignored in the minimum-value comparison.
- The logical sum of the carries 62x and 62y is calculated by the circuit 62g, and the circuit 62h further takes the logical sum including the comparison results of the other pixels.
- the analysis processor 56 outputs grayscale information 74 in units of the large pixel block 3.
- The block feature data 74, which is the shading information in units of blocks, has a one-to-one correspondence with the block pixel data 73. Therefore, by subsequently tallying it using the temporary identifier (temporary ID) and the integration table 23, the feature value (shading information) of each image element can be obtained.
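The threshold-gated maximum/minimum extraction described above can be sketched in software as follows. This is a minimal sketch: the function name and interface are illustrative (not from the patent), and the neutral values 0x00 and 0xFF mirror the roles of the selectors 62e and 62f, which substitute those values for insignificant pixels.

```python
# Software sketch of the behavior inferred from threshold unit 62 and
# selectors 63a/63b; names and the (list, int, int) interface are illustrative.
def block_max_min(pixels, thr_hi, thr_lo):
    """Return (max, min, significant) over a 16-pixel block.

    Pixels above thr_hi compete for the maximum; pixels below thr_lo
    compete for the minimum. Insignificant pixels are replaced by the
    neutral values 0x00 (for the max) and 0xFF (for the min), as the
    selectors 62e/62f do in hardware.
    """
    max_candidates = [p if p > thr_hi else 0x00 for p in pixels]
    min_candidates = [p if p < thr_lo else 0xFF for p in pixels]
    # Mirrors the OR tree 62g/62h: any asserted carry marks the block significant.
    significant = any(p > thr_hi or p < thr_lo for p in pixels)
    return max(max_candidates), min(min_candidates), significant
```

In hardware all 16 comparisons run in parallel; the list comprehensions above are a sequential stand-in for the same dataflow.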
- the matrix 31 is reconfigured by the second configuration information 52 so as to generate an integrated table, as shown in FIG. 10 (b).
- In the combined information file 26, each temporary identifier inherited by a pixel of the large pixel block 3 is recorded paired with the temporary identifier that was not inherited. Therefore, as the next step in generating the label image, an integration table 23 is generated that gives the same true identifier (true ID) to the one or more temporary-identifier pairs that are in a connected relationship.
- the algorithm for generating the integration table 23 from the combined information file 26 is as follows.
- In the combined information file 26, a plurality of entries, each indicating a combination of two temporary IDs, are recorded.
- From the integration table 23, the corresponding true label can be read by using a temporary ID as the address.
- h1. When the temporary IDs of the nth entry in the combined information file 26 are "a" and "b", store the nth entry in the group queue.
- h2. Store the entry at the head of the group queue, for example the pair of "a" and "b", in the comparison target register, and store in the group queue those entries of the combined information file 26 that share a temporary ID with the register.
- Then the next entry is read from the group queue, stored in the comparison target register, and the same operation is performed.
- When the group queue becomes empty, the information to be stored in the integration table 23 for one true ID has been obtained, and the grouping for that true ID is complete.
- Next, the (n+1)th entry is read from the combined information file 26 and the same operation is performed.
- the combined information once stored in the group queue is not stored again in the group queue.
- a unique true ID is assigned to each temporary ID.
- the integrated table 23 is generated by the above operation.
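The group-queue procedure above amounts to collecting the connected components of the temporary-ID pairs. A minimal software sketch follows; the function name and the use of Python containers are illustrative (the hardware operates on file entries and registers), and temporary IDs that never appear in a pair are omitted here.

```python
from collections import deque

def build_integration_table(pairs):
    """Group temporary-ID pairs that share an ID and assign one true ID
    per connected group, returning {temporary ID: true ID}."""
    table = {}          # integration table 23: temporary ID -> true ID
    queued = set()      # entries already placed in the group queue
    next_true_id = 1
    for n in range(len(pairs)):
        if n in queued:
            continue
        queue = deque([n])               # h1: store the nth entry
        queued.add(n)
        while queue:                     # until the group queue is empty
            a, b = pairs[queue.popleft()]   # h2: comparison target register
            table[a] = table[b] = next_true_id
            # Entries sharing an ID with the register join the group queue;
            # an entry once queued is never queued again.
            for m, (c, d) in enumerate(pairs):
                if m not in queued and {c, d} & {a, b}:
                    queue.append(m)
                    queued.add(m)
        next_true_id += 1                # grouping for this true ID complete
    return table
```

For example, the pairs (1,2), (2,5) end up under one true ID and (3,4), (4,6), (6,7) under another, since each chain is connected through a shared temporary ID.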
- That is, the second configuration information 52 configures, in the matrix region 31, a data path for executing the above algorithm.
- Next, as shown in FIG. 10(c), the matrix 31 is reconfigured by the third configuration information 53 so as to execute the second stage 12, and the matrix 31 is further reconfigured to execute the analysis stage 13.
- To execute the second stage 12, an interface 59 that inputs, from the temporary label image file 27, block pixel data 73 having a temporary ID 73d and pixel data 73p, and a labeling processor (labeling engine) 60 that relabels the temporary ID to the true ID are configured. In addition, an analysis processor (analysis engine, first processor) 80 that calculates a feature value for each image element is configured, comprising a circuit 81 that decodes the block pixel data 73 and calculates a feature value in units of the large pixel block 3, and a circuit 82 that aggregates the block-unit feature values based on the integration table 23.
- FIG. 16 shows a circuit example of the labeling processor 60, which inputs the block pixel data 73 and, referring to the integration table 23, labels a true identifier (true ID, true label) in units of the large pixel block 3.
- the interface circuit 59 accesses the temporary label image file 27 to obtain block pixel data 73.
- the block pixel data 73 includes 16 pieces of pixel data 73p constituting the large pixel block 3, and these pixel data are input in parallel.
- the true identifier labeling processor 59b accesses the integrated table 23 using the temporary ID 73d of the block pixel data 73 as an address, and acquires the true ID.
- The elements 32 of the matrix 31 are used as selectors that operate in parallel: pixels that are on ("1") and belong to the group are labeled with the true ID, the other pixels are set to "0", and the result is output to the label image file 25.
- the labeling processor 60 can also output block pixel data obtained by rewriting the ID value 73d of the block pixel data 73 from the temporary ID to the true ID as label image data.
- In either case, the true ID is labeled for the 16 pixels of data 73p in parallel.
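The relabeling of one large pixel block can be sketched as follows. The names are illustrative; the hardware performs the table lookup once per block and stamps all 16 pixels in parallel, whereas this sketch uses a sequential comprehension.

```python
def relabel_block(temp_id, pixels, integration_table):
    """Relabel one large pixel block: read the true ID from the
    integration table using the temporary ID as the address, stamp it
    on every '1' pixel, and set the remaining pixels to 0."""
    true_id = integration_table[temp_id]   # table lookup by address
    return [true_id if p == 1 else 0 for p in pixels]
```

A block whose temporary ID maps to true ID 7 thus has all of its on pixels rewritten to 7 in a single pass.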
- FIG. 17 shows a circuit example of the analysis processor 80.
- This circuit 80 implements a logic for obtaining the maximum value in the Y coordinate direction.
- The circuit 80 comprises a first circuit 81 that obtains the feature value (maximum value) of each large pixel block 3, and a second circuit 82 that aggregates these by true ID to obtain the maximum value over the pixels grouped under each true ID. The first circuit 81 includes a decoder 83 that converts the block pixel data 73, which contains the data 73p for 16 pixels, into control data, and a selector 84 that obtains the feature value of each large pixel block 3, that is, the maximum value in the Y coordinate direction, based on that control data.
- The second circuit 82 converts the temporary ID 73d of the block pixel data 73 into a true ID using the integration table 23 and, using the true ID as an address, accesses the Y-Max table 85 through the Y-Max table I/F 86. The selector 87 receives as inputs the Y-coordinate maximum value for that true ID obtained from the table 85 via the I/F 86 and the Y-coordinate maximum obtained by the selector 84, and selects the larger value. The selector 87 then outputs the new maximum value to the Y-Max table 85 via the I/F 86, updating the stored maximum.
- The analysis processor 80 further includes a circuit 89 that reads the subsequent block pixel data 73 in advance and compares its true ID via the integration table 23 before the write to the table 85; when the true ID is the same, the maximum value including that subsequent data 73 is obtained before writing. This circuit 89 shortens the processing time when block pixel data 73 with the same true ID arrive consecutively. Because the analysis processor 80 performs a read-modify-write on the Y-Max table 85, consecutive inputs with the same true ID must otherwise be assumed, which increases the pipeline latency. By adding the circuit 89 and reading and comparing the subsequent block pixel data 73 in advance, the latency of the feedback path can be reduced from 5 cycles to 3 cycles.
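The read-modify-write accumulation of the Y-coordinate maximum per true ID can be sketched as follows. This is illustrative only: a dict stands in for the Y-Max table 85, the input is assumed to be (temporary ID, per-block Y maximum) pairs, and the look-ahead circuit 89 is not modeled.

```python
def accumulate_y_max(blocks, integration_table):
    """For each block (temp_id, block_y_max), translate the temporary ID
    to a true ID and keep the running Y-coordinate maximum per true ID."""
    y_max = {}                                   # stand-in for Y-Max table 85
    for temp_id, block_y_max in blocks:
        true_id = integration_table[temp_id]     # lookup via integration table 23
        prev = y_max.get(true_id, block_y_max)   # read
        y_max[true_id] = max(prev, block_y_max)  # modify and write back
    return y_max
```

Each iteration is the software analogue of one read-modify-write cycle on the table; it is exactly this feedback path that the look-ahead circuit 89 shortens in hardware.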
- the image processing method 10 and the image processing device 50 described above can group non-adjacent pixels according to a desired rule.
- Furthermore, by changing the logic for labeling the temporary identifier to the one described above with reference to the figures, a processing method and processing apparatus that label only connected pixels can be provided with an almost identical configuration.
- In the above, the basic small pixel block 2 is composed of four adjacent pixels; when grouping related pixels over a longer range, the basic pixel block may be composed of five or more pixels. Likewise, although the large pixel block 3 is composed of four adjacent pixel blocks 2, when grouping related pixels over a longer range, a large pixel block may be composed of five or more pixel blocks 2.
- Binarization of pixels is not limited to monochrome; each color component of a color image can also be binarized.
- Block labeling can be applied not only to two-dimensional images but also to three-dimensional images; in that case, as described above, a basic pixel block is composed of 8 mutually adjacent pixels.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020077005396A KR101199195B1 (ko) | 2004-08-20 | 2005-08-19 | 라벨 이미지의 생성 방법 및 화상처리 시스템 |
JP2006531881A JP4803493B2 (ja) | 2004-08-20 | 2005-08-19 | ラベルイメージの生成方法および画像処理システム |
US10/590,778 US7974471B2 (en) | 2004-08-20 | 2005-08-19 | Method of generating a labeled image and image processing system with pixel blocks |
EP05772583.0A EP1783688B1 (en) | 2004-08-20 | 2005-08-19 | Method for generating label image and image processing system |
KR1020127011233A KR101225146B1 (ko) | 2004-08-20 | 2005-08-19 | 라벨 이미지의 생성 방법 및 화상처리 시스템 |
US13/049,544 US8208728B2 (en) | 2004-08-20 | 2011-03-16 | Method of generating a labeled image and image processing system with pixel blocks |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-240281 | 2004-08-20 | ||
JP2004240281 | 2004-08-20 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/590,778 A-371-Of-International US7974471B2 (en) | 2004-08-20 | 2005-08-19 | Method of generating a labeled image and image processing system with pixel blocks |
US13/049,544 Division US8208728B2 (en) | 2004-08-20 | 2011-03-16 | Method of generating a labeled image and image processing system with pixel blocks |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006019165A1 true WO2006019165A1 (ja) | 2006-02-23 |
Family
ID=35907548
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/015163 WO2006019165A1 (ja) | 2004-08-20 | 2005-08-19 | ラベルイメージの生成方法および画像処理システム |
Country Status (6)
Country | Link |
---|---|
US (2) | US7974471B2 (ja) |
EP (2) | EP2618308B1 (ja) |
JP (1) | JP4803493B2 (ja) |
KR (2) | KR101225146B1 (ja) |
CN (1) | CN100578545C (ja) |
WO (1) | WO2006019165A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7912286B2 (en) * | 2005-05-10 | 2011-03-22 | Ricoh Company, Ltd. | Image processing apparatus and method of image processing capable of effective labeling |
US20110110591A1 (en) * | 2009-11-09 | 2011-05-12 | Ming-Hwa Sheu | Multi-point image labeling method |
US9042651B2 (en) | 2009-11-09 | 2015-05-26 | National Yunlin University Of Science And Technology | Multi-point image labeling method |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8233535B2 (en) * | 2005-11-18 | 2012-07-31 | Apple Inc. | Region-based processing of predicted pixels |
US8213734B2 (en) * | 2006-07-07 | 2012-07-03 | Sony Ericsson Mobile Communications Ab | Active autofocus window |
JP4257925B2 (ja) * | 2006-08-24 | 2009-04-30 | シャープ株式会社 | 画像処理方法、画像処理装置、原稿読取装置、画像形成装置、コンピュータプログラム及び記録媒体 |
KR101030430B1 (ko) * | 2007-09-12 | 2011-04-20 | 주식회사 코아로직 | 영상 처리 장치와 방법 및 그 기록매체 |
JP2009082463A (ja) * | 2007-09-28 | 2009-04-23 | Fujifilm Corp | 画像分析装置、画像処理装置、画像分析プログラム、画像処理プログラム、画像分析方法、および画像処理方法 |
CN102473312B (zh) * | 2009-07-23 | 2015-03-25 | 日本电气株式会社 | 标记生成装置、标记生成检测系统、标记生成检测装置及标记生成方法 |
US8446439B1 (en) * | 2009-08-06 | 2013-05-21 | The United States Of America As Represented By The Secretary Of The Navy | Apparatus and method for single pass BLOB analysis of high frame rate video |
US8600171B2 (en) * | 2009-12-10 | 2013-12-03 | Canon Kabushiki Kaisha | Image labeling using parallel processing |
US8657200B2 (en) * | 2011-06-20 | 2014-02-25 | Metrologic Instruments, Inc. | Indicia reading terminal with color frame processing |
US9721319B2 (en) | 2011-10-14 | 2017-08-01 | Mastercard International Incorporated | Tap and wireless payment methods and devices |
US9443165B2 (en) | 2012-06-08 | 2016-09-13 | Giesecke & Devrient Gmbh | Blob-encoding |
US9262704B1 (en) * | 2015-03-04 | 2016-02-16 | Xerox Corporation | Rendering images to lower bits per pixel formats using reduced numbers of registers |
CN104836974B (zh) | 2015-05-06 | 2019-09-06 | 京东方科技集团股份有限公司 | 视频播放器、显示装置、视频播放系统和视频播放方法 |
KR101850772B1 (ko) * | 2015-05-27 | 2018-04-23 | 삼성에스디에스 주식회사 | 의료용 메타 데이터베이스 관리 방법 및 그 장치 |
KR101699029B1 (ko) * | 2015-08-07 | 2017-01-23 | 이노뎁 주식회사 | 영역 처리 속도를 향상한 영상 처리 장치 및 영상 처리 방법 |
US10720124B2 (en) * | 2018-01-15 | 2020-07-21 | Microsoft Technology Licensing, Llc | Variable pixel rate display interfaces |
TWI710973B (zh) * | 2018-08-10 | 2020-11-21 | 緯創資通股份有限公司 | 手勢識別方法、手勢識別模組及手勢識別系統 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61224672A (ja) * | 1985-03-29 | 1986-10-06 | Toshiba Corp | 特徴抽出装置 |
JPH03222074A (ja) * | 1990-01-29 | 1991-10-01 | Canon Inc | 画像処理用ラベル付け装置 |
JPH07105368A (ja) * | 1993-10-06 | 1995-04-21 | Tokimec Inc | 画像のラベリング方法および画像のラベリング装置 |
JPH0950527A (ja) * | 1995-08-09 | 1997-02-18 | Fujitsu Ltd | 枠抽出装置及び矩形抽出装置 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH077444B2 (ja) * | 1986-09-03 | 1995-01-30 | 株式会社東芝 | 三次元画像の連結成分抽出装置 |
JP2966084B2 (ja) * | 1990-11-29 | 1999-10-25 | 本田技研工業株式会社 | 画像処理における局所的領域分割方法 |
US5384904A (en) * | 1992-12-08 | 1995-01-24 | Intel Corporation | Image scaling using real scale factors |
JP3307467B2 (ja) * | 1993-04-09 | 2002-07-24 | 三菱電機株式会社 | ラベリング方式およびラベリング回路 |
JP2891616B2 (ja) * | 1993-09-24 | 1999-05-17 | 富士通株式会社 | 仮ラベル割付処理方式と実ラベル割付処理方式 |
JP2734959B2 (ja) | 1993-12-27 | 1998-04-02 | 日本電気株式会社 | 仮ラベル付け方法 |
CN1516102A (zh) * | 1998-02-09 | 2004-07-28 | 精工爱普生株式会社 | 液晶显示装置及其驱动方法和使用该液晶显示装置的电子装置 |
JP2000285237A (ja) * | 1999-03-31 | 2000-10-13 | Minolta Co Ltd | 画像処理装置、画像処理方法及び画像処理プログラムを記録した記録媒体 |
US6643400B1 (en) | 1999-03-31 | 2003-11-04 | Minolta Co., Ltd. | Image processing apparatus and method for recognizing specific pattern and recording medium having image processing program recorded thereon |
JP2002230540A (ja) | 2001-02-02 | 2002-08-16 | Fuji Xerox Co Ltd | 画像処理方法 |
TWI234737B (en) | 2001-05-24 | 2005-06-21 | Ip Flex Inc | Integrated circuit device |
DE60232125D1 (de) | 2001-09-21 | 2009-06-10 | Ricoh Kk | Multi-Level Datenverarbeitung für die Aufzeichnung |
CN1459761B (zh) * | 2002-05-24 | 2010-04-21 | 清华大学 | 基于Gabor滤波器组的字符识别技术 |
US7477775B2 (en) * | 2003-07-18 | 2009-01-13 | Olympus Corporation | Microscope system |
CN1216349C (zh) * | 2003-08-14 | 2005-08-24 | 中国人民解放军第一军医大学 | 基于广义模糊随机场的图像优化分割方法 |
2005
- 2005-08-19 KR KR1020127011233A patent/KR101225146B1/ko active IP Right Grant
- 2005-08-19 WO PCT/JP2005/015163 patent/WO2006019165A1/ja active Application Filing
- 2005-08-19 US US10/590,778 patent/US7974471B2/en not_active Expired - Fee Related
- 2005-08-19 JP JP2006531881A patent/JP4803493B2/ja active Active
- 2005-08-19 CN CN200580027460A patent/CN100578545C/zh active Active
- 2005-08-19 EP EP13164299.3A patent/EP2618308B1/en active Active
- 2005-08-19 KR KR1020077005396A patent/KR101199195B1/ko active IP Right Grant
- 2005-08-19 EP EP05772583.0A patent/EP1783688B1/en active Active
2011
- 2011-03-16 US US13/049,544 patent/US8208728B2/en active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP1783688A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP2618308B1 (en) | 2017-05-10 |
US20110164818A1 (en) | 2011-07-07 |
EP1783688A1 (en) | 2007-05-09 |
KR101199195B1 (ko) | 2012-11-07 |
US8208728B2 (en) | 2012-06-26 |
KR20070046916A (ko) | 2007-05-03 |
US20070248266A1 (en) | 2007-10-25 |
JPWO2006019165A1 (ja) | 2008-05-08 |
KR101225146B1 (ko) | 2013-01-22 |
CN101006467A (zh) | 2007-07-25 |
US7974471B2 (en) | 2011-07-05 |
CN100578545C (zh) | 2010-01-06 |
EP1783688B1 (en) | 2017-04-12 |
KR20120066058A (ko) | 2012-06-21 |
EP1783688A4 (en) | 2012-10-31 |
EP2618308A1 (en) | 2013-07-24 |
JP4803493B2 (ja) | 2011-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4803493B2 (ja) | Label image generation method and image processing system | |
CN107851327B (zh) | Coarse-to-fine search method, image processing device, and recording medium | |
Vega-Rodríguez et al. | An FPGA-based implementation for median filter meeting the real-time requirements of automated visual inspection systems | |
JPH1166325A (ja) | Method and apparatus for determining object boundaries, and recording medium recording an object boundary determination program | |
US8229251B2 (en) | Pre-processing optimization of an image processing system | |
Sutheebanjard | Decision tree for 3-D connected components labeling | |
Rakesh et al. | Skeletonization algorithm for numeral patterns | |
CN107766863B (zh) | Image representation method and server | |
Davalle et al. | Hardware accelerator for fast image/video thinning | |
JP2006085686A (ja) | Image processing method and apparatus | |
WO2022074746A1 (ja) | Deterioration detection device, deterioration detection method, and program | |
US6760466B2 (en) | Automatic image replacement and rebuilding system and method thereof | |
Goyal et al. | A parallel thinning algorithm for numeral pattern images in BMP format | |
Nguyen et al. | Fast parallel algorithms: from images to level sets and labels | |
CN112991139A (zh) | Algorithm acceleration method for extracting FAST feature points based on segmentation windows | |
JP3272381B2 (ja) | Region boundary point extraction method | |
Neeraja et al. | FPGA based area efficient median filtering for removal of salt-pepper and impulse noises | |
CN115908473A (zh) | Image connected-domain connection method and device | |
Rodrigues et al. | A systolic array approach to determine the image threshold by local edge evaluation | |
JP2009070250A (ja) | Image processing apparatus and program | |
AU2009227822A1 | A method for improved colour representation of image regions | |
JPS63837B2 (ja) | ||
AU2009233625A1 | Progressive window colour quantisation | |
JPH0224782A (ja) | Change detection method for result images in logical filtering of binary images | |
IE47847B1 (en) | Automatic image processor
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006531881 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580027460.2 Country of ref document: CN |
|
REEP | Request for entry into the european phase |
Ref document number: 2005772583 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005772583 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020077005396 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2005772583 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10590778 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 10590778 Country of ref document: US |