WO2006132194A1 - Information processing device and information processing method, image processing device and image processing method, and computer program - Google Patents
- Publication number
- WO2006132194A1 (PCT/JP2006/311251)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- node
- integration
- nodes
- area
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Definitions
- the present invention relates to an information processing apparatus that handles enormous amounts of data, and more particularly to an information processing apparatus that grows raw data, composed of a large number of nodes that cannot be perceived individually, into a small number of perceptible units called segments.
- the present invention also relates to an image processing apparatus and an image processing method for generating and displaying a two-dimensional image of a physical object, and to a computer program.
- more particularly, the present invention relates to an image processing apparatus, an image processing method, and a computer program that handle an object as a collection (segment) of a large number of nodes such as polygons and perform two-dimensional image processing of the object.
- in such image processing, a mesh segmentation process is performed in which the polygon mesh is adjusted to an appropriate roughness through processing such as division of the image region and merging of the divided regions.
- still more particularly, the present invention relates to an image processing apparatus, an image processing method, and a computer program for performing progressive mesh segmentation processing according to applications that use 2D or 3D computer graphics.
- CG: computer graphics
- this kind of graphics system is generally composed of a geometry subsystem as the front end and a raster subsystem as the back end.
- the geometry subsystem treats an object as a collection of many fine polygons (usually triangles), that is, a polygon mesh, and performs geometric calculations such as coordinate transformation, clipping, and light-source calculation for each vertex defining the polygons.
- the roughness of the mesh obtained by dividing the original object into regions greatly affects the processing load and image quality.
- if the polygon mesh is made finer, the number of vertices to be processed increases in proportion, and the processing amount also increases.
- if the size of the polygons is increased instead, the final image becomes rough. For this reason, a mesh segmentation process that performs image region division, merges the divided regions, and adjusts the polygon mesh to an appropriate roughness according to the application using CG is necessary.
- Mesh segmentation is a fundamental technique for growing raw data into a small number of perceptible units called “segments”.
- Mesh segmentation dates back to the early days of computer image processing in the 1970s (see, for example, Non-Patent Document 1), but it is still an active field. From the beginning, mesh segmentation has dealt with color images, moving images, distance images (also known as depth images or range images), 3D solid data, 3D meshes, and so on.
- Hierarchical segmentation can be realized by creating multiple polygon meshes (segments) with different roughness in the mesh segmentation process. Performing hierarchical mesh segmentation progressively, that is, smoothly, further broadens the range of applications that use the images.
- Mesh segmentation is basically processed based on the similarity between adjacent image regions.
- for example, in one known technique, the color signal of an input video is converted into a predetermined color space, and initial video division is performed to divide the input video into a plurality of areas according to the positions of its color pixels in this color space.
- the divided regions are then organized into a plurality of layers according to the horizontal adjacency relationships between them and the vertical inclusion relationships; adjacent regions are grouped in each layer, vertical inclusion relationships between the region groups of each layer are extracted to structure the regions, and the combining order between regions is determined according to the horizontal relationships between regions and the vertical inclusion relationships between region groups. The success or failure of combining adjacent regions is then evaluated in the determined order, and regions judged to have substantially the same video characteristics are combined (see, for example, Patent Document 1).
- Patent Document 1: Japanese Patent Laid-Open No. 2001-43380
- Non-Patent Document 1: A. Rosenfeld, "Picture Processing by Computer" (Academic Press, 1969)
- Non-Patent Document 2: Sagi Katz and Ayellet Tal, "Hierarchical mesh decomposition using fuzzy clustering and cuts," in Proc. SIGGRAPH 2003, ACM Trans. on Graphics 22, 3 (2003), 382-391
- an object of the present invention is to provide an excellent information processing apparatus, information processing method, and computer program capable of growing raw data composed of a large number of minute nodes that cannot be perceived individually into a small number of perceptible units called segments.
- a further object of the present invention is to provide an excellent image processing apparatus, image processing method, and computer program capable of treating a physical object as an aggregate of a large number of fine nodes (that is, a segment) and processing its image while growing the segments through integration processing between the nodes.
- a further object of the present invention is to provide an excellent image processing apparatus, image processing method, and computer program capable of suitably performing a mesh segmentation process in which an image area is divided and the divided areas are merged to grow the regions and adjust the polygon mesh to an appropriate roughness.
- a further object of the present invention is to provide an excellent image processing apparatus, image processing method, and computer program capable of performing a progressive mesh segmentation process at high speed and with high accuracy in accordance with applications that use 2D or 3D computer graphics.
- the present invention has been made in view of the above problems, and a first aspect thereof is an information processing apparatus that handles data in which a topology is formed by a plurality of nodes each having an attribute value, comprising:
- a topology evaluation unit that obtains weighting factors of the edges connecting nodes based on the attribute values of nodes adjacent to each other on the topology, and sorts the edges based on the weighting factors; and
- a node integration processing unit that takes out pairs of nodes connected by edges in the sorted order, evaluates whether the nodes should be integrated based on a predetermined statistical processing algorithm, and performs node integration processing.
- by repeatedly executing such integration of nodes, raw data consisting of many nodes that cannot be perceived individually can be grown into a small number of perceptible units called segments.
- with the statistical processing algorithm here, whether adjacent nodes are similar, in other words, whether the nodes can be integrated, is determined based on, for example, a judgment formula derived from the concentration inequality phenomenon in the attribute information of each node.
- the node integration processing based on such a statistical processing algorithm can be performed at high speed because it is configured by a simple calculation that statistically processes attribute information possessed by each node. For example, millions of polygons can be processed per second using a general computer such as a personal computer.
- by adjusting the parameter value included in the judgment formula, it is possible to freely set the criterion for integrating nodes and to grow segments to a desired roughness; the system thus has scalability.
- as described above, in the present invention, the topology of the plurality of nodes constituting the raw data is used as the input value, and node integration processing is performed recursively according to a statistical processing algorithm,
- that is, mesh growing is performed.
- as a result, a segment of arbitrary roughness can be generated.
- further, a plurality of segments having different roughness can be generated smoothly.
- a second aspect of the present invention is an image processing apparatus that handles an object as a polygon mesh consisting of a plurality of polygons and performs image processing, comprising:
- an adjacency graph input unit for inputting an adjacency graph describing the polygon mesh;
- an adjacency graph evaluation unit that compares the attribute values of the image regions connected by each edge, assigns a weighting factor to the edge based on the comparison result, and sorts the edges in the adjacency graph based on the weight values; and
- an image region integration processing unit that extracts pairs of image regions sandwiching an edge in the sorted order, evaluates whether the image regions should be integrated based on a statistical processing algorithm, and performs image region integration processing.
- the image processing apparatus can further include a micro region processing unit that processes micro regions remaining as a result of the image region integration processing.
- the second aspect of the present invention relates to an image processing apparatus for generating and displaying a two-dimensional image of a two-dimensional or three-dimensional physical object.
- in such image processing, the object to be processed is usually treated as an aggregate of many fine polygons (usually triangles), that is, a polygon mesh.
- the roughness of the polygon mesh greatly affects the processing load and image quality, so a mesh segmentation process is required that divides the image area, merges the divided areas, and adjusts the polygon mesh to an appropriate roughness according to the application using 3DCG.
- in addition, the range of applications that use the images can be expanded by performing mesh segmentation progressively, that is, smoothly.
- in the second aspect of the present invention, by determining whether or not adjacent image regions should be integrated based on a statistical processing algorithm, integration of image regions is repeatedly executed, starting from the large number of minute polygons obtained by dividing a three-dimensional object, to generate a polygon mesh with the desired roughness.
- with the statistical processing algorithm here, whether adjacent image regions are similar, in other words, whether the image regions can be integrated, is determined based on, for example, a judgment formula derived from the concentration inequality phenomenon in the areas of the polygons constituting the image regions.
- integration processing of image regions based on such a statistical processing algorithm can be performed at high speed because it consists of simple calculations that statistically process polygon areas. For example, millions of polygons can be processed per second on a general computer such as a personal computer. In addition, by adjusting the parameter values included in the judgment formula, it is possible to freely set the criterion for integrating image regions and to generate a polygon mesh with a desired roughness; the system thus has scalability. Therefore, according to the present invention, a set of a large number of small polygons obtained by dividing the physical object to be processed is used as the input value, and integration processing of the image regions composing the polygon mesh is performed according to a statistical processing algorithm,
- that is, mesh growing is performed, whereby a polygon mesh having an arbitrary roughness can be generated.
- in addition, a plurality of polygon meshes having different roughness can be generated smoothly; in other words, progressive mesh segmentation can be realized, so the invention can be applied to various interactive applications.
- examples of mesh segmentation applications according to the present invention include parameterization and texture mapping, image morphing, multi-resolution modeling, image editing, image compression, animation, and shape matching.
- in the second aspect of the present invention, the polygon mesh serving as an image region is expressed in the form of an adjacency graph (incidence graph) that describes the relationships between the plurality of polygons that are its constituent elements.
- that is, the individual polygons constituting the polygon mesh are treated as nodes, and corresponding nodes are connected by edges corresponding to the sides where adjacent polygons touch each other; the adjacency graph thus described is used as the input.
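As an illustrative sketch (the data structures and names here are not from the patent; a simple 2D triangle mesh is assumed), such an adjacency graph can be built by treating each triangle as a node and connecting the nodes of any two triangles that share a side:

```python
from itertools import combinations

def triangle_area(a, b, c):
    # 2D cross-product formula; for a 3D mesh, use the norm of the cross product
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def build_adjacency_graph(vertices, triangles):
    """Nodes are triangle indices; a graph edge links two triangles sharing a side."""
    nodes = {i: {"area": triangle_area(*(vertices[v] for v in tri)), "count": 1}
             for i, tri in enumerate(triangles)}
    side_to_tri = {}   # maps a sorted vertex pair (a side) to the triangle that owns it
    edges = []
    for i, tri in enumerate(triangles):
        for side in combinations(sorted(tri), 2):  # the three sides of the triangle
            if side in side_to_tri:
                edges.append((side_to_tri[side], i))  # shared side -> adjacency edge
            else:
                side_to_tri[side] = i
    return nodes, edges
```

For a unit square split into two triangles, the single shared diagonal yields exactly one edge in the graph, and each node starts with its own area and a polygon count of 1, matching the initial state described in the text.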
- next, sorting is performed by evaluating each edge of the input adjacency graph. Specifically, an edge is evaluated by comparing the attribute values of the image regions it connects and assigning a weighting factor to the edge based on the comparison result.
- the image region referred to here includes both a polygon, which is the minimum unit, and an image region configured as a polygon mesh obtained by integrating a plurality of polygons.
- as the attribute value referred to here, for example, the area of the image region (the average area of the polygons integrated into the image region) is used, and the difference in area between the image regions connected by an edge can be given as the edge weight value. In this case, the smaller the area difference between the image regions, the smaller the weight value and the higher the processing order in the subsequent image integration processing. Alternatively, in addition to the areas of the polygons constituting the image region, edge weights can be given using pixel attribute information such as the normal direction or the color of the image region (the average color in the image region for at least one of the RGB components, in the case of textured polygon meshes).
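The weighting-and-sorting step can be sketched as follows (names are illustrative; each node is assumed to carry its total area and polygon count, so the attribute compared is the average polygon area):

```python
def edge_weight(nodes, i, j):
    # Similarity index: difference of the average polygon areas of the two regions
    avg_i = nodes[i]["area"] / nodes[i]["count"]
    avg_j = nodes[j]["area"] / nodes[j]["count"]
    return abs(avg_i - avg_j)

def sort_edges(nodes, edges):
    # Ascending order: the most similar region pairs are processed first
    return sorted(edges, key=lambda e: edge_weight(nodes, *e))
```

A pair with nearly equal average areas sorts ahead of a dissimilar pair, which is exactly the processing-order property stated above.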
- whether the image regions connected by an edge should be integrated is determined based on a judgment formula derived from the concentration inequality phenomenon in the areas of the polygons forming the image regions, using the following statistical quantities for the two image regions R_k and R_l connected by the edge:
- image region R_k has area S_k and is composed of n_k polygons;
- image region R_l has area S_l and is composed of n_l polygons;
- A is the maximum area of a polygon.
- since the judgment formula based on the above statistical processing algorithm includes a parameter Q for controlling the roughness of the segmentation, the value of the parameter Q that yields the desired segmentation roughness can be given from the outside. Further, when the desired segmentation roughness or the desired number of divided image regions is specified from the outside, it may be converted into a corresponding value of the parameter Q and given to the system. By setting the parameter Q flexibly in this way, progressive mesh segmentation can be realized, which makes the invention easy to apply to various interactive applications.
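The judgment formula itself appears in the patent only as an equation image that is not reproduced in this text. As a hedged reconstruction in the style of the statistical region merging literature, where S_k and n_k are the area and polygon count of region R_k, S_l and n_l those of R_l, and A is the maximum polygon area, it plausibly takes a form such as:

```latex
% Hedged reconstruction -- the patent gives the formula only as an
% equation image that is not reproduced in this text.
\left( \frac{S_k}{n_k} - \frac{S_l}{n_l} \right)^{2}
\;\le\; \frac{A^{2}}{2Q}\left( \frac{1}{n_k} + \frac{1}{n_l} \right)
```

Whatever its exact form, the stated behavior is that Q appears on the right-hand side, so increasing Q shrinks the bound and suppresses integration, while decreasing Q promotes integration.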
- in the adjacency graph, a node in the initial state is a minimum-unit polygon.
- as the integration processing proceeds, a node grows into an image region composed of a polygon mesh made up of a plurality of polygons.
- when a node is registered, its area and number of polygons are calculated and stored as node statistical information. Further, when image regions are integrated, the area and the number of polygons of the image region newly generated by the integration are calculated, and the node statistical information is updated.
- as the integration proceeds, the area of a grown image region and its number of polygons become very large values.
- in that case, the information about the polygons near the boundary, which is more important in judging whether integration with an adjacent image region is appropriate, is diluted in the statistics.
- as a result, the above judgment formula based on the statistical processing algorithm can no longer perform an accurate boundary judgment.
- to address this, only the polygons in the vicinity of the border where the image regions to be integrated touch each other, that is, the "Border Crust", may be left, and the subsequent image region integration processing may be performed on it.
- by using the Border Crust, the success or failure of subsequent image region integration can be judged more accurately than when using the Circular Crust.
- in this case, however, the adjacency graph must be updated in addition to the node statistical information, so the amount of computation increases.
- a third aspect of the present invention is a computer program described in a computer-readable format so as to execute, on a computer, processing for handling data in which a topology is formed by a plurality of nodes each having an attribute value, the program causing the computer to execute:
- a topology evaluation procedure for obtaining weighting factors of the edges connecting nodes based on the attribute values of nodes adjacent on the topology, and sorting the edges based on the weighting factors; and
- a node integration processing procedure for taking out pairs of nodes connected by edges in the sorted order, evaluating whether the nodes should be integrated based on a predetermined statistical processing algorithm, and performing node integration processing.
- a fourth aspect of the present invention is a computer program described in a computer-readable format so as to execute, on a computer, processing for handling an object as a polygon mesh consisting of a plurality of polygons and performing image processing, the program causing the computer to execute:
- an adjacency graph input procedure for inputting an adjacency graph describing the polygon mesh;
- an adjacency graph evaluation procedure for comparing the attribute values of the image regions connected by each edge, assigning a weighting factor to the edge based on the comparison result, and sorting the edges in the adjacency graph based on the weight values; and
- an image region integration processing procedure for extracting pairs of image regions sandwiching an edge in the sorted order, evaluating whether the image regions should be integrated based on a statistical processing algorithm, and performing image region integration processing.
- the computer programs according to the third and fourth aspects of the present invention define computer programs described in a computer-readable format so as to realize predetermined processing on a computer system. In other words, by installing the computer programs according to the third and fourth aspects of the present invention in a computer system, cooperative actions are exhibited on the computer system, and the same effects as those of the information processing apparatus according to the first aspect and the image processing apparatus according to the second aspect of the present invention can be obtained.
- according to the present invention, an excellent information processing apparatus and information processing method, as well as a computer program, can be provided that are capable of growing raw data consisting of a large number of minute nodes that cannot be perceived individually into a small number of perceptible units called segments.
- further, according to the present invention, a two-dimensional or three-dimensional object is treated as a collection of a large number of fine polygons (usually triangles), that is, a polygon mesh, and its two-dimensional image is processed.
- in particular, the mesh segmentation process that adjusts the polygon mesh to an appropriate roughness can be suitably performed.
- further, a progressive mesh segmentation process can be performed at high speed and with high accuracy according to applications that use 2D or 3D computer graphics.
- the image processing apparatus according to the present invention can perform image region integration processing at high speed based on a statistical processing algorithm, so progressive mesh segmentation processing is feasible even on a general computer.
- the mesh segmentation process according to the present invention can freely set the criterion for integrating image regions by adjusting a parameter value included in the judgment formula, and can generate a polygon mesh with a desired roughness.
- the system is also scalable and can be applied to various interactive applications such as parameterization and texture mapping, image morphing, multi-resolution modeling, image editing, image compression, animation, and shape matching.
- FIG. 1 is a diagram schematically showing a functional configuration of an image processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating an adjacency graph.
- FIG. 3 is a diagram illustrating an adjacency graph.
- FIG. 4 is a diagram for explaining a processing method for evaluating an edge.
- FIG. 5 is a flowchart showing an example of a processing procedure for performing a mesh segmentation process.
- FIG. 6 is a diagram showing an example of a segmentation result obtained interactively when a user sets a multi-scale parameter Q using a slide bar.
- FIG. 7 is a diagram showing a state in which the image area integration processing has progressed.
- FIG. 8 is a diagram showing a state in which only a polygon near the boundary, that is, “Circular Crust” is left over the entire circumference of the image area newly generated by integration.
- FIG. 9 is a flowchart showing a processing procedure for performing a mesh segmentation process in which only the "Circular Crust" is left.
- FIG. 10 is a diagram showing a state in which only the polygons in the vicinity of the boundary where the image regions to be integrated touch, that is, the "Border Crust", are left.
- FIG. 11 is a flowchart showing a processing procedure for performing a mesh segmentation process in which only “Border Crust” is left.
- FIG. 12 is a diagram showing a state in which the Border Crust is extracted from adjacent image regions.
- FIG. 13 is a diagram for explaining a process of updating an adjacency graph when a mesh segmentation process in which only “Border Crust” is left is performed.
- FIG. 14 is a diagram showing how the number of divided image areas is adjusted by setting the Q value via operation of a slide bar during mesh segmentation.
- FIG. 15 is a diagram showing how the number of divided image areas is adjusted by setting the Q value via operation of a slide bar during mesh segmentation.
- FIG. 16 is a diagram showing how the number of divided image areas is adjusted by setting the Q value via operation of a slide bar during mesh segmentation.
- FIG. 17 is a diagram schematically showing a functional configuration of an information processing apparatus according to an embodiment of the present invention.
- FIG. 18 is a diagram schematically showing a state in which adjacent nodes are integrated to generate a new node.
- FIG. 19 is a flowchart showing a procedure for performing segmentation processing by the information processing apparatus shown in FIG.
- the present invention relates to an information processing apparatus that handles raw data in which a topology is formed by a large number of minute nodes that cannot be perceived individually, and that grows the nodes into segments by applying a predetermined statistical processing algorithm to the attribute information possessed by each node.
- FIG. 17 schematically shows the functional configuration of the information processing apparatus according to an embodiment of the present invention.
- the information processing apparatus 50 shown in the figure includes a node input unit 51 that inputs, as the processing target, raw data having a topology formed by a plurality of nodes;
- a topology evaluation unit 52 that evaluates and sorts the edges connecting adjacent nodes on the topology; an integration processing unit 53 that takes out pairs of nodes connected by edges in the sorted order and evaluates them based on a statistical processing algorithm; and
- a micro node processing unit 54 that processes fine nodes that remain without growing into sufficiently large segments as a result of the node integration processing.
- although this type of information processing apparatus may be designed as a dedicated hardware device, it can also be realized by starting, on a general computer system such as a personal computer (PC), an application program that executes the processing corresponding to each of the functional modules 51 to 54.
- a typical computer system uses, for example, an Intel Pentium (registered trademark) IV (1.6 GHz) processor and has a main memory of 1 GB of RAM.
- the application program can be coded in the C++ language using the API (application programming interface) provided by the operating system (OS).
- the processing target data input to the node input unit 51 has a topology formed by a plurality of nodes.
- the topology consists of multiple nodes and edges connecting the nodes, and each node has attribute information. Further, when the integration processing unit 53 performs integration between nodes, attribute information relating to a new node is calculated.
- the topology evaluation unit 52 evaluates and sorts the edges connecting adjacent nodes included in the input data. Specifically, each edge is evaluated by comparing the attribute values of the nodes it connects and assigning a weighting factor based on the comparison result, and the edges in the topology are then sorted by weight value. The weight value given to an edge serves as an index of the similarity between the nodes connected by the edge.
- for example, the area of a node (the average of the areas of all the original nodes integrated into it) is used as attribute information; the difference in area between the nodes connected by an edge is given as the edge weight value, and sorting is performed in increasing order of weight. In this case, the smaller the area difference between nodes, the smaller the weight value and the higher the processing order in the subsequent integration processing.
- alternatively, the edge weight value can be evaluated using pixel attribute information (the average color of at least one of the RGB components).
- the integration processing unit 53 takes out pairs of nodes sandwiching the edges in the sorted order and performs integration processing to grow the segments. Since each edge is given a weight as an index of the similarity between the nodes it connects, performing the integration processing in ascending order of weight is equivalent to preferentially executing integration between similar nodes.
- the integration processing unit 53 determines whether or not to integrate a pair of nodes based on a statistical processing algorithm. Specifically, when the statistical information Stats.f(i) and Stats.f(j) that adjacent nodes f(i) and f(j) have as attribute values satisfies a judgment formula (merging predicate) based on a statistical algorithm, it is determined that nodes f(i) and f(j) should be integrated.
- the above judgment formula is derived from statistical concentration inequality, a phenomenon that appears in the areas of the polygons constituting an image region. This phenomenon is well known in the field of statistics through the central limit theorem.
- the central limit theorem characterizes the error between the sample mean and the true mean: regardless of the sample distribution, the error approximately follows a normal distribution as the number of samples increases.
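As a standard statement of this kind of concentration bound (textbook statistics, not quoted from the patent): for n independent samples X_1, ..., X_n bounded in [0, A], Hoeffding's inequality gives

```latex
% Standard Hoeffding bound for n independent samples in [0, A].
\Pr\left( \left| \bar{X} - \mathbb{E}\left[ \bar{X} \right] \right| \ge t \right)
\;\le\; 2 \exp\left( - \frac{2 n t^{2}}{A^{2}} \right)
```

so the deviation of the sample mean from the true mean shrinks on the order of A/sqrt(n); the judgment formula exploits this to decide whether two regions' mean attributes differ by more than sampling noise.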
- Q on the right side of the above equation is a parameter for controlling the roughness of the segmentation.
- when Q is increased, the value of the right-hand side decreases, making the judgment formula harder to satisfy; as a result, node integration is suppressed.
- conversely, when Q is set to a small value, the value of the right-hand side increases and the judgment formula is satisfied more easily, which promotes node integration, that is, segment growth.
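A sketch of such a judgment formula makes the role of Q concrete (this is a hedged stand-in in the style of statistical region merging; the patent's exact formula is not reproduced in this text):

```python
def should_merge(S_k, n_k, S_l, n_l, A, Q):
    """Hedged SRM-style judgment formula sketch (not the patent's exact formula).
    S_*: total region areas, n_*: polygon counts, A: maximum polygon area,
    Q: roughness-control parameter."""
    lhs = (S_k / n_k - S_l / n_l) ** 2
    rhs = (A ** 2 / (2.0 * Q)) * (1.0 / n_k + 1.0 / n_l)
    return lhs <= rhs  # larger Q -> smaller right-hand side -> fewer merges
```

For example, with regions of average areas 1.0 and 1.2 (ten polygons each) and maximum polygon area A = 2, the test is satisfied for Q = 1 but not for Q = 100, so a larger Q yields a finer segmentation, as described above.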
- Fig. 18 schematically shows how the i-th node Vi and the j-th node Vj are integrated based on the integration judgment formula (merging predicate) to generate a new node V'.
- each of the nodes Vi and Vj includes a general-information part, holding the number of contained nodes Ni or Nj and identification information IDi or IDj, and a media (data) part for storing attribute information. Since a node in the initial state contains only itself, its node count N is 1, but the node count N of V' obtained by integration is Ni + Nj.
- the new identification information ID is generated from the original identification information IDi and IDj using a disjoint set having a Union-Find data structure.
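A disjoint-set structure of the kind mentioned here can be sketched as follows (an illustrative implementation with path compression and union by size, not the patent's code):

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))  # each node is initially its own representative ID
        self.size = [1] * n           # number of original nodes in each set

    def find(self, i):
        # Follow parent links to the representative, compressing the path as we go
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return ri
        if self.size[ri] < self.size[rj]:
            ri, rj = rj, ri           # attach the smaller set under the larger one
        self.parent[rj] = ri          # the representative serves as the new node's ID
        self.size[ri] += self.size[rj]
        return ri
```

After integrating nodes 0, 1, and 2, all three share one representative ID and the set size is 3, mirroring the N' = Ni + Nj bookkeeping described above.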
- the attribute information of the media part of V' is obtained statistically from the attribute information of the nodes Vi and Vj.
- when color is the attribute information, the average color of the nodes Vi and Vj becomes the attribute information of the new node.
- when the area of a node is the attribute information, the average area of the nodes Vi and Vj becomes the attribute information of the new node.
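The statistical update of the media part on integration can be sketched like this (field names are illustrative): counts add, and averaged attributes such as area or color combine as count-weighted means:

```python
def merge_nodes(vi, vj):
    """vi, vj: dicts with 'count' and averaged attributes 'avg_area', 'avg_color'."""
    n = vi["count"] + vj["count"]          # N' = Ni + Nj
    wi, wj = vi["count"] / n, vj["count"] / n
    return {
        "count": n,
        # count-weighted means, so the result equals the average over all
        # original nodes contained in the new node V'
        "avg_area": wi * vi["avg_area"] + wj * vj["avg_area"],
        "avg_color": tuple(wi * a + wj * b
                           for a, b in zip(vi["avg_color"], vj["avg_color"])),
    }
```

Weighting by count (rather than averaging the two averages directly) keeps the stored statistics equal to the true mean over every original polygon in the grown node.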
- the micro node processing unit 54 processes the fine nodes that remain without growing into sufficiently large segments as a result of the node integration processing. For example, a minute node left unintegrated between or inside nodes that have grown into large segments is integrated into one of the adjacent segments regardless of whether the judgment formula is satisfied, improving the appearance of the processing result.
- FIG. 19 shows, in the form of a flowchart, the procedure of the segmentation processing executed on the information processing apparatus 50 shown in FIG. 17.
- first, the raw data to be processed is input (step S31).
- the raw data consists of nodes that form a topology.
- the node input unit 51 scans the topology of the input data, assigns identification information IDi to each node Vi, registers the identification information and the attribute information stored in the media part of each node as node statistical information, and thus performs initialization processing.
- Next, the topology evaluation unit 52 evaluates and sorts each edge connecting adjacent nodes (step S32). Specifically, the attribute-information difference between the nodes connected by an edge is given as the edge's weight value, and sorting is performed in ascending order of weight value.
- the parameter Q for controlling the roughness of the segmentation is set via the parameter setting unit 55 (step S33).
- the integration processing unit 53 extracts a pair of nodes connected by edges according to the sorted order (step S34). Then, the integration processing is performed based on whether these nodes satisfy the judgment formula based on the statistical algorithm (step S35).
- The judgment formula used here is derived from a statistical concentration inequality, a phenomenon that appears in the areas of the polygons composing an image region (as described above), and uses the parameter Q set in step S33.
- When the integration processing unit 53 integrates nodes, it generates a new node V', assigns a new ID' identifying that node, calculates the attribute information of the node newly generated by the integration, and updates the node statistical information (step S36).
- the integration processing unit 53 performs node update processing (step S37).
- the weighting factor of each edge between adjacent nodes is recalculated, and the edge is resorted according to the weight value.
- the process returns to step S34, and a pair of nodes connected by the edges is taken out in the sorted order, and the node integration process based on the statistical processing algorithm is repeated.
- When no more node pairs remain to be extracted in step S34, the micro-node processing unit 54 handles minute nodes that remain without having grown into a sufficiently large segment (step S38). For example, a minute node left unintegrated between or inside large segments is merged into one of the adjacent segments, regardless of whether the judgment formula is satisfied, to improve the appearance of the processing result.
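The loop of steps S31 to S38 can be sketched as follows. This is a simplified illustration under stated assumptions, not the patent's implementation: the exact merging predicate (an SRM-style bound using a parameter q and the maximum polygon area a_max) and all names are assumptions, and a Union-Find parent array tracks region ids.

```python
import math

def srm_segment(areas, edges, q, a_max):
    """Sketch of steps S31-S38: every polygon starts as its own node,
    edges are sorted by weight, and node pairs satisfying a statistical
    predicate are merged (Union-Find tracks region ids)."""
    parent = list(range(len(areas)))
    total = list(areas)       # summed area per region
    count = [1] * len(areas)  # polygon count per region

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def predicate(i, j):
        # Hypothetical merging test: the difference of the average
        # polygon areas must stay under a bound that shrinks as the
        # regions grow and as q increases.
        mi, mj = total[i] / count[i], total[j] / count[j]
        b = a_max * math.sqrt((1.0 / (2.0 * q)) * (1.0 / count[i] + 1.0 / count[j]))
        return abs(mi - mj) <= b

    # step S32: weight = attribute (area) difference, ascending sort
    for u, v in sorted(edges, key=lambda e: abs(areas[e[0]] - areas[e[1]])):
        ru, rv = find(u), find(v)        # steps S34-S35
        if ru != rv and predicate(ru, rv):
            parent[rv] = ru              # steps S36-S37: new node V'
            total[ru] += total[rv]
            count[ru] += count[rv]
    return [find(i) for i in range(len(areas))]
```

With a small q the predicate is permissive and a chain of equal-area nodes collapses into one region; with a very large q, regions of clearly different average area stay separate.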
- the present invention can be applied to an image processing apparatus for generating and displaying a two-dimensional image of a two-dimensional or three-dimensional object.
- 2D or 3D physical objects to be processed are treated as a collection of many fine polygons (usually triangles), ie polygon meshes, and image processing is performed.
- The roughness of the polygon mesh greatly affects both the processing load and the image quality. For this reason, segmentation processing is required: dividing image regions, merging the divided regions, and adjusting the polygon mesh to an appropriate roughness for the application that uses the computer graphics. Progressive or smooth mesh segmentation broadens the range of applications that can use the images.
- In the present invention, whether or not adjacent image regions of a three-dimensional object should be integrated is determined using a statistical processing algorithm.
- Starting from the large number of small polygons obtained by dividing the image, the image-region integration is executed repeatedly to generate a polygon mesh having the desired roughness.
- Specifically, whether or not to integrate adjacent image regions is determined based on a judgment formula derived from a statistical concentration inequality observed in the polygon mesh that constitutes an image region.
- Image region integration processing based on such a statistical processing algorithm consists of simple calculations that statistically process polygon areas, so high-speed processing is possible. For example, millions of polygons can be processed per second using a common computer such as a personal computer.
- FIG. 1 schematically shows a functional configuration of an image processing apparatus according to an embodiment of the present invention.
- The illustrated image processing apparatus 10 includes: an image information input unit 1 that inputs the 3D image information to be processed in the form of an adjacency graph; an adjacency graph evaluation unit 2 that evaluates and sorts each edge of the input adjacency graph; an image region integration processing unit 3 that extracts pairs of image regions sandwiching the edges in the sorted order, evaluates them based on a statistical processing algorithm, and performs mesh growing; and a micro region processing unit 4 that processes micro regions remaining as a result of the image region integration processing.
- Although this type of image processing apparatus 10 may be designed as a dedicated hardware apparatus, it can also be realized by launching, on a general computer system such as a personal computer (PC), an application program that executes the processing corresponding to each of the functional modules 1 to 4.
- a general computer system uses, for example, Intel Pentium (registered trademark) IV (1.6 GHz) as a processor, and has a main memory composed of 1 GB of RAM.
- The application program can be coded in the C++ language using, for example, the API (application programming interface) provided by OpenGL.
- In the present embodiment, a polygon mesh as an image region is represented by an adjacency graph (incidence graph, or Region Adjacency Graph: RAG) that describes the adjacency relationships between the polygons constituting it.
- An adjacency graph is composed of multiple nodes and the edges connecting them, but what the nodes and edges represent varies. For example, if a polygon is a node, its sides or vertices can be edges. If a polygon side is a node, vertices or polygons can be edges. Alternatively, if a vertex is a node, polygon sides or polygons can be edges.
- In the present embodiment, the image processing apparatus 10 handles an adjacency graph in which polygons are nodes and polygon sides are edges.
- The image information input unit 1 inputs an adjacency graph in which the individual polygons constituting the polygon mesh are nodes, and the nodes of polygons that touch each other are connected by edges corresponding to their shared sides.
- Specifically, each polygon Ti belonging to the target image region is associated with a node Ni, and when polygons Ti and Tj share a unique side, it is generated as an edge eij connecting nodes Ni and Nj.
- the adjacency graph can be constructed directly from the vertex and face index arrays by performing polygon sorting according to edge endpoints.
- The sides belonging to the individual polygons fall into two classes: boundary edges, which lie on the border of the polygon mesh, that is, the image region, and interior edges, which are shared with an adjacent polygon inside the mesh. Since an edge on the border of the image region belongs to only one polygon, only the edges other than the border (that is, inside the image region) are processed. An index array of vertices and faces is sufficient for this processing, and complicated adjacency data structures such as half-edge and quad-edge are not required.
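The construction described above can be sketched as follows (the triangle and array shapes are illustrative assumptions): sides shared by exactly two triangles become graph edges, while boundary sides, which belong to a single triangle, are skipped.

```python
from collections import defaultdict

def build_adjacency_graph(faces):
    """Build the adjacency graph (RAG) directly from a face index array:
    each triangle becomes a node, and an interior side shared by exactly
    two triangles becomes a graph edge connecting their nodes.  Boundary
    sides belong to a single triangle and are skipped, so no half-edge
    or quad-edge structure is needed."""
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            # canonical (sorted) endpoints identify a side uniquely
            edge_to_faces[tuple(sorted((u, v)))].append(fi)
    return [tuple(fs) for fs in edge_to_faces.values() if len(fs) == 2]

# Two triangles sharing the side (1, 2), as in the Fig. 2 example:
faces = [(0, 1, 2), (1, 3, 2)]
```

Grouping sides by their sorted endpoints plays the role of the "polygon sorting according to edge endpoints" mentioned in the text.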
- FIG. 2 shows an example of the simplest adjacency graph.
- The polygon mesh shown on the left of the figure is composed of two triangles T1 and T2 that touch each other at the side e12.
- The adjacency graph describing this polygon mesh is composed of two nodes N1 and N2 corresponding to the triangles T1 and T2, and an edge e12 connecting both nodes, as shown on the right side of the figure.
- FIG. 3 shows a configuration example of a slightly complicated adjacency graph.
- The polygon mesh shown on the left of the figure consists of seven triangles T1 to T7, each touching its neighbors along shared sides.
- the adjacency graph describing this polygon mesh is constructed by connecting nodes corresponding to each triangle by edges or edges belonging to both adjacent triangles, as shown on the right side of the figure.
- In the initial state, a node is a single polygon, the minimum unit of the polygon mesh.
- Alternatively, it is an individual pixel in a 2D image, or a voxel in a 3D volumetric image.
- As integration proceeds, a node grows into an image region composed of a polygon mesh made up of multiple polygons (or pixels or voxels).
- For each node N, the number of polygons composing the corresponding image region, that is, the polygon mesh, n(N) (initial value 1), is stored as "node statistical information".
- The area and polygon count held by each node are the information needed to judge, with the judgment formula based on the statistical processing algorithm, whether or not nodes, that is, image regions, should be integrated.
- The adjacency graph evaluation unit 2 evaluates and sorts each edge of the input adjacency graph. Specifically, each edge is evaluated by comparing the attribute values of the image regions it connects, a weight value is assigned to the edge based on the comparison result, and the edges of the adjacency graph are sorted by weight value.
- the image area referred to here includes an image area configured as a polygon which is a minimum unit and a polygon mesh obtained by integrating a plurality of polygons.
- As the attribute value, for example, the area of the image region (the average area of all the polygons integrated into the image region) is used; the difference in area between the image regions connected by an edge is given as the edge's weight value, and sorting is performed in ascending order of weight value. In this case, the smaller the area difference between image regions, the smaller the weight value, and the earlier the pair is processed in the subsequent image integration.
- FIG. 4 illustrates a processing method for evaluating an edge.
- The weight value w(e) of an edge e is calculated, for example, as the difference between the attribute values of the two image regions the edge connects, such as w(eij) = |area(Ni) − area(Nj)|.
- Edge weights can also be given using differences of various other attribute values of adjacent nodes, such as pixel attribute information like the normal direction or the color of the image region (the average color of at least one of the RGB components, for a polygon mesh having a texture).
- In the case of a 2D image of width w and height h, the pixel in row i and column j corresponds to node v(i×w+j). Each inner pixel has 4 adjacent nodes, and the total number of edges m is 2wh − w − h.
- The weighting factor between adjacent nodes vi and vj can be expressed, for example, as the difference between their pixel values.
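A small sketch of the grid bookkeeping described above (function names are illustrative): nodes are indexed v = i×w + j, each inner pixel has 4 neighbours, and the edge count is 2wh − w − h; as the per-edge weight, the largest per-channel RGB difference is one common choice, assumed here.

```python
def grid_edges(w, h):
    """4-connected pixel grid: the pixel in row i, column j is node
    v = i*w + j.  Each inner pixel has 4 neighbours, and the total
    number of edges is 2*w*h - w - h."""
    edges = []
    for i in range(h):
        for j in range(w):
            v = i * w + j
            if j + 1 < w:
                edges.append((v, v + 1))  # horizontal edge
            if i + 1 < h:
                edges.append((v, v + w))  # vertical edge
    return edges

def edge_weight(colors, edge):
    """Hypothetical weight: the largest per-channel difference of the
    RGB values of the two adjacent pixels."""
    u, v = edge
    return max(abs(a - b) for a, b in zip(colors[u], colors[v]))
```

Counting confirms the formula: (w−1)h horizontal plus w(h−1) vertical edges sum to 2wh − w − h.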
- The image region integration processing unit 3 takes out pairs of image regions sandwiching the edges in the sorted order and performs integration processing (mesh growing). Since each edge carries a weight that indexes the similarity of the image regions it connects, processing edges in ascending order of weight is equivalent to preferentially integrating similar image regions.
- the image region integration processing unit 3 determines whether or not to integrate image region pairs connected by edges extracted in the sorted order, based on a statistical processing algorithm.
- Suppose that image region R1 has area S1 and is composed of n1 polygons, and that image region R2 has area S2 and is composed of n2 polygons.
- In the judgment formula, A is the maximum area of a polygon, and Q is a parameter for controlling the roughness of the segmentation.
- The above judgment formula is derived from a statistical concentration inequality, a phenomenon that appears in the areas of the polygons forming an image region. This phenomenon is related to the central limit theorem, well known in the field of statistics (even if the population has an arbitrary distribution, as the size of samples drawn from the population increases, the distribution of the sample mean eventually converges to a normal distribution).
- Q on the right side of the above equation is a parameter for controlling the roughness of the segmentation.
- When Q is increased, the value on the right side becomes smaller and the judgment formula becomes harder to satisfy; as a result, integration of image regions is suppressed and a finer segmentation is obtained.
- When Q is set to a small value, the value on the right side becomes larger and the judgment formula is easily satisfied, so integration of image regions is promoted and a coarser mesh segmentation result is obtained.
- When the edge weight is calculated based on RGB color information as in equation (4) above, the following statistical judgment formula is used for adjacent nodes vi and vj connected by an edge, where ni and nj are the numbers of pixels contained in the corresponding nodes and Q is the parameter for controlling the roughness of the segmentation.
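A hedged sketch of a pixel-level test of this kind follows; the exact form of the bound b() and the constants are assumptions for illustration, not quoted from the patent (g stands for the intensity range).

```python
import math

def srm_predicate(mean1, n1, mean2, n2, q, g=256.0):
    """Hypothetical pixel-level merging test: adjacent regions merge
    when the squared difference of their average intensities is below
    b(n1)^2 + b(n2)^2, where b() shrinks as the region grows and as Q
    increases (g is the intensity range; exact form assumed)."""
    def b(n):
        return g * math.sqrt(1.0 / (2.0 * q * n))
    return (mean1 - mean2) ** 2 <= b(n1) ** 2 + b(n2) ** 2
```

The behaviour matches the text: for small regions or small Q the bound is loose and similar regions merge; for large regions and large Q the bound tightens and merging is suppressed.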
- the node grows to an image area composed of a polygon mesh composed of a plurality of polygons.
- For each node N, identification information id(N) uniquely identifying the node, the area area(N) of the corresponding image region (initially a single polygon), and the number of polygons n(N) composing the corresponding image region (initial value 1) are held as node statistical information.
- When nodes are integrated, the image region integration processing unit 3 assigns a new id identifying the new node, calculates the area and polygon count of the image region newly generated by the integration, and updates the node statistical information.
- the Union-Find algorithm can be used to generate new identification information (see above).
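A minimal Union-Find (disjoint-set) sketch of the kind referred to here, with path compression and union by size; the class name and method shapes are illustrative, not the patent's code.

```python
class DisjointSet:
    """Union-Find over node ids: find() returns the representative id
    of a merged region; union() merges two regions and returns the id
    identifying the new node (the larger region's root, by size)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # union by size
        self.size[ra] += self.size[rb]
        return ra
```

With these two heuristics, a sequence of unions and finds runs in near-constant amortized time per operation, which is what makes the repeated node integration fast.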
- The micro region processing unit 4 processes micro regions remaining as a result of the image region integration processing. For example, a minute polygon mesh left unintegrated between or inside large image regions is merged into one of the adjacent image regions, regardless of whether the judgment formula is satisfied, to improve the appearance of the processing result.
- the minute region referred to here is, for example, a polygonal mesh having an area of less than several percent with respect to the entire mesh surface.
- FIG. 5 shows, in the form of a flowchart, an example of a processing procedure for performing mesh segmentation in the image processing apparatus 10 according to the present embodiment.
- First, the image information input unit 1 inputs image information of the three-dimensional object to be processed (step S1).
- the image information is described in the form of an adjacency graph composed of polygons as nodes and polygon edges as edges (see the above and FIG. 3).
- The image information input unit 1 scans the input adjacency graph, assigns identification information id(N) to each node N, obtains the area of the corresponding polygon, and registers (initializes) the identification information, area, and polygon count (initial value 1) of each node in the node statistics.
- Pseudo program code that initializes the node statistical information is shown below, where id() is an array storing the identification information of the node indicated by its argument, area() is an array storing the area of the node with the indicated identification information, and n() is an array storing the number of polygons composing the node with the indicated identification information.
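The pseudo-code itself is not reproduced in this text; the following is a hypothetical reconstruction of the initialization step using the id(), area(), and n() arrays described above.

```python
def init_node_statistics(polygon_areas):
    """Reconstruction (assumed) of the missing initialization pseudo-code:
    for each node N_i, register its identification information id(N_i),
    the area of its single polygon area(N_i), and the polygon count
    n(N_i) = 1."""
    ids, areas, counts = [], [], []
    for i, poly_area in enumerate(polygon_areas):
        ids.append(i)            # id(N_i)   <- i
        areas.append(poly_area)  # area(N_i) <- area of the polygon
        counts.append(1)         # n(N_i)    <- 1 (a single polygon)
    return ids, areas, counts
```

Each node thus starts as one polygon with count 1, matching the initial values stated in the text.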
- the adjacent graph evaluation unit 2 performs sorting by evaluating each edge of the input adjacent graph (step S2). Specifically, the difference in area between image regions connected by edges is given as the edge weight value, and sorting is performed in ascending order of the weight value. The smaller the area difference between image regions, the smaller the weight value, and the higher the processing order in subsequent image integration processing.
- a parameter Q for controlling the roughness of the segmentation is set from the parameter setting unit 5 (step S3).
- Next, the image region integration processing unit 3 takes out pairs of image regions sandwiching the edges in the sorted order (step S4). Then, integration processing is performed based on whether or not these image regions satisfy the judgment formula based on the statistical processing algorithm (step S5).
- The judgment formula used here is derived from a statistical concentration inequality, a phenomenon that appears in the areas of the polygons composing an image region (described above), and uses the parameter Q set in step S3.
- For each node N, there is a record holding the identification information id(N) that uniquely identifies it, the area area(N) of the corresponding image region (initially a single polygon), and the number n(N) of polygons composing the corresponding image region, that is, the polygon mesh (initial value 1) (described above).
- When image regions are integrated, the image region integration processing unit 3 generates a new node, assigns a new id identifying this node, calculates the area and polygon count of the image region newly generated by the integration, and updates the node statistical information (step S6).
- The identification information of one of the original nodes, for example the older of the two, is carried over as the identification information of the new node.
- To generate the new identification information, the Union-Find algorithm devised by Robert Endre Tarjan (see above) can be used.
- Finally, the micro region processing unit 4 processes the micro regions remaining as a result of the image region integration processing (step S7). For example, a minute polygon mesh left unintegrated between or inside large image regions is merged into one of the adjacent image regions, regardless of whether the judgment formula is satisfied, to improve the appearance of the processing result.
- the microregions referred to here are, for example, polygonal meshes having an area of less than several percent with respect to the entire mesh surface.
- The image region integration processing based on the statistical processing algorithm described above can be performed at high speed because it consists of simple calculations that statistically process polygon areas. For example, millions of polygons per second can be processed using a general computer system (described above).
- By adjusting the parameter value Q included in the judgment formula, the standard for integrating image regions can be set freely and a polygon mesh with the desired roughness can be generated; the system thus has scalability.
- The user can set the Q value interactively via the parameter setting unit 5. For example, a slide bar can be provided on the display screen and Q entered through this bar.
- Fig. 6 shows an example of the segmentation result obtained interactively when the user sets the multi-scale parameter Q using the slide bar.
- the image region integration processing unit 3 and the micro region processing unit 4 need to perform repeated processing, and the processing time is almost linear.
- When Q is increased, the value on the right side decreases, making the judgment formula harder to satisfy; as a result, integration of image regions is suppressed and a finer segmentation results.
- When Q is set to a small value, the value on the right side becomes larger and the judgment formula is easily satisfied, so integration of image regions is promoted and a coarser mesh segmentation result is obtained.
- In the integration processing, the polygons near the boundary, that is, a "Circular Crust" around the entire circumference of the newly created image region, can be left, and the subsequent image region integration processing performed on them.
- the update processing of the node statistical information that occurs when leaving this "Circular Crust” requires a relatively small amount of calculation and can accurately determine success / failure for subsequent image region integration.
- FIG. 9 shows, in the form of a flowchart, the processing procedure for performing mesh segmentation leaving only the "Circular Crust".
- the image information input unit 1 inputs image information of a three-dimensional object to be processed (step S11).
- Image information is described in the form of an adjacency graph composed of polygons as nodes and polygon edges as edges (see above and Fig. 3).
- the image information input unit 1 scans the input adjacency graph, gives identification information id (N) to each node N, obtains the area of the corresponding polygon, and identifies identification information for each node, Register (initialize) the area and the number of polygons (initial value is 1) in the node statistics. Since the initialization processing of node statistics is the same as that described in Fig. 5, the description is omitted here.
- Next, the adjacency graph evaluation unit 2 evaluates and sorts each edge of the input adjacency graph (step S12). Specifically, the difference in area between the image regions connected by an edge is given as the edge's weight value, and sorting is performed in ascending order of weight value.
- the parameter Q for controlling the roughness of the segmentation is set via the parameter setting unit 5 (step S13).
- the image region integration processing unit 3 takes out a pair of image regions sandwiching the edges in the sorted order (step S14). Then, integration processing is performed based on whether or not these image regions satisfy the judgment formula based on the statistical algorithm (step S15).
- The judgment formula used here is likewise derived from a statistical concentration inequality, a phenomenon that appears in the areas of the polygons composing an image region (described above), and uses the parameter Q set in step S13.
- When image regions are integrated, the image region integration processing unit 3 generates a new node, assigns a new id identifying this node, calculates the area and polygon count of the image region newly generated by the integration, and updates the node statistical information (step S16).
- At this time, a Circular Crust is generated for the union of the integrated image regions. This processing can be realized by applying operations such as morphological processing to the image region.
- Finally, the micro region processing unit 4 processes the micro regions remaining as a result of the image region integration processing (step S17). For example, a minute polygon mesh left unintegrated between or inside large image regions is merged into one of the adjacent image regions, regardless of whether the judgment formula is satisfied, to improve the appearance of the processing result.
- the microregions referred to here are, for example, polygonal meshes having an area of less than several percent with respect to the entire mesh surface.
- Alternatively, the polygons near the border where the image regions to be integrated touch each other, that is, the "Border Crust", may be left, and the subsequent image region integration processing performed on them.
- With the Border Crust, success/failure judgments about the integration of subsequent image regions can be made more accurately than with the Circular Crust.
- With the Border Crust, however, not only the node statistical information but also the adjacency graph must be updated, so the amount of calculation becomes large.
- FIG. 11 shows, in the form of a flowchart, the processing procedure for performing mesh segmentation leaving only the "Border Crust".
- the image information input unit 1 inputs image information of a three-dimensional object to be processed (step S21).
- Image information is described in the form of an adjacency graph composed of polygons as nodes and polygon edges as edges (see above and Fig. 3).
- the image information input unit 1 scans the input adjacency graph, gives identification information id (N) to each node N, obtains the area of the corresponding polygon, and identifies identification information for each node, Register (initialize) the area and the number of polygons (initial value is 1) in the node statistics. Since the initialization processing of node statistics is the same as that described in Fig. 5, the description is omitted here.
- Next, the adjacency graph evaluation unit 2 evaluates and sorts each edge of the input adjacency graph (step S22). Specifically, the difference in area between the image regions connected by an edge is given as the edge's weight value, and sorting is performed in ascending order of weight value.
- the parameter Q for controlling the roughness of the segmentation is set via the parameter setting unit 5 (step S23).
- the image area integration processing unit 3 takes out a pair of image areas sandwiching the edges in the sorted order (step S24). Then, integration processing is performed based on whether or not these image regions satisfy the judgment formula based on the statistical algorithm (step S25).
- The judgment formula used here is derived from a statistical concentration inequality that appears in the areas of the polygons making up an image region (described above), and uses the parameter Q set in step S23.
- When image regions are integrated, the image region integration processing unit 3 generates a new node, assigns a new id identifying this node, calculates the area and polygon count of the image region newly generated by the integration, and updates the node statistical information (step S26).
- The nodes Ni and Nj indicated by the arguments of the Merge function are integrated. Then the boundary Bi of the image region Ri of node Ni that touches the image region of node Nj, and the boundary Bj of the image region Rj of node Nj that touches the image region of node Ni, are each extracted with the ExtractBoundary function.
- The area area(Bi ∪ Bj) obtained above is substituted into the area area(id'(N)) of the new node, and n(Bi ∪ Bj) is assigned to the polygon count n(id'(N)) of the new node.
- The update of the node statistical information is completed by giving new identification information id'(Ni) and id'(Nj) to the nodes Ni and Nj, respectively.
- the image region integration processing unit 3 performs an adjacent graph update process (step S27).
- the edge weighting factor included in the adjacent graph is recalculated, and the edge is resorted according to the weight value. Then, the process returns to step S24, a pair of image areas sandwiching the edges is taken out in the sorted order, and the image area integration process based on the statistical processing algorithm is repeated.
- Furthermore, the identification information id(N) of all the image regions R adjacent to each of the image regions Ri and Rj to be processed for generating the Border Crust is searched for.
- FIG. 13 illustrates a state where the adjacency graph is updated.
- Suppose that Rk adjacent to Ri and Rl adjacent to Rj are found as the image regions adjacent to each of the image regions Ri and Rj that are the processing targets for generating the Border Crust.
- Finally, the micro region processing unit 4 processes the micro regions remaining as a result of the image region integration processing (step S28). For example, a minute polygon mesh left unintegrated between or inside large image regions is merged into one of the adjacent image regions, regardless of whether the judgment formula is satisfied.
- the micro area here is, for example, a polygonal mesh having an area of less than several percent with respect to the entire mesh surface.
- Since the judgment formula (described above) used in the image region integration processing unit 3 includes the parameter Q for controlling the roughness of the segmentation, a value of Q that yields the desired segmentation roughness can be given from the parameter setting unit 5.
- Alternatively, the parameter setting unit 5 may convert a user-specified quantity, such as the desired number of image regions, into the value of the corresponding parameter Q and give it to the system.
- The user can specify the number of image regions when performing mesh segmentation, and since the integration of image regions based on the statistical processing algorithm is fast, the number of image regions can be changed dynamically, that is, freely.
- For example, suppose the original 3D object is divided into N image regions. If, in response to this processing result, the user requests a result divided into M regions, the parameter setting unit 5 obtains a Q value such that the number of image regions becomes M, gives it to the image region integration processing unit 3, and the mesh segmentation is re-executed.
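As a toy illustration of this idea (the 1-D chain segmentation, the predicate's exact form, and every name below are assumptions for the sketch, not the patent's formulas), a Q value yielding a requested region count M can be found by bisection, since a larger Q makes the predicate stricter and the region count grow.

```python
import math

def segment_chain(values, q, g):
    """Toy 1-D segmentation: repeatedly merge adjacent regions along a
    chain while a hypothetical statistical predicate holds."""
    regions = [[v] for v in values]
    merged = True
    while merged:
        merged = False
        for i in range(len(regions) - 1):
            a, b = regions[i], regions[i + 1]
            ma, mb = sum(a) / len(a), sum(b) / len(b)
            bound = g * math.sqrt(1.0 / (2 * q * len(a)) + 1.0 / (2 * q * len(b)))
            if abs(ma - mb) <= bound:
                regions[i:i + 2] = [a + b]  # merge the pair
                merged = True
                break
    return regions

def q_for_region_count(values, m, g, q_lo=1e-3, q_hi=1e3):
    """Bisect for a Q whose segmentation yields m regions: a larger Q
    makes the predicate stricter, so the region count grows with Q."""
    for _ in range(60):
        q = math.sqrt(q_lo * q_hi)  # geometric midpoint
        if len(segment_chain(values, q, g)) < m:
            q_lo = q  # too coarse: raise Q
        else:
            q_hi = q  # fine enough: lower Q
    return q_hi
```

Because each segmentation run is fast, re-running it inside such a search is practical, which is the point made in the text about changing the number of regions freely.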
- Progressive mesh segmentation, that is, hierarchical segmentation, can be realized by continuously inputting multiple Q values from the parameter setting unit 5. Since the image region integration processing based on the statistical processing algorithm is fast, the user can change the number of image regions dynamically, that is, freely, when performing mesh segmentation.
- Mesh segmentation can also be applied to image search (shape matching). For example, a keyword can be set for each segmented image region and image search performed (see, for example, "Modeling by example", In Proc. SIGGRAPH (2004), Vol. 23, Issue 3, pp. 652-663).
- the present invention it is possible to construct keyword information having a hierarchical structure with respect to an image of an original three-dimensional object by assigning a keyword to each mesh segmentation layer.
- In search, that is, shape matching, different search results can be obtained for each hierarchy.
- the Q value can be controlled in the parameter setting unit 5 so that a desired search result can be obtained.
- Since the image region integration processing based on the statistical processing algorithm is fast, the number of image regions can be changed dynamically, that is, freely, when performing mesh segmentation. In other words, if the user resets the Q value via the parameter setting unit 5 according to the search result, the number of parts can be changed freely with a simple operation, and the granularity of the segmentation can be controlled freely.
- FIGS. 14 to 16 show how the number of image regions to be divided is adjusted by setting the Q value via operation of the slide bar during mesh segmentation.
- the number of integrated areas is 116, as shown in the upper right corner of the page.
- As described above, the mesh segmentation processing according to the present invention can freely set the standard for integrating image regions to generate a polygon mesh with the desired roughness, and the system has scalability. It can be applied to a variety of interactive applications such as parameterization and texture mapping, image shaping, multi-resolution modeling, image editing, image compression, animation, and shape matching.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2006800204746A CN101194290B (zh) | 2005-06-07 | 2006-06-05 | 信息处理装置和信息处理方法、图像处理装置和图像处理方法 |
US11/913,264 US8224089B2 (en) | 2005-06-07 | 2006-06-05 | Information processing device and information processing method, image processing device and image processing method, and computer program |
JP2007520097A JP4780106B2 (ja) | 2005-06-07 | 2006-06-05 | 情報処理装置及び情報処理方法、画像処理装置及び画像処理方法、並びにコンピュータ・プログラム |
EP06747181A EP1890268A1 (en) | 2005-06-07 | 2006-06-05 | Image processing device and image processing method and computer program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005166466 | 2005-06-07 | ||
JP2005-166466 | 2005-06-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006132194A1 true WO2006132194A1 (ja) | 2006-12-14 |
Family
ID=37498395
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/311251 WO2006132194A1 (ja) | 2005-06-07 | 2006-06-05 | 情報処理装置及び情報処理方法、画像処理装置及び画像処理方法、並びにコンピュータ・プログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US8224089B2 (ja) |
EP (1) | EP1890268A1 (ja) |
JP (1) | JP4780106B2 (ja) |
KR (1) | KR20080012954A (ja) |
CN (1) | CN101194290B (ja) |
WO (1) | WO2006132194A1 (ja) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008059081A (ja) * | 2006-08-29 | 2008-03-13 | Sony Corp | 画像処理装置及び画像処理方法、並びにコンピュータ・プログラム |
JP4539756B2 (ja) * | 2008-04-14 | 2010-09-08 | 富士ゼロックス株式会社 | 画像処理装置及び画像処理プログラム |
WO2009139161A1 (ja) * | 2008-05-15 | 2009-11-19 | 株式会社ニコン | 画像処理装置、画像処理方法、処理装置、処理方法およびプログラム |
WO2010139091A1 (en) * | 2009-06-03 | 2010-12-09 | Google Inc. | Co-selected image classification |
US8526723B2 (en) * | 2009-06-23 | 2013-09-03 | Los Alamos National Security, Llc | System and method for the detection of anomalies in an image |
US8428354B2 (en) * | 2009-06-23 | 2013-04-23 | Los Alamos National Security, Llc | Image segmentation by hierarchial agglomeration of polygons using ecological statistics |
US9459851B2 (en) * | 2010-06-25 | 2016-10-04 | International Business Machines Corporation | Arranging binary code based on call graph partitioning |
US9177041B2 (en) * | 2010-09-03 | 2015-11-03 | Robert Lewis Jackson, JR. | Automated stratification of graph display |
US9280574B2 (en) | 2010-09-03 | 2016-03-08 | Robert Lewis Jackson, JR. | Relative classification of data objects |
JP5772446B2 (ja) * | 2010-09-29 | 2015-09-02 | 株式会社ニコン | 画像処理装置及び画像処理プログラム |
WO2012155446A1 (zh) * | 2011-07-18 | 2012-11-22 | 中兴通讯股份有限公司 | 局部图像平移方法及带有触摸屏的终端 |
CN103890814B (zh) * | 2011-10-18 | 2017-08-29 | 英特尔公司 | 基于表面的图形处理 |
CN103164487B (zh) * | 2011-12-19 | 2016-05-25 | 中国科学院声学研究所 | 一种基于密度与几何信息的数据聚类方法 |
US10110412B2 (en) * | 2012-10-17 | 2018-10-23 | Disney Enterprises, Inc. | Dynamically allocated computing method and system for distributed node-based interactive workflows |
JP6236817B2 (ja) * | 2013-03-15 | 2017-11-29 | 株式会社リコー | 画像形成装置 |
JP5367919B1 (ja) * | 2013-07-22 | 2013-12-11 | 株式会社 ディー・エヌ・エー | 画像処理装置及び画像処理プログラム |
CN104463825B (zh) * | 2013-09-16 | 2019-06-18 | 北京三星通信技术研究有限公司 | 用于在三维体积图像中检测对象的设备和方法 |
US11245593B2 (en) * | 2016-04-25 | 2022-02-08 | Vmware, Inc. | Frequency-domain analysis of data-center operational and performance metrics |
JP6712965B2 (ja) * | 2017-04-25 | 2020-06-24 | 京セラ株式会社 | 電子機器、生成方法及び生成システム |
KR101989029B1 (ko) * | 2017-12-11 | 2019-06-13 | 한양대학교 산학협력단 | 복수의 쓰레드를 이용하는 그래프 엔진 및 그 그래프 엔진의 동작 방법 |
US11401786B2 (en) | 2019-03-06 | 2022-08-02 | Saudi Arabian Oil Company | Systems and methods for hydrocarbon reservoir well connectivity graph optimization, simulation and development |
JP7376881B2 (ja) * | 2020-01-29 | 2023-11-09 | ユーアイアーマー.コム エルエルシー | 画像処理のためのシステム、方法、および装置 |
US11158031B1 (en) | 2021-05-24 | 2021-10-26 | ReportsNow, Inc. | Systems, methods, and devices for image processing |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08138082A (ja) * | 1994-11-07 | 1996-05-31 | Internatl Business Mach Corp <Ibm> | 四角形メッシュの生成方法及びシステム |
JPH09128561A (ja) * | 1995-10-30 | 1997-05-16 | Chokosoku Network Computer Gijutsu Kenkyusho:Kk | 三次元図形データ削減方法 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3973273B2 (ja) * | 1997-09-22 | 2007-09-12 | 三洋電機株式会社 | 画像生成装置および画像生成方法 |
KR100294924B1 (ko) * | 1999-06-24 | 2001-07-12 | 윤종용 | 영상분할 장치 및 방법 |
US6577759B1 (en) * | 1999-08-17 | 2003-06-10 | Koninklijke Philips Electronics N.V. | System and method for performing region-based image retrieval using color-based segmentation |
US6898316B2 (en) * | 2001-11-09 | 2005-05-24 | Arcsoft, Inc. | Multiple image area detection in a digital image |
US7623709B2 (en) * | 2005-09-06 | 2009-11-24 | General Electric Company | Method and system for segmenting image data |
JP2008059081A (ja) * | 2006-08-29 | 2008-03-13 | Sony Corp | 画像処理装置及び画像処理方法、並びにコンピュータ・プログラム |
US8073217B2 (en) * | 2007-11-01 | 2011-12-06 | Siemens Medical Solutions Usa, Inc. | Structure segmentation via MAR-cut |
2006
- 2006-06-05 CN CN2006800204746A patent/CN101194290B/zh not_active Expired - Fee Related
- 2006-06-05 JP JP2007520097A patent/JP4780106B2/ja not_active Expired - Fee Related
- 2006-06-05 EP EP06747181A patent/EP1890268A1/en not_active Withdrawn
- 2006-06-05 KR KR1020077028730A patent/KR20080012954A/ko active IP Right Grant
- 2006-06-05 US US11/913,264 patent/US8224089B2/en not_active Expired - Fee Related
- 2006-06-05 WO PCT/JP2006/311251 patent/WO2006132194A1/ja active Application Filing
Non-Patent Citations (3)
Title |
---|
NIELSEN F. AND NOCK R.: "On Region Merging: The Statistical Soundness of Fast Sorting, with Applications", PROCEEDINGS OF THE 2003 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR'03), vol. 2, 2003, pages 19 - 26, XP010644583 * |
NOCK R. AND NIELSEN F.: "Grouping with Bias Revisited", PROCEEDINGS OF THE 2004 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR'04), vol. 2, 2004, pages 460 - 465, XP010708678 * |
NOCK R. AND NIELSEN F.: "Statistical Region Merging", IEEE TRANSACTIONS ON PATTERNS ANALYSIS AND MACHINE INTELLIGENCE, vol. 26, no. 11, 2004, pages 1452 - 1458, XP001211318 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8320672B2 (en) | 2005-12-08 | 2012-11-27 | Sony Corporation | Image processing apparatus, image processing method, and computer program |
CN105930204A (zh) * | 2016-04-11 | 2016-09-07 | 沈阳东软医疗系统有限公司 | 一种单事件时间信息处理方法和装置 |
CN105930204B (zh) * | 2016-04-11 | 2019-07-12 | 东软医疗系统股份有限公司 | 一种单事件时间信息处理方法和装置 |
WO2020049619A1 (ja) * | 2018-09-03 | 2020-03-12 | 株式会社ソニー・インタラクティブエンタテインメント | 情報処理装置、情報処理方法、及びプログラム |
JPWO2020049619A1 (ja) * | 2018-09-03 | 2021-06-10 | 株式会社ソニー・インタラクティブエンタテインメント | 情報処理装置、情報処理方法、及びプログラム |
JP6990777B2 (ja) | 2018-09-03 | 2022-01-12 | 株式会社ソニー・インタラクティブエンタテインメント | 情報処理装置、情報処理方法、及びプログラム |
US11461957B2 (en) | 2018-09-03 | 2022-10-04 | Sony Interactive Entertainment Inc. | Information processing device, information processing method, and program |
CN117848423A (zh) * | 2024-03-07 | 2024-04-09 | 南京中鑫智电科技有限公司 | 一种换流变阀侧套管壳体完整性的在线监测方法、系统、设备及介质 |
CN117848423B (zh) * | 2024-03-07 | 2024-05-17 | 南京中鑫智电科技有限公司 | 一种换流变阀侧套管壳体完整性的在线监测方法、系统、设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
CN101194290A (zh) | 2008-06-04 |
KR20080012954A (ko) | 2008-02-12 |
EP1890268A1 (en) | 2008-02-20 |
CN101194290B (zh) | 2010-06-09 |
JP4780106B2 (ja) | 2011-09-28 |
US20090175543A1 (en) | 2009-07-09 |
JPWO2006132194A1 (ja) | 2009-01-08 |
US8224089B2 (en) | 2012-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2006132194A1 (ja) | 情報処理装置及び情報処理方法、画像処理装置及び画像処理方法、並びにコンピュータ・プログラム | |
US10424087B2 (en) | Systems and methods for providing convolutional neural network based image synthesis using stable and controllable parametric models, a multiscale synthesis framework and novel network architectures | |
CN104933709B (zh) | 基于先验信息的随机游走ct肺组织图像自动分割方法 | |
JP6786497B2 (ja) | 統計的技術を用いて形成されたデジタル歯冠モデルに表面詳細を追加するシステム及び方法 | |
Ahmadi et al. | Context-aware saliency detection for image retargeting using convolutional neural networks | |
Shabat et al. | Design of porous micro-structures using curvature analysis for additive-manufacturing | |
Kolouri et al. | Transport-based analysis, modeling, and learning from signal and data distributions | |
Krasnoshchekov et al. | Order-k α-hulls and α-shapes | |
CN108492370A (zh) | 基于TV和各向异性Laplacian正则项的三角网格滤波方法 | |
Monga et al. | Representing geometric structures in 3D tomography soil images: Application to pore-space modeling | |
Tsuchie et al. | High-quality vertex clustering for surface mesh segmentation using Student-t mixture model | |
CN110176063B (zh) | 一种基于人体拉普拉斯变形的服装变形方法 | |
Zhao et al. | NormalNet: Learning-based mesh normal denoising via local partition normalization | |
Lalos et al. | Signal processing on static and dynamic 3d meshes: Sparse representations and applications | |
CN116993947B (zh) | 一种三维场景可视化展示方法及系统 | |
CN112884884A (zh) | 一种候选区域生成方法及系统 | |
CN110955809A (zh) | 一种支持拓扑结构保持的高维数据可视化方法 | |
US11645813B2 (en) | Techniques for sculpting digital faces based on anatomical modeling | |
CN109410333A (zh) | 一种高质量超面片聚类生成方法 | |
CN108376390B (zh) | 一种动态感知平滑滤波算法 | |
Lavoué et al. | Semi-sharp subdivision surface fitting based on feature lines approximation | |
JP2017043075A (ja) | 立体物造形用データ削減装置 | |
Calabuig-Barbero et al. | Implementation of efficient surface discretisation algorithms adapted to geometric models specific to the footwear industry | |
Li et al. | Cluster-based fine-to-coarse superpixel segmentation | |
CN117152311B (zh) | 基于双分支网络的三维表情动画编辑方法及系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200680020474.6 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2007520097 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006747181 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020077028730 Country of ref document: KR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2006747181 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11913264 Country of ref document: US |