WO2002045022A2 - Process for constructing a 3d scene model utilizing key images - Google Patents


Info

Publication number
WO2002045022A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
pixels
region
regions
Prior art date
Application number
PCT/EP2001/013291
Other languages
French (fr)
Other versions
WO2002045022A3 (en)
Inventor
Philippe Robert
Yannick Nicolas
Anne Lorette
Jurgen Stauder
Original Assignee
Thomson Licensing S.A.
Priority date
Filing date
Publication date
Application filed by Thomson Licensing S.A.
Priority to AU2002223682A1
Publication of WO2002045022A2
Publication of WO2002045022A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • G06T7/596Depth or shape recovery from multiple images from stereo images from three or more stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows

Definitions

  • the invention relates to a process for constructing a 3D scene model by analyzing image sequences and utilizing key images. This is an improvement to a previous invention, a patent application for which was filed by the Applicant on 17 September 1999 under registration number 9911671.
  • This previous invention describes a method of constructing a static 3D scene representation from a video sequence and from associated 3D information, namely, for each viewpoint, a depth map and a collection of parameters describing the relationship between the image reference and the 3D benchmark.
  • the principle of construction consists in selecting the necessary and sufficient information making it possible to restore the images of the sequence with a controlled quality.
  • a binary mask associated with each image describes the information selected which takes the form of regions, that is to say of sets of adjoining pixels, these regions generally being distributed over a few images. From these masks and from the original information, video images, depth maps and viewpoint parameters, it is possible to construct a facetted 3D model.
  • the selection of pixels in the images of the sequence is based on a relevance criterion calculated for each of the pixels.
  • This selection is performed by comparing the relevance values of the pixels corresponding to one and the same 3D point, the pixels with the greatest relevance then being selected.
  • the relevance parameter is itself the result of the combination of the local 3D resolution of the surface to which the pixel corresponds and of a weight taking account of the cost of selecting the pixel: Relevance[pixel] = Resolution[pixel] * (1 + Weight[pixel]).
  • the selection procedure is iterative: the 3D resolution of a pixel remains constant, but the weight associated with each pixel changes as a function of the selecting of the pixels.
  • the choice of the weight is very important since it conditions the result of the selection and also the speed of convergence of the iterative procedure. Now, the procedure may be very lengthy if the inter-image redundancy is very great and if the resolution of the pixels is similar. For example, a slow sideways motion creates this kind of situation. It has been proposed in the reference patent application that the weight maps be initialized in such a way as to favour certain images from the outset: an initial weight is calculated for each image, for example in the form of the percentage of pixels having no counterparts in the other images. These pixels then being "relevant", the image which contains them is favoured. Nonetheless, the procedure remains expensive in terms of computation time, in particular in the case of a sideways camera motion.
  • the selected pixels are distributed randomly over the images. Unlike the case of a frontal motion, the variation in surface resolution between viewpoints is very slight and a large number of iterations is necessary before an image is selected at the expense of its neighbours. The selection procedure therefore converges only slowly.
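The initial image weight mentioned above, the fraction of an image's pixels that have no counterpart in any other image, can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the per-pixel counterpart counts are assumed to have been obtained beforehand by projection:

```python
def initial_weight(counterpart_counts):
    """Initial weight of an image: fraction of its pixels having no
    counterpart (no pixel imaging the same 3D point) in any other
    image of the sequence.  counterpart_counts[i] is the number of
    counterparts found for pixel i (0 means seen nowhere else)."""
    unmatched = sum(1 for c in counterpart_counts if c == 0)
    return unmatched / len(counterpart_counts)
```

Images rich in pixels seen nowhere else receive a high weight and are favoured from the outset, which is exactly the initialization the reference application proposes.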
  • the subject of the invention is a process for constructing a 3D scene model by analyzing image sequences, each image corresponding to a viewpoint defined by its position and its orientation, comprising the following steps:
  • the construction of the 3D model being carried out on the basis of the regions, characterized in that it comprises an additional step of determining key images, an image being defined as a key image, during a first pass of reviewing the images of the sequence, as a function of the percentage of pixels of the image having no counterpart in the other images of its list, then during a second pass, if no image previously defined, including in the course of this second pass, as a key image, belongs to its list, and in that the calculation of the weight for the first iteration is performed as a function of whether or not the pixel belongs to a key image.
  • the process comprising a coding of the regions of an image on the basis of a splitting of the image into image blocks is characterized in that a region is defined by the smallest rectangle comprising the set of image blocks belonging, even partly, to the region to be coded and in that the weight allocated to the pixels of a block is calculated as a function of the percentage of pixels selected in the block.
  • the process comprising a coding of the regions of an image on the basis of a splitting of the image into image blocks is characterized in that a region is defined by the smallest rectangle comprising the set of image blocks belonging, even partly, to the region to be coded and in that the weight allocated to the pixels of the region is calculated as a function of the ratio of the number of pixels selected to the total number of pixels of the non-empty blocks in the encompassing rectangle.
  • the process is characterized in that the calculation of the resolution allocated to a pixel is performed by taking into account the depth of neighbouring pixels selected on the basis of a discontinuity cue.
  • the process is characterized in that a discontinuity cue is taken into account when defining the regions.
  • the process is characterized in that the regions are numbered, the numbers of the regions required for the construction of an image are associated with the image, and in that the numbering is performed in an ordered manner from the region furthest away to the closest region or vice versa.
  • the subject of the invention is also a process for navigating in a 3D scene consisting in creating images as a function of the movement of the viewpoint, characterized in that the images are created on the basis of the process for constructing the 3D model described previously.
  • the weighting thus performed on the selected pixels belonging to key images makes it possible to increase the speed of processing by reducing the number of iterations. It allows better compression and/or better image quality.
  • the present invention proposes a new definition of the image weights which is based on the lists associated with the viewpoints.
  • This definition relies on the notion of a "key image" of the image sequence.
  • a key image is defined on the one hand as being an image possessing, in its list, no already selected key image, on the other hand as having an initial weight greater than a predefined threshold, that is to say having a percentage of pixels having no counterparts in the other images which is greater than a threshold. Thus, there is little redundancy between the key images.
  • Figure 1 gives an example of selecting images to be key images, in a sequence consisting of 7 images.
  • the scene referenced 8 is represented by a vertical line.
  • the various images of the sequence representing the scene are manifested by the viewpoint, numbered from 1 to 7, and by the viewing angle represented by two traces leaving the viewpoint and being projected onto the scene. Note that, in the example, the angle of view of image 1 (viewpoint 1) overlaps those of images 2 and 3.
  • the image points are the projections of the scene onto the image planes (focal planes not represented in the figure) defined by the camera at this viewpoint.
  • the extreme regions of the scene are viewed solely from the viewpoints 1 and 7.
  • a second pass is then performed to determine the other key images of the sequence.
  • Images 2 and 3 have, in their list, key images and therefore cannot be declared to be key images.
  • the list corresponding to image 4 comprises images 2, 3, 5 and 6, and none of these images is a key image. Image 4 is therefore chosen as a key image, supplementing the key images obtained during the first pass.
  • images 5, 6 also have, in their list, key images and are therefore not selected.
  • the determination of the key images is dependent on the order in which the images are analyzed. If an image has just been declared to be a key image, the analysis of the next image takes account thereof.
  • the original key images, that is to say those determined during the first pass, are images which have many points which are found nowhere else.
  • the next key images selected are images which do not have, in their list, any previously selected key image.
  • the threshold corresponding to the percentage of points not found in other images, and above which an image may be declared to be a key image is taken equal to 0.01. This threshold can be defined for each sequence and can be auto-adaptive.
  • a set of viewpoints is thus obtained which summarizes the information content of the input sequence. By assigning an additional weight in the evaluation of the relevance of the pixels contained in these key images, preference is given to the formation of regions in these viewpoints relative to the other viewpoints. This initialization allows faster convergence to the final representation.
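The two-pass key-image selection described above can be sketched as follows (a hedged reconstruction; the image lists and initial weights are assumed to be given, and the 0.01 threshold is the value quoted in the text):

```python
def find_key_images(lists, weights, threshold=0.01):
    """Two-pass key-image determination (sketch).
    lists[i]   : indices of the images in image i's list
    weights[i] : initial weight of image i (fraction of pixels
                 having no counterpart in the other images)"""
    keys = set()
    # First pass: images whose initial weight exceeds the threshold,
    # i.e. images with many pixels found nowhere else.
    for i, w in enumerate(weights):
        if w > threshold:
            keys.add(i)
    # Second pass, in sequence order: an image becomes a key image
    # if its list contains no image already declared key, including
    # key images declared earlier in this same pass.
    for i, lst in enumerate(lists):
        if i not in keys and not any(j in keys for j in lst):
            keys.add(i)
    return keys
```

On a seven-image configuration analogous to Figure 1, viewpoints 1 and 7 come out of the first pass and viewpoint 4 out of the second, while viewpoints 2, 3, 5 and 6 are rejected because their lists already contain a key image.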
  • Figure 2 represents a flowchart describing the various steps of the process according to the invention.
  • Ad hoc processing provides, for each image, a depth map as well as the position and the orientation of the corresponding viewpoint. There is no depth information in the zones corresponding to deleted mobile objects.
  • For each pixel of each image a resolution value is calculated; this is step 10.
  • a first and second partitioning are then carried out during step 11 to determine lists of images.
  • Step 12 performs a calculation of initial weights.
  • Step 13 consists in determining the key images from the lists and initial weights.
  • Step 14 performs a modification of the initial weights as a function of whether or not the pixels belong to a key image.
  • the next step 15 performs the updating of the weights as a function of the iterations, that is to say of the changes of the masks so as to provide, step 16, relevance values allocated to the pixels.
  • the next step 17 carries out a selecting of the pixels as a function of their relevance. A sequence of masks of the selected pixels is then obtained for the image sequence, in step 18.
  • steps 15 to 18 are iteratively repeated so as to refine the masks. These steps are iteratively repeated until the masks no longer change significantly. Then, step 19 is instigated to carry out the construction of the facetted 3D model from the selected pixels only.
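The iterative core of the flowchart (steps 15 to 18, then step 19 once the masks are stable) can be sketched as the following loop. This is only a skeleton, under the assumption that the weight update, relevance calculation and pixel selection are supplied as functions:

```python
def build_masks(num_images, update_weights, relevance, select_pixels,
                max_iter=100):
    """Iteration loop of steps 15 to 18 (sketch): the weights are
    recomputed from the masks of the previous iteration, relevance
    maps are derived, pixels are reselected, and iterations stop
    once the masks no longer change (stopping criterion, step 18)."""
    masks = [set() for _ in range(num_images)]   # initially empty
    for _ in range(max_iter):
        weights = update_weights(masks)          # step 15
        rel = relevance(weights)                 # step 16
        new_masks = select_pixels(rel)           # step 17
        if new_masks == masks:                   # step 18: stable
            break
        masks = new_masks
    return masks                                 # input to step 19
```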
  • a depth map as well as the position and the orientation of the corresponding viewpoint are available at the system input, for each image of the sequence.
  • Step 10 consists of a calculation, for each pixel of an image, of a value of resolution giving a resolution map for the image.
  • the process then produces, step 11, a partition of the sequence.
  • Two partitioning operations are in fact performed so as to limit the data handling, both in the phase of constructing the representation and in the utilization phase (navigation).
  • a first partitioning of the sequence is performed by identifying the viewpoints having no intersection of their observation field. This will make it possible to avoid comparing them, that is to say comparing the images relating to these viewpoints, during the subsequent steps. Any intersections between the observation fields, of pyramidal shape, of each viewpoint, are therefore determined by detecting the intersections between the edges of these fields. This operation does not depend on the content of the scene, but only on the relative position of the viewpoints. With each current image is thus associated a set of images whose observation field possesses an intersection with that of this current image, this set constituting a list.
  • a projection is performed during this partitioning step 11 allowing a second partitioning.
  • a projection, similar to that described later in regard to step 17, is carried out to identify the counterpart pixels. If an image of the list has too few counterpart pixels in common with the current image, it is deleted from the list; a final list is thus allocated to each image.
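The two partitioning operations of step 11 can be sketched as follows. In this deliberately simplified stand-in, each pyramidal observation field is reduced to a 1-D interval, and the counterpart counts between image pairs are assumed to have been obtained by the projection mentioned above:

```python
def build_lists(fields, counterparts, min_common):
    """Step 11 (sketch).  fields[i] is a (start, end) interval
    standing in for the pyramidal observation field of viewpoint i;
    counterparts[i][j] is the assumed number of pixels of image i
    having a counterpart in image j."""
    lists = []
    for i, (a, b) in enumerate(fields):
        # First partition: keep only viewpoints whose fields intersect.
        cand = [j for j, (c, d) in enumerate(fields)
                if j != i and a < d and c < b]
        # Second partition: drop images sharing too few counterparts.
        lists.append([j for j in cand if counterparts[i][j] >= min_common])
    return lists
```

Viewpoints with disjoint fields, or with too few common pixels, are never compared in the subsequent steps, which is the point of this partitioning.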
  • the next step 12 consists of a calculation of initial weights for the images.
  • the calculation is performed on the basis of the percentage of points of the image having no counterpart in the other images relative to the number of points of the image.
  • the next step 13 consists in determining the key images relating to a sequence. From the lists defined for the various viewpoints of the sequence, step 11, and the calculation of the initial weights for the images, step 12, the key images for the sequence are determined. The initial weight allocated to an image is compared with a threshold so as to define a first list of key images. An image is selected as a key image if the initial weight is greater than the threshold. A first list of key images is thus obtained. Thereafter, for each image not selected as a key image, its list is examined to see whether no key image is to be found therein. If such is the case, then the image in question is added to the list of key images. Then, we go to the next image in the sequence. This procedure makes it possible to identify key images having little redundancy between themselves but which are such that the set contains the maximum information about the scene displayed.
  • the next step 14 relates to the modifying of the initial weights by taking account of the image type. If the image is not a key image, the weights are not modified and therefore the initial weights calculated in step 12 are the ones which are utilized during the first iteration. If the image is a key image, the initial weights are modified. They are increased, for example by one unit.
  • Weight[I] = 1 + (% of points selected)
  • the initial weight is overweighted for the key images.
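The modification of step 14 amounts to the following one-liner, sketched here with the one-unit overweighting quoted in the text:

```python
def first_iteration_weight(initial_weight, is_key_image):
    """Step 14 (sketch): key images are overweighted, here by one
    unit as in the text, so that region formation is favoured in
    those viewpoints during the first iteration."""
    return initial_weight + 1.0 if is_key_image else initial_weight
```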
  • Step 15 relates to the updating of the weights of the pixels of the image as a function of the iterations.
  • the calculation of the weight, during this updating, can be performed in such a way as to penalize the regions of small size or more coarsely, the images having few selected points.
  • the calculation of the weight can be done according to one of the methods proposed in the reference patent application.
  • a value is allocated to each pixel so as to provide an imagewise relevance map.
  • the key images being overweighted, they are favoured during the initialization of the relevance maps.
  • the selection of the pixels is the subject of step 17.
  • this is a search for the counterpart in the other viewpoints, and a comparing of the relevance values for the identification of the pixel with greatest relevance which is then selected.
  • Step 18 groups together the masks relating to each of the images making up the sequence so as to provide the sequence of masks.
  • Step 18 is looped back to step 15 to refine the calculated relevance values.
  • the weights and therefore the relevance values are recalculated from the masks obtained in the previous iteration.
  • a stopping criterion, calculated during step 18, makes it possible to terminate the iterations and thus go to step 19.
  • Another aspect of the invention consists in taking account of the coding techniques based on image blocks (or pixel blocks) or on macroblocks during the calculation of the weightings and the selecting of the pixels so as to limit the cost of coding of this selecting.
  • a weight is associated with each pixel so as to modulate the resolution criterion, its role being to take into account the cost of selecting the pixels. Specifically, from among the pixels selected and described in a binary mask, the isolated pixels are more expensive to describe than those grouped together into regions. This weight changes as a function of the selection of the pixels in the images. At each iteration, a set of pixels is selected in each image, and it is necessary to evaluate the weight of each pixel in the image, whether or not this pixel is selected, as a function of this new selection so as to cause the relevance of each pixel to change and cause the selection to converge to an optimal solution combining better resolution and minimum coding cost. The weight therefore makes it possible to take into account the cost of coding in the scheme for selecting the pixels.
  • the coding of the shape of an object or of a region in our case consists in coding the smallest rectangle consisting of an integer number of blocks containing the region to be coded. Its dimensions are for example M x K pixels horizontally and N x L vertically. The M x N blocks of K x L pixels belonging to this rectangle are coded.
  • the idea consists in favouring the full blocks and the empty blocks in this rectangle, that is to say the blocks K x L of the rectangle lying either entirely in the region consisting of the selected pixels, or entirely outside the region, with respect to the blocks straddling the boundaries of the region. To do this, the calculation and assigning of the weights is done by favouring the filling of the almost full blocks and the emptying of the almost empty blocks. These blocks are in fact generally less expensive to code than the others and the cost of coding the mask is thereby reduced.
  • the image can be presplit into blocks K x L or else the split can be adapted, at each iteration, to the current regions so as to minimize the number of useful blocks.
  • the cost of coding the shape, a cost defined within the framework of the MPEG-4 standard for Intra coding, is calculated by splitting the rectangle encompassing the region to be coded into macroblocks of 16 x 16, the encompassing rectangle being formed in such a way as to contain a minimum integer number of blocks of 16 x 16.
  • the empty or full macroblocks are identified and an index is assigned to them depending on the type of pixels contained in the processed macroblock. These macroblocks are not coded as regards shape, thereby rendering them less expensive.
  • When the macroblock contains pixels of different types, selected and non-selected, the class of the pixels (a binary class defining whether a pixel is or is not selected) can be coded using a predictive and adaptive arithmetic coding algorithm, by considering the causal environment.
  • the coding cost is defined on the basis of the probability of the event. The most probable class as a function of the causal environment will be the least expensive to describe.
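As an illustration of this principle, the ideal cost of arithmetically coding an event of probability p is -log2(p) bits, so the most probable class under the causal context is indeed the cheapest to describe (a textbook formula, not taken from the patent itself):

```python
import math

def code_cost_bits(p):
    """Ideal arithmetic-coding cost, in bits, of an event of
    probability p: the more probable the class predicted from the
    causal environment, the fewer bits it costs to describe."""
    return -math.log2(p)
```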
  • a weight can be calculated per macroblock as a function of the ratio r of the number of pixels selected in a block to the number of pixels of the block. This weight is allocated to all the pixels of the macroblock. The objective is to minimize the number of incomplete macroblocks contained in the rectangle encompassing the region under study.
  • Another solution consists in taking into account the size of the region in the calculation of this weight, for example as follows: considering the rectangle consisting of an integer number of blocks of K x L and encompassing a given region, no additional weight is assigned to the empty blocks and an additional weight is assigned to the other blocks, corresponding to the ratio of the number of pixels selected to the total number of pixels of the non-empty blocks of the encompassing rectangle. Additional weight is involved here since it in fact entails an overweighting. A weight allocated to a block, in step 15, according to the methods described earlier is retained if no additional weight is assigned to this block. Otherwise, this weight is multiplied by the additional weight assigned to the block. This modification of the weight takes place in step 15.
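Both block-based weight variants rely on per-block fill ratios. The following sketch computes, for a binary selection mask split into K x L blocks, the fill ratio of each block (first variant) and the region-level additional weight, selected pixels over total pixels of the non-empty blocks (second variant); the data layout is illustrative:

```python
def block_ratios(mask, K, L):
    """mask: 2-D list of 0/1 values (1 = selected pixel).
    Returns the per-block fill ratios and the additional weight
    defined as (selected pixels) / (pixels of non-empty blocks)."""
    H, W = len(mask), len(mask[0])
    ratios, sel_total, px_total = [], 0, 0
    for by in range(0, H, K):
        for bx in range(0, W, L):
            block = [mask[y][x]
                     for y in range(by, min(by + K, H))
                     for x in range(bx, min(bx + L, W))]
            s = sum(block)
            ratios.append(s / len(block))
            if s > 0:                    # non-empty block
                sel_total += s
                px_total += len(block)
    extra = sel_total / px_total if px_total else 0.0
    return ratios, extra
```

Full blocks (ratio 1.0) and empty blocks (ratio 0.0) are the cheap ones to code; overweighting by these ratios pushes straddling blocks towards one of those two states, which is the stated aim.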
  • the resolution, calculated on the basis of a window centred on each pixel, is presumed to represent the 3D resolution of the surface to which this pixel corresponds. It is therefore necessary for the pixels taken into account in this calculation to correspond to one and the same surface.
  • the discontinuity information makes it possible to take account of only the pixels belonging to the same surface as the current pixel.
  • Step 10 calculates, for each pixel of the image, a resolution value. This value is obtained on the basis of a window centred on the pixel by taking account of the depth information. A distribution over a large depth gives a smaller resolution than a distribution over a small depth. The idea is to retain in the window, during this step, only the pixels having no discontinuity with the pixel at the centre of the window. It is on the basis of these selected pixels alone that the calculation of the resolution value is done.
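The discontinuity-aware resolution of step 10 can be sketched as follows. The depth-jump test and the mapping from depth spread to resolution are illustrative choices, not the patented formulas; the intent is only that pixels across a discontinuity are excluded and that a wide depth spread yields a small resolution:

```python
def pixel_resolution(depth, x, y, radius=1, max_jump=0.5):
    """Step 10 (sketch): resolution at pixel (x, y).  Only window
    pixels with no depth discontinuity relative to the centre
    (jump below max_jump, an illustrative threshold) are kept, and
    a larger depth spread gives a smaller resolution."""
    d0 = depth[y][x]
    kept = [depth[j][i]
            for j in range(max(0, y - radius),
                           min(len(depth), y + radius + 1))
            for i in range(max(0, x - radius),
                           min(len(depth[0]), x + radius + 1))
            if abs(depth[j][i] - d0) < max_jump]
    spread = max(kept) - min(kept)
    return 1.0 / (1.0 + spread)
```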
  • the discontinuity information is obtained by appropriate processing of the depth maps. A map of the discontinuities which reveals the segmentation can thus be coupled with the depth map so as to be utilized during this step 10. This discontinuity information also makes it possible to avoid the creation of false surfaces between different surfaces, at the time of rendition.
  • this discontinuity information must be included in the representation and transmitted to the receiver.
  • if the representation is a facetted 3D model, the facetization is based on these discontinuities and the facets contain this information implicitly.
  • if the representation is image-based, then the edges of the selected regions are what must implicitly contain the discontinuities. To do this, each image is segmented beforehand into adjoining preregions whose edges are erected on the discontinuities. After each iteration of the scheme for selecting the relevant pixels, the regions resulting therefrom are oversegmented in such a way that any final region possesses all the pixels of the original region which belong moreover to one and the same preregion.
  • the discontinuities are therefore taken into account for the calculation of the masks during step 17, the regions defined by the selected pixels undergoing a partition as a function of the preregion information. Indeed, this step 17 makes it possible to identify and eliminate the inter-image redundancy while retaining only the pixels with the greatest relevance, the pixels selected defining the regions.
  • the images relating to the regions thus obtained are therefore compared with the discontinuity maps defining the preregions so as to undergo additional processing, still during this step 17, a segmentation, in such a way that each final region belongs only to a single preregion.
  • An initial region can thus transform itself into several regions. These regions are numbered and the calculation of the weights during the next iteration will be done by considering there to be several regions of smaller surface area rather than a single region encompassing them.
  • the final representation thus consists of masks describing the regions, textures and depth maps corresponding to these masks, and viewpoint parameters.
  • One of the results of this representation is also a numbering of the regions, and for each viewpoint of the original sequence a list of the region numbers required for the reconstruction of the image. This reconstruction will be done by projecting the 3D regions onto the current viewpoint, while making allowance for the respective distances of the various regions in respect of occultations of one region by another.
  • the numbering of the regions is performed during step 17 which defines the various regions for each image.
  • each image is therefore assigned a list of regions making it possible to reconstruct the image.
  • the pixels corresponding to the closest regions will overwrite the pixels corresponding to the regions furthest away in accordance with what is actually seen from the viewpoint, the closest regions hiding the regions furthest away. It is thus no longer useful to compare, for two points in space whose projection corresponds to the same pixel, their depth so as to make the selection.
  • This region number information is included in the 3D representation model and is used by the rendition process, during the creation of the images, which is thus eased.
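The depth-ordered numbering allows reconstruction by a painter's algorithm: regions are simply drawn from the furthest away to the closest, with no per-pixel depth comparison. A minimal sketch, assuming each region is already given as projected (x, y, value) samples:

```python
def render(width, height, regions):
    """Painter's-algorithm reconstruction (sketch).  regions are
    assumed ordered from the furthest away to the closest, so later
    (closer) regions simply overwrite earlier (further) ones."""
    image = [[None] * width for _ in range(height)]
    for region in regions:                # far regions first
        for x, y, value in region:
            image[y][x] = value           # closer regions overwrite
    return image
```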


Abstract

The process is characterized in that it comprises a step of determining key images (13), an image being defined as a key image, during a first pass of reviewing the images of the sequence, as a function of the percentage of pixels of the image having no counterpart in the other images of its list (a list being defined by the images having a minimum of corresponding points), then, during a second pass, if no image previously defined as a key image, including in the course of this second pass, belongs to its list; and in that the calculation of the weight (14) making it possible to select the pixels is performed as a function of whether or not the pixel belongs to a key image.

Description

Process for constructing a 3D scene model utilizing key images.
The invention relates to a process for constructing a 3D scene model by analyzing image sequences and utilizing key images. This is an improvement to a previous invention, a patent application for which was filed by the Applicant on 17 September 1999 under registration number 9911671.
This previous invention describes a method of constructing a static 3D scene representation from a video sequence and from associated 3D information, namely, for each viewpoint, a depth map and a collection of parameters describing the relationship between the image reference and the 3D benchmark. The principle of construction consists in selecting the necessary and sufficient information making it possible to restore the images of the sequence with a controlled quality. A binary mask associated with each image describes the information selected which takes the form of regions, that is to say of sets of adjoining pixels, these regions generally being distributed over a few images. From these masks and from the original information, video images, depth maps and viewpoint parameters, it is possible to construct a facetted 3D model. The selection of pixels in the images of the sequence is based on a relevance criterion calculated for each of the pixels. This selection is performed by comparing the relevance values of the pixels corresponding to one and the same 3D point, the pixels with the greatest relevance then being selected. The relevance parameter is itself the result of the combination of the local 3D resolution of the surface to which the pixel corresponds and of a weight taking account of the cost of selecting the pixel :
Relevance[pixel] = Resolution[pixel] * ( 1 + Weight[pixel] )
The weight can be dependent on various parameters:
- the quantity of points selected in the image
- the size and/or the compactness of the regions
- the classification of the pixels in the close environment (selected or not selected)
It may therefore be the same for an entire image or on the contrary different for each pixel. The selection procedure is iterative: the 3D resolution of a pixel remains constant, but the weight associated with each pixel changes as a function of the selecting of the pixels.
The choice of the weight is very important since it conditions the result of the selection and also the speed of convergence of the iterative procedure. Now, the procedure may be very lengthy if the inter-image redundancy is very great and if the resolution of the pixels is similar. For example, a slow sideways motion creates this kind of situation. It has been proposed in the reference patent application that the weight maps be initialized in such a way as to favour certain images from the outset: an initial weight is calculated for each image, for example in the form of the percentage of pixels having no counterparts in the other images. These pixels then being "relevant", the image which contains them is favoured. Nonetheless, the procedure remains expensive in terms of computation time, in particular in the case of a sideways camera motion. Initially, since the resolutions are much the same and in certain cases are distinguished only by noise, the selected pixels are distributed randomly over the images. Unlike the case of a frontal motion, the variation in surface resolution between viewpoints is very slight and a large number of iterations is necessary before an image is selected at the expense of its neighbours. The selection procedure therefore converges only slowly.
The aim of the present invention is to alleviate the aforesaid drawbacks. Accordingly, the subject of the invention is a process for constructing a 3D scene model by analyzing image sequences, each image corresponding to a viewpoint defined by its position and its orientation, comprising the following steps:
- calculation, for an image, of a resolution map corresponding to the 3D resolution of the pixels of the image,
- partitioning of the images of the sequence, performed by identifying, for a current image, the images whose corresponding viewpoints have an observation field possessing an intersection with the observation field relating to the current image, and by choosing from among these images those possessing a minimum number of common pixels, that is to say corresponding to one and the same 3D point, so as to form a list of images which is associated therewith, these steps being followed by the following iterations:
- calculation of a weight allocated to the pixel on the basis of whether it belongs to a region and of characteristics of the region,
- selection of a pixel of the current image as a function of its resolution and of its weight compared with those of the common pixels of the other images of its list, the selected pixels defining the regions, the construction of the 3D model being carried out on the basis of the regions, characterized in that it comprises an additional step of determining key images, an image being defined as a key image, during a first pass of reviewing the images of the sequence, as a function of the percentage of pixels of the image having no counterpart in the other images of its list, then during a second pass, if no image previously defined, including in the course of this second pass, as a key image, belongs to its list, and in that the calculation of the weight for the first iteration is performed as a function of whether or not the pixel belongs to a key image.
According to a variant, the process comprising a coding of the regions of an image on the basis of a splitting of the image into image blocks is characterized in that a region is defined by the smallest rectangle comprising the set of image blocks belonging, even partly, to the region to be coded and in that the weight allocated to the pixels of a block is calculated as a function of the percentage of pixels selected in the block.
According to another variant, the process comprising a coding of the regions of an image on the basis of a splitting of the image into image blocks is characterized in that a region is defined by the smallest rectangle comprising the set of image blocks belonging, even partly, to the region to be coded and in that the weight allocated to the pixels of the region is calculated as a function of the ratio of the number of pixels selected to the total number of pixels of the non-empty blocks in the encompassing rectangle.
According to another variant, the process is characterized in that the calculation of the resolution allocated to a pixel is performed by taking into account the depth of neighbouring pixels selected on the basis of a discontinuity cue. According to another variant, the process is characterized in that a discontinuity cue is taken into account when defining the regions.
According to another variant, the process is characterized in that the regions are numbered, the numbers of the regions required for the construction of an image are associated with the image, and in that the numbering is performed in an ordered manner from the region furthest away to the closest region or vice versa.
The subject of the invention is also a process for navigating in a 3D scene consisting in creating images as a function of the movement of the viewpoint, characterized in that the images are created on the basis of the process for constructing the 3D model described previously.
The weighting thus performed on the selected pixels belonging to key images makes it possible to increase the speed of processing by reducing the number of iterations. It allows better compression and/or better image quality.
In the reference patent application, a partitioning of the images of the sequence was defined, giving rise to the compiling of lists assigned to each image. Thus, for a current image, the images of the sequence whose viewpoints have, with this current image, a significant number, greater than a threshold, of 3D points in common are identified. This operation performed prior to the selecting of the pixels makes it possible to limit the number of image comparisons since an image is compared only with those belonging to its list.
The present invention proposes a new definition of the image weights which is based on the lists associated with the viewpoints. This definition relies on the notion of a "key image" of the image sequence. A key image is defined either as an image whose initial weight is greater than a predefined threshold, that is to say whose percentage of pixels having no counterpart in the other images exceeds that threshold, or as an image possessing, in its list, no already selected key image. Thus, there is little redundancy between the key images.
Figure 1 gives an example of selecting images to be key images, in a sequence consisting of 7 images. The scene, referenced 8, is represented by a vertical line. The various images of the sequence representing the scene are manifested by the viewpoint, numbered from 1 to 7, and by the viewing angle represented by two traces leaving the viewpoint and projecting onto the scene. Note that, in the example, the field of view of image 1 (viewpoint 1) intersects those of images 2 and 3. For a viewpoint, the image points are the projections of the scene onto the image planes (focal planes not represented in the figure) defined by the camera at this viewpoint.
Since image 1 and image 7 possess an appreciable number of points which are not found again in the other images of the scene, their initial weight is greater than the threshold and they are therefore declared, during a first pass of reviewing the images of the sequence, to be key images. The extreme regions of the scene are viewed solely from viewpoints 1 and 7. A second pass is then performed to determine the other key images of the sequence.
The lists associated with the images previously selected as key images are:
- for image 1: images 2 and 3,
- for image 7: images 5 and 6.

Images 2 and 3 have key images in their list and therefore cannot be declared key images. The list corresponding to image 4 embraces images 2, 3, 5, 6, and none of these images is a key image. Image 4 is therefore chosen as a key image, supplementing the key images obtained during the first pass. Finally, images 5 and 6 also have key images in their list and are therefore not selected.
The determination of the key images is dependent on the order in which the images are analyzed. If an image has just been declared to be a key image, the analysis of the next image takes account thereof.
The original key images, that is to say those determined during the first pass, are images which have many points which are found nowhere else. The next key images selected are images which do not have, in their list, any previously selected key image. In one example, the threshold corresponding to the percentage of points not found in other images, and above which an image may be declared to be a key image, is taken equal to 0.01. This threshold can be defined for each sequence and can be auto-adaptive. A set of viewpoints is thus obtained which summarizes the information content of the input sequence. By assigning an additional weight in the evaluation of the relevance of the pixels contained in these key images, preference is given to the formation of regions in these viewpoints relative to the other viewpoints. This initialization allows faster convergence to the final representation.
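By way of illustration, the two-pass selection just described can be sketched as follows (an illustrative Python sketch; the data layout and names are assumptions, not part of the process as claimed):

```python
def select_key_images(lists, initial_weight, threshold=0.01):
    """lists maps each image index to the indices of the images in its list;
    initial_weight[i] is the fraction of pixels of image i having no
    counterpart in the other images of its list."""
    # First pass: images whose initial weight exceeds the threshold.
    key_images = [i for i in sorted(lists) if initial_weight[i] > threshold]
    # Second pass: an image becomes a key image if no image of its list
    # has already been declared a key image. The result is order-dependent,
    # including key images declared earlier in this same pass.
    for i in sorted(lists):
        if i in key_images:
            continue
        if not any(j in key_images for j in lists[i]):
            key_images.append(i)
    return sorted(key_images)
```

Applied to lists consistent with the Figure 1 example (the lists of images 2, 3, 5 and 6 are assumed for illustration), with images 1 and 7 alone exceeding the threshold, the sketch yields images 1, 4 and 7 as key images, as in the example above.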
Figure 2 represents a flowchart describing the various steps of the process according to the invention.
At the system input, referenced 9, we have data relating to an image sequence acquired by a camera moving within a static real scene, as indicated previously. It is, however, entirely conceivable for certain mobile objects to be present in the image. In this case, specific processing identifies these objects which are then marked and ignored during subsequent processing. Ad hoc processing provides, for each image, a depth map as well as the position and the orientation of the corresponding viewpoint. There is no depth information in the zones corresponding to deleted mobile objects.
For each pixel of each image, a resolution value is calculated (step 10). A first and a second partitioning are then carried out during step 11 to determine the lists of images. Step 12 performs a calculation of initial weights. Step 13 consists in determining the key images from the lists and the initial weights. Step 14 modifies the initial weights as a function of whether or not the pixels belong to a key image. The next step 15 performs the updating of the weights as a function of the iterations, that is to say of the changes of the masks, so as to provide, in step 16, relevance values allocated to the pixels. The next step 17 carries out a selection of the pixels as a function of their relevance. A sequence of masks of the selected pixels is then obtained for the image sequence, in step 18. After this step 18, steps 15 to 18 are repeated iteratively so as to refine the masks, until the masks no longer change significantly. Step 19 then carries out the construction of the facetted 3D model from the selected pixels only. The various steps, especially those which differ from the steps described in the reference patent application, are now explained in detail.
A depth map as well as the position and the orientation of the corresponding viewpoint are available at the system input, for each image of the sequence.
Step 10 consists of a calculation, for each pixel of an image, of a value of resolution giving a resolution map for the image.
The process then produces, in step 11, a partition of the sequence. Two partitioning operations are in fact performed so as to limit the data handling, both in the phase of constructing the representation and in the utilization phase (navigation).
A first partitioning of the sequence is performed by identifying the viewpoints having no intersection of their observation field. This will make it possible to avoid comparing them, that is to say comparing the images relating to these viewpoints, during the subsequent steps. Any intersections between the observation fields, of pyramidal shape, of each viewpoint, are therefore determined by detecting the intersections between the edges of these fields. This operation does not depend on the content of the scene, but only on the relative position of the viewpoints. With each current image is thus associated a set of images whose observation field possesses an intersection with that of this current image, this set constituting a list.
A projection is performed during this partitioning step 11, allowing a second partitioning. For each image list, a projection, similar to that described later in regard to step 17, is carried out to identify the counterpart pixels. If an image of the list has too few pixels that are counterparts of pixels of the current image, it is deleted from the list; a final list is thus allocated to each image.
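Under the assumption that the geometric intersection test and the counterpart count are available as black boxes, the two partitioning operations can be sketched thus (illustrative Python; `fields_intersect` and `count_common` are hypothetical stand-ins for the field-intersection test and the projection-based counterpart count described above):

```python
def build_image_lists(images, fields_intersect, count_common, min_common):
    """First partitioning: keep only viewpoints whose observation fields
    intersect that of the current image. Second partitioning: keep only
    the images sharing at least min_common counterpart pixels with it."""
    lists = {}
    for i in images:
        # Images whose pyramidal observation field intersects that of i.
        candidates = [j for j in images if j != i and fields_intersect(i, j)]
        # Keep only those with enough counterpart pixels (same 3D points).
        lists[i] = [j for j in candidates if count_common(i, j) >= min_common]
    return lists
```

The stubs would be replaced by the actual geometric test on the field edges and by the pixel projection of step 17.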
The next step 12 consists of a calculation of initial weights for the images. The calculation is performed on the basis of the percentage of points of the image having no counterpart in the other images relative to the number of points of the image.
The next step 13 consists in determining the key images relating to a sequence. From the lists defined for the various viewpoints of the sequence (step 11) and the calculation of the initial weights for the images (step 12), the key images for the sequence are determined. The initial weight allocated to an image is compared with a threshold: an image is selected as a key image if its initial weight is greater than the threshold. A first list of key images is thus obtained. Thereafter, each image not selected as a key image is examined in turn: if its list contains no key image, the image in question is added to the list of key images; we then go to the next image in the sequence. This procedure makes it possible to identify key images having little redundancy between themselves but which are such that the set contains the maximum information about the scene displayed.
The next step 14 relates to the modifying of the initial weights by taking account of the image type. If the image is not a key image, the weights are not modified and therefore the initial weights calculated in step 12 are the ones which are utilized during the first iteration. If the image is a key image, the initial weights are modified. They are increased, for example by one unit.
Thus, if the image I is a key image:
Weight[I] = 1 + (% of points selected)
Otherwise: Weight[I] = (% of points selected)
Thus, the initial weight is overweighted for the key images.
Step 15 relates to the updating of the weights of the pixels of the image as a function of the iterations. The calculation of the weight, during this updating, can be performed in such a way as to penalize the regions of small size or more coarsely, the images having few selected points.
In fact, the calculation of the weight can be done according to one of the methods proposed in the reference patent application.
A relevance value combining the resolution and the weight is deduced during step 16. It can for example be calculated thus: relevance = resolution x (1 + weight)
A value is allocated to each pixel so as to provide an imagewise relevance map. The key images being overweighted, they are favoured during the initialization of the relevance maps.
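As a minimal sketch, the initialization of the relevance values from the resolution map and the (possibly overweighted) initial weight may be written as follows (illustrative Python; the function and parameter names are assumptions):

```python
def relevance_map(resolution, selected_fraction, is_key_image):
    """resolution: list of per-pixel resolution values for one image;
    selected_fraction: fraction of points of the image having no
    counterpart in the other images (the initial weight of step 12)."""
    # Step 14: key images are overweighted by one unit.
    weight = selected_fraction + (1.0 if is_key_image else 0.0)
    # Step 16: relevance = resolution x (1 + weight).
    return [r * (1.0 + weight) for r in resolution]
```

With equal resolutions, a key image thus starts with strictly greater relevance values than a non-key image, which is what speeds up the convergence of the selection.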
The selection of the pixels is the subject of step 17. This step consists, for each pixel, in searching for its counterparts in the other viewpoints and in comparing the relevance values so as to identify the pixel with the greatest relevance, which is then selected. This results, for each image of the sequence, in a binary image or mask, the pixels to which the value 1 is allocated corresponding, for example, to the selected pixels.
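A possible sketch of this selection step, assuming the counterpart correspondences have already been established by projection (the data layout is an assumption for illustration, and ties are resolved here by not selecting, which is one possible convention):

```python
def select_pixels(relevance, counterparts):
    """relevance[img][px]: relevance value of pixel px of image img;
    counterparts[img][px]: list of (image, pixel) pairs corresponding
    to the same 3D point in the other images of the list of img."""
    masks = {img: {} for img in relevance}
    for img, pixels in relevance.items():
        for px, rel in pixels.items():
            others = [relevance[j][q] for j, q in counterparts[img][px]]
            # A pixel with no counterpart is trivially selected.
            masks[img][px] = 1 if all(rel > r for r in others) else 0
    return masks
```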
Step 18 groups together the masks relating to each of the images making up the sequence so as to provide the sequence of masks.
Step 18 is looped back to step 15 to refine the calculated relevance values. At each iteration, the weights and therefore the relevance values are recalculated from the masks obtained in the previous iteration.
A stopping criterion, calculated during step 18, makes it possible to terminate the iterations and thus go to step 19.
We consider for example the number N(i,j) of pixels selected for each image j at iteration i; the difference

Delta(i) = sum over j of [ N(i,j) - N(i-1,j) ]

is then calculated, corresponding to the sum over the viewpoints of the differences, for each viewpoint, between the number of pixels selected at the current iteration and this number calculated at the previous iteration; its absolute value |Delta(i)| is finally compared with a threshold.
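This stopping criterion may be sketched as follows (illustrative Python; the threshold value is an assumption):

```python
def selection_has_converged(counts_current, counts_previous, threshold):
    """counts_current[j], counts_previous[j]: number of pixels selected
    for viewpoint j at the current and previous iterations."""
    # Sum over the viewpoints of the per-viewpoint differences.
    delta = sum(n - p for n, p in zip(counts_current, counts_previous))
    # The iterations stop when the absolute value falls below the threshold.
    return abs(delta) < threshold
```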
Another aspect of the invention consists in taking account of the coding techniques based on image blocks (or pixel blocks) or on macroblocks during the calculation of the weightings and the selecting of the pixels so as to limit the cost of coding of this selecting.
As was seen in the introduction, a weight is associated with each pixel so as to modulate the resolution criterion, its role being to take into account the cost of selecting the pixels. Specifically, from among the pixels selected and described in a binary mask, the isolated pixels are more expensive to describe than those grouped together into regions. This weight changes as a function of the selection of the pixels in the images. At each iteration, a set of pixels is selected in each image, and it is necessary to evaluate the weight of each pixel in the image, whether or not this pixel is selected, as a function of this new selection so as to cause the relevance of each pixel to change and cause the selection to converge to an optimal solution combining better resolution and minimum coding cost. The weight therefore makes it possible to take into account the cost of coding in the scheme for selecting the pixels.
Let us consider coding blocks of dimensions K pixels by L lines. The coding of the shape of an object or of a region in our case, consists in coding the smallest rectangle consisting of an integer number of blocks containing the region to be coded. Its dimensions are for example M x K pixels horizontally and N x L vertically. The M x N blocks of K x L pixels belonging to this rectangle are coded. The idea consists in favouring the full blocks and the empty blocks in this rectangle, that is to say the blocks K x L of the rectangle lying either entirely in the region consisting of the selected pixels, or entirely outside the region, with respect to the blocks straddling the boundaries of the region. To do this, the calculation and assigning of the weights is done by favouring the filling of the almost full blocks and the emptying of the almost empty blocks. These blocks are in fact generally less expensive to code than the others and the cost of coding the mask is thereby reduced.
The image can be presplit into blocks K x L or else the split can be adapted, at each iteration, to the current regions so as to minimize the number of useful blocks.
In the MPEG-4 standard, the values of K and of L are taken equal to 16 pixels. The cost of coding the shape, as defined within the framework of the MPEG-4 standard for Intra coding, is calculated by splitting the rectangle encompassing the region to be coded into macroblocks of 16 x 16 pixels, the encompassing rectangle being formed in such a way as to contain a minimum integer number of 16 x 16 blocks. The empty or full macroblocks are identified and an index is assigned to them depending on the type of pixels contained in the processed macroblock. The shape of these macroblocks is not coded, thereby rendering them less expensive. When a macroblock contains pixels of different types, selected and non-selected, the class of the pixels, a binary class defining whether or not a pixel is selected, can be coded using a predictive and adaptive arithmetic coding algorithm, by considering the causal environment. The coding cost is defined on the basis of the probability of the event: the most probable class as a function of the causal environment is the least expensive to describe.
The way of favouring the obtaining of full or empty macroblocks may be as follows.
A weight can be calculated per macroblock as a function of the ratio r of the number of pixels selected in a block to the number of pixels of the block. This weight is allocated to all the pixels of the macroblock. The objective is to minimize the number of incomplete macroblocks contained in the rectangle encompassing the region under study.
Another solution consists in taking into account the size of the region in the calculation of this weight, for example as follows: considering the rectangle consisting of an integer number of blocks of K x L and encompassing a given region, no additional weight is assigned to the empty blocks and an additional weight is assigned to the other blocks, corresponding to the ratio of the number of pixels selected to the total number of pixels of the non-empty blocks of the encompassing rectangle. Additional weight is involved here since it in fact entails an overweighting. A weight allocated to a block, in step 15, according to the methods described earlier is retained if no additional weight is assigned to this block. Otherwise, this weight is multiplied by the additional weight assigned to the block. This modification of the weight takes place in step 15.
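As a sketch of the per-macroblock weighting of the first solution, the ratio r can be mapped to a weight that is highest for full or empty blocks and lowest for half-full ones (the symmetric profile used here is an assumption; the process only requires that almost full and almost empty blocks be favoured):

```python
def block_weights(mask, K=16, L=16):
    """mask: binary selection mask as a list of rows (1 = selected).
    Returns a per-pixel weight, constant over each K x L block, equal
    to 1 for full or empty blocks and 0 for half-full ones."""
    H, W = len(mask), len(mask[0])
    weights = [[0.0] * W for _ in range(H)]
    for by in range(0, H, L):
        for bx in range(0, W, K):
            block = [mask[y][x]
                     for y in range(by, min(by + L, H))
                     for x in range(bx, min(bx + K, W))]
            r = sum(block) / len(block)  # ratio of selected pixels
            w = abs(2.0 * r - 1.0)       # favour r close to 0 or 1
            for y in range(by, min(by + L, H)):
                for x in range(bx, min(bx + K, W)):
                    weights[y][x] = w
    return weights
```

Multiplying the relevance of the pixels by such a weight pushes the selection, over the iterations, towards masks whose macroblocks are cheap to code.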
Consideration of discontinuities
When adjoining pixels in an image in fact belong to different 3D surfaces, there is a high risk of creating false surfaces during the synthesis of images. Indeed, most of the processing operations dedicated to image rendering, for example those used in graphics cards, have triangular facets as basic primitives. If a facet is formed from pixels representing surfaces situated at different distances, that is to say on either side of a depth discontinuity, false surfaces are created, leading to large errors of reconstruction. This depth-discontinuity information is therefore essential. Arising from appropriate processing, it is associated with the depth maps.
It may be used when calculating the resolution values associated with each pixel. Indeed, the resolution, calculated on the basis of a window centred on each pixel, is presumed to represent the 3D resolution of the surface to which this pixel corresponds. It is therefore necessary for the pixels taken into account in this calculation to correspond to one and the same surface. When the window contains several different surfaces, the discontinuity information makes it possible to take account of only the pixels belonging to the same surface as the current pixel.
Step 10 calculates, for each pixel of the image, a resolution value. This value is obtained on the basis of a window centred on the pixel by taking account of the depth information. A distribution over a large depth gives a smaller resolution than a distribution over a small depth. The idea is to retain in the window, during this step, only the pixels having no discontinuity with the pixel at the centre of the window. It is on the basis of these selected pixels alone that the calculation of the resolution value is done. The discontinuity information is obtained by appropriate processing of the depth maps. A map of the discontinuities which reveals the segmentation can thus be coupled with the depth map so as to be utilized during this step 10. This discontinuity information also makes it possible to avoid the creation of false surfaces between different surfaces, at the time of rendition. Consequently, this discontinuity information must be included in the representation and transmitted to the receiver. If the representation is a facetted 3D model, the facetization is based on these discontinuities, and the facets contain this information implicitly. If the representation is image based, then the edges of the selected regions are what must implicitly contain the discontinuities. To do this, each image is segmented beforehand into adjoining preregions whose edges are erected on the discontinuities. After each iteration of the scheme for selecting the relevant pixels, the regions resulting therefrom are oversegmented in such a way that any final region possesses all the pixels of the original region which belong moreover to one and the same preregion.
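The discontinuity-aware resolution calculation of step 10 might be sketched as follows (illustrative Python; the `same_surface` predicate stands in for the discontinuity map, and the mapping from depth spread to resolution is an assumption chosen so that a large spread gives a smaller resolution):

```python
def local_resolution(depth, same_surface, y, x, radius=1):
    """depth: depth map as a list of rows; same_surface(y, x, ny, nx)
    indicates that pixel (ny, nx) lies on the same 3D surface as the
    centre pixel (y, x), i.e. that no discontinuity separates them."""
    depths = []
    for ny in range(max(0, y - radius), min(len(depth), y + radius + 1)):
        for nx in range(max(0, x - radius), min(len(depth[0]), x + radius + 1)):
            if same_surface(y, x, ny, nx):
                depths.append(depth[ny][nx])
    # Only pixels of the same surface contribute; a large depth spread
    # in the window maps to a small resolution value.
    spread = max(depths) - min(depths)
    return 1.0 / (1.0 + spread)
```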
The discontinuities are therefore taken into account for the calculation of the masks during step 17, the regions defined by the selected pixels undergoing a partition as a function of the preregion information. Indeed, this step 17 makes it possible to identify and eliminate the inter-image redundancy while retaining only the pixels with the greatest relevance, the pixels selected defining the regions. The images relating to the regions thus obtained are therefore compared with the discontinuity maps defining the preregions so as to undergo additional processing, still during this step 17, a segmentation, in such a way that each final region belongs only to a single preregion. An initial region can thus transform itself into several regions. These regions are numbered and the calculation of the weights during the next iteration will be done by considering there to be several regions of smaller surface area rather than a single region encompassing them. Thus, if a region defined beforehand in an image is traversed by a discontinuity, this region is separated into two regions and the processing is performed as if there were two independent regions. The masks eventually obtained therefore incorporate this discontinuity information. If the coding techniques described earlier are considered, instead of utilizing a single rectangle inscribed within the region defined beforehand, two rectangles inscribed in the regions regarded as independent are used.
Ordered list of regions
All the pixels of all the images can be reconstructed from the selected pixels. The final representation thus consists of masks describing the regions, textures and depth maps corresponding to these masks, and viewpoint parameters. One of the results of this representation is also a numbering of the regions, and for each viewpoint of the original sequence a list of the region numbers required for the reconstruction of the image. This reconstruction will be done by projecting the 3D regions onto the current viewpoint, while making allowance for the respective distances of the various regions in respect of occultations of one region by another. The numbering of the regions is performed during step 17 which defines the various regions for each image.
It is then possible, in the representation, to order the region numbers as a function of their distance from the viewpoint, so as to ease the rendition scheme. This prior ordering makes it possible, at the time of rendition, to project the regions from the furthest away to the closest without having to compare the distances.
To each image is therefore assigned a list of regions making it possible to reconstruct the image. By firstly projecting the points of the regions furthest away, the pixels corresponding to the closest regions will overwrite the pixels corresponding to the regions furthest away in accordance with what is actually seen from the viewpoint, the closest regions hiding the regions furthest away. It is thus no longer useful to compare, for two points in space whose projection corresponds to the same pixel, their depth so as to make the selection. This region number information is included in the 3D representation model and is used by the rendition process, during the creation of the images, which is thus eased. The insertion into the representation of a list, for each input image, of the regions required for the reconstruction of this image, such that this list is ordered from the region furthest away to the closest region, therefore makes it possible to facilitate the image rendition step, by projecting the regions successively from the furthest away to the closest. Thus, there is no need, at the time of image rendition, to compare the depths so as to manage the occultations.
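The rendition from the ordered region list amounts to a painter's algorithm, which might be sketched thus (illustrative Python; the `project` interface is an assumption):

```python
def render_view(ordered_regions, project):
    """ordered_regions: regions ordered from the furthest away to the
    closest; project(region) yields (pixel, value) pairs giving the
    projection of the region onto the current viewpoint."""
    image = {}
    for region in ordered_regions:  # furthest first
        for pixel, value in project(region):
            # Closer regions simply overwrite further ones, so no
            # per-pixel depth comparison is needed.
            image[pixel] = value
    return image
```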
The various alternative embodiments described may be implemented independently of the process for selecting the key images and may therefore be utilized in combination with the process which is the subject of the reference patent application.

Claims

1 Process for constructing a 3D scene model by analyzing image sequences, each image corresponding to a viewpoint defined by its position and its orientation, comprising the following steps:
- calculation, for an image, of a resolution map (10) corresponding to the 3D resolution of the pixels of the image, - partitioning (11) of the images of the sequence, performed by identifying, for a current image, the images whose corresponding viewpoints have an observation field possessing an intersection with the observation field relating to the current image, and by choosing from among these images those possessing a minimum number of common pixels, that is to say corresponding to one and the same 3D point, so as to form a list of images which is associated therewith, these steps being followed by the following iterations:
- calculation of a weight (15) allocated to the pixel on the basis of whether it belongs to a region and of characteristics of the region, - selection of a pixel of the current image (16, 17) as a function of its resolution and of its weight compared with those of the common pixels of the other images of its list, the selected pixels defining the regions, the construction of the 3D model (18) being carried out on the basis of the regions, characterized in that it comprises an additional step of determining key images (13), an image being defined as a key image, during a first pass of reviewing the images of the sequence, as a function of the percentage of pixels of the image having no counterpart in the other images of its list, then during a second pass, if no image previously defined, including in the course of this second pass, as a key image, belongs to its list, and in that the calculation of the weight (14) for the first iteration (15, 16, 17, 18) is performed as a function of whether or not the pixel belongs to a key image.
2 Process according to Claim 1, characterized in that images declared to be key images during the first pass are those for which the percentage is greater than a threshold.

3 Process according to Claim 2, characterized in that the threshold is adapted for each sequence.
4 Process according to Claim 1, comprising a coding of the regions of an image on the basis of a splitting of the image into image blocks, characterized in that a region is defined by the smallest rectangle comprising the set of image blocks belonging, even partly, to the region to be coded and in that the weight allocated to the pixels of a block (15) is calculated as a function of the percentage of pixels selected in the block.
5 Process according to Claim 1, comprising a coding of the regions of an image on the basis of a splitting of the image into pixel blocks, characterized in that a region is defined by the smallest rectangle comprising the set of image blocks belonging, even partly, to the region to be coded and in that the weight allocated to the pixels of the region is calculated (15) as a function of the ratio of the number of pixels selected to the total number of pixels of the non-empty blocks in the encompassing rectangle.
6 Process according to Claim 1, characterized in that the calculation of the resolution (10) allocated to a pixel is performed by taking into account the depth of neighbouring pixels selected on the basis of a discontinuity cue.
7 Process according to Claim 1, characterized in that a discontinuity cue is taken into account when defining the regions (16, 17).
8 Process according to Claim 1, characterized in that the regions are numbered, the numbers of the regions required for the construction of an image are associated with the image, and in that the numbering is performed in an ordered manner from the region furthest away to the closest region or vice versa.
9 Process for navigating in a 3D scene consisting in creating images as a function of the movement of the viewpoint, characterized in that the images are created on the basis of the process for constructing the 3D model according to Claim 1.
PCT/EP2001/013291 2000-11-29 2001-11-16 Process for constructing a 3d scene model utilizing key images WO2002045022A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002223682A AU2002223682A1 (en) 2000-11-29 2001-11-16 Process for constructing a 3d scene model utilizing key images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0015413A FR2817375B1 (en) 2000-11-29 2000-11-29 METHOD FOR CONSTRUCTING A 3D SCENE MODEL USING KEY IMAGES
FR00/15413 2000-11-29

Publications (2)

Publication Number Publication Date
WO2002045022A2 true WO2002045022A2 (en) 2002-06-06
WO2002045022A3 WO2002045022A3 (en) 2002-08-01

Family

ID=8857005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2001/013291 WO2002045022A2 (en) 2000-11-29 2001-11-16 Process for constructing a 3d scene model utilizing key images

Country Status (3)

Country Link
AU (1) AU2002223682A1 (en)
FR (1) FR2817375B1 (en)
WO (1) WO2002045022A2 (en)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SEQUEIRA V ET AL: "3D environment modelling using laser range sensing" ROBOTICS AND AUTONOMOUS SYSTEMS, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 16, no. 1, 1 November 1995 (1995-11-01), pages 81-91, XP004001940 ISSN: 0921-8890 *
ZHENG J Y ET AL: "Interactive human motion acquisition from video sequences" PROCEEDINGS COMPUTER GRAPHICS INTERNATIONAL 2000, PROCEEDINGS COMPUTER GRAPHICS INTERNATIONAL 2000, GENEVA, SWITZERLAND, 19-24 JUNE 2000, pages 209-217, XP002180364 2000, Los Alamitos, CA, USA, IEEE Comput. Soc, USA ISBN: 0-7695-0643-7 *

Also Published As

Publication number Publication date
AU2002223682A1 (en) 2002-06-11
FR2817375A1 (en) 2002-05-31
FR2817375B1 (en) 2002-12-27
WO2002045022A3 (en) 2002-08-01


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: The EPO has been informed by WIPO that EP was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 EP: PCT application non-entry into the European phase
NENP Non-entry into the national phase in:

Ref country code: JP

WWW WIPO information: withdrawn in national office

Country of ref document: JP