WO2014208230A1 - Coordinate calculation apparatus and method, and image processing apparatus and method - Google Patents
Coordinate calculation apparatus and method, and image processing apparatus and method
- Publication number
- WO2014208230A1 (PCT/JP2014/063663)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- coordinate
- pixel
- image
- distance
- distorted image
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
- G06T3/047—Fisheye or wide-angle transformations
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
Definitions
- The present invention relates to an image processing apparatus and method for correcting a distorted image obtained by photographing with an ultra-wide-angle optical system, such as a fisheye lens or an omnidirectional mirror, to obtain a perspective-projection image, and
- to a coordinate calculation apparatus and method used in such an image processing apparatus and method.
- Ultra-wide-angle optical systems such as fisheye lenses or omnidirectional mirrors can project an image with an angle of view of 180 degrees or more onto a single imaging surface, and are used in various fields that require a wide imaging field of view.
- Patent Document 1 shows that the zenith angle is used (column 7).
- When the zenith angle is used, a trigonometric function of the zenith angle must be evaluated, for example, when projection is performed by the orthographic, stereographic, or equisolid-angle projection method.
- Patent Document 1 teaches the use of a lookup table to simplify the process of obtaining the trigonometric function.
- Alternatively, the distance from the optical axis may be used. In this case as well, the trigonometric function calculation becomes unnecessary and the amount of calculation can be reduced. However, since the distance from the optical axis changes only slightly with the zenith angle near the horizon of the projection sphere, the number of bits of the data representing that distance must be increased to perform the calculation with the desired accuracy near the horizon.
- The present invention has been made to solve the above problem. Its purpose is to enable the projection from the projection sphere onto the imaging surface, that is, the calculation of the coordinates on the imaging surface, to be performed with high accuracy while suppressing the amount of calculation.
- To this end, the coordinate calculation apparatus of the present invention is a coordinate calculation apparatus that converts three-dimensional coordinates on a projection sphere into two-dimensional coordinates on a distorted image, comprising: a composite index calculation unit that calculates a composite index combining the height of a point of interest on the projection sphere, obtained from the three-dimensional coordinates, and its distance from the optical axis; a distance calculation unit that calculates, from the composite index, the distance from the origin in the distorted image of the point corresponding to the point of interest; and a coordinate calculation unit that calculates the two-dimensional coordinates in the distorted image from the distance from the origin in the distorted image.
- According to the present invention, the coordinates can be calculated with a small number of bits and with high accuracy.
- FIG. 1 is a block diagram showing a configuration example of an image processing apparatus according to the present invention. FIG. 2 shows an example of a fisheye image. FIG. 3 shows the projection from a subject onto the projection sphere and from the projection sphere onto the imaging surface. FIG. 4 shows the x, y, and z coordinate values on the projection sphere. FIG. 5 shows the positional relationship of the imaging surface, the projection sphere, and the output image plane. FIGS. 6(a)-(d) show the zoom magnification, pan angle, tilt angle, and plane inclination angle.
- FIG. 7 is a block diagram illustrating a configuration example of the image correction processing unit of the image processing apparatus in FIG. 1.
- FIGS. 8(a) and (b) outline the image correction processing.
- FIGS. 9(a)-(c) show examples of the zenith angle of the point of interest on the projection sphere, its height, and its distance from the optical axis.
- FIG. 10 is a block diagram showing the coordinate calculation apparatus of Embodiment 1 of the present invention. FIG. 11 is a graph showing the relationship among the zenith angle of the point of interest on the projection sphere, the composite index, the height, and the distance from the optical axis.
- FIG. 1 shows an image processing apparatus according to Embodiment 1 of the present invention.
- the illustrated image processing apparatus includes a fish-eye lens 101 as an example of an ultra-wide-angle optical system, an imaging signal generation unit 102, a video memory 103, an image correction processing unit 104, and a corrected video output circuit 105.
- the fisheye lens 101 optically captures an image with a wide angle of view, and for example, a circular distorted image is obtained as shown in FIG.
- the circular image in FIG. 2 is an image of a subject within a range up to an angle of view of 180 degrees.
- FIG. 2 shows a simplified image obtained when a direction along one road is imaged from an intersection of roads.
- the distorted image captured by the fisheye lens 101 is referred to as a fisheye image.
- A virtual projection surface (projection sphere) S is assumed: a hemisphere centered at the intersection O of the imaging surface F, which is perpendicular to the optical axis AL of the fisheye lens 101, and the optical axis AL, located on the subject side (imaging target space side) of the imaging surface F, with its bottom surface coinciding with the imaging surface.
- All light from the subject can be considered to travel toward the center O, be projected onto the projection sphere S, and then be projected from the projection sphere S onto the imaging surface F.
- Projection from the projection sphere S onto the imaging surface F is performed by one of the orthographic, stereographic, equidistant, and equisolid-angle projection methods, among others. Which method is used depends on the fisheye lens 101 employed.
- FIG. 3 shows a case where light from the direction indicated by the chain line BL is projected onto the point Ps on the projection spherical surface S and projected onto the point Pf on the imaging surface F.
- the position of the point Pf on the imaging surface F is represented by two-dimensional coordinates (p, q).
- These two-dimensional coordinates represent positions along two coordinate axes extending in two mutually orthogonal directions on the imaging surface F, namely the P axis and the Q axis.
- the position of the point Ps on the projection spherical surface S is indicated by three-dimensional coordinates (x, y, z).
- The three-dimensional coordinates have their origin O at the intersection of the imaging surface F and the optical axis AL (hence at the center of the bottom of the hemisphere); the Z axis is the coordinate axis extending in the direction of the optical axis AL, and the X and Y axes are the coordinate axes corresponding to the P and Q axes on the imaging surface F, respectively.
- FIG. 4 shows the magnitudes of the x, y, and z coordinate values separately.
- The position of the point Ps can also be expressed by a zenith angle θ and an azimuth angle ∂.
- The zenith angle θ of the point Ps is its inclination angle from the optical axis AL, in other words, the angle formed by the straight line connecting the point Ps and the origin O and the optical axis AL.
- The azimuth angle ∂ of the point Ps is the angle in the rotation direction about the optical axis, with, for example, the direction of the X axis as the reference direction.
- The azimuth angle of the point Pf is the same as that of the point Ps, and is represented by the same symbol ∂.
- The relationship between the distance Rf from the optical axis AL to the point Pf (which is also the distance from the origin O to the point Pf) and the distance r from the optical axis AL to the point Ps differs depending on the projection method. That is, the relationship between the zenith angle θ of the point Ps and the distance Rf from the optical axis AL to the point Pf varies with the projection method as follows. For the orthographic projection method,
- Rf = Rb × sinθ (1)
- and for the stereographic projection method,
- Rf = Rb × tan(θ/2) (2)
- where Rb is the radius of the projection sphere S.
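As a concrete illustration, equations (1) and (2) can be written out as a small helper. This is a sketch rather than code from the patent; the equidistant and equisolid-angle branches are standard textbook forms added for comparison, and equation (2) is used as printed here (elsewhere the document writes the stereographic image height as 2Rb·tan(θ/2)).

```python
import math

def image_height(theta, Rb, model="orthographic"):
    """Distance Rf from the optical axis on the imaging surface for a
    point at zenith angle theta (radians), per equations (1) and (2);
    the last two models are common fisheye variants added for context."""
    if model == "orthographic":      # equation (1)
        return Rb * math.sin(theta)
    if model == "stereographic":     # equation (2)
        return Rb * math.tan(theta / 2)
    if model == "equidistant":
        return Rb * theta
    if model == "equisolid":
        return 2 * Rb * math.sin(theta / 2)
    raise ValueError(model)
```

At θ = 90 degrees the orthographic image height equals the sphere radius Rb, consistent with the circular fisheye image of FIG. 2.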
- The imaging signal generation unit 102 includes an imaging element, for example a CCD, having the imaging surface F shown in FIG. 3, and generates an electrical image signal representing the optical image.
- the video memory 103 stores the image signal generated by the imaging signal generation unit 102 in units of frames.
- the image correction processing unit 104 reads the image signal stored in the video memory 103, corrects the fisheye image represented by the image signal, and outputs a corrected image.
- A part of the fisheye image, that is, a region including the position in the fisheye image corresponding to the line-of-sight direction DOV shown in FIG. 5 and its surrounding positions (the portion of the fisheye image generated by light from the line-of-sight direction DOV and its surrounding directions), is selected, and the selected image is converted into a perspective projection image.
- the image corrected by the image correction processing unit 104 is referred to as an “output image”.
- the output image is an image obtained by perspectively projecting an image on the projection spherical surface S onto a plane (output image plane) H, and is represented by the same symbol H as that of the output image plane.
- the plane H is in contact with the projection spherical surface S at the intersection point G between the projection spherical surface S and the straight line indicating the line-of-sight direction DOV.
- the position of the point Ph on the plane (output image plane) H is represented by coordinates (u, v) having the contact point G with the projection spherical surface S as the origin. Coordinates (u, v) represent positions in the directions of the U axis and V axis, which are mutually orthogonal coordinate axes, on the output image plane H.
- The U axis makes an angle φ with respect to the rotation reference axis J.
- the rotation reference axis J passes through the origin G, is parallel to the XY plane, and is orthogonal to the straight line OG (a straight line connecting the origin O and the origin G).
- The angle φ is referred to as the plane inclination angle or rotation angle.
- The azimuth angle ∂ of the origin G is called the pan angle and is represented by the symbol α.
- The zenith angle θ of the origin G is called the tilt angle and is represented by the symbol β.
- the size of the output image that is, the angle of view of the output image (electronic zoom angle of view) can be changed by the magnification (zoom magnification) m.
- FIG. 6A conceptually shows the change in the size of the output image by changing the zoom magnification.
- FIGS. 6B, 6C, and 6D separately show the pan angle α, the tilt angle β, and the plane tilt angle φ for the same output image plane H as in FIG. 5.
- A point Ps (x, y, z) on the projection sphere S is projected onto the intersection Ph (u, v) of the straight line connecting the point Ps and the origin O with the output image plane H.
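This projection can be sketched in vector form. The text states the geometry only in words, so the formulation below (plane H taken as the set of points P with P·G = |G|², since H is tangent to the sphere at the point of contact G) is an assumption consistent with it.

```python
import numpy as np

def project_to_output_plane(Ps, G):
    """Intersect the line through the origin O and the sphere point Ps
    with the output image plane H tangent to the projection sphere at G.
    H is modeled as the set of points P with dot(P, G) = |G|^2 (an
    assumption: the text describes the construction only in words)."""
    Ps, G = np.asarray(Ps, float), np.asarray(G, float)
    t = G.dot(G) / G.dot(Ps)   # scale factor so that dot(t * Ps, G) = |G|^2
    return t * Ps
```

For Ps = G the point maps to itself, as expected for the point of contact G.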
- the image correction processing unit 104 calculates the position in the fisheye image corresponding to the position of each pixel (pixel of interest) of the output image in order to generate the above-described output image (image projected on the output image plane H). Then, the pixel value of the pixel of interest of the output image is calculated based on the pixel value of the pixel in the fisheye image at or near the calculated position, and an image is generated from an array of pixels having the calculated pixel value.
- the corrected video output circuit 105 outputs the corrected image signal as a normal television video signal, for example, an NTSC video signal. Note that this signal can be encoded by an encoding device and transmitted to display a video at a remote place.
- the image correction processing unit 104 includes an output range selection unit 201, a spherical coordinate calculation unit 202, a fisheye image coordinate calculation unit 203, and a pixel value calculation unit 204. Image correction processing performed by the image correction processing unit 104 will be described with reference to FIGS. 5 and 8A and 8B.
- The output range selection unit 201 selects the range of the output image by selecting the desired line-of-sight direction DOV, that is, the pan angle α and the tilt angle β, as well as the desired plane tilt angle φ and the size of the output image, that is, the zoom magnification m. This selection is performed, for example, according to a user operation. The range of the output image may also be changed automatically over time.
- The selection of the output range by the output range selection unit 201 is equivalent to determining or changing the line-of-sight direction by mechanically panning, tilting, and rotating the camera (changing the plane tilt angle) and to determining or changing the magnification by optical zoom; the processing by the output range selection unit 201 is therefore also called electronic PTZ (pan, tilt, zoom) processing.
- The spherical coordinate calculation unit 202 projects the output image H, whose range is selected by the output range selection unit 201, onto the projection sphere S, and calculates the three-dimensional coordinates of the points on the projection sphere S corresponding to each point in the image H.
- the calculation of the three-dimensional coordinates is a process of obtaining the position (x, y, z) on the projection spherical surface S corresponding to the position of each point in the output image H, for example, (u, v).
- the position on the projection sphere S corresponding to each point Ph in the output image is the position of the intersection of the projection sphere S with the straight line connecting each point Ph and the origin O.
- a range on the projection spherical surface S corresponding to the output image H is indicated by a symbol Sh in FIGS. 5 and 8A.
- The fisheye image coordinate calculation unit 203 projects the image of the range Sh (the image on the projection sphere) shown in FIG. 8A onto the fisheye image as shown in FIG. 8B, and calculates the fisheye image coordinates.
- This projection obtains, for the position (x, y, z) of each point Ps on the projection sphere S, the position of the corresponding point Pf in the fisheye image, that is, the position (p, q) on the imaging surface F.
- a range (cutout range) in a fish-eye image (indicated by the same symbol F as that of the imaging surface) corresponding to the range Sh is indicated by a symbol Fh in FIG. 8B.
- There are several methods for projecting from the projection sphere S onto the fisheye image, such as orthographic, equidistant, stereographic, and equisolid-angle projection; the method is decided according to which projection method the fisheye lens implements.
- The pixel value calculation unit 204 calculates the pixel value of each pixel of the output image in FIG. 8. In calculating a pixel value, the pixel value of the fisheye image at or near the position (p, q) corresponding to the position (u, v) of each pixel (pixel of interest) in the output image is used. That is, if a pixel exists at the position in the fisheye image corresponding to the position of the pixel of interest, its pixel value is used as the pixel value of the pixel of interest; if no pixel exists at that position, the pixel value of the pixel of interest is obtained by interpolation based on the pixel values of one or more pixels surrounding the corresponding position in the fisheye image.
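The sampling rule above can be sketched as follows. The array indexing convention (row = q, column = p) and the choice of bilinear interpolation are assumptions, since the text only says the value is obtained "by interpolation based on the pixel values of one or more pixels around the position".

```python
import numpy as np

def sample_pixel(fisheye, p, q):
    """Pixel value for the output image's pixel of interest, taken from
    the fisheye image at (p, q): the pixel value itself when (p, q)
    falls exactly on a pixel, otherwise bilinear interpolation from the
    four surrounding pixels."""
    h, w = fisheye.shape[:2]
    x0, y0 = int(np.floor(p)), int(np.floor(q))
    dx, dy = p - x0, q - y0
    x0 = min(max(x0, 0), w - 1)
    y0 = min(max(y0, 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    return ((1 - dx) * (1 - dy) * fisheye[y0, x0]
            + dx * (1 - dy) * fisheye[y0, x1]
            + (1 - dx) * dy * fisheye[y1, x0]
            + dx * dy * fisheye[y1, x1])
```

When (p, q) lands exactly on a pixel, the interpolation weights collapse to 1 for that pixel and 0 elsewhere, so both cases in the text are covered by one formula.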
- To obtain the image height, the present invention uses a composite index combining the height and the distance from the optical axis.
- In Patent Document 1 (FIG. 1, column 7), the zenith angle is used, as shown in FIG. 9A.
- Data representing the zenith angle θ can express positions with the same accuracy, using data of the same bit width, both near the zenith of the projection sphere and near the horizon (imaging surface).
- When the zenith angle is used, however, it is necessary to calculate a trigonometric function when the method of projecting from the projection sphere onto the fisheye image is the orthographic, stereographic, or equisolid-angle projection method.
- In Patent Document 1, a lookup table is used to simplify the calculation of the trigonometric functions.
- Alternatively, the position (height) in the optical axis direction, that is, the coordinate value z shown in FIG. 9B, may be used.
- In this case, the calculation of the trigonometric function becomes unnecessary and the amount of calculation can be reduced.
- Near the zenith, however, the change in height with respect to the change in zenith angle is small, so high bit accuracy is required there (to perform the calculation with the same accuracy, data with a larger bit width must be used).
- For example, for a projection sphere of radius 1, when the zenith angle changes by 1 degree near the horizon, the amount of change in height is about 1.745 × 10⁻².
- The amount of change in height when the zenith angle changes from 0 degrees to 1 degree near the zenith is about 1.523 × 10⁻⁴, so the accuracy differs by about 100 times.
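The two quoted height changes can be reproduced directly, taking Rb = 1 so that the height is z = cos θ; the near-horizon figure of "about 1.745" then comes out as 1.745 × 10⁻², making the roughly hundredfold accuracy gap explicit.

```python
import math

Rb = 1.0  # projection sphere radius; height of a point is z = Rb * cos(theta)

# change in height over a 1-degree step near the horizon (89 -> 90 degrees)
dz_horizon = Rb * (math.cos(math.radians(89)) - math.cos(math.radians(90)))
# change in height over a 1-degree step near the zenith (0 -> 1 degree)
dz_zenith = Rb * (math.cos(math.radians(0)) - math.cos(math.radians(1)))

print(dz_horizon)               # ~1.745e-2
print(dz_zenith)                # ~1.523e-4
print(dz_horizon / dz_zenith)   # ~115: roughly two orders of magnitude
```

This is exactly why fixed-width data representing z loses accuracy near the zenith: a 1-degree step there moves z by about 100 times less than the same step near the horizon.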
- The coordinate calculation apparatus of the present invention is intended to solve this problem; it is designed so that a large difference in calculation accuracy does not arise between the vicinity of the zenith and the vicinity of the horizon even if the bit width of the data is not changed.
- FIG. 10 shows a configuration example of the coordinate calculation device 203a.
- The illustrated coordinate calculation device 203a is used as the fisheye image coordinate calculation unit 203 in FIG. 7 and calculates two-dimensional coordinates in the fisheye image.
- the coordinate calculation device 203a in FIG. 10 includes a composite index calculation unit 301, a distance calculation unit 302, and a coordinate calculation unit 303.
- the coordinate calculation device 203a receives three-dimensional coordinates (x, y, z) representing the position on the projection spherical surface S corresponding to the position of the target pixel of the output image H.
- The three-dimensional coordinates (x, y, z) are obtained by the spherical coordinate calculation unit 202 projecting, onto the projection sphere, the output image whose range has been selected by the output range selection unit 201 according to the pan angle α, the tilt angle β, the plane tilt angle φ, and the zoom magnification.
- the three-dimensional coordinates (x, y, z) are input to the composite index calculation unit 301.
- The composite index calculation unit 301 calculates the distance r from the optical axis,
- r = √(x² + y²) (6)
- and the composite index Rn combining r and the height z according to equations (7a), (7b), (8a), and (8b) given below.
- FIG. 11 shows the relationship among the point of interest Ps on the projection sphere S, the zenith angle θ (the angle formed by the line connecting the point Ps and the origin O and the optical axis AL), the composite index Rn, the height z, and the distance r from the optical axis.
- The horizontal axis represents the zenith angle θ of the point of interest Ps.
- The composite index Rn, the height z, and the distance r from the optical axis are shown on the vertical axis.
- Since the height z has a slope of 0 at a zenith angle of 0 degrees, the change in its value with respect to the zenith angle becomes small near 0 degrees, and a large bit width is required to perform the calculation using the height with high accuracy there.
- Likewise, the distance r from the optical axis has a slope of 0 at a zenith angle θ of 90 degrees, so the change in its value with respect to θ is small near 90 degrees, and a large bit width is required to perform the calculation using r with high accuracy there.
- The composite index Rn, in contrast, shows no significant difference in slope at any zenith angle, and therefore a small bit width suffices.
- the composite index Rn is sent to the distance calculation unit 302.
- Based on the composite index Rn, the distance calculation unit 302 obtains the distance (image height) Rf of the point Pf from the origin in the fisheye image. When a point Ps on the projection sphere is projected onto a point Pf of the fisheye image, the image height Rf of the point Pf is expressed as a function of the radius Rb and the zenith angle θ:
- Rf = Rb · F(θ) (12)
- For example, for the orthographic projection method,
- Rf = Rb · sinθ (12a)
- and for the stereographic projection method,
- Rf = 2Rb · tan(θ/2) (12b)
- Here, θ in equation (12) is given by
- θ = cos⁻¹(z/Rb) (13)
- The distance calculation unit 302 computes Rf by equation (10) in the range where condition (7a), z ≥ √(x² + y²), is satisfied, and by equation (11) in the range where condition (8a), z < √(x² + y²), is satisfied.
- The coordinate calculation unit 303 calculates the fisheye-image two-dimensional coordinates (p, q) from the image height Rf calculated by equations (10) and (11) and from the projected spherical three-dimensional coordinates (x, y, z).
- The calculation formulas are as shown in equations (21Aa) to (21Bc).
- Alternatively, Rf may be obtained using a lookup table storing the relationship between Rn and Rf. In this case, if Rf were stored for all values of Rn, a lookup table of large capacity would be required. To avoid this, a lookup table storing the Rf values DF[0] to DF[N−1] corresponding to N discrete representative values DN[0] to DN[N−1] of Rn is prepared, and Rf corresponding to Rn may be obtained by an interpolation calculation using the value of Rn calculated for the given pixel of interest by the composite index calculation unit 301, the representative value DN[n] that is not greater than Rn and closest to it, the representative value DN[n+1] that is greater than Rn and closest to it, and the output values DF[n] and DF[n+1] read out using DN[n] and DN[n+1] as input values. The arithmetic expression in that case is shown by equation (22).
- Here, n in equation (22) satisfies the condition represented by the following expression (24):
- DN[n] ≤ Rn < DN[n+1] (24)
- When Rn is equal to or greater than DN[N−1], DF[N−1] is output as Rf.
- DN[n] and DF[n] are generally determined so that the relationship between Rn and Rf, represented by equation (10) in the range where condition (7a) is satisfied and by equation (11) in the range where condition (8a) is satisfied, is approximated by a polygonal line; N and the representative values are chosen within the range of values that Rn and Rf can take, for example so that the difference between DN[n] and DN[n+1] is constant.
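Since equations (22) to (24) are not reproduced in this text, the lookup-table variant can only be sketched under assumptions; the linear interpolation below is one form consistent with the description.

```python
def rf_from_lut(Rn, DN, DF):
    """Image height Rf from composite index Rn via a lookup table:
    DN[0..N-1] are discrete representative values of Rn and DF[0..N-1]
    the corresponding Rf values.  Linear interpolation between the
    bracketing entries (condition (24): DN[n] <= Rn < DN[n+1]); when
    Rn >= DN[N-1], DF[N-1] is output, as stated in the text."""
    N = len(DN)
    if Rn >= DN[N - 1]:
        return DF[N - 1]
    if Rn <= DN[0]:          # guard for values below the table (assumption)
        return DF[0]
    n = 0
    while not (DN[n] <= Rn < DN[n + 1]):   # locate the bracketing interval
        n += 1
    t = (Rn - DN[n]) / (DN[n + 1] - DN[n])
    return DF[n] + t * (DF[n + 1] - DF[n])
```

With breakpoints chosen so that DN[n+1] − DN[n] is constant, the bracketing interval could also be found by a single division instead of the linear scan shown here.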
- When the orthographic projection method is used, equations (10a) and (11a) are used as equations (10) and (11); when the stereographic projection method is used, equations (10b) and (11b) are used.
- The relationship between Rn and Rf may also be obtained by actual measurement.
- For example, the relationship between Rn and Rf can be obtained by photographing a two-dimensional chart such as a square grid with the fisheye lens and examining how much the grid is distorted in the photographed image.
- While the present invention has been described above as a coordinate calculation device and an image processing device including the coordinate calculation device, the coordinate calculation method and image processing method implemented by these devices also form part of the present invention.
- FIG. 12 shows the processing procedure of the coordinate calculation method performed by the coordinate calculation apparatus of FIG. 10.
- In step ST1, a composite index Rn combining the height and the distance from the optical axis is calculated from the projected spherical three-dimensional coordinates (x, y, z) according to equations (7a), (7b), (8a), and (8b) shown in Embodiment 1.
- In step ST2, the distance Rf from the origin in the fisheye image is calculated from the composite index Rn.
- This processing may be performed by the calculations shown in equations (10) and (11), or by using a lookup table.
- In step ST3, the fisheye-image two-dimensional coordinates (p, q) are calculated from the distance Rf from the origin in the fisheye image and the projected spherical three-dimensional coordinates (x, y, z) according to equations (21Aa) to (21Bc) shown in Embodiment 1.
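Steps ST1 to ST3 can be combined into one sketch. Since equations (10), (11), and (21Aa) are not reproduced in this text, the image height is derived here from equations (12a), (13), and (19) for an orthographic-projection lens; the function name and structure are mine, not the patent's.

```python
import math

def fisheye_coords(x, y, z, Rb):
    """Map a point (x, y, z) on the projection sphere of radius Rb to
    fisheye-image coordinates (p, q), assuming an orthographic lens."""
    r = math.sqrt(x * x + y * y)
    # ST1: composite index, equations (7a)-(8b)
    Rn = r if z >= r else math.sqrt(2.0) * Rb - z
    # ST2: image height Rf.  Recover z from Rn (equation (19) in the (7a)
    # range, the inverse of equation (8b) otherwise), then apply (12a)/(13).
    if Rn <= Rb / math.sqrt(2.0):
        zz = math.sqrt(Rb * Rb - Rn * Rn)   # equation (19)
    else:
        zz = math.sqrt(2.0) * Rb - Rn       # inverse of equation (8b)
    Rf = Rb * math.sin(math.acos(max(-1.0, min(1.0, zz / Rb))))
    # ST3: two-dimensional coordinates, equations (21Ab)-(21Bc)
    if Rf == 0.0 or r == 0.0:
        return 0.0, 0.0
    return x * Rf / r, y * Rf / r
```

For the orthographic model the round trip reduces to Rf = r, so (p, q) simply equals (x, y); other projection methods would change only the ST2 step.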
- the coordinate calculation method shown in FIG. 12 can also be realized by software, that is, by a programmed computer.
- the present invention can be applied to processes other than the above processing.
- the present invention can be applied to processing for obtaining a panoramic image from a fisheye image.
- the coordinate calculation device of the present invention or an image processing device including the coordinate calculation device can be used in a monitoring system.
- 201 output range selection unit 202 spherical coordinate calculation unit, 203 fisheye image coordinate calculation unit, 203a coordinate calculation device, 204 pixel value calculation unit, 301 composite index calculation unit, 302 distance calculation unit, 303 coordinate calculation unit.
Abstract
Description
In a coordinate calculation apparatus that converts three-dimensional coordinates on a projection sphere into two-dimensional coordinates on a distorted image, the apparatus is characterized by comprising:
a composite index calculation unit that calculates a composite index combining the height of a point of interest on the projection sphere, obtained from the three-dimensional coordinates, and its distance from the optical axis;
a distance calculation unit that calculates, from the composite index, the distance from the origin in the distorted image of the point in the distorted image corresponding to the point of interest; and
a coordinate calculation unit that calculates the two-dimensional coordinates in the distorted image from the distance from the origin in the distorted image.
FIG. 1 shows an image processing apparatus according to Embodiment 1 of the present invention. The illustrated image processing apparatus includes a fisheye lens 101 as an example of an ultra-wide-angle optical system, an imaging signal generation unit 102, a video memory 103, an image correction processing unit 104, and a corrected video output circuit 105.
A virtual projection surface (projection sphere) S is assumed: a hemisphere centered at the intersection O of the imaging surface F, which is perpendicular to the optical axis AL of the fisheye lens 101, and the optical axis AL, located on the subject side (imaging target space side) of the imaging surface F, with its bottom surface coinciding with the imaging surface.
All light from the subject can be considered to travel toward the center O, be projected onto the projection sphere S, and then be projected from the projection sphere S onto the imaging surface F.
Projection from the projection sphere S onto the imaging surface F is performed by one of the orthographic, stereographic, equidistant, and equisolid-angle projection methods, among others. Which method is used depends on the fisheye lens 101 employed.
The position of a point Pf on the imaging surface F is represented by two-dimensional coordinates (p, q). These two-dimensional coordinates represent positions along two coordinate axes extending in two mutually orthogonal directions on the imaging surface F, namely the P axis and the Q axis.
The zenith angle θ of the point Ps is its inclination angle from the optical axis AL, in other words, the angle formed by the straight line connecting the point Ps and the origin O and the optical axis AL.
The azimuth angle ∂ of the point Ps is the angle in the rotation direction about the optical axis, with, for example, the direction of the X axis as the reference direction.
That is, the relationship between the zenith angle θ of the point Ps and the distance Rf from the optical axis AL to the point Pf differs depending on the projection method, as follows. For example, in the case of the orthographic projection method, the relationship
Rf = Rb × sinθ (1)
holds, and in the case of stereographic projection, the relationship
Rf = Rb × tan(θ/2) (2)
holds.
In the above equations, Rb is the radius of the projection sphere S. The radius Rb of the circular fisheye image shown in FIG. 2 (the distance from the center of the image to the point at an angle of view of 180 degrees) corresponds to the radius of the projection sphere S.
There is the relationship
r = Rb × sinθ (3)
Therefore, in the case of the orthographic projection method, the relationship
Rf = r (4)
holds, and in the case of the stereographic projection method,
The video memory 103 stores the image signal generated by the imaging signal generation unit 102 in units of frames.
The plane H touches the projection sphere S at the intersection G of the straight line indicating the line-of-sight direction DOV with the projection sphere S. The position of a point Ph on the plane (output image plane) H is represented by coordinates (u, v) with the point of contact G with the projection sphere S as the origin. The coordinates (u, v) represent positions in the directions of the U and V axes, which are mutually orthogonal coordinate axes on the output image plane H.
The azimuth angle ∂ of the origin G is called the pan angle and is represented by the symbol α. The zenith angle θ of the origin G is called the tilt angle and is represented by the symbol β. The size of the output image, that is, the angle of view of the output image (electronic zoom angle of view), can be changed by the magnification (zoom magnification) m. FIG. 6(a) conceptually shows how the size of the output image changes with the zoom magnification. FIGS. 6(b), (c), and (d) separately show the pan angle α, the tilt angle β, and the plane inclination angle φ for the same output image plane H as in FIG. 5.
To generate the above-described output image (the image projected onto the output image plane H), the image correction processing unit 104 calculates the position in the fisheye image corresponding to the position of each pixel (pixel of interest) of the output image, calculates the pixel value of the pixel of interest based on the pixel values of the pixels at or near the calculated position in the fisheye image, and generates an image from the array of pixels having the calculated pixel values.
The image correction processing performed by the image correction processing unit 104 will be described with reference to FIG. 5 and FIGS. 8(a) and (b).
There are several methods for projecting from the projection sphere S onto the fisheye image, such as orthographic, equidistant, stereographic, and equisolid-angle projection, and the method is decided according to which projection method the fisheye lens implements.
In general, in this projection, to obtain the image height in the fisheye image F, one may use the zenith angle, a value representing the position in the optical axis direction (the height), or the distance from the optical axis.
The coordinate calculation device 203a of FIG. 10 includes a composite index calculation unit 301, a distance calculation unit 302, and a coordinate calculation unit 303.
The three-dimensional coordinates (x, y, z) are input to the composite index calculation unit 301.
The composite index Rn combining the distance from the optical axis,
r = √(x² + y²) (6)
and the height z is calculated according to equations (7a), (7b), (8a), and (8b) below. When
z ≥ √(x² + y²) (7a)
holds,
Rn = √(x² + y²) (7b)
and when
z < √(x² + y²) (8a)
holds,
Rn = √2 × Rb − z (8b)
On the projection sphere, conditions (7a) and (8a) are equivalent to
z ≥ Rb/√2 (7c)
and
z < Rb/√2 (8c)
respectively.
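Equations (6) to (8b) amount to the following small function (a sketch; the variable names are mine). The two branches meet continuously at z = Rb/√2, which is exactly the boundary of conditions (7c) and (8c).

```python
import math

def composite_index(x, y, z, Rb):
    """Composite index Rn of equations (7a)-(8b): the distance from the
    optical axis where z >= r (near the zenith), and sqrt(2)*Rb - z
    otherwise (near the horizon)."""
    r = math.sqrt(x * x + y * y)          # equation (6)
    if z >= r:                            # condition (7a), i.e. (7c)
        return r                          # equation (7b)
    return math.sqrt(2.0) * Rb - z        # equation (8b)
```

Rn runs from 0 at the zenith to √2·Rb at the horizon, and, unlike z or r alone, its slope never vanishes, which is what keeps the required bit width small.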
In FIG. 11, the horizontal axis represents the zenith angle θ of the point of interest Ps.
The composite index Rn, the height z, and the distance r from the optical axis are shown on the vertical axis.
Since the distance r from the optical axis has a slope of 0 at a zenith angle θ of 90 degrees, the change in its value with respect to θ is small near 90 degrees, and a large bit width is required to perform calculations using r with high accuracy.
The composite index Rn, on the other hand, shows no significant difference in slope at any zenith angle, and therefore a small bit width suffices.
Based on the composite index Rn, the distance calculation unit 302 obtains the distance (image height) Rf of the point Pf from the origin in the fisheye image by the calculation expressed by the equations below. That is,
when the image height Rf is expressed as a function of the projection sphere radius Rb and the zenith angle θ as
Rf = Rb · F(θ) (9)
the distance calculation unit 302, in the range where
z ≥ √(x² + y²) (7a)
is satisfied,
When a point Ps on the sphere is projected onto a point Pf of the fisheye image,
the image height Rf of the point Pf is expressed as a function of Rb and the zenith angle θ:
Rf = Rb · F(θ) (12)
For example, in the case of orthographic projection it is expressed as
Rf = Rb · sinθ (12a)
and in the case of stereographic projection as
Rf = 2Rb · tan(θ/2) (12b)
θ in equation (12) is expressed as
θ = cos⁻¹(z/Rb) (13)
From equations (12) and (13),
Rb² = x² + y² + z² (15)
Therefore, the relationship
z² = Rb² − (x² + y²) (16)
holds.
Rn² = x² + y² (17)
From equations (17) and (16),
z² = Rb² − Rn² (18)
Therefore,
z = √(Rb² − Rn²) (19)
From equations (14) and (19),
it can be seen that, when condition (8a) is satisfied, Rf is obtained by equation (11).
From equations (12a) and (13),
Rf = Rb · sin(cos⁻¹(z/Rb)) (14a)
From equations (14a) and (19),
From equations (12b) and (13),
Rf = 2Rb · tan(cos⁻¹(z/Rb)/2) (14b)
From equations (14b) and (19),
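Equations (13), (14a), and (14b) can be checked numerically. Since sin(cos⁻¹(t)) = √(1 − t²), equation (14a) reduces to Rf = √(Rb² − z²); the values of Rb and z below are arbitrary test values, not from the patent.

```python
import math

Rb, z = 1.0, 0.36  # arbitrary test values with 0 <= z <= Rb
theta = math.acos(z / Rb)                    # equation (13)
rf_ortho = Rb * math.sin(theta)              # equation (14a), orthographic
# sin(acos(t)) = sqrt(1 - t^2), so (14a) reduces to sqrt(Rb^2 - z^2)
assert abs(rf_ortho - math.sqrt(Rb * Rb - z * z)) < 1e-12
rf_stereo = 2 * Rb * math.tan(theta / 2)     # equation (14b), stereographic
# tan(theta/2) = sin(theta) / (1 + cos(theta)) gives an equivalent closed form
assert abs(rf_stereo - 2 * Rb * rf_ortho / (Rb + z)) < 1e-12
```

Substituting z = √(Rb² − Rn²) from equation (19) into these closed forms is what eliminates the trigonometric functions from the final computation.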
When Rf ≠ 0,
p = x × Rf/r (21Ab)
q = y × Rf/r (21Ac)
and when
Rf = 0 (21Ba)
holds,
p = 0 (21Bb)
q = 0 (21Bc)
この場合、Rnのすべての値に対してRfを格納すると、ルックアップテーブルとして容量の大きなものが必要となるので、それを避けるため、Rnの複数の、即ちN個の離散的代表値DN[0]~DN[N-1]について対応するRfの値DF[0]~DF[N-1]を記憶したルックアップテーブルを用意し、与えられた着目画素について合成指標算出部301で算出されたRnの値と、Rn以下でRnに最も近い代表値DN[n]、Rnよりも大きくRnに最も近い代表値DN[n+1]、並びにDN[n]、DN[n+1]を入力値として読み出された出力値DF[n]、DF[n+1]を用いた補間演算により、Rnに対応するRfを求めることとしても良い。その場合の演算式が式(22)で示される。
The representative values satisfy
DN[n]≦DN[n+1] (23)
(where n is any of 0 to N−2), and the index n used for the interpolation is the one for which
DN[n]≦Rn<DN[n+1] (24)
When Rn is greater than or equal to DN[N−1], DF[N−1] is output as Rf.
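The interpolation described around expression (22) can be sketched as a piecewise-linear table lookup. The exact form of expression (22) is not reproduced in this excerpt, so the sketch below implements standard linear interpolation between the bracketing entries, with the clamping behaviour the text specifies for Rn ≧ DN[N−1]:

```python
import bisect

def lut_lookup(Rn, DN, DF):
    """Piecewise-linear lookup of Rf from Rn: find n with
    DN[n] <= Rn < DN[n+1] (expression (24)) and interpolate between
    DF[n] and DF[n+1]; when Rn >= DN[N-1], output DF[N-1] directly."""
    if Rn >= DN[-1]:
        return DF[-1]
    if Rn <= DN[0]:
        return DF[0]
    n = bisect.bisect_right(DN, Rn) - 1   # largest n with DN[n] <= Rn
    t = (Rn - DN[n]) / (DN[n + 1] - DN[n])
    return DF[n] + t * (DF[n + 1] - DF[n])
```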
The representative values DN[0] to DN[N−1] are determined as N values within the range of values that Rn and Rf can take (for example, such that the difference between DN[n] and DN[n+1] is constant), so as to give a polyline approximation of the relation between Rn and Rf that is expressed by expression (10) in the range where condition (7a) is satisfied and by expression (11) in the range where condition (8a) is satisfied.
When orthographic projection is used, expressions (10a) and (11a) are used as expressions (10) and (11); when stereographic projection is used, expressions (10b) and (11b) are used as expressions (10) and (11).
Fig. 12 shows the processing procedure in the coordinate calculation method implemented by the coordinate calculation device of Fig. 10.
In Fig. 12, step ST1 computes, from the three-dimensional coordinates (x, y, z) on the projection sphere, the composite index Rn combining the height and the distance from the optical axis, according to expressions (7a), (7b), (8a), and (8b) shown in Embodiment 1.
This processing may be performed by the computations shown in expressions (10) and (11), or by using a lookup table.
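The full procedure of Fig. 12 can be sketched end to end for an orthographic-projection lens. Only step ST1 is quoted in this excerpt; the remaining steps below follow the device description above, and the function name is an assumption:

```python
import math

def sphere_to_fisheye(x, y, z, Rb):
    """Map a point (x, y, z) on the projection sphere of radius Rb to
    2-D fisheye coordinates (p, q), assuming orthographic projection."""
    r = math.hypot(x, y)
    # ST1: composite index, expressions (7a)-(8b)
    Rn = r if z >= r else math.sqrt(2) * Rb - z
    # distance step: recover z from Rn, then Rf = Rb*sin(theta)
    if Rn <= Rb / math.sqrt(2):
        zz = math.sqrt(Rb * Rb - Rn * Rn)
    else:
        zz = math.sqrt(2) * Rb - Rn
    Rf = Rb * math.sin(math.acos(max(-1.0, min(1.0, zz / Rb))))
    # coordinate step: expressions (21Ab)-(21Bc)
    if Rf == 0:
        return 0.0, 0.0
    return x * Rf / r, y * Rf / r
```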
Claims (8)
- A coordinate calculation device that converts three-dimensional coordinates on a projection sphere into two-dimensional coordinates on a distorted image, comprising:
a composite index calculation unit that calculates a composite index, obtained from the three-dimensional coordinates, combining the height of a point of interest on the projection sphere and its distance from the optical axis;
a distance calculation unit that calculates, from the composite index, the distance from an origin in the distorted image of the point in the distorted image corresponding to the point of interest; and
a coordinate calculation unit that calculates the two-dimensional coordinates in the distorted image from the distance from the origin in the distorted image.
- The coordinate calculation device according to claim 1, wherein, denoting the three-dimensional coordinates by (x, y, z) and the radius of the projection sphere by Rb, the composite index calculation unit obtains the composite index Rn as
Rn=√(x²+y²) when z≧√(x²+y²), and as
Rn=√2×Rb−z when z<√(x²+y²).
- The coordinate calculation device according to claim 3 or 4, wherein the coordinate calculation unit obtains the two-dimensional coordinates (p, q) as
p=x×Rf/√(x²+y²)
q=y×Rf/√(x²+y²)
when Rf≠0, and as
p=0
q=0
when Rf=0.
- An image processing device comprising:
the coordinate calculation device according to any one of claims 1 to 5;
a spherical coordinate calculation unit that takes each pixel of an output image as a pixel of interest and calculates the three-dimensional coordinates on the projection sphere corresponding to the coordinates of the pixel of interest; and
a pixel value calculation unit that calculates the pixel value of the pixel of interest of the output image based on the two-dimensional coordinates in the distorted image calculated by the coordinate calculation device;
wherein the coordinate calculation device obtains the two-dimensional coordinates representing the position in the distorted image corresponding to the three-dimensional coordinates calculated by the spherical coordinate calculation unit, as the coordinates representing the position corresponding to the coordinates of the pixel of interest; and
the pixel value calculation unit uses, if a pixel exists at the position in the distorted image corresponding to the coordinates of the pixel of interest, the pixel value of the pixel at that corresponding position as the pixel value of the pixel of interest, and, if no pixel exists at the position in the distorted image corresponding to the coordinates of the pixel of interest, obtains the pixel value of the pixel of interest by interpolation based on the pixel values of one or more pixels surrounding that corresponding position.
- A coordinate calculation method for converting three-dimensional coordinates on a projection sphere into two-dimensional coordinates on a distorted image, comprising:
a composite index calculation step of calculating a composite index, obtained from the three-dimensional coordinates, combining the height of a point of interest on the projection sphere and its distance from the optical axis;
a distance calculation step of calculating, from the composite index, the distance from an origin in the distorted image of the point in the distorted image corresponding to the point of interest; and
a coordinate calculation step of calculating the two-dimensional coordinates in the distorted image from the distance from the origin in the distorted image.
- An image processing method comprising:
the composite index calculation step, the distance calculation step, and the coordinate calculation step constituting the coordinate calculation method according to claim 7;
a spherical coordinate calculation step of taking each pixel of an output image as a pixel of interest and calculating the three-dimensional coordinates on the projection sphere corresponding to the coordinates of the pixel of interest; and
a pixel value calculation step of calculating the pixel value of the pixel of interest of the output image based on the two-dimensional coordinates in the distorted image calculated in the coordinate calculation step;
wherein the composite index calculation step calculates the composite index using the height of the point of interest on the projection sphere and its distance from the optical axis, obtained from the three-dimensional coordinates calculated in the spherical coordinate calculation step;
the coordinate calculation step obtains the two-dimensional coordinates representing the position in the distorted image corresponding to the three-dimensional coordinates calculated in the spherical coordinate calculation step, as the coordinates representing the position corresponding to the coordinates of the pixel of interest; and
the pixel value calculation step uses, if a pixel exists at the position in the distorted image corresponding to the coordinates of the pixel of interest, the pixel value of the pixel at that corresponding position as the pixel value of the pixel of interest, and, if no pixel exists at the position in the distorted image corresponding to the coordinates of the pixel of interest, obtains the pixel value of the pixel of interest by interpolation based on the pixel values of one or more pixels surrounding that corresponding position.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/785,801 US9436973B2 (en) | 2013-06-24 | 2014-05-23 | Coordinate computation device and method, and an image processing device and method |
JP2014541471A JP5666069B1 (ja) | 2013-06-24 | 2014-05-23 | 座標算出装置及び方法、並びに画像処理装置及び方法 |
EP14817958.3A EP3016065B1 (en) | 2013-06-24 | 2014-05-23 | Coordinate computation device and method, and image processing device and method |
CN201480035309.2A CN105324791B (zh) | 2013-06-24 | 2014-05-23 | 坐标计算装置和方法、以及图像处理装置和方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013131439 | 2013-06-24 | ||
JP2013-131439 | 2013-06-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014208230A1 true WO2014208230A1 (ja) | 2014-12-31 |
Family
ID=52141582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/063663 WO2014208230A1 (ja) | 2013-06-24 | 2014-05-23 | 座標算出装置及び方法、並びに画像処理装置及び方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9436973B2 (ja) |
EP (1) | EP3016065B1 (ja) |
JP (1) | JP5666069B1 (ja) |
CN (1) | CN105324791B (ja) |
WO (1) | WO2014208230A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017179722A1 (ja) * | 2016-04-15 | 2017-10-19 | パナソニックIpマネジメント株式会社 | 画像処理装置及び撮像装置 |
JP2017220785A (ja) * | 2016-06-07 | 2017-12-14 | キヤノン株式会社 | 画像処理装置、画像処理方法、およびプログラム |
JP2021180017A (ja) * | 2020-12-03 | 2021-11-18 | アポロ インテリジェント コネクティビティ (ベイジン) テクノロジー カンパニー リミテッドApollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | 路側感知方法、路側感知装置、電子デバイス、記憶媒体、路側設備、及びプログラム |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI558208B (zh) * | 2015-07-14 | 2016-11-11 | 旺玖科技股份有限公司 | 影像處理方法、影像處理裝置及顯示系統 |
US9984436B1 (en) * | 2016-03-04 | 2018-05-29 | Scott Zhihao Chen | Method and system for real-time equirectangular projection |
US9961261B2 (en) * | 2016-06-20 | 2018-05-01 | Gopro, Inc. | Image alignment using a virtual gyroscope model |
JP6780315B2 (ja) * | 2016-06-22 | 2020-11-04 | カシオ計算機株式会社 | 投影装置、投影システム、投影方法及びプログラム |
US10154238B2 (en) * | 2016-11-11 | 2018-12-11 | Roland Dg Corporation | Projection system and modeling machine |
US10931971B2 (en) * | 2016-12-27 | 2021-02-23 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding 360-degree image |
US10762658B2 (en) * | 2017-10-24 | 2020-09-01 | Altek Corporation | Method and image pick-up apparatus for calculating coordinates of object being captured using fisheye images |
JP2020521348A (ja) * | 2017-10-24 | 2020-07-16 | エルジー エレクトロニクス インコーポレイティド | 魚眼ビデオ情報を含む360度ビデオを送受信する方法及びその装置 |
WO2020102772A1 (en) * | 2018-11-15 | 2020-05-22 | Qualcomm Incorporated | Coordinate estimation on n-spheres with spherical regression |
JP6795637B2 (ja) | 2019-02-20 | 2020-12-02 | ミネベアミツミ株式会社 | アンテナ装置、及び、給電装置 |
CN113052758B (zh) * | 2021-03-10 | 2024-04-26 | 上海杰图天下网络科技有限公司 | 全景图像中点目标大地坐标的量测方法、系统、设备和介质 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000235645A (ja) * | 1999-02-12 | 2000-08-29 | Advanet Corp | 画像変換用演算装置 |
JP2005339313A (ja) * | 2004-05-28 | 2005-12-08 | Toshiba Corp | 画像提示方法及び装置 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6833843B2 (en) * | 2001-12-03 | 2004-12-21 | Tempest Microsystems | Panoramic imaging and display system with canonical magnifier |
JP2008052589A (ja) | 2006-08-25 | 2008-03-06 | Konica Minolta Holdings Inc | 広角画像の歪み補正方法 |
JP4974765B2 (ja) | 2007-05-30 | 2012-07-11 | 京セラ株式会社 | 画像処理方法及び装置 |
JP2009064225A (ja) | 2007-09-06 | 2009-03-26 | Canon Inc | 画像処理装置及び画像処理方法 |
JP4243767B2 (ja) | 2007-09-21 | 2009-03-25 | 富士通株式会社 | 魚眼レンズカメラ装置及びその画像抽出方法 |
KR101404527B1 (ko) * | 2007-12-26 | 2014-06-09 | 다이니폰 인사츠 가부시키가이샤 | 화상 변환 장치 및 화상 변환 방법 |
JP4629131B2 (ja) * | 2008-09-03 | 2011-02-09 | 大日本印刷株式会社 | 画像変換装置 |
KR100988872B1 (ko) * | 2009-07-08 | 2010-10-20 | 주식회사 나노포토닉스 | 회전 대칭형의 광각 렌즈를 이용하여 복합 영상을 얻는 방법과 그 영상 시스템 및 하드웨어적으로 영상처리를 하는 이미지 센서 |
CN102509261B (zh) * | 2011-10-10 | 2014-05-07 | 宁波大学 | 一种鱼眼镜头的畸变校正方法 |
JP5966341B2 (ja) * | 2011-12-19 | 2016-08-10 | 大日本印刷株式会社 | 画像処理装置、画像処理方法、画像処理装置用プログラム、画像表示装置 |
CN102663734A (zh) * | 2012-03-15 | 2012-09-12 | 天津理工大学 | 鱼眼镜头的标定及鱼眼图像的畸变矫正方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3016065A4 |
Also Published As
Publication number | Publication date |
---|---|
CN105324791A (zh) | 2016-02-10 |
US20160078590A1 (en) | 2016-03-17 |
EP3016065A1 (en) | 2016-05-04 |
JPWO2014208230A1 (ja) | 2017-02-23 |
JP5666069B1 (ja) | 2015-02-12 |
US9436973B2 (en) | 2016-09-06 |
EP3016065A4 (en) | 2017-01-11 |
EP3016065B1 (en) | 2019-07-10 |
CN105324791B (zh) | 2018-09-14 |
Legal Events
Code | Title | Description
---|---|---
WWE | Wipo information: entry into national phase | Ref document number: 201480035309.2; Country of ref document: CN
ENP | Entry into the national phase | Ref document number: 2014541471; Country of ref document: JP; Kind code of ref document: A
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14817958; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 14785801; Country of ref document: US
WWE | Wipo information: entry into national phase | Ref document number: 2014817958; Country of ref document: EP
NENP | Non-entry into the national phase | Ref country code: DE