WO2013029606A2 - Method, arrangement and driver assistance system for determining the spatial distribution of objects relative to a vehicle - Google Patents
Method, arrangement and driver assistance system for determining the spatial distribution of objects relative to a vehicle
- Publication number
- WO2013029606A2 (PCT/DE2012/100255)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- camera
- voxel
- images
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/564—Depth or shape recovery from multiple images from contours
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Definitions
- The invention relates to a method and an arrangement for determining the spatial distribution of objects in a limited spatial area relative to a vehicle equipped with at least one camera.
- The invention also relates to a driver assistance system that uses a plurality of arrangements according to the invention to exchange information between vehicles about the spatial distribution of objects relative to the vehicles.
- Data detected by sensors can, for example, support braking, automatically maintain certain distances to vehicles ahead, recognize road signs, in particular speed limits or prohibition signs, and issue corresponding acoustic and/or visual warnings.
- a driver assistance system, e.g. a brake system
- Determining depth information, i.e. the distance of an object from the vehicle, is not trivial, especially for objects such as pedestrians and at larger distances.
- Systems based on ultrasound, radar or laser waves for determining depth information require, on the one hand, special transmitters and receivers that can generally be used only for depth detection and, on the other hand, the active (invasive) emission of corresponding waves.
- Radar waves are not reflected equally well by all objects.
- the term "camera” here all types of image sensors and corresponding image processing hardware and software are understood to be possible to record an at least roughly resolved, spatially two-dimensional image.
- stereo vision, which is based on the same principles as human vision
- images of the same scene are taken from two different locations
- Stereo vision is computationally expensive and error-prone, as similar points are often wrongly considered corresponding points, resulting in so-called phantom objects. This occurs especially when the images contain objects with periodic patterns, e.g. fences.
- Depth information can be used in real time.
- The invention has the object of providing a method and an arrangement that allow the spatial distribution of objects in a limited spatial area relative to a vehicle equipped with at least one camera to be determined quickly and cheaply.
- The invention is based on the idea of using inexpensive cameras that are nowadays built into many vehicles anyway, e.g. for traffic sign recognition.
- The pixels are mapped into the voxel space taking the imaging laws into account, and the pixel values associated with the respective pixels (e.g. gray values) are entered into the voxels, where they are added to any values already present. This process is repeated several times, causing certain values to accumulate in those voxels whose associated real volume elements contain an object, which then allows a decision.
- The two-dimensional images can be evaluated directly with regard to the above question, or they can first be subjected to fast-to-perform image processing, e.g. low-pass filtering.
- The evaluation of a single two-dimensional image yields no unambiguous depth information, because different object constellations can lead to the same two-dimensional image; the image position alone does not allow the distance to the camera to be inferred.
- A small object positioned close to the camera in the viewing space can produce the same image as a large object of the same shape and color positioned far from the camera.
- Such ambiguities are resolved according to the invention by evaluating multiple images of the same scene taken from different aspect angles.
- Shots from only a few different aspect angles, e.g. 15 to 20 images covering a small aspect-angle range of e.g. 5 to 10 degrees, are sufficient to gain depth information on traffic scenarios that is important for increasing traffic safety.
- This finding is surprising insofar as experience shows that complete information about the existing objects cannot be obtained when viewing a scene from only a small aspect-angle range. For example, when looking at the outwardly curved side of a column from only a small aspect-angle range, it cannot be recognized whether the column's cross-section is, e.g., round or semicircular.
- The tomographic methods known from medicine likewise teach that an object must be viewed from as many different aspect angles as possible or, in the case of transmission tomography, irradiated, in order to obtain relevant information. In medical tomography this requirement can easily be met, whereas no road user "circles" an object in order to obtain information about his travel path.
- The invention makes it possible for the first time to gain important safety-relevant depth information about the current traffic scenario quickly and cheaply from images recorded while driving by a camera located in the vehicle itself.
- The invention makes it possible to decide, for the considered spatial area, whether and, if so, where an object, i.e. a solid body, is present relative to the vehicle.
- The recorded images need not be subjected to pattern recognition followed by a recognition decision. Rather, the pixel values assigned to the individual pixels of a two-dimensional image can be transferred into the voxel space and accumulated there, either directly or, if necessary, after a fast-to-execute filtering, for example a weighted averaging over e.g. five spatially and temporally adjacent images of an image sequence.
- FIG. 1 shows, highly schematically, a vehicle moving relative to an object and a two-dimensional image of the corresponding traffic scene taken from the vehicle. FIG. 2 is a schematic diagram illustrating the problem that no depth information can be obtained from a two-dimensional image without further knowledge.
- Fig. 3 illustrates a basic principle of the invention. Accordingly, it is first determined which spatial area relative to the vehicle is to be considered.
- The spatial area is divided into volume elements, typically cubes or cuboids with a side length of e.g. 10 to 30 cm, each volume element being represented by a voxel of a voxel space.
- The voxel space is represented by B × H × T (B, H, T ∈ ℕ⁺) voxels, where B stands for the number of voxels in the width direction, H for the number of voxels in the height direction and T for the number of voxels in the depth direction.
- the space area is divided into cubes having a side length of 20 cm.
- Each voxel is assigned an m-dimensional vector (m ∈ ℕ⁺) whose individual components are, for example, color and brightness values or, for purposes of normalization, counts of the number of entries in the respective voxel, as will be discussed below.
- For each voxel, m memory cells are reserved in a suitable memory device, and the values in the m memory cells are reset to a predetermined initial value, as a rule zero.
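The voxel-space bookkeeping described above can be sketched as follows. This is an illustrative sketch in Python/NumPy; the dimensions, names and the choice of m = 2 memory cells (accumulator plus entry counter) are assumptions for illustration, not taken from the patent.

```python
import numpy as np

# Illustrative voxel space of B x H x T voxels (width, height, depth),
# each voxel holding two memory cells: one accumulator for gray values
# and one entry counter for later normalization. All cells start at zero.
B, H, T = 64, 32, 128                                # assumed voxel counts
accumulator = np.zeros((B, H, T), dtype=np.float64)  # summed gray values
entry_count = np.zeros((B, H, T), dtype=np.uint32)   # entries per voxel
```

With 20 cm cubes as in the exemplary embodiment, this would cover a spatial area of roughly 12.8 m × 6.4 m × 25.6 m.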
- It is advantageous if the image of a volume element on the camera plane is on the order of one pixel, i.e. if B is greater than or equal to b and H is greater than or equal to h.
- The resolution of the voxel space in height and width should therefore correspond approximately to the resolution of the two-dimensional image. This prevents a conspicuous but small object, clearly distinguished from its surroundings by e.g. a color or brightness value, from going unrecognized merely because significantly more pixels fall within the image of a voxel than the object occupies in a high-resolution two-dimensional image.
- The camera may advantageously be a commercial camera, for example one already installed in the vehicle for other purposes such as traffic sign recognition.
- Fig. 1 shows, highly schematically, the process of recording an image; the width axis is designated x, the height axis y and the depth axis z. A single object 10 is located laterally obliquely in front of a vehicle 12 in the x-z plane. If this scene is photographed from the vehicle, a two-dimensional image 14 of the x-y plane with b × h pixels is obtained, in which the real object 10 appears as a two-dimensional map 20.
- Each pixel of the captured image is assigned an n-dimensional vector (n ∈ ℕ⁺). In the simplest case of grayscale images, these vectors are one-dimensional (scalar), their single component being a generally integer value from a predetermined range, e.g. [0, 63], with a predetermined number, e.g. 64, of possible numerical values that correspond to the possible gray levels of the respective digital image, e.g. the lowest value for a white and the highest value for a completely black picture element, though of course any other assignment is possible.
- The pixels may also, e.g. in color images, be associated with three-dimensional vectors whose three components correspond to the possible intensity levels of three primary colors, e.g. red, green and blue or cyan, magenta and yellow.
- The pixels can also be associated with vectors of more than three dimensions, e.g. 5-dimensional, with three components for, for example, color intensity levels, one component for a gray-scale level and one further component.
- The components of the vectors are usually referred to briefly as image or brightness values. For determining whether or not an object is present in the considered space, consideration of a grayscale image is as a rule sufficient.
- Color images may be converted to grayscale images before the evaluation described below.
- In the following it is assumed that the images taken are grayscale images and that each pixel is assigned only a scalar pixel value, e.g. a number between 0 and 255. In this case it is sufficient to assign each voxel one or preferably two memory cells, gray levels being accumulated in one of them while the other is used, e.g., to count (for purposes of normalization) how many two-dimensional images have been "viewed" by the voxel.
- A step may then be taken that can be termed "viewing the two-dimensional image from the voxel space", in which at least certain voxels, preferably all voxels, of the voxel space are mapped onto the two-dimensional image taking the imaging laws into account.
- From the pixel value(s) associated with the pixel(s) onto which a voxel maps, an image value is determined by appropriate methods, e.g. averaging or weighted averaging (or several image values, if the vectors associated with the pixels have more than one component and more than one component is evaluated), and added to the value(s) stored in the respective memory cell(s) of the voxel.
- In the simplest case, the pixel value of the pixel onto which a voxel maps is added to the numerical value present in the corresponding memory cell of the voxel. If a voxel maps onto more than one pixel, or if its projection does not hit a pixel center exactly, an image value can be determined as described above, e.g. by averaging.
- One of the memory cells reserved for each voxel may function as an entry counter to record the number of entries per voxel, which may be used to normalize or weight the values contained in the other memory cell(s) of each voxel. Since several different two-dimensional images of the considered space are successively subjected to the "viewing from the voxel space" described above, with the corresponding image values being accumulated in the voxels, it may be expedient to prevent individual extreme values from corrupting the result. To this end, voxels onto which many images have been backprojected may be weighted differently than voxels into whose memory cells only a few entries, e.g. a single one, were made.
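The accumulation step described above can be sketched as follows. This is an illustrative sketch: `project` is a hypothetical placeholder for the camera's imaging laws at the current vehicle pose, and `voxel_centers` pairs each voxel index with the center of its real volume element; none of these names come from the patent.

```python
import numpy as np

def accumulate_image(image, voxel_centers, project, accumulator, entry_count):
    """Add each voxel's backprojected pixel value to its memory cell.

    `project` maps a 3-D voxel center (in vehicle coordinates) to pixel
    coordinates (u, v) for the current camera pose; voxels that fall
    outside the image are skipped.
    """
    h, w = image.shape
    for idx, center in voxel_centers:            # idx = (i, j, k) voxel index
        u, v = project(center)
        if 0 <= u < w and 0 <= v < h:
            accumulator[idx] += image[int(v), int(u)]  # accumulate gray value
            entry_count[idx] += 1                      # count the entry
```

Repeating this for every image lets values accumulate in exactly those voxels that consistently backproject onto object pixels.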
- At least certain of the b × h pixels may be subjected to a filtering, in particular a low-pass filtering, before the "viewing of the two-dimensional image from the voxel space". The filter can act within each image on the lines and/or columns, and/or, within a sequence of temporally and spatially offset images, on the temporal course of particular pixels or of each pixel. Since, as explained below, it suffices to consider only certain planes in the voxel space, it may accordingly be sufficient to include only certain pixels, e.g. some horizontal lines, which advantageously further reduces the computational effort.
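A low-pass filtering restricted to certain horizontal lines, as suggested above, could look like this minimal sketch; the kernel choice and function name are assumptions for illustration.

```python
import numpy as np

def lowpass_rows(image, rows, kernel=(0.25, 0.5, 0.25)):
    """Apply a simple 1-D low-pass filter along selected horizontal lines only.

    Filtering only certain rows mirrors the remark that it can suffice
    to include only certain pixels, reducing computational effort.
    """
    out = image.astype(np.float64)
    for r in rows:
        out[r] = np.convolve(image[r], kernel, mode="same")
    return out
```

Rows not listed in `rows` are passed through unchanged.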
- Fig. 2 illustrates that, without knowledge of the object properties, no depth information can be derived from the map 20 of a real object 10 in a single two-dimensional image.
- The scene must therefore be viewed from different aspect angles, so that a sequence of temporally and spatially offset images of the scene arises; the temporal and spatial distances between the individual images of the sequence need not be equal. Only the position of the vehicle or camera at the time each image is taken, relative to the considered spatial area, needs to be known.
- The vehicle is moved relative to the spatial area, a measure of the movement or of the relative position of vehicle and spatial area is detected by suitable means, and another image of the area of interest is recorded.
- In FIG. 3 this is indicated schematically by way of example.
- Images of the object 10 are taken at different times t1, t2 and t3, at which the vehicle (not shown here) is located at different locations. It should be noted that this does not mean that the vehicle stops to take the images and is then moved; rather, the driver moves the vehicle in the normal way while the images are recorded.
- Identifying as object-containing those volume elements of the spatial area whose assigned voxels have, in at least one of the memory cells, numerical values lying in a certain value range naturally provides meaningful results only if the scene was viewed from different angles.
- the steps of "taking a picture", “moving the vehicle” and “evaluating the pictures” are repeated several times, e.g. between 5 and 30, preferably between 10 and 20 times repeated, of course, when evaluating the images, the degree of movement of the vehicle and thus the new position of the vehicle relative to the space area (ie the new aspect angle) is taken into account and the numerical values in the
- Memory cells are accumulated according to predeterminable criteria (e.g., application of additional filters) before any actually deemed to be relevant
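The repeated take/move/evaluate cycle can be sketched as follows; `take_image`, `estimate_pose` and `accumulate` are hypothetical stand-ins for the camera, the movement-detection means and the voxel-space evaluation, not APIs from the patent.

```python
def build_voxel_map(take_image, estimate_pose, accumulate, n_images=15):
    """Repeat 'take an image' / 'move' / 'evaluate' for e.g. 10-20 images.

    Each image is evaluated together with the vehicle pose (aspect angle)
    valid at the moment it was taken, so values accumulate only in voxels
    that consistently backproject onto object pixels.
    """
    for _ in range(n_images):
        pose = estimate_pose()    # measure of vehicle movement / position
        image = take_image()      # two-dimensional camera image
        accumulate(image, pose)   # 'view the image from the voxel space'
```

The vehicle keeps moving normally throughout; the loop merely records the pose belonging to each image.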
- While from a single two-dimensional image an object could be located at virtually arbitrary positions within the cone shown in Fig. 2, it follows from the intersection of the viewing cones shown in Fig. 3 that the object must be located in area 40.
- Volume elements of the spatial area whose assigned voxels have, in at least one of the m memory cells, numerical values lying in a certain value range are identified as volume elements containing an object, which can then be used for a variety of purposes. For example, warning messages can be issued to the driver, or certain driver assistance systems can be activated if a collision with an object threatens. The voxel space thus obtained can particularly advantageously be analyzed only after e.g. 10-20 images have been evaluated before the identification step, or each captured image can be examined immediately to see which voxels have mapped onto which pixels and which volume elements contain objects.
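The identification of object-containing volume elements via a value range, normalized by the entry counter, might be sketched as follows; the thresholds and the minimum entry count are illustrative assumptions.

```python
import numpy as np

def identify_object_voxels(accumulator, entry_count, lo, hi, min_entries=3):
    """Flag voxels whose normalized accumulated value lies in [lo, hi].

    Normalizing by the entry counter weights down voxels that were
    'viewed' in only a few images, so individual extreme values do not
    corrupt the result.
    """
    seen = entry_count >= min_entries
    mean = np.zeros_like(accumulator)
    # divide only where entries exist to avoid division by zero
    np.divide(accumulator, entry_count, out=mean, where=entry_count > 0)
    return seen & (mean >= lo) & (mean <= hi)
```

The returned boolean mask marks the volume elements deemed to contain an object.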
- A sectional plane through the voxel space can be displayed together with scale information, the volume elements identified as containing an object that lie in the displayed plane being visibly displayed differently from the other in-plane volume elements. The cutting plane may be at a fixed height, e.g. 50 cm above the ground, or several cutting planes can be displayed superimposed.
- The current image of the first camera can also be displayed on a display device in the vehicle, the image being superimposed with a color-coded representation generated from the information obtained according to the invention, from which distance information regarding the distance of any objects present in the image relative to the vehicle can be read.
- More important than information display, however, is that the information is forwarded to other driver assistance systems, so that depth information on the spatial distribution of objects relative to the vehicle can be made available in quasi real time, as previously only a human could obtain by viewing the scene.
- When predetermined criteria are met, for example after the inclusion of a certain number of images, e.g. after 15 images, or after covering a certain distance, the spatial area is at least partially redefined and new voxels are added to the voxel space, while others are omitted.
- A new voxel space is defined which partially overlaps the previously defined one, so that meaningful depth information can be obtained via the voxels that are common to several adjacent voxel spaces.
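The partial overlap between successive voxel spaces can be realized by shifting the arrays along the depth axis, as in this sketch; a pure depth shift is an assumption, since the patent does not fix the overlap geometry.

```python
import numpy as np

def shift_voxel_space(accumulator, entry_count, shift_k):
    """Slide the voxel space forward by `shift_k` depth voxels.

    Voxels leaving the area behind the vehicle are dropped, freshly
    covered volume elements get zero-initialized voxels, and the
    overlapping voxels keep their accumulated values.
    """
    def shift(a):
        out = np.zeros_like(a)
        out[:, :, :-shift_k] = a[:, :, shift_k:]  # keep overlap, drop the rest
        return out
    return shift(accumulator), shift(entry_count)
```

Calling this e.g. every 15 images keeps the considered spatial area moving with the vehicle without discarding the accumulated evidence in the overlap.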
- The position of the objects in the considered spatial area can also change, since they may be moving objects. Since the object recognition according to the invention works very fast, an object movement can also be detected and visualized, e.g. by representing a so-called motion shadow on a display device in the vehicle.
- In addition, images can be taken by at least one second camera of at least one second spatial area that at least partially overlaps the first spatial area, so that the images captured by the second camera can be taken into account in determining the spatial distribution of objects in the first spatial area.
- the second camera may be attached to the vehicle on which the first camera is mounted.
- the information of the second or further cameras can be backprojected into the overlapping voxel space.
- The at least one second camera may be attached to another vehicle or to a stationary device, for example to a traffic sign, which thus allows a completely different view of the area of interest and makes it possible, for example, to detect objects hidden behind another object. For example, if a vehicle approaches a truck parked, in its driving direction, on the right side of the road after an intersection, behind which children are playing, while another vehicle equipped with a system according to the invention approaches the intersection from a cross street such that the children lie in its view, the vehicles can, as described below, form an ad-hoc network and exchange complementary information about the respective traffic scenario.
- For this, accurate detection of the absolute position of the vehicle is necessary; this can be done via the positioning functions of mobile units or via positioning systems such as GPS.
- An arrangement according to the invention for determining the spatial distribution of objects in a specific spatial area relative to a vehicle equipped with at least one camera comprises at least one camera, means for detecting the movement of the vehicle relative to the first spatial area, and a data processing unit designed to carry out the method described above.
- The arrangement further comprises means for wireless communication with stationary devices for the transmission of position data and/or image data, and/or means for wireless communication with other vehicles for the exchange of image data; these are preferably designed to form a so-called ad-hoc network, which is characterized by flat hierarchical structures and in which foreign vehicles, for example approaching an intersection from different directions, communicate with each other in order to exchange information about areas of interest determined according to the invention.
- Using arrangements according to the invention together with stationary devices having precisely known position data, which communicate wirelessly with the arrangements for the purpose of transmitting position data and/or image data, driver assistance systems can be implemented that convey information to drivers even about complex traffic scenarios and/or cooperate with certain safety systems arranged in the vehicles, e.g. brake assist systems, in order to avoid impending collisions. This can be done, for example, by installing suitable transmitting/receiving means in stationary facilities such as traffic signal installations (traffic lights), which allow vehicles equipped with corresponding arrangements according to the invention that approach the stationary device to determine the vehicle location very quickly and accurately. If several vehicles approach the same stationary device, the means installed in the device can assume a guiding function in the exchange of information between the vehicles in order to bring about a data comparison or data supplementation regarding the spatial areas respectively considered by the vehicles.
- Cameras can also be installed in the stationary devices, supplying supplementary images of the traffic scenario from another perspective.
- The invention enables technical systems that warn a driver optically and/or acoustically and/or by movement, for example by shaking the steering wheel, when an object is in the driving trajectory of his vehicle. The invention also enables technical systems that initiate braking when an object is in the driving trajectory of a vehicle and the driver appears overwhelmed by the situation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE112012003630.1T DE112012003630B4 (de) | 2011-08-29 | 2012-08-29 | Verfahren, anordnung und fahrassistenzsystem zur ermittlung der räumlichen verteilung von objekten relativ zu einem fahrzeug |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102011053067 | 2011-08-29 | ||
DE102011053067.3 | 2011-08-29 | ||
DE102011057111 | 2011-12-28 | ||
DE102011057111.6 | 2011-12-28 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2013029606A2 true WO2013029606A2 (de) | 2013-03-07 |
WO2013029606A3 WO2013029606A3 (de) | 2013-04-25 |
Family
ID=47088604
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/DE2012/100255 WO2013029606A2 (de) | 2011-08-29 | 2012-08-29 | Verfahren, anordnung und fahrassistenzsystem zur ermittlung der räumlichen verteilung von objekten relativ zu einem fahrzeug |
Country Status (2)
Country | Link |
---|---|
DE (1) | DE112012003630B4 (de) |
WO (1) | WO2013029606A2 (de) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9286522B2 (en) | 2013-01-15 | 2016-03-15 | Mobileye Vision Technologies Ltd. | Stereo assist with rolling shutters |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000161915A (ja) * | 1998-11-26 | 2000-06-16 | Matsushita Electric Ind Co Ltd | 車両用単カメラ立体視システム |
US7446766B2 (en) * | 2005-02-08 | 2008-11-04 | Seegrid Corporation | Multidimensional evidence grids and system and methods for applying same |
US7786898B2 (en) * | 2006-05-31 | 2010-08-31 | Mobileye Technologies Ltd. | Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications |
DE102006055344A1 (de) * | 2006-11-23 | 2008-05-29 | Vdo Automotive Ag | Verfahren zur drahtlosen Kommunikation zwischen Fahrzeugen |
- 2012-08-29: DE national-phase application DE112012003630.1T (DE112012003630B4), not active, expired due to fee-related reasons
- 2012-08-29: PCT application PCT/DE2012/100255 (WO2013029606A2), active, application filing
Non-Patent Citations (1)
Title |
---|
HAMANO, T. ET AL.: "Direct Estimation of Structure From Non-linear Motion by Voting Algorithm Without Tracking and Matching", Pattern Recognition, vol. 1, 1992, pages 505-508
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9286522B2 (en) | 2013-01-15 | 2016-03-15 | Mobileye Vision Technologies Ltd. | Stereo assist with rolling shutters |
US9531966B2 (en) | 2013-01-15 | 2016-12-27 | Mobileye Vision Technologies Ltd. | Stereo assist with rolling shutters |
US9854185B2 (en) | 2013-01-15 | 2017-12-26 | Mobileye Vision Technologies Ltd. | Stereo assist with rolling shutters |
US10200638B2 (en) | 2013-01-15 | 2019-02-05 | Mobileye Vision Technologies Ltd. | Stereo assist with rolling shutters |
US10764517B2 (en) | 2013-01-15 | 2020-09-01 | Mobileye Vision Technologies Ltd. | Stereo assist with rolling shutters |
Also Published As
Publication number | Publication date |
---|---|
WO2013029606A3 (de) | 2013-04-25 |
DE112012003630A5 (de) | 2014-05-22 |
DE112012003630B4 (de) | 2020-10-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12778943 Country of ref document: EP Kind code of ref document: A2 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1120120036301 Country of ref document: DE Ref document number: 112012003630 Country of ref document: DE |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: R225 Ref document number: 112012003630 Country of ref document: DE Effective date: 20140522 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12778943 Country of ref document: EP Kind code of ref document: A2 |