EP3685303A1 - Verfahren zur erkennung einer objektinstanz und/oder orientierung eines objekts - Google Patents
Verfahren zur Erkennung einer Objektinstanz und/oder Orientierung eines Objekts (method for detecting an object instance and/or orientation of an object)
- Publication number
- EP3685303A1 (application EP18759883.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- orientation
- samples
- sample
- loss function
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Definitions
- the invention relates to a method for detecting an object instance and determining the orientation of already localized objects in noisy environments.
- Object instance recognition and 3D orientation estimation are well-known problems in the field of computer vision.
- Current methods often have problems with clutter and occlusion. They are also sensitive to background and lighting changes.
- The most commonly used orientation estimators use a single classifier per object, so the complexity grows linearly with the number of objects. For industrial purposes, however, scalable methods are required.
- the method presented herein is closely related and can be viewed as a representative of 3D retrieval methods.
- The queries are taken out of the context of the real scene and are therefore free of clutter and occlusions.
- Usually, such methods do not determine the orientation, posture or pose of the object, which, however, is essential for further applications such as grasping in robotics.
- Known 3D retrieval methods aim to detect only the object class and not the instance of the object, which limits their usefulness for object instance recognition data sets. Since the approach presented here is based on various approaches to manifold learning, most of the relevant work in that area is also considered.
- 3D retrieval methods are mainly divided into two classes: model-based and view-based. Model-based methods work directly with 3D models and try to represent them through different types of features.
- The method presented herein falls into the group of view-based methods, but outputs a specific instance of the object instead of an object class. Moreover, a certain robustness to background clutter is required because real scenes are used.
- Manifold learning is an approach to nonlinear dimensionality reduction, motivated by the idea that high-dimensional data, such as images, can be represented in a space of lower dimension.
- This concept using CNNs is well studied in [7].
- A so-called Siamese network is used, which takes two inputs instead of one, together with a cost function.
- The cost function is defined such that, for similar samples, the squared Euclidean distance between them is minimized, while for dissimilar samples a hinge loss function is applied, which forces the objects apart by means of a difference term. In the cited article, this concept is applied to orientation estimation.
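- As an illustration of the cost function described above, the following is a minimal sketch of a contrastive (Siamese) loss; the margin value, the variable names and the squared hinge term are assumptions for illustration, not details taken from the cited article:

```python
import torch

def contrastive_loss(f_a, f_b, is_similar, margin=1.0):
    """Siamese cost: squared Euclidean distance for similar pairs,
    hinge term that pushes dissimilar pairs at least `margin` apart."""
    dist_sq = torch.sum((f_a - f_b) ** 2, dim=1)   # squared descriptor distance
    dist = torch.sqrt(dist_sq + 1e-12)
    similar_term = is_similar * dist_sq
    dissimilar_term = (1.0 - is_similar) * torch.clamp(margin - dist, min=0.0) ** 2
    return torch.mean(similar_term + dissimilar_term)
```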
- Hashing has been proposed, in which an object derived from a single modality or from several modalities, such as text and images, is mapped into a different space, in which similar objects are represented as close together as possible and dissimilar objects as far apart as possible.
- The latest manifold learning approaches use the recently introduced triplet networks, which surpass Siamese networks in generating well separated manifolds [9].
- a triplet network takes three images as input (instead of two in the case of the Siamese network), with two images of the same class and the third of a different class.
- The cost function attempts to map the output descriptors of images of the same class closer to each other than those of a different class. This enables faster and more robust manifold learning, since both positive and negative examples are taken into account within a single pass.
- The loss function imposes two constraints: the Euclidean distance between views of dissimilar objects is large, whereas the distance between views of objects of the same class reflects the relative distance of their orientations. The method thus learns to embed the object views into a descriptor space of lower dimension. Object instance recognition is then solved by applying an efficient and scalable nearest-neighbor search to the descriptor space in order to find the nearest neighbors.
- In addition to the orientation of the object, the method also outputs its identity and thus solves two separate problems at the same time, further increasing the value of this method.
- The approach of [10] adds a classification term to the triplet loss and learns the embedding of the input image space into a discriminative feature space. This approach is tailored to the task of object class retrieval and trains only on real images, not on rendered 3D object models.
- The problem is solved by the subject matter of the independent claim.
- Preferred embodiments of the invention are the subject of the dependent claims.
- the invention provides a method for detecting an object instance and determining an orientation of (already) localized objects in noisy environments by means of an artificial neural network or CNN, with the following steps:
- A triplet is formed from three samples such that a first and a second sample originate from the same object under similar orientations, and a third sample is selected such that it originates from a different object than the first sample or, if it originates from the same object as the first sample, has an orientation dissimilar to that of the first sample.
- The loss function has a triplet loss function of the following form: $L_{triplets} = \sum_{(s_i, s_j, s_k) \in T} \max\left(0,\ 1 - \frac{\lVert f(x_i) - f(x_k) \rVert_2^2}{\lVert f(x_i) - f(x_j) \rVert_2^2 + m}\right)$, where x is the image of the respective sample, f(x) is the output of the artificial neural network, and m is the dynamic margin. It is preferred that a pair is formed from two samples such that the two samples originate from the same object and have a similar or identical orientation, the two samples having been obtained under different image acquisition conditions.
- The recording of the object is carried out such that several recordings are made from at least one viewpoint, the camera being rotated about its recording axis in order to obtain several samples with rotation information, for example in the form of quaternions.
- the similarity of the orientation between two samples is determined by means of a similarity metric, wherein the dynamic margin is determined as a function of the similarity.
- An efficient nearest-neighbor search method can be applied to the descriptor space.
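- A minimal sketch of how such a nearest-neighbor search over the descriptor space could look, assuming a scikit-learn k-d tree and placeholder descriptors (the library choice and the 32-dimensional descriptor size are assumptions for illustration):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder template descriptors: CNN outputs for the samples of the template
# set S_db, stored together with their object identities (and orientations).
db_descriptors = np.random.rand(1000, 32).astype(np.float32)
db_object_ids = np.random.randint(0, 50, size=1000)

index = NearestNeighbors(n_neighbors=1, algorithm="kd_tree").fit(db_descriptors)

def lookup(test_descriptor):
    """Return the identity and the S_db index (from which the orientation
    follows) of the nearest template descriptor for one test sample."""
    _, idx = index.kneighbors(test_descriptor.reshape(1, -1))
    nearest = idx[0, 0]
    return db_object_ids[nearest], nearest
```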
- the introduction of a dynamic margin allows faster training times and better accuracy of the resulting low-dimensional manifolds.
- The data sets used contain the following data: 3D mesh models of a plurality of objects 10 and/or RGB-D images.
- Three sets are generated: a training set S_train, a template set S_db and a test set S_test.
- The training set S_train is used exclusively for training the CNN.
- The test set S_test is only used in the test phase for evaluation.
- The template set S_db is used both in the training phase and in the test phase.
- Each of these sets S_train, S_db, S_test comprises a plurality of samples 16.
- To prepare the data, the samples 16 for the sets S_train, S_db, S_test are generated.
- The sets S_train, S_db, S_test are generated from two kinds of input data 18: real images 20 and synthetic images 22.
- The real images 20 represent the objects 10 in real-world environments 14 and are generated with a commercially available RGB-D sensor, such as Kinect or Primesense.
- The real images 20 can be provided with the data sets.
- the synthetic images 22 are initially unavailable and are generated by rendering textured 3D mesh models.
- each triangle is recursively divided into four triangles.
- A coarse sampling, which is shown on the left in FIG. 1, can be achieved by two subdivisions of the icosahedron, and/or a fine sampling, which is shown on the right in FIG. 1, can be achieved by three successive subdivisions.
- The coarse sampling is used to generate the template set S_db, while in particular the fine sampling is used for the training set S_train.
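- A minimal sketch of such recursive icosahedron subdivision for viewpoint sampling; the vertex and face tables below are the standard icosphere construction and are assumptions for illustration (on the full sphere, two subdivisions yield 162 viewpoints and three yield 642):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def subdivide(vertices, faces):
    """Split every triangle into four by inserting edge midpoints
    projected onto the unit sphere."""
    vertices = list(vertices)
    midpoint_cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            midpoint_cache[key] = len(vertices)
            vertices.append(normalize((vertices[i] + vertices[j]) / 2.0))
        return midpoint_cache[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return vertices, new_faces

def sample_viewpoints(n_subdivisions):
    """Return unit vectors (one per row) usable as camera viewpoints."""
    t = (1.0 + 5 ** 0.5) / 2.0
    vertices = [normalize(np.array(v, dtype=float)) for v in [
        (-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
        (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
        (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    for _ in range(n_subdivisions):
        vertices, faces = subdivide(vertices, faces)
    return np.array(vertices)

coarse_viewpoints = sample_viewpoints(2)   # coarse sampling, e.g. for S_db
fine_viewpoints = sample_viewpoints(3)     # fine sampling, e.g. for S_train
```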
- From these viewpoints, samples 16 can be generated.
- A small region 32 is extracted which covers the object 10 and is centered around the object 10. This is achieved, for example, by virtually placing a cube 34 that is centered in particular at the centroid 36 of the object 10 and has, for example, a dimension of 40 cm³.
- The regions 32 are preferably normalized.
- The RGB channels are preferably normalized to a mean of 0 and a standard deviation of 1.
- The depth channel is preferably mapped to the interval [-1; 1].
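- A minimal sketch of this normalization; the use of per-patch statistics and a per-patch min/max mapping of the depth channel are assumptions, as the text only fixes the target mean, standard deviation and interval:

```python
import numpy as np

def normalize_patch(rgb, depth):
    """Normalize an extracted region 32: RGB channels to zero mean and unit
    standard deviation, depth channel mapped to the interval [-1, 1]."""
    rgb = rgb.astype(np.float32)
    rgb = (rgb - rgb.mean(axis=(0, 1))) / (rgb.std(axis=(0, 1)) + 1e-8)
    depth = depth.astype(np.float32)
    d_min, d_max = depth.min(), depth.max()
    depth = 2.0 * (depth - d_min) / (d_max - d_min + 1e-8) - 1.0
    return rgb, depth
```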
- Each region x is stored in a sample 16 as an image 32, together with the identity of the object 10 and its orientation q.
- The samples 16 are preferably divided accordingly between the training set S_train, the template set S_db and the test set S_test.
- The template set S_db contains in particular only synthetic images 22, preferably based on the coarse sampling.
- The coarse sampling is preferably used both in the training phase (to form triplets 38) and in the test phase (as the database for the nearest-neighbor search).
- The samples 16 of the template set S_db define a search database on which the nearest-neighbor search is later performed.
- The training set S_train comprises a mixture of real images 20 and synthetic images 22.
- The synthetic images 22 represent samples 16 originating from the fine sampling. Preferably, about 50% of the real images 20 are added to the training set S_train. These 50% are selected by taking those real images 20 which, in terms of orientation, are close to the samples 16 of the template set S_db. The remaining real images 20 are stored in the test set S_test, which is used to estimate the performance of the method.
- Once the training set S_train and the template set S_db have been generated, there is sufficient data to train the CNN. Further, it is preferable to set an input format for the CNN that is defined by the loss function of the CNN. In the present case, the loss function is the sum of two separate loss terms: L = L_triplets + L_pairs.
- The first summand L_triplets is a loss term which is defined over a set T of triplets 38, wherein a triplet 38 is a group of samples 16 (s_i; s_j; s_k) such that s_i and s_j always originate from the same object 10 under a similar orientation, and s_k originates either from another object 10 or from the same object 10 but under a less similar orientation.
- A single triplet 38 thus includes a pair of similar samples (s_i, s_j) and a pair of dissimilar samples (s_i, s_k).
- The sample s_i is also referred to as the "anchor", the sample s_j as the positive sample or "puller", and the sample s_k as the negative sample or "pusher".
- The triplet loss component L_triplets has the following form: $L_{triplets} = \sum_{(s_i, s_j, s_k) \in T} \max\left(0,\ 1 - \frac{\lVert f(x_i) - f(x_k) \rVert_2^2}{\lVert f(x_i) - f(x_j) \rVert_2^2 + m}\right)$, where:
- x is the input image of a given sample
- f(x) is the output of the neural network for the input image x
- m is the margin
- N is the number of triplets 38 in the stack.
- The margin term introduces the margin for classification and sets the minimum ratio between the Euclidean distances of the similar and dissimilar pairs of samples 16.
- By means of L_triplets, two properties can be achieved: on the one hand, maximizing the Euclidean distance between descriptors of two different objects, and on the other hand, setting the Euclidean distance between descriptors of the same object 10 so that it is representative of the similarity of their orientations.
- The second summand L_pairs is a pairwise term. It is defined over a set P of sample pairs (s_i, s_j). Samples within a single pair come from the same object 10 under either a very similar orientation or the same orientation with different image acquisition conditions. Different image acquisition conditions include, but are not limited to, changes in illumination, different backgrounds and clutter. It is also conceivable that one sample originates from a real image 20 while the other comes from a synthetic image 22. The aim of this term is to map the two samples as close to each other as possible: $L_{pairs} = \sum_{(s_i, s_j) \in P} \lVert f(x_i) - f(x_j) \rVert_2^2$.
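- The two loss terms could be sketched as follows; this is a non-authoritative illustration in which batch averaging and the tensor shapes (batch, descriptor_dim) are assumptions, while the placement of the dynamic margin m in the denominator follows the formula given above:

```python
import torch

def triplet_term(f_anchor, f_puller, f_pusher, margin):
    """L_triplets with a per-triplet dynamic margin `margin` (one value per triplet)."""
    d_pos = torch.sum((f_anchor - f_puller) ** 2, dim=1)   # similar pair distance
    d_neg = torch.sum((f_anchor - f_pusher) ** 2, dim=1)   # dissimilar pair distance
    return torch.mean(torch.clamp(1.0 - d_neg / (d_pos + margin), min=0.0))

def pair_term(f_a, f_b):
    """L_pairs: pull descriptors of the same object under (nearly) the same
    orientation but different imaging conditions together."""
    return torch.mean(torch.sum((f_a - f_b) ** 2, dim=1))

# total loss as the sum of the two separate terms
# loss = triplet_term(f_s_i, f_s_j, f_s_k, m) + pair_term(f_s_i, f_s_j)
```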
- the CNN learns to treat the same object equally under different image capturing conditions by mapping the objects 10 to substantially the same point.
- The minimization can ensure that samples with similar orientations are placed close to each other in the descriptor space, which in turn is an important criterion for the triplet term L_triplets.
- At each viewpoint 24, the field of view of the camera is rotated about the recording axis 42 and samples are taken at fixed angular steps.
- Seven samples 40 are generated per vertex 26, in the range between -45° and +45° with a step angle of 15°.
- The rotations q of the objects 10 or of the models are represented by quaternions, the angle between the quaternions of the compared samples serving as the orientation comparison metric.
- If both samples originate from the same object 10, the margin is set to the angular distance between these samples.
- If the samples originate from different objects 10, the margin is set to a constant value that is greater than the maximum possible angular difference.
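- A sketch of how the dynamic margin could be derived from the quaternion metric described above; the constant for the different-object case and the representation of samples as dictionaries are assumptions for illustration:

```python
import numpy as np

def quaternion_angle(q1, q2):
    """Angular distance between two unit quaternions, in radians (0..pi)."""
    dot = abs(float(np.clip(np.dot(q1, q2), -1.0, 1.0)))
    return 2.0 * np.arccos(dot)

def dynamic_margin(anchor, pusher, constant=2 * np.pi):
    """Margin m for a triplet: angular distance if anchor and pusher show the
    same object, otherwise a constant above the largest possible angle (pi)."""
    if anchor["object_id"] == pusher["object_id"]:
        return quaternion_angle(anchor["quaternion"], pusher["quaternion"])
    return constant
```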
- Surface normals can preferably be used as a further modality representing an image of the object 10, in addition to the RGB and depth channels already considered.
- A surface normal at the point p is defined as a 3D vector that is orthogonal to the tangent plane of the model surface at the point p.
- The surface normals provide a powerful representation that describes the curvature of the object model.
- surface normals are preferably generated based on the depth map images, so that no further sensor data is required.
- the method known from [11] may be used to obtain a fast and robust estimate. With this refinement, a smoothing of the surface noise can take place and therefore also a better estimation of the surface normal in the vicinity of depth discontinuities.
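- The following sketch derives approximate surface normals from a depth map with simple finite differences; it is not the estimator of [11], merely an illustration under a pinhole-camera assumption with focal lengths fx, fy:

```python
import numpy as np

def normals_from_depth(depth, fx, fy):
    """Approximate per-pixel surface normals from a depth map (in metric units)."""
    dz_dv, dz_du = np.gradient(depth)          # derivatives along rows / columns
    z = np.maximum(depth, 1e-6)
    # For a pinhole camera, dz/dx ~ dz_du * fx / z and dz/dy ~ dz_dv * fy / z.
    nx = -dz_du * fx / z
    ny = -dz_dv * fy / z
    nz = np.ones_like(depth)
    n = np.stack((nx, ny, nz), axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    return n
```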
- One approach is to use real images 20 for training. If no or only few real images 20 are available, the CNN must otherwise be taught to ignore and/or simulate backgrounds.
- In the present case, a noise type is selected from a group containing at least: white noise, random shapes, gradient noise and real backgrounds.
- For white noise, a floating-point number between 0 and 1 is generated from a uniform distribution for each pixel and added to it. In the case of RGB, this process is repeated for each color channel, i.e. a total of three times.
- The idea is to represent neighboring objects so that they have similar depth and color values.
- The color of these objects is again sampled from a uniform distribution between 0 and 1, while the position is sampled from a uniform distribution between 0 and the width of the sample image.
- This approach can also be used to represent foreground occlusions by placing random shapes over the actual model.
- the third type of noise is fractal noise, which is often used in computer graphics for texture or landscape generation.
- The fractal noise can be generated as described in [12]. It produces a smoothly varying sequence of pseudo-random values and avoids drastic changes in intensity, as occur with white noise. Overall, this is closer to a real scenario.
- RGB-D images of real backgrounds are used in a similar manner as in [13]. From a real image 20, a region 32 of the required size is sampled and used as the background for a synthetically generated model. This variant is particularly useful if it is known in advance in which environments the objects will be arranged.
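- The white-noise, random-shape and real-background fillings could be sketched as follows (gradient/fractal noise is omitted here; image values are assumed to be floats in [0, 1], and the helper names and parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_white_noise(patch):
    """Add a uniform random number in [0, 1) to every pixel (per channel for RGB)."""
    return patch + rng.uniform(0.0, 1.0, size=patch.shape)

def add_random_shapes(patch, n_shapes=5):
    """Paint rectangles of random colour at random positions, imitating
    neighbouring objects (or, placed over the model, foreground occluders)."""
    h, w = patch.shape[:2]
    out = patch.copy()
    for _ in range(n_shapes):
        y0, x0 = rng.integers(0, h), rng.integers(0, w)
        y1, x1 = min(h, y0 + rng.integers(5, h)), min(w, x0 + rng.integers(5, w))
        out[y0:y1, x0:x1] = rng.uniform(0.0, 1.0, size=patch.shape[2:])
    return out

def paste_real_background(patch, object_mask, real_image):
    """Crop a region of the required size from a real image 20 and use it as
    the background behind the rendered object (object_mask marks object pixels)."""
    h, w = patch.shape[:2]
    y = rng.integers(0, real_image.shape[0] - h + 1)
    x = rng.integers(0, real_image.shape[1] - w + 1)
    crop = real_image[y:y + h, x:x + w].copy()
    crop[object_mask] = patch[object_mask]
    return crop
```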
- A disadvantage of the baseline method is that the stacks (batches) are created and stored before execution. This means that the same backgrounds are used again and again in each epoch, which limits variability. It is therefore suggested to create the stacks online: at each iteration, the background of the selected positive sample is filled with one of the available noise types.
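- A sketch of the proposed online stack creation, with hypothetical helper names; `background_fillers` would contain functions such as the ones above, each taking a rendered sample and returning it with a freshly generated background:

```python
import numpy as np

rng = np.random.default_rng()

def make_stack_online(positive_samples, background_fillers, stack_size=32):
    """Build one training stack on the fly: every selected positive sample gets
    its background filled with a randomly chosen noise type, so backgrounds
    change from epoch to epoch instead of being fixed in advance."""
    stack = []
    for _ in range(stack_size):
        sample = positive_samples[rng.integers(len(positive_samples))]
        fill = background_fillers[rng.integers(len(background_fillers))]
        stack.append(fill(sample))
    return stack
```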
- FIG. 8 compares the classification rate and the average angular error for correctly classified samples over a number of training epochs (one pass over the training set S_train) for both implementations, i.e. the CNNs that have a static margin (SM) and a dynamic margin (DM) loss function, respectively.
- SM: static margin; DM: dynamic margin
- FIG. 9 shows the test samples mapped by the descriptor network (CNN) that was trained with the old loss function (left) and with the new loss function (right).
- The difference in the degree of separation of the objects is clear: in the right figure, the objects are well separated and keep at least the minimum margin distance, which results in a perfect classification score; the left figure shows still recognizable object structures, which are, however, placed close to each other and partially overlap, causing classification confusion that was quantitatively estimated in FIG. 8.
- FIG. 10 shows the same diagrams as FIG. 8, but for a descriptor space with a higher dimension, for example 32D. This results in a significant jump in quality for both embodiments.
- the method according to the invention learns the classification much faster and allows the same angular accuracy for a larger number of correctly classified test samples.
- FIG. 11 shows the classification and orientation accuracies for the different types of noise.
- White noise shows the worst overall results with only 26% classification accuracy. Since about 10% accuracy is achieved even when test samples are assigned randomly from a uniform distribution, this is not a big improvement.
- This test (FIG. 12) shows the effect of the newly introduced surface normal channel.
- three input image channels are used, namely
- The regions 32, which are ultimately represented by the above-mentioned channels, are preferably used for training.
- FIG. 12 shows the classification rate and orientation error diagrams for three differently trained networks: depth (d), normals (nor), and depth and normals (north). It can be seen that the CNN trained only with surface normals performs better than the CNN trained with depth maps. The surface normals are generated entirely on the basis of the depth maps; no additional sensor data is needed. In addition, the result is even better if depth maps and surface normals are used simultaneously.
- The goal of the test on large data sets is to determine how well the method generalizes to a larger number of models.
- Table III shows a histogram of classified test samples for various tolerated angular errors. As can be seen, for 50 models, each represented by about 300 test samples, the result is a classification accuracy of 98.7% and a very good angular accuracy. The method therefore scales in a way that makes it suitable for industrial applications.
- The method described herein offers improved learning speed, robustness to disturbances, and versatility for industrial use.
- A new loss function with dynamic margin allows for faster CNN learning and greater classification accuracy.
- The method uses in-plane rotations and new types of background noise.
- Surface normals can be used as another powerful image modality. Also, an efficient method for creating stacks was presented that allows greater variability in training.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102017216821.8A DE102017216821A1 (de) | 2017-09-22 | 2017-09-22 | Verfahren zur Erkennung einer Objektinstanz und/oder Orientierung eines Objekts |
PCT/EP2018/072085 WO2019057402A1 (de) | 2017-09-22 | 2018-08-15 | Verfahren zur erkennung einer objektinstanz und/oder orientierung eines objekts |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3685303A1 true EP3685303A1 (de) | 2020-07-29 |
Family
ID=63405177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18759883.4A Withdrawn EP3685303A1 (de) | 2017-09-22 | 2018-08-15 | Verfahren zur erkennung einer objektinstanz und/oder orientierung eines objekts |
Country Status (5)
Country | Link |
---|---|
US (1) | US20200211220A1 (de) |
EP (1) | EP3685303A1 (de) |
CN (1) | CN111149108A (de) |
DE (1) | DE102017216821A1 (de) |
WO (1) | WO2019057402A1 (de) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102419011B1 (ko) * | 2018-04-06 | 2022-07-07 | 지멘스 악티엔게젤샤프트 | 종래의 cad 모델들을 사용한 이미지들로부터 객체 인식 |
CN110084161B (zh) * | 2019-04-17 | 2023-04-18 | 中山大学 | 一种人体骨骼关键点的快速检测方法及系统 |
US11467668B2 (en) * | 2019-10-21 | 2022-10-11 | Neosensory, Inc. | System and method for representing virtual object information with haptic stimulation |
US11416065B1 (en) * | 2019-11-08 | 2022-08-16 | Meta Platforms Technologies, Llc | Synthesizing haptic and sonic feedback for textured materials in interactive virtual environments |
CN111179440B (zh) * | 2020-01-02 | 2023-04-14 | 哈尔滨工业大学 | 一种面向自然场景的三维物体模型检索方法 |
US11875264B2 (en) * | 2020-01-15 | 2024-01-16 | R4N63R Capital Llc | Almost unsupervised cycle and action detection |
CN112950414B (zh) * | 2021-02-25 | 2023-04-18 | 华东师范大学 | 一种基于解耦法律要素的法律文本表示方法 |
US20220335679A1 (en) * | 2021-04-15 | 2022-10-20 | The Boeing Company | Computing device and method for generating realistic synthetic image data |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3961525B2 (ja) * | 2004-09-22 | 2007-08-22 | 株式会社コナミデジタルエンタテインメント | 画像処理装置、画像処理方法、ならびに、プログラム |
US8639038B2 (en) * | 2010-06-18 | 2014-01-28 | National Ict Australia Limited | Descriptor of a hyperspectral or multispectral image |
EP3171297A1 (de) * | 2015-11-18 | 2017-05-24 | CentraleSupélec | Bildsegmentierung mit gemeinsamer randerkennung und objekterkennung mittels tiefen lernens |
WO2017156243A1 (en) * | 2016-03-11 | 2017-09-14 | Siemens Aktiengesellschaft | Deep-learning based feature mining for 2.5d sensing image search |
-
2017
- 2017-09-22 DE DE102017216821.8A patent/DE102017216821A1/de not_active Withdrawn
-
2018
- 2018-08-15 EP EP18759883.4A patent/EP3685303A1/de not_active Withdrawn
- 2018-08-15 US US16/646,456 patent/US20200211220A1/en not_active Abandoned
- 2018-08-15 CN CN201880060873.8A patent/CN111149108A/zh active Pending
- 2018-08-15 WO PCT/EP2018/072085 patent/WO2019057402A1/de unknown
Also Published As
Publication number | Publication date |
---|---|
US20200211220A1 (en) | 2020-07-02 |
WO2019057402A1 (de) | 2019-03-28 |
CN111149108A (zh) | 2020-05-12 |
DE102017216821A1 (de) | 2019-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019057402A1 (de) | Verfahren zur erkennung einer objektinstanz und/oder orientierung eines objekts | |
DE112012005350B4 (de) | Verfahren zum Schätzen der Stellung eines Objekts | |
DE60316690T2 (de) | System zur bildvergleichung mit verwendung eines dreidimensionalen objectmodelles, bildvergleichverfahren und bildvergleichprogramm | |
EP2584493B1 (de) | Verfahren zur Unterscheidung zwischen einem realen Gesicht und einer zweidimensionalen Abbildung des Gesichts in einem biometrischen Erfassungsprozess | |
DE112016004535T5 (de) | Universelles Übereinstimmungsnetz | |
DE112018000298T5 (de) | System und verfahren zur poseninvarianten gesichtsausrichtung | |
DE102006041645A1 (de) | Verfahren und Vorrichtung zur Orientierungsbestimmung in einem Bild | |
DE60126040T2 (de) | Erkennung von Gegenständen mit Verwendung linearer Unterräume | |
DE102017220307A1 (de) | Vorrichtung und Verfahren zum Erkennen von Verkehrszeichen | |
DE10043460A1 (de) | Auffinden von Körperpartien durch Auswerten von Kantenrichtungsinformation | |
DE102015200260A1 (de) | Verfahren zum Erstellen eines Deskriptors für ein Szenenbild | |
DE112014006911T5 (de) | Verfahren und System zum Scannen eines Objekts unter Verwendung eines RGB-D-Sensors | |
DE112010002677T5 (de) | Verfahren und vorrichtung zum bestimmen einer formübereinstimmung in drei dimensionen | |
EP0844590A1 (de) | Verfahren zur fraktalen Bildcodierung und Anordnung zur Durchführung des Verfahrens | |
EP3511904B1 (de) | Verfahren zum bestimmen einer pose eines objekts in einer umgebung des objekts mittels multi-task-lernens, sowie steuerungsvorrichtung | |
WO2013037357A1 (de) | Maschinelles lernverfahren zum maschinellen erlernen von erscheinungsformen von objekten in bildern | |
DE102006044595B4 (de) | Bildverarbeitungsvorrichtung zur Segmentierung anhand von Konturpunkten | |
WO2020078615A1 (de) | Verfahren und vorrichtung zur bestimmung einer umgebungskarte | |
DE102020211636A1 (de) | Verfahren und Vorrichtung zum Bereitstellen von Daten zum Erstellen einer digitalen Karte | |
EP1098268A2 (de) | Verfahren zur dreidimensionalen optischen Vermessung von Objektoberflächen | |
DE10297595T5 (de) | Verfahren zum automatischen Definieren eines Teilemodells | |
DE102006036345A1 (de) | Verfahren zur Lagebestimmung von Objekten im dreidimensionalen Raum | |
WO2000003311A2 (de) | Verfahren und anordnung zur ermittlung eines ähnlichkeitsmasses einer ersten struktur mit mindestens einer vorgegebenen zweiten struktur | |
DE102004007049A1 (de) | Verfahren zur Klassifizierung eines Objekts mit einer Stereokamera | |
DE10361838B3 (de) | Verfahren zur Bewertung von Ähnlichkeiten realer Objekte |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20200218 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20220302 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20220713 |