WO2023285128A1 - Method for controlling a target system and in particular for capturing at least one usage object, in particular a capture controller - Google Patents

Method for controlling a target system and in particular for capturing at least one usage object, in particular a capture controller

Info

Publication number
WO2023285128A1
WO2023285128A1 (PCT/EP2022/067629)
Authority
WO
WIPO (PCT)
Prior art keywords
processing unit
database
usage
data
image
Prior art date
Application number
PCT/EP2022/067629
Other languages
German (de)
English (en)
Inventor
Markus Garcia
Thomas Zellweger
Original Assignee
Markus Garcia
Thomas Zellweger
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DE102022002203.6A (DE102022002203A1)
Application filed by Markus Garcia, Thomas Zellweger
Publication of WO2023285128A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes

Definitions

  • Method for controlling a target system and in particular for detecting at least one object of use, in particular a detection controller
  • the present application relates to a device for physically, in particular optically, detecting at least one object of use, and to a corresponding method.
  • the present application relates to a method for controlling a target system and in particular for detecting at least one object of use, in particular a detection controller.
  • the device presented here for the physical, in particular optical, detection of at least one object of use comprises at least one processing unit, by means of which an object of use and/or an identification means uniquely, preferably one-to-one, assigned to the object of use can be physically detected, from which at least one characteristic value of the object of use can be obtained. The processing unit and/or a CPU is set up and provided for carrying out a classification of the object of use insofar as a characteristic value of the object of use is comparable with at least one entry in a database of the processing unit and/or in a database of an external CPU, and the processing unit and/or the CPU and/or the user selects a database object corresponding to the characteristic value and displays it on a screen of the processing unit, so that a camera image of the usage object together with the database object can be at least partially optically superimposed on the screen and/or displayed side by side.
  • the processing unit and/or the CPU can be used to carry out at least one physical capture process, in particular at least one photograph, of the object of use based on the database object displayed on the screen. The user captures the object of use in such a way that the captured image of the usage object is displayed identically, or identically to scale, to the database object shown on the screen at the same time; as a result of the detection process, the usage object can be assigned by the processing unit and/or the CPU and/or the user to at least one usage object class, for example a vehicle type.
  • the object of use can generally be an object that is used or is to be used, or that embodies a use; in particular, it can be a three-dimensional object.
  • the term "use" in the sense of the application means any handling with regard to a purpose.
  • the processing unit can be a computing and/or memory unit.
  • the processing unit is also set up and provided for taking photographs.
  • the processing unit can have at least one camera or at least part of a camera.
  • the processing unit is designed to be portable, which means that it is not permanently connected to a floor or to a handling device connected to the floor.
  • “portable” can mean, in particular, that the processing unit has such dimensions and such a weight that it is set up and intended to be held manually, in particular with one hand.
  • the processing unit can also be detachably or permanently connected to a floor on which the object of use is placed. Such a connection can be established via the handling device described above.
  • the processing unit can be guided along guide paths, for example along at least one rail or rail system relative to the object of use.
  • the characteristic value can be a real number greater than 0, but it is also conceivable for the characteristic value to be composed of different partial characteristic values.
  • a usage object can therefore have, for example, a partial characteristic value with regard to an exterior color, a further partial characteristic value with regard to maximum dimensions in height, width and/or depth, and a further partial characteristic value with regard to weight.
  • a characteristic value can therefore be formed by combining these three partial characteristic values.
  • a combination can be in the form of a sum or a fraction.
  • the characteristic value is preferably determined in the form of a sum of the aforementioned partial characteristic values.
  • the individual partial characteristic values can also be included in the summation with different weightings. For this purpose it is conceivable that the first partial characteristic value is weighted with a first weighting factor, the second partial characteristic value with a second weighting factor and the third partial characteristic value with a third weighting factor, according to the formula:
  • K = G1 * K1 + G2 * K2 + G3 * K3, where the values K1 to K3 represent the respective partial characteristic values and the factors G1 to G3 (which are real, non-negative numbers and, depending on the selection, can also be zero) designate the respective weighting factors of the partial characteristic values.
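As a minimal illustration of this weighted summation (the function name and all numeric values are hypothetical, not from the application):

```python
# Sketch of the characteristic value K = G1*K1 + G2*K2 + G3*K3 from the text.
# The weights and partial values below are made-up placeholders.

def characteristic_value(partials, weights):
    """Combine partial characteristic values Ki with weighting factors Gi."""
    if len(partials) != len(weights):
        raise ValueError("need one weighting factor per partial value")
    if any(g < 0 for g in weights):
        raise ValueError("weighting factors are real, non-negative numbers")
    return sum(g * k for g, k in zip(weights, partials))

# Hypothetical partials: exterior-color code, dimension score, weight score
K1, K2, K3 = 12.0, 4.35, 1.8
G1, G2, G3 = 0.5, 0.3, 0.2   # a weight of zero simply drops a partial value
print(characteristic_value([K1, K2, K3], [G1, G2, G3]))  # 7.665
```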
  • the usage object classification presented here can be a purely visual comparison between the usage object recorded with the above-mentioned camera and a usage object template stored in a database.
  • in this way the usage object is divided into individual object classes or can be categorized in data-processing terms.
  • instead of, or in addition to, an (analogue) visual comparison between the recorded object of use and the object of use stored in the database, a technical comparison can then be made.
  • for this purpose the object of use is broken down into individual data, for example data classes, by a conversion unit; these are then compared, individually or together, with data or data classes correspondingly stored in the database.
  • the database object can accordingly be a template image of the corresponding usage object stored in the database.
  • the processing unit is set up and provided for a detection process to be carried out, for example by a user and/or an implementation device, so that the object of use is detected in such a way that an image of the object of use captured by the detection process is identical, or identically scaled, to the database object displayed on the screen at the same time.
  • identity here means the closest approximation to a correspondingly stored object in the database, based on the characteristic value and/or the optical dimensions of the object of use. This can mean that the object of use recorded by the processing unit does not correspond in all dimensions and in its wear to the object identified in the database according to the characteristic value; instead, the closest match is produced on the basis of predefined minimum dimensions.
  • the object of use is a vehicle, for example a BMW 3 Series.
  • the object of use itself can, for example, have a spoiler and/or be lowered. If a corresponding object of use with an additional spoiler and a lowered version is not also stored in the database, and the database only contains a basic model of a BMW 3 Series, the processing unit and/or the database and/or the external CPU can still select the basic 3 Series model as the closest match to the object of use, for example because the characteristic values are identical, for instance on the basis of a vehicle sticker; a sketch of such a closest-match lookup follows below.
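Illustration only: the closest-match selection could look like the following sketch, where the class labels, characteristic values and tolerance are assumptions:

```python
# Hedged sketch of "closest match": choose the database object whose stored
# characteristic value lies nearest to the captured one, within a tolerance.

from dataclasses import dataclass

@dataclass
class DatabaseObject:
    label: str                  # usage object class, e.g. "BMW 3 Series (base)"
    characteristic_value: float

def closest_match(captured_value, database, tolerance):
    """Return the nearest database object, or None if even the best candidate
    deviates from the captured characteristic value by more than the tolerance."""
    best = min(database, key=lambda o: abs(o.characteristic_value - captured_value))
    if abs(best.characteristic_value - captured_value) > tolerance:
        return None
    return best

db = [DatabaseObject("BMW 3 Series (base)", 7.67),
      DatabaseObject("VW Polo 1985", 5.10)]
print(closest_match(7.665, db, tolerance=0.5))  # -> the BMW 3 Series entry
```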
  • the vehicle type can be, for example, a BMW 3 Series or any other vehicle permitted on German roads or in the international road system.
  • the physical detection process includes at least one temporal detection sequence, where at least two different recordings of the object of use are carried out during the recording sequence, with each recording being assigned to at least one database object.
  • the detection sequence comprises instructions to an implementation device and/or to a user to photograph the object of use from different angles and different distances, with different color contrasts or the like, in order to simplify identification with an object of use stored in the database.
  • the processing unit is set up and provided for traveling along this temporal detection sequence.
  • the detection sequence can therefore contain precise instructions to the user and/or an implementation device with regard to location, recording brightness or the like, so that the processing unit, which preferably includes an optical camera, optically scans the object of use along predetermined points.
  • orientation points can be marking elements which the camera of the processing unit can record particularly easily.
  • marking elements are, for example, barcodes and/or NFC chips.
  • marking elements can therefore also be passive components. However, it is also conceivable for such marking elements to be detachably applied to the object of use, for example glued on. Such marking elements can have their own energy supply, for example a battery supply. Battery-equipped marking elements of this kind can emit electromagnetic radiation in the optically visible or in the invisible range, for example the infrared or microwave range, which can be detected by a locating element of the processing unit; as a result, the processing unit is able to determine its position relative to the object of use.
  • the marking elements can also be virtual marking elements, which are loaded from the database and which, like the object of use itself, can be displayed on the screen as an image, for example as a third image together with a photograph of the object of use and the appearance of the usage object loaded from the database. They can therefore be stored in the database of the processing unit and/or of the external CPU just like the database objects (which depict the usage objects virtually and are stored in the database) and other database objects.
  • both the usage object and the further database object (at least one marking element) can be loaded together into the processing unit and/or displayed on the screen of the processing unit with one and the same characteristic value.
  • the chronologically sequential recording instructions specify a detection distance and/or a detection angle relative to the object of use for the user.
  • the recording instructions therefore provide a precise schedule, preferably in a fully automated manner, with regard to the recording of the individual images, in particular photographs of the objects of use.
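For illustration, such a chronological recording schedule could be represented as a simple data structure; every step value below is a hypothetical placeholder:

```python
# Sketch of a temporal detection sequence: each step prescribes when to shoot
# and at which distance, angle and brightness. Values are made-up placeholders.

from dataclasses import dataclass

@dataclass
class CaptureStep:
    t_offset_s: float   # time offset from the start of the sequence
    distance_m: float   # detection distance to the usage object
    angle_deg: float    # detection angle relative to the usage object
    brightness: float   # target recording brightness (0..1)

detection_sequence = [
    CaptureStep(0.0,  3.0,  0.0, 0.6),    # front view
    CaptureStep(5.0,  3.0, 45.0, 0.6),    # front-left three-quarter view
    CaptureStep(10.0, 2.0, 90.0, 0.7),    # side view, closer in
]

for step in detection_sequence:
    print(f"t+{step.t_offset_s:>4}s: shoot at {step.distance_m} m, "
          f"{step.angle_deg}°, brightness {step.brightness}")
```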
  • AI (also A.I., from the English "artificial intelligence")
  • AI is a sub-area of computer science that deals with the automation of intelligent behavior and machine learning.
  • the term is difficult to define because there is already a lack of a precise definition of "intelligence". Nevertheless, it is used in research and development.
  • Artificial intelligence usually describes the attempt to emulate certain human decision-making structures, e.g. a computer is built and programmed in such a way that it can work on problems relatively independently. Often, however, it also refers to imitated intelligence, whereby "intelligent behavior" is simulated using mostly simple algorithms, for example for computer opponents in computer games.
  • Strong AI would be computer systems that can take on the work of completing difficult tasks on an equal footing with humans.
  • weak AI is about mastering specific application problems. Human thinking and technical applications are to be supported here in individual areas. The ability to learn is a key requirement for AI systems and must be an integral part, not an afterthought.
  • a second main criterion is the ability of an AI system to deal with uncertainty and probabilistic information. Of particular interest are those applications for which it is generally understood that a form of “intelligence” is necessary.
  • weak AI is concerned with the simulation of intelligent behavior using mathematics and computer science; it is not concerned with creating consciousness or a deeper understanding of intelligence.
  • Visual intelligence makes it possible to recognize and analyze images or shapes. Handwriting recognition, identification of people through face recognition, comparison of fingerprints or the iris, industrial quality control and production automation (the latter in combination with findings from robotics) are mentioned here as application examples.
  • At least one embodiment is such a device for the physical, in particular optical, detection of at least one object of use, in which the detection of the object of use, and in particular also the tracing of the object of use, i.e. the virtual representation of the object of use itself, is created on a screen by means of an artificial intelligence of the device.
  • for example, a tablet or mobile phone camera can be used to scan a car (similarly to, or in the same way as, recognizing a QR code); in particular, together with the images stored in the database, an image and a pool of information is formed for the user that most closely corresponds to the vehicle actually recorded with the camera.
  • if, for example, the Polo is from the year of manufacture 1985, additional information is shown with pointers, arrows or the like. This makes it easier for the user to identify the object and to handle further information, which is then simply displayed on a screen of the tablet or smartphone or, via an app, on a computer screen.
  • an API interface can also be used to generate corresponding augmented-reality screen information and display it skillfully, together with an object then perceived via a camera, independently of a tablet or smartphone on a fixed computer.
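Purely as an illustration of the superposition described above, assuming OpenCV is available (its `cv2.addWeighted` performs the alpha blend); synthetic arrays stand in for real frames:

```python
# Sketch: alpha-blend a camera frame of the usage object with the database
# template so both stay visible, or place them side by side. Stand-in images.

import numpy as np
import cv2  # assumption: OpenCV is installed

camera_image   = np.full((480, 640, 3), 80,  dtype=np.uint8)   # live frame stand-in
database_image = np.full((480, 640, 3), 200, dtype=np.uint8)   # template stand-in

alpha = 0.6  # weight of the live camera image in the superposition
overlay = cv2.addWeighted(camera_image, alpha, database_image, 1.0 - alpha, 0.0)

# The alternative presentation mentioned in the text: next to one another
side_by_side = np.hstack([camera_image, database_image])
print(overlay.shape, side_by_side.shape)   # (480, 640, 3) (480, 1280, 3)
```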
  • a method based on or arising from the above includes the following alternative, additional or new exemplary steps:
  • Method for controlling a target system, and in particular for detecting at least one usage object, in particular a detection controller, on the basis of operating data of at least one source system, in particular several source systems, in particular of the processing unit, comprising: a) receiving operating data of the source system, the operating data, in particular the image data, being distinguished by means of source-system-specific identifiers, b) training, using a neural network, a neural model on the basis of the received operating data of the source system, taking into account the source-system-specific identifier(s), for example the characteristic value, wherein c) the database object (4) is generated by means of the processing unit and/or the CPU and is stored in the CPU, each database object having at least source data of the characteristic value or the characteristic value itself, so that the object of use (1) is recorded in such a way that an image of the object of use (1) recorded by the capture process corresponds to operating data of the source system or is generated from them, and d) receiving operating data from the target system, in particular the recording controller, this operating data having data that corresponds to predetermined target data of a target object to be imaged; a minimal training sketch follows the next bullets.
  • the operating data can be individual values, for example partial characteristic values, from the recording of the vehicle.
  • the target data correspond to such values, for example partial characteristic values, which the system sets as a target with regard to the resolution of the recorded image, the amount of data in the image, the position of the vehicle in the image, a compression rate, color levels of the vehicle in the image, edge markings of the vehicle in the image, and/or a noise ratio, in particular at least one of the above compared to a vehicle background.
  • within the training loop, the acquisition controller works with these values (partial characteristic values), data sets of the image and the position of the vehicle in the image.
  • the outline of the vehicle is then generated, with the outline being stored as a separate image file by the CPU or another storage medium.
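As a rough, non-authoritative sketch of steps a), b) and d), assuming a toy fully connected model and random placeholder tensors (the application specifies neither architecture nor data; PyTorch is used here for brevity):

```python
# Sketch: train a small neural model on source-system operating data (with a
# source-system-specific identifier as an extra input feature), then continue
# training on target-system operating data. Everything below is placeholder.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(operating_data, targets, epochs=10):
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(operating_data), targets)
        loss.backward()
        optimizer.step()

# 4 operating-data features plus 1 source-system identifier per sample
source_x, source_y = torch.randn(64, 5), torch.randn(64, 3)
target_x, target_y = torch.randn(16, 5), torch.randn(16, 3)

train(source_x, source_y)   # b) training on source-system operating data
train(target_x, target_y)   # d) ff. further training on target-system data
```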
  • the characteristic value is taken, or can be taken, in particular scanned, from an identification means, for example a usage badge of the usage object.
  • the characteristic value is therefore also preferably recorded fully automatically by the processing unit, which has an optical camera, for example. It is preferably no longer necessary for the user and/or the implementation device to have to manually enter the characteristic value into the processing unit.
  • the processing unit includes or is a smartphone or a camera. If the processing unit is a smartphone or a camera, it can be handled manually, as already mentioned above.
  • the processing unit is fastened to a receiving element, which moves relative to the object of use according to the requirements of the detection sequence.
  • the processing unit can therefore move together with the receiving element in accordance with the detection sequence relative to the object of use.
  • the processing unit can be or include a smartphone or a camera, and the processing unit can also be a hand-held processing unit. In this case, however, it is attached to a larger unit, namely the receiving element.
  • the receiving element preferably includes all the necessary components in order to be able to move along the object of use fully automatically or by manual force on the part of the user.
  • the recording element is a drone, which is steered relative to the object of use according to the recording sequence in order to carry out the individual recordings, preferably along or at the above-mentioned marking elements.
  • a “drone” can be an unmanned vehicle, preferably an unmanned aircraft with one or more helicopter rotors.
  • the drone can then be controlled, and thus steered, wirelessly or by cable via a control device, by the user and/or by the implementation device, manually or fully automatically.
  • the drone makes a very space-saving approach possible when moving around the object of use during recording.
  • a safety distance between the object of use and other objects of use, for example other cars in a car showroom, can be dispensed with, since the drone flies to the individual positions to be photographed according to the determination sequence, preferably hovering, without other uninvolved objects of use having to be moved far away.
  • the drone can then simply approach the object of use from above and, for example, also fly into the interior of the car in order to take interior shots as well.
  • FIGS. 3A-3E show another embodiment of the device described here.
  • FIG. 1 shows a device 100 according to the invention.
  • FIG. 4A shows the artificial-intelligence side.
  • data from an input image 71 is subjected to an image brightness correction 80, the data from this image brightness correction 80 being transmitted in parallel to a classification model 81 and a segmentation model 82, from which the ID 91 of this data is then created.
  • the data is then assigned an ID: with an ID of 0 (83) the data relate to a "segment exterior details map post processing", with an ID of 3 (84) to a "segment interior map post processing", and with an ID of 1 (85) to a "segment exterior map post processing".
  • the data is combined into an IF template type 92.
  • in a further step, the image is then processed according to the requirements of the blur template and cropped to the desired size by image size adjustments 87, which then yields the output image 72.
  • the data can take the type "Complete" 88 in order to combine the segmentation map with the alpha mask without background. This is then passed through the shadow generation model 89, which then likewise yields the output image 72.
  • with the "Normal" type 90, the segmentation map is combined with the alpha mask so that only the background of the surface remains. As with the previous types, this image is then displayed as the output image 72.
  • the output image 72 can then be fed back as an input image 71 by an AI server 73; the routing sketched below summarizes this branch structure.
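Read as pseudocode, the ID-based routing of FIG. 4A might look like the following sketch; the handlers are stubs, and only the ID values and branch names are taken from the figure text:

```python
# Sketch of the post-processing dispatch: after classification/segmentation,
# an ID selects the branch. IDs 0, 3 and 1 follow the description above.

def segment_exterior_details_post(data):   # ID 0 (ref. 83)
    return f"exterior-details({data})"

def segment_interior_post(data):           # ID 3 (ref. 84)
    return f"interior({data})"

def segment_exterior_post(data):           # ID 1 (ref. 85)
    return f"exterior({data})"

POST_PROCESSING = {
    0: segment_exterior_details_post,
    3: segment_interior_post,
    1: segment_exterior_post,
}

def route(image_id, data):
    handler = POST_PROCESSING.get(image_id)
    if handler is None:
        raise ValueError(f"no post-processing branch for ID {image_id}")
    return handler(data)

print(route(3, "frame-001"))   # -> interior(frame-001)
```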
  • Figure 4A thus shows the method for controlling a target system, and in particular for detecting at least one object of use, in particular a detection controller, on the basis of operating data from at least one source system, in particular several source systems, in particular the processing unit, which comprises receiving operating data of the source system, the operating data, in particular the image data, differing by source-system-specific identifiers.
  • It also includes training, using a neural network, a neural model on the basis of the received operating data of the source system, taking into account the source-system-specific identifier(s), for example the characteristic value, the database object (4) being generated by means of the processing unit and/or the CPU and stored in the CPU, each database object having at least source data of the characteristic value or the characteristic value itself, so that the object of use (1) is recorded in such a way that an image of the object of use (1) recorded by the recording process corresponds to operating data of the source system or is generated from them.
  • the method further includes receiving operating data from the target system, in particular from the acquisition controller, this operating data having data that corresponds to predetermined target data of a target object to be imaged; further training of the trained neural model on the basis of the operating data of the target system; and finally the control of the source and/or the target system by means of the further-trained neural network, in particular until the operating data of the source system corresponds to the operating data of the target system, in particular the identifier, and/or lies within a tolerance range.
  • FIG. 4B shows the computer vision augmented reality side.
  • an image stream 74 is fed in parallel with the sensor data 110, the camera stream 111, the device position 112 and the device orientation 113. This data is then aggregated and sent to ARKit/ARCore 114. There the image stream 74 is revised, using the world tracking 115, image recognition 116 and content tracking 117, by the 3D rendering 118 and the image processing 119, and then reaches the image-matches-guide check 93. If the data matches the image guide, a photo and/or video and/or screenshot 120 is taken, which then forms the image 75 corresponding to the guideline. In the event that the revised data does not match the image guide, it is sent back to the image stream 74 and the process begins again.
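A hedged sketch of this match-or-retry loop; the match criterion is a placeholder, since the application does not specify how the comparison against the image guide is computed:

```python
# Sketch of FIG. 4B's loop: frames are checked against the image guide (93);
# on a match a photo/screenshot (120) is taken, otherwise the stream continues.

import random

def matches_image_guide(frame):
    # Placeholder for the world-/image-/content-tracking comparison (93)
    return frame["alignment"] > 0.9

def capture_loop(frames):
    for frame in frames:
        if matches_image_guide(frame):
            return {"photo": frame, "status": "captured"}   # photo/screenshot 120
        # no match: the frame goes back to the image stream (74) and we retry
    return {"photo": None, "status": "no match"}

stream = [{"alignment": random.random()} for _ in range(50)]
print(capture_loop(stream)["status"])
```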
  • FIG. 4 is self-explanatory here.
  • the device 100 includes a processing unit 2, by means of which a usage object 1 and/or an identification means 11 uniquely, preferably one-to-one, assigned to the usage object 1 can be physically detected, from which at least one characteristic value of the object of use can be obtained. Furthermore, the processing unit 2 and/or a CPU is set up and provided for carrying out a classification of the object of use insofar as a characteristic value of the object of use is comparable with at least one entry in a database of the processing unit 2 and/or in a database of an external CPU, and the processing unit 2 and/or the CPU and/or the user selects a database object 4 corresponding to the characteristic value 3 and displays it on a screen of the processing unit 2, so that a camera image of the usage object 1 together with the database object 4 can be at least partially optically superimposed on the screen 21 and/or displayed next to one another.
  • with the processing unit 2 and/or the CPU it is possible to carry out at least one physical detection process 5 based on the database object 4 displayed on the screen 21, so that the user detects the usage object 1 in such a way that an image of the usage object captured by the detection process is displayed identically, or identically to scale, at least essentially identically, with the database object 4 shown on the screen 21 at the same time; the detection process thereby allows the usage object 1 to be assigned by the processing unit 2 and/or the CPU and/or the user to at least one usage object class, for example a vehicle type.
  • An example of a first step is shown in FIG. 2A, with a usage object class (e.g. images 30), in particular in the form of an example vehicle type, appearing visually on the screen 21 over the usage object 1 shown there; the processing unit is shown here in the form of a smartphone.
  • the example vehicle type is not only shown in reduced form in area B1 on the screen, but also in an enlarged form, for example a 1:1 form, with a gray shaded background on screen 21 (see area B2).
  • This optically represented usage object class serves as a guide to the object to be photographed.
  • a controller 40 is also shown, by means of which a contrast and/or a brightness of the orientation image, i.e. in particular of the images 30, which each correspond to an optical representation of a usage object class, can be set. In this way, problems that arise in very bright light can be eliminated.
  • FIG. 2B shows a characteristic value detection based on a usage badge 50 of the usage object. In this case, the usage badge 50 is optically scanned by the processing unit 2.
  • the angle at which the processing unit 2, shown here as a smartphone as an example, must be held changes, as a result of which an optimal quality for the comparison and classification process can be achieved.
  • FIG. 2C shows that the processing unit 2 must be held in different angular positions relative to the object of use 1.
  • the processing unit 2 is attached to a receiving element 23, in this case a drone.
  • FIG. 3A therefore shows not only a drone 23 but also the processing unit 2 and the object of use 1, with a distance initially being entered into the processing unit 2 before the drone is launched, or being specified by the detection sequence.
  • before the drone can orient itself automatically and without a drone pilot, it needs information about the object of use 1.
  • the drone can then be placed at a defined distance in front of the vehicle (see FIG. 3B) in order to use the vehicle dimensions in relation to the starting point to fly to all positions according to the acquisition sequence.
  • Corresponding marking elements 60 are shown in FIG. 3C, which are either attached to the object of use 1 or are virtually optically “laid over” it.
  • the marking can be a so-called ArUco marking. These can be high-contrast symbols that were specially developed for camera use. They can contain not only orientation aids but also information. With such a marker, the drone 23 can therefore itself recognize the starting point of the drone flight. A further course of the drone flight is shown in FIG. 3D, which can also be seen in FIG. 3E. However, FIG. 3E also shows how the focal length of a lens of the processing unit 2 transported by the drone 23 affects the recording quality. The usage object 1 shown on the far left was recorded with a wide-angle camera, the usage object 1 shown in the middle with a normal-angle camera, and the usage object 1 on the far right with a telephoto camera. The wide-angle camera can have a focal length of up to 45 mm, the normal-angle camera a focal length of about 50 mm, and a telephoto lens a focal length of 55 mm or more.
  • Focal lengths of less than 50 mm and greater than 50 mm can produce different distortion effects. Owing to the use of various focal lengths of, for example, 6 mm, visible distortions occur in the images taken. In order to be able to compare all images afterwards, the photographs taken should not be post-processed, so the various lenses mentioned above must be used.
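For orientation, the relation between focal length and field of view can be sketched as follows; the full-frame sensor width of 36 mm is an assumption for illustration, not a value from the application:

```python
# Sketch: horizontal field of view for a given focal length,
# FOV = 2 * atan(sensor_width / (2 * focal_length)).

import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (24, 50, 85):   # wide-angle, normal and telephoto examples
    print(f"{f} mm lens -> {horizontal_fov_deg(f):.1f}° horizontal FOV")
# 24 mm -> ~73.7°, 50 mm -> ~39.6°, 85 mm -> ~23.9°
```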

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a device for physically, in particular optically, capturing at least one object of use, comprising the step of carrying out at least one physical capture process, for example by a user and/or an implementation device, in particular at least one photograph, for the object of use, with the result that the object of use is captured in such a way that an image of the object of use captured by the capture process is presented identically, or on an identical scale, to the database object presented on a screen and at the same time as it, the capture process assigning the object of use, by the processing unit and/or the CPU and/or the user, to at least one usage object class, for example a vehicle type.
PCT/EP2022/067629 2021-07-14 2022-06-27 Method for controlling a target system and in particular for capturing at least one usage object, in particular a capture controller WO2023285128A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DE102021118155 2021-07-14
DE102021118155.0 2021-07-14
DE102022002203.6A DE102022002203A1 (de) 2022-06-17 2022-06-17 Method for controlling a target system and in particular for detecting at least one usage object, in particular a detection controller
DE102022002203.6 2022-06-17

Publications (1)

Publication Number Publication Date
WO2023285128A1 true WO2023285128A1 (fr) 2023-01-19

Family

ID=82546996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/067629 WO2023285128A1 (fr) 2021-07-14 2022-06-27 Method for controlling a target system and in particular for capturing at least one usage object, in particular a capture controller

Country Status (1)

Country Link
WO (1) WO2023285128A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190197789A1 (en) * 2017-12-23 2019-06-27 Lifeprint Llc Systems & Methods for Variant Payloads in Augmented Reality Displays

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190197789A1 (en) * 2017-12-23 2019-06-27 Lifeprint Llc Systems & Methods for Variant Payloads in Augmented Reality Displays

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LE HUY ET AL: "Machine Learning with Synthetic Data - a New Way to Learn and Classify the Pictorial Augmented Reality Markers in Real-Time", 2020 35TH INTERNATIONAL CONFERENCE ON IMAGE AND VISION COMPUTING NEW ZEALAND (IVCNZ), 25 November 2020 (2020-11-25), pages 1 - 6, XP055970271, ISBN: 978-1-7281-8579-8, DOI: 10.1109/IVCNZ51579.2020.9290606 *

Similar Documents

Publication Publication Date Title
DE60127644T2 (de) Teaching device for a robot
DE69817498T2 (de) Hand pointing device
EP2166510B1 (fr) Method for determining the position and orientation of a camera installed in a vehicle
DE10355283A1 (de) Hand/eye calibration method and associated point extraction method
DE102011050640A1 (de) Method and device for restoring robot position data
DE102016119605A1 (de) Calibration system and calibration method for calibrating the mechanical parameters of the wrist part of a robot
EP1590714B1 (fr) Projection of synthetic information
EP2071510A2 (fr) Method and system for aligning a virtual model with a real object
EP2381207B1 (fr) 3D target measurement and target orientation from IR data
DE10215885A1 (de) Automatic process control
EP3726425B1 (fr) Method for the physical, in particular optical, detection of at least one usage object
DE102022130652A1 (de) Teaching a robot by demonstration with visual servoing
WO2018215332A1 (fr) External visualization of images captured of a vehicle interior in a virtual-reality headset
DE102017007737A1 (de) Method and device for capturing an image of a plant with a sensor device
DE10151983A1 (de) Method for documenting an accident situation
DE102014012710A1 (de) Method and device for determining the 3D coordinates of an object
EP3575912A1 (fr) Robotic lawnmower
WO2023285128A1 (fr) Method for controlling a target system and in particular for capturing at least one usage object, in particular a capture controller
DE102019110344A1 (de) Device for the physical, in particular optical, detection of at least one usage object
EP4145238A1 (fr) Method for controlling an unmanned aerial vehicle on an inspection flight to inspect an object, and unmanned inspection aerial vehicle
DE102018216561A1 (de) Method, device and computer program for determining an agent's strategy
DE60305345T2 (de) Image processing device with light source detection and selection
DE102019110345A1 (de) Method for the physical, in particular optical, detection of at least one usage object
DE102020127797B4 (de) Sensor method for optically detecting usage objects in order to detect a safety distance between objects
DE102019217225A1 (de) Method for training a machine learning system for an object recognition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22741460

Country of ref document: EP

Kind code of ref document: A1