WO2019066693A1 - Operator assistance system and a method in connection with the system - Google Patents

Operator assistance system and a method in connection with the system

Info

Publication number
WO2019066693A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
working equipment
view
occluded
processing unit
Application number
PCT/SE2018/050829
Other languages
English (en)
Inventor
Per Gustafsson
Original Assignee
Cargotec Patenter Ab
Application filed by Cargotec Patenter Ab filed Critical Cargotec Patenter Ab
Priority to EP18762407.7A (EP3687937B1)
Publication of WO2019066693A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66C: CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00: Other constructional features or details
    • B66C13/18: Control systems or devices
    • B66C13/46: Position indicators for suspended loads or for crane elements

Definitions

  • The present disclosure relates to an operator assistance system and a method in connection with the system.
  • The operator assistance system is in particular used on a vehicle provided with working equipment, e.g. a crane, to assist the operator during loading and unloading procedures.
  • Working vehicles are often provided with various working equipment, e.g. movable cranes which are attached to the vehicle via a joint.
  • These cranes comprise movable crane parts, e.g. booms, that may be extended, and that are joined together by joints such that the crane parts may be folded together at the vehicle and extended to reach a load.
  • Various tools, e.g. buckets or forks, may be attached to the crane tip, often via a rotator.
  • The object of the present invention is to achieve an operator assistance system, provided with a digital see-through capability for vehicles with working equipment, that solves the aforementioned visibility issues.
  • The invention relates to an operator assistance system for a vehicle provided with working equipment.
  • The assistance system comprises an image capturing system arranged at the vehicle and/or at the working equipment and capable of capturing parameters related to images of the working equipment and of the environment outside the vehicle in a predetermined field of view.
  • The operator assistance system further comprises a processing unit configured to receive image related parameter signals from the image capturing system and to process the image related parameter signals, and a display unit configured to present images to an operator.
  • The image capturing system comprises at least two sensor unit assemblies capable of capturing parameters related to essentially overlapping images of the predetermined field of view, and the processing unit is configured to generate a merged image based upon said overlapping images.
  • The processing unit is configured to determine a shape and position of an image representation of the working equipment occluding a part of an image in the predetermined field of view, obtained by one of the sensor unit assemblies, by processing image related signals from at least one of the other sensor unit assemblies.
  • The processing unit is further configured to determine if the part occluded by the working equipment is visible in any image in the predetermined field of view obtained by any of the other sensor unit assemblies. If the occluded part is visible to any of the other sensor unit assemblies, an occluded part image, being an image representation of the occluded part, is determined in the field of view obtained by said other sensor unit assembly.
  • The processing unit is then configured to merge the occluded part image into the merged image in the determined position of the working equipment, and to display the merged image at the display unit.
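
The patent does not prescribe how this merging is implemented. As a minimal sketch, assuming the second view has already been warped into the first view's image plane and that a binary mask of the occluding equipment is available (both are assumptions, not details from the source), the pixel-level merge could look like this:

```python
import numpy as np

def merge_occluded_part(primary: np.ndarray,
                        other_view: np.ndarray,
                        equipment_mask: np.ndarray) -> np.ndarray:
    """Replace the pixels of `primary` hidden by the working equipment
    with the corresponding pixels of `other_view`.

    `other_view` is assumed to be registered to `primary`'s image plane
    (e.g. via a homography estimated during calibration), so identical
    pixel coordinates refer to the same world point.
    """
    merged = primary.copy()
    # Paste the occluded part image into the determined position.
    merged[equipment_mask] = other_view[equipment_mask]
    return merged
```

In practice the registration step dominates the effort: for distant, roughly planar backgrounds a fixed homography per camera pair may suffice, while close-range scenes would need depth information.
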
  • At least one of the sensor unit assemblies comprises at least one angle sensor and/or at least one length sensor structured to be arranged at the working equipment, and adapted to measure angles and lengths related to movements of the working equipment, and at least one camera unit mounted at the vehicle and/or at the working equipment.
  • The sensor unit assemblies comprise at least two camera units that are mounted at separate mounting positions, at the vehicle in relation to the working equipment or at the working equipment itself, such that different sides of the working equipment are visible in images obtained by the at least two camera units.
  • The processing unit is provided with a set of image representations of the working equipment, and is further configured to apply a pattern recognition algorithm to identify any image representation of the working equipment in an image in the predetermined field of view by comparison to the set of image representations. This ensures safe and fast recognition of an occluding object.
  • The processing unit is further configured to identify a predetermined part, e.g. a hook, a load or a fork, of the occluding working equipment to be visible in the merged image, and to display said predetermined part at said display unit. This is advantageous as the operator can then easily manoeuvre the hook or fork, it being clearly visible and highlighted at the display unit.
  • The processing unit is configured to determine a presentation mode of the occluded part image from a set of presentation modes including a transparent mode, wherein the working equipment defining the occluded part image is fully transparent, and a semi-transparent mode, wherein the working equipment defining the occluded part image has a variable opacity. This is an important feature as an operator may choose the optimal presentation depending on the specific situation.
  • The working equipment is a crane, a demountable arm, a boom, a bucket, or any other tool arranged at a vehicle.
  • A method is provided that is applied by an operator assistance system for a vehicle provided with working equipment.
  • The assistance system comprises an image capturing system arranged at the vehicle and/or at the working equipment, and capable of capturing parameters related to images of the working equipment and of the environment outside the vehicle in a predetermined field of view.
  • The assistance system further comprises a processing unit configured to receive image related parameter signals from the image capturing system and to process the image related parameter signals, and a display unit configured to present images to an operator.
  • The image capturing system comprises at least two sensor unit assemblies capable of capturing parameters related to essentially overlapping images of the predetermined field of view, and the processing unit is configured to generate a merged image based upon the overlapping images.
  • The method comprises: determining a shape and position of an image representation of the working equipment occluding a part of an image in the predetermined field of view; determining if the part occluded by the working equipment is visible in any image in the predetermined field of view obtained by any of the other sensor unit assemblies; if so, determining an occluded part image, being an image representation of the occluded part; merging the occluded part image into the merged image in the determined position of the working equipment; and displaying the merged image at the display unit.
  • The method further comprises: identifying a predetermined part, e.g. a hook, a load or a fork, of the occluding working equipment to be visible in said merged image, and displaying said predetermined part at the display unit.
  • The method also comprises: determining a presentation mode of the occluded part image.
  • Figure 1 is a schematic illustration of a vehicle provided with an operator assistance system according to the present invention.
  • Figure 2 is a schematic illustration of images obtained by sensor unit assemblies, and of a merged image.
  • Figures 3a-3c are schematic views from above illustrating various aspects of embodiments of the present invention.
  • Figure 4 is a schematic view from above illustrating another embodiment of the present invention.
  • Figure 5 is a flow diagram illustrating the method according to the present invention.

Detailed description
  • A block diagram is disclosed that schematically illustrates an operator assistance system 2 for a vehicle 4 provided with working equipment 6.
  • The vehicle may be a cargo vehicle, a truck, a forklift or a working vehicle provided with working equipment 6, e.g. a crane, a demountable arm, a mast boom or a hook-lift equipment, provided with a predetermined part 22, e.g. a hook, a fork, or a bucket.
  • The assistance system comprises an image capturing system 8 arranged at the vehicle and/or at the working equipment 6 and capable of capturing parameters related to images of the working equipment 6 and of the environment outside the vehicle in a predetermined field of view 10, a processing unit 12 configured to receive image related parameter signals 14 from the image capturing system 8 and to process the image related parameter signals, and a display unit 16 configured to present captured images to an operator.
  • The image capturing system 8 comprises at least two sensor unit assemblies 18, 20 capable of capturing parameters related to essentially overlapping images of said predetermined field of view 10.
  • The processing unit 12 is configured to generate a merged image based upon the overlapping images.
  • A sensor unit assembly is a camera unit, an angle sensor, a length sensor, or any other sensing unit capable of capturing parameters related to images.
  • The parameters related to images may be parameters directly related to images, e.g. optical parameters detected by a camera unit, or indirectly related to images, e.g. various parameters representing movement and position of the working equipment, such as angles, lengths and widths related to parts of the working equipment.
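
To make the distinction concrete, the two kinds of image related parameter signals could be carried in a structure like the one below. The structure and field names are illustrative assumptions, not terminology from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class ImageRelatedParameters:
    """One batch of signals from a sensor unit assembly."""
    # Directly image related: optical parameters from a camera unit.
    frame: Optional[np.ndarray] = None  # H x W x 3 pixel array
    # Indirectly image related: movement/position of the working equipment.
    joint_angles_deg: List[float] = field(default_factory=list)  # angle sensors
    boom_lengths_m: List[float] = field(default_factory=list)    # length sensors
```
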
  • The processing unit is configured to determine a shape and position of an image representation of the working equipment 6 occluding a part of an image in the predetermined field of view 10, obtained by one of said sensor unit assemblies, by processing image related signals from at least one of the other sensor unit assemblies.
  • The processing unit 12 is further configured to determine if the part occluded by the working equipment 6 is visible in any image in the predetermined field of view 10 obtained by any of the other sensor unit assemblies. If the occluded part is visible to any of the other sensor unit assemblies, an occluded part image, being an image representation of the occluded part, is determined in the field of view 10 obtained by said other sensor unit assembly.
  • The processing unit is then configured to merge the determined occluded part image into the merged image in the determined position of the working equipment 6, and to display the merged image at the display unit 16.
  • At least one of the sensor unit assemblies comprises at least one angle sensor and at least one length sensor structured to be arranged at the working equipment, and adapted to measure angles and lengths related to movements of said working equipment.
  • In addition, at least one camera unit 18, 20 is provided, mounted at the vehicle 4 and/or at the working equipment 6.
  • The angle sensor(s) and/or the length sensor(s) are arranged at the working equipment and structured to generate sensor signals, including angle values and length values, which are applied to the processing unit. Based upon those sensor signals and information indicating the type of working equipment, the processing unit may determine the shape and position of the working equipment.
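
As a sketch of how such a determination could work, consider a simplified two-boom planar crane model. This is an assumption for illustration only; a real crane needs the full 3D kinematic chain including the slewing angle:

```python
import math

def crane_points_2d(angle1_deg: float, len1_m: float,
                    angle2_deg: float, len2_m: float,
                    base=(0.0, 0.0)):
    """Compute base, joint and tip coordinates of a two-boom planar crane.

    angle1/angle2 come from the angle sensors (angle2 is relative to
    boom 1); len1/len2 come from the length sensors of the extensible booms.
    """
    a1 = math.radians(angle1_deg)
    a2 = a1 + math.radians(angle2_deg)
    joint = (base[0] + len1_m * math.cos(a1), base[1] + len1_m * math.sin(a1))
    tip = (joint[0] + len2_m * math.cos(a2), joint[1] + len2_m * math.sin(a2))
    return base, joint, tip
```

Projecting these points, together with the known boom widths, into a calibrated camera would yield the image-plane outline of the occluding equipment.
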
  • The sensor unit assemblies comprise at least two camera units 18, 20 that are mounted at separate mounting positions, at the vehicle 4 in relation to the working equipment 6 or at the working equipment 6 itself, such that different sides of the working equipment are visible in images obtained by the at least two camera units 18, 20.
  • The camera units 18, 20 are mounted at separate mounting positions at the working equipment, e.g. at a boom of a crane, and thereby move together with the crane.
  • The definition that the camera units are mounted at different sides of the working equipment should be interpreted broadly, and is not limited to different sides in a horizontal plane.
  • The important aspect is that the fields of view obtained by the camera units mounted at the different sides cover parts that potentially may be occluded by the working equipment during normal use.
  • The camera units may be mounted at different heights or at other positions where a full overview may be obtained.
  • As an example, two camera units are mounted at different sides of the working equipment.
  • At the top of figure 2, two images are shown, obtained by two camera units arranged at opposite sides of the working equipment, or by a sensor unit assembly at the working equipment together with at least one camera unit.
  • The working equipment is visible in the right image, where it occludes parts of the environment and thereby prevents the operator from having a complete visual overview.
  • The position and shape of the occluding working equipment are determined, and the positions are then applied in the image obtained by the other camera unit to identify the corresponding positions therein. This is schematically illustrated in the left image, where the image representation of the occluded part is indicated by dashed lines.
  • The processing unit is provided with a set of image representations of the working equipment, more particularly a set of searchable data representations of images of the working equipment in various views and sizes.
  • The processing unit is then configured to apply a search procedure to identify any image representation of the working equipment in an image in the predetermined field of view obtained by any of the sensor unit assemblies, by comparison to the set of image representations.
  • This is preferably performed by applying a dedicated pattern recognition algorithm capable of comparing captured images with the stored set of data representations of images of the working equipment.
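
One well-known way to realise such a search procedure is normalized template matching over the stored set of representations. This is offered only as an example; the patent does not mandate a specific algorithm:

```python
import cv2

def find_equipment(frame_gray, templates, threshold=0.8):
    """Search a grayscale camera frame for any stored representation of
    the working equipment; return the best match location and template,
    or None if nothing exceeds the threshold.
    """
    best = None
    for tpl in templates:  # grayscale crops of the equipment, various views/sizes
        result = cv2.matchTemplate(frame_gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val >= threshold and (best is None or max_val > best[0]):
            best = (max_val, max_loc, tpl)
    return None if best is None else (best[1], best[2])
```

The matched template's outline, placed at the returned location, directly yields the shape and position of the occluding image representation.
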
  • The working equipment may occlude a larger or smaller part of the field of view obtained by one camera unit. Even if only a small part of the field of view is occluded, the search procedure may be performed.
  • The search procedure is preferably performed continuously for all images obtained by all sensor unit assemblies, e.g. the camera units, so that occluded parts in images obtained from different sensor unit assemblies, and in different positions, may be identified and eventually applied in a merged image.
  • The processing unit is configured to identify a predetermined part 22 of the occluding working equipment (see figure 1 and figures 3a-3c), e.g. a hook, a load, a bucket or a fork, to be visually presented in the merged image, and to display the predetermined part at the display unit. This is preferably performed by applying another pattern recognition algorithm, set up in advance depending on the presently used predetermined part of the working equipment.
  • The merged image is presented to the operator on the display unit.
  • The display unit may be any type of presentation unit configured and arranged such that the operator is provided with the necessary support to operate the working equipment safely and efficiently. Various presentation modes are available, differing in how the occluding working equipment is displayed.
  • The processing unit is configured to determine a presentation mode of the occluded part image from a set of presentation modes including a transparent mode, wherein the working equipment defining the occluded part image is fully transparent, and a semi-transparent mode, wherein the working equipment defining the occluded part image has a variable opacity.
  • The processing unit is also configured to determine a presentation mode where an outer boundary of the occluded part image is indicated in the merged image.
  • The boundary of the occluded part image may e.g. be indicated by a dashed line.
  • Figures 3a-3c illustrate various presentation modes.
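
A sketch of how the transparent, semi-transparent and boundary modes could be rendered, reusing the merge sketch above. The solid one-pixel outline stands in for the dashed line, which would require manual segment drawing:

```python
import cv2
import numpy as np

def render_mode(merged, original, mask, opacity=0.0, draw_boundary=False):
    """opacity = 0 gives the fully transparent mode; 0 < opacity < 1 gives
    the semi-transparent mode with variable opacity of the equipment."""
    region = mask.astype(bool)
    out = merged.copy()
    # Blend the occluding equipment (original) over the recovered background.
    out[region] = (opacity * original[region].astype(np.float32)
                   + (1.0 - opacity) * merged[region].astype(np.float32)
                   ).astype(np.uint8)
    if draw_boundary:  # indicate the outer boundary of the occluded part image
        contours, _ = cv2.findContours(mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(out, contours, -1, (255, 255, 255), 1)
    return out
```
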
  • Stereo cameras may be used for making 3D pictures, or for range imaging. Unlike most other approaches to depth sensing, such as structured light or time-of-flight measurements, stereo vision is a purely passive technology which also works in bright daylight.
  • The image capturing system may comprise at least one camera unit, but may in addition also include one or many sensor units, e.g. angle sensor units or length sensor units, capable of capturing various supporting image data to be supplied to the processing unit.
  • The image capturing system may also apply Lidar technology.
  • Lidar is sometimes considered an acronym of Light Detection and Ranging (sometimes Light Imaging, Detection, and Ranging), and is a surveying method that measures distance to a target by illuminating that target with laser light.
  • Lidar is popularly used to make high-resolution maps, with applications in geodesy, forestry, laser guidance, airborne laser swath mapping (ALSM), and laser altimetry.
  • Lidar is sometimes called laser scanning or 3D scanning, with terrestrial, airborne, and mobile applications.
  • The image capturing system may also include a 3D scanning device.
  • A 3D scanner is a device that analyses a real-world object or environment to collect data on its shape and possibly its appearance (e.g. colour). The collected data can then be used to construct digital three-dimensional models.
  • Many different technologies can be used to build these 3D-scanning devices; each technology comes with its own limitations, advantages and costs. Many limitations in the kind of objects that can be digitised are still present; for example, optical technologies encounter many difficulties with shiny, mirroring or transparent objects.
  • Industrial computed tomography scanning can, however, be used to construct digital 3D models, applying non-destructive testing.
  • The purpose of a 3D scanner is usually to create a point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If colour information is collected at each point, then the colours on the surface of the subject can also be determined.
  • 3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three-dimensional position of each point in the picture to be identified.
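
Such a "picture" of distances maps directly to 3D points. A minimal unprojection using an assumed pinhole camera model follows; fx, fy, cx and cy are hypothetical intrinsic parameters, not values from the patent:

```python
import numpy as np

def range_image_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a range image of per-pixel z-distances into an H x W x 3
    array of 3D points in the scanner's own camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```
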
  • A so-called time-of-flight Lidar scanner may be used, together with the camera units, to produce a 3D model.
  • The Lidar can aim its laser beam in a wide range: its head rotates horizontally, a mirror flips vertically.
  • The laser beam is used to measure the distance to the first object on its path.
  • The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject.
  • A time-of-flight laser range finder finds the distance of a surface by timing the round-trip time of a pulse of light.
  • A laser is used to emit a pulse of light, and the amount of time before the reflected light is seen by a detector is measured. Since the speed of light c is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface.
  • The accuracy of a time-of-flight 3D laser scanner depends on how precisely the round-trip time t can be measured: approximately 3.3 picoseconds is the time taken for light to travel 1 millimetre.
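
The arithmetic behind this: with round-trip time t and speed of light c, the one-way distance is c·t/2, and light covers 1 mm in 0.001 m / 299 792 458 m/s, roughly 3.34 ps, matching the figure above:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """One-way distance from a time-of-flight measurement: the pulse
    travels to the surface and back, so the path is divided by two."""
    return C * round_trip_time_s / 2.0

print(0.001 / C)  # time for light to travel 1 mm: ~3.34e-12 s
```
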
  • The laser range finder only detects the distance of one point in its direction of view.
  • The scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points.
  • The view direction of the laser range finder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy.
  • Typical time-of-flight 3D laser scanners can measure the distance of 10,000-100,000 points every second.
  • The image capturing system may alternatively use a structured-light 3D scanner that projects a pattern of light on the subject and looks at the deformation of the pattern on the subject.
  • The pattern is projected onto the subject using either an LCD projector or another stable light source.
  • A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view.
  • The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion.
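
Structured light recovers depth by triangulation, essentially the same relation as stereo vision. A sketch under the usual rectified-geometry assumption, with baseline b between projector and camera, focal length f in pixels, and observed pattern shift d in pixels:

```python
def depth_from_pattern_shift(baseline_m: float, focal_px: float,
                             shift_px: float) -> float:
    """Triangulated depth z = b * f / d for one projected pattern feature."""
    if shift_px <= 0:
        raise ValueError("pattern shift must be positive")
    return baseline_m * focal_px / shift_px

# E.g. a 0.2 m projector-camera offset, an 800 px focal length and a
# 40 px shift put the surface at 0.2 * 800 / 40 = 4.0 m.
```
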
  • The display unit may be a display arranged e.g. at a control unit or in the vehicle.
  • In one embodiment the display unit 16 is a pair of glasses, for example of the type sold under the trademark Hololens.
  • The pair of glasses is structured to present the 3D representation such that the 3D representation is overlaid on the operator's direct view of the scene.
  • In another embodiment the display unit 16 is a pair of virtual reality goggles. These types of goggles comprise two displays to be arranged in front of the operator's eyes. This variation is particularly advantageous when the operator has no direct line of sight to an object to be handled. VR goggles are often provided with an orientation sensor that senses the orientation of the goggles. It may then be possible for a user to change the field of view to locate potential obstacles close to the load, provided that the object detecting device has a larger field of view than the image presented at the displays of the VR goggles.
  • The present invention also relates to a method applied by an operator assistance system for a vehicle provided with working equipment. The method will now be described in detail with reference to the flow diagram shown in figure 5.
  • The assistance system is described above and thus comprises an image capturing system arranged at the vehicle and/or at the working equipment, and capable of capturing parameters related to images of the working equipment and of the environment outside said vehicle in a predetermined field of view.
  • The assistance system further comprises a processing unit configured to receive image related parameter signals from the image capturing system and to process the image related parameter signals, and a display unit configured to present images to an operator.
  • The image capturing system comprises at least two sensor unit assemblies capable of capturing parameters related to essentially overlapping images of said predetermined field of view.
  • A sensor unit assembly may be a camera unit, an angle sensor, a length sensor, or any other sensor unit capable of capturing parameters related to images.
  • The processing unit is configured to generate a merged image based upon said overlapping images.
  • The method then comprises determining a shape and position of an image representation of the working equipment occluding a part of an image in the predetermined field of view, by processing image related signals, and determining if the part occluded by the working equipment is visible in any image in the predetermined field of view obtained by any of the other sensor unit assemblies. If the occluded part is visible to any of the other sensor unit assemblies, the method further comprises determining an occluded part image, being an image representation of the occluded part, in the field of view obtained by said other sensor unit assembly, merging the occluded part image into the merged image in the determined position of the working equipment, and displaying the merged image at the display unit.
  • The method preferably comprises identifying a predetermined part, e.g. a hook, a load or a fork, of the occluding working equipment to be visible in the merged image, and then displaying the predetermined part at the display unit.
  • The method also comprises determining a presentation mode of the occluded part image.
  • The presentation mode is determined from a set of presentation modes including a transparent mode, wherein the working equipment defining the occluded part image is fully transparent, and a semi-transparent mode, wherein the working equipment defining the occluded part image has a variable opacity.
  • The method further comprises determining a presentation mode where an outer boundary of the occluded part image is indicated in the merged image, e.g. by a dashed line.
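
Read together, the method steps amount to the following loop. This is a sketch only: capture, detect_equipment, warp_to and part_visible are hypothetical placeholders for the steps described above, while merge_occluded_part and render_mode refer to the earlier sketches:

```python
def assistance_cycle(sensor_assemblies, display, opacity=0.3):
    """One pass of the flow in figure 5, under the stated assumptions."""
    frames = [s.capture() for s in sensor_assemblies]  # assumed capture() API
    primary = frames[0]
    mask = detect_equipment(primary)     # shape and position of the occlusion
    if mask is None:
        display.show(primary)            # nothing occluded; show as-is
        return
    for other in frames[1:]:
        aligned = warp_to(primary, other)   # register other view to primary
        if part_visible(aligned, mask):     # occluded part seen elsewhere?
            merged = merge_occluded_part(primary, aligned, mask)
            display.show(render_mode(merged, primary, mask, opacity))
            return
    display.show(primary)                # occluded part not recoverable
```
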

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Component Parts Of Construction Machinery (AREA)

Abstract

The present invention relates to an operator assistance system (2) for a vehicle (4) provided with working equipment (6), the assistance system comprising an image capturing system (8) arranged at said vehicle and/or at said working equipment (6) and capable of capturing parameters related to images of said working equipment (6) and of the environment outside said vehicle in a predetermined field of view (10). A processing unit (12) is configured to receive image related parameter signals (14) from said image capturing system (8) and to process said image related parameter signals (14), and a display unit (16) is configured to present images to an operator. The processing unit (12) is configured to determine a shape and position of an image representation of said working equipment (6) occluding a part of an image in the predetermined field of view (10), obtained by one of said two sensor unit assemblies (18, 20), by processing image related signals from at least one of the other sensor unit assemblies. The processing unit (12) is further configured to determine if the part occluded by said working equipment (6) is visible in any image in the predetermined field of view (10) obtained by any of the other sensor unit assemblies. If the occluded part is visible to any of the other sensor unit assemblies, an occluded part image, being an image representation of said occluded part, is determined in the field of view (10) obtained by said other sensor unit assembly. The processing unit is configured to merge said occluded part image into said merged image in said determined position of said working equipment (6), and to display said merged image at said display unit (16).
PCT/SE2018/050829 2017-09-26 2018-08-16 Operator assistance system and a method in connection with the system WO2019066693A1

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP18762407.7A EP3687937B1 2017-09-26 2018-08-16 Operator assistance system and a method in connection with the system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1751193 2017-09-26
SE1751193-2 2017-09-26

Publications (1)

Publication Number Publication Date
WO2019066693A1 2019-04-04

Family

ID=63442763

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2018/050829 WO2019066693A1 2017-09-26 2018-08-16 Operator assistance system and a method in connection with the system

Country Status (2)

Country Link
EP (1) EP3687937B1 (fr)
WO (1) WO2019066693A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020229593A1 * 2019-05-16 2020-11-19 Jungheinrich Ag Method for assisting storage operations in an industrial truck, and industrial truck
WO2023100889A1 * 2021-11-30 2023-06-08 株式会社タダノ Operation assistance system and work vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010204821A (ja) * 2009-03-02 2010-09-16 Hitachi Constr Mach Co Ltd Working machine equipped with a surroundings monitoring device
JP2013113044A (ja) * 2011-11-30 2013-06-10 Sumitomo (Shi) Construction Machinery Co Ltd Monitor system for construction machinery
WO2014157567A1 * 2013-03-28 2014-10-02 三井造船株式会社 Crane operator cab and crane

Also Published As

Publication number Publication date
EP3687937B1 (fr) 2021-10-06
EP3687937A1 (fr) 2020-08-05

Similar Documents

Publication Publication Date Title
US11292700B2 (en) Driver assistance system and a method
EP3589575B1 - Vehicle provided with an arrangement for determining a three-dimensional representation of a movable member
US10132611B2 (en) Laser scanner
EP3235773B1 - Device for obtaining surrounding information for a working vehicle
US7974461B2 (en) Method and apparatus for displaying a calculated geometric entity within one or more 3D rangefinder data sets
WO2018169467A1 - A vehicle provided with a crane with an object detecting device
US7777761B2 (en) Method and apparatus for specifying and displaying measurements within a 3D rangefinder data set
EP2805180B1 - Laser tracker with functionality for graphical target preparation
US20060193521A1 (en) Method and apparatus for making and displaying measurements based upon multiple 3D rangefinder data sets
US20210125487A1 (en) Methods and systems for detecting intrusions in a monitored volume
JPH06293236A (ja) 走行環境監視装置
EP3687937B1 - Operator assistance system and a method in connection with the system
US10890430B2 (en) Augmented reality-based system with perimeter definition functionality
KR102332616B1 - Display device for construction equipment using lidar and AR
JP2020142903A - Image processing device and control program
EP4246184A1 - Software camera view lock allowing drawing editing without offset in the view
EP4227708A1 - Augmented reality alignment and visualisation of a point cloud
US20240161435A1 (en) Alignment of location-dependent visualization data in augmented reality
WO2024102428A1 - Alignment of location-dependent visualization data in augmented reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18762407; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2018762407; Country of ref document: EP; Effective date: 20200428)