WO2017147826A1 - Image processing method and apparatus for smart device - Google Patents

Image processing method and apparatus for smart device

Info

Publication number
WO2017147826A1
WO2017147826A1 (PCT/CN2016/075387, CN2016075387W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
dimensional
posture angle
image processing
Prior art date
Application number
PCT/CN2016/075387
Other languages
English (en)
French (fr)
Inventor
武克易
Original Assignee
武克易
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 武克易
Priority to PCT/CN2016/075387
Publication of WO2017147826A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present invention relates to the field of UAV monitoring equipment, and more particularly to an image processing method and apparatus for a smart device.
  • virtual reality technology is a computer simulation system that can create and experience virtual worlds. It uses a computer to generate a simulated environment: an interactive, three-dimensional dynamic-vision and physical-behavior system simulation with multi-source information fusion that immerses the user in that environment.
  • besides the visual perception generated by computer graphics, it provides perceptions such as hearing, touch, force, and motion, and even smell and taste; this is also known as multi-perception.
  • natural skills refer to a person's head rotation, eye movement, gestures, or other human actions.
  • the computer processes data matched to the participant's actions, responds to the user's input in real time, and feeds the responses back to the user's five senses.
  • in order to enhance the authenticity of the virtual space and provide customers with an immersive visual experience, drone monitoring equipment often needs to provide the user with different real and virtual images as the posture of the machine is adjusted.
  • the technical problem to be solved by the present invention is to provide an image processing method and apparatus for a smart device in view of the above-mentioned drawbacks of the prior art.
  • a plurality of feature information parameters in the current real image are analyzed and extracted; the first three-dimensional coordinates and first posture angle corresponding to each piece of the feature information in the real scene space coordinate system are obtained; and the change information of the position of each piece of the feature information in the real scene space coordinate system is acquired in real time.
  • the image processing method of the present invention further includes the steps of:
  • the two-dimensional image of the annotation information is displayed in a superimposed form in the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
  • the image processing method of the present invention wherein the manner of acquiring the current real image is: acquiring by a camera shooting mode.
  • the database stores a plurality of annotation information corresponding to the feature information parameter in a feature value index manner, where the annotation information includes address information, name, source, length, width, or depth.
  • the annotation information is displayed in floating form as characters, graphics, or annotations.
  • the present invention also provides an image processing apparatus for a drone monitoring device, which includes:
  • An image acquisition module configured to acquire a current real image
  • an analysis module configured to analyze and extract a plurality of feature information parameters in the current real image, obtain the first three-dimensional coordinates and first posture angle corresponding to each piece of feature information in a real scene space coordinate system, and acquire, in real time, the change information of the position of each piece of feature information in the real scene space coordinate system;
  • a searching module configured to search for the annotation information from the corresponding database according to the feature information parameter
  • a coordinate transformation module configured to convert the first three-dimensional coordinates into second three-dimensional coordinates according to the change information, and convert the first posture angle into a second posture angle;
  • a display module configured to display the two-dimensional image of the annotation information, in superimposed form, on the current real image screen according to the first three-dimensional coordinates and first posture angle, or according to the converted second three-dimensional coordinates and second posture angle.
  • the coordinate transformation module is further configured to, when continuously changing information of the position of each piece of feature information in the real scene space coordinate system is acquired, keep converting the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and keep converting the first posture angle into the latest posture angle;
  • the display module is further configured to display the two-dimensional image of the annotation information in a superimposed form in a current real image screen according to the latest three-dimensional coordinates and a latest posture angle.
  • the image processing device of the present invention wherein the image acquisition module is a camera.
  • the annotation information is displayed in a floating manner in the form of characters, graphics or annotations.
  • the invention has the beneficial effect that the virtual-information coordinates in the image display can be adjusted as the real image changes, providing the user with the best display experience.
  • FIG. 1 is a flow chart of an image processing method of a drone monitoring device according to a preferred embodiment of the present invention
  • FIG. 2 is a further flow chart of an image processing method of a UAV monitoring device in accordance with a preferred embodiment of the present invention
  • FIG. 3 is a block diagram showing the principle of an image processing apparatus of a drone monitoring device in accordance with a preferred embodiment of the present invention.
  • Step S101 Acquire a current real image.
  • Step S102 analyzing and extracting a plurality of feature information parameters in the current real image
  • Step S103 Acquire a first three-dimensional coordinate and a first posture angle corresponding to each feature information in a real scene space coordinate system
  • Step S104 Obtain real-time change information of each feature information in a real scene space coordinate system
  • Step S105 Search for the annotation information from the corresponding database according to the feature information parameter
  • Step S106 Convert the first three-dimensional coordinates into the second three-dimensional coordinates according to the change information, and convert the first posture angle into the second posture angle;
  • Step S107 Display the two-dimensional image of the annotation information in a superimposed form on the current real image screen according to the first three-dimensional coordinate and the first posture angle, or according to the converted second three-dimensional coordinate and the second posture angle.
  • the above real image may be a still image or a moving image, and the brightness of the display image may be adjusted.
  • analyzing and extracting a plurality of feature information parameters in the current real image includes a plurality of setting modes, such as an item labeling mode, a route indication information labeling mode, an English labeling mode, a material labeling mode, and the like.
  • the image processing method further includes:
  • Step S1061 when acquiring the continuously changing information of the position of each feature information in the real scene space coordinate system, continuously converting the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and converting the first posture angle into the latest posture angle ;
  • Step S1071 The two-dimensional image of the annotation information is displayed in a superimposed form in the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
  • in the above-described step of displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen, the annotation information is displayed in abbreviated form when its content is excessive.
  • the manner of acquiring the current real image is: acquiring by a camera shooting mode.
  • the database stores the annotation information corresponding to the plurality of feature information parameters in the feature value index manner, and the annotation information includes the address information, the name, the source, the length, the width, or the depth.
  • the annotation information is displayed in a floating manner in the form of characters, graphics or annotations.
  • the present invention also provides an image processing apparatus for a drone monitoring device, as shown in FIG. 3, comprising:
  • An image obtaining module 10 configured to acquire a current real image
  • the analyzing module 20 is configured to analyze and extract a plurality of feature information parameters in the current real image, obtain the first three-dimensional coordinates and first posture angle corresponding to each piece of feature information in a real scene space coordinate system, and acquire, in real time, the change information of the position of each piece of feature information in the real scene space coordinate system;
  • the searching module 30 is configured to search for the annotation information from the corresponding database according to the feature information parameter;
  • the coordinate transformation module 40 is configured to convert the first three-dimensional coordinates into the second three-dimensional coordinates according to the change information, and convert the first posture angle into the second posture angle;
  • the display module 50 is configured to display the two-dimensional image of the annotation information, in superimposed form, on the current real image screen according to the first three-dimensional coordinates and first posture angle, or according to the converted second three-dimensional coordinates and second posture angle.
  • the coordinate transformation module 40 is further configured to, when continuously changing information of the position of each piece of feature information in the real scene space coordinate system is acquired, continuously convert the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and convert the first posture angle into the latest posture angle;
  • the display module 50 is further configured to display the two-dimensional image of the annotation information in a superimposed form in the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
  • the image acquisition module is preferably a camera, and may be one or more cameras.
  • the annotation information is displayed in a floating manner in the form of characters, graphics or annotations.
  • the virtual information coordinates in the image display can be adjusted according to the real image transformation to provide the user with the best display experience.
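The conversion and display described above (first coordinates and posture angle, plus change information, giving the second coordinates and posture angle, which position the superimposed 2D annotation) can be sketched in Python. This is an illustrative reconstruction, not the patent's implementation: the additive model for the change information and the pinhole-projection parameters (`focal`, `cx`, `cy`) are assumptions.

```python
def apply_change(first_coord, first_posture, translation, posture_delta):
    # Convert the first 3D coordinates and first posture angle into the
    # second ones using the change information, modeled here (an
    # assumption) as a translation plus a posture-angle increment.
    second_coord = tuple(c + d for c, d in zip(first_coord, translation))
    second_posture = tuple(p + d for p, d in zip(first_posture, posture_delta))
    return second_coord, second_posture

def project_to_screen(coord, focal=800.0, cx=640.0, cy=360.0):
    # Pinhole projection of a scene-space coordinate into the current
    # real-image picture: the pixel at which the annotation's
    # two-dimensional image would be superimposed.
    x, y, z = coord
    return (focal * x / z + cx, focal * y / z + cy)

# A feature first located at (0.5, 0.0, 2.0) whose position then shifted:
coord, posture = apply_change((0.5, 0.0, 2.0), (0.0, 0.0, 0.0),
                              (0.1, 0.0, 0.0), (0.0, 0.1, 0.0))
anchor = project_to_screen(coord)  # overlay anchor in pixels, approx (880, 360)
```

In a full system the camera intrinsics would come from calibration rather than fixed defaults; the sketch only shows how the coordinate conversion feeds the superimposed display.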

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing method and apparatus for a smart device. The image processing method comprises the following steps: acquiring a current real image (S101); analyzing and extracting a plurality of feature information parameters in the current real image (S102); obtaining the first three-dimensional coordinates and first posture angle corresponding to each piece of feature information in a real scene space coordinate system (S103); acquiring in real time the change information of the position of each piece of feature information in the real scene space coordinate system (S104); searching for annotation information from a corresponding database according to the feature information parameters (S105); converting the first three-dimensional coordinates into second three-dimensional coordinates according to the change information, and converting the first posture angle into a second posture angle (S106); and displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the first three-dimensional coordinates and first posture angle (S107). By adopting the above solution, the virtual-information coordinates in the image display can be adjusted as the real image changes, providing the user with the best display experience.

Description

Image processing method and apparatus for a smart device
Technical Field
The present invention relates to the technical field of drone monitoring equipment, and more particularly to an image processing method and apparatus for a smart device.
Background Art
With the rapid development of drone-related technologies at home and abroad, drone systems come in many varieties with wide uses and distinctive features. Aerial photography is a major application of civilian drones. In addition, virtual reality technology is a computer simulation system that can create and experience virtual worlds. It uses a computer to generate a simulated environment: an interactive, three-dimensional dynamic-vision and physical-behavior system simulation with multi-source information fusion that immerses the user in that environment. Besides the visual perception generated by computer graphics technology, it provides perceptions such as hearing, touch, force, and motion, and even smell and taste; this is also known as multi-perception. Natural skills refer to a person's head rotation, eye movement, gestures, or other human actions; the computer processes data matched to the participant's actions, responds to the user's input in real time, and feeds the responses back to the user's five senses.
In order to enhance the authenticity of the virtual space and provide customers with an immersive visual experience, drone monitoring equipment often needs to provide the user with different real and virtual images as the posture of the machine is adjusted.
Summary of the Invention
The technical problem to be solved by the present invention is, in view of the above drawbacks of the prior art, to provide an image processing method and apparatus for a smart device.
The technical solution adopted by the present invention to solve its technical problem is:
constructing an image processing method for a drone monitoring device, comprising the following steps:
acquiring a current real image;
analyzing and extracting a plurality of feature information parameters in the current real image, obtaining the first three-dimensional coordinates and first posture angle corresponding to each piece of the feature information in a real scene space coordinate system, and acquiring in real time the change information of the position of each piece of the feature information in the real scene space coordinate system;
searching for annotation information from a corresponding database according to the feature information parameters;
converting the first three-dimensional coordinates into second three-dimensional coordinates according to the change information, and converting the first posture angle into a second posture angle;
displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the first three-dimensional coordinates and first posture angle, or according to the converted second three-dimensional coordinates and second posture angle.
In the image processing method of the present invention, the method further comprises the steps of:
when continuously changing information of the position of each piece of the feature information in the real scene space coordinate system is acquired, continuously converting the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and converting the first posture angle into the latest posture angle;
displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
In the image processing method of the present invention, in the step of displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen:
when the annotation information content is excessive, it is displayed in abbreviated form.
In the image processing method of the present invention, the manner of acquiring the current real image is: acquiring it by camera shooting.
In the image processing method of the present invention, the database stores, in a feature-value index manner, annotation information corresponding to a plurality of the feature information parameters; the annotation information includes address information, name, source, length, width, or depth.
In the image processing method of the present invention, the annotation information is displayed in floating form as characters, graphics, or annotations.
The present invention also provides an image processing apparatus for a drone monitoring device, comprising:
an image acquisition module, configured to acquire a current real image;
an analysis module, configured to analyze and extract a plurality of feature information parameters in the current real image, obtain the first three-dimensional coordinates and first posture angle corresponding to each piece of the feature information in a real scene space coordinate system, and acquire in real time the change information of the position of each piece of the feature information in the real scene space coordinate system;
a searching module, configured to search for annotation information from a corresponding database according to the feature information parameters;
a coordinate transformation module, configured to convert the first three-dimensional coordinates into second three-dimensional coordinates according to the change information, and convert the first posture angle into a second posture angle;
a display module, configured to display the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the first three-dimensional coordinates and first posture angle, or according to the converted second three-dimensional coordinates and second posture angle.
In the image processing apparatus of the present invention, the coordinate transformation module is further configured to, when continuously changing information of the position of each piece of the feature information in the real scene space coordinate system is acquired, continuously convert the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and convert the first posture angle into the latest posture angle;
the display module is further configured to display the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
In the image processing apparatus of the present invention, the image acquisition module is a camera.
In the image processing apparatus of the present invention, the annotation information is displayed in floating form as characters, graphics, or annotations.
The beneficial effect of the present invention is that the virtual-information coordinates in the image display can be adjusted as the real image changes, providing the user with the best display experience.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the present invention is further described below with reference to the drawings and embodiments. The drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort:
FIG. 1 is a flow chart of the image processing method of a drone monitoring device according to a preferred embodiment of the present invention;
FIG. 2 is a further flow chart of the image processing method of a drone monitoring device according to a preferred embodiment of the present invention;
FIG. 3 is a schematic block diagram of the image processing apparatus of a drone monitoring device according to a preferred embodiment of the present invention.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, a clear and complete description is given below in conjunction with the technical solutions in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The flow of the image processing method of the drone monitoring device according to a preferred embodiment of the present invention is shown in FIG. 1 and comprises the following steps:
Step S101: acquire a current real image;
Step S102: analyze and extract a plurality of feature information parameters in the current real image;
Step S103: obtain the first three-dimensional coordinates and first posture angle corresponding to each piece of feature information in a real scene space coordinate system;
Step S104: acquire in real time the change information of the position of each piece of feature information in the real scene space coordinate system;
Step S105: search for annotation information from a corresponding database according to the feature information parameters;
Step S106: convert the first three-dimensional coordinates into second three-dimensional coordinates according to the change information, and convert the first posture angle into a second posture angle;
Step S107: display the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the first three-dimensional coordinates and first posture angle, or according to the converted second three-dimensional coordinates and second posture angle.
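The steps above can be sketched as a single per-frame routine. This is a minimal sketch under stated assumptions: the feature extraction (S102) and change tracking (S104) are replaced by precomputed inputs, since the patent does not specify those algorithms, and the change information is modeled as a translation plus a posture-angle delta.

```python
def transform(coord, posture, change):
    # S106: convert the first coordinates/posture angle into the second
    # ones; the change info is assumed to be (translation, posture delta).
    dt, dp = change
    coord = tuple(c + d for c, d in zip(coord, dt))
    posture = tuple(p + d for p, d in zip(posture, dp))
    return coord, posture

def process_frame(features, annotation_db, changes):
    # Mirror of S103-S107 for one already-acquired frame (S101) whose
    # features were already extracted (S102). `features` maps a feature
    # key to its (first coordinates, first posture angle); `annotation_db`
    # maps the same key to annotation info (S105); `changes` holds any
    # position change info observed in real time (S104).
    overlays = []
    for key, (coord, posture) in features.items():
        annotation = annotation_db.get(key)   # S105: database lookup
        if annotation is None:
            continue
        change = changes.get(key)             # S104: change information
        if change is not None:
            coord, posture = transform(coord, posture, change)  # S106
        # S107: the 2D image of `annotation` would be superimposed on the
        # current real image at this coordinate and posture angle.
        overlays.append((annotation, coord, posture))
    return overlays

features = {"door": ((1.0, 0.0, 3.0), (0.0, 0.0, 0.0))}
db = {"door": {"name": "entrance", "width": 0.9}}
changes = {"door": ((0.0, 0.0, -0.5), (0.1, 0.0, 0.0))}
result = process_frame(features, db, changes)
```

The keys, field names, and the dictionary-based database are illustrative; the real apparatus splits these roles across the modules of FIG. 3.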
Here, the above real image may be a still image or a moving image, and the brightness of the displayed image may be adjustable. In the above step S102, analyzing and extracting the plurality of feature information parameters in the current real image involves a plurality of setting modes, for example: an item labeling mode, a route-indication-information labeling mode, an English labeling mode, a material labeling mode, and so on.
Further, as shown in FIG. 2, the above image processing method further comprises:
Step S1061: when continuously changing information of the position of each piece of feature information in the real scene space coordinate system is acquired, continuously convert the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and convert the first posture angle into the latest posture angle;
Step S1071: display the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
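Steps S1061 and S1071 amount to a loop that keeps folding newly arrived change information into the latest coordinates and posture angle, with each new state driving a redraw of the superimposed annotation. A toy sketch (the additive update model is again an assumption, not the patent's specified math):

```python
def track_updates(coord, posture, change_stream):
    # S1061: keep converting the first coordinates/posture angle into the
    # latest ones as change information keeps arriving; each recorded
    # state would trigger a redraw of the overlay (S1071).
    states = []
    for translation, posture_delta in change_stream:
        coord = tuple(c + d for c, d in zip(coord, translation))
        posture = tuple(p + d for p, d in zip(posture, posture_delta))
        states.append((coord, posture))
    return states

stream = [((0.1, 0.0, 0.0), (0.0, 0.0, 0.0)),
          ((0.1, 0.0, 0.0), (0.05, 0.0, 0.0))]
history = track_updates((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), stream)
# history[-1] holds the latest coordinates and posture angle
```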
Specifically, in the above step of displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen: when the annotation information content is excessive, it is displayed in abbreviated form.
Further, in the above image processing method, the manner of acquiring the current real image is: acquiring it by camera shooting.
Further, in the above image processing method, the database stores, in a feature-value index manner, annotation information corresponding to a plurality of feature information parameters; the annotation information includes address information, name, source, length, width, or depth.
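A feature-value-indexed annotation store, together with the abbreviated display used when the annotation content is excessive, might look like the following sketch. The field names follow the description above, while the 32-character cutoff and the formatting are illustrative assumptions.

```python
class AnnotationDB:
    # Toy annotation store indexed by feature value: each record carries
    # the fields the description lists (address information, name,
    # source, length, width, or depth).

    def __init__(self):
        self._store = {}

    def put(self, feature_value, **fields):
        self._store[feature_value] = fields

    def lookup(self, feature_value):
        # Returns None when no annotation matches the feature value.
        return self._store.get(feature_value)

def display_text(annotation, limit=32):
    # Render the annotation for floating display; when the content is
    # excessive, show it in abbreviated form (here: cut off with "...").
    text = ", ".join(f"{k}: {v}" for k, v in annotation.items())
    return text if len(text) <= limit else text[:limit - 3] + "..."

db = AnnotationDB()
db.put("feat-01", name="bridge", source="survey", length=120.0)
ann = db.lookup("feat-01")
label = display_text(ann)
```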
In the image processing method of the present invention, the annotation information is displayed in floating form as characters, graphics, or annotations.
The present invention also provides an image processing apparatus for a drone monitoring device, as shown in FIG. 3, comprising:
an image acquisition module 10, configured to acquire a current real image;
an analysis module 20, configured to analyze and extract a plurality of feature information parameters in the current real image, obtain the first three-dimensional coordinates and first posture angle corresponding to each piece of feature information in a real scene space coordinate system, and acquire in real time the change information of the position of each piece of feature information in the real scene space coordinate system;
a searching module 30, configured to search for annotation information from a corresponding database according to the feature information parameters;
a coordinate transformation module 40, configured to convert the first three-dimensional coordinates into second three-dimensional coordinates according to the change information, and convert the first posture angle into a second posture angle;
a display module 50, configured to display the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the first three-dimensional coordinates and first posture angle, or according to the converted second three-dimensional coordinates and second posture angle.
In the above image processing apparatus, the coordinate transformation module 40 is further configured to, when continuously changing information of the position of each piece of feature information in the real scene space coordinate system is acquired, continuously convert the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and convert the first posture angle into the latest posture angle;
the display module 50 is further configured to display the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
Likewise, in the above image processing apparatus, the image acquisition module is preferably a camera, and may be one or more cameras.
Likewise, in the above image processing apparatus, the annotation information is displayed in floating form as characters, graphics, or annotations.
By adopting the above solution, the virtual-information coordinates in the image display can be adjusted as the real image changes, providing the user with the best display experience.
It should be understood that those of ordinary skill in the art may make improvements or modifications based on the above description, and all such improvements and modifications shall fall within the protection scope of the claims appended to the present invention.

Claims (10)

  1. An image processing method for a drone monitoring device, characterized by comprising the following steps:
    acquiring a current real image;
    analyzing and extracting a plurality of feature information parameters in the current real image, obtaining the first three-dimensional coordinates and first posture angle corresponding to each piece of the feature information in a real scene space coordinate system, and acquiring in real time the change information of the position of each piece of the feature information in the real scene space coordinate system;
    searching for annotation information from a corresponding database according to the feature information parameters;
    converting the first three-dimensional coordinates into second three-dimensional coordinates according to the change information, and converting the first posture angle into a second posture angle;
    displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the first three-dimensional coordinates and first posture angle, or according to the converted second three-dimensional coordinates and second posture angle.
  2. The image processing method according to claim 1, characterized by further comprising the steps of:
    when continuously changing information of the position of each piece of the feature information in the real scene space coordinate system is acquired, continuously converting the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and converting the first posture angle into the latest posture angle;
    displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
  3. The image processing method according to claim 1, characterized in that, in the step of displaying the two-dimensional image of the annotation information in superimposed form on the current real image screen:
    when the annotation information content is excessive, it is displayed in abbreviated form.
  4. The image processing method according to claim 1, characterized in that the manner of acquiring the current real image is: acquiring it by camera shooting.
  5. The image processing method according to claim 1, characterized in that the database stores, in a feature-value index manner, annotation information corresponding to a plurality of the feature information parameters, the annotation information including address information, name, source, length, width, or depth.
  6. The image processing method according to claim 1, characterized in that the annotation information is displayed in floating form as characters, graphics, or annotations.
  7. An image processing apparatus for a drone monitoring device, characterized by comprising:
    an image acquisition module, configured to acquire a current real image;
    an analysis module, configured to analyze and extract a plurality of feature information parameters in the current real image, obtain the first three-dimensional coordinates and first posture angle corresponding to each piece of the feature information in a real scene space coordinate system, and acquire in real time the change information of the position of each piece of the feature information in the real scene space coordinate system;
    a searching module, configured to search for annotation information from a corresponding database according to the feature information parameters;
    a coordinate transformation module, configured to convert the first three-dimensional coordinates into second three-dimensional coordinates according to the change information, and convert the first posture angle into a second posture angle;
    a display module, configured to display the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the first three-dimensional coordinates and first posture angle, or according to the converted second three-dimensional coordinates and second posture angle.
  8. The image processing apparatus according to claim 7, characterized in that the coordinate transformation module is further configured to, when continuously changing information of the position of each piece of the feature information in the real scene space coordinate system is acquired, continuously convert the first three-dimensional coordinates into the latest three-dimensional coordinates according to the change information, and convert the first posture angle into the latest posture angle;
    the display module is further configured to display the two-dimensional image of the annotation information in superimposed form on the current real image screen according to the latest three-dimensional coordinates and the latest posture angle.
  9. The image processing apparatus according to claim 8, characterized in that the image acquisition module is a camera.
  10. The image processing apparatus according to claim 9, characterized in that the annotation information is displayed in floating form as characters, graphics, or annotations.
PCT/CN2016/075387 2016-03-02 2016-03-02 Image processing method and apparatus for smart device WO2017147826A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/075387 WO2017147826A1 (zh) 2016-03-02 2016-03-02 Image processing method and apparatus for smart device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/075387 WO2017147826A1 (zh) 2016-03-02 2016-03-02 Image processing method and apparatus for smart device

Publications (1)

Publication Number Publication Date
WO2017147826A1 true WO2017147826A1 (zh) 2017-09-08

Family

ID=59742453

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/075387 WO2017147826A1 (zh) 2016-03-02 2016-03-02 Image processing method and apparatus for smart device

Country Status (1)

Country Link
WO (1) WO2017147826A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047717A (zh) * 2019-12-24 2020-04-21 北京法之运科技有限公司 Method for adding text annotations to a three-dimensional model
CN111105505A (zh) * 2019-11-25 2020-05-05 北京智汇云舟科技有限公司 Method and system for fast stitching of pan-tilt dynamic images based on three-dimensional geographic information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646275A (zh) * 2012-02-22 2012-08-22 西安华旅电子科技有限公司 Method for realizing virtual three-dimensional superposition through tracking and positioning algorithms
CN103218854A (zh) * 2013-04-01 2013-07-24 成都理想境界科技有限公司 Method for labeling components in an augmented reality process and augmented reality system
JP2013183333A (ja) * 2012-03-02 2013-09-12 Alpine Electronics Inc Augmented reality system
CN104750969A (zh) * 2013-12-29 2015-07-01 刘进 Omnidirectional augmented reality information superposition method for smart devices
CN105096382A (zh) * 2015-07-09 2015-11-25 浙江宇视科技有限公司 Method and device for associating real-object information in video surveillance images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646275A (zh) * 2012-02-22 2012-08-22 西安华旅电子科技有限公司 Method for realizing virtual three-dimensional superposition through tracking and positioning algorithms
JP2013183333A (ja) * 2012-03-02 2013-09-12 Alpine Electronics Inc Augmented reality system
CN103218854A (zh) * 2013-04-01 2013-07-24 成都理想境界科技有限公司 Method for labeling components in an augmented reality process and augmented reality system
CN104750969A (zh) * 2013-12-29 2015-07-01 刘进 Omnidirectional augmented reality information superposition method for smart devices
CN105096382A (zh) * 2015-07-09 2015-11-25 浙江宇视科技有限公司 Method and device for associating real-object information in video surveillance images

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105505A (zh) * 2019-11-25 2020-05-05 北京智汇云舟科技有限公司 Method and system for fast stitching of pan-tilt dynamic images based on three-dimensional geographic information
CN111047717A (zh) * 2019-12-24 2020-04-21 北京法之运科技有限公司 Method for adding text annotations to a three-dimensional model

Similar Documents

Publication Publication Date Title
US11736756B2 (en) Producing realistic body movement using body images
US9654734B1 (en) Virtual conference room
US10192364B2 (en) Augmented reality product preview
CN106355153B (zh) 一种基于增强现实的虚拟对象显示方法、装置以及系统
US9595127B2 (en) Three-dimensional collaboration
CN108876934B (zh) 关键点标注方法、装置和系统及存储介质
US8644467B2 (en) Video conferencing system, method, and computer program storage device
CN106705837B (zh) 一种基于手势的物体测量方法及装置
CN109584295A (zh) 对图像内目标物体进行自动标注的方法、装置及系统
US20120162384A1 (en) Three-Dimensional Collaboration
US10235806B2 (en) Depth and chroma information based coalescence of real world and virtual world images
US20200311396A1 (en) Spatially consistent representation of hand motion
CN109035415B (zh) 虚拟模型的处理方法、装置、设备和计算机可读存储介质
CN108668050B (zh) 基于虚拟现实的视频拍摄方法和装置
CN110573992B (zh) 使用增强现实和虚拟现实编辑增强现实体验
US11900552B2 (en) System and method for generating virtual pseudo 3D outputs from images
US11928384B2 (en) Systems and methods for virtual and augmented reality
JP6656382B2 (ja) Method and apparatus for processing multimedia information
Camba et al. From reality to augmented reality: Rapid strategies for developing marker-based AR content using image capturing and authoring tools
CN107066605A (zh) 基于图像识别的设备信息自动调阅展示方法
EP3141985A1 (en) A gazed virtual object identification module, a system for implementing gaze translucency, and a related method
US10582190B2 (en) Virtual training system
WO2017147826A1 (zh) Image processing method and apparatus for smart device
Fadzli et al. A robust real-time 3D reconstruction method for mixed reality telepresence
Saggio et al. Augmented reality for restoration/reconstruction of artefacts with artistic or historical value

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16892017

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16892017

Country of ref document: EP

Kind code of ref document: A1