WO2019000325A1 - Augmented reality method for drone aerial photography, processor, and drone - Google Patents

Augmented reality method for drone aerial photography, processor, and drone Download PDF

Info

Publication number
WO2019000325A1
WO2019000325A1 · PCT/CN2017/090820 · CN2017090820W
Authority
WO
WIPO (PCT)
Prior art keywords
portrait
virtual
drone
aerial image
ornament
Prior art date
Application number
PCT/CN2017/090820
Other languages
English (en)
French (fr)
Inventor
刘以奋 (LIU Yifen)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. filed Critical SZ DJI Technology Co., Ltd.
Priority to CN201780004992.7A priority Critical patent/CN108475442A/zh
Priority to PCT/CN2017/090820 priority patent/WO2019000325A1/zh
Publication of WO2019000325A1 publication Critical patent/WO2019000325A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition

Definitions

  • The invention relates to an augmented reality method, a processor, and a drone for drone aerial photography, and belongs to the technical field of drone image processing.
  • A drone is an unmanned aerial vehicle controlled by radio remote-control equipment and by a program stored in its flight controller.
  • Existing drones are usually equipped with a shooting device mounted on a gimbal, so that the ground environment can be photographed from the air for a better shooting experience.
  • However, existing drone shooting devices can only present the captured real image to the viewer; they do not augment the image information, so the viewing experience is not rich enough.
  • Embodiments of the present invention provide an augmented reality method, a processor, and a drone for drone aerial photography.
  • An augmented reality method for drone aerial photography comprises: acquiring an aerial image captured by a shooting device mounted on the drone; identifying portrait information in the aerial image; adding a virtual ornament to the portrait according to the portrait information; identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait; and adding to the background a virtual environment corresponding to the geographic feature.
  • A processor comprises a storage medium storing an executable instruction set, the executable instruction set comprising: an aerial-image acquisition instruction for acquiring an aerial image captured by a shooting device mounted on a drone; a portrait recognition instruction for identifying portrait information in the aerial image; a virtual-ornament addition instruction for adding a virtual ornament to the portrait according to the portrait information; a background recognition instruction for identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait; and a virtual-environment addition instruction for adding to the background a virtual environment corresponding to the geographic feature.
  • A drone comprises a fuselage, a gimbal fixed to the fuselage, a shooting device carried on the gimbal, and a processor installed in the fuselage.
  • The processor includes a storage medium storing an executable instruction set, the executable instruction set comprising: an aerial-image acquisition instruction for acquiring an aerial image captured by the shooting device mounted on the drone; a portrait recognition instruction for identifying portrait information in the aerial image; a virtual-ornament addition instruction for adding a virtual ornament to the portrait according to the portrait information; a background recognition instruction for identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait; and a virtual-environment addition instruction for adding to the background a virtual environment corresponding to the geographic feature.
  • In this way, the content of the aerial images captured by the drone can be enriched, the viewer is given a better sensory experience, and drone aerial photography becomes more entertaining.
  • FIG. 1 is a schematic structural diagram of a drone according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart diagram of an augmented reality method for aerial photography of a drone according to an embodiment of the present invention.
  • 111 — processor
  • 113 — storage medium
  • A UAV is an unmanned aerial vehicle that can be controlled by a remote-control device (such as a remote controller or a mobile terminal with a control program installed) or by an autonomous control program burned into the flight controller.
  • Existing drones include fixed-wing drones and rotary-wing drones.
  • Rotary-wing drones include single-rotor and multi-rotor drones. A multi-rotor drone typically includes a frame, arms, rotor assemblies, and a flight controller mounted within the frame. The arms are connected to the frame, and the rotor assemblies are mounted on the arms. By controlling the propeller speed, steering, and acceleration of the rotor assemblies, the multi-rotor UAV moves forward and backward, turns, hovers, takes off, and lands.
  • FIG. 1 is a schematic structural diagram of a drone according to an embodiment of the present invention.
  • The drone 100 of the present embodiment includes a fuselage and a flight controller 110 installed in the fuselage, where the flight controller 110 includes a processor 111, which may be a processing chip.
  • A gimbal 200 is mounted on the fuselage, and the gimbal 200 carries a shooting device 300; the drone 100 can be controlled, via a shooting button on a remote-control device (not shown), to capture images from the air (that is, aerial images).
  • The captured video or pictures can be transmitted back to a display device (not shown) by wireless transmission, so that they are presented to the viewer in real time.
  • The captured video or pictures can also be processed by the processor 111 and then presented to the viewer through the display device, to enhance the viewer's experience.
  • For example, in the prior art, the white balance or contrast of the aerial image can be adjusted by the processor 111.
  • However, such simple existing image processing does not enhance the realism of the aerial image.
  • On this basis, the processor 111 of the present embodiment can execute an augmented reality method for drone aerial images (detailed below) to improve the viewer's experience when watching aerial images.
  • Although the processor 111 in this embodiment is installed in the drone's fuselage, the augmented reality method of the following embodiments can clearly also be executed by a processor outside the drone, for example a processor in a remote controller capable of receiving the drone's aerial images, a processor in a computer, a server, or a display device.
  • FIG. 2 is a schematic flowchart diagram of an augmented reality method for aerial photography of a drone according to an embodiment of the present invention. As shown in FIG. 2, the augmented reality method of this embodiment includes:
  • Specifically, the shooting device can be connected to the drone's flight controller wirelessly or by wire, so that the processor in the flight controller can acquire the aerial image captured by the shooting device.
  • The aerial image can be a picture or a video.
  • For example, the user presses the shooting button on the remote controller; after receiving the press signal, the remote controller's control chip sends a shooting control signal to the drone through a wireless transmitter. After the drone's wireless receiver receives the shooting control signal, it forwards it to the shooting device wirelessly or by wire, so that the shooting device enters shooting mode; the captured aerial image is then returned wirelessly or by wire to the flight controller's processor for the processing below.
  • After the processor of the flight controller receives the aerial image returned by the shooting device, recognition of the portrait in the aerial image begins, in order to obtain the position of the portrait in the aerial image and the contour of the portrait.
  • The position and contour of the portrait can be recognized by methods described in the prior art, for example by receiving portrait target information input by the user and then performing a histogram computation on the aerial image according to that information. When inputting the portrait target information, the user may click any one or several pixels at the portrait's location in the aerial image, or box-select the region where the portrait is located.
  • Since aerial images are generally top-down views, feature recognition methods based on face or object recognition may also be used to recognize the three-dimensional features of the human head in the aerial image, and the portrait's position and contour are then determined from the recognized three-dimensional features.
  • Specifically, based on the grayscale, grayscale statistics, or average grayscale of the pixels in the head region, regions in the adjacent parts of the aerial image whose grayscale matches the head's, or satisfies a preset threshold, are determined to be the position of the portrait and the contour of the portrait.
  • When the shooting device is equipped with a binocular camera, existing binocular-camera image processing methods can be used to recognize the portrait information in the aerial image, that is, information including the position of the portrait and the contour of the portrait.
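The bullets above describe growing the portrait region outward from a detected head by matching gray values. A minimal sketch of that idea, assuming the image is a plain 2-D list of gray values and a seed pixel inside the head is already known (function names and the tolerance value are hypothetical, not from the patent):

```python
from collections import deque

def grow_portrait_region(gray, seed, tol=25):
    """Flood-fill outward from a seed pixel (e.g., inside the detected head),
    collecting 4-connected neighbours whose gray value stays within `tol`
    of the seed's gray value; the collected pixels approximate the portrait,
    from which its position and contour can be derived."""
    h, w = len(gray), len(gray[0])
    target = gray[seed[0]][seed[1]]
    seen = {seed}
    queue = deque([seed])
    region = []
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen
                    and abs(gray[nr][nc] - target) <= tol):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return region

def bounding_box(region):
    """Axis-aligned bounding box (min_row, min_col, max_row, max_col)
    of the recovered portrait pixels — a crude 'position of the portrait'."""
    rows = [r for r, _ in region]
    cols = [c for _, c in region]
    return min(rows), min(cols), max(rows), max(cols)
```

A real implementation would work on grayscale statistics over neighbourhoods rather than single pixel values, as the text suggests; the flood fill only illustrates the adjacency-plus-threshold idea.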
  • After the portrait information is recognized, virtual ornaments can be added to the portrait, including but not limited to drawing flapping wings on the person's back, a tail at the rear, or a halo and horns on the head, so that the portrait carries richer information and the viewer's experience is enhanced.
  • When adding virtual ornaments, the item-adding methods used by existing augmented reality approaches may be employed and are not described further.
  • Since in a real scene a person may assume various postures (standing, lying, jumping, running), a standard virtual ornament may not exactly match the pose of the portrait. Optionally, therefore, the method may also include:
  • Determining the body posture from the contour of the portrait. Specifically, after the portrait's contour is acquired, a human-posture information database can be searched for contour information identical to the portrait's; if found, the body posture corresponding to that contour is obtained. In the comparison, the spacing between multiple feature points of the contour may be used as the basis, or the contour curve as a whole.
  • After the body posture information is obtained, the virtual ornament is added to the portrait according to the posture information and the portrait's position information. For example, when the body posture corresponding to the contour is found to be upright, a halo is added along the Y-axis direction above the top of the head; when the posture is lying, the halo is added along the X-axis direction of the head.
  • Further, considering the variability of human posture and the changes of the portrait's position and contour size in the aerial image, the accuracy of virtual-ornament insertion can be improved as follows:
  • Determine the insertion position of the virtual ornament from the portrait's position and contour. Taking the addition of a bubble that encloses the portrait as an example: by analyzing the portrait's position and contour, the horizontal and vertical coordinates of every pixel of the portrait can be determined, and hence the bubble's size and the coordinates of its insertion base point, that is, the bubble's insertion position.
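The bubble computation above can be sketched in a few lines, assuming the portrait is given as a list of (x, y) pixel coordinates; the centre and a margin-padded half-diagonal radius are one simple way to guarantee the bubble encloses every portrait pixel (names and the margin are illustrative assumptions):

```python
import math

def bubble_for_portrait(pixels, margin=5):
    """Given the (x, y) coordinates of the portrait's pixels, return the
    centre and radius of a bubble that fully encloses it: the centre is
    the bounding-box centre, the radius half the box diagonal plus a margin."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    cx = (min(xs) + max(xs)) / 2
    cy = (min(ys) + max(ys)) / 2
    half_diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys)) / 2
    return (cx, cy), half_diag + margin
```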
  • Optionally, since some virtual ornaments, such as halos, tails, and wings, can only be added to specific parts of the portrait, the type of virtual ornament to be added can first be obtained, and the insertion position then determined from the portrait's position, the portrait's contour, and the ornament type.
  • For example, when the ornament to be added is a halo, the horizontal and vertical coordinates of the top of the head in the aerial image are recognized from the portrait's position and contour and used as the reference for adding the halo; when the ornament to be added is wings, the coordinates of the back are recognized and used as the reference for adding the wings.
  • Determine the rotation angle of the virtual ornament from the body posture. If the halo among the ornaments is designed for the upright posture, then when the body posture is lying the halo needs to be rotated ninety degrees, and for other postures by the corresponding angle.
  • The correspondence between rotation angle and body posture can be implemented by establishing a mapping table.
  • The virtual ornament is then added to the portrait of the aerial image according to the above insertion position and rotation angle.
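The posture-to-angle mapping table plus the insertion step can be sketched as a dictionary and a 2-D rotation. The postures, angles, and names below are illustrative assumptions (the patent only states that such a table exists):

```python
import math

# Hypothetical mapping table: body posture -> ornament rotation (degrees),
# assuming standard ornaments are drawn for the upright posture.
ROTATION_FOR_POSTURE = {
    "upright": 0,
    "leaning": 30,
    "lying": 90,
}

def place_ornament(anchor, offset, posture):
    """Return the insertion point of an ornament: the anchor point on the
    portrait (e.g., the top of the head) plus an offset vector rotated by
    the angle read from the posture mapping table."""
    theta = math.radians(ROTATION_FOR_POSTURE[posture])
    dx, dy = offset
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    ry = dx * math.sin(theta) + dy * math.cos(theta)
    return anchor[0] + rx, anchor[1] + ry
```

For an upright figure the halo offset `(0, -10)` stays straight above the anchor; for a lying figure the same offset is swung ninety degrees to the side, matching the X-axis placement described above.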
  • The background portion of the aerial image, other than the portrait, is analyzed.
  • For example, the ground contour of the background can be extracted and compared with the various ground contours in a database to determine whether the ground in the aerial image is specifically a plain, hills, mountains, desert, ocean, river, or lake.
  • The ground contour of the background may be extracted using the portrait recognition methods described above, or other prior-art methods, which are not detailed here.
  • For example, when the ground contour of the background is determined to be grassland, a virtual environment corresponding to that geographic feature, such as a herd of cattle or a flock of sheep, can be added to the grassland.
  • When the ground contour is a mountain, erupting magma can be added to the mountain, with the magma's coverage determined from the portrait's position and contour.
  • When the ground contour is the sea, warships, schools of fish, and the like can be added to the sea surface, or virtual environments such as lightning or storms added to the sky.
  • The virtual environment corresponding to a geographic feature can be implemented via a preset mapping table; the correspondence does not mean consistency with the real environment, it only reflects a mapping relationship. For example, birds, which fit the real environment, can be added to the sky over a plain, or sharks, which do not exist in the real sky, can be used as the mapped environment.
  • The drone is generally equipped with a global positioning device, such as GPS, and the processor can read the GPS information to determine the drone's latitude and longitude coordinates, that is, the drone's geographic location.
  • Clearly, the latitude and longitude coordinates show whether the drone is flying over a city, canyon, plain, sea, mountains, lake, desert, or elsewhere, so the geographic features of the aerial image can be obtained quickly.
  • More specifically, the GPS information can be sent to a geographic information database to look up the geographic feature information for the latitude and longitude it reflects.
  • Further, since the drone photographs ground or aerial targets from high altitude, there is a considerable distance between the subject and the drone, which may cause the geographic features at the drone's location to differ from the actual geographic features of the aerial image.
  • For example, the drone may be above a cliff on the coast while photographing the sea; or above a desert oasis while photographing a camel caravan in the distant desert; or above a lake while photographing a village or farmland on the shore.
  • To correct this possible mismatch, the following method may be adopted:
  • Acquire the attitude information of the drone and of the gimbal mounted on it, and calculate the angle between the shooting device and the ground from this attitude information.
  • Specifically, many sensors are installed on the drone, and the drone's attitude, that is, its pitch angle and yaw angle, can be detected by these sensors.
  • Since the rotation of the gimbal is controlled by pitch and yaw motors, the gimbal's pitch and yaw angles can be obtained or calculated from the control parameters of those motors.
  • By superimposing the pitch and yaw angles of the drone and the gimbal, the angle between the camera mounted on the gimbal and the ground can be obtained.
  • Then, from this angle, the drone's altitude obtained from its barometer, and the geographic position (latitude and longitude) obtained from its GPS, the specific geographic location of the aerial image's focus point can be computed by trigonometry, and the geographic feature of the background portion determined from that location.
  • For example, the geographic feature corresponding to the specific geographic location may be read from a geographic information database, or determined directly from that location's latitude and longitude.
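The trigonometric step can be sketched as follows, under flat-Earth, small-distance assumptions; the patent gives no formulas, so the depression-angle model, the camera-heading parameter, and all names below are assumptions for illustration:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def focus_point_location(lat, lon, altitude_m, camera_ground_angle_deg, heading_deg):
    """Estimate the latitude/longitude of the point the camera is aimed at.
    `camera_ground_angle_deg` is the depression angle between the camera's
    optical axis and the horizontal (drone pitch + gimbal pitch combined).
    The horizontal ground distance is altitude / tan(angle); the offset is
    then applied along the camera heading with a small-distance
    equirectangular approximation."""
    angle = math.radians(camera_ground_angle_deg)
    ground_dist = altitude_m / math.tan(angle)
    heading = math.radians(heading_deg)
    d_north = ground_dist * math.cos(heading)
    d_east = ground_dist * math.sin(heading)
    dlat = math.degrees(d_north / EARTH_RADIUS_M)
    dlon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon
```

At a 45-degree depression angle the focus point lies one altitude-length ahead of the drone on the ground, which matches the intuition that a steeper camera looks at ground closer to the drone's own GPS position.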
  • Optionally, a geographic information image of the location can be obtained from the geographic information database according to the specific geographic location and used to correct the aerial image, for example by supplementing some distorted information.
  • Finally, the augmented reality method of this embodiment is illustrated using a photo of an angler by a lake, captured by the drone's shooting device, as an example:
  • The flight controller's processor receives the above photo, returned by the camera, via wireless or wired transmission.
  • The processor uses the portrait recognition method to identify the image in the photo.
  • For example, when the shooting device is a binocular camera, the image processing methods used by existing binocular cameras can be applied to recognize the photo, yielding a series of contiguous points that make up the angler's image.
  • The line connecting the points at the edge is the angler's contour; the angler's start and end points on the X and Y axes are also obtained, providing the coordinate basis for the next step of adding virtual ornaments to the angler.
  • The processor then obtains the type of virtual ornament to be added and, together with the angler's position and contour, determines the ornament's insertion position and rotation angle, so that the ornament fits the angler's image more closely. For example, suppose a halo is to be added above the angler's head, wings to the back, and a tail at the rear. First, a contour matching the angler's is sought in the human-posture information database to determine whether the angler is standing or seated, and whether the body is tilted when seated.
  • Taking a seated angler leaning forward as an example: after the matching body-contour information is found, it can be determined that the halo over the head and the wings must be rotated by a corresponding angle; after the rotation-angle information for that posture is read, the halo and wings are rotated accordingly.
  • From the analysis of the angler's position and contour, the coordinates of the angler's head, back, and rear can be determined; the rotated halo is then inserted into the image with the head coordinates as reference, the rotated wings with the back coordinates as reference, and the tail with the rear coordinates as reference.
  • The processor also performs geographic-feature recognition on the background portion other than the angler. For example, the processor acquires the attitude information of the drone and the gimbal and, via an executable program stored in the processor, calculates the angle of the camera carried on the gimbal relative to the ground; it further acquires the drone's altitude and latitude/longitude coordinates, then computes the specific latitude and longitude of the background portion via another executable program, so that the geographic features at those coordinates can be read from the geographic information database, in preparation for adding a virtual environment to the photo's background.
  • Auxiliary pixel analysis, or comparison with the image of that location in the geographic information database, can determine the boundary between the ground and the lake in the background portion of the photo, and the regions occupied by the lake and the ground in the image. After the boundary and regions are determined, small boats or fish can be added to the lake, dogs or rocks to the ground, or rain or snow to the sky; the whole sky region can even be replaced, for example from day to night, and the position of the fishhook can be recognized so that a shark or other object is added to the hook.
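In the simplest case, the pixel analysis that separates ground from lake could be a per-row gray threshold, on the assumption that the water (reflecting the sky) is markedly brighter than the shore. The threshold value and names below are assumptions; a real system would use richer features:

```python
def lake_ground_boundary(gray, water_threshold=128):
    """For each image row, classify pixels as water or land by a simple
    gray threshold and return the column index where land first turns to
    water; None for rows containing no water. This only illustrates the
    boundary-finding idea, not a production segmentation."""
    boundary = []
    for row in gray:
        col = next((i for i, v in enumerate(row) if v >= water_threshold), None)
        boundary.append(col)
    return boundary
```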
  • The portrait recognition and background recognition above can be performed simultaneously or sequentially, and adding virtual ornaments to the portrait and a virtual environment to the background can likewise be done simultaneously or sequentially.
  • The processor 111 may implement the method of the foregoing embodiments by integrating hardware circuits (for example, programmable circuits) on its integrated circuit board, or by means of an executable instruction set.
  • Such executable instruction sets can be stored in the storage medium 113 of the processor 111, or in a separate memory or an online server.
  • The executable instruction set may include: an aerial-image acquisition instruction for acquiring an aerial image captured by a shooting device mounted on the drone; a portrait recognition instruction for identifying portrait information in the aerial image; a virtual-ornament addition instruction for adding a virtual ornament to the portrait according to the portrait information; a background recognition instruction for identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait; and a virtual-environment addition instruction for adding to the background a virtual environment corresponding to the geographic feature.
  • The augmented reality method, processor, and drone of the above embodiments, by processing the aerial image, add virtual ornaments to the portrait and a virtual environment to the background, thereby enriching the content of the aerial image, improving the viewer's experience, and making drone aerial photography more entertaining.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Human Computer Interaction (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

An augmented reality method for drone aerial photography, comprising: acquiring an aerial image captured by a shooting device mounted on the drone; identifying portrait information in the aerial image; adding a virtual ornament to the portrait according to the portrait information; identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait; and adding to the background a virtual environment corresponding to the geographic feature. The augmented reality method enriches the content of drone aerial images, improves the viewer's sensory experience, and makes drone aerial photography more entertaining. A processor and a drone are also provided.

Description

Augmented reality method for drone aerial photography, processor, and drone — Technical Field
The present invention relates to an augmented reality method, a processor, and a drone for drone aerial photography, and belongs to the technical field of drone image processing.
Background Art
A drone is an unmanned aerial vehicle controlled by radio remote-control equipment and by a program stored in its flight controller. Existing drones are generally equipped with a shooting device mounted on a gimbal, so that the ground environment can be photographed from the air for a better shooting experience. However, existing drone shooting devices can only present the captured real image to the viewer; they do not augment the image information, so the viewer's experience is not rich enough.
Summary of the Invention
To solve the above or other potential problems in the prior art, embodiments of the present invention provide an augmented reality method, a processor, and a drone for drone aerial photography.
According to some embodiments of the present invention, an augmented reality method for drone aerial photography is provided, comprising: acquiring an aerial image captured by a shooting device mounted on the drone; identifying portrait information in the aerial image; adding a virtual ornament to the portrait according to the portrait information; identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait; and adding to the background a virtual environment corresponding to the geographic feature.
According to some embodiments of the present invention, a processor is provided, comprising a storage medium storing an executable instruction set, the executable instruction set comprising: an aerial-image acquisition instruction for acquiring an aerial image captured by a shooting device mounted on a drone; a portrait recognition instruction for identifying portrait information in the aerial image; a virtual-ornament addition instruction for adding a virtual ornament to the portrait according to the portrait information; a background recognition instruction for identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait; and a virtual-environment addition instruction for adding to the background a virtual environment corresponding to the geographic feature.
According to some embodiments of the present invention, a drone is provided, comprising a fuselage, a gimbal fixed to the fuselage, a shooting device carried on the gimbal, and a processor installed in the fuselage; the processor comprises a storage medium storing an executable instruction set, the executable instruction set comprising: an aerial-image acquisition instruction for acquiring an aerial image captured by the shooting device mounted on the drone; a portrait recognition instruction for identifying portrait information in the aerial image; a virtual-ornament addition instruction for adding a virtual ornament to the portrait according to the portrait information; a background recognition instruction for identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait; and a virtual-environment addition instruction for adding to the background a virtual environment corresponding to the geographic feature.
According to the technical solutions of the embodiments of the present invention, the content of the aerial images captured by the drone can be enriched, giving the viewer a better sensory experience and making drone aerial photography more entertaining.
Brief Description of the Drawings
The above and other objects, features, and advantages of embodiments of the present invention will become easier to understand from the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the present invention are illustrated by way of example and not limitation, in which:
FIG. 1 is a schematic structural diagram of a drone according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of an augmented reality method for drone aerial photography according to an embodiment of the present invention.
In the figures:
100 — drone;            110 — flight controller;
111 — processor;         113 — storage medium;
200 — gimbal;            300 — shooting device.
Detailed Description
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. Where no conflict arises, the following embodiments and the features within them may be combined with one another.
An unmanned aerial vehicle (UAV) is an unpiloted aircraft that can be controlled by a remote-control device (such as a remote controller or a mobile terminal with a control program installed) or by an autonomous control program burned into the flight controller. Existing drones include fixed-wing drones and rotary-wing drones; rotary-wing drones include single-rotor and multi-rotor drones. A multi-rotor drone generally includes a frame, arms, rotor assemblies, and a flight controller mounted in the frame. The arms are connected to the frame and the rotor assemblies are mounted on the arms; by controlling the propeller speed, steering, and acceleration of the rotor assemblies, the multi-rotor UAV moves forward and backward, turns, hovers, takes off, and lands.
FIG. 1 is a schematic structural diagram of a drone according to an embodiment of the present invention. As shown in FIG. 1, the drone 100 of this embodiment includes a fuselage and a flight controller 110 installed in the fuselage, where the flight controller 110 includes a processor 111, which may be a processing chip.
A gimbal 200 is mounted on the fuselage and carries a shooting device 300; via a shooting button on a remote-control device (not shown), the drone 100 can be controlled to capture images from the air (that is, aerial images). The captured video or pictures can be transmitted back to a display device (not shown) wirelessly, so that they are presented to the viewer in real time. Of course, the captured video or pictures can also be processed by the processor 111 before being presented to the viewer on the display device, to improve the viewing experience. For example, in the prior art, the white balance or contrast of the aerial image can be adjusted by the processor 111; however, such simple existing image processing does not enhance the realism of the aerial image.
On this basis, the processor 111 of this embodiment can execute an augmented reality method for drone aerial images (detailed below) to improve the viewer's experience when watching the aerial images. It should be understood that although the processor 111 of this embodiment is installed in the drone's fuselage, the augmented reality method of the following embodiments can clearly also be executed by a processor outside the drone, for example a processor in a remote controller capable of receiving the drone's aerial images, a processor in a computer, a server's processor, or a display device's processor; all such processors fall within the scope of protection of the processor of the present application.
FIG. 2 is a schematic flowchart of the augmented reality method for drone aerial photography provided by this embodiment. As shown in FIG. 2, the augmented reality method of this embodiment includes:
S101: acquiring an aerial image captured by a shooting device mounted on the drone.
Specifically, the shooting device can be connected to the drone's flight controller wirelessly or by wire, so that the processor in the flight controller can acquire the aerial image captured by the shooting device; the aerial image may be a picture or a video. For example, the user presses the shooting button on the remote controller; after receiving the press signal, the remote controller's control chip sends a shooting control signal to the drone through a wireless transmitter; after the drone's wireless receiver receives the shooting control signal, it forwards it wirelessly or by wire to the shooting device, so that the shooting device enters shooting mode; the captured aerial image is then returned wirelessly or by wire to the flight controller's processor for the following processing.
S102: identifying portrait information in the aerial image.
After the flight controller's processor receives the aerial image returned by the shooting device, it begins recognizing the portrait in the aerial image, in order to obtain information such as the position of the portrait in the aerial image and the contour of the portrait.
When recognizing the portrait, methods described in the prior art can be used to identify the portrait's position and contour. For example, portrait target information input by the user is received, and a histogram computation is then performed on the aerial image according to the received portrait target information to obtain the portrait's position and contour. When inputting the portrait target information, the user may click any one or several pixels at the portrait's location in the aerial image, or may box-select the region where the portrait is located.
As another example, since aerial images are generally top-down views, feature recognition methods used in face or object recognition may be employed to recognize the three-dimensional features of the human head in the aerial image, and the portrait's position and contour are then determined from the recognized three-dimensional features. Specifically, once the head's three-dimensional features are recognized, regions in the adjacent parts of the aerial image whose grayscale, grayscale statistics, or average grayscale are the same as the head's, or satisfy a preset threshold, can be determined to be the portrait's position and contour, based on the gray values of the pixels in the head region. Of course, to obtain more precise portrait information, some additional image processing methods used in the prior art may be applied, for example contrast analysis or histogram computation on the pixels in the region, to obtain a more precise portrait position and contour. In addition, it should be understood that how the feature points of the human head in the aerial image are established is not the focus of this solution; any method in the prior art may be used.
As a further example, when the shooting device is equipped with a binocular camera, existing binocular-camera image processing methods can be used to recognize the portrait information in the aerial image, that is, information including the portrait's position and contour.
S103: adding a virtual ornament to the portrait according to the portrait information.
After the portrait information is recognized, virtual ornaments can be added to the portrait, including but not limited to drawing flapping wings on the person's back, a tail at the rear, or a halo and horns on the head, so that the portrait carries richer information and the viewer's experience is enhanced. When adding virtual ornaments to the portrait, the item-adding methods used by existing augmented reality methods may be employed and are not described further.
Since in a real scene a person may assume various postures, for example standing, lying, jumping, or running, a standard virtual ornament may not exactly match the portrait's posture when added. Optionally, therefore, the method may further include:
Determining the body posture from the portrait's contour. Specifically, after the portrait's contour is acquired, a human-posture information database can be searched for portrait-contour information identical to the contour; if found, the body posture corresponding to that contour is obtained. In the comparison, the spacing between multiple feature points of the contour may be used as the basis, or the contour curve as a whole may be used.
After the body posture information is obtained, the virtual ornament is added to the portrait according to the posture information and the portrait's position information. For example, when the body posture corresponding to the contour is found to be upright, a halo is added directly along the Y-axis direction above the top of the head; when the posture is found to be lying, the halo is added along the X-axis direction of the head.
Further, considering the variability of human posture and the changes in the portrait's position and contour size in the aerial image, the accuracy of virtual-ornament insertion can be improved as follows, so that ornaments are added to the portrait more precisely:
Determining the ornament's insertion position from the portrait's position and contour. Taking the addition of a bubble enclosing the portrait as an example: by analyzing the portrait's position and contour, the horizontal and vertical coordinates of every pixel of the portrait can be determined, and hence the bubble's size and the coordinates of its insertion base point, that is, the bubble's insertion position. Optionally, since some virtual ornaments, such as halos, tails, and wings, can only be added to specific parts of the portrait, the type of ornament to be added can first be obtained, and the insertion position then determined from the portrait's position, the portrait's contour, and the ornament type. For example, when the ornament to be added is a halo, the horizontal and vertical coordinates of the top of the head in the aerial image are recognized from the portrait's position and contour and used as the reference for adding the halo; when the ornament to be added is wings, the coordinates of the back in the aerial image are recognized and used as the reference for adding the wings.
Determining the ornament's rotation angle from the body posture. If the halo among the virtual ornaments is designed for the upright posture, then when the body posture is lying the halo needs to be rotated ninety degrees, and for other postures by the corresponding angle. The correspondence between rotation angle and body posture can be implemented by establishing a mapping table.
The virtual ornament is then added to the portrait of the aerial image according to the above insertion position and rotation angle.
S104: identifying a geographic feature of the background portion of the aerial image, where the background portion is the portion of the aerial image other than the portrait.
The background portion of the aerial image, excluding the portrait, is analyzed. For example, the ground contour of the background can be extracted and compared with the various ground contours in a database to determine whether the ground in the aerial image is specifically a plain, hills, mountains, desert, ocean, river, or lake. Note that extracting the background's ground contour in this step may use the portrait recognition methods described above or other prior-art methods, which are not repeated here.
S105: adding to the background a virtual environment corresponding to the geographic feature.
After the background's geographic feature is determined, for example when the ground contour is found to be grassland, a virtual environment corresponding to that feature, such as a herd of cattle or a flock of sheep, can be added to the grassland. As another example, when the ground contour is a mountain, erupting magma can be added to the mountain, with the magma's coverage determined from the portrait's position and contour. As a further example, when the ground contour is the sea, warships, schools of fish, and the like can be added to the sea surface, or virtual environments such as lightning or storms added to the sky. Of course, the virtual environment corresponding to a geographic feature can be implemented via a preset mapping table, and the correspondence does not mean consistency with the real environment; it only reflects a mapping relationship. For example, birds, which fit the real environment, can be added to the sky over a plain, or sharks, which do not exist in the real sky, can be used as the mapped environment.
To speed up the recognition of the background's geographic features, some embodiments may proceed as follows:
Acquire the drone's geographic location, determine the geographic feature at the drone's location from it, and use that as the geographic feature of the aerial image's background. Specifically, a drone is generally fitted with a global positioning device such as GPS; by reading the GPS information the processor can determine the drone's latitude and longitude coordinates, that is, the drone's geographic location. Clearly, these coordinates show whether the drone is flying over a city, canyon, plain, sea, mountains, lake, desert, or elsewhere, so the geographic features of the aerial image can be obtained quickly. More specifically, after the drone acquires the GPS information, it can send it to a geographic information database to look up the geographic feature information for the latitude and longitude reflected by the GPS information.
Further, since the drone photographs ground or aerial targets from high altitude, there is a considerable distance between the subject and the drone, and this distance may cause the geographic features at the drone's location to differ from the actual geographic features of the aerial image. For example, the drone may be above a cliff on the coast while photographing the sea; or above a desert oasis while photographing a camel caravan in the distant desert; or above a lake while photographing a village or farmland on the shore. To correct this possible mismatch between the geographic features at the shooting target's location and at the drone's location, some embodiments may adopt the following method:
Acquire the attitude information of the drone and of the gimbal mounted on it; calculate the angle between the shooting device and the ground from this attitude information. Specifically, many sensors are installed on the drone, and the drone's attitude, that is, its pitch and yaw angles, can be detected by these sensors. Since the rotation of the drone's gimbal is controlled by pitch and yaw motors, the gimbal's pitch and yaw angles can be obtained or calculated from those motors' control parameters. By superimposing the pitch and yaw angles of the drone and the gimbal, the angle between the shooting device mounted on the gimbal and the ground can be obtained.
Then, from the angle between the shooting device and the ground, the drone's altitude obtained from its barometer, and the geographic position (latitude and longitude) obtained from its GPS, the specific geographic location of the aerial image's focus point can be computed by trigonometry, and the geographic feature of the background portion determined from that location. For example, the geographic feature corresponding to that location can be read from a geographic information database, or determined directly from the location's latitude and longitude. Optionally, in some embodiments, a geographic information image of the location can be obtained from the database according to the specific geographic location and used to correct the aerial image, for example by supplementing some distorted information.
最后以无人机的拍摄设备拍摄到的一张湖边垂钓者的照片为例说明本实施例的增强现实方法:
飞行控制器的处理器通过无线或者有线传输的方式接收到摄像头回传的上述照片。
处理器使用人像识别方法对照片中的图像进行识别。例如,当拍摄设备为双目摄像头时,可以采用现有的双目摄像头所使用的图像处理方法对照片进行识别,得到照片中一系列连续的点,这些连续的点构成了垂钓者的影像,位于边缘位置的点的连线即为垂钓者的轮廓,当然同时也获得了垂钓者在X轴和Y轴的起始点和终止点,从而为下一步在垂钓者身上添加虚拟饰物提供坐标依据。
处理器再获取到需要添加的虚拟饰物的种类,然后和垂钓者的位置以及垂钓者的轮廓一起确定虚拟饰物插入的位置以及虚拟饰物的旋转角度,以使虚拟饰物与垂钓者的影像更加契合。例如,当需要在垂钓者的头顶添加光环,并在背部添加翅膀,以及在尾部添加尾巴时。首先,在人体姿态信息库中寻找与垂钓者轮廓一致的人像轮廓,从而确定垂钓者是站姿还是坐姿,当其为坐姿时是否有倾斜。以垂钓者坐姿垂钓且身体有向前倾斜为例,在人体姿态信息库中找到该人体轮廓信息后,可以确定头顶的光环以及翅膀需要相应的旋转一定角度,读取到与该姿态对应的旋转角度信息后,将光环和翅膀旋转相应角度。同时,根据对垂钓者位置以及轮廓的分析,可以确定出垂钓者头顶、背部以及尾部的坐标,然后以头顶的坐标为基准将旋转后的光环插入到图像中,同理的,以背部的坐标为基准将旋转后的翅膀插入到图像中,以及以尾部的坐标为基准将尾巴插入到图像中。
处理器还对垂钓者之外的背景部分进行地理特征识别。例如,处理器获取到无人机和云台的姿态信息,并通过固化在处理器中的可执行程序计算出云台上承载的摄像头相对于地面的夹角,并进一步获取到无人机的高度、经纬度坐标,然后通过另一可执行程序计算得到背景部分的具体经纬度坐标,从而在地理信息库中读取到该具体经纬度坐标的地理特征,以便为在该照片背景部分添加虚拟环境做准备。此外,再辅助像素分析、或者通过与地理信息库中该位置的图像进行比对,可以确定该照片背景部分中地面与湖面的分界线,以及湖面和地面在图像中的区域。
Once the ground-lake boundary and the respective regions are determined, a boat or fish can be added on the lake, a dog or rocks on the ground, and rain or snow in the sky, or the entire sky region can be changed from day to night. The position of the fishhook can also be further recognized, and a shark or another object added to the hook.
Of course, portrait recognition and background recognition may be performed simultaneously or sequentially, as may adding virtual accessories to the portrait and adding a virtual environment to the background.
It should also be noted that the processor 111 may implement the methods of the above embodiments with hardware circuits (for example, programmable circuits) integrated on the processor's circuit board, or with sets of executable instructions; these executable instruction sets may be stored in the storage medium 113 of the processor 111, in a separate memory, or on an online server.
Specifically, the executable instruction set may include: an aerial image acquisition instruction for acquiring an aerial image captured by the photographing device carried on the drone; a portrait recognition instruction for recognizing portrait information in the aerial image; a virtual accessory addition instruction for adding a virtual accessory to the portrait according to the portrait information; a background recognition instruction for recognizing a geographic feature of the background portion of the aerial image, where the background portion is the part of the aerial image other than the portrait; and a virtual environment addition instruction for adding to the background, according to the geographic feature, a virtual environment corresponding to that feature.
For the specific execution of each of the above instructions, refer to the method steps of the above embodiments, which are not repeated here.
In summary, the augmented reality method, processor, and drone of the above embodiments process the aerial image to add virtual accessories to the portrait and a virtual environment to the background, enriching the content of aerial images, improving the viewing experience, and making drone aerial photography more entertaining.
Finally, although advantages associated with certain embodiments of the present technology have been described in the context of those embodiments, other embodiments may also include such advantages, and not all embodiments describe every advantage of the invention in detail; any advantage objectively brought about by the technical features of the embodiments shall be regarded as an advantage distinguishing the invention from the prior art and falls within the scope of protection of the invention.

Claims (39)

  1. An augmented reality method for drone aerial photography, characterized by comprising:
    acquiring an aerial image captured by a photographing device carried on a drone;
    recognizing portrait information in the aerial image;
    adding a virtual accessory to the portrait according to the portrait information;
    recognizing a geographic feature of a background portion of the aerial image, wherein the background portion is the part of the aerial image other than the portrait; and
    adding to the background, according to the geographic feature, a virtual environment corresponding to the geographic feature.
  2. The augmented reality method according to claim 1, characterized in that the portrait information comprises: a position and a contour.
  3. The augmented reality method according to claim 2, characterized in that adding a virtual accessory to the portrait according to the portrait information comprises:
    determining a human-body pose from the contour of the portrait; and
    adding the virtual accessory to the portrait according to the position of the portrait and the human-body pose.
  4. The augmented reality method according to claim 3, characterized in that adding the virtual accessory to the portrait according to the position of the portrait and the human-body pose comprises:
    determining an insertion position of the virtual accessory according to the position and the contour of the portrait;
    determining a rotation angle of the virtual accessory according to the human-body pose; and
    adding the virtual accessory to the portrait according to the insertion position and the rotation angle.
  5. The augmented reality method according to claim 4, characterized in that determining the insertion position of the virtual accessory according to the position and the contour of the portrait comprises:
    acquiring a type of the virtual accessory; and
    determining the insertion position of the virtual accessory according to the position of the portrait, the contour of the portrait, and the type of the virtual accessory.
  6. The augmented reality method according to claim 3, characterized in that determining the human-body pose from the contour of the portrait comprises:
    looking up the contour of the portrait in a human-pose database and, if it is found, obtaining the human-body pose corresponding to that contour.
  7. The augmented reality method according to any one of claims 2-6, characterized in that recognizing portrait information in the aerial image comprises:
    receiving portrait target information input by a user; and
    performing a histogram operation on the aerial image according to the portrait target information to obtain the position and the contour of the portrait.
  8. The augmented reality method according to claim 7, characterized in that the portrait target information is a tap-selection signal or a box-selection signal on the aerial image.
  9. The augmented reality method according to any one of claims 1-6, characterized in that recognizing portrait information in the aerial image comprises:
    recognizing three-dimensional features of a head in the aerial image, and determining the position and the contour of the portrait from the three-dimensional features.
  10. The augmented reality method according to any one of claims 1-6, characterized in that recognizing the geographic feature of the background portion of the aerial image comprises:
    acquiring a geographic position of the drone; and
    determining a geographic feature of the drone's location from the geographic position of the drone and using it as the geographic feature of the background portion of the aerial image.
  11. The augmented reality method according to claim 10, characterized by further comprising:
    acquiring attitude information of the drone and of a gimbal mounted on it;
    computing an angle between the photographing device and the ground from the attitude information of the drone and the gimbal;
    computing a specific geographic position of the focal point of the aerial image from the angle, the altitude of the drone, and the geographic position of the drone; and
    determining the geographic feature of the background portion of the aerial image from the specific geographic position.
  12. The augmented reality method according to claim 11, characterized in that determining the geographic feature of the background portion of the aerial image from the specific geographic position comprises:
    reading the geographic feature corresponding to the specific geographic position from a geographic information database.
  13. The augmented reality method according to claim 10, characterized in that the geographic feature comprises any one of: a mountain, a canyon, a plain, a grassland, a sea, a river, a lake, and a desert.
  14. A processor, characterized by comprising a storage medium storing an executable instruction set, the executable instruction set comprising:
    an aerial image acquisition instruction for acquiring an aerial image captured by a photographing device carried on a drone;
    a portrait recognition instruction for recognizing portrait information in the aerial image;
    a virtual accessory addition instruction for adding a virtual accessory to the portrait according to the portrait information;
    a background recognition instruction for recognizing a geographic feature of a background portion of the aerial image, wherein the background portion is the part of the aerial image other than the portrait; and
    a virtual environment addition instruction for adding to the background, according to the geographic feature, a virtual environment corresponding to the geographic feature.
  15. The processor according to claim 14, characterized in that the portrait information comprises: a position and a contour.
  16. The processor according to claim 15, characterized in that the virtual accessory addition instruction is further for:
    determining a human-body pose from the contour of the portrait; and
    adding the virtual accessory to the portrait according to the position of the portrait and the human-body pose.
  17. The processor according to claim 16, characterized in that the virtual accessory addition instruction is further for:
    determining an insertion position of the virtual accessory according to the position and the contour of the portrait;
    determining a rotation angle of the virtual accessory according to the human-body pose; and
    adding the virtual accessory to the portrait according to the insertion position and the rotation angle.
  18. The processor according to claim 17, characterized in that the virtual accessory addition instruction is further for:
    acquiring a type of the virtual accessory; and
    determining the insertion position of the virtual accessory according to the position of the portrait, the contour of the portrait, and the type of the virtual accessory.
  19. The processor according to claim 16, characterized in that the virtual accessory addition instruction is further for:
    looking up the contour of the portrait in a human-pose database and, if it is found, obtaining the human-body pose corresponding to that contour.
  20. The processor according to any one of claims 15-19, characterized in that the portrait recognition instruction is for:
    receiving portrait target information input by a user; and
    performing a histogram operation on the aerial image according to the portrait target information to obtain the position and the contour of the portrait.
  21. The processor according to claim 20, characterized in that the portrait target information is a tap-selection signal or a box-selection signal on the aerial image.
  22. The processor according to any one of claims 14-19, characterized in that the portrait recognition instruction is for:
    recognizing three-dimensional features of a head in the aerial image, and determining the position and the contour of the portrait from the three-dimensional features.
  23. The processor according to any one of claims 14-19, characterized in that the background recognition instruction is for:
    acquiring a geographic position of the drone; and
    determining a geographic feature of the drone's location from the geographic position of the drone and using it as the geographic feature of the background portion of the aerial image.
  24. The processor according to claim 23, characterized in that the background recognition instruction is for:
    acquiring attitude information of the drone and of a gimbal mounted on it;
    computing an angle between the photographing device and the ground from the attitude information of the drone and the gimbal;
    computing a specific geographic position of the focal point of the aerial image from the angle, the altitude of the drone, and the geographic position of the drone; and
    determining the geographic feature of the background portion of the aerial image from the specific geographic position.
  25. The processor according to claim 24, characterized in that the background recognition instruction is further for:
    reading the geographic feature corresponding to the specific geographic position from a geographic information database.
  26. The processor according to claim 23, characterized in that the geographic feature comprises any one of: a mountain, a canyon, a plain, a grassland, a sea, a river, a lake, and a desert.
  27. A drone, characterized by comprising: a fuselage, a gimbal fixed on the fuselage, a photographing device carried on the gimbal, and a processor installed in the fuselage;
    the processor comprising a storage medium storing an executable instruction set, the executable instruction set comprising:
    an aerial image acquisition instruction for acquiring an aerial image captured by the photographing device carried on the drone;
    a portrait recognition instruction for recognizing portrait information in the aerial image;
    a virtual accessory addition instruction for adding a virtual accessory to the portrait according to the portrait information;
    a background recognition instruction for recognizing a geographic feature of a background portion of the aerial image, wherein the background portion is the part of the aerial image other than the portrait; and
    a virtual environment addition instruction for adding to the background, according to the geographic feature, a virtual environment corresponding to the geographic feature.
  28. The drone according to claim 27, characterized in that the portrait information comprises: a position and a contour.
  29. The drone according to claim 28, characterized in that the virtual accessory addition instruction is further for:
    determining a human-body pose from the contour of the portrait; and
    adding the virtual accessory to the portrait according to the position of the portrait and the human-body pose.
  30. The drone according to claim 29, characterized in that the virtual accessory addition instruction is further for:
    determining an insertion position of the virtual accessory according to the position and the contour of the portrait;
    determining a rotation angle of the virtual accessory according to the human-body pose; and
    adding the virtual accessory to the portrait according to the insertion position and the rotation angle.
  31. The drone according to claim 30, characterized in that the virtual accessory addition instruction is further for:
    acquiring a type of the virtual accessory; and
    determining the insertion position of the virtual accessory according to the position of the portrait, the contour of the portrait, and the type of the virtual accessory.
  32. The drone according to claim 29, characterized in that the virtual accessory addition instruction is further for:
    looking up the contour of the portrait in a human-pose database and, if it is found, obtaining the human-body pose corresponding to that contour.
  33. The drone according to any one of claims 28-32, characterized in that the portrait recognition instruction is for:
    receiving portrait target information input by a user; and
    performing a histogram operation on the aerial image according to the portrait target information to obtain the position and the contour of the portrait.
  34. The drone according to claim 33, characterized in that the portrait target information is a tap-selection signal or a box-selection signal on the aerial image.
  35. The drone according to any one of claims 27-32, characterized in that the portrait recognition instruction is for:
    recognizing three-dimensional features of a head in the aerial image, and determining the position and the contour of the portrait from the three-dimensional features.
  36. The drone according to any one of claims 27-32, characterized in that the background recognition instruction is for:
    acquiring a geographic position of the drone; and
    determining a geographic feature of the drone's location from the geographic position of the drone and using it as the geographic feature of the background portion of the aerial image.
  37. The drone according to claim 36, characterized in that the background recognition instruction is for:
    acquiring attitude information of the drone and of a gimbal mounted on it;
    computing an angle between the photographing device and the ground from the attitude information of the drone and the gimbal;
    computing a specific geographic position of the focal point of the aerial image from the angle, the altitude of the drone, and the geographic position of the drone; and
    determining the geographic feature of the background portion of the aerial image from the specific geographic position.
  38. The drone according to claim 37, characterized in that the background recognition instruction is further for:
    reading the geographic feature corresponding to the specific geographic position from a geographic information database.
  39. The drone according to claim 36, characterized in that the geographic feature comprises any one of: a mountain, a canyon, a plain, a grassland, a sea, a river, a lake, and a desert.
PCT/CN2017/090820 2017-06-29 2017-06-29 Augmented reality method for drone aerial photography, processor, and drone WO2019000325A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780004992.7A CN108475442A (zh) 2017-06-29 2017-06-29 Augmented reality method for drone aerial photography, processor, and drone
PCT/CN2017/090820 WO2019000325A1 (zh) 2017-06-29 2017-06-29 Augmented reality method for drone aerial photography, processor, and drone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/090820 WO2019000325A1 (zh) 2017-06-29 2017-06-29 Augmented reality method for drone aerial photography, processor, and drone

Publications (1)

Publication Number Publication Date
WO2019000325A1 true WO2019000325A1 (zh) 2019-01-03

Family

ID=63266022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/090820 WO2019000325A1 (zh) 2017-06-29 2017-06-29 Augmented reality method for drone aerial photography, processor, and drone

Country Status (2)

Country Link
CN (1) CN108475442A (zh)
WO (1) WO2019000325A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652987A * 2020-06-12 2020-09-11 Zhejiang SenseTime Technology Development Co., Ltd. Method and apparatus for generating an AR group-photo image
CN112966546A * 2021-01-04 2021-06-15 Aerospace Times FeiHong Technology Co., Ltd. Embedded attitude estimation method based on drone reconnaissance images
CN115767288A * 2022-12-02 2023-03-07 EHang Intelligent Equipment (Guangzhou) Co., Ltd. Aerial photography data processing method, aerial camera, aircraft, and storage medium
CN116033231A * 2021-10-27 2023-04-28 HiWing Aviation General Equipment Co., Ltd. Method and apparatus for overlaying AR tags on live video streams

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN109712249B * 2018-12-31 2023-05-26 Chengdu Zongheng Dapeng UAV Technology Co., Ltd. Geographic element augmented reality method and apparatus
CN109727317B * 2019-01-07 2021-02-09 BOE Technology Group Co., Ltd. Augmented reality system and control method
CN111476134A * 2020-03-31 2020-07-31 Guangzhou Huanjing Technology Co., Ltd. Augmented-reality-based geological survey data processing system and method
CN111640196A * 2020-06-08 2020-09-08 Zhejiang SenseTime Technology Development Co., Ltd. Method, apparatus, electronic device, and storage medium for generating space-capsule special effects
CN111696215A * 2020-06-12 2020-09-22 Shanghai SenseTime Intelligent Technology Co., Ltd. Image processing method, apparatus, and device
CN111640203B * 2020-06-12 2024-04-12 Shanghai SenseTime Intelligent Technology Co., Ltd. Image processing method and apparatus
CN113066125A * 2021-02-27 2021-07-02 Huawei Technologies Co., Ltd. Augmented reality method and related devices
WO2022222082A1 * 2021-04-21 2022-10-27 Shenzhen Transsion Holdings Co., Ltd. Image control method, mobile terminal, and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN105759833A * 2016-02-23 2016-07-13 Puzhou Aircraft Technology (Shenzhen) Co., Ltd. Immersive unmanned aerial vehicle piloting flight system
CN105869198A * 2015-12-14 2016-08-17 Leshi Mobile Intelligent Information Technology (Beijing) Co., Ltd. Multimedia photo generation method, apparatus, device, and mobile phone
CN105872438A * 2015-12-15 2016-08-17 Leshi Zhixin Electronic Technology (Tianjin) Co., Ltd. Video call method, apparatus, and terminal
US20160307373A1 * 2014-01-08 2016-10-20 Precisionhawk Inc. Method and system for generating augmented reality agricultural presentations
US20160330405A1 * 2013-09-27 2016-11-10 Intel Corporation Ambulatory system to communicate visual projections
CN106131488A * 2016-07-12 2016-11-16 Beijing Simulation Center Augmented reality method based on an unmanned aerial vehicle

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
KR100231712B1 * 1997-06-13 1999-11-15 정선종 Method for operating an unmanned reconnaissance aircraft system
US9210413B2 * 2012-05-15 2015-12-08 Imagine Mobile Augmented Reality Ltd System worn by a moving user for fully augmenting reality by anchoring virtual objects
US9876954B2 * 2014-10-10 2018-01-23 Iec Infrared Systems, Llc Calibrating panoramic imaging system in multiple dimensions
CN104457704B * 2014-12-05 2016-05-25 Peking University UAV ground-target positioning system and method based on enhanced geographic information
CN106155315A * 2016-06-28 2016-11-23 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Method, apparatus, and mobile terminal for adding augmented reality effects during shooting
CN206193950U * 2016-08-31 2017-05-24 Chen Hao Augmented-reality-based unmanned aerial vehicle experience system
CN106228615A * 2016-08-31 2016-12-14 Chen Hao Augmented-reality-based unmanned aerial vehicle experience system and experience method thereof



Also Published As

Publication number Publication date
CN108475442A (zh) 2018-08-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17916240

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17916240

Country of ref document: EP

Kind code of ref document: A1