WO2018195892A1 - Three-dimensional watermark adding method, apparatus, and terminal (三维立体水印添加方法、装置及终端) - Google Patents

Three-dimensional watermark adding method, apparatus, and terminal (三维立体水印添加方法、装置及终端) Download PDF

Info

Publication number
WO2018195892A1
Authority
WO
WIPO (PCT)
Prior art keywords
watermark
dynamic
target video
information
dimensional
Prior art date
Application number
PCT/CN2017/082352
Other languages
English (en)
French (fr)
Inventor
苏冠华
艾楚越
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2017/082352 priority Critical patent/WO2018195892A1/zh
Priority to CN201780004602.6A priority patent/CN108475410B/zh
Publication of WO2018195892A1 publication Critical patent/WO2018195892A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Definitions

  • the present invention relates to the field of video processing technologies, and in particular, to a method, an apparatus, and a terminal for adding a three-dimensional watermark.
  • the traditional video watermark is a watermark directly superimposed on the video, which relies on shot editing to achieve the narrative expression of the video; the traditional video watermark has no correlation with the space of the video image.
  • at present, to achieve a better watermark display effect, for example three-dimensional subtitles in a video, desktop post-editing software (for example, Adobe After Effects) may be used to analyze the stereo space and motion in the video and to match the subtitles to them, thereby fusing the watermark with the video.
  • however, this desktop-software approach requires a large amount of image computation to re-simulate the camera angle at the time of video imaging and the stereo space in the video, resulting in high resource consumption and long processing time.
  • Embodiments of the present invention provide a method, a device, and a terminal for adding a three-dimensional watermark to quickly add a three-dimensional watermark to a target video, and implement dynamic adjustment of a three-dimensional watermark display state.
  • in one aspect, a three-dimensional watermark adding method includes: receiving target watermark information; acquiring dynamic shooting parameter information corresponding to a target video, the dynamic shooting parameter information being used to record dynamic shooting parameters of an unmanned aerial vehicle when the target video is captured; establishing, according to the dynamic shooting parameter information, a simulated lens stereo space in which the unmanned aerial vehicle captures the target video; and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional watermark for the target video.
  • in another aspect, a three-dimensional watermark adding device includes:
  • a watermark input unit configured to receive target watermark information;
  • a parameter acquisition unit configured to acquire dynamic shooting parameter information corresponding to the target video, where the dynamic shooting parameter information is used to record dynamic shooting parameters of the unmanned aerial vehicle when the target video is captured;
  • a space simulation unit configured to establish, according to the dynamic shooting parameter information, a simulated lens stereo space in which the UAV captures the target video;
  • a watermark generating unit configured to fuse the target watermark information with the simulated lens stereo space to generate a three-dimensional watermark for the target video.
  • in another aspect, a terminal includes a processor and a memory, the processor being electrically coupled to the memory, the memory being configured to store executable program instructions, and the processor being configured to read the executable program instructions in the memory and perform the following operations: receive target watermark information; acquire dynamic shooting parameter information corresponding to the target video; establish, according to the dynamic shooting parameter information, a simulated lens stereo space in which the UAV captures the target video; and fuse the target watermark information with the simulated lens stereo space to generate a three-dimensional watermark for the target video.
  • in the three-dimensional watermark adding method, device, and terminal, dynamic shooting parameter information of the unmanned aerial vehicle at the time of capturing the target video is acquired, so that when a watermark needs to be added to the target video, the simulated lens stereo space in which the UAV captured the target video can be established according to the dynamic shooting parameter information, and the target watermark information can be fused with the simulated lens stereo space to quickly generate a three-dimensional watermark for the target video.
  • because the dynamic shooting parameters of the target video during the shooting process can be conveniently obtained and are stored in association with the target video, no professional software is needed to perform motion and spatial analysis when a three-dimensional watermark is added to the target video; instead, the simulated lens stereo space is established directly from the dynamic shooting parameters, and the target watermark information is then fused with that stereo space to form the corresponding three-dimensional watermark, which helps reduce the generation time of the three-dimensional watermark.
  • at the same time, the display state of the three-dimensional watermark can be dynamically adjusted according to the dynamic shooting parameters, thereby optimizing the display effect of the watermark.
  • FIG. 1 is a first schematic flowchart of a method for adding a three-dimensional watermark according to an embodiment of the present invention
  • FIG. 2 is a second schematic flowchart of a method for adding a three-dimensional watermark according to an embodiment of the present invention
  • FIG. 3 is a third schematic flowchart of a method for adding a three-dimensional watermark according to an embodiment of the present invention
  • 4A to 4D are schematic diagrams showing application scenarios of a method for adding a three-dimensional watermark according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of a first structure of a three-dimensional watermark adding device according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing a second structure of a three-dimensional watermark adding device according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram showing a third structure of a three-dimensional watermark adding device according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • a method for adding a three-dimensional watermark is provided to quickly add a three-dimensional watermark to a target video, and implement dynamic adjustment of a three-dimensional watermark display state.
  • the method for adding a three-dimensional watermark includes at least the following steps:
  • Step 101: Receive target watermark information;
  • Step 102: Acquire dynamic shooting parameter information corresponding to the target video, where the dynamic shooting parameter information is used to record dynamic shooting parameters of the unmanned aerial vehicle when the target video is captured;
  • Step 103: Establish, according to the dynamic shooting parameter information, a simulated lens stereo space in which the unmanned aerial vehicle captures the target video;
  • Step 104: Fuse the target watermark information with the simulated lens stereo space to generate a three-dimensional watermark for the target video.
  • the target watermark information may include at least one of text information, picture information, animation information, and the like.
  • the three-dimensional watermark may include at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animated watermark.
  • the dynamic shooting parameter information may include at least one of flight path information, flight attitude information, flight speed information, pan/tilt angle information, lens focal length information, and lens field angle information of the unmanned aerial vehicle.
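  • For illustration only, the snippet below sketches one way such a per-sample parameter record could be represented in code; the field names and units are assumptions for this sketch, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class ShootingSample:
    """One timestamped sample of the UAV's dynamic shooting parameters (illustrative)."""
    t: float              # timestamp in seconds, relative to the start of the video
    lon: float            # x: longitude, degrees
    lat: float            # y: latitude, degrees
    alt: float            # z: flight height, metres
    roll: float           # flight attitude from the attitude sensor, degrees
    pitch: float
    yaw: float
    gimbal_pitch: float   # gimbal (pan/tilt) angles, degrees
    gimbal_yaw: float
    focal_length_mm: float
    fov_deg: float        # lens field-of-view angle

sample = ShootingSample(t=0.0, lon=114.06, lat=22.54, alt=120.0,
                        roll=0.0, pitch=2.0, yaw=90.0,
                        gimbal_pitch=-30.0, gimbal_yaw=0.0,
                        focal_length_mm=24.0, fov_deg=84.0)
print(sample)
```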
  • according to the dynamic shooting parameter information, the simulated lens stereo space in which the unmanned aerial vehicle captures the target video can be established, and the dynamic relative positional relationship between the unmanned aerial vehicle and a target object in the target video can then be determined according to the simulated lens stereo space.
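  • As a minimal sketch of what fusing a watermark with such a simulated lens stereo space might involve, the code below projects a 3-D watermark anchor point into the image using a pinhole camera model derived from the field-of-view angle. This is an assumed realisation for illustration; the patent does not specify the projection model, and `project_point` is a hypothetical helper.

```python
import math

def project_point(point_cam, fov_deg, width, height):
    """Project a point given in camera coordinates (x right, y down, z forward)
    onto the image plane of a pinhole camera with horizontal FOV fov_deg."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera, not visible
    f = (width / 2) / math.tan(math.radians(fov_deg) / 2)  # focal length in pixels
    u = width / 2 + f * x / z
    v = height / 2 + f * y / z
    return u, v

# Watermark anchor 5 m in front of the lens, 2 m to the right, 1 m below the optical axis.
print(project_point((2.0, 1.0, 5.0), fov_deg=84.0, width=1920, height=1080))
```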
  • the acquiring dynamic shooting parameter information corresponding to the target video includes:
  • Step 201: Acquire dynamic shooting parameters of the unmanned aerial vehicle when shooting the target video;
  • Step 202: Generate dynamic shooting parameter information corresponding to the target video according to the dynamic shooting parameters;
  • Step 203: Store the dynamic shooting parameter information in association with the target video.
  • specifically, while shooting the target video, the unmanned aerial vehicle may acquire its dynamic flight coordinates (x, y, z) by means of GPS positioning, BeiDou positioning, or the like, where x represents longitude information, y represents latitude information, and z represents flight height information; flight path information and flight speed information are then generated from the changes in the dynamic flight coordinates, and flight attitude information is generated from the output data of the flight attitude sensor built into the UAV.
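  • The following sketch shows one plausible way to derive speed from consecutive (x, y, z) fixes; the local flat-earth conversion and the helper names are assumptions made for brevity, not the patent's method.

```python
import math

EARTH_RADIUS_M = 6371000.0

def local_metres(lon, lat, lon0, lat0):
    """Approximate east/north offsets (m) from a reference fix; valid for short distances."""
    east = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS_M
    return east, north

def speeds(fixes):
    """fixes: list of (t, lon, lat, alt). Returns speed (m/s) between consecutive fixes."""
    out = []
    for (t0, lon0, lat0, alt0), (t1, lon1, lat1, alt1) in zip(fixes, fixes[1:]):
        e, n = local_metres(lon1, lat1, lon0, lat0)
        d = math.sqrt(e * e + n * n + (alt1 - alt0) ** 2)
        out.append(d / (t1 - t0))
    return out

print(speeds([(0.0, 114.0600, 22.5400, 100.0), (1.0, 114.0601, 22.5400, 101.0)]))
```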
  • optionally, the flight attitude sensor comprises an Inertial Measurement Unit (IMU).
  • the pan/tilt angle information is generated based on the angle change of the pan/tilt mounted on the unmanned aerial vehicle, and the lens focal length information and the lens field of view angle information are generated based on the imaging parameters of the imaging lens mounted on the unmanned aerial vehicle.
  • further, a mapping relationship between the target video and the corresponding dynamic shooting parameter information is established by storing the dynamic shooting parameter information in association with the target video, so that when a three-dimensional watermark needs to be added to the target video, the dynamic shooting parameter information corresponding to the target video can be acquired according to the mapping relationship.
  • a mapping relationship between the target video and corresponding dynamic shooting parameter information may be established by adding a specific type tag to the target video.
  • the dynamic shooting parameter information corresponding to the target video can be acquired by reading the specific type tag.
  • in one embodiment, the dynamic shooting parameter information may also be stored in the data stream of the target video, so that when a three-dimensional watermark needs to be added to the target video, the corresponding dynamic shooting parameter information can be read directly from the data stream of the target video.
  • it can be understood that, when the dynamic shooting parameter information is stored in association with the target video, timestamp information corresponding to the different shooting parameters needs to be recorded in the dynamic shooting parameter information, so that the shooting parameter information and the video data stream are correlated with each other in time.
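  • One simple way to keep shooting-parameter samples and video frames correlated in time is a timestamp-sorted lookup, sketched below under the assumption that samples are held as (timestamp, parameters) pairs; the `ParamTrack` class is hypothetical and does not reflect any particular container format.

```python
import bisect

class ParamTrack:
    """Timestamp-indexed store of shooting-parameter samples (illustrative)."""
    def __init__(self, samples):
        # samples: list of (timestamp_seconds, params_dict), sorted by timestamp
        self.times = [t for t, _ in samples]
        self.params = [p for _, p in samples]

    def at(self, t):
        """Return the most recent sample at or before video time t."""
        i = bisect.bisect_right(self.times, t) - 1
        return self.params[max(i, 0)]

track = ParamTrack([(0.0, {"alt": 100}), (0.5, {"alt": 102}), (1.0, {"alt": 105})])
print(track.at(0.7))   # -> {'alt': 102}
```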
  • the method further includes:
  • Step 105: Determine, frame by frame, a dynamic relative positional relationship between the UAV and a target object in the target video;
  • Step 106: Adjust a display state of the three-dimensional watermark according to the dynamic relative positional relationship between the unmanned aerial vehicle and the target object in the target video.
  • it can be understood that, after the simulated lens stereo space in which the UAV captures the target video has been established, the dynamic relative positional relationship between the UAV and the target object in the target video may be determined frame by frame according to the simulated lens stereo space; further, the display state of the three-dimensional watermark is adjusted according to the relative positional relationship between the unmanned aerial vehicle and the target object corresponding to each frame image of the target video.
  • in one embodiment, the adjusting of the display state of the three-dimensional watermark according to the dynamic relative positional relationship between the unmanned aerial vehicle and the target object in the target video includes: calculating a dynamic zoom ratio of the target object in the target video according to at least one of the flight path information and the lens focal length information; and adjusting the zoom size of the three-dimensional watermark according to the dynamic zoom ratio of the target object.
  • it can be understood that, as at least one of the flight path and the lens focal length changes, the zoom ratio of the target object in the video also changes; in this case, the zoom size of the three-dimensional watermark can be adjusted dynamically according to the change in the zoom ratio of the target object, so that the watermark size is scaled synchronously with the size of the target object. For example, if the ratio of the target object is 1 in one frame image of the target video and 0.5 in the adjacent next frame image, that is, the target object has shrunk by half between the two adjacent frames, the three-dimensional watermark can likewise be shrunk by half according to the zoom ratio of the target object, thereby dynamically adjusting the zoom size of the three-dimensional watermark and optimizing the watermark display effect.
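  • A minimal sketch of this synchronous scaling, assuming the per-frame scale ratio of the target object has already been computed; `watermark_scale` is a hypothetical helper name.

```python
def watermark_scale(base_size, object_ratio_prev, object_ratio_now):
    """Scale the watermark by the same factor as the target object between two frames.

    For example, if the object ratio goes from 1.0 to 0.5 (the object shrinks by half),
    the watermark size is halved as well."""
    factor = object_ratio_now / object_ratio_prev
    return base_size * factor

print(watermark_scale(base_size=64.0, object_ratio_prev=1.0, object_ratio_now=0.5))  # 32.0
```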
  • in one embodiment, the adjusting of the display state of the three-dimensional watermark according to the dynamic relative positional relationship between the unmanned aerial vehicle and the target object in the target video includes: calculating a dynamic offset angle of the unmanned aerial vehicle relative to the target object in the target video according to the flight path information and the flight attitude information; and adjusting the rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic offset angle.
  • it can be understood that, while the UAV is capturing the target video, its position relative to the target object may change, so that the UAV may have different offset angles relative to the target object in different frame images. In this embodiment, the dynamic offset angle of the UAV relative to the target object in the target video is calculated according to the flight path information and the flight attitude information, and the rotation angle of the three-dimensional watermark relative to the simulated lens stereo space is then adjusted according to the dynamic offset angle, so that the three-dimensional watermark can rotate dynamically following changes in the offset angle of the UAV.
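  • For illustration, the snippet below computes a dynamic offset (bearing) angle of the UAV relative to the target from local-plane positions and applies its change as a yaw rotation of the watermark; treating the offset as a simple 2-D bearing, and the function names, are assumptions of this sketch.

```python
import math

def offset_angle_deg(uav_xy, target_xy):
    """Bearing of the UAV as seen from the target, degrees, in a local east/north plane."""
    dx = uav_xy[0] - target_xy[0]
    dy = uav_xy[1] - target_xy[1]
    return math.degrees(math.atan2(dy, dx))

def watermark_yaw(prev_offset_deg, new_offset_deg, current_yaw_deg):
    """Rotate the watermark by the same amount the UAV's offset angle changed."""
    return current_yaw_deg + (new_offset_deg - prev_offset_deg)

a0 = offset_angle_deg((10.0, 0.0), (0.0, 0.0))   # 0 degrees
a1 = offset_angle_deg((10.0, 10.0), (0.0, 0.0))  # 45 degrees
print(watermark_yaw(a0, a1, current_yaw_deg=0.0))  # 45.0
```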
  • in one embodiment, the adjusting of the display state of the three-dimensional watermark according to the dynamic relative positional relationship between the unmanned aerial vehicle and the target object in the target video includes: calculating, according to the gimbal angle information, a dynamic rotation angle of the gimbal mounted on the aerial vehicle, the dynamic rotation angle of the gimbal including at least one of a dynamic pitch angle and a dynamic yaw angle; adjusting the pitch rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the gimbal; and/or adjusting the lateral rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the gimbal.
  • specifically, while the UAV is capturing the target video, the gimbal angle is adjusted dynamically according to changes in the UAV's flight path and flight attitude in order to keep the camera lens stable; for example, the pitch angle of the gimbal is adjusted according to changes in flight height, and the yaw angle of the gimbal is adjusted according to changes in the UAV's flight attitude.
  • in this embodiment, the dynamic pitch angle and the dynamic yaw angle of the gimbal during the shooting of the target video are acquired; the pitch rotation angle of the three-dimensional watermark relative to the simulated lens stereo space is then adjusted according to the dynamic pitch angle, and the lateral rotation angle of the three-dimensional watermark relative to the simulated lens stereo space is adjusted according to the dynamic yaw angle, so that the three-dimensional watermark is better fused with the simulated lens stereo space and the watermark display effect is optimized.
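  • A minimal sketch of coupling the watermark's orientation to the gimbal angles, assuming the watermark pose is kept as simple pitch/yaw angles relative to the simulated lens stereo space; the `WatermarkPose` class, the `apply_gimbal` helper, and the counter-rotation sign convention are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class WatermarkPose:
    pitch_deg: float = 0.0   # pitch rotation relative to the simulated lens stereo space
    yaw_deg: float = 0.0     # lateral rotation relative to the simulated lens stereo space

def apply_gimbal(pose, gimbal_pitch_deg=None, gimbal_yaw_deg=None):
    """Follow the gimbal: its pitch drives the watermark's pitch, its yaw drives the lateral rotation."""
    if gimbal_pitch_deg is not None:
        pose.pitch_deg = -gimbal_pitch_deg   # counter-rotate so the watermark stays fixed in the scene
    if gimbal_yaw_deg is not None:
        pose.yaw_deg = -gimbal_yaw_deg
    return pose

print(apply_gimbal(WatermarkPose(), gimbal_pitch_deg=-30.0, gimbal_yaw_deg=10.0))
```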
  • in one embodiment, the adjusting of the display state of the three-dimensional watermark according to the dynamic relative positional relationship between the unmanned aerial vehicle and the target object in the target video includes: calculating a dynamic height of the unmanned aerial vehicle relative to the target object in the target video according to the flight path information; and adjusting the pitch rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic height.
  • specifically, while the UAV is capturing the target video, the dynamic height of the UAV relative to the target object changes as the flight path changes. In this embodiment, the dynamic height of the UAV relative to the target object is obtained from the flight path information of the UAV, and the pitch rotation angle of the three-dimensional watermark relative to the simulated lens stereo space is then adjusted according to the dynamic height, so that the display state of the three-dimensional watermark is adjusted as the dynamic height of the UAV changes.
  • for example, when the dynamic height is lower than a preset height threshold, the three-dimensional watermark may be kept upright relative to the reference ground plane of the simulated lens stereo space; when the dynamic height is equal to or higher than the preset height threshold, the three-dimensional watermark may be dynamically adjusted, as the dynamic height increases, to lie flat relative to the reference ground plane of the simulated lens stereo space, so that the three-dimensional watermark is presented more clearly from a high-altitude shooting perspective.
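  • The height-dependent behaviour described above (upright below a threshold, lying flat above it) could look roughly like the sketch below; the smooth transition band, the threshold values, and the function name are added assumptions for illustration.

```python
def watermark_tilt_deg(dynamic_height_m, threshold_m=80.0, blend_m=20.0):
    """Pitch of the watermark relative to the reference ground plane:
    0 deg = upright (low altitude), 90 deg = lying flat on the ground plane (high altitude).
    Heights inside the blend band are interpolated to avoid a sudden flip."""
    if dynamic_height_m < threshold_m:
        return 0.0
    if dynamic_height_m >= threshold_m + blend_m:
        return 90.0
    return 90.0 * (dynamic_height_m - threshold_m) / blend_m

for h in (30.0, 80.0, 90.0, 120.0):
    print(h, watermark_tilt_deg(h))
```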
  • the method further includes:
  • Step 107: Correct the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan/tilt angle information, and the lens field-of-view angle information.
  • it can be understood that, since the UAV is in flight while shooting the target video, environmental factors inevitably make its flight attitude unstable; for example, changes in wind speed in the shooting environment can cause short-term jitter or short-term changes in flight speed, which in turn affect the gimbal angle and the lens field-of-view angle. Such short-term disturbances may cause the display state of the three-dimensional watermark to change and thus degrade the watermark display effect.
  • in this embodiment, the display state of the three-dimensional watermark is corrected according to at least one of the flight attitude information, the flight speed information, the pan/tilt angle information, and the lens field-of-view angle information; for example, adjusting the rotation angle of the three-dimensional watermark according to the flight attitude information can reduce the effect of short-term changes in the UAV's flight attitude on the display state of the three-dimensional watermark, further optimizing the watermark display effect.
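  • One common way to damp short-term disturbances before they reach the watermark is a simple exponential moving average over the attitude signal, sketched below as an assumed correction strategy; the patent does not specify a particular filter.

```python
def smooth(values, alpha=0.2):
    """Exponential moving average: smaller alpha -> stronger damping of short-term jitter."""
    out = []
    state = None
    for v in values:
        state = v if state is None else alpha * v + (1 - alpha) * state
        out.append(state)
    return out

# Noisy per-frame yaw readings (degrees); the smoothed track is what drives the watermark.
noisy_yaw = [90.0, 90.4, 89.5, 92.0, 90.1, 90.2]
print([round(v, 2) for v in smooth(noisy_yaw)])
```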
  • in one embodiment, before the receiving of the target watermark information, the method further includes: reading the target video from the unmanned aerial vehicle and playing it offline; and generating a watermark editing identifier on the offline playing interface of the target video, the watermark editing identifier being used to receive a three-dimensional watermark adding instruction for the target video.
  • specifically, when capturing the target video, the unmanned aerial vehicle may record the dynamic shooting parameter information and store it in association with the target video. When a three-dimensional watermark needs to be added to the target video, the user can establish a communication connection with the unmanned aerial vehicle through a smart terminal such as a mobile phone, download the target video and its associated dynamic shooting parameter information from the unmanned aerial vehicle, and then play and edit the target video offline and add a three-dimensional watermark through the video editing software on the smart terminal.
  • referring to FIG. 4A, 400 is a smart terminal, 410 is the offline playing interface of the target video, and 430 is a target object in the target video. When the target video is played offline through the video editing software on the smart terminal 400, a watermark editing identifier 411 may be generated on the offline playing interface 410, and a three-dimensional watermark adding instruction for the target video may then be received through the watermark editing identifier 411.
  • referring to FIG. 4B, after the watermark editing identifier 411 receives the three-dimensional watermark adding instruction for the target video, a watermark information input interface 413 may be generated on the offline playing interface 410 for inputting the target watermark information.
  • for example, the watermark information input interface may be a virtual keyboard, through which text watermark information input by the user can be received; alternatively, the watermark information input interface may be a file selection window, through which corresponding image watermark information or animation watermark information can be selected.
  • it can be understood that, during the shooting of the target video, the smart terminal may also acquire the target video from the unmanned aerial vehicle in real time and play it online synchronously, and may acquire the dynamic shooting parameter information corresponding to the target video;
  • a watermark editing identifier is then generated on the online playing interface of the target video, so that a three-dimensional watermark adding instruction for the target video can be received through the watermark editing identifier.
  • it can be understood that, when watermark information is input through the watermark information input interface, the video editing software may establish, according to the dynamic shooting parameter information, the simulated lens stereo space in which the unmanned aerial vehicle captures the target video, and may generate the corresponding three-dimensional watermark on the target video in real time according to the simulated lens stereo space, such as the text watermark "HELLOW" shown in FIG. 4B.
  • referring to FIG. 4C, after the three-dimensional watermark for the target video is generated, the method further includes: receiving an editing instruction for the three-dimensional watermark; and adjusting the display state of the three-dimensional watermark according to the editing instruction;
  • where the adjusting of the display state of the three-dimensional watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional watermark.
  • the editing instruction may be a touch operation instruction directed at the three-dimensional watermark "HELLOW", such as a drag, stretch, shrink, or rotate touch operation, thereby enabling manual adjustment of the display state of the three-dimensional watermark.
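  • As a rough sketch of such manual adjustments, the snippet below maps drag, pinch, and rotate gestures onto the watermark's display state; the gesture event shapes and the `WatermarkState` class are assumptions, not an actual touch API.

```python
class WatermarkState:
    """Mutable display state of the three-dimensional watermark (illustrative)."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0     # display position
        self.scale = 1.0              # zoom size
        self.angle_deg = 0.0          # rotation angle

    def on_drag(self, dx, dy):
        self.x += dx
        self.y += dy

    def on_pinch(self, factor):
        self.scale *= factor

    def on_rotate(self, delta_deg):
        self.angle_deg += delta_deg

w = WatermarkState()
w.on_drag(12, -4); w.on_pinch(1.5); w.on_rotate(30)
print(w.x, w.y, w.scale, w.angle_deg)   # 12 -4 1.5 30
```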
  • it can be understood that, after the three-dimensional watermark for the target video is generated, the watermark editing identifier 411 can also receive a hiding instruction for the three-dimensional watermark and, according to the hiding instruction, trigger the three-dimensional watermark in the target video to switch from the displayed state to a hidden state.
  • the hiding instruction for the three-dimensional watermark may also be a specific touch gesture made directly on the playing interface of the target video.
  • referring to FIG. 4D, after the three-dimensional watermark for the target video is generated, as the target video is played, the display state of the three-dimensional watermark "HELLOW" is adjusted dynamically following changes in the positional relationship of the UAV relative to the target object 430; for example, it is zoomed dynamically as the distance of the lens from the target object 430 or the lens focal length changes, finally achieving the fusion of the three-dimensional watermark with the simulated lens stereo space.
  • it can be understood that all or part of the processes of the foregoing method embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
  • a three-dimensional watermark adding apparatus 500 including:
  • a watermark input unit 501 configured to receive target watermark information
  • the parameter acquisition unit 502 is configured to acquire dynamic shooting parameter information corresponding to the target video, where the dynamic shooting parameter information is used to record dynamic shooting parameters of the unmanned aerial vehicle when the target video is captured;
  • a space simulation unit 503, configured to establish, according to the dynamic shooting parameter information, the simulated lens stereo space in which the unmanned aerial vehicle captures the target video;
  • a watermark generating unit 504, configured to fuse the target watermark information with the simulated lens stereo space to generate a three-dimensional watermark for the target video.
  • in one embodiment, the parameter acquisition unit 502 is specifically configured to: acquire dynamic shooting parameters of the unmanned aerial vehicle when the target video is captured; generate, according to the dynamic shooting parameters, the dynamic shooting parameter information corresponding to the target video; and store the dynamic shooting parameter information in association with the target video.
  • in one embodiment, the dynamic shooting parameter information includes at least one of flight path information, flight attitude information, flight speed information, pan/tilt angle information, lens focal length information, and lens field-of-view angle information of the unmanned aerial vehicle.
  • in one embodiment, the space simulation unit 503 is specifically configured to: establish the simulated lens stereo space of the unmanned aerial vehicle according to at least one of the flight path information, the flight attitude information, the flight speed information, the pan/tilt angle information, the lens focal length information, and the lens field-of-view angle information;
  • where the simulated lens stereo space is used to determine a dynamic relative positional relationship between the UAV and a target object in the target video.
  • in one embodiment, the three-dimensional watermark adding apparatus 500 further includes a watermark adjusting unit 505, configured to: determine, frame by frame, the dynamic relative positional relationship between the UAV and the target object in the target video; and adjust the display state of the three-dimensional watermark according to that dynamic relative positional relationship.
  • in specific embodiments, the watermark adjusting unit 505 is configured to perform any of the adjustments described above for the method: adjusting the zoom size of the three-dimensional watermark according to the dynamic zoom ratio of the target object; adjusting the rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic offset angle of the UAV relative to the target object; adjusting the pitch and/or lateral rotation angle of the three-dimensional watermark according to the dynamic pitch angle and/or dynamic yaw angle of the gimbal; and adjusting the pitch rotation angle of the three-dimensional watermark according to the dynamic height of the UAV relative to the target object.
  • in one embodiment, the watermark adjusting unit 505 is further configured to correct the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan/tilt angle information, and the lens field-of-view angle information.
  • the three-dimensional watermark adding apparatus 500 further includes:
  • a video acquisition unit 506, configured to read from the unmanned aerial vehicle and play the target video offline;
  • the identifier generating unit 507 is configured to generate a watermark editing identifier on the offline playing interface of the target video, where the watermark editing identifier is used to receive a three-dimensional watermark adding instruction for the target video.
  • the video acquisition unit 506 is further configured to acquire and synchronously play the target video online in real time from an unmanned aerial vehicle during shooting of the target video;
  • the identifier generating unit 507 is further configured to generate a watermark editing identifier on the online play interface of the target video, where the watermark edit identifier is used to receive a three-dimensional watermark adding instruction for the target video.
  • in one embodiment, the three-dimensional watermark adding apparatus 500 further includes a watermark editing unit 508, configured to: receive an editing instruction for the three-dimensional watermark; and adjust the display state of the three-dimensional watermark according to the editing instruction, where adjusting the display state of the three-dimensional watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional watermark.
  • in one embodiment, the three-dimensional watermark adding apparatus 500 further includes a watermark hiding unit 509, configured to: receive a hiding instruction for the three-dimensional watermark; and trigger, according to the hiding instruction, the three-dimensional watermark in the target video to switch from a displayed state to a hidden state.
  • the three-dimensional watermark includes at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animated watermark.
  • a terminal 800 including a processor 801 and a memory 803.
  • the processor 801 is electrically connected to the memory 803, and the memory 803 is configured to store executable program instructions.
  • the processor 801 is configured to read executable program instructions in the memory 803 and perform the following operations:
  • receive target watermark information; acquire dynamic shooting parameter information corresponding to a target video, where the dynamic shooting parameter information is used to record dynamic shooting parameters of the unmanned aerial vehicle when the target video is captured; establish, according to the dynamic shooting parameter information, a simulated lens stereo space in which the unmanned aerial vehicle captures the target video; and fuse the target watermark information with the simulated lens stereo space to generate a three-dimensional watermark for the target video.
  • in one embodiment, the acquiring of dynamic shooting parameter information corresponding to the target video includes: acquiring dynamic shooting parameters of the unmanned aerial vehicle when the target video is captured; generating, according to the dynamic shooting parameters, the dynamic shooting parameter information corresponding to the target video; and storing the dynamic shooting parameter information in association with the target video.
  • in one embodiment, the dynamic shooting parameter information includes at least one of flight path information, flight attitude information, flight speed information, pan/tilt angle information, lens focal length information, and lens field-of-view angle information of the unmanned aerial vehicle.
  • in one embodiment, the establishing, according to the dynamic shooting parameter information, of the simulated lens stereo space in which the UAV captures the target video includes: establishing the simulated lens stereo space of the unmanned aerial vehicle according to at least one of the flight path information, the flight attitude information, the flight speed information, the pan/tilt angle information, the lens focal length information, and the lens field-of-view angle information;
  • where the simulated lens stereo space is used to determine a dynamic relative positional relationship between the UAV and a target object in the target video.
  • in one embodiment, the operations further include: determining, frame by frame, the dynamic relative positional relationship between the UAV and the target object in the target video; and adjusting the display state of the three-dimensional watermark according to that dynamic relative positional relationship.
  • in specific embodiments, the adjusting of the display state of the three-dimensional watermark according to the dynamic relative positional relationship between the unmanned aerial vehicle and the target object in the target video includes any of: adjusting the zoom size of the three-dimensional watermark according to the dynamic zoom ratio of the target object; adjusting the rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic offset angle of the UAV relative to the target object; adjusting the pitch and/or lateral rotation angle of the three-dimensional watermark according to the dynamic pitch angle and/or dynamic yaw angle of the gimbal; and adjusting the pitch rotation angle of the three-dimensional watermark according to the dynamic height of the UAV relative to the target object.
  • in one embodiment, the operations further include: correcting the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan/tilt angle information, and the lens field-of-view angle information.
  • in one embodiment, before the receiving of the target watermark information, the operations further include: reading the target video from the unmanned aerial vehicle and playing it offline; and generating a watermark editing identifier on the offline playing interface of the target video, the watermark editing identifier being used to receive a three-dimensional watermark adding instruction for the target video.
  • in another embodiment, before the receiving of the target watermark information, the operations further include: acquiring the target video from the unmanned aerial vehicle in real time during its shooting and playing it online synchronously; and generating a watermark editing identifier on the online playing interface of the target video, the watermark editing identifier being used to receive a three-dimensional watermark adding instruction for the target video.
  • in one embodiment, after the three-dimensional watermark for the target video is generated, the operations further include: receiving an editing instruction for the three-dimensional watermark; and adjusting the display state of the three-dimensional watermark according to the editing instruction, where adjusting the display state of the three-dimensional watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional watermark.
  • in one embodiment, after the three-dimensional watermark for the target video is generated, the operations further include: receiving a hiding instruction for the three-dimensional watermark; and triggering, according to the hiding instruction, the three-dimensional watermark in the target video to switch from a displayed state to a hidden state.
  • the three-dimensional watermark includes at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animated watermark.
  • in the three-dimensional watermark adding method, device, and terminal, dynamic shooting parameter information of the unmanned aerial vehicle at the time of capturing the target video is acquired; when a watermark needs to be added to the target video, the simulated lens stereo space in which the UAV captured the target video is established according to the dynamic shooting parameter information, and the target watermark information is fused with the simulated lens stereo space, so that a three-dimensional watermark for the target video can be generated quickly, which helps reduce the generation time of the three-dimensional watermark.
  • at the same time, the display state of the three-dimensional watermark can be dynamically adjusted according to the dynamic shooting parameters, thereby optimizing the display effect of the watermark.

Abstract

A method, apparatus, and terminal for adding a three-dimensional watermark. The method includes: receiving target watermark information; acquiring dynamic shooting parameter information corresponding to a target video, the dynamic shooting parameter information being used to record dynamic shooting parameters of an unmanned aerial vehicle when the target video is captured; establishing, according to the dynamic shooting parameter information, a simulated lens stereo space in which the unmanned aerial vehicle captures the target video; and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional watermark for the target video. The method can quickly add a three-dimensional watermark to the target video.

Description

三维立体水印添加方法、装置及终端
本专利文件披露的内容包含受版权保护的材料。该版权为版权所有人所有。版权所有人不反对任何人复制专利与商标局的官方记录和档案中所存在的该专利文件或该专利披露。
技术领域
本发明涉及视频处理技术领域,尤其涉及一种三维立体水印添加方法、装置及终端。
背景技术
传统的视频水印,是直接叠加在视频中的水印,依赖于镜头的剪辑来达到视频的故事表达效果,传统的视频水印与视频画面的空间并没有任何关联性。目前,为实现更好的水印显示效果,例如视频中的三维立体字幕,可以通过桌面端的后期剪辑软件(例如Adobe After Effect)来分析视频中的立体空间与动作,进而将字幕与视频中的立体空间、动作匹配起来,从而实现将水印与视频的融合。然而,采用桌面端软件的处理办法,需要大量的图像运算来重新模拟视频成像时的相机角度与视频中的立体空间,从而导致资源消耗较大,且比较费时。
发明内容
本发明实施例提供一种三维立体水印添加方法、装置及终端,以快速地在目标视频中添加三维立体水印,并实现三维立体水印显示状态的动态调整。
一种三维立体水印添加方法,包括:
接收目标水印信息;
获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间;
将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
一种三维立体水印添加装置,包括:
水印输入单元,用于接收目标水印信息;
参数获取单元,用于获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
空间模拟单元,用于根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间;
水印生成单元,用于将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
一种终端,包括处理器和存储器,所述处理器与所述存储器电连接,所述存储器用于存储可执行程序指令,所述处理器用于读取所述存储器中的可执行程序指令,并执行如下操作:
接收目标水印信息;
获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间;
将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
所述三维立体水印添加方法、装置及终端通过获取无人飞行器在拍摄所述目标视频时的动态拍摄参数信息,进而可以在需要对所述目标视频添加水印时,根据所述动态拍摄参数信息建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间,通过将目标水印信息与所述模拟镜头立体空间进行融合,实现快速地生成针对所述目标视频的三维立体水印。由于所述目标视频在拍摄过程中的动态拍摄参数可以方便地获取,通过将所述动态拍摄参数与所述目标视频进行关联存储,从而使得在对所述目标视频添加三维立体水印时,无需通过专业的软件来进行运动与空间分析,而是直接根据所述动态拍摄参数建立模拟镜头立体空间,进而将所述目标水印信息与所述镜头立体空间融合以形成对应的三维立体水印,有利于降低三维立体水印的生成时间。同时,还可以根据所述 动态拍摄参数对三维立体水印显示状态的动态调整,从而优化水印的显示效果。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例的三维立体水印添加方法的第一流程示意图;
图2为本发明实施例的三维立体水印添加方法的第二流程示意图;
图3为本发明实施例的三维立体水印添加方法的第三流程示意图
图4A至图4D为本发明实施例的三维立体水印添加方法的应用场景示意图;
图5为本发明实施例的三维立体水印添加装置的第一结构示意图;
图6为本发明实施例的三维立体水印添加装置的第二结构示意图;
图7为本发明实施例的三维立体水印添加装置的第三结构示意图;
图8为本发明实施例的终端的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有付出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
请参阅图1,在本发明一个实施例中,提供一种三维立体水印添加方法,以快速地在目标视频中添加三维立体水印,并实现三维立体水印显示状态的动态调整。所述三维立体水印添加方法至少包括如下步骤:
步骤101:接收目标水印信息;
步骤102:获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
步骤103:根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目 标视频的模拟镜头立体空间;
步骤104:将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
其中,所述目标水印信息可以包括文字信息、图片信息、动画信息等信息中的至少一种。相应地,所述三维立体水印可以包括三维文字水印、三维图像水印和三维动画水印中的至少一种。所述动态拍摄参数信息可以包括所述无人飞行器的飞行轨迹信息、飞行姿态信息、飞行速度信息、云台角度信息、镜头焦距信息和镜头视场角度信息中的至少一种。根据所述动态拍摄参数信息,可以建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间,进而可以根据所述模拟镜头立体空间确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系。
请参阅图2,在一种实施方式中,所述获取目标视频对应的动态拍摄参数信息,包括:
步骤201:获取无人飞行器在拍摄目标视频时的动态拍摄参数;
步骤202:根据所述动态拍摄参数,生成所述目标视频对应的动态拍摄参数信息;
步骤203:将所述动态拍摄参数信息与所述目标视频关联存储。
具体地,无人飞行器在拍摄目标视频的过程中,可以通过GPS定位、北斗定位等方式获取无人飞行器的动态飞行坐标(x,y,z),其中x表示经度信息,y表示纬度信息,z表示飞行高度信息,进而根据动态飞行坐标的变化生成飞行轨迹信息和飞行速度信息,并根据无人飞行器内置的飞行姿态传感器的输出数据生成飞行姿态信息。可选地,飞行姿态传感器包括惯性测量单元(Interial Measurement Unit,IMU)。同时,根据搭载于所述无人飞行器上的云台的角度变化生成云台角度信息,并根据搭载于所述无人飞行器上的摄像镜头的拍摄参数生成镜头焦距信息和镜头视场角度信息。
进一步地,通过将所述动态拍摄参数信息与所述目标视频关联存储,从而建立所述目标视频与对应的动态拍摄参数信息之间的映射关系,以便在需要对所述目标视频添加三维立体水印时,可以根据所述映射关系获取到与所述目标视频对应的动态拍摄参数信息。例如,可以通过为所述目标视频添加特定的类型标签来建立所述目标视频与对应的动态拍摄参数信息之间的映射关系,在需 要对所述目标视频添加三维立体水印时,通过读取所述特定的类型标签,即可获取与所述目标视频对应的动态拍摄参数信息。在一种实施方式中,还可以将所述动态拍摄参数信息存储于所述目标视频的数据流中,进而在需要对所述目标视频添加三维立体水印时,可以直接从所述目标视频的数据流中读取对应的动态拍摄参数信息。
可以理解,在将所述动态拍摄参数信息与所述目标视频关联存储时,需要在所述动态拍摄参数信息中记录不同的拍摄参数信息对应的时间戳信息,以使得拍摄参数信息与视频数据流之间在时间上相互关联。
请参阅图3,所述生成针对所述目标视频的三维立体水印之后,所述方法还包括:
步骤105:逐帧确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系;
步骤106:根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态。
可以理解,在建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间之后,可以根据所述模拟镜头立体空间逐帧确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系。进一步地,根据所述目标视频中每一帧图像对应的无人飞行器与目标对象之间的相对位置关系,调整所述三维立体水印的显示状态。
在一种实施方式中,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
根据所述飞行轨迹信息和所述镜头焦距信息中的至少一者,计算所述目标视频中目标对象的动态缩放比例;
根据所述目标对象的动态缩放比例,调整所述三维立体水印的缩放尺寸。
可以理解,随着飞行轨迹和镜头焦距中的至少一者的变化,目标对象在视频中的缩放比例也会变化,此时,可以根据目标对象的缩放比例的变化,动态地调整三维立体水印的缩放尺寸,从而保证水印尺寸与目标对象的尺寸同步缩放。例如,在所述目标视频的某一帧图像中,所述目标对象的比例为1,而在相邻的下一帧图像中,所述目标对象的比例为0.5,即在相邻的两帧图像中,目标对象的缩小了一倍,此时,可以根据所述目标对象的缩放比例,将所述三 维立体水印也缩小一倍,从而实现三维立体水印缩放尺寸的动态调节,优化水印显示效果。
在一种实施方式中,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
根据所述飞行轨迹信息和所述飞行姿态信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态偏移角度;
根据所述无人飞行器相对于所述目标对象的动态偏移角度,调整所述三维立体水印相对于所述模拟镜头立体空间的旋转角度。
可以理解,无人飞行器在拍摄目标视频的过程中,无人飞行器相对于所述目标对象的位置可以是变化的,从而在不同的帧图像中,无人飞行器相对于目标对象可能存在不同的偏移角度。在本实施方式中,通过根据所述飞行轨迹信息和所述飞行姿态信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态偏移角度,进而根据所述动态偏移角度,调整所述三维立体水印相对于所述模拟镜头立体空间的旋转角度,从而使得所述三维立体水印可以跟随无人飞行器的偏移角度的变化而动态旋转。
在一种实施方式中,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
根据所述云台角度信息,计算搭载在所述飞行器上的云台的动态旋转角度,所述云台的动态旋转角度包括动态俯仰角和动态偏航角中的至少一种;
根据所述云台的动态俯仰角,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度;和/或,
根据所述云台的动态偏航角,调整所述三维立体水印相对于所述模拟镜头立体空间的横向旋转角度。
具体地,无人飞行器在拍摄目标视频的过程中,为保证拍摄镜头的稳定性,云台的角度会根据无人飞行器的飞行轨迹和飞行姿态的变化而动态调整,例如根据无人飞行器的飞行高度的变化而调整云台的俯仰角,以及根据无人飞行器的飞行姿态的变化而调整云台的偏航角。在本实施方式中,通过获取所述云台在拍摄目标视频过程中的动态俯仰角和动态偏航角,进而根据所述动态俯仰角调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度,并根据所述动态偏航角调整所述三维立体水印相对于所述模拟镜头立体空间的横向 旋转角度,从而实现三维立体水印与模拟镜头立体空间更好的融合,优化水印显示效果。
在一种实施方式中,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
根据所述飞行轨迹信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态高度;
根据所述无人飞行器相对于所述目标对象的动态高度,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度。
具体地,无人飞行器在拍摄目标视频的过程中,根据飞行轨迹的变化,无人飞行器相对于目标对象的动态高度也会变化。在本事是方式中,通过根据无人飞行器的飞行轨迹信息获取无人飞行器相对于目标对象的动态高度,进而根据所述动态高度,调整三维立体水印相对于模拟镜头立体空间的俯仰旋转角度,从而实现根据无人飞行器动态高度的变化来调整三维立体水印的显示状态。例如,当所述动态高度低于预设高度阈值时,可以使所述三维立体水印相对于所述模拟镜头立体空间的参考地平面为竖立状态,当所述动态高度等于或高于预设高度阈值时,则可以跟随所述动态高度的增加,将所述三维立体水印动态地调整为相对于所述模拟镜头立体空间的参考地平面为平铺状态,如此,则可以保证在高空拍摄视角下将所述三维立体水印呈现得更清楚。
在一种实施方式中,所述方法还包括:
步骤107:根据所述飞行姿态信息、所述飞行速度信息、所述云台角度信息和所述镜头视场角度信息中的至少一者,对所述三维立体水印的显示状态进行修正。
可以理解,由于无人飞行器在拍摄目标视频时处于飞行状态,难免会受到环境因素的影响而导致飞行姿态的不稳定,例如,受拍摄环境中风速变化的影响而导致短时抖动或飞行速度的短时变化,从而会影响到云台角度和镜头视场角度,这种短时的扰动可能会导致三维立体水印的显示状态也会出现变化,从而影响水印显示效果。在本实施方式中,通过根据所述飞行姿态信息、所述飞行速度信息、所述云台角度信息和所述镜头视场角度信息中的至少一者,对所述三维立体水印的显示状态进行修正,例如根据飞行姿态信息调整三维立体水印的旋转角度,可以降低无人飞行器的飞行姿态的短时变化对三维立体水印的 显示状态的影响,进一步优化水印显示效果。
在一种实施方式中,所述接收目标水印信息之前,所述方法还包括:
从无人飞行器中读取并离线播放所述目标视频;
在所述目标视频的离线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
具体地,无人飞行器在拍摄所述目标视频时,可以记录动态拍摄参数信息,并将动态拍摄参数信息与所述目标视频关联存储。当需要对目标视频添加三维立体水印时,用户可以通过手机等智能终端与无人飞行器建立通信连接,进而从无人飞行器中下载所述目标视频及其关联存储的动态拍摄参数信息,并通过智能终端上的视频编辑软件对所述目标视频进行离线播放和编辑,并添加三维立体水印。
请参阅图4A,其中,400为智能终端,410为目标视频的离线播放界面,430为目标视频中的目标对象。在通过智能终端400上的视频编辑软件离线播放所述目标视频时,可以在所述离线播放界面410上生成水印编辑标识411,进而可通过所述水印编辑标识411接收针对所述目标视频的三维立体水印添加指令。
请参阅图4B,当所述水印编辑标识411接收到针对所述目标视频的三维立体水印添加指令之后,可以在所述离线播放界面410上生成水印信息输入界面413,用于输入目标水印信息。例如,所述水印信息输入界面可以是虚拟的键盘,进而可以通过所述虚拟键盘接收用户输入的文字水印信息;或者,所述水印信息输入界面也可以是文件选取窗口,进而可以通过所述文件选取窗口选取对应的图像水印信息或动画水印信息。
可以理解,在目标视频的拍摄过程中,也可以通过所述智能终端从无人飞行器实时获取并同步在线播放所述目标视频,并获取所述目标视频对应的动态拍摄参数信息;进而在所述目标视频的在线播放界面上生成水印编辑标识,以通过所述水印编辑标识接收针对目标视频的三维立体水印添加指令。
可以理解,在通过所述水印信息输入界面输入水印信息时,所述视频编辑软件可以根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间,并根据所述模拟镜头立体空间,在所述目标视频上实时生成对应的三维立体水印,例如图4B中所示的文字水印“HELLOW”。
请参阅图4C,在生成针对所述目标视频的三维立体水印之后,所述方法还包括:
接收针对所述三维立体水印的编辑指令;
根据所述编辑指令调整所述三维立体水印的显示状态;
其中,所述调整所述三维立体水印的显示状态包括调整所述三维立体水印的缩放尺寸、显示位置和旋转角度中的至少一种。
其中,所述编辑指令可以是直接针对所述三维立体水印“HELLOW”的触控操作指令,例如拖放、拉伸、缩小、旋转等触控操作指令,从而实现对所述三维立体水印的显示状态的手动调整。
可以理解,在生成针对所述目标视频的三维立体水印之后,还可以通过水印编辑标识411接收针对所述三维立体水印的隐藏指令;进而根据所述隐藏指令,触发所述目标视频中的三维立体水印从显示状态切换为隐藏状态。可以理解,所述针对所述三维立体水印的隐藏指令也可以是直接在所述目标视频的播放界面上的特定触控手势。
请参阅图4D,在生成针对所述目标视频的三维立体水印之后,随着目标视频的播放,三维立体水印“HELLOW”的显示状态会跟随无人飞行器相对于所述目标对象430的位置关系的变化而进行动态调整,例如,随着镜头相对于目标对象430的远近或者镜头焦距的变化而进行动态缩放,最终实现三维立体水印与模拟镜头立体空间的融合。
可以理解,实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体RAM等。
请参阅图5,在本发明一个实施例中,提供一种三维立体水印添加装置500,包括:
水印输入单元501,用于接收目标水印信息;
参数获取单元502,用于获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
空间模拟单元503,用于根据所述动态拍摄参数信息,建立所述无人飞行 器拍摄所述目标视频的模拟镜头立体空间;
水印生成单元504,用于将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
在一种实施方式中,所述参数获取单元502,具体用于:
无人飞行器在拍摄目标视频时的动态拍摄参数,生成所述目标视频对应的动态拍摄参数信息;
将所述动态拍摄参数信息与所述目标视频关联存储。
在一种实施方式中,所述动态拍摄参数信息包括所述无人飞行器的飞行轨迹信息、飞行姿态信息、飞行速度信息、云台角度信息、镜头焦距信息和镜头视场角度信息中的至少一种。
在一种实施方式中,所述空间模拟单元503,具体用于:
根据所述飞行轨迹信息、所述飞行姿态信息、所述飞行速度信息、所述云台角度信息、所述镜头焦距信息和所述镜头视场角度信息中的至少一种,建立所述无人飞行器的模拟镜头立体空间;
其中,所述模拟镜头立体空间用于确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系。
请参阅图6,在一种实施方式中,所述三维立体水印添加装置500还包括水印调整单元505,用于:
逐帧确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系;
根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态。
在一种实施方式中,所述水印调整单元505,具体用于:
根据所述飞行轨迹信息和所述镜头焦距信息中的至少一者,计算所述目标视频中目标对象的动态缩放比例;
根据所述目标对象的动态缩放比例,调整所述三维立体水印的缩放尺寸。
在一种实施方式中,所述水印调整单元505,具体用于:
根据所述飞行轨迹信息和所述飞行姿态信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态偏移角度;
根据所述无人飞行器相对于所述目标对象的动态偏移角度,调整所述三维 立体水印相对于所述模拟镜头立体空间的旋转角度。
在一种实施方式中,所述水印调整单元505,具体用于:
根据所述云台角度信息,计算搭载在所述飞行器上的云台的动态旋转角度,所述云台的动态旋转角度包括动态俯仰角和动态偏航角中的至少一种;
根据所述云台的动态俯仰角,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度;和/或,
根据所述云台的动态偏航角,调整所述三维立体水印相对于所述模拟镜头立体空间的横向旋转角度。
在一种实施方式中,所述水印调整单元505,具体用于:
根据所述飞行轨迹信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态高度;
根据所述无人飞行器相对于所述目标对象的动态高度,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度。
在一种实施方式中,所述水印调整单元505,还用于:
根据所述飞行姿态信息、所述飞行速度信息、所述云台角度信息和所述镜头视场角度信息中的至少一者,对所述三维立体水印的显示状态进行修正。
请参阅图7,在一种实施方式中,所述三维立体水印添加装置500还包括:
视频获取单元506,用于从无人飞行器中读取并离线播放所述目标视频;
标识生成单元507,用于在所述目标视频的离线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
在一种实施方式中,所述视频获取单元506,还用于在目标视频的拍摄过程中,从无人飞行器实时获取并同步在线播放所述目标视频;
所述标识生成单元507,还用于在所述目标视频的在线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
请参阅图7,在一种实施方式中,所述三维立体水印添加装置500还包括水印编辑单元508,用于:
接收针对所述三维立体水印的编辑指令;
根据所述编辑指令调整所述三维立体水印的显示状态;
其中,所述调整所述三维立体水印的显示状态包括调整所述三维立体水印 的缩放尺寸、显示位置和旋转角度中的至少一种。
请参阅图7,在一种实施方式中,所述三维立体水印添加装置500还包括水印隐藏单元509,用于:
接收针对所述三维立体水印的隐藏指令;
根据所述隐藏指令,触发所述目标视频中的三维立体水印从显示状态切换为隐藏状态。
在一种实施方式中,所述三维立体水印包括三维文字水印、三维图像水印和三维动画水印中的至少一种。
可以理解,所述三维立体水印添加装置500中各单元的功能及其具体实现还可以参照图1至图4所示方法实施例中的相关描述,此处不再赘述。
请参阅图8,在本发明一个实施例中,提供一种终端800,包括处理器801和存储器803,所述处理器801与所述存储器803电连接,所述存储器803用于存储可执行程序指令,所述处理器801用于读取所述存储器803中的可执行程序指令,并执行如下操作:
接收目标水印信息;
获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间;
将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
在一种实施方式中,所述获取目标视频对应的动态拍摄参数信息,包括:
获取无人飞行器在拍摄目标视频时的动态拍摄参数;
根据所述动态拍摄参数,生成所述目标视频对应的动态拍摄参数信息;
将所述动态拍摄参数信息与所述目标视频关联存储。
在一种实施方式中,所述动态拍摄参数信息包括所述无人飞行器的飞行轨迹信息、飞行姿态信息、飞行速度信息、云台角度信息、镜头焦距信息和镜头视场角度信息中的至少一种。
在一种实施方式中,所述根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间,包括:
根据所述飞行轨迹信息、所述飞行姿态信息、所述飞行速度信息、所述云台角度信息、所述镜头焦距信息和所述镜头视场角度信息中的至少一种,建立所述无人飞行器的模拟镜头立体空间;
其中,所述模拟镜头立体空间用于确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系。
在一种实施方式中,所述操作还包括:
逐帧确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系;
根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态。
在一种实施方式中,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
根据所述飞行轨迹信息和所述镜头焦距信息中的至少一者,计算所述目标视频中目标对象的动态缩放比例;
根据所述目标对象的动态缩放比例,调整所述三维立体水印的缩放尺寸。
在一种实施方式中,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
根据所述飞行轨迹信息和所述飞行姿态信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态偏移角度;
根据所述无人飞行器相对于所述目标对象的动态偏移角度,调整所述三维立体水印相对于所述模拟镜头立体空间的旋转角度。
在一种实施方式中,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
根据所述云台角度信息,计算搭载在所述飞行器上的云台的动态旋转角度,所述云台的动态旋转角度包括动态俯仰角和动态偏航角中的至少一种;
根据所述云台的动态俯仰角,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度;和/或,
根据所述云台的动态偏航角,调整所述三维立体水印相对于所述模拟镜头立体空间的横向旋转角度。
在一种实施方式中,所述根据所述无人飞行器与所述目标视频中目标对象 之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
根据所述飞行轨迹信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态高度;
根据所述无人飞行器相对于所述目标对象的动态高度,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度。
在一种实施方式中,所述操作还包括:
根据所述飞行姿态信息、所述飞行速度信息、所述云台角度信息和所述镜头视场角度信息中的至少一者,对所述三维立体水印的显示状态进行修正。
在一种实施方式中,所述接收目标水印信息之前,所述操作还包括:
从无人飞行器中读取并离线播放所述目标视频;
在所述目标视频的离线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
在一种实施方式中,所述接收目标水印信息之前,所述操作还包括:
在目标视频的拍摄过程中,从无人飞行器实时获取并同步在线播放所述目标视频;
在所述目标视频的在线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
在一种实施方式中,所述生成针对所述目标视频的三维立体水印之后,所述操作还包括:
接收针对所述三维立体水印的编辑指令;
根据所述编辑指令调整所述三维立体水印的显示状态;
其中,所述调整所述三维立体水印的显示状态包括调整所述三维立体水印的缩放尺寸、显示位置和旋转角度中的至少一种。
在一种实施方式中,所述生成针对所述目标视频的三维立体水印之后,所述操作还包括:
接收针对所述三维立体水印的隐藏指令;
根据所述隐藏指令,触发所述目标视频中的三维立体水印从显示状态切换为隐藏状态。
在一种实施方式中,所述三维立体水印包括三维文字水印、三维图像水印和三维动画水印中的至少一种。
可以理解,所述处理器801执行的各操作的具体步骤及其具体实现还可以参照图1至图4所示方法实施例中的相关描述,此处不再赘述。
所述三维立体水印添加方法、装置及终端通过获取无人飞行器在拍摄所述目标视频时的动态拍摄参数信息,进而可以在需要对所述目标视频添加水印时,根据所述动态拍摄参数信息建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间,通过将目标水印信息与所述模拟镜头立体空间进行融合,可以快速地生成针对所述目标视频的三维立体水印,有利于降低三维立体水印的生成时间。同时,还可以根据所述动态拍摄参数对三维立体水印显示状态的动态调整,从而优化水印的显示效果。
可以理解,以上所揭露的仅为本发明的较佳实施例而已,当然不能以此来限定本发明之权利范围,本领域普通技术人员可以理解实现上述实施例的全部或部分流程,并依本发明权利要求所作的等同变化,仍属于发明所涵盖的范围。

Claims (32)

  1. 一种三维立体水印添加方法,其特征在于,包括:
    接收目标水印信息;
    获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
    根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间;
    将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
  2. 如权利要求1所述的方法,其特征在于,所述获取目标视频对应的动态拍摄参数信息,包括:
    获取无人飞行器在拍摄目标视频时的动态拍摄参数;
    根据所述动态拍摄参数,生成所述目标视频对应的动态拍摄参数信息;
    将所述动态拍摄参数信息与所述目标视频关联存储。
  3. 如权利要求1或2所述的方法,其特征在于,所述动态拍摄参数信息包括所述无人飞行器的飞行轨迹信息、飞行姿态信息、飞行速度信息、云台角度信息、镜头焦距信息和镜头视场角度信息中的至少一种。
  4. 如权利要求3所述的方法,其特征在于,所述根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间,包括:
    根据所述飞行轨迹信息、所述飞行姿态信息、所述飞行速度信息、所述云台角度信息、所述镜头焦距信息和所述镜头视场角度信息中的至少一种,建立所述无人飞行器的模拟镜头立体空间;
    其中,所述模拟镜头立体空间用于确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系。
  5. 如权利要求4所述的方法,其特征在于,所述方法还包括:
    逐帧确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系;
    根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态。
  6. 如权利要求5所述的方法,其特征在于,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
    根据所述飞行轨迹信息和所述镜头焦距信息中的至少一者,计算所述目标视频中目标对象的动态缩放比例;
    根据所述目标对象的动态缩放比例,调整所述三维立体水印的缩放尺寸。
  7. 如权利要求5所述的方法,其特征在于,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
    根据所述飞行轨迹信息和所述飞行姿态信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态偏移角度;
    根据所述无人飞行器相对于所述目标对象的动态偏移角度,调整所述三维立体水印相对于所述模拟镜头立体空间的旋转角度。
  8. 如权利要求5所述的方法,其特征在于,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
    根据所述云台角度信息,计算搭载在所述飞行器上的云台的动态旋转角度,所述云台的动态旋转角度包括动态俯仰角和动态偏航角中的至少一种;
    根据所述云台的动态俯仰角,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度;和/或,
    根据所述云台的动态偏航角,调整所述三维立体水印相对于所述模拟镜头立体空间的横向旋转角度。
  9. 如权利要求5所述的方法,其特征在于,所述根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态,包括:
    根据所述飞行轨迹信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态高度;
    根据所述无人飞行器相对于所述目标对象的动态高度,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度。
  10. 如权利要求6至9任一项所述的方法,其特征在于,所述方法还包括:
    根据所述飞行姿态信息、所述飞行速度信息、所述云台角度信息和所述镜头视场角度信息中的至少一者,对所述三维立体水印的显示状态进行修正。
  11. 如权利要求1所述的方法,其特征在于,所述接收目标水印信息之前,所述方法还包括:
    从无人飞行器中读取并离线播放所述目标视频;
    在所述目标视频的离线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
  12. 如权利要求1所述的方法,其特征在于,所述接收目标水印信息之前,所述方法还包括:
    在目标视频的拍摄过程中,从无人飞行器实时获取并同步在线播放所述目标视频;
    在所述目标视频的在线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
  13. 如权利要求11或12所述的方法,其特征在于,所述生成针对所述目标视频的三维立体水印之后,所述方法还包括:
    接收针对所述三维立体水印的编辑指令;
    根据所述编辑指令调整所述三维立体水印的显示状态;
    其中,所述调整所述三维立体水印的显示状态包括调整所述三维立体水印 的缩放尺寸、显示位置和旋转角度中的至少一种。
  14. 如权利要求11或12所述的方法,其特征在于,所述生成针对所述目标视频的三维立体水印之后,所述方法还包括:
    接收针对所述三维立体水印的隐藏指令;
    根据所述隐藏指令,触发所述目标视频中的三维立体水印从显示状态切换为隐藏状态。
  15. 如权利要求1所述的方法,其特征在于,所述三维立体水印包括三维文字水印、三维图像水印和三维动画水印中的至少一种。
  16. 一种三维立体水印添加装置,其特征在于,包括:
    水印输入单元,用于接收目标水印信息;
    参数获取单元,用于获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
    空间模拟单元,用于根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间;
    水印生成单元,用于将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
  17. 如权利要求16所述的装置,其特征在于,所述参数获取单元,具体用于:
    获取无人飞行器在拍摄目标视频时的动态拍摄参数;
    根据所述动态拍摄参数,生成所述目标视频对应的动态拍摄参数信息;
    将所述动态拍摄参数信息与所述目标视频关联存储。
  18. 如权利要求16或17所述的装置,其特征在于,所述动态拍摄参数信息包括所述无人飞行器的飞行轨迹信息、飞行姿态信息、飞行速度信息、云台角度信息、镜头焦距信息和镜头视场角度信息中的至少一种。
  19. 如权利要求18所述的装置,其特征在于,所述空间模拟单元,具体用于:
    根据所述飞行轨迹信息、所述飞行姿态信息、所述飞行速度信息、所述云台角度信息、所述镜头焦距信息和所述镜头视场角度信息中的至少一种,建立所述无人飞行器的模拟镜头立体空间;
    其中,所述模拟镜头立体空间用于确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系。
  20. 如权利要求18所述的装置,其特征在于,所述装置还包括水印调整单元,用于:
    逐帧确定所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系;
    根据所述无人飞行器与所述目标视频中目标对象之间的动态相对位置关系,调整所述三维立体水印的显示状态。
  21. 如权利要求20所述的装置,其特征在于,所述水印调整单元,具体用于:
    根据所述飞行轨迹信息和所述镜头焦距信息中的至少一者,计算所述目标视频中目标对象的动态缩放比例;
    根据所述目标对象的动态缩放比例,调整所述三维立体水印的缩放尺寸。
  22. 如权利要求20所述的装置,其特征在于,所述水印调整单元,具体用于:
    根据所述飞行轨迹信息和所述飞行姿态信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态偏移角度;
    根据所述无人飞行器相对于所述目标对象的动态偏移角度,调整所述三维立体水印相对于所述模拟镜头立体空间的旋转角度。
  23. 如权利要求20所述的装置,其特征在于,所述水印调整单元,具体用于:
    根据所述云台角度信息,计算搭载在所述飞行器上的云台的动态旋转角度,所述云台的动态旋转角度包括动态俯仰角和动态偏航角中的至少一种;
    根据所述云台的动态俯仰角,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度;和/或,
    根据所述云台的动态偏航角,调整所述三维立体水印相对于所述模拟镜头立体空间的横向旋转角度。
  24. 如权利要求20所述的装置,其特征在于,所述水印调整单元,具体用于:
    根据所述飞行轨迹信息,计算所述无人飞行器相对于所述目标视频中目标对象的动态高度;
    根据所述无人飞行器相对于所述目标对象的动态高度,调整所述三维立体水印相对于所述模拟镜头立体空间的俯仰旋转角度。
  25. 如权利要求21至24任一项所述的装置,其特征在于,所述水印调整单元,还用于:
    根据所述飞行姿态信息、所述飞行速度信息、所述云台角度信息和所述镜头视场角度信息中的至少一者,对所述三维立体水印的显示状态进行修正。
  26. 如权利要求16所述的装置,其特征在于,所述装置还包括:
    视频获取单元,用于从无人飞行器中读取并离线播放所述目标视频;
    标识生成单元,用于在所述目标视频的离线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
  27. 如权利要求27所述的装置,其特征在于,所述视频获取单元,还用于在目标视频的拍摄过程中,从无人飞行器实时获取并同步在线播放所述目标视频;
    所述标识生成单元,还用于在所述目标视频的在线播放界面上生成水印编辑标识,所述水印编辑标识用于接收针对目标视频的三维立体水印添加指令。
  28. 如权利要求26或27所述的装置,其特征在于,所述装置还包括水印编辑单元,用于:
    接收针对所述三维立体水印的编辑指令;
    根据所述编辑指令调整所述三维立体水印的显示状态;
    其中,所述调整所述三维立体水印的显示状态包括调整所述三维立体水印的缩放尺寸、显示位置和旋转角度中的至少一种。
  29. 如权利要求26或27所述的装置,其特征在于,所述装置还包括水印隐藏单元,用于:
    接收针对所述三维立体水印的隐藏指令;
    根据所述隐藏指令,触发所述目标视频中的三维立体水印从显示状态切换为隐藏状态。
  30. 如权利要求16所述的装置,其特征在于,所述三维立体水印包括三维文字水印、三维图像水印和三维动画水印中的至少一种。
  31. 一种终端,其特征在于,包括处理器和存储器,所述处理器与所述存储器电连接,所述存储器用于存储可执行程序指令,所述处理器用于读取所述存储器中的可执行程序指令,并执行如下操作:
    接收目标水印信息;
    获取目标视频对应的动态拍摄参数信息,所述动态拍摄参数信息用于记录无人飞行器在拍摄所述目标视频时的动态拍摄参数;
    根据所述动态拍摄参数信息,建立所述无人飞行器拍摄所述目标视频的模拟镜头立体空间;
    将所述目标水印信息与所述模拟镜头立体空间进行融合,生成针对所述目标视频的三维立体水印。
  32. 如权利要求31所述的终端,其特征在于,所述处理器还用于执行如权利要求2至15任一项所述的方法中的各步骤。
PCT/CN2017/082352 2017-04-28 2017-04-28 三维立体水印添加方法、装置及终端 WO2018195892A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/082352 WO2018195892A1 (zh) 2017-04-28 2017-04-28 三维立体水印添加方法、装置及终端
CN201780004602.6A CN108475410B (zh) 2017-04-28 2017-04-28 三维立体水印添加方法、装置及终端

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/082352 WO2018195892A1 (zh) 2017-04-28 2017-04-28 三维立体水印添加方法、装置及终端

Publications (1)

Publication Number Publication Date
WO2018195892A1 true WO2018195892A1 (zh) 2018-11-01

Family

ID=63265979

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/082352 WO2018195892A1 (zh) 2017-04-28 2017-04-28 三维立体水印添加方法、装置及终端

Country Status (2)

Country Link
CN (1) CN108475410B (zh)
WO (1) WO2018195892A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020107406A1 (zh) * 2018-11-30 2020-06-04 深圳市大疆创新科技有限公司 一种拍摄图像处理方法及相关设备
CN109963204B (zh) * 2019-04-24 2023-12-22 努比亚技术有限公司 水印添加方法、装置、移动终端及可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101137008A (zh) * 2007-07-11 2008-03-05 裘炅 一种将位置信息隐藏于视频、音频或图的摄像装置及方法
US20100228406A1 (en) * 2009-03-03 2010-09-09 Honeywell International Inc. UAV Flight Control Method And System
CN105373629A (zh) * 2015-12-17 2016-03-02 谭圆圆 基于无人飞行器的飞行状态数据处理装置及其方法
CN106339079A (zh) * 2016-08-08 2017-01-18 清华大学深圳研究生院 一种基于计算机视觉的利用无人飞行器实现虚拟现实的方法及装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7254249B2 (en) * 2001-03-05 2007-08-07 Digimarc Corporation Embedding location data in video
US7042470B2 (en) * 2001-03-05 2006-05-09 Digimarc Corporation Using embedded steganographic identifiers in segmented areas of geographic images and characteristics corresponding to imagery data derived from aerial platforms
US7061510B2 (en) * 2001-03-05 2006-06-13 Digimarc Corporation Geo-referencing of aerial imagery using embedded image identifiers and cross-referenced data sets
US8798148B2 (en) * 2007-06-15 2014-08-05 Physical Optics Corporation Apparatus and method employing pre-ATR-based real-time compression and video frame segmentation
KR101580987B1 (ko) * 2014-03-05 2015-12-29 광운대학교 산학협력단 깊이 및 텍스쳐 영상 기반의 3차원 입체 영상을 위한 워터마킹 방법
CN104767816A (zh) * 2015-04-15 2015-07-08 百度在线网络技术(北京)有限公司 拍摄信息采集方法、装置和终端

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101137008A (zh) * 2007-07-11 2008-03-05 裘炅 一种将位置信息隐藏于视频、音频或图的摄像装置及方法
US20100228406A1 (en) * 2009-03-03 2010-09-09 Honeywell International Inc. UAV Flight Control Method And System
CN105373629A (zh) * 2015-12-17 2016-03-02 谭圆圆 基于无人飞行器的飞行状态数据处理装置及其方法
CN106339079A (zh) * 2016-08-08 2017-01-18 清华大学深圳研究生院 一种基于计算机视觉的利用无人飞行器实现虚拟现实的方法及装置

Also Published As

Publication number Publication date
CN108475410B (zh) 2022-03-22
CN108475410A (zh) 2018-08-31

Similar Documents

Publication Publication Date Title
US11854149B2 (en) Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
CN105981368B (zh) 在成像装置中的照片构图和位置引导
US7805066B2 (en) System for guided photography based on image capturing device rendered user recommendations according to embodiments
CN110249626B (zh) 增强现实图像的实现方法、装置、终端设备和存储介质
US9799136B2 (en) System, method and apparatus for rapid film pre-visualization
US20030202120A1 (en) Virtual lighting system
US11232626B2 (en) System, method and apparatus for media pre-visualization
US11818467B2 (en) Systems and methods for framing videos
US11622072B2 (en) Systems and methods for suggesting video framing
US20190208124A1 (en) Methods and apparatus for overcapture storytelling
WO2022027447A1 (zh) 图像处理方法、相机及移动终端
WO2018195892A1 (zh) 三维立体水印添加方法、装置及终端
US10643303B1 (en) Systems and methods for providing punchouts of videos
JP2010183384A (ja) 撮影カメラ学習装置及びそのプログラム
KR101741150B1 (ko) 영상에디팅을 수행하는 영상촬영장치 및 방법
US11615582B2 (en) Enclosed multi-view visual media representation
TWI794512B (zh) 用於擴增實境之系統及設備及用於使用一即時顯示器實現拍攝之方法
AU2018203096B2 (en) System, method and apparatus for rapid film pre-visualization
KR101725932B1 (ko) 영상에디팅을 수행하는 영상촬영장치 및 방법
Tulijoki The process of combining animation with live-action films
NZ719982B2 (en) System, method and apparatus for rapid film pre-visualization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17907969

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17907969

Country of ref document: EP

Kind code of ref document: A1