CN108475410B - Three-dimensional watermark adding method, device and terminal - Google Patents


Info

Publication number
CN108475410B
Authority
CN
China
Prior art keywords
watermark
dimensional
target video
dynamic
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780004602.6A
Other languages
Chinese (zh)
Other versions
CN108475410A (en)
Inventor
苏冠华
艾楚越
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN108475410A publication Critical patent/CN108475410A/en
Application granted granted Critical
Publication of CN108475410B publication Critical patent/CN108475410B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A three-dimensional stereo watermark adding method, device and terminal are provided. The method comprises the following steps: receiving target watermark information; acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information is used to record the dynamic shooting parameters of the unmanned aerial vehicle when it shoots the target video; establishing, according to the dynamic shooting parameter information, a simulated lens stereo space in which the unmanned aerial vehicle shot the target video; and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereo watermark for the target video. The method can quickly add a three-dimensional stereo watermark to the target video.

Description

Three-dimensional watermark adding method, device and terminal
The disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office official file or records.
Technical Field
The invention relates to the technical field of video processing, in particular to a three-dimensional watermark adding method, a three-dimensional watermark adding device and a three-dimensional watermark adding terminal.
Background
A traditional video watermark is superimposed directly onto the video; the video's narrative effect relies on shot editing, and the watermark has no relevance to the spatial structure of the video picture. At present, to achieve a better watermark display effect, for example a three-dimensional stereo subtitle in a video, the stereo space and motion in the video may be analyzed with desktop post-editing software (for example, Adobe After Effects), and the subtitle is then matched to that stereo space and motion so that the watermark and the video are fused. However, this desktop-software approach requires a large number of image operations to re-simulate the camera angle and stereo space of the video at imaging time, which consumes substantial resources and takes a long time.
Disclosure of Invention
The embodiment of the invention provides a method, a device and a terminal for adding a three-dimensional watermark, which are used for quickly adding the three-dimensional watermark into a target video and realizing dynamic adjustment of a three-dimensional watermark display state.
A three-dimensional stereo watermark adding method comprises the following steps:
receiving target watermark information;
acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereo watermark for the target video.
A three-dimensional stereoscopic watermarking apparatus, comprising:
a watermark input unit for receiving target watermark information;
the parameter acquisition unit is used for acquiring dynamic shooting parameter information corresponding to a target video, and the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
the space simulation unit is used for establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
and the watermark generating unit is used for fusing the target watermark information with the simulated lens three-dimensional space to generate a three-dimensional watermark aiming at the target video.
A terminal comprising a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store executable program instructions, the processor being configured to read the executable program instructions from the memory and perform the following operations:
receiving target watermark information;
acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereo watermark for the target video.
With the three-dimensional watermark adding method, device and terminal, the dynamic shooting parameter information recorded while the unmanned aerial vehicle shoots the target video can be used, when a watermark needs to be added, to establish a simulated lens stereo space in which the unmanned aerial vehicle shot the target video; by fusing the target watermark information with this simulated lens stereo space, a three-dimensional stereo watermark for the target video is generated quickly. Because the dynamic shooting parameters can be conveniently acquired during shooting and stored in association with the target video, the simulated lens stereo space is built directly from those parameters when the watermark is added, without motion and space analysis by professional software; the target watermark information is then fused with the lens stereo space to form the corresponding three-dimensional stereo watermark, which shortens the generation time of the watermark. Meanwhile, the display state of the three-dimensional stereo watermark can be dynamically adjusted according to the dynamic shooting parameters, optimizing the watermark display effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a first flowchart of a three-dimensional watermark adding method according to an embodiment of the present invention;
fig. 2 is a second flowchart of a three-dimensional watermark adding method according to an embodiment of the present invention;
fig. 3 is a third flowchart of a three-dimensional watermark adding method according to an embodiment of the present invention;
Fig. 4A to 4D are schematic views of application scenarios of the three-dimensional watermark adding method according to the embodiment of the present invention;
fig. 5 is a schematic diagram of a first structure of a three-dimensional watermark adding apparatus according to an embodiment of the present invention;
fig. 6 is a second structural diagram of a three-dimensional watermark adding apparatus according to an embodiment of the present invention;
fig. 7 is a third structural diagram of a three-dimensional watermark adding apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
Referring to fig. 1, in an embodiment of the present invention, a three-dimensional watermark adding method is provided to quickly add a three-dimensional watermark to a target video and to dynamically adjust a display state of the three-dimensional watermark. The three-dimensional watermark adding method at least comprises the following steps:
step 101: receiving target watermark information;
step 102: acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
step 103: establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
step 104: and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereo watermark for the target video.
The target watermark information may include at least one of text information, picture information, animation information, and the like. Accordingly, the three-dimensional stereo watermark may include at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animation watermark. The dynamic shooting parameter information may include at least one of flight trajectory information, flight attitude information, flight speed information, gimbal angle information, lens focal length information, and lens field-of-view angle information of the unmanned aerial vehicle. According to the dynamic shooting parameter information, a simulated lens stereo space in which the unmanned aerial vehicle shot the target video can be established, and the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video can then be determined from the simulated lens stereo space.
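Steps 101 to 104 can be sketched as a minimal offline pipeline. This is a hypothetical illustration in Python: the data structures, field names, and helper functions are assumptions for clarity, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class ShootingParams:
    """One timestamped record of the UAV's dynamic shooting parameters."""
    timestamp: float       # seconds from the start of the target video
    position: tuple        # (x, y, z) dynamic flight coordinates
    attitude: tuple        # (roll, pitch, yaw) flight attitude in degrees
    gimbal: tuple          # (pitch, yaw) gimbal angles in degrees
    focal_length_mm: float
    fov_deg: float

@dataclass
class Watermark3D:
    text: str
    scale: float = 1.0
    rotation_deg: tuple = (0.0, 0.0, 0.0)

def build_lens_space(p):
    # Placeholder: a real implementation would derive a camera/projection
    # matrix from position, attitude, gimbal angles, focal length and FOV.
    return {"camera_pos": p.position, "fov": p.fov_deg}

def fuse(wm, lens_space):
    # Placeholder fusion: attach the watermark to the per-frame lens space.
    return {"text": wm.text, "space": lens_space}

def add_3d_watermark(watermark_text, params_stream):
    """Steps 101-104: receive watermark info, read the stored dynamic
    shooting parameters, build a simulated lens space per record, and
    fuse the watermark into it."""
    watermark = Watermark3D(text=watermark_text)     # step 101
    frames = []
    for p in params_stream:                          # step 102 (pre-recorded)
        lens_space = build_lens_space(p)             # step 103
        frames.append(fuse(watermark, lens_space))   # step 104
    return frames
```

Because the parameters are read back rather than re-estimated from pixels, no per-frame image analysis is needed, which is the source of the claimed speedup.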
Referring to fig. 2, in an embodiment, the obtaining of the dynamic shooting parameter information corresponding to the target video includes:
step 201: acquiring dynamic shooting parameters of the unmanned aerial vehicle when shooting a target video;
step 202: generating dynamic shooting parameter information corresponding to the target video according to the dynamic shooting parameters;
step 203: and storing the dynamic shooting parameter information in association with the target video.
Specifically, in the process of shooting a target video, the unmanned aerial vehicle can acquire its dynamic flight coordinates (x, y, z) through positioning modes such as GPS (Global Positioning System) or BeiDou, where x represents longitude information, y represents latitude information, and z represents flight altitude information; flight trajectory information and flight speed information are then generated from the changes of the dynamic flight coordinates, and flight attitude information is generated from the output data of a flight attitude sensor built into the unmanned aerial vehicle. Optionally, the attitude sensor comprises an Inertial Measurement Unit (IMU). Meanwhile, gimbal angle information is generated from the angle changes of the gimbal carried on the unmanned aerial vehicle, and lens focal length information and lens field-of-view angle information are generated from the shooting parameters of the camera lens carried on the unmanned aerial vehicle.
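The derivation of flight speed from successive flight coordinates can be sketched as follows, assuming the coordinates have already been converted from longitude/latitude into a local metric frame (raw GPS degrees would need a geodetic conversion first):

```python
import math

def flight_speeds(track):
    """Estimate flight speed between consecutive position fixes.

    track: list of (t, x, y, z) tuples, with t in seconds and x, y, z
    assumed to be in meters in a local frame (an assumption made here
    for simplicity; the patent records raw longitude/latitude/altitude).
    Returns one speed value (m/s) per consecutive pair of fixes.
    """
    speeds = []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(track, track[1:]):
        dist = math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2)
        speeds.append(dist / (t1 - t0))
    return speeds
```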
Further, the dynamic shooting parameter information and the target video are stored in an associated manner, so that a mapping relationship between the target video and the corresponding dynamic shooting parameter information is established, and when a three-dimensional watermark needs to be added to the target video, the dynamic shooting parameter information corresponding to the target video can be acquired according to the mapping relationship. For example, a mapping relationship between the target video and the corresponding dynamic shooting parameter information may be established by adding a specific type tag to the target video, and when a three-dimensional watermark needs to be added to the target video, the dynamic shooting parameter information corresponding to the target video may be acquired by reading the specific type tag. In an embodiment, the dynamic shooting parameter information may also be stored in a data stream of the target video, and further, when a three-dimensional watermark needs to be added to the target video, the corresponding dynamic shooting parameter information may be directly read from the data stream of the target video.
It can be understood that, when storing the dynamic shooting parameter information in association with the target video, time stamp information corresponding to different shooting parameter information needs to be recorded in the dynamic shooting parameter information, so that the shooting parameter information and the video data stream are associated with each other in time.
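One possible layout for this timestamp association is sketched below. The patent only requires that timestamps link parameter records to the video data stream; the store and nearest-record lookup are illustrative.

```python
import bisect

class ParamStore:
    """Keeps shooting-parameter records sorted by timestamp so that each
    video frame can be matched to the record nearest in time."""

    def __init__(self):
        self._times = []
        self._records = []

    def add(self, timestamp, record):
        # Insert while keeping the timestamp list sorted.
        i = bisect.bisect(self._times, timestamp)
        self._times.insert(i, timestamp)
        self._records.insert(i, record)

    def lookup(self, frame_time):
        """Return the record whose timestamp is closest to frame_time."""
        if not self._times:
            raise LookupError("no parameter records stored")
        i = bisect.bisect(self._times, frame_time)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self._times)]
        best = min(candidates, key=lambda j: abs(self._times[j] - frame_time))
        return self._records[best]
```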
Referring to fig. 3, after the generating the three-dimensional stereo watermark for the target video, the method further includes:
step 105: determining the dynamic relative position relationship between the unmanned aerial vehicle and a target object in the target video on a frame-by-frame basis;
step 106: and adjusting the display state of the three-dimensional watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video.
It is understood that after the simulated lens stereo space of the target video shot by the unmanned aerial vehicle is established, the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video can be determined frame by frame according to the simulated lens stereo space. Further, the display state of the three-dimensional watermark is adjusted according to the relative position relationship between the unmanned aerial vehicle corresponding to each frame of image in the target video and the target object.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic scaling of a target object in the target video according to at least one of the flight track information and the lens focal length information;
and adjusting the scaling size of the three-dimensional watermark according to the dynamic scaling of the target object.
It can be understood that, as at least one of the flight trajectory and the lens focal length changes, the scale of the target object in the video also changes; the scaling size of the three-dimensional stereo watermark can then be dynamically adjusted according to the change in the target object's scale, ensuring that the watermark is scaled synchronously with the target object. For example, if the proportion of the target object is 1 in a certain frame of the target video and 0.5 in the adjacent next frame, that is, the target object has halved in size between the two frames, then the three-dimensional stereo watermark can likewise be halved according to the target object's scaling, realizing dynamic adjustment of the watermark's scaling size and optimizing the watermark display effect.
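Under a simple pinhole-camera assumption, the on-screen size of the target object is proportional to the lens focal length divided by the camera-to-object distance, so the watermark's scaling ratio between two frames can be sketched like this (the model and function names are illustrative, not from the patent):

```python
def apparent_scale(focal_length, distance):
    """Pinhole approximation: on-screen size is proportional to
    focal length / distance to the object."""
    return focal_length / distance

def rescale_watermark(base_size, f0, d0, f1, d1):
    """Scale the watermark by the same ratio the target object changes
    between two frames. f0/f1 are the lens focal lengths and d0/d1 the
    camera-to-object distances (derived from the flight trajectory)."""
    ratio = apparent_scale(f1, d1) / apparent_scale(f0, d0)
    return base_size * ratio
```

Doubling the distance at constant focal length halves the watermark, matching the halving example above.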
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information and the flight attitude information;
and adjusting the rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic offset angle of the unmanned aerial vehicle relative to the target object.
It is understood that the position of the unmanned aerial vehicle relative to the target object may be changed during the process of shooting the target video, so that the unmanned aerial vehicle may have different offset angles relative to the target object in different frame images. In this embodiment, a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video is calculated according to the flight trajectory information and the flight attitude information, and then a rotation angle of the three-dimensional watermark relative to the simulated lens stereo space is adjusted according to the dynamic offset angle, so that the three-dimensional watermark can dynamically rotate along with a change of the offset angle of the unmanned aerial vehicle.
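A sketch of the offset-angle computation, assuming planar coordinates and a yaw angle measured in the same frame; counter-rotating the watermark by the full offset is an illustrative choice, since the patent does not fix the exact relationship:

```python
import math

def offset_angle_deg(uav_pos, uav_yaw_deg, target_pos):
    """Horizontal offset of the UAV's heading relative to the target:
    bearing from UAV to target minus the UAV's yaw, wrapped to
    [-180, 180). Positions are (x, y) in a common planar frame."""
    bearing = math.degrees(math.atan2(target_pos[1] - uav_pos[1],
                                      target_pos[0] - uav_pos[0]))
    return (bearing - uav_yaw_deg + 180.0) % 360.0 - 180.0

def watermark_rotation(uav_pos, uav_yaw_deg, target_pos):
    """Counter-rotate the watermark so it stays aligned with the target
    as the UAV's offset angle changes frame by frame."""
    return -offset_angle_deg(uav_pos, uav_yaw_deg, target_pos)
```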
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic rotation angle of a gimbal carried on the unmanned aerial vehicle according to the gimbal angle information, wherein the dynamic rotation angle of the gimbal comprises at least one of a dynamic pitch angle and a dynamic yaw angle;
adjusting the pitch rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the gimbal; and/or,
adjusting the lateral rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the gimbal.
Specifically, while the unmanned aerial vehicle shoots the target video, the gimbal angle is dynamically adjusted according to changes of the flight trajectory and flight attitude in order to keep the shooting lens stable; for example, the gimbal's pitch angle is adjusted according to changes of the flight altitude, and its yaw angle according to changes of the flight attitude. In this embodiment, the dynamic pitch angle and dynamic yaw angle of the gimbal during shooting are obtained; the pitch rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space is adjusted according to the dynamic pitch angle, and the lateral rotation angle relative to the simulated lens stereo space according to the dynamic yaw angle, so that the three-dimensional stereo watermark fuses better with the simulated lens stereo space and the watermark display effect is optimized.
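The gimbal-to-watermark mapping can be sketched as below; the 1:1 counter-rotation relative to a reference pose is an assumption for illustration, since the patent does not fix a specific mapping:

```python
def gimbal_to_watermark_rotation(gimbal_pitch_deg, gimbal_yaw_deg,
                                 ref_pitch_deg=0.0, ref_yaw_deg=0.0):
    """Map the gimbal's dynamic pitch/yaw (relative to a reference pose)
    to the watermark's pitch and lateral rotation in the simulated lens
    stereo space, counter-rotating so the watermark stays anchored to
    the scene as the gimbal moves.
    Returns (pitch_rotation_deg, lateral_rotation_deg)."""
    pitch_rotation = -(gimbal_pitch_deg - ref_pitch_deg)
    lateral_rotation = -(gimbal_yaw_deg - ref_yaw_deg)
    return pitch_rotation, lateral_rotation
```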
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic height of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information;
and adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space according to the dynamic height of the unmanned aerial vehicle relative to the target object.
Specifically, in the process of shooting a target video by the unmanned aerial vehicle, the dynamic height of the unmanned aerial vehicle relative to the target object changes with the flight trajectory. In this method, the dynamic height of the unmanned aerial vehicle relative to the target object is obtained from the flight trajectory information, and the pitch rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space is then adjusted according to that dynamic height, so that the display state of the watermark follows the change in height. For example, when the dynamic height is below a preset height threshold, the three-dimensional stereo watermark may stand upright relative to the reference ground plane of the simulated lens stereo space; when the dynamic height reaches or exceeds the threshold, the watermark may be dynamically adjusted to lie tiled against the reference ground plane, ensuring that it is presented more clearly at a high-altitude shooting angle.
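The height-dependent pose switch can be expressed as a simple threshold rule; the 50 m threshold below is an illustrative value, not specified in the patent:

```python
def watermark_orientation(uav_height_m, threshold_m=50.0):
    """Choose the watermark pose from the UAV's dynamic height over the
    target object: upright against the reference ground plane at low
    altitude, tiled flat on the ground plane at or above the threshold.
    The 50 m default is an illustrative value only."""
    return "upright" if uav_height_m < threshold_m else "tiled"
```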
In one embodiment, the method further comprises:
step 107: and correcting the display state of the three-dimensional stereo watermark according to at least one of the flight attitude information, the flight speed information, the gimbal angle information and the lens field-of-view angle information.
It can be understood that, because the unmanned aerial vehicle is in flight while shooting the target video, environmental factors inevitably destabilize its flight attitude; for example, wind-speed changes in the shooting environment can cause short-lived jitter or brief changes in flight speed, which in turn affect the gimbal angle and the lens field-of-view angle. The display state of the three-dimensional stereo watermark may change under such short-lived disturbances, degrading the watermark display effect. In this embodiment, the display state of the three-dimensional stereo watermark is corrected according to at least one of the flight attitude information, the flight speed information, the gimbal angle information, and the lens field-of-view angle information (for example, the rotation angle of the watermark is adjusted according to the flight attitude information), reducing the influence of short-lived attitude changes on the watermark's display state and further optimizing the display effect.
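One common way to damp such short-lived jitter before an angle is applied to the watermark is a low-pass filter, for example an exponential moving average; the filter choice and the alpha value are assumptions, not taken from the patent:

```python
def smooth_angles(raw_angles, alpha=0.2):
    """Exponential moving average over a stream of per-frame angles,
    damping short-lived jitter (e.g. from wind gusts) before the angle
    is used to correct the watermark's display state. Smaller alpha
    means heavier smoothing; 0.2 is an illustrative value."""
    smoothed = []
    current = raw_angles[0]
    for a in raw_angles:
        current = alpha * a + (1 - alpha) * current
        smoothed.append(current)
    return smoothed
```

A one-frame spike of 10 degrees is reduced to 2 degrees with alpha = 0.2, so the watermark barely reacts to the disturbance.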
In one embodiment, before receiving the target watermark information, the method further includes:
reading the target video from the unmanned aerial vehicle and playing the target video off line;
and generating a watermark editing identifier on the offline playing interface of the target video, wherein the watermark editing identifier is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
Specifically, when the unmanned aerial vehicle shoots the target video, the dynamic shooting parameter information can be recorded and stored in association with the target video. When a three-dimensional watermark needs to be added to a target video, a user can establish communication connection with the unmanned aerial vehicle through an intelligent terminal such as a mobile phone and the like, so that the target video and the dynamic shooting parameter information stored in association with the target video are downloaded from the unmanned aerial vehicle, the target video is played and edited offline through video editing software on the intelligent terminal, and the three-dimensional watermark is added.
Referring to fig. 4A, 400 is an intelligent terminal, 410 is an offline playing interface of a target video, and 430 is a target object in the target video. When the target video is played offline through the video editing software on the intelligent terminal 400, the watermark editing identifier 411 may be generated on the offline playing interface 410, and then a three-dimensional watermark adding instruction for the target video may be received through the watermark editing identifier 411.
Referring to fig. 4B, after the watermark editing identifier 411 receives a three-dimensional watermark adding instruction for the target video, a watermark information input interface 413 may be generated on the offline playing interface 410 for inputting target watermark information. For example, the watermark information input interface may be a virtual keyboard, and further may receive text watermark information input by a user through the virtual keyboard; or, the watermark information input interface may also be a file selection window, and the corresponding image watermark information or animation watermark information may be selected through the file selection window.
It can be understood that in the shooting process of the target video, the target video can be obtained from the unmanned aerial vehicle in real time through the intelligent terminal and synchronously played on line, and dynamic shooting parameter information corresponding to the target video is obtained; and further generating a watermark editing identifier on the online playing interface of the target video so as to receive a three-dimensional watermark adding instruction aiming at the target video through the watermark editing identifier.
It can be understood that when the watermark information is input through the watermark information input interface, the video editing software may establish a simulated lens stereo space in which the unmanned aerial vehicle shoots the target video according to the dynamic shooting parameter information, and generate a corresponding three-dimensional stereo watermark, such as a text watermark "HELLOW" shown in fig. 4B, on the target video in real time according to the simulated lens stereo space.
Referring to fig. 4C, after generating the three-dimensional stereoscopic watermark for the target video, the method further includes:
receiving an editing instruction aiming at the three-dimensional watermark;
adjusting the display state of the three-dimensional watermark according to the editing instruction;
wherein the adjusting the display state of the three-dimensional stereoscopic watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional stereoscopic watermark.
The editing instruction may be a touch operation instruction directly aiming at the three-dimensional watermark "HELLOW", for example, a touch operation instruction such as drag-and-drop, stretch, shrink, rotation, and the like, so as to manually adjust the display state of the three-dimensional watermark.
It is understood that after the three-dimensional stereo watermark for the target video is generated, a hiding instruction for the three-dimensional stereo watermark may also be received through the watermark editing identifier 411; and triggering the three-dimensional watermark in the target video to be switched from a display state to a hidden state according to the hiding instruction. It is to be understood that the hiding instruction for the three-dimensional stereoscopic watermark may also be a specific touch gesture directly on the playing interface of the target video.
Referring to fig. 4D, after the three-dimensional watermark for the target video is generated, as the target video is played, the display state of the three-dimensional watermark "HELLOW" is dynamically adjusted according to the change of the position relationship of the unmanned aerial vehicle relative to the target object 430, for example, dynamic scaling is performed according to the distance of the lens relative to the target object 430 or the change of the focal length of the lens, so as to finally realize the fusion of the three-dimensional watermark and the simulated lens stereo space.
It is understood that all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Referring to fig. 5, in an embodiment of the present invention, a three-dimensional watermark adding apparatus 500 is provided, including:
a watermark input unit 501, configured to receive target watermark information;
a parameter obtaining unit 502, configured to obtain dynamic shooting parameter information corresponding to a target video, where the dynamic shooting parameter information is used to record a dynamic shooting parameter of an unmanned aerial vehicle when shooting the target video;
a space simulation unit 503, configured to establish a simulated lens stereo space in which the unmanned aerial vehicle shoots the target video according to the dynamic shooting parameter information;
a watermark generating unit 504, configured to fuse the target watermark information with the simulated lens stereo space, and generate a three-dimensional stereo watermark for the target video.
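The cooperation of units 501 to 504 can be illustrated with a toy per-frame fusion loop. All type and field names below are hypothetical, and the f/Z scaling is a deliberately simplified stand-in for the full simulated lens stereo space:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ShotParams:
    """Dynamic shooting parameters recorded for one video frame (assumed layout)."""
    distance_m: float        # distance from the lens to the target object
    gimbal_pitch_deg: float  # pan-tilt pitch angle
    gimbal_yaw_deg: float    # pan-tilt yaw angle
    focal_mm: float          # lens focal length

@dataclass
class Watermark3D:
    text: str
    poses: List[Dict] = field(default_factory=list)  # one display state per frame

def generate_watermark(text: str, frames: List[ShotParams]) -> Watermark3D:
    """Fuse the target watermark text with the simulated lens space by
    deriving one display state per frame from the shooting parameters."""
    wm = Watermark3D(text)
    for p in frames:
        wm.poses.append({
            "scale": p.focal_mm / p.distance_m,   # toy f/Z scaling
            "pitch_deg": p.gimbal_pitch_deg,
            "yaw_deg": p.gimbal_yaw_deg,
        })
    return wm
```

The point of the sketch is the data flow: recorded per-frame parameters in, a per-frame watermark display state out, with no need to analyse the video pixels themselves.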
In an embodiment, the parameter obtaining unit 502 is specifically configured to:
the method comprises the steps that dynamic shooting parameters of an unmanned aerial vehicle are obtained when the unmanned aerial vehicle shoots a target video, and dynamic shooting parameter information corresponding to the target video is generated;
and storing the dynamic shooting parameter information in association with the target video.
In one embodiment, the dynamic shooting parameter information includes at least one of flight trajectory information, flight attitude information, flight speed information, pan-tilt angle information, lens focal length information, and lens view angle information of the unmanned aerial vehicle.
In an embodiment, the spatial simulation unit 503 is specifically configured to:
establishing a simulated lens three-dimensional space of the unmanned aerial vehicle according to at least one of the flight track information, the flight attitude information, the flight speed information, the pan-tilt angle information, the lens focal length information and the lens view field angle information;
wherein the simulated lens stereo space is used for determining a dynamic relative position relationship between the unmanned aerial vehicle and a target object in the target video.
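As a concrete sketch of what the "dynamic relative position relationship" can mean, the UAV's position and the target's position can be reduced to a ground distance, a height difference and a bearing. The helper below is plain geometry with hypothetical names, not the patent's exact computation:

```python
import math

def relative_position(uav_xyz, target_xyz):
    """Dynamic relative position of the target in the simulated lens space:
    ground distance, height of the UAV above the target, and bearing."""
    dx = target_xyz[0] - uav_xyz[0]
    dy = target_xyz[1] - uav_xyz[1]
    dz = uav_xyz[2] - target_xyz[2]          # UAV height above the target
    return {
        "ground_dist": math.hypot(dx, dy),   # horizontal distance in metres
        "height": dz,
        "bearing_deg": math.degrees(math.atan2(dy, dx)),
    }
```

Each quantity feeds one of the adjustments described below: the distance drives scaling, the bearing drives rotation, and the height drives the pitch of the watermark.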
Referring to fig. 6, in an embodiment, the three-dimensional watermark adding apparatus 500 further includes a watermark adjusting unit 505, configured to:
determining a dynamic relative positional relationship between the UAV and a target object in the target video on a frame-by-frame basis;
and adjusting the display state of the three-dimensional watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video.
In an embodiment, the watermark adjusting unit 505 is specifically configured to:
calculating the dynamic scaling of a target object in the target video according to at least one of the flight track information and the lens focal length information;
and adjusting the scaling size of the three-dimensional watermark according to the dynamic scaling of the target object.
In an embodiment, the watermark adjusting unit 505 is specifically configured to:
calculating a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information and the flight attitude information;
and adjusting the rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic offset angle of the unmanned aerial vehicle relative to the target object.
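One plausible reading of this offset-angle adjustment, sketched under the assumption that the flight heading and the bearing to the target are expressed in the same reference frame (names and sign convention are illustrative):

```python
def watermark_yaw(flight_heading_deg: float, bearing_to_target_deg: float) -> float:
    """Rotation applied to the watermark from the UAV's dynamic offset angle.

    The offset is the difference between the bearing to the target and the
    flight heading, wrapped to (-180, 180]; the watermark is counter-rotated
    by this offset so it stays aligned with the scene as the UAV turns.
    """
    offset = (bearing_to_target_deg - flight_heading_deg) % 360.0
    if offset > 180.0:
        offset -= 360.0
    return -offset
```

The wrap step matters: without it, a heading of 350° and a bearing of 10° would produce a 340° jump instead of the small 20° correction the viewer actually sees.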
In an embodiment, the watermark adjusting unit 505 is specifically configured to:
calculating a dynamic rotation angle of a pan-tilt mounted on the unmanned aerial vehicle according to the pan-tilt angle information, wherein the dynamic rotation angle of the pan-tilt comprises at least one of a dynamic pitch angle and a dynamic yaw angle;
adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the pan-tilt; and/or,
adjusting the transverse rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the pan-tilt.
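A minimal sketch of this pitch/yaw mapping: the watermark is counter-rotated by the pan-tilt's dynamic angles so that it appears fixed in the scene while the camera turns. The sign convention and pose dictionary are assumptions; the patent does not fix them:

```python
def apply_gimbal_rotation(watermark_pose: dict, gimbal_pitch_deg: float,
                          gimbal_yaw_deg: float) -> dict:
    """Map the pan-tilt's dynamic pitch to the watermark's pitching rotation
    and its dynamic yaw to the transverse rotation, relative to the
    simulated lens stereo space (counter-rotation assumed)."""
    pose = dict(watermark_pose)  # keep the input pose unmodified
    pose["pitch_deg"] = pose.get("pitch_deg", 0.0) - gimbal_pitch_deg
    pose["yaw_deg"] = pose.get("yaw_deg", 0.0) - gimbal_yaw_deg
    return pose
```

For example, when the pan-tilt pitches down by 30°, the watermark is pitched up by the same 30° so its orientation in the simulated space is preserved.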
In an embodiment, the watermark adjusting unit 505 is specifically configured to:
calculating the dynamic height of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information;
and adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space according to the dynamic height of the unmanned aerial vehicle relative to the target object.
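The height-based pitch adjustment can be read as using the depression angle of the line of sight from the UAV down to the target. A one-line geometric sketch (an assumption, not the patent's exact formula):

```python
import math

def pitch_from_height(height_m: float, ground_dist_m: float) -> float:
    """Pitching rotation (degrees) of the watermark derived from the UAV's
    dynamic height above the target: the depression angle of the line of
    sight over the horizontal ground distance."""
    return math.degrees(math.atan2(height_m, ground_dist_m))
```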
In an embodiment, the watermark adjusting unit 505 is further configured to:
and correcting the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan-tilt angle information and the lens view field angle information.
Referring to fig. 7, in an embodiment, the three-dimensional watermark adding apparatus 500 further includes:
a video obtaining unit 506, configured to read the target video from the unmanned aerial vehicle and play it offline;
and an identifier generating unit 507, configured to generate a watermark editing identifier on the offline playing interface of the target video, where the watermark editing identifier is used to receive a three-dimensional watermark adding instruction for the target video.
In one embodiment, the video obtaining unit 506 is further configured to obtain and synchronously play the target video online in real time from the unmanned aerial vehicle during the shooting process of the target video;
the identifier generating unit 507 is further configured to generate a watermark editing identifier on the online playing interface of the target video, where the watermark editing identifier is used to receive a three-dimensional watermark adding instruction for the target video.
Referring to fig. 7, in an embodiment, the three-dimensional watermark adding apparatus 500 further includes a watermark editing unit 508, configured to:
receiving an editing instruction aiming at the three-dimensional watermark;
adjusting the display state of the three-dimensional watermark according to the editing instruction;
wherein the adjusting the display state of the three-dimensional stereoscopic watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional stereoscopic watermark.
Referring to fig. 7, in an embodiment, the three-dimensional watermark adding apparatus 500 further includes a watermark hiding unit 509, configured to:
receiving a hiding instruction for the three-dimensional watermark;
and triggering the three-dimensional watermark in the target video to be switched from a display state to a hidden state according to the hiding instruction.
In one embodiment, the three-dimensional stereoscopic watermark includes at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animation watermark.
It can be understood that the functions and specific implementations of the units in the three-dimensional watermark adding apparatus 500 may also refer to the related descriptions in the method embodiments shown in fig. 1 to fig. 4, and are not described herein again.
Referring to fig. 8, in an embodiment of the present invention, a terminal 800 is provided, which includes a processor 801 and a memory 803, where the processor 801 is electrically connected to the memory 803, the memory 803 is used for storing executable program instructions, and the processor 801 is used for reading the executable program instructions in the memory 803 and performing the following operations:
receiving target watermark information;
acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereo watermark for the target video.
In one embodiment, the obtaining of the dynamic shooting parameter information corresponding to the target video includes:
acquiring dynamic shooting parameters of the unmanned aerial vehicle when shooting a target video;
generating dynamic shooting parameter information corresponding to the target video according to the dynamic shooting parameters;
and storing the dynamic shooting parameter information in association with the target video.
In one embodiment, the dynamic shooting parameter information includes at least one of flight trajectory information, flight attitude information, flight speed information, pan-tilt angle information, lens focal length information, and lens view angle information of the unmanned aerial vehicle.
In one embodiment, the establishing a simulated lens stereo space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information includes:
establishing a simulated lens three-dimensional space of the unmanned aerial vehicle according to at least one of the flight track information, the flight attitude information, the flight speed information, the pan-tilt angle information, the lens focal length information and the lens view field angle information;
wherein the simulated lens stereo space is used for determining a dynamic relative position relationship between the unmanned aerial vehicle and a target object in the target video.
In one embodiment, the operations further comprise:
determining a dynamic relative positional relationship between the UAV and a target object in the target video on a frame-by-frame basis;
and adjusting the display state of the three-dimensional watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic scaling of a target object in the target video according to at least one of the flight track information and the lens focal length information;
and adjusting the scaling size of the three-dimensional watermark according to the dynamic scaling of the target object.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information and the flight attitude information;
and adjusting the rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic offset angle of the unmanned aerial vehicle relative to the target object.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating a dynamic rotation angle of a pan-tilt mounted on the unmanned aerial vehicle according to the pan-tilt angle information, wherein the dynamic rotation angle of the pan-tilt comprises at least one of a dynamic pitch angle and a dynamic yaw angle;
adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the pan-tilt; and/or,
adjusting the transverse rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the pan-tilt.
In one embodiment, the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video includes:
calculating the dynamic height of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information;
and adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space according to the dynamic height of the unmanned aerial vehicle relative to the target object.
In one embodiment, the operations further comprise:
and correcting the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan-tilt angle information and the lens view field angle information.
In one embodiment, before receiving the target watermark information, the operations further include:
reading the target video from the unmanned aerial vehicle and playing the target video off line;
and generating a watermark editing identifier on the offline playing interface of the target video, wherein the watermark editing identifier is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
In one embodiment, before receiving the target watermark information, the operations further include:
in the shooting process of a target video, acquiring the target video from an unmanned aerial vehicle in real time and synchronously playing the target video on line;
and generating a watermark editing identifier on the online playing interface of the target video, wherein the watermark editing identifier is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
In one embodiment, after the generating the three-dimensional stereoscopic watermark for the target video, the operations further include:
receiving an editing instruction aiming at the three-dimensional watermark;
adjusting the display state of the three-dimensional watermark according to the editing instruction;
wherein the adjusting the display state of the three-dimensional stereoscopic watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional stereoscopic watermark.
In one embodiment, after the generating the three-dimensional stereoscopic watermark for the target video, the operations further include:
receiving a hiding instruction for the three-dimensional watermark;
and triggering the three-dimensional watermark in the target video to be switched from a display state to a hidden state according to the hiding instruction.
In one embodiment, the three-dimensional stereoscopic watermark includes at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animation watermark.
It is understood that the specific steps of the operations executed by the processor 801 and the specific implementation thereof may also refer to the description in the method embodiments shown in fig. 1 to fig. 4, and are not described herein again.
With the three-dimensional watermark adding method, apparatus and terminal described above, the dynamic shooting parameter information recorded while the unmanned aerial vehicle shoots the target video is acquired, so that when a watermark needs to be added to the target video, a simulated lens stereo space in which the unmanned aerial vehicle shot the target video can be established from that information. By fusing the target watermark information with the simulated lens stereo space, a three-dimensional watermark for the target video can be generated quickly, which helps shorten the generation time of the three-dimensional watermark. Meanwhile, the display state of the three-dimensional watermark can be dynamically adjusted according to the dynamic shooting parameters, optimizing the display effect of the watermark.
It should be understood that the above-described embodiments are merely some embodiments of the present invention and do not limit the scope of the invention; equivalent variations made by those skilled in the art according to the claims of the present invention still fall within the scope of the invention.

Claims (31)

1. A three-dimensional watermark adding method is characterized by comprising the following steps:
receiving target watermark information;
acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
and fusing the target watermark information and the simulated lens stereo space to generate a three-dimensional stereo watermark aiming at the target video, wherein the display state of the three-dimensional stereo watermark in the target video is dynamically adjusted.
2. The method of claim 1, wherein the obtaining of the dynamic shooting parameter information corresponding to the target video comprises:
acquiring dynamic shooting parameters of the unmanned aerial vehicle when shooting a target video;
generating dynamic shooting parameter information corresponding to the target video according to the dynamic shooting parameters;
and storing the dynamic shooting parameter information in association with the target video.
3. The method of claim 1 or 2, wherein the dynamic photographing parameter information includes at least one of flight trajectory information, flight attitude information, flight speed information, pan-tilt angle information, lens focal length information, and lens field angle information of the unmanned aerial vehicle.
4. The method of claim 3, wherein the establishing a simulated lens stereo space for the UAV to capture the target video according to the dynamic capture parameter information comprises:
establishing a simulated lens three-dimensional space of the unmanned aerial vehicle according to at least one of the flight track information, the flight attitude information, the flight speed information, the pan-tilt angle information, the lens focal length information and the lens view field angle information;
wherein the simulated lens stereo space is used for determining a dynamic relative position relationship between the unmanned aerial vehicle and a target object in the target video.
5. The method of claim 4, wherein the method further comprises:
determining a dynamic relative positional relationship between the UAV and a target object in the target video on a frame-by-frame basis;
and adjusting the display state of the three-dimensional watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video.
6. The method of claim 5, wherein the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative positional relationship between the UAV and the target object in the target video comprises:
calculating the dynamic scaling of a target object in the target video according to at least one of the flight track information and the lens focal length information;
and adjusting the scaling size of the three-dimensional watermark according to the dynamic scaling of the target object.
7. The method of claim 5, wherein the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative positional relationship between the UAV and the target object in the target video comprises:
calculating a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information and the flight attitude information;
and adjusting the rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic offset angle of the unmanned aerial vehicle relative to the target object.
8. The method of claim 5, wherein the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative positional relationship between the UAV and the target object in the target video comprises:
calculating a dynamic rotation angle of a pan-tilt mounted on the unmanned aerial vehicle according to the pan-tilt angle information, wherein the dynamic rotation angle of the pan-tilt comprises at least one of a dynamic pitch angle and a dynamic yaw angle;
adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the pan-tilt; and/or,
adjusting the transverse rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the pan-tilt.
9. The method of claim 5, wherein the adjusting the display state of the three-dimensional stereoscopic watermark according to the dynamic relative positional relationship between the UAV and the target object in the target video comprises:
calculating the dynamic height of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information;
and adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space according to the dynamic height of the unmanned aerial vehicle relative to the target object.
10. The method of any of claims 6 to 9, further comprising:
and correcting the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan-tilt angle information and the lens view field angle information.
11. The method of claim 1, wherein prior to receiving the target watermark information, the method further comprises:
reading the target video from the unmanned aerial vehicle and playing the target video off line;
and generating a watermark editing identifier on the offline playing interface of the target video, wherein the watermark editing identifier is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
12. The method of claim 1, wherein prior to receiving the target watermark information, the method further comprises:
in the shooting process of a target video, acquiring the target video from an unmanned aerial vehicle in real time and synchronously playing the target video on line;
and generating a watermark editing identifier on the online playing interface of the target video, wherein the watermark editing identifier is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
13. The method of claim 11 or 12, wherein after the generating the three-dimensional stereoscopic watermark for the target video, the method further comprises:
receiving an editing instruction aiming at the three-dimensional watermark;
adjusting the display state of the three-dimensional watermark according to the editing instruction;
wherein the adjusting the display state of the three-dimensional stereoscopic watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional stereoscopic watermark.
14. The method of claim 11 or 12, wherein after the generating the three-dimensional stereoscopic watermark for the target video, the method further comprises:
receiving a hiding instruction for the three-dimensional watermark;
and triggering the three-dimensional watermark in the target video to be switched from a display state to a hidden state according to the hiding instruction.
15. The method of claim 1, wherein the three-dimensional stereoscopic watermark comprises at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animated watermark.
16. A three-dimensional watermark adding apparatus, comprising:
a watermark input unit for receiving target watermark information;
the parameter acquisition unit is used for acquiring dynamic shooting parameter information corresponding to a target video, and the dynamic shooting parameter information is used for recording dynamic shooting parameters of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the target video;
the space simulation unit is used for establishing a simulated lens three-dimensional space for the unmanned aerial vehicle to shoot the target video according to the dynamic shooting parameter information;
and the watermark generating unit is used for fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereo watermark aiming at the target video, and the display state of the three-dimensional stereo watermark in the target video is dynamically adjusted.
17. The apparatus of claim 16, wherein the parameter obtaining unit is specifically configured to:
acquiring dynamic shooting parameters of the unmanned aerial vehicle when shooting a target video;
generating dynamic shooting parameter information corresponding to the target video according to the dynamic shooting parameters;
and storing the dynamic shooting parameter information in association with the target video.
18. The apparatus according to claim 16 or 17, wherein the dynamic photographing parameter information includes at least one of flight trajectory information, flight attitude information, flight speed information, pan-tilt angle information, lens focal length information, and lens field angle information of the unmanned aerial vehicle.
19. The apparatus of claim 18, wherein the spatial simulation unit is specifically configured to:
establishing a simulated lens three-dimensional space of the unmanned aerial vehicle according to at least one of the flight track information, the flight attitude information, the flight speed information, the pan-tilt angle information, the lens focal length information and the lens view field angle information;
wherein the simulated lens stereo space is used for determining a dynamic relative position relationship between the unmanned aerial vehicle and a target object in the target video.
20. The apparatus of claim 18, wherein the apparatus further comprises a watermark adjustment unit to:
determining a dynamic relative positional relationship between the UAV and a target object in the target video on a frame-by-frame basis;
and adjusting the display state of the three-dimensional watermark according to the dynamic relative position relationship between the unmanned aerial vehicle and the target object in the target video.
21. The apparatus as claimed in claim 20, wherein said watermark adjusting unit is specifically configured to:
calculating the dynamic scaling of a target object in the target video according to at least one of the flight track information and the lens focal length information;
and adjusting the scaling size of the three-dimensional watermark according to the dynamic scaling of the target object.
22. The apparatus as claimed in claim 20, wherein said watermark adjusting unit is specifically configured to:
calculating a dynamic offset angle of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information and the flight attitude information;
and adjusting the rotation angle of the three-dimensional stereo watermark relative to the simulated lens stereo space according to the dynamic offset angle of the unmanned aerial vehicle relative to the target object.
23. The apparatus as claimed in claim 20, wherein said watermark adjusting unit is specifically configured to:
calculating a dynamic rotation angle of a pan-tilt mounted on the unmanned aerial vehicle according to the pan-tilt angle information, wherein the dynamic rotation angle of the pan-tilt comprises at least one of a dynamic pitch angle and a dynamic yaw angle;
adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic pitch angle of the pan-tilt; and/or,
adjusting the transverse rotation angle of the three-dimensional watermark relative to the simulated lens stereo space according to the dynamic yaw angle of the pan-tilt.
24. The apparatus as claimed in claim 20, wherein said watermark adjusting unit is specifically configured to:
calculating the dynamic height of the unmanned aerial vehicle relative to a target object in the target video according to the flight track information;
and adjusting the pitching rotation angle of the three-dimensional watermark relative to the simulated lens three-dimensional space according to the dynamic height of the unmanned aerial vehicle relative to the target object.
25. The apparatus of any of claims 21 to 24, wherein the watermark adjusting unit is further configured to:
and correcting the display state of the three-dimensional watermark according to at least one of the flight attitude information, the flight speed information, the pan-tilt angle information and the lens view field angle information.
26. The apparatus of claim 16, wherein the apparatus further comprises:
the video acquisition unit is used for reading the target video from the unmanned aerial vehicle and playing the target video off line;
and the mark generation unit is used for generating a watermark editing mark on the off-line playing interface of the target video, wherein the watermark editing mark is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
27. The apparatus of claim 16, wherein the apparatus further comprises:
the video acquisition unit is used for acquiring and synchronously playing the target video on line from the unmanned aerial vehicle in real time in the shooting process of the target video;
and the mark generating unit is used for generating a watermark editing mark on the online playing interface of the target video, wherein the watermark editing mark is used for receiving a three-dimensional watermark adding instruction aiming at the target video.
28. An apparatus according to claim 26 or 27, wherein the apparatus further comprises a watermark editing unit for:
receiving an editing instruction aiming at the three-dimensional watermark;
adjusting the display state of the three-dimensional watermark according to the editing instruction;
wherein the adjusting the display state of the three-dimensional stereoscopic watermark includes adjusting at least one of a zoom size, a display position, and a rotation angle of the three-dimensional stereoscopic watermark.
29. An apparatus according to claim 26 or 27, wherein the apparatus further comprises a watermark hiding unit for:
receiving a hiding instruction for the three-dimensional watermark;
and triggering the three-dimensional watermark in the target video to be switched from a display state to a hidden state according to the hiding instruction.
30. The apparatus of claim 16, wherein the three-dimensional stereoscopic watermark comprises at least one of a three-dimensional text watermark, a three-dimensional image watermark, and a three-dimensional animated watermark.
31. A terminal comprising a processor and a memory, the processor being electrically coupled to the memory, the memory being configured to store executable program instructions, the processor being configured to read the executable program instructions from the memory and perform the following operations:
receiving target watermark information;
acquiring dynamic shooting parameter information corresponding to a target video, wherein the dynamic shooting parameter information records the dynamic shooting parameters of the unmanned aerial vehicle while it shot the target video;
establishing, according to the dynamic shooting parameter information, a simulated lens stereo space in which the unmanned aerial vehicle shot the target video;
and fusing the target watermark information with the simulated lens stereo space to generate a three-dimensional stereoscopic watermark for the target video, wherein the display state of the three-dimensional stereoscopic watermark in the target video is dynamically adjusted.
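The terminal claim above describes a pipeline: per-frame dynamic shooting parameters drive a simulated lens (a virtual camera), and the watermark is fused into that space so its on-screen state tracks the footage. The Python sketch below illustrates one plausible reading of that pipeline, not the patent's actual implementation: the telemetry field names (`pos`, `yaw`), the flat local coordinate frame, the yaw-only pinhole camera, and the intrinsics (`focal_px`, principal point) are all assumptions for illustration.

```python
import math

def project(cam_pos, cam_yaw_deg, focal_px, cx, cy, world_pt):
    """Project a world-space point through a yaw-only pinhole camera
    simulating the drone's lens for one frame."""
    dx = world_pt[0] - cam_pos[0]
    dy = world_pt[1] - cam_pos[1]
    dz = world_pt[2] - cam_pos[2]
    a = math.radians(cam_yaw_deg)
    # Rotate the world offset into the camera frame (yaw about the y axis).
    xc = dx * math.cos(a) - dz * math.sin(a)
    zc = dx * math.sin(a) + dz * math.cos(a)
    if zc <= 0:
        return None                      # behind the lens: watermark hidden
    return (cx + focal_px * xc / zc, cy - focal_px * dy / zc)

def render_watermark_track(telemetry, watermark_anchor, focal_px=1000.0,
                           cx=960.0, cy=540.0):
    """For each telemetry sample (one per frame), yield the watermark
    anchor's 2D screen position, or None when it falls behind the
    simulated lens, so its display state adjusts dynamically."""
    return [project(t["pos"], t["yaw"], focal_px, cx, cy, watermark_anchor)
            for t in telemetry]
```

In a full implementation the telemetry would also carry gimbal pitch/roll and lens parameters per frame, and the projected position would drive the watermark renderer for that frame.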
CN201780004602.6A 2017-04-28 2017-04-28 Three-dimensional watermark adding method, device and terminal Active CN108475410B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/082352 WO2018195892A1 (en) 2017-04-28 2017-04-28 Method and apparatus for adding three-dimensional stereoscopic watermark, and terminal

Publications (2)

Publication Number Publication Date
CN108475410A CN108475410A (en) 2018-08-31
CN108475410B true CN108475410B (en) 2022-03-22

Family

ID=63265979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780004602.6A Active CN108475410B (en) 2017-04-28 2017-04-28 Three-dimensional watermark adding method, device and terminal

Country Status (2)

Country Link
CN (1) CN108475410B (en)
WO (1) WO2018195892A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020107406A1 (en) * 2018-11-30 2020-06-04 深圳市大疆创新科技有限公司 Photographed image processing method and related device
CN109963204B (en) * 2019-04-24 2023-12-22 努比亚技术有限公司 Watermark adding method and device, mobile terminal and readable storage medium


Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US7042470B2 (en) * 2001-03-05 2006-05-09 Digimarc Corporation Using embedded steganographic identifiers in segmented areas of geographic images and characteristics corresponding to imagery data derived from aerial platforms
US7061510B2 (en) * 2001-03-05 2006-06-13 Digimarc Corporation Geo-referencing of aerial imagery using embedded image identifiers and cross-referenced data sets
US7254249B2 (en) * 2001-03-05 2007-08-07 Digimarc Corporation Embedding location data in video
US8798148B2 (en) * 2007-06-15 2014-08-05 Physical Optics Corporation Apparatus and method employing pre-ATR-based real-time compression and video frame segmentation
US20100228406A1 (en) * 2009-03-03 2010-09-09 Honeywell International Inc. UAV Flight Control Method And System
CN105373629A (en) * 2015-12-17 2016-03-02 谭圆圆 Unmanned aerial vehicle-based flight condition data processing device and method
CN106339079A (en) * 2016-08-08 2017-01-18 清华大学深圳研究生院 Method and device for realizing virtual reality by using unmanned aerial vehicle based on computer vision

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN101137008A (en) * 2007-07-11 2008-03-05 裘炅 Camera device and method for concealing position information in video, audio or image
KR20150104305A (en) * 2014-03-05 2015-09-15 광운대학교 산학협력단 A watermarking method for 3D stereoscopic image based on depth and texture images
CN104767816A (en) * 2015-04-15 2015-07-08 百度在线网络技术(北京)有限公司 Photography information collecting method, device and terminal

Non-Patent Citations (2)

Title
Li Wenfeng, "Research on Stereoscopic Video Watermarking Technology and Its Applications", China Master's Theses Full-text Database, 2016-03-15, pp. I138-89 *
Zhu Zhongjie et al., "Adaptive Digital Watermarking Algorithm for Stereoscopic Video", Journal of Image and Graphics, 2007-01, pp. 68-71 *

Also Published As

Publication number Publication date
CN108475410A (en) 2018-08-31
WO2018195892A1 (en) 2018-11-01

Similar Documents

Publication Publication Date Title
US11854149B2 (en) Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
US11381758B2 (en) System and method for acquiring virtual and augmented reality scenes by a user
US11074755B2 (en) Method, device, terminal device and storage medium for realizing augmented reality image
CN106683195B (en) AR scene rendering method based on indoor positioning
CN109471842B (en) Image file format, image file generating method, image file generating device and application
CN110799921A (en) Shooting method and device and unmanned aerial vehicle
CN109076263B (en) Video data processing method, device, system and storage medium
CN108702464B (en) Video processing method, control terminal and mobile device
US9756260B1 (en) Synthetic camera lenses
JP2018067301A (en) Methods, devices and systems for automatic zoom when playing augmented reality scene
WO2019000325A1 (en) Augmented reality method for aerial photography of unmanned aerial vehicle, processor, and unmanned aerial vehicle
EP3273318A1 (en) Autonomous system for collecting moving images by a drone with target tracking and improved target positioning
CN106791360A (en) Generate the method and device of panoramic video
CN112312113B (en) Method, device and system for generating three-dimensional model
WO2019204372A1 (en) R-snap for production of augmented realities
WO2022027447A1 (en) Image processing method, and camera and mobile terminal
CN110263615A (en) Interaction processing method, device, equipment and client in vehicle shooting
CN108475410B (en) Three-dimensional watermark adding method, device and terminal
CN112995507A (en) Method and device for prompting object position
US11875080B2 (en) Object sharing method and apparatus
KR20190122077A (en) Horizontal Correction Method for Spherical Image Recording Using Drones
WO2023201574A1 (en) Control method for unmanned aerial vehicle, image display method, unmanned aerial vehicle, and control terminal
CN115237363A (en) Picture display method, device, equipment and medium
KR101741149B1 (en) Method and device for controlling a virtual camera's orientation
JP2021093592A (en) Image processing device, image processing method, program, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant