CN108055567B - Video processing method and device, terminal equipment and storage medium - Google Patents

Info

Publication number: CN108055567B
Application number: CN201711010208.1A
Authority: CN (China)
Prior art keywords: watermark, image, image data, added, synthesizing
Legal status: Active (application granted)
Other languages: Chinese (zh)
Other versions: CN108055567A (en)
Inventors: 刘飞跃, 田东渭, 郭伟, 王程博, 杨玉奇, 周朗
Current assignee: Beijing environment and Wind Technology Co., Ltd.
Original assignee / applicant: Beijing Mijinghefeng Technology Co ltd
Priority: CN201711010208.1A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8358Generation of protective data, e.g. certificates involving watermark
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

Embodiments of the invention provide a video processing method, a video processing apparatus, a terminal device and a storage medium, relating to the field of Internet technology. The method includes: obtaining source video data and obtaining a watermark adding strategy, where the watermark adding strategy includes one or more of the following: watermark content, the number of frames corresponding to the watermark content, and a watermark position; extracting each frame of image data from the source video data; processing the image data according to the watermark adding strategy to obtain each frame of watermark image data; and synthesizing the watermark image data into watermarked target video data and storing the target video data. Embodiments of the invention make it possible to add a dynamic watermark to video data.

Description

Video processing method and device, terminal equipment and storage medium
Technical Field
The invention relates to the technical field of internet, in particular to a video processing method and device, a terminal device and a storage medium.
Background
With the development of terminal technology, terminal devices such as mobile phones and tablet computers have become increasingly popular and bring great convenience to people's daily life, study and work.
Various applications are generally installed on these terminal devices, so that users can perform the operations they need, such as playing games through a game application or publishing and playing audio and video through a video application.
Disclosure of Invention
The invention provides a video processing method, a corresponding video processing apparatus, a terminal device and a storage medium, aimed at adding dynamic watermarks to video data.
According to an aspect of the present invention, there is provided a video processing method, the method including: obtaining source video data and obtaining a watermarking strategy, wherein the watermarking strategy comprises one or more of the following items: watermark content, frame number corresponding to the watermark content and watermark position; extracting each frame of image data from the source video data; processing the image data according to the watermark adding strategy to obtain watermark image data of each frame; and synthesizing the watermark image data into target video data added with the watermark and storing the target video data.
Optionally, the method further comprises: determining the synthesis steps corresponding to the watermark according to the acquired watermark content and/or the number of frames corresponding to the watermark content; and generating a step-by-step composite image according to each synthesis step.
Optionally, the watermark content includes: an application logo and/or a user identification.
Optionally, processing the image data according to the watermark adding strategy to obtain each frame of watermark image data includes: determining each step-by-step composite image corresponding to the watermark to be added according to the watermark adding strategy; and synthesizing each step-by-step composite image corresponding to the watermark to be added with each frame of image data sequentially and cyclically to obtain each frame of watermark image data.
Optionally, synthesizing each step-by-step composite image corresponding to the watermark to be added with each frame of image data sequentially and cyclically includes: synthesizing each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in forward order until the synthesis step of the last step-by-step composite image is completed; and cyclically executing the synthesis steps.
Optionally, synthesizing each step-by-step composite image corresponding to the watermark to be added with each frame of image data sequentially and cyclically includes: synthesizing each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in forward order until the synthesis step of the last step-by-step composite image is completed; synthesizing each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in reverse order until the synthesis step of the first step-by-step composite image is completed; and cyclically executing the synthesis steps.
Optionally, synthesizing each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data respectively includes: synthesizing n consecutive frames of image data with one step-by-step composite image corresponding to the watermark to be added; and then synthesizing the next n consecutive frames of image data with the next step-by-step composite image corresponding to the watermark to be added, until all the step-by-step composite images corresponding to the watermark to be added have been synthesized.
Optionally, synthesizing the watermark image data into watermarked target video data and storing the watermarked target video data includes: synthesizing the watermark image data into the watermarked target video data according to the timestamps, and storing the watermarked target video data.
According to another aspect of the present invention, there is provided a video processing apparatus, comprising:
the acquisition module is used for acquiring source video data and acquiring a watermarking strategy, wherein the watermarking strategy comprises one or more of the following items: watermark content, frame number corresponding to the watermark content and watermark position;
the extraction module is used for extracting image data of each frame from the source video data;
the processing module is used for processing the image data according to the watermark adding strategy to obtain watermark image data of each frame;
and the video synthesis module is used for synthesizing the watermark image data into the target video data added with the watermark and storing the target video data.
Optionally, the apparatus further comprises: a synthesis step determining module, configured to determine the synthesis steps corresponding to the watermark according to the obtained watermark content and/or the number of frames corresponding to the watermark content; and a composite image generating module, configured to generate a step-by-step composite image according to each synthesis step.
Optionally, the watermark content includes: an application logo and/or a user identification.
Optionally, the processing module includes: a determining submodule, configured to determine each step-by-step composite image corresponding to the watermark to be added according to the watermark adding strategy; and a synthesizing submodule, configured to synthesize each step-by-step composite image corresponding to the watermark to be added with each frame of image data sequentially and cyclically to obtain each frame of watermark image data.
Optionally, the synthesizing submodule is configured to synthesize each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in forward order until the synthesis step of the last step-by-step composite image is completed, and to cyclically execute the synthesis steps.
Optionally, the synthesizing submodule is configured to synthesize each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in forward order until the synthesis step of the last step-by-step composite image is completed; to synthesize each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in reverse order until the synthesis step of the first step-by-step composite image is completed; and to cyclically execute the synthesis steps.
Optionally, the synthesizing submodule is configured to synthesize n consecutive frames of image data with one step-by-step composite image corresponding to the watermark to be added, and then to synthesize the next n consecutive frames of image data with the next step-by-step composite image corresponding to the watermark to be added, until all the step-by-step composite images corresponding to the watermark to be added have been synthesized.
Optionally, the video synthesizing module is configured to synthesize the watermark image data into watermarked target video data according to the timestamps, and to store the watermarked target video data.
According to still another aspect of the present invention, there is provided a terminal device including: one or more processors; and one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform a video processing method as described in one or more of the implementations of the invention.
Embodiments of the present invention also provide a machine-readable medium having stored thereon instructions, which, when executed by one or more processors, cause a terminal device to perform a video processing method as described in one or more of the embodiments of the present invention.
According to the video processing method and apparatus of the embodiments of the invention, after the source video data and the watermark adding strategy are obtained, each frame of image data can be extracted from the source video data and processed according to the watermark adding strategy to obtain each frame of watermark image data, and the watermark image data can then be synthesized into watermarked target video data and stored. In this way, each step-by-step composite image corresponding to the watermark can be displayed when the target video data is subsequently played, so that a dynamic watermark is displayed in the video data, achieving the beneficial effect of adding a dynamic watermark to video data.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart illustrating the steps of a video processing method according to one embodiment of the invention;
FIG. 2 is a flow chart illustrating the steps of a video processing method according to an alternative embodiment of the invention;
FIG. 3 is a schematic diagram of a composite image to be watermarked for each step in accordance with an example of the invention;
fig. 4A is a block diagram showing a configuration of a video processing apparatus according to an embodiment of the present invention;
FIG. 4B shows a block diagram of a video processing apparatus according to an alternative embodiment of the invention;
FIG. 5 schematically shows a block diagram of a server for performing the method according to the invention;
fig. 6 schematically shows a storage unit for holding or carrying program code implementing a method according to the invention; and
fig. 7 is a block diagram illustrating a partial structure related to a terminal device provided in an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Referring to fig. 1, a flowchart illustrating steps of a video processing method according to an embodiment of the present invention is shown, which may specifically include the following steps:
Step 102, obtaining source video data and obtaining a watermark adding strategy.
In the embodiment of the present invention, the source video data represents the obtained source video. For example, the source video may be a video pre-stored in the terminal device, a video currently being downloaded from a server by the terminal device, a video currently being recorded by the terminal device, a video found through a video application in the terminal device, and so on.
In order to add a dynamically displayed watermark to video data, in the embodiment of the present invention, after the video that currently needs to be processed is determined, the source video data of that video and the corresponding watermark adding strategy may be obtained, for example by a video application in the terminal device, so that the step-by-step composite images corresponding to the watermark can subsequently be added to the frames of image data in the source video data according to that strategy. The watermark adding strategy is used for adding a dynamically displayed watermark to video data and may include one or more of the following: watermark content, the number of frames corresponding to the watermark content, a watermark position, and so on.
Specifically, the watermark content may be used to determine what needs to be displayed in the watermark image, and may include production information of the video, such as an application logo of the video application and/or a user identification. The number of frames corresponding to the watermark content may be used to determine how many image frames are needed to display a complete watermark image while the video data is played; for example, when the number of frames corresponding to the watermark content is M, M consecutive frames of image data are needed to display one complete watermark image. Alternatively, it may be used to determine how many image frames are needed to display one step-by-step composite image corresponding to the watermark; for example, when the number of frames corresponding to the watermark content is n, each step-by-step composite image is displayed over n consecutive frames of image data, and if the watermark requires M synthesis steps, the number of image frames needed to display one complete watermark image is the product M × n, that is, M × n consecutive frames of image data are needed to display all the step-by-step composite images of one complete watermark. The watermark position may be used to determine the display position of the watermark in the video data, for example the display position of the watermark image, or of each step-by-step composite image corresponding to the watermark, in each frame of image data; this is not limited in the embodiment of the present invention.
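For illustration only, the three strategy fields described above can be modelled as a small data structure. The following is a minimal Python sketch under the assumption that the strategy is carried as plain fields; the class name, field names and default values are hypothetical and are not taken from the patent:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class WatermarkPolicy:
        """Hypothetical container for the watermark adding strategy."""
        content: Optional[str] = None         # watermark content, e.g. path to an application logo image
        frames_per_step: int = 3              # n: consecutive frames shown per step-by-step composite image
        num_steps: int = 9                    # M: number of synthesis steps for the complete watermark
        position: Tuple[int, int] = (16, 16)  # watermark position (x, y) inside each frame

    # One complete watermark then spans num_steps * frames_per_step consecutive frames,
    # e.g. 9 * 3 = 27 frames for the hexagon-and-triangle logo example described later.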
Step 104, extracting each frame of image data from the source video data.
Specifically, the video data of one video may include at least two frames of image data. In the embodiment of the invention, after the source video data is acquired, each frame of image data can be extracted from the source video data, so that the step-by-step composite images corresponding to the watermark can subsequently be added to the individual frames of image data.
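A minimal sketch of this extraction step, assuming the source video is decoded with OpenCV (the patent does not prescribe any particular decoding library):

    import cv2  # OpenCV is an assumed choice; any frame-accurate decoder would do

    def extract_frames(source_path: str):
        """Extract each frame of image data from the source video data."""
        capture = cv2.VideoCapture(source_path)
        frames = []
        while True:
            ok, frame = capture.read()   # frame is a BGR image as a NumPy array
            if not ok:
                break                    # no more frames in the source video
            frames.append(frame)
        capture.release()
        return frames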
Step 106, processing the image data according to the watermark adding strategy to obtain each frame of watermark image data.
Specifically, the embodiment of the present invention may process the image data according to the watermark adding strategy based on the playing order of the frames of image data. For example, each frame of image data may be synthesized with the corresponding step-by-step composite image of the watermark, so that the step-by-step composite image is added to that frame of image data, yielding each frame of synthesized watermark image data.
In an optional implementation, each step-by-step composite image corresponding to the watermark to be added may be determined according to the watermark adding strategy, and each of these step-by-step composite images may then be synthesized with the frames of image data sequentially and cyclically to obtain each frame of watermark image data. The watermark to be added represents the watermark image that currently needs to be added to the source video. For example, the step-by-step composite images may be cyclically synthesized, in their synthesis order, with the successive frames of image data in the source video data, so that as the resulting watermark image data is played, the watermark image gradually becomes complete and then appears again in the playing interface, cycle after cycle, until the last frame of watermark image data has been played.
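One possible way to synthesize a step-by-step composite image with a frame is straightforward alpha blending at the watermark position. The patent does not specify the blending method, so the following Python/NumPy sketch is only an illustrative assumption (the step image is taken to be a BGRA image whose alpha channel marks the already-drawn part of the watermark):

    import numpy as np

    def composite_step_image(frame: np.ndarray, step_image: np.ndarray,
                             position: tuple) -> np.ndarray:
        """Blend one BGRA step-by-step composite image onto a BGR frame at `position`."""
        x, y = position
        h, w = step_image.shape[:2]
        out = frame.copy()
        roi = out[y:y + h, x:x + w].astype(np.float32)            # region receiving the watermark
        alpha = step_image[:, :, 3:4].astype(np.float32) / 255.0  # per-pixel opacity of the step image
        overlay = step_image[:, :, :3].astype(np.float32)
        blended = alpha * overlay + (1.0 - alpha) * roi           # standard alpha blending
        out[y:y + h, x:x + w] = blended.astype(np.uint8)
        return out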
Step 108, synthesizing the watermark image data into watermarked target video data and storing the target video data.
After each frame of watermark image data carrying a step-by-step composite image of the watermark has been obtained, the watermarked target video data can be synthesized from the frames of watermark image data. For example, the frames of watermark image data can be synthesized into the watermarked target video data according to their timestamps, and the watermarked target video data can then be stored for subsequent use.
For example, the target video data can be used for video playing, so that the step-by-step composite images of the watermark image are displayed cyclically during playback; the watermark added to the target video data is thus displayed dynamically, achieving the effect of adding a dynamic watermark to the video data. The target video data can also be used for video publishing: for example, a user can upload the target video data to a server corresponding to a video application, so that other users can obtain the target video data to which the dynamic watermark has been added, achieving video sharing, while the added dynamic watermark also helps protect the copyright of the published target video data and thus strengthens the protection of the video data.
To sum up, after the source video data and the watermark adding strategy are obtained, each frame of image data can be extracted from the source video data and processed according to the watermark adding strategy to obtain each frame of watermark image data, i.e. each step-by-step composite image corresponding to the watermark is added to the frames of image data in the source video data. The watermark image data can then be synthesized into watermarked target video data and stored, so that each step-by-step composite image corresponding to the watermark is displayed when the target video data is subsequently played. This displays a dynamic watermark in the video data and thus meets the user requirement of adding a dynamic watermark to video data.
In actual processing, after the watermark to be added to the video is determined, the embodiment of the invention can generate a corresponding set of step-by-step composite images according to the synthesis steps of the watermark, so that the step-by-step composite images can conveniently be added to the source video data in a cycle. In an optional embodiment of the present invention, the method may further include: determining the synthesis steps corresponding to the watermark image according to the acquired watermark content and/or the number of frames corresponding to the watermark content; and generating a step-by-step composite image according to each synthesis step. Specifically, the watermark content may be determined according to a watermark material selected or input by a user, for example generated from an application logo, a user name, a picture, or the like chosen by the user; it may also be determined according to default parameters, for example the application logo and/or the user identification of the video application may be used as the watermark content to be added, which is not limited in this embodiment of the present invention.
It should be noted that, in the embodiment of the present invention, each step-by-step composite image corresponding to a watermark may be generated before source video data is acquired, so that each step-by-step composite image corresponding to a watermark to be added may be directly acquired when source video data is acquired subsequently, thereby facilitating subsequent addition of a watermark to video data.
Of course, the watermark adding strategy may also be acquired, after the source video data is acquired, according to the watermark that needs to be added to the source video data; the synthesis steps corresponding to the watermark are then determined according to the watermark content in the watermark adding strategy and/or the number of frames corresponding to the watermark content, and a step-by-step composite image is generated according to each synthesis step, so that each step-by-step composite image corresponding to the watermark to be added can be obtained. This is not specifically limited in the embodiment of the present invention.
Referring to fig. 2, a flowchart illustrating steps of a video processing method according to an alternative embodiment of the present invention is shown.
Step 202, source video data is obtained, and a watermark adding strategy is obtained.
For example, after the user selects a video that needs to be processed, the terminal device may obtain the source video data of the video through a video application and may obtain the watermark adding strategy corresponding to the watermark to be added to the source video, so as to execute step 204 according to the watermark content in the watermark adding strategy and/or the number of frames corresponding to the watermark content. As another example, after the terminal device acquires the source video data of the video, it may acquire each step-by-step composite image corresponding to the watermark to be added to the video and then jump to step 208, so as to add the step-by-step composite images corresponding to the watermark to the frames of image data of the source video data.
Step 204, determining the synthesis steps corresponding to the watermark image according to the acquired watermark content and/or the number of frames corresponding to the watermark content, and generating a step-by-step composite image according to each synthesis step.
After the watermark content and/or the number of frames corresponding to the watermark content is acquired, the synthesis steps corresponding to the watermark image can be determined from them. Once the synthesis steps are determined, the step-by-step composite image corresponding to each synthesis step can be generated in the synthesis order of the watermark image. For example, if the watermark image to be added to the video data is the logo of a video application and that logo is an icon consisting of a hexagon and a triangle, it can be determined that the synthesis steps of the watermark image include hexagon synthesis steps and triangle synthesis steps; a first step-by-step composite image is then generated according to the first synthesis step, a second step-by-step composite image according to the second synthesis step, a third step-by-step composite image according to the third synthesis step, and so on, until the step-by-step composite image corresponding to the last synthesis step of the watermark image has been generated.
For example, when the logo of the video application is an icon composed of a hexagon and a triangle, it may be determined that the synthesis steps corresponding to the watermark image include hexagon synthesis steps and triangle synthesis steps. As shown in fig. 3, a first step-by-step composite image 301 may be generated according to the first synthesis step, a second step-by-step composite image 302 according to the second synthesis step, a third step-by-step composite image 303 according to the third synthesis step, a fourth step-by-step composite image 304 according to the fourth synthesis step, a fifth step-by-step composite image 305 according to the fifth synthesis step, a sixth step-by-step composite image 306 according to the sixth synthesis step, a seventh step-by-step composite image 307 according to the seventh synthesis step, an eighth step-by-step composite image 308 according to the eighth synthesis step, and a ninth step-by-step composite image 309 according to the ninth synthesis step; that is, the step-by-step composite image corresponding to the last synthesis step is generated and a complete watermark image is obtained.
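As a purely illustrative sketch of this step, the nine step-by-step composite images can be produced by revealing an increasing fraction of the complete watermark image. The reveal-by-columns rule below is an assumption made for the example, since the patent derives the steps from how the logo (hexagon plus triangle) is actually drawn:

    import numpy as np

    def generate_step_images(logo_bgra: np.ndarray, num_steps: int = 9):
        """Generate step-by-step composite images in the style of 301..309:
        step k shows a larger portion of the complete watermark than step k-1.
        The portion is revealed column by column here, which is only an
        illustrative stand-in for the hexagon/triangle drawing steps."""
        h, w = logo_bgra.shape[:2]
        steps = []
        for k in range(1, num_steps + 1):
            visible = int(round(w * k / num_steps))   # columns revealed at step k
            step = logo_bgra.copy()
            step[:, visible:, 3] = 0                  # hide the not-yet-drawn part via the alpha channel
            steps.append(step)
        return steps                                  # steps[-1] is the complete watermark image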
In the embodiment of the present invention, optionally, the watermark content may include an application logo and/or a user identification; the application logo can be used to identify the video application, for example an icon or logo of the video application, and the user identification can be used to identify the producer of the video, for example a user account (ID) or the like.
Step 206, extracting each frame of image data from the source video data.
Step 208, synthesizing each step-by-step composite image corresponding to the watermark to be added with the frames of image data sequentially and cyclically to obtain each frame of watermark image data.
After extracting the frames of image data from the source video data, the embodiment of the present invention may cyclically add each step-by-step composite image of the watermark image to one frame or several consecutive frames of image data in the synthesis order of the watermark image, for example in the forward order and/or the reverse order of the synthesis steps, until the step-by-step composite images of the watermark image have been added to every frame of image data in the source video data. The number of frames of image data to which one step-by-step composite image is added may be set according to the time required to display a complete watermark image and/or the number of synthesis steps, or according to the time a user needs to view one step-by-step composite image, which is not limited in the embodiment of the present invention. For example, it may be set that one step-by-step composite image is added to n consecutive frames of image data, n being an integer greater than 1.
In an alternative embodiment, synthesizing each step-by-step composite image corresponding to the watermark to be added with the frames of image data sequentially and cyclically may include: synthesizing each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in forward order until the synthesis step of the last step-by-step composite image is completed; and cyclically executing these synthesis steps. Specifically, in the forward order of the synthesis steps, each step-by-step composite image corresponding to the watermark to be added is synthesized with n consecutive frames of image data, i.e. the same step-by-step composite image is synthesized with n consecutive frames, yielding n consecutive frames of watermark image data that carry the same step-by-step composite image, until the synthesis step of the last step-by-step composite image is completed; in this way the watermark image gradually becomes complete as the frames of watermark image data are played. The step-by-step composite images of the next complete watermark image are then added to the image data to which no step-by-step composite image has yet been added, and so on, until the step-by-step composite images of the watermark image have been added to all the image data in the source video data. The watermark image is thus displayed cyclically in the playing interface corresponding to the watermark image data, realizing dynamic display of the watermark image and meeting the user requirement of adding a dynamically displayed watermark to video data.
For example, when it is preset that one step-by-step composite image is synthesized with 3 consecutive frames of image data, then, continuing the example above, the first step-by-step composite image 301 may be synthesized with the first, second and third frames of image data in the source video data, the second step-by-step composite image 302 with the fourth, fifth and sixth frames, the third step-by-step composite image 303 with the seventh, eighth and ninth frames, and so on, until the synthesis step of the ninth step-by-step composite image 309 is completed. The first step-by-step composite image 301 is then synthesized with the twenty-eighth, twenty-ninth and thirtieth frames of image data in the source video data, the second step-by-step composite image 302 with the thirty-first, thirty-second and thirty-third frames, and so on, until the step-by-step composite images have been added to every frame of image data of the source video data.
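Read this way, the forward-order cycling is a simple mapping from frame index to step image index: with n frames per step image and M step images, frame i receives step image (i // n) mod M. A hedged Python sketch, reusing the composite_step_image helper sketched earlier (the function name and the fixed watermark position are illustrative):

    def forward_cycle_watermark(frames, step_images, n: int = 3,
                                position: tuple = (16, 16)):
        """Forward order: step image 0 on frames 0..n-1, step image 1 on frames n..2n-1, ...,
        then start again from step image 0 (e.g. frames 27..29, i.e. the 28th to 30th frames,
        for the 9-step, 3-frames-per-step example)."""
        m = len(step_images)
        watermarked = []
        for i, frame in enumerate(frames):
            step_index = (i // n) % m   # which step-by-step composite image this frame receives
            watermarked.append(composite_step_image(frame, step_images[step_index], position))
        return watermarked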
In another alternative embodiment, synthesizing each step-by-step composite image corresponding to the watermark to be added with the frames of image data sequentially and cyclically may include: synthesizing each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in forward order until the synthesis step of the last step-by-step composite image is completed; synthesizing each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in reverse order until the synthesis step of the first step-by-step composite image is completed; and cyclically executing these synthesis steps. Specifically, in the forward order of the synthesis steps, each step-by-step composite image corresponding to the watermark to be added is synthesized with n consecutive frames of image data, i.e. the same step-by-step composite image is synthesized with n consecutive frames, yielding n consecutive frames of watermark image data that carry the same step-by-step composite image, until the synthesis step of the last step-by-step composite image is completed, so that the watermark image gradually becomes complete as the frames of watermark image data are played. Then, in the reverse order of the synthesis steps, each step-by-step composite image corresponding to the watermark to be added is synthesized with n consecutive frames of image data until the synthesis step of the first step-by-step composite image is completed, so that the watermark image gradually disappears as the frames of watermark image data are played. By cyclically executing these synthesis steps, the watermark image gradually appears and then gradually disappears, over and over, in the playing interface corresponding to the watermark image data, thereby realizing dynamic display of the watermark image.
For another example, when it is preset that one step-by-step composite image is synthesized with 3 consecutive frames of image data, after the synthesis steps of the first to ninth step-by-step composite images have been completed in the forward order, the ninth step-by-step composite image 309 may be synthesized with the twenty-eighth, twenty-ninth and thirtieth frames of image data in the source video data in the reverse order of the synthesis steps, then the eighth step-by-step composite image 308 with the thirty-first, thirty-second and thirty-third frames, and so on, until the synthesis step of the first step-by-step composite image 301 is completed. The synthesis steps of the step-by-step composite images of the watermark image are then executed again in forward order and, once the nine forward-order steps are completed, again in reverse order, and so on, until the step-by-step composite images have been added to every frame of image data of the source video data.
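Under this reading, the frame-to-step mapping becomes a "ping-pong" over 2M blocks of n frames, with the last and first step images repeated at the turning points as in the example above. Again a hedged sketch reusing the composite_step_image helper sketched earlier:

    def ping_pong_cycle_watermark(frames, step_images, n: int = 3,
                                  position: tuple = (16, 16)):
        """Forward then reverse: step images 0..M-1 are applied in order, then M-1..0,
        so the watermark gradually appears and then gradually disappears, cyclically."""
        m = len(step_images)
        period = 2 * m                       # blocks of n frames per appear-and-disappear cycle
        watermarked = []
        for i, frame in enumerate(frames):
            block = (i // n) % period
            step_index = block if block < m else period - 1 - block
            watermarked.append(composite_step_image(frame, step_images[step_index], position))
        return watermarked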
In an optional embodiment of the present invention, synthesizing each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data may include: synthesizing n consecutive frames of image data with one step-by-step composite image corresponding to the watermark to be added, and then synthesizing the next n consecutive frames of image data with the next step-by-step composite image corresponding to the watermark to be added, until all the step-by-step composite images corresponding to the watermark to be added have been synthesized.
Step 210, synthesizing the watermark image data into watermarked target video data according to the timestamps, and storing the watermarked target video data.
Specifically, the embodiment of the present invention may synthesize each watermark image data according to the timestamp corresponding to the source video data, thereby generating the target video data to which the watermark is added, and then store the target video data to which the watermark is added, so as to facilitate subsequent use of the target video data.
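A minimal sketch of this re-encoding step, again assuming OpenCV. Note the assumptions: cv2.VideoWriter keeps frame order at a constant frame rate rather than taking explicit per-frame timestamps, the codec choice is arbitrary, and audio from the source video is not carried over; a production implementation would typically mux with FFmpeg or a similar tool to preserve the original timestamps and audio:

    import cv2

    def synthesize_video(watermarked_frames, target_path: str, fps: float) -> None:
        """Write the frames of watermark image data out, in playback order, as the
        watermarked target video data and store it at target_path."""
        if not watermarked_frames:
            return
        height, width = watermarked_frames[0].shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")   # assumed codec; any supported FourCC works
        writer = cv2.VideoWriter(target_path, fourcc, fps, (width, height))
        for frame in watermarked_frames:
            writer.write(frame)
        writer.release()
        # fps would typically be read from the source, e.g. cv2.VideoCapture(path).get(cv2.CAP_PROP_FPS)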
For simplicity of explanation, the method embodiments are described as a series of acts or combinations, but those skilled in the art will appreciate that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the embodiments of the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4A, a block diagram of a video processing apparatus according to an embodiment of the present invention is shown, which may specifically include the following modules:
an obtaining module 402, configured to obtain source video data and obtain a watermarking strategy, where the watermarking strategy includes one or more of the following: watermark content, frame number corresponding to the watermark content and watermark position;
an extracting module 404, configured to extract image data of each frame from the source video data;
a processing module 406, configured to process the image data according to the watermarking strategy to obtain watermark image data of each frame;
and the video synthesizing module 408 is configured to synthesize the watermark image data into the watermarked target video data and store the watermarked target video data.
Referring to fig. 4B, a block diagram of a video processing apparatus according to an alternative embodiment of the present invention is shown.
In an optional embodiment of the present invention, the following modules may be further included:
a synthesis step determining module 410, configured to determine the synthesis steps corresponding to the watermark according to the obtained watermark content and/or the number of frames corresponding to the watermark content;
a composite image generating module 412, configured to generate a step-by-step composite image according to each synthesis step.
In this embodiment of the present invention, optionally, the watermark content includes: an application logo and/or a user identification.
In an alternative embodiment of the present invention, the processing module 406 may include the following sub-modules:
a determining submodule 4062, configured to determine, according to the watermark adding policy, each step-by-step synthesized image corresponding to the watermark to be added;
and a synthesizing submodule 4064, configured to synthesize each step-by-step composite image corresponding to the watermark to be added with each frame of image data sequentially and cyclically to obtain each frame of watermark image data.
In an optional embodiment of the present invention, the synthesizing submodule 4064 is configured to synthesize each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in forward order until the synthesis step of the last step-by-step composite image is completed, and to cyclically execute the synthesis steps.
In another optional embodiment of the present invention, the synthesizing submodule 4064 is configured to synthesize each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in forward order until the synthesis step of the last step-by-step composite image is completed; to synthesize each step-by-step composite image corresponding to the watermark to be added with n consecutive frames of image data in reverse order until the synthesis step of the first step-by-step composite image is completed; and to cyclically execute the synthesis steps.
In the embodiment of the present invention, optionally, the synthesizing submodule 4064 is configured to synthesize n consecutive frames of image data with one step-by-step composite image corresponding to the watermark to be added, and then to synthesize the next n consecutive frames of image data with the next step-by-step composite image corresponding to the watermark to be added, until all the step-by-step composite images corresponding to the watermark to be added have been synthesized.
In an optional embodiment of the present invention, the video composition module 408 is configured to combine the watermark image data into watermarked target video data according to the timestamp, and store the watermarked target video data.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an electronic device according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form. The electronic devices may include servers (clusters), terminal devices, and the like.
For example, fig. 5 shows a server, such as a management server, a storage server, an application server, a cloud control service server cluster, and the like, which can implement the method according to the present invention. The server conventionally includes a processor 510 and a computer program product or computer-readable medium in the form of a memory 520. The memory 520 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. The memory 520 has a memory space 530 for program code 531 for performing any of the method steps in the method described above. For example, the storage space 530 for the program code may include respective program codes 531 for implementing various steps in the above method, respectively. The program code can be read from or written to one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a Compact Disc (CD), a memory card or a floppy disk. Such a computer program product is typically a portable or fixed storage unit as described with reference to fig. 6. The storage unit may have a storage section, a storage space, and the like arranged similarly to the memory 520 in the server of fig. 5. The program code may be compressed, for example, in a suitable form. Typically, the storage unit comprises computer readable code 531', i.e. code that can be read by a processor, such as 510, for example, which when executed by a server causes the server to perform the steps of the method described above.
An embodiment of the present invention provides a terminal device, including: one or more processors; and one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the terminal device to perform a video processing method as described in one or more of the embodiments of the invention.
Embodiments of the present invention provide a machine-readable medium having stored thereon instructions, which, when executed by one or more processors, cause a terminal device to perform a video processing method as described in one or more of embodiments of the present invention.
Fig. 7 shows, for convenience of description, only a portion related to the embodiment of the present invention, and details of the specific technology are not disclosed, please refer to a method portion in the embodiment of the present invention. The terminal device may be any device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), a vehicle-mounted computer, and the like.
Fig. 7 is a block diagram illustrating a partial structure related to a terminal device provided in an embodiment of the present invention. Referring to fig. 7, the terminal device includes: radio Frequency (RF) circuit 710, memory 720, input unit 730, display unit 740, sensor 750, audio circuit 760, wireless fidelity (WiFi) module 770, processor 780, power supply 790 and camera 7110. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 7 does not constitute a limitation of the terminal device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following specifically describes each constituent component of the terminal device with reference to fig. 7:
the RF circuit 710 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then processes the received downlink information to the processor 780; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuit 710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 720 may be used to store software programs and modules, and the processor 780 executes various functional applications of the terminal device and performs data processing by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal device, and the like. Further, the memory 720 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The input unit 730 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the input unit 730 may include a touch panel 731 and other input devices 732. The touch panel 731, also referred to as a touch screen, can collect touch operations of a user (e.g. operations of the user on or near the touch panel 731 by using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 731 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 780, and can receive and execute commands from the processor 780. In addition, the touch panel 731 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 730 may include other input devices 732 in addition to the touch panel 731. In particular, other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by the user or information provided to the user and various menus of the terminal device. The Display unit 740 may include a Display panel 741, and optionally, the Display panel 741 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-Emitting Diode (OLED), or the like. Further, the touch panel 731 can cover the display panel 741, and when the touch panel 731 detects a touch operation on or near the touch panel 731, the touch operation is transmitted to the processor 780 to determine the type of the touch event, and then the processor 780 provides a corresponding visual output on the display panel 741 according to the type of the touch event. Although in fig. 7, the touch panel 731 and the display panel 741 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the terminal device.
The terminal device may also include at least one sensor 750, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 741 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 741 and/or a backlight when the terminal device is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration) for recognizing the attitude of the terminal device, and related functions (such as pedometer and tapping) for vibration recognition; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal device, detailed description is omitted here.
The audio circuit 760, the speaker 761 and the microphone 762 may provide an audio interface between the user and the terminal device. The audio circuit 760 can transmit the electrical signal converted from the received audio data to the speaker 761, where it is converted into a sound signal and output; conversely, the microphone 762 converts a collected sound signal into an electrical signal, which is received by the audio circuit 760 and converted into audio data; the audio data is then output to the processor 780 for processing and sent, for example, to another terminal device via the RF circuit 710, or output to the memory 720 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the terminal equipment can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the WiFi module 770, and provides wireless broadband Internet access for the user. Although fig. 7 shows the WiFi module 770, it is understood that it does not belong to the essential constitution of the terminal device, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 780 is a control center of the terminal device, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device and processes data by operating or executing software programs and/or modules stored in the memory 720 and calling data stored in the memory 720, thereby integrally monitoring the terminal device. Optionally, processor 780 may include one or more processing units; preferably, the processor 780 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 780.
The terminal device also includes a power supply 790 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically coupled to the processor 780 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
The camera 7110 may perform a photographing function.
Although not shown, the terminal device may further include a bluetooth module or the like, which is not described in detail herein.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from those of the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
The invention discloses A1, a video processing method, the method comprising the following steps:
obtaining source video data and obtaining a watermarking strategy, wherein the watermarking strategy comprises one or more of the following items: watermark content, frame number corresponding to the watermark content and watermark position;
extracting each frame of image data from the source video data;
processing the image data according to the watermark adding strategy to obtain watermark image data of each frame;
and synthesizing the watermark image data into target video data added with the watermark and storing the target video data.
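Purely as an illustration of the A1 pipeline (and not the patented implementation itself), the Python sketch below uses OpenCV to decode each frame, blend a watermark image at the position given by the strategy, and re-encode the result. The WatermarkPolicy structure, the apply_watermark helper, and the file handling are hypothetical names introduced here for the example only.

```python
# Minimal sketch of the A1 pipeline, assuming OpenCV (cv2) and NumPy are available.
# WatermarkPolicy, apply_watermark and the file paths are illustrative, not from the disclosure.
import cv2
import numpy as np
from dataclasses import dataclass

@dataclass
class WatermarkPolicy:
    watermark: np.ndarray   # BGR watermark image (must fit inside the frame at `position`)
    position: tuple         # (x, y) top-left corner of the watermark in the frame
    opacity: float = 0.6    # blend weight of the watermark

def apply_watermark(frame: np.ndarray, policy: WatermarkPolicy) -> np.ndarray:
    x, y = policy.position
    h, w = policy.watermark.shape[:2]
    roi = frame[y:y + h, x:x + w]
    # Alpha-blend the watermark over the region of interest.
    blended = cv2.addWeighted(roi, 1.0 - policy.opacity,
                              policy.watermark, policy.opacity, 0)
    out = frame.copy()
    out[y:y + h, x:x + w] = blended
    return out

def watermark_video(src_path: str, dst_path: str, policy: WatermarkPolicy) -> None:
    cap = cv2.VideoCapture(src_path)                               # obtain source video data
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()                                     # extract each frame of image data
        if not ok:
            break
        writer.write(apply_watermark(frame, policy))               # process and re-synthesize
    cap.release()
    writer.release()                                               # store the target video data
```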
A2, the method of A1, further comprising: determining a synthesis step corresponding to the watermark according to the acquired watermark content and/or the frame number corresponding to the watermark content; and generating a step-by-step composite image according to each synthesis step.
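One possible reading of A2 (offered only as an illustration, not as the patented scheme) is that the watermark content is expanded into a sequence of partial images, one per synthesis step, so that applying them to successive frames yields a dynamic, gradually appearing watermark; the number of steps can be derived from the frame number in the strategy. In the hypothetical sketch below the steps simply fade the watermark in, but any other progressive rendering (e.g., revealing a logo column by column) would fit the same description.

```python
# Illustrative only: build step-by-step composite images by fading the watermark
# in over `steps` synthesis steps.
import numpy as np

def build_step_images(watermark: np.ndarray, frame_count: int, frames_per_step: int) -> list:
    steps = max(1, frame_count // frames_per_step)   # one composite image per synthesis step
    step_images = []
    for i in range(1, steps + 1):
        alpha = i / steps                            # opacity grows with each step
        step_images.append((watermark.astype(np.float32) * alpha).astype(np.uint8))
    return step_images
```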
A3, the method of A2, wherein the watermark content comprises: an application logo and/or a user identification.
A4, the method of A1, wherein processing the image data according to the watermark adding strategy to obtain watermark image data of each frame includes: determining each step-by-step synthesized image corresponding to the watermark to be added according to the watermark adding strategy; and synthesizing each step-by-step synthesized image corresponding to the watermark to be added with each frame of image data in a sequential cycle to obtain each frame of watermark image data.
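Reading A4 as a whole (one possible frame-to-image schedule is sketched after A7 below): once the step-by-step synthesized images and such a schedule are available, producing the per-frame watermark image data is a single pass over the frames. The helper below is only a hypothetical sketch; frames, step_images, schedule and blend are caller-supplied placeholders rather than names from the disclosure.

```python
# Hypothetical sketch of A4: schedule(i) returns the index of the step-by-step
# synthesized image for frame i, and blend(frame, image) returns the watermarked frame.
def make_watermark_frames(frames, step_images, schedule, blend) -> list:
    return [blend(frame, step_images[schedule(i)]) for i, frame in enumerate(frames)]
```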
A5, the method of A4, wherein synthesizing each step-by-step synthesized image corresponding to the watermark to be added with each frame of image data in a sequential cycle includes: synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in positive order until the synthesis step of the last step-by-step synthesized image is completed; and cyclically executing the synthesis steps.
A6, the method of A4, wherein synthesizing each step-by-step synthesized image corresponding to the watermark to be added with each frame of image data in a sequential cycle includes: synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in positive order until the synthesis step of the last step-by-step synthesized image is completed; synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in reverse order until the synthesis step of the first step-by-step synthesized image is completed; and cyclically executing the synthesis steps.
A7, the method of A5 or A6, wherein synthesizing each step-by-step synthesized image corresponding to the watermark to be added with the consecutive n frames of image data, respectively, includes: synthesizing consecutive n frames of image data with one corresponding step-by-step synthesized image of the watermark to be added; and then synthesizing the next consecutive n frames of image data with the next step-by-step synthesized image corresponding to the watermark to be added, until all step-by-step synthesized images corresponding to the watermark to be added have been synthesized.
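A5–A7 together describe how the step-by-step synthesized images are distributed over the frames: each image is applied to n consecutive frames, the images are traversed in positive order (A5) or in positive then reverse order (A6), and the traversal repeats for the rest of the video. A small index schedule such as the hypothetical helper below (not taken from the disclosure) captures both variants; whether the endpoint images are repeated when the direction reverses is left open here, as it is in the text.

```python
# Hypothetical helper: map a frame index to the index of the step-by-step
# synthesized image it should be combined with. `n` consecutive frames share one image.
def step_image_index(frame_idx: int, n: int, num_steps: int, ping_pong: bool = False) -> int:
    group = frame_idx // n                     # A7: n consecutive frames per composite image
    if not ping_pong:                          # A5: positive order, then repeat from the start
        return group % num_steps
    period = max(1, 2 * num_steps - 2)         # A6: positive order, then reverse order
    pos = group % period
    return pos if pos < num_steps else period - pos
```

In a frame loop such as the sketch after A1 above, step_image_index(i, n, len(step_images), ping_pong=True) would select which composite image to blend with frame i.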
A8, the method of A1, wherein synthesizing the watermark image data into the watermark added target video data and storing the target video data includes: synthesizing the watermark image data into the watermark added target video data according to the time stamp, and storing the watermark added target video data.
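A8 only requires that the watermarked frames be reassembled in timestamp order. If the decoder exposes a presentation timestamp for each frame, a sketch like the following is sufficient; decode_frames, encode_frames and add_watermark are placeholders supplied by the caller, not calls into any particular library.

```python
# Hypothetical sketch of A8: collect (timestamp, watermarked frame) pairs,
# sort by timestamp, and hand the ordered frames to the encoder.
def synthesize_by_timestamp(decode_frames, encode_frames, add_watermark) -> None:
    stamped = [(ts, add_watermark(frame)) for ts, frame in decode_frames()]
    stamped.sort(key=lambda item: item[0])          # order by presentation time stamp
    encode_frames(frame for _, frame in stamped)    # store the watermark added target video data
```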
The invention also discloses B9, a video processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring source video data and acquiring a watermarking strategy, wherein the watermarking strategy comprises one or more of the following items: watermark content, frame number corresponding to the watermark content and watermark position;
the extraction module is used for extracting image data of each frame from the source video data;
the processing module is used for processing the image data according to the watermark adding strategy to obtain watermark image data of each frame;
and the video synthesis module is used for synthesizing the watermark image data into the target video data added with the watermark and storing the target video data.
B10, the apparatus of B9, further comprising: a synthesis step determining module, configured to determine a synthesis step corresponding to the watermark according to the obtained watermark content and/or the number of frames corresponding to the watermark content; and the synthetic image generating module is used for generating a step synthetic image according to each synthetic step.
B11, the apparatus as in B10, the watermark content comprising: an application logo and/or a user identification.
B12, the apparatus as described in B9, the processing module comprising:
the determining submodule is used for determining each step-by-step synthetic image corresponding to the watermark to be added according to the watermark adding strategy;
and the synthesis submodule is used for synthesizing each step-by-step synthesis image corresponding to the watermark to be added with each frame of image data according to sequential circulation to obtain each frame of watermark image data.
B13, the apparatus of B12, wherein the synthesis submodule is configured to synthesize each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in positive order until the synthesis step of the last step-by-step synthesized image is completed; and to cyclically execute the synthesis steps.
B14, the apparatus according to B12, wherein the synthesis submodule is configured to synthesize each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in positive order until the synthesis step of the last step-by-step synthesized image is completed; to synthesize each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in reverse order until the synthesis step of the first step-by-step synthesized image is completed; and to cyclically execute the synthesis steps.
B15, the apparatus according to B13 or B14, wherein the synthesis submodule is configured to synthesize consecutive n frames of image data with one corresponding step-by-step synthesized image of the watermark to be added; and then to synthesize the next consecutive n frames of image data with the next step-by-step synthesized image corresponding to the watermark to be added, until all step-by-step synthesized images corresponding to the watermark to be added have been synthesized.
B16, the apparatus according to B9,
and the video synthesis module is used for synthesizing the watermark image data into the watermark added target video data according to the time stamp and storing the watermark added target video data.
The invention also discloses C17, a terminal device, comprising: one or more processors; and one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform the video processing method of one or more of claims 1-8.
The invention also discloses D18, a machine-readable medium having stored thereon instructions which, when executed by one or more processors, cause a terminal device to perform a video processing method according to one or more of claims 1-8.

Claims (18)

1. A method of video processing, said method comprising:
obtaining source video data and obtaining a watermarking strategy, wherein the watermarking strategy comprises one or more of the following items: watermark content, a frame number corresponding to the watermark content, and a watermark position, wherein the frame number corresponding to the watermark content is used for determining the number of image frames required for displaying a complete watermark image during playing of the video data, or for determining the number of image frames required for displaying a step-by-step synthesized image corresponding to the watermark during playing of the video data;
extracting each frame of image data from the source video data;
processing the image data according to the watermark adding strategy to obtain watermark image data of each frame;
synthesizing the watermark image data into target video data added with the watermark and storing the target video data;
and playing the target video data and displaying the dynamic watermark.
2. The method of claim 1, further comprising:
determining a synthesis step corresponding to the watermark according to the acquired watermark content and/or the frame number corresponding to the watermark content;
a step-by-step composite image is generated according to each of the synthesis steps.
3. The method of claim 2, wherein the watermark content comprises: an application logo and/or a user identification.
4. The method of claim 1, wherein said processing said image data according to said watermarking strategy to obtain watermarked image data for each frame comprises:
determining each step-by-step synthetic image corresponding to the watermark to be added according to the watermark adding strategy;
and synthesizing each step-by-step synthesized image corresponding to the watermark to be added with each frame of image data according to sequential circulation to obtain each frame of watermark image data.
5. The method of claim 4, wherein synthesizing each step-by-step synthesized image corresponding to the watermark to be added with each frame of image data in a sequential cycle comprises:
synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in positive order until the synthesis step of the last step-by-step synthesized image is completed;
and cyclically executing the synthesis steps.
6. The method of claim 4, wherein synthesizing each step-by-step synthesized image corresponding to the watermark to be added with each frame of image data in a sequential cycle comprises:
synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in positive order until the synthesis step of the last step-by-step synthesized image is completed;
synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in reverse order until the synthesis step of the first step-by-step synthesized image is completed;
and cyclically executing the synthesis steps.
7. The method of claim 5 or 6, wherein synthesizing each step-by-step synthesized image corresponding to the watermark to be added with the consecutive n frames of image data, respectively, comprises:
synthesizing consecutive n frames of image data with one corresponding step-by-step synthesized image of the watermark to be added;
and then synthesizing the next consecutive n frames of image data with the next step-by-step synthesized image corresponding to the watermark to be added, until all step-by-step synthesized images corresponding to the watermark to be added have been synthesized.
8. The method of claim 1, wherein synthesizing the watermark image data into the watermark added target video data and storing the target video data comprises:
and synthesizing the watermark image data into the watermark added target video data according to the time stamp, and storing the watermark added target video data.
9. A video processing apparatus, said apparatus comprising:
an obtaining module, configured to obtain source video data and obtain a watermark adding policy, where the watermark adding policy includes: watermark content, a frame number corresponding to the watermark content, and a watermark position, wherein the frame number corresponding to the watermark content is used for determining the number of image frames required for displaying a complete watermark image during playing of the video data, or for determining the number of image frames required for displaying a step-by-step synthesized image corresponding to the watermark during playing of the video data;
the extraction module is used for extracting image data of each frame from the source video data;
the processing module is used for processing the image data according to the watermark adding strategy to obtain watermark image data of each frame;
the video synthesis module is used for synthesizing the watermark image data into target video data added with the watermark and storing the target video data;
and the video playing module is used for playing the target video data and displaying the dynamic watermark.
10. The apparatus of claim 9, further comprising:
a synthesis step determining module, configured to determine a synthesis step corresponding to the watermark according to the obtained watermark content and/or the number of frames corresponding to the watermark content;
and the synthetic image generating module is used for generating a step synthetic image according to each synthetic step.
11. The apparatus of claim 10, wherein the watermark content comprises: an application logo and/or a user identification.
12. The apparatus of claim 9, wherein the processing module comprises:
the determining submodule is used for determining each step-by-step synthetic image corresponding to the watermark to be added according to the watermark adding strategy;
and the synthesis submodule is used for synthesizing each step-by-step synthesis image corresponding to the watermark to be added with each frame of image data according to sequential circulation to obtain each frame of watermark image data.
13. The apparatus of claim 12,
the synthesis submodule is used for synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in positive order until the synthesis step of the last step-by-step synthesized image is completed; and cyclically executing the synthesis steps.
14. The apparatus of claim 12, wherein the synthesis submodule is used for synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in positive order until the synthesis step of the last step-by-step synthesized image is completed; synthesizing each step-by-step synthesized image corresponding to the watermark to be added with consecutive n frames of image data, respectively, in reverse order until the synthesis step of the first step-by-step synthesized image is completed; and cyclically executing the synthesis steps.
15. The apparatus according to claim 13 or 14, wherein the synthesis submodule is used for synthesizing consecutive n frames of image data with one corresponding step-by-step synthesized image of the watermark to be added; and then synthesizing the next consecutive n frames of image data with the next step-by-step synthesized image corresponding to the watermark to be added, until all step-by-step synthesized images corresponding to the watermark to be added have been synthesized.
16. The apparatus of claim 9,
and the video synthesis module is used for synthesizing the watermark image data into the watermark added target video data according to the time stamp and storing the watermark added target video data.
17. A terminal device, comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the terminal device to perform the video processing method of one or more of claims 1-8.
18. A machine-readable medium having stored thereon instructions, which when executed by one or more processors, cause a terminal device to perform a video processing method as claimed in one or more of claims 1-8.
CN201711010208.1A 2017-10-25 2017-10-25 Video processing method and device, terminal equipment and storage medium Active CN108055567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711010208.1A CN108055567B (en) 2017-10-25 2017-10-25 Video processing method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711010208.1A CN108055567B (en) 2017-10-25 2017-10-25 Video processing method and device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108055567A CN108055567A (en) 2018-05-18
CN108055567B true CN108055567B (en) 2020-11-06

Family

ID=62119739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711010208.1A Active CN108055567B (en) 2017-10-25 2017-10-25 Video processing method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108055567B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737895A (en) * 2018-06-06 2018-11-02 北京酷我科技有限公司 A kind of method that static images synthesize anti-fake video
CN108777814A (en) * 2018-06-06 2018-11-09 北京酷我科技有限公司 A kind of method of static images synthetic video
WO2020132828A1 (en) * 2018-12-24 2020-07-02 深圳市大疆创新科技有限公司 Data processing method, unmanned aerial vehicle, glasses device and storage medium
CN113630606B (en) * 2020-05-07 2024-04-19 百度在线网络技术(北京)有限公司 Video watermark processing method, video watermark processing device, electronic equipment and storage medium
CN112333558B (en) * 2020-10-27 2022-06-10 江苏税软软件科技有限公司 Video file watermarking method
CN113438549A (en) * 2021-06-22 2021-09-24 中国农业银行股份有限公司 Processing method and device for adding watermark to video
CN113947513A (en) * 2021-09-26 2022-01-18 安徽尚趣玩网络科技有限公司 Video watermark processing method, system, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1210315A (en) * 1997-08-29 1999-03-10 富士通株式会社 Device for generating, detecting, recording, and reproducing watermarked moving image having copy preventing capability and storage medium for storing program or moving image
CN1946179A (en) * 2006-10-20 2007-04-11 北京大学 Water mark method and device for digital video signal and detecting method and device
CN101742193A (en) * 2009-12-22 2010-06-16 北京中星微电子有限公司 Method for adding watermark into digital movie
CN102905127A (en) * 2012-08-09 2013-01-30 山东师范大学 Video watermark implementation method
CN104835106A (en) * 2015-04-24 2015-08-12 华东交通大学 Full frequency domain sub-band digital watermarking embedding method based on wavelet decomposition

Also Published As

Publication number Publication date
CN108055567A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN108055567B (en) Video processing method and device, terminal equipment and storage medium
CN108055490B (en) Video processing method and device, mobile terminal and storage medium
US11355157B2 (en) Special effect synchronization method and apparatus, and mobile terminal
CN108022279B (en) Video special effect adding method and device and intelligent mobile terminal
CN107885533B (en) Method and device for managing component codes
CN106412691B (en) Video image intercepting method and device
CN106547599B (en) Method and terminal for dynamically loading resources
CN110533755B (en) Scene rendering method and related device
US9760998B2 (en) Video processing method and apparatus
CN110582018A (en) Video file processing method, related device and equipment
CN104134230A (en) Image processing method, image processing device and computer equipment
CN105808044A (en) Information push method and device
CN104036536B (en) The generation method and device of a kind of stop-motion animation
CN108628985B (en) Photo album processing method and mobile terminal
CN107995440B (en) Video subtitle map generating method and device, computer readable storage medium and terminal equipment
CN107103074B (en) Processing method of shared information and mobile terminal
CN107786876A (en) The synchronous method of music and video, device and mobile terminal
CN105447124A (en) Virtual article sharing method and device
CN106791916B (en) Method, device and system for recommending audio data
CN112118397B (en) Video synthesis method, related device, equipment and storage medium
CN109814930A (en) A kind of application loading method, device and mobile terminal
CN108038244B (en) Method and device for displaying cover of work by utilizing widget and mobile terminal
CN107330867B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN107396178B (en) Method and device for editing video
CN110022445B (en) Content output method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180917

Address after: 100015, 15 floor, 3 building, 10 Jiuxianqiao Road, Chaoyang District, Beijing, 17 story 1701-48A

Applicant after: Beijing environment and Wind Technology Co., Ltd.

Address before: 100012 No. 28 building, No. 27 building, Lai Chun Yuan, Chaoyang District, Beijing, No. 28, 2, 201, No. 112, No. 28.

Applicant before: Beijing Chuan Shang Technology Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant