CN113507573A - Video generation method, video generation device, electronic device and readable storage medium - Google Patents


Info

Publication number
CN113507573A
CN113507573A
Authority
CN
China
Prior art keywords
video
image
input
line image
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110930297.1A
Other languages
Chinese (zh)
Inventor
杭欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110930297.1A priority Critical patent/CN113507573A/en
Publication of CN113507573A publication Critical patent/CN113507573A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Abstract

The application discloses a video generation method, a video generation apparatus, an electronic device, and a readable storage medium, which belong to the technical field of video processing. The video generation method comprises the following steps: receiving a first input of a user to a first video; in response to the first input, converting a person image in the first video into a person line image, and generating a second video; the second video comprises the person line image, or the second video comprises the person line image and a background image of the first video.

Description

Video generation method, video generation device, electronic device and readable storage medium
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a video generation method, a video generation device, an electronic device and a readable storage medium.
Background
User demand for video editing keeps growing, and a great deal of video editing software already exists, offering editing effects such as filters and special effects. However, publishing a video may still risk leaking portrait information such as face information and figure information.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video generation method, a video generation apparatus, an electronic device, and a readable storage medium, which can address the risk of leaking person image information when publishing videos in the related art.
In a first aspect, an embodiment of the present application provides a video generation method, where the method includes:
receiving a first input of a user to a first video;
in response to the first input, converting the person image in the first video into a person line image, and generating a second video;
the second video comprises the person line image, or the second video comprises the person line image and the background image of the first video.
In a second aspect, an embodiment of the present application provides a video generating apparatus, including:
the receiving module is used for receiving a first input of a user to the first video;
the generating module is used for responding to the first input, converting the figure image in the first video into a figure line image and generating a second video;
the second video comprises the character line image, or the second video comprises the character line image and the background image of the first video.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, when a first input of a user to a first video is received, the person image in the first video is converted into a person line image, and a second video is thereby generated. The generated second video comprises the person line image, or comprises the person line image and the background image. By converting the person image into a person line image, person image information can be hidden, user privacy is protected from being leaked, and the diversity and interest of video editing are increased.
Drawings
Fig. 1 is a schematic flow chart of a video generation method according to an embodiment of the present application;
FIG. 2 is a schematic view of an album interface of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic view of a video clip interface of an electronic device of an embodiment of the present application;
FIG. 4 is one of the video effect diagrams of a second video according to an embodiment of the present application;
FIG. 5 is a second diagram of video effects of a second video according to an embodiment of the present application;
FIG. 6 is a third diagram of video effects of a second video according to an embodiment of the present application;
fig. 7 is a schematic block diagram of a video generation apparatus according to an embodiment of the present application;
FIG. 8 is one of the schematic block diagrams of an electronic device of an embodiment of the present application;
fig. 9 is a second schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The video generation method, the video generation apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
An embodiment of the present application provides a video generation method, as shown in fig. 1, the video generation method includes:
Step 102: receiving a first input of a user to a first video;
Step 104: in response to the first input, converting the person image in the first video into a person line image, and generating a second video.
The second video comprises the character line image, or the second video comprises the character line image and the background image of the first video.
The first input includes, but is not limited to, a single click input, a double click input, a long press input, a slide input, and the like. Specifically, the embodiment of the present application does not specifically limit the manner of the first input, and may be any realizable manner.
In the embodiment, a video image conversion method is provided, and specifically, when a first input of a user to a first video is received, a character image in the first video is converted into a character line image, so that a second video is generated. The generated second video comprises the character line image or comprises the character line image and the background image.
The person line image may be a person outline: the person image in the first video is recognized, the person contour is determined, and an image of the person outline is generated. Alternatively, a deep learning technique, such as a Convolutional Neural Network (CNN) or a Generative Adversarial Network (GAN), may be used to perform the person line conversion. Specifically, the person image is input into a deep learning model that has been trained on a large amount of line-image information such as colors and lines, and the model outputs the person line image.
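As an illustrative sketch only (not the patent's implementation), the contour-based variant can be approximated with a simple gradient-magnitude edge detector; a production system would use a contour model or the GAN-based conversion described above. The function name and threshold below are hypothetical.

```python
import numpy as np

def person_to_line_image(gray, threshold=30):
    """Convert a grayscale person image (H x W, uint8) to a line image by
    thresholding the gradient magnitude -- a minimal stand-in for the
    contour/GAN-based conversion described in the text."""
    g = gray.astype(np.float32)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # horizontal central difference
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # vertical central difference
    mag = np.hypot(gx, gy)
    # White lines on a black background, like the style in Fig. 4.
    return np.where(mag > threshold, 255, 0).astype(np.uint8)

# A synthetic "person": a bright rectangle on a dark background.
frame = np.zeros((64, 64), dtype=np.uint8)
frame[16:48, 24:40] = 200
lines = person_to_line_image(frame)
```

Only the rectangle's boundary survives as white pixels; uniform regions (including the person's interior) map to black, which is the essence of a line-only rendering.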
It should be noted that the first input of the first video by the user may be performed during the recording of the first video by the electronic device. Of course, the first video may also be a recorded video, and the user performs a first input on the first video in an album interface of the electronic device. Illustratively, a user clicks an album icon on a desktop interface of the electronic device to enter the album interface, and a video cover of each video is displayed in the album interface. And when the user clicks one of the video covers, selecting a first video corresponding to the video cover. The electronic equipment has the function of sequencing videos according to the shooting time of the videos, and a user can quickly find the first video needing to be processed through the shooting time of the videos.
By the mode, the character image information can be hidden, the privacy of a user is prevented from being revealed, and the diversity and the interestingness of video editing are increased.
After the second video is generated, it can be saved, and the format in which the second video is saved can be selected during saving; the format includes any one of the following: gif, mp4, flv, avi.
Further, in an embodiment of the present application, before receiving a first input of a first video from a user, the method further includes: displaying a first mark on a video cover of a first video, wherein the first mark is used for indicating that the first video contains a character image; receiving a first input of a first video by a user, comprising: a first input by a user to the first identifier is received.
In this embodiment, the first video is a recorded video, which has been stored in the electronic device, and the user can find the first video through the album interface.
As shown in FIG. 2, when a user opens an album interface 202 of the electronic device, a plurality of videos are displayed in the album interface 202. Because the line filter operates on person images, the conversion is applicable only when the video content contains a person, and whether the video content of a video contains a person image can be identified automatically. Illustratively, whether the video content contains a person image is determined by identifying whether the video content contains a face image; when a face image is present, the video content is determined to contain a person image. The face recognition may be performed with a face recognition model (for example, the Haar cascade classifier in OpenCV): the model performs face image recognition on each video frame of the video, and when a face image is recognized in any video frame, a person image is determined to exist in the video content of the video.
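The any-frame flagging rule above can be sketched as follows. The per-frame detector is abstracted as a callable; in practice it could wrap `cv2.CascadeClassifier("haarcascade_frontalface_default.xml").detectMultiScale`, as the Haar-classifier approach suggests. The stand-in detector below is purely for illustration and is not a real face detector.

```python
def video_contains_person(frames, detect_faces):
    """Return True as soon as any frame yields at least one face detection.

    `detect_faces(frame)` returns a list of bounding boxes. Short-circuits
    on the first hit, mirroring "recognized in any video frame"."""
    return any(len(detect_faces(frame)) > 0 for frame in frames)

# Stand-in detector for illustration: "detects" a face when the frame's
# mean brightness exceeds a threshold (a real detector inspects features).
def fake_detector(frame):
    return [(0, 0, 10, 10)] if sum(frame) / len(frame) > 100 else []

flagged = video_contains_person([[0, 0], [200, 220]], fake_detector)
```

A video flagged this way would then get the first identifier drawn on its album cover.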
As shown in fig. 2, for a video containing a person image, a first identifier is displayed on a video cover of a corresponding video in the album interface 202 to indicate that the video content of the video contains the person image. Illustratively, a small identification of the person (i.e., the first identification 204) appears in the upper right corner of the video cover containing the person image video, with the person images in video 1, video 4, video 5, and video 7. In contrast, if no person-related information appears in the video, no first indication of a person appears in the upper right corner of the cover of the video. The position of the first mark can be set according to the needs, and is not limited to the upper right corner of the video cover shown in fig. 2, and the first mark can also be other symbols or signs. The mark in the present application is used for indicating words, symbols, images and the like of information, and a control or other container can be used as a carrier for displaying information, including but not limited to a word mark, a symbol mark, an image mark.
By the method, a user can know which video or videos contain the character images conveniently according to the first identification, and the user does not need to open the videos to confirm the videos by himself.
Furthermore, the user performs first input on the first identifier, so that the person image in the first video can be converted into the person line image, and a second video is generated.
In the embodiment of the application, the user can quickly convert the video into the video of the figure line image by operating the first identifier, so that the user operation is facilitated, and the user operation steps are saved.
Further, in one embodiment of the present application, in response to a first input, converting a person image in a first video into a person line image, and generating a second video, includes: displaying at least one second identifier in response to the first input, the second identifier indicating video conversion information of a second video; receiving a second input of the user to a first target identifier in the at least one second identifier; responding to a second input, converting the figure image in the first video into a figure line image according to the video conversion information indicated by the first target identification, and generating a second video; wherein the video conversion information comprises at least one of: the method comprises the following steps of video background color, line color of a character line image, line thickness of the character line image, filling color of the character line image and filling transparency of the character line image.
Wherein the second input includes, but is not limited to, a single click input, a double click input, a long press input, a slide input, etc. Specifically, the embodiment of the present application does not specifically limit the manner of the second input, and may be any realizable manner.
In this embodiment, in response to the first input, at least one second indicator is displayed, the second indicator indicating video conversion information for a second video, that is, the different indicators correspond to different video conversion style effects. When the user selects one of the second identifiers (i.e., the first target identifier), the person image in the first video is converted into the person line image according to the corresponding video conversion information, and a second video is generated.
It should be noted that the video conversion information includes at least one of the following items: the method comprises the following steps of video background color, line color of a character line image, line thickness of the character line image, filling color of the character line image and filling transparency of the character line image.
Illustratively, after the user selects a video to be processed, a video clip interface of the video is entered as shown in fig. 3, and a plurality of second identifiers (including second identifier 1, second identifier 2, second identifier 3, second identifier 4, etc.) are displayed below the video clip interface, where each second identifier indicates the video conversion information of a different second video and corresponds to a different second-video effect. If the user selects one of the plurality of second identifiers, the person image in the first video is converted into a person line image using the video conversion information corresponding to the selected second identifier, and a second video is generated. For example, if the user selects second identifier 1, the generated second video has a black background and white person lines, as shown in fig. 4; if the user selects second identifier 2, as shown in fig. 5, the person line image of the generated second video is in a simple line-drawing style, and the person line image is filled with color; if the user selects second identifier 3, the person line image of the second video is generated in a pixel-art style, as shown in fig. 6.
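The video conversion information enumerated above can be modeled as a style record per second identifier. The field names and the two presets below are hypothetical guesses at the effects of figs. 4-5, not values from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ConversionStyle:
    """One second identifier's video conversion information (illustrative)."""
    background_color: Tuple[int, int, int] = (0, 0, 0)
    line_color: Tuple[int, int, int] = (255, 255, 255)
    line_width: int = 1                                  # line thickness
    fill_color: Optional[Tuple[int, int, int]] = None    # None = no fill
    fill_alpha: float = 1.0                              # fill transparency

# Hypothetical presets matching the described effects:
STYLES = {
    "identifier_1": ConversionStyle(),  # black background, white lines (fig. 4)
    "identifier_2": ConversionStyle(   # filled line-drawing style (fig. 5)
        background_color=(255, 255, 255),
        line_color=(40, 40, 40),
        fill_color=(255, 200, 150),
    ),
}
```

Selecting a second identifier then amounts to looking up its `ConversionStyle` and passing it to the conversion pipeline.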
It should be noted that the above character line transformation effect is only used for expressing the video style of the second video, and does not affect the actual video content.
By the mode, a plurality of sets of character line conversion effects are provided, a plurality of choices are provided for a user, videos with different styles and effects can be generated, and the user needs are met.
Further, in one embodiment of the present application, in response to a first input, converting a person image in a first video into a person line image, and generating a second video, includes: displaying at least one third identifier in response to the first input, the third identifier indicating a conversion object of the first video; receiving a third input of a user to a second target identifier in the at least one third identifier; and responding to the third input, converting the person image corresponding to the conversion object indicated by the second target identification into a person line image according to the conversion object indicated by the second target identification, and generating a second video.
Wherein the third input includes, but is not limited to, a single click input, a double click input, a long press input, a swipe input, etc. Specifically, the embodiment of the present application does not specifically limit the manner of the third input, and may be any realizable manner.
In this embodiment, in response to the first input, at least one third identifier is displayed, the third identifier indicating a conversion object of the first video, that is, a different identifier corresponds to a different conversion object of the first video. And when the user selects one of the third identifiers (namely the second target identifier), determining a corresponding conversion object, converting the character image of the conversion object of the first video into the character line image, and generating a second video.
Through the mode, the character line filter is combined with the video object conversion, various conversion object types are provided, and the method can be suitable for different application scenes.
Further, in an embodiment of the present application, the conversion object includes any one of the following: the original video, a preset video segment, and a video summary.
In this embodiment, if the conversion object is the original video, the second video maintains the original length of the first video, that is, the character line conversion effect will be applied to the entire video of the first video.
If the conversion object is a video summary, the person line conversion effect acts on the video summary of the first video, so that the generated second video is shorter and occupies less memory. The video summary of the first video may be a static video summary or a dynamic video summary. For the static video summary, key frames in the first video are first extracted, and all key frames are then composed into a video to obtain the static summary; the key-frame rule can be set according to user requirements, for example, extracting a key frame at every preset time interval, or, for sports videos, taking the image frames of a highlight goal as key frames. For the dynamic video summary, moving objects are first analyzed and extracted, the motion trajectory of each moving object is then analyzed, and the different moving objects are each composited with a common background scene to generate the dynamic video summary.
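The preset-interval key-frame rule for the static summary can be sketched in a few lines; the function name and parameters are illustrative, and a real system would decode frames from the video stream rather than index a list.

```python
def static_summary(frames, fps, period_s):
    """Pick one key frame every `period_s` seconds -- the preset-time
    key-frame rule for the static video summary (illustrative sketch).

    `frames` is the decoded frame sequence, `fps` the video frame rate."""
    step = max(1, round(fps * period_s))  # frames between key frames
    return frames[::step]

# 4 seconds of 30 fps video, one key frame per second -> 4 key frames.
summary = static_summary(list(range(120)), fps=30, period_s=1.0)
```

The selected key frames would then each go through the person line conversion before being composed into the second video.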
If the conversion object is a preset video segment, the person line conversion effect acts on the video segment of the first video that the user requires. The user can preset the conversion object for different types of videos: for sports videos, the conversion object can be set to a highlight goal segment or an award ceremony segment, and for dance videos, it can be set to a dancer's dance segment. Illustratively, for a sports scene, a highlight goal segment can be automatically extracted and a highlight-goal person line animation generated, which shortens the video, reduces the memory it occupies, can be used to make emoticons and avatars, and facilitates video sharing. Automatically extracting the highlight goal segment specifically includes: identifying and analyzing the motion of moving objects and the position of the ball in the video, determining the multiple frames of video images corresponding to the goal process, and obtaining the highlight goal segment from these video images.
By the method, various conversion object types are provided, the flexibility of video conversion is improved, different application scenes can be suitable, and different requirements of users are met.
Further, in one embodiment of the present application, in response to a first input, converting a person image in a first video into a person line image, and generating a second video, includes: acquiring a first video frame containing a figure image in a first video; segmenting the first video frame to obtain a foreground figure image and a background image; and carrying out style migration on the foreground character image to obtain a character line image.
In this embodiment, a first video frame including a person image in a first video is subjected to segmentation processing to segment a foreground person image and a background image, where the first video frame is any one of video frames including a person image in the first video. And carrying out style migration on the segmented foreground character image so as to obtain a character line image. And if the person line image only needs to be displayed in the video, generating a second video from the person line image.
It should be noted that style migration based on the GAN algorithm may be adopted to achieve the purpose of converting the foreground character image into the character line image, and in this case, the background image of the first video is not subjected to style migration processing.
Through the mode, the character image in the first video is converted into the character line image, character image information can be hidden, user privacy is prevented from being revealed, and meanwhile diversity and interestingness of video editing are increased.
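The foreground/background segmentation step described above can be sketched as a mask-based split. The boolean person mask would come from a segmentation model (not shown here); each output would then be style-transferred separately. This is an illustrative sketch, not the patent's implementation.

```python
import numpy as np

def split_frame(frame, person_mask):
    """Split an H x W x 3 video frame into a foreground person image and a
    background image, given a boolean H x W person mask."""
    mask3 = person_mask[..., None]           # broadcast over color channels
    foreground = np.where(mask3, frame, 0)   # person pixels, black elsewhere
    background = np.where(mask3, 0, frame)   # scene pixels, person removed
    return foreground, background

frame = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                        # a 2x2 "person" region
fg, bg = split_frame(frame, mask)
```

Because the two outputs partition the frame's pixels, adding them back together reconstructs the original frame exactly, which is what makes the later recomposition step possible.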
Further, in an embodiment of the present application, in response to the first input, converting the person image in the first video into a person line image, and generating the second video, further includes: carrying out style migration on the background image to obtain a background line image; carrying out image synthesis on the figure line image and the background line image to obtain a second video frame; and generating a second video according to the second video frame.
In this embodiment, after a first video frame including a person image in a first video is segmented into a foreground person image and a background image, style migration is performed on the segmented background image, so as to obtain a background line image. And under the condition that both the character and the background in the required video are displayed in a line form, carrying out image synthesis on the character line image and the background line image to obtain a second video frame, and generating a second video from the second video frame.
It should be noted that, for a video frame not including a person image, style transition may be performed to obtain a corresponding background line image, and then the background line image is synthesized with the second video frame to obtain a second video.
By the method, the figure image in the first video is converted into the figure line image, and the background image in the first video is converted into the background line image, so that on one hand, figure image information can be hidden, and the privacy of a user is prevented from being revealed; on the other hand, the diversity and interest of video editing can be increased.
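The synthesis of a second video frame from the person line image and the background line image can be sketched as a masked overlay, assuming the same person mask used for segmentation is still available. Names and values are illustrative.

```python
import numpy as np

def compose_line_frame(person_lines, background_lines, person_mask):
    """Synthesize a second-video frame: person line pixels where the person
    mask is set, background line pixels elsewhere (illustrative sketch)."""
    return np.where(person_mask, person_lines, background_lines)

person = np.full((3, 3), 255, dtype=np.uint8)   # white person lines
bg = np.full((3, 3), 80, dtype=np.uint8)        # dimmer background lines
mask = np.eye(3, dtype=bool)                    # diagonal "person" pixels
composed = compose_line_frame(person, bg, mask)
```

For video frames that contain no person, the mask is all-False and the composed frame is simply the background line image, matching the note above.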
In the video generation method provided in the embodiment of the present application, the execution subject may be a video generation apparatus, or a control module in the video generation apparatus for executing the video generation method. The video generation apparatus provided in the embodiment of the present application will be described with reference to an example in which a video generation apparatus executes a video generation method.
An embodiment of the present application provides a video generating apparatus, as shown in fig. 7, the video generating apparatus 700 includes:
a receiving module 702, configured to receive a first input of a first video from a user;
the generating module 704 is configured to convert the person image in the first video into a person line image in response to the first input, and generate the second video.
The second video comprises the character line image, or the second video comprises the character line image and the background image of the first video.
In the embodiment, when a first input of a user to the first video is received, the character image in the first video is converted into the character line image, so that the second video is generated. The generated second video comprises the character line image or comprises the character line image and the background image. By the mode, the character image information can be hidden, the privacy of a user is prevented from being revealed, and the diversity and the interestingness of video editing are increased.
Further, in an embodiment of the present application, the video generating apparatus 700 further includes: the first display module is used for displaying a first mark on a video cover of the first video, and the first mark is used for indicating that the first video contains a character image; the receiving module 702 is specifically configured to receive a first input of the first identifier by the user.
In this embodiment, the first video is a recorded video, and is already stored in the electronic device, after the user opens the album interface, the video cover is displayed in the album interface, and the user can correspondingly find the first video to be processed through the first identifier on the video cover. By the method, a user can know which video or videos contain the character images conveniently according to the first identification, and the user does not need to open the videos to confirm the videos by himself.
Further, in an embodiment of the present application, the video generating apparatus 700 further includes: a second display module for displaying at least one second identifier in response to the first input, the second identifier indicating video conversion information of a second video; a receiving module 702, configured to receive a second input of the first target identifier in the at least one second identifier from the user; a generating module 704, specifically configured to respond to a second input, convert a person image in the first video into a person line image according to the video conversion information indicated by the first target identifier, and generate a second video; wherein the video conversion information comprises at least one of: the method comprises the following steps of video background color, line color of a character line image, line thickness of the character line image, filling color of the character line image and filling transparency of the character line image.
In this embodiment, in response to the first input, at least one second indicator is displayed, the second indicator indicating video conversion information for a second video, that is, the different indicators correspond to different video conversion style effects. When the user selects one of the second identifiers (i.e., the first target identifier), the person image in the first video is converted into the person line image according to the corresponding video conversion information, and a second video is generated. By the mode, a plurality of sets of character line conversion effects are provided, a plurality of choices are provided for a user, videos with different styles and effects can be generated, and the user needs are met.
Further, in an embodiment of the present application, the video generating apparatus 700 further includes: a third display module, configured to display at least one third identifier in response to the first input, where the third identifier is used to indicate a conversion object of the first video; a receiving module 702, further configured to receive a third input of a second target identifier in the at least one third identifier from the user; the generating module 704 is specifically configured to, in response to the third input, convert the person image corresponding to the conversion object indicated by the second target identifier into a person line image according to the conversion object indicated by the second target identifier, and generate the second video.
In this embodiment, at least one third identifier is displayed in response to the first input, each third identifier indicating a conversion object of the first video; that is, different identifiers correspond to different conversion objects. When the user selects one of the third identifiers (i.e., the second target identifier), the corresponding conversion object is determined, the person image of that conversion object is converted into a person line image, and the second video is generated. In this way, the line filter is combined with selection of the conversion object, multiple conversion object types are provided, and the method can be adapted to different application scenarios.
Further, in an embodiment of the present application, the conversion object includes any one of: the original video, a preset video clip, and a video summary.
In this embodiment, multiple conversion object types are provided, which improves the flexibility of video conversion, suits different application scenarios, and meets different user requirements.
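The three conversion object types could be handled by a simple dispatch over the frames of the first video. A hedged sketch — the object names and the helper's signature are invented for illustration:

```python
def select_conversion_input(frames, conversion_object, clip_range=None, summary_indices=None):
    """Return the frames to convert, according to the chosen conversion object.

    `conversion_object` is one of "original", "clip", "summary" — illustrative
    names for the three object types described in the text.
    """
    if conversion_object == "original":
        return list(frames)                          # convert the whole video
    if conversion_object == "clip":
        start, end = clip_range                      # preset video clip [start, end)
        return list(frames[start:end])
    if conversion_object == "summary":
        return [frames[i] for i in summary_indices]  # key frames of a video summary
    raise ValueError(f"unknown conversion object: {conversion_object!r}")
```

Only the selected frames then pass through the segmentation and line-conversion steps, which is what makes the summary and clip variants cheaper than converting the original video.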
Further, in an embodiment of the present application, the generating module 704 is specifically configured to: acquire a first video frame containing a person image in the first video; segment the first video frame to obtain a foreground person image and a background image; and perform style transfer on the foreground person image to obtain a person line image.
In this embodiment, a first video frame containing a person image in the first video is segmented into a foreground person image and a background image, where the first video frame is any video frame of the first video that contains a person image. Style transfer is then performed on the segmented foreground person image to obtain a person line image. If only the person line image needs to be displayed in the video, the second video is generated from the person line image. In this way, the person image in the first video is converted into a person line image, which can hide personal image information, prevent leakage of user privacy, and at the same time increase the diversity and interest of video editing.
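The segment-then-stylize step might look as follows in outline. The person mask is assumed to come from a portrait-segmentation model (not implemented here), and a simple gradient-magnitude threshold stands in for the learned style transfer the text describes — it is a crude illustrative substitute, not the patented method:

```python
import numpy as np

def split_frame(frame: np.ndarray, person_mask: np.ndarray):
    """Split an HxWx3 frame into a foreground-person image and a background image.

    `person_mask` is a boolean HxW array; in practice it would be produced by a
    portrait-segmentation model, which is assumed rather than implemented.
    """
    fg = np.where(person_mask[..., None], frame, 0)  # person pixels, black elsewhere
    bg = np.where(person_mask[..., None], 0, frame)  # background pixels, black elsewhere
    return fg, bg

def to_line_image(image: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Crude line extraction via gradient magnitude — a stand-in for style transfer."""
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)            # per-axis intensity gradients
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8) * 255  # white lines on black
```

Applying `to_line_image(split_frame(frame, mask)[0])` yields a person line image for one frame; a real implementation would substitute a trained line-style model here.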
Further, in an embodiment of the present application, the generating module 704 is further configured to: perform style transfer on the background image to obtain a background line image; synthesize the person line image and the background line image into a second video frame; and generate the second video from the second video frame.
In this embodiment, after a first video frame containing a person image in the first video is segmented into a foreground person image and a background image, style transfer is also performed on the segmented background image to obtain a background line image. When both the person and the background of the required video are to be displayed in line form, the person line image and the background line image are synthesized into a second video frame, and the second video is generated from the second video frames. In this way, the person image in the first video is converted into a person line image and the background image is converted into a background line image, which on the one hand hides personal image information and prevents leakage of user privacy, and on the other hand increases the diversity and interest of video editing.
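Synthesizing the two line images into a second video frame can be as simple as taking the union of the line masks and painting them in the chosen colors. A sketch with illustrative default colors (the real compositing rules are not specified by the text):

```python
import numpy as np

def compose_frame(person_lines, background_lines,
                  line_color=(0, 0, 0), background_color=(255, 255, 255)):
    """Combine person and background line images into one output frame.

    Inputs are HxW uint8 arrays with 255 on lines and 0 elsewhere, as produced
    by the line-extraction step; the colors are illustrative defaults.
    """
    lines = np.maximum(person_lines, background_lines) > 0  # union of line pixels
    frame = np.empty(lines.shape + (3,), dtype=np.uint8)
    frame[...] = background_color                           # flood the background color
    frame[lines] = line_color                               # paint the lines over it
    return frame
```

Stacking the composed frames in order then yields the second video; per-preset fill color and transparency would be applied here as an extra compositing pass.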
The video generation apparatus 700 in the embodiment of the present application may be a standalone apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, a network attached storage (NAS) device, a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like. The embodiments of the present application are not specifically limited in this respect.
The video generation apparatus 700 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The video generation apparatus provided in the embodiment of the present application can implement each process implemented in the video generation method embodiments of fig. 1 to 6, and is not described here again to avoid repetition.
Optionally, as shown in fig. 8, an embodiment of the present application further provides an electronic device 800, including a processor 802, a memory 804, and a program or instructions stored in the memory 804 and executable on the processor 802. When executed by the processor 802, the program or instructions implement each process of the above video generation method embodiments and can achieve the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 902, a network module 904, an audio output unit 906, an input unit 908, a sensor 910, a display unit 912, a user input unit 914, an interface unit 916, a memory 918, and a processor 920.
Those skilled in the art will appreciate that the electronic device 900 may further include a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 920 via a power management system, so that charging, discharging, and power-consumption management are implemented by the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components; details are not repeated here.
The user input unit 914 is configured to receive a first input of a user to a first video; the processor 920 is configured to, in response to the first input, convert a person image in the first video into a person line image and generate a second video; the second video includes the person line image, or the second video includes the person line image and a background image of the first video.
In this embodiment, when a first input of the user to the first video is received, the person image in the first video is converted into a person line image, and a second video is thereby generated. The generated second video includes the person line image, or includes the person line image and the background image. In this way, personal image information can be hidden, leakage of user privacy is prevented, and the diversity and interest of video editing are increased.
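The overall flow — segment each frame, stylize the person, and optionally keep the original background — can be wired together as below. `segment` and `stylize` are placeholders for models the application does not specify; only the plumbing is shown:

```python
def generate_second_video(frames, segment, stylize, keep_background=True):
    """Per-frame sketch of the conversion described above.

    `segment(frame) -> (person, background)` and `stylize(image) -> line image`
    are assumed to exist (e.g. a portrait-segmentation model and a line-style
    transfer model); this function only chains them over the video.
    """
    out = []
    for frame in frames:
        person, background = segment(frame)
        person_lines = stylize(person)
        if keep_background:
            out.append((person_lines, background))  # line person over the background
        else:
            out.append((person_lines, None))        # person line image only
    return out
```

The two output variants mirror the two forms of the second video in the text: person line image alone, or person line image plus the background image of the first video.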
Further, in an embodiment of the present application, the display unit 912 is configured to display a first identifier on a video cover of the first video, where the first identifier is used to indicate that the first video contains a person image; the user input unit 914 is specifically configured to receive a first input of the first identifier from the user.
Further, in an embodiment of the present application, the display unit 912 is further configured to display at least one second identifier in response to the first input, where the second identifier is used to indicate video conversion information of a second video; the user input unit 914 is further configured to receive a second input of a first target identifier in the at least one second identifier from the user; the processor 920 is specifically configured to, in response to the second input, convert the person image in the first video into a person line image according to the video conversion information indicated by the first target identifier, and generate the second video; wherein the video conversion information comprises at least one of: a video background color, a line color of the person line image, a line thickness of the person line image, a fill color of the person line image, and a fill transparency of the person line image.
Further, in an embodiment of the present application, the display unit 912 is further configured to display at least one third identifier in response to the first input, where the third identifier is used to indicate a conversion object of the first video; the user input unit 914 is further configured to receive a third input of a second target identifier in the at least one third identifier from the user; the processor 920 is specifically configured to, in response to the third input, convert the person image corresponding to the conversion object indicated by the second target identifier into a person line image, and generate the second video.
Further, in an embodiment of the present application, the conversion object includes any one of: the original video, a preset video clip, and a video summary.
Further, in an embodiment of the present application, the processor 920 is specifically configured to: acquire a first video frame containing a person image in the first video; segment the first video frame to obtain a foreground person image and a background image; and perform style transfer on the foreground person image to obtain a person line image.
Further, in an embodiment of the present application, the processor 920 is further configured to: perform style transfer on the background image to obtain a background line image; synthesize the person line image and the background line image into a second video frame; and generate the second video from the second video frame.
It should be understood that, in the embodiment of the present application, the radio frequency unit 902 may be used to transmit and receive information, or to transmit and receive signals during a call; specifically, it receives downlink data from a base station and sends uplink data to the base station. The radio frequency unit 902 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.
The network module 904 provides wireless broadband internet access to the user, such as assisting the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 906 may convert audio data received by the radio frequency unit 902 or the network module 904, or stored in the memory 918, into an audio signal and output it as sound. The audio output unit 906 may also provide audio output related to a specific function performed by the electronic device 900 (e.g., a call-signal reception sound or a message reception sound). The audio output unit 906 includes a speaker, a buzzer, a receiver, and the like.
The input unit 908 is used to receive audio or video signals. The input unit 908 may include a graphics processing unit (GPU) 9082 and a microphone 9084. The graphics processor 9082 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 912, stored in the memory 918 (or another storage medium), or transmitted via the radio frequency unit 902 or the network module 904. The microphone 9084 can receive sound and process it into audio data; in phone-call mode, the audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 902.
The electronic device 900 also includes at least one sensor 910, such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, a light sensor, a motion sensor, and others.
The display unit 912 is used to display information input by the user or information provided to the user. The display unit 912 may include a display panel 9122, and the display panel 9122 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
The user input unit 914 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 914 includes a touch panel 9142 and other input devices 9144. The touch panel 9142, also referred to as a touch screen, can collect touch operations by the user on or near it. The touch panel 9142 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends them to the processor 920; it also receives commands from the processor 920 and executes them. The other input devices 9144 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and an on/off key), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 9142 may be overlaid on the display panel 9122. When the touch panel 9142 detects a touch operation on or near it, it transmits the operation to the processor 920 to determine the type of touch event; the processor 920 then provides corresponding visual output on the display panel 9122 according to the type of touch event. The touch panel 9142 and the display panel 9122 may be provided as two separate components or integrated into one component.
The interface unit 916 is an interface for connecting an external device to the electronic apparatus 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 916 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 900 or may be used to transmit data between the electronic apparatus 900 and the external device.
The memory 918 may be used to store software programs and various data. It may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required for at least one function (such as a sound-playing function or an image-playing function), and the like; the data storage area may store data created according to the use of the mobile terminal (such as audio data or a phonebook). Further, the memory 918 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 920 performs the various functions of the electronic device 900 and processes data by running the software programs and/or modules stored in the memory 918 and by invoking data stored in the memory 918, thereby monitoring the electronic device 900 as a whole. The processor 920 may include one or more processing units; preferably, it may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the above video generation method embodiments and can achieve the same technical effect; details are not repeated here to avoid repetition.
The processor is the processor in the electronic device of the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the video generation method embodiment, and the same technical effect can be achieved.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing functions in the order illustrated or discussed; the functions may be performed substantially simultaneously or in reverse order, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and certainly also by hardware alone, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present application.
While the embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are illustrative rather than restrictive; various changes may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A video generation method, comprising:
receiving a first input of a user to a first video;
in response to the first input, converting the person image in the first video into a person line image, and generating a second video;
wherein the second video comprises the person line image, or the second video comprises the person line image and a background image of the first video.
2. The video generation method according to claim 1, further comprising, before the receiving a first input of a user to a first video:
displaying a first identifier on a video cover of the first video, wherein the first identifier is used to indicate that the first video contains a person image;
wherein the receiving a first input of a user to a first video comprises:
receiving a first input of the first identifier from the user.
3. The video generation method according to claim 1 or 2, wherein the converting the person image in the first video into a person line image in response to the first input, and generating a second video comprises:
displaying at least one second identifier in response to the first input, wherein the second identifier is used to indicate video conversion information of the second video;
receiving a second input of a first target identifier in the at least one second identifier from the user;
in response to the second input, converting the person image in the first video into the person line image according to the video conversion information indicated by the first target identifier, and generating the second video;
wherein the video conversion information comprises at least one of: a video background color, a line color of the person line image, a line thickness of the person line image, a fill color of the person line image, and a fill transparency of the person line image.
4. The video generation method according to claim 1 or 2, wherein the converting the person image in the first video into a person line image in response to the first input, and generating a second video comprises:
displaying at least one third identifier in response to the first input, wherein the third identifier is used to indicate a conversion object of the first video;
receiving a third input of a second target identifier in the at least one third identifier from the user;
in response to the third input, converting the person image corresponding to the conversion object indicated by the second target identifier into the person line image, and generating the second video.
5. The video generation method according to claim 4, wherein the conversion object comprises any one of: the original video, a preset video clip, and a video summary.
6. The video generation method according to claim 1 or 2, wherein the converting the person image in the first video into a person line image in response to the first input, and generating a second video comprises:
acquiring a first video frame containing a person image in the first video;
segmenting the first video frame to obtain a foreground person image and a background image;
performing style transfer on the foreground person image to obtain the person line image.
7. The video generation method according to claim 6, wherein the converting the person image in the first video into a person line image in response to the first input, and generating a second video further comprises:
performing style transfer on the background image to obtain a background line image;
synthesizing the person line image and the background line image to obtain a second video frame;
generating the second video from the second video frame.
8. A video generation apparatus, comprising:
a receiving module, configured to receive a first input of a user to a first video;
a generating module, configured to, in response to the first input, convert a person image in the first video into a person line image and generate a second video;
wherein the second video comprises the person line image, or the second video comprises the person line image and a background image of the first video.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the video generation method of any of claims 1 to 7.
10. A readable storage medium on which a program or instructions are stored, characterized in that said program or instructions, when executed by a processor, implement the steps of the video generation method according to any one of claims 1 to 7.
CN202110930297.1A 2021-08-13 2021-08-13 Video generation method, video generation device, electronic device and readable storage medium Pending CN113507573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110930297.1A CN113507573A (en) 2021-08-13 2021-08-13 Video generation method, video generation device, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110930297.1A CN113507573A (en) 2021-08-13 2021-08-13 Video generation method, video generation device, electronic device and readable storage medium

Publications (1)

Publication Number Publication Date
CN113507573A true CN113507573A (en) 2021-10-15

Family

ID=78015556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110930297.1A Pending CN113507573A (en) 2021-08-13 2021-08-13 Video generation method, video generation device, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN113507573A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108614638A (en) * 2018-04-23 2018-10-02 太平洋未来科技(深圳)有限公司 AR imaging methods and device
CN109859295A (en) * 2019-02-01 2019-06-07 厦门大学 A kind of specific animation human face generating method, terminal device and storage medium
CN110232722A (en) * 2019-06-13 2019-09-13 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN111385644A (en) * 2020-03-27 2020-07-07 咪咕文化科技有限公司 Video processing method, electronic equipment and computer readable storage medium
CN111696028A (en) * 2020-05-22 2020-09-22 华南理工大学 Method and device for processing cartoon of real scene image, computer equipment and storage medium
CN112489173A (en) * 2020-12-11 2021-03-12 杭州格像科技有限公司 Method and system for generating portrait photo cartoon
CN112511815A (en) * 2019-12-05 2021-03-16 中兴通讯股份有限公司 Image or video generation method and device
CN112561786A (en) * 2020-12-22 2021-03-26 作业帮教育科技(北京)有限公司 Online live broadcast method and device based on image cartoonization and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination