CN110557565B - Video processing method and mobile terminal - Google Patents

Video processing method and mobile terminal

Info

Publication number: CN110557565B (application CN201910811285.XA; earlier publication CN110557565A)
Authority: CN (China)
Prior art keywords: video, videos, target, synthesized, label
Legal status: Active (granted)
Inventor: 彭桂林
Original and current assignee: Vivo Mobile Communication Co Ltd
Other languages: Chinese (zh)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor

Abstract

The embodiment of the invention discloses a video processing method and a mobile terminal. The method includes: during recording of a target video, marking the target video with a video tag in response to a received tag setting operation; obtaining a plurality of videos to be synthesized according to the recorded target video, where each video to be synthesized corresponds to one video tag and the video tags corresponding to the videos to be synthesized belong to the same recording topic; and synthesizing the plurality of videos to be synthesized into a target topic video. With the embodiment of the invention, a plurality of videos are automatically combined into one complete video without the user having to clip or splice them, which lowers the operating threshold for combining videos, improves the user experience, makes video processing more intelligent, quicker, and more convenient, raises video processing efficiency, enriches the functions of the mobile terminal, and increases its market competitiveness.

Description

Video processing method and mobile terminal
Technical Field
The present invention relates to the field of mobile terminals, and in particular, to a video processing method and a mobile terminal.
Background
At present, with the rapid development and popularization of mobile terminals, the cameras of mobile terminals are used ever more widely and frequently in daily life, and the functions that can be realized based on these cameras, such as photographing and video recording, are increasingly diverse.
Specifically, many users record moments in life by taking pictures and recording videos with the camera of a mobile terminal, for example, a child's growth, a person slowly aging, the sights visited on a trip, the gradual change of some object, and the like. However, when such moments are captured as video, each recorded video exists as a separate item in the album of the mobile terminal. When the user wants to document a change process through video and therefore needs a video composition operation, the user must manually cut and splice the recorded videos with dedicated software, which is cumbersome. Moreover, the user has to learn in advance how to use post-processing software for editing video, which is unfriendly to the majority of users unfamiliar with such software; those users often simply give up editing their videos, and the user experience suffers.
Therefore, a new video processing scheme is needed that removes the cumbersome operations and the requirement for video editing expertise described above, so as to improve the user experience.
Disclosure of Invention
The embodiment of the invention provides a video processing method and a mobile terminal, aiming to solve the problems that editing video on an existing mobile terminal involves cumbersome operation steps and requires the user to have certain video editing expertise, which reduces the user experience.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, a video processing method is provided, and the method includes:
in the process of recording the target video, responding to the received label setting operation, and marking a video label for the target video;
acquiring a plurality of videos to be synthesized according to the recorded target video, wherein each video to be synthesized corresponds to one video tag, and the video tags corresponding to the videos to be synthesized belong to the same recording theme;
and synthesizing the plurality of videos to be synthesized into a target subject video.
In a second aspect, a mobile terminal is provided, which includes:
the marking module is used for responding to the received label setting operation in the process of recording the target video and marking a video label for the target video;
the acquisition module is used for acquiring a plurality of videos to be synthesized according to the recorded target video, wherein each video to be synthesized corresponds to one video tag, and the video tags corresponding to the videos to be synthesized belong to the same recording theme;
and the synthesis module is used for synthesizing the videos to be synthesized into the target subject video.
In a third aspect, a mobile terminal is provided, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method according to the first aspect.
In the embodiment of the invention, a video tag can be attached to the target video while it is being recorded, so the user does not need to review the video again after recording, which saves time and is convenient. A plurality of videos to be synthesized can then be obtained automatically from recorded target videos carrying such tags, and the videos whose tags belong to the same recording topic can be automatically synthesized into one complete target topic video that meets the user's needs. In this way, the video tags clearly and definitely classify the videos belonging to the same recording topic, and multiple videos can be combined into a complete video without the user performing any clipping or splicing. This effectively lowers the operating threshold for combining videos, improves the user experience, makes video processing more intelligent and convenient, raises video processing efficiency, enriches the functions of the mobile terminal, and improves its market competitiveness.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
Fig. 1 is a flow chart of a video processing method according to an embodiment of the present invention;
Fig. 2 is a schematic view of a setting interface for a video tag according to an embodiment of the present invention;
Fig. 3 is a flow chart of another video processing method according to an embodiment of the present invention;
Fig. 4 is a flow chart of a further video processing method according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a mobile terminal in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a mobile terminal in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
As stated in the background section, the existing photographing and video recording functions of a mobile terminal's camera can basically meet users' daily needs, and photographs can already be classified automatically based on AI (Artificial Intelligence) technology, for example into people, food, blue sky and white clouds, flowers, and so on. However, there is currently no corresponding classification for recorded videos. When a user wants to compose videos, for example to document some change process, the user must manually cut and splice the recorded videos with dedicated software, and may also have to review the video content again to pick out the videos of the same topic for composition, which is tedious, time-consuming, and labor-intensive. Moreover, before performing operations such as video synthesis, the user must learn the relevant video processing software in advance. A new video processing scheme is therefore needed to solve these problems.
Referring to fig. 1, an embodiment of the present invention provides a video processing method, which is executed by a mobile terminal.
The method may specifically comprise:
step 101: in the process of recording the target video, responding to the received label setting operation, and marking a video label for the target video;
step 103: and acquiring a plurality of videos to be synthesized according to the recorded target video, wherein each video to be synthesized corresponds to one video tag, and the plurality of video tags corresponding to the plurality of videos to be synthesized belong to the same recording theme.
It is to be understood that video tags belonging to the same recording topic may share the same tag name, or may have different tag names that are associated with the same recording topic.
Step 105: and synthesizing the plurality of videos to be synthesized into the target subject video.
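The three steps above can be sketched as follows. The in-memory data model (dicts with `name`/`tag` fields and a tag-to-topic table) is purely illustrative, since the patent does not specify data structures; a real implementation would operate on actual video files:

```python
# Minimal sketch of the claimed three-step method with an assumed data model.

def mark_tag(video, tag):
    """Step 101: attach a video tag while the target video is recorded."""
    video["tag"] = tag
    return video

def collect_for_topic(videos, topic, tag_topics):
    """Step 103: gather the videos whose tags belong to one recording topic."""
    return [v for v in videos if tag_topics.get(v.get("tag")) == topic]

def synthesize(videos):
    """Step 105: combine the candidates into one target topic video.
    'Synthesis' is modelled here as ordered concatenation of clip names."""
    return {"clips": [v["name"] for v in videos]}

# Two tag names ("baby-walk", "baby-talk") mapped to one recording topic.
tag_topics = {"baby-walk": "baby", "baby-talk": "baby", "trip": "travel"}
library = [
    mark_tag({"name": "v1.mp4"}, "baby-walk"),
    mark_tag({"name": "v2.mp4"}, "trip"),
    mark_tag({"name": "v3.mp4"}, "baby-talk"),
]
themed = synthesize(collect_for_topic(library, "baby", tag_topics))
print(themed)  # {'clips': ['v1.mp4', 'v3.mp4']}
```

Note that the tag-to-topic table lets differently named tags belong to one topic, matching the remark above about tag names.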
In the embodiment of the invention, a video tag can be attached to the target video during its real-time recording, so the user is spared from reviewing the video again after recording, saving time and effort. A plurality of videos to be synthesized can then be obtained automatically from the recorded target videos carrying corresponding tags, and the videos whose tags belong to the same recording topic can be automatically synthesized into one complete target topic video that meets the user's needs. The video tags thus classify the videos of the same recording topic clearly and definitely, and multiple videos can be combined into a complete video without the user clipping or splicing them, which effectively lowers the operating threshold for combining videos, improves the user experience, makes video processing more intelligent and convenient, raises video processing efficiency, enriches the functions of the mobile terminal, and improves its market competitiveness.
Optionally, in the video processing method according to the embodiment of the present invention, when the video synthesis operation of step 105 is executed, the videos to be synthesized may be combined in the order of their recording times; of course, another predetermined order may also be used, for example an order matching keywords in the tag names of the video tags, and so on.
Optionally, in the video processing method according to the embodiment of the present invention, after the step 105, the method may further include the following steps:
and storing the target subject video and deleting a plurality of videos to be synthesized.
It can be understood that storage space is saved by keeping only the target topic video finally obtained through synthesis and deleting the plurality of videos to be synthesized from which it was produced.
Optionally, in the video processing method according to the embodiment of the present invention, after the step 105, the method may further include the following steps:
responding to the received video playing operation, and determining whether to split the target theme video;
if so, splitting the target theme video into a plurality of videos to be synthesized, and playing videos corresponding to video playing operation in the plurality of videos to be synthesized;
if not, the target theme video is played.
It can be understood that, when responding to a video playing operation input by the user for the target topic video, the terminal may detect whether the user has chosen to split the synthesized target topic video. On the one hand, when the user chooses to split it, the target topic video is first split back into the sub-videos from which it was synthesized, that is, the plurality of videos to be synthesized, and the sub-video matching the user's viewing request is played; this saves storage space while still letting the user watch the individual sub-videos that existed before synthesis. On the other hand, when the user chooses not to split it, the collection of videos belonging to the same recording topic is presented directly, so the user can watch an entire change process and the like in one sitting, improving the user experience.
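The playback branch described above can be sketched as follows; the clip bookkeeping is an assumption for illustration, not the patent's storage format:

```python
# Sketch of the optional playback branch: if the user asks to split, the
# target topic video is decomposed back into its source sub-videos and the
# requested one is played; otherwise the synthesized video plays whole.

def play(topic_video, split, requested=None):
    if split:
        subs = topic_video["clips"]        # split back into sub-videos
        # play the requested sub-video, defaulting to the first one
        return requested if requested in subs else subs[0]
    return topic_video["name"]             # play the whole topic video

topic_video = {"name": "baby_topic.mp4", "clips": ["v1.mp4", "v3.mp4"]}
print(play(topic_video, split=True, requested="v3.mp4"))  # v3.mp4
print(play(topic_video, split=False))                     # baby_topic.mp4
```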
It should be noted that, in other embodiments of the present invention, in order to further facilitate the user to view the synthesized total video and the plurality of sub-videos before being synthesized, after synthesizing the target subject video based on the plurality of videos to be synthesized, the target subject video and the plurality of videos to be synthesized may be simultaneously stored in the mobile terminal.
Optionally, in the video processing method according to the embodiment of the present invention, one or more video tags may be marked for the target video in step 101. When a plurality of video tags mark the target video, they may all correspond to the entire duration of the target video, or they may correspond to the different time periods that make up the target video.
Further optionally, in a specific embodiment, for a case that there are a plurality of video tags marked for the target video and the video tags correspond to a plurality of recording time periods constituting the target video one by one, the step 101 may specifically include the following steps:
in recording a target video, video tags are marked for a plurality of sub-videos corresponding to a plurality of recording periods one to one in response to a tag setting operation.
It can be understood that when video tags are set for the target video in real time during recording, the sub-videos corresponding to different recording periods of the target video can each be marked with its own tag, that is, one target video carries a plurality of video tags. This makes the tag marking operation more intelligent and further enriches the functions of the mobile terminal.
Optionally, the video tags of the sub-videos corresponding to the multiple recording periods of the target video may all belong to the same recording topic; of course, they may also belong to different recording topics, in which case the sub-videos of the target video's different time periods are classified, that is, the multiple sub-videos are sorted by video tags belonging to different recording topics.
Further, the step 103 may be specifically executed as:
and acquiring a plurality of videos to be synthesized from the plurality of sub-videos.
It can be understood that the plurality of videos to be synthesized belonging to the same recording topic may be sub-videos of a single recorded target video. Video segments from invalid periods of the target video can then be removed, while the segments belonging to the same recording topic are picked out and synthesized into a new target topic video that meets the user's needs. In other words, the target topic video is cut out of the target video, simplifying the video while keeping its highlight segments.
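A minimal sketch of this trimming idea, assuming each recording period carries `(start, end)` offsets in seconds and an optional tag (the field names are invented for illustration):

```python
# Sketch of cutting a target topic video out of one recording: tagged
# periods whose tag belongs to the requested topic are kept, untagged
# (invalid) spans are dropped.

def extract_tagged_segments(recording, topic, tag_topics):
    """Keep only the sub-videos whose tag belongs to `topic`."""
    return [seg for seg in recording["segments"]
            if tag_topics.get(seg.get("tag")) == topic]

tag_topics = {"goal": "football", "cheer": "football"}
recording = {"segments": [
    {"start": 0,  "end": 30, "tag": None},      # invalid period, removed
    {"start": 30, "end": 55, "tag": "goal"},
    {"start": 55, "end": 80, "tag": None},
    {"start": 80, "end": 95, "tag": "cheer"},
]}
kept = extract_tagged_segments(recording, "football", tag_topics)
print([(s["start"], s["end"]) for s in kept])  # [(30, 55), (80, 95)]
```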
Optionally, in the process of recording the target video, a scheme of marking a video tag for a sub-video of the target video may be specifically implemented as follows:
displaying at least one candidate label according to a video preview picture in a first recording time period, wherein the first recording time period is any one of a plurality of recording time periods;
and responding to the label setting operation, and selecting a target label from at least one candidate label to mark a first sub video corresponding to the first recording time interval.
It can be understood that when marking a video tag, at least one candidate tag matching the picture content can be intelligently recommended to the user according to the video preview picture displayed during the recording period, so that the sub-video of that period is marked with a tag that fits both the video content and the user's needs, further improving the user experience.
It should be noted that the video tag may be set for the sub-video of every one of the recording periods by recommending candidate tags from the corresponding video preview picture; alternatively, only some of the recording periods may use candidate-tag recommendation, while the sub-videos of the remaining periods are tagged in other feasible ways, for example with a tag the user enters in real time for the sub-video.
Further optionally, in another specific embodiment, for a case that there is one video tag marked for the target video, the step 101 may specifically include the following steps:
displaying at least one candidate label according to a video preview picture of a target video;
and responding to the label setting operation, and selecting a corresponding video label for the target video from at least one candidate label for marking.
It is understood that during the recording of a video, a single video tag may also be marked for the entire video in response to a corresponding tag setting operation. Specifically, when marking the tag, at least one candidate tag matching the picture content can be intelligently recommended, based on the video preview picture displayed during the recording of the target video, for the user to choose from, so that the target video is marked with a tag that fits both the video content and the user's needs, further improving the user experience.
Further, the step 103 may be specifically executed as:
acquiring at least one marked video whose tag belongs to the same recording topic as the video tag of the target video, where the at least one marked video is different from the target video;
and taking the target video and the at least one marked video as the plurality of videos to be synthesized.
It is understood that the videos to be synthesized belonging to the same recording topic may be a plurality of independently recorded videos. Besides the target video, they include at least one marked video carrying a corresponding video tag, and the tag of the target video belongs to the same recording topic as the tag of each marked video.
Optionally, in the video processing method according to the embodiment of the present invention, the plurality of videos to be synthesized may also comprise both at least one marked video different from the target video and at least one tagged sub-video among the sub-videos that make up the target video, where the tags of the marked videos and the tags of the sub-videos belong to the same recording topic.
In addition, video tags can be marked not only during recording: a corresponding video tag can also be marked for a stored video that has no tag yet, whether that video was recorded by the local camera or received from another device.
Furthermore, while marking a video tag, the marked tag can be cancelled, modified, and so on according to the actual situation, so that the tag fits the video better and better meets the user's needs.
It should be noted that the tag marking method of recommending candidate tags from a video preview picture applies not only to setting tags during recording, but also to setting tags for stored videos that are not yet marked.
In addition, in any of the above tag-marking processes, besides choosing a suitable tag from the candidate tags intelligently recommended from the video preview picture, the user can be offered a button for creating a tag on the spot, so that tags can be set conveniently according to the user's own habits or actual needs, further improving the user experience.
For example, as shown in Fig. 2, a plurality of candidate tags, such as the "baby", "tag 1", and "tag 2" shown in the figure, are recommended to the user according to the content of the video preview picture displayed in the camera window; in addition, the user can set a desired tag himself via the "new tag" button.
Further, when presenting candidate tags to the user, they may be recommended according to how well each candidate tag matches the video preview picture: the tags with the highest matching degree are offered as candidates, and they may be arranged from left to right in order of matching degree.
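A minimal sketch of this ranking, assuming the matching degree of each candidate tag is already available as a score between 0 and 1 (the threshold and the number of tags shown are assumptions, not values from the patent):

```python
# Sketch of candidate-tag ranking: keep the best-matching tags and order
# them left to right by matching degree against the preview frame.

def recommend(scores, k=3, threshold=0.5):
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [tag for tag, s in ranked[:k] if s >= threshold]

# Matching degrees as produced by some classifier (modelled as a dict).
scores = {"baby": 0.92, "travel": 0.15, "birthday": 0.71, "food": 0.55}
print(recommend(scores))  # ['baby', 'birthday', 'food']
```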
The following describes the video processing method according to the embodiment of the present invention with reference to fig. 3 and 4.
Referring to Fig. 3, in the video processing method of this embodiment, the user starts the camera and enters the video recording interface; recommended candidate tags are displayed according to the current video preview picture, together with a custom-tag button (see Fig. 2); the user sets a tag and records; and after recording, the user enters the album, where the videos automatically synthesized according to tags belonging to the same recording topic can be viewed. The method may specifically comprise the following steps:
step 301: and starting the camera and entering a video recording interface.
Step 303: and displaying a plurality of recommended candidate labels according to a video preview picture currently displayed by the camera window.
Optionally, the tags are recommended according to AI learning and recognition of location, people, time, and image content; for example, a "travel" tag is recommended when the terminal is located at a tourist attraction, a "birthday" tag when a cake is detected, and a holiday name as the tag when the recording date is identified as a holiday.
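A rule-based sketch of this recommendation step; the rules, the holiday table, and the field names are assumptions for illustration, not the patent's actual recognition pipeline:

```python
# Sketch mapping recognition results (location type, detected objects,
# recording date) to recommended candidate tags.

import datetime

HOLIDAYS = {(1, 1): "New Year", (6, 1): "Children's Day"}  # assumed table

def recommend_tags(context):
    tags = []
    if context.get("location_type") == "tourist_attraction":
        tags.append("travel")                  # at a tourist attraction
    if "cake" in context.get("objects", []):
        tags.append("birthday")                # cake detected in the frame
    date = context.get("date")
    if date and (date.month, date.day) in HOLIDAYS:
        tags.append(HOLIDAYS[(date.month, date.day)])  # holiday name as tag
    return tags

ctx = {"location_type": "tourist_attraction",
       "objects": ["cake"],
       "date": datetime.date(2019, 6, 1)}
print(recommend_tags(ctx))  # ['travel', 'birthday', "Children's Day"]
```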
Step 305: the user performs selection operation based on a plurality of candidate tags or creates a new tag through a custom tag key to determine the tag to be marked.
Optionally, the user may also skip selecting or creating a tag; if a tag selection operation is performed, the chosen tag may be displayed prominently, for example highlighted.
Step 307: and starting to record the video, and automatically storing the recorded video to the photo album after the video recording operation is finished.
Step 309: the album judges whether the video is marked with a label, if not, step 311 is executed, otherwise, step 313 is executed.
Step 311: and displaying the video without the label tag to the user according to the video playing request.
Step 313: and automatically synthesizing the videos marked with the labels belonging to the same theme into a theme long video.
It can be understood that the user can watch all videos of a certain topic at one time without operating on each video to be synthesized, which makes managing the videos of a topic more convenient and concise and gives the recorded videos more continuity.
Step 315: if the user chooses to split the synthesized theme long video, if yes, step 317 is executed, otherwise, step 319 is executed.
Step 317: and displaying the sub-video with the label to the user according to the video playing request.
Step 319: and displaying the synthesized theme long video to the user according to the video playing request.
In this specific embodiment, based on the camera function, with AI image recognition recommending tags and with user-defined tags, the album can automatically generate a documentary long video from the sub-videos carrying the same tag. For videos that document a change, the recorder can better experience the change of the photographed subject; managing the videos also becomes simpler, since the user does not need to keep many separate videos. When the user wants to watch videos of a related topic, there is no need to page through the album looking for sub-videos one by one in time order: the corresponding overall video can be obtained directly through the tag.
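The album-side flow of Fig. 3 (steps 309 through 313) can be sketched as follows, with illustrative data shapes: tagged videos of the same topic are merged into one theme long video, while untagged videos remain individual items.

```python
# Sketch of the album flow: group finished videos by recording topic and
# synthesize each group; untagged videos stay as-is (step 311 vs. 313).

from collections import defaultdict

def build_album(videos, tag_topics):
    untagged, by_topic = [], defaultdict(list)
    for v in videos:
        topic = tag_topics.get(v.get("tag"))
        (by_topic[topic] if topic else untagged).append(v)
    # model each theme long video as the ordered list of its clip names
    theme_videos = {t: [v["name"] for v in vs] for t, vs in by_topic.items()}
    return theme_videos, [v["name"] for v in untagged]

videos = [{"name": "a.mp4", "tag": "birthday"},
          {"name": "b.mp4"},                      # no tag set
          {"name": "c.mp4", "tag": "cake"}]
tag_topics = {"birthday": "birthday", "cake": "birthday"}
themes, single = build_album(videos, tag_topics)
print(themes)  # {'birthday': ['a.mp4', 'c.mp4']}
print(single)  # ['b.mp4']
```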
Referring to Fig. 4, in the video processing method of this embodiment, the user starts the camera and enters the video recording interface; recommended candidate tags are displayed according to the current video preview picture, together with a custom-tag button (see Fig. 2); the user sets tags and records, and may select or cancel tags at any time during recording, choosing the same tag or different tags in different time periods; after recording, the user enters the album, where the videos automatically synthesized according to tags belonging to the same recording topic can be viewed. The method may specifically comprise the following steps:
step 401: and starting the camera and entering a video recording interface.
Step 403: and displaying a plurality of recommended candidate labels according to a video preview picture currently displayed by the camera window.
Optionally, the tags are recommended according to AI learning and recognition of location, people, time, and image content; for example, a "travel" tag is recommended when the terminal is located at a tourist attraction, a "birthday" tag when a cake is detected, and a holiday name as the tag when the recording date is identified as a holiday.
Step 405: and starting to record the video, wherein in the video recording process, a user performs selection operation based on a plurality of candidate tags or creates a new tag through a custom tag key so as to determine the tag to be marked.
Step 407: when an instruction generated by the user's tag cancel operation is detected, cancel or modify the set tag.
Optionally, the finally chosen tag is displayed prominently, for example highlighted. In this step, while recording, the user may at any time select a tag to mark a highlight or a desired recording topic, and may at any time cancel a set tag or change it to another tag.
Step 409: after the video recording operation is completed, automatically storing the recorded video in the album.
Step 411: the album determines whether the video is marked with a tag; if not, step 413 is executed, otherwise step 415 is executed.
Step 413: displaying the untagged video to the user according to the video playing request.
Step 415: cutting the plurality of sub-videos marked with the same tag out of the original recorded video and automatically synthesizing them into a long video with that theme.
The plurality of sub-videos correspond one-to-one to a plurality of time periods in the recorded video.
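The cutting in step 415 can be sketched as grouping the tagged time periods by tag and ordering each group for concatenation. This is an illustrative model only, with segments represented as simple (start, end, tag) tuples rather than actual media data:

```python
from collections import defaultdict

def group_segments_by_tag(segments):
    """segments: (start_s, end_s, tag) periods marked during recording.
    Returns tag -> ordered clip list; each list would then be cut from
    the original video and concatenated into one theme long video."""
    themes = defaultdict(list)
    for start, end, tag in segments:
        themes[tag].append((start, end))
    for clips in themes.values():
        clips.sort()  # keep clips in recording order before concatenation
    return dict(themes)
```

Untagged periods simply never enter a group, which is how the invalid periods mentioned in the embodiment are dropped from the synthesized result.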
Step 417: determining whether the user selects to split the synthesized theme long video; if so, step 419 is executed, otherwise step 421 is executed.
Step 419: displaying the tagged sub-videos to the user according to the video playing request.
Step 421: displaying the synthesized theme long video to the user according to the video playing request.
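The branch in steps 417–421 amounts to a simple dispatch on the user's split choice. A sketch, under the assumption (consistent with the splitting described later) that the theme video retains references to its source sub-videos:

```python
def handle_play_request(theme_video, split_selected):
    """Steps 417-421: return what the album should play. theme_video is
    assumed to retain references to its source sub-videos, so splitting
    needs no re-analysis of the composite file."""
    if split_selected:
        return theme_video["sub_videos"]   # step 419: play tagged sub-videos
    return [theme_video["composite"]]      # step 421: play the long video
```

The dictionary keys here are hypothetical; the point is only that keeping the sub-video boundaries alongside the composite makes the split operation a lookup rather than a re-cut.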
In this specific embodiment, based on the camera function, tags are recommended through AI image recognition and user-defined tags are supported. The user can add, cancel, or modify a tag at any time while recording the video, and the album can subsequently cut a plurality of sub-videos with the same tag out of the original video and automatically synthesize them into a long video. Videos in periods that are invalid for the user are thereby removed, the subject of the tag is made more prominent, and the effect of clipping highlight videos is achieved.
Referring to fig. 5, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal 500 may specifically include:
a marking module 501, configured to mark a video tag for a target video in response to a received tag setting operation in a process of recording the target video;
an obtaining module 503, configured to obtain multiple videos to be synthesized according to a recorded target video, where each video to be synthesized corresponds to one video tag, and multiple video tags corresponding to the multiple videos to be synthesized belong to the same recording topic;
and a synthesizing module 505, configured to synthesize the multiple videos to be synthesized into the target subject video.
Preferably, the mobile terminal 500 provided in the embodiment of the present invention may further include:
and the storage module is used for storing the target subject video and deleting the plurality of videos to be synthesized after the plurality of videos to be synthesized are synthesized into the target subject video.
Preferably, the mobile terminal 500 provided in the embodiment of the present invention may further include:
the determining module is used for responding to a received video playing operation after the plurality of videos to be synthesized are synthesized into the target subject video, and determining whether to split the target subject video;
the first processing module is used for, in the case that splitting is determined, splitting the target subject video into the plurality of videos to be synthesized and playing the video corresponding to the video playing operation among the plurality of videos to be synthesized;
and the second processing module is used for playing the target subject video otherwise.
Preferably, in the mobile terminal 500 provided in the embodiment of the present invention, the marking module 501 may be specifically configured to: in the process of recording the target video, responding to the label setting operation, and marking video labels for a plurality of sub-videos corresponding to a plurality of recording time periods one by one;
the obtaining module 503 may be specifically configured to: and acquiring a plurality of videos to be synthesized from the plurality of sub-videos.
Preferably, in the mobile terminal 500 provided in the embodiment of the present invention, the marking module 501 may be further configured to:
displaying at least one candidate label according to a video preview picture in a first recording time period, wherein the first recording time period is any one of a plurality of recording time periods;
and responding to the label setting operation, and selecting a target label from at least one candidate label to mark a first sub video corresponding to the first recording time interval.
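The per-period marking performed by the marking module 501 can be sketched as an event log against the recording clock: selecting, changing, or cancelling a tag appends an event, and the sub-video periods fall out when recording finishes. Class and method names here are illustrative only, not part of the disclosure:

```python
class TagMarker:
    """Sketch of the marking module: record tag select/cancel events
    against the recording clock, then emit (start, end, tag) sub-video
    periods one-to-one with the recording time periods."""

    def __init__(self):
        self.events = []  # (timestamp_s, active tag or None)

    def set_tag(self, t, tag):
        # Selecting a tag, or changing to another tag, at time t.
        self.events.append((t, tag))

    def cancel_tag(self, t):
        # Cancelling the currently set tag at time t.
        self.events.append((t, None))

    def finish(self, duration_s):
        # Close the last open period, then pair consecutive events into
        # (start, end, tag) segments, skipping untagged stretches.
        self.events.append((duration_s, None))
        segments = []
        for (t0, tag), (t1, _) in zip(self.events, self.events[1:]):
            if tag is not None and t1 > t0:
                segments.append((t0, t1, tag))
        return segments
```

For example, tagging "birthday" at 0 s, cancelling at 10 s, re-tagging at 25 s, and stopping at 40 s yields the two birthday sub-video periods with the untagged middle stretch excluded.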
Preferably, in the mobile terminal 500 provided in the embodiment of the present invention, the obtaining module 503 may be specifically configured to:
acquiring at least one labeled video whose label belongs to the same recording theme as the video label of the target video, wherein the at least one labeled video is different from the target video;
and taking the target video and the at least one labeled video as the plurality of videos to be synthesized.
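Claim 1 notes that labels of the same recording topic may share a name or be different names associated with one topic, so gathering the videos to be synthesized reduces to a topic lookup. A sketch with a hypothetical tag-to-topic table (a real implementation would derive this association, e.g. from AI learning):

```python
# Hypothetical tag-to-topic table: different tag names may map to one
# recording topic, per the claim language.
TAG_TO_TOPIC = {"birthday": "birthday", "cake": "birthday", "party": "birthday",
                "travel": "travel", "beach": "travel"}

def collect_videos_to_synthesize(target, gallery):
    """Return the target video plus every other gallery video whose tag
    maps to the same recording topic as the target's tag."""
    topic = TAG_TO_TOPIC.get(target["tag"])
    same_topic = [v for v in gallery
                  if v is not target and TAG_TO_TOPIC.get(v["tag"]) == topic]
    return [target] + same_topic
```

With this shape, a "cake"-tagged video and a "birthday"-tagged video land in the same composition set even though their tag names differ.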
It can be understood that, the mobile terminal 500 provided in the embodiment of the present invention can implement the foregoing processes of the video processing method executed by the mobile terminal 500, and the relevant descriptions about the video processing method are all applicable to the mobile terminal 500, and are not described herein again.
In the embodiment of the invention, a video tag can be marked for the target video during its real-time recording, which avoids the user having to review the video again after recording, saving time and adding convenience. A plurality of videos to be synthesized can then be obtained automatically according to the recorded target video with its corresponding video tag, and the videos to be synthesized whose video tags belong to the same recording topic can be automatically synthesized into one complete target theme video meeting the user's requirements. The videos to be synthesized that belong to the same recording topic are thus classified clearly and definitely through their video tags, and a plurality of videos to be synthesized can be automatically synthesized into one complete video without the user having to clip or splice them. This effectively lowers the user's operation threshold for synthesizing videos, improves user experience, makes video processing more intelligent and convenient, improves video processing efficiency, enriches the functions of the mobile terminal, and improves its market competitiveness.
Fig. 6 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, where the mobile terminal 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 6 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 610 is configured to perform the following processes:
in the process of recording the target video, responding to the received label setting operation, and marking a video label for the target video;
acquiring a plurality of videos to be synthesized according to the recorded target video, wherein each video to be synthesized corresponds to one video tag, and the video tags corresponding to the videos to be synthesized belong to the same recording theme;
and synthesizing the plurality of videos to be synthesized into the target subject video.
In the embodiment of the invention, a video tag can be marked for the target video during its real-time recording, which avoids the user having to review the video again after recording, saving time and adding convenience. A plurality of videos to be synthesized can then be obtained automatically according to the recorded target video with its corresponding video tag, and the videos to be synthesized whose video tags belong to the same recording topic can be automatically synthesized into one complete target theme video meeting the user's requirements. The videos to be synthesized that belong to the same recording topic are thus classified clearly and definitely through their video tags, and a plurality of videos to be synthesized can be automatically synthesized into one complete video without the user having to clip or splice them. This effectively lowers the user's operation threshold for synthesizing videos, improves user experience, makes video processing more intelligent and convenient, improves video processing efficiency, enriches the functions of the mobile terminal, and improves its market competitiveness.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used to receive and transmit signals during message transmission or a call; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 602, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the mobile terminal 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606, stored in the memory 609 (or other storage medium), or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 601 and output.
The mobile terminal 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the mobile terminal 600 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not further described herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands from the processor 610. In addition, the touch panel 6071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 608 is an interface through which an external device is connected to the mobile terminal 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 600 or may be used to transmit data between the mobile terminal 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, etc. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby integrally monitoring the mobile terminal. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The mobile terminal 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 is logically connected to the processor 610 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program, when executed by the processor 610, implements each process of the above-mentioned video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the video processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the particular illustrative embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but is intended to cover various modifications, equivalent arrangements, and equivalents thereof, which may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A method of video processing, the method comprising:
in the process of recording a target video, responding to a received label setting operation, and marking a video label for the target video;
acquiring a plurality of videos to be synthesized according to the recorded target video, wherein each video to be synthesized corresponds to one video tag, the video tags corresponding to the videos to be synthesized belong to the same recording theme, and the tag names of the video tags belonging to the same recording theme are the same tag name or different tag names associated with the same recording theme;
synthesizing the plurality of videos to be synthesized into a target subject video;
the videos to be synthesized belonging to the same recording theme comprise a plurality of recorded sub-videos in one target video, wherein the plurality of sub-videos correspond to a plurality of recording time periods in the target video one by one;
wherein, in the process of recording the target video, marking a video tag for the target video in response to the received tag setting operation comprises: in the process of recording the target video, responding to the label setting operation, and marking video labels for a plurality of sub-videos corresponding to a plurality of recording time periods one by one;
wherein the obtaining a plurality of videos to be synthesized according to the recorded target video comprises: acquiring at least one labeled video whose video label belongs to the same recording theme as the video label of the target video, wherein the at least one labeled video is different from the target video; taking at least one sub-video in the target video and the at least one labeled video as the plurality of videos to be synthesized; wherein the at least one video label corresponding to the at least one labeled video and the at least one video label corresponding to the at least one sub-video belong to the same recording subject;
after the videos to be synthesized are synthesized into the target subject video, the method further comprises the following steps:
responding to the received video playing operation, and determining whether the target theme video is split or not;
if so, splitting the target theme video into the plurality of videos to be synthesized, and playing a video corresponding to the video playing operation in the plurality of videos to be synthesized;
and if not, playing the target theme video.
2. The method according to claim 1, wherein after the synthesizing the plurality of videos to be synthesized into the target subject video, the method further comprises:
and storing the target subject video and deleting the plurality of videos to be synthesized.
3. The method according to claim 1, wherein said marking video tags for a plurality of sub-videos corresponding to a plurality of recording periods one-to-one in response to the tag setting operation during the recording of the target video comprises:
displaying at least one candidate label according to a video preview picture in a first recording time period, wherein the first recording time period is any one of the plurality of recording time periods;
and responding to the label setting operation, and selecting a target label from the at least one candidate label to mark a first sub video corresponding to the first recording time interval.
4. A mobile terminal, characterized in that the mobile terminal comprises:
the marking module is used for responding to the received label setting operation in the process of recording the target video and marking a video label for the target video;
the acquisition module is used for acquiring a plurality of videos to be synthesized according to the recorded target video, wherein each video to be synthesized corresponds to one video tag, the video tags corresponding to the videos to be synthesized belong to the same recording theme, and the tag names of the video tags belonging to the same recording theme are the same tag name or different tag names associated with the same recording theme;
the synthesizing module is used for synthesizing the videos to be synthesized into a target subject video;
the videos to be synthesized belonging to the same recording theme comprise a plurality of recorded sub-videos in one target video, wherein the plurality of sub-videos correspond to a plurality of recording time periods in the target video one by one;
the determining module is used for responding to the received video playing operation after the plurality of videos to be synthesized are synthesized into the target subject video, and determining whether the target subject video is split or not;
the first processing module is used for splitting the target theme video into the plurality of videos to be synthesized and playing a video corresponding to the video playing operation in the plurality of videos to be synthesized under the condition that the target theme video is determined to be split;
the second processing module is used for playing the target theme video;
wherein the marking module is specifically configured to: in the process of recording the target video, responding to the label setting operation, and marking video labels for a plurality of sub-videos corresponding to a plurality of recording time periods one by one;
the acquisition module is specifically configured to: acquire at least one labeled video whose video label belongs to the same recording theme as the video label of the target video, wherein the at least one labeled video is different from the target video; and take at least one sub-video in the target video and the at least one labeled video as the plurality of videos to be synthesized; wherein the at least one video label corresponding to the at least one labeled video and the at least one video label corresponding to the at least one sub-video belong to the same recording topic.
5. The mobile terminal of claim 4, wherein the mobile terminal further comprises:
and the storage module is used for storing the target subject video and deleting the plurality of videos to be synthesized after the plurality of videos to be synthesized are synthesized into the target subject video.
6. The mobile terminal according to claim 4, wherein the marking module is further configured to:
displaying at least one candidate tag according to a video preview picture in a first recording time period, wherein the first recording time period is any one of the plurality of recording time periods;
and responding to the label setting operation, and selecting a target label from the at least one candidate label to mark a first sub video corresponding to the first recording time interval.
7. A mobile terminal, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method according to any one of claims 1 to 3.
8. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
CN201910811285.XA 2019-08-30 2019-08-30 Video processing method and mobile terminal Active CN110557565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910811285.XA CN110557565B (en) 2019-08-30 2019-08-30 Video processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN110557565A CN110557565A (en) 2019-12-10
CN110557565B true CN110557565B (en) 2022-06-17

Family

ID=68738440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910811285.XA Active CN110557565B (en) 2019-08-30 2019-08-30 Video processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN110557565B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111212225A (en) * 2020-01-10 2020-05-29 上海摩象网络科技有限公司 Method and device for automatically generating video data and electronic equipment
CN111209438A (en) * 2020-01-14 2020-05-29 上海摩象网络科技有限公司 Video processing method, device, equipment and computer storage medium
CN111246289A (en) * 2020-03-09 2020-06-05 Oppo广东移动通信有限公司 Video generation method and device, electronic equipment and storage medium
CN111669620A (en) * 2020-06-05 2020-09-15 北京字跳网络技术有限公司 Theme video generation method and device, electronic equipment and readable storage medium
CN116391358A (en) * 2020-07-06 2023-07-04 海信视像科技股份有限公司 Display equipment, intelligent terminal and video gathering generation method
CN111787259B (en) * 2020-07-17 2021-11-23 北京字节跳动网络技术有限公司 Video recording method and device, electronic equipment and storage medium
CN112822419A (en) * 2021-01-28 2021-05-18 上海盛付通电子支付服务有限公司 Method and equipment for generating video information
CN114189641B (en) * 2021-11-30 2022-12-13 广州博冠信息科技有限公司 Video processing method, device, equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103002330A (en) * 2012-12-31 2013-03-27 合一网络技术(北京)有限公司 Method for editing multiple videos shot at same time and place through network, client side, server and system
CN105338259A (en) * 2014-06-26 2016-02-17 北京新媒传信科技有限公司 Video merging method and device
CN107820138A (en) * 2017-11-06 2018-03-20 广东欧珀移动通信有限公司 Video broadcasting method, device, terminal and storage medium
CN109558516A (en) * 2018-11-06 2019-04-02 汪浩 A kind of video resource management system, method, equipment and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
KR20060127459A (en) * 2005-06-07 2006-12-13 엘지전자 주식회사 Digital broadcasting terminal with converting digital broadcasting contents and method
CN103780973B (en) * 2012-10-17 2017-08-04 三星电子(中国)研发中心 Video tab adding method and device
CN109167937B (en) * 2018-11-05 2022-10-14 北京达佳互联信息技术有限公司 Video distribution method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN110557565A (en) 2019-12-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant