CN112261483B - Video output method and device - Google Patents


Info

Publication number
CN112261483B
CN112261483B (granted publication of application CN202011134123.6A; earlier publication CN112261483A)
Authority
CN
China
Prior art keywords
video
cover
input
target
covers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011134123.6A
Other languages
Chinese (zh)
Other versions
CN112261483A
Inventor
骆晓康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weiwo Software Technology Co ltd
Original Assignee
Nanjing Weiwo Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weiwo Software Technology Co ltd filed Critical Nanjing Weiwo Software Technology Co ltd
Priority to CN202011134123.6A priority Critical patent/CN112261483B/en
Publication of CN112261483A publication Critical patent/CN112261483A/en
Application granted granted Critical
Publication of CN112261483B publication Critical patent/CN112261483B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • G06F16/739Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/787Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The application discloses a video output method and device, belonging to the technical field of communication. The method comprises the following steps: displaying a cover video map, wherein the cover video map comprises a plurality of video covers and each video cover corresponds to one video; receiving a first input to the cover video map; in response to the first input, determining at least two target video covers of the plurality of video covers according to input parameters of the first input; and outputting a target video based on the videos corresponding to the at least two target video covers. With the cover video map, the required video material can be found quickly, saving the user time spent screening video material.

Description

Video output method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video output method and device.
Background
With the popularity of the vlog (video blog), video material has become varied, shooting locations change constantly, and different applications offer different editing modes.
As more videos are shot, each at a different location, a user who wants to splice some of them together into a vlog often spends a long time selecting video material.
Disclosure of Invention
The embodiment of the application aims to provide a video output method and device, which can solve the problem in the prior art that splicing several videos into a vlog requires a long time to be spent selecting video material.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides a video output method, including:
displaying a cover video map, wherein the cover video map comprises a plurality of video covers, and each video cover corresponds to one video;
receiving a first input to the cover video map;
determining at least two target video covers of the plurality of video covers according to input parameters of the first input in response to the first input;
and outputting the target video based on the videos corresponding to the at least two target video covers.
In a second aspect, embodiments of the present application provide a video output apparatus, including:
the video map display module is used for displaying a cover video map, wherein the cover video map comprises a plurality of video covers, and each video cover corresponds to one video;
a first input receiving module for receiving a first input to the cover video map;
a target cover determining module for determining at least two target video covers of the plurality of video covers according to input parameters of the first input in response to the first input;
and the target video output module is used for outputting target videos based on videos corresponding to the at least two target video covers.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions implementing the steps of the video output method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the video output method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the video output method according to the first aspect.
In the embodiment of the application, a cover video map is displayed, a first input to the cover video map is received, at least two target video covers among the plurality of video covers are determined according to input parameters of the first input in response to the first input, and a target video is output based on the videos corresponding to the at least two target video covers. When a user wants to quickly edit a vlog along a geographical track, the required video material can be found quickly through the cover video map, saving the user time spent screening video material.
Drawings
Fig. 1 is a flowchart of steps of a video output method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a video output device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects, not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video output method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating steps of a video output method according to an embodiment of the present application is shown, and as shown in fig. 1, the video output method may specifically include the following steps:
step 101: and displaying a cover video map, wherein the cover video map comprises a plurality of video covers, and each video cover corresponds to one video.
The embodiment of the application can be applied to a scene for selecting videos at different positions and combining the videos in combination with the cover video map.
The cover video map is an electronic map on which a plurality of video covers are displayed, each video cover corresponding to one video.
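As an illustrative sketch only (not part of the patent), the cover video map can be modeled as a mapping from geographic positions to cover/video pairs; the coordinate keys, file names, and the `video_at` helper are hypothetical assumptions.

```python
# Hypothetical model of a cover video map: each video cover sits at a
# geographic position on an electronic map and is linked to one video.
cover_video_map = {
    (32.06, 118.79): {"cover": "cover_1.png", "video": "video_1.mp4"},
    (31.23, 121.47): {"cover": "cover_2.png", "video": "video_2.mp4"},
}

def video_at(position):
    """Return the video linked to the cover displayed at a map position."""
    entry = cover_video_map.get(position)
    return entry["video"] if entry is not None else None
```

A real implementation would render the covers on a map view; the dictionary above only captures the cover-to-video association the method relies on.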
While the user shoots videos in advance, a video cover corresponding to each shot video can be generated, with the cover determined in combination with the shooting position; this is detailed in the following specific implementation.
In a specific implementation manner of the present application, before the step 101, the method may further include:
step A1: in the process of capturing a video, a capturing position is acquired.
In this embodiment, the shooting position refers to the position where the user is located when shooting the video.
While the user is shooting the video, this shooting position can be acquired.
After the shooting position is acquired, step A2 is performed.
Step A2: after the initial video is shot, generating a video cover corresponding to the initial video according to the shooting position.
The initial video is a video shot by a user.
After the user shoots the initial video, a video cover corresponding to the initial video can be generated according to the shooting position, and specifically, the process of generating the video cover can adopt the following two modes:
1. and acquiring at least one frame of video image in the initial video, and generating a video cover corresponding to the initial video according to the at least one frame of video image.
2. And acquiring at least one position image corresponding to the shooting position, and generating a video cover corresponding to the initial video according to the at least one position image.
Of course, the implementation is not limited thereto; the video cover corresponding to the initial video may be generated in other manners according to service requirements, and this embodiment does not limit the manner.
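The two cover-generation modes above can be sketched as follows; the `VideoCover` structure, its field names, and `make_cover` are illustrative assumptions, since the patent does not prescribe a data structure.

```python
from dataclasses import dataclass

@dataclass
class VideoCover:
    video_id: str
    shooting_positions: list   # positions recorded while shooting
    image_source: str          # "video_frame" (mode 1) or "location_image" (mode 2)

def make_cover(video_id, shooting_positions, use_location_image=False):
    """Generate the cover record for a newly shot initial video,
    choosing between the two generation modes described above."""
    source = "location_image" if use_location_image else "video_frame"
    return VideoCover(video_id, list(shooting_positions), source)

frame_cover = make_cover("v001", [(32.06, 118.79)])        # mode 1: from a video frame
place_cover = make_cover("v002", [(31.23, 121.47)], True)  # mode 2: from a location image
```

In practice the cover image itself would be extracted from the video or fetched for the location; only the bookkeeping is modeled here.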
According to the method and the device for displaying the video cover, the video cover of the shot video is generated by combining the video shooting positions, and then, in the process of selecting the shot video subsequently, the video cover corresponding to the shooting positions can be displayed on the electronic map, so that a user can select the shot video by selecting the video cover, and the efficiency of selecting the video by the user can be improved.
In the present embodiment, after the video cover map is displayed, when a video was shot at a plurality of positions, the position at which its video cover is displayed on the video cover map can also be adjusted, as detailed in the following specific implementation.
In a specific implementation manner of the present application, the cover video map includes a first video cover, where the first video cover corresponds to at least two shooting positions, and after the step 101, the method may further include:
step S1: and receiving a second input to a second shooting location in the case that the first video cover is displayed at the first shooting location in the cover video map.
In this embodiment, the first video cover is a video cover displayed on the cover video map that corresponds to at least two shooting positions. For example, if the video corresponding to the first video cover was shot in city A, city B and city C, the shooting positions corresponding to the first video cover are city A, city B and city C.
When the first video cover corresponds to at least two shooting positions, it may be displayed on the cover video map in one of two ways. It may be displayed at one of the shooting positions and connected to the other shooting positions by arrows (i.e., the other shooting positions corresponding to the video cover are indicated by a connection identifier). Alternatively, the video cover may be played in a loop at each shooting position to indicate the positions it corresponds to; for example, if one video cover corresponds to city A and city B, the cover may be played in a loop at both city A and city B on the cover video map.
It will be appreciated that the above examples are only examples listed for better understanding of the technical solutions of the embodiments of the present application, and are not to be construed as the only limitation of the present embodiments.
The second input refers to an input performed on a second photographing position for switching the photographing position at which the first video cover is displayed on the cover video map, i.e., the second input may be used to indicate that the first video cover is displayed at the second photographing position corresponding to the first video cover.
It may be understood that the second input may be an input formed by the user clicking a position displayed in the map at the second shooting position in the cover video map, or may be an input formed by the user clicking text or pattern information related to the second shooting position in the cover video map, for example, the second shooting position is XX scenic spot, and the user forms the second input by clicking the text "XX scenic spot" displayed on the map or clicking a building of the XX scenic spot displayed on the map, or the like.
In the case that the first video cover is displayed at the first shooting position of the cover video map, the second input to the second shooting position may be received, and step S2 is then performed.
Step S2: in response to the second input, the first video cover is displayed at the second capture location and display of the first video cover at the first capture location is canceled.
After receiving the second input to the second photographing position, the first video cover may be displayed at the second photographing position in response to the second input, and the display of the first video cover at the first photographing position may be canceled.
According to the embodiment of the application, under the condition that one video cover corresponds to a plurality of shooting positions, the display positions of the video covers can be adjusted by combining user input, so that the user can manage the video covers displayed in the cover video map conveniently.
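Steps S1 and S2 amount to moving a cover's display position among its associated shooting positions. The sketch below is illustrative only; the dictionary of display positions and the `move_cover` helper are assumptions, not the patented implementation.

```python
def move_cover(display_positions, cover_id, new_position, cover_positions):
    """Show the cover at new_position and cancel its display at the old
    position (steps S1-S2). new_position must be one of the shooting
    positions associated with the cover."""
    if new_position not in cover_positions:
        raise ValueError("cover has no shooting position at the requested location")
    display_positions[cover_id] = new_position

# The first video cover was shot in city A and city B; currently shown at city A.
display = {"first_cover": "city A"}
move_cover(display, "first_cover", "city B", ["city A", "city B"])
```

Because each cover has a single display entry, assigning the new position implicitly cancels the display at the old one, mirroring step S2.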
After the cover video map is displayed, step 102 is performed.
Step 102: a first input is received to the cover video map.
The first input refers to an input performed on the cover video map for selecting a video cover.
After the cover video map is displayed, a first input performed by the user on the cover video map may be received, thereby performing step 103.
Step 103: in response to the first input, at least two target video covers of the plurality of video covers are determined according to input parameters of the first input.
A target video cover is a video cover selected from the plurality of video covers. For example, if the video covers displayed on the cover video map include cover 1, cover 2, cover 3, cover 4 and cover 5, and the user selects cover 1 and cover 3, then cover 1 and cover 3 are the target video covers.
After receiving the first input to the cover video map, at least two target video covers of the plurality of video covers may be determined in response to the first input based on input parameters of the first input. For example, after displaying the video cover map, a plurality of video covers are displayed on the video cover map, and the user clicks on the video cover to be selected to acquire at least two target video covers.
It will be appreciated that the above examples are only examples listed for better understanding of the technical solutions of the embodiments of the present application, and are not to be construed as the only limitation of the present embodiments.
After the at least two target video covers are acquired, step 104 is performed.
Step 104: and outputting the target video based on the videos corresponding to the at least two target video covers.
After the at least two target video covers are acquired, a target video may be generated based on videos corresponding to the at least two target video covers, and the target video may be output, and specifically, the videos corresponding to the at least two target video covers may be synthesized, so that the target video may be obtained through synthesis.
Of course, after the first input is received, a splicing order corresponding to the at least two target video covers may be determined according to the input order of the first input, so that the videos corresponding to the at least two target video covers can be synthesized in that order to obtain the target video, as detailed in the following specific implementation.
In a specific implementation of the present application, the step 103 may include:
substep B1: responding to the first input, determining the at least two target video covers according to the input position of the first input, and determining the corresponding splicing sequence of the at least two target video covers according to the input sequence of the first input;
the step 104 may include:
substep C1: and splicing videos corresponding to the at least two target video covers based on the splicing sequence, and outputting the target videos.
In this embodiment, after receiving the first input to the cover video map, the at least two target video covers may be determined in response to the first input, and their splicing order determined according to the input order of the first input. For example, suppose video covers a, b, c, d and e are displayed on the video cover map, and the user clicks video cover a first, then video cover e, and finally video cover c. Video covers a, e and c then become the target video covers, and their splicing order is a, e, c (from first to last).
It will be appreciated that the above examples are only examples listed for better understanding of the technical solutions of the embodiments of the present application, and are not to be construed as the only limitation of the present embodiments.
After the at least two target video covers and the splicing order are acquired, videos corresponding to the at least two target video covers can be spliced by combining the splicing order, so that the target video can be output.
Of course, in a specific implementation, after selecting at least two target video covers, the user may also customize the splicing order of the corresponding videos; this may be determined according to service requirements and is not limited by this embodiment.
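Sub-steps B1 and C1 can be sketched as selecting covers in click order and splicing their clips in that order. Here "splicing" is modeled as joining clip labels; the `covers` dictionary and clip names are hypothetical, and a real implementation would merge the video streams.

```python
def select_and_splice(click_sequence, covers):
    """Pick target covers in the order clicked (sub-step B1) and splice
    their videos in that order (sub-step C1). Labels are concatenated
    here purely for illustration."""
    targets = [c for c in click_sequence if c in covers]
    return "+".join(covers[c] for c in targets)

covers = {"a": "clip_a", "b": "clip_b", "c": "clip_c", "d": "clip_d", "e": "clip_e"}
# The user clicks cover a, then cover e, then cover c.
target_video = select_and_splice(["a", "e", "c"], covers)
```

This preserves the input order as the splicing order, matching the example of covers a, e, c above.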
In this embodiment, after the target video is synthesized, the user may also edit it, as detailed in the following specific implementation.
In another specific implementation manner of the present application, the method may further include:
substep D1: and performing video editing operation on the target video, and outputting an edited video.
In the present embodiment, the video editing operation may include at least one of an add-music operation and an add-subtitle operation.
After the target video is generated, a video editing operation may be performed on the target video by the user to generate and output an edited video, for example, after the target video is output, an operation of adding music and/or subtitle may be performed on the target video by the user to obtain an edited video.
Through the user's editing operations on the target video, the embodiment of the application enables personalized customization and improves user experience.
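Sub-step D1's editing operations (add music, add subtitles) can be sketched as metadata updates on the synthesized target video; the dictionary representation and file names are assumptions for illustration, not the patented implementation.

```python
def edit_video(target_video, music=None, subtitles=None):
    """Apply optional editing operations (sub-step D1) and return the
    edited video, leaving the original target video unmodified."""
    edited = dict(target_video)
    if music is not None:
        edited["music"] = music
    if subtitles is not None:
        edited["subtitles"] = list(subtitles)
    return edited

video = {"id": "target_1"}
edited = edit_video(video, music="bgm.mp3", subtitles=["scene 1", "scene 2"])
```

A production system would re-encode the stream with the audio track and subtitle overlay; only the operation's inputs and result are modeled here.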
According to the video output method provided by the embodiment of the application, a cover video map is displayed, a first input to the cover video map is received, at least two target video covers among the plurality of video covers are determined according to input parameters of the first input in response to the first input, and a target video is output based on the videos corresponding to the at least two target video covers. When a user wants to quickly edit a vlog along a geographical track, the required video material can be found quickly through the cover video map, saving the user time spent screening video material.
It should be noted that the execution subject of the video output method provided in the embodiment of the present application may be a video output device, or a control module in the video output device for executing the video output method. The video output device provided in the embodiment of the present application is described below, taking a video output device executing the video output method as an example.
Referring to fig. 2, a schematic structural diagram of a video output apparatus provided in an embodiment of the present application is shown, and as shown in fig. 2, the video output apparatus 200 may include the following modules:
the video map display module 210 is configured to display a cover video map; the cover video map comprises a plurality of video covers, and each video cover corresponds to one video;
a first input receiving module 220 for receiving a first input to the cover video map;
a target cover determining module 230, configured to determine at least two target video covers of the plurality of video covers according to an input parameter of the first input in response to the first input;
the target video output module 240 is configured to output a target video based on videos corresponding to the at least two target video covers.
Optionally, the method further comprises:
the shooting position acquisition module is used for acquiring a shooting position in the process of shooting video;
and the video cover generation module is used for generating a video cover corresponding to the initial video according to the shooting position after the initial video is shot.
Optionally, the cover video map includes a first video cover, the first video cover corresponding to at least two shooting locations, the apparatus further comprising:
a second input receiving module for receiving a second input to a second shooting location in the case where the first video cover is displayed at the first shooting location in the cover video map;
a cover position adjustment module for displaying the first video cover at the second shooting position in response to the second input, and canceling the display of the first video cover at the first shooting position;
wherein the at least two photographing positions include the first photographing position and the second photographing position.
Optionally, the target cover determination module 230 includes:
the target cover determining unit is used for responding to the first input, determining the at least two target video covers according to the input position of the first input, and determining the corresponding splicing sequence of the at least two target video covers according to the input sequence of the first input;
the target video output module 240 includes:
and the target video output unit is used for splicing videos corresponding to the at least two target video covers based on the splicing sequence and outputting the target videos.
According to the video output device provided by the embodiment of the application, a cover video map is displayed, a first input to the cover video map is received, at least two target video covers among the plurality of video covers are determined according to input parameters of the first input in response to the first input, and a target video is output based on the videos corresponding to the at least two target video covers. When a user wants to quickly edit a vlog along a geographical track, the required video material can be found quickly through the cover video map, saving the user time spent screening video material.
The video output device in the embodiment of the application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The video output device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video output device provided in this embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 3, the embodiment of the present application further provides an electronic device 300, including a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and capable of running on the processor 301, where the program or the instruction implements each process of the embodiment of the video output method when executed by the processor 301, and the process can achieve the same technical effect, and for avoiding repetition, a description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The display unit 406 is configured to display a cover video map; the cover video map includes a plurality of video covers, and each video cover corresponds to one video;
the user input unit 407 is configured to receive a first input to the cover video map;
the processor 410 is configured to, in response to the first input, determine at least two target video covers of the plurality of video covers according to input parameters of the first input, and output a target video based on the videos corresponding to the at least two target video covers.
With the embodiments of the present application, when a user wants to quickly edit a vlog along a geographical track, the user can quickly find the required video material through the cover video map, which saves the time the user would otherwise spend screening video material.
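The flow described above can be sketched in code. The following is an illustrative sketch only, not part of the patent: `VideoCover`, `CoverVideoMap`, `covers_at`, and the hit-test radius are all assumed names and parameters, modeling a cover video map as covers placed at shooting positions, with target covers selected from the input positions of a first input.

```python
# Illustrative sketch (assumed names, not the patented implementation):
# a cover video map holding video covers at shooting positions, and a
# hit test that selects the covers touched by a first input.
from dataclasses import dataclass

@dataclass
class VideoCover:
    video_id: str            # the video this cover corresponds to
    position: tuple          # (x, y) shooting position on the map

class CoverVideoMap:
    def __init__(self, covers):
        self.covers = list(covers)

    def covers_at(self, input_positions, radius=10.0):
        """Return the covers hit by the input positions (the 'input
        parameters' of the first input), in the order they were touched."""
        hits = []
        for px, py in input_positions:
            for cover in self.covers:
                cx, cy = cover.position
                if (cx - px) ** 2 + (cy - py) ** 2 <= radius ** 2 and cover not in hits:
                    hits.append(cover)
        return hits
```

Under this sketch, a first input touching two covers yields at least two target video covers whose videos can then be combined into the target video.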
Optionally, the radio frequency unit 401 is further configured to acquire a shooting position during video shooting;
the processor 410 is further configured to, after an initial video is obtained by shooting, generate a video cover corresponding to the initial video according to the shooting position.
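A minimal sketch of this cover-generation step, under assumed names (`generate_cover`, the dictionary fields, and the first-frame placeholder are illustrative, not from the patent): after the initial video is shot, the acquired shooting position is bound to the new cover so the cover can later be placed on the cover video map.

```python
# Hypothetical sketch: bind the shooting position acquired during shooting
# to a cover for the newly shot initial video. Field names are illustrative.
def generate_cover(video_id, shooting_position, first_frame=None):
    """Create a cover record for the initial video; producing the cover
    image itself (e.g., from the first frame) is outside this sketch."""
    return {
        "video_id": video_id,
        "position": shooting_position,  # where the video was shot
        "thumbnail": first_frame,       # placeholder for the cover image
    }
```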
Optionally, the user input unit 407 is further configured to receive a second input to a second shooting position when the first video cover is displayed at a first shooting position in the cover video map;
the processor 410 is further configured to, in response to the second input, display the first video cover at the second shooting position and cancel the display of the first video cover at the first shooting position;
where the at least two shooting positions include the first shooting position and the second shooting position.
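The cover-relocation step above can be sketched as follows, assuming (purely for illustration) that the displayed covers are kept in a mapping keyed by shooting position; `move_cover` and the key names are not from the patent.

```python
# Sketch of cover relocation: in response to the second input, the first
# video cover is shown at the second shooting position and its display at
# the first shooting position is canceled. The dict model is an assumption.
def move_cover(display_map, first_position, second_position):
    cover = display_map.pop(first_position)   # cancel display at first position
    display_map[second_position] = cover      # display at second position
    return display_map
```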
Optionally, the processor 410 is further configured to, in response to the first input, determine the at least two target video covers according to the input positions of the first input, and determine a splicing order corresponding to the at least two target video covers according to the input sequence of the first input;
the processor 410 is further configured to splice the videos corresponding to the at least two target video covers based on the splicing order, and output the target video.
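A minimal sketch of this splicing step, under the assumption (not stated in the patent) that each video is represented as a list of frames: the splicing order follows the input sequence of the first input, and the corresponding videos are concatenated in that order to form the target video.

```python
# Illustrative splicing sketch: covers ordered by the input sequence of the
# first input; the corresponding videos (frame lists here) are concatenated
# in that splicing order to produce the target video.
def splice_target_video(videos_by_cover, cover_ids_in_input_order):
    target_video = []
    for cover_id in cover_ids_in_input_order:  # splicing order = input order
        target_video.extend(videos_by_cover[cover_id])
    return target_video
```

In practice the concatenation would operate on encoded video streams rather than in-memory frame lists; the ordering logic is the point of the sketch.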
With the embodiments of the present application, the user can further edit the synthesized target video, which makes video composition more engaging.
It should be understood that, in embodiments of the present application, the input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or videos obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 409 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 410 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It should be understood that the modem processor may alternatively not be integrated into the processor 410.
An embodiment of the present application further provides a readable storage medium storing a program or instruction. When executed by a processor, the program or instruction implements each process of the embodiment of the video output method and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the video output method embodiment and achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed substantially simultaneously or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general hardware platform, or by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Enlightened by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (8)

1. A video output method, comprising:
displaying a cover video map, wherein the cover video map comprises a plurality of video covers, and each video cover corresponds to one video;
receiving a first input to the cover video map;
in response to the first input, determining at least two target video covers of the plurality of video covers according to input parameters of the first input; and outputting a target video based on videos corresponding to the at least two target video covers;
wherein the cover video map comprises a first video cover corresponding to at least two shooting positions, and after the displaying a cover video map, the method further comprises:
receiving a second input to a second shooting position when the first video cover is displayed at a first shooting position in the cover video map;
in response to the second input, displaying the first video cover at the second shooting position and canceling the display of the first video cover at the first shooting position;
wherein the at least two shooting positions comprise the first shooting position and the second shooting position.
2. The method of claim 1, further comprising, before the displaying a cover video map:
acquiring a shooting position during video shooting;
after an initial video is obtained by shooting, generating a video cover corresponding to the initial video according to the shooting position.
3. The method of claim 1, wherein the determining, in response to the first input, at least two target video covers of the plurality of video covers according to input parameters of the first input comprises:
in response to the first input, determining the at least two target video covers according to the input positions of the first input, and determining a splicing order corresponding to the at least two target video covers according to the input sequence of the first input;
and the outputting a target video based on videos corresponding to the at least two target video covers comprises:
splicing the videos corresponding to the at least two target video covers based on the splicing order, and outputting the target video.
4. A video output apparatus, comprising:
a video map display module, configured to display a cover video map, wherein the cover video map comprises a plurality of video covers, and each video cover corresponds to one video;
a first input receiving module, configured to receive a first input to the cover video map;
a target cover determining module, configured to determine, in response to the first input, at least two target video covers of the plurality of video covers according to input parameters of the first input;
a target video output module, configured to output a target video based on videos corresponding to the at least two target video covers;
wherein the cover video map comprises a first video cover corresponding to at least two shooting positions, and the apparatus further comprises:
a second input receiving module, configured to receive a second input to a second shooting position when the first video cover is displayed at a first shooting position in the cover video map;
a cover position adjustment module, configured to display the first video cover at the second shooting position in response to the second input, and cancel the display of the first video cover at the first shooting position;
wherein the at least two shooting positions comprise the first shooting position and the second shooting position.
5. The apparatus of claim 4, further comprising:
a shooting position acquisition module, configured to acquire a shooting position during video shooting;
a video cover generation module, configured to generate, after an initial video is obtained by shooting, a video cover corresponding to the initial video according to the shooting position.
6. The apparatus of claim 4, wherein the target cover determining module comprises:
a target cover determining unit, configured to determine, in response to the first input, the at least two target video covers according to the input positions of the first input, and determine a splicing order corresponding to the at least two target video covers according to the input sequence of the first input;
and the target video output module comprises:
a target video output unit, configured to splice the videos corresponding to the at least two target video covers based on the splicing order, and output the target video.
7. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the video output method of any one of claims 1-3.
8. A readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the video output method of any one of claims 1-3.
CN202011134123.6A 2020-10-21 2020-10-21 Video output method and device Active CN112261483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011134123.6A CN112261483B (en) 2020-10-21 2020-10-21 Video output method and device

Publications (2)

Publication Number Publication Date
CN112261483A CN112261483A (en) 2021-01-22
CN112261483B true CN112261483B (en) 2023-06-23


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004242288A (en) * 2003-01-17 2004-08-26 Matsushita Electric Ind Co Ltd Video reproducing apparatus and video recording and reproducing apparatus
KR20120116171A (en) * 2011-04-12 2012-10-22 (주)디스트릭트홀딩스 Apparatus and method for providing video service based on location
CN105721813A (en) * 2016-04-06 2016-06-29 成都都在哪网讯科技有限公司 Automatic video track forming method and system
CN105827959A (en) * 2016-03-21 2016-08-03 深圳市至壹科技开发有限公司 Geographic position-based video processing method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5401103B2 (en) * 2009-01-21 2014-01-29 日立コンシューマエレクトロニクス株式会社 Video information management apparatus and method
CN102521253B (en) * 2011-11-17 2013-05-22 西安交通大学 Visual multi-media management method of network users
CN103870599A (en) * 2014-04-02 2014-06-18 联想(北京)有限公司 Shooting data collecting method, device and electronic equipment
CN105681743B (en) * 2015-12-31 2019-04-19 华南师范大学 Video capture management method and system based on running fix and electronic map
CN105704444B (en) * 2015-12-31 2019-05-31 华南师范大学 Video capture management method and system based on moving map and time suboptimal control
CN110019959A (en) * 2017-12-30 2019-07-16 广州集星图信息科技有限公司 A kind of video capture management method based on moving map
CN110366027B (en) * 2019-08-29 2022-04-01 维沃移动通信有限公司 Video management method and terminal equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant