CN112261218B - Video control method, video control device, electronic device and readable storage medium - Google Patents

Video control method, video control device, electronic device and readable storage medium

Info

Publication number
CN112261218B
Authority
CN
China
Prior art keywords
video
target
input
recording
user
Prior art date
Legal status
Active
Application number
CN202011133560.6A
Other languages
Chinese (zh)
Other versions
CN112261218A (en)
Inventor
李海波 (Li Haibo)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011133560.6A priority Critical patent/CN112261218B/en
Publication of CN112261218A publication Critical patent/CN112261218A/en
Application granted granted Critical
Publication of CN112261218B publication Critical patent/CN112261218B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N 21/42206 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N 21/42224 Touch pad or touch panel provided on the remote control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4334 Recording operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/485 End-user interface for client configuration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The embodiment of the application provides a video control method, a video control device, an electronic device, and a readable storage medium. The video control method comprises the following steps: in the process of recording a first video with a first camera, receiving a first input of a user on a target video object in a video recording interface of the first video; in response to the first input, performing target processing and outputting a second video of the target video object; the target processing comprises video recording of the target video object or video processing of the first video. When recording a video, the user can select the target video object on the video recording interface so that recording or processing is performed only for the target video object, i.e., an independent video of the target video object is obtained. This improves the flexibility of video recording, yields a clearer video of the target video object, and makes it convenient for the user to view the details of the target video object.

Description

Video control method, video control device, electronic device, and readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a video control method, a video control apparatus, an electronic device, and a readable storage medium.
Background
At present, the mobile phone has become a necessity in daily life, and video recording is one of its essential functions. However, the mobile phone video recording method in the related art cannot ensure that every recorded video object has a clear picture. For example, when recording focuses on a near-view video object, the definition of a far-view video object in the video is reduced, so that when the video is played back the far-view video object appears with low definition.
Disclosure of Invention
The embodiment of the application provides a video control method, a video control device, electronic equipment and a readable storage medium, so as to solve the problem that a user cannot see details of a certain video object in a played video in the related art.
In order to solve the above problems, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video control method, including:
receiving a first input of a user to a target video object in a video recording interface of a first video in the process of recording the first video by a first camera;
in response to the first input, performing target processing, outputting a second video of the target video object;
the target processing comprises video recording of the target video object or video processing of the first video.
In a second aspect, an embodiment of the present application provides a video control apparatus, including:
the receiving module is used for receiving first input of a user to a target video object in a video recording interface of a first video in the process of recording the first video by the first camera;
a processing module for executing target processing in response to the first input, and outputting a second video of the target video object;
the target processing comprises video recording of the target video object or video processing of the first video.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the video control method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video control method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the video control method according to the first aspect.
In the embodiment of the application, in the process of recording a first video with a first camera, a first input of a user on a target video object in a video recording interface of the first video is received; target processing is performed in response to the first input, and a second video of the target video object is output; the target processing comprises video recording of the target video object or video processing of the first video. When recording a video, the user can select a target video object on the video recording interface so that recording or capturing is performed only for the target video object, i.e., an independent video of the target video object is obtained. This improves the flexibility of video recording, yields a clearer video of the target video object, and makes it convenient for the user to view the details of the target video object.
Drawings
Fig. 1 is a flowchart of a video control method provided in an embodiment of the present application;
fig. 2 is a second flowchart of a video control method according to an embodiment of the present application;
fig. 3 is a third flowchart of a video control method according to an embodiment of the present application;
fig. 4 is a fourth flowchart of a video control method according to an embodiment of the present application;
fig. 5 is a fifth flowchart of a video control method according to an embodiment of the present application;
fig. 6 is a sixth flowchart of a video control method according to an embodiment of the present application;
fig. 7 is a seventh flowchart of a video control method provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a display of a screen of an electronic device according to an embodiment of the present application;
fig. 9 is a second schematic display diagram of a screen of an electronic device according to an embodiment of the present application;
FIG. 10 is a third schematic view of a display of a screen of an electronic device according to an embodiment of the present application;
FIG. 11 is a fourth illustration of a display of a screen of an electronic device according to an embodiment of the present application;
FIG. 12 is a fifth illustration of a display of a screen of an electronic device according to an embodiment of the present disclosure;
FIG. 13 is a sixth illustration of a display of a screen of an electronic device according to an embodiment of the disclosure;
fig. 14a is a seventh schematic display diagram of a screen of an electronic device according to an embodiment of the present application;
fig. 14b is an eighth schematic display diagram of a screen of an electronic device according to an embodiment of the present application;
FIG. 15 is a ninth illustration of a display schematic diagram of a screen of an electronic device according to an embodiment of the disclosure;
FIG. 16 is a schematic diagram of a display of a screen of an electronic device according to an embodiment of the present application;
fig. 17 is an eleventh schematic view illustrating a display of a screen of an electronic device according to an embodiment of the present application;
FIG. 18 is a twelfth schematic display view of a screen of an electronic device according to an embodiment of the present application;
FIG. 19 is a thirteen schematic display diagram of a screen of an electronic device according to an embodiment of the present disclosure;
fig. 20 is a block diagram of a video control apparatus according to an embodiment of the present application;
fig. 21 is a second block diagram of a video control apparatus according to an embodiment of the present application;
fig. 22 is one of structural diagrams of an electronic device provided in an embodiment of the present application;
fig. 23 is a second structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first," "second," and the like are generally used herein in a generic sense and do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
The video control method, the video control apparatus, the electronic device, and the computer-readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
An embodiment of the present application provides a video control method, as shown in fig. 1, the video control method including:
step 102, receiving a first input of a user to a target video object in a video recording interface of a first video in the process of recording the first video by a first camera.
It should be noted that the first input of the user is used to select a video object on the video recording interface, so as to perform separate video recording for that video object. The first input includes, but is not limited to, a single-click input, a double-click input, a key input, a fingerprint input, a slide input, and a press input. Several examples of the first input are described below. For example, a user may perform a single-click operation on a target video object in the first video recording interface to select the target video object. For another example, if there are 3 objects in the video recording interface, the user presses a key once to select the first object and presses the key again to select the second object; if the user does not press the key for a certain time (e.g., 3 seconds) after selecting the second object, the second object is determined to be the target video object. For fingerprint input, the user needs to input fingerprint information matching a pre-stored fingerprint to select the target video object. For another example, if there are 3 objects in the first video, the first object is selected when the user inputs the fingerprint for the first time, and the second object is selected when the user inputs the fingerprint for the second time; if the user does not input the fingerprint again within a certain time (e.g., 3 seconds) after selecting the second object, the second object is determined to be the target video object. In addition, when an object is selected, the object can be highlighted, for example, the currently selected object is displayed enlarged, or a frame line is displayed around the currently selected object, so that the user clearly knows which object is currently selected; the highlighting disappears after video recording starts. Specifically, the input mode in the embodiments of the present application is not particularly limited, and may be any realizable mode.
It should be noted that the first camera may be understood as a main camera, such as one with a standard lens, a long-focus lens, a macro lens, a wide-angle lens, a fisheye lens, a zoom lens, or a fixed-focus lens. The user may set the camera before or during recording, or directly adopt the default camera of the video software. The first video is the video image shot by the first camera; it may be a video shot after the user has performed an operation of enlarging or reducing the video image, or a normal video without such an operation, which is not specifically limited in this embodiment of the application.
And 104, responding to the first input, executing target processing and outputting a second video of the target video object.
The target processing comprises video recording of the target video object or video processing of the first video.
In this embodiment, the user starts a camera application of the electronic device, the electronic device displays the video recording interface, and after the user chooses to start video recording, the first camera starts to record the first video. At any time during the recording of the first video, when the user needs to record an independent video of a certain video object on the video recording interface of the first video, the user can select the target video object, for example by clicking the target video object; the second camera is then used to record an independent video of the target video object and the second video is output, or the video of the target video object is captured from the first video by an image processor of the electronic device, and a second video containing only the target video object is output. In the embodiment of the application, when recording a video, the user can select the target video object on the video recording interface so that recording or processing is performed only for the target video object, i.e., an independent video of the target video object is obtained. This improves the flexibility of video recording, yields a clearer video of the target video object, and makes it convenient for the user to view the details of the target video object.
The electronic device includes, but is not limited to, a mobile terminal, a tablet computer, a notebook computer, a wearable device, a vehicle-mounted terminal, and the like.
Furthermore, the number of target video objects is not limited to one; that is, during the recording of the first video, at least one second video may be recorded. For example, when the user clicks a first target video object on the first video recording interface at the 1st minute of recording the first video, recording or capturing of a second video of the first target video object is started. When the user clicks a second target video object on the first video recording interface at the 2nd minute of recording the first video, recording or capturing of a second video of the second target video object is started. The second video of the first target video object and the second video of the second target video object may be recorded or captured simultaneously.
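For illustration only, the following is a minimal Python sketch of how an electronic device might dispatch the target processing when a first input selects a target video object during recording of the first video; all class, method and field names (e.g., VideoController, start_recording, start_cropping) are assumptions and do not appear in the patent.

```python
# Minimal sketch (all names assumed, not from the patent): dispatch the target
# processing when a first input selects a target video object during recording
# of the first video.  Several target objects may be processed concurrently.

class VideoController:
    def __init__(self, image_processor, cameras, first_video_stream):
        self.image_processor = image_processor    # used when capturing from the first video
        self.cameras = cameras                    # available cameras, keyed by name
        self.first_video_stream = first_video_stream
        self.active_sub_recordings = {}           # target object id -> session

    def on_first_input(self, target_object, mode):
        """Start the target processing for the selected object.

        mode == "record":  record the object with a second camera.
        mode == "capture": crop the object out of the first video stream.
        """
        if mode == "record":
            second_camera = self.select_second_camera(target_object)
            session = second_camera.start_recording(region=target_object.bounds)
        else:
            session = self.image_processor.start_cropping(
                self.first_video_stream, region=target_object.bounds)
        # Each target object gets its own independent session.
        self.active_sub_recordings[target_object.id] = session

    def select_second_camera(self, target_object):
        # Placeholder; a distance-based selection is sketched later in this description.
        return self.cameras["standard lens"]
```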
As a possible implementation, as shown in fig. 2, the video control method includes:
step 202, in the process of recording the first video by the first camera, receiving a first input of a user to a target video object in a video recording interface of the first video.
And step 204, responding to the first input, performing video recording on the target video object or performing video processing on the first video, and outputting a second video of the target video object.
Step 206, receiving a second input of the target video object from the user.
Step 208, in response to the second input, stopping video recording of the target video object or stopping video processing of the first video, and outputting a second video.
Wherein the second input includes, but is not limited to, a click input, a slide input, a press input. Specifically, the input mode in the embodiments of the present application is not particularly limited, and may be any realizable mode.
In this embodiment, at any time during the recording of the first video, when the user needs to record an independent video of a certain video object on the video recording interface of the first video, the user may select (e.g., click) the target video object; an independent video of the target video object is then recorded and the second video is output, or the video of the target video object is captured from the first video by an image processor of the electronic device and a second video containing only the target video object is output. When the user selects the target video object again (e.g., clicks it again), recording of the second video of the target video object, or the capturing of the first video, may be stopped.
For example, as shown in fig. 10, at 56 seconds of the first video recording, the user clicks on the first target video object 806 (i.e., boy in fig. 10) on the first video recording interface 804, and begins recording a second video of the first target video object 806. When the user clicks the first target video object 806 again, video recording may stop.
Therefore, the end time of the second video of the target video object can be selected according to the user requirements, the independent video of the target video object can be recorded, the flexibility of video recording is improved, and the requirement of the user on video recording can be better met.
In this embodiment, the video control method further includes: displaying a first control in the process of video recording of a target video object or video processing of a first video, wherein the first control is used for controlling video recording of the target video object or video processing of the first video; receiving a second input of the target video object by the user, comprising: an input to the first control by a user is received.
In this embodiment, the first control is displayed while the video of the target video object is being recorded or the first video is being processed. The first control is a newly generated control, based on the target video object selected by the user, for controlling the recording of the target video object's video or the video processing of the first video. On the one hand, the display of the first control conveniently makes the user aware that the target video object is being recorded or that the first video is being processed. On the other hand, by clicking the first control the user can start or end recording of the second video of the target video object, or start or stop capturing the second video of the target video object. By operating the first control, the user can conveniently and flexibly control video recording or capturing, and can autonomously choose the end time of the second video of the target video object.
For example, as shown in fig. 11, after the second video recording of the first target video object 806 is started, a first control (i.e., a first secondary video recording control indicator 808) is displayed in the relevant area of the first target video object 806. As shown in fig. 12, at the 01 th minute 48 seconds of the first video recording, the user clicks the first control again, and the recording of the second video of the first target video object 806 is stopped.
It should be noted that the second input of the target video object by the user may also include an input to an interface of the second video of the target video object, for example, when the interface of the second video of the target video object is clicked, the recording or the interception of the second video may be stopped.
It should be noted that stopping the recording or capturing of the second video of the target video object does not affect the recording of the first video or the recording or capturing of the second videos of other target video objects.
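To illustrate how a second input (for example, tapping the first control or tapping the target video object again) could end only that object's second video while the first video and other second videos continue, here is a hedged sketch continuing the hypothetical VideoController example above; the names are assumptions, not taken from the patent.

```python
def on_second_input(controller, target_object):
    """Stop recording or capturing the second video of this object only.

    The first video and any other objects' second videos keep running;
    `controller` is the hypothetical VideoController sketched earlier.
    """
    session = controller.active_sub_recordings.pop(target_object.id, None)
    if session is None:
        return None  # this object is not currently being recorded or captured
    second_video = session.stop()  # finalize this object's second video only
    return second_video
```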
As a possible implementation, as shown in fig. 3, the video control method includes:
step 302, in the process of recording the first video by the first camera, receiving a first input of a target video object in a video recording interface of the first video from a user.
And 304, responding to the first input, performing video recording on the target video object or performing video processing on the first video, and outputting a second video of the target video object.
And step 306, stopping recording the video of the target video object or stopping video processing of the first video under the condition that the recording of the first video is stopped, and outputting a second video.
In this embodiment, at any time during the recording of the first video, when the user needs to end the recording of the first video, the user may perform an input to stop the recording; alternatively, before starting to record the first video, the user may set a fixed recording duration, for example a 5-minute video. In this way, the user can autonomously choose the duration of the first video, which improves the flexibility of video recording.
Further, when the recording of the first video stops, the video recording or the video processing of the target video object may be ended at the same time; that is, the second video ends at the moment the recording of the first video ends. By stopping the recording of the first video (or letting the set duration elapse), the user automatically stops the second video at the same time, which simplifies the operation steps and saves operation time.
It should be noted that the user input for the first video may include an input to a third control corresponding to the first video or an input to the first video recording interface. The third control is used to start or end the recording of the first video; the user can start or end the recording of the first video by clicking the third control.
Input to the third control or input to the first video recording interface includes, but is not limited to, a click input, a slide input, a press input. Specifically, the input mode in the embodiments of the present application is not particularly limited, and may be any realizable mode.
Furthermore, the user can resume recording the first video by operating the third control again or by an input on the first video recording interface, and the second video continues to be recorded or processed while the first video resumes. This lets the user control video recording conveniently and flexibly, and autonomously choose the recording end times of the first video and of the second video.
For example, as shown in fig. 8, a user may control recording of a first video through a third control (i.e., a main video recording control identifier 802) displayed on a screen of the electronic device, start recording the first video using the first camera when the user clicks the third control for the first time, stop recording the first video when the user clicks the third control for the second time, and stop recording the video of the target video object at the same time. If the user clicks the third control a third time, the recording of the first video may continue.
As a possible implementation, the video processing the first video and outputting the second video includes: and performing video interception on a target video object in the video recording interface of the first video, and outputting a second video.
In this embodiment, the first video and the second video are recorded by the same camera, but their video contents are different: the second video is an independent video of only one target video object in the first video. While the first video is being recorded, the image processor of the electronic device performs video capture on the first video and captures the second video of the target video object. In this way, on the one hand, a clearer video of the target video object can be obtained; on the other hand, a plurality of videos can be obtained with only one camera, which saves hardware cost.
In this embodiment, even if the electronic device has a plurality of cameras, the first video and the second video can be recorded by the same camera, for example, the first video is shot by using a standard lens, and if it is determined that the target video object is still shot by using the standard lens to be most suitable according to the distance information of the target video object, the second video of the target video object is still shot by using the standard lens, so that the target video object is ensured to be recorded by using the most suitable camera.
Further, N frames of first video images of the first video include the target video object, N being a positive integer; capturing the target video object in the video recording interface of the first video and outputting the second video comprises the following steps: cropping out N second video images corresponding to a first target area in the N frames of first video images; performing video synthesis on the N second video images and outputting the second video; the first target area is the area where the target video object is located in each frame of the first video image.
In this embodiment, a specific method of video capture is defined for obtaining the second video based on the first video. Specifically, if the target video object appears in N frames of first video images of the first video, the second video image corresponding to the first target area in each of those frames is cropped out to obtain N second video images, and the N second video images are then synthesized into the second video. In this way, a clearer video of the target video object can be obtained, and the flexibility of video recording is improved.
For example, if the target video object is displayed in 10 frames of first video images of a recorded first video, the second video image corresponding to the area where the target video object is located is cropped out of each of those frames, and the 10 second video images are then synthesized in sequence to obtain the second video of the target video object.
It should be noted that the first target area is an area where the target video object in the first video image is located, that is, the display content in the first target area at least includes the target video object.
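The crop-and-composite step described above can be pictured with a short sketch. This uses OpenCV calls (cv2.VideoCapture, cv2.VideoWriter) purely as an assumed toolchain, and the per-frame region lookup is a hypothetical helper, not an API defined by the patent.

```python
import cv2

def compose_second_video(first_video_path, regions, out_path, fps=30.0, out_size=(640, 480)):
    """Crop the target object's region out of each first-video frame that contains it
    and composite the crops into a second video.

    `regions` maps frame index -> (x, y, w, h), i.e. the first target area in that
    frame; frames without the target object are skipped.  All names here are
    illustrative assumptions.
    """
    cap = cv2.VideoCapture(first_video_path)
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, out_size)

    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        region = regions.get(frame_index)
        if region is not None:
            x, y, w, h = region
            crop = frame[y:y + h, x:x + w]      # one second video image
            crop = cv2.resize(crop, out_size)   # normalize size for the writer
            writer.write(crop)                  # video synthesis, frame by frame
        frame_index += 1

    cap.release()
    writer.release()
```

Here the crops are normalized to a fixed output size for simplicity; the real-time size update described later in this text would instead vary the output dimensions with the target object's display size.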
As a possible implementation, the video recording of the target video object and the outputting of the second video of the target video object includes: and recording the video of the target video object through a second camera, and outputting a second video, wherein the first camera and the second camera are different cameras.
It should be noted that the second camera is not the same camera as the first camera; the second camera may be understood as a sub-camera, i.e., the camera that performs the second video recording of the target video object, such as one with a standard lens, a long-focus lens, a macro lens, a wide-angle lens, a fisheye lens, a zoom lens, or a fixed-focus lens. Specifically, the selection may be made according to the target video object; for example, when the target video object is a distant object, if it is determined that the distance between the position of the center point of the target video object and the position of the lens center of the first camera falls within the focal length range of the long-focus lens, the long-focus lens is used to record the second video of the target video object. The second video is an independent video of the target video object; its display content is mainly the target video object and includes little or none of the other display content of the first video.
In this way, on the one hand, a clearer video of the target video object can be obtained; on the other hand, the target video object is ensured to be recorded by the most appropriate camera. In addition, the second video and the first video are recorded by different cameras, so the recordings are independent of each other and their recording times do not affect each other; compared with capturing the second video from the first video, this also saves data processing resources of the system.
In any of the above embodiments, when the number of the target video objects is multiple, the second camera that records videos of multiple target video objects may be the same camera or different cameras.
In this embodiment, when the second cameras that record the videos of the multiple target video objects are the same camera, that one second camera records a single segment of video, and the image processor of the electronic device captures from it the second video corresponding to each target video object. This avoids the waste of camera resources that would result from one camera only recording one video at a time, and ensures that the most suitable camera is used to record the target video objects. For example, the first video is shot with a standard lens; if it is determined from the distance information of the first target video object that shooting it with a macro lens is most appropriate, the second video of the first target video object is shot with the macro lens, and if it is determined from the distance information of the second target video object that shooting it with the macro lens is also most appropriate, the second video of the second target video object is shot with the macro lens as well.
It should be noted that, in this case, although the second video of the first target video object and the second video of the second target video object are recorded by using the same camera, the display contents of the videos are different, and the second video of the first target video object is a video captured only for the first target video object, that is, the display content in the second video of the first target video object is mainly the first target video object, and the second video of the second target video object is a video captured only for the second target video object, that is, the display content in the second video of the second target video object is mainly the second target video object.
In this embodiment, when the second cameras for performing video recording on the multiple target video objects are different cameras, the second video of each target video object is recorded by one corresponding camera, the video recording is independent of each other, the time of each video recording does not affect each other, and the video recording effect is improved.
In any of the above embodiments, the video recording of the target video object by the second camera, before outputting the second video, further includes: obtaining distance information of a target video object; and determining a second camera matched with the distance information of the target video object.
In this embodiment, the second camera matched with the target video object is determined according to the distance information, so that the definition of a video recorded by the target video object can be improved.
In this embodiment, the distance information of the target video object includes a first distance between a first position of the target video object and a second position of a lens center of the first camera; determining a second camera matched with the distance information of the target video object, comprising: acquiring a first distance range corresponding to the first distance; determining a second camera matched with the first distance range; wherein different distance ranges correspond to different cameras.
In this embodiment, the second camera is determined according to the first distance range in which the first distance between the first position of the target video object and the second position of the lens center of the first camera falls. For example, when the first distance between the first position of the target video object and the second position of the lens center of the first camera falls within the distance range (i.e., the focal length range) of the long-focus lens, the long-focus lens is used to record the second video of the target video object. When that distance falls within the distance range (i.e., the focal length range) of the macro lens, the macro lens is used to record the second video of the target video object. In this way, an appropriate camera can be selected for the target video object, which improves the definition of the video recording and achieves a better recording effect.
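A hedged sketch of the distance-range matching described here: each candidate camera is associated with a distance (focal) range, and the second camera is whichever camera's range contains the first distance. The ranges and names below are made-up examples, not values from the patent.

```python
# Illustrative only: map the first distance (target object to first-camera lens
# centre) onto a camera whose working range contains it.  Ranges are invented.

CAMERA_RANGES = [
    ("macro lens",      0.0,   0.2),          # metres, hypothetical
    ("standard lens",   0.2,   5.0),
    ("long-focus lens", 5.0, float("inf")),
]

def select_second_camera(first_distance_m):
    """Return the camera name whose distance range contains `first_distance_m`."""
    for name, lower, upper in CAMERA_RANGES:
        if lower <= first_distance_m < upper:
            return name
    return "standard lens"  # fallback if no range matched

# Example: an object 12 m away would be matched to the long-focus lens.
assert select_second_camera(12.0) == "long-focus lens"
```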
In any of the above embodiments, in executing the target process, the method further includes: acquiring the display size of a target video object in a video recording interface in real time; and updating the video size of the second video according to the display size of the target video object in the video recording interface acquired in real time.
In this embodiment, the video size of the second video of the target video object is determined by the real-time display size of the target video object in the first video recording interface, so that it is ensured that the display content in the second video is mainly the target video object, thereby implementing detailed video recording and improving the video recording definition of the target video object.
It is understood that the target video object may move around in the first video recording interface, and its display size in the first video recording interface changes; the video size of the output second video therefore changes along with the display size of the target video object in the first video recording interface. For example, suppose the video recording of the target video object, or the video processing of the first video, starts at the 56th second of the first video recording and the second video is output. If between the 56th second and the 1st minute 05 seconds the display size of the target video object is 10 mm wide and 2 mm high, the video size of the second video in that period is 10 mm wide and 2 mm high. If between the 1st minute 06 seconds and the 1st minute 18 seconds the display size of the target video object changes to 15 mm wide and 25 mm high, the video size of the second video in that period is 15 mm wide and 25 mm high. The video size of the second video adapts to the display size of the target video object, realizing detailed video recording and improving the video recording definition of the target video object.
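The adaptive sizing can be expressed as a per-frame update, sketched below under assumed names: the output dimensions of the second video simply follow the target object's current display size in the recording interface.

```python
def update_second_video_size(target_object, second_video_writer):
    """Per-frame hook (hypothetical API): keep the second video's frame size equal
    to the target object's current display size in the first video's recording
    interface, so the second video always frames the object itself."""
    width, height = target_object.display_size()       # current size in the interface
    second_video_writer.set_frame_size(width, height)  # assumed writer method
```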
In any of the above embodiments, after performing the target process in response to the first input, outputting the second video of the target video object, the method further comprises: and storing the second video in association with the first video.
In the embodiment, when the video is stored, the second video of the target video object is stored in association with the first video, for example, the second video is stored in a mode of being attached to the first video, so that a user can conveniently view the video, and the video searching efficiency is improved.
For example, the second video is stored under the storage file directory of the first video. In addition, in order to be associated with the first video, the name of the second video shares the name of the first video, for example in the form "<name of the first video>_<serial number>": if the first video is named 202006101050, the second video of the first target video object is named 202006101050_01, the second video of the second target video object 202006101050_02, and the second video of the third target video object 202006101050_03. Further, the name of the second video may also include the recording start time and recording end time of the second video.
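The naming convention just described (second videos named after the first video plus a serial number, stored in the first video's directory) can be sketched as follows; the directory layout and helper names are assumptions.

```python
from pathlib import Path

def second_video_path(first_video_path, serial, start_time=None, end_time=None):
    """Build a storage path for a second video associated with the first video.

    Example: first video 202006101050.mp4 -> second videos 202006101050_01.mp4,
    202006101050_02.mp4, ... stored in the first video's directory; the recording
    start/end times may optionally be appended to the name.
    """
    first = Path(first_video_path)
    name = f"{first.stem}_{serial:02d}"
    if start_time and end_time:
        name += f"_{start_time}-{end_time}"   # e.g. "_0056-0148"
    return first.parent / f"{name}{first.suffix}"

# Example usage:
# second_video_path("/DCIM/202006101050.mp4", 1) -> /DCIM/202006101050_01.mp4
```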
As a possible implementation, as shown in fig. 4, the video control method includes:
step 402, in the process of recording the first video by the first camera, receiving a first input of a user to a target video object in a video recording interface of the first video.
Step 404, in response to the first input, performing video recording on the target video object or performing video processing on the first video, and outputting a second video of the target video object.
In step 406, a third input from the user is received.
In response to the third input, the first video is played, step 408.
Step 410, in the process of playing the first video, receiving a fourth input of the user to the target video object in the video playing interface of the first video.
In response to the fourth input, a second video stored in association with the first video is played, step 412.
It should be noted that the third input of the user is used to play the first video, and the fourth input of the user is used to select a video object on the video playing interface of the first video, so as to play that video object's video separately. The third input and the fourth input include, but are not limited to, a click input, a key input, a fingerprint input, a slide input, and a press input. Several examples of the fourth input are described below. For example, a user may perform a single-click operation on a target video object on the first video playing interface to select the target video object. For another example, if there are 3 objects in the played first video, the user presses a key once to select the first object and presses it again to select the second object; if the user does not press the key for a certain time (e.g., 3 seconds) after selecting the second object, the second object is determined to be the target video object. For fingerprint input, the user needs to input fingerprint information matching a pre-stored fingerprint to select the target video object. For another example, if there are 3 objects in the first video, the first object is selected when the user inputs the fingerprint for the first time, and the second object is selected when the user inputs the fingerprint for the second time; if the user does not input the fingerprint again within a certain time (e.g., 3 seconds) after selecting the second object, the second object is determined to be the target video object. In addition, when an object is selected, the object can be highlighted, for example the currently selected object is displayed enlarged, or a frame line is displayed around it, so that the user clearly knows which object is currently selected; the highlighting disappears after the video starts to play. Specifically, the input mode in the embodiments of the present application is not particularly limited, and may be any realizable mode.
It should be noted that the first video can be understood as a main video. As shown in fig. 14a, the first video can be a normal video (a person and a butterfly are displayed in the video) without any operation of zooming the video picture in or out. The first video may also be a video played after the user performs an operation of enlarging or reducing the video picture; as shown in fig. 14b, after the user enlarges the display of the person in the first video, only the video of the person is displayed on the screen of the electronic device, and the butterfly originally present in the first video is no longer displayed, which is not specifically limited in this embodiment of the application. The second video can be understood as a sub video corresponding to the first video; it is an independent video of the target video object, and its display content is mainly the target video object, with little or none of the other display content of the first video.
In this embodiment, in the process of playing the first video on the screen of the electronic device, the user may select the target video object on the first video playing interface to play the second video of the target video object, that is, play the independent video of the target video object, so as to improve the flexibility of video playing, make the playing picture of the target video object clearer, and facilitate the user to view the details of the target video object.
The electronic device includes, but is not limited to, a mobile terminal, a tablet computer, a notebook computer, a wearable device, a vehicle-mounted terminal, and the like.
In addition, the number of the target video objects is not limited to one, that is, during the playing of the first video, the playing of at least one second video can be performed. For example, as shown in fig. 14a, when the user clicks the first target video object 806 when the first video is played for 56 seconds, the second video of the first target video object 806 starts to be played. As shown in fig. 16, when the main video is played for the 1 st minute and 22 seconds, the user clicks the second target video object 810 (butterfly in fig. 16), and then the second video of the second target video object 810 starts to be played. Also, the second video of the first target video object 806 and the second video of the second target video object 810 may be played simultaneously.
The playing mode of the second video can comprise playing through the floating window on the first video playing interface, playing through the floating window on the main interface of the screen of the electronic equipment, playing separately from the first video and the like.
As a possible implementation, during the playing of the first video, the method further includes: displaying a second control in a second target area associated with the target video object, wherein the second control is used for controlling video playing of the target video object; receiving a fourth input of the target video object in the video playing interface of the first video from the user, including: receiving input of a user to the second control; the second target area is an area where a target video object in the first video is located.
In this embodiment, a second control is displayed while the first video is played. The second control is a newly generated control, based on the target video object selected by the user, for controlling the video playing of the target video object. On the one hand, the display of the second control conveniently makes the user aware that the target video object has an independent video. On the other hand, the user can play the independent video by operating the second control, which makes it convenient for the user to control video playing flexibly.
It should be noted that the second target area is an area where the target video object is located in the playing interface of the first video, that is, the display content in the second target area at least includes the target video object.
For example, as shown in fig. 14a, when the first video is played to 56 th second, the second video of the first target video object 806 may start to be played, the second control (i.e., the first video playing control identifier 814) is displayed on the first target video object 806, and the user clicks the second control to start playing the second video of the first target video object 806.
The fourth input by the user to the target video object may also include an input to the target video object itself, for example, when the target video object is clicked in the video playing interface of the first video, the playing of the second video of the target video object may be started.
As a possible implementation, as shown in fig. 5, the video control method includes:
step 502, in the process of recording the first video by the first camera, receiving a first input of a target video object in a video recording interface of the first video from a user.
Step 504, in response to the first input, performing video recording on the target video object or performing video processing on the first video, and outputting a second video of the target video object.
Step 506, a third input from the user is received.
In response to the third input, the first video is played, step 508.
Step 510, in the process of playing the first video, receiving a fourth input of the user to the target video object in the video playing interface of the first video.
In response to the fourth input, playing a second video stored in association with the first video, step 512.
At step 514, a fifth input from the user to the first video is received.
In response to the fifth input, the playing of the first video is stopped, and the playing of the second video is stopped, step 516.
Wherein the fifth input includes, but is not limited to, a click input, a slide input, a press input. Specifically, the input mode in the embodiments of the present application is not particularly limited, and may be any realizable mode.
In this embodiment, during the playing of the first video, when the user needs to end the playing of the first video, the user may perform an input operation on the first video and the playing of the first video then stops. In this way, the user can autonomously select the end time of playing the first video, which improves the flexibility of video playing control.
It should be noted that the fifth input of the first video by the user may include an input of a fourth control or an input of a playing interface of the first video. The fourth control is used for controlling the first video to start playing or end playing, and a user can start playing the first video or end playing the first video by clicking the fourth control.
In this embodiment, when the user selects to stop the playing of the first video, the playing of the second video of the target video object may be simultaneously ended, that is, the second video is simultaneously ended at the time when the playing of the first video is ended. The user can stop playing the first video, so that the playing of the second video is stopped at the same time, the user operation steps can be simplified, and the user operation time is saved.
It should be noted that, after the first video and the second video both stop playing, the input operation of the user on the target video object may be received again, and the second video of the target video object continues to be played, so that the video that is paused to be played continues to be played, the flexibility of video playing control is improved, and the viewing requirement of the user on the video can be met.
As a possible implementation, as shown in fig. 6, the video control method includes:
step 602, in the process of recording the first video by the first camera, receiving a first input of a user to a target video object in a video recording interface of the first video.
Step 604, in response to the first input, performing video recording on the target video object or performing video processing on the first video, and outputting a second video of the target video object.
Step 606, a third input from the user is received.
In response to the third input, the first video is played, step 608.
Step 610, in the process of playing the first video, receiving a fourth input of the user to the target video object in the video playing interface of the first video.
In response to the fourth input, a second video stored in association with the first video is played, step 612.
And 614, receiving a sixth input of the target video object from the user.
In response to the sixth input, the playing of the second video is stopped, step 616.
Wherein the sixth input includes, but is not limited to, a click input, a slide input, a press input. Specifically, the input mode in the embodiments of the present application is not particularly limited, and may be any realizable mode.
In this embodiment, in the process of playing the first video, the user may perform an input operation on the target video object on the first video playing interface to implement playing of the second video of the target video object. When the user inputs the target video object again, the playing of the second video of the target video object can be stopped, namely the playing of the independent video of the target video object is finished, so that the flexibility of video playing is improved, and the watching requirement of the user on the video can be better met.
It should be noted that the sixth input of the target video object by the user may include an input of a second control corresponding to the target video object or an input of a playing interface of a second video of the target video object. The second control is used for controlling the playing or ending of the second video, and the user can start playing the second video of the target video object or end playing the second video of the target video object by clicking the second control.
For example, as shown in fig. 18, when the first video plays to the 1 st minute 36 seconds, the user clicks the second control (i.e., the first video playing control identifier 814), and then the playing of the second video of the first target video object 806 is paused.
It is noted that when the playing of the second video of the target video object is stopped, the playing of the second video of the first video and other target video objects is not affected.
It should be noted that, after the second video stops playing, the input operation of the user on the target video object or the second control may be received again, and the second video of the target video object continues to be played, so that the video that is paused to be played continues to be played, the flexibility of video playing control is improved, and the viewing requirement of the user on the video can be met.
As a possible implementation, as shown in fig. 7, the video control method includes:
Step 702, in the process of recording the first video by the first camera, receiving a first input of the user to a target video object in a video recording interface of the first video.
Step 704, in response to the first input, performing video recording on the target video object or performing video processing on the first video, and outputting a second video of the target video object.
Step 706, receiving a third input of the user.
Step 708, in response to the third input, playing the first video.
Step 710, in the process of playing the first video, receiving a fourth input of the user to the target video object in the video playing interface of the first video.
Step 712, in response to the fourth input, playing the second video stored in association with the first video.
Step 714, receiving a fifth input of the user to the first video and, in response to the fifth input, stopping playing the first video and stopping playing the second video; or receiving a sixth input of the user to the target video object and, in response to the sixth input, stopping playing the second video.
Step 716, receiving a seventh input of the user to the target video object.
Step 718, in response to the seventh input, determining a target picture of the second video corresponding to the playing time node of the first video.
Step 720, starting from the target picture, playing the second video of the target video object.
Wherein the seventh input includes, but is not limited to, a click input, a slide input, or a press input. Specifically, the input manner in the embodiments of the present application is not particularly limited, and may be any realizable manner.
In this embodiment, in the process of playing the first video, the user may perform an input operation on the target video object on the playing interface of the first video to play the second video of the target video object. When the user performs an input operation on the target video object again, the playing of the second video of the target video object may be stopped, or the playing of the second video may be stopped together with the playing of the first video. In this way, the flexibility of video playing is improved, and the user's viewing needs are better met.
Further, after the seventh input of the user is received, the second video that was stopped is controlled to continue playing, and it continues according to the playing time node of the first video: a target picture of the second video corresponding to the playing time node of the first video is determined, and the second video is then played from the target picture. The second video thus remains synchronized with the first video, disordered playing is avoided, and the video playing effect is improved.
For example, if the second video of the target video object is recorded beginning at 56 seconds of the first video and ending at 1 minute 48 seconds of the first video, then during playback the user may start playing the second video at 56 seconds of the first video, pause it at 1 minute 06 seconds of the first video, and click to play it again at 1 minute 36 seconds of the first video. In this case, when playing is clicked again, the segment of the second video between 1 minute 06 seconds and 1 minute 36 seconds is not played; instead, the second video is played directly from the picture corresponding to 1 minute 36 seconds of the first video.
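Since the second video is recorded against the timeline of the first video, the target picture can be located by a simple offset from the recording start time node. The Python sketch below is an assumption for illustration only; the function name and the use of seconds as the unit are not specified in this application:

```python
from typing import Optional

def sub_video_offset(main_time_s: float, sub_start_s: float, sub_end_s: float) -> Optional[float]:
    """Offset (in seconds) into the second video that corresponds to a first-video time node.

    Returns None when the first-video time lies outside the interval in which
    the second video was recorded.
    """
    if main_time_s < sub_start_s or main_time_s > sub_end_s:
        return None
    return main_time_s - sub_start_s

# Example from the paragraph above: the second video spans 0:56 to 1:48 of the first video.
# Resuming when the first video reaches 1:36 seeks the second video to 40 s,
# skipping the segment that corresponds to 1:06-1:36.
print(sub_video_offset(96.0, 56.0, 108.0))  # 40.0
```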
In any of the above embodiments, displaying a second control in a second target area associated with the target video object includes: displaying the second control in the second target area associated with the target video object in a case that the first video is played to a first moment, wherein the first moment is the time point at which video recording of the target video object is started or video processing of the first video is started.
In this embodiment, when the first video is played to the recording start time point of the second video of the target video object or the capturing start time point of the second video, a second control is displayed in the second target area associated with the target video object in the playing interface of the first video, and when the user clicks the second control, the second video of the target video object is played. The second control indicates the second video of the target video object and prompts the user that the target video object has an independent video that can be played, which facilitates the user's operation.
In this embodiment, in the process of playing the first video, the method further includes: hiding the second control or keeping displaying the second control in a case that the first video is played to a second moment, wherein the second moment is the time point at which video recording of the target video object is stopped or video processing of the first video is stopped.
In this embodiment, when the first video is played to the recording end time point of the second video of the target video object or the capturing end time point of the second video, the second control disappears from the second target area associated with the target video object in the playing interface of the first video, indicating that the second video of the target video object has ended. The second control can thus prompt the user about the state of the second video (playable, pausable, or ended), which facilitates the user's operation. Alternatively, the second control may remain displayed, and the user can replay the second video of the target video object by selecting the second control again, which improves the flexibility of video playing and makes it convenient for the user to review the video.
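The visibility of the second control therefore depends only on the current playing time node of the first video and the first and second moments of the associated second video. A minimal sketch follows, assuming seconds as the time unit and a keep_after_end flag for the two variants described above; both names are illustrative and not terms of this application:

```python
def second_control_state(play_time_s: float, first_moment_s: float,
                         second_moment_s: float, keep_after_end: bool = True) -> str:
    """Decide how the second control is shown for one target video object."""
    if play_time_s < first_moment_s:
        return "hidden"        # the second video has not started yet
    if play_time_s <= second_moment_s:
        return "visible"       # the second video can be played or paused
    # After the second moment, either keep the control for replay or hide it.
    return "visible" if keep_after_end else "hidden"
```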
The video control method of the embodiment of the application can comprise a video recording method and a video playing method. The video recording method according to the embodiment of the present application is further described below with reference to fig. 8 to 12, and the video playing method according to the embodiment of the present application is further described below with reference to fig. 13 to 19.
As shown in fig. 8, the user clicks a main video recording control identifier 802 (i.e., a third control) displayed on the screen of the electronic device, and starts recording a main video (i.e., a first video) by using a main camera (i.e., a first camera). As shown in fig. 9, after the main video starts recording, the display form of the main video recording control identifier 802 is updated to indicate to the user that the main video is being recorded, and the recording time (e.g., 00:00:32, i.e., 32 seconds) is displayed on the main video recording interface 804.
At any time during the recording of the main video, the user can click a target video object on the recording interface of the main video; the target video object is the detail the user is interested in. After the target video object is selected, an appropriate secondary camera (namely, a second camera; if that camera is already open, it is used directly), such as a telephoto lens or a macro lens, is selected and opened according to the distance between the target video object and the main camera. While the main video is being recorded, the secondary camera starts recording a secondary video of the target video object. If the user clicks the target video object again, the secondary video recording may stop. For example, as shown in fig. 10, at 56 seconds of the main video recording, the user clicks a first target video object 806 on the main video recording interface 804, and a secondary camera (e.g., a macro lens) is opened to start recording a secondary video of the first target video object 806. As shown in fig. 11, after the recording of the secondary video of the first target video object 806 is started, a first secondary video recording control identifier 808 (i.e., a first control corresponding to the first target video object 806) is displayed on the first target video object 806. If the user clicks the first secondary video recording control identifier 808, the recording of the secondary video of the first target video object 806 is stopped, so that the user can control the recording of the first secondary video through the identifier and can autonomously choose when the recording of the target video object ends.
As shown in fig. 11, at 1 minute 22 seconds of the main video recording, the user clicks the second target video object 810 on the main video recording interface 804, and a secondary camera (e.g., a telephoto lens) is opened to start recording the secondary video of the second target video object 810. After the secondary video recording of the second target video object 810 is started, as shown in fig. 12, a second secondary video recording control identifier 812 (i.e., a first control corresponding to the second target video object 810) is displayed on the second target video object 810. If the user clicks the second secondary video recording control identifier 812, the recording of the secondary video of the second target video object 810 is stopped, so that the user can conveniently control the recording of the second secondary video through the identifier and can autonomously choose when the recording of the target video object ends.
As shown in fig. 12, after the user clicks the first secondary video recording control identifier 808 of the first target video object 806 at 1 minute 48 seconds of the main video recording, the recording of the secondary video of the first target video object 806 is stopped, and the display form of the first secondary video recording control identifier 808 is updated to prompt the user that the recording of the first secondary video has ended.
It should be noted that one secondary camera can be used to record a plurality of secondary videos at the same time, that the size of a secondary video is determined according to the target video object currently being recorded, and that the secondary video is stored as an attachment of the primary video. If the user does not actively stop recording a secondary video, that secondary video ends at the moment the recording of the main video ends.
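One way to picture this association is a small record per secondary video that stores its start and stop time nodes on the main-video timeline, its frame size, and its link to the main video. The following Python data-structure sketch is an assumption for illustration only; none of the field names come from this application:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SecondaryVideo:
    target_object_id: int
    start_in_main_s: float            # main-video time node when recording started
    stop_in_main_s: Optional[float]   # None while the secondary video is still recording
    frame_size: Tuple[int, int]       # derived from the target object's display size

@dataclass
class MainVideo:
    path: str
    duration_s: float
    attachments: List[SecondaryVideo] = field(default_factory=list)

    def finish_recording(self) -> None:
        # Any secondary video the user did not stop manually ends with the main video.
        for sub in self.attachments:
            if sub.stop_in_main_s is None:
                sub.stop_in_main_s = self.duration_s
```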
The user opens the recorded video and clicks play, and the main video (namely, the first video) is played. As shown in fig. 13, after the main video starts playing, the playing time (for example, 00:00:32, that is, 32 seconds) is displayed.
As shown in fig. 14a, when the main video is played to 56 seconds, the sub-video (i.e., the second video) of the first target video object 806 recorded by the secondary camera becomes available to play (the sub-video of the first target video object 806 started recording at 56 seconds of the main video recording), and a first video playing control identifier 814 (i.e., a second control corresponding to the first target video object 806) is added to the first target video object 806 to remind the user that the first sub-video of the first target video object is playable; the first video playing control identifier 814 may be displayed in a transparent manner. After the user clicks the first video playing control identifier 814, the sub-video of the first target video object 806 starts playing. As shown in fig. 15, the sub-video of the first target video object 806 is played in a floating window, and during its playing the display form of the first video playing control identifier 814 is updated to prompt the user that the first sub-video has started playing. If the user clicks the first video playing control identifier 814 again, the playing of the sub-video of the first target video object 806 may be paused.
As shown in fig. 16, when the main video is played to 1 minute 22 seconds, the sub-video (i.e., the second video) of the second target video object 810 recorded by the secondary camera becomes available to play (the sub-video of the second target video object 810 started recording at 1 minute 22 seconds of the main video recording), and a second video playing control identifier 816 (i.e., a second control corresponding to the second target video object 810) is added to the second target video object 810 to remind the user that the second sub-video of the second target video object 810 is playable; the second video playing control identifier 816 may be displayed in a transparent manner. At the same time, the first video playing control identifier 814 is still displayed on the first target video object 806.
When the sub-video of the first target video object and the sub-video of the second target video object have both started playing, as shown in fig. 17, the first video playing control identifier 814 is displayed on the first target video object 806, and the display form of the second video playing control identifier 816 on the second target video object 810 is updated to prompt the user that the second sub-video has started playing.
When the main video is played to 1 minute 36 seconds and the user clicks the first video playing control identifier 814, as shown in fig. 18, the playing of the sub-video of the first target video object 806 is paused, and the display form of the first video playing control identifier 814 is updated to prompt the user that the playing of the first sub-video has been paused. At this time, the second video playing control identifier 816 is still displayed on the second target video object 810, indicating that the sub-video of the second target video object 810 is still playing. It should be noted that, before the main video is played to 1 minute 48 seconds, the user may click the first video playing control identifier 814 again so that the sub-video of the first target video object 806 resumes from the picture corresponding to the current playing time of the main video; recording of the sub-video of the first target video object 806 was stopped at 1 minute 48 seconds of the main video recording.
As shown in fig. 19, the sub-video of the first target video object 806 is not recorded after 1 minute 48 seconds of the main video recording, so the first video playing control identifier of the first target video object 806 disappears when the main video is played to 1 minute 48 seconds. At this time, the sub-video of the second target video object 810 is still playing, so the second video playing control identifier 816 is still displayed on the second target video object 810.
It should be noted that, in the video control method provided in the embodiments of the present application, the execution body may be a video control apparatus, or a control module in the video control apparatus for executing the video control method. In the embodiments of the present application, a video control apparatus executing the video control method is taken as an example to describe the video control apparatus provided in the embodiments of the present application.
Fig. 20 shows a schematic diagram of a possible structure of the video control apparatus according to the embodiment of the present application. As shown in fig. 20, the video control apparatus 2000 includes:
the receiving module 2002 is configured to receive, during a process that the first camera records the first video, a first input of a user to a target video object in a video recording interface of the first video;
a processing module 2004 for performing a target process in response to the first input, outputting a second video of the target video object; the target processing comprises video recording of the target video object or video processing of the first video.
Further, the receiving module 2002 is further configured to receive, during the performing of the target processing, a second input of the user to the target video object; the processing module 2004 is further configured to stop recording the video of the target video object or stop video processing of the first video in response to a second input, and output a second video.
Further, as shown in fig. 21, the video control apparatus 2000 further includes: a display module 2006, configured to display a first control, where the first control is used to control video recording of a target video object or control video processing of a first video; the receiving module 2002 is specifically configured to: user input to the first control is received.
Further, the processing module 2004 is further configured to stop recording the video of the target video object or stop video processing on the first video and output a second video in a case that recording of the first video is stopped.
Further, the processing module 2004 is specifically configured to perform video capture on the target video object in the video recording interface of the first video, and output a second video.
Further, N frames of first video images of the first video include the target video object, N being a positive integer; the processing module 2004 is specifically configured to: extract N second video images of a first target area from the N frames of first video images, perform video synthesis on the N second video images, and output the second video; the first target area is the area where the target video object is located in each frame of the first video image.
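As a rough illustration of this cropping-and-synthesis configuration, the OpenCV sketch below cuts a rectangle (standing in for the first target area) out of every frame of the first video and writes the crops as a second video. It is only a sketch under simplifying assumptions: the rectangle is fixed rather than tracked per frame, and the function name, codec, and file paths are illustrative, not part of this application:

```python
import cv2

def crop_to_second_video(src_path: str, dst_path: str,
                         x: int, y: int, w: int, h: int) -> None:
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Second video image: the first target area cut out of this first video image.
        writer.write(frame[y:y + h, x:x + w])
    cap.release()
    writer.release()
```

In the embodiments above, the first target area would instead follow the target video object from frame to frame; the fixed rectangle here is the simplification made for brevity.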
Further, the processing module 2004 is specifically configured to perform video recording on the target video object through a second camera, and output a second video, where the first camera and the second camera are different cameras.
Further, the processing module 2004 is further configured to obtain distance information of the target video object; and determining a second camera matched with the distance information of the target video object.
Further, the distance information of the target video object includes a first distance between a first position of the target video object and a second position of the lens center of the first camera, and the processing module 2004 is specifically configured to obtain a first distance range corresponding to the first distance, and determine a second camera matched with the first distance range; wherein different distance ranges correspond to different cameras.
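A minimal sketch of this matching step follows, assuming three hypothetical distance ranges and camera names; the actual ranges and cameras are not specified in this application:

```python
def select_second_camera(first_distance_m: float) -> str:
    """Map the first distance (target object to the first camera's lens center) to a camera."""
    distance_ranges = [
        (0.0, 0.1, "macro lens"),
        (0.1, 3.0, "wide-angle lens"),
        (3.0, float("inf"), "telephoto lens"),
    ]
    for low, high, camera in distance_ranges:
        if low <= first_distance_m < high:
            return camera
    return "wide-angle lens"  # fallback for out-of-range values (e.g., negative input)
```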
Further, the processing module 2004 is further configured to obtain a display size of the target video object in the video recording interface in real time; and updating the video size of the second video according to the display size of the target video object in the video recording interface acquired in real time.
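The real-time size update can be pictured as clamping and aligning the tracked display size of the target video object to a frame size an encoder accepts. The helper below, including its minimum-size and alignment values, is an illustrative assumption only:

```python
from typing import Tuple

def updated_second_video_size(display_w: int, display_h: int,
                              min_side: int = 240, align: int = 16) -> Tuple[int, int]:
    """Derive the second video's frame size from the object's current display size."""
    w = max(min_side, (display_w // align) * align)
    h = max(min_side, (display_h // align) * align)
    return w, h
```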
Further, as shown in fig. 21, the video control apparatus 2000 further includes: the storage module 2008 is configured to store the second video in association with the first video.
Further, the receiving module 2002 is further configured to receive a third input from the user; the video control apparatus 2000 further includes: a playing module 2010, configured to play the first video in response to a third input; the receiving module 2002 is further configured to receive, during the playing process of the first video, a fourth input of the user to a target video object in the video playing interface of the first video; the playing module 2010 is further configured to play the second video stored in association with the first video in response to a fourth input.
Further, the display module 2006 is further configured to display a second control in a second target area associated with the target video object, where the second control is used to control video playing of the target video object; the receiving module 2002 is specifically configured to receive an input of the user to the second control; the second target area is the area where the target video object in the first video is located.
Further, the receiving module 2002 is further configured to receive a fifth input of the first video from the user; the playing module 2010, further configured to stop playing the first video and stop playing the second video in response to a fifth input.
Further, the receiving module 2002 is further configured to receive a sixth input of the target video object from the user; the playing module 2010, further configured to stop playing the second video in response to a sixth input.
Further, the receiving module 2002 is further configured to receive a seventh input of the target video object from the user; the playing module 2010 is further configured to determine, in response to a seventh input, a target picture of the second video corresponding to the playing time node of the first video; and starting from the target picture, playing a second video of the target video object.
Further, the display module 2006 is specifically configured to display a second control in a second target area associated with the target video object when the first video is played to a first time, where the first time is a time point when video recording is started for the target video object or video processing is started for the first video.
Further, the display module 2006 is further configured to hide the second control or keep displaying the second control when the first video is played to a second time, where the second time is a time point when the video recording of the target video object is stopped or the video processing of the first video is stopped.
It should be noted that the video control apparatus 2000 is capable of implementing each process of the video control method provided in the embodiment of the present application, and achieving the same technical effect, and for avoiding repetition, the details are not repeated here.
The video control apparatus in the embodiments of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited in this regard.
The video control apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
An embodiment of the present application provides an electronic device. As shown in fig. 22, the electronic device 2200 includes:
a processor 2202, a memory 2204, and a program or instructions stored on the memory 2204 and executable on the processor 2202, wherein the program or instructions, when executed by the processor 2202, implement the respective processes of the video control method described above and achieve the same technical effects.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 23 is a schematic hardware structure diagram of an electronic device implementing the embodiment of the present application. The electronic device 2300 includes, but is not limited to: a radio frequency unit 2302, a network module 2304, an audio output unit 2306, an input unit 2308, a sensor 2310, a display unit 2312, a user input unit 2314, an interface unit 2316, a memory 2318, a processor 2320, and the like.
Those skilled in the art will appreciate that the electronic device 2300 may further comprise a power supply (e.g., a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 2320 via a power management system, so as to manage charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 23 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently; details are not repeated here.
In this embodiment of the present application, the electronic device 2300 is capable of performing video recording, wherein the user input unit 2314 is configured to receive a first input of a user to a target video object in a video recording interface of a first video in a process of recording the first video by a first camera; a processor 2320 for performing a target process in response to the first input, outputting a second video of the target video object; the target processing comprises video recording of the target video object or video processing of the first video.
Further, the user input unit 2314 is further configured to receive a second input of the target video object by the user; the processor 2320 is further configured to stop video recording of the target video object or stop video processing of the first video in response to a second input, and output a second video.
Further, the display unit 2312 is configured to display a first control, where the first control is used to control video recording of the target video object or control video processing on the first video; the user input unit 2314 is specifically configured to: an input to the first control by a user is received.
Further, the processor 2320 is further configured to stop recording the video of the target video object or stop video processing on the first video and output a second video in the case that recording of the first video is stopped.
Further, the processor 2320 is specifically configured to perform video capture on the target video object in the video recording interface of the first video, and output a second video.
Further, N frames of first video images of the first video include the target video object, N being a positive integer; the processor 2320 is specifically configured to: extract N second video images of the first target area from the N frames of first video images, perform video synthesis on the N second video images, and output the second video; the first target area is the area where the target video object is located in each frame of the first video image.
Further, the processor 2320 is specifically configured to control a second camera, perform video recording on the target video object, and output a second video, where the first camera and the second camera are different cameras.
Further, the processor 2320 is further configured to obtain distance information of the target video object; and determining a second camera matched with the distance information of the target video object.
Further, the distance information of the target video object includes a first distance between a first position of the target video object and a second position of the lens center of the first camera, and the processor 2320 is specifically configured to obtain a first distance range corresponding to the first distance, and determine a second camera matched with the first distance range; wherein different distance ranges correspond to different cameras.
Further, the processor 2320 is further configured to obtain a display size of the target video object in the video recording interface in real time; and updating the video size of the second video according to the display size of the target video object in the video recording interface acquired in real time.
Further, a memory 2318 is used for storing the second video in association with the first video.
Further, the user input unit 2314 is further configured to receive a third input of the user; the display unit 2312 is configured to play the first video in response to the third input; the user input unit 2314 is further configured to receive, during the playing of the first video, a fourth input of the user to the target video object in the video playing interface of the first video; the display unit 2312 is further configured to play the second video stored in association with the first video in response to the fourth input.
Further, the display unit 2312 is further configured to display a second control in a second target area associated with the target video object, where the second control is used to control video playing of the target video object; a user input unit 2314, which is specifically used for receiving the input of the user to the second control; the second target area is an area where a target video object in the first video is located.
Further, the user input unit 2314 is further configured to receive a fifth input of the first video from the user; the display unit 2312 is further configured to stop playing the first video and stop playing the second video in response to a fifth input.
Further, the user input unit 2314 is further configured to receive a sixth input of the target video object by the user; the display unit 2312 is further configured to stop playing the second video of the target video object in response to a sixth input.
Further, the user input unit 2314 is further configured to receive a seventh input of the target video object by the user; a display unit 2312, further configured to determine, in response to a seventh input, a target screen of the second video corresponding to the play time node of the first video; and starting from the target picture, playing a second video of the target video object.
Further, the display unit 2312 is further configured to display a second control when the first video is played to a first time, where the first time is a time point when video recording is started for the target video object or video processing is started for the first video.
Further, the display unit 2312 is further configured to hide the second control or keep displaying the second control when the first video is played to a second time, where the second time is a time point when the video recording of the target video object is stopped or the video processing of the first video is stopped.
It should be understood that, in the embodiment of the present application, the radio frequency unit 2302 may be used for transceiving information or transceiving signals during a call, and in particular, receive downlink data of a base station or transmit uplink data to the base station. Radio frequency unit 2302 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The network module 2304 provides wireless broadband internet access to the user, such as assisting the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 2306 may convert audio data received by the radio frequency unit 2302 or the network module 2304 or stored in the memory 2318 into an audio signal and output as sound. Also, the audio output unit 2306 may also provide audio output related to a specific function performed by the electronic device 2300 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 2306 includes a speaker, a buzzer, a receiver, and the like.
The input unit 2308 is used to receive an audio or video signal. The input unit 2308 may include a graphics processing unit (GPU) 23082 and a microphone 23084. The graphics processor 23082 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 2312, stored in the memory 2318 (or another storage medium), or transmitted via the radio frequency unit 2302 or the network module 2304. The microphone 23084 may receive sound and process it into audio data, and in a phone call mode the audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 2302.
The electronic device 2300 also includes at least one sensor 2310, such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, a light sensor, a motion sensor, and others.
The display unit 2312 is used to display information input by a user or information provided to the user. The display unit 2312 may include a display panel 23122, and the display panel 23122 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
The user input unit 2314 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 2314 includes a touch panel 23142 and other input devices 23144. The touch panel 23142, also referred to as a touch screen, may collect touch operations by a user on or near it. The touch panel 23142 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 2320, and receives and executes commands sent by the processor 2320. Other input devices 23144 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 23142 can be overlaid on the display panel 23122, and when the touch panel 23142 detects a touch operation on or near the touch panel 23142, the touch operation can be transmitted to the processor 2320 to determine the type of the touch event, and then the processor 2320 can provide a corresponding visual output on the display panel 23122 according to the type of the touch event. The touch panel 23142 and the display panel 23122 may be provided as two separate components or may be integrated into one component.
The interface unit 2316 is an interface for connecting an external device to the electronic device 2300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 2316 may be used to receive input (e.g., data information or power) from an external device and transmit the received input to one or more elements within the electronic device 2300, or may be used to transmit data between the electronic device 2300 and an external device.
The memory 2318 may be used to store software programs as well as various data. The memory 2318 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data or a phonebook) created according to the use of the mobile terminal. Further, the memory 2318 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 2320 performs various functions of the electronic device 2300 and processes data by running or executing software programs and/or modules stored in the memory 2318 and calling data stored in the memory 2318, thereby monitoring the electronic device 2300 as a whole. Processor 2320 may include one or more processing units; preferably, the processor 2320 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications.
The electronic device may further include a power supply for supplying power to the various components, and the power supply may be logically connected to the processor through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system.
The embodiment of the present application provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the process of the above video control method is implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. A readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, etc.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the video control method, and can achieve the same technical effect.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (17)

1. A video control method, comprising:
receiving a first input of a user to a target video object in a video recording interface of a first video in the process of recording the first video by a first camera;
in response to the first input, performing target processing, outputting a second video of the target video object;
wherein the target processing comprises video recording of the target video object or video processing of the first video;
in the process of executing the target process, the method further includes:
acquiring the display size of the target video object in the video recording interface in real time;
updating the video size of the second video according to the display size of the target video object in the video recording interface acquired in real time;
after said performing the target processing in response to the first input, outputting a second video of the target video object, the method further comprises:
storing the second video in association with the first video;
after the executing the target processing in response to the first input and outputting the second video of the target video object, the method further comprises: receiving a third input of the user; in response to the third input, playing the first video; receiving a fourth input of a user to the target video object in the video playing interface of the first video in the playing process of the first video; in response to the fourth input, playing the second video stored in association with the first video;
in the process of playing the first video, the method further comprises: displaying a second control in a second target area associated with the target video object, wherein the second control is used for controlling video playing of the target video object; the receiving a fourth input of the target video object in the video playing interface of the first video by the user includes: receiving input of a user to the second control; wherein the second target area is an area where the target video object in the first video is located.
2. The video control method according to claim 1, wherein in executing the target process, the method further comprises:
receiving a second input of the target video object by the user;
and in response to the second input, stopping video recording of the target video object or stopping video processing of the first video, and outputting the second video.
3. The video control method according to claim 2, wherein in executing the target process, the method further comprises:
displaying a first control for controlling video recording of the target video object or controlling video processing of the first video;
the receiving of the second input of the target video object by the user comprises:
and receiving input of a user to the first control.
4. The video control method according to claim 1, wherein in executing the target process, the method further comprises:
and under the condition that the first video stops recording, stopping recording the video of the target video object or stopping video processing of the first video, and outputting the second video.
5. The video control method according to any one of claims 1 to 4, wherein the video processing the first video and outputting the second video comprises:
and performing video interception on the target video object in the video recording interface of the first video, and outputting the second video.
6. The video control method according to claim 5, wherein N frames of first video images of the first video comprise the target video object, N being a positive integer;
the video capture of the target video object in the video recording interface of the first video and the output of the second video include:
extracting N second video images of a first target area from the N frames of first video images;
performing video synthesis on the N second video images, and outputting the second video;
wherein the first target area is an area where the target video object is located in each frame of the first video image.
7. The video control method according to any one of claims 1 to 4, wherein the video recording the target video object and outputting the second video comprises:
recording the video of the target video object through a second camera, and outputting the second video;
the first camera and the second camera are different cameras.
8. The video control method according to claim 7, wherein before the video recording of the target video object by the second camera and the outputting of the second video, the method further comprises:
obtaining distance information of the target video object;
and determining a second camera matched with the distance information of the target video object.
9. The video control method according to claim 8, wherein the distance information of the target video object includes a first distance between a first position of the target video object and a second position of a lens center of the first camera;
the determining the second camera matched with the distance information of the target video object includes:
acquiring a first distance range corresponding to the first distance;
determining a second camera matched with the first distance range;
wherein different distance ranges correspond to different cameras.
10. The video control method according to claim 1, wherein after said playing the second video stored in association with the first video in response to the fourth input, the method further comprises:
receiving a fifth input of the first video by the user;
in response to the fifth input, stopping playing the first video and stopping playing the second video.
11. The video control method according to claim 1, wherein after said playing the second video stored in association with the first video in response to the fourth input, the method further comprises:
receiving a sixth input of the target video object by the user;
in response to the sixth input, stopping playing the second video.
12. The video control method according to claim 10 or 11, wherein after said stopping of playing the second video, the method further comprises:
receiving a seventh input of the target video object by the user;
determining a target picture of the second video corresponding to a play time node of the first video in response to the seventh input;
and starting from the target picture, playing a second video of the target video object.
13. The video control method according to claim 1, wherein said displaying a second control in a second target area associated with the target video object comprises:
under the condition that the first video is played to the first moment, displaying the second control in a second target area associated with the target video object;
and the first moment is a time point when video recording is started for the target video object or video processing is started for the first video.
14. The video control method according to claim 13, wherein during the playing of the first video, the method further comprises:
hiding the second control or keeping displaying the second control when the first video is played to a second moment;
and the second moment is a time point when the video recording of the target video object is stopped or the video processing of the first video is stopped.
15. A video control apparatus, comprising:
the receiving module is used for receiving first input of a user to a target video object in a first video recording interface of a first video in the process of recording the first video by a first camera;
a processing module for performing a target process in response to the first input, outputting a second video of the target video object;
the processing module is further used for acquiring the display size of the target video object in the video recording interface in real time; updating the video size of the second video according to the display size of the target video object in the video recording interface acquired in real time;
the receiving module is further used for receiving a third input of the user;
the video control apparatus further includes:
the storage module is used for storing the second video and the first video in a correlation mode;
a play module to play the first video in response to the third input; the receiving module is further configured to receive, during the playing process of the first video, a fourth input of the user to the target video object in the video playing interface of the first video; the playing module is further configured to play the second video stored in association with the first video in response to the fourth input;
the display module is further used for displaying a second control in a second target area associated with the target video object, wherein the second control is used for controlling video playing of the target video object; the receiving module is specifically configured to receive an input of the second control from a user; wherein the second target area is an area where the target video object in the first video is located.
16. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the video control method according to any one of claims 1 to 14.
17. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video control method according to any one of claims 1 to 14.
CN202011133560.6A 2020-10-21 2020-10-21 Video control method, video control device, electronic device and readable storage medium Active CN112261218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011133560.6A CN112261218B (en) 2020-10-21 2020-10-21 Video control method, video control device, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011133560.6A CN112261218B (en) 2020-10-21 2020-10-21 Video control method, video control device, electronic device and readable storage medium

Publications (2)

Publication Number Publication Date
CN112261218A CN112261218A (en) 2021-01-22
CN112261218B true CN112261218B (en) 2022-09-20

Family

ID=74263250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011133560.6A Active CN112261218B (en) 2020-10-21 2020-10-21 Video control method, video control device, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN112261218B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113067983B (en) * 2021-03-29 2022-11-15 维沃移动通信(杭州)有限公司 Video processing method and device, electronic equipment and storage medium
CN113794832B (en) * 2021-08-13 2023-07-11 维沃移动通信(杭州)有限公司 Video recording method, device, electronic equipment and storage medium
CN116112781B (en) * 2022-05-25 2023-12-01 荣耀终端有限公司 Video recording method, device and storage medium
CN116074620B (en) * 2022-05-27 2023-11-07 荣耀终端有限公司 Shooting method and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10609332B1 (en) * 2018-12-21 2020-03-31 Microsoft Technology Licensing, Llc Video conferencing supporting a composite video stream
CN110099211B (en) * 2019-04-22 2021-07-16 联想(北京)有限公司 Video shooting method and electronic equipment
CN110445966B (en) * 2019-08-09 2021-09-21 润博全景文旅科技有限公司 Panoramic camera video shooting method and device, electronic equipment and storage medium
CN110557566B (en) * 2019-08-30 2021-12-17 维沃移动通信有限公司 Video shooting method and electronic equipment
CN111083354A (en) * 2019-11-27 2020-04-28 维沃移动通信有限公司 Video recording method and electronic equipment
CN111147779B (en) * 2019-12-31 2022-07-29 维沃移动通信有限公司 Video production method, electronic device, and medium
CN111405199B (en) * 2020-03-27 2022-11-01 维沃移动通信(杭州)有限公司 Image shooting method and electronic equipment

Also Published As

Publication number Publication date
CN112261218A (en) 2021-01-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant