CN116112780A - Video recording method and related device - Google Patents

Video recording method and related device

Info

Publication number
CN116112780A
CN116112780A (application CN202210576792.1A)
Authority
CN
China
Prior art keywords
window
recording
tracking
prompt
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210576792.1A
Other languages
Chinese (zh)
Other versions
CN116112780B (en)
Inventor
易婕
黄雨菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210576792.1A priority Critical patent/CN116112780B/en
Publication of CN116112780A publication Critical patent/CN116112780A/en
Application granted granted Critical
Publication of CN116112780B publication Critical patent/CN116112780B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The embodiment of the application provides a video recording method and a related device. The method comprises the following steps: displaying a first interface, where the first interface comprises a tracking prompt for prompting the user to set a tracking target and a first window displaying a first picture acquired by a camera; displaying a first tracking identifier when a first object is detected; in response to a triggering operation on the first tracking identifier, displaying a second interface, where the second interface comprises the first window and a second window and does not comprise the tracking prompt; the second window displays a second picture, and the second picture is a part of the first picture related to the first object; the second picture includes the first object both when the first object is detected at a first position of the first picture at a third moment and when the first object is detected at a second position of the first picture at a fourth moment later than the third moment. In this way, the user can be prompted to set a tracking target; after the tracking target is set, the picture corresponding to the tracking target can be additionally displayed, so that a focus-tracking video is additionally obtained while recording, and subsequent editing processing is reduced.

Description

Video recording method and related device
Technical Field
The application relates to the technical field of terminals, in particular to a video recording method and a related device.
Background
In order to improve user experience, terminal devices such as mobile phones and tablet computers are generally configured with a plurality of cameras. For example, a front camera and a rear camera are respectively arranged on the terminal device. The user can select a corresponding shooting mode according to his own needs, for example, a front-camera mode, a rear-camera mode, a front-and-rear dual-camera mode, and the like.
In some scenarios, the video captured by the user via the terminal device includes one or more persons. When the user wants to obtain a video of a single person, the video must be edited manually.
However, manual editing is cumbersome and inefficient, resulting in a poor user experience on the terminal device.
Disclosure of Invention
The embodiment of the application provides a video recording method and a related device, which are applied to the technical field of terminals. The method can track a selected person while recording a video and additionally generate an extra video stream, without requiring manual editing by the user, thereby reducing user operations and improving user experience.
In a first aspect, an embodiment of the present application proposes a video recording method, applied to a terminal device including a first camera, where the method includes: the terminal device displays a first interface of the camera application, where the first interface includes a first window and a tracking prompt; the first window displays a first picture acquired by the first camera, and the tracking prompt is used for prompting the user to set a tracking target; at a first moment, when the terminal device detects that the first picture includes a first object, a first tracking identifier corresponding to the first object is displayed on the first picture; at a second moment, in response to a triggering operation of the user on the first tracking identifier, the terminal device displays a second interface of the camera application, where the second interface includes the first window and a second window and does not include the tracking prompt; the second window displays a second picture, and the second picture is a part of the first picture related to the first object; at a third moment, when the terminal device detects that the first object is displayed at a first position of the first picture, the second picture includes the first object; at a fourth moment, when the terminal device detects that the first object is displayed at a second position of the first picture, the second picture includes the first object; wherein the second moment is later than the first moment, the third moment is later than the second moment, and the fourth moment is later than the third moment.
The first window may be a preview area or a recording area hereinafter. The first interface may be an interface before the tracking target is set, and may correspond to a preview interface without the tracking target set, or may correspond to a recording interface without the tracking target set.
The tracking identifier may be a tracking frame described below, or may take other forms (for example, a thumbnail, a number, or an icon of the object), which is not limited herein. The tracking identifier may be located at the position where the object is displayed, or may be displayed in a row at the edge of the preview area or the recording area, which is not limited herein. The first tracking identifier may be the tracking identifier corresponding to any one of the objects, for example, the tracking frame corresponding to a male character. The interface displayed at the first moment may, for example, correspond to the interface shown in fig. 10, or may correspond to the interface shown in b in fig. 12.
The triggering operation of the first tracking identifier may be a clicking operation or a touching operation, and the embodiment of the present application does not limit the type of the triggering operation. The location, content, etc. of the tracking hint may be referred to in the following description of the tracking hint, which is not described herein.
It will be appreciated that, in response to the user's triggering operation on the first tracking identifier, the tracking target is set and an interface with a small window is displayed. The part of the first picture associated with the first object may be understood as the tracking picture hereinafter. Before the tracking target is set, the terminal device does not display the small window; after the tracking target is set, the terminal device displays the small window. The second interface may be an interface with the tracking target set, and may correspond to a preview interface with the tracking target set or a recording interface with the tracking target set. The second interface may also be understood as an interface displaying the small window, and may correspond to a preview interface displaying the small window or a recording interface displaying the small window. The interface displayed at the second moment may, for example, correspond to the interface shown in a in fig. 11, or may correspond to the interface shown in b in fig. 12.
In the embodiment of the present application, the picture (tracking picture) displayed by the second window changes with the position of the focus-tracking target. Specifically, the picture displayed in the second window changes as the position of the tracked object changes; see fig. 3B or fig. 3C. For example, the interface at the third moment may be the interface shown by a in fig. 3B, and the interface at the fourth moment may be the interface shown by b in fig. 3B; alternatively, the interface at the third moment may be the interface shown by a in fig. 3C, and the interface at the fourth moment may be the interface shown by b in fig. 3C.
In summary, the terminal device may display a tracking prompt to prompt the user to set a tracking target. The tracking target can be set based on the tracking identifier, and after the tracking target is set, the terminal device can additionally obtain and display the picture corresponding to the tracking target. In this way, one or more tracking videos corresponding to the tracking targets are additionally obtained while the video is recorded, which reduces subsequent editing operations on the tracking target and improves editing efficiency.
Optionally, the first object is centrally displayed in the second screen.
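To illustrate how the second picture can keep the first object centered, the following sketch (a hypothetical helper for illustration, not the patent's implementation) computes a crop window of fixed size centered on the tracked object and clamped to the frame bounds:

```python
def centered_crop(frame_w, frame_h, obj_cx, obj_cy, crop_w, crop_h):
    """Crop of size crop_w x crop_h that keeps the tracked object's
    center (obj_cx, obj_cy) as close to the crop center as possible,
    clamped so the crop never leaves the full frame.

    Returns (x, y, w, h): the crop's top-left corner and size.
    """
    x = min(max(obj_cx - crop_w // 2, 0), frame_w - crop_w)
    y = min(max(obj_cy - crop_h // 2, 0), frame_h - crop_h)
    return x, y, crop_w, crop_h
```

Recomputing such a crop every frame would keep the first object inside the second picture as it moves between the first and second positions (third and fourth moments).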
Optionally, the second window floats on an upper layer of the first window, and the second window is smaller than the first window.
Optionally, the method further comprises: at a fifth moment, when the terminal equipment detects that the first picture does not comprise the first object, the second picture does not comprise the first object, a first loss prompt is displayed on the first window and/or the second window, and the first loss prompt is used for prompting that the first object is lost; the fifth moment is later than the fourth moment.
Illustratively, the interface displayed at the fifth time may correspond to the interface shown in fig. 11 b, or the interface shown in fig. 11 c; and may also correspond to the interface shown in fig. 14 a. The location, content, etc. of the first missing cue may be referred to in the following description related to the missing cue, which is not described herein.
Therefore, when the first object is lost, the terminal equipment can display a loss prompt to prompt the user that the first object is lost, so that the attention of the user is improved, and the user experience is improved.
It can be appreciated that when the loss prompt is displayed in the small-window area, the small window is less likely to block the loss prompt, and occlusion of the preview picture is reduced. Moreover, the loss prompt can be made clearer and easier for the user to understand: a loss prompt displayed in the small window is more readily understood as the loss of the tracking target in the small window. In addition, displaying the loss prompt text in the small window can draw the user's attention to the small window.
When the loss prompt is displayed in the recording area, the terminal device may display more text to prompt the user about subsequent processing, and may display the text in a larger font size so that it is easier for the user to read.
Optionally, after the fifth moment, the method further includes: at a sixth moment, when the terminal device detects that the first object is included at a third position of the first picture, the second picture includes the first object again, and the second picture is a part of the first picture; the first window and/or the second window does not display the first loss prompt, and the sixth moment is within a first duration from the fifth moment; at a seventh moment, when the terminal device detects that the first object is included at a fourth position of the first picture, the second picture includes the first object, and the second picture is a part of the first picture; wherein the sixth moment is later than the fifth moment, and the seventh moment is later than the sixth moment.
It is understood that the first duration may correspond to a sixth preset duration or a seventh preset duration hereinafter, which are not described in detail here. When the terminal device retrieves the tracking target, it continues focus tracking, or continues recording the tracking video.
Thus, when the terminal equipment retrieves the tracking target, the terminal equipment can continue to track the focus and record the tracking video.
Optionally, the method further comprises: when the first duration is reached from the fifth moment and the terminal equipment still does not detect the first object in the first picture, the terminal equipment cancels the tracking target and displays the first interface.
The first duration may correspond to the sixth preset duration, which is not described herein. The interface corresponding to the fifth moment may be the interface shown in b in fig. 11 or the interface shown in c in fig. 11, and the interface displayed once the first duration has elapsed from the fifth moment may correspond to the interface shown in fig. 10.
In this way, the terminal device can cancel tracking when it is not retrieved for a long time.
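The lost/retrieved/cancelled behavior described above can be sketched as a small state machine. The class and method names below are assumptions for illustration only, not the patent's implementation:

```python
class TrackingSession:
    """Tracks target visibility; cancels tracking once the target has
    been lost for longer than `lost_timeout` (the 'first duration')."""

    def __init__(self, lost_timeout):
        self.lost_timeout = lost_timeout
        self.lost_since = None   # None means the target is currently visible
        self.active = True

    def update(self, target_detected, now):
        """Feed one detection result; return the UI state to show."""
        if not self.active:
            return "inactive"               # tracking was cancelled
        if target_detected:
            self.lost_since = None          # target retrieved: hide loss prompt
            return "tracking"
        if self.lost_since is None:
            self.lost_since = now           # first frame without the target
        if now - self.lost_since >= self.lost_timeout:
            self.active = False             # first duration elapsed: cancel
            return "cancelled"
        return "lost"                       # show the loss prompt
```

Passing timestamps in explicitly keeps the logic deterministic; a real implementation would feed it per-frame detection results from the camera pipeline.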
Optionally, at an eighth moment, the terminal device starts recording of the first window in response to a recording starting operation input by the user; when the eighth moment is earlier than the second moment, the terminal device starts recording the second window at the second moment; when the eighth moment is later than the second moment, the terminal device starts recording the second window at the eighth moment; when the first duration is reached from the fifth moment and the terminal device still does not detect the first object in the first picture, recording in the second window is paused; and a pause prompt is displayed in the first window and/or the second window, where the pause prompt is used to prompt the user that the second window has paused recording, the fifth moment is later than the eighth moment, and the first loss prompt is not displayed while the pause prompt is displayed.
The recording starting operation input by the user may be an operation of clicking a recording control by the user, or may be other operations, which is not limited in the embodiment of the present application. The location, content, etc. of the pause prompt display may be referred to in the following description related to the pause prompt, and will not be described in detail herein.
The interface displayed at the eighth moment may correspond to the interface shown in b in fig. 6, and the interface displayed at the second moment may correspond to the interface shown in c in fig. 6; alternatively, the interface displayed at the second moment may correspond to the interface shown in b in fig. 7, and the interface displayed at the eighth moment may correspond to the interface shown in c in fig. 7.
The interface displayed at the fifth moment may correspond to the interface shown in a in fig. 14; the interface displayed once the first duration has elapsed from the fifth moment may correspond to the interface shown in d in fig. 14.
Therefore, the terminal device can pause the recording of the second window after the first object has been lost for a period of time; multiple processing modes are provided, improving user experience. The terminal device may also display a pause prompt to inform the user that the small window has paused recording in this abnormal scene.
It can be appreciated that when the pause prompt is displayed in the small-window area, the small window is less likely to block the pause prompt, and occlusion of the preview picture is reduced. In addition, the pause prompt can be made clearer and easier for the user to understand: a pause prompt displayed in the small window is more readily understood as the loss of the tracking target in the small window. Displaying the pause prompt text in the small window can also draw the user's attention to the small window.
When the pause prompt is displayed in the recording area, the terminal device may display more text to prompt the user about subsequent processing, and may display the text in a larger font size so that it is easier for the user to read.
Optionally, the first window further displays recording duration information of the first window, the second window further displays recording duration information of the second window, and when the second window pauses recording, the recording duration information of the first window is continuously updated, and the recording duration information of the second window pauses updating.
The recording duration information of the first window may be the recording duration described below; the recording duration information of the second window may be the small-window recording duration described below.
It will be appreciated that after the first object is lost, the small window pauses recording, but the recording area may continue recording video.
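The independent duration counters for the two windows can be sketched as follows (a minimal illustration, assuming monotonic timestamps supplied by the caller; none of these names come from the patent):

```python
class RecordingClock:
    """Accumulates recording time and can be paused/resumed independently,
    so the first window's clock keeps running while the second window's
    clock is paused."""

    def __init__(self):
        self.elapsed = 0.0
        self.started_at = None   # None while paused

    def start(self, now):
        if self.started_at is None:
            self.started_at = now

    def pause(self, now):
        if self.started_at is not None:
            self.elapsed += now - self.started_at
            self.started_at = None

    def duration(self, now):
        if self.started_at is None:
            return self.elapsed
        return self.elapsed + (now - self.started_at)
```

With one clock per window, pausing the small window's clock leaves the first window's recording duration updating normally, matching the behavior described above.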
Optionally, the first picture further includes a second object and a tracking identifier associated with the second object, and the method further includes: at a ninth moment, the terminal device detects a triggering operation on the tracking identifier associated with the second object, and the second window switches to include the second object; at a tenth moment, when the terminal device detects that the second object is included at a fifth position of the first picture, the second window displays the second object, and the first window and/or the second window does not display the first loss prompt or the pause prompt.
The triggering operation of the tracking identifier associated with the second object may be a clicking operation or other operations, which is not limited in the embodiment of the present application.
It can be understood that when the terminal device receives an operation of clicking the tracking frame corresponding to the second object, the object corresponding to the tracking target can be replaced. Illustratively, the terminal device may receive an operation of clicking the tracking frame 1407 at the interface shown in b in fig. 14, and enter the interface shown in c in fig. 14.
Therefore, the terminal device can also switch the tracking target based on the tracking identifier; supporting tracking-target switching improves user experience. Furthermore, the tracking target may be switched even in a loss scenario.
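Switching the tracked object, as described above, also clears any loss or pause indication, since the newly selected object is visible when its tracking identifier is tapped. A minimal sketch (the dict layout is an assumption for illustration):

```python
def switch_target(state, new_target_id):
    """Replace the tracked object and clear loss/pause indications.

    `state` is a dict holding the second window's tracking state:
    the current target id, when the target was lost (or None), and
    whether second-window recording is paused.
    """
    state["target_id"] = new_target_id
    state["lost_since"] = None   # no loss prompt for the new target
    state["paused"] = False      # second-window recording resumes
    return state
```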
Optionally, the method further comprises: the terminal equipment detects a record suspension operation in a first window; in response to the recording suspension operation, the terminal device suspends recording of the first window and the second window, displays an identification of suspending recording and a first picture acquired by the first camera in the first window, and displays a first object and an identification of suspending recording in the second window.
The pause recording operation may be an operation that the user clicks the pause control, or may be other operations, which are not limited herein. The mark for suspending recording may be the following "|" or other marks, and the form of the mark is not limited herein.
For example, the terminal device may receive an operation of clicking the pause control 1601 at the interface shown in a in fig. 16, enter the interface shown in b in fig. 16, pause recording in the recording area, and pause recording in the small window.
In this way, the terminal device also supports pausing a recording.
Optionally, after the terminal device pauses the recording of the first window and the second window, the method further includes: at an eleventh moment, when the terminal device detects that the first picture does not include the first object, the second picture does not include the first object, and the first window and/or the second window displays the first loss prompt.
The interface displayed at the eleventh time may correspond to the interface shown as c in fig. 16.
It will be appreciated that a loss prompt may be displayed when the tracking target is lost after the terminal device pauses recording. In this way, the user can be prompted that the tracking target is lost, which helps the user learn the state of the small window and raises the user's attention to the abnormal scene.
Optionally, the first window includes a first switching control, and/or the second window includes a second switching control, and the method further includes: at a twelfth moment, when the second window is in a horizontal-screen (landscape) display state, in response to a triggering operation on the first switching control or the second switching control, the terminal device switches the second window to a vertical-screen (portrait) display state;
or, at the twelfth moment, when the second window is in the vertical-screen display state, in response to a triggering operation on the first switching control or the second switching control, the terminal device switches the second window to the horizontal-screen display state;
after the terminal device completes the switching of the second window's horizontal/vertical display state, if the second window does not include the first object, the terminal device displays the first tracking identifier in the first window, and the first window and/or the second window displays a second loss prompt, where the second loss prompt is used to prompt the user to reset the tracking target;
and in response to a triggering operation on the first tracking identifier, the terminal device displays the first object in the second window.
The interface displayed at the twelfth moment may correspond to the interface shown in d in fig. 11. The second loss prompt may be the loss prompt 1111 in fig. 11.
It will be appreciated that the tracking target may be lost when the terminal device switches the small-window style. Displaying the loss prompt can prompt the user to reset the tracking target, raising the user's attention to the abnormal scene.
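One way to picture the horizontal/vertical (landscape/portrait) switch of the second window is that the crop's aspect ratio flips, which can leave the previously tracked object outside the new crop. A sketch of the crop-size computation (the 16:9 ratio is an assumption for illustration; the patent does not fix exact ratios):

```python
def widget_crop_size(frame_w, frame_h, portrait):
    """Largest 16:9 (landscape) or 9:16 (portrait) crop that fits
    inside a full frame of frame_w x frame_h pixels."""
    aw, ah = (9, 16) if portrait else (16, 9)
    scale = min(frame_w / aw, frame_h / ah)
    return int(aw * scale), int(ah * scale)
```

On a 1920x1080 frame the portrait crop is much narrower than the landscape one, which illustrates why the object can fall outside the second window after the switch and the tracking target may need to be reset.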
Optionally, at the first moment, the first window further displays a small window style switching prompt; when the terminal equipment detects the triggering operation of the first window, the first window does not display the small window style switching prompt.
The interface displayed at the first time may correspond to the interface shown as a in fig. 10.
Thus, the terminal equipment can prompt the user to set the small window style, and the user experience is improved.
Optionally, before the terminal device displays the first interface, the terminal device displays a third interface, where the third interface includes a first window, where the first window displays a first screen and an entry prompt, and the entry prompt is used to prompt a user to enter the first interface.
The third interface may be an interface in the following video mode; the third interface may correspond to the interface shown in fig. 8 or may correspond to the interface shown in b in fig. 9.
The display position, content, etc. of the entry prompt may be referred to in the following related description, and will not be described herein.
In this way, the terminal device can prompt the entrance of the main angle mode to notify the user of the new function, reducing the chance that the user is unaware of it and improving user experience. After the main angle mode is entered, the terminal device no longer displays the entry prompt.
Optionally, the method further includes: the terminal device receives an operation of cancelling the entry prompt from the user; and in response to the operation of cancelling the entry prompt, the first window does not display the entry prompt.
The operation of cancelling the entry prompt may be an operation of clicking a preview area in the video mode by the user.
Therefore, the terminal equipment can cancel the entry prompt, reduce shielding of the preview area and improve user experience.
Optionally, when the terminal device displays the third interface for the first N times, displaying an entry prompt on the third interface, where N is an integer greater than zero.
The terminal device can limit the number of times the entry prompt is displayed; after it has been displayed N times, the entry prompt is no longer displayed.
Optionally, the terminal device does not display the entry prompt within the first duration after displaying the entry prompt for the Nth time.
Optionally, when the terminal device detects that the first screen displayed on the third interface includes a plurality of objects, the first window displays an entry prompt.
Illustratively, as shown in fig. 9, when no object is identified, the entry prompt is not displayed; when two objects are identified, the entry prompt is displayed.
Optionally, when the terminal device displays the third interface again after the terminal device displays the first interface, the third interface does not include the entry prompt.
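The entry-prompt conditions above (show at most N times, only when multiple objects are detected, and not after the mode has been entered) can be sketched together; the class and field names are assumptions for illustration:

```python
class EntryPromptPolicy:
    """Decides whether to show the main angle mode entry prompt."""

    def __init__(self, max_shows):
        self.max_shows = max_shows   # N: maximum number of times to show
        self.shown = 0               # how many times it has been shown
        self.mode_entered = False    # set once the user enters the mode

    def should_show(self, num_objects):
        """Call when the third interface is displayed with the current
        number of detected objects; returns whether to show the prompt."""
        if self.mode_entered or num_objects < 2:
            return False
        if self.shown >= self.max_shows:
            return False
        self.shown += 1
        return True
```

A persisted counter like `shown` would let the limit survive across camera sessions; that persistence detail is not specified in the text.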
In a second aspect, an embodiment of the present application provides a video recording apparatus, including: a display unit and a processing unit;
in a third aspect, an embodiment of the present application provides a video recording apparatus, which may be an electronic device. The electronic device includes a terminal device, which may also be referred to as a terminal (terminal), a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with wireless transceiving function, a Virtual Reality (VR) terminal device, an augmented reality (augmented reality, AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in unmanned driving (self-driving), a wireless terminal in teleoperation (remote medical surgery), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), or the like.
The recording apparatus comprises a processor for invoking a computer program in memory to perform the method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when executed on a terminal device, cause the terminal device to perform a method as in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product including a computer program which, when run, causes a terminal device to carry out the method as in the first aspect.
In a sixth aspect, embodiments of the present application provide a chip comprising a processor for invoking a computer program in a memory to perform a method as in the first aspect.
It should be understood that, the second aspect to the sixth aspect of the present application correspond to the technical solutions of the first aspect of the present application, and the beneficial effects obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
Fig. 1 is a schematic diagram of a hardware system structure of a terminal device according to an embodiment of the present application;
fig. 2 is a schematic diagram of a software system structure of a terminal device according to an embodiment of the present application;
fig. 3A is a schematic diagram of an application scenario according to an embodiment of the present application;
fig. 3B is a schematic diagram of a main angle mode preview interface according to an embodiment of the present application;
fig. 3C is a schematic diagram of a main angle mode recording interface according to an embodiment of the present application;
fig. 4 is an interface schematic diagram of a terminal device entering the main angle mode according to an embodiment of the present application;
fig. 5 is an interface schematic diagram of a terminal device entering the main angle mode according to an embodiment of the present application;
fig. 6 is an interface schematic diagram corresponding to a main angle mode recording flow according to an embodiment of the present application;
fig. 7 is an interface schematic diagram corresponding to a main angle mode recording flow according to an embodiment of the present application;
fig. 8 is an interface schematic diagram of a main angle mode entry prompt according to an embodiment of the present application;
fig. 9 is an interface schematic diagram of a main angle mode entry prompt according to an embodiment of the present application;
fig. 10 is a schematic diagram of a preview interface corresponding to the main angle mode according to an embodiment of the present application;
fig. 11 is a schematic diagram of a preview interface corresponding to the main angle mode according to an embodiment of the present application;
fig. 12 is a schematic diagram of a main angle mode recording interface according to an embodiment of the present application;
fig. 13 is a schematic diagram of a main angle mode recording interface according to an embodiment of the present application;
fig. 14 is a schematic diagram of a main angle mode recording interface according to an embodiment of the present application;
fig. 15 is a schematic diagram of a main angle mode recording interface according to an embodiment of the present application;
fig. 16 is a schematic diagram of a main angle mode recording interface according to an embodiment of the present application;
fig. 17 is a schematic flow chart of a video recording method according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application.
Detailed Description
In order to clearly describe the technical solutions of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", etc. are used to distinguish identical or similar items having substantially the same function and effect. For example, the first chip and the second chip are merely for distinguishing different chips, and the order of the chips is not limited. It will be appreciated by those skilled in the art that the words "first", "second", and the like do not limit the quantity or order of execution, and that items labeled "first" and "second" are not necessarily different.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" the listed items or a similar expression means any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c; where a, b, and c may each be singular or plural.
The embodiment of the application provides a video recording method which can be applied to electronic equipment with a shooting function. The electronic device includes a terminal device, which may also be referred to as a terminal (terminal), a user equipment (UE), a mobile station (MS), a mobile terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with wireless transceiving function, a virtual reality (virtual reality, VR) terminal device, an augmented reality (augmented reality, AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in self-driving (self-driving), a wireless terminal in remote medical surgery (remote medical surgery), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), or the like. The embodiment of the application does not limit the specific technology and the specific equipment form adopted by the terminal equipment.
In order to better understand the embodiments of the present application, the following describes the structure of the terminal device in the embodiments of the present application:
fig. 1 shows a schematic structure of a terminal device 100. The terminal device may include: radio Frequency (RF) circuitry 110, memory 120, input unit 130, display unit 140, sensor 150, audio circuitry 160, wireless fidelity (wireless fidelity, wiFi) module 170, processor 180, power supply 190, and bluetooth module 1100. It will be appreciated by those skilled in the art that the terminal device structure shown in fig. 1 is not limiting of the terminal device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes the respective constituent elements of the terminal device in detail with reference to fig. 1:
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call. Specifically, after downlink information of the base station is received, it is delivered to the processor 180 for processing; in addition, uplink data is sent to the base station. Typically, RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (low noise amplifier, LNA), a duplexer, and the like. In addition, the RF circuit 110 may also communicate with networks and other devices via wireless communications. The wireless communications may use any communication standard or protocol, including, but not limited to, global system for mobile communications (global system of mobile communication, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), long term evolution (long term evolution, LTE), email, and short message service (short messaging service, SMS), among others.
The memory 120 may be used to store software programs and modules, and the processor 180 performs various functional applications and data processing of the terminal device by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), a boot loader (boot loader), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal device, and the like. In addition, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. It will be appreciated that in the embodiment of the present application, the memory 120 stores a program for reconnecting to a bluetooth device.
The input unit 130 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. In particular, the input unit 130 may include a touch panel 131 and other input devices 132. The touch panel 131, also referred to as a touch screen, may collect touch operations by the user on or near it (e.g., operations by the user on or near the touch panel 131 using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 180; it can also receive commands sent by the processor 180 and execute them. In addition, the touch panel 131 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 130 may include other input devices 132 in addition to the touch panel 131. In particular, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by the user or information provided to the user and various menus of the terminal device. The display unit 140 may include a display panel 141; alternatively, the display panel 141 may be configured in the form of a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), or the like. Further, the touch panel 131 may cover the display panel 141; when the touch panel 131 detects a touch operation on or near it, it transfers the touch operation to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in fig. 1 the touch panel 131 and the display panel 141 implement the input and output functions of the terminal device as two independent components, in some embodiments, the touch panel 131 and the display panel 141 may be integrated to implement the input and output functions of the terminal device.
The terminal device may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 141 and/or the backlight when the terminal device moves to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for recognizing the posture of the terminal device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration-recognition related functions (such as pedometer and tapping), and the like. Other sensors, such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which may also be configured for the terminal device, are not described in detail herein.
The audio circuit 160, the speaker 161, and the microphone 162 may provide an audio interface between the user and the terminal device. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts the collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; the audio data is then output to the processor 180 for processing, and may be transmitted to, for example, another terminal device via the RF circuit 110, or output to the memory 120 for further processing.
WiFi belongs to a short-distance wireless transmission technology, and terminal equipment can help a user to send and receive emails, browse webpages, access streaming media and the like through a WiFi module 170, so that wireless broadband Internet access is provided for the user. Although fig. 1 shows a WiFi module 170, it is understood that it does not belong to the essential constitution of the terminal device, and can be omitted entirely as required within the scope of not changing the essence of the invention.
The processor 180 is a control center of the terminal device, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the terminal device. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
It will be appreciated that in the embodiment of the present application, the memory 120 stores a program of video recording, and the processor 180 may be configured to call and execute the program of video recording stored in the memory 120 to implement the method of video recording in the embodiment of the present application.
The terminal device further includes a power supply 190 (e.g., a battery) for powering the various components, which may be logically connected to the processor 180 via a power management system so as to provide for managing charging, discharging, and power consumption by the power management system.
The bluetooth technology belongs to short-distance wireless transmission technologies, and the terminal device can establish a bluetooth connection with another terminal device having a bluetooth module through the bluetooth module 1100, so as to perform data transmission based on a bluetooth communication link. The bluetooth module 1100 may be a bluetooth low energy (bluetooth low energy, BLE) module or a classic bluetooth module, as desired. It can be understood that, in the embodiment of the present application, in the case that the terminal device is a user terminal or a service tool, the terminal device includes a bluetooth module. It will be understood that the bluetooth module does not belong to the essential constitution of the terminal device, and may be omitted entirely as required within the scope of not changing the essence of the invention; for example, a server may not include the bluetooth module.
Although not shown, the terminal device further includes a camera. Optionally, the position of the camera on the terminal device may be front, rear, or internal (which may extend out of the body when in use), which is not limited in this embodiment of the present application.
Alternatively, the terminal device may include a single camera, a dual camera, or a triple camera, which is not limited in the embodiments of the present application. Cameras include, but are not limited to, wide angle cameras, tele cameras, depth cameras, and the like. For example, the terminal device may include three cameras, one of which is a main camera, one of which is a wide-angle camera, and one of which is a tele camera.
Alternatively, when the terminal device includes a plurality of cameras, the plurality of cameras may be all front-mounted, all rear-mounted, all built-in, at least part of front-mounted, at least part of rear-mounted, at least part of built-in, or the like, which is not limited in the embodiment of the present application.
Fig. 2 is a block diagram illustrating a software configuration of the terminal device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, the Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application packages may include applications such as camera, gallery, phone, map, music, settings, mailbox, video, and social applications.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a resource manager, a view system, a notification manager, and the like.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine whether there is a status bar, lock the screen, respond to screen touches and drags, capture the screen, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the terminal device vibrates, or an indicator light blinks.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of terminal equipment software and hardware is illustrated below in connection with the scenario of terminal equipment interface switching.
When the touch sensor in the terminal device receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as touch coordinates, touch strength, and a timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a click operation on the control corresponding to the camera application icon as an example: the camera application calls an interface of the application framework layer to start the camera application, and then starts the display driver by calling the kernel layer to display the functional interface of the camera application.
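The event flow described above can be sketched as a minimal model (all class and method names here are hypothetical illustrations, not the actual Android framework APIs): a raw input event carries the touch coordinates and a timestamp, and the framework layer identifies the control whose bounds contain the touch point.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the dispatch described above; not the real Android APIs.
public class InputDispatchSketch {
    // A raw input event as stored at the kernel layer: coordinates and a timestamp.
    static final class RawInputEvent {
        final int x, y;
        final long timestampMs;
        RawInputEvent(int x, int y, long timestampMs) {
            this.x = x; this.y = y; this.timestampMs = timestampMs;
        }
    }

    // Control bounds registered by the view system: name -> {left, top, right, bottom}.
    final Map<String, int[]> controlBounds = new LinkedHashMap<>();

    void registerControl(String name, int left, int top, int right, int bottom) {
        controlBounds.put(name, new int[] {left, top, right, bottom});
    }

    // The framework layer identifies the control corresponding to the input event.
    String resolveControl(RawInputEvent e) {
        for (Map.Entry<String, int[]> c : controlBounds.entrySet()) {
            int[] b = c.getValue();
            if (e.x >= b[0] && e.x < b[2] && e.y >= b[1] && e.y < b[3]) {
                return c.getKey();
            }
        }
        return null; // no control under the touch point
    }
}
```

In this sketch, a click landing inside the registered bounds of the camera icon resolves to the camera control, after which the application would be started as described above.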
When the camera application is started, the camera application can call a camera access interface in an application program framework layer, start a shooting function of a camera, and drive one or more cameras to acquire one or more frames of images in real time based on camera driving of a kernel layer. After the camera acquires the image, the image can be transmitted to the camera application in real time through the kernel layer, the system library and the application program framework layer, and then the camera application displays the image to a corresponding functional interface.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be implemented independently or combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
In order to improve user experience, terminal devices such as mobile phones and tablet computers are generally configured with a plurality of cameras. The terminal device may provide a plurality of photographing modes, such as a front photographing mode, a rear photographing mode, a front and rear double photographing mode, a main angle mode, etc., for the user through the plurality of cameras configured. The user can select a corresponding shooting mode to shoot according to the shooting scene.
The principal angle mode can be understood as a mode in which, when the terminal device records a video, a portrait focus-tracking video can be additionally generated; that is, two or more videos are saved when the recording is completed, one of which is the recorded original video, and the others are videos automatically cropped from the original video according to the tracked target portrait. The portrait in the portrait focus-tracking video can be understood as a "principal angle" that the user focuses on, and the video corresponding to the "principal angle" may be generated by cropping, from the video conventionally recorded by the terminal device, the video content corresponding to the "principal angle".
The "principal angle" may be a living body such as a person or an animal, or may be a non-living body such as a vehicle. It is understood that any item that can be identified based on an algorithmic model can be used as the "principal angle" in embodiments of the present application. In the embodiment of the present application, the "principal angle" may be defined as a focus tracking object, and the focus tracking object may also be referred to as a principal angle object, a tracking target, a tracking object, a focus tracking target, etc., which is not limited by the concept of the "principal angle".
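A minimal sketch of the dual-output behavior described above (all names are hypothetical, and a real implementation operates on camera frames rather than strings): every captured frame is written to the original video, and, once a focus-tracking object is set, a cropped copy is additionally written to the focus-tracking video.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the principal angle mode's dual-output recording.
public class PrincipalAngleRecorder {
    final List<String> originalVideo = new ArrayList<>();  // conventionally recorded frames
    final List<String> focusVideo = new ArrayList<>();     // frames cropped around the tracking target
    String trackingTarget;  // null means no "principal angle" has been set yet

    void setTrackingTarget(String target) { trackingTarget = target; }

    // Each captured frame always goes to the original video; when a tracking
    // target is set, a cropped version is additionally appended to the
    // focus-tracking video.
    void onFrame(String frame) {
        originalVideo.add(frame);
        if (trackingTarget != null) {
            focusVideo.add("crop(" + frame + ", " + trackingTarget + ")");
        }
    }
}
```

This mirrors the behavior above: the focus-tracking video is a by-product of normal recording, so the user obtains it without manually editing the whole video afterwards.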
For convenience of understanding, a main angle mode among photographing modes is described below with reference to the accompanying drawings.
Fig. 3A is a schematic view of an application scenario provided in an embodiment of the present application. As shown in fig. 3A, a terminal device 301, a person 302, and a person 303 are included in the application scene.
The terminal device 301 may record a video containing the tracking target through a camera. The tracking target can be any person recorded by a camera, or any object such as an animal, a car and the like. In the scene shown in fig. 3A, the tracking target may be the person 302 in the photographed image or the person 303 in the photographed image.
In the principal angle mode, the terminal equipment can additionally obtain one or more tracking target corresponding focus tracking videos while recording the videos.
Specifically, when the terminal device 301 previews in the principal angle mode, the terminal device 301 may receive an operation of setting a tracking target by the user, and after the terminal device 301 starts recording the video, one or more additional tracking videos may be generated based on the tracking target. The terminal device 301 may set the person 302 in the photographed screen as the tracking target, or may set the person 303 in the photographed screen as the tracking target, for example.
Alternatively, when the terminal device 301 records in the main angle mode, the terminal device 301 may additionally generate one or more tracking videos based on the tracking target, upon receiving an operation of setting the tracking target by the user. Thus, the focus tracking video corresponding to the tracking target can be obtained without manually editing the whole video.
It can be understood that the terminal device can switch the person corresponding to the tracking target one or more times during the video recording process. Specifically, when the terminal device receives the operation of switching the tracking target by the user, the terminal device switches the person corresponding to the tracking target.
In one possible implementation, when the person corresponding to the tracking target is switched, different persons exist in the focus tracking video obtained by the terminal device.
Illustratively, when the terminal device detects during recording that the user switches the tracking target from person 302 to person 303, the terminal device displays the tracking screen based on person 303. In the focus-tracking video generated based on the tracking target after recording ends, the portion before the tracking target is switched correspondingly displays person 302, and the portion after the switch correspondingly displays person 303.
Specifically, take as an example that at the 3rd second of recording, the terminal device detects that the user switches the tracking target from person 302 to person 303. In the focus-tracking video generated based on the tracking target, person 302 is correspondingly displayed before the 3rd second, and person 303 is correspondingly displayed after the 3rd second. In this way, the terminal device can switch the tracking target, thereby realizing switching of the person in the focus-tracking video and improving user experience.
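The switching behavior in this example can be sketched as a lookup of the active tracking target at a given recording time, assuming switch events are recorded as (time, target) pairs sorted by time (all names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: determine which person a focus-video frame at time t
// corresponds to, given the initial target and later switch events.
public class TargetTimeline {
    static final class SwitchEvent {
        final double timeSec; final String target;
        SwitchEvent(double timeSec, String target) { this.timeSec = timeSec; this.target = target; }
    }

    final String initialTarget;
    final List<SwitchEvent> switches = new ArrayList<>();  // assumed sorted by time

    TargetTimeline(String initialTarget) { this.initialTarget = initialTarget; }

    void switchTo(double timeSec, String target) { switches.add(new SwitchEvent(timeSec, target)); }

    // Frames before the first switch show the initial target; after each
    // switch, frames show the most recently selected target.
    String activeTargetAt(double timeSec) {
        String active = initialTarget;
        for (SwitchEvent e : switches) {
            if (e.timeSec <= timeSec) active = e.target;
        }
        return active;
    }
}
```

With a switch at the 3rd second from person 302 to person 303, a frame at second 2 resolves to person 302 and a frame at second 4 resolves to person 303, matching the example above.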
In another possible implementation, when the person corresponding to the tracking target is switched, the terminal device may obtain one path of the focus tracking video based on the tracking target before switching, and obtain the other path of the focus tracking video based on the tracking target after switching.
Illustratively, when the terminal device detects that the user switches the tracking target from the person 302 to the person 303 during recording, the terminal device generates the focus tracking video 1 based on the person 302, and generates the focus tracking video 2 based on the person 303 after the tracking target is switched to the person 303.
In some embodiments, in the recording process of the video, the terminal device may also start and end recording of the focus tracking video multiple times, and additionally generate multiple paths of focus tracking videos.
Illustratively, in recording video 1, when the terminal device receives an operation that the user ends tracking person 303, the terminal device may generate a focus tracking video 2 based on the person 303. When the terminal device receives the operation of the user tracking the person 302 after finishing the operation of tracking the person 303, the terminal device 301 may additionally generate the focus tracking video 3 based on the person 302. At this time, in addition to the normally recorded video 1, the terminal device additionally generates 2 chasing videos (i.e., chasing video 2 and chasing video 3). The embodiment of the application does not limit the number of the focus tracking videos.
Therefore, the terminal equipment can start and end recording of the focus tracking video for a plurality of times, generate a plurality of paths of focus tracking videos based on the tracking target, and improve user experience.
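A minimal sketch of this multi-session behavior, under the assumption that each closed tracking session yields one focus-tracking video in addition to the normally recorded video (all names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of starting and ending focus tracking several times
// within one recording; each closed session yields one focus-tracking video.
public class FocusSessionManager {
    final List<String> finishedFocusVideos = new ArrayList<>();
    String currentTarget;  // null when no tracking session is open

    void startTracking(String target) { currentTarget = target; }

    // Ending a session finalizes one focus-tracking video for its target.
    void endTracking() {
        if (currentTarget != null) {
            finishedFocusVideos.add("focusVideoOf(" + currentTarget + ")");
            currentTarget = null;
        }
    }

    // Ending the whole recording also closes any session still open.
    List<String> finishRecording() {
        endTracking();
        return finishedFocusVideos;
    }
}
```

In the example above (tracking person 303, ending that session, then tracking person 302 until recording ends), this sketch yields exactly two focus-tracking videos alongside the original video.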
It should be noted that, in the scene shown in fig. 3A, the video recorded by the terminal device through the camera includes two characters. More or fewer characters may be included in the video of the terminal device. The number of characters recorded by the terminal equipment is not particularly limited.
It should be noted that if, during preview and recording in the principal angle mode, the terminal device receives no operation of setting a tracking target by the user, one video is obtained when recording ends. If the terminal device receives an operation of setting a tracking target during preview in the principal angle mode and then receives an operation of closing the small window, the terminal device cancels tracking of the tracking target; in this case, if no operation of setting a tracking target is received during recording, one video is likewise obtained when recording ends. It is understood that the principal angle mode may be provided in an application having a photographing function, such as a camera. After the terminal device enters the principal angle mode, the implementation of the principal angle mode may include a preview mode and a recording mode.
It should be noted that, the interface displayed by the terminal device in the preview mode (before recording) and the recording mode (during recording) may be referred to as a preview interface; the pictures displayed in the preview interface of the preview mode (before recording) are not generated and saved; the pictures displayed in the preview interface of the recording mode (during recording) can be generated and saved. For convenience of distinction, hereinafter, a preview interface of a preview mode (before recording) is referred to as a preview interface; the preview interface of the recording mode (during recording) is referred to as a recording interface.
In the preview mode of the home angle mode, an image (preview screen) obtained by the camera may be displayed in the preview area, and an image (trace screen) of the trace target selected by the user may be displayed in the small window. In the preview mode, the terminal device may not generate a video, or may not store the content displayed in the preview area and the content displayed in the small window.
For example, a preview interface of the principal angle mode in the terminal device may be as shown in fig. 3B. The preview interface includes a preview area 304, a recording control 305.
The preview area 304 displays a preview screen. When the terminal device recognizes that a person is included in the preview screen, a tracking frame (e.g., tracking frame 307 and tracking frame 308) is displayed in the preview area. The tracking frame can prompt the user that the corresponding person can be set or switched to the tracking target, and the user can conveniently set or switch the tracking target. When the terminal device recognizes that a plurality of persons are included in the preview screen, the preview area may be displayed with a plurality of tracking frames. The number of tracking frames is less than or equal to the number of people identified by the terminal device. The tracking target is any one of a plurality of persons corresponding to the tracking frame in the preview screen. The tracking target may be referred to as a focus tracking object, a principal angle object, or the like, which is not limited in the present application.
In some embodiments, the tracking box (e.g., tracking box 307) corresponding to the person set to track the target is different from the tracking box display style corresponding to the person not set to track the target (e.g., tracking box 308). In this way, the user can distinguish and identify the tracked person (tracked target) conveniently. In addition to the different patterns of the tracking frames, the embodiments of the present application may also set the colors of the tracking frames, for example, the colors of the tracking frame 307 and the tracking frame 308 are different. Thus, the tracking target can be intuitively distinguished from other people.
The tracking frame may be a dashed box, such as tracking frame 307; it may also be a combination of a dashed box and "+", such as tracking frame 308; it may take any other display form, so long as it can be triggered by the user to set the corresponding object as the tracking target. The tracking frame may be marked at any position of a person that can be set as the tracking target, which is not specifically limited in the embodiments of the present application.
It can be understood that the tracking frame is one of the tracking identifiers, and the terminal device can also display other forms of tracking identifiers, so as to facilitate the user to set the tracking target. By way of example, other forms of tracking identification may be thumbnails of objects, numbers, letters, graphics, and the like. The tracking mark may be provided at any position of the object, may be provided near the object, or may be provided at the edge of the preview area. The embodiment of the application does not specifically limit the specific location of the tracking identifier.
For example, the terminal device may display a thumbnail arrangement of one or more objects at an edge of the preview area, and when the terminal device receives an operation that the user clicks any tracking identifier, set the object corresponding to the clicked tracking identifier as a tracking target.
In a possible implementation manner, the terminal device may identify persons through face recognition technology and display the tracking frames. The terminal device may determine the display position of a tracking frame, for example a relatively central position on the person's body, through techniques such as human body recognition. In this way, by calculating the position of the tracking frame based on the human body, the tracking frame is less likely to fall on the face, occlusion of the face by the tracking frame is reduced, and user experience is improved. The recognition technology for persons and the technique used for calculating the tracking frame position are not particularly limited in the embodiments of the present application.
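The patent does not give an implementation; as a minimal sketch of the body-based placement described above, the frame can be centered on the detected body box and shifted below the face box when the two would overlap. The `Box` type, `tracking_frame_position` name, and the fixed frame size are all illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # left edge
    y: float  # top edge
    w: float  # width
    h: float  # height

def tracking_frame_position(body: Box, face: Box, size: float = 60.0) -> Box:
    """Center a fixed-size tracking frame on the detected body box;
    if that position would overlap the face box, shift it below the face."""
    cx = body.x + body.w / 2
    cy = body.y + body.h / 2
    frame = Box(cx - size / 2, cy - size / 2, size, size)
    face_bottom = face.y + face.h
    if frame.y < face_bottom:  # frame would cover part of the face
        frame = Box(frame.x, face_bottom, size, size)
    return frame
```

With a short face box the frame stays at the body center; with a taller face box it is pushed down so the face remains unobstructed.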
In some embodiments, fig. 3B also includes a widget 306. The widget 306 displays a tracking screen. The tracking screen corresponds to a tracking target. When the tracking target is switched, the person in the tracking screen displayed in the widget 306 is switched. For example, if the tracking target is switched from the person corresponding to the tracking frame 307 to the person corresponding to the tracking frame 308, the tracking screen displayed in the widget 306 is changed accordingly.
The tracking screen may be part of a preview screen. In a possible implementation manner, the tracking picture is obtained by the terminal device cutting the preview picture according to a certain proportion based on the tracking target. The embodiment of the application does not specifically limit the frame displayed by the small window.
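As an illustration of the proportional cropping mentioned above (not specified in the patent), a crop window can be centered on the tracking target and clamped to the preview bounds. The portrait 9:16 aspect ratio and the 0.5 scale factor are assumptions for the sketch.

```python
def crop_tracking_picture(preview_w, preview_h, target_cx, target_cy,
                          scale=0.5, aspect=(9, 16)):
    """Return (left, top, width, height) of a crop of the preview picture,
    centered on the tracking target and clamped to the preview bounds."""
    crop_h = preview_h * scale                 # crop height as a fraction of preview height
    crop_w = crop_h * aspect[0] / aspect[1]    # keep the widget's aspect ratio
    left = min(max(target_cx - crop_w / 2, 0), preview_w - crop_w)
    top = min(max(target_cy - crop_h / 2, 0), preview_h - crop_h)
    return left, top, crop_w, crop_h
```

The clamping keeps the target as close to the crop center as the preview edges allow, which matches the behavior of the target remaining centered while it stays in the lens.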
In some embodiments, the specification, position, horizontal and vertical screen display modes and the like of the small window are adjustable, and a user can adjust the style of the small window according to video habits.
Optionally, the tracking target is centrally displayed in the tracking screen.
Optionally, the small window floats above the recording area. And are not limited herein.
In some embodiments, widget 306 further includes a close control 309 and a first switch control 310.
It can be understood that, when the terminal device receives the operation of setting the tracking target by the user, a small window is displayed on the preview interface to display the tracking screen of the tracking target. And when the terminal equipment does not receive the operation of setting the tracking target by the user, the preview interface does not display a small window.
When the user triggers the close control 309 through clicking, touching, or other operations in the preview interface shown in fig. 3B, the terminal device receives the operation of closing the widget, closes the widget, and cancels the preview of the tracking target.
When the user triggers the first switching control 310 through clicking, touching, or other operations in the preview interface shown in fig. 3B, the terminal device receives an operation of switching the widget display mode (widget style), and switches the style of the widget. Specifically, the small window may be switched from landscape to portrait, or vice versa.
When the user triggers the recording control 305 through clicking, touching or other operations on the preview interface shown in fig. 3B, the terminal device receives an operation of starting recording, and starts recording video and tracking video.
Optionally, the preview interface may also include other controls, such as a main angle mode exit control 311, a setting control 312, a flash control 313, a second switching control 314 (for the widget style), a zoom control 315, and the like.
When the main angle mode exit control 311 is triggered, the terminal device exits the main angle mode and enters the video recording mode. When the setting control 312 is triggered, the terminal device may adjust various setting parameters. The setting parameters include, but are not limited to: whether to turn on a watermark, store a path, a coding scheme, whether to save a geographic location, etc. When the flash control 313 is triggered, the terminal device may set a flash effect, for example, control the flash to be forcibly turned on, forcibly turned off, turned on at photographing, turned on according to environmental adaptation, and the like. When the zoom control 315 is triggered, the terminal device may adjust the focal length of the camera, thereby adjusting the magnification of the preview screen.
When the user triggers the second switching control 314 through clicking or touching on the preview interface shown in fig. 3B, the terminal device receives the operation of setting the widget style, and displays the widget style selection item for the user to select. The widget style selections include, but are not limited to: transverse or vertical, etc. The embodiments of the present application are not limited in this regard. In a possible implementation, the second switching control 314 corresponds to a display style of the widget, so that the user can distinguish the widget style conveniently.
It should be noted that, in the preview interface shown in fig. 3B, the widget style switching may be controlled by the first switching control 310 or by the second switching control 314. In a possible implementation, the first switching control 310 in the widget may be set in linkage with the second switching control 314 of the preview area. For example, when the widget is switched from landscape to portrait, the icons of the first switching control 310 and the second switching control 314 are both displayed in the portrait preview style, or both in the landscape preview style, prompting the user with the preview style that a further switch would produce.
It can be appreciated that in the preview scenario, after the terminal device sets the tracking target, the tracking screen of the widget may display the tracking target centrally. In some scenarios, the tracking target may be in a moving state, and when the tracking target moves but does not leave the lens, the tracking screen of the widget may continuously display the tracking target centrally.
For example, the preview screen may include a male character and a female character. The terminal device sets the male character as the tracking target in response to the user's click operation on the tracking frame of the male character, and enters the interface shown in a in fig. 3B. In the interface shown in a in fig. 3B, the tracking screen of the small window displays the male character centrally, with the male character standing to the right of the female character. As the male character moves, the terminal device can continuously focus on him and keep him centered in the small window. When the male character walks to the left of the female character, the interface of the terminal device may be as shown in b in fig. 3B, where the tracking screen of the small window still displays the male character centrally, now to the left of the female character.
In a possible implementation manner, when the terminal device tracks the target, the focus moves along with the movement of the tracked target, and illustratively, in the interface shown as a in fig. 3B, the focus is located in the face area of the male character and is located in the middle right part of the screen; the male character moves, the terminal device can continue to focus on the male character, and when the male character walks to the left of the female character, the interface of the terminal device can be as shown in B of fig. 3B. In the interface shown in B in fig. 3B, the focus is located in the face region of the male character, in the middle left portion of the screen.
In the recording mode of the main angle mode, the terminal device can display the image obtained by the camera (the recording picture) in the recording area, display the image of the tracking target selected by the user (the tracking picture) in the small window, and, after recording is started, generate both a recorded video and a focus-tracking video. When recording ends, the terminal device stores the video generated based on the recording picture and the focus-tracking video generated based on the tracking picture.
In some embodiments, the small window may end its recording before the recording of the recording area ends. When the small window recording ends, the terminal device stores the tracking video generated based on the tracking picture. In other words, the terminal device may end the recording of the focus-tracking video before the whole video ends.
In some embodiments, the small window may start recording later than the recording of the recording area. In other words, after the terminal device starts recording the video, the terminal device opens the small window and starts recording the focus-tracking video only after detecting the user's operation of setting the tracking target.
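The timing relationship in these two embodiments can be sketched as follows: the focus-tracking clip may start after, and end before, the main recording, but its span always lies within the main recording's span. This is an illustrative model, not the patent's implementation; all names are hypothetical.

```python
class RecordingSession:
    """Track the spans of the main recording and the focus-tracking clip."""
    def __init__(self):
        self.main = [None, None]   # [start, end] of the whole video
        self.track = [None, None]  # [start, end] of the focus-tracking clip

    def start_main(self, t):
        self.main[0] = t

    def start_tracking(self, t):
        # the widget recording may begin after the main recording has started
        self.track[0] = max(t, self.main[0])

    def end_tracking(self, t):
        # the widget recording may end before the main recording ends
        self.track[1] = t

    def end_main(self, t):
        self.main[1] = t
        if self.track[0] is not None and self.track[1] is None:
            self.track[1] = t  # otherwise the clip closes with the whole video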
For example, the recording interface of the main angle mode in the terminal device may be as shown in fig. 3C. The recording interface includes a recording area 316, a pause control 317, and an end control 318.
Recording area 316 displays a recording picture and a recording duration. When the terminal device recognizes that a person is included in the recording screen, the recording area displays tracking frames (e.g., tracking frame 320 and tracking frame 321). It will be appreciated that the number of tracking frames is less than or equal to the number of people identified by the terminal device.
In some embodiments, the recording interface also displays a widget 319. The widget 319 displays a tracking screen. The tracking screen corresponds to a tracking target. When the tracking target is switched, the person in the tracking screen displayed in the small window 319 is switched. For example, if the tracking target is switched from the person corresponding to the tracking frame 320 to the person corresponding to the tracking frame 321, the tracking screen displayed in the small window 319 is changed accordingly.
The tracking picture may be part of the recording picture. In a possible implementation manner, the tracking picture is obtained by cropping the recording picture in real time in a certain proportion based on the tracking target. The embodiment of the application does not specifically limit the picture displayed by the small window.
Optionally, the tracking target is centrally displayed in the tracking screen.
Optionally, the small window floats above the recording area. And are not limited herein.
The widget 319 also includes a widget end control 322 and a widget recording duration.
It can be understood that, when the terminal device receives the operation of setting the tracking target by the user, a small window is displayed on the recording interface to display the tracking picture of the tracking target. When the terminal equipment does not receive the operation of setting the tracking target by the user, the recording interface does not display a small window.
When the user triggers the end control 318 through clicking, touching or other operations on the recording interface shown in fig. 3C, the terminal device receives the operation of ending recording by the user, and the terminal device enters the preview interface in the main angle mode, stores the video corresponding to the recording picture, and the focus tracking video corresponding to the tracking picture.
When the user triggers the pause control 317 by clicking, touching, etc. on the recording interface shown in fig. 3C, the terminal device receives the operation of the user to pause recording, and the terminal device pauses recording of video in the recording area 316 and recording of the focus-following video in the small window 319.
When the user triggers the widget ending control 322 through clicking, touching or other operations on the recording interface shown in fig. 3C, the terminal device receives the operation of ending the widget recording by the user, and the terminal device continues to display the recording picture in the recording area 316, closes the widget 319 and stores the focus tracking video corresponding to the tracking picture in the widget 319.
In a possible implementation, the recording interface further includes a flash control 323. When the flash control 323 is triggered, the terminal device can set a flash effect.
It can be understood that when the terminal device records in the main angle mode, one video can be generated based on the recording picture of the recording area, and an additional focus-tracking video corresponding to the tracking target can be generated based on the tracking picture of the small window. The two videos are stored independently in the terminal device. Therefore, the video corresponding to the tracking target can be obtained without manually editing the whole video afterwards; the operation is simple and convenient, and user experience is improved.
It can be understood that, in the recording scenario, after the terminal device sets the tracking target, the tracking screen of the widget may display the tracking target centrally. In some scenarios, the tracking target may be in a moving state, and when the tracking target moves but does not leave the lens, the tracking screen of the widget may continuously display the tracking target centrally.
For example, the preview screen may include a male character and a female character. The terminal device sets the male character as the tracking target in response to the user's click operation on the tracking frame of the male character, and enters the interface shown in a in fig. 3C. In the interface shown in a in fig. 3C, the tracking screen of the small window displays the male character centrally, with the male character standing to the right of the female character. As the male character moves, the terminal device can continuously focus on him and keep him centered in the small window. When the male character walks to the left of the female character, the interface of the terminal device may be as shown in b in fig. 3C, where the tracking screen of the small window still displays the male character centrally, now to the left of the female character.
In a possible implementation manner, when the terminal device tracks the target, the focus moves along with the movement of the tracked target, and in an interface shown in a in fig. 3C, the focus is located in the face area of the male character and is located in the middle right part of the screen; the male character moves, the terminal device can continue to focus on the male character, and when the male character walks to the left of the female character, the interface of the terminal device can be as shown in b of fig. 3C. In the interface shown in b in fig. 3C, the focus is located in the face area of the male character, in the middle left portion of the screen.
It can be appreciated that, in the embodiment of the present application, a shooting mode in which one or more tracking videos can be additionally generated based on a tracking target is defined as a principal angle mode, and the shooting mode may also be referred to as a tracking mode, which is not limited in the embodiment of the present application.
The manner of entering the main angle mode in the terminal device and the interface involved in recording will be described below with reference to fig. 4 to 7. Fig. 4 and fig. 5 are schematic views of two main angle mode entering flows according to the embodiments of the present application. Fig. 6 and fig. 7 are schematic diagrams of interfaces involved in recording according to embodiments of the present application.
Fig. 4 is an interface schematic diagram of a terminal device entering a main angle mode according to an embodiment of the present application.
When the terminal device receives the operation of opening the camera application 401 by the user in the main interface shown in a in fig. 4, the terminal device may enter the photographing preview interface shown in b in fig. 4. The photographing preview interface may include a preview area and a photographing mode selection item. The preview area displays a preview picture in real time; shooting mode selection items include, but are not limited to: portrait, photograph, video, professional or more 402.
When the user triggers more 402 on the camera preview interface shown in b in fig. 4 through clicking, touching or the like, the terminal device receives the operation of the user to view other types of shooting modes, and enters the shooting mode selection interface shown in c in fig. 4. The shooting mode selection interface includes: shooting mode selection items. Shooting mode selection items include, but are not limited to: professional, panoramic, high dynamic-range image (HDR), time-lapse photography, watermarking, document correction, high-pixel, micro-movie, principal angle mode 403, or other types of shooting mode selections.
When the user triggers the principal angle mode 403 through clicking, touching or other operations on the shooting mode selection interface shown in c in fig. 4, the terminal device receives the operation of selecting the principal angle mode preview by the user, and enters the preview interface corresponding to the principal angle mode shown in d in fig. 4. The preview interface includes: preview area and recording control. The preview area displays a preview screen. When the person exists in the preview screen, a tracking frame is also displayed in the preview area. When the user triggers the tracking frame through clicking or touching operation, the terminal device receives the operation of setting the tracking target, sets the person corresponding to the tracking frame as the tracking target, and displays the tracking picture corresponding to the tracking target on the display interface through the small window.
Fig. 5 is an interface schematic diagram of another terminal device entering a main angle mode according to an embodiment of the present application.
When the terminal device receives the operation of opening the camera application 501 by the user in the main interface shown in a in fig. 5, the terminal device may enter the photographing preview interface shown in b in fig. 5. The photographing preview interface may include a preview area and a photographing mode selection item. The preview area displays a preview picture in real time; shooting mode selection items include, but are not limited to: portrait, photograph, video 502, professional or other type of photography mode selection.
When the user triggers the video 502 through clicking, touching or other operations on the camera preview interface shown in b in fig. 5, the terminal device receives the operation of selecting the video preview by the user, and enters the video preview interface shown in c in fig. 5. The video preview interface comprises: preview area, recording parameter selection item and shooting mode selection item. The preview area displays a preview picture in real time; recording parameter options include, but are not limited to: a main angle mode 503, a flash, a filter, a setting, or other types of recording parameter selections. Shooting mode selection items include, but are not limited to: portrait, photograph, video, professional or other types of photography mode selections.
When the user triggers the principal angle mode 503 through clicking, touching or other operations on the video preview interface shown in c in fig. 5, the terminal device receives the operation of selecting the principal angle mode preview by the user, and enters the preview interface corresponding to the principal angle mode shown in d in fig. 5. The preview interface includes: preview area and recording control. The preview area displays a preview screen. When the person exists in the preview screen, a tracking frame is also displayed in the preview area. When the user triggers the tracking frame through clicking or touching operation, the terminal device receives the operation of setting the tracking target, sets the person corresponding to the tracking frame as the tracking target, and displays the tracking picture corresponding to the tracking target on the display interface through the small window.
It can be understood that when the terminal device enters the main angle mode, the terminal device can be horizontally placed in a horizontal screen state or can be vertically placed in a vertical screen state, and the principle that the terminal device realizes the main angle mode is similar in the horizontal screen state or the vertical screen state.
It can be understood that in the main angle mode of the camera, the terminal device may select the tracking target after starting recording, or may select the tracking target before starting recording the video. The following describes two recording processes with reference to fig. 6 and 7, respectively. Fig. 6 is an interface schematic diagram corresponding to a main angle mode recording flow provided in an embodiment of the present application.
When the user triggers the recording control 601 through clicking, touching or the like in the preview interface shown in a in fig. 6, the terminal device receives an operation of starting recording, and enters the recording interface shown in b in fig. 6. The recording interface includes: recording area 602, pause control 603, end control 604. The recording area displays a recording screen and a tracking frame 605. The tracking frame 605 may facilitate user selection of a tracking target.
When the user triggers the tracking frame 605 through clicking, touching, etc. operations in the recording interface shown in b in fig. 6, the terminal device receives the operation of setting the tracking target by the user, and the terminal device enters the recording interface shown in c in fig. 6, where the recording interface includes: a recording area, a pause control 606, an end control 607, a widget 608. The recording area displays a recording picture. The widget 608 includes a widget end control 609. The small window 608 displays a tracking screen corresponding to the tracking target. The tracking picture corresponding to the tracking target is a part of the recording picture.
On the basis of the flow shown in fig. 6, when the user triggers the pause control 606 through clicking or touching the recording interface shown in fig. 6 c, the terminal device receives the operation of suspending video recording, and the video recording is suspended, and the focus tracking video corresponding to the small window is also suspended.
When the user triggers the end control 607 through clicking or touching the recording interface shown in fig. 6 c, the terminal device receives the operation of ending video recording, the video recording ends, and the recording of the focus tracking video corresponding to the small window ends.
When the user triggers the small window ending control 609 through clicking or touching on the recording interface shown in fig. 6 c, the terminal device receives the operation of ending the recording of the focus tracking video, the recording of the focus tracking video corresponding to the small window ends, and the video continues to be recorded.
When the user triggers the widget 608 through a drag operation in the recording interface shown in fig. 6 c, the widget 608 position may be moved.
The moving distance of the widget 608 is related to the distance between the drag operation start position and the drag operation end position, and the moving direction of the widget 608 is related to the direction of the drag operation.
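The drag behavior described above amounts to translating the widget by the drag vector; a minimal sketch (with clamping to keep the widget on screen, an assumption not stated in the patent) might look like this:

```python
def move_widget(pos, drag_start, drag_end, screen, size):
    """Translate the widget by the drag vector (end - start),
    clamping so the widget stays fully within the screen bounds."""
    dx = drag_end[0] - drag_start[0]
    dy = drag_end[1] - drag_start[1]
    x = min(max(pos[0] + dx, 0), screen[0] - size[0])
    y = min(max(pos[1] + dy, 0), screen[1] - size[1])
    return (x, y)
```

The moved distance and direction thus follow directly from the start and end positions of the drag operation, as the text describes.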
Fig. 7 is an interface schematic diagram corresponding to a main angle mode recording flow provided in an embodiment of the present application.
When the user triggers the tracking frame 701 by clicking, touching, or the like in the preview interface shown in a in fig. 7, the terminal device receives an operation of setting a tracking target by the user, and enters the preview interface shown in b in fig. 7. The preview interface includes: preview area 702, recording control 703 and widget 704. The recording area displays a preview picture. The widget 704 displays a tracking screen. The tracking screen corresponds to a tracking target. The widget 704 also includes a close control 705 and a first switch control 706.
When the user triggers the recording control 703 through clicking, touching or other operations in the preview interface shown in b in fig. 7, the terminal device receives an operation of starting recording, and enters the recording interface shown in c in fig. 7. The recording interface includes: a recording area, a pause control 707, an end control 708, and a widget 709. The recording area displays a recording picture. The widget 709 includes a widget ending control 710. The small window 709 displays a tracking screen corresponding to the tracking target. The tracking picture is a part of the recording picture.
The roles of pause control 707, end control 708, and widget 709 may be referred to in the relevant description of fig. 6 and will not be repeated here.
On the basis of the flow shown in fig. 7, when the user triggers the close control 705 by clicking, touching or the like in the preview interface shown in b in fig. 7, the terminal device receives the operation of closing the widget, closes the widget, and cancels the preview of the tracking target.
When the user triggers the first switching control 706 of the small window through clicking, touching or other operations in the preview interface shown in b in fig. 7, the terminal device receives the operation of switching the display mode of the small window, and switches the display mode of the small window. Specifically, the small window may be switched from landscape to portrait, or vice versa.
In a possible implementation, the terminal device may also adjust the size of the small window. The embodiment of the application does not limit a specific implementation manner of the size adjustment of the small window.
The recording flow in the main angle mode is described above with reference to fig. 6 and fig. 7. To facilitate use of the principal angle mode, the terminal device may generate various prompts to guide the user in becoming familiar with the mode. The various prompts include, but are not limited to: an entry prompt, a use prompt, a tracking target loss prompt, a tracking target retrieval prompt, a pause prompt, and the like.
The above mentioned prompts are described below in connection with the use of the camera application in the terminal device, respectively. Fig. 8 and 9 are illustrations of the entry of the main angle mode. Fig. 10 and 11 are illustrations of a tracking hint in a preview scene and a loss hint when a tracking target is lost. Fig. 12 is an interface schematic diagram corresponding to a recording scene. Fig. 13-16 are illustrations of a tracking prompt, a tracking frame displaying prompt, a loss prompt when a tracking target is lost, and a pause prompt in a recording scene.
The entry hint for the principal angle mode is described below with reference to fig. 8.
Exemplary, fig. 8 is an interface schematic diagram of a main angle mode entry hint according to an embodiment of the present application.
When detecting that the user selects the video mode on the preview interface of the camera APP, the terminal device enters a video preview interface as shown in fig. 8. The video preview interface comprises: preview area, recording parameter selection, shooting mode selection, and entry prompt 801. The preview area displays a preview picture in real time; recording parameter options include, but are not limited to: a main angle mode 802, a flash, a filter, a setting, or other types of recording parameter selections. Shooting mode selection items include, but are not limited to: portrait, photograph, video, professional or other types of photography mode selections.
In a possible implementation, the entry prompt 801 is used to prompt the user to enter the principal angle mode 802. The content of the entry prompt may be "Try the principal angle mode".
In a possible implementation, the entry prompt 801 is used to prompt the user about the role of the principal angle mode. The content of the entry prompt may be "Principal angle mode: additionally generate a portrait focus-tracking video", or "Try the principal angle mode for focus tracking and dual recording".
It will be appreciated that the entry prompt 801 may be displayed in the form of a bubble on the video preview interface or in other forms. The entry hint 801 may be located to the right of the main angle pattern 802, may be located to the underside of the main angle pattern 802, or elsewhere around the main angle pattern 802. The embodiment of the application does not limit the display form, specific content, display position and the like of the entry prompt.
When the user triggers an arbitrary area through clicking or touching on the video preview interface shown in fig. 8, the terminal device receives an operation of canceling the entry prompt, and the entry prompt 801 disappears. When the user triggers the principal angle mode 802 through clicking or touching the video preview interface shown in fig. 8, the terminal device receives the operation of using the principal angle mode by the user, and enters the preview interface corresponding to the principal angle mode.
It can be understood that the terminal device may prompt the principal angle mode when entering the video mode for the first time, may prompt multiple times, and may prompt the principal angle mode based on the preview screen.
In a possible implementation manner one, the terminal device displays an entry prompt when entering the video preview interface for the first time. Therefore, the user can be prompted to newly increase the principal angle mode, the user experience principal angle mode is guided, and the user experience is improved.
In a second possible implementation manner, the terminal device displays the entry prompt each time it enters the video preview interface, until the terminal device has entered the preview interface corresponding to the principal angle mode; after that, the entry prompt is no longer displayed when entering the video preview interface.
In this way, the user can be guided to the principal angle mode multiple times. In addition, once the terminal device has entered the preview interface corresponding to the principal angle mode, it indicates that the user has experienced and is aware of the principal angle mode. No entry prompt is made thereafter, which reduces the number of times the entry prompt is displayed during video preview, reduces occlusion of the preview screen, reduces user operations, and improves user experience.
Illustratively, the terminal device displays the entry prompt when it first enters the video preview interface. If the terminal device has since entered the preview interface corresponding to the main angle mode (in either of the two manners described above), no entry prompt is displayed the second and subsequent times the terminal device enters the video preview interface; if the terminal device has not entered the preview interface corresponding to the main angle mode, the entry prompt continues to be displayed when the terminal device enters the video preview interface for the second time.
In a third possible implementation manner, the terminal device displays the entry prompt when it detects that the number of persons in the preview screen of the video preview interface is two or more.
Specifically, when the terminal device enters the video preview interface and the preview screen includes no person or only one person, the entry prompt is not displayed. When the preview screen includes two or more persons, the entry prompt is displayed.
For example, as shown in fig. 9, when the terminal device receives an operation of performing a video preview by a user, it enters the video preview interface shown in a in fig. 9. The video preview interface includes: a preview area, recording parameter selection items, and shooting mode selection items. Since the number of persons in the preview screen is fewer than two, no entry prompt is displayed. When the terminal device receives the operation of the user for video preview, it enters the video preview interface shown in b in fig. 9. Since the number of persons in the preview screen is two or more, the entry prompt is displayed.
In some embodiments, the entry prompt is displayed when the duration for which the preview screen includes two or more persons reaches a duration threshold. The duration threshold may be 1 second (s), or any other value, which is not limited herein.
In this way, the number of times the entry prompt is displayed during video preview can be reduced, occlusion of the preview screen is reduced, user operations are reduced, and user experience is improved. It can be appreciated that, due to lens switching or adjustment of the shooting direction, the terminal device may capture persons unintentionally, so that the displayed preview screen briefly includes two or more persons.
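The duration-threshold check described above can be sketched as follows. This is a hypothetical helper (the name `EntryPromptTrigger` and its interface are not from the patent), assuming per-frame person counts and timestamps are available:

```python
class EntryPromptTrigger:
    """Show the entry prompt once two or more persons have been
    continuously present for `threshold_s` seconds (1 s in the text)."""

    def __init__(self, threshold_s=1.0):
        self.threshold_s = threshold_s
        self._since = None  # timestamp when the count first reached two

    def update(self, person_count, now_s):
        """Feed one preview frame; return True when the prompt should appear."""
        if person_count < 2:
            self._since = None  # streak broken: restart the timer
            return False
        if self._since is None:
            self._since = now_s
        return now_s - self._since >= self.threshold_s
```

A person briefly walking through the frame (as in the lens-switching case above) resets the timer before the threshold is reached, so no prompt is shown.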
In some embodiments, the terminal device may also limit the time interval between two entry prompts. The time interval between two entry prompts of the terminal device is greater than a first threshold. The first threshold may be any value, for example, 24 hours (h), and is not limited herein. Thus, frequent display of the entry prompt can be reduced, and user experience is improved.
On the basis of the above embodiment, the terminal device may further limit the number of times of displaying the entry prompt. And when the display times of the entry prompt reach a second threshold value, the entry prompt is not displayed any more. The second threshold may be 10, or any other value, which is not limited herein.
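The interval limit (first threshold) and display-count limit (second threshold) described above can be sketched together. A minimal, hypothetical helper, assuming the 24-hour and 10-display values mentioned in the text:

```python
class PromptRateLimiter:
    """Gate a prompt behind a minimum interval between two displays
    (first threshold) and a lifetime display cap (second threshold)."""

    def __init__(self, min_interval_h=24.0, max_displays=10):
        self.min_interval_h = min_interval_h
        self.max_displays = max_displays
        self.count = 0
        self.last_shown_h = None

    def try_show(self, now_h):
        """Return True (and record the display) if the prompt may be shown now."""
        if self.count >= self.max_displays:
            return False  # lifetime cap reached: never show again
        if self.last_shown_h is not None and now_h - self.last_shown_h < self.min_interval_h:
            return False  # too soon after the previous display
        self.count += 1
        self.last_shown_h = now_h
        return True
```

The same gating could be reused for the widget style switching prompt (third and fourth thresholds) described later.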
In a possible implementation manner, when the terminal device has entered the principal angle mode and later displays the entry prompt again, the content of the entry prompt changes. By way of example, the content changes from "use the principal angle mode to additionally generate a portrait focus-tracking video" to "try the principal angle mode for focus-tracking dual recording".
It will be appreciated that the terminal device may apply any one, or any two, or all of the three possible implementations described above simultaneously. The embodiment of the present application is not particularly limited thereto.
The following describes the prompts in the preview scene after entering the principal angle mode with reference to fig. 10 and 11.
On the basis of the above embodiments, when the terminal device enters the preview interface corresponding to the principal angle mode, the preview interface displays a tracking prompt. The tracking prompt is used for prompting the user to set a tracking target.
Exemplary, fig. 10 is a schematic diagram of a preview interface corresponding to a principal angle mode provided in an embodiment of the present application. When the terminal device receives the operation of clicking the main angle mode control by the user, the terminal device enters a preview interface shown as a in fig. 10. The preview interface includes: a preview area, a recording control 1001, and a tracking prompt 1002. The preview area displays a preview screen. The tracking prompt 1002 is used to prompt the user to set a tracking target.
The content of the tracking prompt 1002 may be "click a portrait tracking frame to additionally generate a focus-tracking video", or may be "click a tracking frame to additionally generate a video". The tracking prompt 1002 may be displayed centered at the top of the preview area, or may be displayed elsewhere in the preview area. The embodiment of the application does not limit the display position, display form, and specific content of the tracking prompt.
Since the terminal device does not recognize the person in the preview interface shown in a in fig. 10, the tracking frame is not displayed in the preview area. When there is a person in the preview screen, a tracking frame is also displayed in the preview area (as shown by b in fig. 10).
It can be understood that when the preview screen includes a plurality of persons, the preview area displays a plurality of tracking frames.
On the basis of the embodiment shown in fig. 10, when the user triggers the recording control through clicking or touching or other operations on the preview interface shown in a in fig. 10 or the preview interface shown in b in fig. 10, the terminal device receives the operation of recording the video, and the tracking prompt (for example, tracking prompt 1002) disappears.
On the basis of the embodiment shown in fig. 10, when the user triggers the tracking frame by clicking or touching the preview interface shown in b in fig. 10, the terminal device receives an operation of setting the tracking target, the tracking prompt disappears, and a tracking screen is displayed. It is understood that the tracking screen corresponds to the tracking target, and the tracking screen may be a part of the preview screen.
Illustratively, when the user clicks the tracking frame 1003 on the preview interface shown in b in fig. 10, the terminal device receives an operation to set a tracking target, at which point the tracking prompt disappears, and the terminal device enters the preview interface shown in a in fig. 11. The preview interface is displayed with a small window 1101. The widget 1101 includes: a tracking screen, a first switch control 1102, and a close control 1103. The tracking picture can be obtained by clipping the content corresponding to the tracking target in the preview picture according to a certain proportion by the terminal equipment. When the terminal device receives the triggering operation for the first switching control 1102, the terminal device switches the display mode of the small window. For example, the small window is switched from landscape to portrait, or vice versa.
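The clipping of the tracking picture from the preview picture "according to a certain proportion" can be illustrated with a simple geometric sketch. The function name, the 9:16 ratio, and the 0.5 scale are assumptions for illustration, not values from the patent:

```python
def tracking_crop(frame_w, frame_h, target_cx, target_cy,
                  ratio_w=9, ratio_h=16, scale=0.5):
    """Compute the crop rectangle (x, y, w, h) for the small-window tracking
    picture: a ratio_w:ratio_h region centered on the tracking target,
    sized relative to the preview height and clamped to the frame bounds."""
    crop_h = int(frame_h * scale)
    crop_w = crop_h * ratio_w // ratio_h
    # Center on the target, then clamp so the crop stays inside the frame.
    x = min(max(target_cx - crop_w // 2, 0), frame_w - crop_w)
    y = min(max(target_cy - crop_h // 2, 0), frame_h - crop_h)
    return x, y, crop_w, crop_h
```

Re-evaluating this rectangle for every frame against the detector's target position is what keeps the tracking picture following the target as it moves within the preview.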
When the user triggers the closing control 1103 by a click or touch operation on the preview interface shown in a in fig. 11, the terminal device receives the operation of canceling tracking, closes the small window, and returns to the interface displaying the tracking prompt.
It will be appreciated that the terminal device may prompt the user to set the tracking target only when the principal angle mode is used for the first time, may prompt multiple times, or may prompt the user to set the tracking target whenever a person is recognized in the preview screen. This is not limited herein.
On the basis of the above embodiments, when the terminal device enters the preview interface corresponding to the principal angle mode, the preview interface may further display a widget style switching prompt. The widget style switching prompt is used to prompt the user to switch the widget style (e.g., landscape or portrait). The content of the widget style switching prompt may be "switch the focus-tracking video between landscape and portrait".
Optionally, the widget style switching prompt is further used for prompting the user about the operation for switching the widget style. The content of the widget style switching prompt may be "click to switch the focus-tracking video between landscape and portrait".
Illustratively, the preview interface shown in a of fig. 10 further includes: a widget style switching prompt 1004 and a second switching control 1005. The content of the widget style switching prompt 1004 may be "click to switch the focus-tracking video between landscape and portrait".
When the user triggers the second switching control 1005 by a click or touch operation on the video preview interface shown in fig. 10, the terminal device receives the operation of setting the widget style and displays widget style selection items for the user to select. The widget style selection items include, but are not limited to: landscape, portrait, etc. The embodiments of the present application are not limited in this regard. In a possible implementation, the second switching control 1005 corresponds to the current display style of the widget.
It is understood that the widget style switching prompt 1004 may be displayed on the preview interface in the form of a bubble, or may be displayed on the preview interface in other forms. The widget style toggle prompt 1004 may be located on the right side of the second toggle control 1005, on the underside of the second toggle control 1005, or elsewhere around the second toggle control 1005. The embodiment of the present application does not limit the display form, the specific content, the display position, and the like of the widget style switching prompt 1004.
When the user triggers any area by a click or touch operation on the video preview interface shown in fig. 10, the terminal device receives the operation of canceling the widget style switching prompt, and the widget style switching prompt disappears.
It can be understood that the terminal device may display the prompt for the second switching control when entering the principal angle mode for the first time, or may display it when entering the principal angle mode again after the first recording of a focus-tracking video is completed.
In a first possible implementation manner, when the terminal device enters the preview interface of the principal angle mode again after finishing recording a focus-tracking video in the principal angle mode, the terminal device displays the widget style switching prompt. In this way, the user can be prompted to select the widget style, the user can conveniently set the widget display style before recording, and user experience is improved.
In a second possible implementation manner, the terminal device displays the widget style switching prompt each time it enters the preview interface of the principal angle mode.
In some embodiments, the terminal device may further limit the time interval between two widget style switching prompts. The time interval between two widget style switching prompts of the terminal device is greater than a third threshold. The third threshold may be any value, for example, 24 hours (h), and is not limited herein. In this way, frequent display of the widget style switching prompt can be reduced, and user experience is improved.
On the basis of the above embodiment, the terminal device may further limit the number of times the widget style switching prompt is displayed. When the number of times the widget style switching prompt has been displayed reaches a fourth threshold, the widget style switching prompt is no longer displayed. The fourth threshold may be 10, or any other value, which is not limited herein.
On the basis of the above embodiments, the terminal device is further provided with a loss prompt for prompting the user that the tracking target is lost.
It will be appreciated that the tracking target may be absent from the preview screen of the terminal device for various reasons, i.e., the tracking target may be lost. The various reasons include, but are not limited to: movement of the tracking target, movement of the terminal device, the widget switching its display style (e.g., from landscape to portrait or vice versa), etc.
Fig. 11 is a schematic diagram of a preview interface corresponding to the principal angle mode provided in an embodiment of the present application. After the user selects the male person via the tracking frame as the tracking target, when the tracking target in the preview interface shown in a in fig. 11 is lost, the terminal device enters the preview interface shown in b in fig. 11. The preview interface includes: a preview area 1104, a widget 1105, and a loss prompt 1106. The widget 1105 displays the loss prompt 1106.
The loss prompt 1106 is used to indicate that the tracking target is lost. The content of the loss prompt 1106 may be "target lost" or "object lost". Optionally, the loss prompt 1106 may also be displayed in the preview area 1104. The embodiment of the application does not limit the display position, display form, and specific content of the loss prompt 1106 in the widget 1105.
In a possible implementation manner, the loss prompt is also used for prompting how the widget will be handled after the tracking target is lost. Illustratively, the preview interface shown in c in fig. 11 includes: a preview area 1107, a widget 1108, and a loss prompt 1109. The preview area 1107 displays the loss prompt 1109. The content of the loss prompt 1109 may be "target lost, tracking exits after 5 seconds". Optionally, the loss prompt 1109 may also be displayed in the widget 1108.
The embodiment of the present application does not limit the display position, display form and specific content of the loss prompt 1109 in the preview area 1107.
In some embodiments, when the tracking target is lost, the widget may be overlaid with a mask layer, presenting a masked state darker than the recording picture of the large window, so as to remind the user that the widget recording is abnormal. Illustratively, the widget 1108 in the interface shown in c in fig. 11 is shown in a masked state. The embodiment of the present application does not specifically limit the display of the widget.
It can be appreciated that when the loss prompt is displayed in the widget area, the chance that the widget occludes the loss prompt is reduced, and occlusion of the preview screen is reduced. Moreover, the loss prompt becomes clearer and easier for the user to understand: a loss prompt displayed in the widget is more readily understood as the tracking target of the widget being lost. In addition, when the widget displays the text of the loss prompt, the user's attention to the widget can be increased.
When the loss prompt is displayed in the recording area, the terminal device may display more text to inform the user of the subsequent processing, and may display text in a larger font size, making it easier for the user to read.
It can be understood that the loss prompt may disappear after the first preset duration, may also disappear when the tracking target appears in the preview screen, and may also disappear when an operation of switching to another tracking target is received.
The first preset duration may be 3s or 5s, and the specific value of the first preset duration is not limited in this embodiment of the present application.
On the basis of the above embodiment, the loss prompt may also be used to guide the user to reset the tracking target. Specifically, when the widget switches its display style, the tracking target may disappear. The loss prompt may then be used to prompt the user to set a tracking target. The content of the loss prompt may be "reselect the tracking target" or "please reselect the tracking target".
It should be noted that, when the widget style is switched (for example, from landscape to portrait or vice versa), the terminal device may lose the tracking target, and thus the user needs to reset the tracking target. For example, when the widget is switched from horizontal to vertical, the data for clipping the widget in the original horizontal preview mode may be lost, or the data for horizontal preview and the data for vertical preview are not matched, which results in that the widget cannot acquire the focus tracking position in the preview area after switching, and further results in that the widget loses tracking targets.
For example, when the user triggers the first switching control 1102 in the preview interface shown in a in fig. 11 by clicking or touching, the terminal device receives an operation of switching the widget display style, switching from landscape to portrait. If the terminal device loses tracking target when switching the small window display style, the terminal device enters a preview interface shown as d in fig. 11. The preview interface includes: preview area 1110, recording controls, tracking boxes, loss prompt 1111, and widget 1112. The preview area 1110 displays a preview screen. When the preview screen includes a person, a tracking frame is also displayed. When no person is included in the preview screen, the tracking frame is not displayed.
The widget 1112 displays a loss hint 1111. The loss prompt 1111 is used to prompt the user to reset the tracking target. The display style of the small window 1112 changes compared to the small window 1101 shown in a in fig. 11. The widget 1101 is shown in landscape orientation and widget 1112 is shown in portrait orientation.
Optionally, a loss hint 1111 may also be displayed in the preview area 1110.
In a possible implementation, the widget display style change may also be controlled by a widget display style icon. Specific implementation processes may refer to the above related descriptions, and are not repeated here.
The above fig. 8 to 11 illustrate the related prompts of the principal angle mode in the preview scene; the following describes the related prompts of the principal angle mode in the recording scene with reference to fig. 12 to 16.
It can be understood that the interfaces in the preview scene all include recording controls, and when the terminal device receives a triggering operation for the recording controls on the preview interface, the terminal device enters the recording interface. Specifically, the interfaces in the preview scene described above may be classified into a preview interface (e.g., the preview interface shown in fig. 10) in which a tracking prompt is displayed and a preview interface (e.g., the preview interface shown in fig. 11) in which a small window is displayed.
The terminal device can start recording from the preview interface displaying the tracking prompt, and can also start recording from the preview interface displaying the small window.
In some embodiments, in the principal angle mode, when the terminal device starts recording from the preview interface displaying the tracking prompt and the recording screen of the terminal device does not include a person, the terminal device may not display the tracking prompt. When the terminal device starts recording from the preview interface displaying the tracking prompt and the recording screen of the terminal device includes a person, the terminal device may display the tracking prompt.
Fig. 12 is a schematic diagram of a recording interface according to an embodiment of the present application. When the terminal device receives the operation of recording the video by the user at the interface shown as a in fig. 10, the terminal device may enter a recording interface shown as a in fig. 12, where the recording interface includes: recording area 1201, pause control 1202, and end control 1203. Recording area 1201 displays a recording screen and a recording time period, and does not display a tracking prompt.
For example, when the terminal device receives an operation to start recording at the interface shown in b in fig. 10, the terminal device may enter the recording interface shown in b in fig. 12. The recording interface includes a recording area, a tracking box 1204, and a tracking prompt 1205.
The tracking prompt 1205 is used to prompt the user to set a tracking target. The embodiment of the application does not limit the display position, the display form and the specific content of the tracking prompt.
When the terminal device receives the operation of setting the tracking target, the tracking prompt 1205 disappears, or the tracking prompt 1205 may display the second preset time period and then disappear. The second preset duration may be 3s or 5s, and the specific value of the second preset duration is not limited in this embodiment of the present application.
When the user triggers the tracking frame 1204 through clicking or touching the recording interface shown in b in fig. 12, the terminal device receives the operation of setting the tracking target, and enters the recording interface shown in c in fig. 12. The recording interface includes a small window 1206 that does not display tracking cues. Widget 1206 includes a widget recording duration and a widget ending control.
On the basis of the above embodiment, when the recording area does not receive the user operation within the third preset duration, the tracking frame is hidden. And/or the small window area does not receive the user operation within the fourth preset time period, and the small window ending control is hidden. Therefore, shielding of recorded pictures or tracking pictures can be reduced, and user experience is improved.
The third preset duration may be 3s or 5s, and the specific value of the third preset duration is not limited in this embodiment of the present application. The fourth preset duration may be 3s or 4s, and the specific value of the fourth preset duration is not limited in this embodiment of the present application.
On the basis of the embodiment, when the recording area does not receive the user operation within the third preset duration, the tracking frame is hidden, and the tracking frame display prompt appears.
For example, when the terminal device is in the recording interface shown in b in fig. 12 and the recording area receives no user operation within the third preset duration, the terminal device enters the recording interface shown in a in fig. 13. The recording area of the recording interface does not display a tracking frame, and a tracking frame display prompt 1301 is displayed. The tracking frame display prompt is used to tell the user how to bring back the tracking frame. The tracking frame display prompt may be "click the recording area to display the tracking frame". The embodiment of the application does not limit the display position, display form, specific content, and the like of the tracking frame display prompt.
For example, when the terminal device is in the recording interface shown in c in fig. 12, the recording area does not receive the user operation within the third preset duration, and the terminal device enters the recording interface shown in b in fig. 13. The recording area in the recording interface does not display a tracking frame, and a tracking frame display prompt 1302 is displayed.
In a possible implementation manner, in the recording interface shown in b in fig. 13, the end control 1303 in the widget may be hidden and not displayed.
In a possible implementation manner, the tracking frame display prompt disappears after the fifth preset duration is displayed, or when the terminal device receives the user operation in the recording area, the tracking frame is displayed, and the tracking frame display prompt disappears. The fifth preset duration may be 3s or 5s, and the specific value of the fifth preset duration is not limited in this embodiment of the present application.
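The hide-after-inactivity behavior of the tracking frame (third preset duration) and of the widget end control (fourth preset duration) can be sketched with one small timer helper. The class name and interface are illustrative assumptions:

```python
class AutoHide:
    """Hide an overlay (tracking frame / widget end control) when no user
    operation arrives within `timeout_s` seconds; any touch shows it again."""

    def __init__(self, timeout_s=3.0):
        self.timeout_s = timeout_s
        self._last_op_s = 0.0  # time of the most recent user operation

    def on_user_operation(self, now_s):
        self._last_op_s = now_s

    def visible(self, now_s):
        """The overlay stays visible only within the inactivity window."""
        return now_s - self._last_op_s < self.timeout_s
```

Two instances with different timeouts (e.g. 3 s for the tracking frame, 4 s for the widget end control) would implement the "and/or" behavior described above independently.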
On the basis of the above embodiments, the terminal device may lose the tracking target during recording for various reasons. The various reasons include, but are not limited to: movement of the tracking target, movement of the terminal device, etc. The terminal device is therefore also provided with a loss prompt to inform the user that the tracking target is lost.
The loss prompt is described below with reference to fig. 14 and 15.
It can be understood that, due to the movement of the person or the movement of the terminal device, the situation that the tracking target is lost may occur in the recording screen of the terminal device.
Illustratively, when the terminal device loses the tracking target, the terminal device enters the recording interface shown as a in fig. 14 from the recording interface shown as b in fig. 13. The recording interface includes: a recording area 1401, a widget 1402, and a loss prompt 1403. The widget 1402 displays the loss prompt 1403. The loss prompt 1403 is used to inform the user that the tracking target is lost. The content of the loss prompt 1403 may be "target lost". Alternatively, the loss prompt 1403 may be displayed in the recording area 1401. The embodiment of the present application does not limit the display position, display form, and specific content of the loss prompt 1403.
On the basis of the above embodiment, the loss prompt may also be used for prompting how the focus-tracking video will be handled. Illustratively, when the terminal device loses the tracking target, the terminal device enters the recording interface shown in b in fig. 14 from the recording interface shown in b in fig. 13. The recording interface shown in b in fig. 14 includes: a recording area 1404, a widget 1405, and a loss prompt 1406. The recording area 1404 displays the loss prompt 1406. The content of the loss prompt 1406 may be "target lost, the focus-tracking video pauses after 5 seconds". Optionally, the loss prompt 1406 may also be displayed in the widget 1405. The display position, display form, and specific content of the loss prompt 1406 in the recording area 1404 are not limited in the embodiments of the present application.
It will be appreciated that there are various ways in which the terminal device can proceed when the tracking target is lost. The various ways include, but are not limited to: continuing to record the focus-tracking video; pausing the recording of the focus-tracking video; pausing the recording of the focus-tracking video when the tracking target is not retrieved within a sixth preset duration; and ending the recording of the focus-tracking video when the tracking target is not retrieved within a seventh preset duration.
Correspondingly, the widget may continue to display part of the content of the recording picture; or may display a frame of picture captured before the tracking target was lost; or may continue to display part of the content of the recording picture within the sixth preset duration and, when the sixth preset duration is reached, display the frame corresponding to that moment; or may continue to display part of the content of the recording picture within the seventh preset duration, the widget disappearing when the seventh preset duration is reached.
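The correspondence between the four processing options and the widget display can be sketched as a simple mapping. The policy names and return labels are hypothetical, chosen only to mirror the four options listed above:

```python
def widget_state(policy, lost_for_s, t6=5.0, t7=5.0):
    """Map a loss-handling policy and the time elapsed since the tracking
    target was lost to what the small window should display.
    Policies: 'continue', 'pause', 'pause_after_t6', 'end_after_t7'."""
    if policy == "continue":
        return "show_live_crop"      # keep showing part of the recording picture
    if policy == "pause":
        return "freeze_last_frame"   # show the frame captured before the loss
    if policy == "pause_after_t6":
        return "show_live_crop" if lost_for_s < t6 else "freeze_last_frame"
    if policy == "end_after_t7":
        return "show_live_crop" if lost_for_s < t7 else "close_widget"
    raise ValueError(f"unknown policy: {policy}")
```

The sixth and seventh preset durations (`t6`, `t7`) default to 5 s here, one of the example values given in the text.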
In a possible implementation manner, when the tracking target is lost, the widget may be overlaid with a mask layer, presenting a masked state darker than the recording picture of the large window, so as to remind the user that the widget recording is abnormal. Illustratively, the widget 1405 in the interface shown as b in fig. 14 is shown in a masked state. The embodiment of the application does not limit the specific content displayed by the widget, the state of the widget, and the like.
It may be understood that the sixth preset duration may be 3s or 5s, and the specific value of the sixth preset duration is not limited in this embodiment of the present application. The seventh preset duration may be 3s or 5s, and the specific value of the seventh preset duration is not limited in this embodiment of the present application.
In a possible implementation manner, the missing prompt can automatically disappear after the eighth preset time period is displayed; or when the terminal equipment receives the operation of changing the tracking target, the losing prompt disappears; or when the terminal equipment only pauses the focus tracking video recording, the loss prompt disappears; or when the terminal equipment retrieves the tracking target, the loss prompt disappears.
It may be understood that the eighth preset duration may be 3s or 5s, and the specific value of the eighth preset duration is not limited in this embodiment of the present application.
In some embodiments, if the terminal device does not find the tracking target within the sixth preset time period, the terminal device pauses recording the tracking video. For example, when the terminal device does not retrieve the tracking target within the sixth preset time period, the terminal device enters the recording interface shown by d in fig. 14 from the recording interface shown by b in fig. 14. The small window in the recording interface pauses recording and does not display a loss prompt. The embodiment of the application does not specifically limit the display content of the small window when recording is stopped.
It can be understood that when the terminal device retrieves the tracking target, the terminal device continues recording the focus-tracking video, and the widget continues displaying the tracking picture. Alternatively, when the terminal device receives an operation of switching the tracking target, it replaces the tracking target and continues recording the focus-tracking video.
In some embodiments, after the tracking target is lost, the widget may continue recording the picture of the position of the tracking target before the tracking target is lost, where the recorded picture may be an empty mirror that does not include the tracking target, and the recording time may be a sixth preset duration.
When the tracking target is retrieved and recording is restored, the terminal equipment can clip the video before the tracking target is lost, the empty mirror video and the video after the tracking target is retrieved. In one possible implementation, the terminal device may delete the empty mirror, and splice the video before the tracking target is lost and the video after the tracking target is retrieved to synthesize a path of video. In another possible implementation, the terminal device can perform blurring, soft focus, addition of a cover layer and the like on the empty mirror video, so that the influence of the empty mirror video on the continuity of the whole video is reduced, and the experience of a subsequent user for viewing the recorded video is improved.
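The clipping of the video before the loss, the empty-mirror segment, and the video after the target is retrieved can be sketched as follows. The segment representation (a list of tagged durations) is a simplifying assumption for illustration:

```python
def assemble_focus_video(segments, drop_empty=True):
    """Assemble the focus-tracking video from recorded segments, each a
    (tag, duration_s) pair with tag in {'before', 'empty', 'after'}.
    With drop_empty=True the empty-mirror segment is deleted and the
    remaining parts are spliced into one video; otherwise it is kept
    (to be blurred / soft-focused / masked downstream)."""
    if drop_empty:
        segments = [seg for seg in segments if seg[0] != "empty"]
    total = sum(duration for _, duration in segments)
    return segments, total
```

Keeping the empty-mirror segment but post-processing it (the second implementation above) preserves the real timeline of the recording at the cost of a visibly degraded passage; dropping it yields a shorter but seamless clip.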
In a possible implementation manner, after the tracking target is lost, if the recording picture still includes other persons, the recording area also displays tracking frames. Illustratively, the recording interface shown at b in fig. 14 also includes a tracking frame 1407. When the user triggers the tracking frame 1407 by clicking, touching, or the like on the recording interface shown in b in fig. 14, the terminal device receives the operation of changing the tracking target, cancels the loss prompt, and enters the recording interface shown in c in fig. 14. The recording interface includes a recording area, a pause control, an end control, and a widget. The widget displays the tracking picture of the person corresponding to the tracking frame 1407.
In a possible implementation manner, when the small window pauses recording the focus tracking video, the terminal device displays a pause prompt.
For example, when the terminal device does not retrieve the tracking target within the sixth preset duration, the terminal device enters the recording interface shown by a in fig. 15 from the recording interface shown by b in fig. 14. The recording interface includes: a recording area 1501, a pause control, an end control, a pause prompt 1502, and a widget 1503. The recording area displays the recording picture. The pause prompt 1502 is displayed in the widget 1503. The pause prompt 1502 is used to inform the user that recording of the focus-tracking video is paused. The content of the pause prompt may be "recording paused" or "focus-tracking video paused". The widget 1503 includes a widget end control 1504.
It will be appreciated that the pause prompt may be displayed in the small window (as shown in a in fig. 15) or in the recording area (as shown in b in fig. 15). The display position and content of the pause prompt are not particularly limited in the embodiments of the application.
It should be noted that the recording interfaces of the terminal device all include a pause control. When the terminal device receives a trigger operation on the pause control, the terminal device pauses recording the video, and the small window pauses recording the focus-tracking video.
When the terminal device pauses recording, a preview picture is displayed in the recording area of the recording interface. When the tracking target is not detected in the real-time picture, a loss prompt is displayed.
For example, when the user triggers the pause control 1601 by clicking, touching, or a similar operation on the recording interface shown in a in fig. 16, the terminal device receives the operation to pause recording and enters the paused-recording interface shown in b in fig. 16. The paused-recording interface includes: a recording area, a start control 1602, an end control, and a small window 1603. The recording area displays the recording picture and the recording duration. The small window 1603 displays the tracking picture, together with a small-window end control and the small-window recording duration. While recording is paused, neither the recording duration displayed in the recording area nor the small-window recording duration changes, and each may be displayed as a combination of "|" and the time.
It will be appreciated that, when the terminal device pauses recording, the recording area may display the last recorded frame before the pause, or may display the captured picture in real time. Similarly, the small window may display the last tracking frame before the pause, may change to a covered state, or may display the tracking picture in real time. The embodiments of the application do not limit what the recording area and the small window display while recording is paused.
It will be appreciated that frames displayed in the recording area while video recording is paused are not saved into the video, and tracking frames displayed in the small window while recording is paused are not saved into the focus-tracking video. The video after the pause and the video before the pause in the recording area belong to the same video, and the focus-tracking video after the pause and before the pause belong to the same focus-tracking video. For example, if the user clicks the pause control when the recording duration is 4 s, the terminal device pauses video recording in response to the click, with 4 s of video recorded. After a period of time, when the terminal device receives a click operation on the start control, it resumes recording from the 5th second on the basis of the 4 s of video, and the recording duration changes accordingly.
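The pause/resume bookkeeping above can be sketched as a small state machine. This is a minimal illustration under simplifying assumptions (one frame per second, frames modeled as strings, the `FocusTrackRecorder` name invented for illustration), not the device's actual recording pipeline:

```python
class FocusTrackRecorder:
    """Sketch of pause/resume bookkeeping: frames captured while paused
    are discarded, and recording resumes into the same video."""

    def __init__(self) -> None:
        self.frames = []       # frames saved into the single output video
        self.recorded_s = 0    # recording duration shown in the interface
        self.paused = False

    def on_frame(self, frame: str) -> None:
        if not self.paused:    # frames arriving during a pause are not saved
            self.frames.append(frame)
            self.recorded_s += 1  # assume 1 frame per second for simplicity

    def pause(self) -> None:
        self.paused = True     # duration display freezes, e.g. "| 00:04"

    def resume(self) -> None:
        self.paused = False    # continue at the 5th second after 4 s
```

Pausing and resuming thus yields one continuous video whose duration picks up where it left off, matching the 4 s / 5th-second example in the text.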
In some embodiments, when the terminal device pauses recording and there is no tracking target in the recording picture, the terminal device enters the recording interface shown in c in fig. 16. The recording interface includes a recording area 1604, a start control 1605, an end control, and a small window 1606. A loss prompt 1607 is displayed in the small window 1606. The small window 1606 may be in a covered state to indicate that the tracking target is lost. The state of the small window 1606 and the specific content it displays are not limited herein.
Optionally, the loss prompt 1607 is displayed in the recording area 1604.
For the effect of displaying the loss prompt in different areas, refer to the description of the loss prompt in the preview scene, which is not repeated herein.
On the basis of the above embodiments, when the terminal device receives an operation of adjusting shooting parameters from the user, the terminal device may also display a prompt about the adjusted shooting parameters, for example, a prompt indicating whether the exposure is locked.
In the above embodiments, the terminal device additionally generates one or more focus-tracking videos with a person as the tracking target. In a possible implementation, the terminal device may additionally generate one or more focus-tracking videos with a pet (e.g., a cat or dog) or a preset object (e.g., a vehicle) as the tracking target; for the specific method, refer to the content of the above embodiments, which is not repeated herein.
It will be appreciated that the interfaces of the terminal device described above are merely examples, and an interface of the terminal device may include more or fewer elements. Further, the shape and form of each control in the above embodiments are merely examples. The embodiments of the application do not limit the content of the display interfaces or the shape and form of each control.
On the basis of the above embodiments, an embodiment of the present application provides a video recording method. Fig. 17 is a schematic flowchart of a video recording method according to an embodiment of the present application.
As shown in fig. 17, the video recording method may include the following steps:
S1701, the terminal device displays a first interface of the camera application.
In the embodiment of the application, the first interface comprises a first window and a tracking prompt; the first window displays a first picture acquired by the first camera, and the tracking prompt is used for prompting a user to set a tracking target.
The first window may be the preview area or the recording area above. The first interface may be an interface before the tracking target is set, and may correspond to a preview interface without the tracking target set, or may correspond to a recording interface without the tracking target set.
S1702, at a first moment, when the terminal device detects that the first screen includes the first object, displaying a first tracking identifier corresponding to the first object on the first screen.
The tracking identifier may be the tracking box above, or may be in other forms (e.g., thumbnail images, numbers, icons, etc. of the object), which are not limited herein. The tracking mark may be located at the position where the object is displayed, or may be displayed in a line at the edge of the preview area or the recording area, which is not limited herein.
The first tracking identifier may be a tracking identifier corresponding to any one of the objects, for example, a tracking frame corresponding to a male character.
The interface displayed at the first time may correspond to the interface shown in fig. 10, or may correspond to the interface shown in fig. 12 b, for example.
S1703, at a second moment, in response to a trigger operation of the user on the first tracking identifier, the terminal device displays a second interface of the camera application, where the second interface includes the first window and a second window and does not include the tracking prompt; the second window displays a second picture, and the second picture is a part of the first picture related to the first object.
The triggering operation of the first tracking identifier may be a clicking operation or a touching operation, and the embodiment of the present application does not limit the type of the triggering operation. The location, content, etc. of the tracking hint may be referred to the description of the tracking hint, which is not described herein.
It will be appreciated that, in response to the user's trigger operation on the first tracking identifier, the tracking target is set and an interface with a small window is displayed. The part of the first picture related to the first object may be understood as the tracking picture above. Before the tracking target is set, the terminal device does not display the small window; after the tracking target is set, the terminal device displays the small window.
The second interface may be an interface of the set tracking target, and may correspond to a preview interface of the set tracking target, or may correspond to a recording interface of the set tracking target. The second interface may also be understood as an interface displaying a small window, and may correspond to a preview interface displaying a small window, or may correspond to a recording interface displaying a small window.
The interface displayed at the second time may, for example, correspond to the interface shown in fig. 11 a or may correspond to the interface shown in fig. 12 b.
S1704, at a third time, when the terminal device detects that the first object is displayed at the first position of the first screen, the second screen includes the first object.
S1705, at a fourth moment, when the terminal device detects that the first object is displayed at the second position of the first screen, the second screen comprises the first object; wherein the second time is later than the first time, the third time is later than the second time, and the fourth time is later than the third time.
In the embodiments of the application, the picture (tracking picture) displayed in the second window changes as the position of the focus-tracking target changes. Specifically, as the position of the tracking target in the first picture changes, the picture displayed in the second window changes accordingly; refer to fig. 3B or fig. 3C. For example, the interface at the third moment may be the interface shown in a in fig. 3B, and the interface at the fourth moment may be the interface shown in b in fig. 3B; alternatively, the interface at the third moment may be the interface shown in a in fig. 3C, and the interface at the fourth moment may be the interface shown in b in fig. 3C.
In summary, the terminal device may display a tracking prompt to prompt the user to set a tracking target. The tracking target can be set based on a tracking identifier, and after the tracking target is set, the terminal device can additionally obtain and display a picture corresponding to the tracking target. In this way, one or more focus-tracking videos corresponding to the tracking target are additionally obtained while the video is recorded, which reduces subsequent editing operations on the tracking target and improves editing efficiency.
Optionally, the first object is centrally displayed in the second screen.
Optionally, the second window floats on an upper layer of the first window, and the second window is smaller than the first window.
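The centered tracking picture described above can be sketched as a crop-window computation over the first picture: the crop follows the tracked object's center but is clamped so it never leaves the full frame. A minimal illustration assuming pixel coordinates and an invented `tracking_crop` helper (not part of the embodiments):

```python
def tracking_crop(frame_w: int, frame_h: int,
                  cx: int, cy: int,
                  crop_w: int, crop_h: int) -> tuple:
    """Return the (left, top) corner of a crop window of size
    crop_w x crop_h that keeps the tracked object (centered at cx, cy)
    in the middle of the second picture, clamped to the frame bounds."""
    left = min(max(cx - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(cy - crop_h // 2, 0), frame_h - crop_h)
    return left, top
```

Near the frame edges the object cannot stay exactly centered, since the crop window stops at the boundary of the first picture; elsewhere it remains centered as the target moves.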
Optionally, the method further comprises: at a fifth moment, when the terminal equipment detects that the first picture does not comprise the first object, the second picture does not comprise the first object, a first loss prompt is displayed on the first window and/or the second window, and the first loss prompt is used for prompting that the first object is lost; the fifth moment is later than the fourth moment.
Illustratively, the interface displayed at the fifth time may correspond to the interface shown in fig. 11 b, or the interface shown in fig. 11 c; and may also correspond to the interface shown in fig. 14 a. The location, content, etc. of the first missing cue may be referred to the description related to the missing cue, which is not described herein.
Therefore, when the first object is lost, the terminal equipment can display a loss prompt to prompt the user that the first object is lost, so that the attention of the user is improved, and the user experience is improved.
It can be appreciated that, when the loss prompt is displayed in the small-window area, the situation where the small window blocks the loss prompt can be reduced, and blocking of the preview picture is also reduced. Moreover, the loss prompt can be made clearer and easier for the user to understand: a loss prompt displayed in the small window is more easily understood by the user as the loss of the tracking target in the small window. In addition, displaying the loss-prompt text in the small window can draw the user's attention to the small window.
When the loss prompt is displayed in the recording area, the terminal device may display more text to prompt the user about subsequent processing, and may display text in a larger font size, making it easier for the user to read.
Optionally, after the fifth moment, the method further includes: at a sixth moment, when the terminal device detects that a third position of the first picture includes the first object, the second picture includes the first object again, and the second picture is a part of the first picture; the first window and/or the second window no longer displays the first loss prompt, and the sixth moment falls within a first duration starting from the fifth moment. At a seventh moment, when the terminal device detects that a fourth position of the first picture includes the first object, the second picture includes the first object, and the second picture is a part of the first picture; the sixth moment is later than the fifth moment, and the seventh moment is later than the sixth moment.
It is understood that the first duration may correspond to the sixth preset duration or the seventh preset duration above, which is not described again herein. When the terminal device retrieves the tracking target, it continues focus tracking and recording the tracking video.
Thus, when the terminal device retrieves the tracking target, it can continue to track the target and record the focus-tracking video.
Optionally, the method further includes: when the first duration has elapsed from the fifth moment and the terminal device still has not detected the first object in the first picture, the terminal device cancels the tracking target and displays the first interface.
The first duration may correspond to the sixth preset duration above, and will not be described herein. The interface corresponding to the fifth time may be the interface shown in b of fig. 11, or the interface shown in c of fig. 11, and the interface after the fifth time starts to reach the first time period may correspond to the interface shown in fig. 10.
In this way, the terminal device can cancel tracking when the tracking target is not retrieved for a long period of time.
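The loss/retrieval behavior above can be sketched as a small state machine: if the target reappears within the first duration, tracking continues; otherwise the tracking target is cancelled and the device returns to the first (target-selection) interface. This is an illustrative sketch only; the `TrackerState` name, the one-tick-per-second timing, and the returned state strings are all invented for the example:

```python
class TrackerState:
    """Loss-timeout sketch: `timeout_s` stands in for the first duration
    (i.e., the sixth preset duration) in the embodiments."""

    def __init__(self, timeout_s: int = 5) -> None:
        self.timeout_s = timeout_s
        self.lost_for = 0       # consecutive ticks without the target
        self.tracking = True

    def tick(self, target_visible: bool) -> str:
        if not self.tracking:
            return "selection_interface"   # tracking already cancelled
        if target_visible:
            self.lost_for = 0              # target retrieved: keep tracking
            return "tracking"
        self.lost_for += 1
        if self.lost_for >= self.timeout_s:
            self.tracking = False          # cancel the tracking target
            return "selection_interface"   # back to the first interface
        return "loss_prompt"               # show the loss prompt meanwhile
```

While the target is merely missing (before the timeout), the machine stays in the loss-prompt state, matching the loss-prompt display described in the preceding paragraphs.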
Optionally, at an eighth moment, the terminal device starts recording of the first window in response to a start-recording operation input by the user. When the eighth moment is earlier than the second moment, the terminal device starts recording of the second window at the second moment; when the eighth moment is later than the second moment, the terminal device starts recording of the second window at the eighth moment. When the first duration has elapsed from the fifth moment and the terminal device still has not detected the first object in the first picture, recording of the second window is paused, and a pause prompt is displayed in the first window and/or the second window, where the pause prompt is used to prompt the user that recording of the second window is paused and the first loss prompt is not displayed; the fifth moment is later than the eighth moment.
The recording starting operation input by the user may be an operation of clicking a recording control by the user, or may be other operations, which is not limited in the embodiment of the present application. The location, content, etc. of the pause prompt may be referred to in the description of the pause prompt, and will not be described herein.
The interface displayed at the eighth moment may correspond to the interface shown in b in fig. 6, and the interface displayed at the second moment may correspond to the interface shown in c in fig. 6; alternatively, the interface displayed at the second moment may correspond to the interface shown in b in fig. 7, and the interface displayed at the eighth moment may correspond to the interface shown in c in fig. 7.
The interface displayed at the fifth moment may correspond to the interface shown in a in fig. 14; the interface displayed after the first duration has elapsed from the fifth moment may correspond to the interface shown in d in fig. 14.
Therefore, the terminal device can pause recording of the second window after the first object has been lost for a period of time, providing multiple processing modes and improving user experience. The terminal device may also display a pause prompt to prompt the user that recording in the small window is paused in this abnormal scene.
It can be appreciated that, when the pause prompt is displayed in the small-window area, the situation where the small window blocks the prompt can be reduced, and blocking of the preview picture is also reduced. In addition, the pause prompt can be made clearer and easier for the user to understand: a pause prompt displayed in the small window is more easily understood by the user as pausing of the recording in the small window. In addition, displaying the pause-prompt text in the small window can draw the user's attention to the small window.
When the pause prompt is displayed in the recording area, the terminal device may display more text to prompt the user about subsequent processing, and may display text in a larger font size, making it easier for the user to read.
Optionally, the first window further displays recording duration information of the first window, the second window further displays recording duration information of the second window, and when the second window pauses recording, the recording duration information of the first window is continuously updated, and the recording duration information of the second window pauses updating.
The recording duration information of the first window may be the recording duration above; the recording duration information of the second window may be the small window recording duration above.
It will be appreciated that after the first object is lost, the widget pauses but the recording area may continue recording video.
Optionally, the first picture further includes a second object and a tracking identifier associated with the second object, and the method further includes: at a ninth moment, the terminal device detects a trigger operation on the tracking identifier associated with the second object, and the second window switches to include the second object; at a tenth moment, when the terminal device detects that a fifth position of the first picture includes the second object, the second window displays the second object, and the first window and/or the second window displays neither the first loss prompt nor the pause prompt.
The triggering operation of the tracking identifier associated with the second object may be a clicking operation or other operations, which is not limited in the embodiment of the present application.
It can be understood that, when the terminal device receives an operation of clicking the tracking frame corresponding to the second object, the object corresponding to the tracking target can be replaced. Illustratively, the terminal device may receive an operation of clicking the tracking frame 1407 on the interface shown in b in fig. 14 and enter the interface shown in c in fig. 14.
Thus, the terminal device can also switch the tracking target based on a tracking identifier, supporting switching of the tracking target and improving user experience. Furthermore, the tracking target may be switched in a loss scenario.
Optionally, the method further includes: the terminal device detects a pause-recording operation in the first window; in response to the pause-recording operation, the terminal device pauses recording of the first window and the second window, displays an identifier of paused recording and the first picture acquired by the first camera in the first window, and displays the first object and an identifier of paused recording in the second window.
The pause-recording operation may be an operation in which the user clicks the pause control, or another operation, which is not limited herein. The identifier of paused recording may be "|" or another identifier, and its form is not limited herein.
For example, the terminal device may receive an operation of clicking the pause control 1601 on the interface shown in a in fig. 16, enter the interface shown in b in fig. 16, and pause recording in both the recording area and the small window.
In this way, the terminal device also supports pausing recording.
Optionally, after the terminal device pauses the recording of the first window and the second window, the method further includes: at the eleventh moment, when the terminal device detects that the first window does not comprise the first object, the second window does not comprise the first object, and the first window and/or the second window display the first loss prompt.
The interface displayed at the eleventh time may correspond to the interface shown as c in fig. 16.
It will be appreciated that a loss prompt may be displayed when the tracking target is lost after the terminal device pauses recording. Therefore, the user can be prompted to trace the target to be lost, the user can know the information of the small window conveniently, and the attention of the user to the abnormal scene is improved.
Optionally, the first window includes a first switching control and/or the second window includes a second switching control, and the method further includes: at a twelfth moment, when the second window is in a landscape display state, in response to a trigger operation on the first switching control or the second switching control, the terminal device switches the second window to a portrait display state;
or, at the twelfth moment, when the second window is in the portrait display state, in response to a trigger operation on the first switching control or the second switching control, the terminal device switches the second window to the landscape display state;
after the terminal device completes the switching of the landscape/portrait display state of the second window, if the second window does not include the first object, the terminal device displays the first tracking identifier in the first window, and the first window and/or the second window displays a second loss prompt, where the second loss prompt is used to prompt the user to reset the tracking target;
and in response to a trigger operation on the first tracking identifier, the terminal device displays the first object in the second window.
The interface displayed at the twelfth time may correspond to the interface shown as d in fig. 11. The second loss hint may be the loss hint 1111 in fig. 11.
It will be appreciated that the tracking target may be lost when the terminal device switches the small-window style. Displaying the loss prompt can prompt the user to reset the tracking target, drawing the user's attention to the abnormal scene.
Optionally, at the first moment, the first window further displays a small-window style switching prompt; when the terminal device detects a trigger operation on the first window, the first window no longer displays the small-window style switching prompt.
The interface displayed at the first time may correspond to the interface shown as a in fig. 10.
Thus, the terminal equipment can prompt the user to set the small window style, and the user experience is improved.
Optionally, before the terminal device displays the first interface, the terminal device displays a third interface, where the third interface includes a first window, where the first window displays a first screen and an entry prompt, and the entry prompt is used to prompt a user to enter the first interface.
The third interface may be an interface in the video recording mode; the third interface may correspond to the interface shown in fig. 8 or may correspond to the interface shown in b in fig. 9.
The display position, content, etc. of the entry prompt may be referred to the related description above, and will not be described herein.
In this way, the terminal device can display a prompt for the entrance to the protagonist mode, prompting the user about the new function; this reduces the situation where the user is unaware of the new function and improves user experience. After entering the protagonist mode, the terminal device no longer displays the entry prompt.
Optionally, the method further includes: the terminal device receives an operation of cancelling the entry prompt from the user; in response to the operation of cancelling the entry prompt, the terminal device no longer displays the entry prompt on the third interface.
The operation of cancelling the entry prompt may be an operation of clicking a preview area in the video mode by the user.
Therefore, the terminal equipment can cancel the entry prompt, reduce shielding of the preview area and improve user experience.
Optionally, when the terminal device displays the third interface for the first N times, displaying an entry prompt on the third interface, where N is an integer greater than zero.
The terminal device can limit the number of times the entry prompt is displayed; after the prompt has been displayed N times, it is no longer displayed.
Optionally, the terminal device does not display the entry prompt within the first duration after displaying the entry prompt for the Nth time.
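The first-N-times display policy above can be sketched as a simple counter. This is a minimal illustration under stated assumptions: the `EntryPromptPolicy` class name and its methods are invented for the example, and entering the mode is modeled as permanently suppressing the prompt, as the embodiments describe:

```python
class EntryPromptPolicy:
    """Show the entry prompt only the first N times the interface is
    displayed, and stop once the user has entered the mode."""

    def __init__(self, max_shows: int = 3) -> None:
        self.max_shows = max_shows  # stands in for N in the embodiments
        self.shows = 0
        self.mode_entered = False

    def should_show(self) -> bool:
        # No prompt after the mode was entered or after N displays.
        if self.mode_entered or self.shows >= self.max_shows:
            return False
        self.shows += 1
        return True

    def on_mode_entered(self) -> None:
        self.mode_entered = True
```

A time-based suppression window after the Nth display (the "first duration" above) could be added as a timestamp check in `should_show`, but is omitted here for brevity.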
Optionally, when the terminal device detects that the first screen displayed on the third interface includes a plurality of objects, the first window displays an entry prompt.
Illustratively, as shown in FIG. 9, when no object is identified, no entry hint is displayed; upon identifying two objects, an entry hint is displayed.
Optionally, when the terminal device displays the third interface again after the terminal device displays the first interface, the third interface does not include the entry prompt.
The video recording method according to the embodiments of the present application has been described above; an apparatus for performing the method according to the embodiments of the present application is described below. Fig. 18 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application; the video recording apparatus may be the terminal device in the embodiments of the present application, or a chip or chip system in the terminal device.
As shown in fig. 18, the video recording apparatus 2100 may be used in a communication device, a circuit, a hardware component, or a chip, and includes a display unit 2101 and a processing unit 2102. The display unit 2101 is configured to support the display steps performed by the video recording apparatus 2100; the processing unit 2102 is configured to support the information-processing steps performed by the video recording apparatus 2100.
In a possible implementation, the video recording apparatus 2100 may also include a communication unit 2103, configured to support the video recording apparatus 2100 in performing the steps of sending and receiving data. The communication unit 2103 may be an input or output interface, a pin, a circuit, or the like.
In a possible embodiment, the video recording apparatus may further include: a storage unit 2104. The processing unit 2102 and the storage unit 2104 are connected by a line. The memory unit 2104 may include one or more memories, which may be one or more devices, circuits, or means for storing programs or data. The storage unit 2104 may exist independently and is connected to the processing unit 2102 provided in the video recording apparatus via a communication line. The memory unit 2104 may also be integrated with the processing unit 2102.
The storage unit 2104 may store computer-executable instructions of the method in the terminal device, so that the processing unit 2102 executes the method in the above embodiments. The storage unit 2104 may be a register, a cache, a random access memory (RAM), or the like, and may be integrated with the processing unit 2102. The storage unit 2104 may also be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, and may be independent of the processing unit 2102.
The embodiment of the present application provides a terminal device, which may also be called a terminal (terminal), a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with wireless transceiving function, a Virtual Reality (VR) terminal device, an augmented reality (augmented reality, AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in unmanned driving (self-driving), a wireless terminal in teleoperation (remote medical surgery), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), or the like.
The terminal device includes a processor and a memory. The memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory to cause the terminal device to perform the method described above.
The embodiment of the application provides a terminal device, and the structure is shown in fig. 1. The memory of the terminal device may be configured to store at least one program instruction, and the processor is configured to execute the at least one program instruction, so as to implement the technical solution of the foregoing method embodiment. The implementation principle and technical effects are similar to those of the related embodiments of the method, and are not repeated here.
The embodiment of the application provides a chip. The chip comprises a processor for invoking a computer program in a memory to perform the technical solutions in the above embodiments. The principle and technical effects of the present invention are similar to those of the above-described related embodiments, and will not be described in detail herein.
The embodiment of the application provides a computer program product, which enables a terminal device to execute the technical scheme in the embodiment when the computer program product runs on electronic equipment. The principle and technical effects of the present invention are similar to those of the above-described related embodiments, and will not be described in detail herein.
The embodiment of the application provides a computer readable storage medium, on which program instructions are stored, which when executed by a terminal device, cause the terminal device to execute the technical solution of the above embodiment. The principle and technical effects of the present invention are similar to those of the above-described related embodiments, and will not be described in detail herein. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
The computer-readable medium may include RAM, ROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (Digital Subscriber Line, DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (Digital Versatile Disc, DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general-purpose computer, special-purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, executed via the processing unit of the computer or other programmable apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The foregoing detailed description has been presented for purposes of illustration and description only, and is not intended to limit the scope of the invention.
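Purely as an illustration of the focus-tracking behavior described in the foregoing embodiments (and not part of the patented solution itself), the second picture can be thought of as a crop of the first picture that follows the tracked object. The sketch below shows one way such a crop could be computed; the frame size, crop size, and object coordinates are hypothetical values chosen for the example.

```python
def crop_around(frame_w, frame_h, obj_cx, obj_cy, crop_w, crop_h):
    """Return the (left, top, right, bottom) crop rectangle of size
    crop_w x crop_h centered on the tracked object, clamped so the
    rectangle never leaves the full frame."""
    # Center the crop on the object, then clamp to the frame bounds.
    left = min(max(obj_cx - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(obj_cy - crop_h // 2, 0), frame_h - crop_h)
    return (left, top, left + crop_w, top + crop_h)

# The object is first detected at one position and later at another
# (the third and fourth times of claim 1); in both cases the crop
# contains the object, so the second picture keeps showing it.
first = crop_around(1920, 1080, 400, 300, 640, 360)
second = crop_around(1920, 1080, 1500, 900, 640, 360)
```

Clamping at the frame edges is what lets the small window keep showing the object even when it moves into a corner of the first picture, where a perfectly centered crop would otherwise run outside the frame.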

Claims (21)

1. A video recording method, applied to a terminal device including a first camera, the method comprising:
the terminal device displays a first interface of a camera application, wherein the first interface comprises a first window and a tracking prompt; the first window displays a first picture acquired by the first camera, and the tracking prompt is used for prompting a user to set a tracking target;
at a first time, when the terminal device detects that the first picture comprises a first object, displaying a first tracking identifier corresponding to the first object on the first picture;
at a second time, in response to a triggering operation of the user on the first tracking identifier, the terminal device displays a second interface of the camera application, wherein the second interface comprises the first window and a second window, and does not comprise the tracking prompt; the second window displays a second picture, and the second picture is a part of the first picture related to the first object;
at a third time, when the terminal device detects that the first object is displayed at a first position of the first picture, the second picture comprises the first object;
at a fourth time, when the terminal device detects that the first object is displayed at a second position of the first picture, the second picture comprises the first object;
wherein the second time is later than the first time, the third time is later than the second time, and the fourth time is later than the third time.
2. The method according to claim 1, wherein the first object is displayed in the center of the second picture.
3. The method according to claim 1 or 2, wherein the second window floats on top of the first window and is smaller than the first window.
4. The method according to any one of claims 1 to 3, wherein,
at a fifth time, when the terminal device detects that the first picture does not comprise the first object, the second picture does not comprise the first object, and a first loss prompt is displayed in the first window and/or the second window, the first loss prompt being used for prompting that the first object is lost; the fifth time is later than the fourth time.
5. The method according to claim 4, wherein after the fifth time, the method further comprises:
at a sixth time, when the terminal device detects that a third position of the first picture comprises the first object, the second picture comprises the first object again, and the second picture is a part of the first picture; the first window and/or the second window does not display the first loss prompt, and the sixth time is a time within a first duration from the fifth time;
at a seventh time, when the terminal device detects that a fourth position of the first picture comprises the first object, the second picture comprises the first object, and the second picture is a part of the first picture;
wherein the sixth time is later than the fifth time, and the seventh time is later than the sixth time.
6. The method according to claim 4, wherein the method further comprises:
when the first duration from the fifth time has elapsed and the terminal device still does not detect the first object in the first picture, the terminal device cancels the tracking target and displays the first interface.
7. The method according to claim 4, wherein,
at an eighth time, the terminal device starts recording of the first window in response to a recording start operation input by the user; when the eighth time is earlier than the second time, the terminal device starts recording of the second window at the second time; or, when the eighth time is later than the second time, the terminal device starts recording of the second window at the eighth time;
when the first duration from the fifth time has elapsed and the terminal device still does not detect the first object in the first picture, recording of the second window is paused, and a pause prompt is displayed in the first window and/or the second window, the pause prompt being used for prompting the user that recording of the second window is paused; wherein the fifth time is later than the eighth time, and the first loss prompt is not displayed.
8. The method according to claim 7, wherein the first window further displays recording duration information of the first window, and the second window further displays recording duration information of the second window; when recording of the second window is paused, the recording duration information of the first window continues to be updated, while updating of the recording duration information of the second window is paused.
9. The method according to any one of claims 4 to 8, wherein the first picture further comprises a second object and a tracking identifier associated with the second object, the method further comprising:
at a ninth time, the terminal device detects a triggering operation on the tracking identifier associated with the second object, and the second window switches to display the second object;
at a tenth time, when the terminal device detects that a fifth position of the first picture comprises the second object, the second window displays the second object, and the first window and/or the second window does not display the first loss prompt or the pause prompt.
10. The method according to any one of claims 1 to 3, wherein the method further comprises:
the terminal device detects a pause recording operation in the first window;
in response to the pause recording operation, the terminal device pauses recording of the first window and the second window, displays an identifier of paused recording and the first picture acquired by the first camera in the first window, and displays the first object and the identifier of paused recording in the second window.
11. The method according to claim 10, wherein after the terminal device pauses recording of the first window and the second window, the method further comprises:
at an eleventh time, when the terminal device detects that the first window does not include the first object, the second window does not include the first object, and the first window and/or the second window displays a first loss prompt.
12. The method according to any one of claims 1 to 3, wherein a first switching control is included in the first window and/or a second switching control is included in the second window, the method further comprising:
at a twelfth time, when the second window is in a landscape display state, in response to a triggering operation on the first switching control or the second switching control, the terminal device switches the second window to a portrait display state;
or, at the twelfth time, when the second window is in a portrait display state, in response to a triggering operation on the first switching control or the second switching control, the terminal device switches the second window to a landscape display state;
after the terminal device completes switching of the landscape or portrait display state of the second window, if the second window does not comprise the first object, the terminal device displays the first tracking identifier in the first window, and the first window and/or the second window displays a second loss prompt, the second loss prompt being used for prompting the user to reset a tracking target;
in response to a triggering operation on the first tracking identifier, the terminal device displays the first object in the second window.
13. The method according to any one of claims 1 to 3, wherein, at the first time, the first window further displays a small-window style switching prompt;
and when the terminal device detects a triggering operation on the first window, the first window does not display the small-window style switching prompt.
14. The method according to any one of claims 1 to 13, wherein, before the terminal device displays the first interface,
the terminal device displays a third interface, wherein the third interface comprises the first window, the first window displays the first picture and an entry prompt, and the entry prompt is used for prompting the user to enter the first interface.
15. The method according to claim 14, wherein the method further comprises:
the terminal device receives an operation of cancelling the entry prompt from the user;
in response to the operation of cancelling the entry prompt, the terminal device does not display the entry prompt on the third interface.
16. The method according to claim 14 or 15, wherein,
when the terminal device displays the third interface for the first N times, the entry prompt is displayed on the third interface, wherein N is an integer greater than zero.
17. The method according to claim 14 or 15, wherein,
the terminal device does not display the entry prompt within a first time period after displaying the entry prompt for the Nth time.
18. The method according to claim 14 or 15, wherein,
when the terminal device detects that the first picture displayed on the third interface comprises a plurality of objects, the first window displays the entry prompt.
19. The method according to claim 14 or 15, wherein,
after the terminal device has displayed the first interface, when the terminal device displays the third interface again, the third interface does not comprise the entry prompt.
20. A terminal device, characterized in that the terminal device comprises a processor configured to invoke a computer program in a memory to perform the method according to any one of claims 1 to 19.
21. A computer readable storage medium storing computer instructions which, when run on a terminal device, cause the terminal device to perform the method of any of claims 1-19.
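Purely as an illustrative sketch (outside the claims, and not the patented implementation), the loss-handling behavior of claims 4 to 7 can be viewed as a small state machine: a loss prompt appears when the tracked object disappears from the first picture, the prompt clears if the object reappears within the "first duration", and recording of the second window is paused once that duration elapses. The timeout value and prompt strings below are hypothetical.

```python
class FocusTracker:
    """Minimal state sketch of the loss handling in claims 4-7.
    LOST_TIMEOUT (the 'first duration') and the prompt strings are
    hypothetical values, not taken from the patent."""
    LOST_TIMEOUT = 5.0  # seconds; stands in for the "first duration"

    def __init__(self):
        self.lost_since = None          # time the object was last seen leaving
        self.prompt = None              # prompt currently shown, if any
        self.small_window_recording = True

    def on_frame(self, now, object_detected):
        if object_detected:
            # Re-detection clears the prompt and resumes tracking (claim 5).
            self.lost_since = None
            self.prompt = None
            self.small_window_recording = True
        elif self.lost_since is None:
            # Object just lost: show the first loss prompt (claim 4).
            self.lost_since = now
            self.prompt = "target lost"
        elif now - self.lost_since >= self.LOST_TIMEOUT:
            # Lost for the whole first duration: pause recording of the
            # second window and show the pause prompt instead (claim 7).
            self.small_window_recording = False
            self.prompt = "small-window recording paused"
```

Feeding the tracker per-frame detection results drives the prompts and the second-window recording state without any other coordination between the two windows, which mirrors how the claims keep the first window recording uninterrupted while only the second window pauses.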
CN202210576792.1A 2022-05-25 2022-05-25 Video recording method and related device Active CN116112780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210576792.1A CN116112780B (en) 2022-05-25 2022-05-25 Video recording method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210576792.1A CN116112780B (en) 2022-05-25 2022-05-25 Video recording method and related device

Publications (2)

Publication Number Publication Date
CN116112780A true CN116112780A (en) 2023-05-12
CN116112780B CN116112780B (en) 2023-12-01

Family

ID=86260326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210576792.1A Active CN116112780B (en) 2022-05-25 2022-05-25 Video recording method and related device

Country Status (1)

Country Link
CN (1) CN116112780B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130121668A1 (en) * 2011-11-14 2013-05-16 Brian Meaney Media editing with multi-camera media clips
CN104394313A (en) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 Special effect video generating method and device
CN109831622A (en) * 2019-01-03 2019-05-31 华为技术有限公司 A kind of image pickup method and electronic equipment
CN111093026A (en) * 2019-12-30 2020-05-01 维沃移动通信(杭州)有限公司 Video processing method, electronic device and computer-readable storage medium
CN112135046A (en) * 2020-09-23 2020-12-25 维沃移动通信有限公司 Video shooting method, video shooting device and electronic equipment
CN112616023A (en) * 2020-12-22 2021-04-06 荆门汇易佳信息科技有限公司 Multi-camera video target tracking method in complex environment
CN112954219A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
CN113079342A (en) * 2020-01-03 2021-07-06 深圳市春盛海科技有限公司 Target tracking method and system based on high-resolution image device
CN113536866A (en) * 2020-04-22 2021-10-22 华为技术有限公司 Character tracking display method and electronic equipment
CN113727015A (en) * 2021-05-31 2021-11-30 荣耀终端有限公司 Video shooting method and electronic equipment
CN113742609A (en) * 2020-05-27 2021-12-03 聚好看科技股份有限公司 Display device and method for guiding voice search function
CN113766129A (en) * 2021-09-13 2021-12-07 维沃移动通信(杭州)有限公司 Video recording method, video recording device, electronic equipment and medium
CN114339076A (en) * 2021-12-21 2022-04-12 北京达佳互联信息技术有限公司 Video shooting method and device, electronic equipment and storage medium
WO2022095788A1 (en) * 2020-11-09 2022-05-12 华为技术有限公司 Panning photography method for target user, electronic device, and storage medium


Also Published As

Publication number Publication date
CN116112780B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN108089788B (en) Thumbnail display control method and mobile terminal
CN108108114B (en) A kind of thumbnail display control method and mobile terminal
JP2022532102A (en) Screenshot method and electronic device
WO2019072178A1 (en) Method for processing notification, and electronic device
JP7302038B2 (en) USER PROFILE PICTURE GENERATION METHOD AND ELECTRONIC DEVICE
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN111597000B (en) Small window management method and terminal
CN108055587A (en) Sharing method, device, mobile terminal and the storage medium of image file
CN112565911B (en) Bullet screen display method, bullet screen generation device, bullet screen equipment and storage medium
CN111596830A (en) Message reminding method and device
CN114185503B (en) Multi-screen interaction system, method, device and medium
US20230119849A1 (en) Three-dimensional interface control method and terminal
CN107908348A (en) The method and mobile terminal of display
CN116095413B (en) Video processing method and electronic equipment
CN115695634B (en) Wallpaper display method, electronic equipment and storage medium
CN116112780B (en) Video recording method and related device
CN116132790B (en) Video recording method and related device
CN113518171B (en) Image processing method, device, terminal equipment and medium
CN116797767A (en) Augmented reality scene sharing method and electronic device
CN116112781B (en) Video recording method, device and storage medium
CN113485596A (en) Virtual model processing method and device, electronic equipment and storage medium
CN116095465B (en) Video recording method, device and storage medium
CN116095460B (en) Video recording method, device and storage medium
KR20170021616A (en) Mobile terminal and method for controlling the same
KR20170027136A (en) Mobile terminal and the control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant