CN116132790A - Video recording method and related device - Google Patents

Video recording method and related device

Info

Publication number: CN116132790A
Application number: CN202210576793.6A
Authority: CN (China)
Prior art keywords: window, recording, moment, tracking, terminal device
Inventors: 黄雨菲, 易婕
Current assignee: Honor Device Co Ltd
Original assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Other languages: Chinese (zh)
Other versions: CN116132790B (granted publication)
Legal status: Granted; Active. (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abstract

The embodiment of the application provides a video recording method and a related device. The method comprises the following steps: displaying a first interface of a camera application, the first interface including a first window and a second window, where the first window displays a first picture acquired by the camera, the second window displays a second picture, and the second picture is a part of the first picture; at a first moment, when a first position of the first picture includes a first object, the second picture includes the first object; at a second moment later than the first moment, when a second position of the first picture includes the first object, the second picture includes the first object; at the first moment, the second window displays a frame and/or a first control; at the second moment, the second window does not display the frame and/or the first control. In this way, the terminal device can additionally obtain and display the picture corresponding to the tracking target, and can hide the frame of the small window and/or the controls in the small window during recording, reducing occlusion of the tracking picture, improving the immersive effect, and improving user experience.

Description

Video recording method and related device
Technical Field
The application relates to the technical field of terminals, in particular to a video recording method and a related device.
Background
To improve user experience, electronic devices such as mobile phones and tablet computers are generally equipped with multiple cameras, for example, a front camera and a rear camera. The user can select a corresponding shooting mode according to their needs, for example, a front-shooting (selfie) mode, a rear-shooting mode, a front-and-rear dual-shooting mode, and the like.
In some scenarios, the video captured by a user via an electronic device includes one or more persons. When the user wants to obtain a video of a single person, the video must be edited manually.
However, manual editing is complex to operate and inefficient, resulting in a poor user experience with the electronic device.
Disclosure of Invention
The embodiment of the application provides a video recording method and a related device, applied to an electronic device. According to the method, a selected person can be tracked while a video is shot to additionally generate one video stream, without manual editing by the user, which reduces user operations and improves user experience.
In a first aspect, an embodiment of the present application proposes a video recording method, applied to a terminal device including a first camera, where the method includes: the terminal device displays a first interface of the camera application, where the first interface includes a first window and a second window; the first window displays a first picture acquired by the first camera, the second window displays a second picture, and the second picture is a part of the first picture; at a first moment, when the terminal device detects that a first position of the first picture includes a first object, the second picture includes the first object; at a second moment, when the terminal device detects that a second position of the first picture includes the first object, the second picture includes the first object, the second moment being later than the first moment; at the first moment, the second window displays a frame and/or a first control; at the second moment, the second window does not display the frame and/or the first control.
It can be understood that the first interface may be a preview interface with a tracking target set (for example, the interface shown in a in fig. 3B), or a recording interface with a tracking target set (for example, the interface shown in a in fig. 3C). The first window may be understood as the preview area or recording area hereinafter; the picture acquired by the first camera in real time may be understood as the preview picture or recording picture hereinafter; the second window may be understood as the small window hereinafter; and the picture displayed in the second window may be understood as the tracking picture corresponding to the tracking target. The first object may be understood as the object corresponding to the tracking target. Objects include, but are not limited to: persons, objects (vehicles, etc.), and pets (cats, dogs, etc.).
In the embodiment of the present application, the picture (tracking picture) displayed in the second window changes as the position of the tracking target changes; refer to fig. 3B or fig. 3C. For example, the interface at the first moment may be the interface shown in a in fig. 3B, and the interface at the second moment may be the interface shown in b in fig. 3B; alternatively, the interface at the first moment may be the interface shown in a in fig. 3C, and the interface at the second moment may be the interface shown in b in fig. 3C.
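The way the second picture follows the tracked object can be sketched as a crop of the first picture whose centre follows the object, clamped so the crop stays inside the frame. This is a minimal illustrative sketch with assumed function and parameter names, not the patented implementation:

```python
def crop_tracking_frame(frame_w, frame_h, obj_cx, obj_cy, crop_w, crop_h):
    """Return the (left, top) corner of a crop_w x crop_h region of the
    first picture, centred on the tracked object where possible but
    clamped so the crop never leaves the frame bounds."""
    left = min(max(obj_cx - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(obj_cy - crop_h // 2, 0), frame_h - crop_h)
    return left, top
```

For example, in a 1920x1080 frame, an object centred at (960, 540) with a 480x640 crop gives a corner of (720, 220), while an object near the top-left corner clamps to (0, 0), so the tracking picture keeps the object as close to centre as the frame allows.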
It can be understood that the first control is a control in the small window, for example, the small window ending control described hereinafter; the first control is not limited here. At the first moment, the second window displays a frame and/or the first control; at the second moment, the second window does not display the frame and/or the first control. That is, the terminal device may hide the frame of the small window and/or the first control.
For example, the interface displayed at the first moment may correspond to the interface shown in a in fig. 10, with the small window's frame and the small window ending control displayed; the interface displayed at the second moment may correspond to the interface shown in b in fig. 10, with the frame and ending control not displayed.
In summary, the terminal device can additionally obtain and display the picture corresponding to the tracking target, and can hide the frame of the small window and/or the controls in the small window while the tracking picture is displayed, reducing occlusion of the tracking picture and improving user experience. By hiding the small window's frame, the terminal device further improves the immersive effect.
Optionally, the time interval between the first moment and the second moment is greater than a first threshold.
The first threshold may correspond to the fifth preset duration described below. The first moment may be the moment at which recording of the focus-tracking video starts, and the second moment may be a moment that is a fifth preset duration after recording starts. Alternatively, the first moment may be the moment at which the terminal device receives a trigger operation on the small window, and the second moment may be a moment that is a fifth preset duration after that trigger operation.
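The auto-hide behaviour described above can be sketched as a simple timer: the frame and first control are shown when the small window appears or is tapped, and hidden once the threshold elapses. The class name and the 5.0 s value are assumptions for illustration; the text does not fix the fifth preset duration:

```python
class ChromeAutoHide:
    """Hides the small window's frame/controls after HIDE_AFTER_S seconds
    without interaction. 5.0 is a placeholder for the 'fifth preset
    duration', which the text does not specify."""
    HIDE_AFTER_S = 5.0

    def __init__(self, now=0.0):
        self.shown_at = now
        self.visible = True

    def tick(self, now):
        # Called periodically; hides the chrome once the timeout passes.
        if self.visible and now - self.shown_at >= self.HIDE_AFTER_S:
            self.visible = False

    def on_tap(self, now):
        # A tap on the small window re-shows the frame and controls.
        self.shown_at = now
        self.visible = True
```

The same timer restarts on each tap, which matches the alternative reading above where the second moment is measured from the trigger operation on the small window rather than from the start of recording.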
Optionally, the first object is displayed centrally in the second picture.
Optionally, the second window floats on an upper layer of the first window, and the second window is smaller than the first window.
Optionally, after the second moment, the terminal device detects a trigger operation of the user on the second window; in response to the trigger operation on the second window, the terminal device displays the frame of the second window and/or the first control.
It can be understood that the trigger operation on the second window includes, but is not limited to, tapping the small window, resizing the small window, and adjusting the position of the small window.
For example, after the second moment, when the user taps any position of the small window, the frame of the small window and/or the controls in the small window are displayed, and the terminal device may enter the recording interface shown in c in fig. 10 from the recording interface shown in b in fig. 10. Details are not repeated here.
Optionally, the first interface further includes a pause control, and the method further includes: after the second moment, the terminal device detects a trigger operation on the pause control; in response to the trigger operation on the pause control, the terminal device displays a second interface of the camera application, where the second interface includes the first window and the second window, both of which include an identifier for pausing recording; the second window includes the frame and/or the first control; the first recording duration information displayed in the first window remains unchanged, and the second recording duration information displayed in the second window remains unchanged; the first window still displays the first picture acquired by the first camera, and the second window displays the second picture.
In this embodiment of the present application, the trigger operation on the pause control may be a tap operation or another operation, which is not limited here. The identifier for pausing recording may be the "|" symbol described below or another identifier; its form is not limited here. The first recording duration information may correspond to the recording duration described below; the second recording duration information may correspond to the small window recording duration described below.
For example, after the second moment, when the user taps the pause control, the frame of the small window and/or the controls in the small window are displayed, and the terminal device may enter the recording interface shown in d in fig. 10 from the recording interface shown in b in fig. 10. Details are not repeated here.
In this way, the controls in the small window can be brought up during a pause, facilitating user operation. In addition, the frame of the small window and/or the controls in the small window make the small window more noticeable, drawing the user's attention to it.
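The pause semantics above — both windows freeze their displayed recording duration while the first window keeps showing the live camera picture — can be sketched with a per-window duration counter. The class and method names are illustrative assumptions:

```python
class RecordingTimer:
    """Sketch of a recording-duration counter that freezes while paused,
    as described for both the first window and the small window."""

    def __init__(self):
        self.elapsed = 0.0
        self.paused = False

    def tick(self, dt):
        # Advance the displayed duration only while recording is active.
        if not self.paused:
            self.elapsed += dt

    def pause(self):
        self.paused = True   # duration information stays unchanged

    def resume(self):
        self.paused = False
```

In use, the main window and the small window would each hold their own timer; tapping the pause control calls `pause()` on both, so both displayed durations stay unchanged even though the camera preview keeps updating.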
Optionally, the method further includes: at a third moment, when the terminal device detects that the first picture does not include the first object, the frame of the second window and/or the first control are displayed, the third moment being later than the second moment.
It can be understood that when the first picture does not include the first object, the first object is lost. In this case, the terminal device can bring up the controls in the small window, facilitating user operation. In addition, the frame of the small window and/or the controls in the small window make the small window more noticeable, drawing the user's attention to it.
Optionally, the method further includes: from the fourth moment to the fifth moment, the terminal device continuously detects that the first picture does not include the first object, and the second window does not include the frame and/or the first control;
at the fifth moment, the terminal device displays a third interface of the camera application, where the third interface includes the first window and the second window; the second window includes an identifier for pausing recording, the frame, and/or the first control, and the second recording duration information displayed in the second window remains unchanged; the first window does not include the identifier for pausing recording, the first recording duration information it displays continues to change, and the first window still displays the picture acquired by the first camera; the fifth moment is later than the fourth moment, and the fourth moment is later than the second moment.
It can be understood that when the first picture does not include the first object, the first object is lost. Illustratively, the interface at the second moment may correspond to the recording interface shown in a in fig. 11, the interface at the fourth moment may correspond to the interface shown in b in fig. 11, and the interface at the fifth moment may correspond to the recording interface shown in c in fig. 11. Details are not repeated here.
The terminal device can pause the small window recording after the first object has been lost for a certain time and bring up the controls in the small window, facilitating user operation. In addition, the frame of the small window and/or the controls in the small window make the small window more noticeable, drawing the user's attention to it. Furthermore, because the frame and/or controls are not displayed during the loss period itself, repeated showing and hiding of the frame and/or controls caused by the tracking target being lost frequently can be reduced, improving user experience.
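The loss handling described above — keep recording with the chrome hidden for a grace period, then pause the small window and show the chrome — can be sketched as a small state function. The function name, state labels, and the 2.0 s grace value are assumptions; the text only speaks of "a certain time":

```python
def small_window_state(loss_started_at, now, grace_s=2.0):
    """Return (recording_state, chrome_state) for the small window.
    loss_started_at is None while the tracked object is still visible;
    grace_s is an assumed value for the loss grace period."""
    if loss_started_at is None:
        return ("recording", "chrome_hidden")
    if now - loss_started_at < grace_s:
        # Lost, but within the grace period: keep recording and keep the
        # chrome hidden, so brief losses do not cause flicker.
        return ("recording", "chrome_hidden")
    # Sustained loss: pause the small window and show frame/controls.
    return ("paused", "chrome_shown")
```

The grace period is what reduces the repeated show/hide churn mentioned above: a target that reappears within `grace_s` never triggers any UI change.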
Optionally, the method further includes: at a sixth moment, when the terminal device detects that the first picture includes an object, a tracking identifier is displayed in the first window, the tracking identifier being associated with the object; at a seventh moment, when the terminal device detects that the first picture includes the object, the tracking identifier is not displayed in the first window; the seventh moment is later than the sixth moment; the sixth moment may be earlier than, later than, or equal to the first moment; and the time interval between the sixth moment and the seventh moment is greater than a second threshold.
The tracking identifier may be the tracking frame described below, or may take another form (for example, a thumbnail, a number, or an icon of the object); this is not limited here. The tracking identifier may be located at the position where the object is displayed, or may be displayed in a row at the edge of the preview area or recording area; this is not limited here either.
Illustratively, the interface at the sixth moment may correspond to the interface shown in a in fig. 8, with the tracking frame displayed; the interface at the seventh moment may correspond to the interface shown in b in fig. 8, with the tracking frame not displayed.
The second threshold may correspond to the first preset duration described below. The sixth moment may be the moment at which recording of the focus-tracking video starts, and the seventh moment may be a moment that is a first preset duration after recording starts. Alternatively, the sixth moment may be the moment at which the terminal device receives a trigger operation on the recording area, and the seventh moment may be a moment that is a first preset duration after that trigger operation. This is not limited here.
In this way, the terminal device can hide tracking identifiers such as the tracking frame, reducing occlusion of the recorded picture, improving the immersive effect, and improving user experience. In addition, the frame of the small window and/or the controls in the small window, on the one hand, and the tracking identifier in the recording area, on the other, can be controlled separately, reducing the influence of user operations on one another.
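The separate control of the two surfaces can be sketched as distinct visibility flags driven by distinct triggers. The event names and the flag dictionary are illustrative assumptions, not terms from the text:

```python
def apply_event(event, state):
    """Update UI visibility flags. Tapping the recording area affects
    only the tracking identifier; tapping the small window affects only
    its frame/controls; pausing re-shows both, as described above."""
    if event == "tap_recording_area":
        state["tracking_id_visible"] = True
    elif event == "tap_small_window":
        state["chrome_visible"] = True
    elif event == "tap_pause":
        state["tracking_id_visible"] = True
        state["chrome_visible"] = True
    return state
```

Because each surface has its own flag, revealing the tracking frame never forces the small window's frame back on screen, which is the reduced mutual influence the text refers to.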
Optionally, at the seventh moment, the first window further includes a tracking identifier display prompt, where the tracking identifier display prompt is used to prompt the user how to display the first tracking identifier.
The tracking frame display prompt may be, for example, "tap the recording area to display the tracking frame". The position, display form, specific content, and the like of the tracking frame display prompt are not limited in this embodiment of the application.
For example, the interface at the seventh moment may correspond to the interface shown in b in fig. 8, with a tracking frame display prompt displayed.
In this way, the terminal device can prompt the user how to display the tracking frame, improving user experience.
Optionally, at an eighth moment, the first window does not display the tracking identifier display prompt, the eighth moment being later than the seventh moment.
For the displayed content and the disappearance timing of the tracking identifier display prompt, reference may be made to the description of the corresponding content in fig. 8; details are not repeated here.
In a possible implementation, the tracking frame display prompt may disappear after being displayed for a second preset duration. The second preset duration may be 3 s or 5 s; its specific value is not limited in this embodiment of the present application.
In this way, the terminal device can stop displaying the tracking identifier display prompt, reducing occlusion of the recorded picture by the prompt, reducing interference, and improving user experience.
Optionally, the method further includes: after the seventh moment, the terminal device detects a trigger operation of the user on the first window; in response to the trigger operation on the first window, the first window includes the first tracking identifier and does not include the tracking identifier display prompt.
The trigger operation on the first window may be a tap operation or another operation, which is not limited in this embodiment of the present application.
It can be understood that when the terminal device receives a user operation in the recording area, the tracking frame is displayed and the tracking frame display prompt disappears. Illustratively, when the terminal device detects that the user taps the recording area in the interface shown in b in fig. 8, the tracking frame is displayed and the tracking frame display prompt is no longer displayed.
Optionally, the method further includes: after the seventh moment, the terminal device detects a trigger operation on the pause control; in response to the trigger operation on the pause control, the terminal device displays a second interface of the camera application, where the second interface includes the first window and the second window, both of which include an identifier for pausing recording; the first window includes the first tracking identifier; the first recording duration information displayed in the first window remains unchanged, and the second recording duration information displayed in the second window remains unchanged; the first window still displays the picture acquired by the first camera, and the second window displays the second picture.
For the trigger operation on the pause control, the identifier for pausing recording, the first recording duration information, the second recording duration information, and the like, reference may be made to the related description below; details are not repeated here.
Illustratively, after the seventh moment, when the user taps the pause control, the first tracking identifier is displayed, and the terminal device may enter the recording interface shown in d in fig. 8 from the recording interface shown in b in fig. 8. Details are not repeated here.
In this way, the tracking identifier can be brought up when recording is paused, making it convenient for the user to switch or confirm the tracking target, which can improve user experience.
Optionally, the method further includes: at a ninth moment, when the terminal device detects that the first picture does not include the first object, the first window includes the tracking identifier, the ninth moment being later than the seventh moment.
It can be understood that, in the embodiment of the present application, the first object is the tracking target; when the first object is lost, the terminal device may bring up the tracking identifier, making it convenient for the user to switch the tracking target. Illustratively, the interface at the seventh moment may correspond to the recording interface shown in a in fig. 9A, and the interface displayed at the ninth moment may correspond to the recording interface shown in b in fig. 9A. Details are not repeated here.
In this way, the terminal device displays the tracking identifier when the first object is lost, making it convenient for the user to switch the object corresponding to the tracking target and facilitating user operation.
Optionally, the method further includes: from the tenth moment to the eleventh moment, the terminal device continuously detects that the first picture does not include the first object, and the first window does not include the tracking identifier; at the eleventh moment, the terminal device displays a third interface of the camera application, the first window displays the tracking identifier, and the eleventh moment is later than the tenth moment.
It can be understood that when the first picture does not include the first object, the first object is lost. Illustratively, the interface at the seventh moment may correspond to the recording interface shown in a in fig. 9B, the interface at the tenth moment may correspond to the interface shown in b in fig. 9B, and the interface at the eleventh moment may correspond to the recording interface shown in c in fig. 9B. Details are not repeated here.
The terminal device pauses the small window recording after the first object has been lost for a certain time and displays the tracking identifier, making it convenient for the user to switch the object corresponding to the tracking target and facilitating user operation. In addition, because the tracking identifier is not displayed during the loss period, repeated showing and hiding of the tracking identifier caused by the tracking target being lost frequently can be reduced, improving user experience.
Optionally, at the third moment or at the ninth moment, the first interface further includes a loss prompt, where the loss prompt is used to indicate that the first object is lost;
and/or, from the fourth moment to the fifth moment, or from the tenth moment to the eleventh moment, the first interface further includes the loss prompt; and/or, at the fifth moment or at the eleventh moment, the third interface further includes a pause prompt, where the pause prompt is used to indicate that the second window has paused recording.
In this way, when the first object is lost, the user can be prompted that the tracking target is lost; and when the small window pauses recording after the first object is lost, the user can be prompted that the small window recording has paused abnormally, helping the user understand the abnormal condition and improving user experience.
Optionally, at a twelfth moment, the first interface includes the second window; at a thirteenth moment, the first interface includes the first window and does not include the second window, and the first window includes a second control associated with the second window; the thirteenth moment is later than the twelfth moment, the twelfth moment is the first moment, the time interval between the thirteenth moment and the twelfth moment is greater than a third threshold, and the third threshold is greater than the first threshold.
In this embodiment of the present application, the second control may be a scaled-down small window, or may be a control of another form. When the second control is triggered, the terminal device may display the small window at its normal size. Illustratively, the interface displayed at the twelfth moment corresponds to the recording interface shown in a in fig. 12; the interface displayed at the thirteenth moment may correspond to the recording interface shown in b in fig. 12.
It can be understood that the third threshold may correspond to the sixth preset duration described below. The twelfth moment may be the moment at which recording of the focus-tracking video starts, and the thirteenth moment may be a moment that is a sixth preset duration after recording starts. Alternatively, the twelfth moment may be the moment at which the terminal device receives a trigger operation on the recording area, and the thirteenth moment may be a moment that is a sixth preset duration after that trigger operation. This is not limited here.
In this way, the terminal device can reduce the area occupied by the small window, reducing occlusion of the recorded picture, improving the immersive effect, and improving user experience.
Optionally, the method further includes: after the thirteenth moment, the terminal device detects a trigger operation on the second control; in response to the trigger operation on the second control, the first interface includes the first window and the second window, and the first window does not include the second control.
The trigger operation on the second control may be a tap operation, a drag operation, or another operation, which is not limited in this embodiment of the present application.
For example, when the terminal device detects, in the recording interface shown in b in fig. 12, that the user triggers the shrunken small window 1206 by an operation such as a tap, touch, or drag, the terminal device enters the recording interface shown in c in fig. 12 and displays the small window at its normal size.
In this way, the small window can be brought back, making it convenient for the user to view the tracking picture and improving user experience.
Optionally, the method further includes: after the thirteenth moment, the terminal device detects a trigger operation on the pause control; in response to the trigger operation on the pause control, the terminal device displays a second interface of the camera application, where the second interface includes the first window and the second window, both of which include an identifier for pausing recording; the first recording duration information displayed in the first window remains unchanged, and the second recording duration information displayed in the second window remains unchanged; the first window still displays the picture acquired by the first camera.
For the trigger operation on the pause control, the identifier for pausing recording, the first recording duration information, the second recording duration information, and the like, reference may be made to the related description below; details are not repeated here.
Illustratively, after the thirteenth moment, when the user taps the pause control, the small window is displayed, and the terminal device may enter the recording interface shown in d in fig. 12 from the recording interface shown in b in fig. 12. Details are not repeated here.
In this way, the small window can be brought up when recording is paused, making it convenient for the user to view the tracking picture and improving user experience.
Optionally, the method further includes: at a fourteenth moment, when the terminal device detects that the first picture does not include the first object, the first interface includes the second window, the first window does not include the second control, and the fourteenth moment is later than the thirteenth moment.
It can be understood that, in the embodiment of the present application, the first object is the tracking target; when the first object is lost, the terminal device can bring up the small window, making it convenient for the user to view the tracking picture and improving user experience. In addition, this can draw the user's attention to the small window.
Optionally, at the fourteenth moment, the first interface further includes a loss prompt, where the loss prompt is used to indicate that the first object is lost.
In this way, the user can be prompted that the tracking target is lost, improving user experience.
Optionally, the method further includes: from the fifteenth moment to the sixteenth moment, when the terminal device continuously detects that the first picture does not include the first object, the first interface does not include the second window and the first window includes the second control; at the sixteenth moment, the terminal device displays a third interface of the camera application, where the third interface includes the first window and the second window; the second window includes an identifier for pausing recording, and the second recording duration information displayed in the second window remains unchanged; the first window does not include the identifier for pausing recording, the first recording duration information it displays continues to change, and the first window still displays the picture acquired by the first camera; the sixteenth moment is later than the fifteenth moment, and the fifteenth moment is later than the thirteenth moment.
It can be understood that when the first picture does not include the first object, the first object is lost. Illustratively, the interface at the thirteenth moment may correspond to the recording interface shown in a in fig. 13, the interface at the fifteenth moment may correspond to the interface shown in b in fig. 13, and the interface at the sixteenth moment may correspond to the recording interface shown in c in fig. 13. Details are not repeated here.
The terminal device pauses the small window recording after the first object has been lost for a certain time and displays the small window, facilitating user operation. In addition, because the second control is displayed during the loss period instead of the normal-size small window, repeated alternation between the small window and the second control caused by the tracking target being lost frequently can be reduced, improving user experience.
Optionally, at the seventeenth moment, the first interface includes a second window; at an eighteenth moment, the first interface includes the first window and does not include the second window, the eighteenth moment is later than the seventeenth moment, the seventeenth moment is the first moment, a time interval between the eighteenth moment and the seventeenth moment is greater than a fourth threshold, and the fourth threshold is greater than the third threshold.
Illustratively, the interface displayed at the seventeenth time may correspond to the recording interface shown as a in fig. 14; the interface displayed at the eighteenth time may correspond to the recording interface shown as b in fig. 14.
It is understood that the fourth threshold may correspond to a seventh preset duration described below. The seventeenth time may be the time when recording of the focus-tracking video starts, and the eighteenth time may be a time that is the seventh preset duration after recording starts. Alternatively, the seventeenth time may be the time when the terminal device receives a trigger operation on the recording area, and the eighteenth time may be a time that is the seventh preset duration after that trigger operation is received. This is not limited herein.
In this way, the terminal device can hide the small window, further reducing occlusion of the recorded picture, enhancing the immersive effect, and improving user experience.
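The auto-hide behaviour can be sketched as follows. The class and the concrete threshold values are illustrative assumptions; the text only requires that the fourth threshold exceed the third:

```python
# Illustrative sketch of the small-window auto-hide: the window is shown from an
# anchor moment (recording start, or the last trigger operation on the recording
# area) and hidden once the elapsed time exceeds the fourth threshold. The
# concrete values are assumptions; the text only requires fourth > third.

THIRD_THRESHOLD_S = 1.0   # assumed value
FOURTH_THRESHOLD_S = 3.0  # assumed; may correspond to the "seventh preset duration"

class SmallWindowAutoHide:
    def __init__(self, threshold_s=FOURTH_THRESHOLD_S):
        self.threshold_s = threshold_s
        self.anchor = 0.0                 # the "seventeenth moment"

    def on_trigger(self, t):
        # a trigger operation on the recording area re-anchors (re-shows) the window
        self.anchor = t

    def visible(self, t):
        return t - self.anchor <= self.threshold_s
```

A trigger operation on the recording area simply resets the anchor, so the small window reappears and the hide timer restarts from that moment.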
Optionally, the method further comprises: after the eighteenth moment, the terminal device detects a triggering operation on the pause control; in response to the triggering operation on the pause control, the terminal device displays a second interface of the camera application, where the second interface includes the first window and the second window, both of which include an identifier for pausing recording; the first recording duration information displayed in the first window remains unchanged, the second recording duration information displayed in the second window remains unchanged, and the first window further displays the picture acquired by the first camera.

For the triggering operation on the pause control, the identifier for pausing recording, the first recording duration information, the second recording duration information, and the like, reference may be made to the related description below; details are not repeated here.

Illustratively, after the eighteenth moment, when the user clicks the pause control, the small window is displayed, and the terminal device may enter the recording interface shown as d in fig. 14 from the recording interface shown as b in fig. 14. Details are not described herein.

In this way, the small window can be called out when the user pauses recording, so that the user can conveniently view the tracking picture, improving user experience.
Optionally, the method further comprises: at a nineteenth moment, when the terminal device detects that the first picture does not include the first object, the first interface includes the second window, where the nineteenth moment is later than the eighteenth moment.
It can be understood that, in the embodiment of the present application, the first object is a tracking target, and when the first object is lost, the terminal device can call out a small window, so that a user can conveniently view a tracking picture, and user experience is improved. In addition, the attention of the user to the small window can be improved.
Optionally, at the nineteenth moment, the first interface further includes a loss prompt for prompting that the first object is lost.

In this way, the user can be prompted that the tracking target is lost, improving user experience.
Optionally, the method further comprises: during the twentieth time to the twenty-first time, when the terminal device continuously detects that the first picture does not include the first object, the first interface does not include the second window; at the twenty-first moment, the terminal device displays a third interface of the camera application, the third interface includes the first window and the second window, the second window includes an identifier for pausing recording, the second recording duration information displayed in the second window remains unchanged, the first window does not include the identifier for pausing recording, the first recording duration information displayed in the first window continues to change, the first window further displays the picture acquired by the first camera, the twenty-first moment is later than the twentieth moment, and the twentieth moment is later than the eighteenth moment.
It will be appreciated that when the first picture does not include the first object, the first object is lost. Illustratively, the interface at the eighteenth time may correspond to the recording interface shown as a in fig. 15, and the interface at the twentieth time may correspond to the interface shown as b in fig. 15. The interface at the twenty-first time may correspond to the recording interface shown as c in fig. 13. Details are not described herein.
The terminal device pauses the recording of the small window after the first object has been lost for a certain time, and displays the small window, which facilitates user operation. In addition, during the loss period the second control is displayed and the small window is not, so that repeated display and hiding of the small window caused by frequent loss of the tracking target can be reduced, improving user experience.
Optionally, at the twenty-first time, the first window and/or the second window displays a pause prompt, where the pause prompt is used to prompt that recording of the second window is paused.

Optionally, the method further comprises: during the twentieth time to the twenty-first time, the first interface further includes the loss prompt; and/or, at the twenty-first moment, the third interface further includes a pause prompt, where the pause prompt is used to prompt that recording of the second window is paused.

Optionally, the loss prompt is located in the first window or in the second window; the pause prompt is located in the first window or in the second window.
It will be appreciated that when the loss prompt or the pause prompt is displayed in the small-window region, occlusion of the preview picture by the prompt can be reduced. Moreover, the object of the loss prompt or pause prompt becomes clearer and easier for the user to understand: a loss prompt displayed in the small window is more easily understood as loss of the tracking target in the small window, and a pause prompt displayed in the small window is more easily understood as a recording pause of the small window. In addition, when the small window displays the text of the loss prompt or the pause prompt, the user's attention to the small window can be improved.
When the loss prompt or the pause prompt is displayed in the recording area, the terminal device can display more text to prompt the user about subsequent processing, and can display text in a larger font size, which is easier for the user to read.
In a second aspect, embodiments of the present application provide a terminal device, which may also be referred to as a terminal (terminal), a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with wireless transceiving function, a Virtual Reality (VR) terminal device, an augmented reality (augmented reality, AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in unmanned driving (self-driving), a wireless terminal in teleoperation (remote medical surgery), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), or the like.
The terminal device comprises a processor for invoking a computer program in memory to perform the method as in the first aspect.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when run on a terminal device, cause the terminal device to perform a method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run, causes a terminal device to perform the method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor for invoking a computer program in a memory to perform a method as in the first aspect.
It should be understood that the second aspect to the fifth aspect of the present application correspond to the technical solution of the first aspect of the present application, and the beneficial effects obtained by each aspect and the corresponding feasible embodiments are similar; details are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a hardware system structure of a terminal device according to an embodiment of the present application;

Fig. 2 is a schematic diagram of a software system structure of a terminal device according to an embodiment of the present application;

Fig. 3A is a schematic view of an application scenario according to an embodiment of the present application;

Fig. 3B is a schematic view of a main angle mode preview interface according to an embodiment of the present application;

Fig. 3C is a schematic diagram of a main angle mode recording interface according to an embodiment of the present application;

Fig. 4 is an interface schematic diagram of a terminal device entering a main angle mode according to an embodiment of the present application;

Fig. 5 is an interface schematic diagram of a terminal device entering a main angle mode according to an embodiment of the present application;

Fig. 6 is an interface schematic diagram corresponding to a main angle mode recording flow according to an embodiment of the present application;

Fig. 7 is an interface schematic diagram corresponding to a main angle mode recording flow according to an embodiment of the present application;

Fig. 8 is an interface schematic diagram of a main angle mode tracking frame according to an embodiment of the present application;

Fig. 9A is an interface schematic diagram of a main angle mode loss scenario according to an embodiment of the present application;

Fig. 9B is an interface schematic diagram of a main angle mode loss scenario according to an embodiment of the present application;

Fig. 10 is a schematic view of a main angle mode small window according to an embodiment of the present application;

Fig. 11 is a schematic diagram of a main angle mode loss scene according to an embodiment of the present application;

Fig. 12 is a schematic view of a main angle mode small window according to an embodiment of the present application;

Fig. 13 is a schematic view of a main angle mode loss scene according to an embodiment of the present application;

Fig. 14 is a schematic view of a main angle mode small window according to an embodiment of the present application;

Fig. 15 is a schematic view of a main angle mode loss scene according to an embodiment of the present application;

Fig. 16 is a schematic flow chart of a video recording method according to an embodiment of the present application;

Fig. 17 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application.
Detailed Description
In order to clearly describe the technical solutions of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", and the like are used to distinguish between identical or similar items having substantially the same function and effect. For example, the first chip and the second chip are merely for distinguishing different chips, and the order of the chips is not limited. It will be appreciated by those skilled in the art that the words "first", "second", and the like do not limit the number or the order of execution, and that objects modified by "first" and "second" are not necessarily different.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following items" or a similar expression means any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may be singular or plural.
The embodiment of the application provides a video recording method which can be applied to electronic equipment with a shooting function. The electronic device includes a terminal device, which may also be referred to as a terminal (terminal), a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with wireless transceiving function, a Virtual Reality (VR) terminal device, an augmented reality (augmented reality, AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in unmanned driving (self-driving), a wireless terminal in teleoperation (remote medical surgery), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), or the like. The embodiment of the application does not limit the specific technology and the specific equipment form adopted by the terminal equipment.
In order to better understand the embodiments of the present application, the following describes the structure of the terminal device in the embodiments of the present application:
fig. 1 shows a schematic structure of a terminal device 100. The terminal device may include: radio Frequency (RF) circuitry 110, memory 120, input unit 130, display unit 140, sensor 150, audio circuitry 160, wireless fidelity (wireless fidelity, wiFi) module 170, processor 180, power supply 190, and bluetooth module 1100. It will be appreciated by those skilled in the art that the terminal device structure shown in fig. 1 is not limiting of the terminal device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The following describes the respective constituent elements of the terminal device in detail with reference to fig. 1:
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call; specifically, after receiving downlink information from the base station, the RF circuit delivers the information to the processor 180 for processing, and sends uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (low noise amplifier, LNA), a duplexer, and the like. In addition, the RF circuit 110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including, but not limited to, global system for mobile communications (global system of mobile communication, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), long term evolution (long term evolution, LTE), email, and short message service (short messaging service, SMS), among others.
The memory 120 may be used to store software programs and modules, and the processor 180 performs various functional applications and data processing of the terminal device by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function and an image playing function), a boot loader (boot loader), and the like; the data storage area may store data created according to the use of the terminal device (such as audio data and a phonebook), and the like. In addition, the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. It will be appreciated that in the embodiment of the present application, the memory 120 stores a program for connecting back to the bluetooth device.
The input unit 130 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. In particular, the input unit 130 may include a touch panel 131 and other input devices 132. The touch panel 131, also referred to as a touch screen, may collect touch operations on or near it by the user (e.g., operations performed by the user on or near the touch panel 131 with any suitable object or accessory such as a finger or a stylus), and drive the corresponding connection device according to a predetermined program. Optionally, the touch panel 131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 180; it can also receive commands from the processor 180 and execute them. In addition, the touch panel 131 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 130 may include other input devices 132 in addition to the touch panel 131. In particular, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by the user or information provided to the user and various menus of the terminal device. The display unit 140 may include a display panel 141; optionally, the display panel 141 may be configured in the form of a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), or the like. Further, the touch panel 131 may cover the display panel 141, and when the touch panel 131 detects a touch operation on or near it, the touch operation is transmitted to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in fig. 1 the touch panel 131 and the display panel 141 implement the input and output functions of the terminal device as two independent components, in some embodiments the touch panel 131 and the display panel 141 may be integrated to implement the input and output functions of the terminal device.
The terminal device may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 141 or the backlight when the terminal device moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the terminal device (such as switching between landscape and portrait, related games, and magnetometer posture calibration), vibration-recognition related functions (such as pedometer and tapping), and the like; other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor that may also be configured for the terminal device are not described in detail herein.
The audio circuit 160, the speaker 161, and the microphone 162 may provide an audio interface between the user and the terminal device. The audio circuit 160 may transmit an electrical signal converted from received audio data to the speaker 161, and the speaker 161 converts the electrical signal into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; the audio data is then output to the processor 180 for processing and sent, for example, to another terminal device via the RF circuit 110, or output to the memory 120 for further processing.
WiFi belongs to a short-distance wireless transmission technology, and terminal equipment can help a user to send and receive emails, browse webpages, access streaming media and the like through a WiFi module 170, so that wireless broadband Internet access is provided for the user. Although fig. 1 shows a WiFi module 170, it is understood that it does not belong to the essential constitution of the terminal device, and can be omitted entirely as required within the scope of not changing the essence of the invention.
The processor 180 is a control center of the terminal device, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the terminal device. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
It will be appreciated that in the embodiment of the present application, the memory 120 stores a program of video recording, and the processor 180 may be configured to call and execute the program of video recording stored in the memory 120 to implement the method of video recording in the embodiment of the present application.
The terminal device further includes a power supply 190 (e.g., a battery) for powering the various components, which may be logically connected to the processor 180 via a power management system so as to provide for managing charging, discharging, and power consumption by the power management system.
The Bluetooth technology belongs to short-distance wireless transmission technologies, and the terminal device can establish a Bluetooth connection with another terminal device having a Bluetooth module through the Bluetooth module 1100, so as to perform data transmission over a Bluetooth communication link. The Bluetooth module 1100 may be a Bluetooth low energy (bluetooth low energy, BLE) module or a classic Bluetooth module, as required. It can be understood that, in the embodiment of the present application, in the case that the terminal device is a user terminal or a service tool, the terminal device includes a Bluetooth module. It will be understood that the Bluetooth module does not belong to the essential constitution of the terminal device and may be omitted entirely as required within the scope of not changing the essence of the invention; for example, a server may not include the Bluetooth module.
Although not shown, the terminal device further includes a camera. Optionally, the position of the camera on the terminal device may be front, rear, or internal (which may extend out of the body when in use), which is not limited in this embodiment of the present application.
Alternatively, the terminal device may include a single camera, a dual camera, or a triple camera, which is not limited in the embodiments of the present application. Cameras include, but are not limited to, wide angle cameras, tele cameras, depth cameras, and the like. For example, the terminal device may include three cameras, one of which is a main camera, one of which is a wide-angle camera, and one of which is a tele camera.
Alternatively, when the terminal device includes a plurality of cameras, the plurality of cameras may be all front-mounted, all rear-mounted, all built-in, at least part of front-mounted, at least part of rear-mounted, at least part of built-in, or the like, which is not limited in the embodiment of the present application.
Fig. 2 is a block diagram illustrating a software configuration of the terminal device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application packages may include applications such as camera, gallery, phone, map, music, settings, mailbox, video, and social.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a resource manager, a view system, a notification manager, and the like.
The window manager is used for managing window programs. The window manager may obtain the display size, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar, and can be used to convey notification-type messages that automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the terminal device vibrates, or an indicator light blinks.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of the software and hardware of the terminal device is illustrated below with reference to a scenario of terminal device interface switching.
When a touch sensor in the terminal device receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as touch coordinates, touch strength, and a timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a tap operation and the control corresponding to the tap operation being the control of the camera application icon as an example, the camera application calls an interface of the application framework layer to start the camera application, and then starts the display driver by calling the kernel layer to display a functional interface of the camera application.
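As an illustrative (non-Android) model of the event flow just described — a raw event produced by the kernel layer, control hit-testing in the framework layer, and the matched control starting its application — consider the following sketch; all class and method names are assumptions made for illustration:

```python
# Minimal model of the input-event flow described above: the kernel layer wraps
# the touch as a raw event (coordinates, force, timestamp); the framework layer
# maps the coordinates to a control; the handler of the matched control (here,
# the camera application icon) starts the corresponding application.
from dataclasses import dataclass

@dataclass
class RawInputEvent:                      # produced by the kernel layer
    x: int
    y: int
    force: float
    timestamp_ms: int

class FrameworkLayer:
    def __init__(self):
        self.hit_regions = {}             # control name -> (x0, y0, x1, y1)
        self.started_apps = []

    def register_control(self, name, rect):
        self.hit_regions[name] = rect

    def dispatch(self, event):
        # identify the control corresponding to the input event
        for name, (x0, y0, x1, y1) in self.hit_regions.items():
            if x0 <= event.x <= x1 and y0 <= event.y <= y1:
                self.started_apps.append(name)  # e.g. start the camera application
                return name
        return None
```

A tap inside the camera icon's region is dispatched to the camera entry; a tap outside every registered region is discarded, mirroring how the framework layer only acts on events that map to a known control.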
After the camera application is started, the camera application can call a camera access interface in the application framework layer to start the shooting function of the camera, and drive one or more cameras to acquire one or more frames of images in real time based on the camera driver of the kernel layer. After the camera acquires an image, the image can be transmitted to the camera application in real time through the kernel layer, the system library, and the application framework layer, and the camera application then displays the image on the corresponding functional interface.
The technical solutions of the present application, and how they solve the above technical problems, are described in detail below with specific embodiments. The following embodiments may be implemented independently or in combination with one another, and the same or similar concepts or processes may not be repeated in some embodiments.
To improve user experience, terminal devices such as mobile phones and tablet computers are generally equipped with multiple cameras. With these cameras, the terminal device can provide the user with multiple shooting modes, such as a front shooting mode, a rear shooting mode, a front-and-rear dual shooting mode, and a principal angle mode. The user can select the shooting mode appropriate to the shooting scene.
The principal angle mode can be understood as a mode in which, while the terminal device records a video, an additional portrait focus tracking video can be generated; that is, two or more videos are saved when recording is completed: one is the recorded original video, and the others are videos automatically cropped from the original video according to the tracked target portrait. The portrait in the focus tracking video can be understood as the "principal angle" that the user focuses on, and the video corresponding to the "principal angle" can be generated by cropping the video content corresponding to the principal angle out of the video conventionally recorded by the terminal device.
The "principal angle" may be a living body such as a person or an animal, or a non-living body such as a vehicle. It is understood that any object that can be identified by an algorithmic model can serve as the "principal angle" in embodiments of the present application. In the embodiments of the present application, the "principal angle" may be defined as a focus tracking object, which may also be referred to as a principal angle object, a tracking target, a tracking object, a focus tracking target, and the like; the present application does not limit the term used for this concept.
For ease of understanding, the principal angle mode among the shooting modes is described below with reference to the accompanying drawings.
Fig. 3A is a schematic view of an application scenario provided in an embodiment of the present application. As shown in fig. 3A, a terminal device 301, a person 302, and a person 303 are included in the application scene.
The terminal device 301 may record a video containing a tracking target through a camera. The tracking target may be any person in the recorded picture, or any object such as an animal or a car. In the scene shown in fig. 3A, the tracking target may be the person 302 or the person 303 in the photographed picture.
In the principal angle mode, the terminal device can additionally obtain one or more focus tracking videos corresponding to tracking targets while recording the video.
Specifically, when the terminal device 301 is previewing in the principal angle mode, it may receive an operation of setting a tracking target by the user; after the terminal device 301 starts recording the video, one or more focus tracking videos may be additionally generated based on the tracking target. For example, the terminal device 301 may set person 302 in the photographed picture as the tracking target, or set person 303 in the photographed picture as the tracking target.
Alternatively, when the terminal device 301 is recording in the principal angle mode and receives an operation of setting a tracking target by the user, the terminal device 301 may additionally generate one or more focus tracking videos based on the tracking target. In this way, the focus tracking video corresponding to the tracking target can be obtained without manually editing the whole video.
It can be understood that the terminal device can switch the person corresponding to the tracking target one or more times during the video recording process. Specifically, when the terminal device receives the operation of switching the tracking target by the user, the terminal device switches the person corresponding to the tracking target.
In one possible implementation, when the person corresponding to the tracking target is switched, different persons exist in the focus tracking video obtained by the terminal device.
Illustratively, when the terminal device detects that the user switches the tracking target from person 302 to person 303 during recording, the terminal device displays the tracking picture based on person 303. In the focus tracking video generated from the tracking target after recording ends, person 302 is displayed before the tracking target is switched and person 303 is displayed after the switch; in other words, the portion before the switch corresponds to person 302, and the portion after the switch corresponds to person 303.
Specifically, take as an example the case where, at the 3rd second of recording, the terminal device detects that the user switches the tracking target from person 302 to person 303. In the focus tracking video generated from the tracking target, person 302 is displayed before the 3rd second and person 303 is displayed after the 3rd second. In this way, the terminal device can switch the tracking target, thereby switching the person shown in the focus tracking video and improving user experience.
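The behavior described above — a single focus tracking video whose displayed person changes at the moment of the switch — can be modeled as a lookup over switch events. A minimal illustrative sketch, with hypothetical names:

```python
def target_at(switch_events, initial_target, t):
    """Return the tracking target shown at time t (in seconds), given a list
    of (switch_time, new_target) events sorted by time."""
    current = initial_target
    for when, new_target in switch_events:
        if t >= when:
            current = new_target
        else:
            break
    return current

# Switching from person 302 to person 303 at the 3rd second of recording:
switches = [(3.0, "person_303")]
assert target_at(switches, "person_302", 1.0) == "person_302"  # before switch
assert target_at(switches, "person_302", 5.0) == "person_303"  # after switch
```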
In another possible implementation, when the person corresponding to the tracking target is switched, the terminal device may obtain one focus tracking video based on the tracking target before the switch and another focus tracking video based on the tracking target after the switch.
Illustratively, when the terminal device detects during recording that the user switches the tracking target from person 302 to person 303, the terminal device generates focus tracking video 1 based on person 302 and, after the tracking target is switched to person 303, generates focus tracking video 2 based on person 303.
In some embodiments, during the recording of a video, the terminal device may also start and end recording of focus tracking videos multiple times, additionally generating multiple focus tracking videos.
Illustratively, while recording video 1, when the terminal device receives an operation by which the user ends the tracking of person 303, the terminal device may generate focus tracking video 2 based on person 303. When, after ending the tracking of person 303, the terminal device receives an operation by which the user tracks person 302, the terminal device 301 may additionally generate focus tracking video 3 based on person 302. At this point, in addition to the normally recorded video 1, the terminal device has additionally generated 2 focus tracking videos (i.e., focus tracking video 2 and focus tracking video 3). The embodiment of the present application does not limit the number of focus tracking videos.
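The multiple start/end sessions described above can be modeled as a recorder that emits one additional focus tracking video per closed session. A minimal illustrative sketch (the class and field names are hypothetical, not the patent's actual implementation):

```python
class FocusTrackRecorder:
    """Collects focus-tracking sessions opened and closed during one main
    recording; each closed session yields one additional focus tracking video."""
    def __init__(self):
        self._open = None   # (target, start_time) of the session in progress
        self.videos = []    # one entry per finished focus tracking video

    def start_tracking(self, target, t):
        self._open = (target, t)

    def end_tracking(self, t):
        target, start = self._open
        self.videos.append({"target": target, "start": start, "end": t})
        self._open = None

rec = FocusTrackRecorder()
rec.start_tracking("person_303", t=0.0)
rec.end_tracking(t=4.0)               # yields focus tracking video 2
rec.start_tracking("person_302", t=6.0)
rec.end_tracking(t=9.0)               # yields focus tracking video 3
assert len(rec.videos) == 2
assert [v["target"] for v in rec.videos] == ["person_303", "person_302"]
```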
In this way, the terminal device can start and end recording of focus tracking videos multiple times and generate multiple focus tracking videos based on tracking targets, improving user experience.
It should be noted that, in the scene shown in fig. 3A, the video recorded by the terminal device through the camera includes two persons. The video may include more or fewer persons; the number of persons recorded by the terminal device is not specifically limited.
It should be noted that, if the terminal device receives no operation of setting a tracking target while previewing and recording in the principal angle mode, a single video is obtained when recording ends. If the terminal device receives an operation of setting a tracking target during preview in the principal angle mode and then receives an operation of closing the small window, the terminal device cancels the tracking of the tracking target; if no operation of setting a tracking target is then received during recording, a single video is likewise obtained when recording ends. It is understood that the principal angle mode may be provided in an application having a shooting function, such as a camera. After the terminal device enters the principal angle mode, the implementation of the principal angle mode may include a preview mode and a recording mode.
It should be noted that the interfaces displayed by the terminal device in the preview mode (before recording) and the recording mode (during recording) may both be referred to as preview interfaces. The pictures displayed in the preview interface of the preview mode (before recording) are neither generated into a video nor saved, whereas the pictures displayed in the preview interface of the recording mode (during recording) can be generated into a video and saved. For ease of distinction, hereinafter the preview interface of the preview mode (before recording) is referred to as the preview interface, and the preview interface of the recording mode (during recording) is referred to as the recording interface.
In the preview mode of the principal angle mode, the image obtained by the camera (the preview picture) may be displayed in the preview area, and the image of the tracking target selected by the user (the tracking picture) may be displayed in the small window. In the preview mode, the terminal device may neither generate a video nor store the content displayed in the preview area or in the small window.
For example, the preview interface of the principal angle mode in the terminal device may be as shown in fig. 3B. The preview interface includes a preview area 304 and a recording control 305.
The preview area 304 displays the preview picture. When the terminal device recognizes that a person is included in the preview picture, tracking frames (e.g., tracking frame 307 and tracking frame 308) are displayed in the preview area. A tracking frame can prompt the user that the corresponding person can be set or switched as the tracking target, making it convenient to set or switch the tracking target. When the terminal device recognizes that multiple persons are included in the preview picture, multiple tracking frames may be displayed in the preview area; the number of tracking frames is less than or equal to the number of persons identified by the terminal device. The tracking target is any one of the persons corresponding to tracking frames in the preview picture. The tracking target may also be referred to as a focus tracking object, a principal angle object, and the like, which is not limited in the present application.
In some embodiments, the display style of the tracking frame corresponding to the person set as the tracking target (e.g., tracking frame 307) differs from that of the tracking frames corresponding to persons not set as the tracking target (e.g., tracking frame 308), making it convenient for the user to distinguish and identify the tracked person (the tracking target). Besides using different patterns, the embodiments of the present application may also set different colors for the tracking frames, for example, making the colors of tracking frame 307 and tracking frame 308 different, so that the tracking target can be intuitively distinguished from other persons. A tracking frame may be a dashed box, such as tracking frame 307, or a combination of a dashed box and "+", such as tracking frame 308; it may take any display form, as long as it can be triggered by the user to set the corresponding person as the tracking target and implement tracking. The tracking frame may be marked at any position of a person that can be set as the tracking target; the embodiments of the present application are not specifically limited in this respect.
It can be understood that the tracking frame is one kind of tracking identifier, and the terminal device may also display tracking identifiers in other forms to make it convenient for the user to set the tracking target. By way of example, other forms of tracking identifier may be thumbnails of objects, numbers, letters, graphics, and the like. A tracking identifier may be placed at any position on the object, near the object, or at the edge of the preview area; the embodiment of the present application does not specifically limit the position of the tracking identifier.
For example, the terminal device may display a thumbnail arrangement of one or more objects at an edge of the preview area, and when the terminal device receives an operation that the user clicks any tracking identifier, set the object corresponding to the clicked tracking identifier as a tracking target.
In a possible implementation, the terminal device may identify persons through face recognition technology and display the tracking frames, and may determine the display position of a tracking frame, for example a relatively central position on the person's body, through techniques such as human body recognition. Computing the tracking frame position based on the human body in this way reduces the chance that the tracking frame lands on the face, reduces occlusion of the face by the tracking frame, and improves user experience. The techniques used for person identification and tracking frame position computation are not specifically limited in the embodiments of the present application.
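The idea of anchoring the tracking frame on the human body so that it avoids covering the face can be sketched as follows. This is a simplified illustration under assumed rules, not the patent's actual algorithm; the box format and the "push below the face" fallback are assumptions:

```python
def tracking_frame_anchor(face_box, body_box):
    """Place the tracking-frame anchor at the body centre; if that point would
    fall inside the face box, push it just below the face to avoid occlusion.
    Boxes are (x0, y0, x1, y1) with y growing downward."""
    cx = (body_box[0] + body_box[2]) / 2
    cy = (body_box[1] + body_box[3]) / 2
    fx0, fy0, fx1, fy1 = face_box
    if fx0 <= cx <= fx1 and fy0 <= cy <= fy1:
        cy = fy1 + 1  # just below the face box
    return cx, cy

# The face occupies the top of the body box, so the anchor lands on the torso.
anchor = tracking_frame_anchor(face_box=(40, 0, 80, 40), body_box=(30, 0, 90, 200))
assert anchor == (60.0, 100.0)
```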
In some embodiments, fig. 3B also includes a widget 306. The widget 306 displays a tracking screen. The tracking screen corresponds to a tracking target. When the tracking target is switched, the person in the tracking screen displayed in the widget 306 is switched. For example, if the tracking target is switched from the person corresponding to the tracking frame 307 to the person corresponding to the tracking frame 308, the tracking screen displayed in the widget 306 is changed accordingly.
The tracking picture may be part of the preview picture. In a possible implementation, the tracking picture is obtained by the terminal device cropping the preview picture at a certain ratio based on the tracking target. The embodiment of the present application does not specifically limit the picture displayed in the small window.
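Cropping the tracking picture out of the preview picture at a fixed aspect ratio, centered on the tracking target and clamped to the frame bounds, can be sketched as follows. The particular ratio and scale parameters are illustrative assumptions, not values from the patent:

```python
def crop_rect(frame_w, frame_h, target_cx, target_cy, ratio_w, ratio_h, scale=0.5):
    """Compute a crop window of aspect ratio_w:ratio_h covering `scale` of the
    frame height, centred on the target and clamped to the frame bounds."""
    crop_h = frame_h * scale
    crop_w = crop_h * ratio_w / ratio_h
    # Centre on the target, then clamp so the crop stays inside the frame.
    x0 = min(max(target_cx - crop_w / 2, 0), frame_w - crop_w)
    y0 = min(max(target_cy - crop_h / 2, 0), frame_h - crop_h)
    return x0, y0, crop_w, crop_h

# 1920x1080 frame, 9:16 portrait crop centred on a target at (1700, 500):
x0, y0, w, h = crop_rect(1920, 1080, 1700, 500, 9, 16)
assert (w, h) == (303.75, 540.0)
assert x0 == 1548.125  # target stays centred; no clamping needed here
```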
In some embodiments, the size, position, and landscape/portrait display mode of the small window are adjustable, and the user can adjust the style of the small window according to personal recording habits.
In some embodiments, widget 306 further includes a close control 309 and a first switch control 310.
It can be understood that, when the terminal device receives an operation of setting a tracking target by the user, a small window is displayed on the preview interface to show the tracking picture of the tracking target; when no such operation is received, the preview interface does not display the small window.
Optionally, the tracking target is centrally displayed in the tracking screen.
Optionally, the small window floats above the preview area. This is not limited herein.
When the user triggers the close control 309 through a click, touch, or other operation on the preview interface shown in fig. 3B, the terminal device receives the operation of closing the small window, closes the small window, and cancels the preview of the tracking target.
When the user triggers the first switching control 310 through a click, touch, or other operation on the preview interface shown in fig. 3B, the terminal device receives an operation of switching the small window display mode (small window style) and switches the style of the small window. Specifically, the small window may be switched from landscape to portrait, or vice versa.
When the user triggers the recording control 305 through a click, touch, or other operation on the preview interface shown in fig. 3B, the terminal device receives an operation of starting recording and starts recording the video and the focus tracking video.
Optionally, the preview interface may also include other controls, such as a principal angle mode exit control 311, a setting control 312, a flash control 313, a second switching control 314 (for the small window style), a zoom control 315, and the like.
When the principal angle mode exit control 311 is triggered, the terminal device exits the principal angle mode and enters the video recording mode. When the setting control 312 is triggered, the terminal device can adjust various setting parameters, including but not limited to: whether to turn on a watermark, the storage path, the encoding scheme, whether to save the geographic location, and the like. When the flash control 313 is triggered, the terminal device can set the flash effect, for example, control the flash to be forcibly on, forcibly off, on when photographing, or on adaptively according to the environment. When the zoom control 315 is triggered, the terminal device can adjust the focal length of the camera and thereby the magnification of the preview picture.
When the user triggers the second switching control 314 through a click or touch on the preview interface shown in fig. 3B, the terminal device receives an operation of setting the small window style and displays small window style options for the user to select. The small window style options include, but are not limited to, landscape, portrait, and the like; the embodiments of the present application are not limited in this regard. In a possible implementation, the icon of the second switching control 314 corresponds to the current display style of the small window, making it convenient for the user to distinguish the small window styles.
It should be noted that, on the preview interface shown in fig. 3B, the small window style can be switched either through the first switching control 310 or through the second switching control 314. In a possible implementation, the first switching control 310 in the small window may be linked with the second switching control 314 in the preview area. For example, when the small window changes from landscape to portrait, the icons of the first switching control 310 and the second switching control 314 both change to the portrait preview style; alternatively, both icons may show the landscape preview style, prompting the user with the style that another switch would produce.
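The linkage between the two switching controls can be modeled as both controls rendering their icons from one shared style state, so they can never disagree. A minimal sketch with hypothetical names:

```python
class WidgetStyleState:
    """Shared small-window style; both switching controls read and update the
    same state, so their icons stay in sync (the 'linkage' described above)."""
    def __init__(self, style="landscape"):
        self.style = style

    def toggle(self):
        # Either control may trigger the toggle; the result is shared.
        self.style = "portrait" if self.style == "landscape" else "landscape"
        return self.style

    def icon_for(self, control):
        # Both the in-window control and the preview-area control render their
        # icon from the same shared style, regardless of which one is asked.
        return f"{self.style}_preview_icon"

state = WidgetStyleState()
assert state.icon_for("first_switch") == state.icon_for("second_switch")
state.toggle()
assert state.style == "portrait"
assert state.icon_for("first_switch") == "portrait_preview_icon"
```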
It can be appreciated that, in the preview scenario, after the terminal device sets the tracking target, the tracking picture of the small window may display the tracking target centered. In some scenarios, the tracking target may be moving; as long as it moves without leaving the lens, the tracking picture of the small window can continuously display it centered.
For example, the preview picture may include a male character and a female character. In response to the user clicking the tracking frame of the male character, the terminal device sets the male character as the tracking target and enters the interface shown in a in fig. 3B, where the tracking picture of the small window displays the male character, centered, standing to the right of the female character. As the male character moves, the terminal device can keep focusing on him and display him centered in the small window. When the male character walks to the left of the female character, the interface of the terminal device may be as shown in b in fig. 3B; there, the tracking picture of the small window still displays the male character centered, now to the left of the female character.
In a possible implementation, when the terminal device tracks the target, the focus moves with the tracking target. Illustratively, in the interface shown in a in fig. 3B, the focus is on the face area of the male character, in the middle-right part of the picture. As the male character moves, the terminal device can keep focusing on him; when he walks to the left of the female character, the interface of the terminal device may be as shown in b in fig. 3B, where the focus is on the face area of the male character, in the middle-left part of the picture.
In the recording mode of the principal angle mode, the terminal device can display the image obtained by the camera (the recording picture) in the recording area, display the image of the tracking target selected by the user (the tracking picture) in the small window, and generate the recorded video and the focus tracking video after recording starts. When recording ends, the terminal device saves the video generated from the recording picture and the focus tracking video generated from the tracking picture.
In some embodiments, the small window may end its recording earlier than the recording of the recording area. When the small window recording ends, the terminal device saves the focus tracking video generated from the tracking picture. In other words, the terminal device may end the recording of the focus tracking video before the whole video ends.
In some embodiments, the small window may start its recording later than the recording of the recording area. In other words, after the terminal device starts recording the video, it opens the small window and starts recording the focus tracking video only after detecting an operation of setting the tracking target by the user.
For example, the recording interface of the principal angle mode in the terminal device may be as shown in fig. 3C. The recording interface includes a recording area 316, a pause control 317, and an end control 318.
The recording area 316 displays the recording picture and the recording duration. When the terminal device recognizes that a person is included in the recording picture, tracking frames (e.g., tracking frame 320 and tracking frame 321) are displayed in the recording area. It will be appreciated that the number of tracking frames is less than or equal to the number of persons identified by the terminal device.
In some embodiments, the recording interface also displays a widget 319. The widget 319 displays a tracking screen. The tracking screen corresponds to a tracking target. When the tracking target is switched, the person in the tracking screen displayed in the small window 319 is switched. For example, if the tracking target is switched from the person corresponding to the tracking frame 320 to the person corresponding to the tracking frame 321, the tracking screen displayed in the small window 319 is changed accordingly.
The tracking picture may be part of the recording picture. In a possible implementation, the tracking picture is obtained by cropping the recording picture in real time at a certain ratio based on the tracking target. The embodiment of the present application does not specifically limit the picture displayed in the small window.
Optionally, the tracking target is centrally displayed in the tracking screen.
Optionally, the small window floats above the recording area. And are not limited herein.
The widget 319 also includes a widget end control 322 and a widget recording duration.
It can be understood that, when the terminal device receives an operation of setting a tracking target by the user, a small window is displayed on the recording interface to show the tracking picture of the tracking target; when no such operation is received, the recording interface does not display the small window.
When the user triggers the end control 318 through a click, touch, or other operation on the recording interface shown in fig. 3C, the terminal device receives the user's operation of ending recording, enters the preview interface of the principal angle mode, and saves the video corresponding to the recording picture and the focus tracking video corresponding to the tracking picture.
When the user triggers the pause control 317 through a click, touch, or other operation on the recording interface shown in fig. 3C, the terminal device receives the user's operation of pausing recording and pauses both the recording of the video in the recording area 316 and the recording of the focus tracking video in the small window 319.
When the user triggers the widget end control 322 through a click, touch, or other operation on the recording interface shown in fig. 3C, the terminal device receives the user's operation of ending the small window recording, continues to display the recording picture in the recording area 316, closes the widget 319, and saves the focus tracking video corresponding to the tracking picture of the widget 319.
In a possible implementation, the recording interface further includes a flash control 323. When the flash control 323 is triggered, the terminal device can set a flash effect.
It can be understood that, when the terminal device records in the principal angle mode, one video can be generated based on the recording picture of the recording area, and an additional focus tracking video corresponding to the tracking target can be generated based on the tracking picture of the small window. The two videos are stored independently in the terminal device. In this way, the video corresponding to the tracking target can be obtained without manually editing the whole video afterwards; the operation is simple and convenient, and user experience is improved.
It can be understood that, in the recording scenario, after the terminal device sets the tracking target, the tracking picture of the small window may display the tracking target centered. In some scenarios, the tracking target may be moving; as long as it moves without leaving the lens, the tracking picture of the small window can continuously display it centered.
For example, the recording picture may include a male character and a female character. In response to the user clicking the tracking frame of the male character, the terminal device sets the male character as the tracking target and enters the interface shown in a in fig. 3C, where the tracking picture of the small window displays the male character, centered, standing to the right of the female character. As the male character moves, the terminal device can keep focusing on him and display him centered in the small window. When the male character walks to the left of the female character, the interface of the terminal device may be as shown in b in fig. 3C; there, the tracking picture of the small window still displays the male character centered, now to the left of the female character.
In a possible implementation, when the terminal device tracks the target, the focus moves with the tracking target. Illustratively, in the interface shown in a in fig. 3C, the focus is on the face area of the male character, in the middle-right part of the picture. As the male character moves, the terminal device can keep focusing on him; when he walks to the left of the female character, the interface of the terminal device may be as shown in b in fig. 3C, where the focus is on the face area of the male character, in the middle-left part of the picture.
It can be appreciated that the embodiments of the present application define the shooting mode in which one or more focus tracking videos can be additionally generated based on tracking targets as the principal angle mode; this shooting mode may also be given other names, which is not limited in the embodiments of the present application.
The manner of entering the principal angle mode on the terminal device and the interfaces involved in recording are described below with reference to fig. 4 to fig. 7. Fig. 4 and fig. 5 are schematic diagrams of two flows for entering the principal angle mode according to embodiments of the present application; fig. 6 and fig. 7 are schematic diagrams of interfaces involved in recording according to embodiments of the present application.
Fig. 4 is a schematic diagram of an interface through which a terminal device enters the principal angle mode according to an embodiment of the present application.
When the terminal device receives the user's operation of opening the camera application 401 on the main interface shown in a in fig. 4, the terminal device may enter the photographing preview interface shown in b in fig. 4. The photographing preview interface may include a preview area and shooting mode options. The preview area displays the preview picture in real time; the shooting mode options include, but are not limited to: portrait, photo, video, professional, and more 402.
When the user triggers more 402 through a click, touch, or other operation on the camera preview interface shown in b in fig. 4, the terminal device receives the user's operation of viewing other shooting modes and enters the shooting mode selection interface shown in c in fig. 4. The shooting mode selection interface includes shooting mode options, which include, but are not limited to: professional, panoramic, high-dynamic-range (HDR) imaging, time-lapse photography, watermarking, document correction, high-pixel, micro-movie, principal angle mode 403, and other shooting modes.
When the user triggers the principal angle mode 403 through a click, touch, or other operation on the shooting mode selection interface shown in c in fig. 4, the terminal device receives the user's operation of selecting the principal angle mode preview and enters the preview interface corresponding to the principal angle mode shown in d in fig. 4. The preview interface includes a preview area and a recording control. The preview area displays the preview picture; when a person is present in the preview picture, a tracking frame is also displayed in the preview area. When the user triggers the tracking frame through a click or touch operation, the terminal device receives the operation of setting a tracking target, sets the person corresponding to the tracking frame as the tracking target, and displays the tracking picture corresponding to the tracking target in the small window on the display interface.
Fig. 5 is a schematic diagram of another interface through which a terminal device enters the principal angle mode according to an embodiment of the present application.
When the terminal device receives the user's operation of opening the camera application 501 on the main interface shown in a in fig. 5, the terminal device may enter the photographing preview interface shown in b in fig. 5. The photographing preview interface may include a preview area and shooting mode options. The preview area displays the preview picture in real time; the shooting mode options include, but are not limited to: portrait, photo, video 502, professional, and other shooting modes.
When the user triggers the video 502 by clicking, touching, or a similar operation on the photographing preview interface shown in b in fig. 5, the terminal device receives the user's operation of selecting the video preview and enters the video preview interface shown in c in fig. 5. The video preview interface includes a preview area, recording parameter selection items, and shooting mode selection items. The preview area displays a preview picture in real time; the recording parameter selection items include, but are not limited to: the main angle mode 503, a flash, a filter, settings, or other types of recording parameter selection items. The shooting mode selection items include, but are not limited to: portrait, photograph, video, professional, or other types of shooting mode selection items.
When the user triggers the principal angle mode 503 by clicking, touching, or a similar operation on the video preview interface shown in c in fig. 5, the terminal device receives the user's operation of selecting a preview in the principal angle mode and enters the preview interface corresponding to the principal angle mode shown in d in fig. 5. The preview interface includes a preview area and a recording control. The preview area displays a preview picture. When a person is present in the preview picture, a tracking frame is also displayed in the preview area. When the user triggers the tracking frame by a clicking or touching operation, the terminal device receives the operation of setting a tracking target, sets the person corresponding to the tracking frame as the tracking target, and displays the tracking picture corresponding to the tracking target in a small window on the display interface.
It can be understood that, when entering the main angle mode, the terminal device may be placed horizontally in a landscape state or vertically in a portrait state, and the principle by which the terminal device implements the main angle mode is similar in either state.
It can be understood that, in the main angle mode of the camera, the terminal device may select the tracking target after starting recording, or may select the tracking target before starting to record the video. The two recording processes are described below with reference to fig. 6 and fig. 7, respectively. Fig. 6 is an interface schematic diagram corresponding to a main angle mode recording flow provided in an embodiment of the present application.
When the user triggers the recording control 601 by clicking, touching, or a similar operation on the preview interface shown in a in fig. 6, the terminal device receives an operation of starting recording and enters the recording interface shown in b in fig. 6. The recording interface includes a recording area 602, a pause control 603, and an end control 604. The recording area displays a recording picture and a tracking frame 605. The tracking frame 605 facilitates the user's selection of a tracking target.
When the user triggers the tracking frame 605 by clicking, touching, or a similar operation on the recording interface shown in b in fig. 6, the terminal device receives the user's operation of setting the tracking target and enters the recording interface shown in c in fig. 6. The recording interface includes a recording area, a pause control 606, an end control 607, and a widget 608. The recording area displays a recording picture. The widget 608 includes a widget end control 609 and displays the tracking picture corresponding to the tracking target. The tracking picture corresponding to the tracking target is a part of the recording picture.
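Because the tracking picture is a part of the recording picture, it can be obtained by cropping a sub-region of each recorded frame around the tracked person. The sketch below illustrates only this cropping step; the patent does not specify an algorithm, and the function name and parameters are assumptions:

```python
def tracking_crop(frame_w, frame_h, cx, cy, crop_w, crop_h):
    """Return the (left, top) corner of a crop_w x crop_h crop centered on
    the tracked subject at (cx, cy), clamped so it stays inside the frame."""
    left = min(max(cx - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(cy - crop_h // 2, 0), frame_h - crop_h)
    return left, top

# Subject near the frame edge: the crop slides inward instead of leaving the frame.
print(tracking_crop(1920, 1080, 100, 100, 640, 360))  # (0, 0)
```

Applying such a crop per frame keeps the widget picture centered on the tracking target while it moves within the recording picture.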
On the basis of the flow shown in fig. 6, when the user triggers the pause control 606 by clicking or touching on the recording interface shown in c in fig. 6, the terminal device receives the operation of pausing video recording; the video recording is paused, and recording of the focus-tracking video corresponding to the widget is also paused.
When the user triggers the end control 607 by clicking or touching on the recording interface shown in c in fig. 6, the terminal device receives the operation of ending video recording; the video recording ends, and recording of the focus-tracking video corresponding to the widget also ends.
When the user triggers the widget end control 609 by clicking or touching on the recording interface shown in c in fig. 6, the terminal device receives the operation of ending the focus-tracking video recording; recording of the focus-tracking video corresponding to the widget ends, while the video continues to be recorded.
When the user triggers the widget 608 by a drag operation on the recording interface shown in c in fig. 6, the position of the widget 608 may be moved.
The moving distance of the widget 608 is related to the distance between the drag operation start position and the drag operation end position, and the moving direction of the widget 608 is related to the direction of the drag operation.
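The drag behavior above can be sketched as translating the widget by the drag vector while keeping it fully on screen. The names and the clamping policy are assumptions; the patent leaves the implementation open:

```python
def move_widget(pos, drag_start, drag_end, screen_w, screen_h, win_w, win_h):
    """Move the widget by the vector from drag_start to drag_end,
    clamping so the whole widget stays visible on the screen."""
    dx = drag_end[0] - drag_start[0]
    dy = drag_end[1] - drag_start[1]
    x = min(max(pos[0] + dx, 0), screen_w - win_w)
    y = min(max(pos[1] + dy, 0), screen_h - win_h)
    return x, y
```

The moving distance equals the length of the drag vector (up to clamping), and the moving direction follows the direction of the drag, matching the behavior described above.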
Fig. 7 is an interface schematic diagram corresponding to a main angle mode recording flow provided in an embodiment of the present application.
When the user triggers the tracking frame 701 by clicking, touching, or a similar operation on the preview interface shown in a in fig. 7, the terminal device receives the user's operation of setting a tracking target and enters the preview interface shown in b in fig. 7. The preview interface includes a preview area 702, a recording control 703, and a widget 704. The preview area 702 displays a preview picture. The widget 704 displays a tracking picture corresponding to the tracking target. The widget 704 also includes a close control 705 and a first switch control 706.
When the user triggers the recording control 703 by clicking, touching, or a similar operation on the preview interface shown in b in fig. 7, the terminal device receives an operation of starting recording and enters the recording interface shown in c in fig. 7. The recording interface includes a recording area, a pause control 707, an end control 708, and a widget 709. The recording area displays a recording picture. The widget 709 includes a widget end control 710 and displays the tracking picture corresponding to the tracking target. The tracking picture is a part of the recording picture.
For the roles of the pause control 707, the end control 708, and the widget 709, reference may be made to the relevant description of fig. 6; details are not repeated here.
On the basis of the flow shown in fig. 7, when the user triggers the close control 705 by clicking, touching, or a similar operation on the preview interface shown in b in fig. 7, the terminal device receives the operation of closing the widget, closes the widget, and cancels the preview of the tracking target.
When the user triggers the first switch control 706 by clicking, touching, or a similar operation on the preview interface shown in b in fig. 7, the terminal device receives the operation of switching the display mode of the widget and switches the display mode of the widget. Specifically, the widget may be switched from landscape to portrait, or vice versa.
In a possible implementation, the terminal device may also adjust the size of the small window. The embodiment of the application does not limit a specific implementation manner of the size adjustment of the small window.
Based on the above scenarios, the terminal device can start focus-tracking video recording in the widget and obtain multiple videos. It should be noted that, although the widget displays a picture related to the tracking target in the preview area/recording area, the video recorded in the widget and the video recorded in the recording area are independent videos, not one composite picture-in-picture video in which the tracking picture of the widget is nested in the recording picture of the recording area.
It should be noted that, if the terminal device does not start widget recording, the terminal device obtains one video recorded in the recording area. If the terminal device starts widget recording, the terminal device can obtain one video recorded in the recording area and one or more videos recorded in the widget. For example, during video recording in the recording area, the terminal device may start widget recording multiple times; each time it detects a click operation on the widget end control, it ends the current widget recording and obtains one video. After widget recording is started again, the terminal device obtains a new video. The number of videos obtained by the terminal device based on the widget is therefore related to the number of times widget recording is started.
Optionally, the user may browse the video recorded in the recording area and the multiple videos recorded in the widget in the album of the camera application. The display order of the multiple videos may follow the recording order, that is, the terminal device may sort the videos by the end time or the start time of recording. The display order may also be the reverse of the recording order, that is, the terminal device may sort the videos in reverse order of the end time or the start time of recording.
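The ordering described above amounts to sorting the recorded videos by their end (or start) times, forward or in reverse. A minimal sketch, with the dict shape assumed purely for illustration:

```python
def order_videos(videos, key="end", reverse=False):
    """Sort videos by their recording end or start time.
    reverse=False gives recording order; reverse=True gives newest first."""
    return sorted(videos, key=lambda v: v[key], reverse=reverse)

videos = [
    {"name": "main", "start": 0, "end": 30},  # recording-area video
    {"name": "w1", "start": 5, "end": 12},    # first widget video
    {"name": "w2", "start": 15, "end": 25},   # second widget video
]
print([v["name"] for v in order_videos(videos, key="end")])  # ['w1', 'w2', 'main']
```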
Optionally, the video recorded in the recording area and the videos recorded in the widget may be displayed as video thumbnails on the same album interface. To help distinguish the video recorded in the recording area from the videos recorded in the widget, the terminal device may set an identifier for the videos recorded in the widget. For example, the terminal device may add an outer border, text, a graphic, or the like to a video recorded in the widget, and may also set the size of the widget video thumbnails so that they differ in size from the thumbnail of the video recorded in the recording area. It can be understood that the embodiment of the present application does not limit the form of the video thumbnails in the album, the arrangement order of the video thumbnails, the storage order of the videos, and the like.
It should be noted that, in some scenarios in which the terminal device records a video, the user may want to focus only on the recorded content and may not want unrelated interface elements to be displayed, so as to avoid interference and a degraded experience.
During recording after the terminal device enters the main angle mode, the terminal device can hide some elements of the recording interface, such as the tracking frame, the widget end control, and the widget. This reduces occlusion of the recorded picture, reduces interference, and improves the user experience.
In a first possible implementation manner, when the recording area receives no user operation within a first preset duration after video recording starts, the tracking frame is hidden and not displayed; or, when the recording area receives no user operation within the first preset duration after it last received any user operation, the tracking frame is hidden and not displayed. When the terminal device receives a user operation, the tracking frame is displayed.
It can be understood that the first preset duration may be 3s or 5s, and the specific value of the first preset duration is not limited in this embodiment of the present application.
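The auto-hide behavior above can be modeled as a small timer-driven state machine: the tracking frame is shown on any user operation and hidden once the first preset duration elapses with no operation. The class and method names below are assumptions; the patent does not prescribe an implementation:

```python
class TrackingFrameVisibility:
    """Hide the tracking frame after `timeout` seconds without user input.
    The caller supplies the current time, e.g. from time.monotonic()."""

    def __init__(self, timeout=5.0, start_time=0.0):
        self.timeout = timeout  # the "first preset duration"
        self.last_interaction = start_time
        self.visible = True

    def on_user_operation(self, now):
        # Any tap in the recording area shows the frame and restarts the timer.
        self.last_interaction = now
        self.visible = True

    def tick(self, now):
        # Called periodically; hides the frame once the timeout elapses.
        if now - self.last_interaction >= self.timeout:
            self.visible = False
        return self.visible
```

With a 5 s timeout, the frame is visible at t = 3 s, hidden at t = 5 s, and shown again as soon as the user taps the recording area.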
The hiding and displaying of the tracking frame is described below with reference to fig. 8 and 9.
Fig. 8 is a schematic diagram of a recording interface in a main angle mode according to an embodiment of the present application.
Taking a first preset duration of 5 seconds (s) as an example, the recording interface shown in a in fig. 8 includes a recording area 801, a pause control, an end control, a recording duration, and a widget 802. The recording area 801 displays a recording picture and a tracking frame 803. The widget 802 displays a tracking picture and also includes a widget recording duration and a widget end control.
When the terminal device receives no operation at any position of the recording area within the first preset duration after starting to record the video, the terminal device enters the recording interface shown in b in fig. 8 from the recording interface shown in a in fig. 8. The recording interface includes a recording area 804, a pause control 805, an end control, a recording duration, and a widget. The recording area 804 displays the recording picture without displaying the tracking frame; the widget displays the tracking picture.
In this way, the terminal device can hide the tracking frame, reducing occlusion of the recorded picture and improving the user experience.
On the basis of the embodiment, when the tracking frame is hidden, the terminal device displays a tracking frame display prompt.
Illustratively, the interface shown in b in fig. 8 further includes a tracking frame display prompt 806. The tracking frame display prompt is used to tell the user how to bring the tracking frame back. For example, the tracking frame display prompt may be "tap the recording area to display the tracking frame". The embodiment of the present application does not limit the position, display form, specific content, and the like of the tracking frame display prompt.
In this way, the terminal device can prompt the user how to display the tracking frame, improving the user experience.
It can be understood that the tracking frame display prompt disappears after being displayed for a second preset duration, or, when the terminal device receives a user operation in the recording area, the tracking frame is displayed and the tracking frame display prompt disappears. The second preset duration may be 3s or 5s; the specific value of the second preset duration is not limited in this embodiment of the present application.
In this way, the tracking frame display prompt can disappear, reducing occlusion of the recorded picture, reducing interference, and improving the user experience.
It can be understood that, during recording, the terminal device may display the tracking frame display prompt only the first time the tracking frame is hidden, or may display it each time the tracking frame is hidden. The terminal device may also limit the number of times the tracking frame display prompt is shown during one recording. The embodiment of the present application does not limit the number of times or the frequency with which the tracking frame display prompt is shown during one recording.
In some embodiments, the terminal device displays the tracking frame display prompt only the first time the user records in the principal angle mode. The terminal device may also display the tracking frame display prompt each time the user records in the principal angle mode, or during the first N recordings. The embodiment of the present application does not limit the number of times or the frequency with which the tracking frame display prompt is shown.
On the basis of the above embodiment, when the terminal device receives a clicking, touching, or similar operation at any position of the recording area, the terminal device displays the tracking frame; or, when receiving an operation of pausing recording, the terminal device displays the tracking frame; or, when the tracking target is lost, the terminal device displays the tracking frame.
Illustratively, when the user triggers any position of the recording area 804 by clicking, touching, or a similar operation on the recording interface shown in b in fig. 8, the terminal device receives an operation of canceling the immersive experience and enters the recording interface shown in c in fig. 8. A tracking frame 807 is displayed in the recording interface.
In this way, the terminal device can also call up the tracking frame, making it convenient to confirm or switch the tracking target and improving the user experience.
It can be understood that, when the terminal device receives no operation within the first preset duration after receiving the user's click on the recording area 804, the terminal device hides the tracking frame again and does not display it.
Illustratively, when the user triggers the pause control 805 by clicking, touching, or a similar operation on the recording interface shown in b in fig. 8, the terminal device receives an operation of pausing recording and enters the paused recording interface shown in d in fig. 8. The paused recording interface includes a recording area, a start control 808, an end control, and a widget. The recording area displays a recording picture, a tracking frame 809, and the recording duration. The widget displays a tracking picture, a widget end control, and the widget recording duration. While recording is paused, neither the recording duration displayed in the recording area nor the widget recording duration changes, and each may be displayed as a combination of "|" and the time.
When the user triggers the start control 808 by clicking, touching, or a similar operation on the recording interface shown in d in fig. 8, the terminal device receives the operation of resuming recording and continues recording both the video and the focus-tracking video.
It can be understood that, when the terminal device pauses recording, the recording area may display the last frame of the recording picture before the pause, or may display the captured picture in real time. When the terminal device pauses recording, the widget may display the last frame of the tracking picture before the pause, or may change to a masked state, or may display the tracking picture in real time. The embodiment of the present application does not limit what the recording area and the widget display while the terminal device pauses recording.
It can be understood that the frames displayed in the recording area while video recording is paused are not saved in the video, and the tracking frames displayed in the widget while recording is paused are not saved in the focus-tracking video. The video after the pause and the video before the pause in the recording area are the same video, and the focus-tracking video after the pause and the focus-tracking video before the pause in the widget are the same focus-tracking video. For example, the user clicks the pause recording control when the recording duration is 4s; the terminal device pauses video recording in response to the click operation, and the recorded video time is 4s. After a period of time, when the terminal device receives a click operation on the start control, the terminal device starts recording the 5th second of video on the basis of the 4s of video, and the recording duration changes accordingly.
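The duration bookkeeping in this example (4 s recorded, pause, then the 5th second continues in the same video) amounts to summing only the intervals during which recording is active. A minimal sketch with assumed names:

```python
class RecordingClock:
    """Accumulate effective recording time, excluding paused intervals."""

    def __init__(self):
        self.elapsed = 0.0
        self.started_at = None  # None while paused or not yet started

    def start(self, now):
        if self.started_at is None:
            self.started_at = now

    def pause(self, now):
        if self.started_at is not None:
            self.elapsed += now - self.started_at
            self.started_at = None

    def duration(self, now):
        if self.started_at is None:
            return self.elapsed
        return self.elapsed + (now - self.started_at)

clock = RecordingClock()
clock.start(0.0)
clock.pause(4.0)             # user pauses at 4 s
print(clock.duration(10.0))  # 4.0 - paused time does not count
clock.start(10.0)            # resume: the 5th second begins
print(clock.duration(11.0))  # 5.0
```

The recording area and the widget can each keep such a clock, which is why the two displayed durations may differ when only one of them is paused.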
On the basis of the embodiment shown in fig. 8, when the tracking target is lost, the terminal device may pause recording of the focus-tracking video. It can be understood that the terminal device may lose the tracking target for many reasons, so that no target is tracked in the recorded picture. These reasons include, but are not limited to: movement of the tracking target, movement of the terminal device, or switching of the widget style (e.g., switching from landscape to portrait or vice versa).
Fig. 9A is a schematic diagram of a scenario for tracking target loss according to an embodiment of the present application.
The recording interface shown in a in fig. 9A includes a recording area 901, a pause control, an end control, a recording duration, and a widget 902. The recording area 901 displays a recording picture and does not display a tracking frame. The widget 902 displays the tracking picture of the tracking target.
When the tracking target is lost and is no longer in the recording picture, the terminal device enters the recording interface shown in b in fig. 9A. The recording interface includes a recording area 903, a pause control, an end control, and a widget 904. The recording area 903 displays a recording picture and a tracking frame 905.
It can be understood that the terminal device can handle the loss of the tracking target in a number of ways, including, but not limited to: continuing to record the focus-tracking video; pausing recording of the focus-tracking video; pausing recording of the focus-tracking video when the tracking target is not retrieved within an eighth preset duration; and ending recording of the focus-tracking video when the tracking target is not retrieved within a ninth preset duration.
Correspondingly, the widget may continue to display part of the content of the recorded picture; or may display the last frame before the tracking target was lost; or may continue to display part of the content of the recorded picture within the eighth preset duration and, when the eighth preset duration is reached, display the frame corresponding to that moment; or may continue to display part of the content of the recorded picture within the ninth preset duration, with the widget disappearing when the ninth preset duration is reached.
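These loss-handling policies can be summarized as a function of how long the target has been missing. The 5 s and 15 s thresholds below are illustrative stand-ins for the eighth and ninth preset durations, whose values the patent leaves open:

```python
def widget_state_on_loss(seconds_lost, pause_after=5.0, end_after=15.0):
    """State of the widget's focus-tracking recording after the target is lost."""
    if seconds_lost < pause_after:
        return "recording"  # widget keeps showing part of the recorded picture
    if seconds_lost < end_after:
        return "paused"     # widget freezes or shows a masked state
    return "ended"          # widget disappears; focus-tracking video is closed
```

Retrieving the tracking target before a threshold is reached would simply reset the loss timer and return the widget to the "recording" state.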
In a possible implementation manner, when the tracking target is lost, a mask layer may be overlaid on the widget, presenting a masked state with lower brightness than the recording picture of the large window, so as to remind the user that widget recording is abnormal.
In some embodiments, after the tracking target is lost, the widget may continue to record the picture at the position where the tracking target was before it was lost. The recorded picture may be an empty mirror (a picture that does not include the tracking target), and the recording time may be the eighth preset duration.
When the tracking target is retrieved and recording resumes, the terminal device can clip the video recorded before the tracking target was lost, the empty-mirror video, and the video recorded after the tracking target was retrieved. In one possible implementation, the terminal device may delete the empty mirror and splice the video before the tracking target was lost with the video after the tracking target was retrieved into one video. In another possible implementation, the terminal device may blur or soft-focus the empty-mirror video, add a mask layer to it, or the like, reducing its influence on the continuity of the whole video and improving the experience of a user who later views the recorded video.
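The first splicing variant (delete the empty mirror, join the surrounding segments into one video) can be sketched as follows; the segment representation is an assumption for illustration:

```python
def splice_segments(segments):
    """Join recorded segments into one video, dropping empty-mirror segments
    (frames recorded while the tracking target was absent).
    Each segment is an (is_empty_mirror, frames) pair."""
    frames = []
    for is_empty, seg_frames in segments:
        if not is_empty:
            frames.extend(seg_frames)
    return frames

# before-loss frames, empty mirror, after-retrieval frames
print(splice_segments([(False, ["f1", "f2"]), (True, ["f3"]), (False, ["f4"])]))
# ['f1', 'f2', 'f4']
```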
Optionally, when the tracking target is lost, the terminal device also displays a loss prompt. The loss prompt is used to tell the user that the target is lost; for example, the content of the loss prompt may be "target lost". The loss prompt may also indicate how the widget will be handled; for example, the content of the loss prompt may be "target lost; recording of the focus-tracking video will pause after 5 seconds".
Illustratively, the recording interface shown in b in fig. 9A further includes a loss prompt 906. The widget 904 displays the loss prompt 906, whose content is "target lost".
Optionally, the loss prompt 906 may also be displayed in the recording area 903. The embodiment of the present application does not limit the specific content, display position, and the like of the loss prompt.
It can be understood that the loss prompt disappears after being displayed for a third preset duration, or when the terminal device retrieves the tracking target or switches the tracking target. The third preset duration may be 3s or 5s; the specific value of the third preset duration is not limited in this embodiment of the present application.
It can be understood that, after the tracking target is lost, the terminal device may stop recording the focus-tracking video after a fourth preset duration; alternatively, recording of the focus-tracking video may be stopped as soon as the tracking target is lost.
For example, taking the fourth preset duration as 5 seconds, if the terminal device neither retrieves the tracking target nor receives an operation of switching the tracking target, the terminal device enters the recording interface shown in c in fig. 9A from the recording interface shown in b in fig. 9A. The recording interface includes a recording area 907, a pause control, an end control, and a widget 908. The recording area displays a recording picture, a tracking frame 909, and the recording duration. The widget 908 also displays a widget end control and the widget recording duration. The recording duration shown in the recording area 907 continues to increase, while the widget recording duration shown in the widget 908 does not change.
When widget recording is paused, the widget may display the last frame of the tracking picture before the pause, or may change to a masked state, or may display the tracking picture in real time. The embodiment of the present application does not limit what the recording area and the widget display while recording is paused.
It can be understood that the tracking frames displayed by the widget while its recording is paused are not saved in the focus-tracking video. The focus-tracking video after the widget pause and the focus-tracking video before the pause are the same focus-tracking video. For example, if recording is paused when the widget recording duration is 4s, the recorded focus-tracking video time is 4s. When the terminal device retrieves the tracking target or switches the tracking target, the terminal device starts recording the 5th second of the focus-tracking video on the basis of the 4s of video, and the recording duration changes accordingly.
Optionally, the terminal device displays a pause prompt when recording of the focus-tracking video is paused. Illustratively, the recording interface shown in c in fig. 9A further includes a pause prompt 910. The widget 908 displays the pause prompt 910, which is used to tell the user that recording of the focus-tracking video is paused. The content of the pause prompt may be, for example, "recording paused" or "focus-tracking recording paused".
Optionally, a pause prompt 910 may also be displayed in recording area 907. The embodiment of the application does not limit the display position, the content and the like of the pause prompt.
Alternatively, when the tracking target is lost, the terminal device may stop recording the focus tracking video. For example, when the tracking target is lost, the terminal device may enter the recording interface shown as c in fig. 9A from the interface shown as a in fig. 9A.
In the embodiment shown in fig. 9A, the terminal device displays the tracking frame when the tracking target is lost, and also displays the tracking frame when recording of the focus-tracking video is paused.
In some embodiments, the terminal device does not display the tracking frame when the tracking target is lost, and displays the tracking frame only when recording of the focus-tracking video is paused. This reduces the number of times the tracking frame is called up, reducing occlusion of the recording interface and reducing interference.
Illustratively, the recording interface shown in a in fig. 9B includes a recording area 911, a pause control, an end control, a recording duration, and a widget 912. The recording area 911 displays a recording picture and does not display a tracking frame. The widget 912 displays a tracking picture.
When the tracking target is lost and is no longer in the recording picture, the terminal device enters the recording interface shown in b in fig. 9B. The recording interface includes a recording area 913, a pause control, an end control, and a widget 914. The recording area 913 displays a recording picture, and no tracking frame is displayed.
It can be understood that, for the content displayed in the widget 914, reference may be made to the description of the related interface in fig. 9A; details are not repeated here.
In this way, the tracking frame is not repeatedly shown and hidden when the tracking target is repeatedly lost and retrieved within a short time, reducing the interference of the tracking frame being displayed repeatedly and improving the user experience.
It can be understood that in the embodiment shown in fig. 9B, when the tracking target is lost, the processing manner of the terminal device, the content of the small window display, etc. may refer to the related description in the embodiment shown in fig. 9A, which is not repeated herein.
Optionally, when the tracking target is lost, the terminal device also displays a loss prompt. Illustratively, the recording interface shown in b in fig. 9B further includes a loss prompt 916, displayed in the widget 914. The content of the loss prompt 916 may be "target lost", or "target lost; recording of the focus-tracking video will pause after 5 seconds".
Optionally, the loss prompt 916 may also be displayed in the recording area 913. The embodiment of the present application does not limit the specific content, display position, and the like of the loss prompt.
It can be understood that the loss prompt disappears after being displayed for the third preset duration, or when the terminal device retrieves the tracking target or switches the tracking target. The third preset duration may be 3s or 5s; the specific value of the third preset duration is not limited in this embodiment of the present application.
It can be understood that, after the tracking target is lost, the terminal device may stop recording the focus-tracking video after the fourth preset duration; alternatively, recording of the focus-tracking video may be stopped as soon as the tracking target is lost.
For example, taking the fourth preset duration as 5 seconds, if the terminal device neither retrieves the tracking target nor receives an operation of switching the tracking target, the terminal device enters the recording interface shown in c in fig. 9B from the recording interface shown in b in fig. 9B. The recording interface includes a recording area 917, a pause control, an end control, and a widget 918. The recording area displays a recording picture and a tracking frame 919.
Optionally, the terminal device displays a pause prompt when recording of the focus-tracking video is paused. Illustratively, the recording interface shown in c in fig. 9B further includes a pause prompt 920, displayed in the recording area 917. The pause prompt 920 is used to tell the user that recording of the focus-tracking video is paused. The content of the pause prompt may be, for example, "recording paused" or "focus-tracking recording paused".
Optionally, a pause prompt 920 may also be displayed in widget 918. The embodiment of the application does not limit the display position, the content and the like of the pause prompt.
Alternatively, when the tracking target is lost, the terminal device may stop recording the focus tracking video. For example, when the tracking target is lost, the terminal device may enter the recording interface shown as c in fig. 9B from the interface shown as a in fig. 9B.
In some embodiments, when the tracking target is lost, a mask layer may be overlaid on the small window, presenting a masked state with lower brightness than the recording picture of the large window, so as to remind the user that recording in the small window is abnormal. Illustratively, the widget 904 in the interface shown as b in fig. 9A is shown in the masked state. The embodiment of the present application does not specifically limit the display of the small window.
It will be appreciated that, when the loss prompt or the pause prompt is displayed in the small window region, occlusion of the preview picture by the prompt can be reduced. Moreover, the object of the loss prompt or the pause prompt is clearer, which is convenient for the user to understand. A loss prompt displayed in the small window is more easily understood by the user as loss of the tracking target in the small window; a pause prompt displayed in the small window is more easily understood as a recording pause of the small window. In addition, when the small window displays the text of the loss prompt or the pause prompt, the user's attention to the small window can be increased.
When the loss prompt or the pause prompt is displayed in the recording area, the terminal device may display more text to prompt the user about subsequent processing, and may display text in a larger font size, which is convenient for the user to read.
In some embodiments, when a plurality of persons are included in the recording screen, the terminal device may display a plurality of tracking frames. The number of tracking frames is not limited in the embodiment of the present application.
In some embodiments, the tracking frame corresponding to the person set as the tracking target is different from the tracking frame display style corresponding to the person not set as the tracking target. In this way, the user is facilitated to distinguish the tracked persons.
In some embodiments, when the recorded picture does not include a person, no person is available that can be set as a tracking target, and no tracking frame is displayed.
Note that, the embodiments shown in fig. 8, 9A and 9B are described with respect to recording from the preview interface with the small window, and thus, the interfaces shown in fig. 8, 9A and 9B each include the small window. The recording interface of the terminal device may also have no small window, and the hiding and displaying manners of the tracking frame are similar to those of the embodiment shown in fig. 8, which is not described herein. The embodiments of the present application are not limited in this regard.
In a second possible implementation manner, after entering the recording scene in the main angle mode, when the small window receives no user operation within a fifth preset duration after recording of the focus-tracking video starts, the small window frame and/or the small window ending control are hidden; or, when the small window receives no user operation within a fifth preset duration after receiving any user operation, the small window frame and/or the small window ending control are hidden. When the terminal device receives a user operation, the small window frame and/or the small window ending control are displayed.
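The inactivity behavior of this implementation manner can be sketched as follows; the class and method names are assumptions, and the timer simply restarts on every user operation (the fifth preset duration):

```python
class WidgetChromeHider:
    """Hides the small window frame / ending control after idle time (a sketch)."""

    def __init__(self, hide_after_s=5.0):
        self.hide_after_s = hide_after_s  # fifth preset duration
        self.last_op_at = 0.0             # recording start counts as time zero
        self.visible = True

    def on_user_op(self, now):
        """Any user operation re-displays the chrome and restarts the timer."""
        self.last_op_at = now
        self.visible = True

    def tick(self, now):
        """Called periodically; hides the chrome once the idle time elapses."""
        if self.visible and now - self.last_op_at >= self.hide_after_s:
            self.visible = False
        return self.visible
```

With a 5-second threshold, the chrome disappears 5 seconds after recording starts (or after the last operation) and reappears on the next operation.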
The hiding and displaying of elements in the widget is described below in connection with fig. 10 and 11.
Fig. 10 is a schematic diagram of a recording interface according to an embodiment of the present application.
Taking a fifth preset duration of 5 seconds(s) as an example, the recording interface shown in a in fig. 10 includes: recording area 1001, pause control, end control, recording duration, and widget 1002. The recording area 1001 displays a recording screen and a tracking frame. The small window 1002 displays a tracking screen for tracking the target. The widget 1002 also includes a widget recording duration and widget ending control 1003. The frame of the widget 1002 is displayed.
When the terminal device does not receive an operation for any position of the widget 1002 within the fifth preset time period, the terminal device enters a recording interface shown as b in fig. 10. The recording interface includes a recording area 1004, a pause control 1005, an end control, a recording duration, a widget 1006, and a widget recording duration. The widget 1006 does not display a widget frame and a widget end control, but similar to the widget 1002, a tracking screen for tracking the target is still displayed in the widget 1006.
Therefore, the terminal equipment can hide the small window ending control, reduce shielding of tracking pictures and improve user experience. The terminal equipment can hide the small window frame, so that the interference to the user is reduced, the user is more immersed, and the user experience is improved.
It can be understood that the fifth preset duration may be 3s or 4s, and the specific value of the fifth preset duration is not limited in this embodiment of the present application.
On the basis of the embodiment, when the terminal equipment receives operations such as clicking, touching and the like at any position of the small window, the terminal equipment displays a small window frame and/or a small window ending control; or when receiving the operation of suspending recording, the terminal equipment displays a small window frame and/or a small window ending control; or when recording the focus tracking video is suspended, the terminal equipment displays a small window frame and/or a small window ending control.
Illustratively, when the user triggers any position of the widget 1006 through clicking, touching, or the like on the recording interface shown in b in fig. 10, the terminal device receives an operation of canceling the immersive experience, and enters the recording interface shown in c in fig. 10. In this recording interface, widget 1007 displays widget border and widget end control 1008.
Therefore, the terminal equipment can call out the small window ending control, and the user can conveniently end recording the focus tracking video. The terminal equipment can call out the small window frame, so that a user can conveniently confirm the position of the small window, and the user experience is improved.
It will be appreciated that, when the terminal device does not receive any operation within the fifth preset duration after receiving the operation of clicking the widget 1006, the terminal device again hides the small window frame and/or the small window ending control.
Illustratively, when the user triggers the pause control 1005 through clicking, touching, or other operations on the recording interface shown in b in fig. 10, the terminal device receives the operation of pausing recording and enters the paused recording interface shown in d in fig. 10. The paused recording interface includes: a recording area, a start control 1009, an end control, and widget 1010. The recording area displays the recording picture and the recording duration. The widget 1010 displays the small window frame, the small window ending control 1011, and the small window recording duration. The recording duration displayed in the recording area and the small window recording duration stop changing; in the paused state, each may be displayed as a combination of "|" and the time.
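The paused-duration display can be illustrated with a small formatting helper; the mm:ss layout and the exact pause marker are assumptions based on the combination of "|" and time described above:

```python
def format_recording_duration(seconds, paused=False):
    """Format a recording duration as mm:ss; prefix a pause marker when paused."""
    minutes, secs = divmod(int(seconds), 60)
    text = f"{minutes:02d}:{secs:02d}"
    return f"| {text}" if paused else text
```

For example, a 75-second recording is shown as "01:15" while recording and "| 01:15" while paused.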
When the user triggers the start control 1009 through clicking, touching or other operations on the recording interface shown in d in fig. 10, the terminal device receives the operation of continuing recording, and continues recording video and tracking video.
It can be understood that in the embodiment shown in fig. 10, when the terminal device pauses recording, the content displayed in the recording area, the content displayed in the small window, etc. may refer to the related description in the embodiment shown in fig. 8, and will not be described herein.
On the basis of the embodiment shown in fig. 10, when the tracking target is lost, the terminal device may pause recording the focus-tracking video. It will be appreciated that the tracking target may be lost for a number of reasons, such that no target is tracked in the recording picture. The reasons include, but are not limited to: movement of the tracking target, movement of the terminal device, and the like.
Fig. 11 is a schematic diagram of a tracking-target-loss scenario after entering the recording scene in the main angle mode, according to an embodiment of the present application.
The recording interface shown in a of fig. 11 includes: recording region 1101, pause control, end control, recording duration, and widget 1102. The recording area 1101 displays a recording screen. The widget 1102 displays a tracking screen of a tracking target, and the widget 1102 does not display a widget frame and a widget end control.
It can be understood that in the embodiment shown in fig. 11, when the tracking target is lost, the processing manner of the terminal device, the content of the small window display, etc. may refer to the related description in the embodiment shown in fig. 9A, which is not repeated herein.
When the tracking target is lost and the tracking target is not in the recording picture, the terminal device enters a recording interface shown as b in fig. 11. The recording interface includes: a recording area 1103, a pause control, an end control, and a widget 1104. The recording area displays a recording picture. The widget 1104 displays a tracking screen, and the widget 1104 does not display a widget frame and a widget end control.
Optionally, when the tracking target is lost, the terminal device further displays a loss prompt. Illustratively, the recording interface shown at b in fig. 11 also includes a loss prompt 1105. The content of the loss prompt 1105 may be "target lost", or "target lost; recording of the focus-tracking video will pause in 5 seconds". Loss prompt 1105 may be displayed in the small window or in the recording area. The embodiment of the application does not limit the specific content, the display position, and the like of the loss prompt.
It can be understood that the loss prompt disappears after being displayed for a third preset duration, or when the terminal device retrieves the tracking target or switches the tracking target. The third preset duration may be 3s or 5s; the specific value of the third preset duration is not limited in this embodiment of the present application.
It can be understood that, after the tracking target is lost, the terminal device may pause recording the focus-tracking video after a fourth preset duration; alternatively, recording of the focus-tracking video may be paused as soon as the tracking target is lost.
For example, taking the fourth preset duration as 5 seconds as an example, if the terminal device does not retrieve the tracking target or receives the operation of switching the tracking target, the terminal device enters the recording interface shown in c in fig. 11 from the recording interface shown in b in fig. 11. The recording interface includes: a recording area 1106, a pause control, an end control, and a widget 1107. The recording area displays a recording picture. The widget 1107 displays a tracking screen of the tracking target, a widget frame, and a widget ending control 1108.
It can be understood that in the embodiment shown in fig. 11, when the widget is paused after the tracking target is lost, the content of the widget display may be referred to the related description in the embodiment shown in fig. 9A, which is not repeated herein.
Optionally, the terminal device displays a pause prompt when recording of the focus-tracking video is paused. Illustratively, the recording interface shown in c of fig. 11 further includes: pause prompt 1109. Pause prompt 1109 is used to prompt the user that recording of the focus-tracking video is paused. The content of the pause prompt may be, for example, "recording paused". Pause prompt 1109 may be displayed in the small window or in the recording area. The embodiment of the application does not limit the display position, the content, and the like of the pause prompt.
Alternatively, when the tracking target is lost, the terminal device may immediately pause recording the focus-tracking video. For example, when the tracking target is lost, the terminal device may enter the recording interface shown in c in fig. 11 directly from the interface shown in a in fig. 11.
In a third possible implementation manner, after entering the recording scene in the main angle mode, when the small window receives no user operation within a sixth preset duration after recording of the focus-tracking video starts, the small window shrinks and is hidden at an edge position of the recording area; or, when the small window receives no user operation within a sixth preset duration after receiving any user operation, the small window shrinks and is hidden at an edge position of the recording area.
The reduction and enlargement of the small window will be described with reference to fig. 12 and 13.
Fig. 12 is a schematic diagram of a recording interface according to an embodiment of the present application.
Taking a sixth preset duration of 5 seconds(s) as an example, the recording interface shown in a in fig. 12 includes: recording area 1201, pause control, end control, recording duration, and widget 1202. Recording area 1201 displays a recording screen and a tracking frame. The widget 1202 displays a tracking screen for tracking a target. The widget 1202 also includes a widget recording duration and widget ending control 1203.
When the terminal device does not receive an operation at any position of the widget 1202 within the sixth preset duration, the terminal device enters the recording interface shown in b in fig. 12. The recording interface includes a recording area 1204, a pause control 1205, an end control, a recording duration, and a reduced widget 1206. The reduced widget 1206 still corresponds to the tracking picture of the tracking target, and the focus-tracking video continues to be recorded.
The reduced window 1206 may be in a gray stripe shape, or may be in a circular shape, a square shape, or the like, and the embodiment of the present application does not limit the reduced window 1206.
The reduced widget 1206 may be located at a frame position of the recording area 1204. Frame positions include, but are not limited to: the left frame, the lower frame, the upper frame, and the right frame.
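One simple way to choose which frame position the reduced widget docks at is to pick the edge nearest the widget's center; this heuristic is an assumption for illustration, not a rule stated in the embodiment:

```python
def nearest_dock_edge(cx, cy, screen_w, screen_h):
    """Return the recording-area edge closest to the widget center (cx, cy)."""
    distances = {
        "left": cx,                 # distance to the left frame
        "right": screen_w - cx,     # distance to the right frame
        "top": cy,                  # distance to the upper frame
        "bottom": screen_h - cy,    # distance to the lower frame
    }
    return min(distances, key=distances.get)
```

A widget dragged near the left edge of a 1080x1920 recording area would thus dock on the left frame.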
Therefore, the terminal equipment can reduce the small window, reduce the shielding of the recorded picture and improve the user experience. The small window is positioned at the frame position of the recording area, so that the interference on the recorded picture is further reduced.
It may be understood that the sixth preset duration may be 3s or 4s, and the specific value of the sixth preset duration is not limited in this embodiment of the present application.
Optionally, when the widget receives the operation of dragging to the edge position of the recording area, the terminal device may also shrink and hide the widget to the edge position of the recording area.
On the basis of the above embodiment, when the terminal device receives an operation such as clicking or touching that triggers the reduced small window, the terminal device displays the small window; or, when receiving the operation of pausing recording, the terminal device displays the small window; or, when recording of the focus-tracking video is paused, the terminal device displays the small window.
Illustratively, when the user triggers the reduced widget 1206 through clicking, touching, dragging, etc. on the recording interface shown in b in fig. 12, the terminal device enters the recording interface shown in c in fig. 12. In this recording interface, the size of the small window 1207 corresponds to the size of the small window 1202 in the interface shown as a in fig. 12. The widget 1207 includes a widget recording duration and a widget ending control. The small window 1207 displays a tracking screen of the focus tracking target.
Therefore, the small window can be also called out, the user can conveniently view the tracking picture, and the user experience is improved.
Illustratively, when the user triggers the pause control 1205 by clicking, touching, or the like on the recording interface shown in b in fig. 12, the terminal device receives an operation to pause recording, and enters the pause recording interface shown in d in fig. 12. The pause recording interface includes: a recording area, a start control 1208, an end control, and a widget 1209. The recording area displays a recording picture. The widget 1209 includes a widget recording duration and a widget ending control. The size of the porthole 1209 corresponds to the size of the porthole 1202 in the interface shown as a in fig. 12.
It can be understood that in the embodiment shown in fig. 12, when the terminal device pauses recording, the content displayed in the recording area, the content displayed in the small window, etc. may refer to the related description in the embodiment shown in fig. 8, and will not be described herein.
When the user triggers the start control 1208 through clicking, touching, or other operations on the recording interface shown in d in fig. 12, the terminal device receives the operation of continuing recording, and continues recording the video and the focus-tracking video.
Based on the embodiment shown in fig. 12, when the tracking target is lost, the terminal device may pause recording the focus-tracking video. It will be appreciated that the tracking target may be lost for a number of reasons, such that no target is tracked in the recording picture. The reasons include, but are not limited to: movement of the tracking target, movement of the terminal device, and the like.
Fig. 13 is a schematic view of a scenario for tracking target loss according to an embodiment of the present application.
The recording interface shown in a of fig. 13 includes: recording region 1301, a pause control, an end control, a recording duration, and reduced widget 1302. The recording area 1301 displays a recording picture. The reduced widget 1302 may be located at a frame position of the recording area 1301. Frame positions include, but are not limited to: the left frame, the lower frame, the upper frame, and the right frame.
When the tracking target is lost and the tracking target is not in the recording picture, the terminal device enters a recording interface shown as b in fig. 13. The recording interface includes: recording region 1303, pause control, end control, and scaled down widget 1304. The recording area displays a recording picture.
The shape of the narrowed window 1304 may be referred to the description of the relevant interface in fig. 12, and will not be repeated here.
In this way, when the tracking target is lost and retrieved multiple times within a short time, repeated switching of the small window between normal display and reduced display can be reduced, the disturbance caused by such changes is lessened, and user experience is improved.
Optionally, when the tracking target is lost, the terminal device further displays a loss prompt. Illustratively, the recording interface shown at b in fig. 13 also includes a loss prompt 1305. The content of the loss prompt 1305 may be "target lost", or "target lost; recording of the focus-tracking video will pause in 5 seconds". Loss prompt 1305 may be displayed in the recording area. The embodiment of the application does not limit the specific content, the display position, and the like of the loss prompt.
It can be understood that the loss prompt disappears after being displayed for a third preset duration, or when the terminal device retrieves the tracking target or switches the tracking target. The third preset duration may be 3s or 5s; the specific value of the third preset duration is not limited in this embodiment of the present application.
It can be understood that, after the tracking target is lost, the terminal device may pause recording the focus-tracking video after a fourth preset duration; alternatively, recording of the focus-tracking video may be paused as soon as the tracking target is lost.
For example, taking the fourth preset duration as 5 seconds as an example, if the terminal device does not retrieve the tracking target or receives the operation of switching the tracking target, the terminal device enters the recording interface shown in c in fig. 13 from the recording interface shown in b in fig. 13. The recording interface includes: recording area 1306, pause control, end control, and widget 1307. The recording area displays a recording picture. The widget 1307 displays a tracking screen of the tracked target, a widget recording time length and a widget ending control.
It can be understood that in the embodiment shown in fig. 13, when the widget is paused after the tracking target is lost, the content of the widget display may be referred to the related description in the embodiment shown in fig. 9A, which is not repeated herein.
Optionally, the terminal device displays a pause prompt when recording of the focus-tracking video is paused. Illustratively, the recording interface shown in c of fig. 13 further includes: pause prompt 1308. Pause prompt 1308 is used to prompt the user that recording of the focus-tracking video is paused. The content of the pause prompt may be, for example, "recording paused". Pause prompt 1308 may be displayed in the small window or in the recording area. The embodiment of the application does not limit the display position, the content, and the like of the pause prompt.
Alternatively, when the tracking target is lost, the terminal device may immediately pause recording the focus-tracking video. For example, when the tracking target is lost, the terminal device may enter the recording interface shown in c in fig. 13 directly from the interface shown in a in fig. 13.
In a fourth possible implementation manner, after entering the recording scene in the main angle mode, when the small window receives no user operation within a seventh preset duration after recording of the focus-tracking video starts, the small window is hidden; or, when the small window receives no user operation within a seventh preset duration after receiving any user operation, the small window is hidden.
The hiding and displaying of the portlets is described below in connection with fig. 14 and 15.
Fig. 14 is a schematic diagram of a recording interface according to an embodiment of the present application.
Taking a seventh preset duration of 5 seconds(s) as an example, the recording interface shown in a in fig. 14 includes: recording area 1401, pause control, end control, recording duration, and widget 1402. The recording area 1401 displays a recording screen and a tracking frame. The widget 1402 displays a tracking screen for tracking a target. Widget 1402 also includes a widget recording duration and widget ending control 1403.
When the terminal device does not receive any operation for the widget 1402 within the seventh preset duration, the terminal device enters the recording interface shown in b in fig. 14. The recording interface does not display the small window; the recording interface includes a recording area 1404, a pause control 1405, an end control, and a recording duration. After the small window is hidden, the terminal device still generates the tracking picture corresponding to the tracking target but does not display it, and continues recording the focus-tracking video.
Therefore, the terminal equipment can hide the small window from being displayed, reduce shielding of recorded pictures and improve user experience.
It may be understood that the seventh preset duration may be 3s or 4s, and the specific value of the seventh preset duration is not limited in this embodiment of the present application.
On the basis of the embodiment, when the terminal equipment receives operations such as clicking, touching and the like at any position of the recording area, the terminal equipment displays a small window; or when receiving the operation of suspending recording, the terminal equipment displays a small window; or when recording the focus tracking video is suspended, the terminal equipment displays a small window.
Illustratively, when the user triggers any position of the recording area 1404 through clicking, touching, or other operations on the recording interface shown in b in fig. 14, the terminal device enters the recording interface shown in c in fig. 14. The recording interface displays a widget 1406. The widget 1406 includes a small window recording duration and a small window ending control. The widget 1406 displays the tracking picture of the tracking target.
Therefore, the small window can be also called out, the user can conveniently view the tracking picture, and the user experience is improved.
Illustratively, when the user triggers the pause control 1405 by clicking, touching, or the like on the recording interface shown in b in fig. 14, the terminal device receives an operation to pause recording, and enters the pause recording interface shown in d in fig. 14. The pause recording interface includes: a recording area, a start control 1406, an end control, and a widget 1407. The recording area displays a recording picture. Widget 1407 includes a widget recording duration and a widget ending control.
It can be understood that in the embodiment shown in fig. 14, when the terminal device pauses recording, the content displayed in the recording area, the content displayed in the small window, etc. may refer to the related description in the embodiment shown in fig. 8, and will not be described herein.
When the user triggers the start control 1406 through clicking, touching or other operations on the recording interface shown in d in fig. 14, the terminal device receives the operation of continuing recording, and continues recording video and focus tracking video.
Based on the embodiment shown in fig. 14, when the tracking target is lost, the terminal device may pause recording the focus-tracking video. It will be appreciated that the tracking target may be lost for a number of reasons, such that no target is tracked in the recording picture. The reasons include, but are not limited to: movement of the tracking target, movement of the terminal device, and the like.
Fig. 15 is a schematic view of a scenario for tracking target loss according to an embodiment of the present application.
The recording interface shown in a of fig. 15 includes: recording area 1501, pause control, end control, recording duration. The recording area 1501 displays a recording screen.
When the tracking target is lost and the tracking target is not in the recording picture, the terminal device enters a recording interface shown as b in fig. 15. The recording interface includes: recording area 1502, pause control, end control. The recording area displays a recording picture.
In this way, when the tracking target is lost and retrieved multiple times within a short time, repeated hiding and re-display of the small window can be reduced, the disturbance caused by the small window appearing and disappearing repeatedly is lessened, and user experience is improved.
Optionally, when the tracking target is lost, the terminal device further displays a loss prompt. Illustratively, the recording interface shown at b in fig. 15 also includes a loss prompt 1503. The content of the loss prompt 1503 may be "target lost", or "target lost; recording of the focus-tracking video will pause in 5 seconds". Loss prompt 1503 may be displayed in the recording area. The embodiment of the application does not limit the specific content, the display position, and the like of the loss prompt.
It can be understood that the loss prompt disappears after being displayed for a third preset duration, or when the terminal device retrieves the tracking target or switches the tracking target. The third preset duration may be 3s or 5s; the specific value of the third preset duration is not limited in this embodiment of the present application.
It can be understood that, after the tracking target is lost, the terminal device may pause recording the focus-tracking video after a fourth preset duration; alternatively, recording of the focus-tracking video may be paused as soon as the tracking target is lost.
For example, taking the fourth preset duration as 5 seconds as an example, if the terminal device does not retrieve the tracking target or receives the operation of switching the tracking target, the terminal device enters the recording interface shown in c in fig. 15 from the recording interface shown in b in fig. 15. The recording interface includes: recording area 1504, pause control, end control, and widget 1505. The recording area displays a recording picture. The widget 1505 displays a tracking screen of a tracking target, a widget recording time length and a widget ending control.
It can be understood that in the embodiment shown in fig. 15, when the widget is paused after the tracking target is lost, the content of the widget display may be referred to the related description in the embodiment shown in fig. 9A, which is not repeated herein.
Optionally, the terminal device displays a pause prompt when recording of the focus-tracking video is paused. Illustratively, the recording interface shown in c of fig. 15 further includes: pause prompt 1506. Pause prompt 1506 is used to prompt the user that recording of the focus-tracking video is paused. The content of the pause prompt may be, for example, "recording paused". Pause prompt 1506 may be displayed in the small window or in the recording area. The embodiment of the application does not limit the display position, the content, and the like of the pause prompt.
Alternatively, when the tracking target is lost, the terminal device may immediately pause recording the focus-tracking video. For example, when the tracking target is lost, the terminal device may enter the recording interface shown in c in fig. 15 directly from the recording interface shown in a in fig. 15.
It should be noted that, in the above embodiments, recording is started from a preview interface with the small window in the main angle mode; therefore, the recording duration in the recording area in the interfaces shown in fig. 8 to 15 is consistent with the small window recording duration. When the terminal device selects a tracking target after recording has started, or continues recording the focus-tracking video after the small window was paused, the recording duration in the recording area may be inconsistent with the small window recording duration; the embodiments of the present application are not limited in this regard.
Any one of the above four possible implementations may be applied to the terminal device alone, or multiple implementations may be applied in combination. For example, when the first possible implementation and the second possible implementation are applied to the terminal device at the same time: when the recording area receives no operation within the first preset duration, the tracking frame is hidden; and when the small window receives no operation within the fifth preset duration, the small window ending control is hidden.
It can be appreciated that, when the second to fourth possible implementations are applied to the terminal device at the same time, the seventh preset duration is longer than the sixth preset duration, which is longer than the fifth preset duration. That is, when the fifth preset duration elapses, the small window ending control is hidden; when the sixth preset duration elapses, the small window is reduced; and when the seventh preset duration elapses, the small window is no longer displayed.
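When implementations two to four are combined, the idle time maps to progressively stronger hiding; the following sketch assumes example threshold values satisfying fifth < sixth < seventh, with the state labels chosen for illustration:

```python
def widget_presentation(idle_s, fifth_s=3.0, sixth_s=4.0, seventh_s=5.0):
    """Map idle time to the small window's presentation (thresholds are examples)."""
    assert fifth_s < sixth_s < seventh_s
    if idle_s >= seventh_s:
        return "hidden"        # small window no longer displayed
    if idle_s >= sixth_s:
        return "reduced"       # small window shrunk to the recording-area edge
    if idle_s >= fifth_s:
        return "no_chrome"     # frame and ending control hidden
    return "full"
```

Any user operation would reset the idle time to zero, restoring the full presentation.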
It will be appreciated that the terminal device may employ any one, any two, any three, or all four of the above possible implementations. The technical means and technical effects are similar to those described above and are not repeated here.
It will be appreciated that the interfaces of the terminal device described above are merely examples, and the interface of the terminal device may include more or fewer elements. Further, the shape and form of each control in the above embodiments are merely examples. The embodiment of the application does not limit the content of the display interface or the shape and form of each control.
On the basis of the above embodiments, the embodiments of the present application provide a video recording method. Fig. 16 is a schematic flow chart of a video recording method according to an embodiment of the present application.
As shown in fig. 16, the recording method may include the steps of:
s1601, the terminal device displays a first interface of the camera application.
In the embodiment of the application, the first interface comprises a first window and a second window; the first window displays a first picture acquired by the first camera, the second window displays a second picture, and the second picture is a part of the first picture.
It is understood that the first interface may be a preview interface with a tracking target set (for example, the interface shown by a in fig. 3B), or may be a recording interface with a tracking target set (for example, the interface shown by a in fig. 3C). The first window may be understood as the preview area or the recording area above; the picture acquired by the first camera in real time may be understood as the preview picture or the recording picture; the second window may be understood as the small window above; and the picture displayed in the second window may be understood as the tracking picture corresponding to the tracking target.
S1602, at a first time, when the terminal device detects that the first position of the first screen includes the first object, the second screen includes the first object.
The first object may be understood as an object corresponding to the tracking target. Objects include, but are not limited to: characters, objects (vehicles, etc.), pets (cats, dogs, etc.).
S1603, at a second moment, when the terminal device detects that the second position of the first picture includes the first object, the second picture includes the first object, the second moment being later than the first moment; wherein, at the first moment, the second window displays a frame and/or a first control; at the second moment, the second window does not display the frame and/or the first control.
In the embodiment of the present application, the picture (tracking picture) displayed by the second window changes as the position of the tracking target changes. Specifically, as the position of the tracking target in the first picture changes, the picture displayed in the second window changes accordingly; reference may be made to fig. 3B or fig. 3C. For example, the interface at the first moment may be the interface shown by a in fig. 3B, and the interface at the second moment may be the interface shown by b in fig. 3B; alternatively, the interface at the first moment may be the interface shown by a in fig. 3C, and the interface at the second moment may be the interface shown by b in fig. 3C.
It is understood that the first control is a control in a widget, e.g., the widget ending control above, etc. The first control is not limited herein.
At the first moment, the frame and/or the first control is displayed on the second window; at the second moment, the second window does not display the frame and/or the first control. That is, the terminal device may hide the frame of the small window and/or the first control.
For example, the interface displayed at the first time may correspond to the interface shown in a in fig. 10, with a widget frame and a widget ending control displayed; the interface displayed at the second time may correspond to the interface shown in b in fig. 10, with the widget frame and widget ending control not displayed.
In summary, the terminal device can additionally obtain and display the picture corresponding to the tracking target, and can hide the frame of the small window and/or the controls in the small window while the tracking picture is displayed, thereby reducing occlusion of the tracking picture and improving user experience. By hiding the small window frame, the terminal device further improves the immersive effect.
Optionally, the time interval between the first time instant and the second time instant is greater than a first threshold value.
The first threshold may correspond to the fifth preset duration described above. The first moment may be the moment when recording of the focus tracking video starts, and the second moment may be any moment after the fifth preset duration has elapsed since recording started. Alternatively, the first moment may be the moment when the terminal device receives a trigger operation on the small window, and the second moment may be any moment after the fifth preset duration has elapsed since that trigger operation.
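This timing condition can be expressed as a simple check (an illustrative sketch only; the function and parameter names are assumed):

```python
def frame_hidden(first_moment: float, second_moment: float,
                 first_threshold: float) -> bool:
    """At the second moment, the small window's frame and/or first control
    are hidden only if the interval since the first moment (recording
    start, or the last trigger operation on the small window) exceeds the
    first threshold, i.e. the fifth preset duration."""
    return second_moment - first_moment > first_threshold
```

A trigger operation on the small window would reset the anchor moment, restarting the countdown before the frame is hidden again.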
Optionally, the first object is centrally displayed in the second screen.
Optionally, the second window floats on an upper layer of the first window, and the second window is smaller than the first window.
Optionally, after the second moment, the terminal device detects a triggering operation of the user on the second window; and responding to the triggering operation of the second window, and displaying the frame of the second window and/or the first control.
It can be understood that the triggering operation of the second window includes, but is not limited to, clicking the widget, adjusting the size of the widget, and adjusting the position of the widget.
For example, after the second moment, when the user clicks any position of the widget, the border of the widget and/or the control in the widget are displayed, and the terminal device may enter the recording interface shown in c in fig. 10 from the recording interface shown in b in fig. 10. And will not be described in detail herein.
Optionally, the first interface further includes a pause control, and the method further includes: after the second moment, the terminal equipment detects the triggering operation of the pause control; in response to triggering operation of the pause control, the terminal equipment displays a second interface of the camera application, wherein the second interface comprises a first window and a second window, the first window and the second window both comprise marks for pausing recording, the second window comprises a frame and/or the first control, first recording duration information displayed by the first window is kept unchanged, second recording duration information displayed by the second window is kept unchanged, the first window also displays a first picture acquired by the first camera, and the second window displays a second picture.
In this embodiment of the present application, the triggering operation of the pause control may be a clicking operation, or may be other operations, which is not limited herein. The mark for suspending recording may be "|" or other marks, and the form of the mark is not limited herein. The first recording duration information may correspond to the recording duration; the second recording duration information may correspond to the small window recording duration.
For example, after the second moment, when the user clicks the pause control, the frame of the widget and/or the control in the widget are displayed, and the terminal device may enter the recording interface shown in d in fig. 10 from the recording interface shown in b in fig. 10. And will not be described in detail herein.
Therefore, the controls in the small window can be called out during a pause, which facilitates user operation. In addition, the frame of the small window and/or the controls in the small window make the small window more visible, drawing the user's attention to it.
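The pause behaviour described above, with both recording durations frozen and the small window's controls called out, can be sketched as follows (a hypothetical illustration; the class and member names are assumed):

```python
class DualRecorder:
    """Sketch of the paired recording state: the first window (recording
    area) and second window (small window) each keep a duration, and
    tapping the pause control freezes both while calling out the small
    window's frame and controls."""

    def __init__(self):
        self.main_duration = 0.0    # first recording duration information
        self.widget_duration = 0.0  # second recording duration information
        self.paused = False
        self.widget_controls_visible = False

    def tick(self, dt: float):
        """Advance time; while paused, both durations remain unchanged."""
        if not self.paused:
            self.main_duration += dt
            self.widget_duration += dt

    def on_pause_control(self):
        """Trigger operation on the pause control (second interface)."""
        self.paused = True
        self.widget_controls_visible = True
```

While paused, `tick` leaves both duration counters untouched, matching the requirement that the first and second recording duration information remain unchanged.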
Optionally, the method further comprises: and at a third moment, when the terminal equipment detects that the first picture does not comprise the first object, displaying the frame and/or the first control of the second window, wherein the third moment is later than the second moment.
It can be understood that when the first picture does not include the first object, the first object is lost, and the terminal device can call out the controls in the small window, which facilitates user operation. In addition, the frame of the small window and/or the controls in the small window make the small window more visible, drawing the user's attention to it.
Optionally, the method further comprises: during the fourth time to the fifth time, the terminal equipment continuously detects that the first picture does not comprise the first object, and the second window does not comprise a frame and/or a first control;
at a fifth moment, the terminal device displays a third interface of the camera application, the third interface comprises a first window and a second window, the second window comprises an identifier for suspending recording, a frame and/or a first control, second recording duration information displayed by the second window is kept unchanged, the first window does not comprise the identifier for suspending recording, the displayed first recording duration information is changed continuously, the first window also displays a picture acquired by the first camera, and the fifth moment is later than the fourth moment, and the fourth moment is later than the second moment.
It will be appreciated that when the first picture does not include the first object, the first object is lost. Illustratively, the interface at the second time may correspond to the recording interface shown at a in fig. 11, and the interface at the fourth time may correspond to the interface shown at b in fig. 11. The interface at the fifth time may correspond to the recording interface shown as c in fig. 11. And will not be described in detail herein.
It can be understood that when the first picture does not include the first object, the first object is lost. The terminal device can pause the small window recording after the first object has been lost for a certain duration and call out the controls in the small window, which facilitates user operation. In addition, the frame of the small window and/or the controls in the small window make the small window more visible, drawing the user's attention to it. Moreover, because the frame of the small window and/or the controls in the small window are not displayed during the loss period, repeated showing and hiding of the frame and controls caused by frequent loss of the tracking target can be reduced, improving user experience.
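This loss-handling logic, leaving the display unchanged during a short loss window and then pausing the small window and calling out its controls once the loss persists, can be sketched as follows (a hypothetical illustration; the names and timeout value are assumed):

```python
class TrackingLossHandler:
    """Pause small window recording and call out its frame/controls only
    after the tracking target has been continuously lost for a given
    duration, avoiding repeated show/hide on frequent, brief losses."""

    def __init__(self, pause_after: float):
        self.pause_after = pause_after   # loss duration before pausing
        self.lost_since = None           # start of the current loss period
        self.widget_paused = False
        self.controls_visible = False

    def on_frame(self, target_in_picture: bool, now: float):
        if target_in_picture:
            self.lost_since = None       # target re-acquired: keep state
            return
        if self.lost_since is None:
            self.lost_since = now        # loss begins (the fourth moment)
        elif (now - self.lost_since >= self.pause_after
              and not self.widget_paused):
            self.widget_paused = True    # the fifth moment: pause widget
            self.controls_visible = True # call out frame and controls
```

Between the fourth and fifth moments the handler changes nothing, which is what suppresses flicker when the tracking target is lost and re-acquired frequently.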
Optionally, the method further comprises: at a sixth moment, when the terminal device detects that the first picture includes an object, displaying a tracking identifier in the first window, the tracking identifier being associated with the object; at a seventh moment, when the terminal device detects that the first picture includes the object, the tracking identifier is not displayed in the first window, the seventh moment being later than the sixth moment; the sixth moment is earlier than the first moment, later than the first moment, or the same as the first moment, and the time interval between the sixth moment and the seventh moment is greater than a second threshold.
The tracking identifier may be the tracking box above, or may be in other forms (e.g., thumbnail images, numbers, icons, etc. of the object), which are not limited herein. The tracking mark may be located at the position where the object is displayed, or may be displayed in a line at the edge of the preview area or the recording area, which is not limited herein.
Illustratively, the interface at the sixth time may correspond to the interface shown as a in fig. 8, with a tracking frame displayed; the interface at the seventh time may correspond to the interface shown in b in fig. 8, and the tracking frame is not displayed.
The second threshold may correspond to the first preset duration described above. The sixth moment may be the moment when recording of the focus tracking video starts, and the seventh moment may be any moment after the first preset duration has elapsed since recording started. Alternatively, the sixth moment may be the moment when the terminal device receives a trigger operation on the recording area, and the seventh moment may be any moment after the first preset duration has elapsed since that trigger operation. This is not limited herein.
Therefore, the terminal device can hide tracking identifiers such as the tracking frame, reducing occlusion of the recorded picture, improving the immersive effect, and improving user experience. In addition, the small window frame and/or the controls in the small window, and the tracking identifier in the recording area, can be controlled separately, reducing the impact of user operations.
Optionally, at the seventh moment, the first window further includes a tracking identifier display prompt, where the tracking identifier display prompt is used to prompt the user how to display the tracking identifier.
The tracking frame display prompt may be, for example, "tap the recording area to display the tracking frame". The embodiment of the application does not limit the position, display form, specific content, and the like of the tracking frame display prompt.
For example, the interface at the seventh time may correspond to the interface shown in b in fig. 8, with a tracking frame display prompt displayed.
Thus, the terminal equipment can prompt the user to trace the display mode of the frame, and user experience is improved.
Optionally, at an eighth time, the first window does not display the tracking identifier display prompt, and the eighth time is later than the seventh time.
The content of the display and the disappearing time of the tracking identifier display prompt can be referred to the description of the corresponding content in fig. 8, which is not repeated here.
In a possible implementation, the tracking frame display prompt may disappear after a second preset time period is displayed. The second preset duration may be 3s or 5s, and the specific value of the second preset duration is not limited in this embodiment of the present application.
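This auto-dismiss behaviour can be sketched as follows (illustrative only; the 3 s default is one of the example values above):

```python
def prompt_visible(elapsed_since_shown: float,
                   second_preset: float = 3.0) -> bool:
    """The tracking-frame display prompt disappears once the second
    preset duration has elapsed since it was shown."""
    return elapsed_since_shown < second_preset
```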
Therefore, the terminal device can stop displaying the tracking identifier display prompt, reducing its occlusion of the recorded picture, reducing interference, and improving user experience.
Optionally, the method further comprises: after the seventh moment, the terminal equipment detects the triggering operation of the user on the first window; in response to a triggering operation on the first window, the first window includes a first tracking identifier and does not include a tracking identifier display prompt.
The triggering operation of the first window may be a clicking operation or other operations, which is not limited in the embodiment of the present application.
It will be appreciated that when the terminal device receives a user operation in the recording area, the tracking frame is displayed and the tracking frame display prompt disappears. Illustratively, when the terminal device detects that the user clicks the recording area in the interface shown in b in fig. 8, the tracking frame is displayed and the tracking frame display prompt is no longer displayed.
Optionally, the method further comprises: after the seventh moment, the terminal equipment detects the triggering operation of the pause control; in response to triggering operation of the pause control, the terminal device displays a second interface of the camera application, wherein the second interface comprises a first window and a second window, the first window and the second window both comprise marks for pausing recording, the first window comprises a first tracking mark, first recording duration information displayed by the first window is kept unchanged, second recording duration information displayed by the second window is kept unchanged, the first window also displays pictures acquired by the first camera, and the second window displays second pictures.
The triggering operation of the pause control, the record pause identifier, the first record duration information, the second record duration information, and the like can be referred to the above related description, and are not repeated here.
Illustratively, after the seventh moment, when the user clicks the pause control, the first tracking identifier is displayed, and the terminal device may enter the recording interface shown as d in fig. 8 from the recording interface shown as b in fig. 8. And will not be described in detail herein.
Therefore, the tracking identifier can be called out during a pause, which facilitates switching or confirming the tracking target and can improve user experience.
Optionally, the method further comprises: at a ninth moment, when the terminal device detects that the first picture does not comprise the first object, the first window comprises the tracking identifier, and the ninth moment is later than the seventh moment.
It can be understood that, in the embodiment of the present application, the first object is a tracking target, and when the first object is lost, the terminal device may call out the tracking identifier, so as to facilitate the user to switch the tracking target. Illustratively, the interface at the seventh time may correspond to the recording interface shown in a of fig. 9A, and the interface displayed at the ninth time may correspond to the recording interface shown in b of fig. 9A. And will not be described in detail herein.
It can be understood that when the first picture does not include the first object, the first object is lost. The terminal device can display the tracking identifier, which facilitates the user switching the object corresponding to the tracking target and facilitates user operation.
Optionally, the method further comprises: during the tenth time to the eleventh time, the terminal device continuously detects that the first screen does not include the first object, and the first window does not include the tracking identifier; at an eleventh moment, the terminal device displays a third interface of the camera application, the first window displays the tracking identifier, and the eleventh moment is later than the tenth moment.
It will be appreciated that when the first picture does not include the first object, the first object is lost. Illustratively, the interface at the seventh time may correspond to the recording interface shown by a in fig. 9B, and the interface at the tenth time may correspond to the interface shown by B in fig. 9B. The interface at the eleventh time may correspond to the recording interface shown as c in fig. 9B. And will not be described in detail herein.
It can be understood that when the first picture does not include the first object, the first object is lost. The terminal device pauses the small window recording after the first object has been lost for a certain duration and displays the tracking identifier, which facilitates the user switching the object corresponding to the tracking target and facilitates user operation. In addition, because the tracking identifier is not displayed during the loss period, repeated showing and hiding of the tracking identifier caused by frequent loss of the tracking target can be reduced, improving user experience.
Optionally, at the third time or at the ninth time, the first interface further includes a loss prompt, where the loss prompt is used to prompt that the first object is lost;
and/or, during the fourth time to the fifth time or during the tenth time to the eleventh time, the first interface further comprises a loss prompt, and/or, at the fifth time or at the eleventh time, the third interface further comprises a pause prompt, wherein the pause prompt is used for prompting the second window to pause recording.
Thus, when the first object is lost, the user can be prompted that the tracking target is lost; when the small window pauses recording after the first object is lost, the user can be prompted that the small window recording has paused, making it easy for the user to learn of the abnormal situation and improving user experience.
Optionally, at a twelfth time, the first interface includes a second window; at a thirteenth time, the first interface includes a first window and does not include a second window, the first window includes a second control, the second control is associated with the second window, the thirteenth time is later than the twelfth time, the twelfth time is the first time, a time interval between the thirteenth time and the twelfth time is greater than a third threshold, and the third threshold is greater than the first threshold.
In this embodiment of the present application, the second control may be a scaled-down widget, or may be a control of another form. When the second control is triggered, the terminal device may display a normally sized widget. Illustratively, the interface displayed at the twelfth time corresponds to the recording interface shown as a in fig. 12; the interface displayed at the thirteenth time may correspond to the recording interface shown in b in fig. 12.
It is understood that the third threshold may correspond to the sixth preset duration described above. The twelfth moment may be the moment when recording of the focus tracking video starts, and the thirteenth moment may be any moment after the sixth preset duration has elapsed since recording started. Alternatively, the twelfth moment may be the moment when the terminal device receives a trigger operation on the recording area, and the thirteenth moment may be any moment after the sixth preset duration has elapsed since that trigger operation. This is not limited herein.
Thus, the terminal equipment can reduce the area occupied by the small window, reduce the shielding of the recorded picture, improve the immersive effect and improve the user experience.
Optionally, the method further comprises: after thirteenth moment, the terminal equipment detects triggering operation of the second control; in response to a triggering operation on the second control, the first interface includes a first window and a second window, the first window not including the second control.
The triggering operation of the second control may be a clicking operation, a dragging operation, or other operations, which is not limited in the embodiment of the present application.
For example, when the terminal device detects that the operation such as clicking, touching, dragging, etc. by the user triggers the small window 1206 after shrinking in the recording interface shown in b in fig. 12, the terminal device enters the recording interface shown in c in fig. 12, and displays the small window with a normal size.
Therefore, the small window can be also called out, the user can conveniently view the tracking picture, and the user experience is improved.
Optionally, the method further comprises: after thirteenth moment, the terminal equipment detects the triggering operation of the pause control; in response to triggering operation of the pause control, the terminal device displays a second interface of the camera application, wherein the second interface comprises a first window and a second window, the first window and the second window both comprise marks for pausing recording, first recording duration information displayed by the first window is kept unchanged, second recording duration information displayed by the second window is kept unchanged, and the first window also displays pictures acquired by the first camera.
The triggering operation of the pause control, the record pause identifier, the first record duration information, the second record duration information, and the like can be referred to the above related description, and are not repeated here.
Illustratively, after the thirteenth moment, when the user clicks the pause control, a small window is displayed, and the terminal device may enter the recording interface shown as d in fig. 12 from the recording interface shown as b in fig. 12. And will not be described in detail herein.
Therefore, the small window can be called out when the user pauses, so that the user can conveniently view the tracking picture, and the user experience is improved.
Optionally, the method further comprises: at a fourteenth moment, when the terminal equipment detects that the first picture does not comprise the first object, the first interface comprises a second window, the first window does not comprise a second control, and the fourteenth moment is later than the thirteenth moment.
It can be understood that, in the embodiment of the present application, the first object is a tracking target, and when the first object is lost, the terminal device can call out a small window, so that a user can conveniently view a tracking picture, and user experience is improved. In addition, the attention of the user to the small window can be improved.
Optionally, at the fourteenth moment, the first interface further includes a loss hint, where the loss hint is used to hint that the first object is lost.
In this way, the user may be prompted to track the loss of the target. And the user experience is improved.
Optionally, the method further comprises: during the fifteenth time to the sixteenth time, when the terminal equipment continuously detects that the first picture does not comprise the first object, the first interface does not comprise a second window, and the first window comprises a second control; at a sixteenth moment, the terminal device displays a third interface of the camera application, the third interface comprises a first window and a second window, the second window comprises an identifier for suspending recording, the second recording duration information displayed by the second window is kept unchanged, the first window does not comprise the identifier for suspending recording, the displayed first recording duration information is changed continuously, the first window also displays a picture acquired by the first camera, the sixteenth moment is later than the fifteenth moment, and the fifteenth moment is later than the thirteenth moment.
It will be appreciated that when the first picture does not include the first object, the first object is lost. Illustratively, the interface at the thirteenth moment may correspond to the recording interface shown in a of fig. 13, and the interface at the fifteenth moment may correspond to the interface shown in b of fig. 13. The interface at the sixteenth moment may correspond to the recording interface shown in c of fig. 13. Details are not repeated here.
It can be understood that when the first picture does not include the first object, the first object is lost. The terminal device pauses the small window recording after the first object has been lost for a certain duration and displays the small window, which facilitates user operation. In addition, because the second control is displayed during the loss period rather than the normally sized small window, repeated alternation between the small window and the second control caused by frequent loss of the tracking target can be reduced, improving user experience.
Optionally, at the seventeenth moment, the first interface includes a second window; at an eighteenth moment, the first interface includes the first window and does not include the second window, the eighteenth moment is later than the seventeenth moment, the seventeenth moment is the first moment, a time interval between the eighteenth moment and the seventeenth moment is greater than a fourth threshold, and the fourth threshold is greater than the third threshold.
Illustratively, the interface displayed at the seventeenth time corresponds to the recording interface shown as a in fig. 14; the interface displayed at the eighteenth time may correspond to the recording interface shown in b in fig. 14.
It is understood that the fourth threshold may correspond to the seventh preset duration described above. The seventeenth moment may be the moment when recording of the focus tracking video starts, and the eighteenth moment may be any moment after the seventh preset duration has elapsed since recording started. Alternatively, the seventeenth moment may be the moment when the terminal device receives a trigger operation on the recording area, and the eighteenth moment may be any moment after the seventh preset duration has elapsed since that trigger operation. This is not limited herein.
Therefore, the terminal equipment can hide the small window, further reduce shielding of the recorded picture, improve the immersive effect and improve the user experience.
Optionally, the method further comprises: after the eighteenth moment, the terminal equipment detects the triggering operation of the pause control; in response to triggering operation of the pause control, the terminal device displays a second interface of the camera application, wherein the second interface comprises a first window and a second window, the first window and the second window both comprise marks for pausing recording, first recording duration information displayed by the first window is kept unchanged, second recording duration information displayed by the second window is kept unchanged, and the first window also displays pictures acquired by the first camera.
The triggering operation of the pause control, the record pause identifier, the first record duration information, the second record duration information, and the like can be referred to the above related description, and are not repeated here.
Illustratively, after the eighteenth moment, when the user clicks the pause control, a small window is displayed, and the terminal device may enter the recording interface shown as d in fig. 14 from the recording interface shown as b in fig. 14. And will not be described in detail herein.
Therefore, the small window can be called out when the user pauses, so that the user can conveniently view the tracking picture, and the user experience is improved.
Optionally, the method further comprises: at a nineteenth time, when the terminal device detects that the first object is not included in the first screen, the first interface includes the second window, and the nineteenth time is later than the eighteenth time.
It can be understood that, in the embodiment of the present application, the first object is a tracking target, and when the first object is lost, the terminal device can call out a small window, so that a user can conveniently view a tracking picture, and user experience is improved. In addition, the attention of the user to the small window can be improved.
Optionally, at nineteenth time, the first interface further includes a loss hint for hinting that the first object is lost.
In this way, the user may be prompted to track the loss of the target. And the user experience is improved.
Optionally, the method further comprises: during the twentieth moment to the twenty-first moment, when the terminal device continuously detects that the first picture does not include the first object, the first interface does not include the second window; at the twenty-first moment, the terminal device displays a third interface of the camera application, the third interface includes the first window and the second window, the second window includes an identifier for pausing recording, the second recording duration information displayed by the second window remains unchanged, the first window does not include the identifier for pausing recording, the first recording duration information displayed by the first window continues to change, the first window also displays the picture acquired by the first camera, the twenty-first moment is later than the twentieth moment, and the twentieth moment is later than the eighteenth moment.
It will be appreciated that when the first picture does not include the first object, the first object is lost. Illustratively, the interface at the eighteenth moment may correspond to the recording interface shown in a of fig. 15, the interface at the twentieth moment may correspond to the interface shown in b of fig. 15, and the interface at the twenty-first moment may correspond to the recording interface shown in c of fig. 13. Details are not repeated here.
After the first object has been lost for a certain time, the terminal device pauses the recording of the small window and displays the small window, which facilitates user operation. In addition, during the loss period the second control is displayed while the small window is not, which can reduce the small window being repeatedly shown and hidden when the tracking target is lost frequently, improving the user experience.
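The loss-timeout logic described above can be sketched as a timer: the small-window recording is paused only after the target has been continuously lost for a certain duration, while the main window keeps recording. The timeout value and all names are assumptions for illustration.

```python
from typing import Optional

LOSS_PAUSE_TIMEOUT_S = 5.0  # assumed duration of continuous loss before pausing

class SmallWindowRecorder:
    def __init__(self) -> None:
        self.lost_since: Optional[float] = None
        self.small_window_recording = True

    def on_frame(self, now_s: float, target_detected: bool) -> None:
        if target_detected:
            self.lost_since = None  # target reacquired: reset the loss timer
            return
        if self.lost_since is None:
            self.lost_since = now_s  # loss just started
        elif now_s - self.lost_since >= LOSS_PAUSE_TIMEOUT_S:
            # Continuous loss long enough: pause only the small-window
            # recording; the first window keeps recording and its
            # duration keeps counting.
            self.small_window_recording = False
```

Requiring *continuous* loss before pausing is what avoids flapping when the tracking target drops out of the frame only briefly.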
Optionally, at the twenty-first moment, the first window and/or the second window display a pause prompt, where the pause prompt is used to prompt that recording in the second window is paused.
Optionally, the method further comprises: from the twentieth moment to the twenty-first moment, the first interface further includes the loss prompt; and/or, at the twenty-first moment, the third interface further includes a pause prompt, where the pause prompt is used to prompt that recording in the second window is paused.
Optionally, the loss prompt is located in the first window or in the second window; the pause prompt is located in the first window or in the second window.
It will be appreciated that when the loss prompt or the pause prompt is displayed in the small window region, occlusion of the preview picture can be reduced. Moreover, the meaning of the loss prompt or the pause prompt becomes clearer and easier for the user to understand: a loss prompt displayed in the small window is more readily understood as the tracking target in the small window being lost, and a pause prompt displayed in the small window is more readily understood as recording in the small window being paused. In addition, when the small window displays the text of the loss prompt or the pause prompt, the user's attention to the small window can be increased.
When the loss prompt or the pause prompt is displayed in the recording area, the terminal device can display more text to prompt the user about subsequent processing, and can also display text in a larger font size, making it easier for the user to read.
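The placement trade-off above (short prompt in the small window versus longer, larger text in the recording area) can be expressed as a simple style lookup. The character limits and font sizes below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of prompt styling by region: the small window keeps
# prompts short to avoid occluding the tracking picture, while the larger
# recording area allows more text and a bigger font.

def prompt_style(region: str) -> dict:
    if region == "small_window":
        return {"max_chars": 12, "font_px": 12}
    if region == "recording_area":
        return {"max_chars": 48, "font_px": 18}
    raise ValueError(f"unknown region: {region}")
```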
The video recording method according to the embodiments of the present application has been described above; the apparatus for performing the method according to the embodiments of the present application is described below. As shown in fig. 17, fig. 17 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application. The video recording apparatus may be the terminal device in the embodiments of the present application, or a chip or chip system in the terminal device.
As shown in fig. 17, the video recording apparatus 2100 may be used in a communication device, a circuit, a hardware component, or a chip, and includes a display unit 2101 and a processing unit 2102. The display unit 2101 is configured to support the display steps performed by the video recording apparatus 2100, and the processing unit 2102 is configured to support the information processing steps performed by the video recording apparatus 2100.
In a possible implementation, the video recording apparatus 2100 may further include a communication unit 2103. Specifically, the communication unit is configured to support the video recording apparatus 2100 in performing the steps of sending and receiving data. The communication unit 2103 may be an input or output interface, a pin, a circuit, or the like.
In a possible implementation, the video recording apparatus may further include a storage unit 2104. The processing unit 2102 and the storage unit 2104 are connected through a line. The storage unit 2104 may include one or more memories, which may be devices, circuits, or other means for storing programs or data. The storage unit 2104 may exist independently and be connected through a communication line to the processing unit 2102 of the video recording apparatus, or may be integrated with the processing unit 2102.
The storage unit 2104 may store computer-executable instructions of the method in the terminal device, so that the processing unit 2102 executes the method in the foregoing embodiments. The storage unit 2104 may be a register, a cache, a RAM, or the like, and may be integrated with the processing unit 2102. The storage unit 2104 may also be a read-only memory (ROM) or another type of static storage device that stores static information and instructions, and may be independent of the processing unit 2102.
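The unit structure of fig. 17 — mandatory display and processing units, optional communication and storage units — can be sketched as a composition of callables. The dataclass and its field names are assumptions for illustration; the real units are hardware or chip components, not Python objects.

```python
# Hypothetical sketch of the apparatus structure of fig. 17. Callables
# stand in for the hardware units; the communication and storage units
# are optional, mirroring the "possible implementation" variants above.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class VideoRecordingApparatus:
    display_unit: Callable[[str], None]            # supports display steps
    processing_unit: Callable[[bytes], bytes]      # supports information processing
    communication_unit: Optional[object] = None    # optional: sends/receives data
    storage_unit: dict = field(default_factory=dict)  # optional: programs/data

    def process_and_display(self, frame: bytes) -> None:
        # The processing unit transforms a frame, the storage unit keeps
        # the result, and the display unit renders a status line.
        processed = self.processing_unit(frame)
        self.storage_unit["last_frame"] = processed
        self.display_unit(f"{len(processed)} bytes displayed")
```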
The embodiment of the present application provides a terminal device, which may also be called a terminal, user equipment (UE), a mobile station (MS), a mobile terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with a wireless transceiving function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like.
The terminal device includes a processor and a memory. The memory stores computer-executable instructions; the processor executes the computer-executable instructions stored in the memory to cause the terminal device to perform the foregoing method.
The embodiment of the application provides a terminal device, the structure of which is shown in fig. 1. The memory of the terminal device may be configured to store at least one program instruction, and the processor is configured to execute the at least one program instruction to implement the technical solutions of the foregoing method embodiments. The implementation principles and technical effects are similar to those of the related method embodiments and are not repeated here.
The embodiment of the application provides a chip. The chip includes a processor configured to invoke a computer program in a memory to perform the technical solutions in the foregoing embodiments. The implementation principles and technical effects are similar to those of the related embodiments and are not repeated here.
The embodiment of the application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the technical solutions in the foregoing embodiments. The implementation principles and technical effects are similar to those of the related embodiments and are not repeated here.
The embodiment of the application provides a computer-readable storage medium storing program instructions which, when executed by a terminal device, cause the terminal device to perform the technical solutions of the foregoing embodiments. The implementation principles and technical effects are similar to those of the related embodiments and are not repeated here. The methods described in the foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include computer storage media and communication media, and further include any medium that can transfer a computer program from one place to another. The storage medium may be any target medium accessible by a computer.
The computer-readable medium may include RAM, ROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store the desired program code in the form of instructions or data structures and can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing detailed description of the invention has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and description only, and is not intended to limit the scope of the invention.

Claims (20)

1. A video recording method, applied to a terminal device including a first camera, the method comprising:
the terminal device displays a first interface of a camera application, wherein the first interface comprises a first window and a second window; the first window displays a first picture captured by the first camera, the second window displays a second picture, and the second picture is a part of the first picture;
at a first moment, when the terminal device detects that a first position of the first picture comprises a first object, the second picture comprises the first object;
at a second moment, when the terminal device detects that a second position of the first picture comprises the first object, the second picture comprises the first object, the second moment being later than the first moment;
wherein, at the first moment, the second window displays a frame and/or a first control; and at the second moment, the second window does not display the frame and/or the first control.
2. The method of claim 1, wherein a time interval between the first moment and the second moment is greater than a first threshold.
3. The method of claim 1 or 2, wherein the first object is centrally displayed in the second picture.
4. A method according to any of claims 1-3, wherein the second window floats on top of the first window and is smaller than the first window.
5. The method according to any one of claims 1 to 4, wherein:
after the second moment, the terminal device detects a trigger operation performed by the user on the second window;
and in response to the trigger operation on the second window, the frame of the second window and/or the first control are displayed.
6. The method of any of claims 1-4, wherein the first interface further comprises a pause control, the method further comprising:
after the second moment, the terminal device detects a trigger operation on the pause control;
in response to the trigger operation on the pause control, the terminal device displays a second interface of the camera application, wherein the second interface comprises the first window and the second window, both the first window and the second window comprise a pause-recording identifier, the second window comprises the frame and/or the first control, the first recording duration information displayed by the first window remains unchanged, the second recording duration information displayed by the second window remains unchanged, the first window also displays the first picture captured by the first camera, and the second window displays the second picture.
7. The method according to any one of claims 1-4, further comprising:
at a third moment, when the terminal device detects that the first picture does not comprise the first object, displaying the frame of the second window and/or the first control, wherein the third moment is later than the second moment.
8. The method according to any one of claims 1-4, further comprising:
from a fourth moment to a fifth moment, the terminal device continuously detects that the first picture does not comprise the first object, and the second window does not comprise the frame and/or the first control;
at the fifth moment, the terminal device displays a third interface of the camera application, wherein the third interface comprises the first window and the second window, the second window comprises a pause-recording identifier and the frame and/or the first control, the second recording duration information displayed by the second window remains unchanged, the first window does not comprise the pause-recording identifier, the first recording duration information displayed by the first window continues to change, and the first window also displays the picture captured by the first camera, wherein the fifth moment is later than the fourth moment, and the fourth moment is later than the second moment.
9. The method according to any one of claims 1-8, further comprising:
at a sixth moment, when the terminal device detects that the first picture comprises an object, displaying a tracking identifier in the first window, wherein the tracking identifier is associated with the object;
and at a seventh moment, when the terminal device detects that the first picture comprises the object, the tracking identifier is not displayed in the first window, wherein the seventh moment is later than the sixth moment; the sixth moment is earlier than the first moment, or the sixth moment is later than the first moment, or the sixth moment is the first moment; and a time interval between the first moment and the second moment is greater than a second threshold.
10. The method of claim 9, wherein at the seventh moment, the first window further comprises a tracking identifier display prompt, the tracking identifier display prompt being used to prompt the user how to display the first tracking identifier.
11. The method of claim 10, wherein at an eighth moment, the first window does not display the tracking identifier display prompt, the eighth moment being later than the seventh moment.
12. The method according to any one of claims 9-11, further comprising:
after the seventh moment, the terminal device detects a trigger operation performed by the user on the first window;
and in response to the trigger operation on the first window, the first window comprises the first tracking identifier and does not comprise the tracking identifier display prompt.
13. The method according to any one of claims 9-11, further comprising:
after the seventh moment, the terminal device detects a trigger operation on a pause control;
in response to the trigger operation on the pause control, the terminal device displays a second interface of the camera application, wherein the second interface comprises the first window and the second window, both the first window and the second window comprise a pause-recording identifier, the first window comprises the first tracking identifier, the first recording duration information displayed by the first window remains unchanged, the second recording duration information displayed by the second window remains unchanged, the first window also displays the picture captured by the first camera, and the second window displays the second picture.
14. The method according to any one of claims 9-11, further comprising:
at a ninth moment, when the terminal device detects that the first picture does not comprise the first object, the first window comprises the tracking identifier, wherein the ninth moment is later than the seventh moment.
15. The method according to any one of claims 9-11, further comprising:
from a tenth moment to an eleventh moment, the terminal device continuously detects that the first picture does not comprise the first object, and the first window does not comprise the tracking identifier;
and at the eleventh moment, the terminal device displays a third interface of the camera application, the first window displays the tracking identifier, and the eleventh moment is later than the tenth moment.
16. The method according to any one of claims 1 to 15, wherein,
at the third moment or at the ninth moment, the first interface further comprises a loss prompt, the loss prompt being used to prompt that the first object is lost;
and/or, from the fourth moment to the fifth moment or from the tenth moment to the eleventh moment, the first interface further comprises the loss prompt; and/or, at the fifth moment or at the eleventh moment, the third interface further comprises a pause prompt, the pause prompt being used to prompt that recording in the second window is paused.
17. The method according to any one of claims 1 to 16, wherein,
at a twelfth moment, the first interface comprises the second window;
at a thirteenth moment, the first interface comprises the first window and does not comprise the second window, the first window comprises a second control, and the second control is associated with the second window, wherein the thirteenth moment is later than the twelfth moment, the twelfth moment is the first moment, a time interval between the thirteenth moment and the twelfth moment is greater than a third threshold, and the third threshold is greater than the first threshold.
18. The method according to any one of claims 1 to 17, wherein,
at a seventeenth moment, the first interface comprises the second window;
at an eighteenth moment, the first interface comprises the first window and does not comprise the second window, wherein the eighteenth moment is later than the seventeenth moment, the seventeenth moment is the first moment, a time interval between the eighteenth moment and the seventeenth moment is greater than a fourth threshold, and the fourth threshold is greater than the third threshold.
19. A terminal device, characterized in that the terminal device comprises a processor for invoking a computer program in memory for performing the method according to any of claims 1-18.
20. A computer readable storage medium storing computer instructions which, when run on a terminal device, cause the terminal device to perform the method of any of claims 1-18.
CN202210576793.6A 2022-05-25 2022-05-25 Video recording method and related device Active CN116132790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210576793.6A CN116132790B (en) 2022-05-25 2022-05-25 Video recording method and related device

Publications (2)

Publication Number Publication Date
CN116132790A true CN116132790A (en) 2023-05-16
CN116132790B CN116132790B (en) 2023-12-05

Family

ID=86301455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210576793.6A Active CN116132790B (en) 2022-05-25 2022-05-25 Video recording method and related device

Country Status (1)

Country Link
CN (1) CN116132790B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003134384A (en) * 2001-10-23 2003-05-09 Fuji Photo Film Co Ltd Camera
JP2007150496A (en) * 2005-11-25 2007-06-14 Sony Corp Imaging apparatus, data recording control method, and computer program
US20080246851A1 (en) * 2007-04-03 2008-10-09 Samsung Electronics Co., Ltd. Video data display system and method for mobile terminal
CN105519097A (en) * 2013-08-27 2016-04-20 高通股份有限公司 Systems, devices and methods for displaying pictures in a picture
CN110417991A (en) * 2019-06-18 2019-11-05 华为技术有限公司 A kind of record screen method and electronic equipment
CN111093026A (en) * 2019-12-30 2020-05-01 维沃移动通信(杭州)有限公司 Video processing method, electronic device and computer-readable storage medium
CN112584222A (en) * 2020-11-27 2021-03-30 北京搜狗科技发展有限公司 Video processing method and device for video processing
CN112714259A (en) * 2020-12-30 2021-04-27 广州极飞科技有限公司 State adjustment method and device for object to be shot
CN112954424A (en) * 2020-08-21 2021-06-11 海信视像科技股份有限公司 Display device and camera starting method
CN112954219A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
CN113037995A (en) * 2019-12-25 2021-06-25 华为技术有限公司 Shooting method and terminal in long-focus scene
US20210248361A1 (en) * 2020-02-07 2021-08-12 Canon Kabushiki Kaisha Electronic device
CN114205515A (en) * 2020-09-18 2022-03-18 荣耀终端有限公司 Anti-shake processing method for video and electronic equipment
WO2022068537A1 (en) * 2020-09-29 2022-04-07 华为技术有限公司 Image processing method and related apparatus
WO2022083357A1 (en) * 2020-10-22 2022-04-28 海信视像科技股份有限公司 Display device and camera control method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
倪德山 (Ni Deshan): 《数字图像处理与模式识别研究》 (Research on Digital Image Processing and Pattern Recognition), Dalian Maritime University Press, pages 275-278

Also Published As

Publication number Publication date
CN116132790B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
JP2022532102A (en) Screenshot method and electronic device
JP7302038B2 (en) USER PROFILE PICTURE GENERATION METHOD AND ELECTRONIC DEVICE
CN108108114A (en) A kind of thumbnail display control method and mobile terminal
CN111597000B (en) Small window management method and terminal
CN108848313B (en) Multi-person photographing method, terminal and storage medium
KR20180133743A (en) Mobile terminal and method for controlling the same
CN108055587A (en) Sharing method, device, mobile terminal and the storage medium of image file
US20230119849A1 (en) Three-dimensional interface control method and terminal
WO2024051556A1 (en) Wallpaper display method, electronic device and storage medium
CN116095413B (en) Video processing method and electronic equipment
CN116132790B (en) Video recording method and related device
CN114449171B (en) Method for controlling camera, terminal device, storage medium and program product
CN116112780B (en) Video recording method and related device
CN116797767A (en) Augmented reality scene sharing method and electronic device
CN113485596A (en) Virtual model processing method and device, electronic equipment and storage medium
CN116112781B (en) Video recording method, device and storage medium
CN116095465B (en) Video recording method, device and storage medium
CN116095460B (en) Video recording method, device and storage medium
WO2023226725A9 (en) Video recording method and related apparatus
KR20170027136A (en) Mobile terminal and the control method thereof
CN116095461A (en) Video recording method and related device
CN116112782B (en) Video recording method and related device
CN112783993B (en) Content synchronization method for multiple authorized spaces based on digital map
CN117201865A (en) Video editing method, electronic equipment and storage medium
CN117440082A (en) Screen capturing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant