CN115484403B - Video recording method and related device - Google Patents

Video recording method and related device

Info

Publication number
CN115484403B
CN115484403B (application CN202210946060.7A / CN202210946060A)
Authority
CN
China
Prior art keywords
recording
preview
window
image
queue
Prior art date
Legal status
Active
Application number
CN202210946060.7A
Other languages
Chinese (zh)
Other versions
CN115484403A (en)
Inventor
Zhang Dan (张丹)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311475949.2A (published as CN117479000A)
Priority to CN202210946060.7A (published as CN115484403B)
Publication of CN115484403A
Application granted
Publication of CN115484403B

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 — Control of cameras or camera modules
    • H04N23/62 — Control of parameters via user interfaces
    • H04N23/61 — Control of cameras or camera modules based on recognised objects
    • H04N23/611 — Control based on recognised objects where the recognised objects include parts of the human body
    • H04N23/63 — Control of cameras or camera modules by using electronic viewfinders
    • H04N23/95 — Computational photography systems, e.g. light-field imaging systems
    • H04N5/00 — Details of television systems
    • H04N5/76 — Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides a video recording method and a related device. The method comprises: displaying a first interface comprising a recording control and a first window that displays, in real time, a first picture captured by a camera; generating a first recording request when a first operation on the recording control is received; acquiring, based on the first recording request, an original image captured by the camera and storing it in a first memory queue; processing the original image in the first memory queue to obtain a first image; storing the first image in a first preview buffer queue and a first recording buffer queue; displaying a second interface comprising the first window, which displays a first picture generated from the first image in the first preview buffer queue; and encoding the first image in the first recording buffer queue to save a first video. In this way, during recording, the processed images, rather than multiple copies of the original images, are stored in the corresponding preview buffer queue and recording buffer queue, which reduces memory occupation and thereby power consumption.

Description

Video recording method and related device
Technical Field
The application relates to the technical field of terminals, in particular to a video recording method and a related device.
Background
In order to improve user experience, terminal devices such as mobile phones and tablet computers are generally equipped with multiple cameras. Through these cameras, the terminal device can offer the user multiple shooting modes, such as a front-camera (selfie) mode, a rear-camera mode, and a front-and-rear dual-camera mode. The user can select the shooting mode appropriate to the shooting scene.
When the terminal device receives the user's operation of selecting a shooting mode, it enters the corresponding preview interface and displays the captured picture. When it receives the user's operation of recording a video, it enters the corresponding recording interface, where it both displays and saves the captured pictures.
However, video recording on the terminal device consumes considerable power.
Disclosure of Invention
The embodiment of the present application provides a video recording method and a related device, applied in the field of terminal technology. A preview buffer queue and a recording buffer queue each correspond to the same memory queue; during recording, the original image in the memory queue is fetched and processed once, and the processed image is stored in both the corresponding preview buffer queue and the corresponding recording buffer queue. This reduces the number of original images the terminal device must cache, which reduces memory occupation and, in turn, power consumption.
In a first aspect, an embodiment of the present application provides a video recording method. The method comprises: the terminal device displays a first interface, the first interface comprising a first window and a recording control, the first window displaying in real time a first picture captured by a camera; upon receiving a first operation on the recording control, the terminal device generates a first recording request; based on the first recording request, the terminal device acquires an original image captured by the camera and stores it in a first memory queue; the terminal device processes the original image in the first memory queue to obtain a first image; the terminal device stores the first image in a first preview buffer queue and a first recording buffer queue, both of which correspond to the first memory queue; the terminal device displays a second interface comprising the first window, on which a first picture generated from the first image in the first preview buffer queue is displayed; and the terminal device encodes the first image in the first recording buffer queue to save a first video.
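The queue flow described in the first aspect can be sketched in a few lines of Python. This is an illustrative sketch only, not the patented implementation; all class and function names are hypothetical:

```python
from collections import deque

class RecordingPipeline:
    """Illustrative sketch of the claimed queue flow: raw frames enter one
    memory queue; each processed frame is fanned out to a preview buffer
    queue (driving the first window) and a recording buffer queue
    (driving the video encoder)."""

    def __init__(self):
        self.memory_queue = deque()           # raw images from the camera
        self.preview_buffer_queue = deque()   # feeds the first window
        self.recording_buffer_queue = deque() # feeds the encoder

    def on_raw_frame(self, raw_frame):
        # Step 1: store the raw image captured by the camera.
        self.memory_queue.append(raw_frame)

    def process_pending(self, process):
        # Step 2: process each raw image once and fan the result out to
        # BOTH buffer queues, so only one raw copy is ever cached.
        while self.memory_queue:
            first_image = process(self.memory_queue.popleft())
            self.preview_buffer_queue.append(first_image)
            self.recording_buffer_queue.append(first_image)

pipeline = RecordingPipeline()
pipeline.on_raw_frame("raw-0")
pipeline.on_raw_frame("raw-1")
pipeline.process_pending(lambda f: f.replace("raw", "processed"))
```

After the fan-out, both buffer queues hold the same processed frames while the memory queue is empty, which is the memory saving the embodiment describes.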
In the embodiment of the application, the first interface may be a preview interface that displays images for preview; it may correspond to the interface shown in fig. 4. The first window may correspond to the large window described below. The second interface may be a recording interface, in which video is recorded while images are displayed for preview. The first recording request may correspond to the third request or the fourth request described below.
The first operation may be a click operation, a touch operation, or another operation (e.g., a voice command); its form is not limited in the embodiment of the present application.
It can be understood that, during recording, the original image in the memory queue is fetched and processed, and the processed image is stored in the corresponding preview buffer queue and recording buffer queue. This reduces the number of original images the terminal device must cache, reducing memory occupation and, in turn, power consumption.
Optionally, before the terminal device displays the first interface, the method further comprises: the terminal device displays a third interface, the third interface comprising a first control, the first control corresponding to a first video recording mode; the terminal device receives a second operation on the first control; and the terminal device configures a first preview buffer queue, a first recording buffer queue, and a first memory queue based on the first video recording mode.
The first control may be a video control indicating the conventional video recording mode, or a control indicating another recording mode. The second operation may be a click operation, a touch operation, or another operation (e.g., voice); its form is not limited in the embodiment of the present application. The third interface may correspond to the camera preview interface shown in fig. 4 below.
Thus, the method can be applied to modes in which one recording produces one video, such as the conventional video recording mode and the professional video recording mode.
Optionally, before the terminal device displays the first interface, the method further comprises: the terminal device displays a fourth interface, the fourth interface comprising a second control, the second control corresponding to a second video recording mode; the terminal device receives a third operation on the second control; and the terminal device configures, based on the second video recording mode, a first preview buffer queue, a second preview buffer queue, a first recording buffer queue, a second recording buffer queue, a first memory queue, and a second memory queue; the second preview buffer queue and the second recording buffer queue correspond to the second memory queue.
The second control may be a protagonist-mode control, or a control for another video recording mode. The third operation may be a click operation, a touch operation, or another operation (e.g., voice); its form is not limited in the embodiment of the present application. The fourth interface may correspond to the preview interface of the protagonist mode shown in fig. 4 below.
Thus, the method can also be applied to modes in which one recording produces two video streams, for example the protagonist mode, giving it a wide range of application.
Optionally, the second interface displays one or more tracking identifiers; the terminal device receives a fourth operation on a first tracking identifier, the first tracking identifier being one of the one or more tracking identifiers; in response to the fourth operation, the terminal device generates a second recording request; based on the second recording request, the terminal device stores the original images captured by the camera in both the first memory queue and a second memory queue; the terminal device processes the original image in the first memory queue to obtain a first image, and processes the original image in the second memory queue to obtain a second image; the terminal device stores the first image in the first preview buffer queue and the first recording buffer queue, and stores the second image in the second preview buffer queue and the second recording buffer queue; the terminal device displays a fifth interface comprising the first window and a second window, the first window displaying a first picture generated from the first image in the first preview buffer queue, and the second window displaying a second picture generated from the second image in the second preview buffer queue; the second picture is the part of the first window's picture related to a first object, the first object being the object corresponding to the first tracking identifier; and the terminal device encodes the first image in the first recording buffer queue to save a first video, and encodes the second image in the second recording buffer queue to save a second video.
The tracking identifier may be the tracking box described below, or another identifier. The second window may correspond to the small window described below. The fourth operation may be a click operation, a touch operation, or another operation (e.g., voice); its form is not limited in the embodiment of the present application. The second recording request may correspond to the fourth request described below. The fifth interface may correspond to the interface shown in fig. 5 below.
The terminal device can set a tracking target to be tracked during recording. When a tracking target is selected during recording, the terminal device fetches the original images from the two memory queues for processing, and stores each processed image in the corresponding preview buffer queue and recording buffer queue. This reduces the number of original images the terminal device must cache, reducing memory occupation and, in turn, power consumption.
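The dual-stream behaviour once a tracking target is set can be sketched as two parallel copies of the single-stream pipeline. A hypothetical Python sketch (stream names, `drain`, and the crop step are all illustrative assumptions, not the patent's terminology):

```python
from collections import deque

def make_stream():
    # Each stream owns a memory queue plus its own preview/recording pair.
    return {"memory": deque(), "preview": deque(), "record": deque()}

streams = {"full": make_stream(), "tracked": make_stream()}

def on_raw_frame(raw):
    # Once a tracking target is set (second recording request), every raw
    # frame is stored into BOTH memory queues.
    streams["full"]["memory"].append(raw)
    streams["tracked"]["memory"].append(raw)

def drain(name, process):
    # Process one stream's memory queue and fan each result out to that
    # stream's own preview and recording buffer queues.
    s = streams[name]
    while s["memory"]:
        img = process(s["memory"].popleft())
        s["preview"].append(img)  # drives the corresponding window
        s["record"].append(img)   # drives the corresponding encoder

on_raw_frame("frame")
drain("full", lambda f: f + ":full")
drain("tracked", lambda f: f + ":crop")  # e.g. crop around the target
```

The "tracked" stream's processing step would crop around the first object, yielding the second picture shown in the second window and encoded as the second video.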
Optionally, the first interface displays one or more tracking identifiers; before the terminal device receives the first operation on the recording control, the method further comprises: the terminal device receives a fifth operation on a second tracking identifier, the second tracking identifier being one of the one or more tracking identifiers; in response to the fifth operation, the terminal device displays a sixth interface comprising the first window, a second window, and the recording control, the first window displaying the first picture and the second window displaying a second picture, the second picture being the part of the first window's picture related to a first object, the first object being the object corresponding to the second tracking identifier; based on the first recording request, the terminal device stores the original images captured by the camera in a second memory queue; the terminal device processes the original image in the second memory queue to obtain a second image; the terminal device stores the second image in a second preview buffer queue and a second recording buffer queue; the second interface further comprises the second window, on which a second picture generated from the second image in the second preview buffer queue is displayed; and the terminal device encodes the second image in the second recording buffer queue to save a second video.
The tracking identifier may be the tracking box described below, or another identifier. The second window may correspond to the small window described below. The fifth operation may be a click operation, a touch operation, or another operation (e.g., voice); its type and manner are not limited in the embodiment of the present application. The fifth interface may correspond to the interface shown in fig. 5 below. The first recording request may correspond to the fourth request described below.
The terminal device may start recording after the tracking target is set, and can display the small window to additionally record a second stream.
Optionally, at a first moment, the terminal device detects that the first object is displayed at a first position in the first window, and the second window displays the part of the first window's picture related to the first object at the first position; at a second moment, the terminal device detects that the first object is displayed at a second position in the first window, and the second window displays the part of the picture related to the first object at the second position.
In this way, the small window is displayed based on the tracking target. In the embodiment of the application, the picture displayed in the second window (the tracking picture) follows the position of the tracking target: as the position of the first object (the tracking target) changes, the picture displayed in the second window changes accordingly; see fig. 1A or 1B. For example, the interface at the first moment may be the interface shown as a in fig. 1A and the interface at the second moment the interface shown as b in fig. 1A; alternatively, the interface at the first moment may be the interface shown as a in fig. 1B and the interface at the second moment the interface shown as b in fig. 1B.
In some embodiments, the focus also follows the position of the tracking target; see fig. 1A or 1B.
Optionally, the second picture is cropped from the first picture.
Optionally, before receiving the first operation, the method further comprises: the terminal device generates a first preview request; based on the first preview request, the terminal device acquires an original image captured by the camera and stores it in the first memory queue; the terminal device processes the original image in the first memory queue to obtain a first image; and the terminal device stores the first image in the first preview buffer queue to generate the first picture.
Thus, during preview, the recording buffer queue holds no cached images and no video is saved.
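The difference between a preview request and a recording request can be modelled as a request that names its target queues. A hypothetical sketch (the `targets` field and request dictionaries are illustrative assumptions):

```python
from collections import deque

preview_queue, record_queue = deque(), deque()

def handle_request(request, raw_frame, process):
    """Route a processed frame only to the queues the request targets.
    A preview request targets just the preview buffer queue, so the
    recording buffer queue stays empty and nothing is encoded."""
    image = process(raw_frame)
    if "preview" in request["targets"]:
        preview_queue.append(image)
    if "record" in request["targets"]:
        record_queue.append(image)

first_preview_request = {"targets": ["preview"]}              # before recording
first_recording_request = {"targets": ["preview", "record"]}  # after the tap

handle_request(first_preview_request, "raw", str.upper)
```

After handling the preview request, only the preview queue holds a frame; a recording request would populate both queues.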
Optionally, the terminal device sets a first identifier for the first preview buffer queue and the first recording buffer queue; and/or sets a second identifier for the second preview buffer queue and the second recording buffer queue.
Through these identifiers, the terminal device can identify the corresponding preview buffer queue and recording buffer queue, which makes the queues easy to control and the scheme easy to implement.
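The identifier scheme amounts to keying each preview/recording buffer-queue pair by the identifier of the memory queue it serves. A minimal hypothetical sketch (identifier strings are invented for illustration):

```python
from collections import deque

# Each identifier maps to the preview/recording buffer-queue pair for
# one stream, so a processed image can be routed by identifier alone.
queues = {
    "id-1": {"preview": deque(), "record": deque()},  # first identifier
    "id-2": {"preview": deque(), "record": deque()},  # second identifier
}

def route(identifier, image):
    pair = queues[identifier]
    pair["preview"].append(image)
    pair["record"].append(image)

route("id-1", "full-frame")
route("id-2", "tracked-frame")
```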
In a second aspect, an embodiment of the present application provides a terminal device, which may also be referred to as a terminal, user equipment (UE), a mobile station (MS), a mobile terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with wireless transceiving capability, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical care (e.g., remote surgery), a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like.
The terminal device includes a processor and a memory; the memory stores computer-executable instructions; and the processor executes the computer-executable instructions stored in the memory to cause the terminal device to perform the method of the first aspect.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements a method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run, causes a computer to perform the method as in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip comprising a processor for invoking a computer program in a memory to perform a method according to the first aspect.
It should be understood that the second to fifth aspects of the present application correspond to the technical solutions of the first aspect of the present application, and the advantages obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
FIG. 1A is a schematic diagram of a preview interface of the protagonist mode according to an embodiment of the present application;
fig. 1B is a schematic diagram of a recording interface of the protagonist mode according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a software architecture in a possible design;
fig. 3 is a schematic software structure of a terminal device according to an embodiment of the present application;
fig. 4 is an interface schematic diagram of a terminal device according to an embodiment of the present application;
fig. 5 is an interface schematic diagram of a terminal device according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a video recording method according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of a video recording method according to an embodiment of the present application;
FIG. 8 is a schematic flow chart of a video recording method according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application;
fig. 10 is a schematic hardware structure of a terminal device according to an embodiment of the present application.
Detailed Description
For purposes of clarity in describing the embodiments of the present application, the words "exemplary" or "such as" are used herein to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates three possible relationships; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the objects before and after it. "At least one of" a list of items means any combination of those items, including a single item or any combination of multiple items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c; where each of a, b, and c may be singular or plural.
In the embodiments of the present application, "at …" may refer to the instant at which a situation occurs, or to a period of time after it occurs; this is not specifically limited. In addition, the display interfaces provided in the embodiments of the present application are only examples, and a display interface may contain more or less content.
For ease of understanding, some of the terms involved in the embodiments of the present application are described below.
1. Protagonist mode: a mode in which, while recording video, the terminal device additionally generates a portrait focus-tracking video. That is, when recording completes, two or more videos are saved: one is the original recorded video, and the others are videos automatically cropped from the original video according to the tracked target portrait. The portrait in the focus-tracking video can be understood as the "protagonist" the user focuses on, and the video corresponding to the "protagonist" may be generated by cropping the content corresponding to the protagonist out of the video conventionally recorded by the terminal device.
The "protagonist" may be a living body such as a person or an animal, or a non-living body such as a vehicle. It will be appreciated that any object identifiable by an algorithmic model may serve as the "protagonist" in embodiments of the present application. In the embodiment of the application, the protagonist may be defined as a focus-tracking object, which may also be called a protagonist object, a tracking target, a tracking object, a focus-tracking target, and so on.
For example, a preview interface of the protagonist mode in the terminal device may be as shown in fig. 1A. The preview interface includes a large window 101 and a recording control 102.
The large window 101 displays a large-window preview picture. When the terminal device recognizes that the large-window preview picture includes a person, the large window displays tracking boxes (for example, tracking box 104 and tracking box 105). A tracking box prompts the user that the corresponding person can be set as, or switched to, the tracking target, making it convenient to set or switch the tracking target. When the terminal device recognizes several persons in the large-window preview picture, the large window may display several tracking boxes; the number of tracking boxes is less than or equal to the number of persons identified. The tracking target is any one of the persons corresponding to tracking boxes in the large-window preview picture. The tracking target may also be called a focus-tracking object, a protagonist object, etc., which the present application does not limit.
In some embodiments, the display style of the tracking box corresponding to the person set as the tracking target (e.g., tracking box 104) differs from that of the tracking boxes of persons not set as the tracking target (e.g., tracking box 105). This makes it easy for the user to distinguish the tracked person (tracking target). Beyond different box patterns, embodiments of the present application may also use different colors, for example giving tracking box 104 and tracking box 105 different colors, so that the tracking target can be distinguished intuitively from other persons.
A tracking box may be a dashed box, such as tracking box 104, or a combination of a dashed box and a "+", such as tracking box 105; it may take any display form, as long as the user can trigger it to set the corresponding person as the tracking target. The tracking box may be marked at any position on a person who can be set as the tracking target; embodiments of the present application place no specific limit on this.
It can be understood that the tracking box is one form of tracking identifier, and the terminal device may display tracking identifiers in other forms that make it convenient for the user to set the tracking target. By way of example, other forms of tracking identifier may be thumbnails of objects, numbers, letters, graphics, and the like. In addition, a tracking identifier may be placed at any position on the object, near the object, or at the edge of the large window; the embodiment of the application does not limit its specific position.
For example, the terminal device may display thumbnails of one or more objects arranged at the edge of the large window; when the terminal device receives the user's click on any tracking identifier, it sets the object corresponding to the clicked identifier as the tracking target.
In a possible implementation, the terminal device may identify persons through face recognition and display the tracking boxes, and may determine the display position of a tracking box, for example a relatively central position on the person's body, through human-body recognition or similar techniques. Calculating the tracking-box position from the human body reduces the chance that the box lands on the face, reduces occlusion of the face by the box, and improves user experience. The techniques used to identify persons and to calculate tracking-box positions are not specifically limited in the embodiment of the present application.
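One simple way to realize "place the box on the body, not the face" is to anchor the box at the body's centre and nudge it below the face box if they overlap. This is an illustrative sketch under assumed `(left, top, right, bottom)` pixel boxes, not the patent's actual placement rule:

```python
def tracking_frame_anchor(face_box, body_box):
    """Place the tracking box at the centre of the detected body,
    nudged below the face so it does not occlude the face.
    Boxes are (left, top, right, bottom) in pixels (assumed layout)."""
    bx = (body_box[0] + body_box[2]) // 2
    by = (body_box[1] + body_box[3]) // 2
    # If the anchor's vertical position falls within the face box,
    # move it just below the face.
    if face_box[1] <= by <= face_box[3]:
        by = face_box[3] + 1
    return bx, by
```

For a tall body box the centre already clears the face; for a short one the anchor is pushed below the face's bottom edge.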
In some embodiments, fig. 1A also includes a small window 103. The small window 103 displays a small-window preview picture that corresponds to the tracking target. When the tracking target is switched, the person shown in the small-window preview picture switches as well. For example, if the tracking target is switched from the person corresponding to tracking box 104 to the person corresponding to tracking box 105, the small-window preview picture displayed in the small window 103 changes accordingly.
The small-window preview picture may be part of the large-window preview picture. In a possible implementation, the terminal device obtains the small-window preview picture by cropping the large-window preview picture in real time, at a certain ratio, based on the tracking target. The embodiment of the application does not limit the picture displayed by the small window in detail.
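The "crop at a certain ratio around the tracking target" step can be sketched as computing a fixed-size crop rectangle centred on the target and clamped to the frame. A hypothetical illustration (function name and parameters are assumptions, not from the patent):

```python
def crop_around_target(frame_w, frame_h, cx, cy, crop_w, crop_h):
    """Compute a crop rectangle of fixed size (crop_w x crop_h) centred
    on the tracking target at (cx, cy), clamped so it stays entirely
    inside the large-window frame of size frame_w x frame_h."""
    left = min(max(cx - crop_w // 2, 0), frame_w - crop_w)
    top = min(max(cy - crop_h // 2, 0), frame_h - crop_h)
    return left, top, left + crop_w, top + crop_h

# A 540x960 portrait crop from a 1920x1080 landscape frame,
# centred on a target near the middle of the frame:
rect = crop_around_target(1920, 1080, 960, 540, 540, 960)
```

When the target approaches a frame edge, the clamping keeps the crop inside the frame instead of letting it spill past the boundary, which matches the behaviour of a small window that follows the target.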
In some embodiments, the size, position, and landscape/portrait display mode of the small window are adjustable, and the user can adjust the small-window style to suit their recording habits. Optionally, the tracking target is displayed centred in the small-window preview picture. Optionally, the small window floats above the large window. Neither is limited here.
In some embodiments, the small window 103 further includes a close control 106 and a first switch control 107.
It can be understood that, when the terminal device receives the user's operation of setting a tracking target, it displays a small window on the preview interface to show the small-window preview picture of the tracking target; when no such operation has been received, the preview interface does not display the small window.
When the user triggers the close control 106 by clicking, touching, or another operation on the preview interface shown in fig. 1A, the terminal device receives the operation of closing the small window, closes it, and cancels the preview of the tracking target. When the user triggers the first switch control 107 by clicking, touching, or another operation on the preview interface shown in fig. 1A, the terminal device receives the operation of switching the small-window display mode (small-window style) and switches the style of the small window; specifically, the small window may be switched from landscape to portrait, or vice versa. When the user triggers the recording control 102 by clicking, touching, or another operation on the preview interface shown in fig. 1A, the terminal device receives the operation of starting recording and begins recording both the video and the tracking video.
Optionally, the preview interface may also include other controls, such as a main angle mode exit control 108, a setup control 109, a flash control 110, a second toggle control 111, a zoom control 112, and the like.
When the main angle mode exit control 108 is triggered, the terminal device exits the main angle mode and enters the video mode. When the setting control 109 is triggered, the terminal device may adjust various setting parameters. The setting parameters include, but are not limited to: whether to turn on a watermark, store a path, a coding scheme, whether to save a geographic location, etc. When the flash control 110 is triggered, the terminal device may set a flash effect, for example, control the flash to be forcibly turned on, forcibly turned off, turned on when photographing, turned on according to environmental adaptation, and the like. When the zoom control 112 is triggered, the terminal device may adjust the focal length of the camera, thereby adjusting the magnification of the large-window preview screen.
When the user triggers the second switching control 111 through clicking or touching on the preview interface shown in fig. 1A, the terminal device receives the operation of setting the small window style, and displays small window style selection items for the user to select. The small window style selections include, but are not limited to: landscape, portrait, and the like. The embodiment of the present application is not limited thereto. In a possible implementation manner, the icon of the second switching control 111 corresponds to the display style of the small window, so that the user can conveniently distinguish the small window style.
Note that, in the preview interface shown in fig. 1A, the small window style switching may be controlled through the first switching control 107, and may also be controlled through the second switching control 111. In a possible implementation, the first switching control 107 in the small window may be set in linkage with the second switching control 111 of the large window. For example, when the small window is changed from landscape to portrait, the icon of the first switching control 107 changes to the portrait preview style, and the icon of the second switching control 111 also changes to the portrait preview style; alternatively, the icons of the first switching control 107 and the second switching control 111 may both show the landscape preview style, prompting the user as to the preview style that will take effect after the next switch.
It can be appreciated that, in the preview scene, after the terminal device sets the tracking target, the small window preview screen of the small window may display the tracking target centrally. In some scenarios, the tracking target may be in a moving state, and when the tracking target moves but does not leave the lens, the widget preview screen of the widget may continuously display the tracking target centrally.
For example, suppose the large window preview screen includes a male character and a female character. The terminal device sets the male character as the tracking target in response to the user's click operation on the tracking frame of the male character, and enters the interface shown as a in fig. 1A. In the interface shown as a in fig. 1A, the small window preview screen displays the male character centered, with the male character on the right side of the female character. As the male character moves, the terminal device can continuously focus on the male character and display him centered in the small window. When the male character walks to the left of the female character, the interface of the terminal device may be as shown as b in fig. 1A. In the interface shown as b in fig. 1A, the small window preview screen still displays the male character centered, now on the left side of the female character.
In a possible implementation manner, when the terminal device tracks the target, the focus moves along with the movement of the tracked target, and illustratively, in the interface shown as a in fig. 1A, the focus is located in the face area of the male character and is located in the middle right part of the screen; the male character moves, the terminal device can continuously focus on the male character, and when the male character walks to the left side of the female character, the interface of the terminal device can be shown as b in fig. 1A. In the interface shown in b in fig. 1A, the focus is located in the face region of the male character, in the middle left portion of the screen.
In the recording mode of the main angle mode, the terminal device can display the image obtained by the camera (the large-window preview picture) in the large window, display the image of the tracking target selected by the user (the small-window preview picture) in the small window, and generate the recorded video and the focus-tracking video after recording starts. When recording ends, the terminal device saves the video generated from the large-window preview picture and the focus-tracking video generated from the small-window preview picture.
In some embodiments, the small window may end recording earlier than the large window. When the small window recording ends, the terminal device saves the focus-tracking video generated from the small-window preview picture. In other words, the terminal device may end the recording of the focus-tracking video before the recording of the whole video ends.
In some embodiments, the small window may start recording later than the large window. In other words, after the terminal device starts recording the video, the terminal device opens the small window and starts recording the focus-tracking video only after detecting the operation of setting the tracking target by the user.
For example, the recording interface of the main angle mode in the terminal device may be shown in fig. 1B. The recording interface includes a big window 113, a pause control 114, and an end control 115.
The large window 113 displays a large window preview screen and a recording time period. When the terminal device recognizes that a person is included in the large-window preview screen, the large window displays tracking frames (for example, tracking frame 117 and tracking frame 118). It will be appreciated that the number of tracking frames is less than or equal to the number of people identified by the terminal device.
In some embodiments, the recording interface also displays a small window 116. The widget 116 displays a widget preview screen. The small window preview screen corresponds to the tracking target. When the tracking target is switched, the person in the widget preview screen displayed in the widget 116 is switched. For example, if the tracking target is switched from the person corresponding to the tracking frame 117 to the person corresponding to the tracking frame 118, the widget preview screen displayed in the widget 116 is changed accordingly.
The small window preview screen may be part of the large window preview screen. In a possible implementation manner, the small window preview screen is obtained by cropping the large window preview screen in real time at a certain ratio based on the tracking target. The embodiment of the present application does not specifically limit the picture displayed in the small window.
Optionally, the tracking target is centrally displayed in the small window preview screen. Optionally, the small window floats above the large window. And are not limited herein.
The widget 116 also includes a widget end control 119 and a widget recording duration.
It can be understood that, when the terminal device receives the operation of setting the tracking target by the user, a small window is displayed on the recording interface, so as to display a small window preview screen of the tracking target. When the terminal equipment does not receive the operation of setting the tracking target by the user, the recording interface does not display a small window.
When the user triggers the end control 115 through clicking, touching or other operations on the recording interface shown in fig. 1B, the terminal device receives the operation of ending recording by the user, enters the preview interface in the main angle mode, and stores the video corresponding to the large-window preview picture and the focus-tracking video corresponding to the small-window preview picture. When the user triggers the pause control 114 through clicking, touching or other operations on the recording interface shown in fig. 1B, the terminal device receives the operation of the user to pause recording, and pauses the recording of the video in the large window 113 and the recording of the focus-tracking video in the small window 116. When the user triggers the small window ending control 119 through clicking, touching or other operations on the recording interface shown in fig. 1B, the terminal device receives the operation of ending the small window recording by the user, continues to display the large-window preview picture in the large window 113, closes the small window 116, and stores the focus-tracking video corresponding to the small-window preview picture of the small window 116.
In a possible implementation, the recording interface further includes a flash control 120. When the flash control 120 is triggered, the terminal device may set a flash effect.
It can be understood that when the terminal device records in the main angle mode, one video can be generated based on the large-window preview picture of the large window, and one additional focus-tracking video corresponding to the tracking target can be generated based on the small-window preview picture of the small window. The two videos are saved independently in the terminal device. Therefore, the video corresponding to the tracking target can be obtained without manually editing the whole video afterwards; the operation is simple and convenient, and the user experience is improved.
It can be appreciated that, in the recording scenario, after the terminal device sets the tracking target, the widget preview screen of the widget may display the tracking target centrally. In some scenarios, the tracking target may be in a moving state, and when the tracking target moves but does not leave the lens, the widget preview screen of the widget may continuously display the tracking target centrally.
For example, suppose the large window preview screen includes a male character and a female character. The terminal device sets the male character as the tracking target in response to the user's click operation on the tracking frame of the male character, and enters the interface shown as a in fig. 1B. In the interface shown as a in fig. 1B, the small window preview screen displays the male character centered, with the male character on the right side of the female character. As the male character moves, the terminal device can continuously focus on the male character and display him centered in the small window. When the male character walks to the left of the female character, the interface of the terminal device may be as shown as b in fig. 1B. In the interface shown as b in fig. 1B, the small window preview screen still displays the male character centered, now on the left side of the female character.
In a possible implementation manner, when the terminal device tracks the target, the focus moves along with the movement of the tracked target, and illustratively, in the interface shown as a in fig. 1B, the focus is located in the face area of the male character and is located in the middle right part of the screen; the male character moves, the terminal device can continue to focus on the male character, and when the male character walks to the left of the female character, the interface of the terminal device can be shown as B in fig. 1B. In the interface shown in B in fig. 1B, the focus is located in the face region of the male character, in the middle left portion of the screen.
It can be understood that, in the embodiment of the present application, a shooting mode in which one or more tracking videos can be additionally generated based on a tracking target is defined as a principal angle mode, and the shooting mode may also be referred to as a tracking mode, which is not limited in the embodiment of the present application.
The video recording method provided by the embodiment of the application can be applied to an electronic device with a video recording function. The electronic device includes a terminal device. The terminal device may also be referred to as a terminal (terminal), a user equipment (UE), a mobile station (MS), a mobile terminal (MT), etc. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with a wireless transceiving function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in telemedicine (remote medical surgery), a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like.
The video recording method provided by the embodiment of the application can be applied to various video recording modes of the terminal device, for example, the main angle mode, the normal video mode, the front-and-back dual-shot mode, and the like. It can be understood that when the terminal device records in the main angle mode, the terminal device can record a close-up video of the selected character while recording a panorama of the environment in which the character is located, obtaining multiple videos from a single recording.
In order to improve user experience, terminal devices such as mobile phones and tablet computers are generally configured with a plurality of cameras. The terminal device may provide a plurality of photographing modes, such as a proactive mode, a post-photographing mode, a front-back dual-photographing mode, and the like, for the user through the plurality of cameras configured. The user can select a corresponding shooting mode to shoot according to the shooting scene.
When the terminal equipment receives the operation of selecting the shooting mode by the user, the terminal equipment enters a corresponding preview interface and displays a shot image. When the terminal equipment receives the operation of recording the video by the user, the terminal equipment enters a corresponding recording interface, displays the shot image and stores the shot image.
Taking one recording and storing of a video as an example: when the terminal device receives an operation of selecting a shooting mode by the user, the terminal device acquires the original image captured by the camera, stores the original image in a first memory queue, and processes the original image in the first memory queue for display on the preview interface. When the terminal device receives an operation of recording a video by the user, the terminal device acquires the original image captured by the camera, stores it in the first memory queue, and processes the original image in the first memory queue for display on the preview interface; the terminal device also acquires the original image captured by the camera, stores it in a second memory queue, and processes the original image in the second memory queue for encoding and storage.
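The dual-queue design described above can be modeled with a short sketch. This is an illustrative model of the baseline dataflow only; the class and method names are assumptions, not the terminal device's actual implementation.

```python
from collections import deque


class BaselineRecorder:
    """Baseline design: during recording, each raw camera frame is stored in
    both a preview memory queue and a recording memory queue, so every frame
    is buffered twice."""

    def __init__(self, depth=8):
        self.preview_queue = deque(maxlen=depth)  # processed for preview display
        self.record_queue = deque(maxlen=depth)   # processed for encoding/storage

    def on_frame(self, frame):
        # The same raw frame occupies memory in two queues at once.
        self.preview_queue.append(frame)
        self.record_queue.append(frame)

    def buffered_frames(self):
        return len(self.preview_queue) + len(self.record_queue)


recorder = BaselineRecorder()
for i in range(3):
    recorder.on_frame(f"frame{i}")
print(recorder.buffered_frames())  # 6 — twice the number of captured frames
```

This doubling of raw-frame buffering is the memory-occupation problem the following paragraphs discuss.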
Thus, when the terminal device records video, two memory queues are needed to store images, the memory occupation is large, and the power consumption of the terminal device is high.
It can be understood that when recording in the main angle mode, the terminal device can record a close-up of the selected person while recording a panorama of the environment in which the person is located, and a single recording yields both the panoramic video and the focus-tracking video of the selected person. When the shooting mode is the main angle mode, a large amount of memory is occupied, and the power consumption of the terminal device is high.
Taking the shooting mode as the main angle mode as an example, a recording process of the main angle mode in a possible design will be described with reference to fig. 2.
Fig. 2 is a schematic diagram illustrating an internal implementation flow of a terminal device in a possible design. As shown in fig. 2, the terminal device may be divided into five layers, from top to bottom, an application layer, an application framework layer, a hardware abstraction layer (hardware abstraction layer, HAL), a kernel layer (kernel), and a hardware layer (hardware).
The application layer may include a series of application packages. As shown in fig. 2, the application package may include applications for cameras, gallery, phone, map, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a resource manager, a view system, a camera access interface, and the like.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine if there is a status bar, lock the screen, touch the screen, drag the screen, intercept the screen, etc.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The camera access interface enables an application to perform camera management and access the camera device. Such as managing the camera for image capture, etc.
The hardware abstraction layer may include a plurality of library modules, for example, a camera library module. The Android system can load a corresponding library module for the device hardware, so that the application framework layer can access the device hardware. In the embodiment of the present application, the camera library module includes a control module, an image processing module, an image receiving module, a memory queue, and a cache queue.
The control module configures the memory queue and the cache queue based on a configuration command (the configuration command corresponds to a shooting mode) issued by the camera application. The control module is further configured to adjust a shooting mode (e.g., a big window preview mode, a small window preview mode, a big window recording mode, a small window recording mode, a beauty parameter, an anti-shake parameter, an exposure time, etc.) of the camera based on a request issued by the application after the configuration is finished. The control module may also adjust related parameters during subsequent image processing based on a request issued by the application, for example, the number of images selected from the memory queue, a beauty parameter during algorithm processing, an anti-shake parameter, and the like.
It is to be appreciated that the shooting modes in the camera application may include, but are not limited to: a normal video mode, a main angle mode, a multi-mirror video mode, etc. The configuration command corresponding to the normal video mode includes one preview stream and one recording stream. The configuration command corresponding to the multi-mirror video mode includes two preview streams and two recording streams. The configuration command corresponding to the main angle mode includes two preview streams and two recording streams.
It will be appreciated that the specific content included in the request relates to the video mode of the camera. Illustratively, when the terminal device enters a large window preview scene in the main angle mode, the request is used to indicate a preview of the first view angle (panorama). When the terminal device enters a small window preview scene in the main angle mode, the request is used to indicate a preview of a first view angle (panoramic) and a preview of a second view angle (tracking object based). When the terminal device enters a large window recording scene in a main angle mode, the request is used for indicating the preview and recording of the first view angle. When the terminal device enters a small window recording scene in a main angle mode, the request is used for indicating the preview and recording of the first view angle and the preview and recording of the second view angle.
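The four main-angle-mode scenes listed above each indicate a different combination of preview and recording streams. The table below restates that mapping as data; the dictionary form and key names are illustrative assumptions, not an actual HAL data structure.

```python
# Streams indicated by each request in the main angle mode.
# "first_view" is the panoramic (large window) view angle; "second_view" is
# the tracking-target-based (small window) view angle.
REQUEST_STREAMS = {
    "large_window_preview":   {"preview": ["first_view"],                "record": []},
    "small_window_preview":   {"preview": ["first_view", "second_view"], "record": []},
    "large_window_recording": {"preview": ["first_view"],                "record": ["first_view"]},
    "small_window_recording": {"preview": ["first_view", "second_view"],
                               "record":  ["first_view", "second_view"]},
}

# E.g. the small-window recording scene previews and records both view angles.
print(REQUEST_STREAMS["small_window_recording"])
```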
The image processing module is used for carrying out algorithm processing such as noise reduction and fusion on the images, converting formats and the like, and storing the processed images into corresponding cache queues.
The image receiving module is used for acquiring an original image shot by bottom hardware (such as a TOF camera, a camera and the like) and storing the original image into a corresponding memory queue.
The memory queue is used for storing the original image shot by the bottom hardware (e.g. TOF camera) acquired by the image receiving module.
The buffer queue is used for storing the image processed by the image processing module so as to display the upper layer application preview or perform coding processing.
In a possible design, the number of queues in the cache queue is the same as the number of queues in the memory queue.
The kernel layer is a layer between hardware and software. The kernel layer is used for driving the hardware so that the hardware works. The kernel layer may include a camera driver, a display driver, an audio driver, etc., which is not limited in this embodiment of the present application.
The hardware layer may be various types of sensors, shooting type sensors including, for example, TOF cameras, multispectral sensors, etc.
The following describes a recording process of the principal angle mode in a possible design with reference to fig. 2. The video recording process comprises configuration flow, preview display and recording.
Illustratively, the configuration flow in the recording process of the main angle mode in a possible design is as follows: when the terminal device receives a trigger operation for the main angle mode, the camera application issues a configuration command to the camera library module in the hardware abstraction layer through the camera access interface, where the configuration command is used to indicate configuration of two preview streams and two recording streams. The control module in the camera library module allocates two preview cache queues (e.g., preview 1 and preview 2), two recording cache queues (e.g., video 1 and video 2), and four memory queues (e.g., memory 1, memory 2, memory 3, and memory 4) based on the configuration command. After the configuration ends, the camera library module sends a message indicating the end of configuration to the upper-layer application.
The preview display during recording of the main angle mode in a possible design is described below.
After receiving the message indicating the end of configuration, the application issues a preview request to the camera library module in the hardware abstraction layer through the camera access interface. Taking a preview request indicating the large-window preview and the small-window preview as an example, the control module performs control based on the preview request, and the image receiving module stores the original images captured by the camera into memory 1 and memory 3. The image processing module selects an original image stored in memory 1, performs algorithm processing, format conversion, and the like to obtain a first image, and stores the first image into preview 1 for callback to the application for large-window preview display; the image processing module selects an original image stored in memory 3, performs algorithm processing, format conversion, and the like to obtain a third image, and stores the third image into preview 2 for callback to the application for small-window preview display.
Recording during recording of the main angle mode in a possible design is described below.
If the terminal device receives a recording operation, the camera application issues a recording request to the camera library module in the hardware abstraction layer through the camera access interface. Taking a recording request indicating the large-window recording and the small-window recording as an example, the control module controls the image receiving module to store the original images into memories 1 to 4 based on the recording request.
The image processing module selects an original image stored in memory 1, performs algorithm processing, format conversion, and the like to obtain a first image, and stores the first image into preview 1 for callback to the application for large-window preview display. The image processing module selects an original image stored in memory 2, performs algorithm processing, format conversion, and the like to obtain a second image, and stores the second image into video 1 for callback to the application for large-window encoding and storage. The image processing module selects an original image stored in memory 3, performs algorithm processing, format conversion, and the like to obtain a third image, and stores the third image into preview 2 for callback to the application for small-window preview display. The image processing module selects an original image stored in memory 4, performs algorithm processing, format conversion, and the like to obtain a fourth image, and stores the fourth image into video 2 for callback to the application for small-window encoding and storage.
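The four-queue dataflow of this possible design can be sketched compactly. The function and queue names below are illustrative stand-ins; "processing" abbreviates the algorithm processing and format conversion described above.

```python
def route_frames_possible_design(frame):
    """Possible design: one raw frame is copied into four memory queues, and
    each copy is processed separately into its own cache queue — four buffered
    copies and four processing passes per frame."""
    # Each memory queue holds its own copy of the raw frame.
    memories = {f"memory{i}": frame for i in range(1, 5)}
    # Fixed binding from memory queue to target cache queue.
    targets = {"memory1": "preview1",  # large-window preview display
               "memory2": "video1",    # large-window encoding/storage
               "memory3": "preview2",  # small-window preview display
               "memory4": "video2"}    # small-window encoding/storage
    caches = {}
    for mem, raw in memories.items():
        caches[targets[mem]] = f"processed({raw})"
    return caches


print(route_frames_possible_design("frame0"))
# {'preview1': 'processed(frame0)', 'video1': 'processed(frame0)',
#  'preview2': 'processed(frame0)', 'video2': 'processed(frame0)'}
```

The sketch makes the cost visible: four raw-frame copies and four processing passes for every captured frame, which is the memory and power problem noted next.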
It can be understood that when the terminal device records in the main angle mode, four memory queues may be used to store original images, occupying a large amount of memory and resulting in high power consumption of the terminal device. In addition, four sets of image processing are performed, which is computationally expensive and further increases memory occupation.
Based on this, the embodiment of the application provides a video recording method, wherein a preview cache queue and a video recording cache queue correspond to the same memory queue; in the recording process, the original images in the memory queues are acquired and processed, the processed images are stored in the corresponding preview cache queues and the corresponding video cache queues, so that the number of the original images cached by the terminal equipment is reduced, the memory occupation is reduced, and the power consumption is further reduced.
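The improvement described above can be sketched as follows: the preview cache queue and the recording cache queue are bound to the same memory queue, so each raw frame is buffered once and the single processed result is fanned out to both cache queues. This is a minimal model under assumed names, not the embodiment's actual implementation.

```python
from collections import deque


class SharedQueueRecorder:
    """Improved design: one memory queue feeds both the preview cache queue
    and the recording cache queue, halving the raw-frame buffering."""

    def __init__(self, depth=8):
        self.memory_queue = deque(maxlen=depth)   # single raw-frame buffer
        self.preview_cache = deque(maxlen=depth)  # callback target: preview display
        self.record_cache = deque(maxlen=depth)   # callback target: encoding/storage

    def on_frame(self, frame):
        # The raw frame is stored exactly once.
        self.memory_queue.append(frame)

    def process_next(self):
        raw = self.memory_queue.popleft()
        # One processing pass (noise reduction, fusion, format conversion)...
        processed = f"processed({raw})"
        # ...whose result is shared by both cache queues.
        self.preview_cache.append(processed)
        self.record_cache.append(processed)
        return processed


recorder = SharedQueueRecorder()
recorder.on_frame("frame0")
recorder.process_next()
print(len(recorder.preview_cache), len(recorder.record_cache))  # 1 1
```

Compared with the four-memory-queue design, only one raw copy and one processing pass are needed per stream, which is the stated reduction in memory occupation and power consumption.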
For easy understanding, the software architecture of the terminal device is described below with reference to fig. 3.
The software system of the terminal device can adopt a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture or a cloud architecture. The embodiment of the application takes an android (android) system with a layered architecture as an example, and illustrates a software structure of terminal equipment.
As shown in fig. 3, the terminal device may be divided into five layers, from top to bottom, an application layer, an application framework layer, a hardware abstraction layer (hardware abstraction layer, HAL), a kernel layer (kernel), and a hardware layer (hardware).
The application layer may include a series of application packages. As shown in fig. 3, the application package may include applications for cameras, gallery, phone, map, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a resource manager, a view system, a camera access interface, and the like.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine if there is a status bar, lock the screen, touch the screen, drag the screen, intercept the screen, etc.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The camera access interface enables an application to perform camera management and access the camera device. Such as managing the camera for image capture, etc.
The hardware abstraction layer may include a plurality of library modules, for example, a camera library module. The Android system can load a corresponding library module for the device hardware, so that the application framework layer can access the device hardware. In the embodiment of the present application, the camera library module includes a control module, an image processing module, an image receiving module, a memory queue, and a cache queue.
The control module configures the memory queue and the cache queue based on a configuration command (the configuration command corresponds to a shooting mode) issued by the camera application.
It is to be appreciated that the shooting modes in the camera application may include, but are not limited to: a normal video mode, a main angle mode, a multi-mirror video mode, etc. The configuration command corresponding to the normal video mode includes one preview stream and one recording stream. The configuration command corresponding to the multi-mirror video mode includes two preview streams and two recording streams. The configuration command corresponding to the main angle mode includes two preview streams and two recording streams.
In the embodiment of the application, when the control module configures the memory queue and the cache queue, the preview cache queue and the recording cache queue are both bound to the same memory queue (illustratively, preview 1 and record 1 are both bound to memory 1; preview 2 and record 2 are both bound to memory 2).
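The binding described above, in which each preview/record cache-queue pair shares one memory queue, can be sketched with a minimal Python model. All names and structures here are illustrative, not the actual camera library implementation:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class StreamQueues:
    """One memory queue shared by a bound preview/record cache-queue pair."""
    memory: deque = field(default_factory=deque)   # raw sensor frames
    preview: deque = field(default_factory=deque)  # processed frames for display
    record: deque = field(default_factory=deque)   # processed frames for encoding

# Streams required per shooting mode, per the text: conventional video uses
# one preview + one recording stream; main angle and multi-mirror use two each.
STREAMS_PER_MODE = {"conventional": 1, "main_angle": 2, "multi_mirror": 2}

def configure(mode: str) -> list:
    """Allocate one StreamQueues per stream pair, so preview N and record N
    are bound to the same memory N."""
    return [StreamQueues() for _ in range(STREAMS_PER_MODE[mode])]

queues = configure("main_angle")
assert len(queues) == 2  # preview/record 1 share memory 1; preview/record 2 share memory 2
```

The point of the shared `memory` field is that one raw-frame queue feeds both consumers of a window, rather than each stream holding its own copy of the raw images.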
The control module is further configured to adjust the shooting mode of the camera (e.g., a large window preview mode, a small window preview mode, a large window recording mode, a small window recording mode, a beauty parameter, an anti-shake parameter, an exposure time, etc.) based on a request issued by the application after the configuration ends. The control module may also adjust related parameters during subsequent image processing based on a request issued by the application, for example, the number of images selected from the memory queue, a beauty parameter during algorithm processing, an anti-shake parameter, whether to perform split processing, and so on.
It will be appreciated that the specific content of the request relates to the recording mode of the camera. Illustratively, when the terminal device enters the large window preview scene in the main angle mode, the request is used to indicate a preview of the first view angle (panoramic). When the terminal device enters the small window preview scene in the main angle mode, the request is used to indicate a preview of the first view angle (panoramic) and a preview of a second view angle (based on the tracked object). When the terminal device enters the large window recording scene in the main angle mode, the request is used to indicate the preview and recording of the first view angle. When the terminal device enters the small window recording scene in the main angle mode, the request is used to indicate the preview and recording of the first view angle and the preview and recording of the second view angle.
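The mapping from each main-angle-mode scene to the streams its request indicates can be summarized in a small lookup table; the scene names and stream numbers here are a hypothetical encoding for illustration:

```python
# Stream 1 = first (panoramic) view; stream 2 = second (tracked-object) view.
REQUESTS = {
    "large_window_preview": {"preview": {1},    "record": set()},
    "small_window_preview": {"preview": {1, 2}, "record": set()},
    "large_window_record":  {"preview": {1},    "record": {1}},
    "small_window_record":  {"preview": {1, 2}, "record": {1, 2}},
}

def streams_for(request: str):
    """Return (previewed streams, recorded streams) implied by a request."""
    cfg = REQUESTS[request]
    return cfg["preview"], cfg["record"]

# The small-window recording scene previews and records both views.
assert streams_for("small_window_record") == ({1, 2}, {1, 2})
```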
The image processing module performs algorithm processing, such as noise reduction and fusion, and format conversion on the images, and stores the processed images into the corresponding cache queues.
The image receiving module is used to acquire an original image captured by the underlying hardware (e.g., a TOF camera) and store the original image into the corresponding memory queue.
The memory queue is used to store raw images captured by underlying hardware (e.g., a TOF camera).
The cache queue is used to store the images processed by the image processing module, for preview display by the upper-layer application or for encoding.
In a possible design, the number of queues in the cache queue is the same as the number of queues in the memory queue.
The kernel layer is a layer between hardware and software. The kernel layer is used for driving the hardware so that the hardware works. The kernel layer may include a camera driver, a display driver, an audio driver, etc., which is not limited in this embodiment of the present application.
The hardware layer may include various types of sensors; shooting-type sensors include, for example, TOF cameras, multispectral sensors, etc.
The following describes the recording process of the main angle mode in the embodiment of the present application with reference to fig. 3. The video recording process comprises a configuration flow, preview display, and recording.
The configuration flow in the video recording process of the main angle mode in the embodiment of the application is as follows: when the terminal device receives a triggering operation for the main angle mode, the camera application issues a configuration command to the camera library module in the hardware abstraction layer through the camera access interface, where the configuration command is used to indicate configuration of two preview streams and two recording streams. The control module in the camera library module allocates two preview buffer queues (e.g., preview 1, preview 2), two recording buffer queues (e.g., record 1, record 2), and two memory queues (e.g., memory 1, memory 2) based on the configuration command; after the configuration ends, the camera library module sends a message indicating the end of configuration to the upper-layer application.
The preview display during the recording process of the main angle mode in the embodiment of the application is described below.
After receiving the message indicating the end of configuration, the application issues a preview request to the camera library module in the hardware abstraction layer through the camera access interface. Taking the case where the preview request indicates a large-window preview and a small-window preview as an example, based on the preview request, the control module controls the image receiving module to store the original images captured by the camera into the memory 1 and the memory 2. The image processing module selects an original image stored in the memory 1 for algorithm processing, format conversion, and the like to obtain a first image, and stores the first image into the preview 1 for callback to the application for large-window preview display; the image processing module selects an original image stored in the memory 2 for algorithm processing, format conversion, and the like to obtain a second image, and stores the second image into the preview 2 for callback to the application for small-window preview display.
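The preview path described above, in which the image receiving module deposits raw frames into a memory queue and the image processing module moves processed frames into the bound preview cache queue, can be sketched as follows. The processing step is stubbed with a string tag; real processing would include noise reduction, fusion, and format conversion:

```python
from collections import deque

memory1, preview1 = deque(), deque()

def receive_frame(raw, memory_queue):
    """Image receiving module: store the raw sensor frame in the memory queue."""
    memory_queue.append(raw)

def process_for_preview(memory_queue, preview_queue):
    """Image processing module: take a raw frame, apply algorithm processing
    and format conversion (stubbed), store the result in the preview queue."""
    raw = memory_queue.popleft()
    processed = f"processed({raw})"  # stands in for noise reduction, fusion, ...
    preview_queue.append(processed)

receive_frame("frame0", memory1)
process_for_preview(memory1, preview1)
assert preview1[0] == "processed(frame0)"  # ready for the preview display callback
```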
The following describes recording in the recording process of the main angle mode in the embodiment of the present application.
If the terminal device receives a recording operation, the camera application issues a recording request to the camera library module in the hardware abstraction layer through the camera access interface. Taking the case where the recording request indicates large-window recording and small-window recording as an example, the control module controls the image receiving module to store the original images into the memory 1 and the memory 2 based on the recording request. The image processing module selects an original image stored in the memory 1 for algorithm processing, format conversion, and the like to obtain a first image, performs split processing on the first image, and stores the first image into the preview 1 and the record 1 for callback to the application for large-window preview display and for large-window encoding and storage. The image processing module selects an original image stored in the memory 2 for algorithm processing, format conversion, and the like to obtain a second image, performs split processing on the second image, and stores the second image into the preview 2 and the record 2 for callback to the application for small-window preview display and for small-window encoding and storage.
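The split processing step can be illustrated with a toy model: the same processed frame is placed into both the bound preview and record cache queues, which is what keeps the previewed and encoded content identical. Names are illustrative:

```python
from collections import deque

def split_to_queues(processed_frame, preview_queue, record_queue):
    """Split processing: one processed frame feeds both the bound preview
    queue (display callback) and record queue (encoder callback), so what
    the user previews is exactly what gets encoded."""
    preview_queue.append(processed_frame)
    record_queue.append(processed_frame)

preview1, record1 = deque(), deque()
split_to_queues("first_image", preview1, record1)
assert preview1[0] is record1[0]  # one frame, two consumers
```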
For ease of understanding, the recording process in the main angle mode will be described with reference to fig. 4 and 5.
Fig. 4 is a schematic diagram of a main angle mode recording flow according to an embodiment of the present application. As shown in fig. 4, when the terminal device receives an operation of opening the camera application 401 by the user in the main interface shown in a in fig. 4, the terminal device may enter the photographing preview interface shown in b in fig. 4. A large window 402 and a shooting mode selection item may be included in the shooting preview interface. The large window 402 displays a large window preview screen in real time; shooting mode selection items include, but are not limited to: portrait, photograph, video 403, professional or other type of photography mode selection.
When the user triggers the video 403 through clicking, touching, etc. on the camera preview interface shown in b in fig. 4, the terminal device receives the operation of selecting the video mode from the user, and enters the video preview interface shown in c in fig. 4. The video preview interface comprises: a large window 404, recording parameter selections, and shooting mode selections. The large window 404 displays a large window preview picture in real time; recording parameter options include, but are not limited to: a main angle mode 405, a flash, a filter, a setting, or other types of recording parameter selections. Shooting mode selection items include, but are not limited to: portrait, photograph, video, professional or other types of photography mode selections.
When the user triggers the main angle mode 405 by clicking, touching, or other operations on the video preview interface shown in c in fig. 4, the terminal device receives the operation of selecting the main angle mode preview, and enters the preview interface corresponding to the main angle mode shown in d in fig. 4. The preview interface includes: a large window 406 and a recording control. The large window 406 displays a large window preview screen. When there is a person in the large window preview screen, the large window 406 also displays a tracking frame 407.
It will be appreciated that the terminal device may also enter the preview interface corresponding to the main angle mode shown in d in fig. 4 in other manners. Illustratively, the terminal device may also provide the main angle mode as one of the "other types of shooting mode selection items" in the shooting preview interface shown in b in fig. 4. The embodiment of the application does not limit the manner of entering the preview interface corresponding to the main angle mode.
In a first possible implementation, when the user triggers the tracking frame 407 by clicking or touching, the terminal device receives an operation of setting a tracking target, sets the person corresponding to the tracking frame as the tracking target, and enters the interface shown in e in fig. 4. The preview interface includes: a large window 408, a small window 409, and a recording control 410. The large window 408 displays a large window preview screen. The small window 409 displays a small window preview screen. The small window preview screen corresponds to the tracking target.
When the user triggers the recording control 410 through clicking, touching or the like in the preview interface shown in e in fig. 4, the terminal device receives an operation of starting recording, and enters the recording interface shown in f in fig. 4. The recording interface includes: big window, pause control, end control, and small window. The large window is displayed with a large window preview screen. The small window displays a small window preview screen, and the small window preview screen corresponds to the tracking target. The small window preview screen is part of the large window preview screen.
It can be understood that when the terminal device starts recording, while displaying the small window preview screen and the large window preview screen on the recording interface, the terminal device also stores the content corresponding to each; when recording ends, it saves a video corresponding to the small window and a video corresponding to the large window.
In a second possible implementation, when the user triggers the recording control 511 by clicking or touching the interface shown in d in fig. 4, the terminal device receives an operation of starting recording, and enters the recording interface shown in a in fig. 5. The recording interface includes: a large window 501, a pause control, and an end control. The large window 501 displays a large window preview screen. When there is a person in the large window preview screen, the large window 501 also displays a tracking frame 502.
When the user triggers the tracking frame 502 by clicking or touching, the terminal device receives an operation of setting a tracking target, sets the person corresponding to the tracking frame as the tracking target, and enters the interface shown as e in fig. 4. The preview interface includes: a large window, a small window, and a recording control. The large window displays a large window preview screen. The small window displays a small window preview screen. The small window preview screen corresponds to the tracking target.
It will be appreciated that the interfaces shown in fig. 4 and fig. 5 are by way of example only; the corresponding interfaces may include more or less content, which is not limited here. The terminal device may also select the main angle mode in other manners, which is not limited in the embodiment of the present application.
The following describes the interaction process of the internal modules of the terminal device in the video configuration and preview display process of the main angle mode with reference to fig. 6. As shown in fig. 6, the terminal device includes a camera application, a camera access interface, a control module, an image receiving module, memory queues, cache queues, and an image processing module, and the interaction process includes:
S601, the terminal device receives a click operation for the first video control.
The first video control corresponds to a video recording mode. Illustratively, the first video control may be the main angle mode 405 in the interface shown in c in fig. 4. When the terminal device receives a click operation for the main angle mode 405, it enters the main angle mode.
S602, the camera application issues a configuration command to the control module through the camera access interface.
The configuration command corresponds to the video recording mode corresponding to the first video control. For example, when the first video control corresponds to the main angle mode, the configuration command is used to instruct configuring two preview streams and two recording streams.
The configuration command also comprises a camera number for starting the corresponding camera.
It will be appreciated that one or more cameras may be included in the terminal device, including but not limited to: a front camera, a rear main camera, a wide-angle camera (also referred to as an ultra-wide-angle camera), a telephoto camera, and the like.
S603, the control module configures the cache queue and the memory queue based on the configuration command.
The control module configures two preview buffer queues (preview 1 and preview 2), two recording buffer queues (record 1 and record 2), and two memory queues (memory 1 and memory 2) based on the configuration command, and binds each preview buffer queue with its recording buffer queue, so that the preview buffer queue of the large window (preview 1) corresponds to the recording buffer queue of the large window (record 1), and the preview buffer queue of the small window (preview 2) corresponds to the recording buffer queue of the small window (record 2).
In a possible implementation, the terminal device sets the same identifier (e.g., stream ID) for the bound preview buffer queue and recording buffer queue. This facilitates subsequent identification and control by the terminal device.
Illustratively, the terminal device sets the same identifier for preview 1 and record 1, and sets the same identifier for preview 2 and record 2.
In this way, when the terminal device enters the main angle mode, recording streams and preview streams are bound, and two preview streams with two corresponding memory queues are configured, reducing the time consumed in allocating streams and the memory used. Binding the large window preview stream one-to-one with its recording stream keeps the large window preview and recording results consistent; binding the small window preview stream one-to-one with its recording stream keeps the small window preview and recording results consistent.
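Assigning a shared identifier to a bound preview/record queue pair might look like the following sketch; the integer stream ID here is a hypothetical stand-in for whatever identifier the terminal device actually uses:

```python
import itertools

_next_stream_id = itertools.count(1)

def bind(preview_queue_name: str, record_queue_name: str) -> dict:
    """Assign one shared stream ID to a bound preview/record queue pair,
    so later control logic can recognize them as a pair."""
    stream_id = next(_next_stream_id)
    return {preview_queue_name: stream_id, record_queue_name: stream_id}

ids = {}
ids.update(bind("preview1", "record1"))  # large-window pair
ids.update(bind("preview2", "record2"))  # small-window pair
assert ids["preview1"] == ids["record1"]
assert ids["preview2"] == ids["record2"]
assert ids["preview1"] != ids["preview2"]  # pairs remain distinguishable
```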
S604, after the configuration ends, the control module sends a message indicating the end of configuration to the camera application through the camera access interface.
S605, the camera application issues a first request to the control module via the camera access interface.
The first request is for indicating a big window preview.
S606, the control module controls the image receiving module to acquire an original image acquired by the underlying hardware based on the first request.
S607, the image receiving module stores the original image into the memory 1.
S608, the image processing module selects an original image stored in the memory 1 to perform algorithm processing, format conversion and other processing to obtain a first image.
In an embodiment of the present application, the algorithmic processing includes, but is not limited to: noise reduction, linear brightening, tone mapping, gamma correction, fusion, etc. The algorithmic processing may also include beauty, anti-shake, etc.
S609, the image processing module stores the first image to preview 1.
S610, the camera application calls the first image in the preview 1 to carry out large window preview display through the camera access interface.
Accordingly, the terminal device displays a preview interface including a large window.
The terminal device illustratively displays the interface shown as d in fig. 4.
In this way, the terminal device can realize preview display of a large window.
S611, the terminal equipment receives the operation of clicking the tracking frame by the user, and the camera application transmits a second request to the control module through the camera access interface.
The second request is used to indicate a big-window preview and a small-window preview.
S612, the control module controls the image receiving module to acquire an original image acquired by the underlying hardware based on the second request.
S613, the image receiving module stores the original image into the memory 1 and the memory 2.
S614, the image processing module selects an original image stored in the memory 1 to perform algorithm processing, format conversion and other processing to obtain a first image; and selecting the original image stored in the memory 2 to perform algorithm processing, format conversion and other processing to obtain a second image.
S615, the image processing module stores the first image to preview 1 and stores the second image to preview 2.
S616, the camera application calls a first image in the preview 1 to conduct large window preview display and calls a second image in the preview 2 to conduct small window preview display through the camera access interface.
The terminal device displays a preview interface comprising a large window and a small window.
Illustratively, the terminal device displays the interface shown as e in fig. 4.
In this way, the terminal device can realize the preview display of the large window and the preview display of the small window.
The following describes the interaction process of the internal modules of the terminal device during recording with reference to fig. 7, taking the case where the terminal device receives the recording operation before executing S611 described above as an example. The terminal device includes a camera application, a camera access interface, a control module, an image receiving module, memory queues, cache queues, and an image processing module, and the interaction process includes:
S701, the terminal equipment receives the recording operation, and the camera application transmits a third request to the control module through the camera access interface.
The third request is for indicating a big-window preview and a big-window recording.
Illustratively, when the terminal device receives the operation that the user clicks the recording control 411 while displaying the interface shown in d in fig. 4, the camera application issues a third request to the control module via the camera access interface.
S702, the control module controls the image receiving module to acquire an original image acquired by the underlying hardware based on the third request.
S703, the image receiving module stores the original image into the memory 1.
S704, the image processing module selects an original image stored in the memory 1 to perform algorithm processing, format conversion and other processing to obtain a first image.
S705, the image processing module stores the first image to preview 1 and record 1.
S706, the camera application calls the first image in the preview 1 through the camera access interface to carry out large window preview display; and calling the first image in the record 1 to carry out coding processing so as to store the video corresponding to the large window.
The terminal device displays a recording interface including a large window.
The terminal device displays, for example, the interface shown as a in fig. 5.
Thus, the terminal device can realize large-window preview display and recording. Moreover, the recording stream request corresponding to the large window is not separately processed at the underlying layer, which reduces the memory occupied by processing two streams.
S707, the terminal equipment receives the operation of clicking the tracking frame by the user, and the camera application sends a fourth request to the control module through the camera access interface.
The fourth request is for indicating a big-window preview and a big-window recording, and a small-window preview and a small-window recording.
Illustratively, when the terminal device receives the operation of clicking the tracking frame 502 by the user while displaying the interface shown as a in fig. 5, the camera application issues a fourth request to the control module via the camera access interface.
S708, the control module controls the image receiving module to acquire the original image acquired by the underlying hardware based on the fourth request.
S709, the image receiving module stores the original image into the memory 1 and the memory 2.
S710, the image processing module selects an original image stored in the memory 1 for algorithm processing, format conversion, and other processing to obtain a first image; and selects an original image stored in the memory 2 for algorithm processing, format conversion, and other processing to obtain a second image.
S711, the image processing module stores the first image to preview 1 and record 1, and stores the second image to preview 2 and record 2.
S712, the camera application calls the first image in the preview 1 through the camera access interface for large-window preview display, and calls the first image in the record 1 for encoding to store the video corresponding to the large window; the camera application calls the second image in the preview 2 through the camera access interface for small-window preview display, and calls the second image in the record 2 for encoding to store the video corresponding to the small window.
The terminal device displays a recording interface comprising a large window and a small window.
The terminal device illustratively displays the interface shown at b in fig. 5.
Therefore, the terminal device can realize preview display and recording of both the large window and the small window. Moreover, the recording request corresponding to the large window is not separately processed at the underlying layer (i.e., the image stored in the record 1 is not reprocessed), and the recording request corresponding to the small window is not separately processed at the underlying layer (i.e., the image stored in the record 2 is not reprocessed), reducing the memory occupied by processing two recording streams.
It may be understood that, if the terminal device receives the recording operation after performing S611, the camera application issues a fourth request to the control module via the camera access interface. The terminal device performs S708-S712. The terminal device displays a recording interface comprising a large window and a small window. Illustratively, the terminal device displays the interface shown as e in fig. 4.
In summary, the terminal device can adopt a flexibly adaptive caching strategy according to the dynamic requests of the large window and the small window: the two preview streams are used to display the large window and the small window respectively, and the recording streams bound to them are obtained by splitting the preview streams, yielding two recording streams for video encoding, so that one recording operation produces multiple videos.
The above embodiment is described with respect to the recording process in the main angle mode. The method provided by the embodiment of the application can also be applied to other video modes. The recording process of other recording modes is similar to the recording process of the main angle mode, and is not repeated here.
For example, taking a conventional video recording mode (i.e., one recording generates one video) as an example, the terminal device configures one preview stream and one recording stream. Specifically, the terminal device receives a click operation for the second video control. The camera application issues a configuration command to the control module via the camera access interface. The control module configures a preview buffer queue (preview 1), a recording buffer queue (recording 1), and a memory queue (memory 1) based on the configuration command, and binds the preview buffer queue and the recording buffer queue so that the preview buffer queue of the large window (preview 1) corresponds to the recording buffer queue of the large window (recording 1).
When the terminal device previews in the conventional video recording mode, it executes S604-S610 above. When the terminal device records in the conventional video recording mode, it executes S701-S706 above.
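The point that other recording modes differ only in the number of configured stream triples can be sketched as follows; mode names and counts follow the text, and everything else is illustrative:

```python
def configure_streams(mode: str) -> dict:
    """Sketch of the configuration step: conventional video uses one
    preview/recording/memory triple; the main angle mode uses two. The rest
    of the pipeline (preview S604-S610, recording S701-S706) is identical
    and simply runs once per configured stream."""
    n = {"conventional": 1, "main_angle": 2}[mode]
    return {
        "preview": [f"preview{i}" for i in range(1, n + 1)],
        "record":  [f"record{i}"  for i in range(1, n + 1)],
        "memory":  [f"memory{i}"  for i in range(1, n + 1)],
    }

assert configure_streams("conventional")["preview"] == ["preview1"]
assert len(configure_streams("main_angle")["memory"]) == 2
```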
It will be appreciated that the above embodiments take the camera application as an example; the embodiments of the present application may also be applied to third-party camera applications, for example, WeChat, short video shooting applications, and the like. The embodiment of the present application is not limited thereto.
In some embodiments, a user may set which shooting modes use the video recording method provided by the embodiment of the present application. When the terminal device receives that the user sets a first shooting mode (e.g., a beauty video mode, an anti-shake video mode, the main angle mode, etc.) to a first mode (e.g., a power saving mode, a current saving mode, etc.), the first shooting mode previews and records using the video recording method provided by the embodiment of the present application. When the terminal device receives that the user sets the first shooting mode to a second mode (e.g., an optimization mode, etc.), the first shooting mode does not preview and record using the video recording method provided by the embodiment of the present application.
Fig. 8 is a schematic flow chart of a video recording method according to an embodiment of the present application. As shown in fig. 8, the video recording method includes:
S801, the terminal device displays a first interface, where the first interface includes a first window and a recording control, and the first window displays a first picture captured by the camera in real time.
In the embodiment of the application, the first interface can be a preview interface for displaying the image for preview. The first interface may correspond to the interface shown in fig. 4. The first window may correspond to the large window above.
S802, when receiving a first operation for a recording control, the terminal equipment generates a first recording request.
The first operation may be a click operation, a touch operation, or other operations (e.g., voice, etc.), which are not limited in the embodiment of the present application.
S803, the terminal equipment acquires an original image acquired by the camera based on the first recording request and stores the original image into a first memory queue; and the terminal equipment processes the original image of the first memory queue to obtain a first image.
The first recording request may correspond to the third request or the fourth request above.
S804, the terminal equipment stores the first image into a first preview buffer queue and a first recording buffer queue, wherein the first preview buffer queue and the first recording buffer queue correspond to the first memory queue.
S805, the terminal device displays a second interface, wherein the second interface comprises a first window, a first picture is displayed on the first window, and the first picture is generated based on a first image in a first preview cache queue.
S806, the terminal equipment encodes based on the first image in the first recording cache queue to store the first video.
The second interface may be a recording interface, and video is recorded while the image is displayed for previewing. The first video may correspond to a large window recorded video.
It can be understood that in the recording process, the original image in the memory queue is acquired and processed, and the processed image is stored in the corresponding preview buffer queue and recording buffer queue. Therefore, the number of original images cached by the terminal device can be reduced, reducing memory occupation and, in turn, power consumption.
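A back-of-the-envelope model of this memory saving, under the assumption (made here for illustration, not stated in the text) of one in-flight raw frame per consumer of each stream:

```python
def raw_buffers(streams: int, shared_memory_queue: bool) -> int:
    """Count in-flight raw-frame buffers. Without a shared memory queue,
    preview and recording each keep their own raw copy per stream; with the
    shared queue of this method, one raw copy per stream pair suffices, and
    only the processed frame is duplicated into the two cache queues."""
    copies_per_stream = 1 if shared_memory_queue else 2
    return streams * copies_per_stream

# Main angle mode (two stream pairs): half the raw buffers under this model.
assert raw_buffers(2, shared_memory_queue=False) == 4
assert raw_buffers(2, shared_memory_queue=True) == 2
```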
Optionally, before the terminal device displays the first interface, the method further includes: the terminal equipment displays a third interface, wherein the third interface comprises a first control, and the first control corresponds to the first video recording mode; the terminal equipment receives a second operation aiming at the first control; the terminal equipment configures a first preview buffer queue, a first recording buffer queue and a first memory queue based on a first video recording mode.
The first control can be a video control for indicating a conventional video mode; and may also be used to indicate controls for other recording modes. The second operation may be a click operation, a touch operation, or other operations (e.g., voice, etc.), which are not limited in the embodiment of the present application. The third interface may correspond to the camera preview interface shown in fig. 4 above.
Thus, the method can be applied to a mode of recording one video by one video generation, such as a conventional video recording mode, a professional video recording mode and the like.
Optionally, before the terminal device displays the first interface, the method further includes: the terminal equipment displays a fourth interface, wherein the fourth interface comprises a second control, and the second control corresponds to a second video recording mode; the terminal equipment receives a third operation aiming at the second control; the terminal equipment configures a first preview buffer queue, a second preview buffer queue, a first recording buffer queue, a second recording buffer queue, a first memory queue and a second memory queue based on a second video mode; the second preview buffer queue and the second recording buffer queue correspond to the second memory queue.
The second control may be a main angle mode control or a control for another video recording mode. The third operation may be a click operation, a touch operation, or another operation (e.g., a voice command), which is not limited in the embodiments of the present application. The fourth interface may correspond to the preview interface of the main angle mode shown in fig. 4 above.
Thus, the method can be applied to modes in which one recording generates two videos, such as the main angle mode, and therefore has a wide application range.
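By contrast, the dual-stream mode configures two of everything. A minimal sketch, with the path names and queue sizes again being illustrative assumptions:

```python
from collections import deque

def configure_dual_stream():
    """Hypothetical configuration for a mode (e.g., the main angle mode)
    in which one recording produces two videos: each path gets its own
    memory queue, preview buffer queue, and recording buffer queue."""
    paths = {}
    for name in ("first", "second"):
        paths[name] = {
            "memory": deque(maxlen=8),     # raw images from the camera
            "preview": deque(maxlen=4),    # processed frames for display
            "recording": deque(maxlen=4),  # processed frames for the encoder
        }
    return paths

paths = configure_dual_stream()
```

Each preview/recording buffer queue pair corresponds to its own memory queue, matching the correspondence stated in the text.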
Optionally, the second interface displays one or more tracking identifiers; the terminal device receives a fourth operation on a first tracking identifier, where the first tracking identifier is one of the one or more tracking identifiers; in response to the fourth operation, the terminal device generates a second recording request; the terminal device stores the original image acquired by the camera into the first memory queue and the second memory queue based on the second recording request; the terminal device performs algorithm processing on the original image in the first memory queue to obtain a first image, and performs algorithm processing on the original image in the second memory queue to obtain a second image; the terminal device stores the first image into the first preview buffer queue and the first recording buffer queue, and stores the second image into the second preview buffer queue and the second recording buffer queue; the terminal device displays a fifth interface, where the fifth interface includes a first window and a second window, the first window displays a first picture generated based on the first image in the first preview buffer queue, and the second window displays a second picture generated based on the second image in the second preview buffer queue; the second picture is a partial picture related to a first object in the first window, and the first object is the object corresponding to the first tracking identifier; the terminal device encodes based on the first image in the first recording buffer queue to save a first video, and encodes based on the second image in the second recording buffer queue to save a second video.
The tracking identifier may be the tracking box above, or may be other identifiers. The second window may correspond to the small window above. The fourth operation may be a click operation, a touch operation, or other operations (e.g., voice, etc.), which are not limited in the embodiment of the present application. The second recording request may correspond to the fourth request above. The fifth interface may correspond to the interface shown in fig. 5 above. The second video may correspond to a video recorded in a small window.
The terminal device can set a tracking target to be tracked during recording. When a tracking target is selected during recording, the terminal device takes original images from the two memory queues for processing, and stores the processed images into the corresponding preview buffer queues and recording buffer queues. In this way, the number of original images buffered by the terminal device can be reduced, which lowers memory occupation and, in turn, power consumption.
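The per-frame fan-out after a tracking target is selected can be sketched as follows. The function name, the nested-list stand-in for pixel data, and the crop box are all assumptions for illustration; the real algorithm processing is not specified here:

```python
from collections import deque

def deliver_frame(raw, queues, crop_box):
    """Illustrative fan-out once a tracking target is selected: the raw
    image enters both memory queues; each path is processed once, and the
    result is copied into that path's preview and recording buffer
    queues. `raw` is a nested list standing in for pixel data; `crop_box`
    is the region around the tracked object (an assumed representation)."""
    queues["mem1"].append(raw)
    queues["mem2"].append(raw)
    first_image = raw                     # full-frame path (no-op stand-in)
    x0, y0, x1, y1 = crop_box
    second_image = [row[x0:x1] for row in raw[y0:y1]]  # tracked crop
    for name in ("preview1", "record1"):
        queues[name].append(first_image)
    for name in ("preview2", "record2"):
        queues[name].append(second_image)
    return first_image, second_image
```

Note that the raw image is buffered only in the two memory queues; the four downstream queues hold already-processed images, which is what keeps the raw-image footprint small.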
Optionally, the first interface displays one or more tracking identifiers; before the terminal device receives the first operation on the recording control, the method further includes: the terminal device receives a fifth operation on a second tracking identifier, where the second tracking identifier is one of the one or more tracking identifiers; in response to the fifth operation, the terminal device displays a sixth interface, where the sixth interface includes the first window, a second window, and the recording control, the first window displays the first picture, the second window displays a second picture, the second picture is a partial picture related to a first object in the first window, and the first object is the object corresponding to the second tracking identifier; the terminal device stores the original image acquired by the camera into the second memory queue based on the first recording request; the terminal device processes the original image in the second memory queue to obtain a second image; the terminal device stores the second image into the second preview buffer queue and the second recording buffer queue; the second interface further includes the second window, the second window displays the second picture, and the second picture is generated based on the second image in the second preview buffer queue; the terminal device encodes based on the second image in the second recording buffer queue to save a second video.
The tracking identifier may be the tracking box above, or another identifier. The second window may correspond to the small window above. The fifth operation may be a click operation, a touch operation, or another operation (e.g., a voice command); the type and manner of the fifth operation are not limited in the embodiments of the present application. The sixth interface may correspond to the interface shown in fig. 5 above. The first recording request may correspond to the fourth request above. The second video may correspond to the video recorded in the small window.
The terminal device may start recording after the tracking target is set, and display the small window to record an additional video stream.
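The difference between the two entry points (target selected before vs. during recording) boils down to which paths the recording request activates. A minimal sketch under that assumption, with illustrative path names:

```python
def active_paths(tracking_set):
    """Sketch of which paths a recording request drives. If a tracking
    target was set before recording started, the small-window (second)
    path records alongside the main path; otherwise only the main path
    runs. The names are assumptions for illustration."""
    paths = ["first"]
    if tracking_set:
        paths.append("second")
    return paths
```

So a first recording request issued after a target was set already routes frames into both the first and second memory queues.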
Optionally, at a first moment, the terminal device detects that the first object is displayed at a first position in the first window, and the second window displays a partial picture related to the first object at the first position in the first window; at a second moment, the terminal device detects that the first object is displayed at a second position in the first window, and the second window displays a partial picture related to the first object at the second position in the first window.
In this way, the small window is displayed based on the tracking target. In the embodiments of the present application, the picture (tracking picture) displayed in the second window changes as the position of the tracking target changes; reference may be made to fig. 1A or fig. 1B. For example, the interface at the first moment may be the interface shown in a in fig. 1A, and the interface at the second moment may be the interface shown in b in fig. 1A; alternatively, the interface at the first moment may be the interface shown in a in fig. 1B, and the interface at the second moment may be the interface shown in b in fig. 1B.
In some embodiments, the focus also changes as the position of the tracking target changes; reference may likewise be made to fig. 1A or fig. 1B.
Optionally, the second picture is obtained by cropping the first picture.
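One way to realize a crop that follows the tracked object between the two moments is to recentre a fixed-size window on the object's current position and clamp it to the frame. This is a sketch under that assumption; the frame and window dimensions are illustrative:

```python
def crop_around(frame_w, frame_h, cx, cy, win_w, win_h):
    """Sketch of the second picture following the tracked object: the
    crop window is recentred on the object's current position (cx, cy)
    and clamped so it stays inside the frame. All parameters are
    assumptions for illustration."""
    x0 = min(max(cx - win_w // 2, 0), frame_w - win_w)
    y0 = min(max(cy - win_h // 2, 0), frame_h - win_h)
    return x0, y0, x0 + win_w, y0 + win_h
```

Calling this at the first and second moments with the object's detected positions yields the two different partial pictures described above.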
Optionally, before the first operation is received, the method further includes: the terminal device generates a first preview request; the terminal device acquires the original image captured by the camera based on the first preview request and stores it into the first memory queue; the terminal device processes the original image in the first memory queue to obtain the first image; the terminal device stores the first image into the first preview buffer queue to generate the first picture.
In this way, during preview no images are buffered in the recording buffer queue, and no video is saved.
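The preview-only path can be sketched as follows; the processing step is a stand-in and the function name is an assumption:

```python
from collections import deque

def preview_frame(raw, memory_q, preview_q, recording_q):
    """Sketch of the preview-only path: on a preview request the raw
    image is stored in the memory queue, processed, and placed in the
    preview buffer queue. The recording buffer queue is deliberately
    left untouched, so nothing is encoded and no video is saved."""
    memory_q.append(raw)
    processed = raw  # stand-in for the algorithm processing step
    preview_q.append(processed)
    return processed

memory_q, preview_q, recording_q = deque(), deque(), deque()
preview_frame("frame0", memory_q, preview_q, recording_q)
```

The empty recording buffer queue is exactly why no video exists until a recording request arrives.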
Optionally, the terminal device sets a first identifier for the first preview buffer queue and the first recording buffer queue; and/or the terminal device sets a second identifier for the second preview buffer queue and the second recording buffer queue.
Through the identifiers, the terminal device can identify the corresponding preview buffer queue and recording buffer queue, which is convenient for control and easy to implement.
In some embodiments, the user may configure whether a shooting mode uses the video recording method provided by the embodiments of the present application. When the terminal device receives a setting of a first shooting mode (e.g., beauty video, anti-shake video, or the main angle mode) to a first mode (e.g., a power-saving mode), previewing and recording in the first shooting mode use the video recording method provided by the embodiments of the present application. When the terminal device receives a setting of the first shooting mode to a second mode (e.g., an optimization mode), previewing and recording in the first shooting mode do not use the video recording method provided by the embodiments of the present application.
The video recording method of the embodiments of the present application has been described above; the apparatus for executing the method is described below. Those skilled in the art will appreciate that the method and the apparatus may be combined and cross-referenced, and the related apparatus provided in the embodiments of the present application may perform the steps of the video recording method described above.
Fig. 9 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application. The video recording apparatus may be the terminal device in the embodiments of the present application, or a chip or chip system in the terminal device.
As shown in fig. 9, a video recording apparatus 2100 may be used in a communication device, a circuit, a hardware component, or a chip, and includes a display unit 2101 and a processing unit 2102. The display unit 2101 supports the display steps performed by the video recording apparatus 2100, and the processing unit 2102 supports the information-processing steps performed by the video recording apparatus 2100.
In a possible implementation, the video recording apparatus 2100 may further include a communication unit 2103, configured to support the video recording apparatus 2100 in performing the steps of sending and receiving data. The communication unit 2103 may be an input or output interface, a pin, a circuit, or the like.
In a possible embodiment, the video recording apparatus may further include a storage unit 2104, connected to the processing unit 2102 by a line. The storage unit 2104 may include one or more memories, which may be devices, circuits, or other components for storing programs or data. The storage unit 2104 may exist independently and be connected to the processing unit 2102 of the video recording apparatus through a communication line, or may be integrated with the processing unit 2102.
The storage unit 2104 may store computer-executable instructions of the method on the terminal device side, so that the processing unit 2102 executes the method in the above embodiments. The storage unit 2104 may be a register, a cache, a random access memory (RAM), or the like, and may be integrated with the processing unit 2102. The storage unit 2104 may also be a read-only memory (ROM) or another type of static storage device that stores static information and instructions, and may be independent of the processing unit 2102.
The video recording method provided by the embodiment of the application can be applied to electronic equipment with a video recording function. The electronic device includes a terminal device, and specific device forms and the like of the terminal device may refer to the above related descriptions, which are not repeated herein.
The embodiment of the application provides a terminal device, including: a processor and a memory; the memory stores computer-executable instructions; the processor executes the computer-executable instructions stored in the memory, so that the terminal device performs the method described above.
Fig. 10 shows a schematic structural diagram of a terminal device.
The terminal device may include a processor 1010, an external memory interface 1020, an internal memory 1021, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It will be appreciated that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the terminal device. In other embodiments of the application, the terminal device may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 1010 may include one or more processing units, such as: the processor 1010 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may generate an operation control signal according to an instruction operation code and a timing signal, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 1010 for storing instructions and data. In some embodiments, the memory in the processor 1010 is a cache. The memory may hold instructions or data that the processor 1010 has just used or uses cyclically. If the processor 1010 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 1010, and thus improves system efficiency.
In some embodiments, the processor 1010 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the terminal device may include 1 or N cameras 193, N being a positive integer greater than 1.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only illustrative, and does not limit the structure of the terminal device. In other embodiments of the present application, the terminal device may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The embodiment of the application provides a chip. The chip comprises a processor for invoking a computer program in a memory to perform the technical solutions in the above embodiments. The principle and technical effects of the present application are similar to those of the above-described related embodiments, and will not be described in detail herein.
The embodiment of the application also provides a computer readable storage medium. The computer-readable storage medium stores a computer program. The computer program realizes the above method when being executed by a processor. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
In one possible implementation, the computer readable medium may include RAM, ROM, compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Embodiments of the present application provide a computer program product comprising a computer program which, when executed, causes a computer to perform the above-described method.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing detailed description of the application has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and description only, and is not intended to limit the scope of the application.

Claims (10)

1. A method of recording video, the method comprising:
a terminal device displays a first interface, wherein the first interface comprises a first window and a recording control, and the first window displays a first picture acquired by a camera in real time;
the terminal device generates a first recording request when receiving a first operation on the recording control;
the terminal device acquires an original image captured by the camera based on the first recording request and stores the original image into a first memory queue;
the terminal device processes the original image in the first memory queue to obtain a first image;
the terminal device stores the first image into a first preview buffer queue and a first recording buffer queue, wherein the first preview buffer queue and the first recording buffer queue correspond to the first memory queue;
the terminal device displays a second interface, wherein the second interface comprises the first window, the first window displays the first picture, and the first picture is generated based on the first image in the first preview buffer queue;
the terminal device encodes based on the first image in the first recording buffer queue to save a first video;
before the terminal device displays the first interface, the method further comprises:
the terminal device displays a third interface, wherein the third interface comprises a first control, and the first control corresponds to a first video recording mode;
the terminal device receives a second operation on the first control;
and the terminal device configures the first preview buffer queue, the first recording buffer queue and the first memory queue based on the first video recording mode.
2. The method of claim 1, wherein before the terminal device displays the first interface, the method further comprises:
the terminal device displays a fourth interface, wherein the fourth interface comprises a second control, and the second control corresponds to a second video recording mode;
the terminal device receives a third operation on the second control;
the terminal device configures the first preview buffer queue, a second preview buffer queue, the first recording buffer queue, a second recording buffer queue, the first memory queue and a second memory queue based on the second video recording mode;
wherein the second preview buffer queue and the second recording buffer queue correspond to the second memory queue.
3. The method of claim 2, wherein the second interface displays one or more tracking identifiers;
the terminal device receives a fourth operation on a first tracking identifier, wherein the first tracking identifier is one of the one or more tracking identifiers;
in response to the fourth operation, the terminal device generates a second recording request;
the terminal device stores the original image acquired by the camera into the first memory queue and the second memory queue based on the second recording request;
the terminal device performs algorithm processing on the original image in the first memory queue to obtain the first image, and performs algorithm processing on the original image in the second memory queue to obtain a second image;
the terminal device stores the first image into the first preview buffer queue and the first recording buffer queue, and stores the second image into the second preview buffer queue and the second recording buffer queue;
the terminal device displays a fifth interface, wherein the fifth interface comprises the first window and a second window, the first window displays the first picture, the first picture is generated based on the first image in the first preview buffer queue, the second window displays a second picture, and the second picture is generated based on the second image in the second preview buffer queue;
the second picture is a partial picture related to a first object in the first window, and the first object is the object corresponding to the first tracking identifier;
the terminal device encodes based on the first image in the first recording buffer queue to save the first video, and encodes based on the second image in the second recording buffer queue to save a second video.
4. The method of claim 2, wherein the first interface displays one or more tracking identifiers;
before the terminal device receives the first operation on the recording control, the method further comprises:
the terminal device receives a fifth operation on a second tracking identifier, wherein the second tracking identifier is one of the one or more tracking identifiers;
in response to the fifth operation, the terminal device displays a sixth interface, wherein the sixth interface comprises the first window, a second window and the recording control, the first window displays the first picture, the second window displays a second picture, the second picture is a partial picture related to a first object in the first window, and the first object is the object corresponding to the second tracking identifier;
the terminal device stores the original image acquired by the camera into the second memory queue based on the first recording request;
the terminal device processes the original image in the second memory queue to obtain a second image;
the terminal device stores the second image into the second preview buffer queue and the second recording buffer queue;
the second interface further comprises the second window, wherein the second window displays the second picture, and the second picture is generated based on the second image in the second preview buffer queue;
and the terminal device encodes based on the second image in the second recording buffer queue to save a second video.
5. The method according to claim 3 or 4, wherein
at a first moment, the terminal device detects that the first object is displayed at a first position in the first window, and the second window displays a partial picture related to the first object at the first position in the first window;
at a second moment, the terminal device detects that the first object is displayed at a second position in the first window, and the second window displays a partial picture related to the first object at the second position in the first window.
6. The method of claim 4, wherein the second picture is obtained by cropping the first picture.
7. The method according to any one of claims 1-4, wherein before the terminal device receives the first operation, the method further comprises:
the terminal device generates a first preview request;
the terminal device acquires the original image captured by the camera based on the first preview request and stores it into the first memory queue;
the terminal device processes the original image in the first memory queue to obtain the first image;
and the terminal device stores the first image into the first preview buffer queue to generate the first picture.
8. The method according to any one of claims 1-4, wherein
the terminal device sets a first identifier for the first preview buffer queue and the first recording buffer queue;
and/or the terminal device sets a second identifier for the second preview buffer queue and the second recording buffer queue.
9. A terminal device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
The processor executing computer-executable instructions stored in the memory to cause the terminal device to perform the method of any one of claims 1-8.
10. A computer readable storage medium storing a computer program, which when executed by a processor implements the method according to any one of claims 1-8.
CN202210946060.7A 2022-08-08 2022-08-08 Video recording method and related device Active CN115484403B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202311475949.2A CN117479000A (en) 2022-08-08 2022-08-08 Video recording method and related device
CN202210946060.7A CN115484403B (en) 2022-08-08 2022-08-08 Video recording method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210946060.7A CN115484403B (en) 2022-08-08 2022-08-08 Video recording method and related device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311475949.2A Division CN117479000A (en) 2022-08-08 2022-08-08 Video recording method and related device

Publications (2)

Publication Number Publication Date
CN115484403A CN115484403A (en) 2022-12-16
CN115484403B true CN115484403B (en) 2023-10-24

Family

ID=84421933

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210946060.7A Active CN115484403B (en) 2022-08-08 2022-08-08 Video recording method and related device
CN202311475949.2A Pending CN117479000A (en) 2022-08-08 2022-08-08 Video recording method and related device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311475949.2A Pending CN117479000A (en) 2022-08-08 2022-08-08 Video recording method and related device

Country Status (1)

Country Link
CN (2) CN115484403B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095460B (en) * 2022-05-25 2023-11-21 荣耀终端有限公司 Video recording method, device and storage medium
CN116055868B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Shooting method and related equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108769521A (en) * 2018-06-05 2018-11-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Photographing method, mobile terminal and computer-readable storage medium
CN108874473A (en) * 2018-06-15 2018-11-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video capture method and related product
CN109271327A (en) * 2017-07-18 2019-01-25 Hangzhou Hikvision Digital Technology Co., Ltd. Memory management method and device
CN112312023A (en) * 2020-10-30 2021-02-02 Beijing Xiaomi Mobile Software Co., Ltd. Camera buffer queue allocation method and device, electronic equipment and storage medium
CN113542545A (en) * 2021-05-28 2021-10-22 Qingdao Hisense Mobile Communication Technology Co., Ltd. Electronic equipment and video recording method
CN113992854A (en) * 2021-10-29 2022-01-28 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image preview method and device, electronic equipment and computer-readable storage medium
CN114125284A (en) * 2021-11-18 2022-03-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, electronic device, and storage medium
CN114327942A (en) * 2021-12-24 2022-04-12 Luster LightTech Co., Ltd. Shared memory management method and cache service component
WO2022105759A1 (en) * 2020-11-20 2022-05-27 Huawei Technologies Co., Ltd. Video processing method and apparatus, and storage medium
WO2022105803A1 (en) * 2020-11-20 2022-05-27 Huawei Technologies Co., Ltd. Camera calling method and system, and electronic device
CN114710702A (en) * 2022-06-07 2022-07-05 Wuhan Lingjiu Microelectronics Co., Ltd. Video playing method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394319B (en) * 2014-11-24 2018-02-16 Haoyun Technologies Co., Ltd. Embedded high-definition network video recorder
CN110072070B (en) * 2019-03-18 2021-03-23 Huawei Technologies Co., Ltd. Multi-channel video recording method, equipment and medium
CN110213502B (en) * 2019-06-28 2022-07-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing device, storage medium and electronic equipment
CN116582741B (en) * 2020-05-07 2023-11-28 Huawei Technologies Co., Ltd. Shooting method and equipment
CN115379112A (en) * 2020-09-29 2022-11-22 Huawei Technologies Co., Ltd. Image processing method and related device
CN114520886A (en) * 2020-11-18 2022-05-20 Huawei Technologies Co., Ltd. Slow-motion video recording method and equipment
CN113329176A (en) * 2021-05-25 2021-08-31 Hisense Electronic Technology (Shenzhen) Co., Ltd. Image processing method and related device applied to camera of intelligent terminal
CN113347378A (en) * 2021-06-02 2021-09-03 Spreadtrum Communications (Tianjin) Co., Ltd. Video recording method and device

Also Published As

Publication number Publication date
CN117479000A (en) 2024-01-30
CN115484403A (en) 2022-12-16

Similar Documents

Publication Publication Date Title
CN114205522B (en) Method for long-focus shooting and electronic equipment
CN115484403B (en) Video recording method and related device
CN113747085B (en) Method and device for shooting video
EP4020967B1 (en) Photographic method in long focal length scenario, and mobile terminal
WO2021244455A1 (en) Image content removal method and related apparatus
US20210076080A1 (en) Method and server for generating image data by using multiple cameras
CN113099146B (en) Video generation method and device and related equipment
CN113709355B (en) Sliding zoom shooting method and electronic equipment
CN115689963B (en) Image processing method and electronic equipment
WO2021115483A1 (en) Image processing method and related apparatus
CN112004076B (en) Data processing method, control terminal, AR system, and storage medium
CN112184722A (en) Image processing method, terminal and computer storage medium
CN111314606A (en) Photographing method and device, electronic equipment and storage medium
WO2022057384A1 (en) Photographing method and device
WO2021180046A1 (en) Image color retention method and device
CN117692771A (en) Focusing method and related device
CN114697530B (en) Photographing method and device for intelligent view finding recommendation
CN115914860A (en) Shooting method and electronic equipment
CN114390191A (en) Video recording method, electronic device and storage medium
CN116055861B (en) Video editing method and electronic equipment
CN114285963B (en) Multi-lens video recording method and related equipment
CN116723264B (en) Method, apparatus and storage medium for determining target location information
CN115334240B (en) Image shooting method, intelligent terminal and storage medium
CN116055867B (en) Shooting method and electronic equipment
WO2022228010A1 (en) Method for generating cover, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant