CN117221708A - Shooting method and related electronic equipment - Google Patents

Shooting method and related electronic equipment

Info

Publication number
CN117221708A
Authority
CN
China
Prior art keywords: module, data stream, video, encoder, image
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210751432.0A
Other languages
Chinese (zh)
Inventor
王拣贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Publication of CN117221708A

Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a shooting method and related electronic equipment, wherein the method comprises the following steps: displaying a recording interface, wherein the recording interface comprises a preview window and a first control, and the preview window is used for displaying images acquired by a camera; after a first operation acting on the first control is detected, one or more marks are displayed on a first image, wherein the first image is an image displayed by a preview window, and the one or more marks respectively correspond to one or more objects in the first image; detecting a second operation for the first mark, displaying a small window on the first interface, and displaying a close-up image of the first object on the small window; the first mark is any one of the one or more marks, and the first object is an object corresponding to the first mark; and storing the video of the preview window and the video of the small window in response to the operation of ending the video recording.

Description

Shooting method and related electronic equipment
Technical Field
The present application relates to the field of photographing, and in particular, to a photographing method and related electronic devices.
Background
Nowadays, electronic devices such as mobile phones that support capturing video can provide an automatic focus-tracking shooting mode. When recording video, the electronic device may receive a user-selected principal angle (i.e., the main subject to be tracked). The electronic device then keeps following the principal angle throughout the subsequent recording, so as to obtain a close-up video whose center is always the selected principal angle.
Disclosure of Invention
The embodiment of the application provides a shooting method and related electronic equipment, which solve the problem that, when a plurality of small-window videos are recorded in the principal angle recording mode, images belonging to an earlier small-window video appear in a later small-window video.
In a first aspect, an embodiment of the present application provides a photographing method, which is applied to an electronic device having a camera, and the method includes: displaying a first interface; the first interface comprises a preview window, a first control and a video recording control, wherein the preview window is used for displaying images acquired by the camera; responding to a first operation aiming at a first control, displaying N marks on a first image, wherein the first image is the image currently displayed by a preview window, and the N marks respectively correspond to N objects in the first image; displaying a widget on the first interface in response to a second operation for the first marker, displaying a close-up image of the first object in the widget; the first mark is any one of N marks, and the first object is an object corresponding to the first mark; at a first moment, responding to a third operation for a video recording control, and recording a first video and a second video; the first video is a video of a preview window, and the second video is a video of a small window; displaying a second control on the widget; at a second moment, responding to a fourth operation for a second control, stopping recording a second video, and displaying no small window on the first interface; at a third moment, responding to a fifth operation aiming at the second mark, displaying a small window on the first interface, displaying a close-up image of a second object on the small window, wherein the second mark is any one of N marks, and the second object is an object corresponding to the second mark; and recording a third video, wherein the third video is a video of a small window.
With reference to the first aspect, in one possible implementation manner, in response to a third operation for the video recording control, in a process of recording the first video and the second video, the method further includes: detecting a first input operation for a third marker; and responding to the first input operation, displaying a close-up image of a third object on the small window, wherein the third mark is any mark except the first mark in the N marks, and the third object is the object corresponding to the third mark.
With reference to the first aspect, in one possible implementation manner, the electronic device further includes a mode module, a stream management module, a storage module, an encoding control module, an encoder module, a camera HAL module, and a camera.
With reference to the first aspect, in one possible implementation manner, after responding to the first operation for the first control, the method includes: the mode module triggers the stream management module to configure the first data stream and the second data stream; the first data stream is the data stream of the preview window, and the second data stream is the data stream of the small window; the stream management module configures a first data stream and a second data stream; the stream management module sends the data stream configuration information to the coding control module; the data stream configuration information comprises an address of a first storage area and an address of a second storage area, wherein the first storage area is used for caching the first data stream, and the second storage area is used for caching the second data stream; the storage module creates a first video file and a second video file, and sends video file information to the coding control module; the first video file is used for storing videos corresponding to the preview window, the second video file is used for storing videos corresponding to the small window, and the video file information comprises first video file information and second video file information; the coding control module configures a first coder parameter and a second coder parameter based on the video file information and the data stream configuration information, and sends the first coding configuration parameter to the coder module; the first encoding configuration parameters include a first encoder parameter and a second encoder parameter; the encoder module creates a first encoder based on the first encoder parameter and creates a second encoder based on the second encoder parameter; the first encoder corresponds to the preview window, and the second encoder corresponds to the widget; the stream management module configures a first stream identification parameter and a second stream identification parameter, and sends the first stream identification parameter and the second stream identification parameter to the camera HAL module; the first stream identification parameter is used for identifying a first data stream, and the second stream identification parameter is used for identifying a second data stream; the camera HAL module parses the first stream identification parameter and the second stream identification parameter. Thus, when video recording is performed after entering the main angle mode, the data stream of the preview window video and the data stream of the small window video can be encoded, so that video files of the preview window video and the small window video are obtained.
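As an illustrative (non-limiting) sketch of the encoder configuration described above, the following Kotlin snippet shows how a first encoder for the preview-window stream and a second encoder for the small-window stream could be created with Android's MediaCodec. The resolutions, bit rates, and frame rates are assumptions for illustration only; the application does not disclose concrete encoder parameters.

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Illustrative encoder parameters; concrete values are not specified in the application.
data class EncoderParams(val width: Int, val height: Int, val bitRate: Int, val frameRate: Int)

fun createVideoEncoder(params: EncoderParams): MediaCodec {
    val format = MediaFormat.createVideoFormat(
        MediaFormat.MIMETYPE_VIDEO_AVC, params.width, params.height
    ).apply {
        setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
        setInteger(MediaFormat.KEY_BIT_RATE, params.bitRate)
        setInteger(MediaFormat.KEY_FRAME_RATE, params.frameRate)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    return MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC).apply {
        configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    }
}

fun configureEncoders(): Pair<MediaCodec, MediaCodec> {
    // First encoder corresponds to the preview window, second encoder to the small window.
    val firstEncoder = createVideoEncoder(EncoderParams(1920, 1080, 10_000_000, 30))
    val secondEncoder = createVideoEncoder(EncoderParams(1080, 1920, 6_000_000, 30))
    return firstEncoder to secondEncoder
}
```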
With reference to the first aspect, in one possible implementation manner, after responding to the third operation for the video recording control, the method further includes: the mode module triggers the stream management module to send a first data request message to the camera HAL module; the first data request message is used for instructing the camera HAL module to cache the first data stream into the first storage area and the second data stream into the second storage area; the mode module triggers the coding control module to start the first coder and the second coder; the stream management module sends a first data request message to the camera HAL module; the camera HAL copies the first data stream sent by the camera to obtain a copied data stream; the camera HAL module cuts each frame of image in the copied data stream by taking the first object as the center to obtain a second data stream; each frame of image in the second data stream is a close-up image of the first object; the camera HAL module caches the first data stream into a first storage area and the second data stream into a second storage area; the encoding control module starts the first encoder and the second encoder; the first encoder acquires a first data stream from the first storage area and encodes the first data stream to obtain an encoded first data stream; the first encoder sends the encoded first data stream to the encoding control module; the second encoder acquires a second data stream from the second storage area and encodes the second data stream to obtain an encoded second data stream; the second encoder sends the encoded second data stream to the encoding control module; the coding control module respectively packs the coded first data stream and the coded second data stream to obtain a packed first data stream and a packed second data stream; the encoding control module caches the packed first data stream into a first video file, and caches the packed second data stream into a second video file. In this way, in the main angle mode, video streams of the preview window and the small window can be encoded at the same time, so that two video files are obtained.
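The cropping step described above (cutting each frame of the copied stream around the first object to form the close-up stream) could, for example, look like the following sketch. In the application this is done inside the camera HAL on raw buffers; the Bitmap-based version below is only an assumption-laden simplification to show the centering and clamping logic.

```kotlin
import android.graphics.Bitmap
import android.graphics.Point

// Simplified sketch: crop one copied preview frame around the tracked object so that the
// object stays at the center of the small-window (close-up) stream. Assumes the crop size
// is smaller than the frame; the real HAL-level implementation is not disclosed.
fun cropAroundSubject(frame: Bitmap, subjectCenter: Point, outWidth: Int, outHeight: Int): Bitmap {
    // Clamp the crop window so it stays inside the frame while keeping the subject centered.
    val left = (subjectCenter.x - outWidth / 2).coerceIn(0, frame.width - outWidth)
    val top = (subjectCenter.y - outHeight / 2).coerceIn(0, frame.height - outHeight)
    return Bitmap.createBitmap(frame, left, top, outWidth, outHeight)
}
```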
With reference to the first aspect, in one possible implementation manner, after responding to the fourth operation for the second control, the method further includes: the mode module triggers the stream management module to send a second data request message to the camera HAL module; the second data request message is for instructing the camera HAL module to stop buffering the second data stream into the second storage area; the mode module triggers the coding control module to control the second coder to stop working; the stream management module sends a second data request message to the camera HAL module; the camera HAL module does not cache the second data stream into the second storage area and does not replicate the first data stream; the encoding control module sends a first stop encoding request to the encoder module, wherein the first stop encoding request is used for indicating the encoding module to control the second encoder to stop working, deleting the second encoder and creating a third encoder; the coding control module sends a first storage message to the storage module; the first storage message is used for indicating the storage module to store the second video file and creating a third video file; the third video file is a video file of a small window; the encoder module instructs the second encoder to stop working, deletes the second encoder and creates a third encoder; the third encoder is a small window encoder; the storage module stores the second video file and creates a third video file; the storage module sends the information of the third video file to the coding control module; the coding control module configures a third encoder parameter based on the information of the third video file and the data stream configuration information, and sends the second coding configuration parameter to the encoder module; the second encoding configuration parameter includes a third encoder parameter; the encoder module creates a third encoder based on the second encoding configuration parameters. Thus, the recording of the small window video can be finished in advance, and the small window video file is obtained.
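A minimal sketch of the "stop the second encoder, save the second video file, and create a third encoder" sequence is given below, again using MediaCodec and MediaMuxer as stand-ins for the encoder module and storage module. It reuses the createVideoEncoder helper from the earlier sketch, assumes Surface-input encoding, and the file path is a made-up example.

```kotlin
import android.media.MediaCodec
import android.media.MediaMuxer

// Sketch: end the small-window recording early, finalize its file, and prepare a fresh
// encoder and a third video file for a possible later small-window recording.
fun stopSmallWindowRecording(secondEncoder: MediaCodec, secondMuxer: MediaMuxer): Pair<MediaCodec, MediaMuxer> {
    secondEncoder.signalEndOfInputStream()   // valid when encoding from an input Surface
    secondEncoder.stop()
    secondEncoder.release()                  // "delete the second encoder"

    secondMuxer.stop()                       // save the second video file
    secondMuxer.release()

    // "create a third encoder" and a third video file (hypothetical path)
    val thirdEncoder = createVideoEncoder(EncoderParams(1080, 1920, 6_000_000, 30))
    val thirdMuxer = MediaMuxer("/sdcard/DCIM/Camera/closeup_2.mp4",
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    return thirdEncoder to thirdMuxer
}
```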
With reference to the first aspect, in a possible implementation manner, after responding to the fifth operation for the second flag, the method further includes: the mode module triggers the stream management module to send a third data request message to the camera HAL module; the third data request message is used for instructing the camera HAL module to cache the third data stream into the second storage area; the stream management module sends a third data request message to the camera HAL module; the camera HAL module copies the first data stream sent by the camera to obtain a copied data stream; the camera HAL module cuts each frame of image in the copied data stream by taking the second object as the center to obtain a third data stream; each frame of image in the third data stream is a close-up image of the second object; the camera HAL module adding a second timestamp in the third data stream; the camera HAL module caches a third data stream into the second storage area; the mode module triggers the coding control module to start a third coder; the coding control module starts a third coder; the third encoder obtains a third data stream from the second storage area; the third encoder sends the encoded third data stream to the encoding control module; the encoding control module packs the image data with the second time stamp being more than or equal to the second system time in the encoded third data stream to obtain a packed third data stream; the second system time is the time for storing the second video file; the encoding control module caches the packed third data stream into a third video file. In this way, the electronic device may additionally record the widget video, resulting in a new widget video.
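The timestamp filtering described above (packing only image data whose second timestamp is at or after the second system time, so that frames belonging to the earlier small-window recording do not leak into the new one) could be sketched as follows. Track setup, format-change handling, and end-of-stream handling are deliberately simplified; secondSystemTimeUs is an assumed microsecond representation of the time at which the second video file was saved.

```kotlin
import android.media.MediaCodec
import android.media.MediaMuxer

// Sketch: drain the third encoder and write only samples whose presentation time is at or
// after the moment the second video file was saved, dropping any older buffered frames.
fun drainAndPackThirdStream(thirdEncoder: MediaCodec, thirdMuxer: MediaMuxer,
                            trackIndex: Int, secondSystemTimeUs: Long) {
    val info = MediaCodec.BufferInfo()
    while (true) {
        val index = thirdEncoder.dequeueOutputBuffer(info, 10_000)
        if (index < 0) break                               // simplified: no output ready / format change
        val buffer = thirdEncoder.getOutputBuffer(index) ?: break
        if (info.size > 0 && info.presentationTimeUs >= secondSystemTimeUs) {
            thirdMuxer.writeSampleData(trackIndex, buffer, info)   // cache into the third video file
        }
        thirdEncoder.releaseOutputBuffer(index, false)
        if ((info.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break
    }
}
```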
With reference to the first aspect, in a possible implementation manner, after responding to the third operation for the video recording control, the first interface displays a stop recording control, and after responding to the third operation for the video recording control, the method further includes: detecting a sixth operation for the stop recording control; in response to the sixth operation, the first video and the third video are saved.
With reference to the first aspect, in a possible implementation manner, after detecting the sixth operation for stopping the recording control, the method further includes: the mode module triggers the stream management module to send a fourth data request message to the camera HAL module; the fourth data request message is used for instructing the camera HAL to stop caching the data stream; the mode module triggers the coding control module to control the first coder and the third coder to stop working; the third encoder is a small window encoder; the camera HAL module stops caching the first data stream in the first storage area and stops caching the third data stream in the second storage area; the third data stream is a small window data stream; the coding control module sends a second stop coding request to the coder module, wherein the second stop coding request is used for indicating the coding module to control the first coder and the third coder to stop working; the encoder module controls the first encoder and the third encoder to stop working; the mode module triggers the storage module to store the first video file and the third video file; the first video file is a file corresponding to the first video, and the third video file is a file corresponding to the third video; the storage module stores the first video file and the third video file.
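For the stop-recording path above, a corresponding sketch (same assumptions as the earlier snippets: Surface-input MediaCodec encoders and MediaMuxer files standing in for the encoder and storage modules) might look like this:

```kotlin
import android.media.MediaCodec
import android.media.MediaMuxer

// Sketch: stop the preview-window (first) encoder and the current small-window (third)
// encoder, then finalize and save both video files.
fun stopAllRecording(firstEncoder: MediaCodec, thirdEncoder: MediaCodec,
                     firstMuxer: MediaMuxer, thirdMuxer: MediaMuxer) {
    listOf(firstEncoder, thirdEncoder).forEach { encoder ->
        encoder.signalEndOfInputStream()
        encoder.stop()
        encoder.release()
    }
    listOf(firstMuxer, thirdMuxer).forEach { muxer ->
        muxer.stop()        // save the first / third video file
        muxer.release()
    }
}
```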
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors, a display screen and a memory; the memory is coupled to the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions that the one or more processors call to cause the electronic device to perform: displaying a first interface through a display screen; the first interface comprises a preview window, a first control and a video recording control, wherein the preview window is used for displaying images acquired by the camera; responding to a first operation aiming at a first control, displaying N marks on a first image through a display screen, wherein the first image is the image currently displayed by a preview window, and the N marks respectively correspond to N objects in the first image; controlling the display screen to display a small window on the first interface in response to a second operation aiming at the first mark, and displaying a close-up image of the first object in the small window; the first mark is any one of N marks, and the first object is an object corresponding to the first mark; at a first moment, responding to a third operation for a video recording control, and recording a first video and a second video; the first video is a video of a preview window, and the second video is a video of a small window; controlling the display screen to display a second control on the small window; at a second moment, responding to a fourth operation for the second control, stopping recording the second video, and controlling the display screen not to display a small window on the first interface; at a third moment, responding to a fifth operation aiming at the second mark, controlling the display screen to display a small window on the first interface, and displaying a close-up image of a second object on the small window, wherein the second mark is any one of N marks, and the second object is an object corresponding to the second mark; and recording a third video, wherein the third video is a video of a small window.
With reference to the second aspect, in one possible implementation manner, the one or more processors call the computer instructions to cause the electronic device to perform: detecting a first input operation for a third marker; and responding to the operation, controlling the display screen to display a close-up image of a third object on the small window, wherein the third mark is any mark except the first mark in the N marks, and the third object is the object corresponding to the third mark.
With reference to the second aspect, in one possible implementation manner, the one or more processors call the computer instructions to cause the electronic device to perform: triggering a stream management module to configure a first data stream and a second data stream through a mode module; the first data stream is the data stream of the preview window, and the second data stream is the data stream of the small window; configuring a first data stream and a second data stream through a stream management module; transmitting the data stream configuration information to the coding control module through the stream management module; the data stream configuration information comprises the address of a first storage area and the address of a second storage area, wherein the first storage area is used for caching the first data stream, and the second storage area is used for caching the second data stream; creating a first video file and a second video file through a storage module, and sending video file information to a coding control module; the first video file is used for storing videos corresponding to the preview window, the second video file is used for storing videos corresponding to the small window, and the video file information comprises first video file information and second video file information; configuring, by the encoding control module, the first encoder parameter and the second encoder parameter based on the video file information and the data stream configuration information, and transmitting the first encoding configuration parameter to the encoder module; the first encoding configuration parameters include a first encoder parameter and a second encoder parameter; creating, by the encoder module, a first encoder based on the first encoder parameter, and a second encoder based on the second encoder parameter; the first encoder corresponds to the preview window, and the second encoder corresponds to the widget; configuring a first stream identification parameter and a second stream identification parameter through a stream management module, and transmitting the first stream identification parameter and the second stream identification parameter to a camera HAL module; the first stream identification parameter is used for identifying a first data stream, and the second stream identification parameter is used for identifying a second data stream; the first stream identification parameter and the second stream identification parameter are parsed by the camera HAL module.
With reference to the second aspect, in one possible implementation manner, the one or more processors call the computer instructions to cause the electronic device to perform: triggering a stream management module to send a first data request message to a camera HAL module through a mode module; the first data request message is used for instructing the camera HAL module to cache the first data stream into the first storage area and the second data stream into the second storage area; triggering the coding control module to start the first coder and the second coder through the mode module; sending a first data request message to the camera HAL module through the stream management module; copying the first data stream sent by the camera through the camera HAL to obtain a copied data stream; cutting each frame of image in the copied data stream by using the camera HAL module with the first object as the center to obtain a second data stream; each frame of image in the second data stream is a close-up image of the first object; caching the first data stream into a first storage area and caching the second data stream into a second storage area by a camera HAL module; starting a first encoder and a second encoder through an encoding control module; acquiring a first data stream from a first storage area through a first encoder, and encoding the first data stream to obtain an encoded first data stream; transmitting the encoded first data stream to an encoding control module through a first encoder; acquiring a second data stream from the second storage area through a second encoder, and encoding the second data stream to obtain an encoded second data stream; transmitting the encoded second data stream to an encoding control module through a second encoder; the method comprises the steps of respectively packaging an encoded first data stream and an encoded second data stream through an encoding control module to obtain a packaged first data stream and a packaged second data stream; and caching the packed first data stream into a first video file through the coding control module, and caching the packed second data stream into a second video file.
With reference to the second aspect, in one possible implementation manner, the one or more processors call the computer instructions to cause the electronic device to perform: triggering the flow management module to send a second data request message to the camera HAL module through the mode module; the second data request message is for instructing the camera HAL module to stop buffering the second data stream into the second storage area; triggering an encoding control module through a mode module to control the second encoder to stop working; sending a second data request message to the camera HAL module through a stream management module; not buffering the second data stream into the second storage area and not copying the first data stream by the camera HAL module; sending a first stop coding request to the coder module through the coding control module, wherein the first stop coding request is used for indicating the coding module to control the second coder to stop working, deleting the second coder and creating a third coder; sending a first storage message to a storage module through an encoding control module; the first storage message is used for indicating the storage module to store the second video file and creating a third video file; the third video file is a video file of a small window; indicating the second encoder to stop working through the encoder module, deleting the second encoder and creating a third encoder; the third encoder is a small window encoder; storing the second video file through a storage module, and creating a third video file; the information of the third video file is sent to the coding control module through the storage module; configuring, by the encoding control module, third encoder parameters based on information of the third video file and the data stream configuration information, and transmitting the second encoding configuration parameters to the encoder module; the second encoding configuration parameter includes a third encoder parameter; a third encoder is created by the encoder module based on the second encoding configuration parameters.
With reference to the second aspect, in one possible implementation manner, the one or more processors call the computer instructions to cause the electronic device to perform: triggering the flow management module to send a third data request message to the camera HAL module through the mode module; the third data request message is used for instructing the camera HAL module to cache the third data stream into the second storage area; sending a third data request message to the camera HAL module through the stream management module; copying the first data stream sent by the camera through the camera HAL module to obtain a copied data stream; cutting each frame of image in the copied data stream by using the camera HAL module with the second object as the center to obtain a third data stream; each frame of image in the third data stream is a close-up image of the second object; adding, by the camera HAL module, a second timestamp in the third data stream; caching, by the camera HAL module, a third data stream into the second storage area; triggering the coding control module to start a third coder through a mode module; starting a third encoder through the encoding control module; acquiring a third data stream from the second storage area by a third encoder; transmitting the encoded third data stream to an encoding control module through a third encoder; packaging the image data with the second time stamp being greater than or equal to the second system time in the encoded third data stream through the encoding control module to obtain a packaged third data stream; the second system time is the time for storing the second video file; and caching the packed third data stream into a third video file through the coding control module.
With reference to the second aspect, in one possible implementation manner, the one or more processors call the computer instructions to cause the electronic device to perform: detecting a sixth operation for the stop recording control; in response to the sixth operation, the first video and the third video are saved.
With reference to the second aspect, in one possible implementation manner, the one or more processors call the computer instructions to cause the electronic device to perform: triggering the flow management module to send a fourth data request message to the camera HAL module through the mode module; the fourth data request message is used for instructing the camera HAL to stop caching the data stream; triggering a coding control module through a mode module to control the first coder and the third coder to stop working; the third encoder is a small window encoder; stopping buffering the first data stream into the first storage area by the camera HAL module, and stopping buffering the third data stream into the second storage area; the third data stream is a small window data stream; sending a second stop coding request to the coder module through the coding control module, wherein the second stop coding request is used for indicating the coding module to control the first coder and the third coder to stop working; the first encoder and the third encoder are controlled to stop working through the encoder module; triggering a storage module to store the first video file and the third video file through a mode module; the first video file is a file corresponding to the first video, and the third video file is a file corresponding to the third video; and storing the first video file and the third video file through a storage module.
In a third aspect, an embodiment of the present application provides an electronic device, including: the touch screen, the camera, one or more processors and one or more memories; the one or more processors are coupled with the touch screen, the camera, the one or more memories for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the method as described in the first aspect or any of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a chip system for application to an electronic device, the chip system comprising one or more processors configured to invoke computer instructions to cause the electronic device to perform a method as described in the first aspect or any of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect or any one of the possible implementations of the first aspect.
Drawings
FIGS. 1A-1J are an exemplary set of user interface diagrams provided by embodiments of the present application;
FIGS. 2A-2M are another set of exemplary user interface diagrams provided by embodiments of the present application;
fig. 3A to 3E are flowcharts of a group of photographing methods according to an embodiment of the present application;
fig. 4 is a schematic hardware structure of the electronic device 100 according to the embodiment of the present application;
fig. 5 is a schematic software structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with that embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the application without making any inventive effort are intended to fall within the scope of the application.
The terms "first", "second", "third" and the like in the description, claims, and drawings are used to distinguish between different objects and not to describe a particular sequential order. Furthermore, the terms "comprising", "including", and "having", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a series of steps or elements is not necessarily limited to the listed steps or elements, but may include steps or elements that are not listed or that are inherent to such process, method, article, or apparatus.
Only some, but not all, of the details relating to the application are shown in the accompanying drawings. Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
As used in this specification, the terms "component," "module," "system," "unit," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a unit may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or being distributed between two or more computers. Furthermore, these units may be implemented from a variety of computer-readable media having various data structures stored thereon. The units may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., second unit data from another unit interacting with a local system, distributed system, and/or across a network).
In the existing automatic focus-tracking shooting mode, after a shooting principal angle is selected, the video finally shot and saved by the electronic device is a close-up video of the selected principal angle. In such a close-up video, the image content around the principal angle is mostly incomplete, so the resulting video ignores content other than the principal angle during shooting. It is therefore difficult for the user to learn from such a video the environment in which the principal angle was located at the time of shooting (for example, the state and actions of subjects around the principal angle).
Therefore, the embodiment of the application provides a shooting method. The method can be applied to electronic equipment such as mobile phones, tablet computers and the like. The electronic device 100 is hereinafter referred to as the above-described electronic device collectively.
The electronic device 100 is not limited to a mobile phone or a tablet computer; it may also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular telephone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device. The specific type of the terminal is not particularly limited by the embodiments of the present application.
After determining the shooting principal angle, the electronic device 100 may generate two videos at the same time, recorded here as an original video and a close-up video. The original video is composed of the original images acquired by the camera. The close-up video is obtained by identifying the principal angle in the original images and then cropping the images around the principal angle, so that the close-up video always takes the principal angle as its shooting center. During the recording of the video, the electronic device 100 may display the original video and the close-up video at the same time for the user to preview.
Thus, after selecting the principal angle, the user can shoot a close-up video centered on the principal angle and simultaneously obtain the original video composed of the original images acquired by the camera.
The following specifically describes, with reference to user interface diagrams, how the electronic device 100 implements the photographing method provided in the embodiments of the present application.
First, fig. 1A illustrates a user interface of an electronic device 100 that enables a camera to perform a shooting action.
As shown in fig. 1A, the user interface may include a menu bar 111, a capture control 112, a preview window 113, and a review control 114.
The menu bar 111 may display a plurality of shooting mode options, such as night scene, video recording, photo, and portrait modes. The night scene mode may be used to take pictures in low-light scenes, such as at night. The video recording mode may be used to record video. The photo mode may be used to take pictures in daylight scenes. The portrait mode may be used to take a close-up photograph of a person.
When the camera is enabled to perform a shooting action, as shown in fig. 1A, the electronic device 100 may first enable a recording mode in preparation for starting recording video. Of course, the electronic device 100 may first enable other shooting modes such as shooting, portrait, etc., at which time the electronic device 100 may enter the video recording mode according to the user operation.
The photographing control 112 may be used to receive a photographing operation of a user. In the photographing scene (including photographing mode, portrait mode, night view mode), the above photographing operation is an operation for controlling photographing, which acts on the photographing control 112. In a scene where video is recorded (recording mode), the above-described shooting operation includes an operation to start recording, which acts on the shooting control 112.
The preview window 113 may be used to display the image stream captured by the camera in real time. At any time, one frame of image displayed in the preview window 113 is one frame of original image.
Review control 114 may be used to view a previously taken photograph or video. In general, the review control 114 can display a thumbnail of a previously taken photograph or a thumbnail of a first frame image of a previously taken video.
In the video recording mode, the user interface shown in fig. 1A may also include a function bar 115. The function bar 115 may include a plurality of function controls, such as a flash 1151, a filter 1152, settings 1153, a principal angle mode 1154, and the like. The flash 1151 may be used to turn the flash on or off, changing the brightness of the image captured by the camera. The filter 1152 may be used to select a filter style to adjust the color of the image in the preview window 113. The settings 1153 may be used to provide more controls for adjusting camera shooting parameters or image optimization parameters, such as a white balance control, an ISO control, a beauty control, a body shaping control, etc., so as to provide a richer shooting service to the user.
The principal angle mode 1154 can be used to provide the function of shooting a close-up video centered on a principal angle. Specifically, in the shooting method in the principal angle mode provided in the embodiments of the present application, the electronic device 100 may select and change the principal angle according to user operations, and shoot and save two videos: an original video generated from the original images acquired by the camera, and a close-up video centered on the selected principal angle.
While displaying the user interface shown in fig. 1A, the electronic device 100 may detect a user operation acting on the principal angle mode 1154 to turn on the principal angle mode shooting function. The user operation acting on the principal angle mode 1154 is, for example, an operation of clicking the principal angle mode 1154. In response to the above operation, the electronic device 100 may execute an algorithm corresponding to the principal angle mode and enter the principal angle mode shooting scene, referring to fig. 1B.
Fig. 1B illustrates a user interface of the electronic device 100 for shooting after entering a main angle mode shooting scene.
After entering the principal angle mode, the electronic device 100 may perform image recognition on the images captured by the camera to recognize the objects included in each image (i.e., object recognition). Such objects include, but are not limited to, humans, animals, and plants. The embodiments of the present application are described below mainly by taking persons as an example.
Referring to the image shown in the preview window 113 in fig. 1B, at a certain moment, the image captured by the camera of the electronic device 100 includes person 1 and person 2. After receiving the above image, the electronic device 100 may first recognize the objects included in the image using a preset object recognition algorithm. Here, the object recognition algorithm may be a human body detection algorithm. It will be appreciated that the electronic device 100 may also support identifying objects of animal and plant types; accordingly, the object recognition algorithm may further include recognition algorithms for one or more animals and for one or more plants, which is not limited by the embodiments of the present application. In this example, through the processing of the object recognition algorithm, the electronic device 100 can recognize two objects in the image: person 1 and person 2.
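As a purely hypothetical illustration of this recognition step, the snippet below wraps an unspecified human-body detection algorithm behind a SubjectDetector interface and turns each detected person into a tappable selection box. SubjectDetector and SelectionBox are names invented for this sketch; they are not APIs named in the application.

```kotlin
import android.graphics.Bitmap
import android.graphics.RectF

// Hypothetical wrapper around whatever human-body detection algorithm the device runs.
interface SubjectDetector {
    fun detect(frame: Bitmap): List<RectF>   // one bounding box per detected person
}

data class SelectionBox(val id: Int, val bounds: RectF)

// Each detected subject yields a selection box the user can tap to choose the principal angle.
fun buildSelectionBoxes(detector: SubjectDetector, frame: Bitmap): List<SelectionBox> =
    detector.detect(frame).mapIndexed { index, box -> SelectionBox(id = index, bounds = box) }
```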
After receiving the image, the electronic device 100 may display the image in the preview window 113. Based on the objects included in the recognized image, the electronic apparatus 100 may display a selection frame corresponding to each of the objects while displaying the image. For example, the electronic device 100 may display the selection box 122 corresponding to the character 1 on the character 1 and the selection box 123 corresponding to the character 2 on the character 2. At this time, on the one hand, the user may confirm that the electronic device 100 has detected an object available for the user to select through the above-mentioned selection box; on the other hand, the user can set the corresponding object as the principal angle by clicking any one of the selection boxes.
Optionally, the electronic device 100 may further display a prompt 125 in the preview window 113, for example, "please select a principal angle character, start an automatic focus tracking video", for prompting the user to select a principal angle.
In the user interface shown in fig. 1B, a beauty control 127 may be included in the preview window 113. The beauty control 127 can be used to adjust the face image of a person in the image. After detecting a user operation acting on the beauty control 127, the electronic device 100 can perform beautification processing on the person in the image and display the processed image in the preview window. The user interface shown in fig. 1B may also display other shooting controls, such as a focus control for adjusting the focus of the camera, which are not enumerated here.
While the user interface shown in fig. 1B is displayed, the electronic device 100 may detect a user operation acting on any of the selection boxes. In response to the above operation, the electronic device 100 may determine that the object corresponding to the above selection box is a principal angle. For example, referring to the user interface shown in fig. 1C, the electronic device 100 may detect a user operation acting on the selection box 123. In response to the above operation, the electronic apparatus 100 may determine that the character 2 corresponding to the selection box 123 is the principal angle.
Subsequently, the electronic device 100 may display a small window in the preview window 113 in picture-in-picture form and display a close-up image of person 2 in the small window. The close-up image is obtained by cropping the original image captured by the camera (the image displayed in the preview window), centered on the selected principal angle.
Fig. 1D illustrates a user interface in which electronic device 100 displays a widget and displays a close-up image of person 2 in the widget.
As shown in fig. 1D, a small window 141 may be included in the preview window 113. At this time, a close-up image of person 2 may be displayed in the small window 141. As person 2 in the original image displayed in the preview window 113 changes, the close-up image of person 2 displayed in the small window 141 changes accordingly, and person 2 displayed in the small window 141 is always at the center of the small window 141. Thus, the continuous close-up images centered on person 2 displayed in the small window 141 constitute a close-up video of person 2.
After determining that the person 2 is the shooting principal angle, the selection box 123 corresponding to the person 2 may become the selection box 142 in fig. 1D. The user may distinguish between the objects of the principal angle and the non-principal angle by selecting box 142. Not limited to the selection box 142 shown in the user interface, the electronic device 100 may also display other styles of icons or use different colors, as embodiments of the application are not limited in this regard.
Optionally, the small window 141 for presenting the close-up image may also include a close control 143 and a transpose control 144. The close control 143 can be used to close the small window 141. The transpose control 144 can be used to adjust the orientation and size of the small window 141.
Referring to fig. 1E, the electronic device 100 can detect a user operation on the close control 143. In response to the above operation, the electronic apparatus 100 may close the widget 141, referring to fig. 1F. As shown in fig. 1F, upon closing the widget 141, the electronic device 100 may cancel the previously selected principal angle (character 2). Accordingly, the selection box 142 corresponding to the character 2 may be changed to the selection box 123. At this time, the user may reselect any of the objects identified in the preview window 113 as a principal angle. The electronic device 100 may again display the widget 141 in the preview window 113 based on the redetermined principal angle. At this time, a close-up image obtained by processing the original image centering on the new principal angle is displayed in the small window 141.
Referring to FIG. 1G, the electronic device 100 can detect a user operation on the transpose control 144. In response to the above operation, the electronic apparatus 100 may adjust the vertically small window 141 in fig. 1F to be horizontal, referring to fig. 1H.
Optionally, after determining the principal angle, the electronic device 100 may first generate a small window (vertical window) with a 9:16 aspect ratio for presenting the close-up image, referring to the small window 141 in fig. 1D. The aspect ratio described above is exemplary; the vertical window is not limited to a 9:16 aspect ratio. Optionally, the electronic device 100 may fixedly display the small window 141 at the lower left (or lower right, upper left, upper right) of the screen. Upon detecting a user operation on the transpose control 144, the electronic device 100 can change the original vertical window to a horizontal window with a 16:9 aspect ratio. Of course, the electronic device 100 may also generate the horizontal window by default and then adjust it to a vertical window according to user operations, which is not limited by the embodiments of the present application. In this way, the user can adjust the video content and video format of the close-up video using the transpose control 144 to meet personalized needs.
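A tiny sketch of the transpose behaviour is shown below: the small window defaults to a 9:16 portrait box, and swapping its dimensions yields the 16:9 landscape box. The base size of 360 px is an arbitrary assumption.

```kotlin
// Sketch of the small-window geometry handled by the transpose control.
data class WindowSpec(val width: Int, val height: Int)

fun defaultSmallWindow(shortSide: Int = 360): WindowSpec =
    WindowSpec(width = shortSide, height = shortSide * 16 / 9)     // 9:16 portrait window

fun transpose(current: WindowSpec): WindowSpec =
    WindowSpec(width = current.height, height = current.width)     // swap to 16:9 landscape (and back)
```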
In some examples, the small window may also adjust the display position according to the position of the main angle in the preview window, so as to avoid blocking the main angle in the preview window. Further, the electronic device 100 may also adjust the position and size of the widget according to the user operation. In some examples, the electronic device 100 may also detect a long press operation and a drag operation acting on the widget 141. In response to the above operation, the electronic apparatus 100 may move the widget to a position where the user drag operation is finally stopped.
In other examples, the electronic device 100 may also detect a double-click operation on the small window 141, in response to which the electronic device 100 may zoom the small window 141 in or out. Not limited to the long-press, drag, and double-click operations described above, the electronic device 100 may also adjust the position and size of the small window through gesture recognition and voice recognition. For example, the electronic device 100 may recognize, through images captured by the camera, that the user has made a fist-clenching gesture, and in response to the fist-clenching gesture, the electronic device 100 may shrink the small window 141. The electronic device 100 may recognize, through images captured by the camera, that the user has made a hand-opening gesture, and in response to the hand-opening gesture, the electronic device 100 may enlarge the small window 141.
Before recording starts but after the principal angle has been determined, if the selected principal angle is lost (the principal angle is not included in the image displayed in the preview window 113), the close-up image of the principal angle displayed in the small window 141 freezes on the last frame before the loss.
Referring to fig. 1D, the image displayed in the preview window 113 may be the N-th frame image acquired by the camera. At this time, the image displayed in the small window 141 is a close-up image centered on the principal angle (person 2) obtained based on the N-th frame image. Referring to fig. 1I, the image displayed in the preview window 113 may be the (N+1)-th frame image acquired by the camera. The previously selected principal angle (person 2) is not included in the (N+1)-th frame image, that is, the principal angle is lost. At this time, the close-up image centered on the principal angle (person 2) obtained based on the N-th frame image remains displayed in the small window 141.
As shown in fig. 1I, after detecting that the principal angle is lost, the electronic device 100 may further display a prompt 151 in the preview window 113, for example, "Principal angle lost, focus tracking will exit after 5 seconds", prompting the user to adjust the camera position or angle so that the electronic device 100 can re-acquire an original image that includes the principal angle.
As shown in fig. 1J, if after 5 seconds the electronic device 100 has still not retrieved the principal angle (person 2), i.e., the previously selected principal angle (person 2) is not included in the images captured by the camera, the electronic device 100 may close the small window 141 and cancel the previously selected principal angle (person 2). The above 5 seconds is a preset duration; the electronic device 100 may also use other durations, such as 10 seconds, which is not limited by the embodiments of the present application.
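The freeze-then-exit behaviour described in fig. 1I to 1J could be driven by a simple timer, as in the hedged sketch below. TrackingLossHandler is an invented helper; the 5-second timeout matches the prompt text, and the callback is where the small window would be closed and the selection cancelled.

```kotlin
import android.os.Handler
import android.os.Looper

// Hypothetical helper: when the principal angle disappears from the frame, keep the last
// close-up frame frozen and start a countdown; if the subject is not re-detected before
// the timeout, close the small window and cancel the previously selected principal angle.
class TrackingLossHandler(private val timeoutMs: Long = 5_000,
                          private val onTimeout: () -> Unit) {
    private val handler = Handler(Looper.getMainLooper())
    private val timeoutRunnable = Runnable { onTimeout() }

    fun onSubjectLost() = handler.postDelayed(timeoutRunnable, timeoutMs)   // start countdown
    fun onSubjectRecovered() = handler.removeCallbacks(timeoutRunnable)     // cancel countdown
}
```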
During the preview process, after determining the principal angle, the electronic device 100 may begin recording video. In the principal angle mode provided by the embodiment of the present application, the electronic device 100 may generate an original video based on the original image displayed in the preview window 113, and at the same time, the electronic device 100 may also generate a close-up video based on the close-up image of the principal angle in the small window 141.
As shown in fig. 2A, the electronic device 100 may detect a user operation on the capture control 112. The above operation may be referred to as a user operation to start shooting. In response to the above, the electronic device 100 may begin recording video (raw video and close-up video), i.e., encoding and saving the raw image captured by the camera, and the close-up image centered at the principal angle.
After video recording starts, the user interface shown in fig. 2A may change to the one shown in fig. 2B. As shown in fig. 2B, after recording starts, the electronic device 100 may display a control module 211. A pause control 2111 and a stop control 2112 may be included in the control module 211. The pause control 2111 can be used to pause recording, including pausing the recording of the original video corresponding to the preview window 113 and pausing the recording of the close-up video corresponding to the small window 141. The stop control 2112 can be used to stop recording, including stopping the recording of the original video and stopping the recording of the close-up video.
After recording starts, a timestamp may be displayed in both the preview window 113 and the small window 141, for example the timestamp "00:01" displayed in the upper left corner of the preview window 113 and the timestamp "00:01" displayed in the upper left corner of the small window 141. Initially, the timestamps in the preview window 113 and the small window 141 are the same. Subsequently, depending on whether the principal angle leaves the frame of the preview window 113, the timestamps in the preview window 113 and the small window 141 may differ; this is not elaborated here.
Optionally, a stop control 212 may also be displayed in the widget 141 after the video recording is started. The stop control 212 may be used to stop recording a close-up video. After detecting a user operation on stop control 212, electronic device 100 can close widget 141 and stop recording the close-up video corresponding to widget 141. Thereafter, optionally, the user may reselect the principal angle. At this point, the electronic device 100 does not stop recording the original video. After selecting the new principal angle, electronic device 100 may redisplay widget 141 and display a close-up video of the new principal angle in widget 141, recording the close-up video of the new principal angle.
After starting recording the video, the electronic device 100 may also provide a service to switch the main angle. Referring to the user interface shown in fig. 2C, at 5 seconds after the start of recording video, the electronic device 100 may detect a user operation acting on the selection box 122. The above operation may be referred to as a user operation to switch the principal angle. In response to the above operation, the electronic apparatus 100 may set the character 1 corresponding to the selection frame 122 as the principal angle. At this time, the character 2 previously set as the principal angle is no longer the principal angle.
Referring to the user interface shown in fig. 2D, after setting character 1 as the principal angle, the electronic device 100 may display a close-up image of character 1 in the small window 141 and no longer display the close-up image of character 2. Correspondingly, the electronic device 100 may update the selection box 122 corresponding to character 1 to the selection box 211, and update the selection box 142 corresponding to character 2 to the selection box 123.
In the process of switching the principal angle, the small window 141 may directly display the close-up image of character 1 after switching, which produces a jump-cut display effect. Optionally, the small window 141 may instead achieve a non-jumping switching display effect through a smoothing strategy. For example, after the principal angle is switched to character 1, the electronic device 100 may determine a set of smoothly moving image frames along the path from character 2 to character 1 in the preview window 113, and then display these image frames in the small window 141 to realize a non-jumping principal angle switching display. As another example, the electronic device 100 may use a fixed transition effect commonly used in video editing, such as superposition, swirl, or translation, to connect the close-up images of the principal angle before and after the switch. The embodiments of the present application are not limited in this regard.
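One possible realization of the smoothing strategy mentioned above is to interpolate the crop center over a few frames instead of jumping it, as in the sketch below; the 15-frame duration is an assumption, not a value from the application.

```kotlin
import android.graphics.PointF

// Sketch: pan the close-up crop center smoothly from the old principal angle to the new one.
fun smoothedCropCenters(from: PointF, to: PointF, frames: Int = 15): List<PointF> =
    (1..frames).map { i ->
        val t = i / frames.toFloat()
        PointF(from.x + (to.x - from.x) * t, from.y + (to.y - from.y) * t)
    }
```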
During the process of recording a video, the user can save the recorded widget video by clicking the stop control in the widget. Refer to the user interface shown in fig. 2E. At the 5th second after video recording starts, the electronic device 100 detects an input operation on the stop control 212, and in response to the operation, the electronic device 100 displays the user interface shown in fig. 2F.
As shown in fig. 2F, the preview window 113 of this user interface does not include a widget. At this time, person 1 and person 2 are displayed in the preview window 113, with the selection frame 122 displayed on person 1 and the selection frame 123 displayed on person 2. If the user wants to record a target object in the preview window 113 in a separate widget again, the user may click either of the selection boxes 122 and 123 in the preview window 113. As shown in fig. 2G, at the 8th second after video recording starts, the electronic apparatus 100 detects a click operation on the selection frame 123, and in response to the operation, the electronic apparatus 100 displays the user interface shown in fig. 2H.
As shown in fig. 2H, the widget 141 is displayed in the preview window 113 of the user interface, person 1 and person 2 are displayed in the preview window 113, person 2 is displayed in the widget 141, and the recording time of the widget video displayed in the widget 141 is the 0th second.
During recording of the video, the principal angle initially selected by the user may leave the viewing range of the camera of the electronic device 100. At this time, the principal angle may be lost from the original image corresponding to the preview window 113. Likewise, after identifying that the principal angle is lost, the electronic device 100 may display a prompt indicating the loss of the principal angle and freeze the close-up image of the principal angle in the widget 141.
Referring to the user interface shown in fig. 2I, at the 10th second after the start of video recording, the electronic device 100 may detect that the original image displayed in the preview window 113 (the original image captured by the camera) includes person 2 but does not include person 1 (the principal angle), i.e., the principal angle is lost. At this time, the electronic device 100 may display a prompt 231 ("principal angle lost, exit focus after 5 seconds") in the preview window 113 to prompt the user to adjust the camera position or angle so that the electronic device 100 can re-acquire an original image including the principal angle. Meanwhile, the electronic device 100 may keep displaying in the widget 141 the close-up image of the principal angle (person 1) determined at the previous moment. From the user's perspective, the close-up image displayed in the widget 141 pauses, frozen on the close-up image of the principal angle (person 1) determined at the previous moment. Accordingly, the timestamp shown in the upper left corner of the widget 141 pauses.
After seeing the prompt 231, the user can adjust the camera position so that the main angle is within the view range of the camera, so that the camera can re-capture the image including the main angle.
As shown in fig. 2J, if the electronic device 100 has still not retrieved the principal angle (person 1) after 5 seconds (the 15th second, 00:15, after the start of video recording), i.e., the previously selected principal angle (person 1) is not included in the image captured by the camera, the electronic device 100 may display a prompt 232 ("principal angle focus has been paused"). At the same time, the electronic device 100 may display a semi-transparent gray mask over the layer in which the close-up image is displayed in the widget 141 to prompt the user that the in-focus recording has been paused.
It will be appreciated that at the 10th second shown in fig. 2I, the electronic device 100 has already paused recording the close-up video in the widget 141. The 5-second period from the 10th second to the 15th second is a transition time set by the electronic apparatus 100.
If the electronic device 100 re-recognizes the above-described principal angle (character 1) at some point after the recording of the close-up video is suspended, at this time, the electronic device 100 may display a close-up image of the newly acquired principal angle (character 1) in the small window 141 and continue recording the close-up video.
For example, referring to the user interface shown in fig. 2K, at the 18th second after the start of video recording, the camera re-acquires an image including person 1, i.e., the image displayed in the preview window 113 includes person 1 again. At this time, the electronic apparatus 100 may determine a close-up image centered on person 1 based on the above-described original image including person 1, and then display the close-up image in the widget 141. Accordingly, the timestamp displayed in the upper left corner of the widget 141 resumes counting. At the same time, the electronic device 100 continues to encode the above-described close-up image, i.e., continues to record the close-up video.
After recording the video for a while, the electronic device 100 may detect a user operation to end shooting. Referring to the user interface shown in fig. 2L, for example, at 25 seconds after the start of recording video, the electronic device 100 can detect a user operation on the stop control 2112. The above-described user operation may be referred to as a user operation to end shooting. In response to the user operation to end shooting, the electronic device 100 may stop encoding the image and package the image encoded during the period from the start of recording to the end of recording as video to save it in the local memory.
In response to the user operation of ending shooting, the electronic device 100 may stop encoding the original image corresponding to the preview window 113, and package the original image encoded during the period from the start of recording to the end of recording into an original video to store the original video in the local memory. At the same time, the electronic device 100 may stop encoding the close-up image corresponding to the widget 141, and package the close-up image encoded during the recording from the beginning to the ending into a close-up video for saving to the local memory.
After the save is completed, the electronic device 100 may display the user interface shown in fig. 2M. As shown in fig. 2M, the electronic device 100 can redisplay the capture control 112 and the review control 114. At this point, a thumbnail indicating the recorded original video and close-up video may be displayed on the review control 114. Optionally, the thumbnail may be a thumbnail of the first frame image of the original video, or a thumbnail of the first frame image of the close-up video.
While displaying the user interface shown in fig. 2M, the electronic device 100 can detect a user operation acting on the review control 114. In response to the above operation, the electronic device 100 may play the captured original video and/or close-up video. In this way, the user can immediately view the original video and/or the close-up video.
The electronic device 100 can detect the object in the original image acquired by the camera in real time by implementing the automatic focus tracking shooting method (main angle mode) shown in fig. 1A to 1J and fig. 2A to 2M. The user may select any one of the one or more objects identified by the electronic device 100 as the principal angle at any time, or may switch the principal angle at any time.
Before recording starts, when an object that has been set as the principal angle is lost for more than a certain period, the electronic device 100 may cancel the object's principal-angle status and then prompt the user to reselect a principal angle. During the recording process, when the object set as the principal angle is lost, the electronic device 100 may pause the recording; when the principal angle is retrieved, the electronic device 100 may continue the recording. In this way, the user can obtain a coherent close-up video centered on the principal angle, and the principal angle is not limited to one object.
After recording is completed, the electronic device 100 may save the original video and the close-up video at the same time. The original video can reserve all image contents collected by the camera in the recording process. The close-up video may collectively present video content of the user-selected principal angle. The user can browse or use the original video or the close-up video according to different requirements, so that a richer use experience is provided for the user.
The following describes a flowchart of a photographing method according to an embodiment of the present application with reference to fig. 3A to 3E. As shown in fig. 3A, fig. 3A is a flowchart of a photographing method according to an embodiment of the present application, and the specific process is as follows:
step 301: the electronic device starts a camera application and enters a video recording interface.
Specifically, after the electronic device starts the camera application, the Mode module (Mode module) may start the camera to collect images in real time. The video recording interface includes a preview window and a focus tracking control, where the preview window is used to display the images collected by the camera in real time. Illustratively, the video recording interface may be the user interface described above in the embodiment of fig. 1A, the preview window may be the preview window 113 in the user interface, and the focus tracking control may be the main angle mode functionality control 1154 in the user interface. In the embodiment of the application, the first interface is exemplified by the video recording interface, and the first control is exemplified by the focus tracking control.
Step 302: and detecting input operation aiming at the focus tracking control, and starting a focus tracking function by the electronic equipment.
Specifically, after detecting an input operation on the focus tracking control, the electronic device turns on the focus tracking function and identifies the objects in the image in the preview window. The embodiment of the application is described by taking the case where the objects that the electronic device can identify in the image are persons. In addition, the electronic device displays, on the preview window, a tracking frame related to each object. Illustratively, the tracking frame may be any one of the selection frames 122-123 in the user interface shown in fig. 1B described above. In the embodiment of the application, the mark is exemplified by the tracking frame.
Step 303: the Mode module (Mode module) triggers the stream management module to configure the data streams of the preview window and the widget.
Specifically, the preview window is used for displaying an original image acquired by the camera, and the small window is used for displaying an image obtained by cutting based on the selected principal angle as the center on the basis of the original image. Illustratively, the widget may be widget 141 in the user interface illustrated in FIG. 1D above, and the preview window may be preview window 113 in the user interface.
It should be appreciated that, as long as the electronic device has not detected an input operation on a tracking frame, only the preview window is present on the video recording interface and no widget is present.
The stream management module configuring the data streams of the preview window and the widget may be: allocating in the Buffer, for the preview window, a storage area (first storage area) used by the encoder to encode the preview data stream, and allocating in the Buffer, for the widget, a storage area (second storage area) used by the encoder to encode the widget data stream. The data streams of the preview window and the widget may be video streams.
Step 304: the stream management module configures the data streams of the preview window and the small window, generates data stream configuration information and sends the data stream configuration information to the coding control module.
Specifically, the configuration information of the data stream includes an address (video_surface_1) of the first storage area and an address (video_surface_2) of the second storage area.
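As an illustration of steps 303-304, the sketch below models the two storage areas and the data stream configuration information as plain Java objects; the Buffer and encoding-control interfaces, the field names, and the use of numeric addresses are assumptions made only to show how the two addresses are generated and handed over, not the embodiment itself.

```java
// Minimal sketch: the stream management module reserves two storage areas in the
// Buffer and reports their addresses (Video_Surface_1 and Video_Surface_2) to the
// encoding control module.
final class DataStreamConfigInfo {
    final long videoSurface1;   // address of the first storage area (preview-window stream)
    final long videoSurface2;   // address of the second storage area (widget stream)

    DataStreamConfigInfo(long videoSurface1, long videoSurface2) {
        this.videoSurface1 = videoSurface1;
        this.videoSurface2 = videoSurface2;
    }
}

interface Buffer {
    long allocateRegion();                          // returns the address of a new storage area
}

interface EncodingControlModule {
    void onStreamConfig(DataStreamConfigInfo info); // step 304: receive the configuration info
}

final class StreamManager {
    DataStreamConfigInfo configureStreams(Buffer buffer, EncodingControlModule ctrl) {
        DataStreamConfigInfo info =
                new DataStreamConfigInfo(buffer.allocateRegion(), buffer.allocateRegion());
        ctrl.onStreamConfig(info);
        return info;
    }
}
```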
Step 305: the mode module triggers the storage module to generate a first video file and a second video file.
Specifically, the first video file is used to store the encoded data stream of the preview window and the second video file is used to store the encoded video stream of the widget.
It should be understood that step 305 may be performed before step 303, step 305 may be performed after step 303, and step 305 may be performed simultaneously with step 303, which is not limited by the embodiments of the present application.
Step 306: the storage module generates a first video file and a second video file, and sends information of the first video file and the second video file to the coding control module.
Specifically, the information of the first video file may be identification information of the first video file, for example, an ID of the first video file. The information of the second video file may be identification information of the second video file, for example, an ID of the second video file.
Step 307: the encoding control module configures a first encoder parameter and a second encoder parameter, and corresponds the first encoder to address information of the first video file and the first storage area, and corresponds the second encoder to storage addresses of the second video file and the second storage area.
In particular, the encoding control module may instruct the encoder module to create the encoder of the preview window and the encoder of the widget by configuring the first encoder parameter and the second encoder parameter. The first encoder is an encoder corresponding to the preview window, and the second encoder is an encoder corresponding to the small window. The encoder corresponding to the preview window is used for encoding the data stream of the preview window, sending the encoded data stream to the encoding controller, encoding and packaging the data stream by the encoding controller, and writing the encoded and packaged data stream into the first video file. The encoder corresponding to the small window is used for encoding the data stream of the small window, transmitting the encoded data stream to the encoding controller, encoding and packaging the encoded data stream by the encoding controller, and writing the encoded and packaged data stream into the second video file.
The encoding control module can match the first encoder, the first Video file and the video_surface_1, establish an association relationship of the first encoder, the first Video file and the video_surface_1, enable the subsequent first encoder to acquire a Video stream of the preview window according to the video_surface_1, and encode the Video stream. And then, the coding controller codes and packages the coded data stream, and writes the coded and packaged data stream into a first video file created by the storage module. The encoding control module matches the second encoder, the second Video file and the video_surface_2, establishes an association relation of the second encoder, the second Video file and the video_surface_2, enables the subsequent second encoder to acquire a Video stream of a small window according to the video_surface_2, and encodes the Video stream. And then, the coding controller codes and packages the coded data stream, and writes the coded and packaged data stream into a second video file created by the storage module.
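For readers who want to relate steps 307-309 to a concrete media stack, the sketch below shows one possible arrangement using Android's MediaCodec and MediaMuxer: one encoder per window, each bound to its own input Surface and its own output file. This is only an analogy under stated assumptions; the embodiment's Video_Surface_1/Video_Surface_2 and first/second video files are modelled here by the two Surfaces and the two muxers, and the resolutions and bitrate are placeholders.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.view.Surface;
import java.io.IOException;

// Illustrative sketch: create the preview-window encoder and the widget encoder,
// and associate each with its own input Surface and its own output file.
final class DualRecorder {
    MediaCodec previewEncoder, widgetEncoder;   // first and second encoders
    Surface previewSurface, widgetSurface;      // stand-ins for Video_Surface_1 / Video_Surface_2
    MediaMuxer previewMuxer, widgetMuxer;       // stand-ins for the first and second video files

    void setUp(String previewPath, String widgetPath) throws IOException {
        previewEncoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        previewEncoder.configure(encoderFormat(1920, 1080), null, null,
                MediaCodec.CONFIGURE_FLAG_ENCODE);
        previewSurface = previewEncoder.createInputSurface();
        previewMuxer = new MediaMuxer(previewPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        widgetEncoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        widgetEncoder.configure(encoderFormat(1080, 1080), null, null,
                MediaCodec.CONFIGURE_FLAG_ENCODE);
        widgetSurface = widgetEncoder.createInputSurface();
        widgetMuxer = new MediaMuxer(widgetPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }

    private static MediaFormat encoderFormat(int width, int height) {
        MediaFormat fmt = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        fmt.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        fmt.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);   // placeholder bitrate
        fmt.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        fmt.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        return fmt;
    }
}
```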
Step 308: the encoding control module sends a first encoding configuration parameter to the encoder module.
Specifically, the first encoding configuration parameter includes a first encoder parameter, a second encoder parameter, information of an association relationship between the first encoder and the first Video file and the video_surface_1, and information of an association relationship between the second encoder and the second Video file and the video_surface_2. The first encoding configuration parameter is used to instruct the encoder module to create an encoder for the preview window based on the first encoder parameter (first encoder) and to create an encoder for the widget based on the second encoder parameter (second encoder).
In addition, the encoder of the preview window is associated with the first Video file and video_surface_1, and the encoder of the widget is associated with the second Video file and video_surface_2. The subsequent first encoder can acquire the Video stream of the preview window according to the video_surface_1, encode the Video stream, and then send the encoded Video stream to the encoding control module. The subsequent second encoder can acquire the Video stream of the small window according to the video_surface_2, encode the Video stream, and then send the encoded Video stream to the encoding control module.
Step 309: the encoder module creates a first encoder and a second encoder according to the first encoding configuration parameters, associates the first encoder with the first video file and the first storage area, and associates the second encoder with the second video file and the second storage area.
Step 310: the flow management module configures a first flow identification parameter and a second flow identification parameter.
In particular, the first stream identification parameter is used to identify the data stream of the preview window and the second stream identification parameter is used to identify the data stream of the widget.
It should be understood that step 310 may be performed before step 304, step 310 may be performed after step 304, and step 310 may be performed simultaneously with step 304, which is not a limitation of embodiments of the present application.
Step 311: the stream management module configuration sends the first stream identification parameter and the second stream identification parameter to the camera HAL (Camera HAL) module.
Specifically, the first stream identification parameter includes address information of the first storage area, and the second stream identification parameter includes address information of the second storage area.
Step 312: the CameraHAL module analyzes the first stream identification parameter and the second stream identification parameter, and matches the data stream of the preview window and the data stream of the small window with the first stream identification parameter and the second stream identification parameter respectively.
Specifically, after parsing the first stream identification parameter and the second stream identification parameter, the CameraHAL module determines that the data stream corresponding to the first stream identification parameter is the video stream of the preview window after the electronic device starts recording the video (for example, the recording interface in fig. 2C), and that the data stream corresponding to the second stream identification parameter is the data stream of the widget after the electronic device starts recording the video. Because the data stream of the preview window is matched with the first stream identification parameter in advance and the data stream of the widget is matched with the second stream identification parameter, after obtaining the data stream of the preview window and the data stream of the widget, the CameraHAL module can determine to send the data stream of the preview window to the first storage area and the data stream of the widget to the second storage area.
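The routing decision described in step 312 can be pictured as a small lookup from stream identification parameter to storage area. The sketch below is only a schematic illustration; the frame representation, the integer stream identifiers, and the StorageArea interface are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: after parsing the stream identification parameters, the HAL
// keeps a map from identifier to storage area and routes each frame accordingly.
interface StorageArea {
    void write(byte[] frame);
}

final class StreamRouter {
    private final Map<Integer, StorageArea> routes = new HashMap<>();

    void register(int streamId, StorageArea area) {
        routes.put(streamId, area);     // e.g. first id -> first area, second id -> second area
    }

    void route(int streamId, byte[] frame) {
        StorageArea area = routes.get(streamId);
        if (area != null) {
            area.write(frame);          // preview frames go to the first area, widget frames to the second
        }
    }
}
```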
After detecting the input operation on the start recording control on the video recording interface, the electronic device starts recording the video, displays the widget on the video recording interface, and displays in the widget the image obtained by cropping the original image (the image displayed in the preview window) with the selected principal angle as the center.
Next, a process of encoding the data streams of the preview window and the widget in the case where the electronic device records the preview window video and the widget video at the same time will be described with reference to steps 313 to 322 in fig. 3B. Wherein the data streams of the preview window and the widget may be video streams.
Step 313: after video recording starts, a Mode module (Mode module) triggers a stream management module to dynamically send a first data request message to a CameraHAL module.
Specifically, after the electronic device detects an input operation on the start recording control on the video recording interface, the electronic device starts recording and triggers the Mode module (Mode module) to instruct the stream management module to send the first data request message. In the embodiment of the application, the video recording control is exemplified by the start recording control.
The first data request message comprises a first stream identification parameter and a second stream identification parameter, and the first data request message is used for indicating the CameraHAL module to buffer the data stream of the preview window in a first storage area and buffer the data stream of the small window in a second storage area.
In one possible implementation, before recording begins, if a click operation on a first tracking frame (first mark) in the video recording interface is detected, the electronic device displays the preview window and the widget simultaneously on the video recording interface in response to the operation. The preview window displays the original image acquired by the camera, where the image includes N objects (in the embodiment of the application, the objects are illustrated as persons), and the widget displays a first object, where the first object is the person corresponding to the first tracking frame.
Step 314: the stream management module dynamically sends a first data request message to the CameraHAL module.
Step 315: the camera HAL module processes the data stream sent by the camera, copies and processes the data stream transmitted by the camera to obtain a first data stream and a second data stream, and sends the first data stream and the second data stream to the first storage area and the second storage area respectively.
Specifically, after receiving the first data request message, the CameraHAL module copies the received data stream transmitted by the camera to obtain two data streams. These two data streams are the data streams during video recording. Illustratively, the data stream includes data of the image frames acquired in real time by the camera. The data stream corresponding to the preview window is the first data stream, and the data stream corresponding to the widget is the second data stream. The CameraHAL module can take the data stream collected by the camera as the first data stream, and copy the first data stream to obtain a copied data stream. Then, each frame of image in the copied data stream is cropped with the selected principal angle (object) as the center to obtain a cropped data stream; this data stream is the second data stream.
In some embodiments, the camera hal module may also copy the data stream collected by the camera to obtain a copied data stream, and use the copied data stream as the first data stream. And cutting each frame of image in the data stream acquired by the camera according to the selected main angle (object) as the center, so as to obtain a cut data stream, wherein the data stream is a second data stream.
Because the first data stream and the second data stream are two identical data streams, in order to distinguish the first data stream from the second data stream, the camera hal module adds a first stream identification parameter in each frame image of the first data stream and adds a second stream identification parameter in each frame image of the second data stream, thereby distinguishing the first data stream from the second data stream. In this way, the Camera frame module (Camera FWK module) and other modules for transparent transmission of the first data stream and the second data stream can distinguish the two data streams according to the first stream identification parameter and the second stream identification parameter.
Furthermore, the CameraHAL module may add a timestamp pts in each frame of image data of the first data stream and the second data stream, or add pts only in each frame of image data of the second data stream. The timestamp pts may be the system time at which the CameraHAL module receives the frame of image data, or may be the system time at which the CameraHAL module sends the frame of image.
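Putting step 315 together, the per-frame work of the CameraHAL module can be sketched as: duplicate the camera frame, crop the copy around the selected principal angle, and tag both resulting frames with a stream identification parameter and a timestamp pts. The sketch below is a schematic under assumptions; the frame layout, the integer identifiers, and the cropping helper are illustrative only.

```java
// Illustrative sketch of the duplicate-crop-tag step performed for each camera frame.
final class TaggedFrame {
    final int streamId;    // first or second stream identification parameter
    final long pts;        // timestamp added by the HAL (receive or send time)
    final byte[] pixels;

    TaggedFrame(int streamId, long pts, byte[] pixels) {
        this.streamId = streamId;
        this.pts = pts;
        this.pixels = pixels;
    }
}

final class FrameSplitter {
    static final int PREVIEW_STREAM_ID = 1;   // assumed value of the first stream identification parameter
    static final int WIDGET_STREAM_ID = 2;    // assumed value of the second stream identification parameter

    TaggedFrame[] split(byte[] cameraFrame, int principalCenterX, int principalCenterY) {
        long pts = System.currentTimeMillis();   // assumed: system time when the frame is handled
        byte[] cropped = cropAroundPrincipal(cameraFrame, principalCenterX, principalCenterY);
        return new TaggedFrame[] {
                new TaggedFrame(PREVIEW_STREAM_ID, pts, cameraFrame),   // first data stream (preview window)
                new TaggedFrame(WIDGET_STREAM_ID, pts, cropped)         // second data stream (widget)
        };
    }

    private byte[] cropAroundPrincipal(byte[] frame, int cx, int cy) {
        // Placeholder: a real implementation would crop the pixel data around (cx, cy).
        return frame.clone();
    }
}
```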
Step 316: after video recording is started, a Mode module (Mode module) triggers an encoder control module to start an encoder.
For example, when the electronic device detects a start recording control on the video recording interface, the electronic device starts recording and triggers the encoder control module to start the first encoder operation and the second encoder operation.
Step 316 may be performed before step 313, after step 313, or simultaneously with step 313; embodiments of the present application are not limited thereto.
Step 317: the encoding control module sends a first initiation message to the encoder module.
Specifically, the encoding control module sends a first start message to the encoder module after receiving a trigger message of the Mode module (Mode module). The first start message is used to trigger the encoder module to start the first encoder and the second encoder.
Step 318: the first encoder obtains a first data stream from the first storage area and the second encoder obtains a second data stream from the second storage area.
Specifically, after the encoder module receives the first start message sent by the encoding control module, the encoder module triggers the first encoder and the second encoder to work. The first encoder will obtain the buffered first data stream from the Buffer based on video_surface_1 (the address of the first storage area). The second encoder will obtain the buffered second data stream from the Buffer based on video_surface_2 (the address of the second storage area).
Step 319: the first encoder encodes the first data stream to obtain an encoded first data stream, and the second encoder encodes the second data stream to obtain an encoded second data stream.
Step 320: the first encoder sends the encoded first data stream to the encoding control module, and the second encoder sends the encoded second data stream to the encoding control module.
Step 321: the encoding control module encodes and packages the encoded first data stream to obtain an encoded and packaged first data stream, and encodes and packages the encoded second data stream to obtain an encoded and packaged second data stream.
Step 322: the encoding control module sends the encoded and packed first data stream to the first video file, and sends the encoded and packed second data stream to the second video file.
In some embodiments, the user may end recording of video in the widget in advance, continuing recording of video in the preview window. Illustratively, as in the user interface shown in fig. 2B above, the stop control 212 is displayed in the widget 141, and when the electronic device detects a click operation on the stop control 212, the electronic device stops recording the widget video and saves the recorded widget video. At this time, the electronic device still records the video in the preview window.
Next, in connection with steps 323-332 in fig. 3C, a description will be given of a process of encoding a data stream by the electronic device in the case of ending video recording in the small window in advance and continuing to record video in the preview window.
Step 323: after detecting the input operation of stopping the recording control in the widget, the Mode module triggers the stream management module to dynamically send a second data request message to the CameraHAL module.
Specifically, the second data request message is used to instruct the CameraHAL module to process only the data stream of the preview window, that is: only the first data stream is processed and the first data stream is transmitted into the first storage area. Only the first stream identification parameter is included in the second data request message.
Illustratively, the stop recording control may be the stop control 212 in fig. 2B described above. In the embodiment of the application, the second control is exemplified by the stop recording control.
Step 324: the stream management module dynamically sends a second data request message to the camel module.
Step 325: the camera HAL module processes the data stream sent by the camera to obtain a first data stream, and sends the first data stream to the first storage area for caching.
Specifically, after the CameraHAL module receives the second data request message sent by the stream management module, the CameraHAL module no longer copies the camera data it receives, and no longer crops each frame of image in a copied data stream. After the CameraHAL module processes the data stream received from the camera (the first data stream), it adds the first stream identification parameter to each frame of image of the first data stream. The first data stream is then sent to the first storage area.
Step 326: after detecting an input operation for stopping the recording control in the widget, the Mode module (Mode module) triggers the encoding controller to send a first stop encoding request.
Specifically, the first stop encoding request is used to instruct the encoder module to stop and delete the second encoder. Furthermore, the first stop coding request is for instructing the encoder module to newly create an encoder for a widget, namely: and a third encoder.
It should be understood that step 323 and step 326 may be performed simultaneously, step 323 may be performed before step 326, or step 323 may be performed after step 326; embodiments of the present application are not limited thereto.
Step 327: the encoding control module instructs the encoder module to stop and delete the second encoder, creating a third encoder.
Step 328: the encoding control module sends a first save message to the storage module.
Specifically, the first save message is used to instruct the storage module to stop receiving the encoded and packetized second data stream sent by the encoding control module, and save a second video file, where the video saved in the second video file is a small window video (close-up video). In addition, the first save message also instructs the storage module to create a third video file.
It should be understood that step 327 and step 328 may be performed simultaneously, step 327 may be performed before step 328, or step 327 may be performed after step 328; embodiments of the present application are not limited thereto.
Step 329: the storage module stores the second video file, creates a third video file and sends information of the third video file to the coding control module.
Specifically, after receiving the first save message, the storage module saves the second video file. The second video file may be named with the first system time (Time_1). The first system time may be the time at which the second video file was created. In addition, the electronic device can create a third video file, where the third video file is used to store the widget video data.
The information of the third video file may be identification information of the third video file, such as ID of the third video file, which is not limited in the embodiment of the present application.
Step 330: the encoding control module configures a third encoder parameter and corresponds the third encoder to the third video file and the second storage area.
Specifically, the encoding control module may match the third encoder, the third video file, and the second storage area, and establish an association relationship between the first encoder and the third video file and between the third video file and the second storage area, so that the subsequent third encoder may obtain the data stream of the small window according to the address of the second storage area, encode the data stream, and then send the encoded data stream to the encoding control module for encoding and packaging. And then the coding control module sends the data stream of the coded and packaged small window to a third video file created by the storage module.
Step 331: the encoding control module sends the second encoding configuration parameters to the encoder module.
Specifically, the second encoding configuration parameter includes a third encoder parameter, an association relationship between the third encoder and the third Video file, and a video_surface_2. The second encoding configuration parameter is used to instruct the encoder module to create a windowed encoder (third encoder) based on the third encoder parameter. In addition, a small window encoder is associated with the third Video file and video_surface_2. The subsequent third encoder can acquire the Video stream of the small window according to the video_surface_2, encode the Video stream, and then send the encoded Video stream to the encoding control module.
Step 332: the encoder module creates a third encoder based on the second encoder configuration parameters and associates the third encoder with the third video file and the second storage area.
Steps 323-332 describe the encoding process performed by the electronic device on the data stream of the images acquired by the camera after the widget video recording is stopped. Illustratively, as shown in fig. 2G above, after the widget recording stops, the objects in the preview window display a selection box 221 and a selection box 123. When the electronic apparatus detects an input operation on the selection box 123, the electronic apparatus displays the widget again, and displays in the widget character 1, the principal angle newly selected by the user (the previous principal angle was character 2).
Next, the process of selecting the main angle again to perform the new small window recording after the electronic device finishes the small window video recording in advance in connection with steps 333-342 in fig. 3D is described.
Step 333: after detecting that the user reselects the principal angle, the Mode module (Mode module) triggers the stream management module to dynamically send a third data request message to the camel hal module.
Illustratively, upon detecting a single click operation by the user on the selection box 123 (second tab) in fig. 2G described above, the electronic device determines that the user reselects the principal angle.
In addition, after detecting that the user reselects the principal angle, the Mode module triggers the stream management module to dynamically send a third data request message to the CameraHAL module. The third data request message may include the first stream identification parameter and the second stream identification parameter, and is used to instruct the CameraHAL module to cache the data stream of the preview window in the first storage area and cache the data stream of the widget in the second storage area.
Step 334: the stream management module dynamically sends a third data request message to the CameraHAL module.
Step 335: the camera HAL module processes the data stream sent by the camera, copies and processes the data stream transmitted by the camera to obtain a first data stream and a third data stream, sends the first data stream to the first storage area, and sends the third data stream to the second storage area.
Specifically, after receiving the third data request message, the CameraHAL module copies the received data stream transmitted by the camera to obtain two data streams. These two data streams are the data streams during video recording; illustratively, the data stream includes data of the image frames acquired in real time by the camera. The data stream corresponding to the preview window is the first data stream, and the data stream corresponding to the widget is the third data stream. The CameraHAL module can take the data stream collected by the camera as the first data stream, and copy the first data stream to obtain a copied data stream. Then, each frame of image in the copied data stream is cropped with the selected principal angle (object) as the center to obtain a cropped data stream; this data stream is the third data stream.
In some embodiments, the camera hal module may also copy the data stream collected by the camera to obtain a copied data stream, and use the copied data stream as the first data stream. And cutting each frame of image in the data stream acquired by the camera according to the selected main angle (object) as the center, so as to obtain a cut data stream, wherein the data stream is a third data stream.
Since the first data stream and the third data stream are two identical data streams, in order to distinguish the first data stream from the third data stream, the CameraHAL module adds a first stream identification parameter in each frame image of the first data stream and adds a second stream identification parameter in each frame image of the third data stream, thereby distinguishing the first data stream from the third data stream. Thus, the Camera Fwk module for transparent transmission of the first data stream and the third data stream can send the first data stream to the first storage area and send the third data stream to the second storage area according to the first stream identification parameter and the second stream identification parameter.
Furthermore, the CameraHAL module may add a timestamp pts in each frame of image data of the first data stream and the third data stream, or add pts only in each frame of image data of the third data stream. The timestamp pts may be the system time at which the CameraHAL module receives the frame of image data, or may be the system time at which the CameraHAL module sends the frame of image.
Step 336: the Mode module (Mode module) triggers the encoder control module to send a second start message to the encoder module.
Specifically, the second start message is used to instruct the third encoder to operate. (Video recording of the preview window has not been stopped before this point, so the first encoder has been operating normally throughout.)
It should be understood that step 333 and step 336 may be performed simultaneously, step 333 may be performed before step 336, or step 333 may be performed after step 336; embodiments of the present application are not limited thereto.
Step 337: the encoder control module sends a second start message to the encoder module.
Step 338: the first encoder acquires the first data stream from the first storage area and the third encoder acquires the third data stream from the second storage area.
Specifically, after the encoder module receives the second start message sent by the encoding control module, the encoder module triggers the first encoder and the third encoder to work. The first encoder will obtain the buffered first data stream from the Buffer based on video_surface_1 (the address of the first storage area). The third encoder will obtain the buffered third data stream from the Buffer based on video_surface_2 (the address of the second storage area).
Step 339: the first encoder encodes the first data stream and transmits the encoded first data stream to the encoding control module, and the third encoder encodes the third data stream and transmits the encoded third data stream to the encoding control module.
Step 340: the encoding control module encodes and packages the encoded first data stream and transmits the encoded and packaged first data stream to the first video file.
Step 341: the encoding control module determines target image data in the third data stream.
Specifically, the third data stream and the second data stream are stored in the same location. Since the Buffer stores data streams only temporarily, after a period of time, when there is a new data stream to be stored, the Buffer cleans up the original data stream so that the new data stream can be stored. When the principal angle is switched, the time interval between two adjacent switches may be small, so part of the second data stream may remain in the second storage area. If the image data in the residual second data stream were encoded and packaged, the picture of the previous widget video would be displayed in the next widget video, and the picture display would be disordered. Therefore, the encoding control module may filter the images in the third data stream transmitted by the third encoder.
The encoding control module may determine the target image data from the time stamp pts for each frame of the image in the third data stream (the encoding control module may distinguish the first data stream from the third data stream by the first stream identification parameter and the second stream identification parameter). The target image data is the data of the image with pts greater than or equal to the second system time in the third data stream. The second system time may be a time of creating the third video file, or the second system time may be a start time of the third encoder, or may be a time of sending the second data request message by the stream management module, or may be a time of saving the second video file.
Step 342: and the coding control module codes and packages the target image data in the coded third data stream and sends the coded and packaged third data stream to the third video file.
Specifically, the encoding control module encodes and packages the target image data in the encoded third data stream, so that the problem that the first N frames of images display the pictures of the video in the second video file when the video in the third video file is played due to the residual second data stream can be effectively avoided.
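The filtering rule of step 341 amounts to keeping only frames whose timestamp pts is not earlier than the second system time, so that residual frames of the previous widget recording never reach the third video file. A minimal sketch follows; the frame representation and field names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: select the target image data of the third data stream by
// comparing each frame's pts with the second system time.
final class TargetFrameFilter {
    static final class EncodedFrame {
        final long pts;      // timestamp added by the CameraHAL module
        final byte[] data;   // encoded image data

        EncodedFrame(long pts, byte[] data) {
            this.pts = pts;
            this.data = data;
        }
    }

    /** Keeps only frames captured for the new widget recording. */
    List<EncodedFrame> selectTargetImageData(List<EncodedFrame> thirdStream, long secondSystemTime) {
        List<EncodedFrame> target = new ArrayList<>();
        for (EncodedFrame frame : thirdStream) {
            if (frame.pts >= secondSystemTime) {
                target.add(frame);   // belongs to the new widget video
            }
            // Frames with pts < secondSystemTime are residual frames of the previous
            // widget recording and are dropped so they never appear in the third video file.
        }
        return target;
    }
}
```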
It should be understood that step 339 may be performed before step 341, step 339 may be performed after step 341, and step 339 may be performed simultaneously with step 341, which is not limited by embodiments of the present application.
After the electronic device switches the main angle, the electronic device performs the encoding process of the data stream corresponding to the preview window and the data stream corresponding to the small window in steps 333-342.
Next, in conjunction with steps 343-349 in fig. 3E, the electronic device finishes recording the preview window and the widget at the same time, and a flow executed by each module in the electronic device is described.
Step 343: after detecting an input operation for ending the preview window and the small window video recording at the same time, the Mode module (Mode module) triggers the coding control module to dynamically send a fourth data request message to the camera hal module.
Specifically, the fourth data request message is used for indicating that the camera hal module no longer processes the data stream sent by the camera, stopping sending the first data stream to the first storage area, and stopping sending the third data stream to the second storage area. The second stop recording control may be stop control 2112 in 2L above.
Illustratively, the input operation to simultaneously end the preview window and the widget video recording may be the user operation on the stop control 2112 described above in fig. 2L.
Step 344: the stream management module dynamically sends a fourth data request message to the camel module.
Step 345: the Mode module (Mode module) triggers the coding control module to send working instructions for stopping coding to the first coder and the third coder.
It should be understood that step 343 and step 345 may be performed simultaneously, step 343 may be performed before step 345, or step 343 may be performed after step 345; embodiments of the present application are not limited thereto.
Step 346: the encoding control module sends a second stop encoding request to the encoder module.
Specifically, the second stop encoding request is to instruct the encoder module to stop and delete the first encoder and the third encoder.
Step 347: the first encoder and the third encoder stop encoding the data stream.
In some embodiments, after receiving the second stop encoding request, the encoder module instructs the first encoder and the third encoder to stop working and clears the first encoder and the third encoder.
Step 348: the mode module triggers the storage module to store the first video file and the third video file.
Specifically, the storage module may name the third video file with the second system time, and name the first video file with the current system time.
It should be appreciated that step 345 and step 348 may be performed simultaneously, step 345 may be performed before step 348, or step 345 may be performed after step 348; embodiments of the present application are not limited thereto.
Step 349: the storage module stores the first video file and the third video file.
The video cached in the first video file is a preview window video (original video), and the video cached in the third video file is a small window video (close-up video of the selected object).
Fig. 4 is a schematic hardware structure of the electronic device 100 according to an embodiment of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD). The display panel may also employ an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
In an embodiment of the present application, the ability of the electronic device 100 to display the original image captured by the camera, the close-up image of the principal angle determined by the principal angle tracking, and the user interface shown in fig. 1A-1J and fig. 2A-2I, depends on the GPU, the display 194, and the display functions provided by the application processor.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
In the embodiment of the present application, the electronic device 100 implements the photographing method provided by the embodiment of the present application by relying, first, on the images acquired by the ISP and the camera 193, and, second, on the video codec and the image computing and processing capabilities provided by the GPU. The electronic device 100 may implement neural network algorithms such as face recognition, human body recognition, and re-identification (ReID) through the computing and processing capability provided by the NPU.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (dynamic random access memory, DRAM), synchronous dynamic random-access memory (synchronous dynamic random access memory, SDRAM), double data rate synchronous dynamic random-access memory (double data rate synchronous dynamic random access memory, DDR SDRAM, e.g., fifth generation DDR SDRAM is commonly referred to as DDR5 SDRAM), etc.
The nonvolatile memory may include a disk storage device and a flash memory (flash memory). Divided according to the operation principle, the flash memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, and the like; divided according to the potential level of the memory cell, it may include single-level memory cells (SLC), multi-level memory cells (MLC), triple-level memory cells (TLC), quad-level memory cells (QLC), and the like; divided according to the storage specification, it may include universal flash storage (English: universal flash storage, UFS), embedded multimedia memory cards (embedded multi media Card, eMMC), and the like.
The random access memory may be read and written directly by the processor 110, may be used to store executable programs (e.g., machine instructions) of an operating system or other running programs, and may also be used to store data of users and applications, and the like. The nonvolatile memory may also store executable programs, data of users and applications, and the like, which may be loaded into the random access memory in advance for the processor 110 to read and write directly.
In the embodiment of the present application, code for implementing the photographing method described in the embodiment of the present application may be stored in the nonvolatile memory. When running the camera application, the electronic device 100 may load the executable code stored in the nonvolatile memory into the random access memory.
The external memory interface 120 may be used to connect external non-volatile memory to enable expansion of the memory capabilities of the electronic device 100. The external nonvolatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music or answer a hands-free call through the speaker 170A. The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 is answering a call or a voice message, voice may be received by placing the receiver 170B close to the human ear. The microphone 170C, also referred to as a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. The earphone interface 170D is used to connect a wired earphone.
In the embodiment of the present application, while the camera is collecting images, the electronic device 100 may also enable the microphone 170C to collect sound signals and convert the sound signals into electrical signals for storage. In this way, the user can obtain a video with audio.
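As a minimal illustrative sketch only (not the implementation of this application), concurrent microphone capture on Android can be pictured with an AudioRecord instance; the audio source, sample rate, and buffer sizing below are assumptions for illustration, and the RECORD_AUDIO permission is assumed to have been granted.

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class MicCaptureSketch {
    // Assumed sample rate for illustration; the real value depends on the device and product requirements.
    private static final int SAMPLE_RATE = 48000;

    /** Starts PCM capture from the microphone while the camera is collecting images. */
    public AudioRecord startCapture() {
        int minBuf = AudioRecord.getMinBufferSize(
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.CAMCORDER,   // microphone source tuned for video recording
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_STEREO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf * 2);
        recorder.startRecording();  // PCM frames can then be read and fed to an audio encoder/muxer
        return recorder;
    }
}
```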
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude from the barometric pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, the electronic device 100 may use the distance sensor 180F to measure distance to achieve quick focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object near it.
The ambient light sensor 180L is used to sense the ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a photograph.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 may use the collected fingerprint features to implement fingerprint unlocking, access an application lock, take a photograph with the fingerprint, answer an incoming call with the fingerprint, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch-control screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display 194.
In the embodiment of the present application, the electronic device 100 may use the touch sensor 180K to detect operations such as tapping and sliding performed by the user on the display screen 194, so as to implement the photographing methods shown in fig. 1A to 1J and fig. 2A to 2L.
The bone conduction sensor 180M may acquire vibration signals. The keys 190 include a power-on key, volume keys, and the like. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100. The motor 191 may generate vibration prompts, and may be used for incoming-call vibration alerts as well as touch vibration feedback. The indicator 192 may be an indicator light and may be used to indicate the charging status and changes in battery level, and may also be used to indicate messages, missed calls, notifications, and the like. The SIM card interface 195 is used to connect a SIM card. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1.
Fig. 5 is a schematic software structure of an electronic device according to an embodiment of the present application.
As shown in fig. 5, the software framework of the electronic device according to the present application may include an application layer, an application framework layer (FWK), a system library, an Android Runtime, a hardware abstraction layer (HAL), and a kernel layer (kernel).
The application layer may include a series of application packages (also referred to as applications) such as camera, gallery, calendar, phone, maps, navigation, WLAN, Bluetooth, music, video, short messages, and the like. Among them, the camera application may be used to acquire images and video.
As shown in fig. 5, the camera application may include a camera mode module, a stream management module, an encoding control module, and a storage module.
The camera mode module may be used to monitor user operations and determine the mode of the camera. Modes of the camera may include, but are not limited to: a photographing mode, a video preview mode, a video recording mode, a time-lapse photographing mode, a continuous shooting mode, and the like. The video preview mode may include a video preview mode in the focus tracking mode, and the video recording mode may include a video recording mode in the focus tracking mode.
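The following Java sketch is a hypothetical illustration of such a mode module; the enum values, the focus-tracking flag, and the listener interface are assumptions introduced for this example and are not the module's actual code.

```java
/** Hypothetical mode bookkeeping for a camera application; all names are illustrative only. */
public class CameraModeModuleSketch {
    public enum Mode { PHOTO, VIDEO_PREVIEW, VIDEO_RECORDING, TIME_LAPSE, BURST }

    public interface ModeListener {
        void onModeChanged(Mode newMode, boolean focusTracking);
    }

    private Mode currentMode = Mode.PHOTO;
    private boolean focusTracking = false;  // whether the focus tracking (principal angle) sub-mode is active
    private ModeListener listener;

    public void setListener(ModeListener l) { listener = l; }

    /** Called from UI handlers when the user switches mode or toggles focus tracking. */
    public void switchMode(Mode mode, boolean focusTrackingEnabled) {
        currentMode = mode;
        focusTracking = focusTrackingEnabled;
        if (listener != null) listener.onModeChanged(mode, focusTrackingEnabled);
    }

    public Mode getCurrentMode() { return currentMode; }
    public boolean isFocusTracking() { return focusTracking; }
}
```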
The stream management module is used for data stream management, for example, the delivery of data stream configuration information (which may be referred to simply as configuration information). The stream management module may include the addresses of the data stream buffers, for example, video_surface_1 and video_surface_2. video_surface_1 is used to indicate the Buffer for buffering the data stream of the preview window (the first data stream), and video_surface_2 is used to indicate the Buffer for buffering the data stream of the small window (the second data stream and the third data stream).
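To make the two-stream configuration concrete, the sketch below wires two Surfaces (named after video_surface_1 and video_surface_2) into one Camera2 capture session. This is only one assumed way of obtaining two output streams at the application level; in this application the small-window stream is actually produced by the camera HAL copying and cropping the first stream, so the snippet is an illustration, not the described mechanism.

```java
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;
import java.util.Arrays;

public class StreamConfigSketch {
    /** Requests frames into both buffers: videoSurface1 (preview window) and videoSurface2 (small window). */
    public void configureStreams(CameraDevice device,
                                 Surface videoSurface1,
                                 Surface videoSurface2,
                                 Handler handler) throws CameraAccessException {
        device.createCaptureSession(Arrays.asList(videoSurface1, videoSurface2),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        try {
                            CaptureRequest.Builder builder =
                                    device.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
                            builder.addTarget(videoSurface1);  // first data stream
                            builder.addTarget(videoSurface2);  // second data stream
                            session.setRepeatingRequest(builder.build(), null, handler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {
                        // Session could not be configured; error handling omitted in this sketch.
                    }
                }, handler);
    }
}
```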
The encoding control module is used to package the data streams encoded by the encoders and to send the packaged data streams to the corresponding video files created by the storage module.
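The packaging step can be pictured, under assumptions, with Android's MediaMuxer API: encoded samples delivered by an encoder are written into the video file created by the storage module. The class and method names of the sketch are illustrative; only the MediaMuxer calls are standard API.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;
import java.nio.ByteBuffer;

public class PackagingSketch {
    private MediaMuxer muxer;
    private int trackIndex = -1;

    /** Opens the output file (e.g. the "first video file" created by the storage module). */
    public void start(String outputPath, MediaFormat encoderOutputFormat) throws IOException {
        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        trackIndex = muxer.addTrack(encoderOutputFormat);
        muxer.start();
    }

    /** Packs one encoded sample produced by the encoder module into the file. */
    public void writeSample(ByteBuffer encodedData, MediaCodec.BufferInfo info) {
        muxer.writeSampleData(trackIndex, encodedData, info);
    }

    public void finish() {
        muxer.stop();
        muxer.release();
    }
}
```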
The storage module is used for storing original video (video of preview window) and close-up video (small window video).
The application framework layer provides an application programming interface (Application Programming Interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 5, the application framework layer may include a camera FWK and a media FWK.
The camera FWK provides an API interface for applications (e.g., the camera application) to call. It receives requests from the application, maintains the business logic as the requests flow internally, sends the requests to the camera service (Camera Service) for processing by calling the camera AIDL cross-process interface, waits for the camera service (Camera Service) to return its result, and then sends the final result to the camera application. AIDL is short for Android Interface Definition Language. Similarly, the media FWK provides an API interface for the corresponding application (e.g., the camera application) to call, thereby receiving requests from the application, passing the requests down, and returning the results to the application.
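As a generic illustration of the application-to-framework path (not this application's code), the camera application typically enters the camera FWK through public APIs such as CameraManager.openCamera; the request then travels across the AIDL interface to the camera service as described above.

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

public class OpenCameraSketch {
    /** The openCamera call enters the camera FWK, which forwards it over AIDL to the camera service. */
    public void open(Context context, String cameraId, Handler handler) throws CameraAccessException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice device) { /* device ready; configure streams here */ }
            @Override public void onDisconnected(CameraDevice device) { device.close(); }
            @Override public void onError(CameraDevice device, int error) { device.close(); }
        }, handler);
    }
}
```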
It is to be appreciated that the application framework layer can also include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like. For specific meaning, reference is made to the related art documents, and description thereof is not given here.
The Android Runtime is responsible for the scheduling and management of the system. The runtime includes a core library and a virtual machine. The core library comprises two parts: one part is the functions that the programming language (e.g., the Java language) needs to call, and the other part is the core library of the system.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the programming files (e.g., java files) of the application layer and the application framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface Manager (Surface Manager), media library (Media Libraries), three-dimensional graphics processing library (e.g., openGL ES), two-dimensional graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides fusion of two-dimensional (2D) and three-dimensional (3D) layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing 3D graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The hardware abstraction layer (HAL) is an interface layer located between the operating system kernel and the upper-layer software, and its purpose is to abstract the hardware. The hardware abstraction layer is an abstraction interface over the device kernel drivers, and is used to provide the higher-level Java API framework with application programming interfaces for accessing the underlying devices. The HAL contains a plurality of library modules, such as the camera HAL, the vendor library, the display screen, Bluetooth, audio, and the like, and each library module implements an interface for a particular type of hardware component. It is understood that the camera HAL may provide an interface for the camera FWK to access hardware components such as the camera, and the vendor library may provide an interface for the media FWK to access hardware components such as the encoder. When a system framework layer API requires access to the hardware of the portable device, the Android operating system loads the library module for that hardware component.
The kernel layer is the basis of the Android operating system, and the final functions of the Android operating system are completed through the kernel layer. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver and a virtual card driver.
It should be noted that the software structure diagram of the electronic device shown in fig. 5 is provided by the present application only as an example, and does not limit the specific division of modules in the different layers of the Android operating system; for details, reference may be made to descriptions of the software structure of the Android operating system in the conventional technology. In addition, the shooting method provided by the present application may also be implemented based on other operating systems, which are not enumerated here one by one.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk), or the like.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and when executed, may include the procedures of the above method embodiments. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or the like.
In summary, the foregoing description is only an embodiment of the technical solution of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made according to the disclosure of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A photographing method applied to an electronic device having a camera, the method comprising:
displaying a first interface; the first interface comprises a preview window, a first control and a video recording control, wherein the preview window is used for displaying images acquired by the camera;
responding to a first operation aiming at the first control, displaying N marks on a first image, wherein the first image is the image currently displayed by the preview window, and the N marks respectively correspond to N objects in the first image;
displaying a small window on the first interface in response to a second operation for the first mark, and displaying a close-up image of the first object in the small window; the first mark is any one of the N marks, and the first object is an object corresponding to the first mark;
at a first moment, responding to a third operation for the video recording control, and recording a first video and a second video; the first video is the video of the preview window, and the second video is the video of the small window;
displaying a second control on the small window;
at a second moment, responding to a fourth operation for the second control, stopping recording the second video, and not displaying the small window on the first interface;
at a third moment, responding to a fifth operation aiming at a second mark, displaying the small window on the first interface, and displaying a close-up image of a second object on the small window, wherein the second mark is any one of the N marks, and the second object is an object corresponding to the second mark;
and recording a third video, wherein the third video is the video of the small window.
2. The method of claim 1, wherein during the recording of the first video and the second video in response to the third operation for the video recording control, the method further comprises:
detecting a first input operation for a third mark;
and responding to the first input operation, displaying a close-up image of a third object on the small window, wherein the third mark is any one mark except the first mark in the N marks, and the third object is an object corresponding to the third mark.
3. The method of any of claims 1-2, wherein the electronic device comprises a mode module, a stream management module, a storage module, an encoding control module, an encoder module, and a camera HAL module, and wherein after the responding to the first operation for the first control, the method further comprises:
the mode module triggers the stream management module to configure a first data stream and a second data stream; the first data stream is the data stream of the preview window, and the second data stream is the data stream of the small window;
the stream management module configures the first data stream and the second data stream;
the stream management module sends the data stream configuration information to the encoding control module; the data stream configuration information comprises an address of a first storage area and an address of a second storage area, wherein the first storage area is used for caching the first data stream, and the second storage area is used for caching the second data stream;
the storage module creates a first video file and a second video file, and sends video file information to the encoding control module; the first video file is used for storing the video corresponding to the preview window, the second video file is used for storing the video corresponding to the small window, and the video file information comprises first video file information and second video file information;
the encoding control module configures a first encoder parameter and a second encoder parameter based on the video file information and the data stream configuration information, and sends a first encoding configuration parameter to the encoder module; the first encoding configuration parameter includes the first encoder parameter and the second encoder parameter;
the encoder module creates a first encoder based on the first encoder parameter and creates a second encoder based on the second encoder parameter; the first encoder corresponds to the preview window, and the second encoder corresponds to the small window;
the stream management module configures a first stream identification parameter and a second stream identification parameter, and sends the first stream identification parameter and the second stream identification parameter to the camera HAL module; the first stream identification parameter is used for identifying the first data stream, and the second stream identification parameter is used for identifying the second data stream;
the camera HAL module parses the first stream identification parameter and the second stream identification parameter.
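To picture the encoder-parameter configuration recited in claim 3, the following sketch shows how a first and a second encoder parameter could map onto Android's MediaFormat/MediaCodec objects. The MIME type, resolutions, and bit rates are assumptions for illustration and are not values recited by the claim.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

public class EncoderCreationSketch {
    /** Builds an H.264 encoder for one stream; width, height, and bitRate are placeholder values. */
    public static MediaCodec createVideoEncoder(int width, int height, int bitRate) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return encoder;
    }

    /** A "first encoder" (preview window) and a "second encoder" (small window), with assumed parameters. */
    public static MediaCodec[] createBoth() throws IOException {
        MediaCodec first = createVideoEncoder(1920, 1080, 20_000_000);
        MediaCodec second = createVideoEncoder(1080, 1080, 10_000_000);
        return new MediaCodec[] { first, second };
    }
}
```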
4. The method of claim 3, wherein after the responding to the third operation for the video recording control, the method further comprises:
the mode module triggers the stream management module to send a first data request message to the camera HAL module; the first data request message is used for instructing the camera HAL module to cache the first data stream into the first storage area and cache the second data stream into the second storage area;
the mode module triggers the encoding control module to start the first encoder and the second encoder;
the stream management module sends the first data request message to the camera HAL module;
the camera HAL module copies the first data stream sent by the camera to obtain a copied data stream;
the camera HAL module cuts each frame of image in the copied data stream by taking the first object as a center to obtain a second data stream; each frame of image in the second data stream is a close-up image of the first object;
the camera HAL module caches the first data stream into the first storage area and the second data stream into the second storage area;
the encoding control module starts the first encoder and the second encoder;
the first encoder acquires the first data stream from the first storage area and encodes the first data stream to obtain an encoded first data stream;
the first encoder sends the encoded first data stream to the encoding control module;
the second encoder acquires the second data stream from the second storage area and encodes the second data stream to obtain an encoded second data stream;
the second encoder sends the encoded second data stream to the encoding control module;
the encoding control module packs the encoded first data stream and the encoded second data stream respectively, to obtain a packed first data stream and a packed second data stream;
the encoding control module caches the packed first data stream into the first video file, and caches the packed second data stream into the second video file.
5. The method of claim 4, wherein after the responding to the fourth operation for the second control, the method further comprises:
the mode module triggers the stream management module to send a second data request message to the camera HAL module; the second data request message is used for instructing the camera HAL module to stop caching the second data stream into the second storage area;
the mode module triggers the encoding control module to control the second encoder to stop working;
the stream management module sends the second data request message to the camera HAL module;
the camera HAL module does not cache the second data stream into the second storage area and does not copy the first data stream;
the encoding control module sends a first encoding stop request to the encoder module, wherein the first encoding stop request is used for instructing the encoder module to control the second encoder to stop working, delete the second encoder, and create a third encoder;
the encoding control module sends a first storage message to the storage module; the first storage message is used for instructing the storage module to store the second video file and create a third video file; the third video file is a video file of the small window;
the encoder module instructs the second encoder to stop working, deletes the second encoder, and creates the third encoder; the third encoder is an encoder of the small window;
the storage module stores the second video file and creates the third video file;
the storage module sends the information of the third video file to the coding control module;
the encoding control module configures a third encoder parameter based on the information of the third video file and the data stream configuration information, and sends a second encoding configuration parameter to the encoder module; the second encoding configuration parameter includes the third encoder parameter;
the encoder module creates the third encoder based on the second encoding configuration parameter.
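Stopping the small-window encoder and creating a fresh one, as claim 5 recites, might look like the MediaCodec teardown-and-recreate sketch below; this is a generic API usage sketch under assumptions, not the encoder module itself.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import java.io.IOException;

public class EncoderSwapSketch {
    /** Stops and deletes the "second encoder", then creates the "third encoder" for the new small-window video. */
    public static MediaCodec replaceWidgetEncoder(MediaCodec secondEncoder,
                                                  MediaFormat thirdEncoderFormat) throws IOException {
        secondEncoder.stop();      // the second encoder stops working
        secondEncoder.release();   // the second encoder is deleted
        MediaCodec thirdEncoder =
                MediaCodec.createEncoderByType(thirdEncoderFormat.getString(MediaFormat.KEY_MIME));
        thirdEncoder.configure(thirdEncoderFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return thirdEncoder;
    }
}
```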
6. The method of claim 5, wherein after the responding to the fifth operation for the second mark, the method further comprises:
the mode module triggers the stream management module to send a third data request message to the camera HAL module; the third data request message is used for instructing the camera HAL module to cache a third data stream into the second storage area;
the stream management module sends the third data request message to the camera HAL module;
the camera HAL module copies the first data stream sent by the camera to obtain a copied data stream;
the camera HAL module cuts each frame of image in the copied data stream by taking the second object as a center to obtain a third data stream; each frame of image in the third data stream is a close-up image of the second object;
the camera HAL module adds a second timestamp to the third data stream;
the camera HAL module caches the third data stream into the second storage area;
the mode module triggers the encoding control module to start the third encoder;
the encoding control module starts the third encoder;
the third encoder obtains the third data stream from the second storage area and encodes the third data stream to obtain an encoded third data stream;
the third encoder sends the encoded third data stream to the encoding control module;
the encoding control module packs the image data, in the encoded third data stream, whose second timestamp is greater than or equal to the second system time, to obtain a packed third data stream; the second system time is the time at which the second video file is stored;
and the encoding control module caches the packed third data stream into the third video file.
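The timestamp gate of claim 6 (only image data whose second timestamp is at or after the second system time is packed into the third video file) reduces to a single comparison before the packaging step; the cutoff variable and MediaMuxer usage below are assumptions for illustration.

```java
import android.media.MediaCodec;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

public class TimestampGateSketch {
    private final long cutoffUs;   // "second system time": when the second video file was stored
    private final MediaMuxer muxer;
    private final int trackIndex;

    public TimestampGateSketch(long cutoffUs, MediaMuxer muxer, int trackIndex) {
        this.cutoffUs = cutoffUs;
        this.muxer = muxer;
        this.trackIndex = trackIndex;
    }

    /** Packs an encoded sample into the third video file only if its timestamp is at or after the cutoff. */
    public void maybeWrite(ByteBuffer encodedData, MediaCodec.BufferInfo info) {
        if (info.presentationTimeUs >= cutoffUs) {
            muxer.writeSampleData(trackIndex, encodedData, info);
        }
    }
}
```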
7. The method of any of claims 3-6, wherein the first interface displays a stop recording control after the responding to the third operation for the video recording control, and after the responding to the third operation for the video recording control, the method further comprises:
detecting a sixth operation for the stop recording control;
and in response to the sixth operation, saving the first video and the third video.
8. The method of claim 7, wherein after the detecting of the sixth operation for the stop recording control, the method further comprises:
the mode module triggers the stream management module to send a fourth data request message to the camera HAL module; the fourth data request message is used for instructing the camera HAL module to stop caching data streams;
the mode module triggers the encoding control module to control the first encoder and the third encoder to stop working; the third encoder is the encoder of the small window;
the camera HAL module stops caching the first data stream into the first storage area and stops caching the third data stream into the second storage area; the third data stream is the data stream of the small window;
the encoding control module sends a second encoding stop request to the encoder module, wherein the second encoding stop request is used for instructing the encoder module to control the first encoder and the third encoder to stop working;
the encoder module controls the first encoder and the third encoder to stop working;
the mode module triggers the storage module to store the first video file and the third video file; the first video file is a file corresponding to the first video, and the third video file is a file corresponding to the third video;
the storage module stores the first video file and the third video file.
9. An electronic device, comprising: the device comprises a memory, a processor and a touch screen; wherein:
the touch screen is used for displaying content;
the memory is used for storing a computer program, and the computer program comprises program instructions;
the processor is configured to invoke the program instructions to cause the electronic device to perform the method of any of claims 1-8.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1-8.
CN202210751432.0A 2022-05-30 2022-06-29 Shooting method and related electronic equipment Pending CN117221708A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210603463 2022-05-30
CN2022106034631 2022-05-30

Publications (1)

Publication Number Publication Date
CN117221708A true CN117221708A (en) 2023-12-12

Family

ID=89043071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210751432.0A Pending CN117221708A (en) 2022-05-30 2022-06-29 Shooting method and related electronic equipment

Country Status (1)

Country Link
CN (1) CN117221708A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination