CN113596320B - Video shooting variable speed recording method, device and storage medium


Info

Publication number: CN113596320B
Application number: CN202110676713.XA
Authority: CN (China)
Prior art keywords: rate, video, frame, recording, coding
Legal status: Active (the legal status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113596320A (en)
Inventor: 王晨清
Current assignee: Honor Device Co Ltd
Original assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to CN202110676713.XA
Publication of CN113596320A
Application granted
Publication of CN113596320B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application provide a video shooting variable-speed recording method, device, storage medium, and program product. The method includes: receiving a recording rate adjustment operation during video shooting, where the operation is used to adjust the video recording rate; and, in response to the operation, adjusting the video recording rate from a first rate to a second rate, the first rate being different from the second rate. Adjusting the video recording rate from the first rate to the second rate includes at least one of the following: adjusting the reporting frame rate of the video frames, adjusting the encoding frame rate of the video frames, or extracting a portion of the reported video frames for encoding. The user can thus adjust the recording rate during video shooting, which improves the user experience.

Description

Video shooting variable speed recording method, device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video shooting variable speed recording method, device, storage medium, and program product.
Background
To improve the user experience, electronic devices such as mobile phones and tablet computers generally offer different recording rates for video shooting. When shooting video with such a device, the user can select a recording rate according to their needs.
However, in the prior art, the user can only select the recording rate before video shooting begins; the rate cannot be switched while shooting is in progress. For example, if the user needs to change the recording rate during shooting, the current shooting must first be stopped, and shooting restarted after the rate is adjusted, which makes for a poor user experience.
Disclosure of Invention
In view of this, embodiments of the present application provide a video shooting variable speed recording method, device, storage medium, and program product, so as to solve the problem in the prior art that the recording rate cannot be adjusted during the video shooting process.
In a first aspect, an embodiment of the present application provides a video shooting variable-speed recording method applied to a terminal device. The method includes: receiving a recording rate adjustment operation during video shooting, where the operation is used to adjust the video recording rate;
adjusting the video recording rate from a first rate to a second rate in response to the recording rate adjustment operation, where the first rate is different from the second rate;
where adjusting the video recording rate from the first rate to the second rate includes at least one of the following: adjusting the reporting frame rate of the video frames, adjusting the encoding frame rate of the video frames, or extracting a portion of the reported video frames for encoding.
Preferably, when the second rate is greater than the first rate, adjusting the video recording rate from the first rate to the second rate includes at least one of:
reducing the reporting frame rate of the video frames, increasing the encoding frame rate of the video frames, or reducing the extraction proportion of the reported video frames.
Preferably, when the second rate is greater than the first rate, adjusting the video recording rate from the first rate to the second rate includes:
keeping the encoding frame rate of the video frames unchanged while reducing the reporting frame rate of the video frames and/or reducing the frame extraction proportion of the reported video frames.
Preferably, keeping the encoding frame rate unchanged while reducing the reporting frame rate and/or the frame extraction proportion includes:
if the reporting frame rate of the video frames is greater than the encoding frame rate and the frame extraction proportion of the reported video frames is 100%, reducing the reporting frame rate;
if the reporting frame rate of the video frames is equal to the encoding frame rate and the frame extraction proportion of the reported video frames is less than or equal to 100%, reducing the frame extraction proportion.
Preferably, when the second rate is greater than the first rate, adjusting the video recording rate from the first rate to the second rate includes:
keeping the reporting frame rate of the video frames unchanged while increasing the encoding frame rate of the video frames and/or reducing the frame extraction proportion of the reported video frames.
Preferably, when the second rate is greater than the first rate, adjusting the video recording rate from the first rate to the second rate includes:
keeping the frame extraction proportion of the reported video frames unchanged while reducing the reporting frame rate of the video frames and/or increasing the encoding frame rate of the video frames.
Preferably, when the second rate is smaller than the first rate, adjusting the video recording rate from the first rate to the second rate includes at least one of:
increasing the reporting frame rate of the video frames, reducing the encoding frame rate of the video frames, or increasing the extraction proportion of the reported video frames.
Preferably, when the second rate is smaller than the first rate, adjusting the video recording rate from the first rate to the second rate includes:
keeping the encoding frame rate of the video frames unchanged while increasing the reporting frame rate of the video frames and/or increasing the frame extraction proportion of the reported video frames.
Preferably, keeping the encoding frame rate unchanged while increasing the reporting frame rate and/or the frame extraction proportion includes:
if the reporting frame rate of the video frames is greater than or equal to the encoding frame rate and the frame extraction proportion of the reported video frames is 100%, increasing the reporting frame rate;
if the reporting frame rate of the video frames is equal to the encoding frame rate and the frame extraction proportion of the reported video frames is less than 100%, increasing the frame extraction proportion.
Preferably, when the second rate is smaller than the first rate, adjusting the video recording rate from the first rate to the second rate includes:
keeping the reporting frame rate of the video frames unchanged while reducing the encoding frame rate of the video frames and/or increasing the frame extraction proportion of the reported video frames.
Preferably, when the second rate is smaller than the first rate, adjusting the video recording rate from the first rate to the second rate includes:
keeping the frame extraction proportion of the reported video frames unchanged while increasing the reporting frame rate of the video frames and/or reducing the encoding frame rate of the video frames.
Preferably, extracting a portion of the video frames from the reported video frames for encoding includes:
if there are two or more video frame streams, rendering and merging the two or more streams into one stream of video frames;
and extracting a portion of the video frames from the rendered and merged stream for encoding.
In a second aspect, embodiments of the present application provide an electronic device, comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of the first aspect.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a program which, when run, controls the device on which the storage medium resides to perform the method of any one of the above first aspects.
In a fourth aspect, the present application provides a computer program product, which contains executable instructions that, when executed on a computer, cause the computer to perform the method of any one of the above first aspects.
With the technical solution provided by the embodiments of the present application, the user can adjust the video recording rate during shooting, which improves the user experience. In addition, because shooting is not interrupted while the rate is changed, the video pictures captured before and after the rate change can be written into a single video file, which is convenient for the user to play back.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a video shooting variable speed recording method according to an embodiment of the present disclosure;
fig. 3 is a schematic view of a variable speed recording scenario provided in an embodiment of the present application;
fig. 4 is a block diagram of a software structure of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a video shooting variable speed recording method according to an embodiment of the present disclosure;
fig. 6A is a schematic view of a shooting scene in a front-back double-shot mode according to an embodiment of the present application;
fig. 6B is a schematic view of a front-back picture-in-picture mode shooting scene according to an embodiment of the present application;
fig. 6C is a schematic view of a rear picture-in-picture mode shooting scene according to an embodiment of the present application;
fig. 7A is a schematic view of a rendered scene according to an embodiment of the present application;
fig. 7B is a schematic diagram of another rendering scene provided in the embodiment of the present application;
fig. 7C is a schematic view of a video stream rendering and merging scene provided in the embodiment of the present application;
fig. 7D is a schematic view of another video stream rendering and merging scene provided in the embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist. For example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, a schematic view of an electronic device provided in an embodiment of the present application is shown. In fig. 1, the electronic device is illustrated by taking a mobile phone 100 as an example, and fig. 1 shows a front view and a rear view of the mobile phone 100. To provide a shooting function, cameras are generally provided in the mobile phone 100. For example, in the embodiment of the present application, two front cameras 111 and 112 are arranged on the front side of the mobile phone 100, and four rear cameras 121, 122, 123, and 124 are arranged on the rear side.
It is to be understood that the illustration of fig. 1 is merely an example and should not be taken as limiting the scope of the present application. For example, the number and positions of cameras may differ on different mobile phones. In addition to a mobile phone, the electronic device according to the embodiments of the present application may be a tablet computer, a Personal Computer (PC), a Personal Digital Assistant (PDA), a smart watch, a netbook, a wearable electronic device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, smart glasses, a smart television, or the like.
It should be noted that, in some possible implementations, the electronic device may also be referred to as a terminal device, a User Equipment (UE), and the like, which is not limited in this embodiment of the present application.
To improve the user experience, electronic devices such as mobile phones and tablet computers generally offer different recording rates for video shooting, and the user can select a recording rate as needed, for example 0.5× speed, 1× speed, or 2× speed.
However, in the prior art, the user can only select the recording rate before shooting begins; the rate cannot be switched during shooting. For example, if the user needs to change from 1× speed to 2× speed mid-shoot, the current shooting must first be stopped, the recording rate adjusted from 1× to 2×, and shooting restarted, which makes for a poor user experience. Moreover, because this process triggers two separate shooting operations, two video files are generated, which is also inconvenient for the user when playing the video back later.
In view of the foregoing problems, embodiments of the present application provide a video shooting variable speed recording method, device, storage medium, and program product, which can adjust a recording rate of a video in a video shooting process, thereby improving user experience.
Referring to fig. 2, a schematic flow chart of a video shooting variable speed recording method according to an embodiment of the present application is shown. The method can be applied to the electronic device shown in fig. 1, as shown in fig. 2, which mainly includes the following steps.
Step S201: in the video shooting process, receiving a recording rate adjusting operation, wherein the recording rate adjusting operation is used for adjusting the video recording rate.
Video recording in the embodiments of the present application can be understood as encoding the captured video images and storing them as a video file. After recording is complete, the user can open and play the video file.
The video recording rate in the embodiments of the present application can be understood as the apparent rate of the recorded video file. For example, when video is recorded at 1× speed, the generated file plays back at 1× speed (assuming the playback rate is not adjusted, i.e., the file is played at the default 1× speed); when video is recorded at 0.5× speed, the generated file plays back at 0.5× speed; and when video is recorded at 2× speed, the generated file plays back at 2× speed.
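As a worked example of these semantics (the arithmetic here is ours, added for illustration), the playback duration of the generated file follows

$$\text{playback duration} = \frac{\text{capture duration}}{\text{speed multiple}}$$

so 10 seconds captured at 2× speed play back in 5 seconds, while 10 seconds captured at 0.5× speed play back in 20 seconds.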
In a specific implementation, in the initial state the user may start a video shooting application and record video at a first rate, for example 1× speed. During recording, the user may need to adjust the video recording rate, for example from 1× speed to 2× speed, or from 1× speed to 0.5× speed.
At this point, the user may trigger a recording rate adjustment operation on the electronic device, via the touch screen, a physical key, gesture control, voice control, or the like; the embodiment of the present application does not limit this.
Step S202: and adjusting the video recording rate from a first rate to a second rate in response to the recording rate adjustment operation, wherein the first rate is different from the second rate.
When the electronic equipment receives a recording rate adjusting operation input by a user, the electronic equipment responds to the recording rate adjusting operation to adjust the video recording rate from a first rate to a second rate.
Referring to fig. 3, a schematic view of a variable speed recording scene provided in an embodiment of the present application is shown. In fig. 3A, the electronic device records video at 1× speed, which can be understood as the normal video recording rate. To enhance the user experience, a mark for the current recording rate, such as the "1X" shown in fig. 3A, may be displayed on the screen.
When the user needs to adjust the video recording rate, the area corresponding to the rate marking control 301 (the "1X" mark in fig. 3A) can be tapped, causing rate selection controls 302 to be displayed. For example, in fig. 3B, after the user taps the area corresponding to "1X", the screen displays a 0.25× speed selection control "0.25X", a 0.5× speed selection control "0.5X", a 2× speed selection control "2X", and a 4× speed selection control "4X". The user can tap any one of the rate selection controls 302 to set the desired recording rate. For example, when the user taps the area corresponding to "2X", the electronic device starts recording at 2× speed, as shown in fig. 3C.
To further improve the user experience, when the electronic device is recording at a variable speed (i.e., not at 1× speed), the display interface may be adjusted accordingly to remind the user of the variable-speed recording state, as in the bottom area shown in fig. 3C. In the state shown in fig. 3C, the user can repeat the above steps to adjust the rate again, from 2× speed back to 1× speed or to another multiple. For example, in fig. 3D, the recording rate is adjusted back to 1× speed, i.e., the normal video recording rate is restored.
It can be understood that restoring the normal recording rate can be done with the rate adjustment steps above, selecting 1× speed. Alternatively, a recording-rate recovery instruction can be triggered directly to restore the recording rate to normal.
It should be understood that the illustration in fig. 3 is only an exemplary illustration of the embodiments of the present application, and should not be taken as limiting the scope of the present application.
By adopting the technical scheme provided by the embodiment of the application, the user can adjust the recording rate of the video in the video shooting process, and the user experience is improved.
In addition, the shooting of the video is not interrupted in the variable speed adjusting process, so that a video file can be generated by the video pictures shot before and after the variable speed adjustment by adopting the method, and the video file is convenient for users to play.
In a specific implementation, the adjustment of the video recording rate may be implemented by adjusting the reported frame rate of the video frame, adjusting the encoding frame rate of the video frame, and/or extracting a part of the video frame from the reported video frame for encoding. The following description is made with reference to a software configuration block diagram.
Referring to fig. 4, a block diagram of a software structure of an electronic device according to an embodiment of the present application is provided. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android (Android) system is divided into four layers, an application layer, a framework layer, a hardware abstraction layer, and a hardware layer from top to bottom.
The hardware layer (HW) is the hardware located at the lowest level of the operating system. In fig. 4, HW includes camera 1, camera 2, camera 3, and so on, which may correspond to the multiple cameras on the electronic device. It is understood that each camera can capture a corresponding video image.
A Hardware Abstraction Layer (HAL) is an interface layer between the operating system kernel and the hardware circuitry, whose purpose is to abstract the hardware. It hides the hardware interface details of a specific platform and provides the operating system with a virtual hardware platform, making the system hardware-independent and portable across platforms. In fig. 4, the HAL includes a Camera hardware abstraction layer (Camera HAL), which contains camera 1, camera 2, and so on. It can be understood that these cameras are abstract devices. In a video shooting scene, the HAL reports video frames at the required frame rate according to the parameters issued by the upper layer.
The Framework layer (FWK) provides an Application Programming Interface (API) and a programming Framework for applications at the application layer, including some predefined functions. In fig. 4, the framework layer includes a Camera device (CameraDevice) that provides a series of fixed parameters related to the CameraDevice, such as the underlying setup and output format, etc.
An Application layer (App) may comprise a series of Application packages. For example, the application package may include a camera application. The application layer may be further divided into an application interface (UI) and application logic.
In the embodiment of the present application, the application logic of the camera application includes a variable speed control module and an encoding framework module. The variable speed control module issues a frame rate parameter to the HAL through the CameraDevice of the FWK according to the video recording rate set by the user. The encoding framework module performs frame extraction, encoding, and related work on the video frames reported by the bottom layer.
The application interface of the camera application includes a video recording rate adjustment module, through which the user can adjust the video recording rate, for example to 0.5× speed, 1× speed, or 2× speed.
In a specific implementation, the HAL reports video frames at the corresponding reporting frame rate according to the frame rate parameters issued by the upper layer, and the encoding framework encodes them at a given encoding frame rate to generate a video file and complete the video recording.
It can be understood that the reporting frame rate and the encoding frame rate of the video frames both affect the video recording rate. In addition, when the reporting frame rate and the encoding frame rate are fixed, frame extraction can be used to select only part of the reported video frames for encoding, thereby increasing the video recording rate.
Understandably, when the reporting frame rate and the encoding frame rate are fixed, the higher the frame extraction proportion, the lower the video recording rate; when the reporting frame rate and the frame extraction proportion are fixed, the higher the encoding frame rate, the higher the video recording rate; and when the frame extraction proportion and the encoding frame rate are fixed, the higher the reporting frame rate, the lower the video recording rate.
Specifically, when the video recording rate needs to be increased, this can be achieved in at least one of the following ways: reducing the reporting frame rate of the video frames, increasing the encoding frame rate of the video frames, or reducing the extraction proportion of the reported video frames. When the video recording rate needs to be reduced, this can be achieved in at least one of the following ways: increasing the reporting frame rate of the video frames, reducing the encoding frame rate of the video frames, or increasing the extraction proportion of the reported video frames.
In a possible implementation, when the video recording rate needs to be increased, the encoding frame rate of the video frames can be kept unchanged while the reporting frame rate is reduced and/or the frame extraction proportion of the reported video frames is reduced. Alternatively, the reporting frame rate can be kept unchanged while the encoding frame rate is increased and/or the frame extraction proportion is reduced. Alternatively, the frame extraction proportion can be kept unchanged while the reporting frame rate is reduced and/or the encoding frame rate is increased.
Since the encoding frame rate is limited by the encoding capability of the encoder, a preferred approach keeps the encoding frame rate of the video frames unchanged. For example, when the video recording rate needs to be increased, the encoding frame rate is kept unchanged while the reporting frame rate is reduced and/or the frame extraction proportion of the reported video frames is reduced.
Specifically, if the reporting frame rate of the video frames is greater than the encoding frame rate and the frame extraction proportion of the reported video frames is 100%, the reporting frame rate is reduced; if the reporting frame rate is equal to the encoding frame rate and the frame extraction proportion is less than or equal to 100%, the frame extraction proportion is reduced. That is, when the reporting frame rate is greater than the encoding frame rate, reducing the reporting frame rate is preferred; when the reporting frame rate equals the encoding frame rate, the frame extraction proportion is reduced.
Correspondingly, when the video recording rate needs to be reduced, the encoding frame rate of the video frames can be kept unchanged while the reporting frame rate is increased and/or the frame extraction proportion of the reported video frames is increased. Alternatively, the reporting frame rate can be kept unchanged while the encoding frame rate is reduced and/or the frame extraction proportion is increased. Alternatively, the frame extraction proportion can be kept unchanged while the reporting frame rate is increased and/or the encoding frame rate is reduced.
Again, since the encoding frame rate is limited by the encoding capability of the encoder, a preferred approach keeps the encoding frame rate unchanged. For example, when the video recording rate needs to be reduced, the encoding frame rate is kept unchanged while the reporting frame rate is increased and/or the frame extraction proportion of the reported video frames is increased.
Specifically, if the reporting frame rate of the video frames is greater than or equal to the encoding frame rate and the frame extraction proportion of the reported video frames is 100%, the reporting frame rate is increased; if the reporting frame rate is equal to the encoding frame rate and the frame extraction proportion is less than 100%, the frame extraction proportion is increased. That is, when the frame extraction proportion is less than 100%, increasing the frame extraction proportion is preferred; when the frame extraction proportion equals 100%, the reporting frame rate is increased.
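The two symmetric preferences above (drop frames first when speeding up at matched rates; restore dropped frames before raising the reporting rate when slowing down) can be sketched in code. This is a minimal illustration under stated assumptions, not the patent's implementation; all names and the initial 30 fps values are ours:

```java
// Minimal sketch of the preferred adjustment policy described above.
// Class, method, and field names are illustrative, not from the patent.
final class SpeedController {
    private int reportFps = 30;        // frame rate reported by the HAL
    private final int encodeFps = 30;  // encoder frame rate, kept fixed here
    private double extractRatio = 1.0; // fraction of reported frames kept (1.0 = 100%)

    /** Speed up recording (second rate > first rate) by the given factor. */
    void speedUp(double factor) {
        if (reportFps > encodeFps) {
            // Reporting faster than encoding: lower the reporting frame rate first.
            reportFps = Math.max(encodeFps, (int) (reportFps / factor));
        } else {
            // Reporting and encoding rates are equal: drop frames instead.
            extractRatio /= factor; // e.g. 100% -> 50% doubles the speed
        }
    }

    /** Slow down recording (second rate < first rate) by the given factor. */
    void slowDown(double factor) {
        if (extractRatio < 1.0) {
            // Frames are currently being dropped: keep more of them first.
            extractRatio = Math.min(1.0, extractRatio * factor);
        } else {
            // All frames are already encoded: raise the reporting frame rate.
            reportFps = (int) (reportFps * factor);
        }
    }

    /** Effective speed multiple: encodeFps / (reportFps * extractRatio). */
    double speed() {
        return encodeFps / (reportFps * extractRatio);
    }
}
```

Starting from 1× (30 fps reported, 30 fps encoded, no extraction), two calls to speedUp(2) yield extraction proportions of 50% and then 25% (2× and 4× speed), while slowDown(2) from 1× raises the reporting frame rate to 60 fps (0.5× speed), matching the configurations in Table one below.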
Referring to Table one, a configuration scheme for the reporting frame rate, frame extraction proportion, and encoding frame rate at different speed multiples is provided in the embodiment of the present application.
Table one:

Serial number | Reporting frame rate (fps) | Frame extraction proportion | Encoding frame rate (fps) | Speed multiple
1             | 30                         | 25%                         | 30                        | 4×
2             | 30                         | 50%                         | 30                        | 2×
3             | 30                         | 100%                        | 30                        | 1×
4             | 60                         | 100%                        | 30                        | 0.5×
5             | 120                        | 100%                        | 30                        | 0.25×
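All rows of Table one satisfy a single relation, stated here as a hedged formalization (the symbols are ours, not the patent's): with reporting frame rate $f_r$, frame extraction proportion $p$, and encoding frame rate $f_e$,

$$\text{speed multiple} = \frac{f_e}{f_r \cdot p}$$

For example, row 1 gives 30 / (30 × 0.25) = 4×, and row 4 gives 30 / (60 × 1) = 0.5×.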
As shown in Table one, when recording video at 1× speed, the reporting frame rate and the encoding frame rate are both 30 frames/second, and no frame extraction is performed; for ease of comparison, this is written as a frame extraction proportion of 100%.
When fast recording is needed, the reporting frame rate and the encoding frame rate can be kept unchanged while only part of the reported video frames are extracted for encoding. For example, with a frame extraction proportion of 25%, the video recording rate is 4× speed; with a proportion of 50%, it is 2× speed. At a 25% proportion, one frame is kept out of every 4; at 50%, one frame is kept out of every 2. Of course, those skilled in the art can design other frame extraction rules, and the embodiment of the present application does not limit this.
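One possible form of the "one frame in every N" rule mentioned above, sketched as an assumption rather than the patent's stated rule:

```java
// Hedged sketch of a simple frame-extraction rule: keep frame i when i % n == 0.
// n = 4 gives a 25% extraction proportion; n = 2 gives 50%.
static boolean keepFrame(long frameIndex, int n) {
    return frameIndex % n == 0;
}
```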
When slow recording is needed, the encoding frame rate can be kept unchanged while the reporting frame rate is increased, with no frame extraction performed. For example, when the reporting frame rate is 60 frames/second, the video recording rate is 0.5× speed; when it is 120 frames/second, the rate is 0.25× speed.
That is, at the normal speed (1× speed), the configured reporting frame rate and encoding frame rate are equal. When the recording speed needs to be increased, the frame extraction proportion is reduced relative to the normal-speed configuration. In other words, if the reporting frame rate is greater than the encoding frame rate, the encoding frame rate is kept unchanged and the reporting frame rate is reduced; if the reporting frame rate is less than or equal to the encoding frame rate, the encoding frame rate is kept unchanged and the extraction proportion of the reported video frames is reduced.
When the recording speed needs to be reduced, the reporting frame rate is raised relative to the normal-speed configuration. In other words, if some of the reported video frames were being extracted for encoding at the first rate, the encoding frame rate is kept unchanged and the extraction proportion of the reported video frames is increased; if all reported video frames were already being encoded at the first rate, the encoding frame rate is kept unchanged and the reporting frame rate is increased.
The encoding frame rate is kept unchanged in the above embodiments because it is limited by the encoding capability of the encoder. Of course, the embodiments of the present application are not limited to this; those skilled in the art can also improve the encoding capability of the encoder and then adjust the video recording rate by adjusting the encoding frame rate.
Referring to fig. 5, a schematic flow chart of a video shooting variable speed recording method according to an embodiment of the present application is shown. The method is applicable to the software architecture shown in fig. 4. As shown in fig. 5, it mainly includes the following steps.
Step S501: record video at 1× speed, with a reporting frame rate of 30 fps and an encoding frame rate of 30 fps, without frame extraction.
In the example of the present application, at 1× speed the reporting frame rate and the encoding frame rate of the video frames are both 30 fps. Meanwhile, the encoding control module encodes all reported video frames, i.e., no frame extraction is performed.
Step S502: the user triggers a 2x speed video recording operation.
When the user wants to increase the video recording rate, for example to 2× speed, the area corresponding to the "2× speed" control can be tapped in the application interface, triggering the 2× speed video recording operation.
Step S503: send a 2× speed video recording instruction to the variable speed control module.
After receiving the 2× speed video recording operation, the electronic device sends a 2× speed video recording instruction to the variable speed control module, so that the corresponding speed change is carried out by the variable speed control module.
Step S504: the variable speed control module sends a 2× speed video recording instruction to the encoding control module.
After receiving the 2× speed video recording instruction, the variable speed control module forwards it to the encoding control module, so that the encoding control module can realize 2× speed recording by adjusting the encoding behavior.
Step S505: extract 50% of the reported video frames and encode them at 30 fps.
It can be understood that in this process the reporting frame rate is not adjusted, so the hardware abstraction layer keeps reporting video frames at 30 fps.
After receiving the 2× speed video recording instruction, the encoding control module extracts 50% of the reported video frames and encodes them at 30 fps. That is, 15 frames are extracted per second and encoded at a nominal frame rate of 30 frames/second. In this way, the video recording rate is adjusted to 2× speed.
Step S506: the user triggers a 0.5× speed video recording operation.
When the user wants to reduce the video recording rate, for example to 0.5× speed, the area corresponding to the "0.5× speed" control can be tapped in the application interface, triggering the 0.5× speed video recording operation.
Step S507: send a 0.5× speed video recording instruction to the variable speed control module.
After receiving the 0.5× speed video recording operation, the electronic device sends a 0.5× speed video recording instruction to the variable speed control module, so that the corresponding speed change is carried out by the variable speed control module.
Step S508: the variable speed control module sends a 0.5× speed video recording instruction to the encoding control module.
After receiving the 0.5× speed video recording instruction, the variable speed control module forwards it to the encoding control module, so that the encoding control module can realize 0.5× speed recording by adjusting the encoding behavior.
Step S509: the variable speed control module sends a frame rate parameter to the hardware abstraction layer, where the frame rate parameter instructs the hardware abstraction layer to report video frames at 60 fps.
In the embodiment of the present application, the encoding frame rate is 30 fps and remains unchanged. Therefore, to adjust the video recording rate to 0.5× speed, the reporting frame rate of the video frames must be increased.
In a specific implementation, the variable speed control module sends a frame rate parameter to the hardware abstraction layer; this parameter indicates the frame rate at which the hardware abstraction layer should report frames.
After receiving the frame rate parameter from the variable speed control module, the hardware abstraction layer reports video frames at the indicated frame rate. In the embodiment of the present application, the reporting frame rate indicated by the frame rate parameter is 60 fps.
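The patent does not name the API used to deliver the frame rate parameter. On Android, one plausible mechanism (our assumption, not the patent's stated implementation) is to update the target FPS range of the repeating capture request; note that very high rates such as 120 fps may require a constrained high-speed capture session on real hardware:

```java
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.util.Range;
import android.view.Surface;

/** Hedged sketch: ask the camera pipeline to report frames at `fps` (e.g. 60 for 0.5x). */
static void setReportingFps(CameraDevice device, CameraCaptureSession session,
                            Surface recordSurface, Handler handler, int fps)
        throws CameraAccessException {
    CaptureRequest.Builder builder =
            device.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
    builder.addTarget(recordSurface);
    // Fix both ends of the AE target range so the sensor delivers a constant rate.
    builder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, new Range<>(fps, fps));
    session.setRepeatingRequest(builder.build(), /* listener */ null, handler);
}
```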
Step S510: the encoding control module encodes at 30 fps.
It can be understood that when the reporting frame rate is 60fps and the coding frame rate is 30fps, the video recording rate is 0.5 times, and the adjustment of the video recording rate is realized.
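Keeping the encoding frame rate fixed at 30 fps could look like the following MediaCodec configuration (again an assumption; the patent does not name the encoder API, and the bitrate and keyframe interval below are illustrative). In a real pipeline, the perceived playback speed is ultimately governed by the presentation timestamps written to the muxer, which a variable-speed recorder would rescale accordingly:

```java
import java.io.IOException;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

/** Hedged sketch: an H.264 encoder whose nominal frame rate stays fixed at 30 fps. */
static MediaCodec createFixedRateEncoder(int width, int height) throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat(
            MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);      // encoding frame rate
    format.setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000); // illustrative value
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    return encoder;
}
```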
In a specific implementation, the electronic device may support multiple shooting modes, for example a single-shot mode and a multi-shot mode. The single-shot mode may include a front single-shot mode, a rear single-shot mode, and so on; the multi-shot mode may include a front dual-shot mode, a rear dual-shot mode, a front-and-rear dual-shot mode, a front picture-in-picture mode, a rear picture-in-picture mode, a front-and-rear picture-in-picture mode, and so on. In the single-shot mode, one camera is used for video shooting; in the multi-shot mode, two or more cameras are used.
Specifically: in the front single-shot mode, one front camera is used; in the rear single-shot mode, one rear camera is used; in the front dual-shot mode, two front cameras are used; in the rear dual-shot mode, two rear cameras are used; in the front-and-rear dual-shot mode, a front camera and a rear camera are used; in the front picture-in-picture mode, two front cameras are used, with the picture from one front camera inset in the picture from the other; in the rear picture-in-picture mode, two rear cameras are used, with the picture from one rear camera inset in the picture from the other; in the front-and-rear picture-in-picture mode, a front camera and a rear camera are used, with the picture from one of them inset in the picture from the other.
Referring to fig. 6A, a schematic view of a shooting scene in the front-and-rear dual-shot mode according to an embodiment of the present application is shown. In this mode, a front camera captures the foreground picture, a rear camera captures the background picture, and both pictures are displayed in the display interface at the same time.
Referring to fig. 6B, a schematic view of a shooting scene in the front-and-rear picture-in-picture mode is shown. In this mode, a front camera captures the foreground picture, a rear camera captures the background picture, and the foreground picture is inset in the background picture.
Referring to fig. 6C, a schematic view of a shooting scene in the rear picture-in-picture mode is shown. In this mode, one rear camera captures a distant view, another rear camera captures a close-up view, and the close-up picture is inset in the distant-view picture.
In some possible implementations, the shooting modes may also be described as a single-channel mode, a dual-channel mode, or a multi-channel mode: the single-channel mode shoots with one camera, the dual-channel mode with two cameras, and the multi-channel mode with more than two cameras.
In some possible implementations, the shooting modes may also be described as a single-view mode, a dual-view mode, and a picture-in-picture mode. The single-view mode may include the front and rear single-shot modes; the dual-view mode may include the front, rear, and front-and-rear dual-shot modes; the picture-in-picture mode may include the front, rear, and front-and-rear picture-in-picture modes.
In the embodiments of the present application, the video recording rate can be adjusted both in the single-shot mode and in the multi-shot mode. Referring to Table two, variable speed recording scenarios for video shooting are listed.
Table two:

Serial number | Variable speed recording scenario
1             | Single-channel fast recording
2             | Single-channel slow recording
3             | Dual-channel/multi-channel fast recording
4             | Dual-channel/multi-channel slow recording
Here, fast recording means that the video recording rate is increased relative to 1× speed, for example to 1.5× speed or 2× speed; slow recording means that the rate is reduced relative to 1× speed, for example to 0.5× speed or 0.25× speed. Dual-channel/multi-channel in the embodiments of the present application refers to capturing two or more video streams.
It can be understood that if two or more video streams are captured, they need to be rendered and merged into one stream before encoding.
When frame extraction is required in a multi-shot mode, it is performed on the rendered and merged stream. Specifically, two or more video streams are first rendered and merged into one stream, and then a portion of the frames is extracted from the merged stream for encoding, as the sketch below illustrates. The process of rendering and merging video frames is described afterwards with reference to the drawings.
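A hedged sketch of this ordering; Renderer, FrameExtractor, Encoder, and Frame are hypothetical stand-ins for the modules in fig. 4, not real APIs:

```java
// Sketch: in multi-camera modes, frame extraction operates on the single
// merged stream, not on the individual camera streams.
final class MultiStreamRecorder {
    private final Renderer renderer;        // merges two streams into one
    private final FrameExtractor extractor; // e.g. keeps 1 of every 2 at 2x speed
    private final Encoder encoder;

    MultiStreamRecorder(Renderer r, FrameExtractor x, Encoder e) {
        renderer = r; extractor = x; encoder = e;
    }

    /** Called once per pair of camera frames: merge first, then extract, then encode. */
    void onFrames(Frame front, Frame rear) {
        Frame merged = renderer.merge(front, rear); // two streams -> one frame
        if (extractor.shouldKeep(merged.index())) {
            encoder.encode(merged);
        }
    }
}
```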
Referring to fig. 7A, a rendering scene schematic diagram provided in an embodiment of the present application is shown. In fig. 7A, the rendering process is illustrated taking Open GL as an example.
To process the displayed image and the encoded image separately, two rendering engines are usually provided: an Open GL display rendering engine and an Open GL encode rendering engine, both of which can call the Open GL renderer to render images.
In the single-view mode, the Open GL display rendering engine may monitor one video stream through a first monitoring module and a second monitoring module respectively, where the video images monitored by one module are used for display rendering and those monitored by the other are used for encode rendering. Alternatively, only one monitoring module may be used: the monitored video images are display-rendered, and the display-rendered images are then encode-rendered. The details are as follows.
The Open GL display rendering engine monitors the video images captured by the first camera through the first monitoring module and the second monitoring module respectively. It passes the images monitored by the first monitoring module to the Open GL renderer, which places them in the display buffer for caching; it passes the images monitored by the second monitoring module to the Open GL renderer, which places them in the encode buffer. The images cached in the display buffer are sent to the display interface (SurfaceView) and displayed there. The Open GL encode rendering engine takes the video images from the encode buffer, applies any required rendering (for example beautification, or adding a watermark), and sends the rendered images to the encoding module, which performs the corresponding encoding to generate the video file.
It should be noted that when the electronic device shoots video with a single camera, no special rendering of the video images is required, so the images monitored by the first and second monitoring modules of the Open GL display rendering engine may also be passed directly to the display buffer and the encode buffer respectively, without going through the Open GL renderer; this application does not limit this.
In the dual-view mode or the picture-in-picture mode, the Open GL display rendering engine monitors the video images captured by the first camera and the second camera through the first monitoring module and the second monitoring module respectively, and passes the two monitored streams, together with a composition strategy, to the Open GL renderer. The Open GL renderer composes the two streams into one video image per frame according to the composition strategy and places it in the display buffer for caching. The images cached in the display buffer are sent both to the display interface (SurfaceView) and to the encode buffer, and the display interface shows the image. The Open GL encode rendering engine takes the video images from the encode buffer, applies any required rendering (for example beautification, or adding a watermark), and sends the rendered images to the encoding module, which performs the corresponding encoding to generate the video file.
It should be noted that in the above process, apart from the video file generated by the encoding module, which is in MP4 format, all other video images are in RGB format. That is, the video images monitored by the Open GL display rendering engine are in RGB format, and the images output after the Open GL renderer composes them are also in RGB format; the images cached in the display buffer, and those sent to the display interface and the encode buffer, are likewise in RGB format. The Open GL encode rendering engine obtains video images in RGB format and renders them according to the image rendering instructions input by the user, producing rendered images still in RGB format. The encoding module receives video images in RGB format and encodes them to generate a video file in MP4 format.
Referring to fig. 7B, another rendering scene schematic diagram provided in an embodiment of the present application is shown. The difference from fig. 7A is that in the single-view mode, the Open GL display rendering engine may monitor one video stream of the electronic device through a single monitoring module. For example, the Open GL display rendering engine monitors the video images captured by the first camera through the first monitoring module and passes them to the Open GL renderer, which places them in the display buffer for caching. The images cached in the display buffer are sent to the display interface, where they are displayed, and are also passed to the encode buffer. The Open GL encode rendering engine takes the video images from the encode buffer, applies any required rendering (for example beautification, or adding a watermark), and sends the rendered images to the encoding module, which performs the corresponding encoding to generate the video file.
It should be noted that when the electronic device shoots video with a single camera, no special rendering of the video images is required, so the images monitored by the first monitoring module of the Open GL display rendering engine may also be passed directly to the display buffer without going through the Open GL renderer; this application does not limit this.
It should be noted that, in fig. 7A and 7B, the Open GL display rendering engine, the Open GL renderer, and the display buffer area in the monoscopic mode are the same as the Open GL display rendering engine, the Open GL renderer, and the display buffer area in the dual-scene mode. For convenience of illustration, in fig. 7A and 7B, the Open GL display rendering engine, the Open GL renderer, and the display buffer are drawn in both the single view mode and the dual view mode.
In particular, data sharing may be achieved between the Open GL display rendering engine and the Open GL encoding rendering engine through SharedContext.
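SharedContext here plausibly refers to EGL context sharing, which lets the encode-side context sample textures created by the display-side context without copying frames; a minimal sketch of that mechanism (our assumption of how the sharing is realized, with `display`, `config`, and `displayContext` assumed to have been set up already):

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;

/**
 * Create the encode-side context so that it shares texture objects with the
 * display-side context; rendered frames can then be encoded without an extra copy.
 */
static EGLContext createEncodeContext(EGLDisplay display, EGLConfig config,
                                      EGLContext displayContext) {
    int[] attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
    return EGL14.eglCreateContext(display, config, displayContext, attribs, 0);
}
```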
The following describes a rendering process of an Open GL renderer by taking an example of merging two video images into one video image.
Referring to fig. 7C, a schematic view of rendering and merging video streams according to an embodiment of the present application is shown. Fig. 7C shows one frame of the video image captured by the first camera and one frame captured by the second camera, each of size 1080 × 960. According to the position information and texture information of the two images, they are rendered and merged into one frame of size 1080 × 1920. The stitched image is in the dual-view layout, i.e., the image from the first camera and the image from the second camera are displayed side by side. The stitched image can be sent to the encoder for encoding and to the display interface for display.
Referring to fig. 7D, a schematic view of rendering and merging another video stream according to an embodiment of the present application is shown. Fig. 7D shows one frame captured by the first camera, of size 540 × 480, and one frame captured by the second camera, of size 1080 × 960. According to the position information and texture information of the two images, they are rendered and merged into one frame in the picture-in-picture layout.
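One common way to realize both layouts is to render each camera's texture into a different viewport of the same target buffer. A hedged GLES 2.0 sketch follows; `drawTexture` is a hypothetical helper that draws a full-viewport textured quad, and the texture ids and inset position are illustrative values, not from the patent:

```java
import android.opengl.GLES20;

// Sketch: compose two camera textures into one output frame.
static void composeFrames(int frontTex, int rearTex, boolean pictureInPicture) {
    if (!pictureInPicture) {
        // Dual-view layout (fig. 7C): two 1080x960 images stacked into 1080x1920.
        GLES20.glViewport(0, 960, 1080, 960); // upper half: first camera
        drawTexture(frontTex);
        GLES20.glViewport(0, 0, 1080, 960);   // lower half: second camera
        drawTexture(rearTex);
    } else {
        // Picture-in-picture layout (fig. 7D): 1080x960 background, 540x480 inset.
        GLES20.glViewport(0, 0, 1080, 960);   // background: second camera
        drawTexture(rearTex);
        GLES20.glViewport(40, 440, 540, 480); // inset: first camera (position illustrative)
        drawTexture(frontTex);
    }
}
```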
It is understood that the image sizes shown in fig. 7C-7D are only an exemplary illustration of the embodiments of the present application, and should not be taken as a limitation on the scope of the present application.
Corresponding to the above method embodiments, the present application also provides an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, where the computer program instructions, when executed by the processor, trigger the electronic device to perform some or all of the steps in the above method embodiments.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 8, the electronic device 800 may include a processor 810, an external memory interface 820, an internal memory 821, a Universal Serial Bus (USB) interface 830, a charging management module 840, a power management module 841, a battery 842, an antenna 1, an antenna 2, a mobile communication module 850, a wireless communication module 860, an audio module 870, a speaker 870A, a receiver 870B, a microphone 870C, a headset interface 870D, a sensor module 880, a button 890, a motor 891, an indicator 892, a camera 893, a display 894, and a Subscriber Identification Module (SIM) card interface 895, among others. The sensor module 880 may include a pressure sensor 880A, a gyroscope sensor 880B, an air pressure sensor 880C, a magnetic sensor 880D, an acceleration sensor 880E, a distance sensor 880F, a proximity light sensor 880G, a fingerprint sensor 880H, a temperature sensor 880J, a touch sensor 880K, an ambient light sensor 880L, a bone conduction sensor 880M, and the like.
It is to be understood that the illustrated structure of the embodiments of the invention is not to be construed as a specific limitation to the electronic device 800. In other embodiments of the present application, the electronic device 800 may include more or fewer components than illustrated, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 810 may include one or more processing units, such as: the processor 810 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to an instruction operation code and a timing signal, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 810 for storing instructions and data. In some embodiments, the memory in the processor 810 is a cache. The memory may hold instructions or data that the processor 810 has just used or uses cyclically. If the processor 810 needs to use the instructions or data again, they can be fetched directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 810, and thus improves system efficiency.
In some embodiments, processor 810 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 810 may include multiple sets of I2C buses. The processor 810 may be coupled to the touch sensor 880K, charger, flash, camera 893, etc. through different I2C bus interfaces, respectively. For example: the processor 810 may be coupled to the touch sensor 880K via an I2C interface, such that the processor 810 and the touch sensor 880K communicate via an I2C bus interface to implement touch functionality of the electronic device 800.
The I2S interface may be used for audio communication. In some embodiments, processor 810 may include multiple sets of I2S buses. Processor 810 may be coupled to audio module 870 via an I2S bus enabling communication between processor 810 and audio module 870. In some embodiments, audio module 870 may communicate audio signals to wireless communication module 860 via an I2S interface to enable answering a call via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 870 and wireless communication module 860 may be coupled by a PCM bus interface. In some embodiments, the audio module 870 may also transmit audio signals to the wireless communication module 860 through the PCM interface, so as to receive phone calls through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel forms. In some embodiments, a UART interface is generally used to connect the processor 810 and the wireless communication module 860. For example, the processor 810 communicates with the bluetooth module in the wireless communication module 860 through the UART interface to implement the bluetooth function. In some embodiments, the audio module 870 may transmit an audio signal to the wireless communication module 860 through the UART interface, so as to implement the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 810 with peripheral devices such as display screen 894, camera 893, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 810 and camera 893 communicate over a CSI interface to implement the capture functionality of electronic device 800. The processor 810 and the display screen 894 communicate via the DSI interface to implement the display functions of the electronic device 800.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect processor 810 with camera 893, display 894, wireless communication module 860, audio module 870, sensor module 880, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 830 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 830 may be used to connect a charger to charge the electronic device 800, to transmit data between the electronic device 800 and peripheral devices, or to connect an earphone and play audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationships between the modules illustrated in the embodiments of the present invention are only illustrative and do not constitute a limitation on the structure of the electronic device 800. In other embodiments of the present application, the electronic device 800 may also adopt an interface connection manner different from that in the above embodiments, or a combination of multiple interface connection manners.
The charging management module 840 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 840 may receive charging input from a wired charger via the USB interface 830. In some wireless charging embodiments, the charging management module 840 may receive a wireless charging input through a wireless charging coil of the electronic device 800. While the charging management module 840 charges the battery 842, the power management module 841 may also supply power to the electronic device.
The power management module 841 is used to connect the battery 842, the charging management module 840 and the processor 810. The power management module 841 receives input from the battery 842 and/or the charge management module 840 and provides power to the processor 810, the internal memory 821, the display 894, the camera 893, and the wireless communication module 860, among other things. The power management module 841 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 841 may also be disposed in the processor 810. In other embodiments, the power management module 841 and the charging management module 840 may be disposed in the same device.
The wireless communication function of the electronic device 800 may be implemented by the antenna 1, the antenna 2, the mobile communication module 850, the wireless communication module 860, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 800 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 850 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 800. The mobile communication module 850 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 850 may receive electromagnetic waves from the antenna 1, filter, amplify, etc. the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 850 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 850 may be provided in the processor 810. In some embodiments, at least some of the functional blocks of the mobile communication module 850 may be disposed in the same device as at least some of the blocks of the processor 810.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal, and then passes the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 870A, the receiver 870B, etc.), or displays an image or video through the display screen 894. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 810 and disposed in the same device as the mobile communication module 850 or other functional modules.
The wireless communication module 860 may provide solutions for wireless communication applied to the electronic device 800, including Wireless Local Area Networks (WLANs), such as wireless fidelity (Wi-Fi) networks, Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 860 may be one or more devices that integrate at least one communication processing module. The wireless communication module 860 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 810. The wireless communication module 860 may also receive signals to be transmitted from the processor 810, frequency modulate them, amplify them, and convert them into electromagnetic waves via the antenna 2 to radiate them.
In some embodiments, antenna 1 of electronic device 800 is coupled to mobile communication module 850 and antenna 2 is coupled to wireless communication module 860, such that electronic device 800 may communicate with networks and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 800 implements display functions via the GPU, the display screen 894, and the application processor, among other things. The GPU is a microprocessor for image processing, and is connected to a display screen 894 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 810 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 894 is used to display images, video, and the like. The display screen 894 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, electronic device 800 may include 1 or N display screens 894, N being a positive integer greater than 1.
The electronic device 800 may implement a shooting function through the ISP, the camera 893, the video codec, the GPU, the display screen 894, and the application processor, etc.
The ISP is used to process the data fed back by the camera 893. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 893.
The camera 893 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV and other formats. In some embodiments, the electronic device 800 may include 1 or N cameras 893, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 800 selects a frequency bin, the digital signal processor is used to perform a fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 800 may support one or more video codecs. In this way, the electronic device 800 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
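The application is not tied to any particular codec API, but for concreteness, on Android the coding frame rate that the claimed method adjusts would typically be declared when configuring a MediaCodec encoder. The following is a hedged sketch under that assumption; EncoderFactory is a hypothetical name, and the 30 fps default, 8 Mbps bitrate, and H.264 choice are illustrative only:

```java
// Hedged sketch (assumes Android's MediaCodec API; values are illustrative).
// The coding frame rate is declared in the MediaFormat used to configure the
// H.264 encoder that receives the rendered frames from the coding buffer.
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

public final class EncoderFactory {
    public static MediaCodec createAvcEncoder(int width, int height, int codingFps)
            throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, codingFps);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec encoder = MediaCodec.createEncoderByType(
                MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return encoder;
    }
}
```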
The NPU is a neural-network (NN) computing processor that processes input information quickly by drawing on the structure of biological neural networks, for example, the transfer mode between neurons of the human brain, and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 800 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 820 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 800. The external memory card communicates with the processor 810 through the external memory interface 820 to implement data storage functions. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 821 may be used to store computer-executable program code, which includes instructions. The internal memory 821 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 800 (such as audio data and a phone book), and the like. In addition, the internal memory 821 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 810 performs various functional applications and data processing of the electronic device 800 by executing the instructions stored in the internal memory 821 and/or the instructions stored in the memory provided in the processor.
Electronic device 800 may implement audio functionality via audio module 870, speaker 870A, receiver 870B, microphone 870C, headset interface 870D, and an application processor, among other things. Such as music playing, recording, etc.
The audio module 870 is used to convert digital audio information into an analog audio signal output and also used to convert an analog audio input into a digital audio signal. The audio module 870 may also be used to encode and decode audio signals. In some embodiments, audio module 870 may be disposed in processor 810, or some functional modules of audio module 870 may be disposed in processor 810.
The speaker 870A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The electronic apparatus 800 may listen to music or a hands-free call through the speaker 870A.
Receiver 870B, also referred to as a "handset," is used to convert the electrical audio signals into acoustic signals. When the electronic device 800 answers a call or voice information, the voice can be answered by placing the receiver 870B close to the ear of the person.
The microphone 870C, also called a "mic," is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can input a sound signal to the microphone 870C by speaking close to it. The electronic device 800 may be provided with at least one microphone 870C. In other embodiments, the electronic device 800 may be provided with two microphones 870C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 800 may be provided with three, four, or more microphones 870C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headphone interface 870D is used to connect wired headphones. The headphone interface 870D may be the USB interface 830, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 880A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 880A may be disposed on the display screen 894. There are many types of pressure sensors 880A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material. When a force acts on the pressure sensor 880A, the capacitance between the electrodes changes, and the electronic device 800 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 894, the electronic device 800 detects the intensity of the touch operation based on the pressure sensor 880A, and may also calculate the position of the touch from the detection signal of the pressure sensor 880A. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation with an intensity less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyro sensor 880B may be used to determine the motion posture of the electronic device 800. In some embodiments, the angular velocities of the electronic device 800 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 880B. The gyro sensor 880B may be used for image stabilization during shooting. Illustratively, when the shutter is pressed, the gyro sensor 880B detects the shake angle of the electronic device 800, calculates the distance that the lens module needs to compensate according to the shake angle, and allows the lens to counteract the shake of the electronic device 800 through reverse movement, thereby achieving anti-shake. The gyro sensor 880B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 880C is used to measure air pressure. In some embodiments, the electronic device 800 calculates the altitude from the barometric pressure value measured by the air pressure sensor 880C, to assist in positioning and navigation.
The magnetic sensor 880D includes a Hall sensor. The electronic device 800 may detect the opening and closing of a flip holster using the magnetic sensor 880D. In some embodiments, when the electronic device 800 is a flip phone, it can detect the opening and closing of the flip cover according to the magnetic sensor 880D, and then set features such as automatic unlocking upon flip-open according to the detected opening or closing state of the holster or the flip cover.
The acceleration sensor 880E can detect the magnitude of the acceleration of the electronic device 800 in various directions (typically along three axes), and can detect the magnitude and direction of gravity when the electronic device 800 is stationary. It can also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
The distance sensor 880F is used to measure distance. The electronic device 800 may measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 800 may use the distance sensor 880F to measure distance to achieve fast focusing.
The proximity light sensor 880G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 800 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 800; when insufficient reflected light is detected, the electronic device 800 can determine that there is no object nearby. The electronic device 800 can use the proximity light sensor 880G to detect that the user is holding the electronic device 800 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 880G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 880L is used to sense ambient light brightness. The electronic device 800 may adaptively adjust the brightness of the display screen 894 based on the perceived ambient light level. The ambient light sensor 880L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 880L may also cooperate with the proximity light sensor 880G to detect whether the electronic device 800 is in a pocket to prevent inadvertent contact.
The fingerprint sensor 880H is used to collect a fingerprint. The electronic device 800 can utilize the collected fingerprint characteristics to achieve fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 880J is used to detect temperature. In some embodiments, the electronic device 800 executes a temperature processing strategy using the temperature detected by the temperature sensor 880J. For example, when the temperature reported by the temperature sensor 880J exceeds a threshold, the electronic device 800 reduces the performance of a processor located near the temperature sensor 880J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 800 heats the battery 842 when the temperature is below another threshold, to avoid an abnormal shutdown of the electronic device 800 caused by low temperature. In still other embodiments, the electronic device 800 boosts the output voltage of the battery 842 when the temperature is below yet another threshold, to avoid an abnormal shutdown caused by low temperature.
The touch sensor 880K is also referred to as a "touch device." The touch sensor 880K may be disposed on the display screen 894, and the touch sensor 880K and the display screen 894 form a touch screen. The touch sensor 880K is used to detect a touch operation applied on or near it, and can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 894. In other embodiments, the touch sensor 880K may also be disposed on the surface of the electronic device 800 at a position different from that of the display screen 894.
The bone conduction sensor 880M may acquire a vibration signal. In some embodiments, the bone conduction sensor 880M can acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 880M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 880M may also be provided in a headset to form a bone conduction headset. The audio module 870 may parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 880M, to implement a voice function. The application processor may parse heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 880M, to implement a heart rate detection function.
The keys 890 include a power key, a volume key, and the like. The keys 890 may be mechanical keys or touch keys. The electronic device 800 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 800.
The motor 891 may generate a vibration cue. The motor 891 may be used for incoming call vibration prompts as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and the motor 891 may likewise produce different vibration feedback effects for touch operations applied to different areas of the display screen 894. Different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 892 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 895 is used to connect a SIM card. A SIM card can be brought into and out of contact with the electronic device 800 by being inserted into or pulled out of the SIM card interface 895. The electronic device 800 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 895 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 895 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 895 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 800 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 800 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 800 and cannot be separated from the electronic device 800.
In a specific implementation manner, the present application further provides a computer storage medium, where the computer storage medium may store a program, and when the program runs, it controls the device on which the computer-readable storage medium resides to perform some or all of the steps of the foregoing embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
In a specific implementation, an embodiment of the present application further provides a computer program product, where the computer program product includes executable instructions, and when the executable instructions are executed on a computer, the computer is caused to perform some or all of the steps in the foregoing method embodiments.
In the embodiments of the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean that A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of electronic hardware and computer software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.
In the several embodiments provided by the present invention, if any function is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only an embodiment of the present invention. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A video shooting variable-speed recording method, applied to a terminal device provided with a camera, characterized in that the method comprises the following steps:
respectively monitoring video images collected by the camera through a first monitoring module and a second monitoring module;
transmitting the first video image monitored by the first monitoring module to a display cache region;
transmitting the first video image cached in the display cache region to a display interface, and displaying the first video image in the display interface;
transmitting the second video image monitored by the second monitoring module to a coding cache region;
receiving a recording rate adjustment operation from a user in a video shooting process, wherein the recording rate adjustment operation is used for adjusting the video recording rate of the second video image;
adjusting the rate of video recording from a first rate to a second rate in response to the recording rate adjustment operation, wherein the first rate is different from the second rate;
performing related rendering on the second video image in the coding cache region, and sending the rendered second video image to a coding module;
performing, by the coding module, corresponding coding processing on the rendered second video image to generate a video file,
wherein adjusting the video recording rate from the first rate to the second rate comprises at least one of the following manners: adjusting the reporting frame rate of the video frames, adjusting the coding frame rate of the video frames, or extracting a part of the video frames from the reported video frames for coding,
the video recording rate is the playback rate of the generated video file,
the receiving a recording rate adjustment operation from a user comprises: receiving a selectable rate selected by the user from a plurality of selectable rates, and taking the selected selectable rate as the second rate, wherein the plurality of selectable rates include selectable rates greater than 1× speed and selectable rates less than 1× speed.
2. The method of claim 1, wherein when the second rate is greater than the first rate, the adjusting the rate of the video recording from the first rate to the second rate comprises at least one of:
and reducing the reporting frame rate of the video frame, improving the coding frame rate of the video frame, or reducing the extraction proportion of the reported video frame.
3. The method of claim 1, wherein when the second rate is greater than the first rate, the adjusting the rate of video recording from the first rate to the second rate comprises:
and controlling the coding frame rate of the video frame to be kept unchanged, reducing the reporting frame rate of the video frame, and/or reducing the frame extraction proportion of the reported video frame.
4. The method of claim 3, wherein the controlling the coding frame rate of the video frames to remain unchanged, the reducing the reporting frame rate of the video frames, and/or the reducing the frame extraction proportion of the reported video frames comprises:
if the reporting frame rate of the video frames is greater than the coding frame rate of the video frames and the frame extraction proportion of the reported video frames is 100%, reducing the reporting frame rate of the video frames;
if the reporting frame rate of the video frames is equal to the coding frame rate of the video frames and the frame extraction proportion of the reported video frames is less than or equal to 100%, reducing the frame extraction proportion of the video frames.
5. The method of claim 1, wherein when the second rate is greater than the first rate, the adjusting the rate of video recording from the first rate to the second rate comprises:
and controlling the reporting frame rate of the video frame to be kept unchanged, improving the coding frame rate of the video frame, and/or reducing the frame extraction proportion of the reported video frame.
6. The method of claim 1, wherein when the second rate is greater than the first rate, the adjusting the rate of video recording from the first rate to the second rate comprises:
and controlling the frame extraction proportion of the reported video frame to be kept unchanged, reducing the reporting frame rate of the video frame, and/or improving the coding frame rate of the video frame.
7. The method of claim 1, wherein when the second rate is less than the first rate, the adjusting the rate of video recording from the first rate to the second rate comprises at least one of:
and increasing the reporting frame rate of the video frame, reducing the coding frame rate of the video frame, or increasing the extraction proportion of the reported video frame.
8. The method of claim 1, wherein when the second rate is less than the first rate, the adjusting the rate of video recording from the first rate to the second rate comprises:
and controlling the coding frame rate of the video frame to be kept unchanged, improving the reporting frame rate of the video frame, and/or improving the frame extraction proportion of the reported video frame.
9. The method of claim 8, wherein the controlling the coding frame rate of the video frames to remain unchanged, the increasing the reporting frame rate of the video frames, and/or the increasing the frame extraction proportion of the reported video frames comprises:
if the reporting frame rate of the video frames is greater than or equal to the coding frame rate of the video frames and the frame extraction proportion of the reported video frames is 100%, increasing the reporting frame rate of the video frames;
if the reporting frame rate of the video frames is equal to the coding frame rate of the video frames and the frame extraction proportion of the reported video frames is less than 100%, increasing the frame extraction proportion of the video frames.
10. The method of claim 1, wherein when the second rate is less than the first rate, the adjusting the rate of video recording from the first rate to the second rate comprises:
and controlling the reporting frame rate of the video frame to be kept unchanged, reducing the coding frame rate of the video frame, and/or improving the frame extraction proportion of the reported video frame.
11. The method of claim 1, wherein when the second rate is less than the first rate, the adjusting the rate of the video recording from the first rate to the second rate comprises:
and controlling the frame extraction proportion of the reported video frame to be kept unchanged, improving the reporting frame rate of the video frame, and/or reducing the coding frame rate of the video frame.
12. The method according to any one of claims 1-11, wherein the extracting a part of the video frames from the reported video frames for coding comprises:
if two or more channels of video frames exist, rendering and merging the two or more channels of video frames into one channel of video frames;
transmitting the merged channel of video frames to the display cache region;
transmitting the channel of video frames cached in the display cache region to the display interface and to the coding cache region, respectively;
displaying the video frames in the display interface;
performing related rendering on the channel of video frames in the coding cache region;
and extracting a part of the video frames from the related-rendered video frames for coding, to generate the video file.
13. An electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any of claims 1-12.
14. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium resides to perform the method of any one of claims 1-12.
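To make the rate relation running through claims 1-11 concrete: one reading of the claims is that the playback speed factor equals the coding frame rate divided by the product of the reporting frame rate and the frame extraction proportion, which is consistent with the adjustment directions in claims 2 and 7. The sketch below is that interpretation worked as arithmetic, not language from the granted claims; SpeedFactor and the sample frame rates are illustrative:

```java
// Hedged worked example of the rate relation implied by claims 2 and 7
// (an interpretation, not text from the claims):
//   speed = codingFps / (reportingFps * extractionProportion)
public final class SpeedFactor {
    static double speed(double reportingFps, double extractionProportion, double codingFps) {
        return codingFps / (reportingFps * extractionProportion);
    }

    public static void main(String[] args) {
        System.out.println(speed(30, 1.0, 30)); // 1.0 -> normal-speed recording
        System.out.println(speed(30, 0.5, 30)); // 2.0 -> faster playback (claim 2: lower frame extraction proportion)
        System.out.println(speed(60, 1.0, 30)); // 0.5 -> slow motion (claim 7: higher reporting frame rate)
    }
}
```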
CN202110676713.XA 2021-06-16 2021-06-16 Video shooting variable speed recording method, device and storage medium Active CN113596320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110676713.XA CN113596320B (en) 2021-06-16 2021-06-16 Video shooting variable speed recording method, device and storage medium


Publications (2)

Publication Number Publication Date
CN113596320A CN113596320A (en) 2021-11-02
CN113596320B true CN113596320B (en) 2022-07-01

Family

ID=78243987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110676713.XA Active CN113596320B (en) 2021-06-16 2021-06-16 Video shooting variable speed recording method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113596320B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114520874B (en) * 2022-01-28 2023-11-24 西安维沃软件技术有限公司 Video processing method and device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107277412A (en) * 2017-07-24 2017-10-20 腾讯科技(深圳)有限公司 Video recording method and device, graphics processor and electronic equipment
WO2021031915A1 (en) * 2019-08-22 2021-02-25 华为技术有限公司 Intelligent video recording method and apparatus
CN112532904A (en) * 2020-11-26 2021-03-19 维沃移动通信有限公司 Video processing method and device and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104270649B (en) * 2014-10-28 2019-01-22 中磊电子(苏州)有限公司 Image coding device and video encoding method
CN107396019B (en) * 2017-08-11 2019-05-17 维沃移动通信有限公司 A kind of slow motion video method for recording and mobile terminal
CN108347580B (en) * 2018-03-27 2020-09-25 聚好看科技股份有限公司 Method for processing video frame data and electronic equipment
CN109819171A (en) * 2019-02-26 2019-05-28 维沃移动通信有限公司 A kind of video capture method and terminal device
CN113411528B (en) * 2019-02-28 2022-10-11 华为技术有限公司 Video frame rate control method, terminal and storage medium
CN112637476A (en) * 2019-09-24 2021-04-09 中兴通讯股份有限公司 Video recording method, device, terminal and computer readable storage medium


Also Published As

Publication number Publication date
CN113596320A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN110072070B (en) Multi-channel video recording method, equipment and medium
CN113475057B (en) Video frame rate control method and related device
CN113473005B (en) Shooting transfer live-action insertion method, equipment and storage medium
WO2022262313A1 (en) Picture-in-picture-based image processing method, device, storage medium, and program product
CN114489533A (en) Screen projection method and device, electronic equipment and computer readable storage medium
CN110248037B (en) Identity document scanning method and device
CN114466107A (en) Sound effect control method and device, electronic equipment and computer readable storage medium
CN114339429A (en) Audio and video playing control method, electronic equipment and storage medium
CN114257920B (en) Audio playing method and system and electronic equipment
CN113596321A (en) Transition dynamic effect generation method, apparatus, storage medium, and program product
CN113852755A (en) Photographing method, photographing apparatus, computer-readable storage medium, and program product
CN114500901A (en) Double-scene video recording method and device and electronic equipment
CN113593567B (en) Method for converting video and sound into text and related equipment
CN113596320B (en) Video shooting variable speed recording method, device and storage medium
CN113923351B (en) Method, device and storage medium for exiting multi-channel video shooting
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
CN109285563B (en) Voice data processing method and device in online translation process
CN113965693B (en) Video shooting method, device and storage medium
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN114827098A (en) Method and device for close shooting, electronic equipment and readable storage medium
CN114661258A (en) Adaptive display method, electronic device, and storage medium
CN114079725A (en) Video anti-shake method, terminal device and computer-readable storage medium
CN113810595B (en) Encoding method, apparatus and storage medium for video shooting
WO2023020420A1 (en) Volume display method, electronic device, and storage medium
CN114125554B (en) Encoding and decoding method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant