CN113810595B - Encoding method, apparatus and storage medium for video shooting

Encoding method, apparatus and storage medium for video shooting

Info

Publication number
CN113810595B
Authority
CN
China
Prior art keywords: shooting mode, video, shooting, mode, picture
Prior art date
Legal status
Active
Application number
CN202110682672.5A
Other languages
Chinese (zh)
Other versions
CN113810595A (en)
Inventor
王拣贤
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202110682672.5A priority Critical patent/CN113810595B/en
Publication of CN113810595A publication Critical patent/CN113810595A/en
Application granted granted Critical
Publication of CN113810595B publication Critical patent/CN113810595B/en

Classifications

    • H04N23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio (under H04N23/95 Computational photography systems, e.g. light-field imaging systems)
    • H04N23/62 Control of parameters via user interfaces (under H04N23/60 Control of cameras or camera modules)
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters (under H04N23/63 Control of cameras or camera modules by using electronic viewfinders)
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N5/265 Mixing (under H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects)
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the present application provides an encoding method for video shooting, a video shooting device, and a storage medium. In the method, an electronic device starts a first shooting mode to shoot video and encodes the video pictures shot in the first shooting mode. During shooting, the electronic device receives a shooting mode switching operation for switching the first shooting mode to a second shooting mode, the first shooting mode being different from the second shooting mode, and then encodes the video pictures shot in the second shooting mode. Upon receiving a stop-shooting operation, the electronic device generates a video file that includes the video pictures shot in the first shooting mode and the video pictures shot in the second shooting mode, and stores the video file. Because the video shot in the first shooting mode and the video shot in the second shooting mode are encoded into one video file across the shooting mode switch, subsequent playback is facilitated and user experience is improved.

Description

Encoding method, apparatus and storage medium for video shooting
Technical Field
The present application relates to the field of computer technologies, and in particular, to an encoding method, an encoding apparatus, and a storage medium for video shooting.
Background
To improve user experience, electronic devices such as mobile phones and tablet computers are usually equipped with multiple cameras, for example a front camera and a rear camera. The user can select a shooting mode according to need, such as a front mode, a rear mode, or a front-and-rear dual mode.
During video shooting, a user may need to switch the shooting mode, for example from the front mode to the rear mode. In the prior art, however, the user cannot switch the shooting mode directly during shooting. For example, to switch from the front mode to the rear mode, the user must stop video shooting in the front mode, set the shooting mode to the rear mode, and then shoot the video in the rear mode, which makes for a poor user experience.
Disclosure of Invention
In view of the above, the present application provides an encoding method, an encoding device, and a storage medium for video shooting, so as to solve the problem that the shooting mode cannot be freely switched in the video shooting process in the prior art.
In a first aspect, an embodiment of the present application provides an encoding method for video shooting, which is applied to an electronic device, and includes:
the electronic equipment starts a first shooting mode to shoot video, and encodes a video picture shot in the first shooting mode;
during shooting, the electronic equipment receives shooting mode switching operation, wherein the shooting mode switching operation is used for switching the first shooting mode to a second shooting mode, and the first shooting mode is different from the second shooting mode;
encoding a video picture photographed by the second photographing mode;
receiving shooting stopping operation, and generating a video file, wherein the video file comprises a video picture shot in the first shooting mode and a video picture shot in the second shooting mode;
and storing the video file.
Preferably, after the electronic device receives the shooting mode switching operation for switching the first shooting mode to the second shooting mode, and before the video picture shot in the second shooting mode is encoded,
the method further comprises:
generating a transition dynamic effect picture;
and encoding the transition dynamic effect picture.
Preferably, the encoding of the video picture photographed in the second photographing mode includes:
and after the video picture shot in the second shooting mode is obtained, coding the video picture shot in the second shooting mode.
Preferably, the method further comprises:
and receiving a pause shooting operation, and pausing the coding of the video picture shot in the first shooting mode, the transition dynamic effect picture or the video picture shot in the second shooting mode corresponding to the pause shooting operation.
Preferably, the encoding duration of the transition dynamic effect picture matches the stream interruption duration of the shooting mode switch, the stream interruption duration being the time difference between the last frame of video picture reported in the first shooting mode and the first frame of video picture reported in the second shooting mode.
Preferably, the first shooting mode or the second shooting mode produces at least two video streams, and encoding the video pictures shot in the first shooting mode or the second shooting mode comprises:
the electronic device monitors the two or more video streams;
rendering and merging the two or more video streams;
and encoding the rendered and merged video pictures.
Preferably, the rendering and merging of the two or more video streams comprises:
rendering and merging the two or more video streams according to the texture information, the position information, and a merging strategy of the two or more video streams.
Preferably, the merging strategy at least comprises:
splicing the two or more video streams; or,
filling at least one of the two or more video streams into another of the two or more video streams.
Preferably, the shooting mode switching operation delays the turning off of the first shooting mode while turning on the second shooting mode.
Preferably, after the first shooting mode is turned off in a delayed manner, the time for reporting the last frame of video image by the first shooting mode is matched with the time for reporting the first frame of video image by the second shooting mode.
In a second aspect, embodiments of the present application provide an electronic device, comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the electronic device is triggered to perform the method of any one of the first aspects.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium includes a stored program, where when the program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the method in any one of the above first aspects.
By adopting the technical scheme provided by the embodiment of the application, the shooting mode switching operation can be triggered in the video shooting process, and the first shooting mode is switched to the second shooting mode. In the switching process of the shooting modes, the video shot in the first shooting mode and the video shot in the second shooting mode are coded into a video file, so that subsequent playing is facilitated, and user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2A is a schematic view of a shooting scene in a front-back double-shot mode according to an embodiment of the present application;
fig. 2B is a schematic view of a front-back picture-in-picture mode shooting scene according to an embodiment of the present application;
fig. 2C is a schematic view of a rear picture-in-picture mode shooting scene according to an embodiment of the present application;
fig. 3 is a schematic view of a scene of switching shooting modes according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of an encoding method for video shooting according to an embodiment of the present disclosure;
fig. 5 is a schematic view of another shooting mode switching scene provided in the embodiment of the present application;
fig. 6A is a schematic diagram of an encoded stream according to an embodiment of the present application;
fig. 6B is a schematic diagram of another encoded stream according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another encoding method for video shooting according to an embodiment of the present disclosure;
fig. 8A is a schematic diagram of another encoded stream provided in the embodiment of the present application;
fig. 8B is a schematic view of another encoded stream according to an embodiment of the present application;
fig. 9 is a schematic diagram of another encoded stream provided in the embodiment of the present application;
fig. 10 is a schematic view of another shooting mode switching scenario provided in an embodiment of the present application;
fig. 11 is a schematic view of another shooting mode switching scene provided in the embodiment of the present application;
fig. 12 is a block diagram of a software structure of an electronic device according to an embodiment of the present application;
fig. 13 is a schematic flowchart of another encoding method for video shooting according to an embodiment of the present disclosure;
fig. 14A is a schematic view of a rendered scene according to an embodiment of the present application;
fig. 14B is a schematic view of another rendering scene provided in the embodiment of the present application;
fig. 15A is a schematic view of a video stream rendering and merging scene provided in an embodiment of the present application;
fig. 15B is a schematic view of another video stream rendering and merging scene provided in the embodiment of the present application;
fig. 15C is a schematic view of a transition dynamic effect rendering scene provided in the embodiment of the present application;
fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solution of the present application, the following detailed description is made with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, a schematic view of an electronic device provided in an embodiment of the present application is shown. In fig. 1, an electronic device is exemplified by a mobile phone 100, and fig. 1 shows a front view and a rear view of the mobile phone 100, two front cameras 111 and 112 are arranged on the front side of the mobile phone 100, and four rear cameras 121, 122, 123, and 124 are arranged on the rear side of the mobile phone 100. Through the plurality of cameras, a plurality of shooting modes can be provided for a user. The user can select a corresponding shooting mode to shoot according to the shooting scene so as to improve the user experience.
It is to be understood that the illustration of fig. 1 is merely exemplary and should not be taken as a limitation on the scope of the present application. For example, the number and positions of the cameras may differ between mobile phones. In addition to a mobile phone, the electronic device according to the embodiment of the present application may be a tablet computer, a Personal Computer (PC), a Personal Digital Assistant (PDA), a smart watch, a netbook, a wearable electronic device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, smart glasses, a smart television, or the like.
It should be noted that, in some possible implementations, the electronic device may also be referred to as a terminal device, a User Equipment (UE), and the like, which is not limited in this embodiment of the present application.
In some possible implementations, the shooting modes involved in the electronic device may include a single-shot mode and a multi-shot mode. The single-shot mode may include a front single-shot mode, a rear single-shot mode, and the like; the multi-shot mode may include a front dual mode, a rear dual mode, a front-and-rear dual mode, a front picture-in-picture mode, a rear picture-in-picture mode, a front-and-rear picture-in-picture mode, and the like.
Wherein, in the single shooting mode, one camera is adopted to shoot the video; and two or more cameras are adopted to shoot videos in a multi-shooting mode.
Specifically, in the front single-shot mode, one front camera is used for video shooting; in the rear single-shot mode, one rear camera is used; in the front dual mode, two front cameras are used; in the rear dual mode, two rear cameras are used; in the front-and-rear dual mode, a front camera and a rear camera are used; in the front picture-in-picture mode, two front cameras are used, and the picture shot by one front camera is placed inside the picture shot by the other; in the rear picture-in-picture mode, two rear cameras are used, and the picture shot by one rear camera is placed inside the picture shot by the other; in the front-and-rear picture-in-picture mode, a front camera and a rear camera are used, and the picture shot by one of them is placed inside the picture shot by the other. These pairings are summarized in the sketch below.
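As a compact restatement of these pairings (a sketch only; the enum and its names are illustrative assumptions, not part of the patent):

```java
// Hypothetical summary of the shooting modes described above.
enum ShootingMode {
    FRONT_SINGLE(1),     // one front camera
    REAR_SINGLE(1),      // one rear camera
    FRONT_DUAL(2),       // two front cameras, spliced
    REAR_DUAL(2),        // two rear cameras, spliced
    FRONT_REAR_DUAL(2),  // one front + one rear, spliced
    FRONT_PIP(2),        // two front cameras, one inset in the other
    REAR_PIP(2),         // two rear cameras, one inset in the other
    FRONT_REAR_PIP(2);   // front + rear, one inset in the other

    final int cameraCount;
    ShootingMode(int cameraCount) { this.cameraCount = cameraCount; }
}
```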
Referring to fig. 2A, a schematic view of a shooting scene in a front-back double-shot mode according to an embodiment of the present application is provided. In the front-back double-shooting mode, a front camera is used for collecting a foreground picture, a rear camera is used for collecting a background picture, and the foreground picture and the background picture are displayed in a display interface at the same time.
Referring to fig. 2B, a schematic view of a front-back picture-in-picture mode shooting scene is provided in the embodiment of the present application. In the front-back picture-in-picture mode, a front-facing camera is used for collecting a foreground picture, a rear-facing camera is used for collecting a background picture, and the foreground picture is placed in the background picture.
Referring to fig. 2C, a schematic view of a rear picture-in-picture mode shooting scene is provided in an embodiment of the present application. In the rear picture-in-picture mode, one rear camera collects a distant-view picture, another rear camera collects a close-view picture, and the close-view picture is placed inside the distant-view picture.
It should be noted that the above-mentioned shooting modes are only some possible implementations listed in the embodiments of the present application, and those skilled in the art may configure other shooting modes according to actual needs, and the embodiments of the present application do not specifically limit this.
In some possible implementations, the capture mode may also be described as a single-pass mode, a two-pass mode, or a multi-pass mode. It can be understood that the single-path mode adopts a camera to shoot, the double-path mode adopts two cameras to shoot, and the multi-path mode adopts more than two cameras to shoot.
In some possible implementations, the shooting mode may also be described as a single view mode, a dual view mode, and a picture-in-picture mode. The single shot mode can comprise a front single shot mode and a rear single shot mode; the double-scene mode can comprise a front double-shot mode, a rear double-shot mode and a front and rear double-shot mode; the pip mode may include a front pip mode, a rear pip mode, and a front and rear pip mode.
During video shooting, a user may need to switch the shooting mode. Referring to table one, some possible shooting mode switching scenarios are listed for the embodiments of the present application.
Table 1: shooting mode switching scenarios (the table is rendered as an image in the original publication).
However, in the related art, a user cannot directly switch the shooting mode during shooting. For example, while shooting in the front single-shot mode, the shooting mode cannot be switched to another mode; shooting can only be stopped or continued. Therefore, when a user needs to switch from the front single-shot mode to the rear single-shot mode, the user must first stop video shooting, then set the shooting mode to the rear single-shot mode, and shoot the video in the rear single-shot mode, resulting in a poor user experience.
To solve the problem, the embodiment of the application provides a shooting mode switching method, which can directly switch the shooting mode in the shooting process.
Referring to fig. 3, a schematic view of a scene of switching shooting modes according to an embodiment of the present application is shown. As shown in fig. 3, during the process of video shooting through the electronic device, the user can display the shot video picture in real time in the display interface. In addition, a shooting mode selection window 301 is further included in the display interface, and a user can select a corresponding shooting mode in the shooting mode selection window 301 to perform video shooting. For example, a front single shot mode, a rear single shot mode, a front and rear double shot mode, a front and rear picture-in-picture mode, and the like.
In the application scenario shown in fig. 3, the user first selects the front single shot mode to perform video shooting, and displays the foreground picture in real time in the display interface. When the user triggers the "front and rear double-shot" control in the shooting mode selection window 301, the electronic device receives a shooting mode switching operation, directly switches the front single-shot mode into the front and rear double-shot mode, and displays video pictures shot in the front and rear double-shot mode in real time in the display interface, for example, a foreground picture and a background picture shown in fig. 3. In other words, in the front-back double-shooting mode, the front camera and the back camera respectively collect the foreground picture and the background picture, and the foreground picture and the background picture are respectively displayed in the display interface.
It can be understood that, in the video shooting process, in addition to displaying the video stream captured by the camera in the display interface, video recording is also required; that is, the video stream captured by the camera (the video pictures displayed in the display interface) is synchronously encoded into a video file (for example, a video file in MP4 format) and stored on the electronic device. During the shooting mode switch, the encoding operation is not interrupted: encoding continues, and the video shot before the switch and the video shot after the switch are encoded directly into one video file, as described in detail below.
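On Android, this "one encoder across the switch" behavior can be pictured with the MediaCodec and MediaMuxer APIs: a single encoder instance and its input Surface outlive the camera switch, so all frames land in one MP4 file. The sketch below is a minimal illustration under that assumption, not the patent's actual implementation; the class name and the elided drain loop are hypothetical.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.view.Surface;

// Hypothetical helper: one encoding session that survives a shooting-mode switch.
public final class VideoEncoderSession {
    private final MediaCodec encoder;
    private final MediaMuxer muxer;
    private final Surface inputSurface; // both the old and the new mode render into this

    public VideoEncoderSession(String outputPath, int width, int height) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        inputSurface = encoder.createInputSurface(); // stays valid across mode switches
        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        encoder.start();
        // The muxer track is added and muxer.start() is called when the encoder
        // first reports its output format; that drain loop is elided here.
    }

    public Surface getInputSurface() { return inputSurface; }

    // Called on the stop-shooting operation: exactly one MP4 file results.
    public void stop() {
        encoder.signalEndOfInputStream();
        // ... drain the remaining output buffers into the muxer ...
        encoder.stop();
        encoder.release();
        muxer.stop();
        muxer.release();
    }
}
```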
Referring to fig. 4, a schematic flowchart of an encoding method for video shooting according to an embodiment of the present disclosure is shown. The method can be applied to the electronic device shown in fig. 1, and mainly includes the following steps, as shown in fig. 4.
Step S401: the electronic equipment starts a first shooting mode to shoot videos, and video pictures shot in the first shooting mode are coded.
During video shooting, the shot video is encoded into a video file.
Specifically, in the process of video shooting according to the first shooting mode, the camera corresponding to the first shooting mode continuously reports video pictures, and sends the video pictures to the encoder for continuous encoding.
The first shooting mode related to the embodiment of the present application may be any one of a front single shooting mode, a rear single shooting mode, a front double shooting mode, a rear double shooting mode, a front and rear double shooting mode, a front picture-in-picture mode, a rear picture-in-picture mode, and a front and rear picture-in-picture mode, which is not limited in the embodiment of the present application.
Step S402: in the shooting process, the electronic equipment receives shooting mode switching operation, and the shooting mode switching operation is used for switching the first shooting mode to a second shooting mode, wherein the first shooting mode is different from the second shooting mode.
In practical applications, a user may need to switch a shooting mode during video shooting, and a shooting mode switching operation is input in the electronic device to switch the first shooting mode to the second shooting mode. The second shooting mode can be any one of a front single-shot mode, a rear single-shot mode, a front double-shot mode, a rear double-shot mode, a front and rear double-shot mode, a front picture-in-picture mode, a rear picture-in-picture mode and a front and rear picture-in-picture mode.
Step S403: and encoding the video picture shot in the second shooting mode.
It can be understood that, during the switch from the first shooting mode to the second shooting mode, the video stream is switched; that is, the video stream corresponding to the first shooting mode is replaced by the video stream corresponding to the second shooting mode. In this process, however, the encoding process of S401 is not terminated, and no new encoding process is started in S402. Instead, the encoding process of S401 is maintained throughout S401 to S403: as soon as the video stream corresponding to the second shooting mode starts to arrive, it is continuously encoded by the encoding process of S401, so that a single video file containing the video pictures of the first shooting mode and the video pictures of the second shooting mode is generated directly.
In a specific implementation, the video pictures shot in the second shooting mode can be monitored, and the monitored video pictures shot in the second shooting mode are sent to an encoder for encoding.
Step S404: and receiving shooting stopping operation, and generating a video file, wherein the video file comprises the video pictures shot in the first shooting mode and the video pictures shot in the second shooting mode.
When the stop-shooting operation is received, the camera corresponding to the second shooting mode stops reporting video pictures, the video stream corresponding to the second shooting mode is interrupted, the encoder stops encoding, and a video file is generated. It can be understood that the video file includes the video pictures shot in the first shooting mode and the video pictures shot in the second shooting mode.
Step S405: and storing the video file.
After the video file is generated, the video file may be stored in the electronic device.
By adopting the technical solution provided by this embodiment of the application, a shooting mode switching operation can be triggered during video shooting to switch the first shooting mode to the second shooting mode, and the video shot in the first shooting mode and the video shot in the second shooting mode are encoded into one video file, which facilitates subsequent playback and improves user experience.

In practical applications, a user may trigger a pause-shooting operation during shooting; upon receiving it, the encoder pauses encoding, that is, the encoded stream is interrupted. In an actual application scenario, after the pause-shooting operation is received, video pictures or transition dynamic effect pictures may still be sent to the encoder, but the encoder refuses to accept them, thereby pausing the encoding.
Of course, after pausing shooting, the user may also trigger the start of shooting operation so that the encoder resumes continuing encoding. It should be noted that after the shooting is paused, the encoder only pauses, the encoding process corresponding to the encoder is not finished, and after the user triggers the shooting start operation, the encoder still continues encoding on the basis of the previously encoded content, that is, only one video file is finally generated.
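The behavior of "frames are still delivered but the encoder refuses them" can be sketched as a simple gate in the frame path; the class below is an illustrative assumption, not the patent's code.

```java
// Hypothetical frame gate: frames keep arriving, but are dropped while paused,
// so the single encoding session is never torn down.
public final class PausableFrameGate {
    private volatile boolean paused = false;

    public void pause()  { paused = true;  }   // user triggers "pause shooting"
    public void resume() { paused = false; }   // user triggers "start shooting"

    // Called for every camera or transition frame; returns true if the frame
    // should be forwarded to the encoder.
    public boolean accept(long frameTimestampNs) {
        return !paused;
    }
}
```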
In some possible implementation manners, in the video shooting process, the shooting mode selection window is not displayed in the display interface of the electronic device, and the shooting mode selection window is displayed in the display interface after the start switching instruction is received, so that a user can trigger the shooting mode switching operation in the shooting mode selection window conveniently.
Referring to fig. 5, a schematic view of another shooting mode switching scene provided in the embodiment of the present application is shown. In the application scenario shown in fig. 5, the shooting mode before switching is the front single shooting mode; the switched shooting mode is a front-back double-shooting mode.
As shown in fig. 5A, in the initial state, the electronic apparatus performs video shooting in the front single shot mode, at which time the shooting mode selection window is not displayed within the display interface.
As shown in fig. 5B, when the user needs to switch the shooting mode, the area corresponding to the "switch" control is clicked in the display interface, and the switching operation is triggered and started.
As shown in fig. 5C, after the user triggers and starts the switching operation, a shooting mode selection window is displayed in the display interface, the shooting mode selection window includes multiple shooting mode identifiers, and the user can select a shooting mode in the shooting mode selection window.
As shown in fig. 5D, the user clicks an area corresponding to the "front-rear double shot" mode in the shooting mode selection window, and triggers a shooting mode switching operation for instructing switching of the front single shot mode to the front-rear double shot mode.
As shown in fig. 5E, after the switching of the shooting mode is completed, the display of the shooting mode selection window is canceled again, so that the video display interface can occupy a larger screen display area.
Referring to fig. 6A, a schematic diagram of an encoded stream according to an embodiment of the present application is provided. In the process of switching the shooting mode, the camera is switched: when the shooting mode switching operation is triggered, the first shooting mode is turned off and the second shooting mode is turned on, which may interrupt the video stream. That is, the video stream corresponding to the first shooting mode has ended while the video stream corresponding to the second shooting mode has not yet started to arrive. Accordingly, the encoded stream may be interrupted. As shown in fig. 6A, a stream interruption of 1500 ms exists between the encoded stream corresponding to the first shooting mode and the encoded stream corresponding to the second shooting mode. During the interruption, the video picture may go black or freeze, degrading the user experience.
To solve the problem that the stream interruption during shooting mode switching degrades user experience, an embodiment of the present application provides an encoding method for video shooting in which a transition dynamic effect encoded stream is inserted during the interruption of the encoded stream, and the transition dynamic effect bridges the gap, as shown in fig. 6B.
Referring to fig. 7, a schematic flowchart of another video capture encoding method according to an embodiment of the present disclosure is provided. The method is different from the method shown in fig. 4 in that after the first shooting mode is switched to the second shooting mode, a transition dynamic effect picture is generated and encoded before the video picture shot in the second shooting mode is encoded. As shown in fig. 7, it mainly includes the following steps.
Step S701: the electronic equipment starts a first shooting mode to carry out video shooting, and encodes a video picture shot in the first shooting mode.
Specifically, in the process of video shooting according to the first shooting mode, the camera corresponding to the first shooting mode continuously reports video pictures, and sends the video pictures to the encoder for continuous encoding.
Step S702: in the shooting process, the electronic equipment receives a shooting mode switching operation, wherein the shooting mode switching operation is used for switching the first shooting mode to a second shooting mode, and the first shooting mode is different from the second shooting mode.
In practical applications, a user may need to switch a shooting mode during video shooting, and a shooting mode switching operation is input in the electronic device to switch the first shooting mode to the second shooting mode.
Step S703: and generating a transition dynamic effect picture, and coding the transition dynamic effect picture.
In the embodiment of the application, after the electronic device receives the shooting mode switching operation, the first shooting mode stops reporting the video picture, and at this time, the transition dynamic effect picture is generated and sent to the encoder for encoding.
Fig. 8A is a schematic diagram of an encoded stream according to an embodiment of the present application. As shown in fig. 8A, after the encoded stream corresponding to the first shooting mode ends, the encoding operation continues by encoding the transition dynamic effect, and the picture frames corresponding to the transition dynamic effect are continuously written into the video file.
Step S704: and encoding the video picture shot in the second shooting mode.
In one possible implementation manner, after the video pictures shot in the second shooting mode are monitored, the monitored video pictures shot in the second shooting mode are sent to an encoder for encoding.
Referring to fig. 8B, a schematic view of another encoded stream according to an embodiment of the present application is provided. As shown in fig. 8B, after the video pictures shot in the second shooting mode are monitored, the encoding operation continues to encode based on the encoded stream corresponding to the second shooting mode, and the video pictures corresponding to the second shooting mode are continuously refreshed into the video file.
In addition, in order to better join the encoded stream corresponding to the first shooting mode with the encoded stream corresponding to the second shooting mode, in some possible implementations the encoding duration of the transition dynamic effect picture is configured to match the stream interruption duration of the shooting mode switch, the stream interruption duration being the time difference between the last frame of video picture reported in the first shooting mode and the first frame of video picture reported in the second shooting mode.
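As a sketch of this timing rule (all names are illustrative assumptions), the transition's encoding duration is derived from the two frame timestamps:

```java
// Hypothetical timing helper for the transition dynamic effect.
public final class TransitionTiming {
    // Timestamps in microseconds, as typically carried by encoder buffers.
    public static long transitionDurationUs(long lastFrameOfFirstModeUs,
                                            long firstFrameOfSecondModeUs) {
        // The transition must cover exactly the stream interruption.
        return Math.max(0, firstFrameOfSecondModeUs - lastFrameOfFirstModeUs);
    }

    // Number of transition frames to render at the recording frame rate.
    public static int transitionFrameCount(long durationUs, int fps) {
        return (int) ((durationUs * fps) / 1_000_000L);
    }
}
```

For the 1500 ms interruption shown in fig. 6A, at 30 fps this yields 45 transition frames.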
Step S705: and receiving shooting stopping operation, and generating a video file, wherein the video file comprises the video picture shot in the first shooting mode, the transition dynamic effect picture and the video picture shot in the second shooting mode.
When a stop-shooting instruction is received, the camera corresponding to the second shooting mode stops reporting video frames, the video stream corresponding to the second shooting mode is interrupted, the encoder stops encoding, and a video file is generated. It can be understood that the video file includes the video pictures shot in the first shooting mode, the transition dynamic effect pictures, and the video pictures shot in the second shooting mode.
Step S706: and storing the video file.
After the video file is generated, the video file may be stored in the electronic device.
By adopting the technical solution provided by this embodiment of the application, a transition dynamic effect is inserted during the stream interruption of the encoded stream in the shooting mode switching process, and the display pictures before and after the switch are bridged by the transition dynamic effect, providing the user with a smooth video shooting experience.
In addition to the scheme of inserting a transition dynamic effect, in some possible implementations the turning-off of the first shooting mode may be delayed in order to join the encoded stream of the first shooting mode with that of the second shooting mode. Specifically, after the shooting mode switching operation is received, the second shooting mode is turned on and the turning-off of the first shooting mode is delayed. With the delayed turn-off, the time at which the first shooting mode reports its last frame of video picture matches the time at which the second shooting mode reports its first frame of video picture.

In some possible implementations, when the first shooting mode or the second shooting mode produces at least two video streams, the two or more video streams need to be rendered and merged before encoding, and the rendered and merged video pictures are then encoded. In a specific implementation, rendering and merging two or more video streams includes: rendering and merging the two or more video streams according to their texture information, position information, and a merging strategy. The merging strategy at least comprises: splicing the two or more video streams, i.e., the dual-view or multi-view mode; or filling at least one of the two or more video streams into another of them, i.e., the picture-in-picture mode. The specific manner of rendering and merging is described in detail below.
Referring to fig. 9, a schematic view of another encoded stream provided in an embodiment of the present application is shown. In this embodiment, after the shooting mode switching operation is received, the second shooting mode is turned on first, so that the encoded stream corresponding to the second shooting mode starts to arrive. After a certain delay (for example, 1500 ms), the first shooting mode is turned off; it can be understood that by this time the encoded stream corresponding to the second shooting mode has arrived, so the video stream corresponding to the first shooting mode joins seamlessly with the video stream corresponding to the second shooting mode. That is to say, the first shooting mode in this embodiment produces an extra 1500 ms of video stream, bridging the interruption. In a specific implementation, the delayed turn-off time of the first shooting mode may be determined from the stream interruption duration of the shooting mode switch, so that the time at which the first shooting mode reports its last frame of video picture matches the time at which the second shooting mode reports its first frame of video picture.
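A minimal sketch of the delayed turn-off, assuming a Handler-based controller (the class, interface, and method names are hypothetical):

```java
import android.os.Handler;
import android.os.Looper;

// Hypothetical switch controller: open the new mode first, close the old one later.
public final class DelayedSwitchController {
    private final Handler handler = new Handler(Looper.getMainLooper());

    public void switchMode(CameraMode oldMode, CameraMode newMode, long streamGapMs) {
        newMode.open();                      // second mode starts streaming first
        handler.postDelayed(oldMode::close,  // first mode keeps reporting frames
                streamGapMs);                // e.g. 1500 ms, matched to the gap
    }

    // Minimal interface for illustration only.
    public interface CameraMode {
        void open();
        void close();
    }
}
```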
It should be noted that when there is no common camera in the first shooting mode and the second shooting mode, the camera corresponding to the first shooting mode may be directly controlled to be turned off in a delayed manner, and after the delay time is reached, the camera corresponding to the first shooting mode is turned off.
However, when there is a common camera in the first shooting mode and the second shooting mode, the common camera needs to be kept in an on state all the time.
Referring to fig. 10, a schematic view of another shooting mode switching scenario provided in the embodiment of the present application is shown. In fig. 10, the front-rear double view mode is switched to the rear-rear double view mode. In the shooting mode switching process, the first rear camera is a common camera before and after the shooting mode is switched.
As shown in fig. 10, in the front-back dual-view mode, the front camera collects a foreground picture with a size of 1080 × 480, and the first rear camera collects a first background picture with a size of 1080 × 480. After the shooting mode switching operation is received, the front camera is turned off with a delay and the second rear camera is turned on. At this time, since the video image stream collected by the second rear camera has not yet arrived, the video images collected by the front camera and the first rear camera are still encoded. When the delay is reached, the video images collected by the second rear camera have arrived, the front camera is turned off, and the video images collected by the first rear camera and the second rear camera are encoded. Throughout this process, the first rear camera is always kept on. It should be noted that in both the front-back dual-view mode and the rear-rear dual-view mode of this embodiment, the size of the picture acquired by the first rear camera is always 1080 × 480.
However, in some possible implementations, the sizes of the pictures that need to be captured by the common cameras before and after switching may be different, and in this case, the pictures captured by the common cameras need to be processed, which will be described below with reference to the drawings.
Referring to fig. 11, a schematic view of another shooting mode switching scenario provided in an embodiment of the present application is shown. In fig. 11, the front single-shot mode is switched to the front-and-rear dual-view mode. It can be understood that in this switch, the front camera is the camera shared before and after the shooting mode switch.
As shown in fig. 11, in the front single-shot mode, the front camera captures pictures of size 1080 × 960. When the operation of switching the front single-shot mode to the front-and-rear dual-view mode is received, the front camera is kept on and the rear camera is turned on. It can be understood that the 1080 × 960 foreground pictures collected by the front camera continue to be encoded at this point. When the encoded stream of the rear camera arrives, the foreground picture and the background picture need to be rendered and merged. However, since the front-and-rear dual-view mode requires a foreground picture of size 1080 × 480, the 1080 × 960 foreground picture must be cropped to 1080 × 480 and then rendered and merged with the 1080 × 480 background picture.
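A sketch of the crop step, assuming the centered region is kept (the patent does not specify which part of the 1080 × 960 picture is retained):

```java
import android.graphics.Rect;

// Hypothetical helper: crop the shared camera's frame to the size the new mode needs.
public final class CropHelper {
    // Centered crop from 1080x960 to 1080x480: keep the middle 480 rows.
    public static Rect centeredCrop(int srcW, int srcH, int dstW, int dstH) {
        int left = (srcW - dstW) / 2;  // 0 when the widths match
        int top  = (srcH - dstH) / 2;  // 240 for 960 -> 480
        return new Rect(left, top, left + dstW, top + dstH);
    }
}
```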
Referring to fig. 12, a block diagram of a software structure of an electronic device according to an embodiment of the present application is provided. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android (Android) system is divided into four layers, an application layer, a framework layer, a hardware abstraction layer, and a hardware layer from top to bottom.
An Application layer (App) may comprise a series of Application packages. For example, the application package may include a camera application. The application layer may be further divided into an application interface (UI) and application logic.
The application interface of the camera application comprises a shooting mode switching control so as to realize the switching of the shooting mode in the video shooting process.
The application logic of the camera application comprises a switching control module, a transition control module, a multi-shot encoding module, and the like. The switching control module controls the switching of shooting modes, such as the front mode, the rear mode, the front-and-rear mode, the rear-and-rear mode, and the picture-in-picture mode; switching the shooting mode may involve turning specific cameras on or off, shielding the hardware differences of different chip platforms, and the like. The transition control module generates the transition dynamic effect during the shooting mode switch. The multi-shot encoding module performs encoding during shooting to generate a video file, that is, it records the shot video; specifically, encoding can be performed by an encoder.
The Framework layer (FWK) provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer, including some predefined functions. In fig. 12, the framework layer includes the camera framework. The camera framework may be the camera access interface (Camera2 API) that Android provides for accessing camera devices; it adopts a pipeline design in which the data stream flows from the camera to a Surface. The Camera2 API includes camera management (CameraManager) and the camera device (CameraDevice). CameraManager is the management class for camera devices; through an object of this class, the camera device information of the device can be queried and a CameraDevice object obtained. CameraDevice provides a series of fixed parameters related to the camera device, such as its basic setup and output format.
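For orientation, the sequence described above corresponds to standard Camera2 usage along these lines (a sketch, not the patent's code; error handling and capture-session creation are elided):

```java
import android.content.Context;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;

// Query camera info through CameraManager, then open a CameraDevice.
public final class CameraOpener {
    // Assumes the CAMERA runtime permission has already been granted.
    public static void openFrontCamera(Context context) throws Exception {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        for (String id : manager.getCameraIdList()) {
            CameraCharacteristics chars = manager.getCameraCharacteristics(id);
            Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
            // e.g. pick the front camera for the front single-shot mode
            if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                manager.openCamera(id, new CameraDevice.StateCallback() {
                    @Override public void onOpened(CameraDevice device) { /* create session */ }
                    @Override public void onDisconnected(CameraDevice device) { device.close(); }
                    @Override public void onError(CameraDevice device, int error) { device.close(); }
                }, null);
                return;
            }
        }
    }
}
```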
The Hardware Abstraction Layer (HAL) is an interface layer between the operating system kernel and the hardware circuitry; its purpose is to abstract the hardware. It hides the hardware interface details of a specific platform and provides a virtual hardware platform for the operating system, making the system hardware-independent and portable across multiple platforms. In fig. 12, the HAL includes a camera hardware abstraction layer (Camera HAL), and the Camera HAL includes camera 1, camera 2, and so on. It can be understood that camera 1 and camera 2 here are abstracted devices. In a video shooting scene, the HAL creates a data stream of the corresponding size according to the resolution delivered by the upper layer and the Surface size.
The Hardware layer (HW) is the hardware located at the lowest level of the operating system. In fig. 12, HW includes a camera 1, a camera 2, a camera 3, and so on, which may correspond to the multiple cameras on the electronic device.
In order to facilitate better understanding of the technical solution by those skilled in the art, the following describes an overall process of switching the shooting mode.
Referring to fig. 13, a schematic flowchart of another video capture encoding method according to an embodiment of the present disclosure is provided. The method can be applied to the software structure shown in fig. 12, which mainly includes the following steps, as shown in fig. 13.
S1301: and encoding the video pictures shot in the single scene mode.
In the embodiment of the present application, the shooting mode includes a single view mode and a double view mode. Currently, a user performs video shooting in a single scene mode, and a multi-shot coding module codes shot video pictures in the shooting process.
S1302: triggering a shooting mode switching operation.
When a user wants to switch the shooting mode from the single-view mode to the double-view mode in the shooting process, the shooting mode switching operation is triggered, and the shooting mode switching operation is used for indicating the switching of the single-view mode to the double-view mode.
S1303: the switching control module starts the switching of the shooting mode.
And the switching control module starts the switching of the shooting mode after receiving the switching operation of the shooting mode.
S1304: and the switching control module sends a code keeping instruction to the multi-shot coding module.
It can be understood that, in the single shot mode, the multi-shot coding module codes the video pictures shot in the single shot mode in real time.
The multi-shot coding module is controlled to keep coding in the switching process of the shooting modes, so that a video picture shot in a single-scene mode, a transition dynamic effect picture and a video picture shot in a double-scene mode can be generated into a video file.
S1305: and the switching control module sends a transition starting dynamic effect instruction to the transition control module.
In order to avoid influencing the user experience during the shooting mode switching, transition animation is inserted during the interruption of the shooting mode switching. Specifically, the switching control module sends a start transition dynamic effect instruction to the transition control module, so that the transition control module generates a transition dynamic effect.
S1306: and the switching control module sends instructions of disconnecting the single-scene mode and starting the double-scene mode to the framework layer.
After the switching is started, the switching control module sends instructions of disconnecting the single-scene mode and starting the double-scene mode to the framework layer, so that the framework layer can disconnect the single-scene mode and start the double-scene mode conveniently.
S1307: and the frame layer sends instructions for disconnecting the single-scene mode and starting the double-scene mode to the hardware abstraction layer.
And after receiving the single scene disconnection and double scene starting commands, the framework layer sends the single scene disconnection and double scene starting commands to the hardware abstraction layer, so that the hardware abstraction layer can disconnect the single scene mode and start the double scene mode.
S1308: and the hardware abstraction layer sends instructions for disconnecting the single-scene mode and starting the double-scene mode to the hardware layer.
After receiving the single-scene mode disconnection and double-scene mode starting instructions, the hardware abstraction layer sends the single-scene mode disconnection and double-scene mode starting instructions to the hardware layer, so that the hardware layer can disconnect the single-scene mode and start the double-scene mode conveniently.
S1309: the transition control module generates transition dynamic effect.
And the transition control module generates a transition dynamic effect after receiving the start transition dynamic effect instruction.
S1310: and the transition control module sends transition dynamic effect to the multi-shot coding module.
And after the transition control module generates the transition dynamic effect, the transition dynamic effect is sent to the multi-shot coding module so as to code the transition dynamic effect.
S1311: and the multi-shot coding module codes the transition dynamic effect picture.
And the multi-shot coding module codes the transition dynamic effect picture after receiving the transition dynamic effect.
It should be noted that the multi-shot encoding module keeps encoding continuously during the switching of the shooting mode, that is, the encoding operation is not interrupted during the switching of the shooting mode, but the encoding stream is switched from the first shooting mode encoding stream to the second shooting mode encoding stream.
S1312: and the switching control module sends a transition stopping dynamic effect instruction to the transition control module.
After the switch to the dual-view mode is completed, the switching control module sends a stop-transition-dynamic-effect instruction to the transition control module. Stopping the transition dynamic effect can be understood as the transition control module stopping the generation of transition dynamic effect images and stopping sending transition dynamic effect pictures to the multi-shot encoding module, so that encoding of the transition dynamic effect pictures stops.
S1313: and the switching control module sends a code keeping instruction to the multi-shot coding module.
The multi-shot coding module is controlled to keep coding in the switching process of the shooting modes, so that the video shot in the single-scene mode, the video shot in the transition dynamic effect mode and the video shot in the double-scene mode can be generated into a video file.
S1314: video pictures shot in the dual view mode are encoded.
In the process of shooting the video pictures in the double-scene mode, the video pictures shot in the double-scene mode are continuously coded, and finally the video pictures shot in the single-scene mode, the transition dynamic effect and the video pictures shot in the double-scene mode are generated into a video file.

It should be understood that fig. 13 is only one possible implementation manner listed in the embodiment of the present application, and should not be taken as a limitation to the scope of the present application.
In one possible implementation manner, the transition control module may call an Open Graphics Library (OpenGL) renderer to implement rendering processing on the video data.
Referring to fig. 14A, a rendering scene schematic diagram provided in the embodiment of the present application is shown.
In order to respectively implement processing of a display image and an encoded image, the Surface switching management module generally sets two rendering engines, namely, an Open GL display rendering engine and an Open GL encoded rendering engine, which may call an Open GL renderer to implement rendering processing of an image.
In the single-scene mode, the Open GL display rendering engine may monitor one video image through each of the first monitoring module and the second monitoring module, where the video image monitored by one module is used for display rendering and the other for encoding rendering. Alternatively, a single monitoring module may be used: the monitored video image is display-rendered, and the display-rendered video image is then encode-rendered. Specifically:
the Open GL display rendering engine monitors the video images collected by the first camera through the first monitoring module and the second monitoring module respectively. It transmits the video image monitored by the first monitoring module to the Open GL renderer, which places that image in the display buffer for caching; it likewise transmits the video image monitored by the second monitoring module to the Open GL renderer, which places that image in the encoding buffer. The video image cached in the display buffer is transmitted to a display interface (SurfaceView) and displayed there. The Open GL encoding rendering engine obtains the video image from the encoding buffer, performs the relevant rendering on it, for example beautification processing or adding a watermark, and sends the rendered video image to the encoding module so that the encoding module performs the corresponding encoding processing to generate a video file.
It should be noted that when the electronic device shoots video through a single camera, no special rendering of the video images is required, so the video images monitored by the first monitoring module and the second monitoring module of the Open GL display rendering engine may bypass the Open GL renderer: the image monitored by the first monitoring module is transmitted directly to the display buffer, and the image monitored by the second monitoring module directly to the encoding buffer.
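A minimal sketch of the two-monitoring-module idea, assuming Android's SurfaceTexture as the "monitoring module" (SurfaceTexture and its frame-available callback are real Android APIs; GlRenderer, the texture ids, and the buffer-routing methods are illustrative assumptions, not patent APIs):

```java
import android.graphics.SurfaceTexture;

interface GlRenderer {                      // assumed renderer abstraction
    void drawToDisplayBuffer(int texName);  // display buffer -> SurfaceView path
    void drawToEncodeBuffer(int texName);   // encoding buffer -> encoder path
}

public class SingleViewRouter {
    // Two listeners on the same camera stream: one feeds the display path, the
    // other feeds the encoding path, as in fig. 14A's single-view case.
    private final SurfaceTexture displayListener = new SurfaceTexture(/* texName= */ 1);
    private final SurfaceTexture encodeListener  = new SurfaceTexture(/* texName= */ 2);

    public SingleViewRouter(final GlRenderer renderer) {
        displayListener.setOnFrameAvailableListener(st -> {
            st.updateTexImage();            // must run on the GL thread
            renderer.drawToDisplayBuffer(1);
        });
        encodeListener.setOnFrameAvailableListener(st -> {
            st.updateTexImage();
            renderer.drawToEncodeBuffer(2);
        });
    }
}
```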
In the dual-view mode or the picture-in-picture mode, the Open GL display rendering engine monitors the video images collected by the first camera and the second camera through the first monitoring module and the second monitoring module respectively, and transmits the two monitored channels of video images together with a synthesis strategy to the Open GL renderer. The Open GL renderer synthesizes the two channels of video images into one video image according to the synthesis strategy and places it in the display buffer for caching. The video image cached in the display buffer is transmitted both to the display interface (SurfaceView), where it is displayed, and to the encoding buffer. The Open GL encoding rendering engine obtains the video image from the encoding buffer, performs the relevant rendering on it, for example beautification processing or adding a watermark, and sends the rendered video image to the encoding module so that the encoding module performs the corresponding encoding processing to generate a video file.
It should be noted that in the above process, except for the video file generated by the encoding module, which is in MP4 format, all video images are in RGB format. That is, the video images monitored by the Open GL display rendering engine are in RGB format, and the video image output after rendering and synthesis by the Open GL renderer is also in RGB format; likewise, the video image cached in the display buffer and the video images sent to the display interface and the encoding buffer are in RGB format. The Open GL encoding rendering engine obtains a video image in RGB format and performs the relevant rendering on it according to an image rendering instruction input by the user, and the rendered video image is still in RGB format. The encoding module receives the video image in RGB format and encodes it to generate a video file in MP4 format.
In a transition dynamic effect application scene, the Open GL display rendering engine and the Open GL encoding rendering engine each initialize a corresponding transition dynamic effect rendering environment of the Open GL renderer, that is, a transition dynamic effect Open GL environment, used respectively for rendering the transition dynamic effect display image and the transition encoded image. The content of this initialization may include a timer thread, textures, and the like.
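The timer-thread part of that initialization can be sketched as follows; this is a plain illustration under assumed names (TransitionEnv, TransitionRenderer, the 30 fps rate and the 0-to-1 progress model are not taken from the patent):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

interface TransitionRenderer {                   // assumed renderer abstraction
    void renderTransitionFrame(float progress);  // progress in [0, 1]
}

class TransitionEnv {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    // Drives a progress value from 0 to 1 so that a transition frame is produced
    // at a fixed rate for the duration of the transition dynamic effect.
    void start(final long durationMs, final TransitionRenderer r) {
        final long startNs = System.nanoTime();
        timer.scheduleAtFixedRate(() -> {
            float t = (System.nanoTime() - startNs) / 1e6f / durationMs;
            if (t >= 1f) {
                r.renderTransitionFrame(1f);     // final transition frame
                timer.shutdown();
            } else {
                r.renderTransitionFrame(t);
            }
        }, 0, 33, TimeUnit.MILLISECONDS);        // roughly 30 transition frames per second
    }
}
```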
In another possible implementation, the transition dynamic effect Open GL environment of the Open GL renderer may be initialized only through the Open GL display rendering engine, and the Open GL renderer renders the transition dynamic effect display image. The Open GL encoding rendering engine then shares the transition dynamic effect display image and generates the transition encoded image from it, so that the transition encoded image can be encoded.
Referring to fig. 14B, a schematic diagram of another rendering scene provided in an embodiment of the present application is shown. The difference from fig. 14A is that, in the single-view mode, the Open GL display rendering engine may monitor only one channel of video image of the electronic device through a single monitoring module. For example, the Open GL display rendering engine monitors the video image collected by the first camera through the first monitoring module and transmits it to the Open GL renderer, which places the obtained video image in the display buffer for caching. The video image cached in the display buffer is transmitted to the display interface, where it is displayed, and is also transmitted to the encoding buffer. The Open GL encoding rendering engine obtains the video image from the encoding buffer, performs the relevant rendering on it, for example beautification processing or adding a watermark, and sends the rendered video image to the encoding module so that the encoding module performs the corresponding encoding processing to generate a video file.
It should be noted that when the electronic device shoots video through a single camera, no special rendering of the video image is required, so the video image monitored by the first monitoring module of the Open GL display rendering engine may also be transmitted directly to the display buffer without passing through the Open GL renderer, which is not limited in this application.
It should be noted that, in fig. 14A and fig. 14B, the Open GL display rendering engine, the Open GL renderer, and the display buffer in the single-view mode are the same as those in the dual-view mode. For convenience of illustration, they are drawn in both the single-view mode and the dual-view mode in fig. 14A and fig. 14B.
In particular, data sharing may be achieved between the Open GL display rendering engine and the Open GL encoding rendering engine through SharedContext.
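As one way to picture this, Android's EGL14 API creates a context that lists another context as its share_context, after which both contexts see the same texture objects. The sketch below uses the real EGL14.eglCreateContext signature; the class and variable names are illustrative:

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;

final class SharedContextFactory {
    private static final int[] CTX_ATTRS = {
            EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE
    };

    // The encoding engine's context names the display engine's context as its
    // share_context, so textures rendered by one (e.g., the transition dynamic
    // effect display image) are visible to the other without copying.
    static EGLContext createEncodeContext(EGLDisplay dpy, EGLConfig cfg,
                                          EGLContext displayContext) {
        return EGL14.eglCreateContext(dpy, cfg, displayContext, CTX_ATTRS, 0);
    }
}
```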
The following describes the rendering process of the Open GL renderer, taking the merging of two video images into one video image as an example.
Referring to fig. 15A, a schematic diagram of rendering and merging a video stream provided in an embodiment of the present application is shown. Fig. 15A shows one frame of the video image collected by the first camera and one frame of the video image collected by the second camera, each of size 1080 × 960. The two frames are rendered and merged according to the position information and texture information of the video images collected by the two cameras, yielding one 1080 × 1920 frame. The spliced image is in the dual-view mode, that is, the image collected by the first camera and the image collected by the second camera are displayed in parallel. The spliced image can be sent to the encoder for encoding and to the display interface for display.
Referring to fig. 15B, a schematic diagram of rendering and merging another video stream provided in an embodiment of the present application is shown. Fig. 15B shows one frame of the video image collected by the first camera, of size 540 × 480, and one frame of the video image collected by the second camera, of size 1080 × 960. The two frames are rendered and merged according to their position information and texture information to obtain one frame in the picture-in-picture mode.
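The geometry of both merge strategies can be sketched with plain GLES20 viewport arithmetic. glViewport and glBindFramebuffer are real GLES20 calls; MergeRenderer, drawQuad, the texture ids, the top/bottom placement in the dual-view case, and the inset position in the picture-in-picture case are all assumptions for illustration:

```java
import android.opengl.GLES20;

final class MergeRenderer {
    // Fig. 15A style: two 1080x960 frames merged into one 1080x1920 dual-view frame
    // (which camera goes on top is an assumption; the patent only fixes the sizes).
    void mergeDualView(int tex1, int tex2, int fbo) {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
        GLES20.glViewport(0, 960, 1080, 960);
        drawQuad(tex1);                        // first camera's frame
        GLES20.glViewport(0, 0, 1080, 960);
        drawQuad(tex2);                        // second camera's frame
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    }

    // Fig. 15B style: a 540x480 frame inset into a 1080x960 frame (picture-in-picture);
    // the inset coordinates are arbitrary illustration values.
    void mergePictureInPicture(int smallTex, int bigTex, int fbo) {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
        GLES20.glViewport(0, 0, 1080, 960);
        drawQuad(bigTex);                      // full-frame background
        GLES20.glViewport(40, 440, 540, 480);
        drawQuad(smallTex);                    // inset picture
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    }

    private void drawQuad(int tex) { /* bind tex and draw a full-viewport textured quad */ }
}
```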
Referring to fig. 15C, a schematic diagram of rendering a transition dynamic effect provided in an embodiment of the present application is shown. Fig. 15C shows one frame of a transition image, which is rotated according to the transition strategy to obtain one rendered frame of the transition image.
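The rotation step can be expressed with the real android.opengl.Matrix helper; the full-turn sweep and the progress parameterization are assumptions, since the patent does not fix the rotation amount:

```java
import android.opengl.Matrix;

final class TransitionRotation {
    // Returns a rotation matrix for one transition frame; progress runs from 0 to 1
    // over the transition, and the frame's quad is rotated about the z axis.
    static float[] rotationFor(float progress) {
        float[] m = new float[16];
        Matrix.setRotateM(m, 0, 360f * progress, 0f, 0f, 1f);
        return m; // multiplied into the quad's MVP matrix before the frame is drawn
    }
}
```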
It is understood that the image sizes shown in fig. 15A-15C are merely an exemplary illustration of the embodiments of the present application and should not be taken as limiting the scope of the present application.
Corresponding to the above method embodiments, the present application further provides an electronic device, which includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the electronic device is triggered to execute some or all of the steps in the above method embodiments.
Fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 16, the electronic device 1600 may include a processor 1610, an external memory interface 1620, an internal memory 1621, a universal serial bus (USB) interface 1630, a charging management module 1640, a power management module 1641, a battery 1642, an antenna 1, an antenna 2, a mobile communication module 1650, a wireless communication module 1660, an audio module 1670, a speaker 1670A, a receiver 1670B, a microphone 1670C, a headset interface 1670D, a sensor module 1680, keys 1690, a motor 1691, an indicator 1692, a camera 1693, a display screen 1694, a subscriber identity module (SIM) card interface 1695, etc. The sensor module 1680 may include a pressure sensor 1680A, a gyroscope sensor 1680B, an air pressure sensor 1680C, a magnetic sensor 1680D, an acceleration sensor 1680E, a distance sensor 1680F, a proximity light sensor 1680G, a fingerprint sensor 1680H, a temperature sensor 1680J, a touch sensor 1680K, an ambient light sensor 1680L, a bone conduction sensor 1680M, and the like.
It is to be understood that the illustrated configuration of the embodiment of the invention does not constitute a specific limitation on the electronic device 1600. In other embodiments of the present application, electronic device 1600 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 1610 may include one or more processing units, such as: processor 1610 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 1610 for storing instructions and data. In some embodiments, the memory in processor 1610 is a cache. The memory may hold instructions or data that processor 1610 has just used or recycled. If processor 1610 needs to reuse the instructions or data, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of processor 1610, and thereby increases the efficiency of the system.
In some embodiments, processor 1610 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 1610 may include multiple sets of I2C buses. Processor 1610 may be coupled to touch sensor 1680K, charger, flash, camera 1693, etc., respectively, through various I2C bus interfaces. For example: processor 1610 may couple touch sensor 1680K via an I2C interface such that processor 1610 and touch sensor 1680K communicate via an I2C bus interface to implement touch functionality of electronic device 1600.
The I2S interface may be used for audio communication. In some embodiments, processor 1610 may include multiple sets of I2S buses. Processor 1610 may be coupled to audio module 1670 via an I2S bus to enable communication between processor 1610 and audio module 1670. In some embodiments, the audio module 1670 can communicate audio signals to the wireless communication module 1660 via an I2S interface to enable answering a call via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 1670 and wireless communication module 1660 may be coupled by a PCM bus interface. In some embodiments, the audio module 1670 can also transmit audio signals to the wireless communication module 1660 through the PCM interface, so as to receive phone calls through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 1610 and the wireless communication module 1660. For example: the processor 1610 communicates with the bluetooth module in the wireless communication module 1660 through the UART interface to implement the bluetooth function. In some embodiments, the audio module 1670 may transmit the audio signal to the wireless communication module 1660 through the UART interface, so as to realize the function of playing music through the bluetooth headset.
The MIPI interface may be used to connect processor 1610 with peripheral devices such as display 1694, camera 1693, etc. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 1610 and camera 1693 communicate over a CSI interface to implement the capture functions of electronic device 1600. Processor 1610 and display screen 1694 communicate via the DSI interface to implement display functions for electronic device 1600.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 1610 with the camera 1693, the display 1694, the wireless communication module 1660, the audio module 1670, the sensor module 1680, and so forth. The GPIO interface may also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, and the like.
The USB interface 1630 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 1630 may be used to connect a charger to charge the electronic device 1600, and may also be used to transmit data between the electronic device 1600 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only an exemplary illustration, and does not limit the structure of the electronic device 1600. In other embodiments of the present application, the electronic device 1600 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charge management module 1640 is to receive charging input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 1640 may receive charging input for a wired charger via the USB interface 1630. In some wireless charging embodiments, the charging management module 1640 may receive wireless charging input through a wireless charging coil of the electronic device 1600. The charging management module 1640 can also supply power to the electronic device via the power management module 1641 while charging the battery 1642.
The power management module 1641 is used to connect the battery 1642, the charging management module 1640 and the processor 1610. The power management module 1641 receives input from the battery 1642 and/or the charge management module 1640 and provides power to the processor 1610, the internal memory 1621, the display screen 1694, the camera 1693, and the wireless communication module 1660, among other things. The power management module 1641 may also be used to monitor parameters such as battery capacity, battery cycle number, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 1641 may be disposed in the processor 1610. In other embodiments, the power management module 1641 and the charging management module 1640 may be provided in the same device.
The wireless communication function of the electronic device 1600 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1650, the wireless communication module 1660, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 1600 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1650 can provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 1600. The mobile communication module 1650 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 1650 may receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 1650 may further amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves via the antenna 1 to radiate the electromagnetic waves. In some embodiments, at least some of the functional modules of the mobile communication module 1650 may be disposed in the processor 1610. In some embodiments, at least some of the functional blocks of the mobile communication module 1650 may be disposed in the same device as at least some of the blocks of the processor 1610.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 1670A, the receiver 1670B, etc.) or displays images or video through the display screen 1694. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 1610, and may be implemented in the same device as the mobile communication module 1650 or other functional modules.
The wireless communication module 1660 may provide solutions for wireless communication applied to the electronic device 1600, including wireless local area network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 1660 may be one or more devices integrating at least one communication processing module. The wireless communication module 1660 receives electromagnetic waves via antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to processor 1610. Wireless communication module 1660 may also receive signals to be transmitted from processor 1610, perform frequency modulation and amplification on them, and convert them into electromagnetic waves for radiation via antenna 2.
In some embodiments, antenna 1 of electronic device 1600 is coupled with the mobile communication module 1650, and antenna 2 is coupled with the wireless communication module 1660, so that electronic device 1600 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
The electronic device 1600 implements display functions via the GPU, the display screen 1694, and the application processor, among other things. The GPU is a microprocessor for image processing, coupled to a display screen 1694 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1610 may include one or more GPUs that execute program instructions to generate or change display information.
Display screen 1694 is used to display images, videos, and the like. Display screen 1694 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, electronic device 1600 may include 1 or N display screens 1694, where N is a positive integer greater than 1.
The electronic device 1600 may implement a capture function via the ISP, the camera 1693, the video codec, the GPU, the display 1694, the application processor, and the like.
The ISP is used to process the data fed back by the camera 1693. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 1693.
Camera 1693 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 1600 may include 1 or N cameras 1693, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 1600 selects a frequency point, the digital signal processor is used to perform Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device 1600 may support one or more video codecs. Thus, the electronic device 1600 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent cognition of the electronic device 1600 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 1620 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 1600. The external memory card communicates with the processor 1610 through an external memory interface 1620 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 1621 may be used to store computer-executable program code, including instructions. The internal memory 1621 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like. The data storage area may store data created during use of the electronic device 1600 (e.g., audio data, phone book, etc.), and the like. Further, the internal memory 1621 may include a high-speed random access memory, and may also include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 1610 performs various functional applications and data processing of the electronic device 1600 by executing instructions stored in the internal memory 1621 and/or instructions stored in a memory provided in the processor.
Electronic device 1600 may implement audio functions through audio module 1670, speaker 1670A, receiver 1670B, microphone 1670C, headphone interface 1670D, and application processor, among other things. Such as music playing, recording, etc.
Audio module 1670 is used to convert digital audio information to an analog audio signal output and also to convert an analog audio input to a digital audio signal. Audio module 1670 may also be used to encode and decode audio signals. In some embodiments, the audio module 1670 may be disposed in the processor 1610, or some functional modules of the audio module 1670 may be disposed in the processor 1610.
The speaker 1670A, also called a "horn", is used to convert electrical audio signals into sound signals. The electronic device 1600 may listen to music or to a hands-free conversation through the speaker 1670A.
The receiver 1670B, also called "earpiece", is used to convert the electrical audio signal into a sound signal. When the electronic device 1600 answers a call or voice message, the voice can be answered by placing the receiver 1670B near the ear.
A microphone 1670C, also called a "mouthpiece" or "mike", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can input a sound signal into microphone 1670C by speaking close to it. The electronic device 1600 may be provided with at least one microphone 1670C. In other embodiments, the electronic device 1600 may be provided with two microphones 1670C to implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 1600 may be provided with three, four, or more microphones 1670C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headset interface 1670D is used to connect wired headsets. The headset interface 1670D may be the USB interface 1630, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association (CTIA) standard interface.
The pressure sensor 1680A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, pressure sensor 1680A may be disposed on display screen 1694. There are many types of pressure sensors 1680A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates made of conductive material. When a force acts on pressure sensor 1680A, the capacitance between the electrodes changes, and the electronic device 1600 determines the strength of the pressure from the change in capacitance. When a touch operation acts on display screen 1694, the electronic device 1600 detects the intensity of the touch operation through pressure sensor 1680A, and can also calculate the touched position from the detection signal of pressure sensor 1680A. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is smaller than a first pressure threshold acts on a short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
Gyroscope sensor 1680B may be used to determine the motion posture of electronic device 1600. In some embodiments, the angular velocity of electronic device 1600 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 1680B. Gyroscope sensor 1680B may be used for image stabilization during shooting. Illustratively, when the shutter is pressed, gyroscope sensor 1680B detects the shake angle of electronic device 1600, calculates the distance that the lens module needs to compensate according to that angle, and lets the lens counteract the shake of electronic device 1600 through reverse movement, thereby achieving anti-shake. Gyroscope sensor 1680B may also be used in navigation and somatosensory gaming scenarios.
Air pressure sensor 1680C is used to measure air pressure. In some embodiments, the electronic device 1600 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 1680C.
The magnetic sensor 1680D includes a Hall sensor. The electronic device 1600 can detect the opening and closing of a flip leather case using magnetic sensor 1680D. In some embodiments, when the electronic device 1600 is a flip phone, the electronic device 1600 can detect the opening and closing of the flip cover according to magnetic sensor 1680D, and then set features such as automatic unlocking upon flipping open according to the detected opening/closing state of the leather case or the flip cover.
Acceleration sensor 1680E may detect the magnitude of acceleration of electronic device 1600 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when electronic device 1600 is stationary. It can also be used to recognize the posture of the electronic device, and is applied in landscape/portrait screen switching, pedometers, and similar applications.
A distance sensor 1680F is used to measure distance. The electronic device 1600 may measure distance by infrared or laser. In some shooting scenes, the electronic device 1600 may use distance sensor 1680F to measure distance for fast focusing.
The proximity light sensor 1680G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 1600 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 1600; when insufficient reflected light is detected, the electronic device 1600 may determine that there is no object nearby. The electronic device 1600 can use proximity light sensor 1680G to detect that the user is holding the electronic device 1600 close to the ear during a call, so as to automatically turn off the screen to save power. Proximity light sensor 1680G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 1680L is used to sense ambient light brightness. The electronic device 1600 may adaptively adjust the brightness of the display 1694 according to the perceived ambient light level. The ambient light sensor 1680L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 1680L can also cooperate with the proximity light sensor 1680G to detect whether the electronic device 1600 is in a pocket, so as to prevent accidental touches.
The fingerprint sensor 1680H is used to collect a fingerprint. The electronic device 1600 can utilize the collected fingerprint characteristics to implement fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 1680J is used to detect temperature. In some embodiments, electronic device 1600 uses the temperature detected by temperature sensor 1680J to execute a temperature processing strategy. For example, when the temperature reported by temperature sensor 1680J exceeds a threshold, electronic device 1600 reduces the performance of a processor located near temperature sensor 1680J in order to reduce power consumption and implement thermal protection. In other embodiments, electronic device 1600 heats battery 1642 when the temperature is below another threshold, to avoid an abnormal shutdown of electronic device 1600 caused by low temperature. In still other embodiments, electronic device 1600 boosts the output voltage of battery 1642 when the temperature is below a further threshold, likewise to avoid an abnormal shutdown caused by low temperature.
Touch sensor 1680K is also referred to as a "touch panel". The touch sensor 1680K may be disposed on display screen 1694, and together they form a touch screen, also called a "touchscreen". Touch sensor 1680K is used to detect a touch operation acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through display screen 1694. In other embodiments, touch sensor 1680K may also be disposed on a surface of electronic device 1600 at a position different from that of display screen 1694.
The bone conduction sensor 1680M may acquire a vibration signal. In some embodiments, bone conduction sensor 1680M may acquire the vibration signal of the bone mass that vibrates when a person speaks. The bone conduction sensor 1680M can also contact the human pulse to receive a blood-pressure beating signal. In some embodiments, bone conduction sensor 1680M may also be disposed in a headset to form a bone conduction headset. The audio module 1670 may parse out a voice signal based on the vibration signal, acquired by bone conduction sensor 1680M, of the bone mass vibrated by the voice, so as to implement a voice function. The application processor may parse heart rate information based on the blood-pressure beating signal acquired by bone conduction sensor 1680M, so as to implement a heart rate detection function.
Keys 1690 include a power key, a volume key, and the like. Keys 1690 may be mechanical keys or touch keys. The electronic device 1600 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1600.
Motor 1691 can generate a vibration prompt. Motor 1691 can be used for incoming-call vibration prompts as well as touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and touch operations acting on different areas of display screen 1694 may also correspond to different vibration feedback effects of motor 1691. Different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) may likewise correspond to different vibration feedback effects. The touch vibration feedback effect may also be customized.
Indicator 1692 may be an indicator light, which may be used to indicate a charging status or a change in battery level, or to indicate messages, missed calls, notifications, and the like.
The SIM card interface 1695 is used to connect a SIM card. A SIM card can be connected to and separated from the electronic device 1600 by being inserted into or pulled out of SIM card interface 1695. The electronic device 1600 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. SIM card interface 1695 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 1695 simultaneously; the cards may be of the same type or of different types. SIM card interface 1695 is also compatible with different types of SIM cards and with external memory cards. The electronic device 1600 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 1600 employs an eSIM, i.e., an embedded SIM card, which can be embedded in the electronic device 1600 and cannot be separated from it.
In a specific implementation, the present application further provides a computer storage medium, which may store a program; when the program runs, the device in which the computer-readable storage medium is located is controlled to perform some or all of the steps in the above embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that only A exists, that both A and B exist, or that only B exists, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, and c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may itself be singular or plural.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of electronic hardware and computer software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided by the present invention, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present invention, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions are included in the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. An encoding method for video shooting, applied to an electronic device, is characterized by comprising the following steps:
the electronic device starts a first shooting mode to shoot video, and encodes a video picture shot in the first shooting mode;
in the shooting process, the electronic equipment receives a shooting mode switching operation, wherein the shooting mode switching operation is used for switching the first shooting mode to a second shooting mode, and the first shooting mode is different from the second shooting mode;
encoding a video picture shot in the second shooting mode;
receiving a stop shooting operation, and generating a video file, wherein the video file comprises the video picture shot in the first shooting mode and the video picture shot in the second shooting mode;
storing the video file;
when the first shooting mode and the second shooting mode do not have a shared camera, the shooting mode switching operation is used for closing the first shooting mode with a delay and opening the second shooting mode, and after the delayed closing of the first shooting mode, the time at which the first shooting mode reports its last frame of video picture matches the time at which the second shooting mode reports its first frame of video picture;
when the first shooting mode and the second shooting mode have a shared camera and the sizes of the pictures to be collected by the shared camera before and after the switch are different, cutting the pictures collected by the shared camera so as to meet the size requirement of the second shooting mode on the pictures collected by the shared camera; keeping the shared camera open at all times; opening the cameras corresponding to the second shooting mode other than the shared camera; and, when the encoded stream of the cameras corresponding to the second shooting mode other than the shared camera arrives, rendering and merging the video pictures obtained by the cutting processing with the video pictures collected by the cameras corresponding to the second shooting mode other than the shared camera.
2. The method according to claim 1, wherein, after the electronic device receives the shooting mode switching operation for switching the first shooting mode to the second shooting mode during the shooting process and before the encoding of the video picture shot in the second shooting mode,
the method further comprises the following steps:
generating a transition dynamic effect picture,
and encoding the transition dynamic effect picture.
3. The method according to claim 1 or 2, wherein the encoding of the video picture shot in the second shooting mode comprises:
after the video picture shot in the second shooting mode is obtained, encoding the video picture shot in the second shooting mode.
4. The method of claim 2, further comprising:
receiving a pause shooting operation, and pausing, corresponding to the pause shooting operation, the encoding of the video picture shot in the first shooting mode, of the transition dynamic effect picture, or of the video picture shot in the second shooting mode.
5. The method according to claim 1 or 2, wherein at least two channels of video pictures are shot in the first shooting mode or the second shooting mode, and the encoding of the video pictures shot in the first shooting mode or the second shooting mode comprises:
monitoring, by the electronic device, two or more channels of video pictures;
rendering and merging the two or more channels of video pictures;
and encoding the video pictures after the rendering and merging processing.
6. The method according to claim 5, wherein the rendering and merging of the two or more channels of video pictures comprises:
rendering and merging the two or more channels of video pictures according to texture information, position information, and a merging strategy of the two or more channels of video pictures.
7. The method according to claim 6, wherein the merging strategy at least comprises:
splicing the two or more channels of video pictures; or,
filling at least one video picture of the two or more channels of video pictures into another video picture of the two or more channels of video pictures.
8. An electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any of claims 1-7.
9. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium resides to perform the method of any one of claims 1-7.
CN202110682672.5A 2021-06-16 2021-06-16 Encoding method, apparatus and storage medium for video shooting Active CN113810595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110682672.5A CN113810595B (en) 2021-06-16 2021-06-16 Encoding method, apparatus and storage medium for video shooting


Publications (2)

Publication Number Publication Date
CN113810595A CN113810595A (en) 2021-12-17
CN113810595B true CN113810595B (en) 2023-03-17

Family

ID=78942606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110682672.5A Active CN113810595B (en) 2021-06-16 2021-06-16 Encoding method, apparatus and storage medium for video shooting

Country Status (1)

Country Link
CN (1) CN113810595B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201503681A (en) * 2013-04-09 2015-01-16 Rui Pedro Oliveira Attachable smartphone camera
JP2017184218A (en) * 2016-03-29 2017-10-05 キヤノン株式会社 Radiation imaging device and radiation imaging system
CN110035141A (en) * 2019-02-22 2019-07-19 华为技术有限公司 A kind of image pickup method and equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MX2018004891A (en) * 2015-10-22 2018-06-20 Nissan Motor Display control method and display control device.
CN107820006A (en) * 2017-11-07 2018-03-20 北京小米移动软件有限公司 Control the method and device of camera shooting
CN112954219A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
CN111885305B (en) * 2020-07-28 2022-04-29 Oppo广东移动通信有限公司 Preview picture processing method and device, storage medium and electronic equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant