CN113473005B - Shooting transition dynamic effect insertion method, device, and storage medium

Shooting transition dynamic effect insertion method, device, and storage medium

Info

Publication number
CN113473005B
Authority
CN
China
Prior art keywords
picture
video
shooting mode
transition
shooting
Prior art date
Legal status
Active
Application number
CN202110682677.8A
Other languages
Chinese (zh)
Other versions
CN113473005A
Inventor
韩林林 (Han Linlin)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110682677.8A
Publication of CN113473005A
Application granted
Publication of CN113473005B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/632: Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N 5/45: Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the present application provides a shooting transition dynamic effect insertion method, a device, a storage medium, and a program product. In the method, an electronic device starts a first shooting mode to shoot video and displays the video picture shot in the first shooting mode. During shooting, the electronic device receives a shooting mode switching operation, which switches the first shooting mode to a second shooting mode different from the first. According to the switching operation, the video picture shot in the first shooting mode is switched to a transition dynamic effect picture, which is related to the video picture shot in the first shooting mode; the transition dynamic effect picture is then switched to the video picture shot in the second shooting mode. The method inserts a transition dynamic effect during the stream cut-off period of the shooting mode switching process, so that the pictures displayed before and after the switch are bridged by the transition dynamic effect, providing the user with a smooth video shooting experience.

Description

Shooting transition dynamic effect insertion method, device, and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, a device, a storage medium, and a program product for inserting a transition dynamic effect during shooting.
Background
To improve the user experience, electronic devices such as mobile phones and tablet computers are usually equipped with multiple cameras, for example a front camera and a rear camera. The user can select a shooting mode according to his or her needs, such as a front shooting mode, a rear shooting mode, or a front-and-rear dual shooting mode.
In a video shooting scenario, the user may need to switch shooting modes while recording, for example from the front mode to the rear mode. However, during the switch the video stream is cut off, resulting in a poor user experience.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, a device, a storage medium, and a program product for inserting a transition dynamic effect during shooting, so as to solve the prior-art problem that the video stream is interrupted during shooting mode switching, resulting in a poor user experience.
In a first aspect, an embodiment of the present application provides a shooting transition dynamic effect insertion method, including:
the electronic equipment starts a first shooting mode to carry out video shooting, and displays a video picture shot in the first shooting mode;
during shooting, the electronic equipment receives shooting mode switching operation, wherein the shooting mode switching operation is used for switching the first shooting mode to a second shooting mode, and the first shooting mode is different from the second shooting mode;
switching the video picture shot in the first shooting mode into a transition dynamic effect picture according to the shooting mode switching operation, wherein the transition dynamic effect picture is related to the video picture shot in the first shooting mode;
and switching the transition dynamic effect picture into a video picture shot in the second shooting mode.
Preferably, the switching the transition dynamic effect picture to the video picture shot in the second shooting mode includes:
after all transition dynamic effect pictures have been displayed, switching the transition dynamic effect picture to the video picture shot in the second shooting mode.
Preferably, the switching the transition dynamic effect picture to the video picture shot in the second shooting mode includes:
after the video picture shot in the second shooting mode is acquired, switching the transition dynamic effect picture to the video picture shot in the second shooting mode.
Preferably, the method further comprises:
and receiving a pause shooting operation, and displaying a video picture shot in the first shooting mode, the transition dynamic effect picture or a video picture shot in the second shooting mode corresponding to the pause shooting operation.
Preferably, the display duration of the transition dynamic effect picture matches the cut-off duration of the shooting mode switching process, where the cut-off duration is the time difference between the last video frame reported in the first shooting mode and the first video frame reported in the second shooting mode.
Preferably, the method further comprises:
the electronic device monitoring a video picture and displaying the video picture; and sending the video picture to an encoder for encoding.
Preferably, the electronic device monitoring a video picture and displaying the video picture includes:
the electronic device monitoring two or more video pictures, and rendering and merging the two or more video pictures to obtain the displayed video picture.
Preferably, the rendering and merging the two or more video pictures includes:
rendering and merging the two or more video pictures according to texture information and position information of the two or more video pictures and a merging strategy, where the merging strategy specifies the display positions and display sizes of the two or more video pictures.
Preferably, the merging strategy at least includes:
splicing the two or more video pictures; or
filling at least one of the two or more video pictures into another of the two or more video pictures.
Preferably, the method further comprises:
during shooting, encoding the video pictures shot in the first shooting mode;
encoding the transition dynamic effect picture;
encoding the video pictures shot in the second shooting mode;
receiving a stop shooting operation and generating a video file, where the video file includes the video pictures shot in the first shooting mode, the transition dynamic effect picture, and the video pictures shot in the second shooting mode;
and storing the video file.
In a second aspect, embodiments of the present application provide an electronic device, comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of the first aspect.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium includes a stored program, and when the program runs, the device in which the computer-readable storage medium is located is controlled to perform the method of any one of the first aspect.
In a fourth aspect, the present application provides a computer program product, which contains executable instructions that, when executed on a computer, cause the computer to perform the method of any one of the above first aspects.
With the technical solution provided by the embodiments of the present application, a transition dynamic effect is inserted during the cut-off period of the shooting mode switching process, and the pictures displayed before and after the switch are bridged by the transition dynamic effect, providing the user with a smooth video shooting experience. In addition, the transition dynamic effect is synchronously encoded into the generated video file, providing the user with a smooth video playback experience.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required in the embodiments. Evidently, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2A is a schematic view of a shooting scene in a front-back double-shot mode according to an embodiment of the present application;
fig. 2B is a schematic view of a front-back picture-in-picture mode shooting scene according to an embodiment of the present application;
fig. 2C is a schematic view of a rear picture-in-picture mode shooting scene according to an embodiment of the present application;
fig. 3 is a schematic diagram of a display stream and an encoding stream in a shooting mode switching process according to an embodiment of the present application;
fig. 4 is a schematic view of a scene of switching shooting modes according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart of a transition dynamic effect insertion method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a display flow provided by an embodiment of the present application;
fig. 7 is a schematic flow chart of another transition dynamic effect insertion method according to an embodiment of the present application;
fig. 8A is a schematic diagram of an encoded stream according to an embodiment of the present application;
fig. 8B is a schematic diagram of another encoded stream according to an embodiment of the present application;
fig. 9 is a schematic flowchart of a method for generating and inserting transition dynamic effects according to an embodiment of the present application;
fig. 10 is a block diagram of a software structure of an electronic device according to an embodiment of the present application;
fig. 11 is a schematic flow chart of another transition dynamic effect insertion method according to an embodiment of the present application;
fig. 12A is a block diagram of a software structure of a transition control module according to an embodiment of the present disclosure;
fig. 12B is a schematic diagram illustrating a connection relationship between a switching control module, a transition control module, and a multi-shot encoding module according to an embodiment of the present application;
fig. 13 is a schematic flow chart of another transition dynamic effect generation method according to an embodiment of the present application;
fig. 14 is a schematic flow chart of another transition dynamic effect generation method provided in the embodiment of the present application;
fig. 15A is a schematic view of a rendered scene according to an embodiment of the present application;
fig. 15B is a schematic diagram of another rendering scene provided in the embodiment of the present application;
fig. 16A is a schematic view of a video stream rendering and merging scene provided in the embodiment of the present application;
fig. 16B is a schematic view of another video stream rendering and merging scene provided in the embodiment of the present application;
fig. 16C is a schematic view of a transition dynamic effect rendering scene provided in the embodiment of the present application;
fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
Referring to fig. 1, a schematic view of an electronic device provided in an embodiment of the present application is shown. In fig. 1, the electronic device is exemplified by a mobile phone 100, of which fig. 1 shows a front view and a rear view: two front cameras 111 and 112 are arranged on the front side of the mobile phone 100, and four rear cameras 121, 122, 123, and 124 are arranged on the rear side. By configuring multiple cameras, multiple shooting modes can be provided for the user, such as a front shooting mode, a rear shooting mode, and a front-and-rear dual shooting mode. The user can select the shooting mode suited to the shooting scene, improving the user experience.
It is to be understood that the illustration in fig. 1 is merely exemplary and should not be taken as limiting the scope of the present application. For example, the number and positions of the cameras may differ between mobile phones. Besides a mobile phone, the electronic device in the embodiments of the present application may be a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, smart glasses, a smart television, or the like.
It should be noted that, in some possible implementations, the electronic device may also be referred to as a terminal device, a User Equipment (UE), and the like, which is not limited in this embodiment of the present application.
In some possible implementations, the shooting modes of the electronic device may include single-shot modes and multi-shot modes. The single-shot modes may include a front single-shot mode, a rear single-shot mode, etc.; the multi-shot modes may include a front dual-shot mode, a rear dual-shot mode, a front-and-rear dual-shot mode, a front picture-in-picture mode, a rear picture-in-picture mode, a front-and-rear picture-in-picture mode, etc.
In a single-shot mode, one camera is used for video shooting; in a multi-shot mode, two or more cameras are used for video shooting.
Specifically, in the front single-shot mode, one front camera shoots the video; in the rear single-shot mode, one rear camera shoots the video; in the front dual-shot mode, two front cameras shoot the video; in the rear dual-shot mode, two rear cameras shoot the video; in the front-and-rear dual-shot mode, a front camera and a rear camera shoot the video; in the front picture-in-picture mode, two front cameras shoot the video, and the picture shot by one front camera is placed within the picture shot by the other; in the rear picture-in-picture mode, two rear cameras shoot the video, and the picture shot by one rear camera is placed within the picture shot by the other; in the front-and-rear picture-in-picture mode, a front camera and a rear camera shoot the video, and the picture shot by one of them is placed within the picture shot by the other.
Referring to fig. 2A, a schematic view of a shooting scene in a front-back double-shot mode according to an embodiment of the present application is provided. In a front-back double-shooting mode, a front-facing camera is used for collecting a foreground picture, a rear-facing camera is used for collecting a background picture, and the foreground picture and the background picture are simultaneously displayed in a display interface.
Referring to fig. 2B, a schematic view of a front-back picture-in-picture mode shooting scene is provided in the embodiment of the present application. In the front-back picture-in-picture mode, a front-facing camera is used for collecting a foreground picture, a rear-facing camera is used for collecting a background picture, and the foreground picture is placed in the background picture.
Referring to fig. 2C, a schematic view of a rear picture-in-picture mode shooting scene is provided in an embodiment of the present application. In the rear picture-in-picture mode, one rear camera captures a distant-view picture, another rear camera captures a close-view picture, and the close-view picture is placed within the distant-view picture.
It should be noted that the above-mentioned shooting modes are only some possible implementations listed in the embodiments of the present application, and those skilled in the art may configure other shooting modes according to actual needs, and the embodiments of the present application do not specifically limit this.
In some possible implementations, the shooting modes may also be described as a single-path mode, a dual-path mode, or a multi-path mode: the single-path mode shoots with one camera, the dual-path mode with two cameras, and the multi-path mode with more than two cameras.
In some possible implementations, the shooting modes may also be described as single-scene modes, dual-scene modes, and picture-in-picture modes. The single-scene modes may include the front single-shot mode and the rear single-shot mode; the dual-scene modes may include the front dual-shot mode, the rear dual-shot mode, and the front-and-rear dual-shot mode; the picture-in-picture modes may include the front picture-in-picture mode, the rear picture-in-picture mode, and the front-and-rear picture-in-picture mode.
During video shooting, a user may need to switch the shooting mode. Referring to table one, some possible shooting mode switching scenarios are listed for the embodiments of the present application.
Table 1 (reproduced as an image in the original publication; the individual switching scenarios it lists are not recoverable here)
however, switching of the shooting mode usually causes switching of a camera for capturing a video image, which causes interruption of a video stream. The video frame switching method is embodied in a display interface, when the video frame collected in the shooting mode before switching is played, the video frame collected in the shooting mode after switching is not generated, so that the cut-off of the display interface is caused, and the user experience is influenced. In addition, the same problem occurs in the generated video file.
Referring to fig. 3, a schematic diagram of the display stream and the encoded stream during shooting mode switching according to an embodiment of the present application is provided. During the switch, the camera is changed: when the shooting mode switching operation is triggered, the first shooting mode is turned off and the second shooting mode is turned on, which interrupts the video stream. That is, the video stream of the first shooting mode has ended while the video stream of the second shooting mode has not yet arrived. As shown in fig. 3, a cut-off period of 1500 ms exists between the display stream of the first shooting mode and that of the second shooting mode, and likewise between the corresponding encoded streams. During the cut-off, the shot video picture may go black or freeze, affecting the user experience. Note also that, to implement video anti-shake and similar functions, the encoded stream is buffered during encoding; in the embodiment of the present application, 20 frames are buffered.
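To make the timing concrete, the following minimal sketch (in Java; the class and parameter names are illustrative assumptions, not from the patent) computes the cut-off duration from the frame timestamps, as defined above, and the number of transition frames needed to cover it at a chosen transition frame rate.

```java
// Illustrative only: estimates how many transition frames are needed to bridge
// the cut-off between the two shooting modes. All names are hypothetical.
public final class CutoffEstimator {

    /**
     * @param lastFrameMode1Ms  timestamp (ms) of the last frame reported by the first shooting mode
     * @param firstFrameMode2Ms timestamp (ms) of the first frame reported by the second shooting mode
     * @param transitionFps     frame rate chosen for the transition dynamic effect
     */
    public static int transitionFrameCount(long lastFrameMode1Ms, long firstFrameMode2Ms, int transitionFps) {
        long cutoffMs = firstFrameMode2Ms - lastFrameMode1Ms; // the cut-off duration defined above
        return (int) Math.ceil(cutoffMs * transitionFps / 1000.0);
    }

    public static void main(String[] args) {
        // With the 1500 ms cut-off of fig. 3 and a 30 fps transition, 45 frames are needed.
        System.out.println(transitionFrameCount(0L, 1500L, 30)); // prints 45
    }
}
```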
To address this problem, the embodiments of the present application insert a transition dynamic effect into the cut-off period of the shooting mode switching process, and the video pictures before and after the switch are bridged by the transition dynamic effect, avoiding the cut-off in the display interface and/or the generated video file and improving the user experience.
Referring to fig. 4, a scene schematic diagram for switching a shooting mode is provided in the embodiment of the present application. As shown in fig. 4, during the process of video shooting through the electronic device, the user can display the video picture during shooting in real time in the display interface. In addition, a shooting mode selection window is further included in the display interface, and a user can select a corresponding shooting mode in the shooting mode selection window to carry out video shooting. For example, a front monoscopic mode, a rear monoscopic mode, a front and rear bi-capturing mode, a front and rear picture-in-picture mode, and the like.
In the application scenario shown in fig. 4, the user first selects the front single shot mode to perform video shooting, and displays the foreground picture in the display interface 401 in real time. When the user triggers the "front and rear double shot" control in the shooting mode selection window 402, the electronic device receives the shooting mode switching operation, and switches the front single shot mode into the front and rear double shot mode. In the switching process, the electronic device generates a transition dynamic effect, and displays a transition dynamic effect picture in the display interface 401 during the interruption period of the video stream, so as to avoid the interruption phenomenon in the display interface 401. After the switching is completed, video pictures shot in the front-back double-shot mode, for example, a foreground picture and a background picture shown in fig. 4, are displayed in real time in the display interface 401. That is, in the front-rear double-shot mode, the front camera and the rear camera respectively capture a foreground picture and a background picture, and the foreground picture and the background picture are respectively displayed in the display interface 401.
It is appreciated that during video capture, in addition to being able to display captured video pictures within the display interface 401, captured video pictures may be encoded into a video file (e.g., a video file in MP4 format) and stored in the electronic device. In the shooting mode switching process, the video shot before switching, the transition dynamic effect and the video shot after switching are coded into one video file. The specific encoding method is described in detail below.
Referring to fig. 5, a schematic flow chart of a transition dynamic effect insertion method provided in the embodiment of the present application is shown. The method can be applied to the electronic device shown in fig. 1, and the method focuses on the process of inserting transition animation in the display stream, as shown in fig. 5, which mainly includes the following steps.
Step S501: the electronic equipment starts a first shooting mode to carry out video shooting, and displays a video picture shot in the first shooting mode.
The first shooting mode related to the embodiment of the present application may be any one of a front single shooting mode, a rear single shooting mode, a front double shooting mode, a rear double shooting mode, a front and rear double shooting mode, a front picture-in-picture mode, a rear picture-in-picture mode, and a front and rear picture-in-picture mode, which is not limited in the embodiment of the present application.
In specific implementation, the video picture shot in the first shooting mode can be monitored, and the monitored video picture shot in the first shooting mode is sent to the display interface to be displayed.
Step S502: in the shooting process, the electronic equipment receives shooting mode switching operation, and the shooting mode switching operation is used for switching the first shooting mode to a second shooting mode, wherein the first shooting mode is different from the second shooting mode.
In practice, when a user needs to switch the shooting mode during video shooting, the user inputs a shooting mode switching operation to the electronic device to switch the first shooting mode to the second shooting mode. The operation may be input via the touch screen, a physical button, gesture control, voice control, and the like.
The shooting mode switching related to the embodiment of the present application may be any one of the shooting mode switching scenarios described above, and the embodiment of the present application does not specifically limit this scenario.
Step S503: according to the shooting mode switching operation, switch the video picture shot in the first shooting mode to a transition dynamic effect picture, where the transition dynamic effect picture is related to the video picture shot in the first shooting mode.
As described above, during shooting mode switching the video stream is usually cut off. This shows up in the display interface: once the video pictures captured in the pre-switch shooting mode have been played, the video pictures of the post-switch shooting mode have not yet been produced, causing a cut-off in the display interface and affecting the user experience.
In view of this problem, the embodiment of the present application inserts the transition dynamic effect during the cut-off period of the shooting mode switching process. Specifically, after the shooting mode switching operation is received, the video picture shot in the first shooting mode is switched to the transition dynamic effect picture.
In specific implementation, the transition dynamic effect picture is sent to a display interface to be displayed.
Step S504: switch the transition dynamic effect picture to the video picture shot in the second shooting mode.
It can be understood that after the shooting mode is switched, the transition dynamic effect picture in the display interface needs to be switched to the video picture shot in the second shooting mode.
In a specific implementation, the video picture shot in the second shooting mode can be monitored, and the monitored video picture shot in the second shooting mode is sent to the display interface to be displayed.
In one implementation, the switching of the transition dynamic effect picture to the video picture shot in the second shooting mode in step S504 includes: after all transition dynamic effect pictures have been displayed, switching the transition dynamic effect picture to the video picture shot in the second shooting mode.
In a specific implementation, when the video picture shot in the second shooting mode is first monitored, the transition dynamic effect pictures may not have finished playing; in that case the transition dynamic effect pictures continue to play, and only after all of them have been sent to the display interface for display is the monitored video picture shot in the second shooting mode sent to the display interface for display.
In another implementation, the switching of the transition dynamic effect picture to the video picture shot in the second shooting mode in step S504 includes: after the video picture shot in the second shooting mode is acquired, switching the transition dynamic effect picture to the video picture shot in the second shooting mode.
In a specific implementation, when the video picture shot in the second shooting mode is monitored, the transition dynamic effect pictures may not have finished playing; in that case their playback is ended, and the monitored video picture shot in the second shooting mode is sent to the display interface for display. Referring to fig. 6, a schematic view of a video stream provided in an embodiment of the present application is shown. As shown in fig. 6, during shooting mode switching there is a cut-off between the first-shooting-mode video stream and the second-shooting-mode video stream; in the embodiment of the present application, a transition dynamic effect video stream is inserted between the two, and the transition dynamic effect bridges them.
It is understood that in the video photographing process, in addition to displaying the photographed video within the display interface, the photographed video pictures may be encoded into a video file (e.g., a video file in MP4 format) and stored in the electronic device. In the shooting mode switching process, the video picture shot before switching, the transition dynamic effect picture and the video picture shot after switching are coded and synthesized into one video file. The following description is made with reference to a flow chart.
Referring to fig. 7, a schematic flow chart of another transition dynamic effect insertion method provided in the embodiment of the present application is shown. The method can be applied to the electronic device shown in fig. 1, and the method focuses on explaining the process of inserting the transition dynamic effect into the coded stream, as shown in fig. 7, which mainly includes the following steps.
Step S701: during shooting, encode the video pictures shot in the first shooting mode.
During video shooting, in addition to displaying the video shot in the first shooting mode in the display interface, the electronic device may encode the video shot in the first shooting mode into a video file.
Specifically, in the process of video shooting according to the first shooting mode, the camera corresponding to the first shooting mode continuously reports video frames. In the process, a coding process is started to continuously code the video frames reported by the camera.
In a specific implementation, the video pictures shot in the first shooting mode may be monitored, and the monitored video pictures shot in the first shooting mode may be sent to an encoder for encoding.
The first shooting mode related to the embodiment of the present application may be any one of a front single shooting mode, a rear single shooting mode, a front double shooting mode, a rear double shooting mode, a front and rear double shooting mode, a front picture-in-picture mode, a rear picture-in-picture mode, and a front and rear picture-in-picture mode, which is not limited in the embodiment of the present application.
Step S702: encode the transition dynamic effect picture.
In a specific implementation, the transition dynamic effect picture can be sent to the encoder for encoding.
referring to fig. 8A, a schematic diagram of an encoded stream according to an embodiment of the present application is provided. As shown in fig. 8A, after the encoded stream corresponding to the first shooting mode is ended, the encoding operation continues to encode the transition motion effect, and the picture frame corresponding to the transition motion effect is continuously refreshed into the video file.
Step S703: encode the video pictures shot in the second shooting mode.
In specific implementation, the video pictures shot in the second shooting mode are monitored, and the monitored video pictures shot in the second shooting mode are sent to an encoder for encoding.
Referring to fig. 8B, another schematic diagram of an encoded stream according to an embodiment of the present application is provided. As shown in fig. 8B, after the encoded stream corresponding to the transition dynamic effect ends, encoding continues with the encoded stream corresponding to the second shooting mode, and the video pictures of the second shooting mode are continuously written into the video file.
Step S704: receive a stop shooting operation and generate a video file, where the video file includes the video pictures shot in the first shooting mode, the transition dynamic effect pictures, and the video pictures shot in the second shooting mode.
When a stop shooting instruction is received, the camera corresponding to the second shooting mode stops reporting video frames, the video stream corresponding to the second shooting mode is interrupted, the encoder stops encoding, and a video file is generated. It can be understood that the video file includes the video pictures shot in the first shooting mode, the transition dynamic effect pictures, and the video pictures shot in the second shooting mode.
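The patent does not name a specific encoder API. Assuming the Android MediaCodec/MediaMuxer stack that a camera app would typically use for MP4 output, finalizing the recording on a stop-shooting operation could look roughly like this (buffer-drain logic omitted):

```java
// Hedged sketch: one plausible mapping of "stop shooting -> generate video file"
// onto Android's MediaCodec (surface-input encoder) and MediaMuxer. The patent
// text itself does not name these classes.
import android.media.MediaCodec;
import android.media.MediaMuxer;

public final class RecordingFinisher {
    public void stopShooting(MediaCodec encoder, MediaMuxer muxer) {
        encoder.signalEndOfInputStream(); // no more frames from the second shooting mode
        // ...drain the remaining encoded buffers into the muxer here...
        encoder.stop();
        encoder.release();
        muxer.stop();   // finalizes the MP4: mode-1 pictures, transition pictures, mode-2 pictures
        muxer.release();
    }
}
```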
With the technical solution provided by the embodiment of the present application, a transition dynamic effect is inserted during the cut-off period of the shooting mode switching process, and the pictures displayed before and after the switch are bridged by the transition dynamic effect, providing the user with a smooth video shooting experience. In addition, the transition dynamic effect is synchronously encoded into the generated video file, providing the user with a smooth video playback experience.
In practical applications, the user may trigger a pause shooting operation during shooting. When the electronic device receives the pause shooting operation, the monitored video picture shot in the first shooting mode, the transition dynamic effect picture, or the monitored video picture shot in the second shooting mode is still sent to the display interface for display; that is, the display stream is not interrupted by the pause shooting operation. For example: if the pause shooting operation is received in the first shooting mode, the monitored video picture shot in the first shooting mode continues to be sent to the display interface in real time; if it is received while the transition dynamic effect is being displayed, the generated transition dynamic effect pictures continue to be sent to the display interface in real time, and after the transition dynamic effect finishes, the monitored video picture shot in the second shooting mode is sent to the display interface in real time; if it is received in the second shooting mode, the monitored video picture shot in the second shooting mode continues to be sent to the display interface in real time.
For the encoded stream, by contrast, the encoder pauses encoding when the pause shooting operation is received; that is, the encoded stream is interrupted. In a practical application scenario, the video pictures or transition dynamic effect pictures may still be sent to the encoder after the pause shooting operation is received, but the encoder refuses to accept them, thereby pausing encoding.
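A minimal sketch of this pause behaviour (plain Java; the interfaces are hypothetical): every frame still reaches the display path, while the encoder path drops frames for as long as shooting is paused, mirroring the "encoder refuses to accept" description above.

```java
// Hypothetical pause handling: the display stream is never interrupted, while
// the encoder drops frames for as long as shooting is paused.
public final class PauseGate<F> {
    public interface Display<F> { void show(F frame); }
    public interface Encoder<F> { void encode(F frame); }

    private volatile boolean paused;

    public void setPaused(boolean paused) { this.paused = paused; }

    public void onFrame(F frame, Display<F> display, Encoder<F> encoder) {
        display.show(frame);       // display continues during a pause
        if (!paused) {
            encoder.encode(frame); // while paused, the encoder refuses the frame
        }
    }
}
```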
Referring to fig. 9, a schematic flow chart of a method for generating and inserting a transition dynamic effect according to an embodiment of the present application is provided. The method can be applied to the electronic device shown in fig. 1, as shown in fig. 9, which mainly includes the following steps.
Step S901: initialize the transition dynamic effect GL environment.
In the embodiment of the present application, an image is rendered by an Open Graphics Library (OpenGL) renderer. OpenGL is a cross-language, cross-platform application programming interface for rendering 2D, 3D graphics. In some descriptions, OpenGL may also be referred to simply as "GL".
It can be appreciated that the transition dynamic effect GL environment must be initialized before the transition dynamic effect is generated by the OpenGL renderer. This initialization may include setting the texture size for the transition dynamic effect, requesting the corresponding data buffers, and the like.
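As an illustration of what this initialization can involve, the sketch below (OpenGL ES 2.0 on Android; the exact steps of the patent's implementation are not specified) allocates a texture of the transition size and a framebuffer for the off-screen rendering used later in step S905:

```java
// A minimal sketch of a transition dynamic effect GL environment on Android
// (OpenGL ES 2.0): a texture of the transition size plus an off-screen
// framebuffer. This is an assumption about the setup, not the patent's code.
import android.opengl.GLES20;

public final class TransitionGlEnv {
    private final int[] texture = new int[1];
    private final int[] framebuffer = new int[1];

    /** Must be called on a thread with a current EGL context. */
    public void init(int width, int height) {
        GLES20.glGenTextures(1, texture, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture[0]);
        // Allocate the transition texture at the requested size (the "texture size" step).
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        // Off-screen render target (the "data buffer" step), used in step S905.
        GLES20.glGenFramebuffers(1, framebuffer, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, framebuffer[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, texture[0], 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    }
}
```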
Step S902: trigger switching from the single-path mode to the dual-path mode.
During shooting, the user wants to switch the shooting mode from the single-path mode to the dual-path mode and therefore triggers a shooting mode switching operation, which instructs the electronic device to switch the single-path mode to the dual-path mode.
Step S903: acquire the last frame of video image shot in the single-path mode.
After the switch from the single-path mode to the dual-path mode is triggered, the last frame of video image shot in the single-path mode is acquired; this last frame is the initial transition image, and applying the corresponding transformation processing to it generates the corresponding transition images. Alternatively, any one or more frames of images shot in the first shooting mode may be acquired and processed in the same way to generate the corresponding transition images.
Step S904: generate a frame of transition image.
Specifically, the OpenGL renderer may calculate the image adjustment parameters for each transition image in the transition dynamic effect according to a transition policy (transition duration, transition frame rate, transition effect, etc.), and render the corresponding transition frame texture according to those parameters to generate a transition image. The image adjustment parameters may include rotation angle, zoom ratio, transparency, blur degree, displacement, and the like. The OpenGL rendering process is described in detail below.
Step S905: perform off-screen rendering of the transition image.
After a frame of transition image is generated, it is rendered off-screen so that it can be displayed in the display interface. It can be understood that when the display interface displays the first frame of transition image, the video picture shot in the single-path mode has been switched to the transition dynamic effect picture in the display interface.
Step S906: determine whether the single-path encoding is finished.
Because the encoding process involves buffering, the encoded stream lags somewhat behind the video picture displayed in the display interface during shooting. For example, with 20 frames held in the buffer, after the last frame of video image shot in the single-path mode has been displayed in the display interface, 20 frames of single-path video images still remain in the buffer and need to be encoded.
At this point, it is determined whether the single-path encoding is finished: if it is, proceed to step S908 and start encoding the transition dynamic effect frames; otherwise, proceed to step S907 and continue the single-path encoding.
Step S907: single-path encoding.
If the single-path encoding is not finished, continue encoding the video pictures shot in the single-path mode.
Step S908: transition dynamic effect encoding.
If the single-path encoding is finished, encode the transition dynamic effect pictures.
Step S909: switching control (switch to the dual-path mode).
In parallel, after the switch from the single-path mode to the dual-path mode is triggered in step S902, the dual-path mode starts to be activated: for example, the two cameras are turned on and the dual-path mode configuration is applied.
Step S910: monitor the two paths of video frames.
After the switch to the dual-path mode, monitor whether the two paths of video frames are being reported.
Step S911: initialize the dual-path mode GL environment.
In the dual-path mode, the OpenGL renderer is needed to render and merge the two paths of video frames, so the dual-path mode GL environment must be initialized.
Step S912: render and merge the two paths of video frames.
Once the two paths of video frames are monitored in step S910, they are rendered and merged by the OpenGL renderer, as sketched below.
Step S913: dual-path display.
After the two paths of video frames have been rendered and merged, the merged video frames are sent to the display interface for display. It can be understood that, at this point, the transition dynamic effect picture in the display interface is switched to the video picture shot in the dual-path mode.
Step S914: dual-path encoding.
Specifically, after the transition dynamic effect frames have been encoded, encoding continues with the dual-path stream; that is, the rendered and merged dual-path video frames begin to be encoded.
With the technical solution provided by the embodiment of the present application, after the shooting mode switching operation is received, a transition dynamic effect is generated and used to bridge the displayed pictures before and after the switch during the cut-off period of the shooting mode switching process, providing the user with a smooth video shooting experience. In addition, the transition dynamic effect can be synchronously encoded into the generated video file, providing the user with a smooth video playback experience.
It should be noted that the above is only one embodiment; the method may likewise be applied to other mode-switching scenarios.
Referring to fig. 10, a block diagram of a software structure of an electronic device according to an embodiment of the present application is provided. The software architecture of the present embodiment is merely an example, and may be applied to other operating systems. The layered architecture in this embodiment divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android (Android) system is divided into four layers, an application layer, a framework layer, a hardware abstraction layer, and a hardware layer from top to bottom.
An Application layer (App) may comprise a series of Application packages. For example, the application package may include a camera application. The application layer can be divided into a display interface and application logic.
The display interface of the camera application includes a monoscopic mode, a dual-scene mode, a picture-in-picture mode, and the like. Wherein, only one shooting picture is displayed in the single scene mode; displaying two shooting pictures in parallel in a double-scene mode; two shot pictures are displayed in a picture-in-picture mode, one of the shot pictures being located within the other shot picture.
The application logic of the camera application includes a switching control module, a transition control module, a multi-shot encoding module, and the like. The switching control module controls the switching of the shooting mode; the transition control module generates the transition dynamic effect during shooting mode switching; the multi-shot encoding module keeps encoding during shooting mode switching and generates the video file.
The Framework layer (FWK) provides an application programming interface (API) and a programming framework for the applications in the application layer, including some predefined functions. In fig. 10, the framework layer includes the camera access interface (Camera2 API), a set of interfaces introduced by Android for accessing camera devices; it adopts a pipelined design so that data can flow from the camera to a Surface. The Camera2 API includes camera management (CameraManager) and camera device (CameraDevice) classes. CameraManager is the management class for camera devices: through a CameraManager object, the camera device information of the device can be queried and a CameraDevice object obtained. CameraDevice provides a series of fixed parameters related to the camera device, such as its basic setup and output formats.
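For reference, standard Camera2 usage matching this description: CameraManager is queried for camera information and then used to obtain a CameraDevice object. Capture-session setup and error handling are omitted; this is generic Android API usage, not the patent's own code.

```java
// Standard Camera2 usage matching the description above (not patent-specific):
// CameraManager is queried for device information, then used to obtain a
// CameraDevice object. Requires android.permission.CAMERA at runtime.
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

public final class CameraOpener {
    public void openFirstCamera(Context context, Handler handler) throws CameraAccessException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0];
        CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
        Integer facing = chars.get(CameraCharacteristics.LENS_FACING); // distinguishes front/rear cameras

        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice device) { /* create a capture session here */ }
            @Override public void onDisconnected(CameraDevice device) { device.close(); }
            @Override public void onError(CameraDevice device, int error) { device.close(); }
        }, handler);
    }
}
```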
A Hardware Abstraction Layer (HAL) is an interface layer between the operating system kernel and the hardware circuitry, which is intended to abstract the hardware. The method hides the hardware interface details of a specific platform, provides a virtual hardware platform for an operating system, enables the virtual hardware platform to have hardware independence, and can be transplanted on various platforms. In fig. 10, the HAL includes a Camera hardware abstraction layer (Camera HAL) including a Device (Device)1, a Device (Device)2, a Device (Device)3, and the like. It is understood that the devices 1, 2, and 3 are abstract devices.
The HardWare layer (HardWare, HW) is the HardWare located at the lowest level of the operating system. In fig. 10, HW includes a camera device (CameraDevice)1, a camera device (CameraDevice)2, a camera device (CameraDevice)3, and the like. Wherein the CameraDevice1, CameraDevice2, and CameraDevice3 can correspond to a plurality of cameras on the electronic device.
Referring to fig. 11, a schematic flow chart of another transition dynamic effect insertion method provided in the embodiment of the present application is shown. The method can be applied to the software structure shown in fig. 10, as shown in fig. 11, which mainly includes the following steps.
S1101: a video picture photographed in the single shot mode is displayed within a display interface.
In the embodiment of the present application, the shooting mode includes a single view mode and a double view mode. Currently, a user performs video shooting in a monoscopic mode, and a video picture shot in the monoscopic mode is displayed in real time in a display interface.
The single-scene mode may be a front single-shot mode, a rear single-shot mode, etc.; the dual-scene mode may be a front dual-shot mode, a rear dual-shot mode, or a front-and-rear dual-shot mode.
S1102: and encoding the video pictures shot in the single scene mode.
During shooting, the multi-shot coding module codes shot video pictures.
S1103: triggering a shooting mode switching operation.
When the user wants to switch the shooting mode from the single-scene mode to the dual-scene mode during shooting, the user triggers a shooting mode switching operation, which instructs the electronic device to switch the single-scene mode to the dual-scene mode.
S1104: the switching control module starts the switching of the shooting mode.
And the switching control module starts the switching of the shooting mode after receiving the switching operation of the shooting mode.
S1105: and the switching control module sends a code keeping instruction to the multi-shot coding module.
It can be understood that, in the single-scene mode, the multi-shot encoding module encodes the video pictures shot in the single-scene mode in real time.
The multi-shot encoding module is controlled to keep encoding during shooting mode switching, so that the video pictures shot in the single-scene mode, the transition dynamic effect pictures, and the video pictures shot in the dual-scene mode can be assembled into one video file.
S1106: and the switching control module sends a command of switching to the double-scene picture to the display interface.
In the single-scene mode, the display interface displays a single-scene picture. When the shooting mode is switched, the switching control module sends a switch-to-dual-scene-picture command to the display interface, instructing it to switch to the dual-scene picture.
S1107: and the switching control module sends a transition starting dynamic effect instruction to the transition control module.
To avoid degrading the user experience during shooting mode switching, a transition dynamic effect is inserted during the cut-off period of the switch. Specifically, the switching control module sends a start-transition-dynamic-effect instruction to the transition control module, causing the transition control module to generate the transition dynamic effect.
S1108: and the switching control module sends instructions for disconnecting the single-scene mode and starting the double-scene mode to the framework layer.
After the switching is started, the switching control module sends instructions of disconnecting the single-scene mode and starting the double-scene mode to the framework layer, so that the framework layer can disconnect the single-scene mode and start the double-scene mode conveniently.
S1109: and the frame layer sends instructions for disconnecting the single-scene mode and starting the double-scene mode to the hardware abstraction layer.
After receiving the instructions, the framework layer sends the disconnect-single-scene-mode and start-dual-scene-mode instructions to the hardware abstraction layer, so that the hardware abstraction layer can disconnect the single-scene mode and start the dual-scene mode.
S1110: and the hardware abstraction layer sends instructions for disconnecting the single-scene mode and starting the double-scene mode to the hardware layer.
After receiving the instructions, the hardware abstraction layer sends the disconnect-single-scene-mode and start-dual-scene-mode instructions to the hardware layer, so that the hardware layer can disconnect the single-scene mode and start the dual-scene mode.
S1111: the transition control module generates transition dynamic effect.
The transition control module generates a transition dynamic effect after receiving the start transition dynamic effect instruction, and the process of generating the transition dynamic effect refers to the description of the above embodiments.
S1112: and the transition control module sends transition dynamic effect to the multi-shot coding module.
And after the transition control module generates the transition dynamic effect, the transition dynamic effect is sent to the multi-shot coding module so as to code the transition dynamic effect.
S1113: and the multi-shot coding module codes the transition dynamic effect picture.
And after receiving the transition dynamic effect, the multi-shot coding module codes the transition dynamic effect picture.
It should be noted that the multi-shot encoding module keeps encoding continuously during the shooting mode switching process, that is, the encoding operation is not interrupted during the shooting mode switching process, but the encoding stream is switched from the first shooting mode encoding stream to the second shooting mode encoding stream.
S1114: and displaying the transition dynamic effect picture on a display interface.
Specifically, the transition control module sends the transition dynamic effect picture to the display interface after generating the transition dynamic effect so as to display the transition dynamic effect picture during the current interruption period.
S1115: and sending a double-scene picture starting message to the switching control module.
After the dual-scene mode is started, the dual-scene mode starting message is sent to the switching control module from the bottom layer step by step. Specifically, after the hardware abstraction layer starts the double-scene mode, a double-scene mode starting message is sent to the framework layer; after the framework layer starts the double-scene mode, sending a double-scene mode starting message to the camera management module; and after the camera management module starts the double-scene mode, sending a double-scene starting message to the switching control module.
S1116: A double-scene picture switching completion message is sent.
After completing the switching to the double-scene picture, the upper-layer display interface sends a double-scene picture switching completion message to the switching control module. At this point, the shooting mode switching is completed.
S1117: The switching control module sends a stop transition dynamic effect instruction to the transition control module.
After the switching to the double-scene mode is completed, the switching control module sends a stop transition dynamic effect instruction to the transition control module. Stopping the transition dynamic effect can be understood as the transition control module stopping the generation of transition dynamic effect images and stopping sending transition dynamic effect pictures to the multi-shot coding module and the display interface, so that the encoding and the display of the transition dynamic effect pictures stop.
S1118: The switching control module sends a keep-encoding instruction to the multi-shot coding module.
The multi-shot coding module is controlled to keep encoding during the shooting mode switching process, so that the video shot in the single-scene mode, the transition dynamic effect, and the video shot in the double-scene mode can be generated into one video file.
S1119: The video picture shot in the double-scene mode is displayed on the display interface.
After the shooting mode switching is completed, the video picture is shot in the double-scene mode and displayed on the display interface.
S1120: video pictures shot in the dual view mode are encoded.
In the process of shooting the video pictures in the double-scene mode, continuously coding the video pictures shot in the double-scene mode, and finally generating a video file from the video pictures shot in the single-scene mode, the transition dynamic effect and the video pictures shot in the double-scene mode.
In order to facilitate a better understanding of the technical solution by those skilled in the art, the following describes the generation process of the transition dynamic effect in detail.
Referring to fig. 12A, a block diagram of a software structure of a transition control module according to an embodiment of the present application is provided. As shown in fig. 12A, the transition control module includes a texture manager, a rendering engine, a renderer, and a shader library. The rendering engine comprises a display rendering engine and an encoding rendering engine.
The texture manager may obtain texture (image) data for the transition, i.e., an initial transition image, which is used to generate the transition dynamic effect. The renderer is used for calculating the image adjustment parameters corresponding to each frame of the transition dynamic effect according to the transition policy (transition dynamic effect duration, transition dynamic effect frame rate, transition effect, and the like), and rendering the corresponding transition frame textures according to these parameters; the transition frames are then sent to the display interface for display and to the multi-shot coding module for encoding. The shader library provides the GPU shading programs used by the renderer and may include a plurality of shaders, e.g., vertex shaders, fragment shaders, etc. The display rendering engine is used for driving the renderer to generate transition frame textures at a specified frame rate within a specified time interval and sending them to the display interface for display; the encoding rendering engine is used for driving the renderer to generate transition frame textures at a specified frame rate within a specified time interval and sending them to the multi-shot coding module for encoding.
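The module structure described above can be sketched as follows. This is an illustrative skeleton only; all class and method names (TransitionControlModule, TextureManager, RenderEngine, and so on) are hypothetical rather than the actual implementation of this application.

    // Minimal sketch of the structure in fig. 12A; names are illustrative.
    public final class TransitionControlModule {
        private final TextureManager textureManager = new TextureManager();
        private final Renderer renderer = new Renderer(new ShaderLibrary());
        private final RenderEngine displayEngine =
                new RenderEngine(renderer, RenderEngine.Target.DISPLAY);
        private final RenderEngine encodeEngine =
                new RenderEngine(renderer, RenderEngine.Target.ENCODER);

        static final class TextureManager {
            Texture acquireInitialTransitionImage() { return new Texture(); }
        }

        static final class Renderer {
            private final ShaderLibrary shaders;
            Renderer(ShaderLibrary shaders) { this.shaders = shaders; }
            // Computes per-frame adjustment parameters from the transition policy
            // and renders the corresponding transition frame texture.
            Texture renderFrame(Texture initial, TransitionPolicy policy, long elapsedMs) {
                return initial; // placeholder for the actual GPU pass
            }
        }

        static final class RenderEngine {
            enum Target { DISPLAY, ENCODER }
            // Drives the renderer at the specified frame rate within the specified
            // time interval and delivers frames to its target.
            RenderEngine(Renderer renderer, Target target) { }
        }

        static final class ShaderLibrary { /* vertex shaders, fragment shaders, ... */ }
        static final class Texture { }
        static final class TransitionPolicy { /* duration, frame rate, effect type */ }
    }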
Referring to fig. 12B, a schematic diagram of the connection relationship among the switching control module, the transition control module, and the multi-shot coding module according to an embodiment of the present application is shown. As shown in fig. 12B, the switching control module is connected to the transition control module and is configured to notify the transition control module to start the transition dynamic effect when the switching starts. The multi-shot coding module provides a transition dynamic effect recording interface for the transition control module, that is, a transition image generated by the transition control module can be sent to the multi-shot coding module for encoding. For the functions of the texture manager, the rendering engines, the renderer, and the shader library, refer to the description of the embodiment shown in fig. 12A; for brevity, details are not repeated here.
In the above embodiment, the flow of the transition dynamic effect generation and insertion method is described by taking the case in which the first shooting mode is the single-scene mode and the second shooting mode is the double-scene mode as an example; in practical applications, the method may also be applied to cases in which the first shooting mode is a mode other than the single-scene mode and/or the second shooting mode is a mode other than the double-scene mode.
Referring to fig. 13, a schematic flowchart of another transition dynamic effect generation method according to an embodiment of the present application is shown. The method is applicable to the software structure shown in fig. 12A and fig. 12B. As shown in fig. 13, the method mainly includes the following steps.
S1301: The switching control module sends a start transition dynamic effect instruction to the texture manager.
Specifically, after the user triggers the shooting mode switching operation, the switching control module sends a start transition dynamic effect instruction to the texture manager.
S1302: The texture manager acquires a transition image.
The texture manager may generate texture (image) data for the transition. After receiving the start transition dynamic effect instruction, the texture manager acquires an initial transition image, which is used for generating the transition dynamic effect.
In a specific implementation, the transition image may be an image in a video frame captured in the first shooting mode, where the first shooting mode is the shooting mode before switching and the second shooting mode is the shooting mode after switching.
It can be understood that, in order for the transition dynamic effect to better connect the first shooting mode and the second shooting mode, the transition image may be the last frame of the video shot in the first shooting mode, or any one or more frames of that video.
S1303A: the texture manager sends a start transition animation display instruction to the display rendering engine.
After obtaining the transition image, the texture manager sends a transition starting dynamic effect display instruction to the display rendering engine, so that the display rendering engine drives the renderer to generate transition frame textures in a specified time interval according to a specified frame rate, and the transition frame textures are sent to a display interface for display.
S1303B: the texture manager sends a start transition motion effect encoding instruction to the encoding rendering engine.
After obtaining the transition image, the texture manager sends a transition start dynamic effect coding instruction to the coding rendering engine, so that the display rendering engine drives the renderer to generate transition frame textures in a specified time interval according to a specified frame rate, and the transition frame textures are sent to the multi-shooting coding module for coding.
S1304A: the display rendering engine configures a renderer.
The renderer is used for calculating image adjusting parameters corresponding to each transition image in the transition dynamic effect according to transition strategies (transition dynamic effect duration, transition dynamic effect frame rate, transition effect and the like), and rendering corresponding transition frame textures according to the image adjusting parameters. The image adjustment parameters may include rotation angle, zoom ratio, transparency, blur degree, displacement amount, and the like.
And after receiving a start transition dynamic effect display instruction, the display rendering engine configures a renderer, and the renderer can select a corresponding shader from a shader library. Such as vertex shaders, fragment shaders, etc.
S1304B: the encoding rendering engine configures a renderer.
And after receiving a start transition dynamic effect display instruction, the coding rendering engine configures a renderer, and the renderer can select a corresponding shader from a shader library.
S1305A: The display rendering engine drives the renderer to draw a frame of transition dynamic effect display image in the transition dynamic effect.
After the renderer is configured, the display rendering engine drives the renderer to draw a frame of transition dynamic effect display image. Specifically, the renderer may calculate the image adjustment parameters of this frame according to the transition policy and the current time, adjust the transition image according to these parameters, and draw the frame. The transition dynamic effect display image is an image in the transition dynamic effect picture displayed on the display interface during the intermediate process of switching from the first shooting mode to the second shooting mode.
S1305B: The encoding rendering engine drives the renderer to draw a frame of transition coded image in the transition dynamic effect.
After the renderer is configured, the encoding rendering engine drives the renderer to draw a frame of transition coded image. Specifically, the renderer may calculate the image adjustment parameters of the transition coded image according to the transition policy and the current time, adjust the transition image according to these parameters, and draw the transition coded image.
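As an illustration of how the image adjustment parameters might be derived from the transition policy and the current time, the following sketch assumes a simple linear interpolation over the transition duration; the concrete mapping chosen here (one full rotation, shrink to half size, fade out) is an assumption for the example, not a requirement of this application.

    // Illustrative sketch only: one plausible way to derive a frame's
    // adjustment parameters from the transition policy and the current time.
    public final class TransitionParams {
        public final float rotationDeg, scale, alpha;
        TransitionParams(float r, float s, float a) { rotationDeg = r; scale = s; alpha = a; }

        public static TransitionParams at(long startMs, long nowMs, long durationMs) {
            // Progress runs from 0 at the start of the transition to 1 at its end.
            float t = Math.min(1f, Math.max(0f, (nowMs - startMs) / (float) durationMs));
            float rotation = 360f * t;   // e.g. one full rotation over the transition
            float scale = 1f - 0.5f * t; // shrink to half size
            float alpha = 1f - t;        // fade out
            return new TransitionParams(rotation, scale, alpha);
        }
    }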
S1306A: The renderer sends the transition dynamic effect display image to the display interface.
Specifically, after a frame of transition dynamic effect display image is drawn, the renderer sends it to the display interface for display.
S1306B: The renderer sends the transition coded image to the multi-shot coding module.
Specifically, after a frame of transition coded image is drawn, the renderer sends it to the multi-shot coding module for encoding.
It can be understood that, within the transition dynamic effect duration, the renderer continuously sends transition dynamic effect display images to the display interface so that the transition dynamic effect is displayed on the display interface, and continuously sends transition coded images to the multi-shot coding module so that a transition dynamic effect video file is generated and stored in the electronic device.
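One plausible way for a rendering engine to drive the renderer at a specified frame rate within a specified time interval is a scheduled timer, as in the following hedged sketch; the TransitionDriver class is hypothetical and abstracts away the GL threading details.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    final class TransitionDriver {
        void run(Runnable drawOneFrame, int frameRate, long durationMs) {
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            long periodMs = 1000L / frameRate;  // e.g. 33 ms at 30 fps
            timer.scheduleAtFixedRate(drawOneFrame, 0, periodMs, TimeUnit.MILLISECONDS);
            // Stop generating frames once the transition duration has elapsed.
            timer.schedule(timer::shutdown, durationMs, TimeUnit.MILLISECONDS);
        }
    }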
It can be understood that, in the above implementation, each frame of transition dynamic effect display image and each frame of transition coded image is generated according to the transition policy.
Because the transition dynamic effect display images and the transition coded images have a one-to-one correspondence, in some possible implementations each frame of transition dynamic effect display image may be generated according to the transition policy, and each frame of transition coded image may be determined from the corresponding transition dynamic effect display image. That is, the transition coded image directly copies the transition dynamic effect display image, which reduces the amount of computation spent adjusting the transition image.
Referring to fig. 14, a schematic flowchart of another transition dynamic effect generation method according to an embodiment of the present application is shown. The method is applicable to the software structure shown in fig. 12A and fig. 12B. As shown in fig. 14, the method mainly includes the following steps.
S1401: The switching control module sends a start transition dynamic effect instruction to the texture manager.
Specifically, after the user triggers the shooting mode switching operation, the switching control module sends a start transition dynamic effect instruction to the texture manager.
S1402: The texture manager acquires a transition image.
The texture manager may generate texture (image) data for the transition. After receiving the start transition dynamic effect instruction, the texture manager acquires a transition image, which is used for generating the transition dynamic effect.
S1403: The texture manager sends a start transition dynamic effect display instruction to the display rendering engine.
After obtaining the transition image, the texture manager sends a start transition dynamic effect display instruction to the display rendering engine, so that the display rendering engine drives the renderer to generate transition frame textures at a specified frame rate within a specified time interval and sends them to the display interface for display.
S1404: the texture manager sends a start transition motion effect encoding instruction to the encoding rendering engine.
After obtaining the transition image, the texture manager sends a transition start dynamic effect coding instruction to the coding rendering engine, so that the display rendering engine drives the renderer to generate transition frame textures in a specified time interval according to a specified frame rate, and the transition frame textures are sent to the multi-shooting coding module for coding.
S1405: The display rendering engine configures the renderer.
The renderer is used for calculating the image adjustment parameters corresponding to each frame of the transition dynamic effect according to the transition policy (transition dynamic effect duration, transition dynamic effect frame rate, transition effect, and the like), and rendering the corresponding transition frame textures according to these parameters. The image adjustment parameters may include a rotation angle, a zoom ratio, a transparency, a blur degree, a displacement amount, and the like.
After receiving the start transition dynamic effect display instruction, the display rendering engine configures the renderer, and the renderer may select corresponding shaders, such as vertex shaders and fragment shaders, from the shader library.
S1406: The display rendering engine drives the renderer to draw a frame of transition display image in the transition dynamic effect.
After the renderer is configured, the display rendering engine drives the renderer to draw a frame of transition display image. Specifically, the renderer may calculate the image adjustment parameters of this frame according to the transition policy and the current time, adjust the transition image according to these parameters, and draw the frame of transition display image.
S1407: The renderer sends the transition display image to the display interface.
Specifically, after a frame of transition display image is drawn, the renderer sends it to the display interface for display.
S1408: The encoding rendering engine determines the transition coded image according to the transition display image.
In this embodiment of the application, the encoding rendering engine does not drive the renderer to render the transition coded image; instead, it shares the rendering result of the display rendering engine and copies the transition display image into the corresponding transition coded image.
S1409: The encoding rendering engine sends the transition coded image to the multi-shot coding module.
Specifically, after obtaining the transition coded image, the encoding rendering engine sends it to the multi-shot coding module for encoding.
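The copy-based path of fig. 14 can be summarized in the following sketch, in which all type names are hypothetical: the encoding side reuses the frame the display side has already rendered instead of re-rendering it.

    // Hedged sketch of the shared-frame encode path (names hypothetical).
    final class SharedFrameEncodePath {
        void onDisplayFrameRendered(Frame displayFrame, Encoder encoder, Display display) {
            display.show(displayFrame);              // S1407: display the rendered frame
            Frame encodedCopy = displayFrame.copy(); // S1408: duplicate instead of re-drawing
            encoder.submit(encodedCopy);             // S1409: hand the copy to the encoder
        }
        interface Encoder { void submit(Frame f); }
        interface Display { void show(Frame f); }
        interface Frame { Frame copy(); }
    }

The design trade-off is that the copy saves one full render pass per frame at the cost of one frame duplication, which is usually cheaper than recomputing the adjustment parameters and re-running the shaders.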
Referring to fig. 15A, a schematic diagram of a rendering scene according to an embodiment of the present application is shown. In order to process the display image and the encoded image separately, two rendering engines, i.e., a display rendering engine and an encoding rendering engine, are generally provided. In fig. 15A, Open GL is taken as an example to describe the rendering process of an image; hereinafter, the display rendering engine and the encoding rendering engine are referred to as the Open GL display rendering engine and the Open GL encoding rendering engine, respectively, and both may call the Open GL renderer to implement the rendering process of an image.
In the single-scene mode, the Open GL display rendering engine may monitor one video image through each of the first monitoring module and the second monitoring module; one of the two monitored video images is used for transition image rendering and the other for encoding rendering. Of course, it is also possible to monitor the video image with only one monitoring module, perform transition image rendering on the monitored video image, and perform encoding rendering on the rendered video image. The specific steps are as follows:
the Open GL display rendering engine monitors the video image collected by the first camera through the first monitoring module and through the second monitoring module. The Open GL display rendering engine transmits the video image monitored by the first monitoring module to the Open GL renderer, which transfers it to the display buffer for caching; the Open GL display rendering engine likewise transmits the video image monitored by the second monitoring module to the Open GL renderer, which transfers it to the encoding buffer. The video image cached in the display buffer is transferred to the shot video picture (SurfaceView), and the video image is displayed within the shot video picture. The Open GL encoding rendering engine acquires the video image in the encoding buffer, performs related rendering on the video image, for example, performing beautification processing on the video image or adding a watermark to the video image, and sends the rendered video image to the encoding module, so that the encoding module performs the corresponding encoding processing to generate a video file.
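On Android, one plausible realization of such a monitoring module is a SurfaceTexture with a frame-available listener, as in the following sketch; the FrameMonitor and FrameSink names are hypothetical, and this is not asserted to be the implementation of this application.

    import android.graphics.SurfaceTexture;

    final class FrameMonitor {
        private final SurfaceTexture surfaceTexture;

        FrameMonitor(int glTextureId, FrameSink sink) {
            surfaceTexture = new SurfaceTexture(glTextureId);
            // Each new camera frame fires this callback; the frame is latched
            // into the GL texture and forwarded to the rendering/caching path.
            // In production code, updateTexImage() must run on the thread that
            // owns the GL context; that hand-off is omitted here for brevity.
            surfaceTexture.setOnFrameAvailableListener(st -> {
                st.updateTexImage();
                sink.onFrame(glTextureId);
            });
        }

        SurfaceTexture getSurfaceTexture() { return surfaceTexture; }

        interface FrameSink { void onFrame(int textureId); }
    }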
It should be noted that, when the electronic device shoots video through a single camera, no special rendering processing of the video image is required, so the video image monitored by the first monitoring module of the Open GL display rendering engine may also be transmitted directly to the display buffer, and the video image monitored by the second monitoring module directly to the encoding buffer, without passing through the Open GL renderer; this is not limited in this application.
In the double-scene mode or the picture-in-picture mode, the Open GL display rendering engine monitors the video images collected by the first camera and the second camera through the first monitoring module and the second monitoring module respectively, and transmits the two monitored video streams together with a synthesis strategy to the Open GL renderer. The Open GL renderer synthesizes the two video images into one video image according to the synthesis strategy and transfers it to the display buffer for caching. The video image cached in the display buffer is transferred to the shot video picture (SurfaceView) and to the encoding buffer, and is displayed within the shot video picture. The Open GL encoding rendering engine acquires the video image in the encoding buffer, performs related rendering on the video image, for example, performing beautification processing on the video image or adding a watermark to the video image, and sends the rendered video image to the encoding module, so that the encoding module performs the corresponding encoding processing to generate a video file.
It should be noted that, in the above process, except for the video file generated by the encoding module, which is in the MP4 format, the other video images are in the RGB format. That is, the video image monitored by the Open GL display rendering engine is in the RGB format, and the video image output after the Open GL renderer renders and synthesizes it is also in the RGB format; likewise, the video image cached in the display buffer is in the RGB format, and the video images sent to the shot video picture and to the encoding buffer are also in the RGB format. The Open GL encoding rendering engine acquires the video image in the RGB format and performs related rendering on it according to the image rendering instruction input by the user, and the rendered video image is still in the RGB format. The video image received by the encoding module is in the RGB format, and the encoding module encodes the RGB video image to generate a video file in the MP4 format.
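As a hedged illustration of such an RGB-in, MP4-out pipeline on Android, the following sketch configures a surface-input H.264 encoder whose output would be muxed into an MP4 file by a MediaMuxer. It uses only standard MediaCodec/MediaMuxer APIs, but the wiring shown (bit rate, frame rate, helper name) is an assumption for the example, not this application's code.

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.view.Surface;
    import java.io.IOException;

    final class Mp4EncoderSetup {
        // Returns the configured encoder; the caller draws RGB frames into the
        // encoder-owned input surface and drains encoded buffers into the muxer.
        static MediaCodec createEncoder(int width, int height, String outputPath)
                throws IOException {
            MediaFormat format = MediaFormat.createVideoFormat(
                    MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000); // assumed bit rate
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);      // assumed frame rate
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

            MediaCodec encoder = MediaCodec.createEncoderByType(
                    MediaFormat.MIMETYPE_VIDEO_AVC);
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            // RGB frames are rendered into this surface by the Open GL side; the
            // codec converts and compresses them into H.264.
            Surface inputSurface = encoder.createInputSurface();
            // Encoded output is written into an MP4 container by a MediaMuxer.
            MediaMuxer muxer = new MediaMuxer(outputPath,
                    MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            encoder.start();
            return encoder; // inputSurface and muxer would be wired to the render/drain loops
        }
    }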
In the application scene of the transition dynamic effect, the Open GL display rendering engine and the Open GL encoding rendering engine each initialize a corresponding transition dynamic effect rendering environment of the Open GL renderer, i.e., a transition dynamic effect Open GL environment, which are used for rendering the transition dynamic effect display images and the transition coded images respectively. The content of this initialization may include a timer thread, textures, and the like.
In another possible implementation, a transition dynamic effect Open GL environment of the corresponding Open GL renderer may be initialized only through the Open GL display rendering engine, and the Open GL renderer renders the transition dynamic effect display images. The Open GL encoding rendering engine shares the transition dynamic effect display images and generates the transition coded images from them, thereby implementing the encoding of the transition coded images.
Referring to fig. 15B, a schematic diagram of another rendering scene according to an embodiment of the present application is shown. The difference from fig. 15A is that, in the single-scene mode, the Open GL display rendering engine monitors only one video image of the electronic device through one monitoring module. For example, the Open GL display rendering engine monitors the video image collected by the first camera through the first monitoring module. The Open GL display rendering engine transmits the video image monitored by the first monitoring module to the Open GL renderer, and the Open GL renderer transfers the acquired video image to the display buffer for caching. The video image cached in the display buffer is transferred to the shot video picture, displayed within the shot video picture, and also transferred to the encoding buffer. The Open GL encoding rendering engine acquires the video image in the encoding buffer, performs related rendering on the video image, for example, performing beautification processing on the video image or adding a watermark to the video image, and sends the rendered video image to the encoding module, so that the encoding module performs the corresponding encoding processing to generate a video file.
It should be noted that, when the electronic device shoots video through a single camera, no special rendering processing of the video image is required, so the video image monitored by the first monitoring module of the Open GL display rendering engine may also be transmitted directly to the display buffer without passing through the Open GL renderer; this is not limited in this application.
It should be noted that, in fig. 15A and fig. 15B, the Open GL display rendering engine, the Open GL renderer, and the display buffer in the single-scene mode are the same as those in the double-scene mode; for convenience of illustration, they are drawn in both the single-scene mode and the double-scene mode.
In particular, data sharing may be achieved between the Open GL display rendering engine and the Open GL encoding rendering engine through SharedContext.
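On Android, such SharedContext-style sharing can be expressed with EGL14 by passing the display context as the share context when creating the encoding context. The following sketch (with a hypothetical SharedEglContexts helper) illustrates the idea; the surrounding EGL setup is assumed.

    import android.opengl.EGL14;
    import android.opengl.EGLConfig;
    import android.opengl.EGLContext;
    import android.opengl.EGLDisplay;

    final class SharedEglContexts {
        static EGLContext createEncodeContext(EGLDisplay display, EGLConfig config,
                                              EGLContext displayContext) {
            int[] attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
            // Passing displayContext (instead of EGL14.EGL_NO_CONTEXT) makes the
            // two contexts share texture objects, which is what allows the
            // encoding rendering engine to reuse frames rendered for display.
            return EGL14.eglCreateContext(display, config, displayContext, attribs, 0);
        }
    }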
The following describes a rendering process of an Open GL renderer by taking an example of merging two video images into one video image.
Referring to fig. 16A, a schematic diagram of rendering and merging a video stream according to an embodiment of the present application is shown. Fig. 16A shows one frame of the video image collected by the first camera and one frame of the video image collected by the second camera, each with a size of 1080 × 960. According to the position information and texture information of the two video images, they are rendered and merged into one frame of a 1080 × 1920 image. The spliced image is in the double-scene mode, that is, the image collected by the first camera and the image collected by the second camera are displayed in parallel. The spliced image may be sent to the encoder for encoding and to the shot video picture for display.
Referring to fig. 16B, a schematic diagram of rendering and merging another video stream according to an embodiment of the present application is shown. Fig. 16B shows one frame of the video image collected by the first camera, with a size of 540 × 480, and one frame of the video image collected by the second camera, with a size of 1080 × 960. According to the position information and texture information of the two video images, they are rendered and merged into one frame of an image in the picture-in-picture mode.
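For clarity, the two composition layouts of fig. 16A and fig. 16B can be illustrated with Android's Canvas API instead of the Open GL pipeline actually described above; the FrameComposer class and the inset position are hypothetical, and scaling the camera frames to the sizes shown in the figures is assumed to have happened upstream.

    import android.graphics.Bitmap;
    import android.graphics.Canvas;

    final class FrameComposer {
        // Double-scene mode: stack two 1080x960 frames into one 1080x1920 frame.
        static Bitmap composeDualView(Bitmap top, Bitmap bottom) {
            Bitmap out = Bitmap.createBitmap(1080, 1920, Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(out);
            canvas.drawBitmap(top, 0, 0, null);
            canvas.drawBitmap(bottom, 0, 960, null);
            return out;
        }

        // Picture-in-picture mode: overlay a 540x480 frame on a 1080x960 frame.
        static Bitmap composePictureInPicture(Bitmap main, Bitmap inset) {
            Bitmap out = main.copy(Bitmap.Config.ARGB_8888, true);
            Canvas canvas = new Canvas(out);
            canvas.drawBitmap(inset, 48, 48, null); // inset position is illustrative
            return out;
        }
    }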
Referring to fig. 16C, a schematic diagram of a transition dynamic effect rendering scene according to an embodiment of the present application is shown. Fig. 16C shows one frame of a transition image, which is rotated according to the transition policy to obtain one frame of a rendered transition image.
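Likewise, the rotation rendering of fig. 16C can be illustrated with a Matrix-based sketch; the RotationTransition class is hypothetical, and the real implementation uses the Open GL renderer and shaders rather than Bitmap operations.

    import android.graphics.Bitmap;
    import android.graphics.Matrix;

    final class RotationTransition {
        static Bitmap renderFrame(Bitmap transitionImage, float progress) {
            Matrix m = new Matrix();
            // Rotate about the image center; the angle grows with progress (0..1).
            m.setRotate(360f * progress,
                    transitionImage.getWidth() / 2f, transitionImage.getHeight() / 2f);
            return Bitmap.createBitmap(transitionImage, 0, 0,
                    transitionImage.getWidth(), transitionImage.getHeight(), m, true);
        }
    }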
It is understood that the image sizes shown in fig. 16A-16C are merely illustrative of embodiments of the present application and should not be taken as limiting the scope of the present application.
Corresponding to the above method embodiments, the present application further provides an electronic device, which includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein when the computer program instructions are executed by the processor, the electronic device is triggered to execute some or all of the steps in the above method embodiments.
Fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 17, the electronic device 1700 may include a processor 1710, an external memory interface 1720, an internal memory 1721, a Universal Serial Bus (USB) interface 1730, a charging management module 1740, a power management module 1741, a battery 1742, an antenna 1, an antenna 2, a mobile communication module 1750, a wireless communication module 1760, an audio module 1770, a speaker 1770A, a receiver 1770B, a microphone 1770C, an earphone interface 1770D, a sensor module 1780, buttons 1790, a motor 1791, an indicator 1792, a camera 1793, a display 1794, a Subscriber Identification Module (SIM) card interface 1795, and the like. The sensor module 1780 may include a pressure sensor 1780A, a gyroscope sensor 1780B, an air pressure sensor 1780C, a magnetic sensor 1780D, an acceleration sensor 1780E, a distance sensor 1780F, a proximity light sensor 1780G, a fingerprint sensor 1780H, a temperature sensor 1780J, a touch sensor 1780K, an ambient light sensor 1780L, a bone conduction sensor 1780M, etc.
It is to be understood that the illustrated structure in the embodiment of the present invention does not constitute a specific limitation on the electronic device 1700. In other embodiments of the present application, the electronic device 1700 may include more or fewer components than shown, or combine some components, or split some components, or have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 1710 may include one or more processing units, such as: the processor 1710 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 1710 for storing instructions and data. In some embodiments, the memory in the processor 1710 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 1710. If the processor 1710 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 1710, thereby increasing the efficiency of the system.
In some embodiments, the processor 1710 can include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 1710 may include multiple sets of I2C buses. The processor 1710 can be coupled to the touch sensor 1780K, the charger, the flash, the camera 1793, etc. through different I2C bus interfaces. For example: the processor 1710 can be coupled to the touch sensor 1780K via an I2C interface, such that the processor 1710 and the touch sensor 1780K communicate via an I2C bus interface to implement the touch functionality of the electronic device 1700.
The I2S interface may be used for audio communication. In some embodiments, the processor 1710 may include multiple sets of I2S buses. The processor 1710 may be coupled to the audio module 1770 via an I2S bus, enabling communication between the processor 1710 and the audio module 1770. In some embodiments, the audio module 1770 can communicate audio signals to the wireless communication module 1760 via the I2S interface to enable answering a call via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 1770 and the wireless communication module 1760 may be coupled through a PCM bus interface. In some embodiments, the audio module 1770 can also transmit audio signals to the wireless communication module 1760 through the PCM interface to enable answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 1710 to the wireless communication module 1760. For example: the processor 1710 communicates with the bluetooth module in the wireless communication module 1760 through the UART interface to implement the bluetooth function. In some embodiments, the audio module 1770 may transmit audio signals to the wireless communication module 1760 through a UART interface, so as to implement the function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 1710 with peripheral devices such as the display 1794 and the camera 1793. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 1710 and camera 1793 communicate over a CSI interface to implement the capture functionality of electronic device 1700. The processor 1710 and the display screen 1794 communicate via the DSI interface to implement the display function of the electronic device 1700.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 1710 with the camera 1793, the display 1794, the wireless communication module 1760, the audio module 1770, the sensor module 1780, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 1730 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 1730 may be used to connect a charger to charge the electronic device 1700, and may also be used to transmit data between the electronic device 1700 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It is to be understood that the interfacing relationship between the modules according to the embodiment of the present invention is only illustrative, and does not limit the structure of the electronic apparatus 1700. In other embodiments of the present application, the electronic device 1700 may also adopt different interface connection manners or a combination of interface connection manners in the above embodiments.
The charging management module 1740 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 1740 may receive charging input from a wired charger via the USB interface 1730. In some wireless charging embodiments, the charging management module 1740 may receive wireless charging input through a wireless charging coil of the electronic device 1700. The charging management module 1740 may also supply power to the electronic device through the power management module 1741 while charging the battery 1742.
The power management module 1741 is configured to connect the battery 1742, the charging management module 1740, and the processor 1710. The power management module 1741 receives input from the battery 1742 and/or the charging management module 1740 and provides power to the processor 1710, the internal memory 1721, the display 1794, the camera 1793, and the wireless communication module 1760. Power management module 1741 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, a power management module 1741 may also be disposed within the processor 1710. In other embodiments, the power management module 1741 and the charge management module 1740 may be disposed in the same device.
The wireless communication functions of the electronic device 1700 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1750, the wireless communication module 1760, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 1700 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1750 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 1700. The mobile communication module 1750 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 1750 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The mobile communication module 1750 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 1750 may be disposed in the processor 1710. In some embodiments, at least some of the functional blocks of the mobile communication module 1750 may be provided in the same device as at least some of the blocks of the processor 1710.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 1770A, the receiver 1770B, etc.) or displays an image or video through the display screen 1794. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 1710, in the same device as the mobile communication module 1750 or other functional blocks.
The wireless communication module 1760 may provide a solution for wireless communication applied to the electronic device 1700, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and so on. The wireless communication module 1760 may be one or more devices integrating at least one communication processing module. The wireless communication module 1760 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on electromagnetic wave signals, and transmits the processed signals to the processor 1710. The wireless communication module 1760 may also receive a signal to be transmitted from the processor 1710, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the electronic device 1700 is coupled to the mobile communication module 1750 and the antenna 2 is coupled to the wireless communication module 1760 such that the electronic device 1700 can communicate with networks and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 1700 implements display functions via the GPU, the display 1794, and the application processor, among other things. The GPU is a microprocessor for image processing, connected to the display 1794 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 1710 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 1794 is used to display images, video, etc. The display 1794 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 1700 may include 1 or N display screens 1794, N being a positive integer greater than 1.
The electronic device 1700 may implement a shooting function through an ISP, a camera 1793, a video codec, a GPU, a display 1794, an application processor, and the like.
The ISP is used to process the data fed back by the camera 1793. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 1793.
The camera 1793 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 1700 may include 1 or N cameras 1793, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 1700 performs frequency point selection, the digital signal processor is used to perform a fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device 1700 may support one or more video codecs. As such, electronic device 1700 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 1700 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
External memory interface 1720 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of electronic device 1700. The external memory card communicates with the processor 1710 through an external memory interface 1720 to implement data storage functions. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 1721 may be used to store computer-executable program code, including instructions. The internal memory 1721 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The stored data area may store data (e.g., audio data, phone books, etc.) created during use of the electronic device 1700, and the like. In addition, the internal memory 1721 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 1710 performs various functional applications and data processing of the electronic device 1700 by executing instructions stored in the internal memory 1721 and/or instructions stored in a memory provided in the processor.
The electronic device 1700 can implement audio functions, such as music playing and recording, via the audio module 1770, the speaker 1770A, the receiver 1770B, the microphone 1770C, the headset interface 1770D, and the application processor.
The audio module 1770 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 1770 may also be used to encode and decode audio signals. In some embodiments, the audio module 1770 may be disposed in the processor 1710, or some functional modules of the audio module 1770 may be disposed in the processor 1710.
The speaker 1770A, also known as a "horn," is used to convert electrical audio signals into sound signals. The electronic device 1700 can listen to music through the speaker 1770A or listen to a hands-free conversation.
A receiver 1770B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 1700 answers a call or receives a voice message, the voice can be heard by placing the receiver 1770B close to the ear.
A microphone 1770C, also known as a "mic", converts sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 1770C by speaking with the mouth close to the microphone 1770C. The electronic device 1700 may be provided with at least one microphone 1770C. In other embodiments, the electronic device 1700 may be provided with two microphones 1770C to implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 1700 may further include three, four, or more microphones 1770C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headset interface 1770D is used to connect wired headphones. The headset interface 1770D may be a USB interface 1730, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
Pressure sensor 1780A is configured to sense a pressure signal, which may be converted to an electrical signal. In some embodiments, the pressure sensor 1780A may be disposed on the display 1794. Pressure sensor 1780A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 1780A, the capacitance between the electrodes changes. The electronic device 1700 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display 1794, the electronic apparatus 1700 detects the intensity of the touch operation based on the pressure sensor 1780A. The electronic apparatus 1700 can also calculate the position of the touch from the detection signal of the pressure sensor 1780A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 1780B may be used to determine the motion pose of the electronic device 1700. In some embodiments, the angular velocity of electronic device 1700 about three axes (i.e., x, y, and z axes) may be determined by gyroscope sensors 1780B. The gyro sensor 1780B may be used to photograph anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 1780B detects the shake angle of the electronic device 1700, calculates the distance that the lens module needs to compensate according to the shake angle, and allows the lens to counteract the shake of the electronic device 1700 through a reverse motion, thereby achieving anti-shake. The gyroscope sensor 1780B can also be used for navigation and body feeling of a game scene.
Barometric pressure sensor 1780C is used to measure barometric pressure. In some embodiments, electronic device 1700 calculates altitude, aiding in positioning and navigation from barometric pressure values measured by barometric pressure sensor 1780C.
The magnetic sensor 1780D includes a hall sensor. The electronic device 1700 can detect the opening and closing of a flip holster using the magnetic sensor 1780D. In some embodiments, when the electronic device 1700 is a flip phone, the electronic device 1700 can detect the opening and closing of the flip according to the magnetic sensor 1780D, and then set features such as automatic unlocking upon flip opening according to the detected opening or closing state of the holster or the flip.
Acceleration sensor 1780E may detect the magnitude of acceleration of electronic device 1700 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 1700 is at rest. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 1780F for measuring distance. The electronic device 1700 may measure distance by infrared or laser. In some embodiments, shooting a scene, the electronic device 1700 may utilize the distance sensor 1780F to range to achieve fast focus.
The proximity light sensor 1780G can include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 1700 emits infrared light to the outside through the light emitting diode. Electronic device 1700 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 1700. When insufficient reflected light is detected, the electronic device 1700 can determine that there are no objects near the electronic device 1700. The electronic device 1700 can utilize the proximity light sensor 1780G to detect that the user holds the electronic device 1700 close to the ear for conversation, so as to automatically turn off the screen to achieve the purpose of saving power. The proximity light sensor 1780G may also be used in holster mode, pocket mode, auto unlock and lock screen.
The ambient light sensor 1780L is used to sense ambient light level. The electronic device 1700 may adaptively adjust the brightness of the display 1794 based on the perceived ambient light level. The ambient light sensor 1780L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 1780L may also cooperate with the proximity light sensor 1780G to detect whether the electronic device 1700 is in a pocket to prevent inadvertent contact.
The fingerprint sensor 1780H is used to capture a fingerprint. The electronic device 1700 may utilize the collected fingerprint characteristics to implement fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 1780J is used to detect temperature. In some embodiments, the electronic device 1700 implements a temperature processing strategy using the temperature detected by the temperature sensor 1780J. For example, when the temperature reported by the temperature sensor 1780J exceeds a threshold, the electronic device 1700 reduces the performance of a processor located near the temperature sensor 1780J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 1700 heats the battery 1742 to avoid an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is below yet another threshold, the electronic device 1700 boosts the output voltage of the battery 1742 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 1780K is also referred to as a "touch device". The touch sensor 1780K may be disposed on the display 1794, and the touch sensor 1780K and the display 1794 form a touch screen, which is also called a "touch screen". The touch sensor 1780K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display 1794. In other embodiments, the touch sensor 1780K may be disposed on a surface of the electronic device 1700 at a different location than the display 1794.
The bone conduction sensor 1780M may acquire a vibration signal. In some embodiments, the bone conduction sensor 1780M may acquire a vibration signal of the body's voice vibrating a bone mass. The bone conduction sensor 1780M can also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, a bone conduction sensor 1780M may also be provided in the headset, integrated into the bone conduction headset. The audio module 1770 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound obtained by the bone conduction sensor 1780M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 1780M, so as to realize the heart rate detection function.
The keys 1790 include a power key, volume keys, and the like. The keys 1790 may be mechanical keys or touch keys. The electronic device 1700 can receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1700.
The motor 1791 can generate vibration cues. The motor 1791 can be used both for incoming-call vibration prompts and for touch vibration feedback. For example, touch operations in different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and touch operations on different areas of the display 1794 may likewise correspond to different effects. Different application scenes (e.g., time reminders, received messages, alarm clocks, games) can also correspond to different vibration feedback effects, and the touch vibration feedback effect may support customization.
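A scene-to-effect mapping of this kind can be sketched with the public Android vibrator interfaces; the scene names, durations, and amplitudes below are illustrative assumptions, not values from this patent:

```kotlin
import android.content.Context
import android.os.VibrationEffect
import android.os.Vibrator

// Illustrative mapping from application scenes to vibration effects;
// durations and amplitudes are assumed example values.
enum class HapticScene { INCOMING_CALL, ALARM, NEW_MESSAGE, CAMERA_SHUTTER }

fun vibrateFor(context: Context, scene: HapticScene) {
    val vibrator = context.getSystemService(Vibrator::class.java)
    val effect = when (scene) {
        HapticScene.INCOMING_CALL ->
            VibrationEffect.createWaveform(longArrayOf(0, 400, 200, 400), -1)
        HapticScene.ALARM ->
            VibrationEffect.createWaveform(longArrayOf(0, 800, 400, 800), 0) // repeating
        HapticScene.NEW_MESSAGE ->
            VibrationEffect.createOneShot(150L, VibrationEffect.DEFAULT_AMPLITUDE)
        HapticScene.CAMERA_SHUTTER ->
            VibrationEffect.createOneShot(40L, VibrationEffect.DEFAULT_AMPLITUDE)
    }
    vibrator.vibrate(effect)
}
```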
The indicator 1792 may be an indicator light used to indicate charging status and battery level changes, as well as messages, missed calls, notifications, and the like.
The SIM card interface 1795 is used to connect a SIM card. A SIM card can be attached to or detached from the electronic device 1700 by inserting it into or removing it from the SIM card interface 1795. The electronic device 1700 may support one or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 1795 can support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards, of the same or different types, can be inserted into the same SIM card interface 1795 at the same time. The SIM card interface 1795 may also be compatible with different types of SIM cards as well as with external memory cards. The electronic device 1700 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 1700 employs an eSIM, i.e., an embedded SIM card, which is embedded in the electronic device 1700 and cannot be separated from it.
In a specific implementation, the present application further provides a computer storage medium that can store a program; when the program runs, it controls the device in which the computer-readable storage medium is located to perform some or all of the steps in the foregoing embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
In a specific implementation, an embodiment of the present application further provides a computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform some or all of the steps in the foregoing method embodiments.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" can mean: A alone, both A and B, or B alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the preceding and following objects. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, and c" may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided by the present invention, any function, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only a specific embodiment of the present invention; any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions fall within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A shooting transition dynamic effect insertion method, characterized by comprising the following steps:
starting, by an electronic device, a first shooting mode to shoot video, and displaying a first video picture shot in the first shooting mode, the first video picture comprising a first picture captured by a first camera, wherein, in the first shooting mode, an OpenGL display rendering engine monitors one video stream through a first monitoring module and another video stream through a second monitoring module, one of the two monitored video streams being used for transition image rendering and the other for encoding rendering;
receiving, by the electronic device during shooting, a shooting mode switching operation and starting a second camera, the shooting mode switching operation being used to switch the first shooting mode to a second shooting mode, the first shooting mode being different from the second shooting mode;
switching the video picture shot in the first shooting mode to a transition dynamic effect picture according to the shooting mode switching operation, the transition dynamic effect picture being related to the video picture shot in the first shooting mode;
switching the transition dynamic effect picture to a second video picture shot in the second shooting mode, the second video picture comprising a second picture captured by a second camera, the first camera being different from the second camera;
wherein the second video picture further comprises the first picture captured by the first camera, and the method further comprises:
monitoring, by the electronic device, the second video picture and displaying the second video picture, and sending the second video picture to an encoder for encoding;
wherein the monitoring and displaying of the second video picture by the electronic device comprises:
monitoring the first picture and the second picture, and rendering and merging the first picture and the second picture to obtain the displayed second video picture;
and wherein, in the second shooting mode, the OpenGL display rendering engine monitors the video pictures captured by the first camera and the second camera through the first monitoring module and the second monitoring module respectively, and passes the two monitored video streams together with a composition strategy to an OpenGL renderer, and the OpenGL renderer composes the two video streams into one video picture according to the composition strategy and sends it to the display buffer for caching (an illustrative sketch of this pipeline follows the claims).
2. The method according to claim 1, wherein switching the transition dynamic effect picture to a second video picture shot in the second shooting mode comprises:
switching the transition dynamic effect picture to the second video picture shot in the second shooting mode after all pictures of the transition dynamic effect have finished displaying.
3. The method according to claim 1, wherein switching the transition dynamic effect picture to a second video picture shot in the second shooting mode comprises:
switching the transition dynamic effect picture to the second video picture shot in the second shooting mode after the second video picture has been acquired.
4. The method according to any one of claims 1-3, further comprising:
after receiving a pause-shooting operation in the first shooting mode, sending the monitored video pictures shot in the first shooting mode to a display interface in real time for display;
after receiving a pause-shooting operation while the transition dynamic effect is being displayed, sending the generated transition dynamic effect pictures to the display interface in real time for display, and, after the transition dynamic effect has finished displaying, sending the monitored video pictures shot in the second shooting mode to the display interface in real time for display; or
after receiving a pause-shooting operation in the second shooting mode, sending the monitored video pictures shot in the second shooting mode to the display interface in real time for display.
5. The method according to any one of claims 1-3, wherein a display duration of the transition dynamic effect matches a blanking duration during the switching of the shooting modes, the blanking duration being the time difference between the last first video picture reported in the first shooting mode and the first second video picture reported in the second shooting mode.
6. The method according to claim 1, wherein rendering and merging the first picture and the second picture comprises:
rendering and merging the first picture and the second picture according to texture information and position information of the first picture and the second picture and a composition strategy, the composition strategy being information specifying the display positions and display sizes with which the first picture and the second picture are composed.
7. The method according to claim 6, wherein the composition strategy comprises at least one of:
splicing the first picture and the second picture; or
filling one of the first picture and the second picture into the other picture.
8. The method of claim 1, further comprising:
during shooting, encoding the first video picture shot in the first shooting mode;
encoding the transition dynamic effect picture;
encoding the second video picture shot in the second shooting mode;
receiving a stop-shooting operation and generating a video file, the video file comprising the first video picture shot in the first shooting mode, the transition dynamic effect picture, and the second video picture shot in the second shooting mode;
and storing the video file.
9. An electronic device, comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of claims 1-8.
10. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of any one of claims 1-8.
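By way of illustration of claims 1 and 6-8, the following minimal sketch shows the dual-monitoring-module flow: each module watches one camera stream, the two frames are merged according to a composition strategy (splicing or picture-in-picture filling), and the merged picture is fanned out to both the display buffer and the encoder. All class and function names here are hypothetical, and the merge is abstracted away from the actual OpenGL rendering:

```kotlin
// Illustrative sketch of the dual-monitoring pipeline of claim 1; all names
// are hypothetical and the frame "merge" is abstracted away from OpenGL.
data class Frame(val texId: Int, val width: Int, val height: Int)

enum class CompositionStrategy { SPLICE, PICTURE_IN_PICTURE } // cf. claim 7

interface FrameListener { fun onFrame(frame: Frame) }

class RenderingEngine(
    private val display: (Frame) -> Unit, // display buffer / preview path
    private val encoder: (Frame) -> Unit, // video encoder input path
) {
    var strategy = CompositionStrategy.SPLICE
    private var first: Frame? = null
    private var second: Frame? = null

    // First monitoring module: frames from the first camera.
    val firstListener = object : FrameListener {
        override fun onFrame(frame: Frame) { first = frame; composeIfReady() }
    }

    // Second monitoring module: frames from the second camera.
    val secondListener = object : FrameListener {
        override fun onFrame(frame: Frame) { second = frame; composeIfReady() }
    }

    private fun composeIfReady() {
        val a = first ?: return
        val b = second ?: return
        // Merge the two streams into one picture per the composition strategy
        // (display positions and sizes, cf. claim 6), then fan the result out
        // to both the display path and the encoding path.
        val merged = when (strategy) {
            CompositionStrategy.SPLICE -> spliceSideBySide(a, b)
            CompositionStrategy.PICTURE_IN_PICTURE -> embed(a, into = b)
        }
        display(merged) // cached in the display buffer for preview
        encoder(merged) // cf. claim 8: encoded into the output video file
    }

    private fun spliceSideBySide(a: Frame, b: Frame) =
        Frame(texId = -1, width = a.width + b.width, height = maxOf(a.height, b.height))

    private fun embed(a: Frame, into: Frame) = into // placeholder: draw a inside b
}
```

In the actual pipeline the merge would be performed by the OpenGL renderer on GPU textures; the sketch only fixes the data flow between the monitoring modules, the display buffer, and the encoder.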
CN202110682677.8A 2021-06-16 2021-06-16 Shooting transfer live-action insertion method, equipment and storage medium Active CN113473005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110682677.8A CN113473005B (en) 2021-06-16 2021-06-16 Shooting transfer live-action insertion method, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113473005A (en) 2021-10-01
CN113473005B (en) 2022-08-09

Family

ID=77868903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110682677.8A Active CN113473005B (en) 2021-06-16 2021-06-16 Shooting transfer live-action insertion method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113473005B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115996274A (en) * 2021-10-18 2023-04-21 华为技术有限公司 Video production method and electronic equipment
CN115002335B (en) * 2021-11-26 2024-04-09 荣耀终端有限公司 Video processing method, apparatus, electronic device, and computer-readable storage medium
CN115037872B (en) * 2021-11-30 2024-03-19 荣耀终端有限公司 Video processing method and related device
CN114500835A (en) * 2022-01-20 2022-05-13 深圳市源德盛数码科技有限公司 Video shooting method, system, intelligent terminal and storage medium
CN114268741B (en) * 2022-02-24 2023-01-31 荣耀终端有限公司 Transition dynamic effect generation method, electronic device, and storage medium
CN115514871A (en) * 2022-09-30 2022-12-23 读书郎教育科技有限公司 Overturning camera preview optimization system and method based on intelligent terminal
CN118075600A (en) * 2022-11-22 2024-05-24 荣耀终端有限公司 Shooting mode switching method and related device
CN117135259B (en) * 2023-04-11 2024-06-07 荣耀终端有限公司 Camera switching method, electronic equipment, chip system and readable storage medium
CN117729426A (en) * 2023-07-05 2024-03-19 荣耀终端有限公司 Mode switching method, electronic device and storage medium
CN117424958B (en) * 2023-09-15 2024-06-07 荣耀终端有限公司 Switching method of camera display interface, electronic equipment, chip system and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107820006A (en) * 2017-11-07 2018-03-20 北京小米移动软件有限公司 Control the method and device of camera shooting
CN110072070A (en) * 2019-03-18 2019-07-30 华为技术有限公司 A kind of multichannel kinescope method and equipment
CN111885305A (en) * 2020-07-28 2020-11-03 Oppo广东移动通信有限公司 Preview picture processing method and device, storage medium and electronic equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2800398A1 (en) * 2010-05-25 2011-12-01 Vidyo, Inc. Systems and methods for scalable video communication using multiple cameras and multiple monitors
US10168882B2 (en) * 2013-06-09 2019-01-01 Apple Inc. Device, method, and graphical user interface for switching between camera interfaces
CN104980644B (en) * 2014-04-14 2018-12-14 华为技术有限公司 A kind of image pickup method and device
CN105183296B (en) * 2015-09-23 2018-05-04 腾讯科技(深圳)有限公司 interactive interface display method and device
CN106210512B (en) * 2016-06-30 2019-06-07 维沃移动通信有限公司 A kind of camera switching method and mobile terminal
CN106792104A (en) * 2017-01-19 2017-05-31 北京行云时空科技有限公司 It is a kind of while supporting the method and system that shows of multiwindow image
US10537799B1 (en) * 2018-03-23 2020-01-21 Electronic Arts Inc. User interface rendering and post processing during video game streaming
WO2020019356A1 (en) * 2018-07-27 2020-01-30 华为技术有限公司 Method for terminal to switch cameras, and terminal
CN111866404B (en) * 2019-04-25 2022-04-29 华为技术有限公司 Video editing method and electronic equipment

Also Published As

Publication number Publication date
CN113473005A (en) 2021-10-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant