CN113596321B - Method, device and storage medium for generating a transition effect

Info

Publication number: CN113596321B
Authority: CN (China)
Prior art keywords: transition, image, transition effect
Legal status: Active (granted)
Application number: CN202110682681.4A
Other languages: Chinese (zh)
Other versions: CN113596321A
Inventor: 韩林林
Assignee: Honor Device Co Ltd (original and current)
Application filed by Honor Device Co Ltd; priority to CN202110682681.4A; published as CN113596321A; granted and published as CN113596321B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: GUIs for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Abstract

The embodiments of the present application provide a method, a device, a storage medium, and a program product for generating a transition effect. The method includes: starting video recording in a first shooting mode according to a user operation, and shooting video images; receiving a shooting mode switching operation, where the shooting mode switching operation instructs switching the first shooting mode to a second shooting mode; acquiring a transition image, where the transition image is related to a video image shot in the first shooting mode; determining image adjustment parameters according to a transition strategy; and adjusting the transition image according to the image adjustment parameters to generate the transition effect. With the technical solution provided by the embodiments of the present application, a transition effect is generated after the shooting mode switching instruction is received, and the transition effect bridges the video pictures before and after the switch during the interruption interval of the switching process, providing the user with a smooth video shooting experience.

Description

Method, device and storage medium for generating a transition effect
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a storage medium for generating a transition effect.
Background
To improve user experience, electronic devices such as mobile phones and tablet computers are generally configured with multiple cameras, for example, a front camera and a rear camera. The user can select a shooting mode according to his or her needs, for example, a front single-shot mode, a rear single-shot mode, a front-rear dual-shot mode, and the like.
When shooting video, a user may need to switch shooting modes mid-recording, for example, from the front single-shot mode to the rear single-shot mode. However, switching shooting modes may interrupt the video stream, resulting in poor user experience.
Disclosure of Invention
In view of this, the present application provides a method, a device, and a storage medium for generating a transition effect, so as to solve the prior-art problem that the video stream is interrupted while shooting modes are switched, resulting in poor user experience.
In a first aspect, an embodiment of the present application provides a method for generating a transition effect during video recording, applied to a terminal device, the method including:
according to the user operation, starting video recording in a first shooting mode, and shooting video images;
receiving a shooting mode switching operation, wherein the shooting mode switching operation is used for indicating to switch a first shooting mode into a second shooting mode, and the first shooting mode and the second shooting mode are used for shooting based on different cameras;
acquiring a transition image, wherein the transition image is related to a video image shot in the first shooting mode;
determining image adjustment parameters according to a transition strategy, wherein the transition strategy comprises a transition effect duration, a transition effect frame rate or a dynamic change type of the transition effect;
and adjusting the transition image according to the image adjustment parameters to generate a transition effect, wherein the transition effect comprises at least two frames of different transition images.
Preferably, the determining the image adjustment parameter according to the transition strategy includes:
determining the number N of transition images in the transition effect according to the transition effect duration and the transition effect frame rate, wherein N is greater than or equal to 2;
and determining N image adjustment parameters corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect.
Preferably, the adjusting the transition image according to the image adjustment parameters to generate the transition effect includes:
and respectively adjusting the parameters of the transition images according to the N image adjustment parameters to generate N frames of transition images.
Preferably, the dynamic change type of the transition effect at least comprises one of the following:
Rotation, stretching, transparency gradient, blur gradient, or scaling.
Preferably, when the dynamic change type of the transition effect is rotation, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect includes:
and determining N rotation angles corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect, wherein the N rotation angles corresponding to the N frames of transition images in the transition effect continuously change in a preset angle range.
Preferably, the N rotation angles corresponding to the N frames of transition images in the transition effect changing continuously within a preset angle range includes:
the rotation angles corresponding to the 1st to N-th frames of transition images in the transition effect gradually increase or decrease within a preset angle range, or first increase and then decrease, or first decrease and then increase.
Preferably, the N rotation angles corresponding to the N frames of transition images in the transition effect changing continuously within a preset angle range includes:
the rotation angle corresponding to the 1st to i-th frames of transition images in the transition effect gradually increases or decreases within the preset angle range, and the rotation angle corresponding to the i-th to N-th frames of transition images in the transition effect gradually increases or decreases within the preset angle range, wherein 1 < i < N.
Preferably, when the dynamic change type of the transition effect is stretching, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, including:
and determining N stretching ratios corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect, wherein the N stretching ratios corresponding to the continuous N frames of transition images in the transition effect are gradually increased or decreased within a preset stretching ratio range.
Preferably, when the dynamic change type of the transition effect is transparency gradient, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect includes:
and determining N transparencies corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, wherein the N transparencies corresponding to the N frames of transition images in the transition effect gradually decrease within a preset transparency range.
Preferably, when the dynamic change type of the transition effect is a blur gradient, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect includes:
determining N blur degrees corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, wherein the N blur degrees corresponding to the consecutive N frames of transition images in the transition effect gradually increase within a preset blur range.
Preferably, when the dynamic change type of the transition effect is a scaling ratio, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect includes:
and determining N scaling ratios corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, wherein the N scaling ratios corresponding to the consecutive N frames of transition images in the transition effect gradually decrease within a preset scaling-ratio range.
Preferably, the transition image includes any image of the video shot in the first shooting mode.
Preferably, the transition image includes the last frame image of the video shot in the first shooting mode.
Preferably, the first shooting mode and/or the second shooting mode is one of the following shooting modes:
a front single-shot mode, a rear single-shot mode, a front dual-shot mode, a rear dual-shot mode, a front picture-in-picture mode, a rear picture-in-picture mode, and a front-rear picture-in-picture mode;
wherein the first photographing mode and the second photographing mode are different.
In a second aspect, an embodiment of the present application provides an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method provided in the first aspect.
In a third aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium includes a stored program, where when the program runs, the program controls a device in which the computer readable storage medium is located to execute a method provided in the first aspect.
With the technical solution provided by the embodiments of the present application, a transition effect is generated after the shooting mode switching instruction is received, and the transition effect bridges the display pictures before and after the switch during the interruption interval of the shooting mode switching process, providing the user with a smooth video shooting experience. In addition, the transition effect can be synchronously encoded into the generated video file, providing the user with a smooth video playback experience.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 2A is a schematic diagram of a shooting scene in the front-rear dual-shot mode according to an embodiment of the present application;
fig. 2B is a schematic diagram of a shooting scene in the front-rear picture-in-picture mode according to an embodiment of the present application;
fig. 2C is a schematic diagram of a shooting scene in the rear picture-in-picture mode according to an embodiment of the present application;
fig. 3 is a schematic view of a shooting mode switching scenario provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart of a method for generating a transition effect according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a transition strategy according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a rotating transition effect generated according to the transition strategy shown in FIG. 5 according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another transition strategy according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a stretching transition effect generated according to the transition strategy shown in FIG. 7 according to an embodiment of the present application;
FIG. 9 is a schematic flow chart of a method for generating and inserting a transition effect according to an embodiment of the present application;
fig. 10 is a software structural block diagram of an electronic device according to an embodiment of the present application;
FIG. 11 is a software block diagram of a transition control module according to an embodiment of the present application;
fig. 12 is a schematic diagram of connection relationships between a switching control module, a transition control module, and a multi-shot coding module according to an embodiment of the present application;
FIG. 13 is a flowchart of another method for generating a transition effect according to an embodiment of the present application;
FIG. 14 is a flowchart of another method for generating a transition effect according to an embodiment of the present application;
fig. 15A is a schematic view of a rendering scene according to an embodiment of the present application;
FIG. 15B is a schematic view of another rendered scene provided in an embodiment of the present application;
fig. 16A is a schematic diagram of a video stream rendering combined scene according to an embodiment of the present application;
FIG. 16B is a schematic diagram of another video stream rendering merge scene provided in an embodiment of the present application;
FIG. 16C is a schematic diagram of a transition effect rendering scene according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions of the present application, embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, of the embodiments of the present application. All other embodiments, based on the embodiments herein, which would be apparent to one of ordinary skill in the art without making any inventive effort, are intended to be within the scope of the present application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist. For example, "a and/or b" may represent three cases: a alone, both a and b, and b alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Referring to fig. 1, a schematic diagram of an electronic device is provided in an embodiment of the present application. In fig. 1, the electronic device is illustrated by taking the mobile phone 100 as an example, and fig. 1 shows a front view and a rear view of the mobile phone 100, where two front cameras 111 and 112 are disposed on the front side of the mobile phone 100, and four rear cameras 121, 122, 123, and 124 are disposed on the rear side of the mobile phone 100. By configuring a plurality of cameras, a plurality of shooting modes, for example, a front-shooting mode, a rear-shooting mode, a front-back double-shooting mode, and the like, can be provided for the user. The user can select a corresponding shooting mode to shoot according to the shooting scene so as to improve the user experience.
It should be understood that the illustration in fig. 1 is merely exemplary and should not be taken as limiting the scope of the present application. For example, the number and location of the cameras may differ between mobile phones. In addition, besides a mobile phone, the electronic device in the embodiments of the present application may be a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, smart glasses, a smart television, or the like.
It should be noted that, in some possible implementations, the electronic device may also be referred to as a terminal device, a User Equipment (UE), or the like, which is not limited by the embodiments of the present application.
In some possible implementations, the shooting modes of the electronic device may include single-shot modes and multi-shot modes. The single-shot modes may include a front single-shot mode, a rear single-shot mode, and the like; the multi-shot modes may include a front dual-shot mode, a rear dual-shot mode, a front-rear dual-shot mode, a front picture-in-picture mode, a rear picture-in-picture mode, a front-rear picture-in-picture mode, and the like.
A single-shot mode shoots video with one camera; a multi-shot mode shoots video with two or more cameras.
Specifically, in the front single-shot mode, one front camera shoots the video; in the rear single-shot mode, one rear camera shoots the video; in the front dual-shot mode, two front cameras shoot the video; in the rear dual-shot mode, two rear cameras shoot the video; in the front-rear dual-shot mode, a front camera and a rear camera shoot the video; in the front picture-in-picture mode, two front cameras shoot the video, and the picture shot by one front camera is placed within the picture shot by the other; in the rear picture-in-picture mode, two rear cameras shoot the video, and the picture shot by one rear camera is placed within the picture shot by the other; in the front-rear picture-in-picture mode, a front camera and a rear camera shoot the video, and the picture shot by one of them is placed within the picture shot by the other.
Referring to fig. 2A, a schematic view of a shooting scene in the front-rear dual-shot mode is provided in an embodiment of the present application. In the front-rear dual-shot mode, a front camera captures the foreground picture, a rear camera captures the background picture, and the two pictures are displayed simultaneously in the display interface.
Referring to fig. 2B, a schematic view of a shooting scene in the front-rear picture-in-picture mode is provided in an embodiment of the present application. In the front-rear picture-in-picture mode, a front camera captures the foreground picture, a rear camera captures the background picture, and the foreground picture is placed within the background picture.
Referring to fig. 2C, a schematic view of a shooting scene in the rear picture-in-picture mode is provided in an embodiment of the present application. In the rear picture-in-picture mode, one rear camera captures a distant-view picture, another rear camera captures a close-view picture, and the close-view picture is placed within the distant-view picture.
It should be noted that the above shooting modes are only some possible implementations listed in the embodiments of the present application, and those skilled in the art may configure other shooting modes according to actual needs, which are not particularly limited in the embodiments of the present application.
In some possible implementations, the shooting mode may also be described as a single-path mode, a two-path mode, or a multi-path mode. It can be understood that the single-path mode adopts one camera to shoot, the double-path mode adopts two cameras to shoot, and the multi-path mode adopts more than two cameras to shoot.
In some possible implementations, the shooting modes may also be described as a single-view mode, a dual-view mode, and a picture-in-picture mode. The single-view mode may include the front single-shot mode and the rear single-shot mode; the dual-view mode may include a front dual-view mode, a rear dual-view mode, and a front-rear dual-view mode; the picture-in-picture mode may include a front picture-in-picture mode, a rear picture-in-picture mode, and a front-rear picture-in-picture mode.
During video capture, a user may need to switch capture modes. Referring to table one, some possible shooting mode switching scenarios are listed for the embodiments of the present application.
Table one (reproduced as an image in the original publication; contents not recoverable from the text).
However, switching the shooting mode generally means switching the camera that captures the video images, which interrupts the video stream. In the display interface, this appears as follows: when the video pictures captured in the pre-switch shooting mode have finished playing but the video pictures captured in the post-switch shooting mode are not yet available, the display interface is interrupted, affecting user experience. The same problem occurs in the generated video file.
To address this problem, the embodiments of the present application generate a transition effect during shooting mode switching, insert the transition effect into the interruption interval, and bridge the video pictures before and after the switch with the transition effect, thereby avoiding interruption of the display interface and/or the generated video file during the switch and improving user experience.
Referring to fig. 3, a schematic view of a shooting mode switching scene is provided in an embodiment of the present application. As shown in fig. 3, a user may display a photographed video screen in real time in a display interface during video photographing by an electronic device. In addition, a shooting mode selection window is further included in the display interface, and a user can select a corresponding shooting mode in the shooting mode selection window to shoot video. For example, a front single shot mode, a rear single shot mode, a front-rear double shot mode, a front-rear picture-in-picture mode, and the like.
In the application scenario shown in fig. 3, the user first selects the front single-shot mode for video shooting, and the foreground picture is displayed in real time in the display interface 301. When the user triggers the "front-rear dual-shot" control in the shooting mode selection window 302, the electronic device receives a shooting mode switching operation and switches the front single-shot mode to the front-rear dual-shot mode. During the switch, the electronic device generates a transition effect and displays the transition effect picture in the display interface 301 while the video stream is interrupted, avoiding any visible interruption in the display interface 301. After the switch is completed, the video pictures shot in the front-rear dual-shot mode, for example the foreground picture and the background picture shown in fig. 3, are displayed in real time in the display interface 301. That is, in the front-rear dual-shot mode, the foreground picture and the background picture are captured by the front camera and the rear camera respectively, and both are displayed in the display interface 301.
It will be appreciated that in addition to the captured video being displayed within the display interface 301 during video capture, the captured video may be encoded into a video file (e.g., an MP4 format video file) and stored in the electronic device. In the shooting mode switching process, the video shot before switching, the transition movement effect and the video shot after switching are encoded into one video file. The specific coding scheme is described in detail below.
Referring to fig. 4, a flowchart of a method for generating a transition effect during video recording is provided in an embodiment of the present application. The method can be applied to the electronic device shown in fig. 1 and, as shown in fig. 4, mainly includes the following steps.
Step S401: and according to the user operation, starting video recording in a first shooting mode, and shooting video images.
In the initial state, the user can start a first shooting mode for video recording, and the first shooting mode can be any one of the shooting modes.
Step S402: and receiving a shooting mode switching operation, wherein the shooting mode switching operation is used for indicating to switch a first shooting mode into a second shooting mode, and the first shooting mode and the second shooting mode are used for shooting based on different cameras.
In practical applications, a user may need to switch shooting modes during video shooting, and input a shooting mode switching operation in an electronic device to switch a first shooting mode to a second shooting mode. The user may input the shooting mode switching operation through a touch screen, physical keys, gesture control, voice control, and other modes, which is not particularly limited in the embodiment of the present application.
The first shooting mode and the second shooting mode in the embodiments of the present application shoot with different cameras. Specifically, the shooting modes of the electronic device may include single-shot modes and multi-shot modes. The single-shot modes may include a front single-shot mode, a rear single-shot mode, and the like; the multi-shot modes may include a front dual-shot mode, a rear dual-shot mode, a front-rear dual-shot mode, a front picture-in-picture mode, a rear picture-in-picture mode, a front-rear picture-in-picture mode, and the like.
In some possible implementations, the shooting mode may also be described as a single-path mode, a two-path mode, or a multi-path mode. It can be understood that the single-path mode adopts one camera to shoot, the double-path mode adopts two cameras to shoot, and the multi-path mode adopts two or more cameras to shoot.
In some possible implementations, the shooting modes may also be described as a single-view mode, a dual-view mode, and a picture-in-picture mode. The single-view mode may include the front single-shot mode and the rear single-shot mode; the dual-view mode may include a front dual-view mode, a rear dual-view mode, and a front-rear dual-view mode; the picture-in-picture mode may include a front picture-in-picture mode, a rear picture-in-picture mode, and a front-rear picture-in-picture mode.
Step S403: and acquiring a transition image, wherein the transition image is related to the video image shot in the first shooting mode.
After receiving a shooting mode switching operation, the embodiment of the application acquires a transition image, wherein the transition image is related to an image of a video shot in the first shooting mode.
In a specific implementation, the transition image may be an image of the video shot in the first shooting mode. It is understood that, for the transition effect to bridge the first shooting mode and the second shooting mode more smoothly, the transition image may be the last frame image of the video shot in the first shooting mode.
Step S404: and determining image adjustment parameters according to a transition strategy, wherein the transition strategy comprises a transition effect duration, a transition effect frame rate or a dynamic change type of the transition effect.
In a specific implementation, the number N of transition images in the transition effect can be determined from the transition effect duration and the transition effect frame rate, where N is greater than or equal to 2; then, N image adjustment parameters corresponding to the N frames of transition images are determined according to the dynamic change type of the transition effect and the number N. In addition, the transition strategy may also include a transition effect size. In some possible implementations, the transition effect frame rate may be 30 fps, 60 fps, etc.; the transition effect size may be full-screen, 16:9, 21:9, 4:3, etc.
It can be appreciated that the requirements on the transition effect may differ between application scenarios (shooting mode switching scenarios); for example, different transition effect durations, frame rates, effects, and sizes may need to be set. Therefore, the embodiments of the present application determine the image adjustment parameters according to the transition strategy, and then generate the corresponding transition effect according to those parameters.
In one embodiment, each frame of transition image in the transition effect corresponds to one image adjustment parameter. That is, the image adjustment parameter corresponding to each frame of transition image in the transition effect needs to be determined according to the transition strategy. For example, when the transition effect duration is 1 s and the transition effect frame rate is 30 fps, 30 image adjustment parameters corresponding to 30 frames of transition images need to be determined according to the transition strategy.
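For illustration, this calculation may be sketched as follows, assuming linear interpolation between a start value and an end value; the names (TransitionStrategy, adjustmentParams) are illustrative assumptions, not names from the original disclosure:

```kotlin
// Illustrative sketch only: derives the N per-frame adjustment parameters
// from a transition strategy. Linear interpolation is an assumption; the
// actual progression depends on the dynamic change type.
data class TransitionStrategy(
    val durationMs: Int,   // transition effect duration, e.g. 1000 ms
    val frameRate: Int,    // transition effect frame rate, e.g. 30 fps
    val startValue: Float, // e.g. rotation angle 0f
    val endValue: Float    // e.g. rotation angle 36f
)

fun adjustmentParams(strategy: TransitionStrategy): List<Float> {
    // N = duration x frame rate, with N >= 2 as the method requires
    val n = maxOf(2, strategy.durationMs * strategy.frameRate / 1000)
    return (0 until n).map { i ->
        strategy.startValue + (strategy.endValue - strategy.startValue) * i / (n - 1)
    }
}
```

With a duration of 1 s and a frame rate of 30 fps, this yields the 30 parameters of the example above.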
Step S405: and adjusting the transition image according to the image adjustment parameters to generate a transition effect, wherein the transition effect comprises at least two frames of different transition images.
Specifically, the parameters of the transition image are adjusted according to the N image adjustment parameters respectively, so as to generate N frames of transition images.
It is understood that the transition effect consists of consecutive transition image frames. In the embodiments of the present application, the transition image is adjusted (in rotation angle, scaling, transparency, blur, displacement, and the like) according to the image adjustment parameter corresponding to each frame of transition image in the transition effect, so as to generate each frame of transition image in the transition effect.
For example, when the transition effect duration is 1 s and the transition effect frame rate is 30 fps, the transition image is adjusted according to the 30 image adjustment parameters corresponding to the 30 frames of transition images, generating the 30 frames of transition images that constitute the transition effect.
With the technical solution provided by the embodiments of the present application, a transition effect is generated after the shooting mode switching operation is received, and the transition effect bridges the display pictures before and after the switch during the interruption interval of the shooting mode switching process, providing the user with a smooth video shooting experience. In addition, the transition effect can be synchronously encoded into the generated video file, providing the user with a smooth video playback experience.
In some possible implementations, the dynamic change type of the transition effect includes at least one of: rotation, stretching, transparency gradient, blur gradient, or scaling. These are described below.
It will be appreciated that, to provide a smooth video shooting experience, the transition effect should display as a dynamic picture; optionally, the dynamic picture may change according to at least one of the dynamic change types described above.
When the dynamic change type of the transition effect is rotation, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, including: and determining N rotation angles corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect, wherein the N rotation angles corresponding to the N frames of transition images in the transition effect continuously change in a preset angle range.
It can be understood that, in the N frames of images of the transition effect, from the 1st frame of transition image to the N-th frame of transition image, the transition images of any two adjacent frames are different.
In one possible implementation manner, N rotation angles corresponding to the N frame of transition images in the transition motion effect continuously change in a preset angle range, including: the rotation angles corresponding to the 1 st frame of transition image to the N th frame of transition image in the transition effect are gradually increased or decreased in a preset angle range, or are increased and then decreased or are decreased and then increased.
For example, among N frames of the transition images, the transition image of the subsequent frame is rotated in the same direction with respect to the transition image of the previous frame.
For example, from the 1st frame of transition image to the i-th frame of transition image, the transition image of each subsequent frame rotates in a first direction relative to the transition image of the previous frame; and from the i-th frame of transition image to the N-th frame of transition image, the transition image of each subsequent frame rotates in a second direction relative to the transition image of the previous frame, the first direction being opposite to the second direction.
In one possible implementation manner, N rotation angles corresponding to the N frame of transition images in the transition motion effect continuously change in a preset angle range, including: the rotation angle corresponding to the 1 st frame of transition image to the i th frame of transition image in the transition effect is gradually increased or decreased within a preset angle range, and the rotation angle corresponding to the i th frame of transition image to the N th frame of transition image in the transition effect is gradually increased or decreased within the preset angle range, wherein i is more than 1 and less than N.
For example, from the 1st frame of transition image to the i-th frame of transition image, the transition image of each subsequent frame rotates in a first direction relative to the transition image of the previous frame; and from the i-th frame of transition image to the N-th frame of transition image, the transition image of each subsequent frame rotates in a second direction relative to the transition image of the previous frame, the first direction being opposite to the second direction.
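As an illustrative sketch of this two-phase rotation (the function name and the linear progression are assumptions), the N rotation angles could be computed as:

```kotlin
// Illustrative sketch: rotation angles that change continuously within a
// preset range, rising over frames 1..i and falling back over frames i..N.
fun rotationAngles(n: Int, i: Int, maxAngleDeg: Float): List<Float> {
    require(i in 2 until n) { "requires 1 < i < N" }
    return (1..n).map { frame ->
        if (frame <= i) {
            maxAngleDeg * (frame - 1) / (i - 1)  // phase 1: 0 -> maxAngle
        } else {
            maxAngleDeg * (n - frame) / (n - i)  // phase 2: maxAngle -> 0
        }
    }
}
```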
When the dynamic change type of the transition effect is stretching, determining N image adjustment parameters corresponding to N frame transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, including: and determining N stretching ratios corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect, wherein the N stretching ratios corresponding to the continuous N frames of transition images in the transition effect are gradually increased or decreased within a preset stretching ratio range.
It can be understood that, in the N-frame images of the transition effect, the sizes of the transition images of the 1 st frame transition image to the N-th frame transition image are different from each other.
For example, in the N frames of transition images, the size of the transition image of each subsequent frame increases relative to the size of the transition image of the previous frame, or decreases relative to it.
When the dynamic change type of the transition effect is transparency gradual change, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, including: and determining N transparencies corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect, wherein the N transparencies corresponding to the N frames of transition images in the transition effect gradually decrease in a preset transparency range.
It can be understood that, in the N-frame images of the transition effect, the transparency of the transition images of the 1 st frame to the N-th frame are different from each other.
For example, in the N frames of transition images, the transparency of the transition image of each subsequent frame increases relative to the transparency of the transition image of the previous frame, or decreases relative to it.
When the dynamic change type of the transition effect is a fuzzy degree gradient, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, including: and determining N ambiguities corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect, wherein N ambiguities corresponding to continuous N frames of transition images in the transition effect are gradually increased in a preset ambiguity range.
It can be understood that, in the N frames of images of the transition effect, from the 1st frame of transition image to the N-th frame of transition image, the blur degree of each subsequent frame increases relative to the blur degree of the previous frame.
When the dynamic change type of the transition effect is a scaling ratio, determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, including: and determining N scaling ratios corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect, wherein the N scaling ratios corresponding to the continuous N frames of transition images in the transition effect are gradually reduced at preset scaling ratios.
It can be understood that, in the N frames of images of the transition effect, from the 1st frame of transition image to the N-th frame of transition image, the transition images of two adjacent frames are scaled proportionally in both length and width.
For example, in the N frames of transition images, the size of the transition image of each subsequent frame is reduced in equal proportion relative to the size of the transition image of the previous frame.
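It may be observed that the stretching, transparency, blur, and scaling variants above all reduce to a monotonic sequence of N values within a preset range; only the endpoints and direction differ. A hypothetical common helper (linear progression assumed) could be:

```kotlin
// Illustrative sketch: one monotonic schedule covers the stretching-ratio,
// transparency, blur and scaling variants; endpoints are chosen per type.
fun monotonicSchedule(n: Int, from: Float, to: Float): List<Float> {
    require(n >= 2)
    return (0 until n).map { from + (to - from) * it / (n - 1) }
}

// e.g. transparency gradually decreasing from 100% to 0%:
val alphas = monotonicSchedule(30, 1.0f, 0.0f)
// e.g. blur gradually increasing within a preset range:
val blurs = monotonicSchedule(30, 0.0f, 1.0f)
```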
It should be understood that the above-mentioned dynamic variation type of the transition effect is only one possible implementation manner listed in the embodiments of the present application, and should not be taken as limiting the protection scope of the present application. The rotation and stretching are further described below.
Referring to fig. 5, a schematic diagram of a transition strategy is provided in an embodiment of the present application. Referring to fig. 6, a schematic diagram of a rotating transition effect generated according to the transition strategy shown in fig. 5 is provided. In this application scenario, the shooting mode before switching is the rear single-shot mode and the shooting mode after switching is the front single-shot mode, so the background picture is displayed in the display interface in the initial state (fig. 6A).
As shown in fig. 5, the total duration of the transition effect is 500 ms. During 0-300 ms, the background picture gradually rotates from 0 degrees to +36 degrees, as shown in figs. 6A-6D. Starting from 300 ms, the background picture and the foreground picture are displayed simultaneously in the display interface: the background picture is rotated by +36 degrees, the foreground picture is rotated by -36 degrees, and the foreground picture is layered above the background picture, as shown in fig. 6D. During 300-500 ms, the background picture gradually rotates from +36 degrees back to 0 degrees and the foreground picture gradually rotates from -36 degrees to 0 degrees, as shown in figs. 6D-6G. In fig. 6G, the foreground picture completely covers the background picture, so the background picture can be removed, leaving only the foreground picture, and the transition effect ends.
In addition, during 0-500 ms, the transparency of the background picture gradually decreases from 100% to 0%; during 300-500 ms, the transparency of the foreground picture gradually increases from 0% to 100%.
That is, when switching from the rear single-shot mode to the front single-shot mode, the background picture and the foreground picture rotate in opposite directions, and finally the foreground picture completely covers the background picture.
It will be appreciated that the transition effect is a dynamic effect that changes continuously over time, and fig. 6 shows only selected frames. A person skilled in the art may also set other rotation angles as needed, or add other effects while the picture rotates, such as adjusting the picture's blur during rotation. The embodiments of the present application are not limited in this regard.
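The fig. 5 timeline can be illustrated as a pure function of the elapsed time t; the layer model and all names below are illustrative assumptions, while the 36-degree, 300 ms, and 500 ms values come from the example above:

```kotlin
// Illustrative sketch of the Fig. 5 rotation timeline.
data class LayerState(val angleDeg: Float, val alpha: Float)

fun rotateTransitionAt(t: Int): Pair<LayerState, LayerState?> {
    val bgAngle = if (t < 300) 36f * t / 300f  // 0 -> +36 deg over 0-300 ms
                  else 36f * (500 - t) / 200f  // +36 -> 0 deg over 300-500 ms
    val bgAlpha = 1f - t / 500f                // background: 100% -> 0%
    val background = LayerState(bgAngle, bgAlpha)
    val foreground = if (t >= 300) LayerState(
        angleDeg = -36f * (500 - t) / 200f,    // -36 -> 0 deg over 300-500 ms
        alpha = (t - 300) / 200f               // foreground: 0% -> 100%
    ) else null                                // foreground appears at 300 ms
    return background to foreground
}
```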
Referring to fig. 7, a schematic diagram of another transition strategy is provided in an embodiment of the present application. Referring to fig. 8, a schematic diagram of a stretching transition effect generated according to the transition strategy shown in fig. 7 is provided. In this application scenario, the shooting mode before switching is the front-rear dual-shot mode; the shooting mode after switching is the front single-shot mode.
As shown in fig. 7, the total duration of the transition effect is 450 ms. During 0-150 ms, Gaussian blur is applied to the foreground picture and the background picture separately: the Gaussian blur value of the foreground picture gradually increases from 0 to 700, and that of the background picture gradually increases from 0 to 200. During 150-450 ms, the foreground picture is gradually stretched along the X-axis direction (the left-right direction when the phone picture is displayed horizontally) from its original position until it fills the whole display interface and covers the background picture, as shown in figs. 8A-8D. In the state shown in fig. 8D, the foreground picture completely covers the background picture, so the background picture can be removed, leaving only the foreground picture, and the transition effect ends.
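Similarly, the fig. 7 timeline can be sketched as a function of elapsed time; the blur endpoints (700 and 200) and phase boundaries come from the example, while the assumption that the foreground starts at half the screen width is illustrative:

```kotlin
// Illustrative sketch of the Fig. 7 blur-then-stretch timeline.
data class StretchState(val fgBlur: Float, val bgBlur: Float, val fgScaleX: Float)

fun stretchTransitionAt(t: Int): StretchState {
    val c = t.coerceIn(0, 450)
    val blurPhase = c.coerceAtMost(150) / 150f            // 0..1 over 0-150 ms
    val stretchPhase = (c - 150).coerceAtLeast(0) / 300f  // 0..1 over 150-450 ms
    return StretchState(
        fgBlur = 700f * blurPhase,              // Gaussian blur value 0 -> 700
        bgBlur = 200f * blurPhase,              // Gaussian blur value 0 -> 200
        fgScaleX = 0.5f + 0.5f * stretchPhase   // assumed: half screen -> full screen
    )
}
```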
Of course, those skilled in the art may add other image processing effects to the foreground and/or background images as desired, which are all within the scope of the present application.
It is understood that the transition effect may be related to the shooting modes before and after switching. For example, when the first shooting mode is the front-rear dual-shot mode and the second shooting mode is the front single-shot mode, the stretching effect is preferable: the foreground picture of the dual-shot mode is stretched to cover the background picture, and the mode then switches to the front single-shot mode.
Thus, in some possible implementations, the transition effect may be determined from the first shooting mode and the second shooting mode. In a specific implementation, a correspondence between switching-mode pairs and transition effects can be established, and the corresponding transition effect is then selected according to the shooting modes before and after switching.
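Such a correspondence could be kept, for example, in a simple lookup table; the enum values mirror the mode names used in this document, and the table entries below are merely illustrative:

```kotlin
// Illustrative sketch of a (first mode, second mode) -> transition effect table.
enum class ShootMode { FRONT_SINGLE, REAR_SINGLE, FRONT_REAR_DUAL, FRONT_REAR_PIP }
enum class TransitionType { ROTATE, STRETCH, ALPHA_FADE, BLUR_FADE, SCALE }

fun transitionFor(first: ShootMode, second: ShootMode): TransitionType =
    when (first to second) {
        // Fig. 5/6: rear single-shot -> front single-shot uses rotation
        ShootMode.REAR_SINGLE to ShootMode.FRONT_SINGLE -> TransitionType.ROTATE
        // Fig. 7/8: front-rear dual-shot -> front single-shot uses stretching
        ShootMode.FRONT_REAR_DUAL to ShootMode.FRONT_SINGLE -> TransitionType.STRETCH
        else -> TransitionType.ALPHA_FADE  // illustrative default
    }
```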
It will be appreciated that, after the transition effect is generated, it needs to be inserted into the display stream for display and into the encoding stream for encoding. To help a person skilled in the art better understand the technical solution, the generation and insertion of the transition effect are described below, taking the switch from single-path mode to two-way mode as an example.
Referring to fig. 9, a flow chart of a method for generating and inserting a transition effect according to an embodiment of the present application is shown. The method can be applied to the electronic device shown in fig. 1, and mainly comprises the following steps as shown in fig. 9.
Step S901: transition effect GL environment initialization.
In the embodiments of the present application, images are rendered by an Open Graphics Library (OpenGL) renderer. OpenGL is a cross-language, cross-platform application programming interface for rendering 2D and 3D graphics. In some descriptions, OpenGL may also simply be called "GL".
It can be appreciated that, before transition effects are generated by the OpenGL renderer, the transition-effect GL environment must be initialized. This initialization may include setting the texture size of the transition effect, applying for corresponding data buffers, and the like.
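On Android, one common way to set up such an environment is an off-screen EGL context plus a texture sized for the transition frames. The sketch below is only one possible setup and is not taken from the original disclosure:

```kotlin
import android.opengl.EGL14
import android.opengl.GLES20

// Illustrative sketch: off-screen EGL/GLES2 setup and a transition texture
// of the requested size ("initializing the texture size" of the GL env).
fun initTransitionGl(width: Int, height: Int): Int {
    val display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
    val version = IntArray(2)
    EGL14.eglInitialize(display, version, 0, version, 1)

    val configs = arrayOfNulls<android.opengl.EGLConfig>(1)
    val numConfigs = IntArray(1)
    val attribs = intArrayOf(
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
        EGL14.EGL_NONE
    )
    EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, numConfigs, 0)

    val context = EGL14.eglCreateContext(
        display, configs[0], EGL14.EGL_NO_CONTEXT,
        intArrayOf(EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE), 0
    )
    val surface = EGL14.eglCreatePbufferSurface(
        display, configs[0],
        intArrayOf(EGL14.EGL_WIDTH, width, EGL14.EGL_HEIGHT, height, EGL14.EGL_NONE), 0
    )
    EGL14.eglMakeCurrent(display, surface, surface, context)

    // Allocate the transition texture at the transition-effect size
    val tex = IntArray(1)
    GLES20.glGenTextures(1, tex, 0)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0])
    GLES20.glTexImage2D(
        GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null
    )
    return tex[0]
}
```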
Step S902: triggering the single-path-to-two-way-mode switching instruction.
During shooting, the user wants to switch the shooting mode from the single-path mode to the two-way mode and triggers a shooting mode switching operation, which instructs switching the single-path mode to the two-way mode.
Step S903: acquiring the last frame of video image of the single-path mode.
After the single-path-to-two-way-mode switching instruction is triggered, the last frame of video image of the single-path mode is acquired. This last frame serves as the transition image, and corresponding transformation processing is performed on it to generate the transition images.
Step S904: a frame of a transition image is generated.
Specifically, the OpenGL renderer may calculate, according to the transition strategy (transition effect duration, transition effect frame rate, transition effect, etc.), the image adjustment parameter corresponding to each frame of transition image in the transition effect, and render the corresponding transition frame texture according to that parameter, thereby generating one frame of transition image. The image adjustment parameters may include rotation angle, scaling, transparency, blur, displacement, etc. The OpenGL rendering process is described in detail below.
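As a sketch of step S904 (the uniform names uMvpMatrix and uAlpha are assumptions; the shader program and vertex setup are presumed done during initialization), one frame could be rendered as follows:

```kotlin
import android.opengl.GLES20
import android.opengl.Matrix

// Illustrative sketch: apply one frame's adjustment parameters (here a
// rotation angle and an alpha value) before drawing the transition texture.
fun drawTransitionFrame(program: Int, angleDeg: Float, alpha: Float, vertexCount: Int) {
    GLES20.glUseProgram(program)

    // Rotation about the Z axis, passed to the vertex shader as a matrix
    val mvp = FloatArray(16)
    Matrix.setRotateM(mvp, 0, angleDeg, 0f, 0f, 1f)
    val uMvp = GLES20.glGetUniformLocation(program, "uMvpMatrix")
    GLES20.glUniformMatrix4fv(uMvp, 1, false, mvp, 0)

    // Transparency, consumed by the fragment shader
    val uAlpha = GLES20.glGetUniformLocation(program, "uAlpha")
    GLES20.glUniform1f(uAlpha, alpha)

    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertexCount)
}
```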
Step S905: performing off-screen rendering on the transition image.
After a frame of transition image is generated, the transition image is rendered off-screen so that it can be displayed in the display interface. It can be understood that when the display interface displays the first frame of transition image, the display interface has effectively switched from the video picture shot in single-path mode to the transition effect picture.
After the generation of one frame of the transition image is completed, step S904 and step S905 are re-executed to generate the next frame of the transition image until the generation of N frames of the transition image is completed.
Step S906: judging whether single-path encoding has finished.
Since there is some buffering during encoding, encoding lags somewhat behind display. For example, if 20 frames are held in the buffer, then when the last frame of single-path video is displayed in the display interface, 20 frames of single-path video still remain in the buffer waiting to be encoded.
At this point, it is determined whether single-path encoding has finished. If it has, step S908 is performed to start encoding the transition effect frames; otherwise, the process proceeds to step S907 to continue single-path encoding.
Step S907: single-path encoding.
If single-path encoding has not finished, encoding of the video pictures shot in single-path mode continues.
Step S908: transition effect encoding.
If single-path encoding has finished, encoding of the transition effect frames begins.
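The decision in steps S906-S908 can be sketched as follows; Frame and encode() are hypothetical placeholders for the actual buffer entries and encoder hand-off:

```kotlin
// Illustrative sketch of the S906-S908 hand-off: drain the buffered
// single-path frames first, then encode the transition effect frames.
class Frame // hypothetical placeholder for one buffered image

fun encode(frame: Frame) {
    // hypothetical: hand the frame to the video encoder
}

fun encodeAcrossSwitch(singlePathBuffer: ArrayDeque<Frame>, transitionFrames: List<Frame>) {
    // S906/S907: single-path encoding continues until the buffer is drained
    while (singlePathBuffer.isNotEmpty()) {
        encode(singlePathBuffer.removeFirst())
    }
    // S908: only then are the transition effect frames encoded, so the file
    // contains the pre-switch video followed by the transition effect
    transitionFrames.forEach { encode(it) }
}
```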
Step S909: switching control (switch to the two-way mode).
Meanwhile, after the single-path-to-two-way-mode switching instruction is triggered in step S902, the switch to the two-way mode is initiated: for example, the two cameras of the two-way mode are turned on, the related two-way configuration is performed, and so on.
Step S910: monitoring the two-way video frames.
After switching to the two-way mode, the system begins monitoring whether two-way video frames are reported.
Step S911: initializing the two-way mode GL environment.
Since the two-way video frames need to be rendered and merged by the OpenGL renderer in the two-way mode, the two-way mode GL environment needs to be initialized.
Step S912: two-way video frame rendering and merging.
After the two-way video frames are detected in step S910, they are rendered and merged by the OpenGL renderer.
Step S913: two-way display.
After the rendering and merging of the two-way video frames is completed, the processed video frames are sent to the display interface for display. It can be understood that at this point the display interface switches from the transition effect picture to the video picture shot in two-way mode.
Step S914: two-way encoding.
Specifically, after encoding of the transition effect frames is completed, two-way encoding begins, that is, encoding of the rendered and merged video frames starts.
With the technical solution provided by the embodiments of the present application, a transition effect is generated after the shooting mode switching operation is received, and the transition effect bridges the display pictures before and after the switch during the interruption interval of the shooting mode switching process, providing the user with a smooth video shooting experience. In addition, the transition effect can be synchronously encoded into the generated video file, providing the user with a smooth video playback experience.
Referring to fig. 10, a software architecture block diagram of an electronic device is provided in an embodiment of the present application. The layered architecture divides the software into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. In some embodiments, the Android (Android) system is divided into four layers, namely an application layer, a framework layer, a hardware abstraction layer and a hardware layer from top to bottom.
The Application layer (App) may include a series of Application packages. For example, the application package may include a camera application. The application layer may be further divided into a display interface and application logic.
The display interface of the camera application includes a single-view mode, a dual-view mode, a picture-in-picture mode, and the like. In the single-view mode, only one shooting picture is displayed; in the dual-view mode, two shooting pictures are displayed side by side; in the picture-in-picture mode, two shooting pictures are displayed, one embedded inside the other.
The application logic of the camera application includes a switching control module, a transition control module, a multi-camera encoding module, and the like. The switching control module controls the switching of shooting modes; the transition control module generates the transition effect during the shooting mode switch; the multi-camera encoding module keeps encoding throughout the shooting mode switch to generate the video file.
The framework layer (FWK) provides an application programming interface (API) and a programming framework for the applications of the application layer, including some predefined functions. In fig. 10, the framework layer includes a camera access interface (Camera2 API), an interface introduced by Android for accessing camera devices; it adopts a pipeline design so that the data stream flows from the camera to the Surface. The Camera2 API includes camera management (CameraManager) and camera device (CameraDevice) classes. CameraManager is the management class for camera devices; through an object of this class, the camera device information of the device can be queried to obtain a CameraDevice object. CameraDevice provides a series of fixed parameters related to the camera device, such as its basic settings and output formats.
A hardware abstraction layer (HAL) is an interface layer between the operating system kernel and the hardware circuitry, whose purpose is to abstract the hardware. It hides the hardware interface details of a specific platform and provides a virtual hardware platform to the operating system, so that the operating system is hardware-independent and can be ported across a variety of platforms. In fig. 10, the HAL includes a camera hardware abstraction layer (Camera HAL), which comprises Device 1, Device 2, Device 3, and so on. It is understood that Device 1, Device 2, and Device 3 are abstract devices.
The hardware layer (HW) is the hardware at the lowest level of the operating system. In fig. 10, HW includes camera device 1, camera device 2, camera device 3, and so on, which may correspond to the multiple cameras on the electronic device.
Referring to fig. 11, a software structure block diagram of a transition control module according to an embodiment of the present application is provided. As shown in fig. 11, the transition control module includes a texture manager, a rendering engine, a renderer, and a shader library. The rendering engine comprises a display rendering engine and an encoding rendering engine.
The texture manager obtains the texture (image) data for the transition, i.e., the transition image, which is used to generate the transition effect. The renderer calculates, according to the transition policy (e.g., transition-effect duration, transition-effect frame rate, transition effect type), the image adjustment parameters corresponding to each transition frame in the transition effect, renders the corresponding transition frame texture according to those parameters, sends the transition frame to the display interface for display, and sends it to the encoder for encoding. The shader library works with the renderer's GPU shading program and may include a plurality of shaders, e.g., vertex shaders and fragment shaders. The display rendering engine drives the renderer to generate transition frame textures at a specified frame rate within a specified time interval and sends them to the display interface for display; the encoding rendering engine drives the renderer to generate transition frame textures at a specified frame rate within a specified time interval and sends them to the multi-camera encoding module for encoding.
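To make this division of labor concrete, the sketch below shows one plausible shape of the display rendering engine's drive loop, assuming a 500 ms transition at roughly 30 fps; the Renderer interface and both constants are assumptions, since the patent leaves the duration and frame rate to the transition policy.

    import android.os.Handler;
    import android.os.HandlerThread;
    import android.os.SystemClock;

    final class DisplayRenderingEngine {
        interface Renderer { void drawTransitionFrame(long elapsedMs); } // stand-in for fig. 11's renderer

        private static final long DURATION_MS = 500;      // transition-effect duration (assumed)
        private static final long FRAME_INTERVAL_MS = 33; // ~30 fps transition-effect frame rate (assumed)

        void start(final Renderer renderer) {
            HandlerThread thread = new HandlerThread("transition-display");
            thread.start();
            final Handler handler = new Handler(thread.getLooper());
            final long startMs = SystemClock.uptimeMillis();
            handler.post(new Runnable() {
                @Override public void run() {
                    long elapsed = SystemClock.uptimeMillis() - startMs;
                    renderer.drawTransitionFrame(elapsed); // render one transition frame texture
                    if (elapsed < DURATION_MS) {
                        handler.postDelayed(this, FRAME_INTERVAL_MS); // keep the specified frame rate
                    }
                }
            });
        }
    }

The encoding rendering engine would drive the same renderer in the same way, differing only in where the resulting textures are sent.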
Referring to fig. 12, a schematic diagram of the connection relationships between the switching control module, the transition control module, and the multi-camera encoding module according to an embodiment of the present application is provided. As shown in fig. 12, the switching control module is connected to the transition control module and notifies it to start the transition effect when a switch begins. The multi-camera encoding module provides a transition-effect recording interface to the transition control module, that is, the transition images generated by the transition control module can be sent to the multi-camera encoding module for encoding. The functions of the texture manager, rendering engine, renderer, and shader library are as described in the embodiment shown in fig. 11 and are not repeated here for brevity.
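These connection relationships can be summarized as two narrow interfaces; the sketch below is a hypothetical reading of fig. 12 (none of these names appear in the patent): the switching control module only needs a way to start the transition effect, and the multi-camera encoding module only needs to expose a recording entry point for transition frames.

    // Hypothetical interfaces mirroring fig. 12; names are illustrative.
    interface TransitionControlModule {
        void startTransitionEffect();        // called by the switching control module on mode switch
    }

    interface TransitionRecordingInterface { // exposed by the multi-camera encoding module
        void encodeTransitionFrame(int textureId, long presentationTimeUs);
    }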
Referring to fig. 13, a flowchart of another method for generating a transition effect according to an embodiment of the present application is shown. The method is applicable to the software architecture shown in fig. 12, and as shown in fig. 13, it mainly includes the following steps.
S1301: the switching control module sends a command for starting transition motion effect to the texture manager.
Specifically, after the user triggers the shooting mode switching operation, the switching control module sends a command for starting the transition effect to the texture manager.
S1302: the texture manager acquires a transition field image.
The texture manager may generate texture (image) data for the transition. And after receiving the instruction for starting the transition movement effect, the texture manager acquires a transition image, wherein the transition image is used for generating the transition movement effect.
In a specific implementation, the transition image may be an image in a video frame captured in the first capturing mode. The first shooting mode is a shooting mode before switching; the second shooting mode is a shooting mode after switching.
It can be understood that, in order for the transition effect to better bridge the first shooting mode and the second shooting mode, the transition image may be the last frame of the video captured in the first shooting mode, or any one or more frames captured in the first shooting mode.
S1303A: the texture manager sends a transition effect display starting instruction to the display rendering engine.
After the texture manager acquires the transition image, a transition effect display starting instruction is sent to the display rendering engine, so that the display rendering engine drives the renderer to generate transition frame textures according to a specified frame rate in a specified time interval, and the transition frame textures are sent to a display interface for display.
S1303B: the texture manager sends a start transition effect encoding instruction to the encoding rendering engine.
After obtaining the transition image, the texture manager sends a command for starting the transition effect coding to the coding rendering engine, so that the display rendering engine drives the renderer to generate transition frame textures according to a specified frame rate in a specified time interval, and the transition frame textures are sent to the multi-shot coding module for coding.
S1304A: the display rendering engine configures the renderer.
The renderer calculates, according to the transition policy (transition-effect duration, transition-effect frame rate, transition effect type, etc.), the image adjustment parameters corresponding to each transition frame, and renders the corresponding transition frame texture according to those parameters. The image adjustment parameters may include rotation angle, scaling, transparency, blur, displacement, and so on.
After receiving the start-transition-effect-display instruction, the display rendering engine configures the renderer, and the renderer can select the corresponding shaders, such as vertex shaders and fragment shaders, from the shader library.
S1304B: the encoding rendering engine configures the renderer.
After receiving the start-transition-effect-encoding instruction, the encoding rendering engine configures the renderer, and the renderer can select the corresponding shaders from the shader library.
S1305A: the display rendering engine drives the renderer to draw a frame of transition effect display image in the transition effect.
After the renderer is configured, the display rendering engine drives the renderer to draw one frame of the transition-effect display image. Specifically, the renderer calculates the image adjustment parameters of that frame according to the transition policy and the current time, adjusts the transition image according to those parameters, and draws one frame of the transition-effect display image. The transition-effect display image is an image of the transition-effect picture shown on the display interface while the device is switching from the first shooting mode to the second shooting mode.
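As a worked example of this parameter calculation (the concrete curves are assumptions; the patent only lists rotation, scaling, transparency, blur, and displacement as possible parameters), the sketch below maps the elapsed time to a progress value in [0, 1] and derives one frame's adjustments from it.

    // Illustrative per-frame parameter computation; the mapping is an assumed policy.
    final class TransitionParams {
        final float rotationDeg;
        final float scale;
        final float alpha;

        private TransitionParams(float rotationDeg, float scale, float alpha) {
            this.rotationDeg = rotationDeg;
            this.scale = scale;
            this.alpha = alpha;
        }

        static TransitionParams at(long elapsedMs, long durationMs) {
            float t = Math.min(1f, elapsedMs / (float) durationMs); // progress in [0, 1]
            return new TransitionParams(
                    90f * t,        // rotate up to 90 degrees over the transition
                    1f - 0.5f * t,  // scale down to half size
                    1f - t);        // fade out
        }
    }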
S1305B: the encoding rendering engine drives the renderer to draw a frame of the transition encoded image in the transition effect.
After the renderer is configured, the encoding rendering engine drives the renderer to draw one frame of the transition encoded image. Specifically, the renderer calculates the image adjustment parameters of that frame according to the transition policy and the current time, adjusts the transition image according to those parameters, and draws one frame of the transition encoded image.
S1306A: the renderer sends the transition display image to the display interface.
Specifically, after one frame of transition shooting display image is drawn, the renderer sends the transition image to the display interface for display.
S1306B: the renderer Xiang Duo camera coding module sends the transcoded image.
Specifically, the renderer sends the transition coded image to the multi-shot coding module for coding after drawing a frame of transition coded image.
It can be appreciated that the renderer keeps sending transition-effect display images to the display interface throughout the transition-effect duration, so that the transition effect is shown on the display interface, and keeps sending transition encoded images to the multi-camera encoding module, so that a video file containing the transition effect is generated and stored on the electronic device.
It will be appreciated that, in the above implementation, each frame of the transition-effect display image and each frame of the transition encoded image are generated separately according to the transition policy.
Because the transition-effect display images and the transition encoded images correspond one to one, in some possible implementations each frame of the transition-effect display image may be generated according to the transition policy, and each frame of the transition encoded image may then be determined from the corresponding transition-effect display image. That is, the transition encoded image directly copies the transition-effect display image, which reduces the computation spent on adjusting the transition image.
Referring to fig. 14, a flowchart of another method for generating a transition effect according to an embodiment of the present application is shown. The method is applicable to the software architecture shown in fig. 12, and as shown in fig. 14, it mainly includes the following steps.
S1401: the switching control module sends a command for starting transition motion effect to the texture manager.
Specifically, after the user triggers the shooting mode switching operation, the switching control module sends a command for starting the transition effect to the texture manager.
S1402: the texture manager acquires a transition field image.
The texture manager may generate texture (image) data for the transition. And after receiving the instruction for starting the transition movement effect, the texture manager acquires a transition image, wherein the transition image is used for generating the transition movement effect.
S1403: the texture manager sends a transition effect display starting instruction to the display rendering engine.
After the texture manager acquires the transition image, a transition effect display starting instruction is sent to the display rendering engine, so that the display rendering engine drives the renderer to generate transition frame textures according to a specified frame rate in a specified time interval, and the transition frame textures are sent to a display interface for display.
S1404: the texture manager sends a start transition effect encoding instruction to the encoding rendering engine.
After obtaining the transition image, the texture manager sends a command for starting the transition effect coding to the coding rendering engine, so that the display rendering engine drives the renderer to generate transition frame textures according to a specified frame rate in a specified time interval, and the transition frame textures are sent to the multi-shot coding module for coding.
S1405: the display rendering engine configures the renderer.
The renderer calculates, according to the transition policy (transition-effect duration, transition-effect frame rate, transition effect type, etc.), the image adjustment parameters corresponding to each transition frame, and renders the corresponding transition frame texture according to those parameters. The image adjustment parameters may include rotation angle, scaling, transparency, blur, displacement, and so on.
After receiving the start-transition-effect-display instruction, the display rendering engine configures the renderer, and the renderer can select the corresponding shaders, such as vertex shaders and fragment shaders, from the shader library.
S1406: the display rendering engine drives the renderer to draw a frame of transition display image in the transition effect.
After the configuration of the renderer is completed, the display rendering engine drives the renderer to draw a frame of transition display image in the transition effect. Specifically, the renderer can calculate an image adjustment parameter of a frame of transition display image according to the transition strategy and the current time, adjust the transition image according to the image adjustment parameter, and draw a frame of transition display image.
S1407: the renderer sends the transition display image to the display interface.
Specifically, after one frame of transition effect display image is drawn, the renderer sends the transition effect display image to the display interface for display.
S1408: the encoding rendering engine determines a transition encoded image from the transition display image.
In the embodiment of the application, the coding rendering engine does not drive the renderer to draw the transition coding image, but shares the drawing result of the display rendering engine, and copies the transition effect display image into the corresponding transition coding image.
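One plausible realization of this copy (an assumption; the patent does not specify the GL calls) is to render the transition-effect display image into a framebuffer object and then copy the result into the texture destined for the encoder.

    import android.opengl.GLES20;

    final class FrameCopier {
        // Copies the display frame, already rendered into displayFbo, into encodeTexture.
        static void copyDisplayFrame(int displayFbo, int encodeTexture, int width, int height) {
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, displayFbo); // read source
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, encodeTexture);
            // Copy the framebuffer contents into the pre-allocated encode texture.
            GLES20.glCopyTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        }
    }

An alternative under the same sharing arrangement is to pass the display texture ID directly to the encoding path without copying at all.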
S1409: the encoding rendering engine sends the transcoded encoded image to the multi-shot encoding module.
Specifically, after obtaining a transcoded encoded image, the transcoded encoded image is sent to a multi-camera encoding module for encoding.
Referring to fig. 15A, a schematic view of a rendering scene is provided in an embodiment of the present application. Fig. 15A describes the rendering process of an image, taking an OpenGL renderer as an example.
To process the display image and the encoded image separately, two rendering engines are generally provided, namely a display rendering engine and an encoding rendering engine; hereinafter, they are referred to as the OpenGL display rendering engine and the OpenGL encoding rendering engine, and both can call the OpenGL renderer to render images.
In the single-view mode, the OpenGL display rendering engine can monitor one path of video images through each of a first monitoring module and a second monitoring module; one of the two monitored streams is used for display rendering and the other for encoding rendering. Alternatively, a single monitoring module can be used: the monitored video images are display-rendered, and the display-rendered video images are then encoding-rendered. Specifically:
The OpenGL display rendering engine monitors the video images captured by the first camera through the first monitoring module and the second monitoring module. It passes the video images monitored by the first monitoring module to the OpenGL renderer, which transmits them to the display buffer for buffering, and passes the video images monitored by the second monitoring module to the OpenGL renderer, which transmits them to the encoding buffer. The video images buffered in the display buffer are transmitted to the display interface (SurfaceView) and displayed there. The OpenGL encoding rendering engine fetches the video images from the encoding buffer, performs the relevant rendering on them as needed, for example beautification or adding a watermark, and sends the rendered video images to the encoding module, so that the encoding module performs the corresponding encoding to generate a video file.
It should be noted that when the electronic device shoots video through a single camera, no special rendering of the video images is required, so the video images monitored by the first and second monitoring modules of the OpenGL display rendering engine need not pass through the OpenGL renderer; the images monitored by the first monitoring module may be transmitted directly to the display buffer, and those monitored by the second monitoring module directly to the encoding buffer.
In the dual-view mode or the picture-in-picture mode, the OpenGL display rendering engine monitors the video images captured by the first camera and the second camera through the first monitoring module and the second monitoring module, and passes the two monitored video streams, together with a composition policy, to the OpenGL renderer. The OpenGL renderer composes the two video streams into one video image according to the composition policy and transmits it to the display buffer for buffering. The video images buffered in the display buffer are transmitted to the display interface (SurfaceView) and to the encoding buffer, and are displayed on the display interface. The OpenGL encoding rendering engine fetches the video images from the encoding buffer, performs the relevant rendering on them as needed, for example beautification or adding a watermark, and sends the rendered video images to the encoding module, so that the encoding module performs the corresponding encoding to generate a video file.
It should be noted that in the above process, except for the video file generated by the encoding module, which is in MP4 format, all video images are in RGB format. That is, the video images monitored by the OpenGL display rendering engine are RGB images, and the video images output by the OpenGL renderer after rendering and composition are also RGB. Likewise, the video images buffered in the display buffer are RGB, as are those sent to the display interface and to the encoding buffer. The OpenGL encoding rendering engine obtains RGB video images, performs the relevant rendering on them according to the image rendering instruction entered by the user, and the rendered video images are still RGB. The encoding module receives the RGB video images and encodes them to generate a video file in MP4 format.
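On Android, a standard way to feed RGB frames rendered by OpenGL into an MP4-producing encoder is a MediaCodec with a Surface input, muxed by a MediaMuxer; the sketch below is an assumption about how the encoding module could be set up, since the patent does not name its codec APIs.

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;

    final class VideoEncoderFactory {
        // H.264 encoder whose input Surface receives the OpenGL-rendered RGB frames.
        static MediaCodec createEncoder(int width, int height, Surface[] inputSurfaceOut)
                throws IOException {
            MediaFormat format = MediaFormat.createVideoFormat(
                    MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

            MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            inputSurfaceOut[0] = encoder.createInputSurface(); // OpenGL renders into this Surface
            encoder.start();
            // The encoded output buffers are then written to an MP4 file by a MediaMuxer.
            return encoder;
        }
    }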
In the transition-effect application scenario, the OpenGL display rendering engine and the OpenGL encoding rendering engine each initialize a transition-effect rendering environment, i.e., a transition-effect OpenGL environment, for the corresponding OpenGL renderer; these are used to render the transition-effect display images and the transition encoded images, respectively. The initialized contents may include a timer thread, textures, and so on.
In another possible implementation, only the OpenGL display rendering engine initializes a transition-effect OpenGL environment for its OpenGL renderer, and that renderer renders the transition-effect display images. The OpenGL encoding rendering engine shares the transition-effect display images, generates the transition encoded images from them, and thereby encodes the transition encoded images.
Referring to fig. 15B, another schematic view of a rendering scene according to an embodiment of the present application is provided. It differs from fig. 15A in that, in the single-view mode, the OpenGL display rendering engine may monitor one path of video images of the electronic device through only one monitoring module. For example, the OpenGL display rendering engine monitors the video images captured by the first camera through the first monitoring module and passes them to the OpenGL renderer, which transmits them to the display buffer for buffering. The video images buffered in the display buffer are transmitted to the display interface, where they are displayed, and also to the encoding buffer. The OpenGL encoding rendering engine fetches the video images from the encoding buffer, performs the relevant rendering on them as needed, for example beautification or adding a watermark, and sends the rendered video images to the encoding module, so that the encoding module performs the corresponding encoding to generate a video file.
It should be noted that when the electronic device shoots video through a single camera, no special rendering of the video images is required, so the video images monitored by the first monitoring module of the OpenGL display rendering engine may be transmitted directly to the display buffer without passing through the OpenGL renderer; this is not limited in this application.
It should be noted that in figs. 15A and 15B, the OpenGL display rendering engine, the OpenGL renderer, and the display buffer in the single-view mode are the same as those in the dual-view mode. For ease of illustration, they are drawn in both the single-view and dual-view modes in figs. 15A and 15B.
Specifically, data sharing between the OpenGL display rendering engine and the OpenGL encoding rendering engine may be implemented through a SharedContext.
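Concretely, a SharedContext setup means passing the display context as the share-context argument when the encoding context is created, so that texture objects created under one context are usable under the other; display and config are assumed to come from an initialization like the one sketched after step S911 above.

    import android.opengl.EGL14;
    import android.opengl.EGLConfig;
    import android.opengl.EGLContext;
    import android.opengl.EGLDisplay;

    final class SharedContexts {
        static EGLContext createEncodingContext(EGLDisplay display, EGLConfig config,
                                                EGLContext displayContext) {
            int[] attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
            // Passing displayContext (instead of EGL_NO_CONTEXT) makes the two contexts
            // share textures, buffers, and other GL objects.
            return EGL14.eglCreateContext(display, config, displayContext, attribs, 0);
        }
    }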
The following describes the rendering process of the OpenGL renderer, taking the composition of two video images into one video image as an example.
Referring to fig. 16A, a schematic view of a video stream rendering and composition scene is provided in an embodiment of the present application. Fig. 16A shows one frame of video captured by the first camera and one frame captured by the second camera, each 1080 x 960. They are rendered and composed, according to their position information and texture information, into one 1080 x 1920 frame in the dual-view layout, i.e., the image captured by the first camera and the image captured by the second camera are displayed side by side. The composed image can be encoded by the encoder and displayed on the display interface.
Referring to fig. 16B, a schematic view of another video stream rendering and composition scene according to an embodiment of the present application is provided. Fig. 16B shows one frame of video captured by the first camera and one frame captured by the second camera. The video image captured by the first camera is 540 x 480, and that captured by the second camera is 1080 x 960. They are rendered and composed, according to their position information and texture information, into one image in the picture-in-picture layout.
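A common way to realize both layouts of figs. 16A and 16B (assumed here; the patent does not give the draw calls) is to draw each source texture into its own viewport rectangle of the target frame; QuadDrawer stands in for the renderer's textured-quad draw call, and the inset position is an assumption.

    import android.opengl.GLES20;

    final class Compositor {
        interface QuadDrawer { void draw(int textureId); } // stand-in for the renderer's draw call

        // Dual-view: two 1080x960 frames stacked into one 1080x1920 frame (fig. 16A).
        static void composeDualView(QuadDrawer q, int firstCamTex, int secondCamTex) {
            GLES20.glViewport(0, 960, 1080, 960); // upper half: first camera
            q.draw(firstCamTex);
            GLES20.glViewport(0, 0, 1080, 960);   // lower half: second camera
            q.draw(secondCamTex);
        }

        // Picture-in-picture: 1080x960 main frame with a 540x480 inset (fig. 16B).
        static void composePictureInPicture(QuadDrawer q, int mainTex, int insetTex) {
            GLES20.glViewport(0, 0, 1080, 960);   // full frame: second camera
            q.draw(mainTex);
            GLES20.glViewport(40, 440, 540, 480); // inset: first camera (position assumed)
            q.draw(insetTex);
        }
    }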
Referring to fig. 16C, a schematic view of a transition-effect rendering scene is provided in an embodiment of the present application. Fig. 16C shows one frame of a transition image; the transition image is rotated according to the transition policy to obtain one rendered transition frame.
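The rotation itself can be expressed as a model matrix built from the transition progress and handed to the vertex shader; the uniform name uMvpMatrix and the 90-degree end angle below are assumptions, not values given in the patent.

    import android.opengl.GLES20;
    import android.opengl.Matrix;

    final class RotationTransition {
        // Builds the rotation matrix for one transition frame; progress is in [0, 1].
        static float[] matrixFor(float progress) {
            float[] mvp = new float[16];
            Matrix.setRotateM(mvp, 0, 90f * progress, 0f, 0f, 1f); // rotate about the z axis
            return mvp;
        }

        // Per frame, in the renderer (program is the linked shader program):
        static void applyTo(int program, float progress) {
            int location = GLES20.glGetUniformLocation(program, "uMvpMatrix");
            GLES20.glUniformMatrix4fv(location, 1, false, matrixFor(progress), 0);
        }
    }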
It should be understood that the image sizes shown in fig. 16A-16C are only exemplary illustrations of embodiments of the present application and should not be taken as limiting the scope of the present application.
Corresponding to the above method embodiments, the present application further provides an electronic device comprising a memory for storing computer program instructions and a processor for executing those instructions; when the computer program instructions are executed by the processor, the electronic device is triggered to execute some or all of the steps in the above method embodiments.
Referring to fig. 17, a schematic structural diagram of an electronic device according to an embodiment of the present application is provided. As shown in fig. 17, the electronic device 1700 may include a processor 1710, an external memory interface 1720, an internal memory 1721, a universal serial bus (universal serial bus, USB) interface 1730, a charge management module 1740, a power management module 1741, a battery 1742, an antenna 1, an antenna 2, a mobile communication module 1750, a wireless communication module 1760, an audio module 1770, a speaker 1770A, a receiver 1770B, a microphone 1770C, an earphone interface 1770D, a sensor module 1780, keys 1790, a motor 1791, an indicator 1792, a camera 1793, a display 1794, and a subscriber identity module (subscriber identification module, SIM) card interface 1795, etc. The sensor module 1780 may include a pressure sensor 1780A, a gyroscope sensor 1780B, an air pressure sensor 1780C, a magnetic sensor 1780D, an acceleration sensor 1780E, a distance sensor 1780F, a proximity sensor 1780G, a fingerprint sensor 1780H, a temperature sensor 1780J, a touch sensor 1780K, an ambient light sensor 1780L, a bone conduction sensor 1780M, and the like.
It is to be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 1700. In other embodiments of the present application, the electronic device 1700 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 1710 can include one or more processing units, such as: processor 1710 can include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 1710 for storing instructions and data. In some embodiments, the memory in the processor 1710 is a cache. It may hold instructions or data that the processor 1710 has just used or uses cyclically. If the processor 1710 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 1710, and thus improves the efficiency of the system.
In some embodiments, the processor 1710 can include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 1710 may contain multiple sets of I2C buses. The processor 1710 may be coupled to the touch sensor 1780K, a charger, a flash, the camera 1793, and so on, through different I2C bus interfaces. For example, the processor 1710 may couple the touch sensor 1780K through an I2C interface, so that the processor 1710 communicates with the touch sensor 1780K through the I2C bus interface, implementing the touch function of the electronic device 1700.
The I2S interface may be used for audio communication. In some embodiments, the processor 1710 may contain multiple sets of I2S buses. The processor 1710 may be coupled with the audio module 1770 through an I2S bus to enable communication between the processor 1710 and the audio module 1770. In some embodiments, the audio module 1770 may communicate audio signals to the wireless communication module 1760 via the I2S interface to enable phone calls to be received via the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 1770 and the wireless communication module 1760 may be coupled through a PCM bus interface. In some embodiments, the audio module 1770 may also communicate audio signals to the wireless communication module 1760 via the PCM interface to enable phone calls to be received via the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 1710 with the wireless communication module 1760. For example: the processor 1710 communicates with the bluetooth module in the wireless communication module 1760 through a UART interface, implementing a bluetooth function. In some embodiments, the audio module 1770 may communicate audio signals to the wireless communication module 1760 via a UART interface to implement a function of playing music via a bluetooth headset.
The MIPI interface may be used to connect processor 1710 with peripheral devices such as display 1794, camera 1793, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 1710 and camera 1793 communicate through a CSI interface, implementing the shooting functionality of electronic device 1700. The processor 1710 and the display 1794 communicate via a DSI interface to implement the display functionality of the electronic device 1700.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, GPIO interfaces may be used to connect the processor 1710 with the camera 1793, display 1794, wireless communication module 1760, audio module 1770, sensor module 1780, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 1730 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 1730 may be used to connect a charger to charge the electronic device 1700, or to transfer data between the electronic device 1700 and a peripheral device. It can also connect a headset to play audio through the headset, and may further be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present invention is only illustrative, and does not limit the structure of the electronic device 1700. In other embodiments of the present application, the electronic device 1700 may also employ different interfacing manners, or a combination of multiple interfacing manners, as in the above embodiments.
The charging management module 1740 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 1740 may receive the charging input of a wired charger through the USB interface 1730. In some wireless charging embodiments, the charging management module 1740 may receive a wireless charging input through a wireless charging coil of the electronic device 1700. While charging the battery 1742, the charging management module 1740 may also supply power to the electronic device through the power management module 1741.
The power management module 1741 is for connecting the battery 1742, the charge management module 1740 and the processor 1710. The power management module 1741 receives input from the battery 1742 and/or the charge management module 1740 and provides power to the processor 1710, the internal memory 1721, the display 1794, the camera 1793, and the wireless communication module 1760, among others. The power management module 1741 may also be used to monitor battery capacity, battery cycle times, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 1741 may also be provided in the processor 1710. In other embodiments, the power management module 1741 and the charge management module 1740 may be provided in the same device.
The wireless communication functions of the electronic device 1700 may be implemented by antenna 1, antenna 2, mobile communication module 1750, wireless communication module 1760, modem processor, baseband processor, and so on.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 1700 may be used to cover a single or multiple communication frequency bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1750 may provide a solution for wireless communication, including 2G/3G/4G/5G, as applied to the electronic device 1700. The mobile communication module 1750 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 1750 may receive electromagnetic waves from the antenna 1, filter, amplify, and the like the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 1750 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves to radiate through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 1750 may be disposed in the processor 1710. In some embodiments, at least some of the functional modules of the mobile communication module 1750 may be disposed in the same device as at least some of the modules of the processor 1710.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 1770A, receiver 1770B, etc.), or displays images or video through a display screen 1794. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 1750 or other functional module, independent of the processor 1710.
The wireless communication module 1760 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 1700. The wireless communication module 1760 may be one or more devices that integrate at least one communication processing module. The wireless communication module 1760 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 1710. The wireless communication module 1760 may also receive a signal to be transmitted from the processor 1710, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 1750 of electronic device 1700 are coupled, and antenna 2 and wireless communication module 1760 are coupled, such that electronic device 1700 may communicate with networks and other devices via wireless communication technologies. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 1700 implements display functions through a GPU, a display 1794, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 1794 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1710 may include one or more GPUs that execute program instructions to generate or change display information.
The display 1794 is used to display images, videos, and the like. The display 1794 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 1700 may include 1 or N displays 1794, N being a positive integer greater than 1.
The electronic device 1700 may implement shooting functions through an ISP, a camera 1793, a video codec, a GPU, a display 1794, an application processor, and the like.
The ISP is used to process the data fed back by the camera 1793. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 1793.
Camera 1793 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 1700 may include 1 or N cameras 1793, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 1700 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 1700 may support one or more video codecs, so that it can play or record video in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG 2, MPEG 3, and MPEG 4.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 1700 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 1720 may be used to connect external memory cards, such as Micro SD cards, to enable expansion of the memory capabilities of the electronic device 1700. The external memory card communicates with the processor 1710 via an external memory interface 1720 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 1721 may be used to store computer executable program code including instructions. The internal memory 1721 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 1700 (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 1721 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash memory (universal flash storage, UFS), etc. The processor 1710 executes various functional applications of the electronic device 1700 and data processing by executing instructions stored in the internal memory 1721 and/or instructions stored in a memory provided in the processor.
The electronic device 1700 may implement audio functions through an audio module 1770, a speaker 1770A, a receiver 1770B, a microphone 1770C, an earphone interface 1770D, and an application processor, among others. Such as music playing, recording, etc.
The audio module 1770 is used to convert digital audio information to an analog audio signal output and also to convert an analog audio input to a digital audio signal. The audio module 1770 may also be used to encode and decode audio signals. In some embodiments, the audio module 1770 may be disposed in the processor 1710, or some functional modules of the audio module 1770 may be disposed in the processor 1710.
Speaker 1770A, also known as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 1700 may listen to music, or hands-free conversation, through the speaker 1770A.
A receiver 1770B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 1700 answers a phone call or plays a voice message, the voice can be heard by placing the receiver 1770B close to the human ear.
A microphone 1770C, also referred to as a "mic" or "voice transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 1770C to input the sound signal into it. The electronic device 1700 may be provided with at least one microphone 1770C. In other embodiments, the electronic device 1700 may be provided with two microphones 1770C, which enable noise reduction in addition to collecting sound signals. In still other embodiments, the electronic device 1700 may be provided with three, four, or more microphones 1770C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The earphone interface 1770D is used to connect a wired earphone. The earphone interface 1770D may be the USB interface 1730, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 1780A is configured to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 1780A may be disposed on the display 1794. There are many types of pressure sensors, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor, for example, comprises at least two parallel plates of conductive material; when a force is applied to the pressure sensor 1780A, the capacitance between the electrodes changes, and the electronic device 1700 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display 1794, the electronic device 1700 detects the intensity of the touch operation through the pressure sensor 1780A, and can also calculate the location of the touch from the detection signal of the pressure sensor 1780A. In some embodiments, touch operations that act on the same touch location but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is below a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyro sensor 1780B may be used to determine the motion attitude of the electronic device 1700. In some embodiments, the angular velocities of the electronic device 1700 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 1780B. The gyro sensor 1780B may be used for image stabilization during shooting. For example, when the shutter is pressed, the gyro sensor 1780B detects the shake angle of the electronic device 1700, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 1700 through reverse motion, thereby achieving stabilization. The gyro sensor 1780B may also be used in navigation and motion-sensing game scenarios.
The air pressure sensor 1780C is used to measure air pressure. In some embodiments, the electronic device 1700 calculates altitude, aids in positioning and navigation, from barometric pressure values measured by the barometric pressure sensor 1780C.
The magnetic sensor 1780D includes a Hall sensor. The electronic device 1700 may use the magnetic sensor 1780D to detect the opening and closing of a flip holster. In some embodiments, when the electronic device 1700 is a flip device, the electronic device 1700 may detect the opening and closing of the flip according to the magnetic sensor 1780D, and then set features such as automatic unlocking upon flip opening according to the detected opening or closing state of the holster or of the flip.
The acceleration sensor 1780E may detect the magnitude of acceleration of the electronic device 1700 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 1700 is stationary. It may also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers, and similar applications.
A distance sensor 1780F for measuring distance. The electronic device 1700 may measure distance by infrared or laser. In some embodiments, shooting a scene, the electronic device 1700 may range using the distance sensor 1780F to achieve fast focus.
The proximity light sensor 1780G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 1700 emits infrared light outward through the light emitting diode. The electronic device 1700 uses a photodiode to detect infrared reflected light from a nearby object. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 1700. When insufficient reflected light is detected, the electronic device 1700 may determine that there is no object in the vicinity of the electronic device 1700. The electronic device 1700 may detect that the user holds the electronic device 1700 in close proximity to the ear using the proximity sensor 1780G to automatically extinguish the screen for power saving purposes. The proximity light sensor 1780G can also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 1780L is used to sense ambient light. The electronic device 1700 may adaptively adjust the brightness of the display 1794 based on the perceived ambient light level. The ambient light sensor 1780L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 1780L may also cooperate with proximity light sensor 1780G to detect if electronic device 1700 is in a pocket to prevent false touches.
The fingerprint sensor 1780H is used to collect a fingerprint. The electronic device 1700 may utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 1780J detects temperature. In some embodiments, the electronic device 1700 performs a temperature processing strategy using the temperature detected by the temperature sensor 1780J. For example, when the temperature reported by temperature sensor 1780J exceeds a threshold, electronic device 1700 performs a reduction in performance of a processor located in proximity to temperature sensor 1780J in order to reduce power consumption to implement thermal protection. In other embodiments, when the temperature is below another threshold, electronic device 1700 heats battery 1742 to avoid low temperatures causing electronic device 1700 to shut down abnormally. In other embodiments, when the temperature is below a further threshold, the electronic device 1700 performs boosting of the output voltage of the battery 1742 to avoid abnormal shutdown due to low temperatures.
Touch sensor 1780K, also referred to as a "touch device". The touch sensor 1780K may be disposed on the display 1794, and the touch sensor 1780K and the display 1794 form a touch screen, which is also referred to as a "touch screen". The touch sensor 1780K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 1794. In other embodiments, the touch sensor 1780K may also be disposed on a surface of the electronic device 1700 at a different location than the display 1794.
The bone conduction sensor 1780M may acquire a vibration signal. In some embodiments, the bone conduction sensor 1780M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 1780M may also contact the human pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 1780M may also be disposed in an earphone, forming a bone conduction earphone. The audio module 1770 may parse out a voice signal from the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 1780M, implementing a voice function. The application processor may parse heart rate information from the blood pressure beating signal acquired by the bone conduction sensor 1780M, implementing a heart rate detection function.
The keys 1790 include a power key, volume keys, and the like. The keys 1790 may be mechanical keys or touch keys. The electronic device 1700 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1700.
The motor 1791 may generate a vibration alert. The motor 1791 may be used for incoming-call vibration alerts as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playback) may correspond to different vibration feedback effects. Touch operations acting on different areas of the display 1794 may also correspond to different vibration feedback effects of the motor 1791. Different application scenarios (e.g., time reminders, receiving messages, alarm clocks, games) may likewise correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 1792 may be an indicator light and may be used to indicate a charging state, a change in battery level, a message, a missed call, a notification, and the like.
The SIM card interface 1795 is used to connect a SIM card. A SIM card may be inserted into or removed from the SIM card interface 1795 to make contact with or separate from the electronic device 1700. The electronic device 1700 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 1795 may support Nano-SIM cards, Micro-SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 1795 at the same time; the cards may be of the same type or of different types. The SIM card interface 1795 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 1700 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 1700 uses an eSIM, that is, an embedded SIM card. The eSIM card may be embedded in the electronic device 1700 and cannot be separated from it.
In a specific implementation, the present application further provides a computer storage medium. The computer storage medium may store a program that, when run, controls the device on which the computer-readable storage medium is located to execute some or all of the steps in the foregoing embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
In the embodiments of the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist together, or that B exists alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b, and c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the systems, apparatuses, and units described above; they are not repeated here.
In the several embodiments provided by the present invention, any function, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part of it contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely exemplary embodiments of the present invention. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall be covered by the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method for generating a transition dynamic effect during video recording, applied to a terminal device, characterized by comprising:
starting video recording in a first shooting mode according to a user operation, and shooting a first video image, wherein the first video image comprises a first picture shot by a first camera; in the first shooting mode, an OpenGL display rendering engine monitors one video stream through each of a first monitoring module and a second monitoring module, one of the two monitored video streams being used for display rendering and the other being used for encoding rendering;
receiving a shooting mode switching operation, wherein the shooting mode switching operation is used to instruct switching the first shooting mode to a second shooting mode, and the first shooting mode and the second shooting mode shoot based on different cameras; the second shooting mode is used to shoot a second video image, the second video image comprises the first picture shot by the first camera and a second picture shot by a second camera, and the second camera is different from the first camera;
acquiring a transition image, wherein the transition image is related to the video image shot in the first shooting mode and comprises at least one image from the video shot in the first shooting mode;
determining image adjustment parameters according to a transition strategy, wherein the transition strategy comprises a transition effect duration, a transition effect frame rate, or a dynamic change type of the transition effect, and the transition effect duration is matched with the cut-off time of switching the first shooting mode to the second shooting mode;
adjusting the transition image according to the image adjustment parameters to generate a transition effect, wherein the transition effect comprises at least two frames of different transition images;
wherein the determining image adjustment parameters according to the transition strategy and the adjusting the transition image according to the image adjustment parameters comprise:
the OpenGL renderer calculates, according to the transition strategy, the image adjustment parameter corresponding to each frame of transition image in the transition effect, and renders the corresponding transition frame texture according to the image adjustment parameter to generate one frame of transition image;
wherein the method further comprises:
after switching to the second shooting mode, monitoring the second video image, displaying the second video image, and encoding the second video image;
wherein the monitoring the second video image and displaying the second video image comprises:
monitoring the first picture and the second picture, and rendering and combining the first picture and the second picture to obtain the second video image;
wherein in the second shooting mode, the OpenGL display rendering engine monitors the video images captured by the first camera and the second camera through the first monitoring module and the second monitoring module respectively, and transmits the two monitored video streams together with a synthesis strategy to the OpenGL renderer, and the OpenGL renderer synthesizes the two video streams into one video image according to the synthesis strategy and transmits the synthesized video image to a display buffer for buffering.
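For illustration only (not part of the claims): the flow above reduces to computing one adjustment parameter per transition frame and rendering that frame. The following is a minimal Kotlin sketch under assumed names (TransitionPolicy, renderFrame, generateTransition) and assumed linear schedules; it is not the patented implementation.

```kotlin
// Hypothetical names throughout; a sketch of the claimed flow, not the
// patented implementation.
enum class ChangeType { ROTATION, STRETCH, ALPHA, BLUR, SCALE }

data class TransitionPolicy(
    val durationMs: Long,       // transition effect duration (matched to the cut-off time)
    val frameRate: Int,         // transition effect frame rate
    val changeType: ChangeType  // dynamic change type of the transition effect
)

// Stand-in for the OpenGL renderer: a real pipeline would render a transition
// frame texture here; this sketch only records the per-frame parameter.
fun renderFrame(progress: Float, type: ChangeType, value: Float): String =
    "frame@%.2f %s=%.2f".format(progress, type, value)

fun generateTransition(policy: TransitionPolicy): List<String> {
    // N = duration x frame rate, and the effect has at least two frames.
    val n = maxOf(2, (policy.durationMs * policy.frameRate / 1000).toInt())
    return (1..n).map { i ->
        val t = i / n.toFloat()                 // normalized progress in (0, 1]
        val value = when (policy.changeType) {  // assumed linear schedules
            ChangeType.ALPHA -> 1f - t          // transparency fades out
            ChangeType.BLUR  -> t               // blur ramps up
            ChangeType.SCALE -> 1f - 0.5f * t   // shrinks toward half size
            else             -> t
        }
        renderFrame(t, policy.changeType, value)
    }
}

fun main() {
    // A 400 ms cut-off at 30 fps yields 12 transition frames.
    generateTransition(TransitionPolicy(400, 30, ChangeType.ALPHA)).forEach(::println)
}
```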
2. The method of claim 1, wherein the determining image adjustment parameters according to a transition strategy comprises:
determining the number N of transition images in the transition effect according to the transition effect duration and the transition effect frame rate, wherein N is greater than or equal to 2;
and determining N image adjustment parameters corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of the transition images in the transition effect.
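A worked illustration of this computation, with assumed numbers: a 500 ms transition effect duration at a 30 fps transition effect frame rate gives N = 15.

```kotlin
// Claim 2's frame count, with the claim's floor of N >= 2; numbers illustrative.
fun transitionFrameCount(durationMs: Long, frameRate: Int): Int =
    maxOf(2, (durationMs * frameRate / 1000).toInt())

fun main() {
    println(transitionFrameCount(500, 30)) // 500 ms at 30 fps -> N = 15
    println(transitionFrameCount(40, 24))  // a very short cut-off still yields N = 2
}
```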
3. The method of claim 2, wherein the adjusting the transition image according to the image adjustment parameters to generate a transition effect comprises:
adjusting the transition image according to the N image adjustment parameters respectively, to generate N frames of transition images.
4. The method of claim 2, wherein the dynamic change type of the transition effect comprises at least one of the following:
rotation, stretching, transparency gradient, blur gradient, or scaling.
5. The method according to claim 4, wherein when the dynamic change type of the transition effect is rotation, the determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect comprises:
determining N rotation angles corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, wherein the N rotation angles corresponding to the N frames of transition images in the transition effect change continuously within a preset angle range.
6. The method of claim 5, wherein the N rotation angles corresponding to the N frames of transition images in the transition effect changing continuously within a preset angle range comprises:
the rotation angles corresponding to the 1st frame to the Nth frame of transition images in the transition effect gradually increase or decrease within the preset angle range, or first increase and then decrease, or first decrease and then increase.
7. The method of claim 5, wherein the N rotation angles corresponding to the N frames of transition images in the transition effect changing continuously within a preset angle range comprises:
the rotation angles corresponding to the 1st frame to the ith frame of transition images in the transition effect gradually increase or decrease within the preset angle range, and the rotation angles corresponding to the ith frame to the Nth frame of transition images in the transition effect gradually decrease or increase within the preset angle range, wherein 1 < i < N.
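For claims 6 and 7, the angle schedules can be illustrated as follows; the 0 to 90 degree range and the pivot index are assumed values, not fixed by the patent.

```kotlin
// Illustrative rotation schedules; the 0 to 90 degree range and the pivot
// index are assumptions.
fun monotoneAngles(n: Int, maxDeg: Float = 90f): List<Float> =
    (1..n).map { it * maxDeg / n }  // claim 6: angles steadily increase

fun pivotAngles(n: Int, pivot: Int, maxDeg: Float = 90f): List<Float> {
    require(pivot in 2 until n)     // claim 7 requires 1 < i < N
    val rise = (1..pivot).map { it * maxDeg / pivot }                    // frames 1..i
    val fall = (1..n - pivot).map { maxDeg - it * maxDeg / (n - pivot) } // frames i+1..N
    return rise + fall
}

fun main() {
    println(monotoneAngles(6))  // [15.0, 30.0, 45.0, 60.0, 75.0, 90.0]
    println(pivotAngles(6, 3))  // [30.0, 60.0, 90.0, 60.0, 30.0, 0.0]
}
```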
8. The method according to claim 4, wherein when the dynamic change type of the transition effect is stretching, the determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect comprises:
determining N stretch ratios corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, wherein the N stretch ratios corresponding to the N consecutive frames of transition images in the transition effect gradually increase or decrease within a preset stretch ratio range.
9. The method according to claim 4, wherein when the dynamic change type of the transition effect is transparency gradient, the determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect comprises:
determining N transparency values corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, wherein the N transparency values corresponding to the N frames of transition images in the transition effect gradually decrease within a preset transparency range.
10. The method according to claim 4, wherein when the dynamic change type of the transition effect is blur gradient, the determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect comprises:
determining N blur degrees corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, wherein the N blur degrees corresponding to the N consecutive frames of transition images in the transition effect gradually increase within a preset blur degree range.
11. The method according to claim 4, wherein when the dynamic change type of the transition effect is scaling, the determining N image adjustment parameters corresponding to N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect comprises:
determining N scaling ratios corresponding to the N frames of transition images in the transition effect according to the dynamic change type of the transition effect and the number N of transition images in the transition effect, wherein the N scaling ratios corresponding to the N consecutive frames of transition images in the transition effect gradually decrease within a preset scaling ratio range.
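Claims 8 through 11 each describe a monotone per-frame ramp of a single parameter, so one linear schedule suffices to illustrate all four; the start and end values below are assumed presets, not patented parameters.

```kotlin
// A single linear ramp covers the monotone schedules of claims 8 to 11;
// start and end values are assumptions.
fun schedule(n: Int, from: Float, to: Float): List<Float> =
    (1..n).map { from + (to - from) * it / n }

fun main() {
    val n = 10
    println(schedule(n, 1.0f, 1.3f)) // stretch ratio gradually increases (claim 8)
    println(schedule(n, 1.0f, 0.0f)) // transparency gradually decreases (claim 9)
    println(schedule(n, 0.0f, 1.0f)) // blur degree gradually increases (claim 10)
    println(schedule(n, 1.0f, 0.5f)) // scaling ratio gradually decreases (claim 11)
}
```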
12. The method of claim 1, wherein the transition image comprises the last frame of the video shot in the first shooting mode.
13. The method of claim 1, wherein the first shooting mode and/or the second shooting mode is one of the following shooting modes:
a front single-camera mode, a rear single-camera mode, a front dual-camera mode, a rear dual-camera mode, a front picture-in-picture mode, a rear picture-in-picture mode, and a front-rear picture-in-picture mode;
wherein the first shooting mode and the second shooting mode are different.
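An illustrative enumeration of these modes (names hypothetical); per this claim, a transition applies only when the first and second shooting modes differ.

```kotlin
// Hypothetical names for the shooting modes of claim 13.
enum class ShootingMode {
    FRONT_SINGLE, REAR_SINGLE, FRONT_DUAL, REAR_DUAL,
    FRONT_PIP, REAR_PIP, FRONT_REAR_PIP
}

fun needsTransition(first: ShootingMode, second: ShootingMode): Boolean =
    first != second  // the claim requires the two modes to be different

fun main() {
    println(needsTransition(ShootingMode.REAR_SINGLE, ShootingMode.FRONT_REAR_PIP)) // true
}
```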
14. An electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of claims 1-13.
15. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program, when run, controls a device in which the computer readable storage medium is located to perform the method of any one of claims 1-13.
CN202110682681.4A 2021-06-16 2021-06-16 Method, device and storage medium for generating transition dynamic effect Active CN113596321B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110682681.4A | 2021-06-16 | 2021-06-16 | Method, device and storage medium for generating transition dynamic effect

Publications (2)

Publication Number | Publication Date
CN113596321A | 2021-11-02
CN113596321B | 2023-05-09

Family

ID=78244105


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant