WO2021104508A1 - Video shooting method and electronic device - Google Patents

Video shooting method and electronic device

Info

Publication number
WO2021104508A1
WO2021104508A1 · PCT/CN2020/132547 · CN2020132547W
Authority
WO
WIPO (PCT)
Prior art keywords
image
video
electronic device
area
movement mode
Prior art date
Application number
PCT/CN2020/132547
Other languages
English (en)
French (fr)
Inventor
李远友
罗巍
陈彬
朱聪超
胡斌
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to JP2022531504A (JP7450035B2)
Priority to EP20894021.3A (EP4044581A4)
Priority to CN202080082673.XA (CN115191110B)
Priority to KR1020227018058A (KR102709021B1)
Priority to US17/780,872 (US11856286B2)
Priority to CN202310692606.5A (CN117241135A)
Publication of WO2021104508A1

Classifications

    • H04N23/632 Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/45 Generating image signals from two or more image sensors of different type or operating in different modes, e.g. a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/633 Electronic viewfinders displaying additional information relating to control or operation of the camera
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/698 Control for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N5/265 Studio circuits for mixing and special effects
    • H04N5/77 Interface circuits between a recording apparatus and a television camera
    • H04N5/772 Interface circuits where the recording apparatus and the television camera are placed in the same enclosure
    • H04M1/72454 User interfaces with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H04M1/7243 User interfaces with interactive means for internal management of messages
    • H04M2250/52 Details of telephonic subscriber devices including functional features of a camera
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/34 Indicating arrangements

Definitions

  • This application relates to the field of image shooting technology, and in particular to a video shooting method and an electronic device.
  • The implementation process of "panning the lens" can be seen, for example, in (a) of FIG. 1: the mobile phone rotates around a central axis. The central axis is, for example, the perpendicular bisector of the short side of the mobile phone, in the plane of the display screen. If the mobile phone rotates to the left around the central axis, the lens pans to the left, as shown in (b) of FIG. 1; if it rotates to the right, the lens pans to the right, as shown in (c) of FIG. 1.
  • The purpose of this application is to provide a video shooting method and an electronic device that make it more convenient to realize moving-lens shooting techniques, such as shifting or panning the lens, with a mobile phone.
  • In a first aspect, a video shooting method applied to an electronic device is provided, including: activating a camera function; in response to a user's first operation, determining a first video recording template, where the first video recording template includes a first example sample, a second example sample, and preset audio, the first example sample corresponds to a first lens movement mode, the second example sample corresponds to a second lens movement mode, and the first and second lens movement modes are different; displaying a video recording interface that includes a first lens movement mode identifier and a second lens movement mode identifier; in response to the user's second operation, starting recording while the position of the electronic device is kept still; and automatically generating a composite video.
  • The composite video includes a first video segment, a second video segment, and the preset audio; the first video segment is generated by the electronic device according to the first lens movement mode, and the second video segment is generated by the electronic device according to the second lens movement mode.
  • The composite video can be used directly, for example for uploading to social networks or sending to contacts, without complicated video processing; the method is easy to operate and provides a good user experience.
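The template structure described above can be sketched as a small data model: an ordered list of lens movement modes, each paired with a preset clip duration, plus a bundled audio track. The sketch below is purely illustrative; all names and fields are assumptions, not taken from the patent.

```python
# Hypothetical model of a "video recording template"; names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class LensMove:
    name: str                 # e.g. "pan_left", "shift_right"
    duration_s: float         # preset clip duration for this movement mode

@dataclass
class RecordingTemplate:
    moves: List[LensMove]     # ordered lens movement modes
    audio: str                # preset audio track bundled with the template

def compose(template: RecordingTemplate, clips: List[str]) -> dict:
    """Concatenate clips in template order and attach the preset audio."""
    if len(clips) != len(template.moves):
        raise ValueError("one recorded clip is expected per lens movement mode")
    return {"segments": clips, "audio": template.audio}

template = RecordingTemplate(
    moves=[LensMove("shift_right", 3.0), LensMove("pan_left", 3.0)],
    audio="preset_track.mp3",
)
video = compose(template, ["clip_shift.mp4", "clip_pan.mp4"])
print(video["segments"])  # clips appear in template order
```

The point of the sketch is only the pairing: one recorded clip per lens movement mode, with the preset audio attached at composition time.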
  • Keeping the position of the electronic device still and starting recording includes: when the first lens movement mode identifier is selected, generating the first video segment according to the first lens movement mode in response to the user's shooting instruction, where the duration of the first video segment is a first preset duration; and when the second lens movement mode identifier is selected, generating the second video segment according to the second lens movement mode in response to the user's shooting instruction, where the duration of the second video segment is a second preset duration.
  • The user can control when recording starts and/or stops.
  • The first video recording template includes multiple lens movement modes, and the video duration corresponding to each lens movement mode can be a preset fixed duration, i.e., shooting stops when the preset duration is reached; alternatively, the duration need not be preset.
  • The user controls the phone to start and stop recording in the first lens movement mode through the shooting controls in the viewfinder interface.
  • When the first video segment is generated according to the first lens movement mode, the recording interface also displays a countdown for generating the first video segment; when the second video segment is generated according to the second lens movement mode, the recording interface also displays a countdown for generating the second video segment.
  • The electronic device can display a countdown of the recording time so that the user can track the recording progress (for example, the time remaining), giving a better interactive experience.
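The countdown for a fixed-duration clip reduces to a simple remaining-time computation; a minimal sketch, assuming a preset duration per clip (the function name is hypothetical):

```python
# Remaining recording time for the current lens-movement clip.
def countdown(preset_s: float, elapsed_s: float) -> float:
    """Time left before the clip with a preset duration stops recording."""
    return max(0.0, preset_s - elapsed_s)

print(round(countdown(3.0, 1.2), 1))  # 1.8 seconds remain of a 3 s clip
print(countdown(3.0, 5.0))            # 0.0 once the preset duration has passed
```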
  • The method further includes: displaying a video recording interface that includes the first lens movement mode identifier and the second lens movement mode identifier; in response to the user's third operation, deleting the first lens movement mode identifier or the second lens movement mode identifier; in response to the user's fourth operation, starting recording while the position of the electronic device is kept still; and automatically generating a composite video from the video segments generated according to the undeleted lens movement modes, where the composite video also includes the preset audio.
  • The user can delete a lens movement mode identifier, for example one corresponding to a lens movement mode the user does not like; the corresponding lens movement mode is then removed, and the composite video is generated from the video segments corresponding to the remaining lens movement mode identifiers.
  • The method further includes: displaying a video recording interface that includes the first lens movement mode identifier and the second lens movement mode identifier; in response to the user's third operation, adding a third lens movement mode identifier to the video recording interface, where the third lens movement mode identifier indicates a third lens movement mode; in response to the user's fourth operation, starting recording while the position of the electronic device is kept still; and automatically generating a composite video.
  • The composite video includes the first video segment, the second video segment, a third video segment, and the preset audio, where the third video segment is generated by the electronic device according to the third lens movement mode.
  • The method further includes: displaying a video recording interface that includes the first lens movement mode identifier and the second lens movement mode identifier; in response to the user's third operation, adjusting the display order of the first lens movement mode identifier and the second lens movement mode identifier to a first order; in response to the user's fourth operation, starting recording while the position of the electronic device is kept still; and automatically generating a composite video in which the playback order of the first video segment and the second video segment is the first order.
  • When the user adjusts the display order of the lens movement mode identifiers, the synthesis order of the video segments is adjusted accordingly, and so is the playback order of the two video segments in the composite video.
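The ordering rule above is mechanical: whatever order the identifiers are displayed in becomes the concatenation order of the clips. A hypothetical sketch (identifier names and clip paths are invented):

```python
# The display order of the movement-mode identifiers determines the
# playback order of the corresponding clips in the composite video.
def reorder(clips: dict, display_order: list) -> list:
    """Return clips in the order their mode identifiers are displayed."""
    return [clips[mode] for mode in display_order]

clips = {"pan_left": "clip_a.mp4", "shift_right": "clip_b.mp4"}
print(reorder(clips, ["shift_right", "pan_left"]))  # ['clip_b.mp4', 'clip_a.mp4']
```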
  • The first example sample and/or the second example sample are displayed in the video recording interface.
  • In the recording interface, the user can conveniently view the shooting effect of the first lens movement mode through the first example sample, and that of the second lens movement mode through the second example sample, giving a better interactive experience.
  • A display interface includes the first video segment and the second video segment; automatically generating the composite video includes synthesizing the video in response to a video synthesis instruction input by the user.
  • Before the video is synthesized, the user can view the first video segment and the second video segment separately; if the user is satisfied with both segments, the video is synthesized upon the user's trigger operation.
  • The method further includes: in response to the fourth operation, deleting the first video segment or the second video segment; or adding a local third video segment to the composite video; or adjusting the playback order of the first video segment or the second video segment in the composite video.
  • The user can delete a video segment, such as one the user finds unsatisfactory, add a locally stored video segment that the user likes, or adjust the playback order of the two video segments in the composite video.
  • The user can thus flexibly configure the composite video, giving a better interactive experience.
  • The first video recording template is a default template or a user-defined template.
  • The user can not only use the electronic device's default templates but also customize templates, for example creating a template the user personally likes, giving a better interactive experience.
  • The method further includes: automatically storing the first video segment, the second video segment, and the composite video.
  • The electronic device can automatically store each video segment as well as the video synthesized from them.
  • The user can view each individual video segment locally, or view the synthesized video.
  • For example, the user can upload individual video segments to social networks, or upload the synthesized video, for a better user experience.
  • The method further includes: in response to a specific operation, changing the audio in the composite video, or adding text and/or pictures to the composite video.
  • The user can change the audio in the composite video, or add text, pictures, and so on, giving a better interactive experience.
  • In another aspect, an electronic device is provided, including: one or more processors; and one or more memories, where the one or more memories store one or more computer programs, the one or more computer programs include instructions, and when the instructions are executed by the one or more processors, the electronic device performs the following steps:
  • determining a first video recording template, where the first video recording template includes a first example sample, a second example sample, and preset audio; the first example sample corresponds to a first lens movement mode, the second example sample corresponds to a second lens movement mode, and the first and second lens movement modes are different;
  • displaying a video recording interface that includes a first lens movement mode identifier and a second lens movement mode identifier;
  • automatically generating a composite video that includes a first video segment, a second video segment, and the preset audio, where the first video segment is generated by the electronic device according to the first lens movement mode and the second video segment is generated by the electronic device according to the second lens movement mode.
  • When the instructions are executed by the one or more processors, the electronic device specifically performs the following steps: when the first lens movement mode identifier is selected, generating the first video segment according to the first lens movement mode in response to the user's shooting instruction, the duration of the first video segment being a first preset duration; and when the second lens movement mode identifier is selected, generating the second video segment according to the second lens movement mode, the duration of the second video segment being a second preset duration.
  • When the first video segment is generated according to the first lens movement mode, the recording interface also displays a countdown for generating the first video segment according to the first lens movement mode; when the second video segment is generated according to the second lens movement mode, the recording interface also displays a countdown for generating the second video segment according to the second lens movement mode.
  • When the instructions are executed by the one or more processors, the electronic device may further be caused to: display a video recording interface that includes the first lens movement mode identifier and the second lens movement mode identifier; and automatically generate a composite video that includes the video segments generated by the electronic device according to the undeleted lens movement modes and the preset audio.
  • The electronic device may further be caused to: display the video recording interface including the first and second lens movement mode identifiers; and automatically generate a composite video that includes the first video segment, the second video segment, a third video segment, and the preset audio, the third video segment being generated by the electronic device according to the third lens movement mode.
  • The electronic device may further be caused to: display the video recording interface including the first and second lens movement mode identifiers; and automatically generate a composite video in which the playback order of the first video segment and the second video segment is the first order.
  • The first example sample and/or the second example sample are displayed in the video recording interface.
  • Automatically generating the composite video includes synthesizing the video in response to a video synthesis instruction input by the user.
  • The first video recording template is a default template or a user-defined template.
  • The first video segment, the second video segment, and the composite video are automatically stored.
  • The audio in the composite video is replaced, or text and/or pictures are added to the composite video.
  • An embodiment of the present application also provides an electronic device. The electronic device includes modules/units that execute the method of the above first aspect or any possible design of the first aspect; these modules/units can be implemented by hardware, or by hardware executing corresponding software.
  • An embodiment of the present application further provides a chip, which is coupled with a memory in an electronic device and is used to call a computer program stored in the memory and execute the technical solution of the first aspect of the embodiments of the present application or any possible design thereof.
  • In the embodiments of this application, "coupled" means that two components are directly or indirectly combined with each other.
  • A computer-readable storage medium is provided, including a computer program; when the computer program runs on an electronic device, the electronic device executes the method provided in the above first aspect.
  • A program product including instructions is provided, which, when run on a computer, causes the computer to execute the method provided in the above first aspect.
  • A graphical user interface on an electronic device is provided, the electronic device having a display screen, one or more memories, and one or more processors configured to execute one or more computer programs stored in the one or more memories; the graphical user interface includes the graphical user interface displayed when the electronic device executes the method provided in the above first aspect.
  • In another aspect, a method for displaying preview images in a video recording scene is provided, applied to an electronic device.
  • The electronic device detects a first operation for turning on the camera; in response to the first operation, starts the camera; detects a second operation indicating a first video recording mode; and in response to the second operation, displays a viewfinder interface on the display screen, the viewfinder interface including a first preview image, where the first preview image is a first image block located in a first area of a first image collected by a first wide-angle camera of the electronic device.
  • With the position of the electronic device kept fixed, the electronic device detects a third operation indicating an image movement direction; in response to the third operation, it displays a second preview image in the viewfinder interface, where the second preview image is a second image block located in a second area of a second image collected by the first wide-angle camera, or an image block obtained after viewing-angle conversion processing of that second image block; the orientation of the second area relative to the first area is related to the image movement direction.
  • For example, when a user records video with a mobile phone, the preview image includes scene A directly in front of the user but not scene B to the user's front right. Keeping the phone still, the user inputs an instruction to move the image to the right (for example, via the touch screen), and the preview image is updated to a new preview image that includes scene B (and, for example, no longer includes scene A). Thus, shooting methods such as "shifting the lens" or "panning the lens" can be realized while the electronic device stays in place, improving the user experience.
  • The orientation of the second area relative to the first area being related to the image movement direction includes: the orientation of the second area relative to the first area is the same as, or opposite to, the image movement direction; the embodiments of the present application are not limited in this respect.
  • The user can set the orientation of the second area relative to the first area to be the same as or opposite to the image movement direction.
  • The orientation of the second area relative to the first area being related to the image movement direction includes: the distance between the second area and a first edge of the second image is a second distance, the distance between the first area and the first edge of the first image is a first distance, and the change of the second distance relative to the first distance is related to the image movement direction.
  • For example, the first area is at a distance H from the left edge of the first image, and the second area is at a distance H+A from the left edge of the second image, where A is a positive number when the area moves away from the left edge and a negative number when it moves toward it.
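The H to H+A step above amounts to sliding a crop region across the wide-angle frame while the device stays still. A minimal sketch, with invented names and coordinates, and clamping added so the region never leaves the frame (the clamping is an assumption, not stated in the patent):

```python
# Move the preview crop region across the wide-angle frame to simulate a
# lens shift; (left, top) is the region's top-left corner in frame pixels.
def next_region(left: int, top: int, w: int, h: int,
                frame_w: int, frame_h: int, dx: int, dy: int = 0):
    """Shift the crop region by (dx, dy), clamped to the frame bounds."""
    new_left = min(max(0, left + dx), frame_w - w)
    new_top = min(max(0, top + dy), frame_h - h)
    return new_left, new_top

# First preview: region at distance H = 100 from the left edge.
# Moving the image by A = 40 puts the next region at H + A = 140.
print(next_region(100, 0, 1920, 1080, 4000, 1080, 40))  # (140, 0)
```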
  • The electronic device determines a third area on a third image, and a second orientation change amount of the third area relative to the second area is equal to a first orientation change amount of the second area relative to the first area. A third preview image is displayed in the viewfinder interface, the third preview image being a third image block located in the third area of the third image, or an image block obtained after viewing-angle conversion processing of that third image block. Here, the second orientation change amount is the change of a third distance relative to the second distance, and the first orientation change amount is the change of the second distance relative to the first distance, where the third distance is the distance between the third area and the first edge of the third image, the second distance is the distance between the second area and the first edge of the second image, and the first distance is the distance between the first area and the first edge of the first image.
  • The position of each preview image on the images captured by the first wide-angle camera changes by the same amount, so visually the preview image in the viewfinder interface moves at a constant speed, and the user experience is high.
  • the second orientation change amount of the third area relative to the second area may be greater than the first orientation change amount of the second area relative to the first area. Therefore, visually, the preview image in the viewfinder interface moves at an accelerated rate, and has a certain sense of rhythm and visual impact.
  • the second orientation change amount of the third area relative to the second area may also be smaller than the first orientation change amount of the second area relative to the first area. Therefore, visually, the preview image in the viewfinder interface moves at a slower speed, and the video recording is more flexible and interesting.
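The three cases above differ only in how the per-frame change of the crop-region distance evolves: equal steps give constant-speed movement, growing steps give acceleration, shrinking steps give deceleration. A hypothetical sketch:

```python
# Per-frame offsets of the crop region from the reference edge.
def offsets(start: int, step: int, frames: int, accel: int = 0) -> list:
    """Distances of the crop region from the edge, one per preview frame."""
    out, pos, delta = [], start, step
    for _ in range(frames):
        pos += delta
        out.append(pos)
        delta += accel  # accel=0 -> constant speed; >0 accelerates; <0 slows
    return out

print(offsets(0, 10, 4))     # [10, 20, 30, 40]  constant-speed movement
print(offsets(0, 10, 4, 5))  # [10, 25, 45, 70]  accelerating movement
```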
  • A fourth preview image may be displayed in the viewfinder interface, the fourth preview image being an image collected by a second wide-angle camera whose field of view is smaller than that of the first wide-angle camera; the first preview image is all or part of the image blocks within the overlapping range of the fields of view of the first and second wide-angle cameras.
  • When switching from another mode to the first recording mode, the first wide-angle camera, which has the larger field of view, is activated, and the first image block in the first area of the first image collected by the first wide-angle camera is displayed in the viewfinder interface.
  • A camera with a larger field of view captures a larger image range containing more detail, and the first area can be moved over this larger image, so the lens can be shifted or panned within a larger movement range, giving a better user experience.
  • The magnification of the image collected by the second wide-angle camera is less than or equal to the magnification of the image collected by the first wide-angle camera.
  • the aforementioned third operation includes: a sliding operation on the first preview image; or,
  • the viewfinder interface displays a fifth preview image
  • the fifth preview image is the fifth image block located in the fifth area on the fifth image collected by the first wide-angle camera, or the fifth preview image is an image block obtained after viewing angle conversion processing is performed on the fifth image block; the position of the fifth area relative to the second area remains unchanged. That is to say, when the image stop moving instruction is detected, the position of the preview image on the image no longer changes, and visually, the preview image in the viewfinder interface no longer moves.
  • when the electronic device detects the image stop moving instruction, it generates and saves a video, and the video includes the second preview image. In other words, when the electronic device detects the image stop moving instruction, it automatically generates and saves the video, which is convenient to operate and improves user experience.
  • detecting the image stop moving instruction includes:
  • when the third operation is a sliding operation on the first preview image, an instruction to stop the movement of the image is generated; or,
  • when the third operation is a click operation on a control used to indicate the image moving direction in the viewfinder interface, the image stop moving instruction is generated; or,
  • when the third operation is a long-press operation on a control used to indicate the image moving direction in the viewfinder interface, the image stop moving instruction is generated; or,
  • when the third operation is a pressing and dragging operation on a specific control in the viewfinder interface, the image stop moving instruction is generated when the drag operation is detected to be lifted.
  • the above image stop moving instruction is only an example and not a limitation; other ways of generating the image stop moving instruction are also feasible, and the embodiments of this application do not limit them.
  • the second image is one of M frames of images extracted from N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer less than N. Frame-extraction playback achieves a fast-playback effect, so the preview image can be played quickly.
  • the second image is one of M frames of images obtained after frame insertion is performed on N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer greater than N. Frame-insertion playback achieves a slow-playback effect, so the preview image can be played in slow motion.
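The frame-extraction (fast playback) and frame-insertion (slow playback) behavior can be sketched as below. This is a minimal illustration, not the patent's method: nearest-frame duplication stands in for real interpolation, which would synthesize intermediate frames:

```python
def extract_frames(frames, m):
    """Keep m of the n captured frames (m < n): fast-playback effect."""
    n = len(frames)
    step = n / m
    return [frames[int(i * step)] for i in range(m)]

def insert_frames(frames, m):
    """Expand n captured frames to m frames (m > n): slow-playback effect.
    Here each output frame is the nearest captured frame; a real frame-insertion
    pipeline would synthesize new intermediate frames instead."""
    n = len(frames)
    return [frames[min(int(i * n / m), n - 1)] for i in range(m)]
```

Extracting 5 of 10 frames keeps every second frame, doubling the apparent playback speed; expanding 3 frames to 6 halves it.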
  • the image block obtained after the second image block undergoes viewing angle conversion processing satisfies a formula (presented as a figure in the application), where (x', y') is a pixel on the image block obtained after the viewing angle conversion processing, (x, y) is the corresponding pixel on the second image block, and θ is the rotation angle, which is preset.
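The formula itself is not reproduced in this text. As an illustration only, a plain 2D rotation by a preset angle maps (x, y) to (x', y') as sketched below; this is an assumption about the general form of such a conversion, not the patent's actual formula:

```python
import math

def rotate_pixel(x, y, theta_deg, cx=0.0, cy=0.0):
    """Map a pixel (x, y) of an image block to (x', y') by rotating it
    through the preset angle theta (degrees) around the center (cx, cy)."""
    t = math.radians(theta_deg)
    dx, dy = x - cx, y - cy
    xp = cx + dx * math.cos(t) - dy * math.sin(t)
    yp = cy + dx * math.sin(t) + dy * math.cos(t)
    return xp, yp
```

Rotating the pixel (1, 0) by 90 degrees around the origin maps it to (0, 1), as expected for a counter-clockwise rotation.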
  • a method for displaying preview images in a video recording scene is also provided, which is applied to electronic devices.
  • the electronic device detects a first operation for turning on the camera; in response to the first operation, the camera is started; a second operation for indicating the first video recording mode is detected; in response to the second operation, a viewfinder interface is displayed on the display screen of the electronic device, the viewfinder interface includes a first preview image, and the first preview image is a first image collected by a camera on the electronic device; while the position of the electronic device is kept fixed, a third operation indicating an image rotation direction is detected; in response to the third operation, a second preview image is displayed in the viewfinder interface, and the second preview image is an image obtained after a second image collected by the camera is rotated according to the rotation direction. That is to say, during video recording, the preview image in the viewfinder interface can be rotated to achieve the effect of rotating the shot, and the user experience is higher.
  • the viewfinder interface displays a third preview image
  • the third preview image is an image obtained after the third image collected by the camera is rotated according to the image rotation direction
  • the rotation angle of the third image relative to the second image is the same as the rotation angle of the second image relative to the first image.
  • the preview image in the viewfinder interface rotates by the same angle each time, that is, the image rotates at a uniform speed, achieving the effect of rotating shooting.
  • the camera is a first wide-angle camera
  • the first image is a first image block in a first area on a fourth image collected by the first wide-angle camera
  • the second image is a second image block in the second area on the fifth image captured by the first wide-angle camera, where the position of the first area on the fourth image and the position of the second area on the fifth image are the same or different.
  • the aforementioned third operation includes: a circle drawing operation on the first preview image; or,
  • when the electronic device detects the image stop rotation instruction, it generates and saves a video, and the video includes the second preview image. That is to say, when the electronic device detects the instruction to stop the rotation of the image, it automatically generates and saves a video, which is convenient to operate and improves user experience.
  • the detection of the image stop rotation instruction described above includes:
  • when the third operation is a circle-drawing operation on the first preview image, the image stop rotation instruction is generated; or,
  • when the third operation is a click operation on a control used to indicate the image rotation direction in the viewfinder interface, the image stop rotation instruction is generated; or,
  • when the third operation is a long-press operation on a control used to indicate the image rotation direction in the viewfinder interface, the image stop rotation instruction is generated.
  • the above image stop rotation instruction is only an example and not a limitation; other ways of generating an image stop rotation instruction are also feasible, which is not limited in the embodiments of this application.
  • the second image is one of M frames of images extracted from N frames of images collected by the first camera, where N is an integer greater than or equal to 1 and M is an integer less than N. Frame-extraction playback achieves a fast-playback effect, so the preview image can be played quickly.
  • the second image is one of M frames of images obtained after frame insertion is performed on N frames of images collected by the first camera, where N is an integer greater than or equal to 1 and M is an integer greater than N. Frame-insertion playback achieves a slow-playback effect, so the preview image can be played in slow motion.
  • an electronic device is provided, including: one or more processors and one or more memories; the one or more memories store one or more computer programs, the one or more computer programs include instructions, and when the instructions are executed by the one or more processors, the electronic device is caused to perform the following steps:
  • the first operation for opening the camera is detected
  • a second operation for indicating the first video recording mode is detected
  • a viewfinder interface is displayed on the display screen of the electronic device, the viewfinder interface includes a first preview image, and the first preview image is captured by a first wide-angle camera on the electronic device The first image block located in the first area on the first image;
  • the second preview image is a second image block located in the second area on the second image collected by the first wide-angle camera
  • the second preview image is an image block obtained after viewing angle conversion processing on the second image block; wherein the orientation of the second area relative to the first area is related to the image moving direction.
  • the orientation of the second area relative to the first area being related to the moving direction of the image includes: the orientation of the second area relative to the first area is the same as or opposite to the image moving direction.
  • the orientation of the second area relative to the first area being related to the image movement direction includes: the distance between the second area and the first edge of the second image is a second distance, the distance between the first area and the first edge of the first image is a first distance, and the change of the second distance relative to the first distance is related to the image movement direction.
  • when the instructions are executed by the one or more processors, the electronic device is caused to execute the following steps:
  • a third preview image is displayed in the viewfinder interface, where the third preview image is a third image block in a third area on the third image; or is an image block obtained after the third image block undergoes viewing angle conversion processing;
  • the second orientation change amount of the third area relative to the second area is equal to the first orientation change amount of the second area relative to the first area;
  • the second orientation change amount is the distance change of the third distance relative to the second distance, and the first orientation change amount is the distance change of the second distance relative to the first distance;
  • the third distance is the distance between the third area and the first edge of the third image, the second distance is the distance between the second area and the first edge of the second image, and the first distance is the distance between the first area and the first edge of the first image.
  • when the instructions are executed by the one or more processors, the electronic device is further caused to perform the following steps: before the second operation for indicating the first video recording mode is detected, a fourth preview image is displayed on the viewfinder interface; the fourth preview image is an image collected by a second wide-angle camera, and the field of view of the second wide-angle camera is smaller than that of the first wide-angle camera; the first preview image is all or part of the image blocks within the overlapping range of the fields of view of the first wide-angle camera and the second wide-angle camera.
  • magnification of the image collected by the second wide-angle camera is less than or equal to the magnification of the image collected by the first wide-angle camera.
  • the above-mentioned third operation includes:
  • when the instructions are executed by the one or more processors, the electronic device is further caused to execute the following step: when the image stop moving instruction is detected, a fifth preview image is displayed in the viewfinder interface; the fifth preview image is a fifth image block located in the fifth area on the fifth image collected by the first wide-angle camera, or the fifth preview image is an image block obtained after viewing angle conversion processing is performed on the fifth image block; the position of the fifth area relative to the second area remains unchanged.
  • when the instructions are executed by the one or more processors, the electronic device is further caused to execute the following step: when the image stop moving instruction is detected, a video is generated and saved, and the video includes the first preview image and the second preview image.
  • when the instructions are executed by the one or more processors, the electronic device specifically executes the following steps:
  • when the third operation is a sliding operation on the first preview image, an instruction to stop the movement of the image is generated; or,
  • when the third operation is a click operation on a control used to indicate the image moving direction in the viewfinder interface, the image stop moving instruction is generated; or,
  • when the third operation is a long-press operation on a control used to indicate the image moving direction in the viewfinder interface, the image stop moving instruction is generated; or,
  • when the third operation is a pressing and dragging operation on a specific control in the viewfinder interface, the image stop moving instruction is generated when the drag operation is detected to be lifted.
  • the second image is one frame of M frame images extracted from N frames of images collected by the first wide-angle camera, N is an integer greater than or equal to 1, and M is an integer less than N;
  • the second image is one of M frames of images obtained after frame insertion is performed on N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer greater than N.
  • the image block obtained after the second image block undergoes viewing angle conversion processing satisfies a formula (presented as a figure in the application), where (x', y') is a pixel on the image block obtained after the viewing angle conversion processing, (x, y) is the corresponding pixel on the second image block, and θ is the rotation angle, which is preset.
  • an electronic device is provided, including: one or more processors and one or more memories; the one or more memories store one or more computer programs, the one or more computer programs include instructions, and when the instructions are executed by the one or more processors, the electronic device is caused to perform the following steps:
  • a first operation for turning on the camera is detected; in response to the first operation, the camera is started; a second operation for indicating the first video recording mode is detected; in response to the second operation, a viewfinder interface is displayed on the display screen of the electronic device, and the viewfinder interface includes a first preview image, the first preview image being a first image collected by a camera on the electronic device; while the position of the electronic device is kept fixed, a third operation indicating an image rotation direction is detected; in response to the third operation, a second preview image is displayed in the viewfinder interface, where the second preview image is an image obtained after a second image collected by the camera is rotated according to the image rotation direction.
  • when the instructions are executed by the one or more processors, the electronic device is further caused to execute the following steps:
  • the viewfinder interface displays a third preview image
  • the third preview image is an image obtained after the third image collected by the camera is rotated according to the image rotation direction
  • the rotation angle of the third image relative to the second image is the same as the rotation angle of the second image relative to the first image.
  • the camera is a first wide-angle camera
  • the first image is a first image block in a first area on a fourth image collected by the first wide-angle camera
  • the second image is a second image block in a second area on a fifth image collected by the first wide-angle camera; the position of the first area on the fourth image and the position of the second area on the fifth image are the same or different.
  • the above-mentioned third operation includes:
  • when the instructions are executed by the one or more processors, the electronic device is further caused to perform the following step: when the image stop rotation instruction is detected, a video is generated and saved, and the video includes the first preview image and the second preview image.
  • when the instructions are executed by the one or more processors, the electronic device specifically executes the following steps:
  • when the third operation is a circle-drawing operation on the first preview image, the image stop rotation instruction is generated; or,
  • when the third operation is a click operation on a control used to indicate the image rotation direction in the viewfinder interface, the image stop rotation instruction is generated; or,
  • when the third operation is a long-press operation on a control used to indicate the image rotation direction in the viewfinder interface, the image stop rotation instruction is generated.
  • the second image is one of M frames of images extracted from the N frames of images collected by the first camera, where N is an integer greater than or equal to 1 and M is an integer less than N; or, the second image is one of M frames of images obtained after frame insertion is performed on the N frames of images collected by the first camera, where N is an integer greater than or equal to 1 and M is an integer greater than N.
  • an electronic device is provided, which includes modules/units that implement the eighth aspect or any possible design method of the eighth aspect; these modules/units can be implemented by hardware, or by hardware executing corresponding software.
  • an electronic device is provided, which includes modules/units that implement the ninth aspect or any possible design method of the ninth aspect; these modules/units can be implemented by hardware, or by hardware executing corresponding software.
  • a chip is also provided, which is coupled with a memory in an electronic device and executes the eighth aspect of the embodiments of this application and any possible design technical solution of the eighth aspect; in the embodiments of this application, "coupled" means that two components are directly or indirectly combined with each other.
  • a chip is also provided, which is coupled with a memory in an electronic device to implement the ninth aspect of the embodiments of this application and any possible design technical solution of the ninth aspect; in the embodiments of this application, "coupled" means that two components are directly or indirectly combined with each other.
  • a computer-readable storage medium is provided, including a computer program; when the computer program runs on an electronic device, the electronic device is caused to execute the eighth aspect and any possible design technical solution of the eighth aspect.
  • a computer-readable storage medium is provided, including a computer program; when the computer program runs on an electronic device, the electronic device is caused to execute the ninth aspect and any possible design technical solution of the ninth aspect.
  • a program product including instructions, which when the instructions run on a computer, cause the computer to execute the eighth aspect and any of the technical solutions that may be designed in the eighth aspect.
  • a program product including instructions, which when run on a computer, cause the computer to execute the ninth aspect and any of the technical solutions that may be designed in the ninth aspect.
  • a graphical user interface on an electronic device, the electronic device having one or more memories, and one or more processors, and the one or more processors are configured to execute One or more computer programs in one or more memories, and the graphical user interface includes a graphical user interface displayed when the electronic device executes the eighth aspect and any possible design technical solution of the eighth aspect.
  • a graphical user interface on an electronic device is provided; the electronic device has one or more memories and one or more processors, the one or more processors being configured to execute one or more computer programs stored in the one or more memories; the graphical user interface includes a graphical user interface displayed when the electronic device executes the ninth aspect and any possible design technical solution of the ninth aspect.
  • FIG. 1 is a schematic diagram of panning and shifting a lens with a mobile phone in the prior art;
  • FIG. 2A is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application;
  • FIG. 2B is a schematic diagram of the software structure of an electronic device provided by an embodiment of this application;
  • FIG. 3A is a schematic diagram of the implementation principles of various lens movement modes provided by embodiments of this application;
  • FIG. 3B is a schematic diagram of an example of a shift mode provided by an embodiment of this application;
  • FIG. 4 is a schematic diagram of an example of a mobile phone GUI provided by an embodiment of this application;
  • FIG. 5A to FIG. 5B are schematic diagrams of micro-movie icons on mobile phones provided by an embodiment of this application;
  • FIG. 6 is a schematic diagram of a homepage in a micro-movie mode provided by an embodiment of this application;
  • FIG. 7A to FIG. 7D are schematic diagrams of an example of a recording interface of a travel template provided by an embodiment of this application;
  • FIG. 8A to FIG. 8D are schematic diagrams of another example of a recording interface of a travel template provided by an embodiment of this application;
  • FIG. 9A to FIG. 9F are schematic diagrams of an effect display interface provided by embodiments of this application;
  • FIG. 10 is a schematic diagram of a gallery interface provided by an embodiment of this application;
  • FIG. 11A to FIG. 11B are schematic diagrams of another example of a homepage in a micro-movie mode provided by an embodiment of this application;
  • FIG. 11C is a schematic diagram of an interface for selecting lens movement modes and combining them into a shooting template provided by an embodiment of this application;
  • FIG. 12 is a schematic flowchart of a video shooting method provided by an embodiment of this application;
  • FIG. 13 to FIG. 17 are schematic diagrams of graphical user interfaces of an electronic device provided by an embodiment of this application;
  • FIG. 18 is a schematic diagram of movement of a target area on an image collected by an ultra-wide-angle camera provided by an embodiment of this application;
  • FIG. 19A to FIG. 19D are schematic diagrams of movement of a target area on an image collected by an ultra-wide-angle camera provided by an embodiment of this application;
  • FIG. 20 to FIG. 24 are schematic diagrams of graphical user interfaces of an electronic device provided by an embodiment of this application;
  • FIG. 25 is a schematic diagram of rotation of a target area on an image collected by an ultra-wide-angle camera provided by an embodiment of this application;
  • FIG. 26 to FIG. 27 are schematic diagrams of graphical user interfaces of an electronic device provided by an embodiment of this application;
  • FIG. 28 is a schematic diagram of enlargement of a target area on an image collected by an ultra-wide-angle camera provided by an embodiment of this application;
  • FIG. 29 is a schematic diagram of a graphical user interface of an electronic device provided by an embodiment of this application;
  • FIG. 30 to FIG. 31 are schematic flowcharts of a method for displaying preview images in a video recording scene provided by an embodiment of this application.
  • the preview image involved in the embodiment of the present application refers to the image displayed in the viewfinder interface of the electronic device.
  • the electronic device is a mobile phone
  • the mobile phone starts the camera application, turns on the camera, and displays the viewfinder interface, and the preview image is displayed on the viewfinder interface.
  • when a video call function (for example, the video communication function in WeChat) is used, the camera is turned on to display the viewfinder interface, and the preview image is displayed on the viewfinder interface.
  • the angle of view involved in the embodiments of the present application is an important performance parameter of the camera.
  • The field of view may also be referred to by terms such as "viewing angle" or "field of view range"; this article does not limit the name.
  • the field of view is used to indicate the maximum angle range that the camera can capture. If the object is within this angle range, the object will be captured by the camera and then presented in the preview image. If the object is outside this angle range, the object will not be captured by the camera, that is, it will not appear in the preview image.
  • according to their different fields of view, cameras can be divided into ordinary cameras, wide-angle cameras, ultra-wide-angle cameras, and so on.
  • the focal length of an ordinary camera can be 45 to 40 mm, with a field of view of 40 to 60 degrees; the focal length of a wide-angle camera can be 38 to 24 mm, with a field of view of 60 to 84 degrees; and the focal length of an ultra-wide-angle camera can be 20 to 13 mm, with a field of view of 94 to 118 degrees.
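These focal-length and field-of-view pairs are consistent with the standard relation FOV = 2·arctan(d / 2f) for a sensor diagonal d; the full-frame diagonal of about 43.27 mm used below is our assumption, not something the document states:

```python
import math

def diagonal_fov_deg(focal_mm, sensor_diag_mm=43.27):
    """Diagonal field of view (degrees) for a given focal length, assuming
    a full-frame (36 x 24 mm, diagonal ~43.27 mm) sensor equivalence."""
    return 2 * math.degrees(math.atan(sensor_diag_mm / (2 * focal_mm)))
```

For example, a 24 mm focal length gives about 84 degrees and 13 mm gives about 118 degrees, matching the wide-angle and ultra-wide-angle limits quoted above.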
  • the video shooting method provided in the embodiments of the present application can be applied to electronic equipment, which includes a camera, and the camera is preferably a wide-angle camera or an ultra-wide-angle camera; of course, it can also be a common camera.
  • this application does not limit the number of cameras; there may be one or more. If there are multiple cameras, it is preferable that at least one of them is a wide-angle camera or an ultra-wide-angle camera.
  • the electronic device may be, for example, a mobile phone, a tablet computer, a wearable device (for example, a watch, a bracelet, a helmet, a headset, a necklace, etc.), a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or another electronic device; the embodiments of this application do not impose any restrictions on the specific type of the electronic device.
  • FIG. 2A shows a schematic structural diagram of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the display screen 194 is used to display the display interface of the application, such as the viewfinder interface of the camera application.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened and light is transmitted through the lens to the photosensitive element of the camera. The photosensitive element converts the optical signal into an electrical signal and transfers it to the ISP for processing, which transforms it into an image visible to the naked eye.
  • The ISP can also optimize the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens, which is projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • Through the NPU, applications such as intelligent cognition of the electronic device 100 can be realized, for example, image recognition, face recognition, speech recognition, and text understanding.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, and software codes of at least one application program (for example, an iQiyi application, a WeChat application, etc.).
  • the storage data area can store data generated during the use of the electronic device 100 (for example, captured images, recorded videos, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save pictures, videos and other files in an external memory card.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • the gyro sensor 180B can be used to determine the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes).
  • the gyro sensor 180B can be used for image stabilization.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • when the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D. Furthermore, according to the detected opening/closing state of the holster or the flip cover, features such as automatic unlocking when the cover is flipped open can be set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 100, which is applied to applications such as landscape/portrait switching and pedometers.
  • the distance sensor 180F is used to measure distance. The electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100.
  • when insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in the holster mode and the pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 due to low temperature. In still other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
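  The threshold-based temperature strategy above can be sketched as a small policy function. This is only an illustrative sketch: the function name, the threshold values, and the action labels are assumptions for illustration, not values from the patent.

```python
def thermal_policy(temp_c, high=45.0, low=0.0, very_low=-10.0):
    """Map a reported temperature to a protective action.

    Hypothetical thresholds: above `high`, throttle the processor near the
    sensor to reduce power consumption; below `low`, heat the battery; below
    `very_low`, boost the battery output voltage to avoid abnormal shutdown.
    """
    if temp_c > high:
        return "throttle_processor"
    if temp_c < very_low:
        return "boost_battery_voltage"
    if temp_c < low:
        return "heat_battery"
    return "normal"
```

  The three thresholds are checked from the most extreme condition inward, so each reported temperature maps to exactly one action.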
  • The touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button or a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as photographing, audio playback, etc.) can correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the mobile phone may also include more or fewer components than those shown in the figure, combine certain components, split certain components, or use a different component arrangement.
  • the combination/connection relationship between the components in FIG. 2A can also be adjusted and modified.
  • the camera 193 in the electronic device 100 may include one camera or multiple cameras. If multiple cameras are included, such as camera 1 and camera 2, the field of view of camera 1 is smaller than the field of view of camera 2.
  • For example, camera 1 is a telephoto camera and camera 2 is a wide-angle camera (which can be a normal wide-angle camera or an ultra-wide-angle camera); or, camera 1 is a normal wide-angle camera and camera 2 is an ultra-wide-angle camera; and so on, in different combinations.
  • the camera 1 and the camera 2 may both be rear cameras or both front cameras.
  • the electronic device 100 may also include more cameras, such as a telephoto camera.
  • the electronic device 100 may provide multiple recording modes, such as a normal recording mode, a shift mode, a shake mode, and so on.
  • In the normal recording mode, the electronic device 100 activates the camera 1 with a smaller angle of view and displays the image collected by the camera 1 in the viewfinder interface.
  • When the electronic device 100 switches from the normal recording mode to the shift mode, the camera 2 with a larger field of view is activated, and an image block on a frame of image collected by the camera 2 is displayed in the viewfinder interface.
  • The processor 110 (for example, the GPU or NPU), based on the image movement direction input by the user (for example, the image movement direction is input by a sliding operation on the screen), determines another image block on the next frame of image captured by the camera 2, and then displays that image block in the viewfinder interface.
  • the position of the other image block relative to the previous image block is related to the image movement direction input by the user.
  • the user implements a shooting technique such as "shifting the lens" through the input image movement direction. Therefore, in the embodiments of the present application, during video recording on the mobile phone, the "shift lens" shooting method can be implemented without the user moving the mobile phone, which is convenient to operate and provides a good user experience.
  • Fig. 2B shows a software structure block diagram of an electronic device provided by an embodiment of the present application.
  • the software structure of the electronic device can be a layered architecture. For example, the software can be divided into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, respectively, the application layer, the application framework layer (framework, FWK), the Android runtime (Android runtime) and system libraries, and the kernel layer.
  • the application layer can include a series of application packages. As shown in FIG. 2B, the application layer may include camera, settings, skin module, user interface (UI), third-party applications, and so on. Among them, the third-party applications can include WeChat, QQ, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer can include some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager can also present notifications in the status bar at the top of the system in the form of a graph or scroll-bar text, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or the indicator light flashes.
  • Android runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (media libraries), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • The system library may also include an image processing library, which is used to process images to achieve the shooting effects of panning, shifting, ascending, and descending.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the hardware layer may include various types of sensors, such as acceleration sensors, gyroscope sensors, and touch sensors involved in the embodiments of the present application.
  • the touch sensor 180K receives the touch operation, and the corresponding hardware interrupt is sent to the kernel layer.
  • when the touch operation is a click operation and the control corresponding to the click operation is the control of the camera application icon, the camera application is started.
  • the camera driver in the kernel layer is called to drive a camera with a larger field of view (for example, an ultra-wide-angle camera) to capture an image.
  • the ultra-wide-angle camera sends the collected images to the image processing library in the system library.
  • the image processing library processes the images collected by the ultra-wide-angle camera, such as determining an image block on the image.
  • the display screen displays the image block in the viewfinder interface of the camera application, that is, the preview image.
  • when the user performs a sliding operation on the screen, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes the sliding operation into a raw input event and stores it in the kernel layer.
  • the image processing library determines another image block on the image collected by the ultra-wide-angle camera, and the orientation of the other image block relative to the previous image block is related to the sliding direction. Therefore, during the video recording process of the electronic device, the effect of "shifting the lens" can be achieved while the electronic device is kept still.
  • the camera movement shooting technique such as “shifting the lens” or “panning the lens” can be realized when the mobile phone is kept still.
  • According to the moving direction, the movement mode may include moving up, moving down, moving left, and moving right; according to the moving speed, it may include moving at a constant speed, moving at an accelerating speed, moving at a decelerating speed, and other modes. Exemplarily, see Table 1 below, which lists various lens movement modes.
  • Table 1 above takes 36 lens movement modes as an example. It is understandable that more modes can be included. For example, taking the shift mode as an example, in addition to the up, down, left, and right shifts, it can also include movement in other directions, which will not be listed here one by one.
  • the phone activates the camera, such as a wide-angle camera or an ultra-wide-angle camera.
  • the camera outputs an image stream.
  • the largest box in (a) in FIG. 3A represents the image output by the camera, which is generally an image that includes many photographed objects.
  • (a) in FIG. 3A only intercepts the images from the m-th frame to the m+3th frame in the image stream as an example for introduction. Taking the m-th frame image as an example, the small box in the corresponding large box is marked as the m-th area, and the image block in the m-th area can be cropped out as a preview image and displayed on the display screen. In other words, the preview image displayed on the mobile phone is an image block cropped out of the image collected by the camera.
  • the introduction here is also applicable to the following shaking mode, push mode, and pull mode.
  • the preview image is an image block in the m-th area on the m-th frame image.
  • the next frame of preview image is the image block in the m+1th area on the m+1th frame image, and the position of the m+1th area is shifted to the right by the distance A relative to the position of the mth area.
  • the next frame of preview image is the image block in the m+2 area on the m+2 frame.
  • the position of the m+2th area is shifted to the right by the distance B relative to the position of the m+1th area, that is, shifted to the right by the distance A+B relative to the position of the m-th area.
  • the m-th area, the m+1-th area, and the m+2-th area are collectively referred to as the target area. That is to say, the position of the target area on the image collected by the camera gradually shifts to the right, resulting in the effect of shifting the lens to the right, but the mobile phone does not actually move.
  • (A) in FIG. 3A takes the m-th area being the central area on the m-th frame image as an example. It is understandable that the m-th area may also be other areas on the m-th frame image. For example, the left edge of the m-th area overlaps the left edge of the m-th frame image, that is, the target area moves from the leftmost side of the image to the rightmost side of the image.
  • FIG. 3A is illustrated by moving to the right as an example. It can be understood that it may also include movement in various directions, such as left shift, upward shift, downward shift, and diagonal shift. The principle is the same, and details are not repeated here.
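  The shift mode described above — a fixed-size target area whose position advances across successive camera frames — can be sketched in a few lines. This is an illustrative sketch, not code from the patent: the function names, the fixed per-frame step, and the nested-list frame representation are assumptions (the patent allows per-frame distances such as A, then B, and so on).

```python
def shifted_crop_origin(frame_index, start_xy, step_xy):
    """Top-left corner of the target area for a given frame.

    Illustrative assumption: the target area advances by a fixed
    (dx, dy) step per frame.
    """
    x0, y0 = start_xy
    dx, dy = step_xy
    return (x0 + frame_index * dx, y0 + frame_index * dy)


def crop(frame, origin, size):
    """Crop the image block of the target area from a full camera frame.

    `frame` is a nested list of pixels (rows), `origin` is (x, y), and
    `size` is the (width, height) of the target area.
    """
    x, y = origin
    w, h = size
    return [row[x:x + w] for row in frame[y:y + h]]


# Example: a 4x8 synthetic frame; the target area shifts right by 2 px/frame.
frame = [[(r, c) for c in range(8)] for r in range(4)]
preview_0 = crop(frame, shifted_crop_origin(0, (1, 1), (2, 0)), (3, 2))
preview_1 = crop(frame, shifted_crop_origin(1, (1, 1), (2, 0)), (3, 2))
```

  Each preview frame is thus a different window into the wide-angle image, which is what produces the shift-lens effect without any physical movement of the phone.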
  • Taking panning up as an example, the center point of the target area moves from -793 to +793 in the Y direction.
  • the bottom to top pan is completed in 3 seconds (or the user-set time).
  • the upward shift is taken as an example, and it can be understood that the downward shift is the same principle and will not be repeated here.
  • the pan mode also needs to perform viewing angle conversion processing on the image blocks in the target area.
  • the preview image is an image block that has undergone viewing angle conversion.
  • the mobile phone can first perform viewing angle conversion on the image captured by the camera (for example, the m-th frame image), and then determine the image block of the target area on the converted image as the preview image; or, the mobile phone can first determine the image block in the target area on the captured image, then perform viewing angle conversion on that image block, and use the converted image block as the preview image.
  • the preview image is the image block obtained after the view angle conversion processing is performed on the image block in the mth area; the next frame The preview image is an image block obtained by converting the view angle of the image block in the m+1th area.
  • the next frame of preview image is the image block obtained by converting the view angle of the image block in the m+2th area, and so on, so the effect of panning the lens to the right is realized, while the position of the mobile phone does not actually move.
  • the viewing angle conversion can be realized by affine transformation.
  • the process of affine transformation includes: multiplying pixels on the image by a linear transformation matrix, and adding a translation vector to obtain an image with a converted perspective.
  • the image after viewing angle conversion satisfies the following formula: x' = m11·x + m12·y + m13, y' = m21·x + m22·y + m23
  • (x', y') is the pixel on the image after viewing angle conversion
  • (x, y) is the pixel on the image before viewing angle conversion
  • the matrix [[m11, m12, m13], [m21, m22, m23], [0, 0, 1]] in the formula is the matrix used to implement the linear transformation and the translation.
  • m11, m12, m21, and m22 are linear change parameters
  • m13 and m23 are translation parameters.
  • m11, m12, m21, and m22 are related to the rotation angle.
  • the rotation angle θ can be determined in multiple ways. For example, the rotation angle θ is a preset fixed value, or it can be set by the user. After the mobile phone determines the rotation angle, it can perform the viewing angle conversion processing based on the above formula.
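  As a concrete illustration of the affine transformation above, the sketch below applies x' = m11·x + m12·y + m13, y' = m21·x + m22·y + m23 to a single pixel. Treating the linear part m11..m22 as a pure rotation by θ is an assumption for illustration (the text only states that these parameters are related to the rotation angle), and the function name is hypothetical.

```python
import math


def affine_transform(point, theta, translation):
    """Map a pixel (x, y) to (x', y') using the affine formula
    x' = m11*x + m12*y + m13, y' = m21*x + m22*y + m23,
    where m11..m22 form a rotation by `theta` (an illustrative choice)
    and `translation` supplies (m13, m23).
    """
    x, y = point
    m11, m12 = math.cos(theta), -math.sin(theta)
    m21, m22 = math.sin(theta), math.cos(theta)
    m13, m23 = translation
    return (m11 * x + m12 * y + m13, m21 * x + m22 * y + m23)
```

  Applying this mapping to every pixel of the image block (with interpolation in practice) yields the view-angle-converted block used as the preview image.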
  • the push mode corresponds to the shooting method of the push lens, which can be understood as the camera gradually pushing closer to the object, that is, the object being shot in the viewfinder interface is enlarged, which helps to focus on the details of the object.
  • Taking the large box corresponding to the m-th frame image as an example, it includes a small box, that is, the m-th area; the mobile phone crops out the image block in the m-th area and displays it on the screen.
  • the preview image is an image block cropped from the m-th area on the m-th frame image.
  • the next frame of preview image is an image block cropped in the m+1th area, and the area of the m+1th area is smaller than the area of the mth area.
  • the next frame of preview image is an image block cropped in the m+2th area, and the area of the m+2th area is smaller than the area of the m+1th area, and so on.
  • The area of the image block becomes smaller and smaller, so when the image block is displayed on the display screen, it needs to be enlarged to fit the size of the display screen. As the image block becomes smaller, the magnification factor becomes larger. Therefore, the shooting object in the preview image on the mobile phone is gradually enlarged, achieving the shooting effect of the camera gradually approaching the object, while the position of the mobile phone does not change.
  • the pull mode corresponds to the shooting method of zooming the lens. It can be understood that the camera is gradually moving away from the object, that is, the object being shot in the viewfinder interface is reduced, which helps to capture the whole picture of the object.
  • the preview image is an image block in the mth area on the mth frame image.
  • the next frame of preview image is an image block in the m+1th area on the m+1th frame of image, and the area of the m+1th area is larger than the area of the mth area.
  • the next frame of preview image is an image block in the m+2th area on the m+2th frame of image, the area of the m+2th area is larger than the area of the m+1th area, and so on.
  • The area of the image block becomes larger and larger, so when the image block is displayed on the display screen, it needs to be reduced to fit the size of the display screen. As the image block becomes larger, the reduction factor becomes larger. Therefore, the shooting object in the preview image on the mobile phone is gradually reduced, achieving the effect of the camera gradually moving away from the object, while the position of the mobile phone does not change.
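  The push and pull modes differ only in whether the target area shrinks or grows from frame to frame, with the display zoom factor moving in the opposite direction. A minimal sketch, assuming a constant per-frame change in width (the function names and the constant step are illustrative assumptions):

```python
def target_area_size(frame_index, start_size, step, mode):
    """Width and height of the target area for a given frame.

    "push" shrinks the area each frame (the object is magnified, as if the
    camera pushes toward it); "pull" grows it (the object is reduced).
    `start_size` is the initial (width, height); `step` is an assumed
    constant per-frame change in width.
    """
    w0, h0 = start_size
    delta = step * frame_index
    if mode == "push":
        w = w0 - delta
    elif mode == "pull":
        w = w0 + delta
    else:
        raise ValueError(f"unknown mode: {mode}")
    h = w * h0 // w0  # keep the aspect ratio of the initial area
    return (w, h)


def display_zoom(crop_width, display_width):
    """Scale factor applied when fitting the cropped block to the display:
    it grows as the crop shrinks (push) and shrinks as the crop grows (pull).
    """
    return display_width / crop_width
```

  For example, shrinking a 1920-pixel-wide crop to 960 pixels doubles the on-screen magnification, which is exactly the "camera approaching the object" effect with the phone held still.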
  • in addition to determining the image block in the target area, the mobile phone also needs to rotate the image block. For example, the mobile phone can first rotate the image collected by the camera and then determine the image block in the target area on the rotated image as the preview image; alternatively, it can first determine the image block in the target area on the collected image, rotate that image block, and use the rotated image block as the preview image.
  • the next frame of preview image is the image block in the (m+1)th area rotated clockwise by an angle G.
  • the next frame of preview image is the image block in the (m+2)th area rotated clockwise by an angle G+P, and so on.
  • the target area thus rotates gradually in the clockwise direction, so the photographed object in the preview image rotates gradually in the clockwise direction, achieving the effect of the mobile phone rotating while shooting, although the position of the mobile phone does not change.
  • FIG. 3A is illustrated with clockwise rotation as an example. It can be understood that counterclockwise rotation may also be included; the principle is the same, so it is not repeated.
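The rotation effect above can be sketched as a growing clockwise angle schedule (G, G+P, G+2P, ...) applied to the target area, plus a point-rotation helper of the kind an implementation would use to map crop-corner coordinates. The function names and angle values are assumptions made for this sketch, not taken from this application.

```python
import math

def rotation_schedule(g_deg, p_deg, num_frames):
    """Clockwise angle applied to the target area of each successive frame:
    G, G + P, G + 2P, ... as described for the rotation mode."""
    return [g_deg + i * p_deg for i in range(num_frames)]

def rotate_point(x, y, cx, cy, deg):
    """Rotate the point (x, y) clockwise about (cx, cy) by `deg` degrees,
    in a conventional y-up coordinate system (negative angle = clockwise)."""
    rad = math.radians(-deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(rad) - dy * math.sin(rad),
            cy + dx * math.sin(rad) + dy * math.cos(rad))
```

Counterclockwise rotation uses the same helpers with the sign of the angle step flipped.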
  • the uniform speed mode includes uniform-speed moving, uniform-speed panning, uniform-speed pushing, uniform-speed pulling, uniform-speed rotation, and other modes.
  • uniform-speed movement can further include moving up, down, left, or right at a uniform speed.
  • uniform-speed rotation can further include uniform-speed clockwise rotation, uniform-speed counterclockwise rotation, and so on; see Table 1 above.
  • taking uniform-speed push as an example, the target area (e.g., the mth area, the (m+1)th area, the (m+2)th area, etc.) is reduced by the same amount each time, so as to achieve a uniform-speed shooting effect.
  • the acceleration modes include accelerating shift, accelerating panning, accelerating pushing, accelerating pulling, accelerating rotation, and other modes.
  • the accelerating shift can further include accelerating up, down, left, or right movement.
  • accelerating rotation can further include accelerating clockwise rotation, accelerating counterclockwise rotation, and so on; see Table 1 above.
  • taking the accelerating right shift as an example, referring to (a) in FIG. 3A, the target area moves as A→B→C; that is, the target area (e.g., the mth area, the (m+1)th area, the (m+2)th area, etc.) shifts to the right each time, and the distance of each movement increases, so as to achieve the effect of accelerating rightward movement.
  • the deceleration modes include decelerating shift, decelerating panning, decelerating pushing, decelerating pulling, decelerating rotation, and other modes.
  • the decelerating shift can further include decelerating up, down, left, or right movement.
  • decelerating rotation can further include decelerating clockwise rotation, decelerating counterclockwise rotation, and so on; see Table 1 above.
  • taking decelerating push as an example, the amount by which the area of the target area (e.g., the mth area, the (m+1)th area, the (m+2)th area, etc.) shrinks gradually decreases; for example, the first area difference, between the (m+1)th area and the mth area, is greater than the second area difference, between the (m+2)th area and the (m+1)th area, so as to achieve the effect of decelerating push.
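The uniform, accelerating, and decelerating modes above differ only in how the per-frame step of the target area evolves: constant, growing, or shrinking. A minimal sketch, using a rightward shift as the example; the step values are illustrative assumptions.

```python
def offsets(steps):
    """Cumulative target-area offsets given a list of per-frame steps,
    e.g. the rightward displacement of the crop region on each frame."""
    out, pos = [], 0
    for s in steps:
        pos += s
        out.append(pos)
    return out

uniform = offsets([10, 10, 10, 10])      # constant step: uniform speed
accelerating = offsets([5, 10, 15, 20])  # each move larger than the last
decelerating = offsets([20, 15, 10, 5])  # each move smaller than the last
```

The same schedules apply to area shrinkage (push/pull) or rotation angle; only the quantity being stepped changes.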
  • the target area on the pth frame is the pth area, which is shifted to the right by X relative to the mth area, where X is less than A.
  • the preview images are, in sequence, the image block in the mth area, the image block in the pth area, the image block in the (m+1)th area, and so on, where the pth area is shifted to the right by X relative to the mth area, and the (m+1)th area is shifted to the right by A relative to the mth area.
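The passage above describes inserting an intermediate frame p between frames m and m+1 whose target area is shifted by some X smaller than the full step A, smoothing the apparent motion. A minimal sketch, assuming simple linear interpolation of the horizontal offset; the helper name and values are illustrative.

```python
def interpolate_offsets(start, end, num_intermediate):
    """Offsets of the intermediate frames strictly between two keyframe
    offsets, spaced evenly, so each inserted shift X is less than the
    full step (end - start)."""
    step = (end - start) / (num_intermediate + 1)
    return [start + step * (i + 1) for i in range(num_intermediate)]
```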
  • a large number of lens movement techniques, such as shifting the lens, panning the lens, and sliding the lens, are used in the film shooting process.
  • film shooting requires professional shooting equipment and professional photographers. Therefore, this application considers using a mobile phone that is kept stationary, with various lens movement modes, to achieve movie-like shooting.
  • a mobile phone can provide a micro-movie mode (which may also be called a movie mode). In the micro-movie mode, users can use the mobile phone to achieve movie-like shooting.
  • the micro-movie mode includes a variety of story templates, and each story template includes several different lens movement modes. When the mobile phone uses a story template for video shooting, the different lens movement modes included in the template are used to shoot the video, which improves the quality of video shooting and is convenient to operate: even non-professional photographers can shoot with various lens movement modes, which makes video shooting more engaging.
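One way to model the story-template idea above is as a named template bundling an ordered list of lens-movement segments, each with its own mode and recording duration. The mode names, class names, and 3 s durations here are illustrative assumptions for this sketch (a 3 s default duration appears later in the description).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    mode: str        # lens movement mode, e.g. "move_right", "push", "rotate_cw"
    duration_s: int  # recording duration of this mode, in seconds

@dataclass
class StoryTemplate:
    name: str
    segments: List[Segment]

# A travel-style template bundling three lens movement modes in order.
travel = StoryTemplate("travel", [
    Segment("move_right", 3),
    Segment("push", 3),
    Segment("rotate_cw", 3),
])

total_duration = sum(s.duration_s for s in travel.segments)
```

Recording with the template then amounts to iterating over `segments` in order, which also makes reordering, adding, or deleting modes (described later) simple list operations.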
  • FIG. 4 shows a graphical user interface (GUI) of the mobile phone, and the GUI is the home screen of the mobile phone.
  • the main interface includes icons of various applications, including, for example, an icon of the camera application.
  • the camera application is started, and another GUI as shown in (b) of FIG. 4 is displayed.
  • This GUI may be called a viewfinder interface (or a shooting interface). A preview image can be displayed in real time in the viewfinder interface.
  • after detecting the user's operation for instructing the micro-movie mode, the mobile phone enters (or starts) the micro-movie mode.
  • the phone can use various story templates to record videos.
  • the viewfinder interface includes a button for indicating the micro movie mode.
  • the mobile phone detects the user's click on the button, the mobile phone enters the micro movie mode.
  • the button may be, for example, the button 501 shown in (a) in FIG. 5A.
  • the button may also be displayed at the position shown in (b) in FIG. 5A.
  • when the mobile phone enters the video recording mode, the button is displayed in the viewfinder interface; the button may not be displayed in the photographing mode.
  • the button may also be displayed at the position shown in (c) in FIG. 5A.
  • the display position of the button may be set by default of the mobile phone, or may be set by the user, which is not limited.
  • when the mobile phone detects a click operation on the "more" button, it displays a mode selection interface as shown in FIG. 5B, and the interface includes an icon of the micro-movie mode.
  • the mobile phone detects the user's click on the icon, the mobile phone enters the micro movie mode.
  • when the mobile phone detects the user's preset gesture operation on the viewfinder interface, it enters the micro-movie mode.
  • the preset gesture operation may be a gesture operation of drawing a circle in the viewfinder interface; or a long press operation on the preview image in the viewfinder interface, etc., which is not limited in this embodiment.
  • when the mobile phone displays the viewfinder interface, if it detects the user's voice instruction to enter the micro-movie mode, it enters the micro-movie mode.
  • each story template can include multiple mirror movement modes.
  • the mobile phone can also provide a video sample corresponding to each story template.
  • the video sample can be understood as a finished product that has been recorded using the story template.
  • the story template includes a travel template.
  • the video sample of the travel template includes three video clips, and each video clip is taken using a mirror mode. In this case, the user can roughly know the shooting effect of the travel template by watching the video sample of the travel template.
  • the interface after the mobile phone enters the micro-movie mode is shown in (a) in Figure 6.
  • the interface is referred to as the homepage of the micro-movie mode below.
  • the homepage includes multiple story templates, such as a travel template, a quiet template, and a dynamic template shown in (a) in FIG. 6.
  • the homepage may also include a preview box 601 for displaying a video sample of the story template. For example, when the mobile phone detects the user's operation of selecting the travel template (for example, clicking the travel template), the preview box 601 displays the video sample of the travel template.
  • the video sample of the travel template is composed of three sample fragments
  • the video sample can be directly played in the preview box 601; or, each sample fragment can also be played separately.
  • the mobile phone may output a certain prompt.
  • when the first sample segment is played in the preview box 601, the first small circle in the mark 602 is in a first color (for example, black), and the other two small circles are in a second color (for example, white).
  • when the second sample segment is played in the preview box 601, the second small circle in the mark 602 is in the first color (for example, black), and the other two small circles are in the second color (for example, white).
  • the first sample fragment is displayed in the preview box 601 by default.
  • when the mobile phone detects the user's left-swipe operation in the preview box 601, the preview box 601 displays the next sample segment; when the mobile phone detects the user's left-swipe operation in the preview box 601 again, the preview box 601 displays the sample segment after that.
  • the mark 602 can also prompt the user to preview the sample segment currently being played in the preview box 601.
  • the video sample may include music, and the music may be set by default, for example, set in a package with a travel template.
  • the preview box 601 may also display the number of sample clips, the total recording duration of the travel template, the recording duration of each video clip, and so on.
  • the mobile phone can prompt the user by displaying it on the touch screen or by sound, so as to inform the user of the mirror movement mode used by the story template.
  • a "details" button may also be displayed.
  • when the mobile phone detects the user's operation of clicking the "details" button, it displays the interface shown in (b) of FIG. 6, which includes the lens movement mode used by each sample segment of the travel template; for example, the first sample segment uses the right-shift mode, the second sample segment uses the push mode, and the third sample segment uses the clockwise rotation mode.
  • uniform speed is taken as an example here.
  • the lens movement modes used by the travel template can be viewed through the "details" button. Similarly, when the video sample of the quiet template is displayed in the preview box 601, the lens movement modes used by the quiet template can be viewed through the "details" button, which is not repeated here.
  • the homepage also includes a control 603, which is used to enter the recording interface of a certain story template. For example, suppose that the mobile phone detects that the user selects a travel template, and then the mobile phone detects the operation of clicking the control 603, and enters the recording interface of the travel template.
  • the recording interface of the travel template is shown in FIG. 7A.
  • the recording interface includes a prompt 701 for prompting that the user is currently in a travel template.
  • the prompt 701 may not be displayed.
  • the recording interface also includes three marks, mark 702 to mark 704: mark 702 indicates the first lens movement mode of the travel template, mark 703 indicates the second lens movement mode, and mark 704 indicates the third lens movement mode.
  • time 1 is displayed in the mark 702, and the time 1 is used to indicate the recording duration of the first lens movement mode.
  • the time 2 is displayed in the mark 703, and the time 2 is used to indicate the recording duration of the second lens moving mode.
  • Time 3 is displayed in the mark 704, and the time 3 is used to indicate the recording duration of the third lens moving mode.
  • Time 1, time 2, and time 3 can be set by default, and the three times can be the same or different (in Figure 7A, all three times are 3s as an example). Alternatively, time 1, time 2, and time 3 can also be set by the user (described below).
  • the recording interface also displays a button 706 for closing the recording interface of the travel template. Assume that the mobile phone detects the operation of clicking the button 706 and returns to the home page as shown in (a) in FIG. 6.
  • buttons 705 are also displayed in the recording interface.
  • the button 705 may be a recording button, which is used to control the start and/or stop of recording.
  • the travel template includes three lens movement modes, and for each lens movement mode, the start and/or stop of recording can be controlled by the button 705.
  • when the mobile phone detects an operation for selecting the mark 702 (for example, clicking the mark 702), the first lens movement mode corresponding to the mark 702 is determined.
  • the operation of clicking the button 705 is detected, the mobile phone starts to use the first lens moving mode for video shooting.
  • the photographed object "tower" in the preview image in (a) of FIG. 7B is on the right side of the image;
  • the "tower" in the preview image in (b) of FIG. 7B is in the middle of the image;
  • the "tower" in the preview image in (c) of FIG. 7B is on the left side of the image, which is equivalent to the effect of the mobile phone moving to the right, although in fact the mobile phone has not moved.
  • the time in the mark 702 is automatically reduced.
  • the time in the mark 702 in (a) in FIG. 7B is 3s
  • the time in the mark 702 in (b) in FIG. 7B is reduced to 2s
  • the time in the mark 702 in (c) in FIG. 7B is reduced to 1s.
  • recording stops at this point, and the recording process of the mobile phone using the first lens movement mode ends.
  • when the mobile phone detects an operation for selecting the mark 703, the second lens movement mode corresponding to the mark 703 is determined.
  • the operation of clicking the button 705 is detected, the mobile phone starts to use the second lens moving mode for video shooting.
  • taking the push mode as an example of the second lens movement mode, the implementation principle is described in (b) in FIG. 3A, and the recording effect is shown in FIG. 7C.
  • the photographed object "tower" in the preview image in (a) of FIG. 7C appears relatively far away, so the tower is small; the "tower" in the preview image in (b) of FIG. 7C is enlarged;
  • the "tower" in the preview image in (c) of FIG. 7C is further enlarged, which is equivalent to the effect of the mobile phone approaching the object, although in fact the mobile phone does not move.
  • the time in the mark 703 is automatically reduced.
  • recording stops, and the recording process of the mobile phone using the second lens movement mode ends.
  • when the mobile phone detects an operation for selecting the mark 704 (for example, clicking the mark 704), the third lens movement mode corresponding to the mark 704 is determined.
  • the operation of clicking the button 705 is detected, the mobile phone starts to use the third lens moving mode for video shooting.
  • taking the clockwise rotation mode as an example of the third lens movement mode, the implementation principle is described in (c) in FIG. 3A, and the recording effect is shown in FIG. 7D.
  • the photographed object "tower" in the preview image in (a) of FIG. 7D is vertical;
  • the "tower" in the preview image in (b) of FIG. 7D is rotated clockwise;
  • the "tower" in the preview image in (c) of FIG. 7D is rotated further, which is equivalent to the shooting effect of the phone rotating clockwise, although in fact the phone does not move.
  • the time in the mark 704 automatically decreases during the recording process of the mobile phone using the third lens movement mode.
  • recording stops, and the recording process of the mobile phone using the third lens movement mode ends.
  • the user selects the lens movement mode through the mark 702 to the mark 704, and then controls the mobile phone to use the selected lens movement mode to start shooting through the button 705.
  • the button 705 can also control the stop of shooting.
  • the user selects the first mirror movement mode.
  • when the phone detects the operation of clicking the button 705, it starts recording in the first lens movement mode, and when it detects the operation of clicking the button 705 again, it stops recording in the first lens movement mode.
  • the second lens movement mode and the third lens movement mode have the same principle, and will not be repeated. That is to say, for each lens movement mode, not only the button 705 can be used to control the start of recording, but also the button 705 can be used to control the stop of recording.
  • the recording duration of each lens movement mode may not be preset. For example, the recording duration can be determined by the user. When the user wants to stop recording, just click the button 705.
  • Mode 2: in Mode 1 above, for each lens movement mode, the user needs to click the button 705 once to start recording. Different from Mode 1, in Mode 2, when the mobile phone detects an operation on the button 705, it automatically records using the three lens movement modes in sequence. For example, referring to FIG. 7A, when the mobile phone detects the operation of clicking the button 705, it first starts recording in the first lens movement mode; when the recording time reaches the preset duration (for example, 3s), recording in that mode ends, and the phone automatically starts recording in the second lens movement mode, and then in the third lens movement mode. In this way, the user only needs to click the button 705 once, which is convenient to operate. Of course, in Mode 2, the button 705 can also control recording to stop or pause, which is not repeated here.
  • the time in the mark 702 can be gradually reduced, and when the time is reduced to 0, stop recording in the first lens movement mode.
  • the principle is the same and will not be repeated.
  • the button 705 may also be a video synthesis button for synthesizing the recorded segment into a video.
  • mark 702 to mark 704 are used as recording buttons.
  • the mobile phone starts recording in the first lens movement mode; when the recording time reaches the preset duration (for example, 3s), recording stops, and the recorded segment (referred to as segment 1 for ease of distinction) is stored.
  • the second lens movement mode is used to record the segment 2.
  • the third lens movement mode is used to record the segment 3.
  • segment 1 to segment 3 are combined into one video.
  • the time in the mark 702 can be gradually reduced, and when the time is reduced to 0, stop using the first lens movement mode to record.
  • the principle is the same and will not be repeated.
  • the recording duration (for example, 3s) of each lens movement mode may not be preset.
  • the mobile phone starts recording in the first mirror movement mode, and when the operation of clicking the mark 702 is detected again, it stops recording in the first mirror movement mode.
  • the mark 702 is used to control the recording start and stop of the first lens movement mode, that is, for each lens movement mode, the recording duration can be determined by the user.
  • the recording duration of each lens movement mode is preset and is 3s as an example. It is understandable that the recording duration of each lens movement mode can be adjusted.
  • a selection box may be displayed, which includes a time setting button.
  • the interface shown in (b) of FIG. 8A is displayed, and a "+" button and a "-" button are displayed in the interface.
  • the "+" button is used to increase the time, for example increasing it to a maximum of 4s; the "-" button is used to reduce the time, for example reducing it to 2s.
  • FIG. 7A takes a travel template including three lens movement modes as an example. It is understandable that the travel template may also include more or fewer lens movement modes; for example, the user can add or delete lens movement modes.
  • the recording interface of the travel mode also includes a "+" button.
  • the interface shown in (b) in Figure 8B is displayed, which includes a list of lens movement modes.
  • an interface as shown in (c) of FIG. 8B is displayed, and a mark 707 is added to the interface to indicate the lens movement mode that the user chooses to add.
  • the sequence between different lens movement modes can be adjusted.
  • the mobile phone detects the operation of long pressing and dragging the mark 704
  • the mark 704 is in a movable state.
  • the order of the three lens movement modes is adjusted to the first lens movement mode, the third lens movement mode, and the second lens movement mode.
  • the sequence of the three segments in the synthesized video is: segment 1, segment 3, segment 2. Among them, segment 1 was taken using the first lens movement mode, segment 2 was taken using the second lens movement mode, and segment 3 was taken using the third lens movement mode.
  • the user may not remember which mirror movement modes the travel template includes.
  • the user may be prompted with the lens movement modes of the travel template in the recording interface of the travel template.
  • when the mobile phone detects the operation of clicking the button 603, it enters the interface shown in (b) of FIG. 8D, which includes a small window; the video sample of the travel template can be played in the small window, or each sample segment can be played in it separately.
  • when the mobile phone detects an operation on the mark 702, the first sample segment is played in the small window (for example, sample segment 1 is played in a loop or played only once).
  • when the mobile phone detects an operation on the mark 703, the second sample segment is played in the small window (for example, sample segment 2 is played in a loop or played only once). In this way, during the recording process, the user can view the lens movement mode used by each sample segment.
  • after the mobile phone completes video shooting using the travel template, it can enter an effect display interface so that the user can view the shooting effect.
  • when the mobile phone detects an operation on the "return" button, it returns to the interface shown in (a) of FIG. 9A to re-record.
  • the preview box 901 may occupy all or part of the display screen. If it occupies the entire screen, the "OK" button and the "return" button can be displayed on an upper layer of the preview box 901.
  • FIG. 9A takes the synthesized video displayed in the preview box 901 as an example.
  • each video segment can also be displayed independently.
  • FIG. 9B which is an example of another effect display interface.
  • the interface includes a mark for each video clip.
  • when the mobile phone detects an operation (for example, a click operation) on the mark of segment 1, segment 1 is played in the preview box 901.
  • when the mobile phone detects an operation on the mark of segment 2, segment 2 is played in the preview box. In this way, the user can view each recorded video segment one by one.
  • the mobile phone detects the operation of clicking the confirmation button, it combines the three segments into a video, stores the synthesized video, and returns to the interface as shown in FIG. 9A.
  • the order of the three video clips can be adjusted.
  • the mobile phone detects an operation (for example, a long-press and drag operation) on the mark of segment 3 and changes the display position of the mark, for example dragging the mark of segment 3 between the mark of segment 1 and the mark of segment 2, as shown in (b) of FIG. 9B; the order of segment 3 and segment 2 is thus adjusted.
  • the display order of the video segments in the synthesized video is segment 1, segment 3, and segment 2.
  • the video segment can be deleted, and the remaining video segments can be combined into a video.
  • the mobile phone detects an operation (for example, a long-press operation) on the mark of segment 3 and displays a delete button.
  • when the mobile phone detects the operation of clicking the delete button, segment 3 is deleted, as shown in (b) of FIG. 9C.
  • the mobile phone detects an operation for the "OK" button, it will use segment 1 and segment 2 to synthesize the video.
  • a re-recording button is displayed.
  • an interface as shown in (c) in FIG. 9C is displayed, and the interface is used to re-record segment 3. Therefore, only the mark 704 of the third lens movement mode may be displayed on the interface, and the marks of the first and second lens movement modes may not be displayed.
  • the mobile phone can also add a locally recorded video segment. Then, when synthesizing the video, segment 1 to segment 3 and the added local video are synthesized.
  • the "+" button is displayed in the effect display interface.
  • the interface shown in (b) in Figure 9D is displayed, which is the interface of the mobile phone gallery.
  • an interface as shown in (c) in FIG. 9D is displayed, in which segment 4, that is, the video 913, is added.
  • the mobile phone can also perform processing such as cropping, adding text or music to the recorded video clips.
  • FIG. 9E For example, referring to (a) in FIG. 9E, four icons are displayed in the preview box 901, which are a crop icon, a text icon, a music icon, and a silence icon. It is understandable that when fragment 1 is played in the preview box, the four icons act on fragment 1, and when fragment 2 is played in the preview box, the four icons act on fragment 2.
  • when segment 1 is displayed in the preview box and the phone detects an operation on the crop icon, an interface as shown in (b) of FIG. 9E can be displayed: a crop box 910 is displayed in the preview box 901, and all frame images of segment 1 are displayed in the crop box 910. For example, when the user wants to crop off the last few frames, he can move the crop bar 911 to the position shown in (b) of FIG. 9E, and the last few frames are then cropped off. When a click on the completion button is detected, the interface shown in (a) of FIG. 9E can be returned to, and the cropped segment 1 is displayed in the preview box 901.
  • an interface as shown in (c) in FIG. 9E is displayed.
  • a text input box is displayed in the preview box 901, and the user can input text in the text input box.
  • the completion button is clicked, it returns to the interface shown in (a) in FIG. 9E, and at this time, the segment 1 with text added is displayed in the preview box 901.
  • adding text is taken as an example for the introduction here. It is understandable that emoticons, animations, etc. can also be added.
  • when the mobile phone detects an operation on the music icon, it can display the interface shown in (d) of FIG. 9E.
  • the interface displays a list of song fragments.
  • the list includes marks of multiple song segments, such as song segment A.
  • the mobile phone detects that the user selects the mark of the song segment A, the mobile phone can use the song segment A as the background music of the segment 1.
  • the mobile phone returns to the interface shown in (a) in FIG. 9E, and at this time, the segment 1 with the song segment A added is played in the preview box 901.
  • the story template can also include default music.
  • the default music of the travel template refers to the music used in the sample video of the travel template. When the user does not select music, the phone uses the default music of the travel template as the background music of the video. Taking into account the rhythm of the music, the image playback rhythm in the video can be kept consistent with the beat of the music.
  • for example, the music includes drum beats: when one drum beat is played, one frame of image is displayed, and when the next drum beat is played, the next frame of image is displayed, and so on.
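The beat-matching idea above can be sketched as assigning each frame a drum-beat timestamp as its display time, so frames change exactly on the beat. The beat times here are illustrative; a real implementation would obtain them by beat detection on the chosen background music.

```python
def frame_display_times(beat_times, num_frames):
    """Pair each frame index with the beat timestamp at which it appears.
    Frames beyond the available beats are left unscheduled."""
    return list(zip(range(num_frames), beat_times))

def frame_durations(beat_times):
    """How long each frame stays on screen: the gap between its beat and
    the next one (the last frame's duration is open-ended)."""
    return [b2 - b1 for b1, b2 in zip(beat_times, beat_times[1:])]
```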
  • the mobile phone can also eliminate the original sound in one of the three recorded video clips.
  • the original sound can be understood as the sound in the recorded video.
  • the mobile phone detects an operation on the silence icon, the original sound of segment 1 is eliminated.
  • the original sound can be completely eliminated or partially eliminated.
  • the mobile phone can display an audio crop box, similar to the crop box 910 described in (b) of FIG. 9E, and the user can use the audio crop box to select the part of the audio in segment 1 to be muted; the unmuted part of segment 1 then retains the original sound.
  • FIG. 9E lists only four processing methods: cropping a video segment, adding text, adding music, and eliminating the original sound. It is understandable that other processing methods can also be included; for example, various picture styles can be provided for processing the images in a segment, such as a black-and-white style, an anime style, an ink-wash style, a strong-exposure style, a weak-exposure style, and so on, which are not all listed in this application.
  • the mobile phone can also choose to synthesize special effects, which are used to synthesize three video clips in a specific synthesizing manner.
  • the mobile phone detects the operation of the "motion effect” button, it displays an interface as shown in (b) in FIG. 9F, which includes a variety of synthetic special effects. For example, suppose the user selects "Fusion" and the mobile phone detects that the user clicks the "OK" button.
  • the way to synthesize segment 1 and segment 2 is to merge the last frame of segment 1 with the first frame of segment 2.
  • the corresponding synthesis effect is: after the mobile phone plays the second-to-last frame of segment 1, it plays the last frame of segment 1, which is the image obtained by fusing it with the first frame of segment 2, and then continues playing from the second frame of segment 2.
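A minimal sketch of the "fusion" transition described above, assuming frames are flat lists of pixel intensities and a 50/50 blend weight; the exact frame bookkeeping and weighting in the application may differ, so this is an illustration rather than the application's method.

```python
def blend(frame_a, frame_b, alpha=0.5):
    """Per-pixel weighted average of two equally sized frames."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(frame_a, frame_b)]

def fuse_segments(seg1, seg2, alpha=0.5):
    """Join two segments: segment 1's last frame is replaced by its blend
    with segment 2's first frame, and playback then continues from
    segment 2's second frame, as in the described fusion effect."""
    fused = blend(seg1[-1], seg2[0], alpha)
    return seg1[:-1] + [fused] + seg2[1:]
```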
  • the mobile phone displays an interface as shown in (c) in Figure 9F for displaying the composite effect.
  • the mobile phone detects that the "Save Settings” button is clicked, it saves the composite effect of "fusion”, and then returns to the interface shown in (a) in Figure 9F.
  • when the mobile phone detects the operation of clicking the "OK" button on this interface, it uses the saved synthesis effect to synthesize segment 1 to segment 3.
  • the original video and the synthesized video can be stored together.
  • when the mobile phone detects the operation of the "OK" button, it can store the video synthesized from segment 1 to segment 3, and it can also store the original video.
  • the so-called original video can be understood as the video recorded without using any lens movement mode.
  • for example, the original video is a video composed of the full images of the mth frame to the (m+3)th frame, rather than of the image blocks in the mth area to the (m+3)th area.
  • the video shot using the lens movement mode, by contrast, is composed of image blocks.
  • it is an interface of a gallery application in a mobile phone.
  • two videos are stored in the interface: one is a video shot using the lens movement mode, and the other is a video shot without using the lens movement mode.
  • a mark 1001 may be displayed on the video shot using the lens movement mode.
  • the mobile phone can also store each of the segments 1 to 3 separately.
  • the "+” button can also be displayed on the homepage.
  • When the mobile phone detects an operation on this button, it can display the interface shown in (b) in Figure 11A, which is a gallery interface of the mobile phone, and the user can select the video clips stored in the gallery.
  • the user selects the segment 1101
  • the mobile phone detects that the user clicks the "add” button, it opens the interface shown in (c) of FIG. 11A, and a new template is added to the interface.
  • the user can set the name of the new template.
  • the mobile phone detects an operation for a new template (for example, a long-press operation) and displays a named button.
  • the mobile phone detects an operation of clicking the named button, it names the new template.
  • After the mobile phone adds a custom template, it can analyze the lens movement modes of the template.
  • The lens movement modes corresponding to the custom template are then used to shoot the video. That is, the user can use the template to shoot a video with an effect similar to that of the template. For example, if a user intercepts a segment from a movie as a custom template, the video shot using this template can have an effect similar to that of the movie. In this way, even non-professional photographers can obtain high-quality footage, which improves the user experience.
  • templates that users do not like or templates that are not commonly used can be deleted.
  • both the default template and the custom template can be deleted, or, only the custom template can be deleted, and the default template cannot be deleted.
  • the mobile phone detects an operation for a new template (for example, a long-press operation) and displays a delete button.
  • the mobile phone detects the operation of clicking the delete button, the template is deleted.
  • FIG. 11A takes as an example the case where the custom template is a video local to the mobile phone. It is understandable that the custom template can also be set in other ways. For example, taking (a) in Figure 11A as an example, when the mobile phone detects an operation on the "+" button, it displays the interface shown in Figure 11C, which includes a list of lens movement modes, and the user can select multiple lens movement modes from the list to form a customized story template.
  • an embodiment of the present application provides a video shooting method. As shown in Figure 12, the method may include the following steps:
  • S1201 start the camera function.
  • the mobile phone detects an operation for opening an application and starts the camera application.
  • the operation may be the operation of clicking the camera icon in (a) in FIG. 4, of course, it may also be other operations, as long as the camera application can be opened, and the embodiment of the present application does not limit the type of operation.
  • S1202: In response to the user's first operation, determine a first video recording template, where the first video recording template includes a first example sample, a second example sample, and preset audio; the first example sample corresponds to a first lens movement mode, and the second example sample corresponds to a second lens movement mode, where the first lens movement mode and the second lens movement mode are different.
  • the first video recording template may be, for example, the travel template, the quiet template, etc. in FIG. 7A.
  • the first recording template can also be a default template or a user-defined template, as shown in Fig. 11A.
  • The first operation can be one operation or multiple operations. Assuming the first operation is a single operation: for example, after the mobile phone starts the camera application, the viewfinder interface shown in (b) in Figure 4 is displayed, and the first operation can be the operation of clicking the button of the first video recording template, a voice instruction indicating the first video recording template, or the like, which is not limited in this embodiment of the present application. Assuming the first operation includes multiple operations: for example, the first operation includes the operation of clicking the micro-movie icon in FIG. 5A, the operation of clicking the travel template in (a) in FIG. 6, and the operation of clicking the control 603.
  • For the first example sample (also called the first video sample), the second example sample (also called the second video sample), and the preset audio, please refer to the foregoing description.
  • S1203 Display a video recording interface, where the video recording interface includes a first mirror movement mode identifier and a second mirror movement mode identifier;
  • the video recording interface may be, for example, the interface shown in FIG. 7A.
  • Method 1: in response to the operation of clicking the record button (such as the button 705 in FIG. 7A), that is, the second operation, recording starts. Specifically, the first video clip is first recorded in the first lens movement mode, and after that recording is completed, the second video clip is automatically recorded according to the second lens movement mode.
  • Manner 2: when the first lens movement mode identifier is selected, in response to the user's shooting instruction, the first video clip is generated according to the first lens movement mode, and the duration of the first video clip is the first preset duration; when the second lens movement mode identifier is selected, in response to the user's shooting instruction, the second video clip is generated according to the second lens movement mode, and the duration of the second video clip is the second preset duration. In other words, for each lens movement mode, the user can control when to start and/or stop recording.
  • S1205 Automatically generate a composite video, the composite video includes a first video segment, a second video segment, and the preset audio, and the first video segment is generated by the electronic device according to the first mirror movement mode A video segment, and the second video segment is a video segment generated by the electronic device according to the second mirror movement mode.
  • the recording method can be the above method 1 or the method 2.
  • After the second video segment is recorded, the video can be synthesized automatically.
  • Another way is to display a display interface before automatically generating the composite video, where the display interface includes the first video segment and the second video segment, and the composite video is generated in response to a video synthesis instruction input by the user.
  • When the first video clip is generated according to the first mirror movement mode, the recording interface also displays a countdown for generating the first video clip according to the first mirror movement mode; when the second video clip is generated according to the second mirror movement mode, the recording interface also displays a countdown for generating the second video clip according to the second mirror movement mode.
  • the user can also delete the lens movement mode identification.
  • For example, the mobile phone displays a video recording interface that includes a first mirror movement mode identifier and a second mirror movement mode identifier; in response to the user's third operation, the first mirror movement mode identifier or the second mirror movement mode identifier is deleted; in response to the user's fourth operation, the position of the electronic device is kept still and recording starts;
  • a composite video is automatically generated, and the composite video includes a video clip generated by the electronic device according to the undeleted mirror movement mode and the preset audio. For example, if the first mirror movement mode identifier is deleted, the first mirror movement mode is deleted, and then the electronic device starts recording, and only generates a second video segment according to the second mirror movement mode, and does not need to synthesize video with other video segments.
  • the user can also add a lens movement mode identification.
  • the electronic device displays a video interface, and the video interface includes a first mirror movement mode identifier and a second mirror movement mode identifier; in response to the user's third operation, the third mirror movement mode identifier is added to the video interface, so The third lens movement mode identifier is used to indicate the third lens movement mode; in response to the user's fourth operation, keep the position of the electronic device still and start recording; automatically generate a composite video, the composite video includes the first A video segment, the second video segment, the third video segment, and the preset audio, where the third video segment is a video segment generated by the electronic device according to the third lens moving mode.
  • the user can also adjust the sequence of the lens movement mode identification.
  • the electronic device displays a video interface, and the video interface includes a first mirror movement mode identifier and a second mirror movement mode identifier; in response to the user's third operation, the display order of the first mirror movement mode identifier and the second mirror movement mode identifier is adjusted to a first order; in response to the user's fourth operation, the position of the electronic device is kept still and recording starts; a composite video is automatically generated, in which the playback order of the first video segment and the second video segment is the first order.
  • the first sample sample and/or the second sample sample are displayed in the video recording interface.
  • the recording interface may be the interface described in (b) in FIG. 8D, and the example fragments may be displayed in a picture-in-picture manner in the viewfinder interface.
  • For details, please refer to the description of (b) in FIG. 8D above.
  • the electronic device may also, in response to the fourth operation, delete the first video segment or the second video segment; or add a local third video segment to the composite video; or adjust the playback order of the first video segment or the second video segment in the composite video.
  • the first recording template is a default template or a user-defined template. For example, see Figure 11A above.
  • the electronic device may also automatically store the first video segment and the second video segment, and the composite video, for example, see the foregoing description of FIG. 10.
  • the electronic device may also change the audio in the composite video in response to a specific operation, or add text and/or pictures to the composite video.
  • The above describes how the micro-movie mode achieves the combined use of multiple lens movement modes.
  • Embodiment 2 provides another video shooting method, in which the mobile phone uses a single lens movement mode for shooting.
  • FIG. 13 shows a graphical user interface (GUI) of the mobile phone, and the GUI is the desktop 401 of the mobile phone.
  • When the mobile phone detects that the user clicks the icon 402 of the camera application on the desktop 401, it can start the camera application, start a normal wide-angle camera (such as a rear camera), and display another GUI as shown in (b) in Figure 13, which may be called the viewfinder interface 1303.
  • the viewfinder interface 1303 is a viewfinder interface in a video mode (normal video mode).
  • If the mobile phone displays the viewfinder interface of the camera (photo) mode by default after detecting that the user clicks the icon 1302, the user can input an operation such as sliding in the area 1304 (the area in the dashed box) in (b) in Figure 13 to select the video mode, and the mobile phone then displays the viewfinder interface of the video mode.
  • the viewfinder interface 1303 includes a preview image.
  • the viewfinder interface 1303 may also include a control 1305 for indicating a skin beautification mode, a control 1306 for indicating a skin beautification level, and a video recording control 1307.
  • When the mobile phone detects that the user clicks the video recording control 1307, the mobile phone starts to record the video.
  • the embodiments of the present application provide multiple recording modes, such as a normal recording mode and two mirror movement modes (for example, including shift mode and shake mode).
  • the user can instruct the mobile phone to use a certain mirror movement mode.
  • In different recording modes, the processing flow of the mobile phone is different.
  • The mobile phone entering the shake mode can be understood as the mobile phone processing based on the processing flow corresponding to the shake mode, and the mobile phone entering the shift mode can be understood as the mobile phone processing based on the processing flow corresponding to the shift mode.
  • the mobile phone starts the camera application and enters the normal video recording mode by default. After the user instructs a certain lens movement mode, it enters the corresponding lens movement mode. Or, after the mobile phone starts the camera application, it enters a certain lens movement mode by default, for example, the lens movement mode used when the camera application was used last time. Suppose that after the mobile phone starts the camera application, it enters the shift mode by default and can start the ultra-wide-angle camera. An image block on the image collected on the ultra-wide-angle camera is displayed in the viewfinder interface, for example, it may be an image block at the center position.
  • Method 1 referring to (a) in Figure 14, the mobile phone is currently in the normal recording mode.
  • the viewfinder interface 1303 displays a control 1308 for indicating the lens movement mode.
  • a GUI as shown in (b) of FIG. 14 is displayed, and a selection box 1309 is displayed in the GUI.
  • the selection box 1309 includes options for "shaking mode” and "shifting mode".
  • The viewfinder interface may display the control 1308 for indicating the lens movement mode by default, or the viewfinder interface may display the control 1308 only after the user sets a shortcut for the lens movement mode.
  • the user can set the shortcut of the lens movement mode through the setting menu in the camera application.
  • The display position of the control 1308 for indicating the lens movement mode in the viewfinder interface 1303 is not limited in this embodiment; alternatively, the user can customize the display position of the control 1308, or the display position of the control 1308 can be adjusted adaptively according to whether the mobile phone is in landscape or portrait orientation.
  • the form of the control 1308 for indicating the lens movement mode may adopt a form that does not block the preview image as much as possible, such as a transparent or semi-transparent form.
  • In Method 1, the control 1308 of the lens movement mode is visually presented in the viewfinder interface, which is convenient for the user to operate and provides a better user experience.
  • Method 2 refer to Figure 15 (a), the mobile phone is currently in the normal recording mode.
  • the viewfinder interface 1303 also includes a "more" control 1310.
  • a "more” control 1310 When the mobile phone detects the operation for selecting the "more” control 1310, another GUI as shown in (b) in Figure 15 is displayed.
  • The GUI displays icons corresponding to various shooting modes, including a "shake mode" icon and a "shift mode" icon.
  • the viewfinder interface may not display the control 1308 for indicating the lens movement mode, which can avoid blocking the preview image in the viewfinder interface.
  • FIG. 16 shows a schematic diagram of the GUI when the mobile phone enters the shift mode.
  • prompt information 1312 may be displayed in the viewfinder interface, and the prompt information 1312 is used to indicate that the mobile phone is currently in the shift mode.
  • the prompt information 1312 may be displayed in a manner that does not block the preview image as much as possible, such as translucent or transparent.
  • a direction control 1311 is also displayed in the viewfinder interface. The user can input information for indicating the moving direction of the image through the direction control 1311. It should be understood that the display position of the direction control 1311 in the viewfinder interface is not limited in the embodiment of the present application. For example, it is displayed in the position shown in FIG. 16 by default, or the user can adjust its display position.
  • the mobile phone uses the first wide-angle camera (for example, a normal wide-angle camera) in the normal video recording mode.
  • the second wide-angle camera such as an ultra-wide-angle camera
  • the viewing angle of the first wide-angle camera is smaller than the viewing angle of the second wide-angle camera.
  • the viewfinder interface displays a first preview image, which is the first image block in the first area on the image collected by the ultra-wide-angle camera. It is understandable that after the mobile phone enters the shift mode, the ultra-wide-angle camera is started, and the first preview image may be the first image block in the first area on the first frame of image collected by the ultra-wide-angle camera.
  • the first image block may be an image block corresponding to the first preview image on the image captured by the ultra-wide-angle camera.
  • the first image block is all or part of the overlap between the image captured by the ultra-wide-angle camera and the image captured by the ordinary wide-angle camera.
  • FIG. 17 shows a schematic diagram of the first area on the image collected by the ultra-wide-angle camera. The first area may be all or part of the area where the viewing angle ranges of the ultra-wide-angle camera and the ordinary wide-angle camera overlap. Comparing FIG. 17 and FIG. 18, it can be seen that after the mobile phone in FIG. 17 enters the shift mode, the preview image is the first image block in the first area in FIG. 18. After the mobile phone is switched from the normal recording mode to the shift mode, the normal wide-angle camera can be turned off or kept on.
  • the mobile phone switches from the normal recording mode to the shift mode, and the preview image remains unchanged.
  • the fact that the preview image remains unchanged can be understood as the preview image is not reduced or enlarged after switching to the shift mode.
  • the magnification of the preview image in the shift mode is the same as the magnification of the preview image in the normal video mode, for example, both are 1 times. Therefore, after the mobile phone is switched from the normal video mode to the shift mode, the user will not perceive that the preview image is suddenly enlarged or reduced.
  • the preview image can be changed in the normal video mode and the shift mode.
  • the preview image change can be understood as the preview image is reduced or enlarged after switching to the shift mode.
  • the magnification of the preview image in the normal video mode is 1x
  • the magnification of the preview image in the shift mode is 5 times, that is, when the normal video mode is switched to the shift mode, the preview image is enlarged. It is understandable that when the image magnification increases after switching to the shift mode, the position movement range of the first area on the image collected by the ultra-wide-angle camera increases, which can achieve the shooting effect of shifting the lens in a wider range.
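To see why a larger magnification widens the range over which the first area can move, consider a simplified model (an assumption for illustration only, not the patent's optics) in which an N-times magnification displays a crop whose sides are 1/N of the full ultra-wide frame; the pan range is then the slack between the crop and the frame edges:

```python
def pan_range(full_width, full_height, magnification):
    # Simplified model (assumed, not the patent's exact optics):
    # an N-x magnification shows a crop of 1/N of each full-frame side.
    crop_w = full_width / magnification
    crop_h = full_height / magnification
    # Slack available for moving the crop window across the full frame.
    return full_width - crop_w, full_height - crop_h

# At 5x magnification the crop can travel much farther than at 2x.
print(pan_range(4000, 3000, 5))  # (3200.0, 2400.0)
print(pan_range(4000, 3000, 2))  # (2000.0, 1500.0)
```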
  • the following embodiment introduces the process of implementing image translation by the mobile phone in the shift mode.
  • the ultra-wide-angle camera collects N frames of images. For example, the first frame, the second frame, the m-1th frame, and so on. It is understandable that after the mobile phone enters the shift mode, the ultra-wide-angle camera is activated, so the first preview image after entering the shift mode may be the first image block in the first area on the first frame of image collected by the ultra-wide-angle camera. Assuming that the mobile phone has not detected the image movement instruction, the mobile phone determines the second area on the second frame image, and the position of the second area relative to the first area does not move, and the preview image is refreshed from the image block in the first area to the second area Image block.
  • the position of the m-1th area on the m-1th frame image relative to the first area does not move, where m may be an integer greater than or equal to 3.
  • the preview image is refreshed to the image block in the m-1th area. That is to say, during the period from the first frame of image to the m-1th frame of image, the mobile phone does not detect the image movement instruction, so the position of the preview image on the image does not change.
  • the mobile phone detects an image right shift instruction.
  • the mobile phone determines the m-th area on the m-th frame image, and the position of the m-th area is shifted to the right by a distance A relative to the position of the m-1th area.
  • the distance from the m-1th area to the left edge of the image is H
  • the distance from the mth area to the left edge of the image is H+A.
  • the preview image in the viewfinder interface is refreshed from the image block in the m-1 area to the image block in the m area, that is, the position of the preview image on the image is shifted to the right by a distance A.
  • the mobile phone determines the m+1th area on the m+1th frame image, and the position of the m+1th area is shifted to the right by a distance B relative to the position of the mth area.
  • the distance from the m-th area to the left edge of the image is H+A
  • the distance from the m+1-th area to the left edge of the image is H+A+B. Therefore, the preview image in the viewfinder interface is refreshed from the image block in the m-th area to the image block in the m+1-th area, that is, the position of the preview image on the image is shifted to the right by a distance B.
  • the mobile phone determines the m+2th area on the m+2th frame image, and the position of the m+2th area is shifted to the right by a distance C relative to the position of the m+1th area.
  • the distance from the m+1th area to the left edge of the image is H+A+B
  • the distance from the m+2th area to the left edge of the image is H+A+B+C. Therefore, the preview image in the viewfinder interface is refreshed from the image block in the m+1th area to the image block in the m+2th area, that is, the position of the preview image on the image is shifted to the right by a distance C. Therefore, the position of the preview image on the image gradually shifts to the right.
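The walkthrough above (left-edge distances H, H+A, H+A+B, H+A+B+C) amounts to a cumulative sum of per-refresh shifts; the function name and values below are illustrative only:

```python
def region_left_edges(h, shifts):
    # h: distance from the starting area to the image's left edge (H above).
    # shifts: per-refresh right shifts (A, B, C above).
    edges = [h]
    for s in shifts:
        edges.append(edges[-1] + s)
    return edges

# H = 100, A = 10, B = 20, C = 30 (hypothetical pixel values).
print(region_left_edges(100, [10, 20, 30]))  # [100, 110, 130, 160]
```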
  • the position of the m+3 area has not changed relative to the position of the m+2 area.
  • the preview image is refreshed from the image block in the m+2 area to the image block in the m+3 area, that is, the position of the preview image on the image does not change, and it stops moving to the right.
  • the preview image is refreshed from the image block in the m+3 area to the image block in the m+4 area, and the position of the preview image on the image remains unchanged until the image movement instruction is detected again and moved again.
  • That is to say, after the mobile phone detects the instruction to move the image to the right, each refreshed preview image is shifted to the right by the same distance L relative to the position of the previous preview frame on the image, until the stop-movement instruction is detected, realizing the shooting effect of the image moving right at a uniform speed.
  • In Example 2, the m+2-th area moves to the right by a distance of 3L relative to the m+1-th area, that is, the m+2-th area accelerates to the right relative to the m+1-th area. Therefore, after the mobile phone detects the image right-shift instruction, each refreshed preview image moves to the right faster than the previous preview frame, realizing the shooting effect of the image accelerating to the right.
  • In Example 3, the m-th area is moved to the right by a distance of 2L relative to the m-1-th area; the m+1-th area is moved to the right by a distance L relative to the m-th area, that is, the m+1-th area decelerates while moving right relative to the m-th area; and the m+2-th area is moved to the right by a distance of 0 relative to the m+1-th area, that is, the m+2-th area decelerates to a stop relative to the m+1-th area. Therefore, after the mobile phone detects the image right-shift instruction, each refreshed preview image moves to the right more slowly than the previous preview frame, with the speed even dropping to 0, realizing the shooting effect of the image decelerating while shifting to the right.
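The three examples differ only in the per-frame shift distances (a constant L; then L, 2L, 3L; then 2L, L, 0). A minimal sketch of the three shift sequences, with assumed names:

```python
def shift_sequence(mode, L, frames):
    # Per-refresh right-shift distances for the three examples above.
    if mode == "uniform":       # Example 1: same distance L every frame
        return [L] * frames
    if mode == "accelerate":    # Example 2: L, 2L, 3L, ...
        return [L * (k + 1) for k in range(frames)]
    if mode == "decelerate":    # Example 3: ..., 2L, L, 0
        return [L * (frames - 1 - k) for k in range(frames)]
    raise ValueError(mode)

print(shift_sequence("uniform", 5, 3))     # [5, 5, 5]
print(shift_sequence("accelerate", 5, 3))  # [5, 10, 15]
print(shift_sequence("decelerate", 5, 3))  # [10, 5, 0]
```

The mobile phone could switch between these sequences when an accelerate or decelerate instruction is detected, as the following paragraph describes.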
  • In some embodiments, after the mobile phone detects an instruction for indicating the image movement direction, it uses one of the methods in Example 1, Example 2, or Example 3 above by default. Alternatively, the method of Example 1 is used by default; when an instruction to accelerate the movement is detected, the method of Example 2 above is used; and when an instruction to decelerate the movement is detected, the method of Example 3 above is used.
  • the ultra-wide-angle camera collects images frame by frame, assuming that N frames are collected, and the preview image is sequentially refreshed with image blocks in the target area on each frame of the N frame images. It can also be understood that the mobile phone does not perform frame extraction or interpolation processing on the N frames of images collected by the ultra-wide-angle camera, but refreshes the preview image sequentially into image blocks in the target area of each frame of the N frame image, which helps to improve The continuity and fluency of the preview image.
  • the ultra-wide-angle camera collects N frames of images, and the mobile phone extracts M frames of images from the N frames of images collected by the ultra-wide-angle camera, where M is an integer less than N, and is refreshed by the image block in the target area on each frame of the M frame image Preview the image, you can achieve a fast refresh (or called playback) effect.
  • Example 1 as shown in FIG. 19A, the ultra-wide-angle camera collects N frames of images, for example, the first frame, the second frame, and the m-1th frame of images are sequentially collected.
  • When the mobile phone detects the image right-shift instruction, it determines the m-th frame image in the N frames of images and the target area on each subsequent frame image.
  • The mobile phone then starts frame extraction: starting from the m-th frame, it extracts the m-th frame, the m+i-th frame, the m+i+j-th frame, and so on.
  • The m-th area on the m-th frame image is shifted to the right by a distance L relative to the m-1-th area; the m+i-th area on the m+i-th frame image is shifted to the right by a distance i·L relative to the m-th area; and the m+i+j-th area on the m+i+j-th frame image is shifted to the right by a distance j·L relative to the m+i-th area.
  • the mobile phone detects an instruction to stop the image from moving.
  • The mobile phone continues to refresh the preview image with the image blocks in the target areas on the m+i+j+1-th frame image, the m+i+j+2-th frame image, and so on. That is, after the mobile phone detects the instruction to stop the image movement, it no longer refreshes by frame extraction, and the positions of the target areas on images such as the m+i+j+1-th and m+i+j+2-th frame images do not change relative to the position of the m+i+j-th area on the image, that is, the movement stops.
  • In short, after the mobile phone detects an instruction indicating the image movement direction, it refreshes the preview image by frame extraction, and the position of the preview image on the image gradually moves in the indicated direction; once the instruction to stop the image movement is detected, frame extraction is no longer used, and the position of the preview image on the image no longer moves.
  • the values of i and j can be understood as the frame sampling interval, and different values of i and j can achieve different shooting effects.
  • the m+4th area is moved 2L to the right relative to the m+2th area, that is, the preview image refreshed each time is moved 2L to the right relative to the position of the previous frame of the preview image on the image.
  • Because FIG. 19A uses frame extraction, the position of the preview image on the image can move right uniformly at a faster speed (shifting 2L to the right each time). Moreover, frame extraction achieves the effect of fast refresh. In other words, while the preview image is refreshed at a faster speed, the position of the preview image on the image also moves to the right at a faster speed.
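Frame extraction with a fixed sampling interval can be sketched as a simple slice; the names and values here are illustrative assumptions:

```python
def extract_frames(frames, interval):
    # Keep every `interval`-th frame starting from the first one kept,
    # modelling the fixed sampling interval (i == j) case.
    return frames[::interval]

# Crop offsets grow by L per captured frame; with interval 2 the kept
# frames differ by 2L, so the preview pans right faster while refreshing.
L = 3
offsets = [k * L for k in range(10)]
kept = extract_frames(offsets, 2)
print(kept)                    # [0, 6, 12, 18, 24]
steps = [b - a for a, b in zip(kept, kept[1:])]
print(steps)                   # [6, 6, 6, 6]  i.e. 2L per refresh
```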
  • the frame sampling interval can also be different, that is, i is not equal to j.
  • Because of frame extraction, the effect of rapid refresh can still be achieved, that is, the preview image is refreshed at a faster speed while the position of the preview image on the image moves to the right.
  • the mobile phone determines the target area in the manner of A&lt;B&lt;C&lt;D in Figure 18, that is, while the preview image is refreshed at a faster speed, the position of the preview image on the image accelerates to the right.
  • the mobile phone determines the target area in the manner of A&gt;B&gt;C&gt;D, that is, while the preview image is refreshed at a faster speed, the position of the preview image on the image decelerates while moving to the right.
  • the recorded video can be composed of the image block in the m-th area, the image block in the m+i-th area, the image block in the m+i+j-th area, and so on.
  • the ultra-wide-angle camera collects N frames of images.
  • the mobile phone inserts multiple frames of images into the N frames of images to obtain M frames of images, where M is an integer greater than N, and the preview image is refreshed sequentially through the M frames of images. Because the number of images increases, the effect of slow refresh (or playback) can be achieved.
  • the image acquisition frame rate of the ultra-wide-angle camera is 240 fps, that is, 240 frames of images are acquired per second.
  • the image refresh (or playback) frame rate of the mobile phone is 30 fps, that is, 30 frames per second
  • the 240 frames of images need to be refreshed in 8 seconds.
  • the mobile phone inserts 120 frames of images into the 240 frames of images to obtain 360 frames of images, it only takes 12 seconds to complete the refresh, that is, the effect of slow refresh is realized.
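The timing arithmetic in this example (240 captured frames played back at 30 fps take 8 s; inserting 120 frames stretches playback to 12 s) is simply total frames divided by the playback frame rate:

```python
def refresh_seconds(captured_frames, inserted_frames, playback_fps):
    # Playback duration = total frame count / playback frame rate.
    return (captured_frames + inserted_frames) / playback_fps

print(refresh_seconds(240, 0, 30))    # 8.0  (no insertion)
print(refresh_seconds(240, 120, 30))  # 12.0 (slow-refresh effect)
```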
  • in Example 1, as shown in FIG. 19C, the ultra-wide-angle camera collects N frames of images, for example, the first frame, the second frame, and so on up to the (m-1)-th frame are collected in sequence.
  • the mobile phone detects the image right shift instruction, and the mobile phone determines the target area on the m-th frame image and each subsequent frame.
  • the position of the target area on each subsequent frame of image is shifted to the right by the same distance L relative to the position of the target area on the previous frame of image.
  • the mobile phone starts the frame insertion process, assuming that P frames of image are inserted between the m-th frame and the (m+1)-th frame (the inserted images are represented by dashed lines), and Q frames of image are inserted between the (m+1)-th frame and the (m+2)-th frame.
  • P and Q can be the same or different.
  • the mobile phone can determine the P-th area on the P-th frame image (that is, a frame of image inserted between the m-th frame and the m+1-th frame).
  • the P-th area moves to the right by a distance X relative to the m-th area.
  • X may be a value in the range of L to 2L, such as 1.5L.
  • the mobile phone determines the Q-th area on the Q-th frame image (that is, a frame of image inserted between the m+1-th frame and the m+2th frame), and the Q-th area moves to the right by a distance Y relative to the m+1-th area.
  • the value of Y is not limited in the embodiment of the present application.
  • Y may be a value in the range of 2L to 3L, such as 2.5L.
  • the frame insertion refresh can achieve the effect of slow refresh of the preview image, that is, while the preview image is refreshed at a slower speed, the position of the preview image on the image moves to the right uniformly at a lower speed.
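One way to read Example 1 is that each inserted frame is assigned a target-area offset lying between the offsets of its two neighbouring original frames, so the preview keeps moving smoothly while more frames are shown. A hypothetical sketch (the linear interpolation rule and all values are our own illustration, not the patent's exact X and Y):

```python
# Hypothetical sketch of frame insertion: inserted frames receive
# target-area offsets interpolated between those of the neighbouring
# original frames, so the preview moves smoothly at a slower refresh.

def insert_offsets(original_offsets, per_gap):
    """Interpolate `per_gap` extra offsets between consecutive originals."""
    result = []
    for a, b in zip(original_offsets, original_offsets[1:]):
        result.append(a)
        step = (b - a) / (per_gap + 1)
        result.extend(a + step * (k + 1) for k in range(per_gap))
    result.append(original_offsets[-1])
    return result

L = 10
originals = [0, L, 2 * L, 3 * L]       # shift of L per original frame
with_inserted = insert_offsets(originals, per_gap=1)
```

With one frame inserted per gap, the per-refresh shift halves while the overall trajectory is unchanged.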
  • when the mobile phone detects the instruction to stop the image movement, it determines the image block in the (m+3)-th area on the (m+3)-th frame image, the image block in the (m+4)-th area on the (m+4)-th frame image, and so on. That is to say, after the mobile phone detects the image stop-moving instruction, it no longer uses the frame insertion method, and the position of the preview image on the image no longer moves.
  • in Example 2, unlike Example 1 in which the mobile phone first determines the target area on each frame of image and then performs the frame insertion processing, the mobile phone can insert the frames first, and then determine the target area.
  • in FIG. 19D, suppose that before the preview image is refreshed to the m-th frame image, the mobile phone detects the image right shift instruction, and the mobile phone starts the frame insertion processing from the m-th frame image to obtain M frames of image. The number of image frames inserted between two adjacent frames of image is not limited. Taking the insertion of one frame between two adjacent frames as an example, as shown in FIG. 19D, the dashed images are the inserted images.
  • the method for determining the target area on the M frame image by the mobile phone can be referred to the description of FIG.
  • the effect of slow refresh can be achieved due to the frame insertion refresh, that is, the preview image is refreshed at a slower speed while the position of the preview image on the image moves to the right at a uniform speed.
  • the mobile phone determines the target area in the manner of A&lt;B&lt;C&lt;D, that is, while the preview image is refreshed at a slower speed, the position of the preview image on the image accelerates to the right.
  • the mobile phone determines the current area in the manner of A>B>C>D, that is, while the preview image is refreshed at a slower speed, the position of the preview image on the image decelerates and moves to the right.
  • the target area can be determined using the method of A&lt;B&lt;C&lt;D in the first manner above, or the frame extraction method in Example 1 of the second manner described above can be used to achieve accelerated movement.
  • if the mobile phone detects a deceleration movement command, the current area can be determined using the A&gt;B&gt;C&gt;D method in the first manner above, or the frame insertion method in Example 1 of the third manner described above can be used to achieve decelerated movement.
  • the sides can be aligned.
  • the sides of the first area and the second area are aligned; for example, the distance between the bottom edge of the first area and the bottom edge of the first frame of image is the same as the distance between the bottom edge of the second area and the bottom edge of the second frame of image, so as to ensure stable display of the preview image as much as possible.
  • jitter occurs when the user holds the mobile phone.
  • the mobile phone performs anti-shake processing on the images collected by the ultra-wide-angle camera, and determines the first area, the second area, the third area, and the other target areas on the images after the anti-shake processing.
  • the anti-shake processing may be anti-shake cutting.
  • the mobile phone can crop the edges of each frame of images collected by the ultra-wide-angle camera, and the specific cropping area is not limited in this embodiment of the application.
  • the mobile phone determines the first area on the remaining image after cropping the first frame of image, determines the second area on the remaining image after cropping the second frame of image, determines the third area on the remaining image after cropping the third frame of image, and so on.
  • the mobile phone can determine the first area on the first frame of image, and perform anti-shake cropping on the first image block in the first area, for example, crop the edges of the first image block, and the preview image displays the remaining image of the first image block after the edge cropping.
  • the mobile phone can determine the second area on the second frame of image, and perform anti-shake cropping on the second image block in the second area, and so on. In other words, the mobile phone first determines the image block in the target area on each frame of image, and then performs anti-shake cropping processing.
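The two anti-shake orderings above (crop the whole frame first and then select the target area, or select the target area first and then crop the block's edges) can be contrasted with a toy 1-D sketch; all sizes and names are illustrative, not from the patent:

```python
# Sketch contrasting the two anti-shake orderings described above,
# using 1-D pixel rows in place of images.

def crop_edges(row, margin):
    """Anti-shake crop: trim `margin` pixels from each edge."""
    return row[margin:len(row) - margin]

def select(row, x, width):
    """Select the target area [x, x+width)."""
    return row[x:x + width]

frame = list(range(100))

# (a) anti-shake crop the whole frame, then select the target area
stabilized = crop_edges(frame, margin=5)
preview_a = select(stabilized, x=10, width=40)

# (b) select a slightly larger target area, then anti-shake crop the block
block = select(frame, x=10, width=50)
preview_b = crop_edges(block, margin=5)
```

With these sizes the two pipelines yield the same preview block; in practice the choice affects where the stabilization margin is spent.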
  • the viewfinder interface also includes a direction control 1311.
  • the direction control 1311 may include four arrows distributed around the video control 1307.
  • when the mobile phone detects that the user has triggered (for example, clicked) an arrow, the mobile phone starts to move the first area based on the direction indicated by the arrow. For example, if the user clicks the right arrow, the position of the preview image on the image starts to move to the right.
  • when the mobile phone detects the user's click operation at any position in the viewfinder interface, it stops moving; or, when it detects that the right arrow is clicked again, it stops moving; or, after a certain period of time, it stops moving automatically.
  • in Example 2, referring to FIG. 16, when the mobile phone detects that the user has pressed a certain arrow for a preset time period, it starts to move the first area based on the direction indicated by the arrow. When the mobile phone detects that the long-press operation bounces up, the movement is stopped.
  • in Example 3, referring to FIG. 16, when the mobile phone detects that the user presses the video control 1307 and drags it in a certain direction (which can also be called sliding), the mobile phone starts to move the first area according to the direction indicated by the drag operation. For example, if the user presses the video control 1307 and drags it to the right, the mobile phone moves the first area to the right. When the mobile phone detects that the drag operation bounces up, it stops moving. When the mobile phone detects that the finger presses the video control 1307 and bounces up at the pressed position, it determines to start recording. When the mobile phone detects that the finger presses the video control 1307 and drags it to a non-pressed position, it determines the direction of the drag operation, that is, the image movement direction, and moves the first area based on this direction.
  • in Example 4, referring to FIG. 16, when the mobile phone detects the user's sliding operation on the screen (for example, on the preview image), it starts to move the first area according to the sliding direction of the sliding operation. When the mobile phone detects that the sliding operation stops, it stops moving the first area.
  • the stop of the sliding operation can be understood as the user's finger sliding from point A to point B and staying at point B, or the user's finger bouncing up after sliding from point A to point B. It should be understood that in this case, the direction control 1311 may not be displayed in the viewfinder interface.
  • in Example 5, the user inputs the direction of image movement through voice commands.
  • the user can click anywhere on the viewfinder interface, or indicate the end of the movement through a voice command. It should be understood that in this case, the direction control 1311 may not be displayed in the viewfinder interface.
  • the direction of the image movement can also be input through a keyboard, a touch pad, and the like.
  • the acceleration movement instruction or deceleration movement instruction mentioned above can be obtained in the following manner.
  • a mark 1320 for indicating the speed of movement is displayed on the viewfinder interface.
  • the moving speed here can be understood as the amount of change in the position of the preview image on the image.
  • the identifier 1320 defaults to a value of 1X.
  • 1X can be understood as 1 times the above L.
  • a speed adjustment bar 1321 is displayed.
  • the maximum speed provided by the speed adjustment bar 1321 in FIG. 16 is 3X and the minimum speed is 0.5X, which is not limited in the embodiment of the present application.
  • 1X, 2X, 3X, etc. can be understood as multiples of the aforementioned L, 3X is 3 times of L, and 2X is 2 times of L.
  • the user selects the speed by sliding on the speed adjustment bar 1321. Assuming that the user selects a speed of 2X, the indicator 1320 displays a value of 2X.
  • the mobile phone can also set the speed in other ways, such as through the volume buttons. For example, if the mobile phone detects that the volume up button is triggered, it increases the speed, and if it detects that the volume down button is triggered, it decreases the speed. It should be noted that the embodiment of the present application does not limit the form in which the mobile phone sets the moving speed. For example, it is also feasible to provide three speed grade options of low speed, medium speed, and high speed on the viewfinder interface for the user to choose.
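Mapping the selected multiplier to the per-frame shift of the target area is straightforward arithmetic; a sketch with an illustrative base shift L (the function name and value are ours):

```python
# Sketch: convert the user-selected speed (0.5X, 1X, 2X, 3X) into the
# per-frame shift of the target area, as a multiple of the base shift L.
L = 8  # base per-frame shift in pixels (illustrative value)

def per_frame_shift(multiplier, base=L):
    """Shift applied to the target area's position at each refresh."""
    return multiplier * base

shifts = {m: per_frame_shift(m) for m in (0.5, 1, 2, 3)}
```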
  • FIG. 20 is a schematic diagram of the effect of the viewfinder interface when the mobile phone is in the shift mode according to an embodiment of this application.
  • a preview image 1 is displayed in the viewfinder interface.
  • the mobile phone stays still.
  • the preview image 1 is refreshed to the preview image 2.
  • the scene included in the preview image 2 is located below the scene included in the preview image 1, which is equivalent to moving the phone down to shoot.
  • the mobile phone continues to refresh the preview image 2 to the preview image 3.
  • the scene in the preview image 3 is located below the scene in the preview image 2, which is equivalent to the mobile phone continuing to move down to shoot.
  • the mobile phone detects that the user taps anywhere on the screen, and the movement ends. It can be seen from FIG. 20 that during the process of keeping the mobile phone still, the scene in the preview image gradually moves downward to achieve the shooting effect of the lens moving downward.
  • FIG. 21 shows a schematic diagram of the GUI when the mobile phone enters the shaking mode.
  • a prompt message 1313 may be displayed in the viewfinder interface, and the prompt message 1313 is used to indicate that the mobile phone is currently in the shaking mode.
  • the prompt information 1313 may be displayed in a manner that does not block the preview image as much as possible, such as semi-transparent or transparent.
  • the GUI may also include a direction control 1311. The user can input the image moving direction through the direction control 1311.
  • the mobile phone can realize the "panning" shooting effect through the image movement direction input by the user.
  • the manner in which the user inputs the moving direction of the image, and the manner in which the instruction to stop the movement is input refer to the foregoing, and will not be repeated.
  • the mobile phone determines the target area on the image, it performs the viewing angle conversion processing on the image blocks in the target area, and then refreshes the preview image with the image blocks that have undergone the viewing angle conversion.
  • the mobile phone detects the right-shaking instruction of the image, determines the m-th area on the m-th frame of the image, and performs viewing angle conversion processing on the image blocks in the m-th area.
  • the mobile phone determines the (m+1)-th area on the (m+1)-th frame image, and performs viewing angle conversion processing on the image blocks in the (m+1)-th area, and so on. Therefore, after the mobile phone detects the image right-shaking instruction, the preview image is sequentially refreshed into the image blocks in the m-th area after viewing angle conversion processing, the image blocks in the (m+1)-th area after viewing angle conversion, and so on.
  • the rotation angle θ can be determined in many ways.
  • the rotation angle θ is preset, such as a preset fixed value.
  • the rotation angle is related to the sliding operation.
  • the mobile phone stores the corresponding relationship between the sliding distance W of the sliding operation and the rotation angle θ; when the mobile phone detects the user's sliding operation on the screen, it determines the sliding distance W of the sliding operation, and determines the corresponding rotation angle θ based on the distance W and the corresponding relationship.
  • the rotation angle θ may also be related to the display value of the logo 1320. Assuming that the logo 1320 displays 2X, the rotation angle θ is 2 times a preset angle, and the value of the preset angle is not limited in this application.
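Both ways of choosing θ can be sketched as follows; the correspondence table, the preset angle, and the function names are invented for illustration, not taken from the patent:

```python
# Sketch of two ways to choose the rotation angle θ: a stored
# correspondence between sliding distance W and angle, and a multiple
# of a preset angle taken from the speed indicator value.

SLIDE_TO_ANGLE = {50: 5.0, 100: 10.0, 150: 15.0}   # W (pixels) -> degrees

def angle_from_slide(w):
    """Pick the largest table entry not exceeding the slide distance W."""
    keys = [k for k in sorted(SLIDE_TO_ANGLE) if k <= w]
    return SLIDE_TO_ANGLE[keys[-1]] if keys else 0.0

PRESET_ANGLE = 5.0  # degrees per 1X (illustrative)

def angle_from_indicator(multiplier):
    """Rotation angle as a multiple of the preset angle (e.g. 2X -> 2x)."""
    return multiplier * PRESET_ANGLE
```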
  • the size of the image collected by the ultra-wide-angle camera is limited, and when the first region is translated to the edge of the image of the ultra-wide-angle camera, the mobile phone outputs prompt information to prompt the user to move the mobile phone.
  • the mobile phone detects that the user continues to press the down arrow, and the position of the preview image on the image gradually moves downward.
  • the mobile phone may output a prompt message 1330 to prompt the user to manually move the mobile phone down, or output a prompt message to prompt the user that the image cannot continue to move.
  • when the mobile phone detects an operation on the recording control 1307, it starts to record a video. After starting the recording, the user can also instruct to enter the shift mode or the shake mode, and then input the direction of the image movement.
  • the phone translates the position of the target area on the image captured by the ultra-wide-angle camera according to this direction, and continuously refreshes the preview image based on the image blocks in the target area.
  • the mobile phone stores the preview images, and when the operation on the stop recording control is detected, the stored preview images are combined into a video, and the video is saved.
  • two videos can be correspondingly stored, one of which is a complete video, that is, each frame of image in the video is a complete image collected by an ultra-wide-angle camera.
  • the other video is a video recorded using shift mode or shaking mode.
  • Each frame of the video is an image block on the image captured by the ultra-wide-angle camera.
  • two videos are stored in the video folder in the photo album of the mobile phone, one of which is a complete video, and the other is a video recorded in shift mode or shake mode.
  • the logo 2301 can be displayed on the video recorded in the shift mode or the shake mode to facilitate the user to distinguish.
  • the mobile phone may also provide an image rotation mode. In this mode, the user does not need to manually rotate the mobile phone (for example, the mobile phone remains stationary), and the effect of image rotation and shooting can also be achieved.
  • the viewfinder interface includes indication information 1360, which is used to indicate the current image rotation mode.
  • the prompt information 1360 may not be displayed.
  • the viewfinder interface also includes a preview frame 1361 in which an image captured by the ultra-wide-angle camera (for example, a complete image captured by the ultra-wide-angle camera) is displayed.
  • a target frame 1362 is displayed in the preview frame 1361, and the image block in the target frame 1362 is the current preview image.
  • the viewfinder interface also includes an icon 1363 for indicating the rotation progress, and an icon 1364 for setting the rotation speed.
  • the mobile phone determines the target area on the m-th frame image, that is, the m-th area, and rotates the image block in the m-th area clockwise by an angle G.
  • the mobile phone determines the m+1th area on the m+1th frame image.
  • the position of the (m+1)-th area relative to the m-th area remains unchanged, and the image block in the (m+1)-th area is rotated clockwise by an angle of 2G, and so on.
  • the preview image is sequentially refreshed into the image block in the m-th area rotated clockwise by the angle G, the image block in the (m+1)-th area rotated clockwise by the angle 2G, the image block in the (m+2)-th area rotated clockwise by the angle 3G, and so on. Therefore, the preview image gradually rotates in the clockwise direction, and the rotation angle added at each refresh of the preview image is the same, that is, it rotates at a constant speed.
  • a stop rotation instruction is detected.
  • the mobile phone determines the m+3 area on the m+3 frame image.
  • the position of the m+3 area relative to the m+2 area remains unchanged.
  • the image block in the (m+3)-th area is rotated clockwise by an angle of 3G, that is, the rotation angle relative to the image block in the (m+2)-th area does not change, that is, the rotation stops.
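The constant-speed rotation sequence above (G, 2G, 3G, …, then frozen after a stop instruction) can be sketched as follows; the function name and the numbers are illustrative:

```python
# Sketch of the uniform-rotation refresh sequence: the k-th refresh shows
# the image block rotated clockwise by k*G; after a stop instruction the
# angle no longer grows.

def rotation_angles(g, num_frames, stop_after=None):
    """Cumulative clockwise rotation applied at each refresh; after
    `stop_after` refreshes the angle is frozen."""
    angles = []
    angle = 0.0
    for k in range(num_frames):
        if stop_after is None or k < stop_after:
            angle += g
        angles.append(angle)
    return angles
```

For example, with G = 5 degrees, five refreshes, and a stop instruction after the third refresh, the sequence is 5, 10, 15, 15, 15.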
  • in the above, the rotation angle difference between two adjacent frames of image is the same angle G, but the rotation angle difference between two adjacent frames may also be different.
  • the rotation angle of the image block in the m-th area is G
  • the rotation angle of the image block in the m+1-th area is 3G, that is, the preview image is rotated at an accelerated speed.
  • the rotation angle of the image block in the m-th area is G
  • the rotation angle of the image block in the m+1-th area is 0.5G, that is, the preview image is decelerated and rotated, and so on.
  • the above method of frame extraction refresh or frame insertion refresh can also be applied to this embodiment.
  • the method of frame extraction can achieve accelerated rotation
  • the method of inserting frames can achieve decelerated rotation.
  • in FIG. 24, suppose that when the mobile phone detects an operation on the "+" on the icon 1364, it indicates that the user desires to increase the rotation angle.
  • the mobile phone can implement this by frame extraction, similar to the method of extracting one frame every other frame in the embodiment shown in FIG. 19A, to realize the effect that the rotation angle of the preview image is 2G for each refresh.
  • when the mobile phone detects the operation on the "-" under the icon 1364, it indicates that the user desires to reduce the rotation angle.
  • the mobile phone can implement this by inserting frames, similar to the method of inserting one frame every other frame in the embodiment shown in FIG. 10B; in this way, the rotation angle of the preview image at each refresh is 0.5G.
  • the instruction mentioned above for indicating the direction of the image rotation can be detected by the mobile phone in the following manners.
  • when the mobile phone detects an operation on the icon 1363, it rotates clockwise or counterclockwise by default, and the user can also set this by himself.
  • an arrow pointing to the left and an arrow pointing to the right are displayed at the icon 1363.
  • the mobile phone detects that the user clicks the left arrow, it rotates counterclockwise, and when the mobile phone detects the user clicks the right arrow, it rotates clockwise.
  • after the mobile phone detects the operation of clicking the icon 1363, it starts to rotate, and automatically stops after a preset period of time (for example, 5s).
  • the phone can save the video synthesized from the preview images from the start to the end of the rotation.
  • after the mobile phone detects the operation of clicking the icon 1363, it starts to rotate and keeps rotating until it has rotated 360 degrees, and then stops rotating.
  • after the mobile phone detects the operation of clicking the icon 1363, it starts to rotate and continues to rotate until the user inputs an instruction to stop the rotation. For example, the mobile phone stops the rotation when it detects the user's click operation at any position in the preview interface, or when it detects the operation of clicking the icon 1363 again.
  • when the mobile phone detects an operation of long-pressing the icon 1363 (the duration of pressing the icon 1363 is greater than a preset duration), it starts to rotate, and when the mobile phone detects that the long-press operation bounces up, it stops rotating.
  • image rotation can be performed before starting recording (for example, before clicking the recording control for instructing to start recording), or after starting recording (for example, after clicking the recording control for instructing to start recording).
  • FIG. 26 shows a schematic diagram of the image rotating counterclockwise.
  • the target frame 1362 in the viewfinder interface can also be rotated synchronously to prompt the user about the approximate rotation angle of the current preview image.
  • the current rotation progress can also be displayed on the icon 1363.
  • the rotation direction of the target frame 1362 and the rotation direction of the preview image may be the same or different, which is not limited in the embodiment of the present application.
  • the mobile phone may also provide a push-pull mode.
  • in the push-pull mode, the mobile phone can achieve the shooting effect of "pushing the lens" or "pulling the lens".
  • "push the lens" can be understood as the camera being pushed closer to the object, that is, the object in the viewfinder interface is enlarged, which helps to focus on the details of the object;
  • "pull the lens" can be understood as the camera moving away from the object, that is, the object in the viewfinder interface is zoomed out, which helps to capture the whole picture of the object.
  • when the mobile phone is in the image rotation mode, shift mode, or shake mode, if the mobile phone detects a preset operation on the preview image (for example, a double-click operation or a long-press operation on the preview image), it enters the push-pull mode.
  • the embodiments of the present application provide multiple modes, including a normal video mode, a pan mode, a shift mode, an image rotation mode, and a push-pull mode.
  • when the mobile phone detects the user's double-click operation in the preview image, it realizes the cyclic switching between the different modes.
  • the viewfinder interface includes indication information 1370, which is used to indicate the current push-pull mode.
  • the prompt message 1370 may not be displayed.
  • the viewfinder interface also includes a preview frame 1371, in which the image collected by the ultra-wide-angle camera is displayed.
  • a target frame 1372 is displayed in the preview frame 1371, and the image block in the target frame 1372 is the current preview image.
  • the viewfinder interface also includes an icon 1373 for indicating pulling the lens (zooming out), an icon 1374 for indicating pushing the lens (zooming in), and an icon 1375 for setting the zoom speed.
  • the following embodiments take pulling the lens as an example to introduce the shooting process of pulling the lens while the mobile phone remains still.
  • the mobile phone detects the instruction for pulling the lens, and determines the target area on the m-th frame image, that is, the m-th area.
  • the area of the m-th region is larger than the area of the (m-1)-th region.
  • the mobile phone determines the (m+1)-th area on the (m+1)-th frame image, and the area of the (m+1)-th area is larger than the area of the m-th area, and so on. Therefore, after the mobile phone detects the pull-lens command, the preview image is sequentially refreshed into the image blocks in the m-th area, the image blocks in the (m+1)-th area, the image blocks in the (m+2)-th area, and so on. Therefore, the area occupied by the preview image on the image gradually increases, and the viewing angle range of the preview image becomes larger and larger, so that the shooting effect of the camera gradually moving away from the object is realized.
  • the mobile phone determines the m+3th area on the m+3th frame of image, and the area of the m+3th area relative to the m+2th area remains unchanged, that is, stops zooming in. Therefore, after the mobile phone detects the instruction to stop pulling the lens, the area occupied by the preview image no longer increases, and the camera is no longer far away from the object visually.
  • the amount of change in the area of the target area between two adjacent frames of image may be the same or different. Assuming that the area increase of the target area on two adjacent frames of image is the same, the area of the target area increases at a uniform speed, that is, the shooting effect of the camera moving away from the object at a uniform speed is realized. Assuming that the area of the (m+1)-th region is larger than the area of the m-th region by S, and the area of the (m+2)-th region is larger than the area of the (m+1)-th region by 2S, the area of the target region increases at an accelerating rate, that is, the effect of the camera accelerating away from the object is achieved in the preview image.
  • assuming that the area of the (m+1)-th region is larger than the area of the m-th region by S, and the area of the (m+2)-th region is larger than the area of the (m+1)-th region by 0.5S, the area of the target region increases at a decelerating rate, that is, the effect of the camera decelerating away from the object is achieved.
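The uniform and accelerated growth of the target area can be sketched as follows; S, the starting area, and the frame counts are illustrative values of our own:

```python
# Sketch of the pull-lens refresh: the target area grows by a constant
# amount S per frame (uniform), or by an increasing amount S, 2S, 3S, ...
# (accelerated), so the preview's viewing angle widens over time.

def areas_uniform(a0, s, n):
    """Target-area sizes with a constant growth of S per frame."""
    return [a0 + s * k for k in range(n)]

def areas_accelerated(a0, s, n):
    """Target-area sizes with growth S, then 2S, then 3S, ... per frame."""
    out, a = [a0], a0
    for k in range(1, n):
        a += s * k
        out.append(a)
    return out
```

Pushing the lens is the mirror image: the same sequences with the growth terms negated, shrinking the target area so the preview is enlarged.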
  • after the mobile phone detects the operation of clicking the icon 1373, it starts to increase the area of the target area, and automatically stops increasing after a preset period of time (for example, 5s).
  • the mobile phone can save the video synthesized from the preview images from the start to the end of the pulling of the lens.
  • after the mobile phone detects the operation of clicking the icon 1373, it starts to increase the area of the target area until it is equal to the area of the complete image collected by the ultra-wide-angle camera.
  • after the mobile phone detects the operation of clicking the icon 1373, it starts to increase the area of the target area until it detects that the user inputs an instruction to stop the increase. For example, the mobile phone stops the enlargement when it detects the user's click operation at any position in the preview interface, or when it detects the operation of clicking the icon 1373 again.
  • when the mobile phone detects an operation of long-pressing the icon 1373 (the duration of pressing the icon 1373 is greater than a preset duration), it starts to increase the area of the target area, and stops increasing when it detects that the long-press operation bounces up.
  • the mobile phone detects that the user clicks the icon 1373 for instructing to pull the lens, and starts to increase the area of the target area on the image.
  • the preview image is gradually refreshed into image blocks in a target area of increasing area, so that the shooting effect of the object gradually moving away from the camera is realized.
  • the area of the target frame 1372 in the viewfinder interface can be increased synchronously to remind the user of the approximate proportion of the current preview image to the complete image.
  • the pull lens mentioned above is taken as an example, and a similar method can be used for the push lens.
  • when the mobile phone detects the instruction to push the lens, it can determine the target area in a manner similar to that shown in Figure 28. The difference is that the area of the target area on the next frame of image is reduced relative to the area of the target area on the previous frame of image, so that the preview image is enlarged.
  • when the mobile phone detects the instruction to stop pushing the lens, it stops reducing the area of the target area, that is, the preview image stops being enlarged.
  • after the mobile phone uses the shift mode, the shake mode, or the image rotation mode to record a video, it can automatically add a soundtrack to the video, for example, using a selected sound to score the video.
  • the sound may be a sound previously selected by the user from multiple sounds provided by the camera application.
  • the sound here may include song fragments, ringtones, or other sounds, etc., which are not limited in the embodiment of the application.
  • the various embodiments of the present application can be combined arbitrarily to achieve different technical effects.
  • the preview image is gradually reduced or enlarged during the clockwise rotation; or, the position of the preview image on the image is gradually moved to the left while being gradually enlarged, etc., which is not limited in the embodiment of the present application.
  • the embodiments of the present application provide a method for displaying preview images in a video recording scene.
  • the method can be implemented in an electronic device (such as a mobile phone, a tablet computer, etc.) as shown in FIG. 2.
  • the method may include the following steps:
  • the first operation for opening the camera is detected.
  • the first operation is, for example, an operation of the user clicking on the icon 402.
  • a second operation for indicating the first video recording mode is detected.
  • the electronic device may provide multiple recording modes, for example, a normal recording mode and a first recording mode (for example, including the shift mode and the shake mode), and the electronic device can enter a certain mode under the instruction of the user. Taking (a) in Figure 14 as an example, the electronic device displays the viewfinder interface in the normal video mode. After the electronic device detects the operation of clicking the lens movement mode control 408, it displays a selection box 1309. The second operation may be an operation of clicking the "shake mode" or "shift mode" option in the selection box 1309. Assuming that the second operation is an operation of clicking the "shift mode" option in the selection box 1309, the electronic device enters the shift mode.
  • the viewfinder interface includes a first preview image, and the first preview image is the first image block located in the first area on the first image collected by the first wide-angle camera on the electronic device.
  • a preview image is a first image block in a first area on a first image collected by a first wide-angle camera (for example, an ultra-wide-angle camera).
  • the first image is, for example, the image shown in FIG. 17, and the first preview image is an image block in the first area on the first image.
  • in the normal video recording mode, the electronic device uses the second wide-angle camera; in the first recording mode, the first wide-angle camera is activated.
  • the field angle of the second wide-angle camera is smaller than the field angle of the first wide-angle camera.
  • the first wide-angle camera is, for example, an ultra-wide-angle camera
  • the second wide-angle camera is, for example, a normal wide-angle camera. That is, after the electronic device is switched from the normal video mode to the shift mode, the normal wide-angle camera is switched to the ultra-wide-angle camera, and the first preview image is the first image block in the first area on the first image collected by the ultra-wide-angle camera.
  • the above-mentioned first image may be the first frame of image collected by the ultra-wide-angle camera after the electronic device switches from the normal recording mode to the first recording mode and starts the ultra-wide-angle camera.
  • the third operation can be implemented in multiple ways.
  • the third operation may be an operation of the user clicking an arrow (for example, a right arrow) of the direction control 1311, and the direction indicated by the arrow is the image moving direction.
  • the third operation may also be an operation in which the user presses a certain arrow and the pressing time reaches a preset time period, and the direction indicated by the arrow is the image moving direction.
  • the third operation is an operation in which the user presses down the video control 1307 and drags in a certain direction (or can also be referred to as sliding), and the dragging direction is the image moving direction.
  • the sliding direction of the sliding operation is the image moving direction.
  • the third operation may also be an operation of inputting the image movement direction through a keyboard, a touch pad, or the like.
  • the second preview image is a second image block located in a second area on the second image collected by the first wide-angle camera, or the second preview image is an image block obtained after view-angle conversion processing of the second image block; the orientation of the second area relative to the first area is related to the image movement direction.
  • the above-mentioned first image may be the first frame of image collected by the ultra-wide-angle camera after the electronic device switches from the normal recording mode to the first recording mode and starts the ultra-wide-angle camera.
  • the first preview image is the first image block in the first area on the first frame of image.
  • the second preview image is the m-th image block in the m-th region (that is, the second region) on the m-th frame image (that is, the second image).
  • it is an image block obtained by converting the angle of view of the m-th image block in the m-th area on the m-th frame image.
  • the position of the m-th area (ie, the second area) relative to the first area changes.
  • the orientation of the second area relative to the first area is the same as or opposite to the image movement direction.
  • for example, the image movement direction input by the user is rightward and the second area is to the right of the first area, that is, the position of the preview image on the image captured by the ultra-wide-angle camera moves to the right; or, the user inputs an image right-shift instruction and the second area is to the left of the first area, that is, the position of the preview image on the image moves to the left.
  • the user can set the image movement direction input by the user to be the same or opposite to the position movement direction of the preview image on the image collected by the ultra-wide-angle camera.
  • the orientation of the second area relative to the first area being related to the image movement direction can be understood as follows: the distance between the second area and the first edge of the second image is a second distance, the distance between the first area and the first edge of the first image is a first distance, and the change of the second distance relative to the first distance is related to the image movement direction.
  • the first edge may be the top, bottom, left, or right edge of the image collected by the ultra-wide-angle camera. For example, if the image movement direction is left or right, the first edge can be the left edge or the right edge; if the image movement direction is up or down, the first edge can be the top edge or the bottom edge.
  • the distance change amount of the second distance relative to the first distance is A.
  • the distance change amount A has many cases, which are related to the image moving direction. For example, when the image moving direction is right, A is greater than 0, that is, the second area moves to the right relative to the first area. When the image moving direction is left, A is less than 0, that is, the second area moves to the left relative to the first area.
  • the third preview image after the second preview image may be a third image block in the third area on the third image collected by the ultra-wide-angle camera.
  • the second image is the m-th frame image
  • the third image can be the m+1-th frame image
  • the third area is the m+1-th area on the m+1-th frame image
  • the third preview image is the m+1th image block in the m+1th area on the m+1th frame image.
  • the fourth preview image after the third preview image may be the m+2th image block of the m+2th area on the m+2th frame image, and so on.
  • the second orientation change of the third area (that is, the m+1th area on the m+1th frame image) relative to the second area (that is, the mth area on the mth frame image) is the change of the third distance relative to the second distance, where the third distance is the distance between the third area and the first edge of the third image (for example, the left edge of the image), that is, H+A+B, and the second distance is the distance between the second area and the first edge of the second image (for example, the left edge of the image), that is, H+A; the second orientation change is therefore B.
  • the first orientation change of the second area (that is, the mth area on the mth frame image) relative to the first area (that is, the first area on the first frame image) is the change of the second distance relative to the first distance, where the second distance is H+A and the first distance, between the first area and the first edge of the first image, is H; the first orientation change is therefore A.
  • when the second orientation change B equals the first orientation change A, the orientation of the preview image on each image changes by the same amount; that is, the position of the preview image on the image moves at a constant speed.
  • the second azimuth change amount B may be less than or greater than the first azimuth change amount A to achieve different effects.
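The per-frame orientation changes A and B described above can be sketched as follows. This is an illustrative sketch, not code from the application; the function name and parameters (`step` for the base shift A, `accel` for an extra shift added each frame) are assumptions.

```python
def crop_offsets(num_frames, step, accel=0):
    """Horizontal offsets of the preview crop region for each frame.

    step  -- base per-frame shift A (pixels); positive moves right.
    accel -- extra shift added each frame: 0 gives constant-speed
             panning, > 0 accelerates, < 0 decelerates.
    """
    offsets, pos = [], 0
    for i in range(num_frames):
        pos += step + accel * i  # this frame's shift relative to the last
        offsets.append(pos)
    return offsets

# Constant-speed pan: the offset grows by the same amount A each frame.
print(crop_offsets(4, step=10))           # [10, 20, 30, 40]
# Accelerated pan: each frame moves farther than the last (B > A).
print(crop_offsets(4, step=10, accel=5))  # [10, 25, 45, 70]
```

A negative `accel` would shrink each step, giving the decelerating movement mentioned above.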
  • the third operation is used to indicate the direction of image movement.
  • the electronic device is in the preview mode.
  • the position of the preview image on the image captured by the ultra-wide-angle camera changes based on the image moving direction.
  • when the electronic device detects an operation on the video control 1307, it starts video recording; after recording starts, the position of the preview image on the image continues to change.
  • when a stop-recording instruction is detected, recording stops.
  • the third operation can be used to indicate the moving direction of the image, and can also be used to indicate the start of recording.
  • when the electronic device detects the third operation of the user clicking an arrow (for example, the right arrow) of the direction control 411, it displays the second preview image and starts recording.
  • when the electronic device detects an instruction to stop the image movement, it stops the movement, stops recording, and saves the recording.
  • the video includes a second preview image.
  • when the electronic device detects an image right-shift instruction, the viewfinder interface displays the mth image block in the mth area on the mth frame image and starts recording; the preview image is then refreshed in turn to the m+1th image block and the m+2th image block.
  • the recording includes the mth image block, the m+1th image block, and the m+2th image block.
  • the second image is one of M frames of images extracted from N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer less than N; or
  • the second image is one of M frames of images obtained by inserting additional frames into the N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer greater than N.
  • for the frame insertion process, refer to the description of FIG. 19C or FIG. 19D; details are not repeated here.
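The frame extraction (M < N, fast playback) and frame insertion (M > N, slow playback) described above can be sketched by computing which captured frame each output frame maps to. `sample_indices` is a hypothetical helper, and real frame insertion would interpolate new intermediate frames rather than repeat existing ones, so this is only an index-level sketch.

```python
def sample_indices(n, m):
    """Map M output frames onto N captured frames, spaced evenly.

    m < n drops frames (fast playback); m > n reuses frames, standing
    in for interpolated frames (slow playback).
    """
    return [i * n // m for i in range(m)]

print(sample_indices(8, 4))  # fast playback: [0, 2, 4, 6]
print(sample_indices(4, 8))  # slow playback: [0, 0, 1, 1, 2, 2, 3, 3]
```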
  • the embodiments of the present application provide a method for displaying preview images in a video recording scene.
  • the method can be implemented in an electronic device (such as a mobile phone, a tablet computer, etc.) as shown in FIG. 2A.
  • the method may include the following steps:
  • 3101: A first operation for opening the camera is detected.
  • for steps 3101-3102, refer to the description of steps 3001-3002 in FIG. 30; details are not repeated here.
  • a second operation for indicating the first video recording mode is detected.
  • the electronic device may provide multiple video recording modes, such as a normal video recording mode, an image rotation video recording mode, and so on.
  • the electronic device displays the viewfinder interface in the shift mode
  • the second operation may be an operation of double-clicking the viewfinder interface, or other operations that can be used to switch to the image rotation video recording mode.
  • the second operation may be an operation of clicking the "image rotation" option in the selection box 409.
  • a viewfinder interface is displayed on the display screen of the electronic device, where the viewfinder interface includes a first preview image, and the first preview image is a first image collected by a camera on the electronic device.
  • the camera is a normal camera or a first wide-angle camera.
  • the first image is a first image block in a first area on a first frame of image collected by the first wide-angle camera.
  • the electronic device detects the second operation (the operation of clicking the "image rotation” option in the selection box 1309), it enters the image rotation recording mode.
  • the viewfinder interface in the image rotation video mode can be seen in Figure 24.
  • the viewfinder interface displays the first preview image, which is the first area on the first frame image collected by the first wide-angle camera (for example, the ultra-wide-angle camera) The first image block within.
  • the electronic device uses the second wide-angle camera in the normal video recording mode.
  • the first wide-angle camera is activated.
  • the field angle of the second wide-angle camera is smaller than the field angle of the first wide-angle camera.
  • the first wide-angle camera is, for example, an ultra-wide-angle camera
  • the second wide-angle camera is, for example, a normal wide-angle camera.
  • after the electronic device switches from the normal recording mode to the image rotation recording mode, it switches from the normal wide-angle camera to the ultra-wide-angle camera, and the first preview image is the first image block in the first area on the first frame of image collected by the ultra-wide-angle camera.
  • the third operation can be implemented in multiple ways. Taking FIG. 24 as an example, the third operation may be an operation of clicking the icon 1363. For example, after clicking the icon 1363, the default rotation is clockwise or counterclockwise. Alternatively, the third operation may also be an operation of drawing a circle in the viewfinder interface, and the circle drawing direction of the circle drawing operation is the image rotation direction. Alternatively, the third operation may also be an operation of clicking an arrow indicated to the left on the left of the icon 1363, and the image rotation direction is counterclockwise; or, the third operation may also be an operation of clicking an arrow indicated to the right on the right of the icon 1363. The image rotation direction is clockwise.
  • the above-mentioned first image may be the first frame of image collected by the ultra-wide-angle camera after the electronic device switches from the normal recording mode to the first recording mode (that is, the image rotation recording mode) and activates the ultra-wide-angle camera.
  • the first preview image is the first image block in the first area on the first frame of image. Assume that during the period from the m-1th frame to the mth frame, a third operation for instructing the image to rotate clockwise is detected.
  • the electronic device determines the m-th image block in the m-th region (that is, the second region) on the m-th frame image (that is, the second image), and the second preview image is the image block after the m-th image block is rotated by the angle G.
  • the rotation direction of the second image relative to the first image is the same as or opposite to the image rotation direction indicated by the third operation, which is not limited in the embodiment of the present application.
  • the viewfinder interface displays a third preview image.
  • the third preview image is an image obtained after the third image collected by the camera is rotated in the image rotation direction, where the rotation angle of the third image relative to the second image is the same as the rotation angle of the second image relative to the first image.
  • take the camera being the first wide-angle camera (an ultra-wide-angle camera) as an example.
  • the third preview image after the second preview image may be an image block obtained after the third image block in the third area on the third image collected by the ultra-wide-angle camera is rotated by a certain angle.
  • the second image is the m-th frame image
  • the second area is the m-th area
  • the second preview image is the image block after the image block in the m-th area is rotated by the angle G.
  • the third image is the (m+1)th frame image
  • the third area is the (m+1)th area
  • the third preview image is the image block in the (m+1)th area after rotation by an angle of 2G. Therefore, the rotation angle of the third area relative to the second area is equal to the rotation angle of the second area relative to the first area; in other words, the preview image rotates at a constant speed.
  • the rotation angle of the third area relative to the second area may be different from the rotation angle of the second area relative to the first area.
  • for example, the rotation angle of the third area relative to the second area is greater than the rotation angle of the second area relative to the first area, that is, accelerated rotation; or the rotation angle of the third area relative to the second area is smaller than the rotation angle of the second area relative to the first area, that is, decelerated rotation.
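The constant, accelerated, and decelerated rotation described above can be sketched by accumulating a per-frame rotation step G. The function name, mode strings, and growth/decay factors are illustrative assumptions, not values from the application.

```python
def rotation_angles(num_frames, g, mode="constant"):
    """Cumulative rotation angle of the preview image for each frame.

    g    -- per-frame rotation step G (degrees).
    mode -- "constant" rotates by G every frame; "accelerate" grows the
            step each frame; "decelerate" shrinks it.
    """
    angles, total, step = [], 0.0, g
    for _ in range(num_frames):
        total += step
        angles.append(total)
        if mode == "accelerate":
            step *= 1.5  # arbitrary growth factor for illustration
        elif mode == "decelerate":
            step *= 0.5  # arbitrary decay factor for illustration
    return angles

# Constant rotation: G, 2G, 3G, ... as in the example above.
print(rotation_angles(3, 10))                # [10.0, 20.0, 30.0]
# Accelerated rotation: each frame turns farther than the last.
print(rotation_angles(3, 10, "accelerate"))  # [10.0, 25.0, 47.5]
```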
  • the third operation is used to indicate the direction of image rotation.
  • the electronic device is in the preview mode.
  • in preview mode, the preview image rotates.
  • when the electronic device detects an operation on the video control 1307, it starts video recording; after recording starts, the preview image continues to rotate.
  • when a stop-recording instruction is detected, recording stops.
  • the third operation can be used to indicate the direction of image rotation, and can also be used to indicate the start of video recording.
  • when the electronic device detects the third operation of the user clicking the left-pointing arrow on the left side of the icon 1363, it displays the second preview image and starts recording.
  • when the electronic device detects a stop-rotation instruction, it stops rotating, stops recording, and saves the recording.
  • the video includes a second preview image.
  • when the electronic device detects an image clockwise-rotation instruction, the viewfinder interface displays the mth image block in the mth area rotated by an angle G and starts recording; the preview image is then refreshed in turn to the m+1th image block rotated by an angle 2G and the m+2th image block rotated by an angle 3G.
  • the second image may be one of M frames of images extracted from N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer less than N; or
  • the second image is one of M frames of images obtained by inserting additional frames into the N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer greater than N.
  • for the frame insertion process, refer to the description of FIG. 19C or FIG. 19D; details are not repeated here.
  • references in this specification to "one embodiment", "some embodiments", and the like mean that one or more embodiments of the present application include a specific feature, structure, or characteristic described in connection with the embodiment. Therefore, phrases such as "in one embodiment", "in some embodiments", and "in some other embodiments" appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized.
  • the terms "include", "comprise", "have", and their variations all mean "including but not limited to", unless otherwise specifically emphasized.
  • the method provided in the embodiments of the present application is introduced from the perspective of an electronic device (for example, a mobile phone) as an execution subject.
  • the terminal device may include a hardware structure and/or a software module, and may implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a given function is executed by a hardware structure, a software module, or a combination of the two depends on the specific application and design constraints of the technical solution.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).


Abstract

A video shooting method and an electronic device. The method includes: starting a camera function; in response to a first user operation, determining a first video template, where the first video template includes a first example sample, a second example sample, and preset audio, the first example sample corresponds to a first camera-movement mode, and the second example sample corresponds to a second camera-movement mode; displaying a recording interface that includes a first camera-movement mode identifier and a second camera-movement mode identifier; in response to a second user operation, starting recording while the position of the electronic device remains stationary; and automatically generating a composite video that includes a first video clip, a second video clip, and the preset audio, where the first video clip is generated according to the first camera-movement mode and the second video clip is generated according to the second camera-movement mode. In this way, multiple video clips obtained through various camera-movement modes can be composited into a video with configured audio, so that a video of good quality can be obtained through simple operations.

Description

Video shooting method and electronic device

Cross-reference to related applications

This application claims priority to Chinese Patent Application No. 201911207579.8, filed with the China National Intellectual Property Administration on November 29, 2019 and entitled "Method for displaying preview images in a video recording scene and electronic device", which is incorporated herein by reference in its entirety; to Chinese Patent Application No. 202010079012.3, filed with the China National Intellectual Property Administration on February 3, 2020 and entitled "Method for displaying preview images in a video recording scene and electronic device", which is incorporated herein by reference in its entirety; and to Chinese Patent Application No. 202011066518.7, filed with the China National Intellectual Property Administration on September 30, 2020 and entitled "Video shooting method and electronic device", which is incorporated herein by reference in its entirety.
Technical field

This application relates to the field of image shooting technologies, and in particular to a video shooting method and an electronic device.

Background

To improve the quality of shot footage, the industry uses various camera-movement shooting techniques, commonly known as "panning" (摇镜头), "tracking" (移镜头), and so on. Taking a mobile phone as an example, a "tracking" shot may be implemented as shown in (a) of FIG. 1: while shooting an object, the phone moves in the horizontal direction. Suppose the phone moves from point A to point B; during the movement, the framing range of the camera changes, so the preview image on the phone's display changes. Tracking in the vertical direction is also possible, where moving up may be called a rise shot and moving down a drop shot. A "panning" shot may be implemented as shown in (a) of FIG. 1: while shooting an object, the phone rotates around a central axis, for example the perpendicular bisector of the phone's short side in the plane of the display. Rotating left around the central axis is panning left, as shown in (b) of FIG. 1; rotating right is panning right, as shown in (c) of FIG. 1.

That is, to achieve camera-movement techniques such as "tracking" or "panning" with a portable electronic device (such as a mobile phone), the user has to move or rotate the phone, which is not convenient.
Summary

An objective of this application is to provide a video shooting method and an electronic device, to make it more convenient to implement camera-movement shooting techniques such as tracking and panning on a mobile phone.

According to a first aspect, a video shooting method is provided, applied to an electronic device, and including: starting a camera function; in response to a first user operation, determining a first video template, where the first video template includes a first example sample, a second example sample, and preset audio, the first example sample corresponds to a first camera-movement mode, the second example sample corresponds to a second camera-movement mode, and the first camera-movement mode and the second camera-movement mode are different; displaying a recording interface that includes a first camera-movement mode identifier and a second camera-movement mode identifier; in response to a second user operation, starting recording while the position of the electronic device remains stationary; and automatically generating a composite video, where the composite video includes a first video clip, a second video clip, and the preset audio, the first video clip is generated by the electronic device according to the first camera-movement mode, and the second video clip is generated by the electronic device according to the second camera-movement mode.

Therefore, in this way, video clips obtained through various camera-movement modes can be composited into one video, with the preset audio configured for the composite video, so that a video of good quality can be obtained through simple operations. The composite video can be directly uploaded to a social network, sent to a contact, and so on, without complex video processing; the operation is simple and the user experience is good.
In a possible design, starting recording in response to the second user operation while keeping the position of the electronic device stationary includes: when the first camera-movement mode identifier is selected, generating the first video clip according to the first camera-movement mode in response to a user operation indicating shooting, where the duration of the first video clip is a first preset duration; and when the second camera-movement mode identifier is selected, generating the second video clip according to the second camera-movement mode in response to a user operation indicating shooting, where the duration of the second video clip is a second preset duration.

That is, for each camera-movement mode, the user can control starting and/or stopping recording. For example, the first video template includes multiple camera-movement modes; the recording duration for each mode may be a preset fixed duration, with shooting stopping when that duration is reached, or it may be a non-preset duration, for example the user starts and stops recording in the first camera-movement mode through a shooting control in the viewfinder interface.

In a possible design, when the first video clip is generated according to the first camera-movement mode, the recording interface further displays a countdown for generating the first video clip according to the first camera-movement mode; when the second video clip is generated according to the second camera-movement mode, the recording interface further displays a countdown for generating the second video clip according to the second camera-movement mode.

That is, the electronic device can display a recording countdown so that the user can keep track of the recording progress (for example, the remaining recording duration), providing a good interaction experience.

In a possible design, the method further includes: displaying a recording interface that includes the first camera-movement mode identifier and the second camera-movement mode identifier; in response to a third user operation, deleting the first or second camera-movement mode identifier; in response to a fourth user operation, starting recording while the position of the electronic device remains stationary; and automatically generating a composite video, where the composite video is composited from video clips generated by the electronic device according to the camera-movement modes that have not been deleted, and the composite video further includes the preset audio.

That is, the user can delete a camera-movement mode identifier. For example, if the user deletes the identifier of a camera-movement mode they dislike, the corresponding mode is removed, and the composite video is generated from the video clips of the camera-movement modes corresponding to the remaining identifiers.

In a possible design, the method further includes: displaying a recording interface that includes the first camera-movement mode identifier and the second camera-movement mode identifier; in response to a third user operation, adding a third camera-movement mode identifier to the recording interface, where the third camera-movement mode identifier indicates a third camera-movement mode; in response to a fourth user operation, starting recording while the position of the electronic device remains stationary; and automatically generating a composite video that includes the first video clip, the second video clip, a third video clip, and the preset audio, where the third video clip is generated by the electronic device according to the third camera-movement mode.

That is, if the user likes a certain camera-movement mode, the identifier of that mode can be added, thereby adding the corresponding mode; the composite video is then generated from the video clips of the original camera-movement modes and the added camera-movement mode.

In a possible design, the method further includes: displaying a recording interface that includes the first camera-movement mode identifier and the second camera-movement mode identifier; in response to a third user operation, adjusting the display order of the first and second camera-movement mode identifiers to a first order; in response to a fourth user operation, starting recording while the position of the electronic device remains stationary; and automatically generating a composite video in which the playback order of the first video clip and the second video clip is the first order.

That is, if the user adjusts the display order of the camera-movement mode identifiers, the compositing order of the video clips is adjusted, and so is the playback order of the two video clips in the composite video.
In a possible design, the recording interface displays the first example sample and/or the second example sample.

Displaying the first example sample in the recording interface lets the user conveniently preview the shooting effect of the first camera-movement mode, and the second example sample that of the second camera-movement mode, providing a good interaction experience.

In a possible design, before the composite video is automatically generated, the method further includes: displaying a presentation interface that includes the first video clip and the second video clip; and automatically generating the composite video includes: compositing the video in response to a video compositing instruction input by the user.

That is, before compositing, the user can view the first and second video clips separately; if satisfied with both, the video is composited upon the user's trigger operation.

In a possible design, the method further includes: in response to the fourth operation, deleting the first video clip or the second video clip; or adding a local third video clip to the composite video; or adjusting the playback order of the first video clip or the second video clip in the composite video.

That is, the user can delete a video clip, for example an unsatisfactory one, or add a favorite local video clip, or adjust the playback order of the two video clips in the composite video. In short, the user can flexibly configure the composite video, providing a good interaction experience.

In a possible design, the first video template is a default template or a user-defined template.

That is, the user can not only use the electronic device's default template but can also define a custom template, for example one reflecting personal preferences, providing a good interaction experience.

In a possible design, the method further includes: automatically storing the first video clip, the second video clip, and the composite video. That is, the electronic device can automatically store each video clip as well as the video composited from the clips, so the user can locally view each individual clip as well as the composite video, providing a good user experience; for example, the user can upload an individual video clip or the composite video to a social network.

In a possible design, the method further includes: in response to a specific operation, replacing the audio in the composite video, or adding text and/or pictures to the composite video. That is, the user can change the audio of the composite video, or add text, pictures, and the like to it, providing a good interaction experience.
According to a second aspect, an electronic device is further provided, including:

one or more processors; and

one or more memories,

where the one or more memories store one or more computer programs, the one or more computer programs include instructions, and the instructions, when executed by the one or more processors, cause the electronic device to perform the following steps:

starting a camera function;

in response to a first user operation, determining a first video template, where the first video template includes a first example sample, a second example sample, and preset audio, the first example sample corresponds to a first camera-movement mode, the second example sample corresponds to a second camera-movement mode, and the first and second camera-movement modes are different;

displaying a recording interface that includes a first camera-movement mode identifier and a second camera-movement mode identifier;

in response to a second user operation, starting recording while the position of the electronic device remains stationary; and

automatically generating a composite video that includes a first video clip, a second video clip, and the preset audio, where the first video clip is generated by the electronic device according to the first camera-movement mode, and the second video clip is generated by the electronic device according to the second camera-movement mode.
In a possible design, the instructions, when executed by the one or more processors, cause the electronic device to specifically perform the following steps:

when the first camera-movement mode identifier is selected, generating the first video clip according to the first camera-movement mode in response to a user operation indicating shooting, where the duration of the first video clip is a first preset duration; and

when the second camera-movement mode identifier is selected, generating the second video clip according to the second camera-movement mode in response to a user operation indicating shooting, where the duration of the second video clip is a second preset duration.

In a possible design, the instructions, when executed by the one or more processors, cause the electronic device to specifically perform the following steps:

when the first video clip is generated according to the first camera-movement mode, further displaying, in the recording interface, a countdown for generating the first video clip according to the first camera-movement mode; when the second video clip is generated according to the second camera-movement mode, further displaying, in the recording interface, a countdown for generating the second video clip according to the second camera-movement mode.

In a possible design, the instructions, when executed by the one or more processors, further cause the electronic device to perform the following steps:

displaying a recording interface that includes the first camera-movement mode identifier and the second camera-movement mode identifier;

in response to a third user operation, deleting the first or second camera-movement mode identifier;

in response to a fourth user operation, starting recording while the position of the electronic device remains stationary; and

automatically generating a composite video that includes the video clips generated by the electronic device according to the camera-movement modes that have not been deleted, and the preset audio.
In a possible design, the instructions, when executed by the one or more processors, further cause the electronic device to perform the following steps:

displaying a recording interface that includes the first camera-movement mode identifier and the second camera-movement mode identifier;

in response to a third user operation, adding a third camera-movement mode identifier to the recording interface, where the third camera-movement mode identifier indicates a third camera-movement mode;

in response to a fourth user operation, starting recording while the position of the electronic device remains stationary; and

automatically generating a composite video that includes the first video clip, the second video clip, a third video clip, and the preset audio, where the third video clip is generated by the electronic device according to the third camera-movement mode.

In a possible design, the instructions, when executed by the one or more processors, further cause the electronic device to perform the following steps:

displaying a recording interface that includes the first camera-movement mode identifier and the second camera-movement mode identifier;

in response to a third user operation, adjusting the display order of the first and second camera-movement mode identifiers to a first order;

in response to a fourth user operation, starting recording while the position of the electronic device remains stationary; and

automatically generating a composite video in which the playback order of the first video clip and the second video clip is the first order.
In a possible design, the recording interface displays the first example sample and/or the second example sample.

In a possible design, the instructions, when executed by the one or more processors, further cause the electronic device to perform the following steps:

displaying a presentation interface that includes the first video clip and the second video clip;

where automatically generating the composite video includes: compositing the video in response to a video compositing instruction input by the user.

In a possible design, the instructions, when executed by the one or more processors, further cause the electronic device to perform the following step:

in response to the fourth operation, deleting the first video clip or the second video clip; or adding a local third video clip to the composite video; or adjusting the playback order of the first video clip or the second video clip in the composite video.

In a possible design, the first video template is a default template or a user-defined template.

In a possible design, the instructions, when executed by the one or more processors, further cause the electronic device to perform the following step:

automatically storing the first video clip, the second video clip, and the composite video.

In a possible design, the instructions, when executed by the one or more processors, further cause the electronic device to perform the following step:

in response to a specific operation, replacing the audio in the composite video, or adding text and/or pictures to the composite video.
According to a third aspect, an embodiment of this application further provides an electronic device, including modules/units that perform the method of the first aspect or of any possible design of the first aspect; these modules/units may be implemented by hardware, or by hardware executing corresponding software.

According to a fourth aspect, an embodiment of this application further provides a chip, where the chip is coupled to a memory in an electronic device and is configured to invoke a computer program stored in the memory and execute the technical solutions of the first aspect and of any possible design of the first aspect; in the embodiments of this application, "coupling" means that two components are combined with each other directly or indirectly.

According to a fifth aspect, a computer-readable storage medium is further provided, where the computer-readable storage medium includes a computer program that, when run on an electronic device, causes the electronic device to perform the method provided in the first aspect.

According to a sixth aspect, a program product is further provided, including instructions that, when run on a computer, cause the computer to perform the method provided in the first aspect.

According to a seventh aspect, a graphical user interface on an electronic device is further provided, where the electronic device has a display, one or more memories, and one or more processors configured to execute one or more computer programs stored in the one or more memories, and the graphical user interface includes the graphical user interface displayed when the electronic device performs the method provided in the first aspect.

For the beneficial effects of the second to seventh aspects, refer to the beneficial effects of the first aspect; details are not repeated here.
According to an eighth aspect, a method for displaying preview images in a video recording scene is provided, applied to an electronic device, for example a mobile phone or a tablet computer. The electronic device detects a first operation for opening the camera; starts the camera in response to the first operation; detects a second operation for indicating a first video recording mode; in response to the second operation, displays a viewfinder interface on the display of the electronic device, where the viewfinder interface includes a first preview image, and the first preview image is a first image block located in a first area of a first image collected by a first wide-angle camera on the electronic device; with the position of the electronic device kept stationary, detects a third operation indicating an image movement direction; and in response to the third operation, displays a second preview image in the viewfinder interface, where the second preview image is a second image block located in a second area of a second image collected by the first wide-angle camera, or the second preview image is an image block obtained after view-angle conversion processing of the second image block; the orientation of the second area relative to the first area is related to the image movement direction.

For example, while the user is recording with a mobile phone, the preview image includes scene A directly in front of the user but not scene B to the user's front right. With the phone held still, the user inputs an image right-shift instruction (for example, through the touchscreen), and the preview image is updated to a new preview image that includes scene B (and, for example, no longer includes scene A). Therefore, shooting techniques such as "tracking" or "panning" can be achieved while the electronic device remains stationary, giving a good user experience.

It should be understood that the orientation of the second area relative to the first area being related to the image movement direction includes: the orientation of the second area relative to the first area is the same as or opposite to the image movement direction, which is not limited in the embodiments of this application. For example, the user may set whether the orientation of the second area relative to the first area is the same as or opposite to the image movement direction.

In a possible design, the orientation of the second area relative to the first area being related to the image movement direction includes: the distance between the second area and a first edge of the second image is a second distance, the distance between the first area and a first edge of the first image is a first distance, and the change of the second distance relative to the first distance is related to the image movement direction.

For example, suppose the first area is at distance H from the left edge of the first image and the second area is at distance H+A from the left edge of the second image; when A is positive, the second area is oriented to the right of the first area, and when A is negative, it is oriented to the left.
As an example, the electronic device determines a third area on a third image, where the second orientation change of the third area relative to the second area equals the first orientation change of the second area relative to the first area; and displays a third preview image in the viewfinder interface, where the third preview image is a third image block located in the third area of the third image, or an image block obtained after view-angle conversion processing of the third image block. The second orientation change is the change of a third distance relative to the second distance, and the first orientation change is the change of the second distance relative to the first distance, where the third distance is the distance between the third area and the first edge of the third image, the second distance is the distance between the second area and the first edge of the second image, and the first distance is the distance between the first area and the first edge of the first image.

That is, the position of each preview image on the image collected by the first wide-angle camera changes by the same amount; visually, the preview image in the viewfinder interface moves at a constant speed, giving a good user experience.

As another example, the second orientation change of the third area relative to the second area may be greater than the first orientation change of the second area relative to the first area; visually, the preview image in the viewfinder interface then moves with acceleration, giving a sense of rhythm and visual impact.

Of course, the second orientation change of the third area relative to the second area may also be smaller than the first orientation change of the second area relative to the first area; visually, the preview image in the viewfinder interface then decelerates, making recording more flexible and interesting.

In a possible design, before the electronic device enters the first video recording mode, the viewfinder interface displays a fourth preview image, which is an image collected by a second wide-angle camera whose field of view is smaller than that of the first wide-angle camera; the first preview image is all or part of the image blocks within the overlapping field of view of the first and second wide-angle cameras. That is, when switching from another mode to the first recording mode, the first wide-angle camera with the larger field of view is started, and the viewfinder interface displays the first image block in the first area of the first image it collects. The image collected by the camera with the larger field of view covers a larger range and contains more detail, and the position of the first area can move within a larger range on the image, so tracking or panning shots can be achieved over a larger movement range, giving a good user experience.

It should be understood that the magnification of the image collected by the second wide-angle camera is less than or equal to the magnification of the image collected by the first wide-angle camera.
The third operation includes: a sliding operation on the first preview image; or

an operation on a control in the viewfinder interface used to indicate the image movement direction; or

an operation of pressing and dragging a specific control in the viewfinder interface.

It should be understood that the above are merely examples of the third operation rather than limitations; other operations for inputting the image movement direction are also feasible, which is not limited in the embodiments of this application.
It can be understood that, when an image-stop-moving instruction is detected, the viewfinder interface displays a fifth preview image, which is a fifth image block located in a fifth area of a fifth image collected by the first wide-angle camera, or an image block obtained after view-angle conversion processing of the fifth image block; the orientation of the fifth area relative to the second area does not change. That is, once the image-stop-moving instruction is detected, the orientation of the preview image on the image no longer changes, and visually the position of the preview image in the viewfinder interface stops changing.

It can be understood that, when the electronic device detects the image-stop-moving instruction, it generates and saves a video that includes the second preview image. That is, when the instruction is detected, the video is automatically generated and saved, which is convenient and improves the user experience.

The detecting of the image-stop-moving instruction includes:

when the third operation is a sliding operation on the first preview image, generating the image-stop-moving instruction when the sliding operation is detected to lift; or

when the third operation is a tap on a control in the viewfinder interface used to indicate the image movement direction, generating the image-stop-moving instruction when another tap anywhere in the viewfinder interface is detected; or

when the third operation is a long press on a control in the viewfinder interface used to indicate the image movement direction, generating the image-stop-moving instruction when the long press is detected to lift; or

when the third operation is pressing and dragging a specific control in the viewfinder interface, generating the image-stop-moving instruction when the drag operation is detected to lift.

It should be noted that the above image-stop-moving instructions are merely examples rather than limitations; other ways of generating the image-stop-moving instruction are also feasible, which is not limited in the embodiments of this application.
For example, the second image is one frame of M frames of images extracted from N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer less than N; frame extraction achieves a fast-playback effect, so the preview image can play quickly. Alternatively, the second image is one frame of M frames of images obtained by inserting additional frames into the N frames of images collected by the first wide-angle camera, where N is an integer greater than or equal to 1 and M is an integer greater than N; frame insertion achieves a slow-playback effect, so the preview image can play slowly.
The image block obtained after view-angle conversion processing of the second image block satisfies the following formulas:

x' = x*cos(θ) - y*sin(θ)

y' = x*sin(θ) + y*cos(θ)

where (x', y') is a pixel of the image block obtained after view-angle conversion, (x, y) is a pixel of the second image block, and θ is a preset rotation angle. After the electronic device performs view-angle conversion on the image block using the above formulas, the preview image better matches the preview that would appear if the phone were actually panned. Therefore, with the electronic device held stationary, the user can achieve tracking or panning shooting, giving a good user experience.
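The view-angle conversion formulas above can be applied to a single pixel coordinate as follows; a minimal sketch assuming θ is given in degrees, with `rotate_pixel` as an illustrative helper name.

```python
import math

def rotate_pixel(x, y, theta_deg):
    """Apply the view-angle conversion to one pixel coordinate:
        x' = x*cos(theta) - y*sin(theta)
        y' = x*sin(theta) + y*cos(theta)
    theta_deg is the preset rotation angle in degrees.
    """
    t = math.radians(theta_deg)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# Rotating the point (1, 0) by 90 degrees moves it to (0, 1).
x2, y2 = rotate_pixel(1.0, 0.0, 90)
print(round(x2, 6), round(y2, 6))  # 0.0 1.0
```

In practice the transform would be applied to every pixel of the image block (or, equivalently, expressed as a 2x2 rotation matrix and applied by an image-warping routine).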
According to a ninth aspect, a method for displaying preview images in a video recording scene is further provided, applied to an electronic device. The electronic device detects a first operation for opening the camera; starts the camera in response to the first operation; detects a second operation for indicating a first video recording mode; in response to the second operation, displays a viewfinder interface on the display of the electronic device, where the viewfinder interface includes a first preview image that is a first image collected by a camera on the electronic device; with the position of the electronic device kept stationary, detects a third operation indicating an image rotation direction; and in response to the third operation, displays a second preview image in the viewfinder interface, where the second preview image is an image obtained after a second image collected by the camera is rotated in the image rotation direction. That is, while the user records with the electronic device, the preview image in the viewfinder interface can rotate, achieving a rotating-shot effect and a good user experience.

In a possible design, the viewfinder interface displays a third preview image, which is an image obtained after a third image collected by the camera is rotated in the image rotation direction, where the rotation angle of the third image relative to the second image is the same as the rotation angle of the second image relative to the first image.

That is, while the user records with the electronic device, the preview image in the viewfinder interface rotates by the same angle each time, i.e., rotates at a constant speed, achieving a rotating-shot effect.

For example, the camera is a first wide-angle camera; the first image is a first image block in a first area of a fourth image collected by the first wide-angle camera; the second image is a second image block in a second area of a fifth image collected by the first wide-angle camera; and the position of the first area on the fourth image is the same as or different from the position of the second area on the fifth image.

The third operation includes: a circle-drawing operation on the first preview image; or

an operation on a control in the viewfinder interface used to indicate the image rotation direction.

It should be understood that the above are merely examples of the third operation rather than limitations; other operations for inputting the image rotation direction are also feasible, which is not limited in the embodiments of this application.
In a possible design, when the electronic device detects an image-stop-rotating instruction, it generates and saves a video that includes the second preview image. That is, when the instruction is detected, the video is automatically generated and saved, which is convenient and improves the user experience.

The detecting of the image-stop-rotating instruction includes:

when the third operation is a circle-drawing operation on the first preview image, generating the image-stop-rotating instruction when the circle-drawing operation is detected to lift; or

when the third operation is a tap on a control in the viewfinder interface used to indicate the image rotation direction, generating the image-stop-rotating instruction when another tap anywhere in the viewfinder interface is detected; or

when the third operation is a long press on a control in the viewfinder interface used to indicate the image rotation direction, generating the image-stop-rotating instruction when the long press is detected to lift.

It should be noted that the above image-stop-rotating instructions are merely examples rather than limitations; other ways of generating the image-stop-rotating instruction are also feasible, which is not limited in the embodiments of this application.

The second image is one frame of M frames of images extracted from N frames of images collected by the first camera, where N is an integer greater than or equal to 1 and M is an integer less than N; frame extraction achieves a fast-playback effect, so the preview image can play quickly. Alternatively, the second image is one frame of M frames of images obtained by inserting additional frames into the N frames of images collected by the first camera, where N is an integer greater than or equal to 1 and M is an integer greater than N; frame insertion achieves a slow-playback effect, so the preview image can play slowly.
According to a tenth aspect, an electronic device is further provided, including: one or more processors; and one or more memories, where the one or more memories store one or more computer programs, the one or more computer programs include instructions, and the instructions, when executed by the one or more processors, cause the electronic device to perform the following steps:

detecting a first operation for opening the camera;

starting the camera in response to the first operation;

detecting a second operation for indicating a first video recording mode;

in response to the second operation, displaying a viewfinder interface on the display of the electronic device, where the viewfinder interface includes a first preview image, and the first preview image is a first image block located in a first area of a first image collected by a first wide-angle camera on the electronic device;

with the position of the electronic device kept stationary, detecting a third operation indicating an image movement direction; and

in response to the third operation, displaying a second preview image in the viewfinder interface, where the second preview image is a second image block located in a second area of a second image collected by the first wide-angle camera, or the second preview image is an image block obtained after view-angle conversion processing of the second image block; the orientation of the second area relative to the first area is related to the image movement direction.

In a possible design, the orientation of the second area relative to the first area being related to the image movement direction includes: the orientation of the second area relative to the first area is the same as or opposite to the image movement direction.

In a possible design, the orientation of the second area relative to the first area being related to the image movement direction includes: the distance between the second area and a first edge of the second image is a second distance, the distance between the first area and a first edge of the first image is a first distance, and the change of the second distance relative to the first distance is related to the image movement direction.
In a possible design, the instructions, when executed by the one or more processors, cause the electronic device to perform the following steps:

displaying a third preview image in the viewfinder interface, where the third preview image is a third image block in a third area of a third image, or an image block obtained after view-angle conversion processing of the third image block, and the second orientation change of the third area relative to the second area equals the first orientation change of the second area relative to the first area;

where the second orientation change is the change of a third distance relative to the second distance, the first orientation change is the change of the second distance relative to the first distance, the third distance is the distance between the third area and the first edge of the third image, the second distance is the distance between the second area and the first edge of the second image, and the first distance is the distance between the first area and the first edge of the first image.

In a possible design, the instructions, when executed by the one or more processors, further cause the electronic device to perform the following step: before the second operation for indicating the first video recording mode is detected, displaying a fourth preview image in the viewfinder interface, where the fourth preview image is an image collected by a second wide-angle camera whose field of view is smaller than that of the first wide-angle camera, and the first preview image is all or part of the image blocks within the overlapping field of view of the first and second wide-angle cameras.

The magnification of the image collected by the second wide-angle camera is less than or equal to the magnification of the image collected by the first wide-angle camera.
上述所述第三操作,包括:
在所述第一预览图像上的滑动操作;或,
针对所述取景界面内用于指示图像旋转方向的控件的操作,或,
按压所述取景界面内的特定控件并拖动的操作。
在一种可能的设计中,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:检测到图像停止移动指令时,所述取景界面显示第五预览图像,所述第五预览图像为所述第一广角摄像头采集的第五图像上位于第五区域内的第五图像块,或者,所述第五预览图像为对所述第五图像块经过视角转换处理之后得到的图像块;所述第五区域相对于所述第二区域的方位不变。
在一种可能的设计中,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:检测到图像停止移动指令时,生成并保存视频,所述视频包括所述第一预览图像和所述第二预览图像。
在一种可能的设计中,当所述指令被所述一个或多个处理器执行时,使得所述电子设备具体执行如下步骤:
所述第三操作为在所述第一预览图像上的滑动操作时,当检测到所述滑动操作弹起时,产生所述图像停止移动指令;或者,
所述第三操作为针对所述取景界面内用于指示图像移动方向的控件的点击操作时,当检测到在所述取景界面内任意位置的再次点击操作时,产生所述图像停止移动指令,或者,
所述第三操作为针对所述取景界面内用于指示图像移动方向的控件的长按操作时,当检测到所述长按操作弹起时,产生所述图像停止移动指令,或者,
所述第三操作为针对所述取景界面内的特定控件的按压并拖动的操作时,当检测到所述拖动操作弹起时,产生所述图像停止移动指令。
其中,所述第二图像为从所述第一广角摄像头采集的N帧图像中抽帧出的M帧图像中的一帧图像,N为大于或等于1的整数,M为小于N的整数;或者,所述第二图像为在所述第一广角摄像头采集的N帧图像插入多帧图像后得到的M帧图像中的一帧图像,N为大于或等于1的整数,M为大于N的整数。
在一种可能的设计中,所述第二图像块经过视角转换处理之后得到的图像块,满足如下公式:
x’=x*cos(θ)-sin(θ)*y
y’=x*sin(θ)+cos(θ)*y
其中,(x’,y’)是经过视角转换处理之后得到的图像块上的像素点,(x,y)是第二图像块上的像素点,θ为旋转角度,所述旋转角度是预设的。
第十一方面,还提供一种电子设备,包括:一个或多个处理器;一个或多个存储器;其中,所述一个或多个存储器存储有一个或多个计算机程序,所述一个或多个计算机程序包括指令,当所述指令被所述一个或多个处理器执行时,使得所述电子设备执行如下步骤:
检测到用于打开相机的第一操作;响应于所述第一操作,启动相机;检测到用于指示第一录像模式的第二操作;响应于所述第二操作,在所述电子设备的显示屏上显示取景界面,所述取景界面中包括第一预览图像,所述第一预览图像为所述电子设备上的摄像头采集的第一图像;保持所述电子设备的位置固定不动,检测到指示图像旋转方向的第三操作;响应于所述第三操作,在所述取景界面中显示第二预览图像,所述第二预览图像为所述摄像头采集的第二图像按照所述图像旋转方向旋转之后得到的图像。
在一种可能的设计中,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:
所述取景界面显示第三预览图像,所述第三预览图像为所述摄像头采集的第三图像按照所述图像旋转方向旋转之后得到的图像,所述第三图像相对于所述第二图像的旋转角度与所述第二图像相对于所述第一图像的旋转角度相同。
其中,所述摄像头为第一广角摄像头,所述第一图像为所述第一广角摄像头采集的第四图像上第一区域内的第一图像块;所述第二图像为所述第一广角摄像头采集的第五图像上第二区域内的第二图像块,所述第一区域在所述第四图像上的位置和所述第二区域在所述第五图像上的位置相同或不同。
上述所述第三操作,包括:
在所述第一预览图像上的画圈操作;或,
针对所述取景界面内用于指示图像旋转方向的控件的操作。
在一种可能的设计中,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:检测到图像停止旋转指令时,生成并保存视频,所述视频包括所述第一预览图像和所述第二预览图像。
在一种可能的设计中,当所述指令被所述一个或多个处理器执行时,使得所述电子设备具体执行如下步骤:
所述第三操作为在所述第一预览图像上的画圈操作时,当检测到所述画圈操作弹起时,产生所述图像停止旋转指令;或者,
所述第三操作为针对所述取景界面内用于指示图像旋转方向的控件的点击操作时,当检测到在所述取景界面内任意位置的再次点击操作时,产生所述图像停止旋转指令,或者,
所述第三操作为针对所述取景界面内用于指示图像旋转方向的控件的长按操作时,当检测到所述长按操作弹起时,产生所述图像停止旋转指令。
所述第二图像为从所述第一摄像头采集的N帧图像中抽帧出的M帧图像中的一帧图像,N为大于或等于1的整数,M为小于N的整数;或者,所述第二图像为在所述第一摄像头采集的N帧图像插入多帧图像后得到的M帧图像中的一帧图像,N为大于或等于1的整数,M为大于N的整数。
第十二方面,还提供一种电子设备,该电子设备包括执行第八方面或者第八方面的任意一种可能的设计的方法的模块/单元;这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
第十三方面,还提供一种电子设备,该电子设备包括执行第九方面或者第九方面的任意一种可能的设计的方法的模块/单元;这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
第十四方面,还提供一种芯片,所述芯片与电子设备中的存储器耦合,执行本申请实施例第八方面及其第八方面任一可能设计的技术方案;本申请实施例中“耦合”是指两个部件彼此直接或间接地结合。
第十五方面,还提供一种芯片,所述芯片与电子设备中的存储器耦合,执行本申请实施例第九方面及其第九方面任一可能设计的技术方案;本申请实施例中“耦合”是指两个部件彼此直接或间接地结合。
第十六方面,还提供一种计算机可读存储介质,所述计算机可读存储介质包括计算机程序,当计算机程序在电子设备上运行时,使得所述电子设备执行第八方面及其第八方面任一可能设计的技术方案。
第十七方面,还提供一种计算机可读存储介质,所述计算机可读存储介质包括计算机程序,当计算机程序在电子设备上运行时,使得所述电子设备执行第九方面及其第九方面任一可能设计的技术方案。
第十八方面,还提供一种程序产品,包括指令,当所述指令在计算机上运行时,使得所述计算机执行第八方面及其第八方面任一可能设计的技术方案。
第十九方面,还提供一种程序产品,包括指令,当所述指令在计算机上运行时,使得所述计算机执行第九方面及其第九方面任一可能设计的技术方案。
第二十方面,还提供一种电子设备上的图形用户界面,所述电子设备具有一个或多个存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述一个或多个存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行第八方面及其第八方面任一可能设计的技术方案时显示的图形用户界面。
第二十一方面,还提供一种电子设备上的图形用户界面,所述电子设备具有一个或多个存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述一个或多个存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行第九方面及其第九方面任一可能设计的技术方案时显示的图形用户界面。
以上第九方面到第二十一方面的有益效果,请参见第八方面的有益效果,不重复赘述。
附图说明
图1为现有技术中通过手机实现摇镜头、移镜头的示意图;
图2A为本申请实施例提供的电子设备的硬件结构的示意图;
图2B为本申请实施例提供的电子设备的软件结构的示意图;
图3A为本申请实施例提供的各种运镜模式的实现原理的示意图;
图3B为本申请实施例提供的移模式的一种示例的示意图;
图4为本申请实施例提供的手机GUI的一种示例的示意图;
图5A至图5B为本申请实施例提供的手机上微电影图标的示意图;
图6为本申请实施例提供的微电影模式的主页的示意图;
图7A至图7D为本申请实施例提供的旅行模板的录制界面一种示例的示意图;
图8A至图8D为本申请实施例提供的旅行模板的录制界面另一种示例的示意图;
图9A至图9F为本申请实施例提供的效果展示界面的示意图;
图10为本申请实施例提供的图库界面的示意图;
图11A至图11B为本申请实施例提供的微电影模式的主页的另一种示例的示意图;
图11C为本申请实施例提供的选择运镜模式组合成拍摄模板的界面的示意图;
图12为本申请实施例提供的视频拍摄方法的流程示意图;
图13-图17为本申请一实施例提供的电子设备的图形用户界面的示意图;
图18为本申请一实施例提供的超广角摄像头采集的图像上目标区域移动的示意图;
图19A至图19D为本申请一实施例提供的超广角摄像头采集的图像上目标区域移动的示意图;
图20-图24为本申请一实施例提供的电子设备的图形用户界面的示意图;
图25为本申请一实施例提供的超广角摄像头采集的图像上目标区域旋转的示意图;
图26-图27为本申请一实施例提供的电子设备的图形用户界面的示意图;
图28为本申请一实施例提供的超广角摄像头采集的图像上目标区域放大的示意图;
图29为本申请一实施例提供的电子设备的图形用户界面的示意图;
图30-图31为本申请一实施例提供的录像场景下预览图像显示方法的流程示意图。
具体实施方式
以下,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。
本申请实施例涉及的预览图像,是指电子设备的取景界面中显示的图像。比如,电子设备是手机时,手机启动相机应用,打开摄像头,显示取景界面,该取景界面中显示预览图像。继续以手机为例,手机启动视频通话功能(例如微信中的视频通信功能)时,打开摄像头,显示取景界面,该取景界面中显示预览图像。
本申请实施例涉及的视场角,是摄像头的一个重要的性能参数。“视场角”也可以称为“视角”、“视场范围”、“视野范围”等词汇,本文对于该名称不作限制。视场角用于指示摄像头所能拍摄到的最大的角度范围。若物体处于这个角度范围内,该物体便会被摄像头捕捉到,进而呈现在预览图像中。若物体处于这个角度范围之外,该物体便不会被摄像头捕捉到,即不会呈现在预览图像中。
通常,摄像头的视场角越大,则拍摄范围就越大,焦距就越短;而摄像头的视场角越小,则拍摄范围就越小,焦距就越长。因此,摄像头因视场角不同可分为普通摄像头、广角摄像头、超广角摄像头等。例如,普通摄像头的焦距可以是45至40毫米,视角可以是40度至60度;广角摄像头的焦距可以是38至24毫米,视角可以是60度至84度;超广角摄像头的焦距可以是20至13毫米,视角可以是94至118度。
本申请实施例提供的视频拍摄方法可以应用于电子设备,所述电子设备中包括摄像头,所述摄像头最好是广角摄像头或超广角摄像头;当然,也可以是普通摄像头。至于摄像头的数量,本申请不作限定,可以是一个,也可以是多个。如果是多个,多个中最好可以包括至少一个广角摄像头或超广角摄像头。
所述电子设备例如可以是手机、平板电脑、可穿戴设备(例如,手表、手环、头盔、耳机、项链等)、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等电子设备上,本申请实施例对电子设备的具体类型不作任何限制。
示例性的,图2A示出了电子设备100的结构示意图。如图2A所示,电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处 理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
显示屏194用于显示应用的显示界面,例如相机应用的取景界面等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,以及至少一个应用程序(例如爱奇艺应用,微信应用等)的软件代码等。存储数据区可存储电子设备100使用过程中所产生的数据(例如拍摄的图像、录制的视频等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将图片,视频等文件保存在外部存储卡中。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
其中,传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。
陀螺仪传感器180B可以用于拍摄防抖。气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备100姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现与电子设备100的接触和分离。
可以理解的是,图2A所示的部件并不构成对电子设备100的具体限定,手机还可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。此外,图2A中的部件之间的组合/连接关系也是可以调整修改的。
在本申请实施例中,电子设备100中摄像头193可以包括1个摄像头或多个摄像头,如果包括多个摄像头比如包括摄像头1和摄像头2,其中,摄像头1的视场角小于摄像头2的视场角。例如,摄像头1是长焦摄像头,摄像头2是广角摄像头(可以是普通广角摄像头或者超广角摄像头);或者,摄像头1是普通广角摄像头,摄像头2是超广角摄像头,等等不同组合。在一些实施例中,摄像头1和摄像头2可以均是后置摄像头或均是前置摄像头。应理解,电子设备100还可以包括更多的摄像头,例如长焦摄像头。
电子设备100可以提供多种录制模式,例如普通录制模式、移模式、摇模式等。在普通录像模式下,电子设备100启动视场角较小的摄像头1,取景界面内显示摄像头1采集的图像。当电子设备100从普通录像模式切换到移模式时,启动视场角较大的摄像头2,取景界面中显示摄像头2采集的一帧图像上的一个图像块。在电子设备100保持不动的情况下,若处理器110(例如,GPU或NPU)响应于用户输入的图像移动方向(例如,通过在屏幕上的滑动操作输入图像移动方向),根据所述图像移动方向,确定摄像头2采集的下一帧图像上的另一个图像块,然后将所述另一个图像块显示在取景界面中。其中,所述另一个图像块相对于前一个图像块之间的方位与用户输入的图像移动方向相关。也就是说,用户通过输入的图像移动方向实现如“移镜头”的拍摄方式。因此,本申请实施例中,手机录像过程中,不需要用户移动手机的位置也可以实现“移镜头”的拍摄方式,操作便捷,用户体验较高。
图2B示出了本申请一实施例提供的电子设备的软件结构框图。如图2B所示,电子设备的软件结构可以是分层架构,例如可以将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层(framework,FWK),安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。如图2B所示,应用程序层可以包括相机、设置、皮肤模块、用户界面(user interface,UI)、三方应用程序等。其中,三方应用程序可以包括微信、QQ、图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层可以包括一些预先定义的函数。如图2B所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(media libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
此外,系统库还可以包括图像处理库,用于对图像进行处理,以实现摇、移、升、降的拍摄效果。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
硬件层可以包括各类传感器,例如本申请实施例中涉及的加速度传感器、陀螺仪传感器、触摸传感器等。
下面结合本申请实施例的录像场景下预览图像的显示方法,示例性说明电子设备的软件以及硬件的工作流程。
触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。以该触摸操作是触摸单击操作为例,假设该单击操作所对应的控件为相机应用图标的控件,相机应用启动。假设相机应用当前处于移模式,则调用内核层中的摄像头驱动,以驱动视场角较大的摄像头(例如超广角摄像头)捕获图像。超广角摄像头将采集的图像发送给系统库中的图像处理库中。
图像处理库对超广角摄像头采集的图像进行处理,例如确定图像上的一个图像块。显示屏在相机应用的取景界面中显示该图像块,即预览图像。在电子设备保持不动的情况下,假设触摸传感器180K接收到滑动操作,相应的硬件中断被发给内核层。内核层将滑动操作加工成原始输入事件存储在内核层。假设相机应用从内核层获取该原始输入事件,识别该输入事件对应的是滑动方向,则图像处理库在超广角摄像头采集的图像上确定另一个图像块,该另一个图像块与所述一个图像块之间的方位与所述滑动方向相关。因此,电子设备录像过程中,保持不动的情况下,也可以实现“移镜头”的效果。
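上述“滑动方向决定下一图像块方位”的处理流程,可以用如下Python代码示意(仅为示意性草图,函数名与步长step均为本文为说明而假设,并非实际实现;图像与目标区域尺寸沿用下文示例中的4148*2765与2094*1178):

```python
def next_region_origin(origin, direction, step=50):
    """根据滑动方向计算下一帧图像上目标区域的左上角位置。
    direction: (dx, dy) 单位方向,例如向右滑为 (1, 0)。"""
    x, y = origin
    dx, dy = direction
    return (x + dx * step, y + dy * step)

def clamp_region(origin, region, frame):
    """保证目标区域不超出超广角摄像头采集图像的边界。"""
    x, y = origin
    rw, rh = region
    fw, fh = frame
    x = max(0, min(x, fw - rw))
    y = max(0, min(y, fh - rh))
    return (x, y)

# 向右滑动一次,目标区域左上角右移 step 个像素
origin = next_region_origin((100, 100), (1, 0))
origin = clamp_region(origin, (2094, 1178), (4148, 2765))
```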
为了便于理解,本申请以下实施例将以电子设备是手机为例,结合附图对本申请实施例提供的视频拍摄方法进行具体阐述。
本申请提供的视频拍摄方法中,在手机保持不动的情况下,可以实现“移镜头”或“摇镜头”等运镜拍摄手法。
为了方便描述,本文将“移镜头”称为“移模式”,将“摇镜头”称为“摇模式”,将“移模式”、“摇模式”等统称为“运镜模式”。此处,仅以“移模式”、“摇模式”为例,可以理解的是,运镜模式还可以更细化。比如,移模式按照移动方向可以包括上移、下移、左移和右移等模式;按照移动速度可以包括加速移、匀速移、减速移动等模式。示例性的,参见下表1,列出各种运镜模式。
表1:各种运镜模式
匀速模式:匀速上移、匀速下移、匀速左移、匀速右移、匀速上摇、匀速下摇、匀速左摇、匀速右摇、匀速推、匀速拉、匀速顺时针旋转、匀速逆时针旋转
加速模式:加速上移、加速下移、加速左移、加速右移、加速上摇、加速下摇、加速左摇、加速右摇、加速推、加速拉、加速顺时针旋转、加速逆时针旋转
减速模式:减速上移、减速下移、减速左移、减速右移、减速上摇、减速下摇、减速左摇、减速右摇、减速推、减速拉、减速顺时针旋转、减速逆时针旋转
上表1以36种运镜模式为例,可以理解的是,还可以包括更多模式,比如以移模式 为例,除去上、下、左、右移为例,还可以包括其它方向的移动,此处不一一列举。
以下,介绍当手机保持不动时,上述各种“运镜模式”的实现原理。
(1)移模式
手机启动摄像头,比如广角摄像头或超广角摄像头。所述摄像头输出图像流。参见图3A中的(a)所示,图3A中的(a)中最大的方框代表摄像头输出的图像,一般是包括较多拍摄物体的图像。为了方便描述,图3A中的(a)中仅截取了图像流中第m帧到第m+3帧图像为例进行介绍。以第m帧图像为例,其对应的大方框中的小方框被标注为第m区域,可以将第m区域内的图像块裁剪出来作为预览图像显示在显示屏上。也就是说,手机上显示的预览图像是摄像头采集的图像裁剪出的一个图像块。此处的介绍对下面的摇模式、推模式、拉模式等同样适用。
以右移为例,假设从第m帧开始右移,参见图3A中的(a),预览图像是第m帧图像上的第m区域内的图像块。下一帧预览图像是第m+1帧图像上第m+1区域内的图像块,第m+1区域相对于第m区域的位置右移距离A。下一帧预览图像是第m+2帧图像上第m+2区域内的图像块,第m+2区域的位置相对于第m+1区域的位置右移距离B,相对于第m区域的位置右移距离A+B,以此类推。将第m区域、第m+1区域、第m+2区域等统称为目标区域。也就是说,目标区域在摄像头采集的图像上的位置逐渐右移,产生镜头右移的效果,但实际上手机并未移动。
图3A中的(a)以第m区域是第m帧图像上的中心区域为例,可以理解的是,第m区域还可以是第m帧图像上的其它区域。比如,第m区域的左边缘与第m帧图像的左边缘重叠,即目标区域从图像最左侧移动到图像最右侧。
图3A中的(a)是以向右移动为例进行说明。可以理解的是,还可以包括左移、上移、下移,对角线移等各种方向的移动,原理相同,不再赘述。
下面给出一个具体的示例。为了简化,以预览图像1s刷新1帧为例。
假设摄像头输出的图像流中的图像为4148*2765、目标区域为2094*1178。以目标区域从图像最左侧移动到图像最右侧为例,且以匀速移动为例。参见图3B中的(a)、(b)和(c),右移过程:目标区域的中心点在X方向从-1027移到+1027,所以需要3秒完成从最左到最右的平移。此处以右移为例,可以理解的是左移是相同原理,不再赘述。而且此处是以1s刷新1帧为例的,可以理解的是,实际应用中1s可以刷新多帧。
同理,向上平移:目标区域的中心点在Y方向从-793移到+793,在3秒内(或者在用户设定的时长内)完成最下至最上的平移。此处以上移为例,可以理解的是向下移是相同原理,不再赘述。
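上述匀速平移的逐帧位置计算,可以用如下Python代码示意(仅为示意性草图,其中目标区域尺寸2094*1178取自上文示例,函数名为说明而假设):

```python
def pan_centers(start, end, steps):
    """匀速平移:返回目标区域中心点在某一坐标轴上的逐帧位置(含首尾)。"""
    step = (end - start) / steps
    return [round(start + step * i) for i in range(steps + 1)]

def crop_rect(cx, cy, w=2094, h=1178):
    """由目标区域中心点计算裁剪矩形 (left, top, right, bottom),坐标原点取图像中心。"""
    return (cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2)

# 右移示例:中心点 X 方向从 -1027 匀速移到 +1027,每秒 1 帧,3 秒完成
centers = pan_centers(-1027, 1027, 3)
```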
(2)摇模式
与移模式不同的是,摇模式除了需要改变目标区域在摄像头采集的图像上位置之外,还需要对目标区域内的图像块作视角转换处理,预览图像是经过视角转换的图像块。比如,手机可以先将摄像头采集的图像(如,第m帧图像)作视角转换处理,然后确定经过视角转换处理的图像上目标区域的图像块作为预览图像;或者,手机也可以先确定摄像头采集的图像上目标区域内的图像块,然后将图像块视角转换处理,经过视角转换处理的图像块作为预览图像。
以右摇为例,假设从第m帧开始右摇,参见图3A中的(a)所示,预览图像是将第m区域内的图像块作视角转换处理后得到的图像块;下一帧预览图像是将第m+1区域内的图像块作视角转换后得到的图像块。下一帧预览图像是将第m+2区域内的图像块作视角转换后得到的图像块,以此类推,所以实现镜头右摇的效果,但实际上手机位置不动。
其中,视角转换可以通过仿射变换实现。示例性的,仿射变换的过程包括:将图像上的像素点乘以线性变换矩阵,再加上平移向量得到视角转换后的图像。示例性的,视角转换后的图像满足如下公式:
[x']   [m11 m12 m13]   [x]
[y'] = [m21 m22 m23] * [y]
[1 ]   [ 0   0   1 ]   [1]
通过上述可得:
x’=m11*x+m12*y+m13
y’=m21*x+m22*y+m23
其中,(x’,y’)是视角转换后的图像上的像素点,(x,y)是视角转换前的图像上的像素点,公式中的矩阵
[m11 m12 m13]
[m21 m22 m23]
[ 0   0   1 ]
为用于实现线性变换和平移的矩阵。其中,m11,m12,m21,m22为线性变化参数,m13,m23为平移参数。m11,m12,m21,m22与旋转角度相关。假设摇镜头的旋转角度为θ,m11=cos(θ),m12=-sin(θ),m21=sin(θ),m22=cos(θ),m13=0,m23=0;因此,上述公式可变形如下:
x’=x*cos(θ)-sin(θ)*y
y’=x*sin(θ)+cos(θ)*y
示例性的,旋转角度θ可以有多种方式确定。例如,旋转角度θ是预设的固定值,或者,是用户可以设置的。因此,手机确定旋转角度后,可以基于上述公式进行视角转换处理。
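上述视角转换公式可以用如下Python代码示意(仅为示意性草图,旋转角度θ以弧度表示,函数名为说明而假设):

```python
import math

def rotate_point(x, y, theta):
    """按公式 x'=x*cos(θ)-sin(θ)*y, y'=x*sin(θ)+cos(θ)*y 旋转像素点坐标。"""
    x2 = x * math.cos(theta) - math.sin(theta) * y
    y2 = x * math.sin(theta) + math.cos(theta) * y
    return x2, y2
```

例如,θ取π/2(即顺时针或逆时针旋转90度,取决于坐标系方向)时,点(1, 0)被映射到(0, 1)。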
(3)推模式
推模式对应推镜头的拍摄方式,可以理解为摄像头逐渐向物体推近,即取景界面内拍摄物体被放大,有助于聚焦物体细节。
参见图3A中的(b),以第m帧图像为例对应的大方框中包括小方框即第m区域,手机将第m区域内的图像块裁剪出显示在屏幕上。
假设从第m帧开始使用推模式,继续参见图3A中的(b)所示,预览图像是从第m帧图像上第m区域内裁剪出的图像块。下一帧预览图像是第m+1区域内裁剪出的图像块,第m+1区域的面积小于第m区域的面积。下一帧预览图像是第m+2区域内裁剪出的图像块,第m+2区域的面积小于第m+1区域的面积,以此类推。也就是说,图像块的面积越来越小,所以当图像块在显示屏上显示时,为了适配显示屏的尺寸,那么图像块需要被放大显示,如果图像块越来越小,那么放大倍数也就越来越大。因此,手机上预览图像内的拍摄物体逐渐被放大,实现摄像头逐渐靠近物体的拍摄效果,但手机位置未改变。
(4)拉模式
拉模式对应拉镜头的拍摄方式,可以理解为摄像头逐渐远离物体,即取景界面内拍摄物体被缩小,有助于拍摄物体全貌。
区别于推模式,假设从第m帧开始使用拉模式,预览图像是第m帧图像上的第m区域内的图像块。下一帧预览图像是第m+1帧图像上第m+1区域内的图像块,第m+1区域的面积大于第m区域的面积。下一帧预览图像是第m+2帧图像上第m+2区域内的图像块,第m+2区域的面积大于第m+1区域的面积,以此类推。也就是说,图像块的面积越来越大,所以当图像块在显示屏上显示时,为了适配显示屏的尺寸,那么图像块需要被缩小显示,如果图像块越来越大,那么缩小倍数也就越来越大。因此,手机上预览图像内的拍摄物体被缩小,实现摄像头逐渐远离拍摄物体的效果,但手机位置未改变。
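上述推模式与拉模式中目标区域面积的逐帧变化,可以用如下Python代码示意(仅为示意性草图,缩放比例ratio与函数名均为说明而假设;ratio<1对应推模式,ratio>1对应拉模式):

```python
def zoom_sizes(w, h, ratio, frames):
    """逐帧缩放目标区域尺寸:推模式 ratio<1(区域缩小,画面被放大),
    拉模式 ratio>1(区域放大,画面被缩小)。返回各帧目标区域的宽高。"""
    sizes = [(w, h)]
    for _ in range(frames - 1):
        w, h = round(w * ratio), round(h * ratio)
        sizes.append((w, h))
    return sizes

def display_scale(region_w, screen_w):
    """图像块显示到屏幕时的放大倍数:目标区域越小,放大倍数越大。"""
    return screen_w / region_w
```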
(5)旋转模式
旋转模式下,手机除了需要确定目标区域中的图像块之外,还需要对图像块作旋转处理。比如,手机可以先将摄像头采集的图像作旋转处理,然后确定经过旋转处理后的图像上目标区域内的图像块作为预览图像。或者,手机也可以先确定摄像头采集的图像上目标区域内的图像块,然后将图像块作旋转处理,经过旋转处理的图像块作为预览图像。
以顺时针旋转为例,假设从第m帧开始旋转,参见图3A中的(c),预览图像是第m区域内的图像块。下一帧预览图像是将第m+1区域内的图像块顺时针旋转角度G后的图像块。下一帧预览图像是将第m+2区域内的图像块顺时针旋转角度G+P后的图像块,以此类推。也就是说,目标区域按照顺时针方向逐渐旋转。因此,预览图像上的拍摄物体逐渐以顺时针方向旋转,实现手机旋转拍摄的效果,但手机位置并未改变。
图3A中的(c)是以顺时针旋转为例进行说明。可以理解的是,还可以包括逆时针旋转,原理相同,不再赘述。
(6)匀速模式
匀速模式包括匀速移、匀速摇、匀速推、匀速拉、匀速旋转等模式。更细化地,匀速移还可以包括匀速上、匀速下、匀速左、匀速右移等,匀速旋转还可以包括匀速顺时针旋转、匀速逆时针旋转等,参见上表1所示。
以匀速移为例,且以匀速右移为例,请参见图3A中的(a),比如,A=B=C,即目标区域(如,第m区域、第m+1区域、第m+2区域等)每次移动相同的距离,实现匀速右移的效果。
以匀速推为例,请参见图3A中的(b),目标区域(如,第m区域、第m+1区域、第m+2区域等)每次减小相同的面积,实现匀速推的拍摄效果。
以匀速旋转为例,且以匀速顺时针旋转为例,请参见图3A中的(c),比如,G=P=W,即目标区域(如,第m区域、第m+1区域、第m+2区域等)每次旋转相同的角度,实现匀速旋转的效果。
(7)加速模式
加速模式包括加速移、加速摇、加速推、加速拉、加速旋转等模式。更细化地,加速移还可以包括加速上、加速下、加速左或加速右移等,加速旋转还可以包括加速顺时针旋转、加速逆时针旋转等,参见上表1所示。
以加速移为例,且以加速右移为例,请参见图3A中的(a),比如,A<B<C,即目标区域(如,第m区域、第m+1区域、第m+2区域等)每次移动的距离增大,实现加速右移的效果。
以加速推为例,请参见图3A中的(b),即目标区域(如,第m区域、第m+1区域、第m+2区域等)的面积减少量逐渐增大,比如,第m+1区域与第m区域之间的第一面积差小于第m+2区域与第m+1区域的第二面积差,实现加速推的效果。
以加速旋转为例,且以加速顺时针旋转为例,请参见图3A中的(c),比如,G<P<W,即目标区域(如,第m区域、第m+1区域、第m+2区域等)每次旋转的角度增大,实现加速旋转的效果。
继续以加速右移为例,除去通过设置A、B、C的取值满足A<B<C之外,还有其它方式可以实现加速右移动。比如,继续参见图3A中的(a),设置A=B=C,然后通过抽帧的方式实现加速移。比如,抽出第m帧、第m+1帧、第m+3帧,所以预览图像依次是第m区域内的图像块、第m+1区域内的图像块、第m+3区域内的图像块,而第m+1区域相对于第m区域右移A,第m+3区域相对于第m+1区域右移B+C,实现加速右移。
(8)减速模式
减速模式包括减速移、减速摇、减速推、减速拉、减速旋转等模式。更细化地,减速移还可以包括减速上、减速下、减速左或减速右移等,减速旋转还可以包括减速顺时针旋转、减速逆时针旋转等,参见上表1所示。
以减速移为例,且以减速右移为例,请参见图3A中的(a),比如,A>B>C,即目标区域(如,第m区域、第m+1区域、第m+2区域等)每次移动的距离降低,实现减速右移的效果。
以减速推为例,请参见图3A中的(b),即目标区域(如,第m区域、第m+1区域、第m+2区域等)的面积减少量逐渐降低,比如,第m+1区域与第m区域之间的第一面积差大于第m+2区域与第m+1区域的第二面积差,实现减速推的效果。
以减速旋转为例,且以减速顺时针旋转为例,请参见图3A中的(c),比如,G>P>W,即目标区域(如,第m区域、第m+1区域、第m+2区域等)每次旋转的角度降低,实现减速旋转的效果。
继续以减速右移为例,除去通过设置A、B、C的取值满足A>B>C之外,还有其它方式可以实现减速右移。比如,继续参见图3A中的(a),设置A=B=C,然后通过插帧的方式实现减速移。比如,在第m帧与第m+1帧之间插入第p帧,第p帧上的目标区域是第p区域,相对于第m区域右移X,X小于A,所以预览图像依次是第m区域内的图像块、第p区域内的图像块、第m+1区域内的图像块等,而第p区域相对于第m区域的右移X,第m+1区域相对于第m区域右移A,实现减速右移的效果,但手机位置未改变。
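上述通过抽帧实现加速播放、通过插帧实现减速播放的做法,可以用如下帧序列处理的Python代码示意(仅为示意性草图,插帧此处以重复相邻帧近似,实际可采用运动补偿等插值算法;函数名为说明而假设):

```python
def drop_frames(frames, keep_every):
    """抽帧:每 keep_every 帧保留 1 帧,N 帧抽成 M 帧(M<N),播放时呈现加速效果。"""
    return frames[::keep_every]

def insert_frames(frames, times):
    """插帧:每帧重复 times 次(以重复近似插值),N 帧变为 M 帧(M>N),播放时呈现减速效果。"""
    out = []
    for f in frames:
        out.extend([f] * times)
    return out
```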
以下,介绍手机保持不动的情况下,应用上述各种“运镜模式”拍摄视频的过程。
一般来说,为了给观众带来极致的观影效果,电影拍摄过程中会使用大量的移镜头、摇镜头、推拉镜头等运镜拍摄手法。然而,电影的拍摄需要专业的拍摄设备以及专业的摄影人员。因此,本申请考虑通过手机,且在手机保持不动的情况下,利用各种运镜模式实现类似电影的拍摄。比如,手机中可以提供微电影模式(或者可称为电影模式),在微电影模式下,用户可以使用手机实现类似电影的拍摄。具体而言,微电影模式中包括多种故事模板,每种故事模板包括多种不同的运镜模式,手机使用某种故事模板进行视频拍摄时,可以使用该故事模板所包括的不同运镜模式进行视频拍摄,以提升视频拍摄质量,而且操作便捷,即便非专业摄像人员也可以使用各种运镜模式完成拍摄,从而在一定程度上提升了视频拍摄的趣味性。
以下,结合附图对本申请实施例的技术方案进行具体阐述。
图4中的(a)示出了手机的一种图形用户界面(graphical user interface,GUI),该GUI为手机的主界面(home screen)。主界面中包括多种应用的图标,比如,其中包括相机应用的图标。当手机检测到用户点击相机应用的图标的操作时,启动相机应用,显示如图4中的(b)所示的另一GUI,该GUI可以称为取景界面(或拍摄界面)。取景界面内可以实时显示预览图像。
在检测到用户用于指示微电影模式的操作后,手机进入(或启动)微电影模式。微电影模式下,手机可以使用各种故事模板录制视频。
其中,用户指示微电影模式的方式可以有多种。
比如,取景界面上包括用于指示微电影模式的按钮。当手机检测到用户点击该按钮的操作时,手机进入微电影模式。
示例性的,所述按钮例如可以是图5A中的(a)所示的按钮501。或者,所述按钮还可以显示在图5A中的(b)所示的位置。可选的,在进入录像模式时,取景界面中显示所述按钮,在拍照模式下所述按钮可以不显示。或者,所述按钮还可以显示在图5A中的(c)所示位置。其中,所述按钮的显示位置可以是手机默认设置好的,或者,也可以是用户设置的,对此不作限定。
或者,以图4中的(b)所示,手机检测到针对更多按钮的点击操作时,显示如图5B所示的模式选择界面,该界面中包括微电影模式的图标。当手机检测到用户点击该图标的操作时,手机进入微电影模式。
或者,手机在取景界面上检测到用户的预设手势操作时,进入微电影模式。例如,所述预设手势操作例如可以是在取景界面中划一个圆圈的手势操作;或者,是在取景界面内预览图像上的长按操作,等等,本实施例不限定。
或者,在手机显示取景界面后,若手机检测到用户语音指示进入微电影模式时,进入微电影模式。
手机进入微电影模式之后,可以显示多个故事模板,每个故事模板可以包括多种运镜模式。
可选的,为了方便用户直观的了解故事模板,手机还可以提供每种故事模板对应的视频样例,视频样例可以理解为已经使用故事模板录制好的成品。比如,故事模板包括旅行模板。旅行模板的视频样例包括三个视频片段,每个视频片段使用一种运镜模式拍摄而得。这样的话,用户通过观看旅行模板的视频样例就可以大概的知道旅行模板拍摄效果。
比如,手机进入微电影模式后的界面请参见图6中的(a)所示,为了方便描述,下文将所述界面称为微电影模式的主页。主页中包括多个故事模板,比如图6中的(a)中示出的旅行模板、安静模板、动感模板。
主页中还可以包括预览框601,用于显示故事模板的视频样例。比如,当手机检测到 用户选择旅行模板的操作(比如点击旅行模板的操作)时,预览框601中显示旅行模板的视频样例。
考虑到旅行模板的视频样例是由三份样例片段合成的,所以,预览框601中可以直接播放所述视频样例;或者,也可以单独播放每份样例片段。比如,预览框601中播放完第一份样例片段之后,自动播放下一个样例片段,或者等待一定时长自动播放下一个样例片段。为了方便用户区分预览框601当前正在播放哪一个样例片段,手机可以输出一定的提示。比如,如图6中的(a),当预览框601中正在播放第一份样例片段时,标记602中的第一个小圈呈第一颜色(比如黑色),其它两个小圈呈第二颜色(比如,白色);当预览框601中正在播放第二份样例片段时,标记602中的第二个小圈呈第一颜色(比如黑色),其它两个小圈呈第二颜色(比如,白色)。
或者,手机检测到用户选择第一份样例片段的操作时,预览框601中播放第一份样例片段。手机检测到选择第二份样例片段的操作时,预览框601中播放第二份样例片段。举例来说,继续参见图6中的(a),预览框601中默认显示第一份样例片段。手机检测到用户在预览框601中的左滑操作时,预览框601中包括下一个样例片段;当手机再次检测到用户在预览框601中左滑操作时,预览框601中包括下一个样例片段。同时,标记602也可以提示用户预览框601中当前正在播放的样例片段。
可选的,视频样例中可以包括音乐,该音乐可以是默认设置的,比如是旅行模板配套设置的。
可选的,除去标记602之外,预览框601中还可以显示样例片段的份数、旅行模板的总录制时长、每个视频片段的录制时长等。
考虑到用户可能会想要了解每种故事模板所使用的运镜模式,手机可以通过在触摸屏上显示或通过声音的方式给用户以提示说明,以告知用户故事模板所使用的运镜模式。例如,参见图6中的(a)所示,当旅行模板被选中时,预览框601中除了显示旅行模板的视频样例之外,还可以显示“详情”按键。当手机检测到用户点击“详情”按键的操作时,显示如图6中的(b)所示的界面,该界面中包括旅行模板包括的每个样例片段所使用的运镜模式,比如,第一份样例片段使用右移模式,第二份样例片段使用推模式,第三份样例片段使用顺时针旋转模式。为了方便描述,此处均以匀速为例。
需要说明的是,当预览框601中正显示旅行模板的视频样例时,通过“详情”按键可以查看旅行模板所使用的运镜模式。因此,可以理解的是,当预览框601中正显示安静模板的视频样例时,通过“详情”按键可以查看安静模板所使用的运镜模式,不再赘述。
继续参见图6中的(a),主页中还包括控件603,该控件603用于进入某种故事模板的录制界面。比如,假设手机检测到用户选中旅行模板,然后手机检测到点击控件603的操作,则进入旅行模板的录制界面。
示例性的,旅行模板的录制界面如图7A所示。所述录制界面中包括提示701,用于提示用户当前处于旅行模板。当然,也可以不显示提示701。所述录制界面还包括三个标记:标记702至标记704,其中,标记702用于指示旅行模式的第一种运镜模式,标记703用于指示第二种运镜模式,标记704用于指示第三种运镜模式。
可选的,标记702中显示时间1,该时间1用于指示使用第一种运镜模式的录制时长。 同样、标记703中显示时间2,该时间2用于指示使用第二种运镜模式的录制时长。标记704中显示时间3,该时间3用于指示使用第三种运镜模式的录制时长。时间1、时间2和时间3可以是默认设置好的,三个时间可以相同或不同(图7A中以三个时间都是3s为例)。或者,时间1、时间2和时间3也可以是用户设置的(下文介绍)。
所述录制界面还显示按钮706,用于关闭旅行模板的录制界面。假设手机检测到点击按钮706的操作,返回如图6中的(a)所示的主页。
所述录制界面中还显示按钮705。可选的,按钮705可以是录制按钮,用于控制录制的开始和/或停止。
方式1,旅行模板包括三种运镜模式,对于每种运镜模式,可以通过按钮705控制录制的开始和/或停止。
继续参见图7A,当手机检测到用于选择标记702的操作(如,单击标记702)时,确定标记702对应的第一种运镜模式。当检测到点击按钮705的操作时,手机开始使用第一种运镜模式进行视频拍摄。以第一种运镜模式是右移模式为例,其实现原理参见图3A中的(a)的介绍,其录制效果请参见图7B。具体地,如图7B中的(a)中预览图像中拍摄物体“塔”在图像右侧位置,图7B中的(b)中预览图像中“塔”在图像中间位置,图7B中的(c)中预览图像的“塔”在图像左侧位置,相当于手机向右移动的效果,实际上手机并未移动。
在手机使用第一种运镜模式进行视频拍摄的过程中,标记702中的时间自动减少。比如,图7B中的(a)中标记702中时间为3s,图7B中的(b)中标记702中时间减为2s,图7B中的(c)中标记702中时间减为1s。当时间减为0时,停止录制。至此,手机使用第一种运镜模式的录制过程结束。
再比如,参见图7A,当手机检测到用于选择标记703的操作(如,单击标记703)时,确定标记703对应的第二种运镜模式。当检测到点击按钮705的操作时,手机开始使用第二种运镜模式进行视频拍摄。以第二种运镜模式是推模式为例,其实现原理参见图3A中的(b)的介绍,其录制效果请参见图7C。具体地,如图7C中的(a)中预览图像中拍摄物体“塔”从视觉上感觉相对较远,所以塔较小,图7C中的(b)中预览图像中“塔”被放大,给用户靠近“塔”的感觉,图7C中的(c)中预览图像的“塔”进一步被放大,相当于手机靠近物体的效果,实际上手机并未移动。
同理,在手机使用第二种运镜模式录制的过程中,标记703中的时间自动减少。当时间减为0时,停止录制。至此,手机使用第二种运镜模式的录制过程结束。
再比如,参见图7A,当手机检测到用于选择标记704的操作(如,单击标记704)时,确定标记704对应的第三种运镜模式。当检测到点击按钮705的操作时,手机开始使用第三种运镜模式进行视频拍摄。以第三种运镜模式是顺时针旋转模式为例,其实现原理参见图3A中的(c)的介绍,其录制效果请参见图7D。具体地,如图7D中的(a)中预览图像中拍摄物体“塔”竖直,图7D中的(b)中预览图像中“塔”顺时针旋转,图7D中的(c)中预览图像的“塔”进一步旋转,相当于手机顺时针旋转的拍摄效果,实际上手机并未移动。
同理,手机以运镜模式3录制过程中,标记704中的时间自动减少。当时间减为0时,停止录制。至此,手机使用第三种运镜模式的录制过程结束。
因此,在上面的方式1中,用户通过标记702至标记704选择运镜模式,然后通过按钮705控制手机使用选择的运镜模式开始拍摄。
当然,按钮705还可以控制拍摄停止。比如,以图7A为例,用户选择第一种运镜模式,当手机检测到点击按钮705的操作时,开始以第一种运镜模式录制,当检测到再次点击按钮705的操作时,停止以第一种运镜模式录制。对于第二种运镜模式和第三种运镜模式是相同原理,不重复赘述。也就是说,对于每种运镜模式,不仅可以通过按钮705控制录制开始,还通过按钮705控制录制停止。这种情况下,每种运镜模式的录制时长可以不预先设置,比如,录制时长可由用户决定,在用户想要停止录制时,点击按钮705即可。
方式2,在上述方式1中,对于每种运镜模式,都需要用户点击一次按钮705来开始录制。区别于方式1,在方式2中,手机检测到针对按钮705的操作时,自动地按照顺序依次使用三种运镜模式进行录制。比如,参见图7A,手机检测到点击按钮705的操作时,先以第一种运镜模式开始录制,录制时长达到预设时长(比如3s)录制结束后,自动以第二种运镜模式开始录制,然后使用第三种运镜模式录制。这种方式只需用户点击一次按钮705即可,操作便捷。当然,方式2中,按钮705也可以控制录制停止或暂停,不再赘述。
同理,方式2中,在手机使用第一种运镜模式进行录制的过程中,标记702中的时间可以逐渐减小,当时间减为0时,停止以第一种运镜模式录制。使用第二种运镜模式和第三种运镜模式进行录制时,原理相同,不再赘述。
可替代性的,按钮705也可以是视频合成按钮,用于将录制的片段合成视频。
比如,标记702至标记704作为录制按钮。继续参见图7A,当检测到点击标记702的操作时,手机开始以第一种运镜模式录制,达到录制时长(比如3s)停止录制,存储录制到的片段(为了方便区分称为片段1)。当检测到点击标记703的操作时,使用第二种运镜模式录制得到片段2。当检测到点击标记704的操作时,使用第三种运镜模式录制得到片段3。当检测到点击按钮705的操作时,将片段1至片段3合成一个视频。
同样,在手机使用第一种运镜模式录制的过程中,标记702中的时间可以逐渐减小,当时间减为0时,停止使用第一种运镜模式录制。使用第二种运镜模式和第三种运镜模式进行录制时,原理相同,不再赘述。
可选的,每种运镜模式的录制时长(比如3s)也可以不预先设置。比如,当检测到标记702的点击操作时,手机开始以第一种运镜模式录制,当再次检测到点击标记702的操作时,停止以第一种运镜模式录制。对于第二种和第三种运镜模式相同原理。也就是说,通过标记702来控制第一种运镜模式录制开始和停止,即每种运镜模式,录制时长可由用户决定。
前面以每种运镜模式的录制时长预设的且是3s为例,可以理解的是,每种运镜模式的录制时长可以调整。比如,参见图8A中的(a)所示,手机检测到针对标记702的操作(如,长按操作)时,可以显示选择框,其中包括时间设置按钮。当检测到针对时间设置按钮的操作时,显示如图8A中的(b)所示的界面,该界面中显示“+”按键和“-”按键,“+”按键用于增加时间,比如增大为4s;“-”按键用于减少时间,比如减少为2s。
需要说明的是,图7A是以旅行模板中包括三种运镜模式为例,可以理解的是,旅行模板中还可以包括更多或更少的运镜模式;比如,用户可以添加或删除运镜模式。
以删除运镜模式为例,比如,参见图8A中的(a),手机检测到针对标记702的操作(如,长按操作)时,显示选择框,其中包括删除按钮。当手机检测到删除按钮的操作时,删除标记702,相应的,删除标记702对应的第一种运镜模式。
以添加运镜模式为例,比如,参见图8B中的(a),旅行模式的录制界面中还包括“+”按钮,手机检测到针对“+”按钮的操作(如,点击操作)时,显示如图8B中的(b)所示的界面,其中包括运镜模式列表。用户选中某种运镜模式后,若手机检测到点击“添加”按钮的操作时,显示如图8B中的(c)所示的界面,该界面中增加标记707,用于指示用户选择添加的运镜模式。
可选的,不同运镜模式之间的顺序可以调整。比如,参见图8C所示,手机检测到长按且拖动标记704的操作时,标记704处于可移动状态。当检测到标记704被拖动到标记702和标记703之间时,三种运镜模式的顺序调整为,第一种运镜模式、第三种运镜模式、第二种运镜模式。那么,合成的视频中三个片段的顺序为:片段1、片段3、片段2。其中,片段1是使用第一种运镜模式拍摄而得,片段2是使用第二种运镜模式拍摄而得,片段3是使用第三种运镜模式拍摄而得。
考虑到手机进入旅行模板的录制界面(比如图7A的界面)之后,用户可能不记得旅行模板包括哪些运镜模式。为了方便用户查看旅行模板所包括的运镜模式,可以在旅行模式的录制界面中提示用户旅行模式的运镜模式。比如,参见图8D中的(a)所示,当手机检测到点击按钮603的操作时,进入如图8D中的(b)所示的界面,该界面中包括小窗口,小窗口中可以播放旅行模板的视频样例;或者,小窗口中可以单独播放每个样例片段。比如,如图8D中的(b)所示,当手机检测到针对标记702的操作时,所述小窗口中播放第一份样例片段(例如,循环播放样例片段1或只播放一次)。当手机检测到针对标记703的操作时,所述小窗口中播放第二份样例片段(例如,循环播放样例片段1或只播放一次)。这样的话,在录制的过程中,用户可以查看每份样例片段所使用的运镜模式。
可选的,手机使用旅行模板完成视频拍摄后,可以进入效果展示界面,以方便用户查看拍摄效果。
以上述方式2为例,参见图9A中的(a),手机以最后一种运镜模式录制达到录制时长(如3s)后,可自动进入如图9A中的(b)所示效果展示界面,该界面中预览框901,用于显示由三个视频片段合成的视频。如果手机检测到针对“确定”按键的操作,手机存储合成的视频。比如,返回到如图9A中的(a)所示的界面,该界面中左下角的图库标记中显示合成的视频中的图像。如果用户对合成的视频不满意,可以重新录制。比如,当手机检测到针对“返回”按键的操作时,返回到图9A中的(a)所示的界面,以重新录制。可选的,预览框901可以占用全部或部分显示屏。如果占用全部显示屏,“确定”按键和“返回”按键可以显示在预览框901的上层。
需要说明的是,图9A中的(b)以预览框901中展示合成的视频为例,可以理解的是,每个视频片段也可以独立展示。比如,参见图9B中的(a)所示,为另一种效果展示界面的示例。该界面中包括每个视频片段的标记。比如,当手机检测到针对片段1的标记的操作(如,单击操作)时,预览框901中播放片段1。当手机检测到针对片段2的标记的操 作时,预览框中播放片段2。因此,用户可以一一查看录制的每个视频片段。当手机检测到点击确认按键的操作时,将三个片段合成视频并存储合成的视频,并返回如图9A所示的界面。
可选的,三个视频片段的顺序可以调整。比如,继续参见图9B中的(a)所示,手机检测到针对片段2的标记的操作(如,长按且拖动的操作),改变该标记的显示位置,比如将片段3的标记拖动到片段1的标记和片段2的标记之间,如图9B中的(b)所示。因此,片段3和片段2的顺序调整。此时,如果手机检测到针对“确定”按键的操作,合成的视频中的视频片段的显示顺序为片段1、片段3、片段2。
可选的,考虑到存在一些情况,比如,用户对三个视频片段中的某个视频片段不满意,此时,可以删除该视频片段,将剩余的视频片段合成视频。比如,参见图9C中的(a),手机检测到针对片段3的标记的操作(比如,长按操作),显示删除按键,当检测到针对删除按键的操作时,删除片段3,参见图9C中的(b)。此时,如果手机检测到针对“确定”按键的操作,则使用片段1和片段2合成视频。
或者,用户对某个视频片段不满意的情况下,也可以重新录制该片段。比如,继续参见图9C中的(a),手机检测到针对片段3的标记的操作(如,长按操作)时,显示重录按键。当手机检测到点击重录按键的操作时,显示如图9C中的(c)所示的界面,该界面用于重新录制片段3。因此,该界面中可以只显示第三运镜模式的标记704,不显示第一种和第二种运镜模式的标记。比如,继续参见图9C中的(c),手机检测到点击按钮705的操作时,开始以第三种运镜模式录制,当达到录制时长(比如3s)后,自动返回到如图8C所示的界面,其中片段3是重新录制的片段。
可选的,在将片段1至片段3合成视频之前,手机还可以添加本地已录制的视频片段。那么,在合成视频时,将片段1至片段3以及添加的本地视频合成。比如,参见图9D中的(a),效果展示界面中显示“+”按键。当手机检测到点击“+”按键的操作时,显示如图9D中的(b)所示的界面,该界面为手机图库的界面。假设用户选择视频913,当手机检测到点击“添加”按键的操作时,显示如图9D中的(c)所示的界面,该界面中添加了片段4即视频913。此时,如果手机检测到点击确定按键的操作时,将4个片段合成视频。
可选的,手机还可以对录制的视频片段进行裁剪、添加文字或音乐等处理。
比如,参见图9E中的(a)所示,预览框901中显示四个图标,分别为裁剪图标、文本图标、音乐图标、消音图标。可以理解的是,当预览框中播放片段1时,所述四个图标作用于片段1,当预览框中播放片段2时,所述四个图标作用于片段2。
以预览框中显示片段1为例,当手检测到针对裁剪图标的操作时,可以显示如图9E中的(b)所示的界面,预览框901中显示裁剪框910,裁剪框910中可以显示片段1的所有帧图像。比如,当用户想要裁剪最后几帧,可以将裁剪条911移动到图9E中的(b)所示的位置,那么最后几帧被裁剪掉。当检测到点击完成按键时,可以返回图9E中的(a)所示的界面,此时预览框901中显示裁剪之后的片段1。
当手机检测到针对文本图标的操作时,显示如图9E中的(c)所示的界面,该界面中预览框901中显示文本输入框,用户可以在文本输入框中输入文本。当检测到点击完成按键时,返回图9E中的(a)所示的界面,此时预览框901中显示添加了文字后的片段1。需要说明的是,此处以添加文字为例进行介绍,可以理解的是,还可以添加表情、动图等。
当手机检测到针对音乐图标的操作时,可以显示如图9E中的(d)所示的界面,该界面中显示歌曲片段的列表,该列表中包括多个歌曲片段的标记,比如,歌曲片段A的标记、歌曲片段B的标记等。当手机检测到用户选择歌曲片段A的标记时,手机可以将歌曲片段A作为片段1的背景音乐。比如,当检测到点击完成按键时,手机返回图9E中的(a)所示的界面,此时预览框901中播放添加了歌曲片段A的片段1。可选的,除了运镜模式之外,故事模板还可以包括默认音乐,以旅行模板为例,旅行模板的默认音乐是指旅行模板的样例视频中所使用的音乐,所以,假设用户未选择音乐的情况下,手机将旅行模板的默认音乐作为视频的背景音乐。考虑到音乐具有节奏性,视频中的图像播放节奏可以与音乐播放的节拍一致,比如音乐中包括鼓点,当一个鼓点播放时,播放一帧图像,当下一个鼓点播放时,播放下一帧图像,等等。
可选的,手机还可以将录制的三个视频片段中的某个视频片段内的原声消除。所述原声可以理解为录制的视频中的声音。比如,参见图9E中的(a),当手机检测到针对消音图标的操作时,消除片段1的原声。可选的,对于某个视频片段,原声可以全部消除或部分消除。以片段1为例,当检测到消音图标的操作时,手机可以显示音频裁剪框,如类似图9E中的(b)所述的裁剪框910,用户可以通过音频裁剪框选择片段1中要消音的部分片段,那么片段1中未消音的片段保留原声。
需要说明的是,图9E仅列举出对视频片段的裁剪、添加文字、音乐、消除原声这四种处理方式。可以理解的是,还可以包括其它处理方式,比如,还可以提供各种图片风格,用于对片段中的图像进行处理,如黑白风格、动漫风格、水墨风格、强曝光风格、弱曝光风格等等,本申请不一一列举。
可选的,手机还可以选择合成特效,所述合成特效用于以特定的合成方式将三个视频片段合成。比如,参见图9F中的(a),当手机检测到“动效”按钮的操作时,显示如图9F中的(b)所示的界面,该界面中包括多种合成特效。比如,假设用户选择“融合”,手机检测到用户点击“确定”按键的操作时,片段1和片段2合成的方式为:将片段1的最后1帧与片段2的第1帧融合。对应的合成效果为:手机播放完片段1的倒数第2帧后,播放片段1的最后一帧,该最后一帧是与片段2的第一帧融合得到的图像,然后继续播放片段2的第2帧。为了方便用户查看和效果,手机显示如图9F中的(c)所示的界面,用于展示合成效果。当手机检测到点击“保存设置”的按键时,保存“融合”这种合成特效,然后返回图9F中的(a)所示的界面,当手机在该界面检测到点击“确定”按键的操作时,以保存的合成特效合成片段1和片段3。
再比如,以图9F中的(b)为例,假设用户选择逐渐放大,对应的合成效果为:手机在播放完片段1之后,将片段2中的第一帧图像以逐渐放大的方式播放,然后播放片段2的第2帧图像。需要说明的是,此处以“融合”、“逐渐放大”等为例进行说明,对于其他的合成特效也是可以的,本申请不再一一举例。
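以上述“融合”特效为例,片段1的最后一帧与片段2的第一帧的融合可以用逐像素加权平均示意(仅为示意性草图,像素以灰度值列表表示,权重alpha与函数名均为说明而假设):

```python
def blend_frames(frame_a, frame_b, alpha=0.5):
    """将两帧图像逐像素按 alpha 加权融合:out = alpha*a + (1-alpha)*b。"""
    return [round(alpha * a + (1 - alpha) * b) for a, b in zip(frame_a, frame_b)]

# 片段1最后一帧与片段2第一帧各取 50% 权重融合,得到过渡帧
transition = blend_frames([0, 100, 200], [100, 100, 0])
```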
可选的,手机存储视频时,可以一并存储原始视频和合成的视频。比如,以图9B中的(a)为例,手机检测到“确定”按键时,可以存储由片段1至片段3合成的视频;也可以存储原始视频,所谓原始视频可以理解为未使用运镜模式录制的视频。为了方便理解,以图3A中的(a)所示的右移为例,原始视频是由第m帧到第m+3帧图像构成的视频,而不是第m区域至第m+3区域内的图像块构成的视频。比如,参见图10所示,为手机中图库应用的界面,该界面中存储两个视频,一个是使用了运镜模式拍摄的视频,另一个未使用运镜模式。为了方便用户区分,使用运镜模式的视频上可以显示标记1001。可选的,在检测到“确定”按键时,手机还可以一并将片段1至片段3中的每一个片段单独存储。
在上面的实施例中,以图6中的(a)为例,微电影模式的主页中提供了多种故事模板,所述多种故事模板是默认设置好的(比如手机出厂时设置好的),可以理解的是,用户也可以自定义故事模板。为了方便区分,可以将默认设置好的模板统称“默认模板”,将用户自定义的模板统称“自定义模板”。
比如,参见图11A中的(a),主页中还可以显示“+”按钮,手机检测到针对该按钮的操作时,可以显示如图11A中的(b)所示的界面,该界面为手机中图库的界面,用户可以选择图库中存储的视频片段。假设用户选择片段1101,当手机检测到用户点击“添加”按键时,打开如图11A中的(c)所示的界面,该界面中添加了新的模板。用户可以设置该新模板的名称。比如,参见图11B,手机检测到针对新模板的操作(比如,长按操作)显示命名按键,当手机检测到点击命名按键的操作时,为新模板命名。
手机添加自定义模板之后,可以解析该模板的运镜模式。当用户选择自定义模块时,使用该自定义模块对应的运镜模式拍摄视频。即,用户可以使用模板拍摄出与模板类似效果的视频。比如,用户从电影上截取片段作为自定义模板,那么用户使用该模板拍摄出的视频可以与电影的效果类似,对于非专业摄影人员也可以得到质量较高的拍摄作品,用户体验较高。
可以理解的是,对于用户不喜欢的模板或不常用的模板,可以删除。其中,默认模板和自定义模板均可以删除,或者,仅可以删除自定义模板,默认模板不可删除。比如,参见图11B,手机检测到针对新模板的操作(比如,长按操作)显示删除按键,当手机检测到点击删除按键的操作时,删除该模板。
图11A中的(b)以自定义模板是手机本地的视频为例,可以理解的是,自定义模块还可以有其它方式设置。比如,以图11A中的(a)为例,当手机检测到针对“+”按钮的操作时,显示如图11C所示的界面,该界面中包括运镜模式列表,用户可以从运镜模式的列表中选择多种运镜模式组合成自定义的故事模板。假设用户选择右移模式、拉模式、左摇模式,当手机检测到点击“组合”按键的操作时,可显示如图11A中的(c)所示的界面,该界面中新添加的模板由用户选择的多种运镜模式组成。
结合上述实施例及相关附图,本申请实施例提供了一种视频拍摄方法。如图12所示,该方法可以包括以下步骤:
S1201,启动相机功能。比如,手机检测到用于打开相机应用的操作,启动相机应用。所述操作可以是图4中的(a)中点击相机图标的操作,当然,还可以是其它操作,只要能够打开相机应用即可,本申请实施例不限定操作类型。
S1202,响应于用户第一操作,确定第一录像模板,所述第一录像模板中包括第一示例样片、第二示例样片以及预设音频,所述第一示例样片对应第一运镜模式,所述第二示例样片对应第二运镜模式,其中所述第一运镜模式和所述第二运镜模式不同。
其中,第一录像模板比如可以是图7A中的旅行模板、安静模板等等。当然,第一录 像模板也可以是默认模板,也可以是用户自定义的模板,比如图11A所示。
第一操作可以是一个或多个操作。假设第一操作是一个操作,比如,手机启动相机应用之后,显示如图4中的(b)所示的取景界面,第一操作可以是点击第一录像模板的按钮的操作,或者语音指示第一录像模板的操作等等,本申请实施例不限定。假设第一操作包括多个操作,比如,第一操作包括点击图5A中微电影图标的操作,以及点击图6中的(a)中旅行模板的操作,以及点击控件603的操作。
其中,关于第一示例样片(或称为第一视频样例),所述第二示例样片(或称为第二视频样例)以及预设音频请参见前文描述。
S1203,显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
以第一录像模板是旅行模板为例,录像界面例如可以是图7A所示的界面。
S1204,响应于用户第二操作,保持所述电子设备的位置不动,开始录像。
方式1为,响应于点击录制按钮(比如图7A中的按钮705)的操作(即第二操作),开始录像,具体地,先开始以第一运镜模式录制第一视频片段,录制完成后,自动根据第二运镜模式录制第二视频片段。方式2为,在所述第一运镜模式标识被选中时,响应于用户指示拍摄的操作,根据所述第一运镜模式生成所述第一视频片段,所述第一视频片段的时长为第一预设时长;在所述第二运镜模式标识被选中时,响应于用户指示拍摄的操作,根据所述第二运镜模式生成所述第二视频片段,所述第二视频片段的时长为第二预设时长。也就是说,针对每一种运镜模式,用户可以控制开始录制和/或停止录制。
S1205,自动生成合成视频,所述合成视频中包括第一视频片段、第二视频片段以及所述预设音频,所述第一视频片段为所述电子设备根据所述第一运镜模式生成的视频片段,所述第二视频片段为所述电子设备根据所述第二运镜模式生成的视频片段。
一种方式为,响应于点击录制按钮(比如图7A中的按钮705)的操作(即第二操作),开始录像,录制方式可以是上述方式1或方式2,当第二视频片段录制完成之后,可以自动的合成视频。或者,另一种方式为,在自动生成合成视频之前,显示展示界面,所述展示界面中包括所述第一视频片段和所述第二视频片段;响应于用户输入的视频合成指令,合成视频。
可选的,在根据所述第一运镜模式生成第一视频片段时,所述录像界面中还显示根据所述第一运镜模式生成所述第一视频片段的倒计时;在根据所述第二运镜模式生成所述第二视频片段时,所述录像界面中还显示根据所述第二运镜模式生成所述第二视频片段的倒计时。比如图7B、图7C或图7D。
可选的,用户还可以删除运镜模式标识。比如,手机显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;响应于用户第三操作,删除第一运镜模式标识或第二运镜模式标识;响应于用户第四操作,保持所述电子设备的位置不动,开始录制;
自动生成合成视频,所述合成视频中包括所述电子设备根据未删除的运镜模式生成的视频片段以及所述预设音频。比如,删除了第一运镜模式标识即删除了第一运镜模式,那么电子设备开始录制,仅根据第二运镜模式生成第二视频片段即可,无需与其他视频片段合成视频。
可选的,用户还可以添加运镜模式标识。比如,电子设备显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;响应于用户第三操作,在所述录像界面中添加第三运镜模式标识,所述第三运镜模式标识用于指示第三运镜模式;响应于用户第 四操作,保持所述电子设备的位置不动,开始录制;自动生成合成视频,所述合成视频中包括所述第一视频片段、所述第二视频片段、第三视频片段以及所述预设音频,所述第三视频片段为所述电子设备根据所述第三运镜模式生成的视频片段。
可选的,用户还可以调整运镜模式标识的顺序。比如,电子设备显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;响应于用户第三操作,调整所述第一运镜模式标识和第二运镜模式标识的显示顺序为第一顺序;响应于用户第四操作,保持所述电子设备的位置不动,开始录制;自动生成合成视频,所述合成的视频中所述第一视频片段和所述第二视频片段的播放顺序为所述第一顺序。
可选的,所述录像界面中显示所述第一示例样片和/或所述第二示例样片。比如,录制界面可以是图8D中的(b)所述的界面,可以在取景界面中以画中画的方式显示示例片段,具体参见前文关于8D中的(b)的描述。
可选的,电子设备还可以响应于所述第四操作,删除所述第一视频片段或所述第二视频片段;或者,在所述合成视频中添加本地的第三视频片段;或者;调整所述合成视频中所述第一视频片段或所述第二视频片段的播放顺序。比如,参见前文图9C的描述。
可选的,所述第一录像模板是默认模板或用户自定义模板。比如,参见前文图11A。
可选的,电子设备还可以自动存储所述第一视频片段和所述第二视频片段,以及所述合成视频,比如,参见前文关于图10的描述。
可选的,电子设备还可以响应于特定操作,更换所述合成视频中的音频,或者,在所述合成视频中添加文字和/或图片。具体参见前文图9E的介绍。
实施例二
在上面的实施例一中,介绍了使用微电影模式实现多种运镜模式的组合使用。区别于实施例一,本实施例二中提供另一种视频拍摄方式,即手机单独使用某种运镜模式进行拍摄。
图13中的(a)示出了手机的一种图形用户界面(graphical user interface,GUI),该GUI为手机的桌面1301。当手机检测到用户点击桌面1301上的相机应用的图标1302的操作后,可以启动相机应用,启动普通广角摄像头(例如后置摄像头),显示如图13中的(b)所示的另一GUI,该GUI可以称为取景界面1303。该取景界面1303是录像模式(普通录像模式)下的取景界面。应理解,若手机检测到用户点击图标1302的操作后,默认显示拍照模式的取景界面,用户可以通过输入操作例如在图13中的(b)中区域1304(虚线框中的区域)中的滑动操作,选择录像模式,然后手机显示录像模式的取景界面。
示例性的,参见图13中的(b),手机进入录像模式之后,取景界面1303内包括预览图像。取景界面1303中还可以包括用于指示美肤模式的控件1305、用于指示美肤等级的控件1306以及录像控件1307。在录像模式下,当手机检测到用户点击该录像控件1307的操作后,手机开始录制视频。
可以理解的是,对于“摇”拍摄方式和“移”拍摄方式,手机对视频流的处理流程不同。因此,本申请实施例提供多种录制模式,例如普通录像模式以及两种运镜模式(例如包括移模式和摇模式),用户可以指示手机使用某种运镜模式,不同运镜模式下手机的处理过程不同。例如,用户期望使用“摇”拍摄方式时,可以输入一个指示,指示手机进入摇模式。若用户期望使用“移”拍摄方式,可以输入另一指示,指示手机进入移模式。其中,手机进入摇模式可以理解为手机基于摇模式对应的处理流程进行处理。手机进入移模式可以理解为手机基于移模式对应的处理流程进行处理。
在一些实施例中,手机启动相机应用,默认进入普通录像模式,在用户指示某种运镜模式之后,进入对应的运镜模式。或者,手机启动相机应用之后,默认进入某种运镜模式,例如,上次使用相机应用时使用的运镜模式。假设手机启动相机应用之后,默认进入移模式,可以启动超广角摄像头。取景界面内显示超广角摄像头采集的图像上的一个图像块,例如,可以是中心位置处的图像块。
其中,用户指示摇模式和移模式的方式可以有多种,包括但不限定于下述方式1和方式2。
方式1,参考图14中的(a)所示,手机当前处于普通录像模式。取景界面1303中显示用于指示运镜模式的控件1308。当手机检测到针对运镜模式的控件1308的操作时,显示如图14中的(b)所示的GUI,该GUI中显示选择框1309。选择框1309中包括“摇模式”和“移模式”的选项。当手机检测到针对选择框1309中的“移模式”的操作时,进入“移模式”;当手机检测到针对“摇模式”的操作时,进入“摇模式”。
示例性的,手机进入普通录像模式后,取景界面内默认显示用于指示运镜模式的控件1308,或者,在用户设置运镜模式的快捷方式后,取景界面中显示指示运镜模式的控件1308。其中,用户可以通过相机应用内的设置菜单等方式设置运镜模式的快捷方式。
需要说明的是,用于指示运镜模式的控件1308在取景界面1303中的显示位置,本申请实施例不作限定,或者,用户也可以自定义控件1308的显示位置,或者,控件1308的显示位置可以根据手机横屏或竖屏进行适应性调整。此外,用于指示运镜模式的控件1308的形态可以采用尽可能不遮挡预览图像的形态,例如透明或半透明的形式。
应理解,方式1中,运镜模式的控件1308在取景界面中直观呈现,方便用户操作,用户体验较高。
方式2,参考图15中的(a)所示,手机当前处于普通录像模式。取景界面1303中还包括“更多”控件1310。当手机检测到用于选择“更多”控件1310的操作时,显示如图15中的(b)所示的另一GUI,该GUI中显示多种拍摄模式对应的图标,其中包括“摇模式”的图标和“移模式”的图标。当手机检测到针对“移模式”的图标的操作时,进入“移模式”;当手机检测到针对“摇模式”的图标的操作时,进入“摇模式”。应理解,在方式2中,取景界面可以不显示用于指示运镜模式的控件1308,可以避免遮挡取景界面内的预览图像。
可以理解的是,上述方式1和方式2仅是举例,其它的能够指示手机进入运镜模式(摇模式或移模式)的方式也是可行的,例如,通过语音指令指示手机进入摇模式或移模式等等,本申请实施例不作限定。下文中以上述方式1为例进行介绍。
实施例1
示例性的,图16示出了手机进入移模式时的GUI的示意图。为了方便用户区别当前处于哪种模式,手机进入移模式时,取景界面中可以显示提示信息1312,该提示信息1312用于指示当前处于移模式。当然,手机从普通录像模式切换到移模式时,也可以输出其它提醒,例如振动反馈。其中,提示信息1312可以以半透明或透明等尽可能不遮挡预览图像的方式显示。取景界面内还显示方向控件1311。用户可以通过该方向控件1311输入用于指示图像移动方向的信息。应理解,方向控件1311在取景界面中的显示位置,本申请实施例不作限定,例如,默认显示在如图16所示的位置,或者用户可以调整其显示位置。
在一些实施例中,手机在普通录像模式下使用第一广角摄像头(例如普通广角摄像头)。当手机从普通录像模式切换到移模式时,启动第二广角摄像头(例如超广角摄像头)。第一广角摄像头的视角小于第二广角摄像头的视角。移模式下取景界面显示第一预览图像,该第一预览图像为超广角摄像头采集的图像上第一区域内的第一图像块。可以理解的是,手机进入移模式之后,启动超广角摄像头,所述第一预览图像可以是超广角摄像头采集的第一帧图像上处于第一区域的第一图像块。
其中,第一图像块可以是超广角摄像头采集的图像上与第一预览图像对应的图像块,例如,第一图像块是超广角摄像头采集的图像和普通广角摄像头采集的图像上视角范围重叠的全部或部分图像。图17示出了超广角摄像头采集的图像上第一区域的示意图。第一区域可以是超广角摄像头和普通广角摄像头视角范围重叠的全部或部分区域。对比图16和图17可知,图16中手机进入移模式后,预览图像为图17中第一区域内的第一图像块。手机从普通录像模式切换到移模式后,可以关闭或不关闭普通广角摄像头。
在一些实施例中,手机从普通录像模式切换到移模式,预览图像不变。所述预览图像不变可以理解为切到移模式后预览图像未被缩小或放大。例如,移模式下预览图像的放大倍率与普通录像模式下预览图像的放大倍率一致,例如均为1倍。因此,手机从普通录像模式切换到移模式后,用户不会感知到预览图像突兀的被放大或缩小。
在另一些实施例中,普通录像模式和移模式下,预览图像可以变化。所述预览图像变化可以理解为切到移模式后预览图像被缩小或放大。例如,普通录像模式下预览图像的放大倍率为1倍,移模式下预览图像的放大倍率为5倍,即从普通录像模式切换到移模式,预览图像被放大。可以理解的是,切到移模式后图像放大倍率增大的情况下,超广角摄像头采集的图像上第一区域的位置移动范围增大,能够实现在较广的范围内移镜头的拍摄效果。
以下实施例介绍移模式下手机实现图像平移的过程。
参见图18所示,超广角摄像头采集到N帧图像。例如,第一帧、第二帧、第m-1帧等等。可以理解的是,手机进入移模式之后,启动超广角摄像头,所以进入移模式后第一预览图像可以是超广角摄像头采集的第一帧图像上处于第一区域的第一图像块。假设手机还未检测到图像移动指令,手机确定第二帧图像上第二区域,第二区域相对于第一区域的位置不动,预览图像由第一区域内的图像块刷新为第二区域内的图像块。以此类推,第m-1帧图像上的第m-1区域相对于第一区域的位置不动,其中,m可以是大于等于3的整数。预览图像刷新为第m-1区域内的图像块。也就是说,从第一帧图像到第m-1帧图像期间,手机未检测到图像移动指令,所以预览图像在图像上的位置不变。
假设在预览图像刷新为第m帧图像内第m区域内的图像块之前(例如,预览图像显示为第m-1区域内图像块的过程中),手机检测到图像右移指令。手机确定第m帧图像上的第m区域,第m区域的位置相对于第m-1区域的位置右移距离A。如图18所示,第m-1区域距离图像左边缘为H,第m区域距离图像左边缘为H+A。因此,手机检测到图像右移指令后,取景界面内预览图像由第m-1区域内的图像块刷新为第m区域内的图像块,即预览图像在图像上的位置右移距离A。
然后,手机确定第m+1帧图像上的第m+1区域,第m+1区域的位置相对于第m区域的位置右移距离B。如图18所示,第m区域距离图像左边缘为H+A,第m+1区域距离图像左边缘为H+A+B。因此,取景界面内预览图像由第m区域内的图像块刷新为第m+1区域内的图像块,即预览图像在图像上的位置右移距离B。
手机确定第m+2帧图像上的第m+2区域,第m+2区域的位置相对于第m+1区域的位置右移距离C。如图18所示,第m+1区域距离图像左边缘为H+A+B,第m+2区域距离图像左边缘为H+A+B+C。因此,取景界面内预览图像由第m+1区域内的图像块刷新为第m+2区域内的图像块,即预览图像在图像上的位置右移距离C。因此,预览图像在图像上的位置逐渐右移。
假设手机检测到停止移动指令,确定第m+3帧图像上的第m+3区域,第m+3区域的位置相对于第m+2区域的位置未发生变化。预览图像由第m+2区域内的图像块刷新为第m+3区域内的图像块,即预览图像在图像上的位置不变,停止右移。之后,预览图像由第m+3区域内的图像块刷新为第m+4区域内的图像块,预览图像在图像上的位置保持不变,直到再次检测到图像移动指令,再次移动。
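上述逐帧确定目标区域位置的过程,可以用如下示意性的Python代码概括(其中帧数、初始偏移、步长、图像宽度等参数均为本示例自行假设,并非文中的实际实现):

```python
def pan_offsets(num_frames, start_offset, move_at, step, frame_width, region_width):
    """返回每帧目标区域距图像左边缘的距离列表。
    move_at:检测到右移指令的帧序号(从0计);step:每帧右移的距离L。"""
    offsets = []
    offset = start_offset
    for i in range(num_frames):
        if i >= move_at:
            # 检测到右移指令后逐帧右移,但不越过图像右边缘
            offset = min(offset + step, frame_width - region_width)
        offsets.append(offset)
    return offsets

# 例:前3帧未检测到移动指令,之后每帧右移L=10
print(pan_offsets(6, 100, 3, 10, 1920, 640))  # [100, 100, 100, 110, 120, 130]
```

检测到停止移动指令后,只需不再增加偏移量,即对应上文中“预览图像在图像上的位置不变,停止右移”的情况。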
其中,A、B、C之间的取值关系有多种情况,下文给出几种示例。
示例1:A=B=C。假设A=B=C=L,即手机检测到图像右移指令后,每帧图像上的目标区域(例如图18中的第m区域、第m+1区域、第m+2区域等)相对于上一帧图像上的目标区域的位置右移相同的距离L,即匀速右移。也就是说,手机检测到图像右移指令之后,每次刷新的预览图像相对于上一帧预览图像在图像上的位置向右平移相同距离L,直到检测到停止移动指令,实现匀速右移的拍摄效果。
示例2:A<B<C。假设A=L,B=2L,C=3L,即手机检测到图像右移指令后,后一帧图像上的目标区域相对于前一帧图像上的目标区域加速右移。例如,第m区域相对于第m-1区域右移距离L,第m+1区域相对于第m区域右移距离2L,即第m+1区域加速右移;第m+2区域相对于第m+1区域右移距离3L,即第m+2区域相对于第m+1区域加速右移。因此,手机检测到图像右移指令后,每次刷新的预览图像相对于上一帧预览图像加速右移,实现图像加速右移的拍摄效果。
示例3:A>B>C。假设A=2L,B=L,C=0,即手机检测到图像右移指令后,后一帧图像上的目标区域相对于前一帧图像上的目标区域减速右移。例如,第m区域相对于第m-1区域右移距离2L,第m+1区域相对于第m区域右移距离L,即第m+1区域相对于第m区域减速右移;第m+2区域相对于第m+1区域右移距离为0,即第m+2区域相对于第m+1区域停止移动。因此,手机检测到图像右移指令后,每次刷新的预览图像相对于上一帧预览图像减速右移甚至速度降为0,实现图像减速右移的拍摄效果。
以上给出了A、B、C之间的取值关系的三种示例,本申请实施例不限定A、B、C之间的取值大小,本领域技术人员可以灵活设置,以实现不同的技术效果。
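上述A、B、C的三种取值关系,可以用如下示意代码生成相邻帧之间的右移距离序列(函数名与具体取值方式均为本示例的假设,文中并未限定):

```python
def step_sequence(mode, L, n):
    """生成n个相邻帧之间的右移距离,对应上文三种示例。"""
    if mode == "uniform":        # 示例1:A=B=C=L,匀速
        return [L] * n
    if mode == "accelerate":     # 示例2:A<B<C,如 L, 2L, 3L...
        return [L * (k + 1) for k in range(n)]
    if mode == "decelerate":     # 示例3:A>B>C,如 2L, L, 0...
        return [max(2 * L - L * k, 0) for k in range(n)]

print(step_sequence("uniform", 5, 3))     # [5, 5, 5]
print(step_sequence("accelerate", 5, 3))  # [5, 10, 15]
print(step_sequence("decelerate", 5, 3))  # [10, 5, 0]
```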
在一些实施例中,手机检测到用于指示图像移动方向的指令后,默认使用上述示例1、示例2或示例3中的某种方式;或者,手机检测到用于指示图像移动方向的指令后,默认使用示例1的方式,当检测到加速移动的指令时,使用上述示例2的方式;手机检测到减速移动的指令时,使用上述示例3的方式。
第一种方式
超广角摄像头一帧一帧地采集图像,假设采集N帧,以所述N帧图像中每帧图像上的目标区域内的图像块依次刷新预览图像。也可以理解为,手机不对超广角摄像头采集的N帧图像进行抽帧或插帧处理,而是将预览图像依次刷新为N帧图像中每帧图像上目标区域内的图像块,有助于提升预览图像的连贯性、流畅性。假设手机检测到用于指示图像移动方向的指令后,以图18中A=B=C=L的方式确定目标区域,即预览图像在图像上的位置匀速右移。假设手机以图18中A<B<C的方式确定目标区域,预览图像在图像上的位置加速右移。假设手机以A>B>C的方式确定目标区域,预览图像在图像上的位置减速右移。
第二种方式
超广角摄像头采集N帧图像,手机从超广角摄像头采集的N帧图像中抽出M帧图像,M为小于N的整数,通过所述M帧图像中每帧图像上的目标区域内的图像块刷新预览图像,可以实现快速刷新(或称为播放)的效果。例如,假设超广角摄像头的图像采集帧率为240fps,即每秒采集240帧图像。再假设手机的图像播放(或称刷新)帧率为30fps,即每秒刷新30帧,那么所述240帧图像需要8秒刷新完毕。假设手机从240帧图像中抽出120帧图像,抽出的120帧只需4秒刷新完毕,即实现快速刷新的效果。
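上述抽帧与刷新帧率之间的时长关系,可以用如下示意代码验证(240fps采集、30fps刷新等参数取自文中示例,函数本身为本示例的假设):

```python
def playback_seconds(captured_frames, keep_every, refresh_fps=30):
    """抽帧后刷新完毕所需时长(秒)。keep_every=2 表示每2帧抽1帧。"""
    kept = captured_frames // keep_every
    return kept / refresh_fps

print(playback_seconds(240, 1))  # 不抽帧:8.0 秒
print(playback_seconds(240, 2))  # 抽出120帧:4.0 秒,实现快速刷新
```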
示例1,参见图19A所示,超广角摄像头采集N帧图像,例如依次采集第一帧、第二帧、第m-1帧图像等等。假设在预览图像刷新为第m帧图像上的第m区域之前,手机检测到图像右移指令,手机确定N帧图像中第m帧图像以及之后每帧图像上的目标区域,假设后一帧图像上的目标区域的位置相对于前一帧图像的目标区域的位置右移相同距离L(即上述A=B=C=L的方式)。
然后,手机开始抽帧。假设从第m帧开始抽帧,抽出第m帧、第m+i帧、第m+i+j帧。其中,第m帧上的第m区域相对于第m-1区域右移距离L,第m+i帧图像上的第m+i区域相对于第m区域右移距离i·L,第m+i+j帧图像上的第m+i+j区域相对于第m+i区域右移距离j·L。
继续参见图19A所示,假设在预览图像显示为第m+i+j区域内的图像块的过程中,手机检测到图像停止移动的指令。手机继续以第m+i+j+1帧图像、第m+i+j+2帧图像等图像上的目标区域内的图像块刷新预览图像。即手机检测到图像停止移动指令后,不再使用抽帧刷新的方式,而且第m+i+j+1帧图像、第m+i+j+2帧图像等图像上的目标区域在图像上的位置与第m+i+j区域在图像上的位置相同,即停止移动。也就是说,手机检测到用于指示图像移动方向的指令后,使用抽帧方式刷新预览图像且预览图像在图像上的位置按照所述图像移动方向逐渐移动,当检测到停止图像移动的指令后,不再使用抽帧方式处理,预览图像在图像上的位置不再移动。
继续以图19A为例,i与j的取值可以理解为抽帧间隔,i与j的取值不同实现的拍摄效果不同。
假设i=j,可以理解为抽帧间隔相同,例如i=j=2,即隔一帧抽一帧,或者说每两帧抽一帧。也就是说,手机检测到图像右移指令后,预览图像依次刷新为第m区域、第m+2区域、第m+4区域等等内的图像块,第m+2区域相对于第m区域右移2L,第m+4区域相对于第m+2区域右移2L,即每次刷新的预览图像相对于上一帧预览图像在图像上的位置右移2L。对比图18中A=B=C=L的情况可知,图19A所示的实施例通过抽帧的方式,预览图像在图像上的位置可以以较快的速度(每次右移2L)匀速地右移。而且,抽帧刷新可以实现快速刷新的效果。也就是说,预览图像以较快的速度刷新的同时预览图像在图像上的位置以较快的速度匀速右移。
当然,抽帧间隔也可以不同,即i不等于j。假设i<j,例如,i=2,j=3。即手机检测到图像右移指令后,预览图像依次刷新为第m区域、第m+2区域、第m+5区域等区域内的图像块。其中,第m+2区域相对于第m区域右移2L,第m+5区域相对于第m+2区域右移3L。因此,手机检测到图像右移指令后,每次刷新的预览图像相对于上一帧预览图像在图像上的位置加速右移。而且,抽帧刷新可以实现快速刷新的效果。也就是说,预览图像以较快的速度刷新的同时预览图像在图像上的位置加速右移。
假设i>j,例如,i=3,j=2。即手机检测到图像右移指令后,预览图像依次刷新为第m区域、第m+3区域、第m+5区域等内的图像块,其中,第m+3区域相对于第m区域右移3L,第m+5区域相对于第m+3区域右移2L。因此,手机检测到图像右移指令后,每次刷新的预览图像相对于上一帧预览图像在图像上的位置减速右移。而且,抽帧刷新可以实现快速刷新的效果。也就是说,预览图像以较快的速度刷新的同时预览图像在图像上的位置减速右移。
以上给出了i和j之间的取值关系的三种示例,本申请实施例不限定i和j之间的取值大小,本领域技术人员可以灵活设置,以实现不同的技术效果。
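抽帧间隔i、j与每次刷新位移之间的对应关系,可以用如下示意代码表示(假设每帧目标区域右移距离为L,这一映射方式为本示例的假设):

```python
def per_refresh_shift(intervals, L):
    """抽帧间隔序列(如 [2, 2] 表示隔一帧抽一帧)对应的每次刷新位移。"""
    return [k * L for k in intervals]

print(per_refresh_shift([2, 2], 5))  # i=j=2,匀速:[10, 10]
print(per_refresh_shift([2, 3], 5))  # i<j,加速:[10, 15]
print(per_refresh_shift([3, 2], 5))  # i>j,减速:[15, 10]
```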
示例2,在上述示例1中,手机先确定每帧图像上的目标区域,然后抽帧。该示例2中,手机可以先抽帧,然后在抽帧出的图像上确定目标区域。参见图19B,在预览图像刷新到第m帧图像之前,手机检测到图像右移指令,手机从第m帧图像开始抽帧,假设抽出M帧图像。这里不限定图像抽帧间隔。手机分别在M帧图像上确定目标区域。在M帧图像上确定目标区域的方式可以参见图18中的描述,在此不重复赘述。假设手机以A=B=C=L的方式确定M帧图像上的目标区域,由于抽帧刷新可实现快速刷新的效果,即预览图像以较快的速度刷新的同时,预览图像在图像上的位置匀速右移。假设手机以图18中A<B<C的方式确定目标区域,即预览图像以较快的速度刷新的同时,预览图像在图像上的位置加速右移。假设手机以A>B>C的方式确定目标区域,即预览图像以较快的速度刷新的同时,预览图像在图像上的位置减速右移。
需要说明的是,以图19A为例,假设预览图像刷新为第m帧图像之前,手机检测到图像右移,且加速右移的指令,预览图像以上述第一种方式刷新,即不采用抽帧方式刷新,依次刷新为第m区域、第m+1区域、第m+2区域内的图像块等等。假设预览图像刷新到第m+i+j+1帧图像之前,手机检测到停止移动指令,则生成视频,该视频由从第m帧到第m+i+j+1帧之间抽帧出的图像上位于目标区域内的图像块合成。例如,抽出第m帧、第m+i帧、第m+i+j帧,所以所述视频可以是第m区域内的图像块、第m+i区域内的图像块、第m+i+j区域内的图像块等合成的视频。也就是说,手机检测到图像右移,且加速右移的指令后,预览图像不使用抽帧的方式刷新,当检测到停止右移的指令后,生成的视频可以是通过抽出的图像上目标区域内的图像块合成的视频。预览图像刷新的过程中不使用抽帧的方式可以节省计算量、提升效率。
第三种方式
超广角摄像头采集N帧图像。手机在N帧图像中插入多帧图像,得到M帧图像,M为大于N的整数,通过所述M帧图像依次刷新预览图像,由于图像数量增多,可以实现慢速刷新(或播放)的效果。例如,假设超广角摄像头的图像采集帧率为240fps,即每秒采集240帧图像。再假设手机的图像刷新(或称播放)帧率为30fps,即每秒刷新30帧,那么所述240帧图像需要8秒刷新完毕。假设手机在所述240帧图像中插入120帧图像,得到360帧图像,则只需12秒刷新完毕,即实现慢速刷新的效果。
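与上文抽帧的情况对应,插帧后刷新完毕所需时长也可以用如下示意代码验证(240帧插入120帧、30fps刷新等参数取自文中示例):

```python
def playback_seconds_with_insertion(captured, inserted, refresh_fps=30):
    """插帧后刷新完毕所需时长(秒):总帧数 = 采集帧数 + 插入帧数。"""
    return (captured + inserted) / refresh_fps

print(playback_seconds_with_insertion(240, 0))    # 不插帧:8.0 秒
print(playback_seconds_with_insertion(240, 120))  # 插入120帧:12.0 秒,实现慢速刷新
```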
示例1,参见图19C所示,超广角摄像头采集N帧图像,例如依次采集第一帧、第二帧、第m-1帧图像等等。假设预览图像刷新到第m帧图像之前,手机检测到图像右移指令,手机确定第m帧图像以及之后每帧图像上的目标区域,下一帧图像上的目标区域的位置相对于上一帧图像的目标区域的位置右移相同的距离L。
然后,手机开始插帧处理,假设在第m帧和第m+1帧之间插入P帧图像(插入的图像以虚线表示),在第m+1帧和第m+2帧之间插入Q帧图像。其中,P与Q可以相同或不同。
假设P=Q=1,即隔一帧插入一帧。手机可以确定第P帧图像(即第m帧和第m+1帧之间插入的一帧图像)上的第P区域,该第P区域相对于第m区域向右移动距离X,该X的取值本申请实施例不作限定,例如X可以是处于0到L范围之间内的取值,如0.5L。手机确定第Q帧图像(即第m+1帧和第m+2帧之间插入的一帧图像)上的第Q区域,该第Q区域相对于第m+1区域向右移动距离Y,该Y的取值本申请实施例不作限定,例如Y可以是处于0到L范围之间内的取值,如0.5L。
以X=0.5L、Y=0.5L为例,手机检测到图像右移指令之后,预览图像依次刷新为第m区域、第P区域、第m+1区域、第Q区域,等等。因此,每次刷新的预览图像右移0.5L,即预览图像在图像上的位置以较慢的速度匀速右移。而且,插帧刷新可以实现预览图像慢速刷新的效果,也就是说,预览图像以较慢的速度刷新的同时,预览图像在图像上的位置可以以较低的速度匀速右移。
可以理解的是,上述X与Y的取值与图像右移速度相关,本申请实施例不限定。P和Q的取值关系,本领域技术人员可以灵活设置,以实现不同的效果。
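插入帧上目标区域位置的一种确定方式,是对相邻两帧的目标区域位置作线性插值,如下示意代码所示(线性插值只是本示例假设的一种实现,文中并未限定X、Y的具体取法):

```python
def interpolate_offset(prev_offset, next_offset, k, total):
    """在相邻两帧之间插入 total 帧时,第 k 帧插入图像的目标区域偏移(线性插值)。"""
    return prev_offset + (next_offset - prev_offset) * k / (total + 1)

# 第m区域偏移100、第m+1区域偏移110(即L=10),隔一帧插入一帧:
print(interpolate_offset(100, 110, 1, 1))  # 105.0,即每次刷新右移0.5L
```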
可以理解的是,假设预览图像显示为第m+2区域内的图像块的过程中,手机检测到停止图像移动的指令,则确定第m+3帧图像上第m+3区域内的图像块、第m+4帧图像上第m+4区域内的图像块,等等。也就是说,手机检测到图像停止移动指令后,不再使用插帧方式处理,且预览图像在图像上的位置不再移动。
示例2,在上述示例1中,手机先确定每帧图像上的目标区域,然后执行插帧处理。该示例2中,手机可以先插帧,然后确定目标区域。参见图19D所示,假设在预览图像刷新到第m帧图像之前,手机检测到图像右移指令,手机从第m帧图像开始插帧处理,得到M帧图像。这里不限定相邻两帧图像之间插入的图像帧数。以相邻两帧之间插入1帧为例,如图19D所示,虚线图像即插入的图像。手机确定M帧图像上目标区域的方式可以参见图18的描述,在此不重复赘述。假设手机以A=B=C=L的方式确定M帧图像上的目标区域,由于插帧刷新可实现慢速刷新的效果,即预览图像以较慢的速度刷新的同时,预览图像在图像上的位置匀速右移。假设手机以A<B<C的方式确定目标区域,即预览图像以较慢的速度刷新的同时,预览图像在图像上的位置加速右移。假设手机以A>B>C的方式确定目标区域,即预览图像以较慢的速度刷新的同时,预览图像在图像上的位置减速右移。
在一些实施例中,手机检测到用于指示图像移动方向的指令后,默认使用上述第一种方式处理,且默认使用第一种方式中A=B=C=L的方式确定目标区域,即预览图像在图像上的位置匀速右移。手机检测到加速移动的指令,可以使用上述第一种方式中A<B<C的方式确定目标区域,或者使用上述第二种方式中示例1中抽帧的方式处理实现加速移动。手机检测到减速移动的指令,可以使用上述第一种方式中的A>B>C的方式确定目标区域,或者使用上述第三种方式中示例1中插帧的方式处理实现减速移动。
需要说明的是,不同帧图像上目标区域的位置移动的过程中,侧边可以对齐。继续以图18为例,第一区域和第二区域的侧边对齐,例如第一区域的底边到第一帧图像的底边之间的距离,和第二区域的底边到第二帧图像的底边之间的距离相同,以尽可能地保证预览图像稳定显示。在一些实施例中,用户握持手机的过程中会发生抖动。为了缓解用户抖动而导致预览图像不稳定的情况,一种可实现的方式为,手机将超广角摄像头采集的图像作防抖处理,在经过防抖处理的图像上确定第一区域、第二区域、第三区域等目标区域。其中,防抖处理可以是防抖裁剪。通常,用户抖动的过程中,预览图像的边缘区域的图像变化较快、不稳定;预览图像居中区域内的图像变化较小、相对稳定。因此,手机可以将超广角摄像头采集的每帧图像的边缘裁剪掉,具体裁剪面积本申请实施例不作限定。例如,手机在对第一帧图像裁剪后剩余的图像上确定第一区域,在对第二帧图像裁剪后剩余的图像上确定第二区域,在对第三帧图像裁剪后剩余的图像上确定第三区域,依次类推。因此,预览图像视觉上较为稳定地显示。
另一种可能的实现方式为,手机可以在第一帧图像上确定第一区域,对第一区域内的第一图像块进行防抖裁剪,例如将第一图像块的边缘裁剪掉,预览图像显示第一图像块经过边缘裁剪后剩余的图像。同样的,手机可以在第二帧图像上确定第二区域,对第二区域内的第二图像块进行防抖裁剪,依次类推。也就是说,手机先确定每帧图像上的目标区域内的图像块,再进行防抖裁剪处理。
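上述防抖裁剪的一种最简化示意如下(裁剪比例margin_ratio为本示例假设的参数,文中并未限定具体裁剪面积):

```python
def stabilize_crop(width, height, margin_ratio=0.1):
    """防抖裁剪:裁掉图像四周边缘,返回剩余区域 (left, top, right, bottom)。
    之后再在该剩余区域内确定第一区域、第二区域等目标区域。"""
    mx, my = int(width * margin_ratio), int(height * margin_ratio)
    return (mx, my, width - mx, height - my)

print(stabilize_crop(1920, 1080))  # (192, 108, 1728, 972)
```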
上文中提到用户输入用于指示图像移动方向的方式有多种,包括但不限定于如下几种示例。
示例1,参见图16,取景界面内还包括方向控件1311。方向控件1311可以包括分布于录像控件1307四周的四个箭头。当手机检测到用户点击(如单击)某个箭头的操作,手机开始基于所述箭头所指示的方向移动第一区域。例如,用户点击向右的箭头,预览图像在图像上的位置开始向右移动。当手机检测到用户在取景界面内任意位置的点击操作时停止移动;或者,再次检测到针对向右的箭头的操作时,停止移动;或者,一定时长之后,自动停止移动。
示例2,参见图16,当手机检测到用户按压某个箭头的时长达到预设时长时,开始基于所述箭头所指示的方向移动第一区域。当用户检测到所述长按操作弹起时,停止移动。
示例3,参见图16,当手机检测到用户按下录像控件1307并向某个方向拖动(或者也可以称为滑动)的操作,手机开始根据拖动操作指示的拖动方向移动第一区域。例如,用户按下录像控件1307并向右拖动,手机向右移动第一区域。当手机检测到所述拖动操作弹起时,停止移动。其中,手机检测到手指按下录像控件1307并在按下的位置处弹起时,确定开始录制。当手机检测到手指按下录像控件1307并拖动,拖动到非按下的位置处时,确定拖动操作的方向即图像移动方向,基于该方向移动第一区域。
示例4,参见图16,当手机检测到用户在屏幕上(例如预览图像上)的滑动操作时,开始根据滑动操作的滑动方向移动第一区域。当手机检测到滑动操作停止时,停止移动第一区域。其中,滑动操作停止可以理解为用户手指从A点滑动到B点并停留在B点,或者,是用户手指从A点滑动到B点后弹起。应理解,这种情况下,取景界面内也可以不显示方向控件1311。
示例5,用户通过语音指令输入图像移动方向。用户可以点击取景界面任意位置,或者通过语音指令指示移动结束。应理解,这种情况下,取景界面内也可以不显示方向控件1311。
当本申请实施例提供的方法适用于笔记本电脑等设备时,还可以通过键盘、触摸板等输入图像移动方向。
上文中提到的加速移动指令或减速移动指令可以通过如下方式获取。
示例性的,参见图16所示,取景界面中显示用于指示移的速度的标识1320。这里的移动速度可以理解为预览图像在图像上的位置改变量。该标识1320默认取值为1X。这里的1X可以理解为上述L的1倍。也就是说,当标识1320为1X时,手机可以使用上述A=B=C=L的方式处理。当用户点击该标识1320时,显示速度调整条1321。示例性的,图16中速度调整条1321提供的速度最大为3X,速度最小为0.5X,本申请实施例对此不作限定。可以理解的是,1X、2X、3X等可以理解为上述L的倍数,3X即L的3倍,2X即L的2倍。用户通过在速度调整条1321上的滑动操作选择速度。假设用户选择速度2X,则标识1320显示取值2X。手机可以使用图19A所示的方式(抽帧的方式),取i=j=2的方式处理,实现预览图像在图像上的位置每次移动2L的效果。假设用户选择速度0.5X,手机可以使用图19C所示的方式(插帧的方式),取P=Q=1的方式处理,实现预览图像在图像上的位置每次移动0.5L的效果。
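标识1320的倍速取值与抽帧/插帧处理方式之间的对应关系,可以用如下示意代码表示(该映射规则为本示例的假设,仅覆盖文中提到的2X抽帧、0.5X插帧等情况):

```python
def pan_strategy(speed):
    """按标识1320的倍速选择处理方式,返回 (方式, 参数)。"""
    if speed > 1:          # 如2X、3X:抽帧,每 speed 帧抽1帧
        return ("extract", int(speed))
    if speed < 1:          # 如0.5X:插帧,相邻两帧之间插入 round(1/speed)-1 帧
        return ("insert", round(1 / speed) - 1)
    return ("normal", 0)   # 1X:逐帧刷新,即 A=B=C=L 的方式

print(pan_strategy(2))    # ('extract', 2)
print(pan_strategy(0.5))  # ('insert', 1)
```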
可以理解的是,手机还可以通过其它方式设置速度,例如通过音量按键设置速度。例如,手机检测到音量增加按键被触发,则增加速度,若检测到音量降低按键被触发,则降低速度。需要说明的是,本申请实施例不限定手机设置移的速度的形式,例如,取景界面上提供低速、中速、高速三个速度等级选项以供用户选择也是可行的。
示例性的,参见图20所示,为本申请实施例提供的手机处于移模式时的取景界面的效果示意图。如图20中的(a)所示,取景界面内显示预览图像1。手机保持不动,当手机检测到针对向下的箭头的点击操作时,预览图像1刷新为预览图像2,参见图20中的(b)所示,预览图像2中包括的景物位于预览图像1中的景物下方,相当于手机下移拍摄。手机继续将预览图像2刷新为预览图像3,参见图20中的(c)所示,预览图像3中的景物位于预览图像2中的景物的下方,相当于手机继续下移拍摄。假设手机检测到用户点击屏幕上的任意位置,移动结束。通过图20可知,在手机保持不动的过程中,预览图像内的景物逐渐下移,实现镜头下移的拍摄效果。
实施例2
示例性的,图21示出了手机进入摇模式时的GUI的示意图。为了方便用户区别当前处于哪种模式,手机进入摇模式时,取景界面中可以显示提示信息1313,该提示信息1313用于指示当前处于摇模式。当然,手机从普通录像模式切换到摇模式时,也可以输出其它提醒,例如振动反馈。其中,提示信息1313可以以半透明或透明等尽可能不遮挡预览图像的方式显示。该GUI中还可以包括方向控件1311。用户可以通过该方向控件1311输入图像移动方向。
以图21为例,介绍手机使用摇模式录制视频的过程。
手机从普通录像模式切换到摇模式的过程可以参见普通录像模式与移模式的切换,在此不重复赘述。在摇模式下,手机可以通过用户输入的图像移动方向实现“摇镜头”的拍摄效果。其中,用户输入图像移动方向的方式,以及输入停止移动指令的方式,参考前文,不再赘述。
与移模式不同的是,摇模式下,手机确定图像上的目标区域之后,对目标区域内的图像块进行视角转换处理,然后以经过视角转换的图像块刷新预览图像。以图18为例,手机检测到图像右摇指令,确定第m帧图像上第m区域,并对第m区域内的图像块进行视角转换处理。手机确定第m+1帧图像上第m+1区域,并对第m+1区域内的图像块进行视角转换处理,等等。因此,手机检测到图像右摇指令后,预览图像依次刷新为第m区域内的经过视角转换处理后的图像块、第m+1区域内的经过视角转换后的图像块,等等。
需要说明的是,上述移模式下的多种实现方式例如抽帧、插帧等方式在摇模式下同样适用,在此不重复赘述。
其中,对图像块的视角变换的过程请参见前文描述。如果使用前文的视角变换过程,旋转角度θ可以有多种方式确定。例如,旋转角度θ是预设的,例如是预设的固定值。或者,若用户通过在屏幕上的滑动操作输入图像移动方向,旋转角度与滑动操作相关。例如,手机中存储滑动操作的滑动距离W与旋转角度θ之间的对应关系,手机检测到用户在屏幕上的滑动操作,确定该滑动操作的滑动距离W,基于该距离W和所述对应关系确定对应的旋转角度θ。示例性的,滑动操作滑动距离W越远,旋转角度越大。或者,旋转角度θ还可以与标识1320的显示值相关,假设标识1320显示2X,则旋转角度θ为2倍的预设角度,所述预设角度的取值本申请不作限定。
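若采用权利要求34中给出的视角变换公式 x'=x·cosθ−y·sinθ、y'=x·sinθ+y·cosθ,对图像块像素坐标的旋转可以用如下示意代码演算(绕原点旋转,仅为公式的直接示例,并非文中的完整视角转换实现):

```python
import math

def rotate_block(points, theta_deg):
    """按视角变换公式旋转图像块上的像素坐标(绕原点,角度为度)。"""
    t = math.radians(theta_deg)
    return [(x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) + y * math.cos(t)) for (x, y) in points]

print(rotate_block([(1, 0)], 90))  # ≈ [(0.0, 1.0)]
```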
实施例3
在一些实施例中,超广角摄像头采集的图像的尺寸有限,当第一区域平移到超广角摄像头的图像的边缘时,手机输出提示信息,以提示用户移动手机的位置。示例性的,以移模式为例,参见图22所示,手机检测到用户持续按压向下箭头,预览图像在图像上的位置逐渐向下移动。当移到超广角摄像头采集的图像的边缘时,手机可以输出提示信息1330,以提示用户手动向下移动手机,或者,输出用于提示用户无法继续移动的提示信息。
应当理解的是,当手机检测到针对录像控件1307的操作时,开始录制视频。在开始录制之后,用户也可以指示进入移模式或摇模式,然后输入图像移动方向,手机根据该方向平移超广角摄像头采集的图像上目标区域的位置,并基于目标区域内的图像块不断更新预览图像。手机将预览图像存储,当检测到针对停止录制控件的操作时,将存储的预览图像合成视频,并保存视频。
在一些实施例中,手机使用摇模式或移模式录制视频后,可以对应存储两个视频,其中一个视频是完整视频,即该视频中的每帧图像是超广角摄像头采集的完整图像。另一个视频为使用移模式或摇模式录制的视频,该视频中每帧图像为超广角摄像头采集的图像上的图像块。示例性的,参见图23所示,手机的相册中视频文件夹中存储两个视频,其中一个是完整视频,另一个是使用移模式或摇模式录制的视频。其中,使用移模式或摇模式录制的视频上可以显示标识2301,以方便用户区分。
实施例4
在另一些实施例中,手机还可以提供图像旋转模式,在该模式下,无需用户手动旋转手机(例如,手机保持不动),也可以实现图像旋转拍摄的效果。
示例性的,手机处于移模式或摇模式时,若检测到在预览图像内的预设操作(例如,预览图像上的双击操作或长按操作等),进入图像旋转模式。参见图24所示,为图像旋转模式下取景界面的示意图。取景界面中包括提示信息1360,用于指示当前处于图像旋转模式。可选的,提示信息1360也可以不显示。取景界面还包括预览框1361,该预览框1361中显示超广角摄像头采集的图像(例如,超广角摄像头采集的完整图像)。预览框1361中显示目标框1362,该目标框1362内的图像块为当前预览图像。取景界面中还包括用于指示旋转进度的图标1363,以及用于设置旋转速度的图标1364。
示例性的,参见图25所示,假设在预览图像刷新为第m帧图像上的图像块之前,检测到用于指示图像顺时针旋转的指令,手机确定第m帧图像上的目标区域即第m区域,将该第m区域内的图像块顺时针旋转角度G。手机确定第m+1帧图像上第m+1区域,该第m+1区域相对于第m区域的位置不变,将该第m+1区域内的图像块顺时针旋转角度2G,以此类推。因此,手机检测到图像顺时针旋转指令后,预览图像依次刷新为第m区域内的顺时针旋转角度G后的图像块、第m+1区域内的顺时针旋转角度2G的图像块、第m+2区域内的顺时针旋转角度3G的图像块,等等。因此,预览图像逐渐按照顺时针的方向旋转,且每次刷新的预览图像旋转的角度相同,即匀速旋转。
假设预览图像刷新到第m+3帧图像之前,检测到停止旋转指令。手机确定第m+3帧图像上的第m+3区域,该第m+3区域相对于第m+2区域的位置不变,将第m+3区域内的图像块顺时针旋转角度3G,即相对于第m+2区域内的图像块旋转角度不变,即停止旋转。
可以理解的是,图25所示的实施例中相邻两帧图像之间的旋转角度相同都是角度G,但相邻两帧之间的旋转角度也可以不同。例如,第m区域内的图像块旋转角度G,第m+1区域内的图像块旋转角度3G,即预览图像加速旋转。或者,第m区域内的图像块旋转角度G,第m+1区域内的图像块旋转角度0.5G,即预览图像减速旋转,等等。
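上述匀速、加速、减速旋转的角度序列,可以用如下示意代码生成(文中加速示例仅给出G、3G,减速示例仅给出G、0.5G,后续按3倍递增、按减半递减均为本示例的假设):

```python
def rotation_angles(n, G, mode="uniform"):
    """第k次刷新时图像块的旋转角度序列。"""
    if mode == "uniform":      # 匀速:G, 2G, 3G...
        return [G * (k + 1) for k in range(n)]
    if mode == "accelerate":   # 加速:如 G, 3G...(本示例假设按3倍递增)
        return [G * 3 ** k for k in range(n)]
    return [G * 0.5 ** k for k in range(n)]  # 减速:如 G, 0.5G...

print(rotation_angles(3, 10))  # [10, 20, 30]
```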
需要说明的是,上文中抽帧刷新或插帧刷新的方式同样可以适用于该实施例中。例如,抽帧的方式可以实现加速旋转,插帧的方式可以实现减速旋转。示例性的,参见图24所示,假设手机检测到针对图标1364上的“+”的操作时,说明用户期望增加旋转速度,手机可以通过抽帧的方式实现,类似于图19A所示的实施例中的隔一帧抽一帧的方式,实现每次刷新的预览图像旋转角度为2G的效果。假设手机检测到针对图标1364下的“-”的操作时,说明用户期望降低旋转速度,手机可以通过插帧的方式实现,类似于图19C所示的实施例中的隔一帧插入一帧的方式,实现每次刷新的预览图像旋转角度为0.5G的效果。
上文中提到的手机检测用于指示图像旋转方向的指令的方式可以有多种。例如,参见图24所示,当手机检测到针对图标1363的操作时,默认顺时针或逆时针旋转,用户可以自行设置。再例如,图标1363处显示向左指示的箭头和向右指示的箭头。当手机检测到用户点击向左的箭头时,逆时针旋转,当手机检测到用户点击向右的箭头时,顺时针旋转。
例如,手机检测点击图标1363的操作后,开始旋转,旋转预设时长(例如,5s)之后自动停止旋转。手机可以保存由旋转开始到结束期间的预览图像合成的视频。
或者,手机检测点击图标1363的操作后,开始旋转,一直旋转直到旋转360度停止旋转。
或者,手机检测点击图标1363的操作后,开始旋转,一直旋转直到用户输入停止旋转的指令停止旋转。例如,手机检测到用户在预览界面内任意位置的单击操作,停止旋转,或者,检测到再次点击图标1363的操作,停止旋转。
或者,手机检测到长按图标1363(按压图标1363的时长大于预设时长)的操作时,开始旋转,当手机检测到所述长按操作弹起时,停止旋转。
可以理解的是,图像旋转可以在开始录像前(例如,点击用于指示开始录像的录像控件之前)进行,也可以在开始录像后(例如,点击用于指示开始录像的录像控件之后)进行。
示例性的,图26示出了图像逆时针旋转的示意图。预览图像逆时针旋转的过程中,取景界面中的目标框1362也可以同步旋转,以提示用户当前预览图像旋转的大致角度。 图标1363上也可以显示当前旋转进度。其中,目标框1362的旋转方向与预览图像的旋转方向可以相同或不同,本申请实施例不作限定。
实施例5
在另一些实施例中,手机还可以提供推拉模式。推拉模式下手机可以实现“推镜头”或“拉镜头”的拍摄效果。其中,“推镜头”可以理解为摄像头向物体推近的拍摄方式,即取景界面内物体被放大,有助于聚焦物体细节;“拉镜头”可以理解为摄像头远离物体,即取景界面内物体被缩小,有助于拍摄物体全貌。
示例性的,手机处于图像旋转模式、移模式或摇模式时,若手机检测到在预览图像内的预设操作(例如,预览图像上的双击操作或长按操作等),进入推拉模式。本申请实施例提供多种模式,普通录像模式、摇模式、移模式、图像旋转模式、推拉模式。在一些实施例中,手机检测到用户在预览图像内的双击操作时,实现不同模式之间的循环切换。
参见图27所示,为推拉模式下取景界面的示意图。取景界面中包括提示信息1370,用于指示当前处于推拉模式。可选的,提示信息1370也可以不显示。取景界面还包括预览框1371,该预览框1371中显示超广角摄像头采集的图像。预览框1371中显示目标框1372,该目标框1372内的图像块为当前预览图像。取景界面中还包括用于指示拉镜头的图标1373,用于指示推镜头的图标1374,以及用于设置推拉速度的图标1375。
以下实施例以拉镜头为例,介绍手机在保持不动的情况下,实现拉镜头的拍摄过程。
示例性的,参见图28所示,假设在预览图像刷新为第m帧图像上的图像块之前,手机检测到用于指示拉镜头的指令,确定第m帧图像上的目标区域即第m区域,该第m区域的面积大于第m-1区域的面积。手机确定第m+1帧图像上第m+1区域,该第m+1区域的面积大于第m区域的面积,以此类推。因此,手机检测到拉镜头指令后,预览图像依次刷新为第m区域内的图像块、第m+1区域内的图像块、第m+2区域内的图像块,等等。因此,预览图像在图像上所占的面积逐渐增大,预览图像的视角范围越来越大,实现摄像头逐渐远离物体的拍摄效果。
假设预览图像刷新到第m+3帧图像之前,检测到停止拉镜头的指令。手机确定第m+3帧图像上的第m+3区域,该第m+3区域相对于第m+2区域的面积不变,即停止拉镜头。因此,手机检测到停止拉镜头的指令后,预览图像所占的面积不再增大,视觉上摄像头不再远离物体。
可以理解的是,图28所示的实施例中相邻两帧图像之间的目标区域的面积变化量可以相同或不同。假设相邻两帧图像上的目标区域的面积增大量相同,即目标区域的面积匀速增大,即实现摄像头匀速远离物体的拍摄效果。假设第m+1区域的面积比第m区域的面积大S,而第m+2区域的面积比第m+1区域的面积大2S,即目标区域的面积加速增大,即实现预览图像加速远离物体的拍摄效果。假设第m+1区域的面积比第m区域的面积大S,而第m+2区域的面积比第m+1区域的面积大0.5S,即目标区域的面积减速增大,即实现预览图像减速(缓慢)远离物体的拍摄效果。
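上述目标区域面积匀速、加速、减速增大的三种情况,可以用如下示意代码表示(文中加速示例仅给出增量S、2S,减速示例仅给出S、0.5S,此处按增量翻倍/减半延续为本示例的假设):

```python
def zoom_out_areas(start_area, n, S, mode="uniform"):
    """拉镜头时每次刷新的目标区域面积序列。"""
    areas, area, step = [], start_area, S
    for _ in range(n):
        area += step
        areas.append(area)
        if mode == "accelerate":
            step *= 2        # 增量 S, 2S, 4S...
        elif mode == "decelerate":
            step /= 2        # 增量 S, 0.5S...
    return areas

print(zoom_out_areas(100, 3, 10))  # 匀速:[110, 120, 130]
```

对于推镜头,只需将面积增量改为负值(即目标区域逐帧缩小),即可得到预览图像放大的效果。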
需要说明的是,上文中抽帧刷新或插帧刷新的方式同样可以适用于该实施例中,以实现不同的效果,在此不重复赘述。
其中,手机获取拉镜头指令的方式有多种,包括但不限定于如下方式。
例如,手机检测点击图标1373的操作后,开始增大目标区域的面积,预设时长(例如,5s)之后自动停止增大。手机可以保存由开始增大到停止增大期间的预览图像合成的视频。
或者,手机检测点击图标1373的操作后,开始增大目标区域的面积,一直增大到等于超广角摄像头采集的完整图像的面积为止。
或者,手机检测点击图标1373的操作后,开始增大目标区域的面积,直到检测到用户输入停止增大的指令为止。例如,手机检测到用户在预览界面内任意位置的单击操作,停止增大,或者,检测到再次点击图标1373的操作,停止增大。
或者,手机检测到长按图标1373(按压图标1373的时长大于预设时长)的操作时,开始增大目标区域的面积,当手机检测到所述长按操作弹起时,停止增大。
示例性的,以图29为例,手机检测到用户点击用于指示拉镜头的图标1373,开始增大图像上的目标区域的面积。相应的,预览图像逐渐刷新为更大面积的目标区域内的图像块,实现物体逐渐远离摄像头的拍摄效果。继续参见图29,取景界面内的目标框1372的面积可以同步增大,以提示用户当前预览图像占完整图像的大概比例。
需要说明的是,上文中以拉镜头为例,对于推镜头,可以采用类似的方式。示例性的,手机检测到推镜头的指令时,可以以类似图28所示的方式确定目标区域,区别在于下一帧图像上的目标区域相对于上一帧图像上的目标区域的面积减小,实现预览图像放大。当手机检测到停止推镜头的指令时,停止减小目标区域的面积,即预览图像停止放大。
在一些实施例中,手机使用移模式、摇模式或图像旋转模式录制得到视频后,可以自动为该视频配乐。例如,使用选择好的声音为该视频配乐。所述声音可以是用户事先从相机应用提供的多种声音中选择出的声音。这里的声音可以包括歌曲片段、铃声、或其他声音等等,本申请实施例不作限定。
需要说明的是,本申请的各个实施方式可以任意进行组合,以实现不同的技术效果。例如,预览图像顺时针旋转的过程中,逐渐缩小或放大;或者,预览图像在图像上的位置逐渐左移的同时,逐渐放大,等等,本申请实施例不作限定。
结合上述实施例及相关附图,本申请实施例提供了一种录像场景下预览图像的显示方法,该方法可以在如图2所示的电子设备(比如,手机,平板电脑等)中实现。如图30所示,该方法可以包括以下步骤:
3001,检测到用于打开相机的第一操作。
示例性的,以图13中的(a)为例,第一操作例如为用户点击图标402的操作。
3002,响应于所述第一操作,启动相机。
3003,检测到用于指示第一录像模式的第二操作。
在本申请实施例中,电子设备可以提供多种录制模式。例如,普通录像模式以及第一录像模式(例如包括移模式和摇模式),电子设备可以在用户指示下进入某种模式。以图14中的(a)为例,电子设备显示普通录像模式下的取景界面,电子设备检测到点击运镜模式控件1308的操作后,显示选择框1309,第二操作可以是点击选择框1309中“摇模式”或“移模式”选项的操作。假设第二操作是点击选择框1309中“移模式”选项的操作,则电子设备进入移模式。
3004,响应于所述第二操作,在所述电子设备的显示屏上显示取景界面,所述取景界面中包括第一预览图像,所述第一预览图像为所述电子设备上的第一广角摄像头采集的第一图像上位于第一区域的第一图像块。
继续以图14中的(b)为例,电子设备检测到第二操作(点击选择框1309中“移模式”选项的操作)时,进入移模式,取景界面中显示第一预览图像,该第一预览图像是第一广角摄像头(例如超广角摄像头)采集的第一图像上第一区域内的第一图像块。其中,第一图像例如图17所示的图像,第一预览图像为第一图像上第一区域内的图像块。
可以理解的是,电子设备在普通录像模式下,使用第二广角摄像头。当电子设备检测到用于指示第一录像模式(例如移模式)的第二操作时,启动第一广角摄像头。其中,第二广角摄像头的视场角小于第一广角摄像头的视场角。第一广角摄像头例如为超广角摄像头,第二广角摄像头例如为普通广角摄像头。也就是说,电子设备由普通录像模式切换到移模式后,由普通广角摄像头切换到超广角摄像头,第一预览图像为超广角摄像头采集的第一图像上第一区域内的第一图像块。可以理解的是,上述第一图像可以是电子设备从普通录像模式切换到第一录像模式启动超广角摄像头后,超广角摄像头采集的第一帧图像。
3005,保持电子设备的位置固定不动,检测到指示图像移动方向的第三操作。
第三操作可以有多种实现方式。以图16为例,第三操作可以是用户点击方向控件1311某个箭头(例如向右箭头)的操作,该箭头所指示的方向即所述图像移动方向。或者,第三操作还可以是用户按压某个箭头,且按压时长达到预设时长的操作,该箭头所指示的方向即所述图像移动方向。或者,第三操作是用户按下录像控件1307并向某个方向拖动(或者也可以称为滑动)的操作,拖动方向即所述图像移动方向。或者,第三操作是用户在屏幕上(例如预览图像上)的滑动操作,滑动操作的滑动方向即所述图像移动方向。或者,本申请实施例提供的方法适用于笔记本电脑等设备时,第三操作还可以是通过键盘、触摸板等输入图像移动方向的操作。
3006,响应于所述第三操作,在所述取景界面中显示第二预览图像,所述第二预览图像为所述第一广角摄像头采集的第二图像上位于第二区域内的第二图像块,或者,所述第二预览图像为对所述第二图像块经过视角转换处理之后得到的图像块;其中,所述第二区域相对于所述第一区域的方位与所述图像移动方向相关。
需要说明的是,上述第一图像可以是电子设备从普通录像模式切换到第一录像模式启动超广角摄像头后,超广角摄像头采集的第一帧图像。以图18为例,第一预览图像为第一帧图像上第一区域内的第一图像块。假设在第m-1帧图像到第m帧图像期间,检测到用于指示图像右移的第三操作。第二预览图像为第m帧图像(即第二图像)上的第m区域(即第二区域)内的第m图像块,或者,是第m帧图像上第m区域内的第m图像块经过视角转换得到的图像块。其中,第m区域(即第二区域)相对于第一区域的方位发生变化。
可选的,第二区域相对于第一区域的方位与图像移动方向相同或相反。例如,用户输入的图像移动方向为向右移动,第二区域在第一区域右方,即预览图像在超广角摄像头采集的图像上的位置右移;或者,用户输入图像右移指令,第二区域在第一区域左方,即预览图像在图像上的位置左移。用户可以自行设置用户输入的图像移动方向与预览图像在超广角摄像头采集的图像上的位置移动方向相同或相反。
上文中,第二区域相对于第一区域的方位与所述图像移动方向相关,可以理解为:第二区域与第二图像的第一边缘之间的距离为第二距离,第一区域与第一图像的第一边缘之间的距离为第一距离,第二距离相对于第一距离的距离改变量与图像移动方向相关。其中,第一边缘可以是超广角摄像头采集的图像的上、下、左、右边缘等。例如,若图像移动方向为左移或右移,第一边缘可以是左边缘或右边缘。若图像移动方向为上移或下移,第一边缘可以是上边缘或下边缘。以图18为例,假设第一边缘为图像左边缘,第二区域(即第m区域)与第二图像(即第m帧图像)的左边缘之间的第二距离为H+A,第一区域与第一图像的左边缘之间的第一距离为H。因此,第二距离相对于第一距离的距离改变量即A。距离改变量A有多种情况,与图像移动方向相关。例如,图像移动方向为右移时,A大于0,即第二区域相对于第一区域右移。图像移动方向为左移时,A小于0,即第二区域相对于第一区域左移。
第二预览图像之后的第三预览图像可以是超广角摄像头采集的第三图像上第三区域内第三图像块。继续以图18为例,第二图像是第m帧图像,第三图像可以是第m+1帧图像,第三区域即第m+1帧图像上第m+1区域,那么第三预览图像即第m+1帧图像上第m+1区域的第m+1图像块。以此类推,第三预览图像之后的第四预览图像可以是第m+2帧图像上第m+2区域的第m+2图像块,等等。
继续以图18为例,第三区域(即第m+1帧图像上第m+1区域)相对于第二区域(即第m帧图像上第m区域)的第二方位改变量为第三距离相对于第二距离的距离改变量,所述第三距离为第三区域与第三图像的第一边缘(例如图像左边缘)之间的距离即H+A+B;所述第二距离为第二区域与第二图像的第一边缘(例如图像左边缘)之间的距离即H+A;那么,第二方位改变量为B。
第二区域(即第m帧图像上第m区域)相对于第一区域(即第一帧图像上第一区域)的第一方位改变量为第二距离相对于第一距离的距离改变量,所述第二距离为第二区域与第二图像的第一边缘(例如图像左边缘)之间的距离即H+A;所述第一距离为所述第一区域与所述第一图像的第一边缘之间的距离即H。因此,第一方位改变量为A。
在一些实施例中,第二方位改变量B等于第一方位改变量A,也就是说,预览图像在图像上的方位改变量相同,即预览图像在图像上的位置匀速移动。当然,第二方位改变量B可以小于或大于第一方位改变量A,以实现不同的效果,具体参见前文,在此不重复赘述。
作为一种可实现方式,第三操作用于指示图像移动方向。这种可实现方式中,电子设备检测到第三操作后,处于预览模式。预览模式下,预览图像在超广角摄像头采集的图像上的位置基于所述图像移动方向而改变。当电子设备检测到针对录像控件1307的操作时,开始录像。开始录像之后,预览图像在图像上的位置持续改变。当检测到停止录像指令时,停止录像。
作为另一种可实现方式,第三操作既可以用于指示图像移动方向,还可以用于指示开始录像。例如,以图16为例,电子设备检测到用户点击方向控件1311某个箭头(例如向右箭头)的第三操作,显示第二预览图像并开始录像。当电子设备检测到停止图像移动指令时,停止移动并停止录像,保存该录像。该录像中包括第二预览图像。
以图18为例,电子设备检测到图像右移指令,取景界面显示第m帧图像上第m区域内第m图像块,并开始录像,之后预览图像依次刷新为第m+1区域、第m+2区域内的图像块,直到检测到停止移动指令,停止录像并保存录像,该录像包括第m图像块、第m+1图像块和第m+2图像块。
可选的,第二图像为从所述第一广角摄像头采集的N帧图像中抽帧出的M帧图像中的一帧图像,N为大于或等于1的整数,M为小于N的整数;具体的抽帧过程参见图19A或图19B的描述,在此不重复赘述。或者,第二图像为在第一广角摄像头采集的N帧图像中插入多帧图像后得到的M帧图像中的一帧图像,N为大于或等于1的整数,M为大于N的整数,具体的插帧过程参见图19C或图19D的描述,在此不重复赘述。
结合上述实施例及相关附图,本申请实施例提供了一种录像场景下预览图像的显示方法,该方法可以在如图2A所示的电子设备(比如,手机,平板电脑等)中实现。如图31所示,该方法可以包括以下步骤:
3101,检测到用于打开相机的第一操作。
3102,响应于所述第一操作,启动相机。
上述步骤3101-3102的描述,请参见图30中关于步骤3001-3002的描述,在此不重复赘述。
3103,检测到用于指示第一录像模式的第二操作。
示例性的,电子设备可以提供多种录像模式,例如普通录像模式、图像旋转录像模式等。以图16为例,电子设备显示移模式下的取景界面,第二操作可以是双击取景界面的操作,或其他的可用于切换到图像旋转录像模式的操作。以图14中的(b)为例,第二操作可以是点击选择框1309中的“图像旋转”选项的操作。
3104,响应于所述第二操作,在所述电子设备的显示屏上显示取景界面,所述取景界面中包括第一预览图像,所述第一预览图像为所述电子设备上的摄像头采集的第一图像。
在一些实施例中,所述摄像头为普通摄像头或第一广角摄像头。以第一广角摄像头为例,所述第一图像为所述第一广角摄像头采集的第一帧图像上第一区域内的第一图像块。
继续以图14中的(b)为例,电子设备检测到第二操作(点击选择框1309中“图像旋转”选项的操作)时,进入图像旋转录像模式。图像旋转录像模式下的取景界面可参见图24所示,取景界面中显示第一预览图像,该第一预览图像是第一广角摄像头(例如超广角摄像头)采集的第一帧图像上第一区域内的第一图像块。
可以理解的是,电子设备在普通录像模式下,使用第二广角摄像头。当电子设备检测到用于指示第一录像模式(例如图像旋转录像模式)的第二操作时,启动第一广角摄像头。其中,第二广角摄像头的视场角小于第一广角摄像头的视场角。第一广角摄像头例如为超广角摄像头,第二广角摄像头例如为普通广角摄像头。也就是说,电子设备由普通录像模式切换到图像旋转录像模式后,由普通广角摄像头切换到超广角摄像头,第一预览图像为超广角摄像头采集的第一帧图像上第一区域内的第一图像块。
3105,保持所述电子设备的位置固定不动,检测到指示图像旋转方向的第三操作。
第三操作可以有多种实现方式。以图24为例,第三操作可以是点击图标1363的操作,例如点击图标1363后,默认顺时针或逆时针旋转。或者,第三操作还可以是在取景界面内画圈的操作,画圈操作的画圈方向即所述图像旋转方向。或者,第三操作还可以是点击图标1363左侧向左指示的箭头的操作,图像旋转方向即逆时针旋转;或者,第三操作还可以是点击图标1363右侧向右指示的箭头的操作,图像旋转方向即顺时针旋转。
3106,响应于所述第三操作,在所述取景界面中显示第二预览图像,所述第二预览图像为所述摄像头采集的第二图像按照所述图像旋转方向旋转之后得到的图像。
以所述摄像头是第一广角摄像头(例如超广角摄像头)为例,上述第一图像可以是电子设备从普通录像模式切换到第一录像模式(即图像旋转录像模式)启动超广角摄像头后,超广角摄像头采集的第一帧图像。以图25为例,第一预览图像为第一帧图像上第一区域内的第一图像块。假设在第m-1帧图像到第m帧图像期间,检测到用于指示图像顺时针旋转的第三操作。电子设备确定第m帧图像(即第二图像)上的第m区域(即第二区域)内的第m图像块,第二预览图像为第m图像块旋转角度G之后的图像块。
可选的,第二图像相对于第一图像的旋转方向与第三操作指示的图像旋转方向相同或相反,本申请实施例不作限定。
在一些实施例中,第二预览图像之后,取景界面显示第三预览图像,第三预览图像为所述摄像头采集的第三图像按照所述图像旋转方向旋转之后得到的图像;其中,第三图像相对于第二图像的旋转角度与第二图像相对于第一图像的旋转角度相同。以所述摄像头是第一广角摄像头(超广角摄像头)为例,第二预览图像之后的第三预览图像可以是超广角摄像头采集的第三图像上第三区域内第三图像块旋转一定角度之后的图像块。继续以图25为例,第二图像是第m帧图像,第二区域即第m区域,第二预览图像是第m区域内的图像块旋转角度G之后的图像块。第三图像是第m+1帧图像,第三区域是第m+1区域,第三预览图像即第m+1区域内的图像块旋转角度2G之后的图像块。因此,第三区域相对于第二区域的旋转角度等于第二区域相对于第一区域的旋转角度。也就是说,预览图像匀速旋转。当然,第三区域相对于第二区域的旋转角度与第二区域相对于第一区域的旋转角度可以不同,例如,第三区域相对于第二区域的旋转角度大于第二区域相对于第一区域的旋转角度,即加速旋转。第三区域相对于第二区域的旋转角度小于第二区域相对于第一区域的旋转角度,即减速旋转。
作为一种可实现方式,第三操作用于指示图像旋转方向。这种可实现方式中,电子设备检测到第三操作后,处于预览模式。预览模式下,预览图像旋转。当电子设备检测到针对录像控件1307的操作时,开始录像。开始录像之后,预览图像继续保持旋转。当检测到停止录像指令时,停止录像。
作为另一种可实现方式,第三操作既可以用于指示图像旋转方向,还可以用于指示开始录像。例如,以图24为例,电子设备检测到用户点击图标1363左侧的向左指示的箭头的第三操作,显示第二预览图像并开始录像。当电子设备检测到停止旋转指令时,停止旋转并停止录像,保存该录像。该录像中包括第二预览图像。
以图25为例,电子设备检测到图像顺时针旋转指令,取景界面显示第m区域内第m图像块旋转角度G之后的图像块,并开始录像,之后预览图像依次刷新为第m+1区域、第m+2区域内旋转一定角度之后的图像块,直到检测到停止旋转指令,停止录像并保存录像,该录像包括第m图像块旋转角度G后的图像块、第m+1图像块旋转角度2G后的图像块和第m+2图像块旋转角度3G后的图像块。
可选的,第二图像可以是从所述第一广角摄像头采集的N帧图像中抽帧出的M帧图像中的一帧图像,N为大于或等于1的整数,M为小于N的整数;具体的抽帧过程参见图19A或图19B的描述,在此不重复赘述。或者,第二图像为在第一广角摄像头采集的N帧图像中插入多帧图像后得到的M帧图像中的一帧图像,N为大于或等于1的整数,M为大于N的整数,具体的插帧过程参见图19C或图19D的描述,在此不重复赘述。
以上实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括例如“一个或多个”这种表达形式,除非其上下文中明确地有相反指示。还应当理解,在本申请实施例中,“一个或多个”是指一个、两个或两个以上;“和/或”,描述关联对象的关联关系,表示可以存在三种关系;例如,A 和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
上述本申请提供的实施例中,从电子设备(例如手机)作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,终端设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
以上实施例中所用,根据上下文,术语“当…时”或“当…后”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。另外,在上述实施例中,使用诸如第一、第二之类的关系术语来区分一个实体和另一个实体,而并不限制这些实体之间的任何实际的关系和顺序。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
需要指出的是,本专利申请文件的一部分包含受著作权保护的内容。除了对专利局的专利文件或记录的专利文档内容制作副本以外,著作权人保留著作权。

Claims (45)

  1. 一种视频拍摄方法,应用于电子设备,其特征在于,包括:
    启动相机功能;
    响应于用户第一操作,确定第一录像模板,所述第一录像模板中包括第一示例样片、第二示例样片以及预设音频,所述第一示例样片对应第一运镜模式,所述第二示例样片对应第二运镜模式,其中所述第一运镜模式和所述第二运镜模式不同;
    显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
    响应于用户第二操作,保持所述电子设备的位置不动,开始录像;
    自动生成合成视频,所述合成视频中包括第一视频片段、第二视频片段以及所述预设音频,所述第一视频片段为所述电子设备根据所述第一运镜模式生成的视频片段,所述第二视频片段为所述电子设备根据所述第二运镜模式生成的视频片段。
  2. 如权利要求1所述的方法,其特征在于,响应于用户第二操作,保持所述电子设备的位置不动,开始录像,包括:
    在所述第一运镜模式标识被选中时,响应于用户指示拍摄的操作,根据所述第一运镜模式生成所述第一视频片段,所述第一视频片段的时长为第一预设时长;
    在所述第二运镜模式标识被选中时,响应于用户指示拍摄的操作,根据所述第二运镜模式生成所述第二视频片段,所述第二视频片段的时长为第二预设时长。
  3. 如权利要求2所述的方法,其特征在于,在根据所述第一运镜模式生成第一视频片段时,所述录像界面中还显示根据所述第一运镜模式生成所述第一视频片段的倒计时;在根据所述第二运镜模式生成所述第二视频片段时,所述录像界面中还显示根据所述第二运镜模式生成所述第二视频片段的倒计时。
  4. 如权利要求1-3任一所述的方法,其特征在于,所述方法还包括:
    显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
    响应于用户第三操作,删除第一运镜模式标识或第二运镜模式标识;
    响应于用户第四操作,保持所述电子设备的位置不动,开始录制;
    自动生成合成视频,所述合成视频中包括所述电子设备根据未删除的运镜模式生成的视频片段以及所述预设音频。
  5. 如权利要求1-3任一所述的方法,其特征在于,所述方法还包括:
    显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
    响应于用户第三操作,在所述录像界面中添加第三运镜模式标识,所述第三运镜模式标识用于指示第三运镜模式;
    响应于用户第四操作,保持所述电子设备的位置不动,开始录制;
    自动生成合成视频,所述合成视频中包括所述第一视频片段、所述第二视频片段、第三视频片段以及所述预设音频,所述第三视频片段为所述电子设备根据所述第三运镜模式生成的视频片段。
  6. 如权利要求1-3任一所述的方法,其特征在于,所述方法还包括:
    显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
    响应于用户第三操作,调整所述第一运镜模式标识和第二运镜模式标识的显示顺序为第一顺序;
    响应于用户第四操作,保持所述电子设备的位置不动,开始录制;
    自动生成合成视频,所述合成的视频中所述第一视频片段和所述第二视频片段的播放顺序为所述第一顺序。
  7. 如权利要求1-6任一所述的方法,其特征在于,所述录像界面中显示所述第一示例样片和/或所述第二示例样片。
  8. 如权利要求1-7任一所述的方法,其特征在于,自动生成合成视频之前,还包括:
    显示展示界面,所述展示界面中包括所述第一视频片段和所述第二视频片段;
    自动生成合成视频,包括:响应于用户输入的视频合成指令,合成视频。
  9. 如权利要求8所述的方法,其特征在于,所述方法还包括:
    响应于所述第四操作,删除所述第一视频片段或所述第二视频片段;或者,在所述合成视频中添加本地的第三视频片段;或者;调整所述合成视频中所述第一视频片段或所述第二视频片段的播放顺序。
  10. 如权利要求1-9任一所述的方法,其特征在于,所述第一录像模板是默认模板或用户自定义模板。
  11. 如权利要求1-10任一所述的方法,其特征在于,所述方法还包括:
    自动存储所述第一视频片段和所述第二视频片段,以及所述合成视频。
  12. 如权利要求1-11任一所述的方法,其特征在于,所述方法还包括:
    响应于特定操作,更换所述合成视频中的音频,或者,在所述合成视频中添加文字和/或图片。
  13. 一种电子设备,其特征在于,包括:
    一个或多个处理器;
    一个或多个存储器;
    其中,所述一个或多个存储器存储有一个或多个计算机程序,所述一个或多个计算机程序包括指令,当所述指令被所述一个或多个处理器执行时,使得所述电子设备执行如下步骤:
    启动相机功能;
    响应于用户第一操作,确定第一录像模板,所述第一录像模板中包括第一示例样片、第二示例样片以及预设音频,所述第一示例样片对应第一运镜模式,所述第二示例样片对应第二运镜模式,其中所述第一运镜模式和所述第二运镜模式不同;
    显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
    响应于用户第二操作,保持所述电子设备的位置不动,开始录像;
    自动生成合成视频,所述合成视频中包括第一视频片段、第二视频片段以及所述预设音频,所述第一视频片段为所述电子设备根据所述第一运镜模式生成的视频片段,所述第二视频片段为所述电子设备根据所述第二运镜模式生成的视频片段。
  14. 如权利要求13所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备具体执行如下步骤:
    在所述第一运镜模式标识被选中时,响应于用户指示拍摄的操作,根据所述第一运镜模式生成所述第一视频片段,所述第一视频片段的时长为第一预设时长;
    在所述第二运镜模式标识被选中时,响应于用户指示拍摄的操作,根据所述第二运镜模式生成所述第二视频片段,所述第二视频片段的时长为第二预设时长。
  15. 如权利要求14所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备具体执行如下步骤:
    在根据所述第一运镜模式生成第一视频片段时,所述录像界面中还显示根据所述第一运镜模式生成所述第一视频片段的倒计时;在根据所述第二运镜模式生成所述第二视频片段时,所述录像界面中还显示根据所述第二运镜模式生成所述第二视频片段的倒计时。
  16. 如权利要求13-15任一所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:
    显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
    响应于用户第三操作,删除第一运镜模式标识或第二运镜模式标识;
    响应于用户第四操作,保持所述电子设备的位置不动,开始录制;
    自动生成合成视频,所述合成视频中包括所述电子设备根据未删除的运镜模式生成的视频片段以及所述预设音频。
  17. 如权利要求13-15任一所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:
    显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
    响应于用户第三操作,在所述录像界面中添加第三运镜模式标识,所述第三运镜模式标识用于指示第三运镜模式;
    响应于用户第四操作,保持所述电子设备的位置不动,开始录制;
    自动生成合成视频,所述合成视频中包括所述第一视频片段、所述第二视频片段、第三视频片段以及所述预设音频,所述第三视频片段为所述电子设备根据所述第三运镜模式生成的视频片段。
  18. 如权利要求13-15任一所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:
    显示录像界面,所述录像界面中包括第一运镜模式标识和第二运镜模式标识;
    响应于用户第三操作,调整所述第一运镜模式标识和第二运镜模式标识的显示顺序为第一顺序;
    响应于用户第四操作,保持所述电子设备的位置不动,开始录制;
    自动生成合成视频,所述合成的视频中所述第一视频片段和所述第二视频片段的播放顺序为所述第一顺序。
  19. 如权利要求13-18任一所述的电子设备,其特征在于,所述录像界面中显示所述第一示例样片和/或所述第二示例样片。
  20. 如权利要求13-19任一所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:
    显示展示界面,所述展示界面中包括所述第一视频片段和所述第二视频片段;
    自动生成合成视频,包括:响应于用户输入的视频合成指令,合成视频。
  21. 如权利要求13-20任一所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:
    响应于所述第四操作,删除所述第一视频片段或所述第二视频片段;或者,在所述合成视频中添加本地的第三视频片段;或者;调整所述合成视频中所述第一视频片段或所述第二视频片段的播放顺序。
  22. 如权利要求13-21任一所述的电子设备,其特征在于,所述第一录像模板是默认模板或用户自定义模板。
  23. 如权利要求13-22任一所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:
    自动存储所述第一视频片段和所述第二视频片段,以及所述合成视频。
  24. 如权利要求13-23任一所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器执行时,使得所述电子设备还执行如下步骤:
    响应于特定操作,更换所述合成视频中的音频,或者,在所述合成视频中添加文字和/或图片。
  25. 一种录像场景下预览图像的显示方法,应用于电子设备,其特征在于,所述方法包括:
    检测到用于打开相机的第一操作;
    响应于所述第一操作,启动相机;
    检测到用于指示第一录像模式的第二操作;
    响应于所述第二操作,在所述电子设备的显示屏上显示取景界面,所述取景界面中包括第一预览图像,所述第一预览图像为所述电子设备上的第一广角摄像头采集的第一图像上位于第一区域的第一图像块;
    保持所述电子设备的位置固定不动,检测到指示图像移动方向的第三操作;
    响应于所述第三操作,在所述取景界面中显示第二预览图像,所述第二预览图像为所述第一广角摄像头采集的第二图像上位于第二区域内的第二图像块,或者,所述第二预览图像为对所述第二图像块经过视角转换处理之后得到的图像块;其中,所述第二区域相对于所述第一区域的方位与所述图像移动方向相关。
  26. 如权利要求25所述的方法,其特征在于,所述第二区域相对于所述第一区域的方位与所述图像移动方向相关,包括:所述第二区域相对于所述第一区域的方位与所述图像移动方向相同或相反。
  27. 如权利要求25或26所述的方法,其特征在于,所述第二区域相对于所述第一区域的方位与所述图像移动方向相关,包括:所述第二区域与所述第二图像的第一边缘之间的距离为第二距离,所述第一区域与所述第一图像的第一边缘之间的距离为第一距离,所述第二距离相对于所述第一距离的距离改变量与所述图像移动方向相关。
  28. 如权利要求25-27任一所述的方法,其特征在于,还包括:
    取景界面中显示第三预览图像,所述第三预览图像为第三图像上第三区域内的第三图像块;或为所述第三图像块经过视角转换处理之后得到的图像块;所述第三区域相对于所述第二区域的第二方位改变量等于所述第二区域相对于所述第一区域的第一方位改变量;
    其中,所述第二方位改变量为第三距离相对于第二距离的距离改变量,所述第一方位改变量为第二距离相对于第一距离的距离改变量,所述第三距离为所述第三区域与所述第三图像的第一边缘之间的距离;所述第二距离为所述第二区域与所述第二图像的第一边缘之间的距离;所述第一距离为所述第一区域与所述第一图像的第一边缘之间的距离。
  29. 如权利要求25-28任一所述的方法,其特征在于,在检测到用于指示第一录像模式的第二操作之前,所述取景界面中显示第四预览图像,所述第四预览图像为第二广角摄像头采集的图像,所述第二广角摄像头的视场角小于所述第一广角摄像头的视场角;所述第一预览图像是所述第一广角摄像头和所述第二广角摄像头视场角重叠范围内的全部或部分图像块。
  30. 如权利要求25-29任一所述的方法,其特征在于,所述第三操作,包括:
    在所述第一预览图像上的滑动操作;或,
    针对所述取景界面内用于指示图像旋转方向的控件的操作,或,
    按压所述取景界面内的特定控件并拖动的操作。
  31. 如权利要求25-30任一所述的方法,其特征在于,所述方法还包括:
    检测到图像停止移动指令时,生成并保存视频,所述视频包括所述第二预览图像。
  32. 如权利要求31所述的方法,其特征在于,所述检测到图像停止移动指令,包括:
    所述第三操作为在所述第一预览图像上的滑动操作时,当检测到所述滑动操作弹起时,产生所述图像停止移动指令;或者,
    所述第三操作为针对所述取景界面内用于指示图像移动方向的控件的点击操作时,当检测到在所述取景界面内任意位置的再次点击操作时,产生所述图像停止移动指令,或者,所述第三操作为针对所述取景界面内用于指示图像移动方向的控件的长按操作时,当检测到所述长按操作弹起时,产生所述图像停止移动指令,或者,
    所述第三操作为针对所述取景界面内的特定控件的按压并拖动的操作时,当检测到所述拖动操作弹起时,产生所述图像停止移动指令。
  33. 如权利要求25-32任一所述的方法,其特征在于,所述第二图像为从所述第一广角摄像头采集的N帧图像中抽帧出的M帧图像中的一帧图像,N为大于或等于1的整数,M为小于N的整数;
    或者,
    所述第二图像为在所述第一广角摄像头采集的N帧图像插入多帧图像后得到的M帧图像中的一帧图像,N为大于或等于1的整数,M为大于N的整数。
  34. 如权利要求25-33任一所述的方法,其特征在于,所述第二图像块经过视角转换处理之后得到的图像块,满足如下公式:
    x’=x*cos(θ)-sin(θ)*y
    y’=x*sin(θ)+cos(θ)*y
    其中,(x’,y’)是经过视角转换处理之后得到的图像块上的像素点,(x,y)是第二图像块上的像素点,θ为旋转角度,所述旋转角度是预设的。
  35. 一种录像场景下预览图像的显示方法,应用于电子设备,其特征在于,所述方法包括:
    检测到用于打开相机的第一操作;
    响应于所述第一操作,启动相机;
    检测到用于指示第一录像模式的第二操作;
    响应于所述第二操作,在所述电子设备的显示屏上显示取景界面,所述取景界面中包括第一预览图像,所述第一预览图像为所述电子设备上的摄像头采集的第一图像;
    保持所述电子设备的位置固定不动,检测到指示图像旋转方向的第三操作;
    响应于所述第三操作,在所述取景界面中显示第二预览图像,所述第二预览图像为所述摄像头采集的第二图像按照所述图像旋转方向旋转之后得到的图像。
  36. 如权利要求35所述的方法,其特征在于,还包括:
    所述取景界面显示第三预览图像,所述第三预览图像为所述摄像头采集的第三图像按照所述图像旋转方向旋转之后得到的图像,所述第三图像相对于所述第二图像的旋转角度与所述第二图像相对于所述第一图像的旋转角度相同。
  37. 如权利要求35或36所述的方法,其特征在于,所述摄像头为第一广角摄像头,所述第一图像为所述第一广角摄像头采集的第四图像上第一区域内的第一图像块;所述第二图像为所述第一广角摄像头采集的第五图像上第二区域内的第二图像块,所述第一区域在所述第四图像上的位置和所述第二区域在所述第五图像上的位置相同或不同。
  38. 如权利要求35-37任一所述的方法,其特征在于,所述第三操作,包括:
    在所述第一预览图像上的画圈操作;或,
    针对所述取景界面内用于指示图像旋转方向的控件的操作。
  39. 如权利要求35-38任一所述的方法,其特征在于,所述方法还包括:
    检测到图像停止旋转指令时,生成并保存视频,所述视频包括所述第二预览图像。
  40. 如权利要求39所述的方法,其特征在于,所述检测到图像停止旋转指令,包括:
    所述第三操作为在所述第一预览图像上的画圈操作时,当检测到所述画圈操作弹起时,产生所述图像停止旋转指令;或者,
    所述第三操作为针对所述取景界面内用于指示图像旋转方向的控件的点击操作时,当检测到在所述取景界面内任意位置的再次点击操作时,产生所述图像停止旋转指令,或者,
    所述第三操作为针对所述取景界面内用于指示图像旋转方向的控件的长按操作时,当检测到所述长按操作弹起时,产生所述图像停止旋转指令。
  41. The method according to any one of claims 35 to 40, wherein the second image is one of M frames of images extracted from N frames of images captured by the first camera, N being an integer greater than or equal to 1 and M being an integer less than N;
    or
    the second image is one of M frames of images obtained after a plurality of frames of images are inserted into the N frames of images captured by the first camera, N being an integer greater than or equal to 1 and M being an integer greater than N.
  42. An electronic device, comprising: one or more processors; and one or more memories, wherein the one or more memories store one or more computer programs, the one or more computer programs comprise instructions, and when the instructions are executed by the one or more processors, the electronic device is caused to perform the method according to any one of claims 25 to 41.
  43. A computer-readable storage medium, comprising a computer program, wherein when the computer program runs on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 12 or the method according to any one of claims 25 to 41.
  44. A program product, comprising instructions, wherein when the instructions run on a computer, the computer is caused to perform the method according to any one of claims 1 to 12 or the method according to any one of claims 25 to 41.
  45. A graphical user interface on an electronic device, wherein the electronic device has a display, one or more memories, and one or more processors, the one or more processors being configured to execute one or more computer programs stored in the one or more memories, and the graphical user interface comprises a graphical user interface displayed when the electronic device performs the method according to any one of claims 1 to 12 or the method according to any one of claims 25 to 41.
PCT/CN2020/132547 2019-11-29 2020-11-28 Video shooting method and electronic device WO2021104508A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2022531504A JP7450035B2 (ja) 2019-11-29 2020-11-28 Video shooting method and electronic device
EP20894021.3A EP4044581A4 (en) 2019-11-29 2020-11-28 VIDEO PHOTOGRAPHY METHOD AND ELECTRONIC DEVICE
CN202080082673.XA CN115191110B (zh) 2019-11-29 2020-11-28 Video shooting method and electronic device
KR1020227018058A KR102709021B1 (ko) 2019-11-29 2020-11-28 Video shooting method and electronic device
US17/780,872 US11856286B2 (en) 2019-11-29 2020-11-28 Video shooting method and electronic device
CN202310692606.5A CN117241135A (zh) 2019-11-29 2020-11-28 Video shooting method and electronic device

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201911207579.8 2019-11-29
CN201911207579 2019-11-29
CN202010079012 2020-02-03
CN202010079012.3 2020-02-03
CN202011066518.7A CN112887584A (zh) 2019-11-29 2020-09-30 Video shooting method and electronic device
CN202011066518.7 2020-09-30

Publications (1)

Publication Number Publication Date
WO2021104508A1 true WO2021104508A1 (zh) 2021-06-03

Family

ID=76043728

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132547 WO2021104508A1 (zh) 2019-11-29 2020-11-28 一种视频拍摄方法与电子设备

Country Status (6)

Country Link
US (1) US11856286B2 (zh)
EP (1) EP4044581A4 (zh)
JP (1) JP7450035B2 (zh)
KR (1) KR102709021B1 (zh)
CN (4) CN116112786A (zh)
WO (1) WO2021104508A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114520878A (zh) * 2022-02-11 2022-05-20 Vivo Mobile Communication Co., Ltd. Video shooting method and apparatus, and electronic device
CN115334242A (zh) * 2022-08-19 2022-11-11 Vivo Mobile Communication Co., Ltd. Video recording method and apparatus, electronic device, and medium
CN116095225A (zh) * 2022-05-30 2023-05-09 Honor Device Co., Ltd. Image processing method and apparatus for terminal device
US11856286B2 (en) 2019-11-29 2023-12-26 Huawei Technologies Co., Ltd. Video shooting method and electronic device

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9912860B2 (en) 2016-06-12 2018-03-06 Apple Inc. User interface for camera effects
US11112964B2 (en) 2018-02-09 2021-09-07 Apple Inc. Media capture lock affordance for graphical user interface
US11039074B1 (en) 2020-06-01 2021-06-15 Apple Inc. User interfaces for managing media
US11212449B1 (en) * 2020-09-25 2021-12-28 Apple Inc. User interfaces for media capture and management
USD992593S1 (en) * 2021-01-08 2023-07-18 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD992592S1 (en) * 2021-01-08 2023-07-18 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
US11778339B2 (en) 2021-04-30 2023-10-03 Apple Inc. User interfaces for altering visual media
US12112024B2 (en) 2021-06-01 2024-10-08 Apple Inc. User interfaces for managing media styles
CN115442538A (zh) * 2021-06-04 2022-12-06 Beijing Zitiao Network Technology Co., Ltd. Video generation method, apparatus, device, and storage medium
CN113542610A (zh) * 2021-07-27 2021-10-22 Shanghai Transsion Information Technology Co., Ltd. Shooting method, mobile terminal, and storage medium
CN113727047A (zh) * 2021-08-18 2021-11-30 Shenzhen Transsion Holdings Co., Ltd. Video processing method, mobile terminal, and readable storage medium
CN117652148A (zh) * 2021-09-08 2024-03-05 SZ DJI Technology Co., Ltd. Shooting method, shooting system, and storage medium
CN114500851A (zh) * 2022-02-23 2022-05-13 Guangzhou Boguan Information Technology Co., Ltd. Video recording method and apparatus, storage medium, and electronic device
CN115379195B (zh) * 2022-08-26 2023-10-03 Vivo Mobile Communication Co., Ltd. Video generation method and apparatus, electronic device, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742038A (zh) * 2008-11-14 2010-06-16 Sharp Corporation Image processing apparatus
US20130050477A1 (en) * 2011-08-24 2013-02-28 Marcos LIMA Device for sensing, monitoring and telemetry using video cameras for visualization, broadcast and recording
CN105049712A (zh) * 2015-06-30 2015-11-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for starting wide-angle camera of terminal, and terminal
CN105657260A (zh) * 2015-12-31 2016-06-08 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Shooting method and terminal
CN107155068A (zh) * 2017-07-11 2017-09-12 Shanghai Qingcheng Industrial Co., Ltd. Mobile terminal, and method and apparatus for video shooting

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217624A (zh) * 2007-01-01 2008-07-09 Huang Fubao MTV synthesis method and apparatus controlled by a remote control
US8555169B2 (en) * 2009-04-30 2013-10-08 Apple Inc. Media clip auditioning used to evaluate uncommitted media content
JP4985834B2 (ja) 2009-10-13 2012-07-25 Nikon Corporation Imaging apparatus and image processing apparatus
BR112016013424B1 (pt) * 2013-12-13 2021-01-26 Huawei Device (Shenzhen) Co., Ltd. Method and terminal for acquiring a panoramic image
JP2016027704A (ja) 2014-07-04 2016-02-18 Panasonic IP Management Co., Ltd. Imaging apparatus
CN113138701A (zh) * 2015-11-05 2021-07-20 Xiaomi Inc. Icon position swapping method and apparatus
US10021339B2 (en) * 2015-12-01 2018-07-10 Qualcomm Incorporated Electronic device for generating video data
US10547776B2 (en) * 2016-09-23 2020-01-28 Apple Inc. Devices, methods, and graphical user interfaces for capturing and recording media in multiple modes
JP6904843B2 (ja) * 2017-08-03 2021-07-21 Canon Inc. Imaging apparatus and control method therefor
JP2019140567A (ja) 2018-02-13 2019-08-22 Canon Inc. Image processing apparatus
CN108566519B (zh) 2018-04-28 2022-04-12 Tencent Technology (Shenzhen) Co., Ltd. Video production method, apparatus, terminal, and storage medium
CN108900771B (zh) * 2018-07-19 2020-02-07 Beijing Microlive Vision Technology Co., Ltd. Video processing method, apparatus, terminal device, and storage medium
JP2020053774A (ja) * 2018-09-25 2020-04-02 Ricoh Co., Ltd. Imaging apparatus and image recording method
CN116112786A (zh) 2019-11-29 2023-05-12 Huawei Technologies Co., Ltd. Video shooting method and electronic device


Also Published As

Publication number Publication date
CN116112786A (zh) 2023-05-12
CN115191110A (zh) 2022-10-14
US20230007186A1 (en) 2023-01-05
KR102709021B1 (ko) 2024-09-23
CN117241135A (zh) 2023-12-15
JP7450035B2 (ja) 2024-03-14
EP4044581A4 (en) 2023-03-08
CN112887584A (zh) 2021-06-01
EP4044581A1 (en) 2022-08-17
KR20220082926A (ko) 2022-06-17
CN115191110B (zh) 2023-06-20
JP2023503519A (ja) 2023-01-30
US11856286B2 (en) 2023-12-26

Similar Documents

Publication Publication Date Title
WO2021104508A1 (zh) Video shooting method and electronic device
WO2022068537A1 (zh) Image processing method and related apparatus
JP7326476B2 (ja) Screenshot method and electronic apparatus
CN113727017B (zh) Shooting method, graphical interface, and related apparatus
CN113489894B (zh) Shooting method in telephoto scenario, and terminal
WO2021244455A1 (zh) Image content removal method and related apparatus
CN115484380B (zh) Shooting method, graphical user interface, and electronic device
WO2022042769A2 (zh) Multi-screen interaction system, method, apparatus, and medium
WO2022068511A1 (zh) Video generation method and electronic device
CN115442509B (zh) Shooting method, user interface, and electronic device
WO2023160230A9 (zh) Shooting method and related device
CN115484390B (zh) Video shooting method and electronic device
WO2021204103A1 (zh) Photo preview method, electronic device, and storage medium
CN115484387A (zh) Prompting method and electronic device
WO2022228010A1 (zh) Cover generation method and electronic device
CN115484392B (zh) Video shooting method and electronic device
WO2023231696A1 (zh) Shooting method and related device
US20240314423A1 Photographing method and electronic device
WO2023226695A1 (zh) Video recording method and apparatus, and storage medium
WO2023160224A9 (zh) Shooting method and related device
WO2023226694A1 (zh) Video recording method and apparatus, and storage medium
WO2023226699A1 (zh) Video recording method and apparatus, and storage medium
CN113452895A (zh) Shooting method and device
CN115811656A (zh) Video shooting method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20894021; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020894021; Country of ref document: EP; Effective date: 20220512)
ENP Entry into the national phase (Ref document number: 2022531504; Country of ref document: JP; Kind code of ref document: A; Ref document number: 20227018058; Country of ref document: KR; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)