WO2023160142A1 - Video processing method, electronic device, and readable medium - Google Patents

Video processing method, electronic device, and readable medium

Info

Publication number
WO2023160142A1
WO2023160142A1 (PCT/CN2022/138960, CN2022138960W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
interface
image
shooting
response
Prior art date
Application number
PCT/CN2022/138960
Other languages
English (en)
French (fr)
Inventor
易婕
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Priority to EP22905479.6A (published as EP4258675A1)
Publication of WO2023160142A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring

Definitions

  • the present application relates to the technical field of electronic devices, and in particular to a video processing method, an electronic device, a program product, and a computer-readable storage medium.
  • the present application provides a video processing method, an electronic device, a program product, and a computer-readable storage medium, with the purpose of enabling users to obtain photos of wonderful moments while shooting videos.
  • the present application provides a video processing method applied to an electronic device, the video processing method comprising: shooting a first video in response to a first operation; and displaying a first interface in response to a second operation, where the first interface is the details interface of the first video, and the first interface includes the first area, the second area and the first control, or the first area and the second area, or the first area and the first control,
  • the first area is the play area of the first video
  • the second area displays the cover thumbnail of the first video, the thumbnail of the first image and the thumbnail of the second image
  • the first image is the image of the first video at the first moment
  • the second image is the image of the first video at the second moment
  • the recording process of the first video includes the first moment and the second moment
  • the first control is used to control the electronic device to generate the second video, the duration of the second video is less than that of the first video, and the second video at least includes images of the first video.
  • when the user uses the electronic device to shoot a video, the electronic device can obtain the first video, the first image and the second image, so that the user can obtain a wonderful moment photo while shooting the video.
  • the video processing method further includes: displaying a second interface in response to a third operation, the third operation being a touch operation on the first control, and the second interface being a display interface of the second video.
  • after the user shoots a video, the electronic device obtains the first video, the first image and the second image, and can also obtain and display the second video.
  • the duration of the second video is shorter than that of the first video, and the second video contains images of the first video, enabling the user to obtain a photo of a wonderful moment while shooting a video, and further obtain a short video of the first video that is convenient for the user to share.
  • displaying the first interface includes: in response to the fourth operation, displaying a third interface, where the third interface is an interface of a gallery application and includes the cover thumbnail of the first video; and displaying the first interface in response to a touch operation on the cover thumbnail of the first video.
  • displaying the first interface in response to the second operation includes: displaying the first interface in response to a touch operation on the second control, where the shooting interface of the electronic device includes the second control, and the second control is used to control the display of the image or video taken last time.
  • the cover thumbnail of the first video includes a first identifier, and the first identifier is used to indicate that the first video was shot by the electronic device in the one-record-multiple-get mode.
  • the first interface is displayed with a mask, and the second area is not covered by the mask.
  • the first interface further includes: a first dialog box, the first dialog box is used to prompt the user that the first image and the second image have been generated, and the first dialog box is not covered by a mask.
  • the video processing method further includes: in response to the fifth operation, displaying a shooting interface of the electronic device, where the shooting interface includes a second dialog box, and the second dialog box is used to prompt the user that the first video and the second video have been generated.
  • before shooting the first video in response to the first operation, the video processing method further includes: in response to the sixth operation, displaying a fourth interface, where the fourth interface is a shooting setting interface and includes a one-record-multiple-get option and a text field.
  • the one-record-multiple-get option is used to control the electronic device to enable or disable the one-record multiple-get function.
  • the text field is used to indicate the content of the one-record multiple-get function.
  • the video processing method further includes: in response to the seventh operation, displaying a fifth interface, where the fifth interface is an interface of a gallery application and includes a first folder and a second folder, the first folder includes images and videos saved by the electronic device, and the second folder includes the first image and the second image; and in response to the eighth operation, displaying a sixth interface, where the sixth interface includes a thumbnail of the first image and a thumbnail of the second image, and the eighth operation is a touch operation on the second folder.
  • the video processing method further includes: in response to the ninth operation, displaying a seventh interface, where the seventh interface is a detailed interface of the second video,
  • the ninth operation is a touch operation on the third control included in the second interface, and the third control is used to control saving the second video.
  • the video processing method further includes: in response to the tenth operation, displaying an eighth interface, where the eighth interface is an interface of a gallery application and includes the cover thumbnail of the second video and the cover thumbnail of the first video.
  • the video processing method further includes: in response to the eleventh operation, displaying a first shooting interface of the electronic device, where the first shooting interface includes a first option and a second option, the first option is used to indicate the photographing mode, and the second option is used to indicate the video recording mode; in response to an operation on the fourth control of the shooting interface, displaying the first shooting interface of the electronic device, where the fourth control is used to start taking photos; and in response to an operation on the second option, displaying a second shooting interface of the electronic device, where the second shooting interface includes a third dialog box, and the third dialog box is used to indicate the content of the one-record-multiple-get function to the user.
  • after the user shoots the video, if the user controls the electronic device to take pictures by touching the fourth control, then when the electronic device enters its shooting interface to shoot a video, it can display the third dialog box on the shooting interface to remind the user that the electronic device is equipped with the one-record-multiple-get function.
  • the video processing method further includes: in response to the twelfth operation, shooting and saving a third image, where the twelfth operation is a touch operation on the camera button on the video shooting interface of the electronic device.
  • when the electronic device shoots a video, it also shoots and saves a third image in response to the twelfth operation, realizing a snapshot function for the electronic device during video shooting.
  • the second area further displays a thumbnail of the third image, and the second video includes the third image.
  • the image captured manually by the electronic device may be used to obtain the second video, so that the image captured by the user may be used as the image in the second video.
  • the second area displaying the cover thumbnail of the first video, the thumbnail of the first image and the thumbnail of the second image includes: the second area displays the cover thumbnail of the first video, a thumbnail of the first image and a thumbnail of the third image; and the second video includes at least the first image and the third image.
  • the method for generating the second video includes: acquiring the first video and tag data of the first video, where the tag data includes the theme TAG of the first video, the segment TAG, the first image TAG and the second image TAG; determining a style template and music based on the theme TAG of the first video, where the style template includes at least one special effect; extracting, based on the first image TAG and the second image TAG, the multiple frames before and after the first image and the multiple frames before and after the second image; and compositing the special effects of the style template, the music and the target images to obtain the second video, where the target images at least include the first image and the multiple frames before and after the first image.
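The generation steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the tag-data layout, the template table, and the helper name `generate_featured_video` are all assumptions.

```python
# Sketch of the featured-video generation flow described above.
# All helper names and the tag-data layout are illustrative assumptions.

def generate_featured_video(video_frames, tag_data, num_context=15):
    """Compose a short featured video from a recorded video and its TAG data."""
    # 1. The theme TAG selects a style template (special effects) and music.
    templates = {
        "travel": {"effects": ["fade", "warm_filter"], "music": "upbeat.mp3"},
        "sport":  {"effects": ["speed_ramp"],          "music": "energetic.mp3"},
    }
    template = templates.get(tag_data["theme"], {"effects": [], "music": None})

    # 2. The image TAGs mark highlight frames; gather each highlight frame
    #    together with the multiple frames immediately before and after it.
    target_frames = []
    for idx in tag_data["image_tags"]:          # frame indices of highlight images
        start = max(0, idx - num_context)
        end = min(len(video_frames), idx + num_context + 1)
        target_frames.extend(video_frames[start:end])

    # 3. Composite the template's effects and music with the target frames.
    return {
        "frames": target_frames,
        "effects": template["effects"],
        "music": template["music"],
    }

# Example: a 100-frame video with highlight images at frames 20 and 70.
clip = generate_featured_video(list(range(100)),
                               {"theme": "sport", "image_tags": [20, 70]},
                               num_context=5)
```

The resulting `clip` holds only the highlight frames plus their surrounding context, which is why the second video is necessarily shorter than the first.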
  • the present application provides a video processing method applied to an electronic device, the video processing method comprising: in response to the first operation, displaying a first interface and starting to shoot the first video, where the first interface is the preview interface for shooting the first video and includes a first control used to capture an image; in response to the second operation, capturing and saving the first image during the shooting of the first video, where the second operation is a touch operation on the first control; and after the shooting of the first video is completed, in response to the third operation, displaying a second interface, where the second interface is the details interface of the first video and includes the first area, the second area and the first control, or the first area and the second area, or the first area and the first control; the first area is the play area of the first video, the second area displays the cover thumbnail of the first video and a thumbnail of the first image, the first control is used to control the electronic device to generate a second video, the duration of the second video is shorter than that of the first video, and the second video includes at least an image of the first video.
  • the second area also displays thumbnails of one or more other frames of images, where the other one or more frames of images are images in the first video, the sum of the number of the first image and the other one or more frames of images is greater than or equal to a preset number, and the preset number is the number of second images automatically recognized by the electronic device during the shooting of the first video.
  • the second video includes at least one of the following images: the first image and the other one or more frames of images.
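The counting rule above can be sketched as a tiny function; the helper name and inputs are hypothetical, since the patent specifies behavior, not code. Manually captured images are shown first, and automatically recognized frames are added until the preset number is reached.

```python
# Illustrative only: fill the second area's thumbnail strip so that the
# total number of images is at least the preset number (which defaults to
# the count of automatically recognized images, as described above).
def select_thumbnails(manual_images, auto_images, preset_number=None):
    if preset_number is None:
        preset_number = len(auto_images)
    selected = list(manual_images)          # user-captured images come first
    for img in auto_images:
        if len(selected) >= preset_number:  # quota already met
            break
        selected.append(img)
    return selected

# One manual capture plus auto-recognized frames, preset number 3:
strip = select_thumbnails(["manual_1"], ["auto_1", "auto_2", "auto_3"])
```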
  • the video processing method further includes: displaying a third interface in response to a fourth operation, the fourth operation is a touch operation on the first control, and the third interface is a display interface of the second video.
  • the video processing method further includes: displaying a first shooting interface of the electronic device, where the shooting interface includes a first option and a second option, the first option is used to indicate the photographing mode, the second option is used to indicate the video recording mode, and the first shooting interface is the preview interface when shooting images; in response to an operation on the second control of the shooting interface, displaying the first shooting interface of the electronic device, where the second control is used to start taking pictures; and in response to an operation on the second option, displaying the second shooting interface of the electronic device, where the second shooting interface includes a first dialog box used to indicate the content of the one-record-multiple-get function to the user, and the second shooting interface is the preview interface when shooting video.
  • after the shooting of the first video is completed, the method further includes: in response to the sixth operation, displaying a third interface, where the third interface is an interface of a gallery application and includes a first folder and a second folder, the first folder includes at least the first image, and the second folder includes the second image and the third image, or the second folder includes the second image; and in response to the seventh operation, displaying a fourth interface, where the fourth interface includes a thumbnail of the second image and a thumbnail of the third image, or includes a thumbnail of the second image, and the seventh operation is a touch operation on the second folder.
  • the present application provides an electronic device, including: one or more processors, a memory, a camera and a display; the memory, the camera and the display are coupled with the one or more processors; the memory is used to store computer program code, and the computer program code includes computer instructions; when the one or more processors execute the computer instructions, the electronic device executes the video processing method according to any one of the first aspect, or the video processing method according to any one of the second aspect.
  • the present application provides a computer-readable storage medium for storing a computer program.
  • when the computer program is executed by an electronic device, the electronic device implements the video processing method according to any one of the first aspect, or the video processing method according to any one of the second aspect.
  • the present application provides a computer program product.
  • when the computer program product runs on a computer, the computer is enabled to execute the video processing method according to any one of the first aspect, or the video processing method according to any one of the second aspect.
  • FIG. 1 is a hardware structure diagram of the electronic device provided by the present application.
  • Fig. 2 is a schematic diagram of an example of opening "one record, multiple results" provided by Embodiment 1 of the present application;
  • FIG. 3 is a schematic diagram of an example of a graphical user interface of "one record, multiple results" provided by Embodiment 1 of the present application;
  • FIG. 4 is a schematic diagram of another example of a "one record, multiple results" graphical user interface provided in Embodiment 1 of the present application;
  • FIG. 5 is a schematic diagram of another example of a "one record, multiple results" graphical user interface provided in Embodiment 1 of the present application;
  • FIG. 6 is a schematic diagram of another example of a "one record, multiple results" graphical user interface provided in Embodiment 1 of the present application;
  • FIG. 7 is a schematic diagram of another example of a "one record, multiple results" graphical user interface provided in Embodiment 1 of the present application;
  • FIG. 8 is a flow chart of generating featured videos provided in Embodiment 1 of the present application.
  • FIG. 9 is a display diagram for generating featured videos provided in Embodiment 1 of the present application.
  • FIG. 10 is a diagram showing an example of generating featured videos provided in Embodiment 1 of the present application.
  • FIG. 11 is a schematic diagram of another example of a "one record, multiple results" graphical user interface provided in Embodiment 1 of the present application;
  • FIG. 12 is a schematic diagram of an example of a "one record, multiple results" graphical user interface provided in Embodiment 2 of the present application;
  • FIG. 13 is a schematic diagram of an example of a "one record, multiple results" graphical user interface provided in Embodiment 3 of the present application;
  • FIG. 14 is a schematic diagram of another example of a "one record, multiple results" graphical user interface provided in Embodiment 3 of the present application.
  • one or more refers to one, two or more than two; "and/or" describes the association relationship of associated objects, indicating that there may be three types of relationships; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural.
  • the character "/" generally indicates that the contextual objects are an "or" relationship.
  • references to "one embodiment" or "some embodiments" and the like in this specification mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • a plurality referred to in the embodiments of the present application means two or more. It should be noted that in the description of the embodiments of the present application, words such as "first" and "second" are used only for the purpose of distinguishing the description, and should not be understood as indicating or implying relative importance or order.
  • One record, multiple results: it can be understood that when the user uses the camera application to shoot a video, by pressing the "shoot" icon once, the user can obtain the original video, one or more wonderful photos, and one or more featured videos. It is understandable that the duration of the featured short video obtained through one record is shorter than the duration of the entire complete video. For example, if the whole recorded video is 1 minute long, the user can get 5 photos of exciting moments and a featured short video with a duration of 15 seconds. One record, multiple results can also have other names, for example, one-click multi-get, one-click multi-shot, one-click movie, one-click blockbuster, AI one-click blockbuster, etc.
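The outputs described above can be modeled, purely for illustration (the patent defines no such data structure), as:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the outputs of one "one record, multiple
# results" shot: the original video, several wonderful photos, and a
# featured short video that must be shorter than the original.
@dataclass
class OneRecordResult:
    original_duration_s: float                    # full recording, seconds
    highlight_photos: list = field(default_factory=list)
    featured_duration_s: float = 0.0              # featured short video, seconds

    def is_consistent(self) -> bool:
        # The featured short video is shorter than the full recording.
        return self.featured_duration_s < self.original_duration_s

# The example above: a 1-minute recording yields 5 photos and a 15 s clip.
result = OneRecordResult(60.0, ["p1", "p2", "p3", "p4", "p5"], 15.0)
```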
  • the wonderful image can be the best sports moment picture, the best expression moment picture or the best check-in action picture. It can be understood that this application does not limit the term wonderful image; a wonderful image can also be called a good moment image, a magical moment image, a wonderful moment image, a decisive moment image, a best shot (BS) image, an AI image, and the like. In different scenarios, wonderful images can be different types of instant images.
  • the highlight image can be the image of the moment when the player's foot touches the football when shooting or passing the ball, the image of the moment when the football is kicked away by the athlete, the image of the moment when the football flies into the goal, or the image of the moment when the goalkeeper catches the football.
  • the wonderful image can be the image of the moment when the person is at the highest point in the air, or the image of the moment when the person is in the air and the movement is the most stretched.
  • the highlight image can be an image of a building appearing in the scenery, or an image of the setting sun or the rising sun.
  • Selected videos refer to videos containing wonderful images. It can be understood that this application does not limit the term featured video; a featured video can also be called a wonderful video, a wonderful short video, or an AI video.
  • TAGs can be divided into theme TAGs, scene TAGs, wonderful image TAGs, storyboard TAGs, and the like; the theme TAG is used to indicate the style or atmosphere of the video; the scene TAG is used to indicate the scene of the video; the wonderful image TAG is used to indicate the position of a highlight image in the captured video; and the storyboard TAG is used to indicate the position of a transition scene in the captured video. For example, a video may include one or more highlight image TAGs, and a highlight image TAG may indicate that the image frame at the 10th second or at 1 minute 20 seconds of the video is a highlight image.
  • the video also includes one or more storyboard TAGs, which can indicate that the first scene is switched to the second scene at the 15th second of the video, and the second scene is switched to the third scene at the 3rd minute and 43 seconds of the video.
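One possible representation of the TAG data described above is sketched below; the dictionary layout and function name are assumptions for illustration, with the timestamps taken from the examples in the two bullets above.

```python
# Assumed representation of a video's TAG data; times are in seconds.
video_tags = {
    "theme": "sport",               # style or atmosphere of the video
    "scene": "football match",      # scene of the video
    "highlight_tags": [10, 80],     # highlight images at 0:10 and 1:20
    "storyboard_tags": [15, 223],   # scene switches at 0:15 and 3:43
}

def scene_index_at(t, storyboard_tags):
    """Return the 0-based scene index the video is in at time t (seconds)."""
    return sum(1 for switch in storyboard_tags if t >= switch)

# Before the first switch the video is in scene 0; after 0:15, in scene 1.
```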
  • an embodiment of the present application provides a video processing method.
  • the video processing method provided by the embodiments of the present application can be applied to mobile phones, tablet computers, desktop computers, laptop computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable electronic devices, smart watches, and the like.
  • FIG. 1 is a composition example of an electronic device provided in an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an internal memory 120, a camera 130, a display screen 140, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, buttons 190, and so on.
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, combine certain components, split certain components, or have a different arrangement of components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, an intelligent sensor hub and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory, which avoids repeated access and reduces the waiting time of the processor 110, thereby improving system efficiency.
  • the internal memory 120 may be used to store computer-executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 120 .
  • the internal memory 120 may include an area for storing programs and an area for storing data.
  • the program storage area can store an operating system and at least one application program required by a function (such as a sound playing function or an image playing function).
  • the data storage area can store data created during the use of the electronic device 100 (such as audio data and a phonebook).
  • the internal memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 120 and/or instructions stored in a memory provided in the processor.
  • the internal memory 120 stores instructions for executing the video processing method.
  • the processor 110 can execute the instructions stored in the internal memory 120 to control the electronic device to shoot videos in the "one record, multiple results" mode, and obtain the captured video, one or more wonderful photos, and one or more featured videos.
  • the electronic device realizes the display function through the GPU, the display screen 140, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 140 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 140 is used for displaying images, videos and the like.
  • the display screen 140 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device may include 1 or N display screens 140, where N is a positive integer greater than 1.
  • the electronic device shoots a video in the "one record, multiple gets" mode, and obtains the captured video, one or more wonderful photos, and one or more featured videos, which are displayed to the user on the display screen 140 .
  • the electronic device 100 can realize the shooting function through the ISP, the camera 130 , the video codec, the GPU, the display screen 140 and the application processor.
  • the ISP is used to process data fed back by the camera 130 .
  • light is transmitted through the lens to the photosensitive element of the camera, which converts the optical signal into an electrical signal and transmits the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 130 .
  • Camera 130 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
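The YUV-to-RGB conversion the DSP performs can be sketched with the standard full-range BT.601 coefficients. This is an illustrative assumption about the exact color matrix; the patent only states that the DSP converts digital image signals into standard RGB, YUV, and other formats.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample (0..255) to an RGB triple.

    The 1.402 / 0.344136 / 0.714136 / 1.772 coefficients are the standard
    BT.601 values; the patent does not specify which matrix the DSP uses.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel back into the valid 8-bit range.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

For example, a neutral gray sample (Y=128, U=V=128) maps back to gray, since both chroma offsets vanish.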
  • the electronic device 100 may include 1 or N cameras 130 , where N is a positive integer greater than 1.
  • the camera 130 is used to shoot the video mentioned in the embodiment of this application.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves and radiate them through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (wireless local area networks, WLAN) (such as a wireless fidelity (Wireless Fidelity, Wi-Fi) network), Bluetooth (bluetooth, BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (infrared, IR), etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the electronic device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device can listen to music through speaker 170A, or listen to hands-free calls.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device receives a call or a voice message, the user can listen to the voice by placing the receiver 170B close to the ear.
  • the microphone 170C, also called a "mike" or "mic", is used to convert sound signals into electrical signals.
  • the user can speak with the mouth close to the microphone 170C, inputting the sound signal into the microphone 170C.
  • the electronic device may be provided with at least one microphone 170C.
  • the electronic device can be provided with two microphones 170C, which can also implement a noise reduction function in addition to collecting sound signals.
  • the electronic device can also be equipped with three, four or more microphones 170C to realize the collection of sound signals, noise reduction, identification of sound sources, and realization of directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D may be a USB interface, or a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • the pressure sensor 180A can be disposed on the display screen 140 .
  • there are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • a capacitive pressure sensor may be comprised of at least two parallel plates with conductive material.
  • the electronic device detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.
  • the touch sensor 180B is also called “touch device”.
  • the touch sensor 180B can be disposed on the display screen 140 , and the touch sensor 180B and the display screen 140 form a touch screen, also called “touch screen”.
  • the touch sensor 180B is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 140 .
  • the touch sensor 180B may also be disposed on the surface of the electronic device, which is different from the position of the display screen 140 .
  • the pressure sensor 180A and the touch sensor 180B can be used to detect the user's touch operation on the controls, images, icons, videos, etc. displayed on the display screen 140 .
  • the electronic device can respond to the touch operation detected by the pressure sensor 180A and the touch sensor 180B, and execute a corresponding process.
  • for the specific content of the process executed by the electronic device, reference may be made to the following embodiments.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 can be a mechanical key or a touch key.
  • the electronic device can receive key input and generate key signal input related to user settings and function control of the electronic device.
  • the following takes as an example that the electronic device is a mobile phone, a camera application is installed in the mobile phone, and the camera application starts the camera to shoot video, and introduces the video processing method provided by this application in detail.
  • the user can manually enable or disable the "one record multiple" function provided in the embodiments of the present application.
  • the following describes the entry of the "one record, multiple gets" function in conjunction with FIG. 2 .
  • the user can instruct the mobile phone to start the camera application by touching a specific control on the screen of the mobile phone, pressing a specific physical key or combination of keys, inputting voice, gestures in the air, and the like.
  • FIG. 2 shows an implementation manner in which the user starts the camera application.
  • after the mobile phone receives the user's instruction to open the camera, it starts the camera and displays the interface shown in (b) in FIG. 2.
  • the shooting interface shown in (b) in FIG. 2 is the shooting interface when the mobile phone is in the photo-taking mode, and the shooting interface shown in (c) in FIG. 2 is the shooting interface when the mobile phone is in the video recording mode.
  • the shooting interface of the mobile phone includes: a control 201 for turning on or off the flashlight, a control 202 for setting, a switching list 203, a control 204 for displaying the image taken last time, a control 205 for controlling shooting, and a control 206 for switching between the front and rear cameras.
  • control 201 for turning on or off the flashlight is used to control whether to start the flashlight when the camera is shooting.
  • the setting control 202 can be used to set shooting parameters and shooting functions, for example, setting of photo ratio, setting of gesture taking pictures, setting of smiling face capture, setting of video resolution and so on.
  • the switching list 203 includes various modes of the camera, and the user can slide the switching list left and right to realize the switching operation of the various modes of the camera.
  • the switching list shown in (b) of FIG. 2 includes portrait, night scene, photo taking, video recording, and panorama.
  • other modes not shown in (b) in FIG. 2 are hidden; the user can slide the switching list left and right to display the hidden modes.
  • the control 204 for displaying the image taken last time is used to display the thumbnail of the image taken by the camera last time or the thumbnail of the cover of the video.
  • the user can touch the control 204 for displaying the previously captured image, and the display screen displays the previously captured image or video of the camera.
  • the image or video captured by the camera last time refers to: the image or video captured by the camera before the current shooting time and whose shooting time is closest to the current shooting time.
  • the control for controlling shooting 205 is a control provided for the user to start shooting.
  • the user touches once the control 205 for controlling shooting, and the camera can shoot one frame of image.
  • the camera can also capture multiple frames of images, and only select one frame of images for output.
  • the user touches the control 205 for controlling shooting, and the camera starts video recording.
  • the control 206 for switching the front and rear cameras is used to realize the switching operation of multiple cameras of the mobile phone.
  • the mobile phone includes a camera on the same side as the display screen (referred to as the front camera) and a camera on the back shell of the mobile phone (referred to as the rear camera); the control 206 enables switching between the front camera and the rear camera.
  • the user controls the mobile phone to display the setting interface by clicking the setting control 202 .
  • the setting interface may be as shown in (d) in FIG. 2 .
  • the setting interface shown in (d) in FIG. 2 displays an option 207 for enabling "one record, multiple gets", which is used to enable the "one record, multiple gets" function. That is to say, when the user activates this function and the mobile phone shoots video in the video recording mode, the mobile phone automatically adopts the video processing method provided by the embodiments of this application, and automatically generates wonderful images and short videos while shooting the video. Of course, the user can also manually turn off the "one record, multiple gets" function in the video recording mode through the option 207 .
  • the option 207 of "one record, multiple gets" shown in (d) of FIG. 2 is enabled by default. That is to say, when the mobile phone is turned on for the first time, or after a system update adds the "one record, multiple gets" function, the option 207 in the setting interface shown in (d) of FIG. 2 defaults to the enabled state, that is, the "one record, multiple gets" function is activated.
  • the setting interface may also include controls for setting other functions.
  • the controls for photo settings and the controls for video settings are shown in (d) in FIG. 2, wherein the photo setting controls include: a photo ratio setting control, a voice-activated photo setting control, a gesture photo setting control, a smiling face capture setting control, etc.; the video setting controls include: a video resolution setting control, a video frame rate setting control, etc.
  • the user can control the mobile phone to start shooting videos. For example, referring to (a) in FIG. 3 , the user can click the control 301 for controlling shooting to control the mobile phone to start shooting video.
  • the mobile phone starts the camera to shoot video in response to the user's click operation, and the interface shown in (b) in Figure 3 shows a picture of the user using the mobile phone to shoot a football match.
  • the interface shown in (b) of FIG. 3 includes: a stop control 302 , a pause control 303 and a camera key 304 .
  • the user can pause the shooting by clicking the pause control 303 , end the shooting by clicking the stop control 302 , and manually grab a photo by clicking the camera button 304 .
  • the user can click the stop control 302 to end the shooting process at 56 seconds, and a video with a duration of 56 seconds can be obtained.
  • the display screen of the mobile phone can then display the shooting interface, and when the "one record, multiple gets" function of the mobile phone is used for the first time, a prompt guiding the user to the "one record, multiple gets" results can also be displayed on the shooting interface.
  • the shooting interface shown in (d) in FIG. 3 shows a dialog box with the "one record, multiple gets" prompt, which displays the text prompt ""One record, multiple gets" wonderful photos have been generated, and a blockbuster can be created with one click".
  • the user can control the disappearance of the dialog box for generating prompts by clicking on any area of the interface shown in (d) in FIG. 3 .
  • the mobile phone can also be configured to generate a dialog box that generates a reminder for a certain period of time, such as 5 seconds, and then disappear automatically.
  • after the user shoots a video again and clicks the stop control to end the shooting, the shooting interface displayed on the screen of the mobile phone will no longer include the prompt guiding the user to the "one record, multiple gets" results.
  • the mobile phone will capture one or more wonderful images from the video shot by the mobile phone and generate a featured video.
  • the shooting interface shown in (d) in FIG. 3 can display a dialog box with the "one record, multiple gets" prompt, which displays the text prompt ""One record, multiple gets" wonderful photos have been generated, and a blockbuster can be created with one click".
  • the application scenarios for the user to shoot videos may also include the following two application scenarios: the first application scenario: the video shooting time is short, which does not meet the video shooting time requirement of the one-record-multiple-get mode.
  • the second application scenario: the video shooting duration meets the video shooting duration requirement of the "one record, multiple gets" mode, but the mobile phone does not recognize a wonderful image in the captured video.
  • to recognize wonderful images, the user is usually required to shoot for longer than another duration requirement, which is longer than the video shooting duration requirement of the "one record, multiple gets" mode, for example, 30 seconds.
  • if the mobile phone does not identify a wonderful image in the captured video, it can identify images of better quality, and the better-quality images can be used to generate the featured video proposed below.
  • when the mobile phone determines for the first time that it has shot a video that meets the second application scenario, then after the user clicks the stop control shown in (c) in FIG. 4, the mobile phone displays the shooting interface shown in (d) in FIG. 4 once.
  • likewise, when the mobile phone determines for the first time that it has shot a video that meets the video shooting duration requirement of the "one record, multiple gets" mode and in which wonderful images are recognized, then after the user clicks the stop control 302 shown in (c) in FIG. 3, the mobile phone displays the shooting interface shown in (d) in FIG. 3 once.
  • the mobile phone can use the recognition model to automatically identify the wonderful images in the captured video.
  • the mobile phone is provided with a wonderful image recognition model.
  • the recognition model can score the wonderfulness of the images in the input video to obtain the wonderfulness score value of each image in the video.
  • the mobile phone can use the wonderfulness score value of the image to determine the wonderful image in the video. Generally, the greater the wonderfulness score value of an image, the higher the probability that the image is a wonderful image.
  • the scene of the video may also be identified. The video clips of each scene are input into the aforementioned recognition model of wonderful images, and the recognition model scores each image included in the video clips of the scene to obtain the wonderfulness score value of each image. The mobile phone can use the wonderfulness score value of an image to determine the wonderful images in the video; the greater the wonderfulness score value of an image, the higher the probability that the image is a wonderful image.
  • the mobile phone can be configured to obtain a fixed number of highlight images, for example, 5 highlight images. Based on this, the mobile phone selects the 5 images with the highest wonderfulness score values as the wonderful images.
  • alternatively, the mobile phone may not limit the number of wonderful images, and may select every image whose wonderfulness score value is higher than a certain value as a wonderful image.
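The two selection strategies described above (a fixed top-N count, or a score threshold) can be sketched as follows. The function name, the `(frame_index, score)` data shape, and the default of 5 are illustrative assumptions; only the selection logic itself comes from the text.

```python
def select_highlights(scores, max_count=5, threshold=None):
    """Pick highlight frames from per-frame wonderfulness scores.

    scores: list of (frame_index, score) pairs from the recognition model.
    If threshold is None, keep the max_count highest-scoring frames
    (the fixed-number strategy); otherwise keep every frame whose score
    exceeds the threshold (the unlimited-number strategy).
    """
    if threshold is not None:
        picked = [(i, s) for i, s in scores if s > threshold]
    else:
        picked = sorted(scores, key=lambda p: p[1], reverse=True)[:max_count]
    # Return frame indices in shooting order, matching how the thumbnails
    # of the wonderful images are arranged by shooting time.
    return sorted(i for i, _ in picked)
```

With scores `[(0, 0.2), (1, 0.9), (2, 0.5), (3, 0.8)]` and `max_count=2`, frames 1 and 3 are kept and returned in shooting order.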
  • FIG. 5 shows the photo album display interface of the gallery, and the photo album display interface displays all photos and videos saved in the mobile phone in the form of folders.
  • the photo album display interface shown in (a) of FIG. 5 includes: a camera folder 401 , a folder of all photos 402 , a folder of videos 403 and a "one record, multiple gets" folder 404 .
  • the photo album display interface may also include other folders, and this application does not limit the folders displayed on the camera display interface.
  • the camera folder 401 includes all photos and videos taken by the camera of the mobile phone
  • the all photos folder 402 includes all photos and videos saved by the mobile phone
  • the video folder 403 includes all videos saved by the mobile phone.
  • the "one record, multiple gets" folder 404 includes all the wonderful images saved by the mobile phone in the "one record, multiple gets" mode.
  • FIG. 5 depicts the display interface of the "one record, multiple gets" folder, which shows the thumbnails of the 5 wonderful images of the aforementioned 56-second video shot by the mobile phone with "one record, multiple gets"; the 5 wonderful images were automatically captured by the mobile phone from the 56-second video it shot.
  • the user can view the video and the wonderful images of the video through the gallery application.
  • the mobile phone displays the detailed interface of the video 502 on the display screen of the mobile phone.
  • the video's exclusive corner mark is used to indicate to the user that the video 502 corresponding to the cover thumbnail was captured by the mobile phone using the "one record, multiple gets" function.
  • the cover thumbnail of the video 502 shown in (b) in FIG. 6 is different in size from the thumbnails of image A and image B.
  • Image A and image B may refer to images captured by the mobile phone without the one-record-multiple function enabled.
  • the cover thumbnail of the video 502 shown in (b) in FIG. 6 is larger than the thumbnails of image A and image B.
  • the display screen of the mobile phone displays the details interface of the video 502 (also called the browsing interface of the video 502) for the first time, and the details interface shows a mask guide.
  • the display screen of the mobile phone does not display the details interface of the video 502 for the first time, and no mask is displayed on the details interface.
  • the detailed interface of the video 502 with masked guidance is shown in (c) in FIG. 6
  • the detailed interface of the video 502 without masked guidance is shown in FIG. 6 (d).
  • the video 502 can be understood as the video captured by the mobile phone using the one-record-multiple mode for the first time.
  • displaying the details interface of the video 502 on the display screen of the mobile phone for the first time can be understood as the mobile phone displaying, for the first time, the details interface of a video captured in the "one record, multiple gets" mode.
  • the video details interface will display the mask guide.
  • the mobile phone shows the video captured by the mobile phone using the one-record-multiple-get mode for the first time, and the mask guide is displayed on the video detail interface, which can serve as a reminder to the user of the one-record multiple-get function.
  • the detailed interface of the video 502 shown in (c) of FIG. 6 includes: a thumbnail area 504 of the highlight image of the video 502 , a control 505 , and a play control 506 .
  • the thumbnail area 504 of the highlight image of the video 502 is exposed without being covered by the mask, and other areas of the display screen are covered by the mask.
  • the thumbnail area 504 of the highlights of the video 502 includes: a thumbnail of the cover of the video 502 and thumbnails of multiple highlights of the video 502 .
  • the thumbnail of the cover of the video 502 is usually at the first place, and the thumbnails of multiple wonderful images can be arranged according to the shooting time of the wonderful images, and are located after the thumbnail of the cover of the video 502 .
  • the wonderful image can be as mentioned above, when the mobile phone shoots a video, it automatically recognizes the picture of the wonderful moment contained in the video, and extracts it from the video.
  • the details interface of the video 502 also includes a reminder dialog box, which displays the words ""One record, multiple gets" has intelligently captured multiple wonderful moments for you".
  • the reminder dialog box is usually located, as shown in (c) in FIG. 6 , above the thumbnail area 504 of the highlight images of the video 502, and is used to prompt the user about the content displayed in the thumbnail area 504, so as to guide the user to view the wonderful images of the video 502.
  • the text and setting position displayed in the reminder dialog box shown in (c) in FIG. 6 is an exemplary display, and does not constitute a limitation on the reminder dialog box.
  • the user can control the disappearance of the reminder dialog box by clicking any area of the detail interface of the video 502 shown in (c) in FIG. 6 .
  • the mobile phone can also be configured to display a reminder dialog box for a certain period of time, such as 5 seconds, and then disappear automatically.
  • the control 505 is used to generate featured videos based on the wonderful images of the video 502 .
  • the playback control 506 is used to control the playback of the video 502 .
  • the playback control 506 includes: a start or stop control 507 , a slidable progress bar 508 and a speaker control 509 .
  • the start or stop control 507 is used to control the playing of the video 502 or to stop the playing of the video 502; the speaker control 509 is used to choose whether to play the video 502 silently.
  • the slidable progress bar 508 is used to display the playing progress of the video 502 , and the user can also adjust the playing progress of the video 502 by dragging the circular control on the progress bar left and right.
  • the details interface of the video 502 also includes options such as Share, Favorite, Edit, Delete, and More. If the user clicks Share, the video 502 can be shared; if the user clicks Favorite, the video 502 can be stored in a folder; if the user clicks Edit, the video 502 can be edited; if the user clicks Delete, the video 502 can be deleted; if the user clicks More, other operations on the video 502 can be accessed (such as moving, copying, adding notes, hiding, renaming, etc.).
  • the details interface of the video 502 also includes shooting information of the video 502 , generally shown in (c) or (d) in FIG. 6 , located above the video 502 .
  • the shooting information of this video 502 includes: the shooting date, shooting time and shooting address of the video 502.
  • the detailed interface of the video 502 may further include a circular control, and the circular control is filled with the letter "i".
  • the mobile phone can respond to the user's click operation and display the attribute information of the video 502 on the details interface of the video 502. The attribute information can include the storage path of the video 502, the resolution, the configuration information of the camera at the time of shooting, etc.
  • the mask shown in (c) in FIG. 6 is a kind of mask layer.
  • a mask layer generally refers to a layer mask, which is like covering the interface layer displayed on the display screen with a sheet of glass. The glass sheet layer can be transparent, translucent, or completely opaque.
  • the semi-transparent and completely opaque covering layer can block the light of the display screen, so that the interface displayed on the display screen is vaguely visible or completely invisible to the user.
  • the mask layer shown in (c) of Figure 6 can be understood as a translucent glass sheet layer.
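The translucent "glass sheet" effect described above amounts to a per-pixel alpha blend of a mask color over the interface. The function name, the black mask color, and the 0.5 alpha value below are illustrative assumptions; the patent only describes transparent, translucent, and opaque mask layers.

```python
def apply_mask(pixel, mask_color=(0, 0, 0), alpha=0.5):
    """Blend a semi-transparent mask color over one RGB pixel.

    alpha = 0 leaves the interface fully visible (a transparent glass
    sheet), alpha = 1 hides it completely (an opaque glass sheet), and
    values in between give the dimmed, translucent look of the mask guide.
    """
    return tuple(
        int(round(alpha * m + (1 - alpha) * p))
        for p, m in zip(pixel, mask_color)
    )
```

For example, blending a black mask at alpha 0.5 over the pixel (200, 100, 50) halves each channel to (100, 50, 25), which is the dimming effect seen behind the exposed thumbnail area 504.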
  • the mask guide shown in (c) of FIG. 6 is an exemplary display, and does not constitute a limitation of the mask guide for the first time displaying a detailed interface of a multi-recorded video.
  • the mask guide can also be set as a guide with a mask and other special effects, such as a guide with a mask and bubbles.
  • the user can control the disappearance of the mask by inputting an operation in any area of the detail interface of the video 502 shown in (c) of FIG. 6 .
  • the mobile phone can also be set to display the mask for a certain period of time, for example, to disappear automatically after 3 seconds. After the mask and the reminder dialog box disappear from the details interface of the video 502 shown in (c) of FIG. 6 , the details interface of the video 502 is as shown in (d) of FIG. 6 .
  • while the mask is displayed, the video 502 is in a static state and is not played. After the mask disappears, the video 502 can be played automatically, usually silently. Of course, the user can control the mobile phone to play the video 502 with sound by clicking the speaker control 509 shown in (d) of FIG. 6 .
  • the user can also perform a left-right sliding operation or a click operation on the thumbnail area 504 of the highlight image of the video 502 shown in (d) of FIG. 6 .
  • when the user clicks the thumbnail of a wonderful image in the thumbnail area 504, the mobile phone responds to the user's click operation and displays the clicked wonderful image on the display screen in place of the video 502 shown in (d) in FIG. 6 .
  • the mobile phone may also respond to the user's sliding operation and display exciting images in the thumbnail area 504 on the display screen following the user's sliding direction.
  • the images corresponding to the thumbnails of the wonderful images displayed in the thumbnail area 504 are not saved in the gallery, but in the "one record, multiple gets" album. That is to say, the interface shown in (b) of FIG. 6 contains no thumbnails corresponding to the wonderful images, but when the user clicks the thumbnail of the video 502 to enter the details interface of the video 502, the thumbnails of the wonderful images associated with the video 502 can be displayed below the details interface of the video 502.
  • the user can also input a leftward or rightward sliding operation on the video 502 shown in (d) of FIG. 6 .
  • the user inputs a right slide operation on the video 502 shown in (d) of FIG. 6
  • the mobile phone displays the next video or image of the video 502 saved in the gallery on the display screen.
  • the user inputs a leftward slide operation on the video 502 shown in (d) of FIG. 6
  • the mobile phone displays the previous video or image of the video 502 saved in the gallery on the display screen.
  • the previous video or image refers to the video or image whose shooting time is earlier than that of the video 502 and closest to it;
  • the next video or image refers to the video or image whose shooting time is later than that of the video 502 and closest to it.
  • if the video 502 shot by the user is the video of the first application scenario mentioned above, the detail interface of the video 502 will differ from that in FIG. 6 (c) and FIG. 6 (d). The difference lies in that the detail interface of the video 502 does not include the control 505. If the video 502 shot by the user is the video of the second application scenario mentioned above, the detail interface of the video 502 will also differ from that in FIG. 6 (c) and FIG. 6 (d). The difference lies in that the detail interface of the video 502 does not include the thumbnail area 504.
  • the mobile phone can obtain the captured video and, in addition to one or more highlight images in the video, the mobile phone can also generate a configuration file, which can include the tag (TAG) of the video.
  • the cell phone may obtain tag data, which includes the tag (TAG) of the video.
  • the tag data can be added to the video, usually at the video header.
  • the following content is introduced by taking the video and the configuration file of the video obtained by the mobile phone as an example.
  • if the tag of the video is stored in the video file in the form of tag data,
  • then 'obtaining the configuration file of the video' in the following content can be read as 'obtaining the tag data of the video'.
  • the tag (TAG) of the video may be set based on the hierarchical information of the video.
  • the video level information may include: first level information LV0, second level information LV1, third level information LV2, and fourth level information LV3. in:
  • the first level information LV0 is used to represent the subject category of the video, and is used to give the style or atmosphere TAG of the entire video.
  • the second level information LV1 is used to represent the scene of the video, and is used to give the scene TAG of the video.
  • the third level information LV2 is used to represent the change of the scene of the video, which can also be understood as the change of the transition scene.
  • the third level information LV2 can give the transition position of the video (for example, the frame number where the transition occurs) and the transition type (protagonist switching, fast camera movement, scene category change, or image content change caused by other situations), so as to prevent too many recommendations for similar scenes.
  • LV2 information is used to represent video scene changes (or simply, transitions), including but not limited to one or more of the following changes: a change of the character subject (or protagonist), a major change in image content composition, a semantic-layer scene change, and a change in image brightness or color.
  • the mobile phone can use the third-level information LV2 to add a sub-mirror TAG to the video when a scene change occurs in the video.
  • the fourth level information LV3 is used to represent the wonderful moment, that is, the shooting moment of the wonderful image, and is used to give the wonderful image TAG of the video.
  • the first level of information LV0, the second level of information LV1, the third level of information LV2 and the fourth level of information LV3 provide decision information in order of granularity from coarse to fine, so as to identify wonderful images in the video and generate selected videos.
  • Table 1 gives an example of the definition of LV0 and LV1.
  • Table 1 pairs each theme category (LV0) with examples of its scene category (LV1):
    figure: characters, etc.
    gourmet food: gourmet, etc.
    ancient building: ancient buildings, etc.
    night view: fireworks, other nighttime scenery, etc.
    nature: snow scenes, plants, mountains, rivers, etc.
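As an illustration of how the hierarchical information above could be represented, the following sketch collects the LV0 theme, LV1 scenes, LV2 transition positions, and LV3 highlight positions into tag data. The field names and JSON layout are assumptions for illustration, not the actual on-device format.

```python
import json

# A minimal sketch of the tag data described above. The field names and JSON
# layout are assumptions for illustration, not the actual on-device format.
def build_tag_data(theme, scenes, transition_frames, highlight_frames):
    return {
        "LV0_theme": theme,                    # overall style/atmosphere, e.g. "travel"
        "LV1_scenes": scenes,                  # per-scene time ranges and categories
        "LV2_transitions": transition_frames,  # frame numbers where transitions occur
        "LV3_highlights": highlight_frames,    # frame numbers of highlight images
    }

tag_data = build_tag_data(
    theme="travel",
    scenes=[
        {"start_s": 0, "end_s": 5, "scene": "other"},
        {"start_s": 5, "end_s": 15, "scene": "character"},
    ],
    transition_frames=[0, 150],
    highlight_frames=[210],
)
# Serialized form that could be written to the video's configuration file,
# or embedded at the video header as tag data.
config_text = json.dumps(tag_data, indent=2)
```

The coarse-to-fine ordering (LV0 before LV3) mirrors the decision flow the text describes: theme first, then scenes, transitions, and finally individual highlight frames.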
  • the mobile phone may utilize the captured video and the profile of the captured video to generate a featured video of the captured video.
  • when the phone is able to identify highlight images from the captured video, the featured video includes the highlight images of the captured video, along with some special effects and music.
  • if the mobile phone fails to recognize highlight images from the captured video but can recognize good-quality images, the mobile phone uses the good-quality images to generate the selected video.
  • the featured video also has some special effects and music.
  • the configuration file or tag data of the video will also include the TAG of the image with good quality.
  • the image with good quality mentioned in this application refers to: the image is relatively clear, for example, the resolution is relatively high; or the image is relatively complete.
  • the special effects mentioned in this application refer to effects that are supported by materials and can be presented after being added to video frames, such as animation effects like snowflakes and fireworks, as well as filters, stickers, frames, etc.
  • special effects may also be referred to as styles or style themes and the like.
  • the following content of this application is introduced by taking the mobile phone as an example to generate a selected video by using wonderful images.
  • the mobile phone responds to the user's click operation and generates a selected video of a certain duration.
  • the featured video 602 is being played.
  • the display interface of the selected video 602 is as shown in (c) of FIG.
  • the video style may be a filter, that is, the selected video 602 is color-graded by applying a filter.
  • a filter is a kind of video special effect, and is used to realize various special effects of the selected video 602 .
  • the video style may also be video effects such as fast playback and slow playback.
  • the video style may also refer to various themes, and different themes include their corresponding filters, music and other content.
  • the interface displays multiple soundtrack controls, and the user can click any soundtrack control to select a soundtrack for the selected video 602, such as soothing, romantic, warm, cozy, or quiet, so as to add a soundtrack to the selected video 602.
  • the user can input editing operations such as clipping, splitting, volume adjustment, and frame adjustment on the selected video 602 in the editing interface.
  • Save control 604 is used to save featured video 602 .
  • the method for generating a selected video by a mobile phone provided in an embodiment of the present application, as shown in FIG. 8 includes the following steps:
  • when the mobile phone uses the 'one record, many gets' mode to shoot a video, the mobile phone identifies the content of the captured video and determines the level information of the video.
  • the mobile phone can also use the level information of the video to set the theme TAG, scene TAG, storyboard TAG, and highlight image TAG for the video, and write these TAGs into the configuration file of the video.
  • the phone can save the captured video and the profile of that video.
  • the theme TAG is used to represent the style or atmosphere of the video.
  • the 60-second video shot by the mobile phone contains tn frames of images, where tn is an integer.
  • the mobile phone can continuously recognize the style or atmosphere of the video to determine the subject of the video.
  • the mobile phone can call a recognition algorithm to recognize the image of the video, so as to determine the style or atmosphere of the video.
  • the mobile phone can also use the recognition algorithm to identify the image of the video to determine the style or atmosphere of the video.
  • the mobile phone invokes the recognition algorithm to identify the image of the video, and determines that the video belongs to the theme of travel.
  • the scene TAG is used to characterize the scene of the video.
  • the video is divided into 6 scenes.
  • the first scene contains the video sub-segment from 0 to 5 seconds and belongs to the 'other' scene category;
  • the second scene contains the video sub-segment from 5 to 15 seconds and belongs to the character scene;
  • the third scene contains the video sub-segment from 15 to 25 seconds and belongs to the ancient building scene;
  • the fourth scene contains the video sub-segment from 25 to 45 seconds and belongs to the character and ancient building scenes;
  • the fifth scene contains the video sub-segment from 45 to 55 seconds and belongs to the mountain scene;
  • the sixth scene contains the video sub-segment from 55 to 60 seconds and belongs to the character scene.
  • the storyboard TAG is used to indicate the position of each transition scene in the captured video.
  • the video includes 6 storyboard TAGs: the first indicates that the first scene starts at 0 seconds; the second indicates that the second scene starts at 5 seconds; the third indicates that the third scene starts at 15 seconds; the fourth indicates that the fourth scene starts at 25 seconds; the fifth indicates that the fifth scene starts at 45 seconds; and the sixth indicates that the sixth scene starts at 55 seconds.
  • the highlight image TAG is used to represent the position of the highlight image in the captured video.
  • the video includes 5 great images.
  • the user can click on the "Ai One-Key Blockbuster" control to input instructions to the mobile phone to generate selected videos.
  • the mobile phone receives the user's click operation (also referred to as the first operation), and in response to the click operation, obtains the video and the configuration file of the video stored in the mobile phone.
  • the mobile phone stores multiple style templates and music, and the style template can contain multiple special effects.
  • the editing engine is used to determine the style template and music by using the theme TAG of the video.
  • the style template and music are used to synthesize the featured video of the resulting video.
  • the editing engine belongs to a kind of service or application, and can be set in the application layer, application framework layer or system library of the software framework of the mobile phone.
  • Clipping engine is used to generate featured videos of videos.
  • the mobile phone stores the corresponding relationship between theme TAG, style template, and music, and the mobile phone can determine the style template and music corresponding to the theme TAG based on the theme TAG of the video.
  • the mobile phone can also randomly select among the style templates and music stored in the mobile phone according to the theme TAG of the video. It should be noted that, for the same video, after the selected video is generated, if the user edits the selected video and adjusts the style template or music, the mobile phone selects a different style template or music, so that for the same video the special effects of the selected videos generated by the mobile phone differ each time.
  • step S702 can be understood as 1. template selection and 2. music selection shown in FIG. 9 .
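The template and music selection described above can be sketched as follows. The candidate pools, names, and the no-repeat rule are illustrative assumptions, not the phone's actual correspondence table:

```python
import random

# Hypothetical candidate pools keyed by theme TAG; the real correspondence
# table stored on the phone is not specified in the text.
TEMPLATES = {"travel": ["sunny", "vintage", "roadtrip"]}
MUSIC = {"travel": ["soothing", "upbeat", "quiet"]}

def pick_style_and_music(theme, previous=None):
    """Pick a (template, music) pair for a theme. Avoiding the previously
    used pair models the behavior above: regenerating the selected video
    for the same source video yields different special effects."""
    options = [(t, m) for t in TEMPLATES[theme] for m in MUSIC[theme]]
    if previous in options and len(options) > 1:
        options.remove(previous)
    return random.choice(options)

first = pick_style_and_music("travel")
second = pick_style_and_music("travel", previous=first)
assert second != first  # the regenerated video gets different effects
```

Excluding only the previous pair is the simplest policy that satisfies "different each time"; a real implementation could instead keep a history of all pairs already used for the video.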
  • the shot TAG of the video can indicate the position of the transition scene in the captured video. Therefore, the editing engine can determine the scenes included in the video based on the shot TAG.
  • the editing engine uses the Shot TAG to divide the video into multiple Shot Segments according to the scene.
  • the editing engine does not actually split the video into storyboards, but marks the video as the storyboards of each scene by tagging the video according to the storyboard TAG.
  • the video includes 6 storyboard TAGs.
  • the storyboard clips in the video include: the first scene's storyboard clip from 0 to 5 seconds, the second scene's storyboard clip from 5 to 15 seconds, the third scene's storyboard clip from 15 to 25 seconds, the fourth scene's storyboard clip from 25 to 45 seconds, the fifth scene's storyboard clip from 45 to 55 seconds, and the sixth scene's storyboard clip from 55 to 60 seconds.
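The division into storyboard segments from the storyboard-TAG start times can be sketched as below. This is illustrative only; as the text notes, the editing engine merely marks these ranges on the video rather than physically splitting the file.

```python
# Illustrative sketch: derive the storyboard segments from the storyboard-TAG
# start times. The editing engine only marks these ranges on the video; it
# does not physically split the file.
def storyboard_segments(tag_starts_s, video_length_s):
    starts = sorted(tag_starts_s)
    ends = starts[1:] + [video_length_s]  # each segment ends where the next begins
    return list(zip(starts, ends))

# The 60-second example above, with scenes starting at 0, 5, 15, 25, 45, 55 s:
segments = storyboard_segments([0, 5, 15, 25, 45, 55], 60)
# → [(0, 5), (5, 15), (15, 25), (25, 45), (45, 55), (55, 60)]
```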
  • the highlight image TAG is used to indicate the position of the highlight image in the captured video, therefore, the position of the highlight image in the shot clip can be determined according to the TAG of the highlight image.
  • the storyboard segment of a scene may include one or more highlight images.
  • the video shot by the mobile phone includes 5 highlight images: the first highlight image belongs to the storyboard segment of the second scene, the second highlight image belongs to the storyboard segment of the third scene, the third highlight image belongs to the storyboard segment of the fourth scene, the fourth highlight image belongs to the storyboard segment of the fifth scene, and the fifth highlight image belongs to the storyboard segment of the sixth scene.
  • a storyboard segment of a scene includes at least one highlight image; therefore, in the storyboard segment of the scene to which a highlight image belongs, the editing engine obtains the several frames before and the several frames after the highlight image.
  • for example, the first 5 frames and the last 5 frames of the highlight image may be acquired. The editing engine uses the acquired images as the associated images of the highlight image.
  • for example, the first 5 frames and the last 5 frames of the first highlight image are obtained from the storyboard clip of the second scene; the first 5 frames and the last 5 frames of the second highlight image are obtained from the storyboard clip of the third scene; the first 5 frames and the last 5 frames of the third highlight image are obtained from the storyboard clip of the fourth scene; and likewise for the fourth and fifth highlight images from the storyboard clips of the fifth and sixth scenes.
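A minimal sketch of collecting a highlight image's associated frames follows. The frame indices and the 5-frame margin are assumptions taken from the example above; clamping to the storyboard segment keeps associated frames from crossing a transition.

```python
# Illustrative sketch: gather the frames before and after a highlight image as
# its associated images, clamped to the storyboard segment of the scene the
# highlight belongs to so that associated frames never cross a transition.
# Frame indices and the 5-frame margin are assumptions from the example above.
def associated_frames(highlight_frame, segment_start, segment_end, margin=5):
    lo = max(segment_start, highlight_frame - margin)
    hi = min(segment_end, highlight_frame + margin)
    return [f for f in range(lo, hi + 1) if f != highlight_frame]

# A highlight at frame 300 inside a segment spanning frames 250..400:
assoc = associated_frames(300, 250, 400)
# → frames 295..305 except 300, i.e. 10 associated frames
```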
  • step S703 and step S705 can be understood as 3. Segment selection shown in FIG. 9 .
  • a highlight image and its associated images can be understood as being combined to form a small segment. After the small segments obtained by combining each highlight image with its associated images are obtained, as shown in FIG. 9 , 4. content deduplication and segment discretization can be performed.
  • content deduplication can be understood as: among the small segments obtained by combining the highlight images and their associated images, when several small segments belong to the same content, only one of them is kept.
  • transition images may be inserted between images contained in the small segment.
  • the inserted transition images may be basically the same as the image content in the small segment, and follow the transition from the several frames before the highlight image, to the highlight image, and then to the several frames after the highlight image.
  • each highlight image and its associated images may be combined into a small segment, and in the segment: the highlight images and their associated images are arranged in sequence according to the shooting time.
  • the video clips are obtained by splicing small clips according to the order of shooting time.
  • step S706 can be understood as 5. Fragment splicing strategy shown in FIG. 9 .
  • the rhythm point information of the music determined in step S702 may be consulted to ensure that the spliced video clips match the rhythm points of the music.
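The splicing strategy, ordering small clips by shooting time and snapping cut positions to the music's rhythm points, might be sketched as follows. The function name and data shapes are assumptions; the real editing engine's interface is not specified:

```python
import bisect

# Illustrative sketch of the splicing strategy: small clips are ordered by
# shooting time, and each running cut position is snapped to the nearest music
# rhythm point. Function name and data shapes are assumptions.
def splice(clips, beat_points_s):
    """clips: (start_s, end_s) small segments in any order;
    beat_points_s: sorted, non-empty rhythm points of the chosen music.
    Returns the cut positions of the spliced video, beat-aligned."""
    cuts = []
    t = 0.0
    for start, end in sorted(clips):  # shooting-time order
        t += end - start
        i = bisect.bisect_left(beat_points_s, t)
        candidates = beat_points_s[max(0, i - 1):i + 1]
        t = min(candidates, key=lambda b: abs(b - t))  # snap to nearest beat
        cuts.append(t)
    return cuts

cuts = splice([(0, 1.4), (5, 6.1)], [1, 2, 3, 4, 5, 6])
# → [1, 2]: each clip boundary lands on a rhythm point
```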
  • step S707 can be understood as 6. Synthesis shown in FIG. 8 , wherein the synthesized video is the selected video.
  • after the selected video is generated using the aforementioned content, the selected video can be saved.
  • the mobile phone saves the featured video 602 in response to the user's click operation.
  • the mobile phone stores featured videos 602 in the gallery.
  • the featured video 602 can be saved along with the original video 502 of the featured video 602, that is, the featured video 602 and the original video 502 are stored in the same storage area.
  • the featured video 602 may not be saved along with the original video 502 of the featured video 602, that is, the featured video 602 and the original video 502 are stored in different storage areas.
  • the original video 502 refers to the video shot by the user, and the selected video 602 is from the original video 502 .
  • the mobile phone's display screen can also display a detailed interface of the selected video 602 .
  • the detail interface of featured video 602 is shown in (b) of FIG. 11 .
  • the featured video 602 can be played automatically.
  • the detail interface of the featured video 602 is basically the same as that of the video 502 shown in (d) of FIG. 6 .
  • the mobile phone can generate the featured video 602 without saving it to the internal memory; only after the user clicks the save control 604 shown in (a) in FIG. 11 does the mobile phone save the generated selected video 602 to the internal memory.
  • when shooting a video, the user may also switch to the photo mode to control the mobile phone to take pictures. Based on this, when the mobile phone detects the above behavior of the user, it needs to guide the user to understand the 'one record, many gets' function of the mobile phone.
  • FIG. 12 shows a video shooting interface of a mobile phone.
  • the user wants to take pictures.
  • the user can click the stop control 1101 on the video shooting interface to stop shooting.
  • the mobile phone saves the captured video and displays the shooting interface of the mobile phone in video recording mode as shown in (b) in FIG. 12 .
  • the user clicks "photograph", and the mobile phone enters the shooting interface of the shooting mode as shown in (c) in Figure 12 in response to the user's click operation.
  • the mobile phone detects the user's above-mentioned operations, and when the user controls the mobile phone to shoot a video, it can display a guide prompt with multiple recordings on the video shooting interface.
  • the video shooting interface of the mobile phone shown in (d) in FIG. 12 can display a guide prompt for 'one record, many gets', which prompts the user that after the video is shot, wonderful photos will be generated automatically and a blockbuster can be created with one click. It can also prompt the user to manually click the control to capture photos during the process of shooting the video.
  • when the mobile phone determines to shoot a video in the 'one record, many gets' mode, the mobile phone can detect whether the user performs the operation of ending the video shooting, immediately taking a photo, and then shooting a video again after the photo is taken. If the mobile phone detects this operation, then when the user finishes taking the photo and shoots a video again, the video shooting interface displayed on the mobile phone's display screen will display the 'one record, many gets' guide prompt.
  • if, when shooting a video in the 'one record, many gets' mode, the mobile phone does not detect that the user ends the video shooting, immediately takes a photo, and then shoots a video again after the photo is taken, the mobile phone can respond to the user's operations according to the normal process.
  • the mobile phone detects whether the user ends the video shooting, immediately takes a photo, and then shoots a video again after the photo is taken.
  • the method can be as follows:
  • the mobile phone traverses its processes, recognizes the state changes of the processes, and uses the results of these state changes to determine whether the user performs the operation of ending the video shooting, immediately taking a photo, and then shooting a video again after the photo is taken.
  • if the mobile phone determines that the video recording process is in the running state and is then closed within a certain period of time, such as 10 seconds,
  • and the photographing process is started and runs, and within a certain period of time after the photographing process is closed, such as 10 seconds, the video recording process is started again and is in the running state, this means that the user ended the video shooting, immediately took a photo, and then shot a video again after the photo was taken.
  • the mobile phone can distinguish different mobile phone processes through process identifiers.
  • the video recording process has a video recording process identifier
  • the photographing process has a photographing process identifier.
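The record-photo-record detection described above might look like the following sketch. The event log shape, process identifiers, and 10-second window are assumptions based on the description:

```python
# Sketch of the detection logic described above: the phone decides the user
# "stopped recording, immediately took a photo, then recorded again" when the
# recording process stops, the photographing process runs and closes, and a
# new recording process starts within the time window. Event names are assumed.
WINDOW_S = 10

def detect_record_photo_record(events):
    """events: list of (timestamp_s, process_id, state) with state in
    {"started", "stopped"}; process ids "record" / "photo" stand in for the
    video recording and photographing process identifiers."""
    for i, (t0, pid, state) in enumerate(events):
        if pid != "record" or state != "stopped":
            continue
        photo_done = None
        for t1, pid1, s1 in events[i + 1:]:
            if pid1 == "photo" and s1 == "stopped" and t1 - t0 <= WINDOW_S:
                photo_done = t1  # photo finished soon after recording stopped
            if (pid1 == "record" and s1 == "started"
                    and photo_done is not None and t1 - photo_done <= WINDOW_S):
                return True  # recording restarted soon after the photo
    return False

log = [(0, "record", "stopped"), (2, "photo", "started"),
       (4, "photo", "stopped"), (7, "record", "started")]
assert detect_record_photo_record(log)
```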
  • when the mobile phone uses the 'one record, many gets' mode to shoot a video, it also supports the manual capture function.
  • the video shooting interface shown in (a) of FIG. 13 shows a picture during a football match.
  • the user can click the camera button 1201 to perform a manual capture operation.
  • the mobile phone calls the camera to take a picture, and saves the image obtained by taking the picture to a gallery.
  • thumbnails of images 1202 manually captured by the mobile phone during video shooting are displayed.
  • the mobile phone uses the one-record-multiple-record mode to shoot video
  • the user initiates manual capture to capture images, indicating that the images captured manually by the user are wonderful images that the user thinks are more memorable.
  • the mobile phone can also save it as a highlight image in the 'one record, many gets' folder.
  • the mobile phone can save the manually captured images in the 'one record, many gets' folder in place of some of the highlight images recognized by the mobile phone.
  • the mobile phone can use the recognition model to score the wonderfulness of the captured video images, and use the wonderfulness score to determine the wonderful images in the video.
  • when the user uses the manual capture function to capture images, the mobile phone can discard, in ascending order of wonderfulness score, a number of highlight images equal to the number of captured images. The mobile phone then saves the remaining highlight images and the manually captured images, as the updated highlight images, in the 'one record, many gets' folder.
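The update rule above, keeping the manual captures and discarding the lowest-scoring auto-recognized highlights, can be sketched as follows (image IDs and scores are illustrative):

```python
# Sketch: the folder keeps a fixed number of highlight images. When the user
# manually captures n images, the n lowest-scoring auto-recognized highlights
# are discarded and the manual captures are kept. IDs and scores illustrative.
def update_highlights(auto_scored, manual, capacity):
    """auto_scored: list of (image_id, wonderfulness_score);
    manual: list of image ids captured by hand (always kept)."""
    keep_auto = max(0, capacity - len(manual))
    # discard in ascending order of score, i.e. keep the highest-scored ones
    kept = sorted(auto_scored, key=lambda x: x[1], reverse=True)[:keep_auto]
    return [img for img, _ in kept] + manual

folder = update_highlights(
    auto_scored=[("a", 0.9), ("b", 0.4), ("c", 0.7), ("d", 0.6), ("e", 0.8)],
    manual=["m1"],
    capacity=5,
)
# "b" (lowest score) is dropped; the folder holds a, e, c, d, m1
```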
  • the mobile phone is configured to save 5 highlight images in the 'one record, many gets' folder.
  • the user activates the manual capture function to capture an image.
  • (c) of FIG. 13 shows the 'one record, many gets' folder, which includes: the thumbnails of the 4 highlight images recognized by the mobile phone from the captured video, and the thumbnail of the image 1202 manually captured by the user.
  • the thumbnails of the highlight images in the 'one record, many gets' folder shown in (c) of FIG. 13 can be arranged in the order of the shooting times of the highlight images. That is, the thumbnail of an image shot earlier by the mobile phone is positioned earlier, and the thumbnail of an image shot later is positioned later.
  • this sorting method does not limit the sorting of image thumbnails in the one-record folder.
  • FIG. 13 shows the detailed interface of the video shot by the user.
  • the thumbnails of the wonderful images of the video 502 displayed on the details interface are from the one-record-multiple folder shown in (c) of FIG. 13 . Therefore, the thumbnails of the wonderful images of the video 502 include: the thumbnail of the cover of the video 502, the thumbnails of the 4 wonderful images in the video 502 recognized by the mobile phone, and the thumbnails of the image 1202 manually captured by the user.
  • TAG can also be set on the image to indicate the position of the image in the captured video.
  • the mobile phone can also write the TAG of the manually captured image in the configuration file of the video.
  • the TAG of the image manually captured by the mobile phone is saved in the configuration file of the video, and at the same time, the TAG of the wonderful image discarded by the mobile phone recorded in the configuration file of the video is deleted.
  • FIG. 13 shows the display interface of the selected video 1203 of the video 502.
  • the selected video 1203 is played automatically, and the currently displayed image is the image manually captured by the user.
  • the mobile phone can save the manually captured image as a new wonderful image, together with the wonderful image recognized by the mobile phone, into the one-record-multiple folder.
  • the mobile phone is configured to save 5 highlight images in the 'one record, many gets' folder.
  • the user activates the manual capture function to capture an image.
  • (a) of FIG. 14 shows the 'one record, many gets' folder, which includes: the thumbnails of the 5 highlight images recognized by the mobile phone from the captured video, and the thumbnail of the image 1202 manually captured by the user.
  • the thumbnails of the wonderful images in the one-record-multiple folder shown in (a) in FIG. 14 can be arranged in the order of shooting time of the exciting images, and the thumbnails of the images manually captured by the user are at the last place.
  • this sorting method does not limit the sorting of image thumbnails in the one-record folder.
  • FIG. 14 shows the details interface of the video shot by the user.
  • the thumbnail of the wonderful image of the video 502 displayed on the details interface comes from the one-record-multiple folder shown in (a) of FIG. 14 . Therefore, the thumbnails of the highlights of the video 502 include: the thumbnail of the cover of the video 502, the thumbnails of five highlights in the video 502 recognized by the mobile phone, and the thumbnails of the image 1202 manually captured by the user.
  • TAG can also be set on the image to indicate the position of the image in the captured video.
  • the mobile phone can also write the TAG of the manually captured image in the configuration file of the video.
  • the mobile phone fails to recognize a wonderful image from the captured video, but recognizes a good-quality image.
  • the mobile phone can use the user's manually captured images and high-quality images to generate selected videos.
  • for the method of generating the featured video, refer to the method for generating the featured video provided in the first embodiment above, which will not be described again here.
  • Another embodiment of the present application also provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium; when the instructions are run on a computer or a processor, the computer or the processor executes one or more steps of any one of the above methods.
  • the computer-readable storage medium may be a non-transitory computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, and optical data storage device, among others.
  • Another embodiment of the present application also provides a computer program product including instructions.
  • when the computer program product is run on a computer or a processor, the computer or the processor is made to perform one or more steps of any one of the above methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application provide a video processing method, an electronic device, a program product, and a computer-readable storage medium. The video processing method includes: in response to a first operation, shooting a first video; in response to a second operation, displaying a first interface, where the first interface includes a first area, a second area, and a first control, or includes the first area and the second area, or includes the first area and the first control; the first area is a playback area of the first video, and the second area displays a cover thumbnail of the first video, a thumbnail of a first image, and a thumbnail of a second image, where the first image is an image of the first video at a first moment, the second image is an image of the first video at a second moment, and the recording process of the first video includes the first moment and the second moment; the first control is used to control the electronic device to generate a second video whose duration is shorter than that of the first video. This enables the user to obtain highlight-moment photos while shooting a video.

Description

Video processing method, electronic device, and readable medium
This application claims priority to the Chinese patent application No. 202210187220.4, entitled "Video processing method, electronic device, and readable medium", filed with the China National Intellectual Property Administration on February 28, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of electronic devices, and in particular to a video processing method, an electronic device, a program product, and a computer-readable storage medium.
Background
At present, photographing and video recording have become essential functions of electronic devices, and users' demands for, and expectations of, recording and photographing continue to grow. In some video-shooting scenarios, users expect to capture memorable highlight-moment photos while shooting a video.
Therefore, there is a need for a method of obtaining memorable highlight-moment photos while shooting a video.
Summary
This application provides a video processing method, an electronic device, a program product, and a computer-readable storage medium, with the purpose of enabling a user to obtain highlight-moment photos while shooting a video.
To achieve the above purpose, this application provides the following technical solutions:
In a first aspect, this application provides a video processing method applied to an electronic device. The video processing method includes: in response to a first operation, shooting a first video; in response to a second operation, displaying a first interface, where the first interface is a details interface of the first video; the first interface includes a first area, a second area, and a first control, or the first interface includes the first area and the second area, or the first interface includes the first area and the first control; the first area is a playback area of the first video; the second area displays a cover thumbnail of the first video, a thumbnail of a first image, and a thumbnail of a second image, where the first image is an image of the first video at a first moment, the second image is an image of the first video at a second moment, and the recording process of the first video includes the first moment and the second moment; the first control is used to control the electronic device to generate a second video, the duration of the second video is shorter than that of the first video, and the second video includes at least images of the first video.
It can be seen from the above that when a user shoots a video with the electronic device, the electronic device can obtain the shot first video as well as the first image and the second image, enabling the user to obtain highlight-moment photos while shooting the video.
In a possible implementation, the video processing method further includes: in response to a third operation, displaying a second interface, where the third operation is a touch operation on the first control, and the second interface is a display interface of the second video.
In this possible implementation, after the user shoots a video, the electronic device obtains the first video, the first image, and the second image, and can also obtain and display the second video. The duration of the second video is shorter than that of the first video, and the second video contains images of the first video, so that the user obtains highlight-moment photos while shooting the video, and further obtains a short video of the first video for easy sharing.
In a possible implementation, displaying the first interface in response to the second operation includes: in response to a fourth operation, displaying a third interface, where the third interface is an interface of the gallery application and includes the cover thumbnail of the first video; and, in response to a touch operation on the cover thumbnail of the first video, displaying the first interface.
In a possible implementation, displaying the first interface in response to the second operation includes: in response to a touch operation on a second control, displaying the first interface, where the shooting interface of the electronic device includes the second control, and the second control is used to display the previously shot image or video.
In a possible implementation, the cover thumbnail of the first video includes a first identifier, and the first identifier is used to indicate that the first video was shot by the electronic device in the 'one record, many gets' mode.
In a possible implementation, a mask is displayed on the first interface, and the second area is not covered by the mask.
In a possible implementation, the first interface further includes a first dialog box, which is used to prompt the user that the first image and the second image have been generated; the first dialog box is not covered by the mask.
In a possible implementation, after shooting the first video in response to the first operation, the video processing method further includes: in response to a fifth operation, displaying the shooting interface of the electronic device, where the shooting interface includes a second dialog box used to prompt the user that the first video and the second video have been generated.
In a possible implementation, before shooting the first video in response to the first operation, the video processing method further includes: in response to a sixth operation, displaying a fourth interface, where the fourth interface is a settings interface for shooting and includes a 'one record, many gets' option and a text segment; the option is used to turn the 'one record, many gets' function of the electronic device on or off, and the text segment is used to describe the function.
In a possible implementation, after shooting the first video in response to the first operation, the video processing method further includes: in response to a seventh operation, displaying a fifth interface, where the fifth interface is an interface of the gallery application and includes a first folder and a second folder; the first folder includes images and videos saved by the electronic device, and the second folder includes the first image and the second image; and, in response to an eighth operation, displaying a sixth interface that includes the thumbnail of the first image and the thumbnail of the second image, where the eighth operation is a touch operation on the second folder.
In a possible implementation, after displaying the second interface in response to the third operation, the video processing method further includes: in response to a ninth operation, displaying a seventh interface, where the seventh interface is a details interface of the second video, the ninth operation is a touch operation on a third control included in the second interface, and the third control is used to control saving of the second video.
In a possible implementation, the video processing method further includes: in response to a tenth operation, displaying an eighth interface, where the eighth interface is an interface of the gallery application and includes the cover thumbnail of the second video and the cover thumbnail of the first video.
In a possible implementation, after shooting the first video in response to the first operation, the video processing method further includes: in response to an eleventh operation, displaying a first shooting interface of the electronic device, where the first shooting interface includes a first option indicating the photo mode and a second option indicating the video mode; in response to an operation on a fourth control of the shooting interface, displaying the first shooting interface of the electronic device, where the fourth control is used to start taking a photo; and, in response to an operation on the second option, displaying a second shooting interface of the electronic device, where the second shooting interface includes a third dialog box used to describe the 'one record, many gets' function to the user.
In this possible implementation, if, after shooting a video, the user also controls the electronic device to take a photo by touching the fourth control, then when the electronic device re-enters its shooting interface to shoot a video, the third dialog box can be displayed on the shooting interface to remind the user that the electronic device is configured with the 'one record, many gets' function.
In a possible implementation, during the shooting of the first video in response to the first operation, the video processing method further includes: in response to a twelfth operation, shooting and saving a third image, where the twelfth operation is a touch operation on the photo button of the video shooting interface of the electronic device.
In this possible implementation, while shooting a video, the electronic device also shoots and saves a third image in response to the twelfth operation, thereby providing the electronic device with a snapshot-capture function during video shooting.
In a possible implementation, the second area further displays a thumbnail of the third image, and the second video includes the third image.
In this possible implementation, an image manually captured by the electronic device can be used to obtain the second video, so that the image captured by the user becomes an image in the second video.
In a possible implementation, the second area displaying the cover thumbnail of the first video, the thumbnail of the first image, and the thumbnail of the second image includes: the second area displaying the cover thumbnail of the first video, the thumbnail of the first image, and the thumbnail of the third image; and the second video includes at least the first image and the third image.
In a possible implementation, the generation of the second video includes: obtaining the first video and tag data of the first video, where the tag data includes a theme TAG, storyboard TAGs, a first-image TAG, and a second-image TAG of the first video; determining a style template and music based on the theme TAG of the first video, where the style template includes at least one special effect; based on the storyboard TAGs, the first-image TAG, and the second-image TAG, obtaining from the first video multiple frames before and after the first image and multiple frames before and after the second image; and synthesizing the special effects of the style template, the music, and target images to obtain the second video, where the target images include at least the first image and the multiple frames before and after it.
第二方面,本申请提供了一种应用于电子设备的视频处理方法,该视频处理方法包括:响应于第一操作,显示第一界面并开始拍摄第一视频,第一界面为拍摄第一视频时的预览界面,第一界面中包括第一控件,第一控件用于拍摄图像;响应于第二操作,在拍摄第一视频的过程中拍摄并保存第一图像;第二操作为针对第一控件的触控操作;在完成第一视频的拍摄后,响应于第三操作,显示第二界面,第二界面是第一视频的详情界面,第二界面包括第一区域、第二区域和第一控件,或者第二界面包括第一区域和第二区域,或者第二界面包括第一区域和第一控件;第一区域为第一视频的播放区,第二区域显示第一视频的封面缩略图和第一图像的缩略图,第一控件用于控制电子设备生成第二视频,第二视频的时长小于第一视频,第二视频至少包括第一视频的图像。
由上述内容可以看出：用户利用电子设备拍摄视频时，电子设备可响应第二操作拍摄并保存第一图像，即在拍摄视频的同时进行抓拍。电子设备可得到拍摄的第一视频，以及第一图像和第二图像，实现了用户在拍摄视频时抓拍图像。
在一个可能的实施方式中,第二区域还显示其他一帧或者多帧图像的缩略图,其他一帧或者多帧图像为第一视频中的图像,第一图像和其他一帧或者多帧图像的数量的总和大于或者等于预设个数,预设个数为电子设备在拍摄第一视频的过程中自动识别的第二图像的数量。
在一个可能的实施方式中,第二视频至少包括以下图像中的一帧或多帧图像:第一图像,其他一帧或者多帧图像。
在一个可能的实施方式中,视频处理方法还包括:响应于第四操作,显示第三界面,第四操作为对第一控件的触控操作,第三界面为第二视频的展示界面。
在一个可能的实施方式中，在完成第一视频的拍摄之后，视频处理方法还包括：显示电子设备的第一拍摄界面，拍摄界面包括第一选项和第二选项，第一选项用于指示拍照模式，第二选项用于指示录像模式；第一拍摄界面为拍摄图像时的预览界面；响应于对拍摄界面的第二控件的操作，显示电子设备的第一拍摄界面，第二控件用于启动拍照；响应于对第二选项的操作，显示电子设备的第二拍摄界面，第二拍摄界面包括第一对话框，第一对话框用于向用户指示一录多得的功能内容，第二拍摄界面为拍摄视频时的预览界面。
在一个可能的实施方式中,在完成第一视频的拍摄之后,还包括:响应于第六操作,显示第三界面,第三界面为图库应用的界面,第三界面包括:第一文件夹和第二文件夹,第一文件夹至少包括第一图像,第二文件夹包括第二图像和第三图像,或者第二文件夹包括第二图像;响应于第七操作,显示第四界面,第四界面包括第二图像的缩略图和第三图像的缩略图,或者包括第二图像的缩略图,第七操作为对第二文件夹的触控操作。
第三方面,本申请提供了一种电子设备,包括:一个或多个处理器、存储器,摄像头和显示屏;存储器、摄像头和显示屏与一个或多个处理器耦合,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,电子设备执行如第一方面任意一项的视频处理方法,或者如第二方面任意一项的视频处理方法。
第四方面,本申请提供了一种计算机可读存储介质,用于存储计算机程序,计算机程序被电子设备执行时,使得所述电子设备实现如第一方面中任意一项的视频处理方法,或者如第二方面中任意一项的视频处理方法。
第五方面,本申请提供了一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行如第一方面中任意一项的视频处理方法,或者如第二方面中任意一项的视频处理方法。
附图说明
图1为本申请提供的电子设备的硬件结构图;
图2为本申请实施例一提供的一例开启“一录多得”的示意图;
图3为本申请实施例一提供的一例“一录多得”的图形用户界面的示意图;
图4为本申请实施例一提供的另一例“一录多得”的图形用户界面的示意图;
图5为本申请实施例一提供的另一例“一录多得”的图形用户界面的示意图;
图6为本申请实施例一提供的另一例“一录多得”的图形用户界面的示意图;
图7为本申请实施例一提供的另一例“一录多得”的图形用户界面的示意图;
图8为本申请实施例一提供的生成精选视频的流程图;
图9为本申请实施例一提供的生成精选视频的展示图;
图10为本申请实施例一提供的生成精选视频的示例展示图;
图11为本申请实施例一提供的另一例“一录多得”的图形用户界面的示意图;
图12为本申请实施例二提供的一例“一录多得”的图形用户界面的示意图;
图13为本申请实施例三提供的一例“一录多得”的图形用户界面的示意图;
图14为本申请实施例三提供的另一例“一录多得”的图形用户界面的示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括例如“一个或多个” 这种表达形式,除非其上下文中明确地有相反指示。还应当理解,在本申请实施例中,“一个或多个”是指一个、两个或两个以上;“和/或”,描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请实施例涉及的多个,是指大于或等于两个。需要说明的是,在本申请实施例的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
在介绍本申请实施例之前,首先对本申请实施例涉及的一些术语或概念进行解释。应理解,本申请对以下术语的命名不作具体限定。以下术语可以有其他命名。重新命名的术语仍满足以下相关的术语解释。
1)一录多得,可以理解为用户使用相机应用拍摄视频时,通过一次按下“拍摄”图标,可以得到包括拍摄的原视频、一张或多张精彩照片、以及一段或多段精选视频的功能。可以理解的是,通过一录多得获得的精彩短视频的时长小于整段完整视频的时长。例如,录制的整段完整视频为1分钟,可以得到5张精彩时刻照片和时长为15秒的精彩短视频。还可以理解的是,一录多得也可有其他名称,比如,一键多得,一键多拍,一键出片,一键大片,AI一键大片等。
2）精彩图像，是指视频录制过程中的一些精彩瞬间的画面。例如，精彩图像可以是最佳运动瞬间画面，最佳表情时刻画面或最佳打卡动作画面。可以理解的是，本申请对术语精彩图像不作限定，精彩图像也可以称作美好时刻图像，神奇时刻图像，精彩瞬间图像，决定性瞬间图像，最佳拍摄(best shot,BS)图像，或AI图像等。在不同的场景下，精彩图像可以是不同类型的瞬间画面。例如当拍摄足球比赛视频时，精彩图像可以是射门或传球时，运动员脚与足球接触瞬间的图像、足球被运动员踢开瞬间的图像，精彩图像也可以是足球飞进球门瞬间的图像、守门员接住足球瞬间的图像。当拍摄人物从地面起跳的视频时，精彩图像可以是人物在空中最高点的瞬间的图像，也可以是人物在空中时动作最舒展的瞬间的图像。当拍摄景色时，精彩图像可以是景色中出现建筑物的图像，也可以是夕阳或朝阳的图像。
3)精选视频,是指包含精彩图像的视频。可以理解的是,本申请对术语精选视频也不作限定,精选视频也可以称作精彩视频,精彩短视频,精彩小视频,或AI视频等。
4）标签（TAG），TAG可分为主题TAG，场景TAG，精彩图像TAG以及分镜TAG等；主题TAG用于指示视频的风格或氛围；场景TAG用于指示视频的场景；精彩图像TAG用于指示精彩图像在拍摄的视频中的位置，分镜TAG用于指示拍摄的视频中的转换场景的位置。例如，在视频中包括一个或多个精彩图像TAG，精彩图像TAG可指示在该视频的第10秒、第1分20秒等时刻的图像帧为精彩图像。在视频中也包括一个或多个分镜TAG，分镜TAG可指示在该视频的第15秒由第一场景切换为第二场景、第3分43秒由第二场景切换为第三场景。
目前,拍照以及视频录制功能已经成为电子设备的必备功能。用户对录制和拍照的需求与体验也在不断增强。在一些拍摄视频的应用场景下,用户期望在拍摄视频的同时捕捉到值得纪念的精彩瞬间照片。基于此,本申请在电子设备中设置“一录多得”模式,即在电子设备以录像模式拍摄视频时,通过分析视频流,自动提取视频流中的精彩图像。并且,当视频拍摄完成时,电子设备还可基于精彩图像,生成一段或多段精选视频。另外,用户可在图库查看到拍摄的视频,精彩图像以及精选视频。
为支持电子设备的“一录多得”模式,本申请实施例提供一种视频处理方法。并且,本申请实施例提供的一种视频处理方法可以适用于手机,平板电脑,桌面型、膝上型、笔记本电脑,超级移动个人计算机(Ultra-mobile Personal Computer,UMPC),手持计算机,上网本,个人数字助理(Personal Digital Assistant,PDA),可穿戴电子设备和智能手表等。
以手机为例,图1为本申请实施例提供的一种电子设备的组成示例。如图1所示,电子设备100可以包括处理器110,内部存储器120,摄像头130,显示屏140,天线1,天线2,移动通信模块150,以及无线通信模块160,音频模块170,传感器模块180,以及按键190等。
可以理解的是,本实施例示意的结构并不构成对电子设备100的具体限定。在另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,智能传感集线器(sensor hub)和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
内部存储器120可以用于存储计算机可执行程序代码，可执行程序代码包括指令。处理器110通过运行存储在内部存储器120的指令，从而执行电子设备100的各种功能应用以及数据处理。内部存储器120可以包括存储程序区和存储数据区。其中，存储程序区可存储操作系统，至少一个功能所需的应用程序(比如声音播放功能，图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据，电话本等)等。此外，内部存储器120可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件，闪存器件，通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器120的指令，和/或存储在设置于处理器中的存储器的指令，执行电子设备100的各种功能应用以及数据处理。
一些实施例中,内部存储器120存储的是用于执行视频处理方法的指令。处理器110可以通过执行存储在内部存储器120中的指令,实现控制电子设备以“一录多得”模式拍摄视频,得到拍摄的视频,一张或多张精彩照片,以及一段或多段精选视频。
电子设备通过GPU,显示屏140,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏140和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏140用于显示图像，视频等。显示屏140包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Mini-LED，Micro-LED，Micro-OLED，量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中，电子设备可以包括1个或N个显示屏140，N为大于1的正整数。
一些实施例中,电子设备以“一录多得”模式拍摄视频,得到拍摄的视频,一张或多张精彩照片,以及一段或多段精选视频,由显示屏140向用户显示。
电子设备100可以通过ISP,摄像头130,视频编解码器,GPU,显示屏140以及应用处理器等实现拍摄功能。
ISP用于处理摄像头130反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头130中。
摄像头130用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头130,N为大于1的正整数。
一些实施例中,摄像头130用于拍摄本申请实施例提及的视频。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)4,MPEG2,MPEG3,MPEG4等。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
电子设备可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备可以设置至少一个麦克风170C。在另一些实施例中,电子设备可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
传感器模块180中,压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏140。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备根据电容的变化确定压力的强度。当有触摸操作作用于显示屏140,电子设备根据压力传感器180A检测触摸操作强度。电子设备也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。
触摸传感器180B,也称“触控器件”。触摸传感器180B可以设置于显示屏140,由触摸传感器180B与显示屏140组成触摸屏,也称“触控屏”。触摸传感器180B用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏140提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180B也可以设置于电子设备的表面,与显示屏140所处的位置不同。
一些实施例中,压力传感器180A和触摸传感器180B可用于检测用户对显示屏140展示的控件、图像、图标、视频等的触控操作。电子设备可响应压力传感器180A和触摸传感器180B检测的触控操作,执行对应流程。电子设备执行的流程的具体内容,可参考下述实施例内容。
按键190包括开机键,音量键等。按键190可以是机械按键,也可以是触摸式按键。电子设备可以接收按键输入,产生与电子设备的用户设置以及功能控制有关的键信号输入。
以下实施例中所涉及的技术方案均可以在具有上述硬件架构的电子设备100中实现。
为了便于理解,本申请以下实施例将以具有图1所示结构的电子设备为例,对本申请实施例提供的视频处理方法进行具体阐述。
本申请以下实施例将以电子设备为手机,手机中安装相机应用,由相机应用启动摄像头拍摄视频为例,详细介绍本申请提供的视频处理方法。
实施例一
在本申请的一些实施例中,用户可以手动开启或关闭本申请实施例提供的“一录多得”功能。以下结合图2描述“一录多得”的入口。
示例性的,用户可以通过触摸手机屏幕上特定的控件、按压特定的物理按键或按键组合、输入语音、隔空手势等方式,指示手机开启相机应用。图2中(a)展示了用户开启相机应用的一种实现方式。如图2中(a)所示,用户点击手机显示屏展示的相机应用图标以输入开启相机的指示,手机响应于接收到用户开启相机的指示后,手机启动相机,显示图2中(b)或(c)展示的拍摄界面。
图2中(b)展示的拍摄界面为手机处于拍照模式时的拍摄界面，图2中(c)展示的拍摄界面为手机处于录像模式时的拍摄界面。以图2中(b)为例，手机的拍摄界面包括：开启或关闭闪光灯的控件201、设置的控件202、切换列表203、展示前一次拍摄的图像的控件204、控制拍摄的控件205以及切换前后摄像头的控件206等。
其中,开启或关闭闪光灯的控件201用于控制摄像头拍摄时是否启动闪光灯。
设置的控件202可用于拍摄参数以及拍摄功能的设置,比如,照片比例的设置、手势拍照的设置、笑脸抓拍的设置、视频分辨率的设置等。
切换列表203包括摄像头的多种模式,用户可通过左右滑动该切换列表,实现摄像头的多种模式的切换运行。示例性的,图2中(b)展示的切换列表包括人像、夜景、拍照、录像、全景。其他未展示于图2中(b)的模式属于隐藏显示,用户可通过左右滑动切换列表的方式,显示出处于隐藏显示的模式。
展示前一次拍摄的图像的控件204,用于展示摄像头前一次拍摄的图像的缩略图或视频的封面缩略图。用户可通过触控展示前一次拍摄的图像的控件204,由显示屏展示摄像头前一次拍摄的图像或视频。其中,摄像头前一次拍摄的图像或视频是指:摄像头拍摄时间在本次拍摄前拍摄、且拍摄时间距本次拍摄时间最近的图像或视频。
控制拍摄的控件205为提供于用户启动拍摄的控件。在手机的拍照模式,用户触控一次控制拍摄的控件205,摄像头可拍摄一帧图像。当然,摄像头也可以拍摄多帧图像,而仅选择一帧图像出图。在手机的录像模式,用户触控控制拍摄的控件205,摄像头开始录像。
切换前后摄像头的控件206用于实现手机的多个摄像头的切换运行。通常情况下,手机包括与显示屏同侧的摄像头(简称前置摄像头),以及位于手机的外壳上的摄像头(简称后置摄像头),用户可通过触控切换前后摄像头的控件206,实现对手机的前置摄像头和后置摄像头的切换运行。
用户如图2中(b)或图2中(c)所示，通过点击设置的控件202，控制手机显示设置界面。示例性的，设置界面可如图2中(d)所示。图2中(d)所示的设置界面中显示开启“一录多得”的选项207，用于开启一录多得的功能。也就是说，当用户开启该功能后，手机处于录像模式拍摄视频时，手机会自动采用本申请实施例提供的视频处理方法，在拍摄视频的同时自动生成精彩图像和短视频。当然，用户也可以通过该选项207，手动关闭录像模式下的一录多得功能。
通常情况下,图2中(d)所示的“一录多得”的选项207属于默认开启状态。也就是说,手机首次开机或更新系统以具有“一录多得”功能,图2中(d)展示的设置界面中的“一录多得”的选项207则处于开启状态,手机的“一录多得”功能被启动。
可以理解的是,设置界面还可以包括其他功能设置的控件。比如,图2中(d)所示的拍照设置的控件和视频设置的控件,其中:拍照设置的控件包括:照片比例的设置控件、声控拍照的设置控件、手势拍照的设置控件、笑脸抓拍的设置控件等;视频设置的控件包括:视频分辨率的设置控件、视频帧率的设置控件等。
以上介绍了触发手机进入“一录多得”模式的方法,但本申请不限于在录像模式进入“一录多得”。在本申请的一些实施例中,用户开启“一录多得”功能可以有其他方式。
在开启一录多得选项后，用户可以控制手机开始拍摄视频。示例性的，参见图3中(a)，用户可点击控制拍摄的控件301控制手机开始拍摄视频。手机响应于用户的点击操作，启动摄像头拍摄视频，图3中(b)绘示的界面展示了用户利用手机拍摄一场足球比赛过程中的一个画面。图3中(b)绘示的界面包括：停止控件302、暂停控件303以及拍照键304。在视频的拍摄过程中，用户可以通过点击暂停控件303暂停拍摄，也可通过点击停止控件302结束拍摄，还可通过点击拍照键304手动抓取照片。
如图3中(c)所示的界面中，用户在56秒可以点击停止控件302结束拍摄过程，可得到时长为56秒的视频。并且，拍摄结束后，手机的显示屏可进入拍摄界面，并且，在手机的“一录多得”功能首次使用时，还可在拍摄界面展示一录多得的生成提示。示例性的，图3中(d)绘示的拍摄界面，展示有一录多得的生成提示的对话框，该对话框展示有“已生成“一录多得”精彩照片并可一键创作大片”的文字提示。并且，用户可通过点击图3中(d)展示的界面的任何区域，控制一录多得的生成提示的对话框消失。或者，手机也可被配置为：一录多得的生成提示的对话框展示一定时长（如5秒）后自动消失。
当然,在手机的“一录多得”功能被启动一次之后,用户再一次拍摄视频,点击停止控件结束拍摄,手机的显示屏展示的拍摄界面则不会包括引导一录多得的生成提示。
需要说明的是,手机拍摄视频的时长满足一定时长要求,例如15秒,手机会以一录多得模式的要求,从手机拍摄的视频中抓取出一张或多张精彩图像,以及生成精选视频。在手机能够抓取出一张或多张精彩图像时,图3中(d)绘示的拍摄界面,可展示一录多得的生成提示的对话框,该对话框展示“已生成“一录多得”精彩照片并可一键创作大片”的文字提示。
然而,用户拍摄视频的应用场景,还可包括下述两种应用场景:第一种应用场景:视频拍摄时长较短,未满足一录多得模式的视频拍摄时长要求。第二种应用场景:视频拍摄时长满足一录多得模式的视频拍摄时长要求,但手机未从拍摄的视频中识别到精彩图像。另外,在第二个应用场景,通常要求用户拍摄的时间大于另一个时长要求,该时长大于一录多得模式的视频拍摄时长,例如30秒。并且,手机虽然未从拍摄的视频中识别到精彩图像,但是能识别到质量较好的图像,该质量较好的图像可用于生成下述提出的精选视频。
在第一种应用场景下,在用户首次以一录多得模式拍摄视频,且如图4中(a)所示,在拍摄到10秒时点击停止控件302,手机显示的拍摄界面如图4中(b)所示,一录多得的生成提示的对话框的文字为:已生成“一录多得”精彩照片。
在第二种应用场景下,在用户首次以一录多得模式拍摄视频,且如图4中(c)所示,在拍摄到56秒时点击停止控件302,手机显示的拍摄界面如图4中(d)所示,一录多得的生成提示的对话框的文字为:“一录多得”视频可一键创作大片。
另外，手机以图4中(b)显示了拍摄界面之后，在手机首次确定拍摄出满足第二种应用场景的视频时，手机还可在用户点击如图4中(c)所示的停止控件302之后，展示一次图4中(d)所示的拍摄界面。或者，在手机首次确定拍摄出的视频的时长满足一录多得模式的视频拍摄时长要求、且能从中识别出精彩图像时，手机在用户点击如图3中(c)所示的停止控件302之后，展示一次图3中(d)所示的拍摄界面。
并且，手机以图4中(d)显示了拍摄界面之后，在手机首次确定拍摄出满足第一种应用场景的视频时，手机还可在用户点击如图4中(a)所示的停止控件302之后，展示一次图4中(b)所示的拍摄界面。或者，在手机首次确定拍摄出的视频的时长满足一录多得模式的视频拍摄时长要求、且能从中识别出精彩图像时，手机在用户点击如图3中(c)所示的停止控件302之后，展示一次图3中(d)所示的拍摄界面。
需要说明的是,用户以手机的一录多得模式拍摄视频时,手机可利用识别模型自动识别拍摄的视频中的精彩图像。一些实施例中,手机设置有精彩图像的识别模型,将视频输入到精彩图像的识别模型时,该识别模型可对输入的视频中的图像进行精彩度的评分,得到视频中图像的精彩度评分值。手机可利用图像的精彩度评分值,确定视频中的精彩图像。通常情况下,图像的精彩度评分值越大,说明图像属于精彩图像的概率越高。
另外,在视频的拍摄过程中,还可识别视频的场景。在识别出拍摄完成一个场景的视频片段后,将一个场景的视频片段输入到前述精彩图像的识别模型,由识别模型对该场景下的视频片段包括每一个图像进行精彩度的评分,得到视频中图像的精彩度评分值。手机可利用图像的精彩度评分值,确定视频中的精彩图像。当然,图像的精彩度评分值越大,说明图像属于精彩图像的概率越高。
一些实施例中,手机可被配置得到固定数量的精彩图像,例如5张精彩图像。基于此,手机选择精彩度评分值较高的5张图像,作为精彩图像。
另一些实施例中,手机也可不配置精彩图像的限制数量,手机可选择精彩度评分值高于一定数值的图像,均作为精彩图像。
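上述两种精彩图像筛选策略（固定数量模式与阈值模式）可用如下Python代码示意。其中评分值、函数名与具体数值均为本说明的假设性示例，并非本申请识别模型的限定：

```python
def select_highlights(scores, top_k=None, threshold=None):
    """根据精彩度评分值筛选精彩图像的帧号。

    scores: {帧号: 精彩度评分值}，由精彩图像的识别模型给出
    top_k: 固定数量模式，如保留5张精彩图像
    threshold: 阈值模式，评分高于该值的图像均作为精彩图像
    """
    if top_k is not None:
        # 按评分从高到低取前top_k帧，再按帧号升序返回
        ranked = sorted(scores, key=scores.get, reverse=True)
        return sorted(ranked[:top_k])
    # 阈值模式：保留评分不低于threshold的帧
    return sorted(f for f, s in scores.items() if s >= threshold)

scores = {10: 0.91, 25: 0.40, 47: 0.88, 60: 0.75, 82: 0.95, 99: 0.30}
fixed = select_highlights(scores, top_k=3)         # 固定数量模式
by_thr = select_highlights(scores, threshold=0.8)  # 阈值模式
```

两种模式在本示例数据下选出相同的三帧，体现了“评分值越大，属于精彩图像的概率越高”的筛选原则。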
在手机拍摄视频结束之后,手机得到拍摄的视频,视频中的一张或多张精彩图像,手机还可将拍摄的视频和精彩图像保存到图库。一个示例中,图5中(a)展示了图库的相册展示界面,该相册展示界面以文件夹的形式展示手机保存的所有照片及视频。示例性的,图5中(a)展示的相册展示界面包括:相机文件夹401、所有照片文件夹402、视频文件夹403以及一录多得文件夹404。当然,相册展示界面还可以包括其他文件夹,本申请并不限制相机展示界面展示的文件夹。
通常情况下,相机文件夹401包括手机的摄像头拍摄的所有照片和视频,所有照片文件夹402包括手机保存的所有照片和视频,视频文件夹403包括手机保存的所有视频,一录多得文件夹404包括手机保存的以一录多得模式拍摄的所有精彩图像。
图5中(b)绘示了一录多得文件夹的展示界面,该展示界面展示了手机以一录多得拍摄的前述提及的56秒视频中的5张精彩图像的缩略图,并且,5张精彩图像由手机自动从拍摄的56秒视频中抓取。
手机在图库保存拍摄的视频和视频中的精彩图像之后,用户可通过图库应用查阅视频和视频的精彩图像。示例性的,用户在图6中(a)展示的拍摄界面中点击展示前一次拍摄的图像的控件501,或者用户在图6中(b)展示的图库的照片展示界面,点击视频502的封面缩略图。手机响应于用户的点击操作,在手机的显示屏展示视频502的详情界面。
图6中(b)所示的视频502的封面缩略图的左下角显示有“一录多得”功能的视频专属角标。该视频专属角标用于向用户说明该封面缩略图对应的视频502是手机利用“一录多得”功能拍摄得到。
图6中(b)所示的视频502的封面缩略图与图像A、图像B的缩略图大小不同，图像A和图像B可指代手机未开启一录多得功能而拍摄的图像。示例性的，图6中(b)所示的视频502的封面缩略图大于图像A、图像B的缩略图。当然，图6中(b)所示的视频502的封面缩略图也可以与图库中其他照片和视频(指未开启一录多得功能的视频)的缩略图相同，本申请实施例对此不作限定。
手机的显示屏首次展示视频502的详情界面(也可以称视频502的浏览界面),在详情界面显示有蒙层引导。手机的显示屏非首次展示视频502的详情界面,在详情界面不显示有蒙层。示例性的,显示有蒙层引导的视频502的详情界面如图6中(c)所示,不显示有蒙层引导的视频502的详情界面如图6中(d)所示。还需要指出的是,视频502可以理解成手机首次采用一录多得模式拍摄得到的视频,因此,手机的显示屏首次展示视频502的详情界面,可以理解成手机首次展示手机首次采用一录多得模式拍摄得到的视频,在此种情况下,视频的详情界面才会显示有蒙层引导。并且,手机首次展示手机首次采用一录多得模式拍摄得到的视频,在视频的详情界面显示蒙层引导,可以起到提醒用户一录多得功能的作用。
图6中(c)展示的视频502的详情界面包括:视频502的精彩图像的缩略图区域504,控件505,以及播放控件506。其中,视频502的精彩图像的缩略图区域504露出不被蒙层覆盖,显示屏的其他区域被蒙层覆盖。
视频502的精彩图像的缩略图区域504包括:视频502的封面缩略图,以及视频502的多张精彩图像的缩略图。视频502的封面缩略图通常处于第一位,多张精彩图像的缩略图可按照精彩图像的拍摄时间排列,且位于视频502的封面缩略图之后。精彩图像可如前述内容,是手机拍摄视频时,自动识别视频中包含的精彩瞬间的画面,并从视频中提取得到。并且,视频502的详情界面还包括一个提醒对话框,该提醒对话框显示有“一录多得”为您智能抓拍多个精彩瞬间的文字。提醒对话框通常如图6中(c)所示,位于视频502的精彩图像的缩略图区域504上方,用于提示用户视频502的精彩图像的缩略图区域504展示的内容,以引导用户查阅视频502的精彩图像。当然,图6中(c)展示的提醒对话框展示的文字和设置位置是一种示例性的展示,不构成对提醒对话框的限定。用户可通过点击图6中(c)展示的视频502的详情界面的任何区域,控制提醒对话框消失。亦或者,手机也可被配置提醒对话框展示一定时长,如5秒,自动消失。
控件505用于基于视频502的精彩图像,生成精选视频。
播放控件506用于对视频502的播放进行控制。示例性的,如图6中(d)所示,播放控件506包括:启动或停止的控件507、可滑动进度条508和喇叭控件509。启动或停止的控件507用于控制视频502播放或者停止视频502播放;喇叭控件509用于选择是否静音播放视频502。可滑动进度条508用于显示视频502的播放进度,用户还可通过左右拖动进度条上的圆形控件,来实现调整视频502的播放进度。
视频502的详情界面还包括分享、收藏、编辑、删除、更多等选项。如果用户点击分享,可以分享视频502;如果用户点击收藏,可以将视频502收藏于文件夹;如果用户点击编辑,可以对视频502执行编辑;如果用户点击删除,则可以删除视频502;如果用户点击更多,则可以进入对视频的其他操作功能(比如移动、复制、添加备注、隐藏、重命名等等)。
视频502的详情界面还包括视频502的拍摄信息，通常如图6中(c)或(d)所示，位于视频502的上方。该视频502的拍摄信息包括：视频502的拍摄日期、拍摄时间和拍摄地址。并且，视频502的详情界面还可以包括一个圆形控件，该圆形控件内填充字母“i”。用户点击该圆形控件，手机可响应用户的点击操作，在视频502的详情界面上显示视频502的属性信息，示例性的，该属性信息可包括视频502的存储路径、分辨率、拍摄时摄像头的配置信息等。
图6中(c)展示的蒙层属于一种蒙版图层。蒙版图层一般指图层蒙版,图层蒙版是在显示屏显示的界面的图层上面覆盖一层玻璃片图层。并且,玻璃片图层分为透明的、半透明的、完全不透明。半透明的、完全不透明的蒙层能够遮挡显示屏的光线,实现对显示屏显示的界面对用户模糊可见或完全不可见。图6中(c)展示的蒙层,可以理解成是一种半透明的玻璃片图层。
图6中(c)展示的蒙层引导是一种示例性的展示,并不构成首次展示一录多得拍摄的视频的详情界面的蒙层引导的限定。一些实施例中,蒙层引导也可以设置为带蒙层及其他特效的引导,如蒙层加气泡的引导。
并且,用户可通过在图6中(c)展示的视频502的详情界面的任意区域输入操作,以控制蒙层消失。当然,手机还可设置成蒙层展示一定时长,如3秒自动消失。图6中(c)展示的视频502的详情界面中蒙层消失,以及提醒对话框消失后,该视频502的详情界面即如图6中(d)所示。
需要说明的是，图6中(c)展示的视频502的详情界面上的蒙层未消失之前，视频502处于静止状态，不会播放。蒙层消失后，视频502可自动播放。通常情况下，视频502还可静音播放。当然，用户可通过点击图6中(d)展示的喇叭控件509控制手机有声播放视频502。
用户还可在图6中(d)展示的视频502的精彩图像的缩略图区域504,执行左右滑动操作或点击操作。一些实施例中,用户点击缩略图区域504中的一张精彩图像的缩略图,手机响应用户的点击操作,在显示屏上显示用户点击的精彩图像,以替换图6中(d)展示的视频502。另一实施例中,用户在缩略图区域504进行向左,或向右滑动操作,手机也可响应用户的滑动操作,在显示屏上跟随用户的滑动方向显示缩略图区域504的精彩图像。缩略图区域504显示的精彩图像的缩略图对应的图像没有保存在图库中,而是保存在了一录多得的相册中,也就是说,在图6中的(b)所示的界面中,没有精彩图像对应的缩略图,但是当用户点击视频502的缩略图进入视频502的详情界面时,在视频502的详情界面下方可以展示视频502关联的精彩图像的缩略图。
用户还可在图6中(d)展示的视频502输入左右滑动操作,手机响应用户的滑动操作,在显示屏上显示手机的图库保存的其他图像或视频。一些实施例中,用户在图6中(d)展示的视频502输入向右滑动操作,手机在显示屏显示图库中保存的视频502的后一个视频或图像。用户在图6中(d)展示的视频502输入向左滑动操作,手机在显示屏显示图库中保存的视频502的前一个视频或图像。其中,前一个视频或图像是指拍摄时间早于视频502,且与视频502的拍摄时间最近的视频或图像,后一个视频或图像是指拍摄时间晚于视频502,且与视频502的拍摄时间最近的视频或图像。
需要说明的是，若用户拍摄的视频502为前述提及的第一种应用场景的视频，视频502的详情界面与图6中(c)和图6中(d)会有所区别，区别点在于：视频502的详情界面不包括控件505。若用户拍摄的视频502为前述提及的第二种应用场景的视频，视频502的详情界面与图6中(c)和图6中(d)也会有所区别，区别点在于：视频502的详情界面不包括缩略图区域504。
还需要说明的是,用户以手机的一录多得模式拍摄视频时,手机可得到拍摄的视频,视频中的一张或多张精彩图像之外,手机还可生成配置文件,该配置文件可包括视频的标签(TAG)。或者,手机可得到标签数据,该标签数据包括视频的标签(TAG)。并且,该标签数据可添加与视频中,通常位于视频头。
下述内容,以手机得到视频和视频的配置文件为例进行介绍。当然,若视频的标签以标签数据的形式存储于视频的方案中,下述内容的获取视频的配置文件可修改为获取视频的标签数据。
在一个可能的实现方式中,视频的标签(TAG)可基于视频的层级信息设置。视频的层级信息可包括:第一层级信息LV0、第二层级信息LV1、第三层级信息LV2和第四层级信息LV3。其中:
第一层级信息LV0用于表征视频的主题类别,用于给出整段视频的风格或氛围TAG。
第二层级信息LV1用于表征视频的场景,用于给出视频的场景TAG。
第三层级信息LV2用于表征视频的场景发生变化,也可以理解成是转场分镜变化。第三层级信息LV2的信息可以给出视频转场位置(比如,发生转场的帧号),以及转场类型(人物主角切换、快速运镜、场景类别变化、其他情况引起的图像内容变化),以防止相似场景推荐数量过多。LV2的信息用于表征视频场景变化(或者也可以简称为转场),包括但不限于以下变化中的一种或多种:人物主体(或主角)变化,图像内容构成发生较大变化,语义层面场景发生变化,以及图像亮度或颜色发生变化。手机可利用第三层级信息LV2,在视频中产生场景变化时,对视频添加分镜TAG。
第四层级信息LV3用于表征精彩时刻,即精彩图像的拍摄时刻,用于给出视频的精彩图像TAG。
第一层级信息LV0、第二层级信息LV1、第三层级信息LV2和第四层级信息LV3按照粒度由粗到细的次序提供决策信息,以识别到视频中的精彩图像以及生成精选视频。
以下表1给出了LV0和LV1的定义的举例。
表1
主题类别(LV0) 场景类别(LV1)
人物 人物等
美食 美食等
古建筑 古建筑等
夜景 烟花、其他夜间景色等
大自然 雪景、植物、山脉、河流等
中西节日 中西节日等
婚礼 婚礼等
毕业 毕业等
生日 生日等
运动 人物,运动动作等
童趣 孩童、猫狗等
聚会 人物等
休闲 人物等
旅行 沙滩,飞机,古建筑,人物,山峰等
轻松欢快/小伤感/动感节奏 轻松欢快、小伤感、动感节奏、休闲等
手机可利用拍摄的视频和拍摄的视频的配置文件,生成拍摄的视频的精选视频。当然,在手机能够从拍摄的视频中识别到精彩图像时,该精选视频包括拍摄的视频的精彩图像,且带有一些特效和音乐。另外,手机从拍摄的视频中未能识别到精彩图像,但却能识别到质量好的图像,手机则利用质量好的图像,生成精选视频。当然,该精选视频也带有一些特效和音乐。还需要说明的是,视频的配置文件或标签数据也会包括质量好的图像的TAG。
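视频的配置文件（或标签数据）所承载的各层级TAG，可抽象为如下结构。其中字段名与取值均为示意性假设，并非本申请配置文件的实际格式：

```python
import json

# 沿用图10示例的一种假设性配置文件内容
config = {
    "theme_tag": "旅行",                      # 主题TAG（LV0）：视频风格或氛围
    "scene_tags": ["其他", "人物", "古建筑", "人物和古建筑", "山峰", "人物"],  # 场景TAG（LV1）
    "shot_tags": [0, 5, 15, 25, 45, 55],      # 分镜TAG（LV2）：各场景起始秒
    "highlight_tags": [8, 20, 33, 50, 57],    # 精彩图像TAG（LV3）：精彩时刻所在秒
}

text = json.dumps(config, ensure_ascii=False, indent=2)  # 序列化，写入配置文件
restored = json.loads(text)                              # 生成精选视频时再读出
```

生成精选视频时，剪辑引擎按 theme_tag 选模板与音乐、按 shot_tags 划分分镜片段、按 highlight_tags 定位精彩图像，与下文步骤S701至S707对应。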
本申请提及的质量好的图像是指:图像较为清晰,例如分辨率较高;或者图像较为完整。
本申请所提及的特效是指:能够由素材支持,在视频帧上添加后可呈现出特殊效果,例如:雪花、放烟花等等动画效果,以及滤镜、贴纸、边框等。在一些实施例中,也可将特效称为风格或风格主题等。
本申请下述内容,以手机利用精彩图像生成精选视频为例进行介绍。
示例性的,用户如图7中(a)所示,点击视频的详情界面的控件601。手机响应用户的点击操作,生成一定时长的精选视频。
通常情况下,因生成视频的精选视频需要耗费一定时间,因此,用户如图7中(a)所示,点击视频的详情界面的控件601之后,手机的显示屏会显示如图7中(b)所示的精选视频602的缓冲界面。在精选视频602生成完成之后,手机的显示屏显示精选视频602的展示界面。当然,在手机的性能较强的情况下,用户如图7中(a)所示,点击视频的详情界面的控件601之后,手机的显示屏可不显示图7中(b)的缓冲界面,而直接显示精选视频602的展示界面。
示例性的,精选视频602的展示界面中,正在播放精选视频602。且精选视频602的展示界面如图7中(c)所示,包括:风格控件603、保存控件604、分享控件605、音乐控件606、编辑控件607等。
用户点击风格控件603,手机响应于用户的点击操作,显示屏展示手机保存的多种视频风格。用户可为精选视频602选择不同的视频风格。一些实施例中,视频风格可以是滤镜,即通过套用滤镜来对该精选视频602进行调色处理。滤镜是视频特效的一种,用来实现精选视频602的各种特殊效果。另一些实施例中,视频风格也可以是快放、慢放等视频效果。另一些实施例中,视频风格还可以指各种主题,不同的主题包括各自对应的滤镜和音乐等内容。
用户点击分享控件605,手机响应于用户的点击操作,可以分享精选视频602。
用户点击音乐控件606,手机响应于用户的点击操作,显示屏展示为该精彩视频602添加不同的配乐的界面。该界面展示有多个配乐控件,用户可点击任一个配乐控件,为该精彩短视频选择配乐,比如,舒缓、浪漫、温暖、惬意、恬静等,为精选视频602添加配乐。
用户点击编辑控件607,手机响应于用户的点击操作,显示屏显示精彩视频602的剪辑界面。用户可在编辑界面输入对精彩视频602进行剪辑、分割、音量调整、画幅调整等编辑操作。
保存控件604用于保存精选视频602。
以下结合图8、图9和图10,对生成精选视频602的过程进行介绍。
本申请实施例提供的手机生成精选视频的方法,参见图8,包括下述步骤:
S701、响应于用户的第一操作,获取视频和视频的配置文件,该视频的配置文件包括主题TAG、场景TAG、分镜TAG和精彩图像TAG。
如前述内容可知:手机在利用“一录多得”模式拍摄视频时,手机会识别拍摄的视频内容,确定视频的层级信息。手机还可以利用视频的层级信息,对视频设置主题TAG、场景TAG、分镜TAG和精彩图像TAG,并将主题TAG、场景TAG、分镜TAG和精彩图像TAG写入视频的配置文件。在视频拍摄结束时,手机可保存拍摄的视频和该视频的配置文件。
视频的配置文件中,主题TAG用于表征视频的风格或氛围。在图10展示的示例中,手机拍摄的60秒视频,包含tn帧图像,tn为整数。手机在拍摄该视频的过程中,手机可不断识别该视频的风格或氛围,以确定视频的主题。一些实施例中,手机可调用识别算法识别视频的图像,以确定视频的风格或氛围。当然,手机也可在视频拍摄完毕之后,再利用识别算法识别视频的图像,确定视频的风格或氛围。
在图10展示的示例中,手机调用识别算法识别视频的图像,确定出视频属于旅行主题。
场景TAG用于表征视频的场景。图10展示的示例中,视频被分为6个场景,第一个场景包含0到5秒的视频子段,属于其他场景;第二个场景包含5秒到15秒的视频子段,属于人物场景;第三个场景包含15秒到25秒的视频子段,属于古建筑场景;第四个场景包含25秒到45秒的视频子段,属于人物和古建筑场景;第五个场景包含45秒到55秒的视频子段,属于山峰场景;第六个场景包含55秒到60秒的视频子段,属于人物场景。
分镜TAG用于指示拍摄的视频中的转换场景的位置。图10展示的示例中，视频包括6个分镜TAG：第一个分镜TAG指示0秒开始第一个场景，第二个分镜TAG指示5秒开始第二个场景；第三个分镜TAG指示15秒开始第三个场景，第四个分镜TAG指示25秒开始第四个场景；第五个分镜TAG指示45秒开始第五个场景，第六个分镜TAG指示55秒开始第六个场景。
精彩图像TAG用于表征精彩图像在拍摄的视频中的位置。图10展示的示例中,视频包括5张精彩图像。
如图7中(a)所示，用户可点击“Ai一键大片”控件，向手机输入生成精选视频的指示。手机接收用户的点击操作（也可称为第一操作），并响应该点击操作，获取手机保存的视频和视频的配置文件。
S702、基于视频的主题TAG,确定风格模板和音乐。
手机保存有多个风格模板和音乐,风格模板可包含多个特效。
参见图9,剪辑引擎用于利用视频的主题TAG,确定风格模板和音乐。该风格模板和音乐用于合成得到视频的精选视频。可以理解的是,剪辑引擎属于一种服务或应用,可设置于手机的软件框架的应用层,应用框架层或系统库。剪辑引擎用于生成视频的精选视频。
一些实施例中,手机保存有主题TAG和风格模板、音乐的对应关系,手机可基于视频的主题TAG,确定该主题TAG对应的风格模板和音乐。
另一些实施例中,手机也可根据视频的主题TAG,随机选择手机保存的风格模板和音乐。需要说明的是,针对同一个视频,在生成视频的精选视频之后,若用户再编辑精选视频而调整风格模板或音乐时,手机选择不同的风格模板或音乐,以保证针对同一个视频,手机每一次生成的精选视频的特效均不同。
需要说明的是,步骤S702可以理解成图9展示的1、模板选择和2、音乐选择。
S703、基于分镜TAG,确定视频中每个场景中的分镜片段。
视频的分镜TAG能够指示拍摄的视频中的转换场景的位置。因此,剪辑引擎可基于分镜TAG,确定出视频包括的场景。剪辑引擎利用分镜TAG,可以将视频按照场景,划分为多个分镜分段。当然,剪辑引擎并非将视频实际拆分为分镜片段,而是按照分镜TAG,通过对视频进行标记的方式,将视频标记出每个场景的分镜片段。
在图10展示的示例中,视频包括6个分镜TAG,按照6个分镜TAG,视频中的分镜片段包括:0到5秒的第一个场景的分镜片段,5秒到15秒的第二个场景的分镜片段,15秒到25秒的第三个场景的分镜片段,25秒到45秒的第四个场景的分镜片段,45秒到55秒的第五个场景的分镜片段,以及55秒到60秒的第六个场景的分镜片段。
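步骤S703中按分镜TAG将视频标记为各场景的分镜片段，可用如下Python代码示意（以秒为单位，边界数值沿用图10的示例，函数名为假设）：

```python
def mark_segments(shot_tags, total_duration):
    """按分镜TAG（各场景的起始时刻）将视频标记为 [起, 止) 的分镜片段。

    并非实际拆分视频，而是以区间标记的方式记录每个场景的分镜片段。
    """
    bounds = list(shot_tags) + [total_duration]  # 末段以视频总时长收尾
    return [(bounds[i], bounds[i + 1]) for i in range(len(shot_tags))]

# 图10示例：6个分镜TAG，60秒视频，标记出6个场景的分镜片段
segments = mark_segments([0, 5, 15, 25, 45, 55], 60)
```

该标记方式与正文一致：剪辑引擎不实际拆分视频，仅按分镜TAG记录各场景片段的起止位置。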
S704、基于精彩图像的TAG,确定精彩图像在分镜片段中的位置。
精彩图像TAG用于指示精彩图像在拍摄的视频中的位置,因此,可根据精彩图像的TAG,确定出精彩图像在分镜片段中的位置。一些实施例中,一个场景的分镜片段可包括一张或多张精彩图像。
在图10展示的示例中,手机拍摄的视频包括5张精彩图像,第一张精彩图像属于第二个场景的分镜片段,第二张精彩图像属于第三个场景的分镜片段,第三张精彩图像属于第四个场景的分镜片段,第四张精彩图像属于第五个场景的分镜片段,第五张精彩图像属于第六个场景的分镜片段。
S705、从视频的分镜片段中获取精彩图像前、后多帧图像,得到精彩图像的关联图像。
一些实施例中,视频的一个场景的分镜片段至少包括一张精彩图像,因此,在精彩图像所属的场景的分镜片段中,剪辑引擎获取该精彩图像的前几帧、以及后几帧图像。示例性的,可获取该精彩图像的前5帧图像和后5帧图像。剪辑引擎将获取的图像,作为精彩图像的关联图像。
在图10展示的示例中,从第二个场景的分镜片段中获取第一张精彩图像的前5帧图像,以及后5帧图像;从第三个场景的分镜片段中获取第二张精彩图像的前5帧图像,以及后5帧图像;从第四个场景的分镜片段中获取第三张精彩图像的前5帧图像,以及后5帧图像;从第五个场景的分镜片段中获取第四张精彩图像的前5帧图像,以及后5帧图像;从第六个场景的分镜片段中获取第五张精彩图像的前5帧图像,以及后5帧图像。
另外,手机拍摄的视频的精彩图像的数量,超出了最终能够呈现的精彩图像的数量限制时,可优先保留精彩度评分值较高的精彩图像,舍弃精彩度评分值较低的精彩图像,再执行步骤S705。
需要说明的是,步骤S703和步骤S705可以理解成图9展示的3、片段选择。
其中，一张精彩图像和该精彩图像的关联图像，可以理解成是组合形成一个小片段。在得到每张精彩图像及其关联图像组合得到的小片段之后，还可以如图9所示，执行4、内容去重与片段离散。
内容去重可以理解成:删除精彩图像及其关联图像组合得到的小片段中,属于内容相同的小片段,仅保留一个小片段。
片段离散可以理解成:在小片段内包含的图像进行离散化。一些实施例中,可以在小片段包含的图像之间插入过渡图像。被插入的过渡图像可以与小片段内的图像内容基本相同,且跟随从精彩图像的前几帧图像到精彩图像,再到精彩图像的后几帧图像的过渡变化。
S706、按照时间的先后顺序,拼接精彩图像及其关联图像,得到视频片段。
一些实施例中,如步骤S705内容所述,每个精彩图像及其关联图像,可以组合成小片段,该小片段中:精彩图像及其关联图像按照拍照时间的先后顺序排列。并且,同样按照拍摄时间的先后顺序,拼接小片段,得到视频片段。
另一些实施例中,也可以按照拍摄时间的先后顺序,拼接每一张精彩图像、每一张精彩图像的关联图像,得到视频片段。
在图10展示的示例中,按照第一张精彩图像的前5张图像,第一张精彩图像,第一张精彩图像的后5张图像,第二张精彩图像的前5张图像,第二张精彩图像,第二张精彩图像的后5张图像,直至第五张精彩图像的前5张图像,第五张精彩图像,第五张精彩图像的后5张图像的顺序进行拼接,得到视频片段。
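步骤S706按拍摄时间先后顺序拼接各小片段，可用如下Python代码示意（以帧号代表图像，关联图像取前、后各radius帧，radius取值与函数名均为假设）：

```python
def splice_clips(highlight_frames, radius=5):
    """按拍摄时间先后顺序拼接每个精彩图像及其关联图像，得到视频片段（S706）。

    highlight_frames: 精彩图像的帧号列表（可能无序）
    返回拼接后的帧号序列：每张精彩图像的前radius帧、本帧、后radius帧依次相接。
    """
    timeline = []
    for f in sorted(highlight_frames):  # 先按拍摄时间（帧号）排序
        timeline.extend(range(f - radius, f + radius + 1))
    return timeline

# 两张精彩图像，各取前、后2帧，按时间先后拼接
clip = splice_clips([300, 120], radius=2)
```

拼接时还可参考步骤S702中确定的音乐节奏点信息，对各小片段的起止位置做微调，本示例未体现该步骤。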
需要说明的是,步骤S706可以理解成图9展示的5、片段拼接策略。
同样参见图9,在拼接得到视频片段时,可以参考步骤S702中确定的音乐的节奏点信息,保证拼接后的视频片段与音乐的节奏点契合。
S707、合成风格模板提供的特效、音乐以及视频片段,得到精选视频。
需要说明的是，步骤S707可以理解成图9展示的6、合成，其中，合成后的视频即为精选视频。
另外,若手机利用质量好的图像生成精选视频,前述内容提及的手机生成精选视频的方法中,将精彩图像替换为质量好的图像即可,为避免赘述,此处不对利用质量好的图像生成精选视频的方法进行详细介绍。
采用前述内容生成精选视频之后,可对精选视频进行保存。
以下结合图11,对精选视频的保存过程进行介绍。
示例性的，用户点击图11中(a)展示的精选视频602的展示界面的保存控件604。手机响应于用户的点击操作，保存精选视频602。示例性的，如图11中(c)所示，手机在图库中保存精选视频602。一些实施例中，精选视频602可跟随精选视频602的原视频502保存，即精选视频602和原视频502存储于相同存储区域。另一些实施例中，精选视频602也可不跟随精选视频602的原视频502保存，即精选视频602和原视频502存储于不同存储区域。需要指出的是，原视频502是指用户拍摄的视频，精选视频602来自于原视频502。
用户点击图11中(a)展示的精选视频602的展示界面的保存控件604。手机响应于用户的点击操作,保存精选视频602之外,手机的显示屏还可展示精选视频602的详情界面。示例性的,如图11中(b)展示了精选视频602的详情界面。一些实施例中,在精选视频602的详情界面中,精选视频602可自动播放。
精选视频602的详情界面,与图6中(d)展示的视频502的详情界面基本相同,区别在于精选视频602的详情界面播放的精选视频602。精选视频602的详情界面包含的控件及其作用,可参见前述内容提及的图6中(d)展示的视频502的详情界面的内容,此处不做赘述。
需要说明的是，在用户点击图11中(a)展示的保存控件604之前，手机可生成精选视频602，但不将其保存在内部存储器中。只有在用户点击了图11中(a)展示的保存控件604之后，手机才会将生成的精选视频602保存在内部存储器中。
实施例二
前述实施例一提出的手机以一录多得模式拍摄视频的过程中,可得到视频中的精彩图像。在手机拍摄视频完毕之后,可得到拍摄的视频,以及视频中的一张或多张精彩图像。如此实现了拍摄视频的同时捕捉到值得纪念的精彩瞬间照片。
然而,若用户并不知道手机设置有一录多得模式,能够支持拍摄视频的同时捕捉到值得纪念的精彩瞬间照片的功能,用户还会在拍摄视频时,再控制手机以拍摄模式拍摄图像。基于此,手机检测到用户的上述行为时,需引导用户了解手机的一录多得功能。
示例性的，图12中(a)展示了手机的视频拍摄界面。用户在利用手机拍摄视频的过程中，想要拍摄图像。用户可在视频拍摄界面上点击停止控件1101以停止拍摄。手机响应于用户的点击操作，保存拍摄的视频并展示如图12中(b)所示的手机处于录像模式的拍摄界面。用户如图12中(b)所示，点击“拍照”，手机响应于用户的点击操作，进入如图12中(c)所示的拍照模式的拍摄界面。
用户在图12中(c)所示的拍摄界面，点击控制拍摄的控件1102。手机响应于用户的点击操作，拍摄图像，并保存拍摄的图像。之后，用户再控制手机拍摄视频。
手机检测到用户的上述操作，可在用户控制手机拍摄视频时，在视频拍摄界面展示一录多得的引导提示。示例性的，图12中(d)展示的手机的视频拍摄界面上显示有一录多得的引导提示的对话框1103，该对话框1103包括文字“使用后置录像时，智能识别精彩瞬间，识别到内容后将自动生成精彩照片并可一键创作大片”，还可以提示用户在拍摄视频的过程中，也可以手动点击控件抓拍照片。
为实现上述功能,本实施例提供的一种视频处理方法中,手机确定以一录多得模式拍摄视频的过程中,手机可检测用户是否执行结束拍摄视频,并立即拍照,拍照结束后再拍摄视频的操作。若手机检测到用户执行结束拍摄视频,并立即拍照,拍照结束后再拍摄视频的操作,则在用户结束拍照,再拍摄视频时,在手机显示屏显示的视频拍摄界面显示一录多得的引导提示。
当然，手机以一录多得模式拍摄视频的过程中，若手机未检测到用户执行结束拍摄视频，并立即拍照，拍照结束后再拍摄视频的操作，手机可按照常规流程响应用户的操作。
手机检测用户是否执行结束拍摄视频,并立即拍照,拍照结束后再拍摄视频的操作的方式可以为:
手机遍历手机的进程,并识别手机的进程的状态变化,利用手机的进程的状态变化结果,以确定用户是否执行结束拍摄视频,并立即拍照,拍照结束后再拍摄视频的操作。
其中:若手机判断录像的进程处于运行状态,且还判断出录像的进程被关闭的一定时长内,如10秒,拍照的进程被开启处于运行状态,拍照的进程被关闭的一定时长内,如10秒,录像的进程被开启处于运行状态,则说明用户执行结束拍摄视频,并立即拍照,拍照结束后再拍摄视频的操作。
一些实施例中,手机可通过进程标识符来区别不同的手机进程。录像的进程具有录像的进程标识符,拍照的进程具有拍照的进程标识符。
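上述“结束录像→立即拍照→拍照结束后再录像”的进程状态检测逻辑，可用如下Python代码示意。其中进程名、事件结构与10秒阈值均为假设性示例，并非手机进程管理的真实接口：

```python
def is_interrupt_pattern(events, gap=10):
    """检测"结束录像→立即拍照→拍照结束后再录像"的进程状态变化序列。

    events: [(时刻秒, 进程名, 状态)]，状态为 'start' 或 'stop'；
    进程名 'video' 代表录像进程、'photo' 代表拍照进程（均为假设性标识）。
    gap: 相邻两步之间允许的最大间隔（本申请示例为10秒）。
    """
    stops = {}                      # 记录各进程最近一次被关闭的时刻
    photo_after_video = False       # 是否已出现"录像关闭后一定时长内开启拍照"
    for t, proc, state in events:
        if state == 'stop':
            stops[proc] = t
        elif state == 'start':
            if proc == 'photo' and 'video' in stops and t - stops['video'] <= gap:
                photo_after_video = True
            if (proc == 'video' and photo_after_video
                    and 'photo' in stops and t - stops['photo'] <= gap):
                return True         # 满足完整序列，应展示一录多得引导提示
    return False

events = [(0, 'video', 'start'), (56, 'video', 'stop'),
          (58, 'photo', 'start'), (60, 'photo', 'stop'),
          (65, 'video', 'start')]
hit = is_interrupt_pattern(events)
```

检测命中时，手机可在随后的视频拍摄界面展示图12中(d)所示的引导提示对话框。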
实施例三
手机利用一录多得模式拍摄视频的过程中,还支持手动抓拍功能。
示例性的,图13中(a)所示的视频拍摄界面展示了一场足球比赛过程中的一个画面。如图13中(a)所示,用户可点击拍照键1201执行手动抓拍操作。手机响应于用户的手动抓拍操作,调用摄像头进行拍照,并将拍照得到的图像保存到图库。示例性的,图13中(b)展示的图库的照片展示界面中,展示了手机在拍摄视频过程中手动抓拍的图像1202的缩略图。
手机利用一录多得模式拍摄视频过程中,用户启动手动抓拍拍摄图像,说明用户手动抓拍的图像,是用户认为更值得纪念的精彩图像。基于此,手机响应于用户的手动抓拍操作,除了在图库中保存手动抓拍的图像之后,还可以将其作为精彩图像保存于一录多得文件夹。
一些实施例中，手机可将手动抓拍的图像，替换手机识别出的精彩图像，保存到一录多得文件夹。
如前述实施例一内容可知：手机可利用识别模型对拍摄的视频的图像进行精彩度评分，利用精彩度评分值，确定视频中的精彩图像。用户利用手动抓拍功能拍摄图像，手机可按照精彩度评分值由小到大的顺序，丢弃与抓拍图像数量相同的精彩图像。手机再将剩余的精彩图像，和手动抓拍的图像，作为更新后的精彩图像，保存于一录多得文件夹。
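上述“按精彩度评分值由小到大丢弃、再并入手动抓拍图像”的替换策略可用如下Python代码示意（数据结构、函数名与5张上限均为假设性示例）：

```python
def merge_manual_snapshots(auto_scores, manual_frames, limit=5):
    """将手动抓拍图像并入精彩图像集合，保持总数不超过limit。

    auto_scores: {帧号: 精彩度评分值}，手机自动识别的精彩图像
    manual_frames: 手动抓拍图像的帧号列表
    策略：按评分值由小到大丢弃 len(manual_frames) 张自动识别的精彩图像，
    剩余精彩图像与手动抓拍图像一起作为更新后的精彩图像。
    """
    keep_n = max(0, limit - len(manual_frames))
    kept = sorted(auto_scores, key=auto_scores.get, reverse=True)[:keep_n]
    return sorted(kept) + list(manual_frames)

auto = {10: 0.9, 20: 0.8, 30: 0.7, 40: 0.6, 50: 0.5}   # 5张自动识别的精彩图像
merged = merge_manual_snapshots(auto, [56])             # 丢弃评分最低的帧50
```

与正文一致，被丢弃的精彩图像的TAG也应从视频的配置文件中删除，并写入手动抓拍图像的TAG；该部分文件操作此处未示意。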
示例性的,手机被配置一录多得文件夹保存5张精彩图像。图13中(a)中,用户启动手动抓拍功能拍摄一张图像。图13中(c)展示了一录多得文件夹,一录多得文件夹包括:手机从拍摄的视频中识别出的4张精彩图像的缩略图,以及用户手动抓拍的图像1202的缩略图。
图13中(c)展示的一录多得文件夹的精彩图像的缩略图,可按照精彩图像的拍摄时间先后顺序安排。即手机先拍摄的图像,其缩略图位置在先,手机后拍摄的图像,其缩略图的位置靠后。当然,此种排序方式并不构成对一录多得文件夹中图像缩略图排序的限定。
另外,图13中(d)展示的是用户拍摄的视频的详情界面。该详情界面展示的视频502的精彩图像的缩略图,来自于图13中(c)展示的一录多得文件夹。因此,视频502的精彩图像的缩略图包括:视频502的封面缩略图,手机识别出视频502中的4张精彩图像的缩略图,以及用户手动抓拍的图像1202的缩略图。
还需要说明的是,用户利用手动抓拍功能拍摄图像,手机拍摄得到图像后,还可对图像设置TAG,以指示图像在拍摄的视频中的位置。手机还可在视频的配置文件中写入手动抓拍图像的TAG。一些实施例中,手机手动抓拍图像的TAG保存于视频的配置文件,同时将视频的配置文件中记录的被手机丢弃的精彩图像的TAG删除。
用户在图13中(d)展示的视频502的详情界面上点击“Ai一键大片”控件，手机响应于用户的点击操作，生成视频502的精选视频。因视频的配置文件保存有手机识别出的精彩图像的TAG，以及手动抓拍图像的TAG。因此，利用前述实施例一提供的生成精选视频的方法而生成的视频502的精选视频，包括用户手动抓拍图像和手机识别出、且保存于一录多得文件夹的精彩图像。
示例性的，图13中(e)展示了视频502的精选视频1203的展示界面，精选视频1203自动播放，且当前展示的画面为用户手动抓拍的图像。
另一些实施例中,手机可将手动抓拍的图像,作为新的一张精彩图像,与手机识别出的精彩图像一同保存到一录多得文件夹。
示例性的,手机被配置一录多得文件夹保存5张精彩图像。图13中(a)中,用户启动手动抓拍功能拍摄一张图像。图14中(a)展示了一录多得文件夹,一录多得文件夹包括:手机从拍摄的视频中识别出的5张精彩图像的缩略图,以及用户手动抓拍的图像1202的缩略图。
图14中(a)展示的一录多得文件夹的精彩图像的缩略图,可按照精彩图像的拍摄时间先后顺序排列,并且,用户手动抓拍的图像的缩略图位于最后一位。当然,此种排序方式并不构成对一录多得文件夹中图像缩略图排序的限定。
另外,图14中(b)展示的是用户拍摄的视频的详情界面。该详情界面展示的视频502的精彩图像的缩略图,来自于图14中(a)展示的一录多得文件夹。因此,视频502的精彩图像的缩略图包括:视频502的封面缩略图,手机识别出视频502中的5张精彩图像的缩略图,以及用户手动抓拍的图像1202的缩略图。
还需要说明的是,用户利用手动抓拍功能拍摄图像,手机拍摄得到图像后,还可对图像设置TAG,以指示图像在拍摄的视频中的位置。手机还可在视频的配置文件中写入手动抓拍图像的TAG。
用户在图14中(b)展示的视频502的详情界面上点击“Ai一键大片”控件，手机响应于用户的点击操作，生成视频502的精选视频。因视频的配置文件保存有手机识别出的精彩图像的TAG，以及手动抓拍图像的TAG。因此，利用前述实施例一提供的生成精选视频的方法而生成的视频502的精选视频，包括用户手动抓拍图像和手机识别出、且保存于一录多得文件夹的精彩图像。
还需要说明的是,若手机从拍摄的视频中未能识别出精彩图像,但识别出质量好的图像。手机可利用用户手动抓拍图像和质量好的图像,生成精选视频。当然,该精选视频的生成方式,可参见前述实施例一提供的生成精选视频的方法,此处不展开说明。
本申请另一实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机或处理器上运行时,使得计算机或处理器执行上述任一个方法中的一个或多个步骤。
计算机可读存储介质可以是非临时性计算机可读存储介质,例如,非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
本申请另一实施例还提供了一种包含指令的计算机程序产品。当该计算机程序产品在计算机或处理器上运行时,使得计算机或处理器执行上述任一个方法中的一个或多个步骤。

Claims (25)

  1. 一种视频处理方法,其特征在于,应用于电子设备,所述视频处理方法,包括:
    响应于第一操作,拍摄第一视频;
    响应于第二操作,显示第一界面,所述第一界面是所述第一视频的详情界面,所述第一界面包括第一区域、第二区域和第一控件,或者所述第一界面包括所述第一区域和所述第二区域,或者所述第一界面包括所述第一区域和所述第一控件,
    所述第一区域为所述第一视频的播放区,
    所述第二区域显示所述第一视频的封面缩略图,第一图像的缩略图和第二图像的缩略图,所述第一图像是所述第一视频在第一时刻的图像,所述第二图像是所述第一视频在第二时刻的图像,所述第一视频的录制过程中包括所述第一时刻和所述第二时刻;
    所述第一控件用于控制所述电子设备生成第二视频,所述第二视频的时长小于第一视频,所述第二视频至少包括所述第一视频的图像。
  2. 根据权利要求1所述的视频处理方法,其特征在于,还包括:
    响应于第三操作,显示第二界面,所述第三操作为对所述第一控件的触控操作,所述第二界面为所述第二视频的展示界面。
  3. 根据权利要求1或2所述的视频处理方法,其特征在于,所述响应于第二操作,显示第一界面,包括:
    响应于第四操作,显示第三界面,所述第三界面为图库应用的界面,所述第三界面包括所述第一视频的封面缩略图;
    响应于对所述第一视频的封面缩略图的触控操作,显示所述第一界面。
  4. 根据权利要求1或2所述的视频处理方法,其特征在于,所述响应于第二操作,显示第一界面,包括:
    响应于对第二控件的触控操作,显示所述第一界面,所述电子设备的拍摄界面包括所述第二控件,所述第二控件用于控制展示前一次拍摄的图像或视频。
  5. 根据权利要求3所述的视频处理方法,其特征在于,所述第一视频的封面缩略图包括第一标识,所述第一标识用于指示所述第一视频为所述电子设备采用一录多得模式拍摄。
  6. 根据权利要求1至5中任意一项所述的视频处理方法,其特征在于,所述第一界面显示有蒙层,所述第二区域不被所述蒙层覆盖。
  7. 根据权利要求6所述的视频处理方法，其特征在于，所述第一界面还包括：第一对话框，所述第一对话框用于向用户提示已生成了所述第一图像和所述第二图像，所述第一对话框不被所述蒙层覆盖。
  8. 根据权利要求2所述的视频处理方法,其特征在于,所述响应于第一操作,拍摄第一视频之后,还包括:
    响应于第五操作,显示所述电子设备的拍摄界面,所述拍摄界面包括:第二对话框,所述第二对话框用于向用户提示已生成所述第一视频和所述第二视频。
  9. 根据权利要求1至8中任意一项所述的视频处理方法,其特征在于,所述响应于第一操作,拍摄第一视频之前,还包括:
    响应于第六操作,显示第四界面,所述第四界面为拍摄的设置界面,所述第四界面包括:一录多得的选项和文字段,所述一录多得的选项用于控制所述电子设备开启或关闭一录多得功能,所述文字段用于指示所述一录多得的功能内容。
  10. 根据权利要求1至9中任意一项所述的视频处理方法,其特征在于,所述响应于第一操作,拍摄第一视频之后,还包括:
    响应于第七操作,显示第五界面,所述第五界面为图库应用的界面,所述第五界面包括:第一文件夹和第二文件夹,所述第一文件夹包括所述电子设备保存的图像和所述第一视频,所述第二文件夹包括所述第一图像和第二图像;
    响应于第八操作,显示第六界面,所述第六界面包括所述第一图像的缩略图和所述第二图像的缩略图,所述第八操作为对所述第二文件夹的触控操作。
  11. 根据权利要求2所述的视频处理方法,其特征在于,所述响应于第三操作,显示第二界面之后,还包括:
    响应于第九操作,显示第七界面,所述第七界面为所述第二视频的详情界面,所述第九操作为对所述第二界面包括的第三控件的触控操作,所述第三控件用于控制保存所述第二视频。
  12. 根据权利要求11所述的视频处理方法,其特征在于,还包括:
    响应于第十操作,显示第八界面,所述第八界面为图库应用的界面,所述第八界面包括:所述第二视频的封面缩略图和所述第一视频的封面缩略图。
  13. 根据权利要求1至12中任意一项所述的视频处理方法,其特征在于,所述响应于第一操作,拍摄第一视频之后,还包括:
    响应于第十一操作,显示所述电子设备的第一拍摄界面,所述第一拍摄界面包括第一选项和第二选项,所述第一选项用于指示拍照模式,所述第二选项用于指示录像模式;
    响应于对所述拍摄界面的第四控件的操作,显示所述电子设备的第一拍摄界面,所述第四控件用于启动拍照;
    响应于对所述第二选项的操作,显示所述电子设备的第二拍摄界面,所述第二拍摄界面包括第三对话框,所述第三对话框用于向用户指示一录多得的功能内容。
  14. 根据权利要求1至12中任意一项所述的视频处理方法,其特征在于,所述响应于第一操作,拍摄第一视频过程中,还包括:
    响应于第十二操作,拍摄并保存第三图像;所述第十二操作为对所述电子设备的视频拍摄界面的拍照键的触控操作。
  15. 根据权利要求14所述的视频处理方法,其特征在于,所述第二区域还显示所述第三图像的缩略图,所述第二视频包括所述第三图像。
  16. 根据权利要求14所述的视频处理方法,其特征在于,所述第二区域显示所述第一视频的封面缩略图,第一图像的缩略图和第二图像的缩略图,包括:
    所述第二区域显示所述第一视频的封面缩略图,第一图像的缩略图和第三图像的缩略图;
    所述第二视频至少包括所述第一图像和所述第三图像。
  17. 根据权利要求1至16中任一项所述的视频处理方法,其特征在于,所述第二视频的生成方式,包括:
    获取所述第一视频和所述第一视频的标签数据,所述标签数据包括所述第一视频的主题TAG,分镜TAG,所述第一图像TAG以及所述第二图像TAG;
    基于所述第一视频的主题TAG,确定风格模板和音乐,所述风格模板包括至少一个特效;
    基于所述分镜TAG,所述第一图像TAG和所述第二图像TAG,从所述第一视频中获取所述第一图像的前、后多帧图像,以及所述第二图像的前、后多帧图像;
    合成所述风格模板的特效,音乐以及目标图像,得到所述第二视频;所述目标图像至少包括:所述第一图像以及所述第一图像的前、后多帧图像。
  18. 一种视频处理方法,其特征在于,应用于电子设备,所述视频处理方法,包括:
    响应于第一操作,显示第一界面并开始拍摄第一视频,所述第一界面为拍摄所述第一视频时的预览界面,所述第一界面中包括第一控件,所述第一控件用于拍摄图像;
    响应于第二操作,在拍摄所述第一视频的过程中拍摄并保存第一图像;所述第二操作为针对所述第一控件的触控操作;
    在完成所述第一视频的拍摄后,响应于第三操作,显示第二界面,所述第二界面是所述第一视频的详情界面,所述第二界面包括第一区域、第二区域和第一控件,或者所述第二界面包括所述第一区域和所述第二区域,或者所述第二界面包括所述第一区域和所述第一控件;
    所述第一区域为所述第一视频的播放区,所述第二区域显示所述第一视频的封面缩略图和第一图像的缩略图,所述第一控件用于控制所述电子设备生成第二视频,所述第二视频的时长小于所述第一视频,所述第二视频至少包括所述第一视频的图像。
  19. 根据权利要求18所述的视频处理方法,其特征在于,所述第二区域还显示其他一帧或者多帧图像的缩略图,所述其他一帧或者多帧图像为所述第一视频中的图像,所述第一图像和所述其他一帧或者多帧图像的数量的总和大于或者等于预设个数,所述预设个数为所述电子设备在拍摄所述第一视频的过程中自动识别的第二图像的数量。
  20. 根据权利要求19所述的视频处理方法,其特征在于,所述第二视频至少包括以下图像中的一帧或多帧图像:所述第一图像,所述其他一帧或者多帧图像。
  21. 根据权利要求18至20中任意一项所述的视频处理方法，其特征在于，还包括：
    响应于第四操作,显示第三界面,所述第四操作为对所述第一控件的触控操作,所述第三界面为所述第二视频的展示界面。
  22. 根据权利要求18至21任意一项所述的视频处理方法,其特征在于,在完成所述第一视频的拍摄之后,所述方法还包括:
    显示所述电子设备的第一拍摄界面,所述拍摄界面包括第一选项和第二选项,所述第一选项用于指示拍照模式,所述第二选项用于指示录像模式;所述第一拍摄界面为拍摄图像时的预览界面;
    响应于对所述拍摄界面的第二控件的操作,显示所述电子设备的第一拍摄界面,所述第二控件用于启动拍照;
    响应于对所述第二选项的操作,显示所述电子设备的第二拍摄界面,所述第二拍摄界面包括第一对话框,所述第一对话框用于向用户指示一录多得的功能内容,所述第二拍摄界面为拍摄视频时的预览界面。
  23. 根据权利要求18至22中任意一项所述的视频处理方法,其特征在于,在完成所述第一视频的拍摄之后,还包括:
    响应于第六操作,显示第三界面,所述第三界面为图库应用的界面,所述第三界面包括:第一文件夹和第二文件夹,所述第一文件夹至少包括所述第一图像,所述第二文件夹包括所述第二图像和第三图像,或者所述第二文件夹包括所述第二图像;
    响应于第七操作,显示第四界面,所述第四界面包括所述第二图像的缩略图和所述第三图像的缩略图,或者包括所述第二图像的缩略图,所述第七操作为对所述第二文件夹的触控操作。
  24. 一种电子设备,其特征在于,包括:
    一个或多个处理器、存储器,摄像头和显示屏;
    所述存储器、所述摄像头和所述显示屏与所述一个或多个所述处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述一个或多个处理器执行所述计算机指令时,所述电子设备执行如权利要求1至17任意一项所述的视频处理方法,或者如权利要求18至23中任意一项所述的视频处理方法。
  25. 一种计算机可读存储介质,其特征在于,用于存储计算机程序,所述计算机程序被电子设备执行时,使得所述电子设备实现如权利要求1至17任意一项所述的视频处理方法,或者如权利要求18至23中任意一项所述的视频处理方法。
PCT/CN2022/138960 2022-02-28 2022-12-14 视频处理方法、电子设备及可读介质 WO2023160142A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22905479.6A EP4258675A1 (en) 2022-02-28 2022-12-14 Video processing method, and electronic device and readable medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210187220.4A CN116708649A (zh) 2022-02-28 2022-02-28 视频处理方法、电子设备及可读介质
CN202210187220.4 2022-02-28

Publications (1)

Publication Number Publication Date
WO2023160142A1 true WO2023160142A1 (zh) 2023-08-31

Family

ID=87764603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/138960 WO2023160142A1 (zh) 2022-02-28 2022-12-14 视频处理方法、电子设备及可读介质

Country Status (3)

Country Link
EP (1) EP4258675A1 (zh)
CN (1) CN116708649A (zh)
WO (1) WO2023160142A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170289617A1 (en) * 2016-04-01 2017-10-05 Yahoo! Inc. Computerized system and method for automatically detecting and rendering highlights from streaming videos
CN110557566A (zh) * 2019-08-30 2019-12-10 维沃移动通信有限公司 视频拍摄方法及电子设备
CN111061912A (zh) * 2018-10-16 2020-04-24 华为技术有限公司 一种处理视频文件的方法及电子设备
CN114827342A (zh) * 2022-03-15 2022-07-29 荣耀终端有限公司 视频处理方法、电子设备及可读介质
CN115002340A (zh) * 2021-10-22 2022-09-02 荣耀终端有限公司 一种视频处理方法和电子设备

Also Published As

Publication number Publication date
EP4258675A1 (en) 2023-10-11
CN116708649A (zh) 2023-09-05

Similar Documents

Publication Publication Date Title
WO2021104508A1 (zh) 一种视频拍摄方法与电子设备
CN115002340B (zh) 一种视频处理方法和电子设备
CN113556461A (zh) 一种图像处理方法及相关装置
WO2023173850A1 (zh) 视频处理方法、电子设备及可读介质
CN112532865B (zh) 慢动作视频拍摄方法及电子设备
WO2023020006A1 (zh) 基于可折叠屏的拍摄控制方法及电子设备
WO2022068511A1 (zh) 视频生成方法和电子设备
WO2023160241A1 (zh) 一种视频处理方法及相关装置
US20230367464A1 (en) Multi-Application Interaction Method
WO2023134583A1 (zh) 视频录制方法、装置及电子设备
WO2022252649A1 (zh) 一种视频的处理方法及电子设备
WO2023160142A1 (zh) 视频处理方法、电子设备及可读介质
WO2022262536A1 (zh) 一种视频处理方法及电子设备
WO2022160965A1 (zh) 一种视频处理方法及电子设备
CN115734032A (zh) 视频剪辑方法、电子设备及存储介质
CN116033261B (zh) 一种视频处理方法、电子设备、存储介质和芯片
WO2022237317A1 (zh) 显示方法及电子设备
EP4199492A1 (en) Video processing method and electronic device
WO2023065832A1 (zh) 视频的制作方法和电子设备
CN115037872B (zh) 视频处理方法和相关装置
WO2023160143A1 (zh) 浏览多媒体内容的方法及装置
US20240040066A1 (en) Method for displaying prompt text and electronic device
WO2023226699A1 (zh) 录像方法、装置及存储介质
WO2023226695A1 (zh) 录像方法、装置及存储介质
WO2023015959A1 (zh) 拍摄方法及电子设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022905479

Country of ref document: EP

Effective date: 20230621