WO2020172826A1 - Video processing method and mobile device

Video processing method and mobile device

Info

Publication number
WO2020172826A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
video file
target object
frames
frame
Prior art date
Application number
PCT/CN2019/076360
Other languages
English (en)
French (fr)
Inventor
刘东淼
马崇晓
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2019/076360 (WO2020172826A1)
Priority to CN201980093133.9A (CN113475092B)
Publication of WO2020172826A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Definitions

  • This application relates to the field of terminal technology, and in particular to a video processing method and a mobile device.
  • The photo and video recording functions of the camera application in a mobile phone have become some of the most frequently used functions, and users increasingly prefer to use their phones to capture more interesting images (or videos).
  • FIG. 1 is a schematic diagram of the viewfinder interface of a camera application in a mobile phone in the prior art.
  • Two video recording modes are included in the viewfinder interface.
  • One is slow motion shooting and the other is time-lapse shooting.
  • For example, in slow-motion shooting the mobile phone captures 480 frames per second and records for 2 s, so the resulting video contains 960 frames; when the phone plays the video at 30 frames per second, playing the 960 frames takes 32 s, that is, the phone presents the content to the user slowly (the recorded 2-second video plays back over 32 seconds).
  • In time-lapse shooting, the mobile phone captures 15 frames per second and records for 2 seconds, so the resulting video contains 30 frames.
  • When the phone plays that video at 30 fps, it takes only 1 second to finish, so the content is presented to the user quickly (the recorded 2-second video plays back in 1 second). This arithmetic is sketched below.
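  • For illustration only, here is a minimal sketch of the capture-rate arithmetic above; the helper name is ours, not from the application.

```python
def playback_duration_s(capture_fps: float, record_s: float, playback_fps: float) -> float:
    """Seconds needed to play back a clip: total captured frames divided by the playback rate."""
    total_frames = capture_fps * record_s
    return total_frames / playback_fps


# Slow motion from the text: capture at 480 fps for 2 s, play at 30 fps -> 32 s.
assert playback_duration_s(480, 2, 30) == 32.0
# Time-lapse from the text: capture at 15 fps for 2 s, play at 30 fps -> 1 s.
assert playback_duration_s(15, 2, 30) == 1.0
```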
  • This application provides a video processing method and a mobile device, which can make different content in the video play at different playback speeds.
  • In a first aspect, this application provides a video processing method that can be executed by an electronic device with a display screen (such as a mobile phone, iPad, or notebook computer). The method includes: detecting a first operation for playing a video file, the video file being stored in the electronic device; in response to the first operation, playing the video file, where the playback speed of a first target object in the video file is greater than the playback speed of objects in the video file other than the first target object; detecting a second operation for changing the target object; and, in response to the second operation, continuing to play the video file, where during continued playback the playback speed of a second target object in the video file is greater than the playback speed of objects other than the second target object, the second target object being at least one object in the video file that is different from the first target object.
  • That is, a target object in the video file can be played quickly, and the target object can be switched. For example, one target object in the video file is played quickly while the other objects are played normally; after the target object is switched, another target object in the video file is played quickly while the other objects are played normally. In this way, the user can quickly watch the objects of interest in the video file, which adds interest to video playback and improves the user experience.
  • In one possible design, the first target object is a preset target object; or an object automatically determined by the electronic device from multiple objects in the video file; or an object determined by the electronic device according to a user's selection operation on a frame of image in the video file.
  • Likewise, the second target object is a preset target object; or an object automatically determined by the electronic device from multiple objects in the video file; or an object determined by the electronic device according to a user's selection operation on a frame of image in the video file.
  • That is, the target object in the video file may be preset, automatically determined by the electronic device, or selected by the user. In this way, the user can quickly watch the objects of interest in the video file, which adds interest to video playback and improves the user experience.
  • In one possible design, the playback speed of the first target object or the second target object is a preset playback speed; or the playback speed of the first target object or the second target object is determined by the electronic device according to a user's selection operation on a frame of image in the video file.
  • That is, the playback speed of the target object in the video file may be preset or selected by the user. In this way, the user can quickly watch the objects of interest in the video file, which adds interest to video playback and improves the user experience.
  • In one possible design, the second target object being an object determined by the electronic device according to a user's selection operation on a frame of image in the video file includes: detecting at least one click operation on the frame of image and determining the object at the position of each click operation to be the second target object; or detecting at least one circle-selection operation on the frame of image and determining the objects within the area enclosed by each circle-selection operation to be the second target object; or detecting an operation of selecting at least one piece of target identification information from the identification information of the objects on the frame of image and determining the object corresponding to each piece of target identification information to be the second target object.
  • It may be understood that there are many ways for the user to select the target object, such as clicking an object on a frame of the video file, circling the object, or selecting the object by its identifier, which simplifies the user's operation and improves the user experience. A sketch of one possible click-to-object mapping follows.
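  • For illustration only: the application does not specify how a click position is mapped to an object, so the sketch below assumes an object detector has already produced labeled bounding boxes for the tapped frame; all names here are hypothetical.

```python
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels


def object_at_tap(tap_xy: Tuple[int, int],
                  detections: List[Tuple[str, Box]]) -> Optional[str]:
    """Return the label of the detected object whose box contains the tap, if any.

    `detections` is assumed to come from some object detector run on the tapped
    frame; the patent text does not prescribe a particular detector.
    """
    x, y = tap_xy
    hits = [(label, box) for label, box in detections
            if box[0] <= x <= box[2] and box[1] <= y <= box[3]]
    if not hits:
        return None
    # If boxes overlap, prefer the smallest box (the most specific object).
    hits.sort(key=lambda lb: (lb[1][2] - lb[1][0]) * (lb[1][3] - lb[1][1]))
    return hits[0][0]


# Example: a tap at (120, 200) lands inside the "person" box.
dets = [("person", (100, 150, 300, 600)), ("car", (400, 300, 800, 550))]
print(object_at_tap((120, 200), dets))  # -> "person"
```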
  • In one possible design, the playback speed of the second target object being determined by the electronic device according to a user's selection operation on a frame of image in the video file includes: in response to an operation that triggers display of playback speed options, displaying multiple playback speed options; detecting a selection operation on a target playback speed option among the multiple options; and determining the playback speed corresponding to the target playback speed option.
  • That is, the electronic device can offer the user multiple playback speed options for the target object, and the user selects a speed from among them, which gives the user a choice and improves the user experience.
  • In one possible design, before playing the video file, the electronic device extracts one original frame every M frames from the N original frames of the video file, and from the extracted frames extracts K frames that contain the first target object, where N is an integer greater than or equal to 2, M is an integer greater than or equal to 1 and less than N, and K is an integer greater than or equal to 1 and less than N; the first target object is at least one object on the extracted frames. The first target object in the N original frames is covered with a background, so that the background-covered N frames no longer include the first target object. The K extracted frames that contain the first target object are fused one to one with K of the background-covered original frames to obtain K new frames, the K background-covered frames being consecutive. The K new frames and the remaining N-K background-covered frames are combined into a first target video file, and the electronic device playing the video file includes playing the first target video file.
  • For example, the electronic device extracts one original frame every 4 frames from 11 original frames, extracting 3 original frames in total (that is, the 1st, 5th, and 9th original frames).
  • When extracting frames, the mobile phone 100 may start from the 1st frame, the 2nd frame, and so on, as long as enough subsequent frames remain when extraction starts from the i-th frame.
  • The electronic device extracts the 3 frames that contain the first target object from the extracted frames, and then covers the first target object in the 11 original frames with a background (only the original frames that contain the first target object need to be covered with the background; the original frames that do not contain it need not be covered), so that the background-covered frames no longer include the first target object.
  • The background can be the background of the original frame, or a generic background, which is not limited here.
  • The electronic device fuses the 3 extracted frames that contain the first target object one to one with 3 of the 11 background-covered original frames to obtain 3 new frames.
  • The 3 chosen frames among the 11 background-covered original frames are consecutive; for example, the first 3 of the 11 background-covered frames may be used.
  • The electronic device combines the 3 new frames and the remaining 8 background-covered frames into the first target video file; when the first target video file is played, the first target object plays quickly. In this way, through a series of processing steps, the electronic device can synthesize a new video from the N original frames of the video file and then play the new video, during which the first target object plays quickly. A sketch of this processing pipeline follows.
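  • The following is a minimal, illustrative sketch of the pipeline described above, not the application's actual implementation. It assumes per-frame segmentation masks of the first target object are already available (the text does not specify how the object is segmented) and that a background image is supplied; all function and variable names are our own.

```python
import numpy as np
from typing import List


def synthesize_fast_target(frames: List[np.ndarray],
                           masks: List[np.ndarray],
                           background: np.ndarray,
                           m: int = 4) -> List[np.ndarray]:
    """Sketch of the first-aspect processing for N original frames.

    frames:     N original HxWx3 frames of the video file.
    masks:      N boolean HxW masks of the first target object (assumed given).
    background: an HxWx3 image used to cover the target object; the text says this
                may be the original background or a generic one.
    m:          extract one frame every m frames (the "M" of the text).
    """
    n = len(frames)
    # 1. Extract one original frame every m frames; keep those containing the target (K frames).
    extracted = [i for i in range(0, n, m) if masks[i].any()]
    # 2. Cover the first target object with the background in every original frame.
    covered = []
    for frame, mask in zip(frames, masks):
        out = frame.copy()
        out[mask] = background[mask]
        covered.append(out)
    # 3. Fuse the K extracted target cutouts onto K consecutive background-covered frames
    #    (here: the first K of them), producing K new frames.
    new_frames = []
    for j, src in enumerate(extracted):
        fused = covered[j].copy()
        fused[masks[src]] = frames[src][masks[src]]
        new_frames.append(fused)
    # 4. Combine the K new frames with the remaining N-K background-covered frames.
    return new_frames + covered[len(new_frames):]
```

  • Playing the returned frames at the normal frame rate shows the target object completing its whole trajectory within the first K frames while the rest of the scene plays at normal speed, which is the fast-play effect described above.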
  • In one possible design, before continuing to play the video file, the electronic device may also extract one original frame every P frames from the N original frames of the video file, and from the extracted frames extract Q frames that contain the second target object, where N is an integer greater than or equal to 2, P is an integer greater than or equal to 1 and less than N, and Q is an integer greater than or equal to 1 and less than N; the second target object is at least one object on the extracted frames. The second target object in the N original frames is covered with a background, so that the background-covered N frames no longer include the second target object. The Q extracted frames that contain the second target object are fused one to one with Q of the background-covered original frames to obtain Q new frames, the Q background-covered frames being consecutive. The Q new frames and the remaining N-Q background-covered frames are combined into a second target video file.
  • For example, the electronic device extracts one frame every 4 frames from the 11 original frames, extracting 3 original frames in total (that is, the 1st, 5th, and 9th original frames).
  • When extracting frames, the mobile phone 100 may start from the 1st frame, the 2nd frame, and so on, as long as enough subsequent frames remain when extraction starts from the i-th frame.
  • The electronic device extracts the 3 frames that contain the second target object from the extracted frames, and then covers the second target object in the 11 original frames with a background (only the original frames that contain the second target object need to be covered with the background; the original frames that do not contain it need not be processed), so that the background-covered frames no longer include the second target object.
  • The electronic device fuses the 3 extracted frames that contain the second target object one to one with 3 of the 11 background-covered original frames to obtain 3 new frames.
  • The 3 chosen frames among the 11 background-covered original frames are consecutive; for example, they may be the first 3 of the 11 background-covered frames.
  • The electronic device combines the 3 new frames and the remaining 8 background-covered frames into the second target video file; when the second target video file is played, the second target object plays quickly. In this way, through a series of processing steps, the electronic device can synthesize a new video from the N original frames of the video file and then play the new video, during which the second target object plays quickly. In other words, when the electronic device is playing a video file and detects the operation of switching the target object, it can process all the frames in the video file to obtain a new video and then play the new video, during which the switched target object plays quickly.
  • In one possible design, the electronic device may also extract one original frame every P frames from the W original frames that follow the frame currently being played in the video file, and from the extracted frames extract Q frames that contain the second target object, where W is an integer greater than or equal to 2, P is an integer greater than or equal to 1 and less than W, and Q is an integer greater than or equal to 1 and less than W; the second target object is at least one object on the extracted frames. The second target object in the W original frames is covered with a background, so that the background-covered W frames no longer include the second target object. The Q extracted frames that contain the second target object are fused one to one with Q of the background-covered original frames to obtain Q new frames, the Q background-covered frames being consecutive. The Q new frames and the remaining W-Q background-covered frames are combined into the second target video file.
  • For example, the electronic device is playing a video file that includes 11 original frames, and the first target object in the video file is playing quickly.
  • Assuming the electronic device is playing the 3rd original frame when it detects the operation for switching the target object, it can process the current frame plus the subsequent unplayed frames (that is, the 3rd to 11th original frames), or only the subsequent frames (that is, the 4th to 11th original frames).
  • That is, the electronic device can start from the 4th frame, extract frames, extract the target object, and so on, to obtain a new video and then play the new video; during playback of the new video, the switched target object plays quickly.
  • It should be noted that the electronic device can determine whether enough subsequent frames remain; if not, it can start extraction from the 1st frame. For example, a video file includes 11 original frames; when the electronic device has played to the 10th frame and detects the operation of switching the target object, only 1 subsequent frame remains, so the mobile phone 100 can start from the 1st frame. This decision is sketched below.
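  • A minimal sketch of that decision, assuming a hypothetical threshold `min_needed` for how many frames are required (the text does not give a concrete threshold); indices are 1-based to match the example above, and the sketch shows the "subsequent frames only" variant.

```python
def frames_to_process(current_index: int, total_frames: int, min_needed: int) -> range:
    """Decide which original frames to reprocess when the target is switched mid-playback.

    Prefer the frames after the one currently playing; if too few remain,
    start over from frame 1.
    """
    remaining = total_frames - current_index
    if remaining >= min_needed:
        return range(current_index + 1, total_frames + 1)  # subsequent frames only
    return range(1, total_frames + 1)                      # not enough left: restart at frame 1


# From the text: 11-frame video, switch detected while frame 10 is playing,
# only 1 later frame remains, so processing starts again from frame 1.
print(list(frames_to_process(10, 11, min_needed=2)))  # -> [1, 2, ..., 11]
print(list(frames_to_process(3, 11, min_needed=2)))   # -> [4, 5, ..., 11]
```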
  • In a second aspect, this application also provides a video processing method that can be executed by an electronic device with a display screen (such as a mobile phone, iPad, or notebook computer). The method includes: in response to a first operation, playing the video file, where the playback speed of first content in a first area of the playback interface is greater than the playback speed of second content in a second area, the second area being the area of the playback interface other than the first area, and the first content and the second content both belonging to the playback content of the video file; detecting a second operation for switching the fast-play area; and, in response to the second operation, continuing to play the video file, where during continued playback the playback speed of third content in a third area of the playback interface is greater than the playback speed of fourth content in a fourth area, the fourth area being the area of the playback interface other than the third area, and the third content and the fourth content both belonging to the playback content of the video file.
  • That is, the content in the first area (or target area) of the video file can be played quickly, and the first area can be switched while the video file is playing. For example, the content in one area of the video file is currently played quickly while the content in the other areas is played normally; after the area is switched, the content in another area of the video file is played quickly while the content in the other areas is played normally. In this way, the content in a certain area of the video file can be quickly presented to the user, which enhances the fun of video playback and improves the user experience.
  • In one possible design, the first area or the third area is a preset area; or an area automatically determined by the electronic device according to multiple objects in the video file; or an area determined by the electronic device according to a user's selection operation on a frame of the video file.
  • That is, the area used for fast playback in the video file may be preset, automatically determined by the electronic device, or selected by the user. In this way, the content in a certain area of the video file can be quickly presented to the user, which enhances the fun of video playback and improves the user experience.
  • In one possible design, the playback speed of the content in the first area or the third area is a preset playback speed; or the playback speed of the content in the first area or the third area is determined by the electronic device according to a user's selection operation on a frame of the video file.
  • That is, the playback speed of the area used for fast playback in the video file may be preset, automatically determined by the electronic device, or selected by the user. In this way, the content in a certain area of the video file can be quickly presented to the user, which enhances the fun of video playback and improves the user experience.
  • In one possible design, the first area or the third area being an area determined by the electronic device according to a user's selection operation on a frame of the video file includes: detecting at least one circle-selection operation on the frame of image and determining that the area enclosed by the at least one circle-selection operation is part or all of the determined area.
  • That is, the user can circle an area on a frame of the video file, and the objects in that area can be played quickly.
  • This gives the user the opportunity to select the fast-play area, that is, the area used for fast playback is determined according to the user's needs, which improves the user experience.
  • In one possible design, the playback speed of the content in the third area being determined by the electronic device according to a user's selection operation on a frame of the video file includes: in response to an operation that triggers display of playback speed options, displaying multiple playback speed options; detecting a selection operation on a target playback speed option among the multiple options; and determining the playback speed corresponding to the target playback speed option.
  • That is, the user can select the playback speed of the area used for fast playback, which improves the user experience.
  • the content in a certain area of the video file can be quickly presented to the user, which enhances the fun of video playback and enhances the user experience.
  • In one possible design, after the content in the fast-playing area has finished playing, the video file stops playing, or the last frame of the fast-playing area remains displayed while the other areas continue to play.
  • the content in a certain area of the video file can be quickly presented to the user, which enhances the fun of video playback and enhances the user experience.
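  • Purely illustrative: the application itself synthesizes a new video file rather than compositing at playback time, but the following sketch shows one way the display behavior just described (the fast area holding its last frame while the rest keeps playing) could be rendered; the crop-based compositing and all names are our own assumptions.

```python
import numpy as np
from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # (x0, y0, x1, y1) of the fast-playing area


def composite_playback(normal_frames: List[np.ndarray],
                       fast_area_frames: List[np.ndarray],
                       region: Rect) -> List[np.ndarray]:
    """Paste the fast area's content over the normally playing frames.

    `fast_area_frames` is the shorter sequence shown in the fast-playing area; each
    entry must already have the region's height and width. Once that sequence is
    exhausted, its last frame is held on screen while the other areas keep playing.
    Assumes fast_area_frames is non-empty.
    """
    x0, y0, x1, y1 = region
    out = []
    for i, frame in enumerate(normal_frames):
        composed = frame.copy()
        j = min(i, len(fast_area_frames) - 1)  # clamp to the last fast-area frame
        composed[y0:y1, x0:x1] = fast_area_frames[j]
        out.append(composed)
    return out
```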
  • In one possible design, the electronic device may also extract one original frame every M frames from the N original frames of the video file, extracting K original frames, where N is an integer greater than or equal to 2, M is an integer greater than or equal to 1 and less than N, and K is an integer greater than or equal to 1 and less than N. A first image is extracted from each extracted frame, giving K first images in total; the first image is the first content in the first area of each extracted original frame. The first image in the first area of each of the N original frames is covered with a background, so that none of the background-covered N frames includes the first image. The K extracted first images are filled into the first area of K of the background-covered original frames to obtain K new frames, the K background-covered frames being consecutive.
  • For example, the electronic device can extract one frame every 4 frames from the 11 original frames, extracting 3 original frames in total (that is, the 1st, 5th, and 9th original frames).
  • When extracting frames, the mobile phone 100 may start from the 1st frame, the 2nd frame, and so on, as long as enough subsequent frames remain when extraction starts from the i-th frame.
  • The electronic device extracts the first image from each extracted frame (that is, the content in the first area of each extracted original frame), and then covers the first image in each of the 11 original frames with a background, so that the 11 background-covered frames do not include the first image.
  • The electronic device fuses the 3 extracted first images one to one with 3 of the 11 background-covered original frames to obtain 3 new frames.
  • The 3 chosen frames among the 11 background-covered original frames are consecutive; for example, the first 3 of the 11 background-covered frames may be used.
  • The electronic device combines the 3 new frames and the remaining 8 background-covered frames into the first target video file.
  • When the first target video file is played, the content of the first area plays quickly. A region-based sketch of this processing follows.
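  • A minimal, illustrative sketch of the region-based variant, assuming for simplicity that the first area is an axis-aligned rectangle and that a background image is supplied; the text also allows a user-circled area, and all names here are our own.

```python
import numpy as np
from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # (x0, y0, x1, y1) of the first area


def synthesize_fast_region(frames: List[np.ndarray],
                           background: np.ndarray,
                           region: Rect,
                           m: int = 4) -> List[np.ndarray]:
    """Sketch of the region-based processing: the content of `region` plays fast."""
    x0, y0, x1, y1 = region
    n = len(frames)
    # 1. Extract one frame every m frames and take the first-area crop of each (K crops).
    crops = [frames[i][y0:y1, x0:x1].copy() for i in range(0, n, m)]
    # 2. Cover the first area of every original frame with the background.
    covered = []
    for f in frames:
        out = f.copy()
        out[y0:y1, x0:x1] = background[y0:y1, x0:x1]
        covered.append(out)
    # 3. Fill the K crops into the first area of the first K covered frames (K new frames).
    for j, crop in enumerate(crops):
        covered[j] = covered[j].copy()
        covered[j][y0:y1, x0:x1] = crop
    # 4. The K new frames plus the remaining N-K covered frames form the target video file.
    return covered
```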
  • In one possible design, before continuing to play the video file, the electronic device may also extract one original frame every P frames from the N original frames of the video file, extracting Q original frames, where N is an integer greater than or equal to 2, P is an integer greater than or equal to 1 and less than N, and Q is an integer greater than or equal to 1 and less than N. A third image is extracted from each extracted frame, giving Q third images in total; the third image is the third content in the third area of each extracted original frame. The third image in the third area of each of the N original frames is covered with a background, so that none of the background-covered N frames includes the third image. The Q extracted third images are filled into the third area of Q of the background-covered original frames to obtain Q new frames, the Q background-covered frames being consecutive.
  • For example, the electronic device can extract one frame every 4 frames from the 11 original frames, extracting 3 original frames in total (that is, the 1st, 5th, and 9th original frames).
  • When extracting frames, the mobile phone 100 may start from the 1st frame, the 2nd frame, and so on, as long as enough subsequent frames remain when extraction starts from the i-th frame.
  • The electronic device extracts the third image from each extracted frame (that is, the content in the third area of each extracted original frame), and then covers the third image in each of the 11 original frames with a background, so that the 11 background-covered frames do not include the third image.
  • The electronic device fuses the 3 extracted third images one to one with 3 of the 11 background-covered original frames to obtain 3 new frames.
  • The 3 chosen frames among the 11 background-covered original frames are consecutive; for example, the first 3 of the 11 background-covered frames may be used.
  • The electronic device combines the 3 new frames and the remaining 8 background-covered frames into the second target video file.
  • When the second target video file is played, the content of the third area plays quickly.
  • That is, when the electronic device detects the operation of switching the target area, it can process all the frames in the video file to obtain a new video and then play the new video; during playback of the new video, the content of the switched target area, that is, the third area, plays quickly.
  • In one possible design, the electronic device may also extract one original frame every P frames from the W original frames that follow the frame currently being played in the video file, extracting Q original frames, where W is an integer greater than or equal to 2, P is an integer greater than or equal to 1 and less than W, and Q is an integer greater than or equal to 1 and less than W. A third image is extracted from each extracted frame, giving Q third images in total; the third image is the third content in the third area of each extracted original frame. The third image in the third area of each of the W original frames is covered with a background, so that none of the background-covered W frames includes the third image. The Q extracted third images are filled into the third area of Q of the background-covered original frames to obtain Q new frames, the Q background-covered frames being consecutive. The Q new frames and the remaining W-Q background-covered frames are combined into the second target video file.
  • For example, the electronic device is playing a video file that includes 11 original frames, and the content in the first area of the video file is playing quickly. Assuming the electronic device is playing the 3rd original frame when it detects the operation for switching the target area, it can process the current frame plus the subsequent frames (that is, the 3rd to 11th original frames), or only the subsequent frames (that is, the 4th to 11th original frames).
  • That is, the electronic device can start from the 4th frame, extract frames, extract the area content, and so on, to obtain a new video and then play the new video; during playback of the new video, the content of the switched target area, that is, the third area, plays quickly.
  • In a third aspect, this application also provides a video processing method, which is applied to an electronic device with a camera and a display screen, such as a mobile phone or a pad.
  • The method includes: detecting a first operation for starting the camera; in response to the first operation, displaying on the display screen a viewfinder interface of the camera application, the viewfinder interface including a preview image that includes at least one object; in response to a second operation, capturing N original frames with the camera, N being an integer greater than or equal to 2; extracting one original frame every M frames from the N original frames, and extracting from the extracted frames K frames that contain the first target object, M being an integer greater than or equal to 1 and less than N and K being an integer greater than or equal to 1 and less than N, the first target object being at least one object on the extracted frames; covering the first target object in the N original frames with a background, so that the background-covered N frames do not include the first target object; fusing the K extracted frames that contain the first target object one to one with K of the background-covered original frames to obtain K new frames, the K background-covered frames being consecutive; and combining the K new frames and the remaining N-K background-covered frames into a target video file.
  • For example, the electronic device captures 11 original frames and extracts one frame every 4 frames from the 11 original frames, extracting 3 original frames in total (that is, the 1st, 5th, and 9th original frames).
  • When extracting frames, the mobile phone 100 may start from the 1st frame, the 2nd frame, and so on, as long as enough subsequent frames remain when extraction starts from the i-th frame.
  • The electronic device extracts the 3 frames that contain the first target object from the extracted frames, and then covers the first target object in the 11 original frames with a background (only the original frames that contain the first target object need to be covered with the background; the original frames that do not contain it need not be covered), so that the background-covered frames no longer include the first target object.
  • The electronic device fuses the 3 extracted frames that contain the first target object one to one with 3 of the 11 background-covered original frames to obtain 3 new frames.
  • The 3 chosen frames among the 11 background-covered original frames are consecutive; for example, the first 3 of the 11 background-covered frames may be used.
  • The electronic device combines the 3 new frames and the remaining 8 background-covered frames into the target video file; when the target video file is played, the first target object plays quickly.
  • In this way, the video file recorded by the electronic device is a specially processed video file; when it is played, the target object plays quickly, which helps enhance the fun of video recording and improves the user experience.
  • In one possible design, the first target object is a preset target object; or the first target object is an object automatically determined by the electronic device according to at least one object to be photographed in the preview image; or the first target object is an object determined by the electronic device according to a user's selection operation on the preview image.
  • That is, the first target object may be preset, automatically determined by the electronic device, or selected by the user.
  • In this way, the electronic device can record a specially processed video file, and when the video file is played, the object the user is interested in plays quickly.
  • In one possible design, the playback speed of the first target object is a preset playback speed; or the playback speed of the first target object is determined by the electronic device according to the user's selection operation on the preview image.
  • That is, the playback speed of the first target object may be preset or selected by the user.
  • In this way, the electronic device can record a specially processed video file, and when the video file is played, the object the user is interested in plays quickly.
  • In one possible design, the first target object being an object determined by the electronic device according to a user's selection operation on the preview image includes: detecting at least one click operation on the preview image and determining the object at the position of each click operation to be the first target object; or detecting at least one circle-selection operation on the preview image and determining the objects within the area enclosed by each circle-selection operation to be the first target object; or detecting an operation of selecting at least one piece of target identification information from the identification information of the objects on the preview image and determining the object corresponding to each piece of target identification information to be the first target object.
  • That is, the user can select the target object in the viewfinder interface, for example by clicking on the preview image or circling the target object. In this way, the electronic device can record a specially processed video file in which the object the user is interested in, that is, the target object, plays quickly.
  • In one possible design, the target video file is saved, and an identifier is displayed on the cover of the target video file; the identifier indicates that the playback speed of the first target object in the target video file is greater than the playback speed of other objects, the other objects being objects in the target video file other than the first target object.
  • That is, an identifier may be displayed on the cover of the video file to indicate that the video file has undergone special processing, or to indicate that the first target object in the video file plays quickly, which makes it easier for the user to understand the video file and to find it, and helps improve the user experience.
  • In a fourth aspect, this application also provides a video processing method applied to an electronic device having a camera and a display screen. The method includes: detecting a first operation for starting the camera; in response to the first operation, displaying on the display screen the viewfinder interface of the camera application, the viewfinder interface including a preview image that includes at least one object; in response to a second operation, capturing N original frames with the camera, N being an integer greater than or equal to 2; extracting one original frame every M frames from the N original frames, extracting K original frames, M being an integer greater than or equal to 1 and less than N and K being an integer greater than or equal to 1 and less than N; extracting a first image from each extracted frame, giving K first images in total, the first image being the image in the first area of each extracted original frame; covering the first image in the first area of each of the N original frames with a background, so that none of the background-covered N frames includes the first image; filling the K extracted first images into the first area of K of the background-covered original frames to obtain K new frames, the K background-covered frames being consecutive; and combining the K new frames and the remaining N-K background-covered frames into a target video file.
  • For example, the electronic device captures 11 original frames and extracts one frame every 4 frames from the 11 original frames, extracting 3 original frames in total (that is, the 1st, 5th, and 9th original frames).
  • When extracting frames, the mobile phone 100 may start from the 1st frame, the 2nd frame, and so on, as long as enough subsequent frames remain when extraction starts from the i-th frame.
  • The electronic device extracts the first image from each extracted frame (that is, the content in the first area of each extracted original frame), and then covers the first image in each of the 11 original frames with a background, so that the 11 background-covered frames do not include the first image.
  • The electronic device fuses the 3 extracted first images one to one with 3 of the 11 background-covered original frames to obtain 3 new frames.
  • The 3 chosen frames among the 11 background-covered original frames are consecutive; for example, the first 3 of the 11 background-covered frames may be used.
  • The electronic device combines the 3 new frames and the remaining 8 background-covered frames into the target video file.
  • When the target video file is played, the content in the first area plays quickly.
  • In this way, the video file recorded by the electronic device is a specially processed video file.
  • When the video file is played, the content of the first area plays quickly, which helps enhance the fun of video recording and improves the user experience.
  • In one possible design, the first area is a preset area; or the first area is an area automatically determined according to at least one object to be photographed in the preview image; or the first area is an area determined by the electronic device according to a user's selection operation on the preview image.
  • the first area may be preset, or automatically determined by the electronic device, or selected by the user on the preview image.
  • the video file recorded by the electronic device in this way is a specially processed video file. When the video file is playing, the content of the first area can be played quickly. In this way, it helps to enhance the fun of video recording and enhance user experience.
  • In one possible design, the playback speed of the content in the first area is a preset playback speed; or the playback speed of the content in the first area is determined by the electronic device according to the user's selection operation on the preview image.
  • the playback speed of the first area may be preset, or automatically determined by the electronic device, or selected by the user on the preview image. After the video file is recorded by the electronic device, when the video file is playing, the content of the first area can be played quickly. In this way, it helps to enhance the fun of video recording and enhance user experience.
  • In one possible design, the first area being an area determined by the electronic device according to a user's selection operation on the preview image includes: detecting at least one circle-selection operation on the preview image and determining that the area enclosed by each circle-selection operation is the first area.
  • the user can circle and select an area on the preview image in the viewfinder interface. After the electronic device records a video file, the content of the circled area can be quickly played when the video file is playing. In this way, it helps to enhance the fun of video recording and enhance user experience.
  • In one possible design, the electronic device saves the target video file, and an identifier is displayed on the cover of the target video file; the identifier indicates that the playback speed of the first area in the target video file is greater than the playback speed of other areas, the other area being an area in the target video file other than the first area.
  • That is, an identifier may be displayed on the cover of the video file to indicate that the video file is a specially processed video file, or to indicate that the first target object in the video file plays quickly, which makes it easier for the user to understand the video file and to find it, and helps improve the user experience.
  • In a fifth aspect, this application also provides an electronic device including a display screen, one or more processors, a memory, one or more application programs, and one or more programs, where the one or more programs are stored in the memory and include instructions. When the instructions are executed by the electronic device, the electronic device is caused to execute the first aspect or any possible design of the first aspect; or, when the instructions are executed by the electronic device, the electronic device is caused to execute the second aspect or any possible design of the second aspect.
  • In a sixth aspect, this application also provides an electronic device including a display screen, a camera, one or more processors, a memory, one or more application programs, and one or more programs, where the one or more programs are stored in the memory and include instructions. When the instructions are executed by the electronic device, the electronic device is caused to execute the third aspect or any possible design of the third aspect; or, when the instructions are executed by the electronic device, the electronic device is caused to execute the fourth aspect or any possible design of the fourth aspect.
  • In a seventh aspect, this application also provides an electronic device that includes modules/units for executing the first aspect or any possible design of the first aspect, or modules/units for executing the second aspect or any possible design of the second aspect.
  • These modules/units can be implemented by hardware, or by hardware executing corresponding software.
  • In an eighth aspect, this application also provides a computer-readable storage medium including a program. When the program runs on an electronic device, the electronic device is caused to execute the first aspect or any possible design of the first aspect; or, when the program runs on an electronic device, the electronic device is caused to execute the second aspect or any possible design of the second aspect; or, when the program runs on an electronic device, the electronic device is caused to execute the third aspect or any possible design of the third aspect; or, when the program runs on an electronic device, the electronic device is caused to execute the fourth aspect or any possible design of the fourth aspect.
  • In a ninth aspect, this application also provides a program product. When the program product runs on an electronic device, the electronic device is caused to execute the first aspect or any possible design of the first aspect; or, when the program product runs on an electronic device, the electronic device is caused to execute the second aspect or any possible design of the second aspect; or, when the program product runs on an electronic device, the electronic device is caused to execute the third aspect or any possible design of the third aspect; or, when the program product runs on an electronic device, the electronic device is caused to execute the fourth aspect or any possible design of the fourth aspect.
  • In a tenth aspect, this application also provides a graphical user interface on an electronic device, the electronic device having a display screen, a camera, a memory, and one or more processors configured to execute one or more computer programs stored in the memory. The graphical user interface includes the graphical user interface displayed when the electronic device executes the first aspect or any possible design of the first aspect; or the graphical user interface displayed when the electronic device executes the second aspect or any possible design of the second aspect; or the graphical user interface displayed when the electronic device executes the third aspect or any possible design of the third aspect; or the graphical user interface displayed when the electronic device executes the fourth aspect or any possible design of the fourth aspect.
  • the Xth operation involved in this application may be one operation or a combination of multiple operations.
  • the Xth operation includes the first operation, the second operation, and so on.
  • the Xth area involved in this application may be one area or a collection of multiple areas.
  • the Xth area includes the first area, the second area, the third area, or the fourth area, etc.
  • FIG. 1 is a schematic diagram of a viewfinder interface of a camera application of a mobile phone provided by this application;
  • FIG. 2A is a schematic diagram of slow playback, normal playback, and fast playback provided by an embodiment of this application;
  • FIG. 2B is a schematic diagram of slow playback, normal playback, and fast playback provided by an embodiment of this application;
  • FIG. 3 is a schematic structural diagram of a mobile phone 100 provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of the display interface of the mobile phone 100 provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 6 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 7 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 8 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 9 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 10 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 11 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 12A is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 12B is a schematic diagram of the display interface of the mobile phone 100 according to an embodiment of the application.
  • FIG. 12C is a schematic diagram of the display interface of the mobile phone 100 according to an embodiment of the application.
  • FIG. 13 is a schematic diagram of a video processing flow of the mobile phone 100 according to an embodiment of the application.
  • FIG. 14 is a schematic flowchart of a video processing method provided by an embodiment of this application.
  • FIG. 15 is a schematic diagram of a video processing flow of the mobile phone 100 according to an embodiment of the application.
  • FIG. 16 is a schematic diagram of a video processing flow of the mobile phone 100 according to an embodiment of the application.
  • FIG. 17 is a schematic diagram of a video processing flow of the mobile phone 100 according to an embodiment of the application.
  • FIG. 18 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 19 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 20 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 21 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 22 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 23 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 24 is a schematic diagram of a display interface of a mobile phone 100 according to an embodiment of the application.
  • FIG. 25 is a schematic flowchart of a video processing method provided by an embodiment of this application.
  • FIG. 26 is a schematic flowchart of a video processing method provided by an embodiment of this application.
  • FIG. 27 is a schematic flowchart of a video processing method provided by an embodiment of this application.
  • FIG. 28 is a schematic flowchart of a video processing method provided by an embodiment of this application.
  • the application program (application, app for short) involved in the embodiments of the present application is a software program that can implement one or more specific functions.
  • multiple applications can be installed on a mobile device.
  • The applications mentioned below can be applications already installed when the mobile device leaves the factory, or applications that the user downloads from the Internet or obtains from other mobile devices while using the mobile device.
  • The video processing method provided in the embodiments of the present application can be applied to any application program capable of playing videos.
  • the video playback frame rate involved in the embodiments of the present application is in units of frames per second (fps).
  • A video is composed of multiple frames of images, and video playback is the continuous switching of these frames; because the content of each frame differs, the switching presents an animated picture.
  • The normal playback frame rate of a video is related to the persistence of human vision. For example, at a normal playback frame rate of 24 fps, that is, 24 frames switched per second, the human eye already perceives continuous motion.
  • the video playback frame rate of a mobile device is usually set in the interval of 25fps-30fps.
  • the image acquisition frame rate involved in the embodiments of the present application is the speed of image acquisition, that is, how many frames of images are acquired per second, in units of frames/second.
  • Generally, the playback frame rate is less than or equal to the image capture frame rate.
  • For example, the frame rate at which the terminal device captures images is 60 fps, while the playback frame rate is 30 fps.
  • The terms normal playback, fast playback, and slow playback used in the embodiments of this application are relative.
  • Normal playback, fast playback, and slow playback all play images at the same video playback frame rate (for example, 30 fps).
  • What differs between normal playback, fast playback, and slow playback is the total number of frames played.
  • During normal playback, the total number of frames played is greater than the total number of frames played during fast playback and less than the total number of frames played during slow playback.
  • Example 1: Taking the mobile terminal as a mobile phone, the mobile phone stores a video file (such as a video file downloaded from the network side, a video file received from another device, or a video file recorded by the phone itself). Generally, when the mobile phone plays a video file, the frames in the video file are played one by one at a normal playback speed (for example, 30 fps).
  • FIG. 2A is a schematic diagram of fast playback and normal playback of the same video file provided in this embodiment of the application.
  • Assume the normal playback frame rate is 30 fps and the video file contains 1200 frames. During normal playback, no frames are dropped, so playing the 1200 frames takes 40 s.
  • Fast playback can extract one frame from every 4 frames of the 1200 frames to obtain 300 frames and play only those extracted 300 frames; because the playback frame rate is still 30 fps, fast playback takes 10 s (1/4 of the duration required for normal playback).
  • In other words, the mobile phone plays images at 30 fps in both cases, but during fast playback the phone plays only the extracted 300 frames, whereas during normal playback it plays all 1200 frames, so the total number of frames played during fast playback is less than the total number of frames played during normal playback.
  • Example 1 introduces the difference between fast playback and normal playback when the mobile phone has already obtained the video file.
  • In Example 1, for example a video file downloaded by the mobile phone from the network side, the number of frames contained in the video file is already fixed (for example, 1200 frames).
  • When a mobile phone plays a video file normally, each frame of the video file is played in sequence from the first frame until the last frame has been played.
  • The normal playback in FIG. 2A is to play the 1200 frames in sequence. Therefore, for such a downloaded video file, the mobile phone cannot play it slowly in this way, because slow playback requires more frames, and since the number of frames in the video is already fixed, no additional frames are available.
  • However, the mobile phone can play quickly in the manner shown in FIG. 2A, that is, extract part of the frames of the video file and play only the extracted frames to achieve fast playback.
  • In some embodiments, the mobile phone can also implement fast, slow, and normal playback of a downloaded video file in the manner shown in FIG. 2B.
  • Assume a video file includes 1200 frames and the normal playback frame rate is 30 fps.
  • For normal playback, one frame can be extracted from every 2 frames, giving 600 frames, so the time required is 20 s.
  • Slow playback may not extract frames at all but play the complete 1200 frames, so slow playback requires 40 s (twice the duration of normal playback).
  • Fast playback can extract one frame from every 4 frames, giving 300 frames, so fast playback requires 10 s (1/2 of the time required for normal playback).
  • That is, in this approach normal playback does not play the 1200 frames one by one but plays one frame out of every 2; in this way, both slow playback and fast playback can be achieved relative to it.
  • Therefore, when the mobile phone downloads a video from the network side, it can play it slowly, quickly, or normally in the manner shown in FIG. 2B. The arithmetic is sketched below.
• The above takes the case where the mobile phone has already obtained a video file and plays that video file as an example to introduce the difference between normal, fast, and slow playback. The following introduces the difference between normal, fast, and slow playback from another perspective, namely the perspective of the mobile phone recording video files.
• Example 2: Take the mobile terminal being a mobile phone and the process of recording a video on the mobile phone as an example.
• Assume that the normal image capture frame rate is equal to the normal playback frame rate (for example, both are 30 fps).
• If the mobile phone records for 1 second at the normal capture frame rate, the captured video has 30 frames, and the video plays for 1 s when played at 30 fps; this is normal playback.
• If the mobile phone collects images at a high frame rate (such as 60 fps) and records for 1 second, it captures 60 frames of images; when the mobile phone plays this video at 30 fps, the video plays for 2 seconds, that is, slow playback.
• If the mobile phone collects images at a low frame rate (such as 15 fps) and records for 1 second, it captures 15 frames of images; when the mobile phone plays this video at 30 fps, the video plays for 0.5 seconds, that is, fast playback.
• In other words, the mobile phone always plays the video at 30 fps, but because the image capture frame rate differs when the video file is recorded, the number of image frames contained in video files recorded over the same duration differs; the mobile phone therefore needs different durations to play the recorded video files, and different playback effects (fast or slow) are presented.
• Therefore, the image capture frame rate when the mobile phone records a video is related to fast playback or slow playback; this part of the content is introduced later.
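• As a minimal illustration of Example 2 (the function and values below are illustrative only), the playback duration follows directly from the number of captured frames divided by the fixed playback frame rate:

```python
# The recorded clip is always played back at 30 fps, so the capture frame rate
# alone decides whether playback looks slow, normal, or fast.
# Values mirror the examples in the text (60/30/15 fps, 1 s of recording).

def played_duration(capture_fps: float, record_seconds: float, playback_fps: float = 30.0) -> float:
    frames_captured = capture_fps * record_seconds
    return frames_captured / playback_fps

print(played_duration(60, 1))  # 2.0 s  -> slow playback (slow motion)
print(played_duration(30, 1))  # 1.0 s  -> normal playback
print(played_duration(15, 1))  # 0.5 s  -> fast playback (time-lapse)
```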
• It should be noted that the video file involved in the embodiments of the present application may be a video file downloaded by the mobile terminal from the network side, a video file recorded by the mobile terminal itself, or a video file loaded online from the network.
  • the video processing method provided in the embodiments of the present application may be applicable to video files of various formats, such as rmvb, avi, MP4, and other formats.
  • the video processing method provided in the embodiments of the present application may be applicable to videos obtained through any video coding method, such as a moving picture experts group (MPEG) and other video coding methods.
  • the mobile device may be a mobile phone, a tablet computer, a notebook computer, or a wearable device with wireless communication function (such as a smart watch or smart glasses).
  • the mobile device includes an image acquisition module capable of acquiring images or videos, and a device (such as an application processor, or, image processor, or other processor) that can run the image processing algorithm provided in the embodiment of the present application.
  • a device such as an application processor, or, image processor, or other processor
• Exemplary embodiments of the mobile device include, but are not limited to, devices carrying various operating systems.
  • the above-mentioned mobile device may also be other portable devices, as long as it can collect images or videos and run the image processing algorithm provided in the embodiments of the present application. It should also be understood that in some other embodiments of the present application, the above-mentioned mobile device may not be a portable mobile device, but a desktop computer capable of collecting images or videos and running the image processing algorithm provided in the embodiments of the present application.
  • the mobile device does not need to have an image acquisition function, but only needs to have the ability to run the image processing algorithm provided in the embodiment of this application, and the image processing algorithm provided in the embodiment of this application can be used.
  • FIG. 3 shows a schematic structural diagram of the mobile phone 100.
• The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, an audio module 191 (including a speaker, a receiver, a microphone, a headphone jack, etc., not shown in the figure), a sensor module 180, buttons 190, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
• The sensor module 180 can include a pressure sensor 180A, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a touch sensor 180K, etc. (the mobile phone 100 can also include other sensors, such as a temperature sensor, an ambient light sensor, a barometer, a gravity sensor, a gyroscope sensor, etc., not shown in the figure).
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
• For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
• In this embodiment, the processor 110 may run the code of the video playback algorithm provided in the embodiments of the present application to realize fast playback of the content in the video that the user is interested in and slow playback of the content that the user is not interested in.
  • the GPU can run the code of the video playback algorithm.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data created during the use of the mobile phone 100 (for example, images and videos taken by a camera application) and the like.
  • the internal memory 121 may also be used to store the code of the video playback algorithm provided in the embodiment of the present application.
  • the processor 100 accesses and runs the code in the internal memory 121 to implement related functions.
• It should be understood that the code of the video playback algorithm provided in the embodiment of the application can also be stored in the memory of the processor 110 itself (for example, when the processor 110 is a CPU, the code of the video playback algorithm provided in the embodiment of the application can be stored in the cache of the CPU).
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the functions of the sensor module 180 are described below.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the mobile phone 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the mobile phone 100 can use the distance sensor 180F to measure the distance to achieve fast focusing. In other embodiments, the mobile phone 100 can also use the distance sensor 180F to detect whether a person or an object is approaching.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the mobile phone 100 emits infrared light to the outside through the light emitting diode.
  • the mobile phone 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone 100. When insufficient reflected light is detected, the mobile phone 100 can determine that there is no object near the mobile phone 100.
  • the mobile phone 100 may use the proximity light sensor 180G to detect that the user holds the mobile phone 100 close to the ear to talk, so as to automatically turn off the screen to save power.
• The proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
• The touch sensor 180K is also called a "touch panel".
• The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form what is also called a "touch screen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor 180K may transmit the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the mobile phone 100, which is different from the position of the display screen 194.
  • the touch sensor 180K may be used to detect a user's touch operation.
  • the mobile phone 100 displays a main interface, and the main interface includes icons of multiple applications (for example, WeChat, camera, phone, memo, etc.).
  • the touch sensor 180K detects the user's touch operation in the main interface, it sends the touch operation to the processor 110.
  • the processor 110 will determine the touch position of the touch operation based on the touch operation, and determine the icon corresponding to the touch position. Assuming that the processor 110 determines that the icon corresponding to the touch operation is the icon of the camera application, the mobile phone 100 starts the camera application.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
• The display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the mobile phone 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the mobile phone 100 can implement audio functions through the audio module 191 (speaker, receiver, microphone, earphone interface), the processor 110, and the like. For example, music playback, recording, etc.
  • the mobile phone 100 can receive the key 190 input, and generate key signal input related to the user settings and function control of the mobile phone 100.
  • the SIM card interface 195 in the mobile phone 100 is used to connect to the SIM card.
  • the SIM card can be connected to and separated from the mobile phone 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195.
• The mobile phone 100 may also include a camera, such as a front camera and a rear camera; it may also include a motor, which is used to generate vibration notifications (such as an incoming call vibration notification); it may also include an indicator, such as an indicator light, which can be used to indicate the charging status and power changes, and can also be used to indicate messages, missed calls, notifications, etc.
• In the manners described above, the mobile phone presents the entire video screen at a fast or slow speed. In the embodiments of the present application, the playback modes of different areas (or different objects) in the video playback screen may be different.
• For example, area A (or object A) in the video playback screen plays fast, while area B (or object B) plays slowly or at normal speed.
  • FIG. 4 shows a GUI of the mobile phone 100, and the GUI is the desktop 401 of the mobile phone.
• When the mobile phone 100 detects that the user has clicked the icon 402 of the photo album (or gallery, photos) application on the desktop 401, it can start the photo album application and display another GUI as shown in (b) in Figure 4, which includes multiple video files (2 video files are taken as an example in the figure).
  • the mobile phone 100 detects that the user clicks on the video 404, it displays another GUI as shown in (c) in FIG. 4, and the GUI includes a preview interface 405 of the first frame of the video 404.
  • a playback control 406 is displayed in the preview interface 405.
• When the playback control 406 is triggered, the mobile phone 100 starts to play the video 404 (for example, starting from the first frame of image).
• In this embodiment of the application, when the mobile phone 100 detects an operation used to instruct "smart play" of the video 404, the mobile phone 100 can play different objects in the video 404 at different playback speeds.
  • the mobile phone 100 may provide information related to smart playback.
  • the preview interface 405 includes a play option control 501.
• When the play option control 501 is triggered, the mobile phone 100 displays a play mode selection box 502.
• The play mode selection box 502 includes a normal play option 503 and a smart play option 504.
• When the mobile phone 100 detects that the normal playback option 503 is triggered, the mobile phone 100 plays the video according to the prior-art method (for example, each frame of image in the video is played continuously, and the playback speed of every object is the same).
• When the mobile phone 100 detects that the smart play option 504 is triggered, the mobile phone 100 displays an interface as shown in (b) of FIG. 5.
  • the mobile phone 100 displays the detail control 505, the completion control 506, and the prompt message "Please select the target object" (described later).
• When the detail control 505 is triggered, the mobile phone 100 displays an interface as shown in (c) of FIG. 5, that is, the mobile phone 100 provides information related to the smart play mode to help the user understand the smart play function.
  • (C) in FIG. 5 is only an example of a brief introduction of the smart play mode, and is not a limitation.
  • the target object may be specified by the user.
  • the user can manually select the target object in the interface shown in (b) of FIG. 5.
  • the mobile phone 100 displays the text message "Please click on the target object on the screen".
  • the target object is determined based on the click operation. Assuming that the user clicks on the swan in the first frame of image, the mobile phone 100 determines that the swan is the target object.
  • the mobile phone 100 may display identification information of the selected target object.
  • the identification information 601 includes the identification information of the target object.
• The identification information of other selected targets is also added to the identification information 601. Comparing (a) and (b) in Figure 6, it can be seen that when the user has clicked only the swan, only the swan is included in the identification information 601, and when the user continues to click the fish, the fish is added to the identification information 601.
• If the user has selected multiple target objects and wants to delete a target object, the user can tap the target object again (single tap or double tap), and the identification information of that target object is deleted from the identification information 601.
• When the mobile phone 100 detects the operation of the OK button 602, the mobile phone 100 determines that the objects included in the identification information 601 are the target objects.
  • Example 2 As shown in (a) in FIG. 7, the mobile phone 100 displays a selection box and a prompt message "Please move the selection box to select the target object". The mobile phone 100 determines the object contained in the selection box as the target object. The position of the selection box can be moved and the size can be changed.
• If an object is only partially contained in the selection box, the mobile phone 100 can determine the ratio of the area of the object inside the selection box to the entire area of the object; if the ratio is greater than a preset ratio, the selection box is determined to include this object.
• The mobile phone 100 may display the identification information 701 of the selected target objects. For example, when the mobile phone 100 detects that the selection box is enlarged, the objects included in the selection box increase, and the objects included in the identification information 701 increase accordingly. Similarly, the user can also delete a target object (for example, by reducing the size of the selection box). When the mobile phone 100 detects the operation of the OK button 702, the mobile phone 100 determines that the objects included in the identification information 701 are the target objects.
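• As a minimal sketch of the area-ratio check used for the selection box above (the rectangle representation and the 0.5 preset ratio are assumptions for illustration, not values from the patent):

```python
# An object counts as "inside" the selection box when the fraction of its area
# covered by the box exceeds a preset ratio. Rectangles are (left, top, right, bottom).

def coverage_ratio(obj, box):
    ox1, oy1, ox2, oy2 = obj
    bx1, by1, bx2, by2 = box
    iw = max(0, min(ox2, bx2) - max(ox1, bx1))   # intersection width
    ih = max(0, min(oy2, by2) - max(oy1, by1))   # intersection height
    obj_area = (ox2 - ox1) * (oy2 - oy1)
    return (iw * ih) / obj_area if obj_area else 0.0

def in_selection(obj, box, preset_ratio=0.5):
    return coverage_ratio(obj, box) > preset_ratio

print(in_selection((10, 10, 50, 50), (0, 0, 40, 40)))   # ~0.56 of the object covered -> True
```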
  • Example 3 As shown in (a) in FIG. 8, the mobile phone 100 recognizes the objects contained in the first frame of image and displays the number of each recognized object. When the mobile phone 100 detects that the user selects a certain number, the mobile phone 100 determines that the object corresponding to the number is the target object.
• In this example, the mobile phone 100 can display the identification information 801 of the selected target object. For example, when the mobile phone 100 detects that the user selects the object with the number 1, the identification information 801 includes the number 1, and after the user continues to select the object with the number 2, the number 2 is added to the identification information 801. Similarly, the user can also delete a target object (for example, click the object with the number 1 again, and the number 1 is deleted from the identification information 801). When the mobile phone 100 detects the operation of the confirm button 802, the mobile phone 100 determines that the objects corresponding to the numbers included in the identification information 801 are the target objects.
  • Example 4 please refer to Figure 9(a).
• After the mobile phone 100 detects the user's circle-drawing operation on the first frame of image, it determines that the objects in the area enclosed by the trajectory of the circle-drawing operation are the target objects.
• The size and display position of the area enclosed by the trajectory of the user's circle-drawing operation can be changed. For example, comparing (a) and (b) in FIG. 9, when the area enclosed by the trajectory of the circle-drawing operation is enlarged, the objects in the area increase, and the fish is added to the identification information 901.
• Similarly, the user can also delete a target object (for example, by reducing the size of the enclosed area), and the identification information of a target object moved out of the enclosed area is deleted from the identification information 901.
  • the user can also perform multiple circle operations on the first frame of image, and the object in the area surrounded by each circle operation is the target object.
  • the user can also delete the target object. For example, long press a drawn circle to delete the circle, see (d) in Figure 9.
  • the target object may also be automatically selected by the mobile phone 100.
• The mobile phone 100 recognizes the target object in the first frame of image according to a preset strategy. For example, the mobile phone 100 presets the object types "person", "animal", "building", etc.; if the mobile phone 100 recognizes that one or more objects in the first frame of image belong to a preset object type, the mobile phone 100 determines that the one or more objects are the target objects.
  • the preset object type in the mobile phone 100 may be set before the mobile phone 100 leaves the factory, or may be user-defined.
• For another example, the mobile phone 100 may be set with a priority order of multiple object types: persons have a higher priority than animals, and animals have a higher priority than buildings.
• If the first frame of image includes a person, an animal, and a building, the person (the object type with the highest priority) is the target object, or the person and the animal (the two object types with the highest priority) are the target objects.
• If the first frame of image does not include a person but includes animals and other objects, the animal, which has the highest priority among the objects included in the image, is the target object.
  • the mobile phone 100 may also have other ways to select the target object.
• For example, the object in the middle position of the first frame of image is determined as the target object; or the person in the first frame of image is the target object by default; or the object with the largest area in the first frame of image is the target object, and so on.
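• The priority-based automatic selection described above can be illustrated with a short sketch (the detector output format and the rule of keeping only the highest-priority type present are assumptions, not patent text):

```python
# Automatic target selection by preset object-type priority:
# "person" > "animal" > "building".

PRIORITY = ["person", "animal", "building"]   # highest priority first

def auto_select_targets(detected_objects):
    """detected_objects: list of (object_id, object_type) produced by a recognizer."""
    for obj_type in PRIORITY:
        targets = [obj_id for obj_id, t in detected_objects if t == obj_type]
        if targets:
            return targets        # e.g. all persons if any person is present
    return []                     # nothing matches a preset type

print(auto_select_targets([("swan", "animal"), ("tower", "building")]))  # ['swan']
```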
  • the playback speed of the target object can also be determined.
  • the mobile phone 100 displays the selected target object and speed playback options, including: 2 times speed option, 1.5 times speed option, and 0.5 times speed option. Assuming that the user triggers the 2x speed option, when the mobile phone 100 detects that the operation of the completion control 1001 is triggered, the mobile phone 100 starts to play the video 404, where the target object is played at 2x speed, and other objects are played at the normal speed.
  • the playback speed of each target object can be determined.
  • the mobile phone 100 displays respective playback speed options of two target objects. Assuming that the user selects that the playback speed of the swan is 2x speed and the playback speed of the fish is 0.5x speed, when the mobile phone 100 detects that the operation of the completion control 1101 is triggered, it starts to play the video 404, where the swan plays fast and the fish plays slowly.
  • the mobile phone 100 may also determine the playback speed of the background excluding the target object. For example, as shown in FIG. 12A, the mobile phone 100 also displays the playback speed options of the background (other objects except swans and fish). Suppose that the user selects the background playback speed at 1.5 times speed, and the mobile phone 100 plays the background at 1.5 times speed.
  • the mobile phone 100 may also have other ways to determine the playback speed of the target object, which is not limited in this embodiment of the application.
• In addition, when the mobile phone 100 plays a video, it is not necessary for the user to select a target object or a playback speed. For example, after the mobile phone 100 enters the smart play mode, the mobile phone 100 automatically recognizes the target object in the video file and then plays the target object at a default speed (for example, 2 times speed). For another example, when the mobile phone 100 enters the smart play mode and plays a video, the video play interface includes a window; the content in the window is played at a default speed (for example, 2 times speed), and the content outside the window is played at a normal speed.
  • the first frame image of the video 404 is taken as an example, that is, the selection of the target object and the selection of the playback speed are all for the first frame image of the video 404.
• If the mobile phone 100 is currently playing a video and detects an operation on a control for pausing playback, the mobile phone 100 pauses at the frame of image currently being played. The user can select the target object and the playback speed of the target object on the current frame of image. The mobile phone 100 then plays the subsequent frames of images starting from the current frame of image, and during the playback of the subsequent frames of images, different objects are presented at different playback speeds.
  • the target object can also be switched.
  • the target object currently selected by the mobile phone 100 is a swan (such as a swan designated by the user, or a swan automatically determined by the mobile phone 100), so the swan is played quickly when the mobile phone 100 is playing a video.
  • a control 1203 for instructing to switch the target object is displayed in the playback interface.
• When the control 1203 is triggered, the mobile phone 100 switches the target object from the swan to the fish, as shown in FIG. 12B(b).
• During subsequent playback, the mobile phone 100 quickly plays the fish; or, the mobile phone 100 plays the video file from the beginning, and the fish is played quickly during the playback from the beginning.
• In some embodiments, when the mobile phone 100 detects that the user triggers the operation of the control 1203, it can automatically determine another target object to replace the current target object. For example, the mobile phone 100 selects an object different from the current target object from multiple objects in the video file.
• Alternatively, the mobile phone 100 can pause the video file when it detects that the user triggers the operation of the control 1203. The user can then select a new target object (by clicking, circling, etc.) in the paused screen; if there is no target object that the user wants to select in the paused screen, the user can switch the paused screen to another screen and select a new target object in that screen.
• The interface in FIG. 12B may also include a control for adding target objects (not shown in the figure); when this control is triggered, the mobile phone 100 increases the number of target objects. The interface may also include a control for reducing target objects (not shown in the figure); when this control is triggered, the mobile phone 100 reduces the number of target objects.
  • the mobile phone 100 does not need to determine the target object, but determines the target area.
  • the content in the target area on the playback interface is played quickly.
• The target area may be specified by the user, such as an area selected on a frame of the video file; or the target area may be an area automatically determined by the mobile phone 100, such as an area containing more target objects or the area where a target object with a larger area is located; or the target area may be preset.
  • the mobile phone 100 may also switch the target area during the process of playing the video file. Please refer to FIG. 12C(a).
• During playback, the content in the rectangular frame (the first area) is played quickly.
• When the mobile phone 100 detects the operation of the control 1205 for indicating switching of the target area, the mobile phone 100 continues to play the video, and the content in the third area is played quickly; or, when the mobile phone 100 detects the operation of the control 1205, the mobile phone 100 plays the video file from the beginning, and in the process of playing the video file from the beginning, the content in the third area is played quickly.
• In some embodiments, when the mobile phone 100 detects that the user triggers the operation of the control 1205, it can automatically determine a new target area; alternatively, the mobile phone 100 can pause the video file when it detects that the user triggers the control 1205. The user can then specify a new target area in the paused screen (for example, circle a new target area); or, when the mobile phone 100 pauses playing the video file, the user can switch the paused screen to another screen and specify a new target area in that screen.
• The interface in FIG. 12C may also include a control for enlarging the target area (not shown in the figure); when this control is triggered, the mobile phone 100 increases the area occupied by the target area. The interface may also include a control for reducing the target area (not shown in the figure); when this control is triggered, the mobile phone 100 reduces the area occupied by the target area.
  • different areas (or different objects) in the video playback screen may be played differently.
• For example, area A (or object A) in the video playback screen plays fast, while area B (or object B) plays slowly or at normal speed. Therefore, during the playback of this video, area A (or object A) is presented to the user at a faster speed, and area B (or object B) is presented to the user at a slower (or normal) speed.
• In short, different areas or objects in the video playback screen can be presented to the user at different playback speeds. For example, the wonderful part of the video or the part that the user is interested in can be played slowly, and other content can be played quickly, which helps to enhance the fun of video playback, improve user experience, and attract consumers.
• The following takes the case where the target object in the video screen is played at 2 times speed and other objects are played normally as an example to introduce how the mobile phone 100 quickly plays the target object.
  • FIG. 13 is a schematic diagram of the flow of a video processing method provided by an embodiment of this application.
• Assume that the video file includes 1200 frames of original images. The mobile phone 100 determines the target object and determines that the target object is to be played at 2x speed. One frame of image is extracted from every 2 frames of the 1200 original images to obtain 600 original images, and then the target object is extracted from these 600 original images (for example, the target object is segmented) to obtain 600 frames of target object images (for example, each segmented target object is used as a separate image, that is, a target object image).
• In addition, the mobile phone 100 can cover the target object in the 1200 frames of original images with the background, so that the background-covered original images do not include the target object.
• The mobile phone 100 merges the extracted 600 frames of target object images with the first 600 frames of the 1200 background-covered frames to obtain 600 frames of new images, and then combines the 600 frames of new images and the remaining 600 frames of background-covered original images into a new video file. When the mobile phone 100 plays the new video file, it shows the effect that the target object is played quickly while other objects are played normally.
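• The frame bookkeeping in FIG. 13 can be illustrated with a sketch (the helper names segment_target, cover_with_background, and fuse are placeholders for the segmentation, background-coverage, and fusion steps; only the index mapping is exercised here):

```python
# Frame bookkeeping for 2x playback of the target object:
# 1200 original frames, keep the target object from 1 of every 2 frames.

N, STRIDE = 1200, 2
extracted = list(range(0, N, STRIDE))           # 600 frame indices: 0, 2, 4, ...

def segment_target(frame_idx):                   # placeholder: a "target object image" tag
    return ("target", frame_idx)

def cover_with_background(frame_idx):            # placeholder: background-covered original frame
    return ("covered", frame_idx)

def fuse(covered, target):                       # placeholder: overlay target on covered frame
    return ("fused", covered[1], target[1])

target_images = [segment_target(i) for i in extracted]     # 600 target object images
covered = [cover_with_background(i) for i in range(N)]     # 1200 covered originals

# The first 600 covered frames carry the extracted target images; the rest play as-is.
new_video = [fuse(covered[k], target_images[k]) for k in range(len(target_images))]
new_video += covered[len(target_images):]

print(len(new_video))   # 1200 frames in the new video file
print(new_video[1])     # ('fused', 1, 2): frame 1 shows the target taken from original frame 2
```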
  • the mobile phone 100 may also store the target object image and the obtained new video file correspondingly.
  • the mobile phone 100 stores the target object image in a gallery.
• A mark is displayed on the target object image. When the mark is triggered, the mobile phone 100 opens the video file stored in correspondence with the target object image, and the target object is played quickly in that video file.
  • the mobile phone 100 may also store the extracted 600 frames of target object images and the 1200 original images covered by the background correspondingly.
• When the mobile phone 100 detects that the user indicates that the target object in the video file (the same target object as in the stored 600 frames of target object images) is to be played at 2x speed, the stored 600 frames of target object images are correspondingly merged with the first 600 of the 1200 background-covered original images to obtain a new video, and the video is then played.
  • the mobile phone 100 does not need to perform the process of extracting frames from the original image and extracting the target object image, but only needs to perform the fusion process, which helps to improve efficiency.
• Of course, when the mobile phone 100 again detects that the user instructs the target object in the video file to be played at 2x speed, it can still follow the process shown in FIG. 13, that is, perform frame extraction and extract the target object images again, and then perform image fusion and synthesize a new video.
  • the mobile phone 100 first extracts 600 frames of original images, and then extracts the target object image from the 600 frames of original images.
• Alternatively, the mobile phone 100 can first extract the target object image from each of the 1200 frames of images to obtain 1200 frames of target object images, and then extract frames from the obtained target object images (for example, extract 600 frames of target object images from the 1200 frames of target object images).
  • the mobile phone 100 may also have other ways to extract the target object, as long as the number of frames of the target object is less than the number of frames of the original image, which is not limited in the embodiment of the present application.
• The following description takes the case where the mobile phone 100 first extracts part of the original images and then extracts the target object images from the extracted original images as an example.
  • FIG. 14 is a schematic flowchart of a video processing method provided by an embodiment of this application. As shown in FIG. 14, the method includes:
  • the mobile phone 100 obtains a video file to be processed, where the video file includes N frames of original images, and N is an integer greater than or equal to 2.
• For example, the mobile phone 100 can capture video files (for example, obtain a video through the camera application), download video files from the network side (for example, download video files from iQIYI, Tencent, or other clients), or receive video files sent by other devices (for example, the mobile phone 100 receives videos sent by other devices through the WeChat application), and so on, which is not limited in this embodiment of the application.
  • the video processing method of the embodiments of the present application may not only be applied to the playback of video files, but also may be applied to the playback of animation files, or a group of image files synthesized by an application program.
  • the mobile phone 100 determines a target frame original image in the video file, where the target frame original image is one of the N frames of original images.
  • the target frame image may be one frame of multiple frames of images contained in the video file to be processed.
  • the following describes several ways for the mobile phone 100 to determine the target frame image in the video file to be played.
  • the mobile phone 100 may use the cover of the video file as the original image of the target frame.
  • the mobile phone 100 determines that the cover of the video 404 is the original image of the target frame.
• The mobile phone 100 is currently playing the to-be-processed video file. If the mobile phone 100 detects that the user triggers an operation to pause the playback, it pauses the playback of the video file, and the frame of image displayed during the pause may be used as the target frame original image.
• Optionally, a first control and a second control can be displayed in the frame of image displayed during the pause. When the first control is triggered, the mobile phone 100 switches to the previous frame of image; when the second control is triggered, the mobile phone 100 switches to the next frame of image. That is, the user can switch images through these two controls to select the target frame original image.
  • the mobile phone 100 can automatically display a frame of image containing more target objects in the video file, and this image is the original target frame image.
  • the method for determining the target frame image is not limited to the several listed above, and will not be listed here.
• The mobile phone 100 determines the target object in the target frame original image and determines the first playback speed of the target object, where the first playback speed is greater than the normal playback speed of the to-be-played video, and the target object is at least one object in the current frame of image.
• The process of the mobile phone 100 determining the target object has been described above (the user specifies the target object, or the mobile phone 100 automatically recognizes the target object, etc.), and is not repeated here.
  • S1404 The mobile phone 100 determines the number of image extraction frames based on the first playback speed.
• Because the first playback speed is greater than the normal playback speed, the mobile phone 100 needs to extract part of the original images.
  • the mobile phone 100 may store a corresponding relationship between the playback speed and the number of frames to be extracted, and based on the corresponding relationship, determine how many frames of images are to be extracted. Please refer to Table 1 for an example of the correspondence relationship between the playback speed and the number of extracted frames of the image provided in this embodiment of the application.
• For example, if the first playback speed is 2 times the normal speed, the mobile phone 100 can determine, based on Table 1, to extract one frame of image from every 2 frames.
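• A possible Table 1-style lookup is sketched below; only the 2x entry (extract one frame from every 2 frames) is stated in the text, and the other entries are assumed to follow the same pattern:

```python
# Map the first playback speed to the frame-extraction interval.

SPEED_TO_INTERVAL = {
    2.0: 2,   # 2x speed -> keep 1 frame out of every 2 (stated in the text)
    3.0: 3,   # assumed entry
    4.0: 4,   # matches the 4x example in FIG. 15 (frames 1, 5, 9 of 11)
}

def extraction_interval(first_playback_speed: float) -> int:
    return SPEED_TO_INTERVAL[first_playback_speed]

print(extraction_interval(2.0))   # 2
```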
  • S1405 The mobile phone 100 extracts M frames of original images from N original images of the video file to be processed according to the determined number of image extraction frames, where M is an integer greater than or equal to 1 and less than N.
  • S1405 can be implemented in multiple ways, several of which are listed below.
  • Manner 1 The mobile phone 100 can extract a part of the image from all the original images in the video file to be processed according to the determined number of image extraction frames.
• For example, if the mobile phone 100 determines to extract one frame from every 2 frames, the mobile phone 100 starts from the first frame of original image and extracts one frame from every 2 frames.
• Method 2: if the mobile phone 100 has paused playing the video file and the frame of image displayed during the pause is the target frame original image, the mobile phone 100 can extract images starting from the image displayed during the pause, that is, there is no need to extract images from the original images before the target frame image. For example, if the 200th frame is displayed during the pause and the mobile phone 100 determines to extract one frame from every 2 frames, it can extract one frame from every 2 frames starting from the 200th frame.
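• The extraction in S1405 can be sketched as follows (1-based frame numbers as in the examples; the start-frame parameter models Method 2):

```python
# Extract original-frame numbers at the determined interval, optionally
# starting from the frame shown when playback was paused.

def extract_frame_numbers(total_frames: int, interval: int, start_frame: int = 1):
    return list(range(start_frame, total_frames + 1, interval))

# Manner 1: start from the first original frame, keep 1 frame out of every 2.
print(extract_frame_numbers(10, 2))                       # [1, 3, 5, 7, 9]
# Method 2: paused at frame 200, so extraction starts there.
print(extract_frame_numbers(210, 2, start_frame=200))     # [200, 202, 204, 206, 208, 210]
```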
  • the mobile phone 100 extracts K frames of images containing the target object from the extracted M frames of original images, where K is an integer greater than or equal to 1 and less than N.
• Among the original images extracted by the mobile phone 100, some original images may contain the target object and some may not.
• For the original images that do not contain the target object, the mobile phone 100 may not perform the target object extraction step.
• For the original images that contain the target object, the target object can be segmented from the image, and the mobile phone 100 thereby obtains the target object image (the segmented target object is called the target object image).
• The target object may be extracted along the edge contour of the target object, or by the area where the target object is located (this area may be larger than the area enclosed by the edge contour of the target object).
  • S1407 The mobile phone 100 removes the target object from the N frames of original images in the video file to obtain processed N frames of original images, and there is no target object in the processed N frames of original images.
  • the mobile phone 100 extracts a part of the original image from the original image, and extracts the target object from the extracted original image to obtain the target object image.
  • the mobile phone 100 has obtained the target object image, so the mobile phone 100 can remove the target object in the original image.
• The way to remove the target object may be background overlay. Taking one frame of original image as an example, the mobile phone 100 can use the background in this frame of original image to cover the target object in this frame, for example, use a copy of an area other than the area where the target object is located in the original image to cover the target object, that is, the area where the target object is located is filled with the content of other areas.
• Of course, there may also be other ways of removing the target object, which are not limited in the embodiment of the present application.
• It should be noted that the mobile phone 100 may perform background coverage only on the original images containing the target object, and may not perform background coverage processing on the original images that do not include the target object.
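• One possible way to realize the background coverage of S1407 is sketched below; the patent describes copying neighbouring background content over the target object, and OpenCV's inpaint() is used here on synthetic data only as a stand-in for that step:

```python
# Remove the target object by re-synthesizing the masked region from the
# surrounding background.
import numpy as np
import cv2

frame = np.full((120, 160, 3), 200, dtype=np.uint8)   # plain background
frame[40:80, 60:100] = (0, 0, 255)                     # the "target object"

mask = np.zeros((120, 160), dtype=np.uint8)
mask[40:80, 60:100] = 255                              # area where the target object is located

covered = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
print(covered[60, 80])   # roughly the background colour, target removed
```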
• S1408: The mobile phone 100 correspondingly merges the extracted K frames of target object images with K original images among the processed N original images to obtain K frames of new images, where the K original images among the processed N original images are consecutive.
• The target object image and the processed image can be fused in many ways, such as overlaying the target object image on an area of the processed original image to obtain a new image, where the area can be any area on the original image.
• The specific fusion algorithm may be a wavelet fusion algorithm, a Brovey transform method, etc., which is not limited in the embodiment of the present application.
  • the mobile phone 100 may correspondingly merge the extracted K frames of target object images with the first K frames of original images in the processed N frames of original images.
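• The fusion of S1408 can be sketched as a mask-based overlay (the patent also mentions wavelet fusion and the Brovey transform; the simple paste below is only an illustration):

```python
# Fuse a target object image with a processed (background-covered) original
# frame by pasting the object pixels through its mask.
import numpy as np

def fuse(processed_frame: np.ndarray, target_image: np.ndarray, target_mask: np.ndarray) -> np.ndarray:
    """target_mask is True where the target object's pixels are valid."""
    new_image = processed_frame.copy()
    new_image[target_mask] = target_image[target_mask]
    return new_image

processed = np.zeros((4, 4, 3), dtype=np.uint8)       # background-covered frame
target    = np.full((4, 4, 3), 255, dtype=np.uint8)   # target object image
mask      = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                                  # where the target object sits

print(fuse(processed, target, mask)[1, 1])             # [255 255 255]
```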
  • S1409 The mobile phone 100 synthesizes K frames of new images and the remaining processed N-K frames of original images into a new video file.
• When the mobile phone 100 plays the new video file, the target object is played quickly.
  • FIG. 15 is an example of a video processing method provided in an embodiment of this application.
  • the video file has 11 original frames in total
  • the current frame is the first original frame.
  • the mobile phone 100 determines the target object, and the playback speed of the target object is 4 times the speed. Based on Table 1, the mobile phone 100 extracts one frame of image every four frames from the 11 frames of images, that is, extracts the first, fifth, and ninth original images.
• The mobile phone 100 segments the target object from each of the three extracted original images to obtain target objects 1-3 (it should be understood that target objects 1-3 here distinguish the same object segmented from original images of different frames, and do not refer to 3 different objects).
• Target object 1 is segmented from the first frame of image (that is, target object 1 is one or more objects in the first frame of original image), target object 2 is segmented from the fifth frame of image (that is, target object 2 is one or more objects in the fifth frame of original image), and target object 3 is segmented from the ninth frame of image (that is, target object 3 is one or more objects in the ninth frame of original image).
• In addition, the mobile phone 100 can remove the target objects from the original images of frames 1-11 (the target object is removed from those of the 11 frames of original images that include it; the original images that do not include the target object need not undergo target object removal). Exemplarily, please continue to refer to FIG. 15.
• Because target object 1 is segmented from the first frame of original image, a blank area is left on the first frame of original image. The mobile phone 100 can use the background to fill the blank area (for example, the mobile phone 100 fills the content of the non-blank area in the first frame of original image into the blank area), and then the mobile phone 100 merges the background-filled image with target object 1 to obtain a new frame of image and plays the new image.
• Alternatively, since target object 1 is segmented from the first frame of original image, the mobile phone 100 may not fill the blank area, and target object 1 may be filled back into the blank area to obtain a new frame of image.
• Next, the mobile phone 100 can cover, with the background, the target object included in the second frame of original image itself (for example, use a copy of an area other than the area where the target object is located in the second frame of original image to cover the area where the target object is located, that is, the area where the target object is located is filled with the content of other areas), and then merge the background-covered second frame of original image with target object 2 to obtain a new image. The new image includes only target object 2 extracted from the fifth frame of original image and does not include the target object that the second frame of original image itself includes.
• Similarly, the mobile phone 100 can cover, with the background, the target object included in the third frame of original image itself, and then fuse the background-covered third frame of original image with target object 3 to obtain a new frame of image that includes only target object 3 extracted from the ninth frame of original image and does not include the target object that the third frame of original image itself includes.
• In other words, when the mobile phone 100 is playing the second frame of original image, it synchronously displays target object 2 from the fifth frame of original image, and when playing the third frame of original image, it synchronously displays target object 3 from the ninth frame of original image. Therefore, the mobile phone 100 presents the effect that the target object is played at a faster speed while the background is played normally.
  • the process shown in Figure 15 is an example of how the target object in the video file can be played quickly.
• Fast playback of the content in the target area in the playback interface of the video file can also be realized in a similar manner, where the target area can be a preset area or a user-specified area.
• Specifically, the mobile phone 100 can also process according to the process shown in FIG. 15 by replacing the target object shown in FIG. 15 with the image in the target area. Assuming that the target area is a rectangle, the image in the target area is the rectangular image enclosed by the target area, and the rectangular image can then be processed as the target object.
• In the scenario shown in FIG. 12B, the target object is switched from the swan to the fish. In this case, the process shown in FIG. 15 can be performed twice.
• The process shown in FIG. 15 is executed once to obtain a video file, and the swan is played quickly while this video file is played.
• When a certain frame is being played and the mobile phone 100 detects that the user triggers the operation of the control 1203, the mobile phone 100 re-executes the process shown in FIG. 15 on the frames following that frame to obtain another video file, and the fish is played quickly while that video file is played; or, when the mobile phone 100 detects that the user triggers the operation of the control 1203, the mobile phone 100 re-executes the process shown in FIG. 15 on all the images of the video file to obtain another video file, and the fish is played quickly when the mobile phone 100 plays that video file.
• Similarly, in the scenario shown in FIG. 12C, the target area is switched from the first area to the third area. In this case, the process shown in FIG. 15 can also be performed twice.
• The process shown in FIG. 15 is executed once to obtain a video file, and the content in the first area is played quickly while this video file is played.
• When a certain frame is being played and the mobile phone 100 detects that the user triggers the operation of the control 1205, the mobile phone 100 re-executes the process shown in FIG. 15 on the frames following that frame to obtain another video file, and the content in the third area is played quickly while that video file is played; or, when the mobile phone 100 detects that the user triggers the operation of the control 1205, the mobile phone 100 re-executes the process shown in FIG. 15 on all the images of the video file to obtain another video file, and the content in the third area is played quickly when the mobile phone 100 plays that video file.
• After the mobile phone 100 determines the target object, assuming that the target object is played quickly, the mobile phone 100 has not finished playing the other objects when the target object has finished playing. Please continue to refer to FIG. 15: after the mobile phone 100 plays from the first frame of original image to the third frame of original image, target objects 1-3 have all been played, but the subsequent original images, namely the fourth frame of image to the eleventh frame of image, have not been played yet.
  • the mobile phone 100 after the mobile phone 100 finishes playing the target object, it can stay at the last frame of the target object image and wait for the subsequent original image to be played.
• That is, after the mobile phone 100 plays the third frame of original image, it still merges target object 3 with the fourth frame of original image, and the subsequent fifth frame of original image is also merged with target object 3. In other words, after target objects 1-3 have been played, target object 3 is merged with each subsequent original image until every subsequent original image has been played.
• The subsequent original images still use the background to cover the target object contained in themselves, which has been introduced before and is not repeated here.
• Alternatively, the mobile phone 100 can play the target object in a loop until the subsequent original images have been played. As shown in FIG. 17, after the mobile phone 100 plays the third frame of original image, it fuses target object 1 with the fourth frame of original image, fuses target object 2 with the fifth frame of original image, and fuses target object 3 with the sixth frame of original image, and so on. In other words, the mobile phone 100 presents the effect of looping target objects 1-3 until the subsequent original images have been played.
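• The looping behaviour can be illustrated with a short sketch (frame numbers are 1-based as in FIG. 15 and FIG. 17; the modulo mapping is an illustration of "looping", not patent text):

```python
# After target objects 1-3 have been shown once, loop them over the remaining
# original frames until the last original frame is played.

target_frames = [1, 5, 9]          # originals the three target object images came from
total_frames = 11

for played_frame in range(1, total_frames + 1):
    target_idx = (played_frame - 1) % len(target_frames)   # cycles 1, 2, 3, 1, 2, 3, ...
    print(f"frame {played_frame}: background of frame {played_frame}, "
          f"target object {target_idx + 1} (from frame {target_frames[target_idx]})")
```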
• Alternatively, the mobile phone 100 may stop playing the subsequent original images after playing the target object. Please continue to refer to FIG. 15: after the mobile phone 100 finishes playing the new image obtained by fusing the third frame of original image with target object 3, the playback of the video file is stopped.
• In the description above, after the mobile phone 100 extracts the target object images, the original images are still played. In practical applications, the mobile phone 100 may not play the original images. Please continue to refer to FIG. 15: the mobile phone 100 may also merge target objects 1-3 with the first frame of original image to obtain three merged images, and then play these three images. That is, the mobile phone 100 presents the effect that the target object is playing while the background stays at the first frame of original image, that is, the background is static and only the target object is in the playing state. Of course, the mobile phone 100 can also fill the background with other content. For example, referring to FIG. 18, the mobile phone 100 displays background options.
• The background options include the original background (that is, the background of the video itself), fill color, and designated image.
• When the fill color option is selected, the mobile phone 100 displays color options for the user to select. Assuming that the user selects black, when the mobile phone 100 plays the video, all objects other than the target object, that is, the background, are filled with black, presenting the effect that only the target object is in the screen and the background is black.
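• A minimal sketch of the fill-color option follows (the mask-based replacement is an assumed implementation, not the patent's own code):

```python
# Every pixel outside the target-object mask is replaced by the chosen colour
# (black in the example), so only the target object remains visible.
import numpy as np

def fill_background(frame: np.ndarray, target_mask: np.ndarray, color=(0, 0, 0)) -> np.ndarray:
    out = frame.copy()
    out[~target_mask] = color      # everything except the target object
    return out

frame = np.full((4, 4, 3), 128, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True              # target object occupies the centre

filled = fill_background(frame, mask)
print(filled[0, 0], filled[1, 1])  # [0 0 0] for background, [128 128 128] kept for the target
```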
  • the mobile phone 100 can also replace the background with another image (for example, an image that does not belong to the video).
  • the mobile phone 100 displays background options.
  • the background options include the original background (that is, the background of the video itself), fill color, and designated image.
• When the designated image option is selected, the mobile phone 100 displays the storage paths of images, and the image can be specified according to its storage path. In this case, when the mobile phone 100 plays the video, it presents the effect that the target object is played on the image specified by the user (the target object is located on the upper layer and the specified image is located on the lower layer, that is, the specified image serves as the background of the target object).
  • the image specified in FIG. 19 may be any image stored in the mobile phone 100.
• For example, the images of target objects 1-3 extracted in FIG. 15 can be stored, and the mobile phone 100 can use the stored target objects 1-3 when specifying an image. That is to say, when the mobile phone 100 uses the video processing method provided in the embodiments of this application to process a certain video, it can store the extracted target object images; when the mobile phone 100 processes another video file with the video processing method of the embodiments of this application and an image needs to be specified, the target object extracted from the first video can be specified. In this way, the content of the two video files can be exchanged.
• The above introduces the smart play mode of the mobile phone 100; another embodiment is described below.
  • the mobile phone 100 can realize smart video recording.
  • FIG. 20 shows a GUI of the mobile phone 100, and the GUI is the desktop 2001 of the mobile phone.
  • the mobile phone 100 detects that the user has clicked the icon 2002 of the camera application on the desktop 2001, it can start the camera application and display another GUI as shown in (b) of FIG. 20, which includes an image capture preview interface 2003 .
  • The preview interface 2003 includes a preview image, a camera option, a video option, and a smart recording option 2004.
  • When the smart recording option 2004 is selected, the mobile phone 100 enters the smart recording mode.
  • When the record button 2005 is triggered, the mobile phone 100 starts to record video.
  • When the mobile phone 100 enters the smart recording mode, it can display an interface as shown in (a) in FIG. 21.
  • the interface includes a detail control 2101.
  • When the detail control 2101 is triggered, the mobile phone 100 displays information about the smart recording mode; see (b) in FIG. 21.
  • In the smart recording mode, the target object and the playback speed of the target object can be determined, so that the video recording mode can be determined according to the playback speed of the target object (described later).
  • For example, the mobile phone 100 displays the text message "Please click on the target object in the screen". Assuming that the user clicks on the swan in the preview interface 2003, the mobile phone 100 may display the identification information 2201 "swan selected". At this point, the user can continue to click other objects in the preview interface 2003. Assuming that the user continues to click the fish, the mobile phone 100 may add "fish" to the identification information 2201. When the mobile phone 100 detects that the confirmation control 2202 is clicked, it determines that the swan and the fish are the target objects.
  • Alternatively, the mobile phone 100 may display a selection box and determine that all objects included in the selection box are target objects. Assuming that the selection box includes two objects, a swan and a fish, the identification information 2201 displayed by the mobile phone 100 includes "swan" and "fish". When the selection box is enlarged, the number of objects contained in the selection box may increase, so the objects contained in the identification information 2201 increase accordingly. When the mobile phone 100 detects that the determination control 2202 is triggered, it determines that the objects included in the identification information 2201 are the target objects.
  • In another example, when the mobile phone 100 detects that the user selects the object numbered 1, the mobile phone 100 displays identification information 2201 that includes the number 1. When the user continues to select the object numbered 2, the mobile phone 100 adds the number 2 to the identification information 2201. When the mobile phone 100 detects that the user triggers the determination control 2202, it determines that the objects corresponding to the numbers in the identification information 2201 are the target objects.
  • After the mobile phone 100 determines the target object, it can also determine the playback speed of the target object.
  • the mobile phone 100 displays speed playback options, including: a 2x speed option, a 1.5x speed option, and a 0.5x speed option. Assuming that the user triggers the 2x speed option, when the mobile phone 100 detects an operation that triggers the shooting control 2005, the mobile phone 100 starts to record a video. When the video is played, the selected target object is played at 2x speed, and other objects are played at normal speed.
  • When there are multiple target objects, the playback speed of each target object can be determined. For example, referring to (b) in FIG. 23, the mobile phone 100 displays the respective playback speed options of the two target objects. Assuming that the user selects a playback speed of 2x for the swan and 0.5x for the fish, when the mobile phone 100 detects the operation of triggering the shooting control 2005, it starts to record the video. When the video is played, the swan is played fast and the fish is played slowly.
  • The mobile phone 100 may also determine the playback speed of the background excluding the target objects. For example, referring to (c) in FIG. 23, the mobile phone 100 also displays the playback speed option of the background (objects other than the swan and the fish). Assuming that the user selects a background playback speed of 1.5x, the mobile phone 100 starts to record the video when it detects the operation of triggering the shooting control 2005. When the video is played, the background is played at 1.5x speed.
  • the video recording method may be determined according to the playback speed of the target object.
  • the mobile phone 100 may refer to the slowest playback speed to collect images.
  • For slow playback, the mobile phone 100 can capture images at a high frame rate (for example, 60 fps) and then play the captured images at the normal playback frame rate (for example, 30 fps). To ensure that the mobile phone 100 has enough images to play in the case of slow playback, the mobile phone can determine the image collection frame rate according to the slowest playback speed. Therefore, if the mobile phone 100 determines that a target object is to be played slowly, it can capture images at a high frame rate.
  • For example, the mobile phone 100 can collect images at a frame rate of 60 fps, that is, 60 frames of images are collected in 1 second; when they are played at 30 frames per second, 2 seconds of playback are required, so the playback time is extended and the effect of slow playback is presented. Therefore, when the mobile phone 100 records a video, it can refer to the slowest playback speed to collect enough images.
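  • As a minimal sketch of this relationship (assuming a normal playback rate of 30 fps; the function name and the rounding choice are illustrative, not taken from the embodiment), the collection frame rate can be derived from the slowest requested playback speed:

```python
def capture_frame_rate(playback_fps: int, speed_factors: list) -> int:
    """Pick a collection frame rate high enough for the slowest requested speed.

    A 0.5x (slow) target needs twice as many frames as normal playback, so
    recording at playback_fps / 0.5 = 60 fps keeps the slow playback smooth.
    """
    slowest = min(speed_factors)                      # e.g. min([2.0, 0.5]) = 0.5
    return round(playback_fps / slowest) if slowest < 1 else playback_fps

# Example: target objects at 2x and 0.5x, normal playback at 30 fps -> record at 60 fps.
assert capture_frame_rate(30, [2.0, 0.5]) == 60
```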
  • Since the number of image frames collected by the mobile phone 100 per second increases, the target object can be played at a slow speed. If the background (objects other than the target object) is to be played normally, the mobile phone 100 still needs to segment the target object image from the video, play the target object image at normal speed, and play the background quickly, where the way of fast playing the background can be seen in FIG. 15. In this case, more frames are played for the target object, while the background is played from extracted frames, that is, fewer frames are played for it, so the target object in the video shows the effect of slow playback.
  • the video recording mode may be determined according to the playback speed of the target object.
  • the mobile phone 100 may refer to the fastest playback speed to collect images.
  • For example, assume that the target object is to be played at 2x speed.
  • Because the mobile phone 100 collects images with reference to the fastest playback speed, when it extracts one frame of image every 4 frames during playback (in order to extract the target object), it can still ensure that there are enough extracted target object images, that is, ensure that the extracted target object images can be played continuously.
  • the following describes the process of the mobile phone 100 recording and obtaining a video file.
  • In one implementation, after the mobile phone 100 detects the instruction for shooting, it can collect multiple frames of original images, then process the multiple frames of original images according to the process shown in FIG. 15 to obtain a new video file, and store the new video file (for the specific process, please refer to the previous description).
  • When the mobile phone 100 plays the new video, it can present the effect of the target object being played quickly.
  • It should be noted that the number of target object images is smaller than the number of original images.
  • Taking FIG. 15 as an example, when the mobile phone 100 fuses the target object images with the corresponding original images, only 3 new images can be obtained.
  • In one implementation, the new video only includes the 3 new images obtained by fusion.
  • Another implementation is that the mobile phone 100 fuses target object 1 with the first frame of the original image to obtain a first new image, fuses target object 2 with the second frame of the original image to obtain a second new image, and fuses target object 3 with the third frame of the original image to obtain a third new image; it then continues to merge target object 3 with the fourth original image, the fifth original image, and each subsequent original image. In this way, after the target object in the new video has been played, the last frame of the target object stays on the screen until the entire video finishes playing.
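  • The sketch below expresses this second implementation as an index mapping: background-covered frame i is fused with target image min(i, K-1), so the last target-object frame is held on screen until the video ends. The `fuse` helper and the list-based frame handling are assumptions for illustration.

```python
def fuse_with_last_frame_hold(bg_frames: list, target_images: list, fuse) -> list:
    """bg_frames: N background-covered originals; target_images: K extracted targets (K <= N)."""
    K = len(target_images)
    new_frames = []
    for i, bg in enumerate(bg_frames):
        # Frames 0..K-1 get targets 1..K; every later frame keeps the last target image.
        new_frames.append(fuse(bg, target_images[min(i, K - 1)]))
    return new_frames
```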
  • In another implementation, after the mobile phone 100 detects the instruction for shooting, it can collect multiple frames of original images to obtain an original video file, and then store the original video file.
  • When the mobile phone 100 detects an operation that triggers the playback of the original video file, the original video is processed in the manner shown in FIG. 15.
  • the mobile phone 100 stores the original video, and the process shown in FIG. 15 is executed only when the original video is played.
  • As mentioned above, before the mobile phone 100 records the video, the user can specify the target object and the playback speed of the target object in the preview interface. If the mobile phone 100 stores the original video, it also stores the playback information of the original video (for example, the target object specified by the user before recording the original video, and the playback speed of the target object). When the mobile phone 100 detects an operation that triggers the playback of the original video file, it processes the original video in the manner shown in FIG. 15 (in the process shown in FIG. 15, the target object is the target object specified by the user on the preview interface, and the number of frames to extract can be determined according to the playback speed specified by the user and Table 1).
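  • Table 1 of the description maps each playback speed to a frame-extraction interval (2x: one frame every 2 frames; 3x: every 3; 4x: every 4). A minimal sketch of that lookup follows; the dictionary encoding is an assumption about how the table might be stored.

```python
# Assumed encoding of Table 1: playback speed multiplier -> take every Nth original frame.
EXTRACTION_INTERVAL = {2.0: 2, 3.0: 3, 4.0: 4}

def frames_to_extract(frame_indices: list, speed: float) -> list:
    step = EXTRACTION_INTERVAL[speed]
    return frame_indices[::step]

# Example matching FIG. 15: 11 frames at 4x speed -> the 1st, 5th and 9th frames.
assert frames_to_extract(list(range(11)), 4.0) == [0, 4, 8]
```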
  • the mobile phone 100 can store both the original video file and the new video file (the new video file obtained by processing the original video).
  • the new video file stored in the mobile phone 100 may be provided with identification information 2401, and the identification information 2401 is used to indicate that the new video is a new video obtained by processing the original video.
  • the identification information 2401 may be text information or an icon, etc., which is not limited in the embodiment of the present application.
  • When the original video file is played, the effect of fast or slow playback of the target object is not presented (that is, the process shown in FIG. 15 is not executed); when the new video file is played, the effect of the target object being played quickly or slowly is presented.
  • the video processing method provided by the embodiments of the present application can realize that different areas or objects in the same video are presented to the user at different playback speeds.
  • the method can be applied to a variety of fields or scenarios.
  • For example, the video processing method provided in the embodiments of the present application can be applied to the field of video surveillance, that is, a specific person (i.e., the target object) is played quickly while other objects are kept still or played slowly, so as to track the specific person.
  • the method can also be applied to other scenes, such as the scene where a video player app plays a movie, or a WeChat or QQ video call scene, such as WeChat emoticon package production, and any scene where videos or moving pictures can be recorded or played.
  • the mobile phone 100 can process the video in a manner similar to that shown in FIG. 14 or FIG.
  • the embodiments of the present application provide a video processing method, which can be implemented in an electronic device (such as a mobile phone, a tablet computer, etc.) including a display screen.
  • the structure of the electronic device may be as shown in FIG. 1.
  • the method may include the following steps:
  • S2501 A first operation for playing a video file is detected; the video file is a video file stored in the electronic device.
  • For example, the video file can be the video 404 in the interface shown in FIG. 4(b), and the first operation can be the operation of clicking on the video 404; or the first operation can be the operation of clicking the play button 406 in the interface shown in FIG. 4(c).
  • the first target object may be selected by the user, or automatically determined by the electronic device, or preset.
  • the user can click on at least one object on a frame of image in the video file, and the clicked object is the target object.
  • the playback speed of the first target object may be higher than the playback speed of other objects by default, or the playback speed of the first target object may be selected by the user.
  • the user can select the playback speed of the first target object through speed playback options (2 times option, 1.5 times option, etc.).
  • For example, the currently selected first target object is a swan. The user can click the control for switching the target object, and the mobile phone will change the target object, switching from the first target object (the swan) to the second target object.
  • the second operation can also be a series of operations of pausing the video file, manually selecting the second target object from the pause screen, and then continuing to play the video, which has been described above and will not be repeated here.
  • S2504 In response to the second operation, continue to play the video file. In the process of continuing to play the video file, the playback speed of the second target object in the video file is greater than the playback speed of objects other than the second target object; the second target object is at least one object in the video file that is different from the first target object.
  • For example, before the switch, the swan in the video file is played quickly; after the target object is switched, the fish in the video file is played quickly.
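  • A hedged sketch of how the switch could be handled is shown below; the `process_like_fig15` helper stands in for the extraction-and-fusion flow of FIG. 15 and is an assumption, and reprocessing only the frames that have not yet been played is one of the designs described in the summary of this application.

```python
def on_switch_target(all_frames: list, current_index: int, new_target, speed: float, process_like_fig15):
    """When the target-switching operation is detected during playback, reprocess the
    remaining frames so that new_target (e.g. the fish) becomes the fast-played object."""
    remaining = all_frames[current_index + 1:]        # frames not yet played
    return process_like_fig15(remaining, target=new_target, speed=speed)
```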
  • the embodiments of the present application provide a video processing method, which can be implemented in an electronic device (such as a mobile phone, a tablet computer, etc.) including a display screen.
  • the structure of the electronic device may be as shown in FIG. 1.
  • the method may include the following steps:
  • S2601 Detect a first operation for playing a video file; the video file is a video file stored in the electronic device;
  • For example, the video file can be the video 404 in the interface shown in FIG. 4(b), and the first operation can be the operation of clicking on the video 404; or the first operation can be the operation of clicking the play button 406 in the interface shown in FIG. 4(c).
  • S2602 In response to the first operation, play the video file, where the playback speed of the content in the first area of the video file is greater than the playback speed of the content in the second area; the second area is an area other than the first area in the playback interface;
  • the first area may be selected by the user, or automatically determined by the electronic device, or preset. Taking FIG. 12C as an example, the user can click on an image in the video file to select the first area.
  • the user can click a control for instructing to switch the fast-playing area, and the mobile phone will switch the first area to the third area.
  • the second operation can also be a series of operations of pausing the video file, then manually selecting the third area from the pause screen, and then continuing to play the video, which has been described above and will not be repeated here.
  • S2604 In response to the second operation, continue to play the video file. In the process of continuing to play the video file, the playback speed of the content in the third area of the video file is greater than the playback speed of the content in the fourth area; the fourth area is an area other than the third area in the playback interface.
  • the embodiments of the present application provide a video processing method, which can be implemented in an electronic device (such as a mobile phone, a tablet computer, etc.) including a camera and a display screen.
  • the structure of the electronic device may be as shown in FIG. 1.
  • the method may include the following steps:
  • The first operation may be the operation of clicking the icon 2002 of the camera application in FIG. 20(a), or another operation that can start the camera, which is not limited in the embodiments of the present application.
  • S2702 In response to the first operation, display a viewfinder interface of the camera application, where the viewfinder interface includes a preview image, and the preview image includes at least one object;
  • the viewfinder interface may be the interface 2003 shown in FIG. 20(b).
  • the second operation may be an operation of clicking the control 2005 shown in FIG. 20(b).
  • In response to the second operation, the camera collects N frames of original images; N is an integer greater than or equal to 2;
  • S2705 Extract one original image every M frames from the N frames of original images, and extract K frames of first target object images from the extracted original images; M is an integer greater than or equal to 1 and less than N, and K is an integer greater than or equal to 1 and less than N; the first target object is at least one object on the extracted original images;
  • S2706 Cover the first target object in the N frames of original images with a background, so that the N frames of original images covered by the background do not include the first target object;
  • S2707 Correspondingly fuse the extracted K frames of first target object images with K frames of original images among the N background-covered original images to obtain K frames of new images; wherein the K frames of original images among the N background-covered original images are continuous images;
  • S2708 Synthesize the K frames of new images and the remaining N-K background-covered original images into the target video file;
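  • A minimal Python sketch of steps S2705-S2708 follows. The segmentation (`extract_target`), background covering (`cover_with_background`) and fusion (`fuse`) helpers are assumptions standing in for whatever the device actually uses; the indexing matches the 11-frame example below, where M = 4 yields the 1st, 5th and 9th frames.

```python
def build_target_video(original_frames: list, M: int, extract_target, cover_with_background, fuse) -> list:
    """original_frames: the N captured frames. Returns the frames of the target video
    file in which the first target object plays quickly (steps S2705-S2708)."""
    # S2705: take one original frame every M frames and pull the target object out of each.
    sampled = original_frames[::M]
    target_images = [extract_target(f) for f in sampled]           # K target-object images
    K = len(target_images)

    # S2706: cover the first target object with background in every original frame.
    bg_frames = [cover_with_background(f) for f in original_frames]

    # S2707: fuse the K target images with K consecutive background-covered frames.
    new_frames = [fuse(bg_frames[i], target_images[i]) for i in range(K)]

    # S2708: the target video = K fused frames followed by the remaining N-K covered frames.
    return new_frames + bg_frames[K:]
```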
  • For example, after the electronic device detects the instruction for shooting, it collects 11 frames of original images and extracts one frame every 4 frames from the 11 original images, 3 original images in total, that is, the first, fifth and ninth original images.
  • When the mobile phone 100 extracts images, it can start from the first frame, or from the second frame, and so on, as long as the number of subsequent frames is sufficient when extraction starts from the i-th frame.
  • The electronic device extracts 3 frames of images including the first target object from the extracted original images, and then covers the first target object in the 11 original images with the background (it may cover with the background only the first target object in those of the 11 original images that include the first target object; original images that do not contain the first target object need not be background-covered), so that the background-covered images do not include the first target object.
  • the electronic device fuses the extracted 3 frames of the image containing the first target object and 3 original images out of the 11 original images covered by the background correspondingly to obtain 3 new images.
  • the 3 original images of the 11 original images after the background overlay are continuous, for example, the first 3 frames of the 11 original images after the background overlay are used.
  • the electronic device synthesizes the 3 frames of new images and the remaining 8 frames with the background overlay image into the target video file, and when the target video file is played, the first target object is played quickly.
  • The video file recorded by the electronic device in this way is a specially processed video file. When the video file is played, the target object can be played quickly. In this way, it helps to enhance the fun of video recording and improve the user experience.
  • the embodiments of the present application provide a video processing method, which can be implemented in an electronic device (such as a mobile phone, a tablet computer, etc.) including a display screen.
  • the structure of the electronic device may be as shown in FIG. 1.
  • the method may include the following steps:
  • The first operation may be the operation of clicking the icon 2002 of the camera application in FIG. 20(a), or another operation that can start the camera, which is not limited in the embodiments of the present application.
  • S2802 In response to the first operation, display a viewfinder interface of the camera application, where the viewfinder interface includes a preview image, and the preview image includes at least one object;
  • the viewfinder interface may be the interface 2003 shown in FIG. 20(b).
  • the second operation may be an operation of clicking the control 2005 shown in FIG. 20(b).
  • In response to the second operation, the camera collects N frames of original images; N is an integer greater than or equal to 2;
  • S2805 Extract one frame of original image every M frames from the N frames of original images, and extract K frames of original images; M is an integer greater than or equal to 1 and less than N, and K is an integer greater than or equal to 1 and less than N;
  • S2806 Extract a first image from each extracted original image to obtain a total of K frames of first images; the first image is an image in the first area of each extracted original image;
  • S2807 Cover the first image in the first area of each of the N frames of original images with a background, so that each of the N background-covered original images does not include the first image;
  • S2808 Fill the extracted K frames of first images into the first area of K frames of original images among the N background-covered original images to obtain K frames of new images; wherein the K frames of original images among the N background-covered original images are continuous;
  • S2809 Synthesize the K frames of new images and the remaining N-K background-covered original images into the target video file.
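  • The region-based variant (S2805-S2809) differs only in that a fixed first area, rather than a segmented object, is lifted from the sampled frames and filled back into consecutive background-covered frames. A sketch under the same assumptions follows, with frames treated as H×W×3 arrays and the first area given as a (top, bottom, left, right) box; the box representation is an assumption.

```python
def build_region_video(original_frames: list, M: int, box, cover_region_with_background) -> list:
    """box = (top, bottom, left, right) bounds of the first area. Returns the frames of
    the target video file of steps S2805-S2809."""
    t, b, l, r = box

    # S2805/S2806: sample one frame every M frames and crop the first area from each.
    region_crops = [f[t:b, l:r].copy() for f in original_frames[::M]]   # K first images
    K = len(region_crops)

    # S2807: cover the first area with background in every original frame.
    bg_frames = [cover_region_with_background(f, box) for f in original_frames]

    # S2808: fill the K crops back into the first area of K consecutive covered frames.
    new_frames = []
    for i in range(K):
        frame = bg_frames[i].copy()
        frame[t:b, l:r] = region_crops[i]
        new_frames.append(frame)

    # S2809: synthesize the K new frames plus the remaining N-K covered frames.
    return new_frames + bg_frames[K:]
```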
  • For example, after the electronic device detects the instruction for shooting, it collects 11 frames of original images and extracts one frame every 4 frames from the 11 original images, 3 original images in total (that is, the first, fifth and ninth original images).
  • When the mobile phone 100 extracts images, it can start from the first frame, or from the second frame, and so on, as long as the number of subsequent frames is sufficient when extraction starts from the i-th frame.
  • The electronic device extracts the first image from each extracted original image (that is, the content in the first area of each original image), and then covers the first image in each of the 11 original images with a background, so that the 11 original images no longer include the first image.
  • the electronic device merges the extracted 3 first images and 3 original images out of the 11 original images covered by the background correspondingly to obtain 3 new images.
  • the 3 original images of the 11 original images after the background overlay are continuous, for example, the first 3 frames of the 11 original images after the background overlay may be used.
  • the electronic device combines 3 frames of new images and the remaining 8 frames with background overlay images into the target video file.
  • When the target video file is played, the content in the first area is played quickly.
  • the video file recorded by the electronic device in this way is a specially processed video file.
  • When the video file is played, the content of the first area can be played quickly. In this way, it helps to enhance the fun of video recording and improve the user experience.
  • the method provided in the embodiments of the present application is introduced from the perspective of a mobile device (mobile phone 100) as the execution subject.
  • the mobile device may include a hardware structure and/or a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above-mentioned functions is executed in a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraint conditions of the technical solution.
  • An embodiment of the present invention also provides a computer storage medium. The storage medium may include a memory, and the memory may store a program. When the program is executed, the terminal performs all or part of the steps performed by the electronic device described in the method embodiments shown in FIG. 25, FIG. 26, FIG. 27, and FIG. 28.
  • An embodiment of the present invention also provides a program product. When the program product runs on a terminal, the terminal performs all or part of the steps performed by the electronic device described in the method embodiments shown in FIG. 25, FIG. 26, FIG. 27, and FIG. 28.
  • the embodiments of the present application can be implemented by hardware, firmware, or a combination thereof.
  • When implemented by software, the above functions can be stored in a computer-readable medium or transmitted as one or more instructions or pieces of code on the computer-readable medium.
  • the computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates the transfer of a computer program from one place to another.
  • the storage medium may be any available medium that can be accessed by a computer.
  • Computer-readable media may include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage media or other magnetic storage devices, or any other media that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • Any connection can suitably become a computer-readable medium.
  • Disks and discs as used herein include compact discs (CDs), laser discs, optical discs, digital video discs (DVDs), floppy disks, and Blu-ray discs. Disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the protection scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This document provides a video processing method and a mobile device. The method includes: detecting a first operation for playing a video file, the video file being a video file stored in an electronic device; in response to the first operation, playing the video file, where the playback speed of a first target object in the video file is greater than the playback speed of other objects; and detecting a second operation for changing the target object, and continuing to play the video file, where, in the process of continuing to play the video file, the playback speed of a second target object in the video file is greater than the playback speed of objects other than the second target object, and the second target object is at least one object in the video file that is different from the first target object. In this process, when the electronic device plays the video file, the target object in the video file can be played quickly and can be switched, so the user can quickly view the objects of interest in the video file, which makes video playback more interesting and improves the user experience.

Description

一种视频处理方法和移动设备 技术领域
本申请涉及终端技术领域,尤其涉及一种视频处理方法和移动设备。
背景技术
手机中相机应用的拍照或视频录制功能已经成为用户使用频率较高的功能之一。现在,用户更希望使用手机能够拍摄(或录制)出更有趣味性的图像(或视频)。
请参见图1所示,为现有技术中手机中相机应用的区域界面的示意图。取景界面中包括两种视频录制模式。其中一种为慢动作拍摄,另一种为延时拍摄。当用户选择慢动作拍摄视频时,假设手机1秒采集480帧图像,且录制2s,即手机得到的视频中包括960帧图像,手机以每秒30帧的速率播放该视频,播放完960帧图像需要使用32s,即手机将视频内容慢速的呈现给用户(将录制2秒的视频使用32秒播完)。当用户选择延时摄影时,假设手机1秒采集15帧,且录制2s,即手机得到的视频中包括30帧图像,手机以30fps播放该视频时,只需1秒播完,即手机将视频内容快速的呈现给用户(将录制2秒的视频使用1秒播完)。
综上,现有技术中,无论慢动作录制还是延时摄影,手机都是对整个视频画面的快速或慢速呈现。但是,如果一段视频是通过延时摄影的方式录制的,那么这段视频播放时只能快速播放,如果视频中有用户感兴趣的内容,那么感兴趣的内容也会快速播放,用户就无法更好的查看这部分内容。
发明内容
本申请提供一种视频处理方法和移动设备,该方法可以使视频中不同内容以不同的播放速度播放。
第一方面提供一种视频处理方法,该方法可以由具有显示屏的电子设备(比如手机、ipad、笔记本电脑等)执行,该方法包括:检测到用于播放视频文件的第一操作;所述视频文件是所述电子设备中存储的视频文件;响应于所述第一操作,播放所述视频文件,所述视频文件中第一目标对象的播放速度大于所述视频文件中所述第一目标对象以外的其它对象的播放速度;检测到用于更换目标对象的第二操作;响应于所述第二操作,继续播放所述视频文件,在继续播放所述视频文件的过程中,所述视频文件中的第二目标对象的播放速度大于所述第二目标对象以外的其它对象的播放速度;所述第二目标对象是所述视频文件中与所述第一目标对象不同的至少一个对象。
应理解,电子设备播放视频文件时,视频文件中的目标对象可以快速播放,而且目标对象可以切换,比如,视频文件中的某个目标对象快速播放,其它对象正常播放,当切换目标对象之后,该视频文件中的另一个目标对象快速播放,其它对象正常播放。通过这种方式,用户可以快速观看视频文件中的用户感兴趣对象,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,所述第一目标对象包括:预设的目标对象;或者为所述电子设备根据所述视频文件中的多个对象自动确定的对象;或者,为所述电子设备根据用户在所述视频文件中的一帧图像上的选定操作,确定的对象;所述第二目标对象包括:预 设的目标对象;或者,所述电子设备根据所述视频文件中的多个对象自动确定的对象;或者,所述电子设备根据用户在所述视频文件中的一帧图像上的选定操作,确定的对象。
应理解,视频文件中的目标对象可以是预先设置的,或者电子设备自动确定的,或者是用户选择的。通过这种方式,用户可以快速观看视频文件中的用户感兴趣对象,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,所述第一目标对象或者所述第二目标对象的播放速度为预设的播放速度;或者,所述第一目标对象或者所述第二目标对象的播放速度为所述电子设备根据用户在所述视频文件中的一帧图像上的选择操作,确定的播放速度。
应理解,视频文件中目标对象的播放速度可以是预先设置的,或者用户选择的。通过这种方式,用户可以快速观看视频文件中的用户感兴趣对象,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,所述第二目标对象为所述电子设备根据用户在所述视频文件中的一帧图像上的选定操作,确定的对象,包括:检测到作用在所述一帧图像上的至少一个点击操作,确定每个点击操作所在位置对应的对象为所述第二目标对象;或者检测到作用在所述一帧图像上的至少一个圈选操作,确定每个圈选操作所围成的区域内包括的对象为所述第二目标对象;或者检测到作用在所述一帧图像上的每个对象的标识信息中选择至少一个目标标识信息的操作,确定所述一帧图像上每个目标标识信息对应的对象为所述第二目标对象。
应理解,用户选择目标对象的方式可以有多种,比如在视频文件的一帧图像上点击对象,或者圈选对象,或者根据目标对象的标识选择对象,方便用户操作,提升用户体验。
在一种可能的设计中,所述第二目标对象的播放速度为所述电子设备根据用户在所述视频文件中的一帧图像上的选择操作,确定的播放速度,包括:响应于触发显示播放速度选项的操作,显示多个播放速度选项;检测到用于在多个播放速度选项中选择目标播放速度选项的选择操作,确定所述目标播放速度选项对应的播放速度。
应理解,电子设备可以为用户提供目标对象的播放速度的多个选项,用户通过多个选项选择速度,为用户提供一个选择机会,提升用户体验。
在一种可能的设计中,当所述第二目标对象播放完毕时,停止播放所述视频文件;或者当所述第二目标对象播放完毕时,显示所述第二目标对象的最后一帧画面,继续播放所述视频文件的其它对象。
应理解,当目标对象播放完毕时,停止视频文件的播放,或者目标对象播放完毕时,停留到目标对象的最后一帧画面,其它对象继续播放。通过这种方式,用户可以快速观看视频文件中的用户感兴趣对象,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,电子设备在所述播放所述视频文件之前,从所述视频文件的N帧原始图像中每隔M帧抽取一帧原始图像,从抽取出的原始图像中提取K帧包含第一目标对象的图像;其中;N为大于等于2的整数;M是大于等于1小于N的整数,K是大于等于1小于N的整数;所述第一目标对象是抽取出的原始图像上的至少一个对象;将所述N帧原始图像中的所述第一目标对象使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中不包括所述第一目标对象;将提取出的所述K帧包含第一目标对象的图像和使用背景覆盖后的N帧原始图像中的K帧原始图像对应融合,得到K帧新图像;其中,使用背景覆盖后 的N帧原始图像中的K帧原始图像是连续的图像;将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成第一目标视频文件;所述电子设备播放所述视频文件,包括:播放所述第一目标视频文件。
应理解,以视频文件包括11帧原始图像为例,电子设备从11帧原始图像中每隔4帧图像抽取一帧原始图像,共抽取出3帧原始图像(即抽取出第1帧原始图像、第5帧原始图像和第9帧原始图像)。示例性的,手机100抽取图像时,可以从第1帧开始抽取,也可以从第2帧开始抽取等等,只要保证从第i帧开始抽取时,后续帧数足够即可。
电子设备从抽取出的原始图像中提取3帧包括第一目标对象的图像,然后将11帧原始图像中的第一目标对象使用背景覆盖(可以仅对11帧原始图像中包括第一目标对象的原始图像中的第一目标对象使用背景覆盖,对于不包含第一目标对象的原始图像可以不作背景覆盖处理),使得使用背景覆盖之后的图像上不包括第一目标对象。应理解:本文中对于任一帧原始图像的第一目标对象,使用背景覆盖的时候,该背景可以采用原始图像上的背景,也可以是通用的背景,此处不做限定。
电子设备将提取出的3帧包含第一目标对象的图像和使用背景覆盖之后的11帧原始图像中的3帧原始图像对应融合,得到3帧新图像。其中,使用背景覆盖之后的11帧原始图像中的3帧原始图像是连续的,比如可以是使用背景覆盖之后的11帧原始图像中的前3帧图像。
电子设备将3帧新图像和剩余的8帧使用背景覆盖之后的图像合成第一目标视频文件,当该第一目标视频文件被播放时,第一目标对象快速播放。因此,电子设备可以根据视频文件的N帧原始图像,经过一系列的处理步骤,合成新的视频,然后播放新视频,在新视频的播放过程中,第一目标对象快速播放。
在一种可能的设计中,电子设备在所述继续播放所述视频文件之前,还可以从所述视频文件的N帧原始图像中每隔P帧抽取一帧原始图像,从抽取出的原始图像中提取Q帧包含第二目标对象的图像;其中;N为大于等于2的整数;P是大于等于1小于N的整数,Q是大于等于1小于N的整数;所述第一目标对象是抽取出的原始图像上的至少一个对象;将所述N帧原始图像中的所述第二目标对象使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中不包括所述第二目标对象;将提取出的所述Q帧包含第二目标对象的图像和使用背景覆盖后的N帧原始图像中的Q帧原始图像对应融合,得到Q帧新图像;其中,使用背景覆盖后的N帧原始图像中的Q帧原始图像是连续的图像;将所述Q帧新图像和剩余的N-Q帧使用背景覆盖后的原始图像合成第二目标视频文件;所述继续播放所述视频文件,具体为:播放所述第二目标视频文件。
应理解,以视频文件包括11帧原始图像为例,电子设备从11帧原始图像中每隔4帧图像抽取一帧图像,共抽取出3帧原始图像(即抽取出第1帧原始图像、第5帧原始图像和第9帧原始图像)。示例性的,手机100抽取图像时,可以从第1帧开始抽取,也可以从第2帧开始抽取等等,只要保证从第i帧开始抽取时,后续帧数足够即可。
电子设备从抽取出的原始图像中提取3帧包括第二目标对象的图像,然后将11帧原始图像中的第二目标对象使用背景覆盖(可以仅对11帧原始图像中包括第二目标对象的原始图像中的第二目标对象使用背景覆盖,对于不包含第二目标对象的原始图像可以不处理),使得使用背景覆盖之后的图像上不包括第二目标对象。
电子设备将提取出的3帧包含第二目标对象的图像和使用背景覆盖之后的11帧原始图像中的3帧原始图像对应融合,得到3帧新图像。其中,使用背景覆盖之后的11帧原始图像 中的3帧原始图像是连续的,比如可以是使用背景覆盖之后的11帧原始图像中的前3帧图像。
电子设备将3帧新图像和剩余的8帧使用背景覆盖之后的图像合成第二目标视频文件,当该第二目标视频文件被播放时,第二目标对象快速播放。因此,电子设备可以根据视频文件的N帧原始图像,经过一系列的处理步骤,合成新的视频,然后播放新视频,在新视频的播放过程中,第二目标对象快速播放。也就是说,电子设备正在播放一视频文件时,若检测到目标对象切换的操作时,可以对视频文件中的所有图像进行处理,得到新视频,然后播放新视频,新视频的播放过程中,切换后的目标对象快速播放。
在一种可能的设计中,电子设备在所述继续播放所述视频文件之前,还可以从所述视频文件当前播放的图像的后续W帧原始图像中每隔P帧抽取一帧原始图像,从抽取出的原始图像中提取Q帧包含第二目标对象的图像;其中;W为大于等于2的整数;P是大于等于1小于W的整数,Q是大于等于1小于W的整数;所述第二目标对象是抽取出的原始图像上的至少一个对象;将所述W帧原始图像中的所述第二目标对象使用背景覆盖,使得使用背景覆盖后的所述W帧原始图像中不包括所述第二目标对象;将提取出的所述Q帧包含第二目标对象的图像和使用背景覆盖后的W帧原始图像中的Q帧原始图像对应融合,得到Q帧新图像;其中,使用背景覆盖后的W帧原始图像中的Q帧原始图像是连续的图像;将所述Q帧新图像和剩余的N-Q帧使用背景覆盖后的原始图像合成第二目标视频文件;所述继续播放所述视频文件,包括:播放所述第二目标视频文件。
应理解,电子设备在播放一视频文件,该视频文件包括11帧原始图像,该视频文件中第一目标对象在快速播放。假设电子设备检测到用于切换目标对象的操作时,正播放第3帧原始图像,则电子设备可以对视频文件中当前帧图像+后续未播放的图像(即第3帧原始图像到第11帧原始图像)进行处理,或者,仅对后续帧图像(即第4帧原始图像到第11帧原始图像)进行处理。
以电子设备仅对后续未播放的图像为例,电子设备可以从第4帧图像开始,进行抽帧、提取目标对象等处理,得到新视频,然后播放新视频,新视频的播放过程中,切换后的目标对象快速播放。
应理解,电子设备可以判断后续帧图像是否足够,若不够,电子设备可以从第1帧开始抽取。举例来说,视频文件包括11帧原始图像,当电子设备播放到第10帧时,检测到切换目标对象的操作,但后续帧只剩下1帧,则手机100可以从第1帧开始从抽取。
第二方面还提供一种视频处理方法,该方法可以由具有显示屏的电子设备(比如手机、ipad、笔记本电脑等)执行,该方法包括:响应于所述第一操作,播放所述视频文件,播放界面中处于第一区域中的第一内容的播放速度大于第二区域中的第二内容的播放速度;所述第二区域是播放界面中所述第一区域以外的其它区域;所述第一内容和所述第二内容属于所述视频文件中的播放内容;检测到用于切换快速播放区域的第二操作;响应于所述第二操作,继续播放所述视频文件,在继续播放所述视频文件的过程中,播放界面中处于第三区域中的第三内容的播放速度大于第四区域中第四内容的播放速度;所述第四区域是播放界面中所述第三区域以外的其它区域,所述第三内容和所述第四内容属于所述视频文件中的播放内容。
应理解,电子设备播放视频文件时,视频文件中的第一区域(或者称为目标区域)中的内容可以快速播放,而且在视频文件播放的过程中,第一区域可以切换。比如,当前,视频文件中的一个区域中的内容快速播放,其它区域内的内容正常播放,当切换区 域之后,该视频文件中的另一个区域内的内容快速播放,其它区域内的内容正常播放。通过这种方式,视频文件中某个区域内的内容可以快速呈现给用户,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,所述第一区域或者所述第三区域为:预先设置的区域;或者,所述电子设备根据所述视频文件的多个对象自动确定的区域;或者,所述电子设备根据用户在所述视频文件的一帧图像上的选定操作,确定的区域。
应理解,视频文件中用于快速播放的区域可以是预先设置的,或者电子设备自动确定的,或者是用户选择的。通过这种方式,视频文件中某个区域内的内容可以快速呈现给用户,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,所述第一区域或所述第三区域中内容的播放速度为预设的播放速度;或者,所述第一区域或者所述第三区域中内容的播放速度为所述电子设备根据用户在视频文件的一帧图像上的选择操作,确定的播放速度。
应理解,视频文件中用于快速播放的区域的播放速度可以是预先设置的,或者电子设备自动确定的,或者是用户选择的。通过这种方式,视频文件中某个区域内的内容可以快速呈现给用户,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,所述第一区域或者第三区域为所述电子设备根据用户在所述视频文件的一帧图像上的选定操作,确定的区域,包括:检测到作用在所述一帧图像上的至少一个圈选操作,确定所述至少一个圈选操作所围成的区域为确定的区域中部分区域或全部区域。
应理解,用户可以在视频文件的一帧图像上圈选一个区域,该区域内的对象可以快速播放。通过这种方式,为用户提供选择快速播放区域的机会,即根据用户的需求确定用于快速播放的区域,提升用户体验。
在一种可能的设计中,所述第三区域中内容的播放速度为所述电子设备根据用户在所述视频文件的一帧图像上的选择操作,确定的播放速度,包括:响应于触发显示播放速度选项的操作,显示多个播放速度选项;检测到用于在多个播放速度选项中选择目标播放速度选项的选择操作,确定所述目标播放速度选项对应的播放速度。
应理解,用户可以选择用于快速播放的区域的播放速度,提升用户体验。通过这种方式,视频文件中某个区域内的内容可以快速呈现给用户,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,当所述第三区域内的第三内容播放完毕时,停止播放所述视频文件;或者当所述第三区域内的第三内容播放完毕时,显示所述第三区域的最后一帧画面,继续播放第四区域的第四内容。
应理解,当快速播放的区域播放完毕时,视频文件停止播放,或者显示快速播放区域的最后一帧画面,其它区域继续播放。通过这种方式,视频文件中某个区域内的内容可以快速呈现给用户,提升视频播放的趣味性,提升用户体验。
在一种可能的设计中,电子设备在所述播放所述视频文件之前,电子设备还可以从所述视频文件的N帧原始图像中每隔M帧抽取一帧原始图像,抽取出K帧原始图像;其中,N为大于等于2的整数,M为大于等于1小于N的整数,K为大于等于1小于N的整数;从抽取出的每帧原始图像中提取第一图像,共得到K帧第一图像;第一图像是抽取出的每帧原始图像上处于所述第一区域内的第一内容;将所述N帧原始图像中每帧原始图像上的第一 区域内的第一图像使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中每帧原始图像上不包括所述第一图像;将提取出的所述K帧第一图像填充在使用背景覆盖后的N帧原始图像中的K帧原始图像中的所述第一区域内,得到K帧新图像;其中,所述使用背景覆盖后的N帧原始图像中的K帧原始图像是连续的;将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成第一目标视频文件;所述播放所述视频文件,包括:播放所述第一目标视频文件。
应理解,以视频文件包括11帧原始图像为例,电子设备可以从11帧原始图像中每隔4帧图像抽取一帧图像,共抽取出3帧原始图像,(即抽取出第1帧原始图像、第5帧原始图像和第9帧原始图像)。示例性的,手机100抽取图像时,可以从第1帧开始抽取,也可以从第2帧开始抽取等等,只要保证从第i帧开始抽取时,后续帧数足够即可。
电子设备从抽取出的每帧原始图像中提取第一图像(即每帧原始图像上的第一区域内的内容),然后将11帧原始图像中每帧原始图像上的第一图像使用背景覆盖,使得11帧原始图像上不包括第一图像。
电子设备将提取出的3帧第一图像和使用背景覆盖之后的11帧原始图像中的3帧原始图像对应融合,得到3帧新图像。其中,使用背景覆盖之后的11帧原始图像中的3帧原始图像是连续的,比如可以是使用背景覆盖之后的11帧原始图像中的前3帧图像。
电子设备将3帧新图像和剩余的8帧使用背景覆盖之后的图像合成第一目标视频文件。当该第一目标视频文件在播放时,第一区域的内容可以快速播放。
在一种可能的设计中,电子设备在所述继续播放所述视频文件之前,还可以从所述视频文件的N帧原始图像中每隔P帧抽取一帧原始图像,抽取出Q帧原始图像;其中,N为大于等于2的整数,P为大于等于1小于N的整数,Q为大于等于1小于N的整数;从抽取出的每帧原始图像中提取第三图像,共得到Q帧第三图像;第三图像是抽取出的每帧原始图像上处于所述第三区域内的第三内容;将所述N帧原始图像中每帧原始图像上处于所述第三区域内的第三图像使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中每帧原始图像上不包括所述第三图像;将提取出的所述Q帧第三图像填充在使用背景覆盖后的N帧原始图像中的Q帧原始图像中的所述第三区域内,得到Q帧新图像;其中,所述使用背景覆盖后的N帧原始图像中的Q帧原始图像是连续的;将所述Q帧新图像和剩余的N-Q帧使用背景覆盖后的原始图像合成第二目标视频文件;所述继续播放所述视频文件,具体为:播放所述第二目标视频文件。
应理解,以视频文件包括11帧原始图像为例,电子设备可以从11帧原始图像中每隔4帧图像抽取一帧图像,共抽取出3帧原始图像,(即抽取出第1帧原始图像、第5帧原始图像和第9帧原始图像)。示例性的,手机100抽取图像时,可以从第1帧开始抽取,也可以从第2帧开始抽取等等,只要保证从第i帧开始抽取时,后续帧数足够即可。
电子设备从抽取出的每帧原始图像中提取第三图像(即每帧原始图像上的第三区域内的内容),然后将11帧原始图像中每帧原始图像上的第三图像使用背景覆盖,使得11帧原始图像上不包括第三图像。
电子设备将提取出的3帧第三图像和使用背景覆盖之后的11帧原始图像中的3帧原始图像对应融合,得到3帧新图像。其中,使用背景覆盖之后的11帧原始图像中的3帧原始图像是连续的,比如可以是使用背景覆盖之后的11帧原始图像中的前3帧图像。
电子设备将3帧新图像和剩余的8帧使用背景覆盖之后的图像合成第二目标视频文件。 当该第二目标视频文件在播放时,第三区域的内容可以快速播放。也就是说,电子设备正在播放一视频文件时,若检测到目标区域切换的操作时,可以对视频文件中的所有图像进行处理,得到新视频,然后播放新视频,新视频的播放过程中,切换后的目标区域即第三区域中的内容快速播放。
在一种可能的设计中,电子设备在所述继续播放所述视频文件之前,还可以从所述视频文件当前帧图像的后续W帧原始图像中每隔P帧抽取一帧原始图像,抽取出Q帧原始图像;其中,W为大于等于2的整数,P为大于等于1小于W的整数,Q为大于等于1小于W的整数;从抽取出的每帧原始图像中提取第三图像,共得到Q帧第三图像;第三图像是抽取出的每帧原始图像上处于所述第三区域内的第三内容;将所述W帧原始图像中每帧原始图像上处于所述第三区域内的第三图像使用背景覆盖,使得使用背景覆盖后的所述W帧原始图像中每帧原始图像上不包括所述第三图像;将提取出的所述Q帧第三图像填充在使用背景覆盖后的W帧原始图像中的Q帧原始图像中的所述第三区域内,得到Q帧新图像;其中,所述使用背景覆盖后的W帧原始图像中的Q帧原始图像是连续的;将所述Q帧新图像和剩余的W-Q帧使用背景覆盖后的原始图像合成第二目标视频文件;所述继续播放所述视频文件,包括:播放所述第二目标视频文件。
应理解,当电子设备在播放一视频文件,该视频文件包括11帧原始图像,该视频文件中第一区域内的内容在快速播放。假设电子设备检测到用于切换目标区域的操作时,正播放第3帧原始图像,则电子设备可以对视频文件中当前帧图像+后续帧图像(即第3帧原始图像到第11帧原始图像)进行处理,或者,仅对后续帧图像(即第4帧原始图像到第11帧原始图像)进行处理。
以电子设备仅对后续未播放的图像为例,电子设备可以从第4帧图像开始,进行抽帧、提取目标对象等处理,得到新视频,然后播放新视频,新视频的播放过程中,切换后的目标区域即第三区域中的内容快速播放。
第三方面还提供一种视频处理方法,应用于具有摄像头和显示屏的电子设备比如手机、pad等,该方法包括:检测到用于启动所述摄像头的第一操作;响应于所述第一操作,所述显示屏显示所述相机应用的取景界面,所述取景界面中包括预览图像,所述预览图像中包括至少一个对象;响应于第二操作,所述摄像头采集N帧原始图像;N为大于等于2的整数;从所述N帧原始图像中每隔M帧抽取一帧原始图像,从抽取出的原始图像中提取K帧包含第一目标对象的图像;M是大于等于1小于N的整数,K是大于等于1小于M的整数;所述第一目标对象是抽取出的原始图像上的至少一个对象;将所述N帧原始图像中的第一目标对象使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中不包括所述第一目标对象;将提取出的所述K帧包含第一目标对象的图像和使用背景覆盖后的N帧原始图像中的K帧原始图像对应融合,得到K帧新图像;其中,使用背景覆盖后的N帧原始图像中的K帧原始图像是连续的图像;将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成目标视频文件。
应理解,电子设备在录制视频的过程中,采集11帧原始图像后,从11帧原始图像中每隔4帧图像抽取一帧图像,共抽取出3帧原始图像(即抽取出第1帧原始图像、第5帧原始图像和第9帧原始图像)。示例性的,手机100抽取图像时,可以从第1帧开始抽取,也可以从第2帧开始抽取等等,只要保证从第i帧开始抽取时,后续帧数足够即可。
电子设备从抽取出的原始图像中提取3帧包括第一目标对象的图像,然后将11帧原始 图像中的第一目标对象使用背景覆盖(可以仅对11帧原始图像中包括第一目标对象的原始图像中的第一目标对象使用背景覆盖,对于不包含第一目标对象的原始图像可以不作背景覆盖处理),使得使用背景覆盖之后的图像上不包括第一目标对象。
电子设备将提取出的3帧包含第一目标对象的图像和使用背景覆盖之后的11帧原始图像中的3帧原始图像对应融合,得到3帧新图像。其中,使用背景覆盖之后的11帧原始图像中的3帧原始图像是连续的,比如可以是使用背景覆盖之后的11帧原始图像中的前3帧图像。
电子设备将3帧新图像和剩余的8帧使用背景覆盖之后的图像合成目标视频文件,当该目标视频文件被播放时,第一目标对象快速播放。电子设备通过该方式录制得到的视频文件,是经过特殊处理的视频文件,当该视频文件在播放时,目标对象可以快速播放。通过这种方式,有助于提升视频录制的趣味性,提升用户体验。
在一种可能的设计中,所述第一目标对象为预设的目标对象;或者,所述第一目标对象为所述电子设备根据所述预览图像的上的至少一个待拍摄对象自动确定的对象;或者,所述第一目标对象为所述电子设备根据用户在所述预览图像上的选定操作,确定的对象。
应理解,第一目标对象可以是预先设置的,或者是电子设备自动确定的,或者是用户选择的。通过这种方式,电子设备可以录制得到经过特殊处理的视频文件,该视频文件中在播放时,用户感兴趣的对象可以快速播放。
在一种可能的设计中,所述第一目标对象的播放速度为预设的播放速度;或者,所述第一目标对象的播放速度为所述电子设备根据用户在所述预览图像上的选择操作,确定的播放速度。
应理解,第一目标对象的播放速度可以是预设的或者用户选择的。通过这种方式,电子设备可以录制得到经过特殊处理的视频文件,该视频文件中在播放时,用户感兴趣的对象可以快速播放。
在一种可能的设计中,所述第一目标对象为所述电子设备根据用户在所述预览图像上的选定操作,确定的对象,包括:检测到作用在所述预览图像上的至少一次点击操作,确定每次点击操作所在位置对应的对象为所述第一目标对象;或者检测到作用在所述预览图像上的至少一个圈选操作,确定每个圈选操作所围成的区域内包括的对象为所述第一目标对象;或者检测到作用于在所述预览图像上的每个对象的标识信息中选择至少一个目标标识信息的操作,确定每个目标标识信息对应的对象为所述第一目标对象。
应理解,电子设备录制视频文件的过程中,用户可以在取景界面中选择目标对象。比如,在预览图像上点击、圈选目标对象等。通过这种方式,电子设备可以录制得到经过特殊处理的视频文件,该视频文件中在播放时,用户感兴趣的对象即目标对象可以快速播放。
在一种可能的设计中,保存所述目标视频文件,所述目标视频文件的封面上显示标识,所述标识用于指示所述目标视频文件中第一目标对象播放速度大于其它对象的播放速度,所述其它对象是所述目标视频文件中除所述第一目标对象之外的对象。
应理解,电子设备录制得到经过特殊处理的视频文件之后,在该视频文件的封面上可以显示一标识,用于指示该视频文件是经过特殊处理的视频文件,或者用于指示该视频文件中第一目标对象快速播放,方便用户了解该视频文件,也方便用户查找视频文件,有助于提升用户体验。
第四方面还提供一种视频处理方法,应用于具有摄像头和显示屏的电子设备,所述方法包括:检测到用于启动所述摄像头的第一操作;响应于所述第一操作,所述显示屏显示所述相机应用的取景界面,所述取景界面中包括预览图像,所述预览图像中包括至少一个对象;响应于第二操作,所述摄像头采集的N帧原始图像;N为大于等于2的整数;从所述N帧原始图像中每隔M帧抽取一帧原始图像,抽取出K帧原始图像;M为大于等于1小于N的整数,K为大于等于1小于N的整数;从抽取出的每帧原始图像中提取第一图像,共得到K帧第一图像;第一图像是抽取出的每帧原始图像上的第一区域内的图像;将所述N帧原始图像中每帧原始中的第一区域的第一图像使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中每帧原始图像上不包括所述第一图像;将所述K帧第一图像填充在使用背景覆盖后的N帧原始图像中的K帧原始图像上的所述第一区域,得到K帧新图像;其中,所述使用背景覆盖后的N帧原始图像中的K帧原始图像是连续的;将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成目标视频文件。
应理解,电子设备在录制视频的过程中,采集11帧原始图像后,从11帧原始图像中每隔4帧图像抽取一帧图像,共抽取出3帧原始图像,(即抽取出第1帧原始图像、第5帧原始图像和第9帧原始图像)。示例性的,手机100抽取图像时,可以从第1帧开始抽取,也可以从第2帧开始抽取等等,只要保证从第i帧开始抽取时,后续帧数足够即可。
电子设备从抽取出的每帧原始图像中提取第一图像(即每帧原始图像上的第一区域内的内容),然后将11帧原始图像中每帧原始图像上的第一图像使用背景覆盖,使得11帧原始图像上不包括第一图像。
电子设备将提取出的3帧第一图像和使用背景覆盖之后的11帧原始图像中的3帧原始图像对应融合,得到3帧新图像。其中,使用背景覆盖之后的11帧原始图像中的3帧原始图像是连续的,比如可以是使用背景覆盖之后的11帧原始图像中的前3帧图像。
电子设备将3帧新图像和剩余的8帧使用背景覆盖之后的图像合成目标视频文件。当该目标视频文件被播放时,第一区域内的内容快速播放。电子设备通过该方式录制得到的视频文件,是经过特殊处理的视频文件,当该视频文件在播放时,第一区域的内容可以快速播放。通过这种方式,有助于提升视频录制的趣味性,提升用户体验。
在一种可能的设计中,所述第一区域为预先设置的区域;或者,所述第一区域为根据所述预览图像上的至少一个待拍摄对象自动确定的区域;或者,所述第一区域为所述电子设备根据用户在所述预览图像上的选定操作,确定的区域。
应理解,第一区域可以是预先设置的,或者电子设备自动确定的,或者是用户在预览图像上选择的。电子设备通过该方式录制得到的视频文件,是经过特殊处理的视频文件,当该视频文件在播放时,第一区域的内容可以快速播放。通过这种方式,有助于提升视频录制的趣味性,提升用户体验。
在一种可能的设计中,所述第一区域中内容的播放速度为预设的播放速度;或者,所述第一区域中内容的播放速度为所述电子设备根据用户在所述预览图像上的选定操作,确定的播放速度。
应理解,第一区域的播放速度可以是预先设置的,或者电子设备自动确定的,或者是用户在预览图像上选择的。当电子设备录制得到的视频文件后,该视频文件在播放时,第一区域的内容可以快速播放。通过这种方式,有助于提升视频录制的趣味性,提升用户体验。
在一种可能的设计中,所述第一区域为所述电子设备根据用户在所述预览图像上的选定操作,确定的区域,包括:检测到作用在所述预览图像上的至少一个圈选操作,确定每个圈选操作所围成的区域为所述第一区域。
应理解,用户可以取景界面中的预览图像上圈选一个区域,当电子设备录制视频文件之后,该视频文件在播放时,圈选的区域的内容可以快速播放。通过这种方式,有助于提升视频录制的趣味性,提升用户体验。
在一种可能的设计中,电子设备保存所述目标视频文件,所述目标视频文件的封面上显示标识,所述标识用于指示所述目标视频文件中第一区域播放速度大于其它区域的播放速度,所述其它区域是所述目标视频文件中除所述第一区域之外的区域。
应理解,电子设备录制得到视频文件之后,在该视频文件的封面上可以显示一标识,用于指示该视频文件是经过特殊处理的视频文件,或者用于指示该视频文件中第一目标对象快速播放,方便用户了解该视频文件,也方便用户查找视频文件,有助于提升用户体验。
第五方面还提供一种电子设备,包括显示屏;一个或多个处理器;存储器;一个或多个应用程序;一个或多个程序,其中所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行上述第一方面或者第一方面的任意一种可能的设计的方法;或者当所述指令被所述电子设备执行时,使得所述电子设备执行上述第二方面或者第二方面的任意一种可能的设计的方法。
第六方面还提供一种电子设备,包括显示屏;摄像头;一个或多个处理器;存储器;一个或多个应用程序;一个或多个程序,其中所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行上述第三方面或者第三方面的任意一种可能的设计的方法;或者当所述指令被所述电子设备执行时,使得所述电子设备执行上述第四方面或者第四方面的任意一种可能的设计的方法。
第七方面还提供了一种电子设备,所述电子设备包括执行第一方面或者第一方面的任意一种可能的设计的方法的模块/单元;或者所述电子设备包括执行第二方面或者第二方面的任意一种可能的设计的方法的模块/单元;或者所述电子设备包括执行第三方面或者第三方面的任意一种可能的设计的方法的模块/单元;或者所述电子设备包括执行第四方面或者第四方面的任意一种可能的设计的方法的模块/单元;
这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
第八方面还提供一种计算机可读存储介质,所述计算机可读存储介质包括程序,当程序在电子设备上运行时,使得所述电子设备执行第一方面或上述第一方面的任意一种可能的设计的方法;或者当程序在电子设备上运行时,使得所述电子设备执行第二方面或上述第二方面的任意一种可能的设计的方法;或者当程序在电子设备上运行时,使得所述电子设备执行第三方面或上述第三方面的任意一种可能的设计的方法;或者当程序在电子设备上运行时,使得所述电子设备执行第四方面或上述第四方面的任意一种可能的设计的方法。
第九方面还提供一种包含程序产品,当所述程序产品在电子设备上运行时,使得所述电子设备执行第一方面或上述第一方面的任意一种可能的设计的方法;或者当所述程 序产品在电子设备上运行时,使得所述电子设备执行第二方面或上述第二方面的任意一种可能的设计的方法;或者当所述程序产品在电子设备上运行时,使得所述电子设备执行第三方面或上述第三方面的任意一种可能的设计的方法;或者当所述程序产品在电子设备上运行时,使得所述电子设备执行第四方面或上述第四方面的任意一种可能的设计的方法。
第十方面还提供一种电子设备上的用户图形界面,所述电子设备具有显示屏、摄像头、存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行上述第一方面或上述第一方面的任意一种可能的设计的方法时显示的图形用户界面;或者,所述图形用户界面包括所述电子设备执行上述第二方面或上述第二方面的任意一种可能的设计的方法时显示的图形用户界面;或者,所述图形用户界面包括所述电子设备执行上述第三方面或上述第三方面的任意一种可能的设计的方法时显示的图形用户界面;或者,所述图形用户界面包括所述电子设备执行上述第四方面或上述第四方面的任意一种可能的设计的方法时显示的图形用户界面。
需要说明的是,本申请中涉及的第X操作,可以是一个操作也可以是多个操作的组合,第X操作包括第一操作、第二操作等等。
需要说明的是,本申请中涉及的第X区域,可以是一个区域也可以多个区域的集合,第X区域包括第一区域、第二区域、第三区域、或第四区域等等
附图说明
图1为本申请提供的手机的相机应用的取景界面的示意图;
图2A为本申请一实施例提供的慢速播放、正常播放、快速播放的示意图;
图2B为本申请一实施例提供的慢速播放、正常播放、快速播放的示意图;
图3为本申请一实施例提供的手机100的结构示意图;
图4为本申请一实施例提供的手机100的显示界面的示意图;
图5为本申请一实施例提供的手机100的显示界面的示意图;
图6为本申请一实施例提供的手机100的显示界面的示意图;
图7为本申请一实施例提供的手机100的显示界面的示意图;
图8为本申请一实施例提供的手机100的显示界面的示意图;
图9为本申请一实施例提供的手机100的显示界面的示意图;
图10为本申请一实施例提供的手机100的显示界面的示意图;
图11为本申请一实施例提供的手机100的显示界面的示意图;
图12A为本申请一实施例提供的手机100的显示界面的示意图;
图12B为本申请一实施例提供的手机100的显示界面的示意图;
图12C为本申请一实施例提供的手机100的显示界面的示意图;
图13为本申请一实施例提供的手机100处理视频的流程示意图;
图14为本申请一实施例提供的视频处理方法的流程示意图;
图15为本申请一实施例提供的手机100处理视频的流程示意图;
图16为本申请一实施例提供的手机100处理视频的流程示意图;
图17为本申请一实施例提供的手机100处理视频的流程示意图;
图18为本申请一实施例提供的手机100的显示界面的示意图;
图19为本申请一实施例提供的手机100的显示界面的示意图;
图20为本申请一实施例提供的手机100的显示界面的示意图;
图21为本申请一实施例提供的手机100的显示界面的示意图;
图22为本申请一实施例提供的手机100的显示界面的示意图;
图23为本申请一实施例提供的手机100的显示界面的示意图;
图24为本申请一实施例提供的手机100的显示界面的示意图;
图25为本申请一实施例提供的一种视频处理方法的流程示意图;
图26为本申请一实施例提供的一种视频处理方法的流程示意图;
图27为本申请一实施例提供的一种视频处理方法的流程示意图;
图28为本申请一实施例提供的一种视频处理方法的流程示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
以下,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。
本申请实施例涉及的应用程序(application,简称app),为能够实现某项或多项特定功能的软件程序。通常,移动设备中可以安装多个应用程序。比如,相机应用、短信应用、邮箱应用、微信(WeChat)、WhatsApp Messenger、连我(Line)、照片分享(instagram)、Kakao Talk、钉钉等。下文中提到的应用程序,可以是移动设备出厂时已安装的应用程序,也可以是用户在使用移动设备的过程中从网络下载或其他移动设备获取的应用程序。需要说明的是,本申请实施例提供的视频处理方法应用于任何能够播放视频的应用程序中。
本申请实施例涉及的视频播放帧率,即每秒播放图像的帧数,单位是每秒帧数(frames per second,fps)。通常,视频由多帧图像构成,视频播放是将视频中的多帧图像连续的切换,每帧图像上的内容不同,以呈现出画面动图播放的过程。当图像的切换速度太慢时,会呈现出现画面不流畅,存在跳跃感,画面“闪烁”的感受。因此,视频的正常播放帧率与人的视觉反映时间相关。比如,正常播放帧率设置为24fps时,即每秒切换24帧图像时,在人眼看来已经是连续的动画了。目前,移动设备的视频播放帧率通常设置在25fps~30fps区间内。
本申请实施例涉及的图像采集帧率,即采集图像的速度,即每秒采集多少帧图像,以帧/秒为单位。通常,播放帧率小于等于图像采集帧率。比如,终端设备采集图像的帧率为60fps,播放帧率为30fps。
本申请实施例涉及的正常播放、快速播放,慢速播放。这三种播放速度是相对而言的。在本申请实施例中,正常播放、快速播放和慢速播放都是以相同的视频播放帧率(比如都是30fps)播放图像的,但是,对于同一视频文件来说,正常播放、快速播放和慢速播放所播放的图像的总帧数不同。通常,对于同一个视频文件,正常播放时,播放的图像的总帧数大于快速播放时播放的图像的总帧数,小于慢速播放时播放的图像的总帧数。
下面通过举例来介绍。
示例一,以移动终端是手机为例,手机从中存储有一个视频文件(比如手机从网络 侧下载的视频文件,或者接收其它设备发送的视频文件,或者手机已录制好的视频文件)。通常,手机播放视频文件时,将视频文件中的图像一帧一帧播放的,且是以正常播放速度(比如30fps)播放的。
请参见图2A所示,为本申请实施例提供的对于同一视频文件,快速播放和正常播放的示意图。假设该视频文件包括1200帧图像,正常播放帧率是30fps,正常播放时,不提取部分图像,即播放完1200帧图像,需要时长为40s。那么快速播放可以是从1200帧图像中每4帧图像提取一帧图像,得到300帧图像,只播放提取出的300帧图像,因为播放速度仍然是30fps,即快速播放时需要时长为10s(快速播放时的播放时长是正常播放所需时长的1/4)。
由此可见,手机都是以30pfs播放图像的,但是快速播放时,手机只播放了抽取出的300帧图像,正常播放时手机播放了1200帧图像,所以快速播放时播放的图像总帧数小于正常播放时播放的图像总帧数。
在示例一中,介绍手机已经获得视频文件,播放视频文件时,快速播放和正常播放的不同之处。
应理解,在示例一中,比如手机从网络侧下载的视频文件,该视频文件中包含的图像帧数已确定(比如1200帧)。通常,手机正常播放视频文件时,是从第一帧开始依次播放视频文件中的每一帧图像,直到播放完最后一帧图像,以图2A为例,正常播放是依次播放1200帧图像。因此,对于已经下载的视频文件,手机无法做到慢速播放,因为慢速播放需要播放更多的图像,但是由于视频中的图像帧数已确定,无法做到播放更多的图像。但是,手机可以按照图2A所示的方式快速播放,即从视频文件的图像中抽取部分图像,只播放抽取出的部分图像,实现快速播放。
当然,在另一些实施例中,对于已经下载的视频文件,手机也可以按照图2B所示的方式,实现快速、慢速和正常播放。请参见图2B所示,假设一个视频文件包括1200帧图像,且正常播放的速度为30fps。正常播放时可以每2帧图像提取一帧图像,提取出600帧图像,正常播放时仅播放提取出的600帧图像,需要时长为20s。慢速播放可以不提取部分图像,而是完全部分1200帧图像,则慢速播放时,需要时长为40s(时长是正常播放所需时长的2倍)。快速播放可以是每4帧图像提取一帧图像,提取出300帧图像,则快速播放时,需要时长为10s(时长是正常播放所需时长的1/2)。在这个实施例中,手机可以正常播放速度并不是将1200帧图像一帧一帧播放的,而是每2帧播一帧,这样的话,可以做到慢速播放,也可以做到快速播放。在这个实施例中,手机从网络侧下载一段视频之后,可以按照图2B所示的方式呈现慢速、快速、正常播放。
在上面的例子中,是以手机已经获取一个视频文件,播放这个视频文件为例,介绍正常、快速、慢速播放的区别,下面从另一个角度(手机录制视频文件的角度),正常、快速、慢速播放的区别。
示例二:以移动终端是手机为例,且以手机录制视频的过程为例。通常,手机录制视频时,正常图像采集帧率等于正常播放帧率(比如都是30fps)。
举例来说,手机以30fps采集图像,且录制1s,则手机拍摄得到的视频有30帧,以30fps播放该视频时可以播放1s。这种方式为正常播放。
再例如,手机以高帧率(比如60fps)采集图像,录制1s,拍摄得到60帧图像,手机以30fps播放该视频时,播放2s,即慢速播放。
再例如,手机以低帧率(比如15fps)采集图像,录制1s,拍摄得到15帧图像,手机以30fps播放该视频时,播放0.5s,即快速播放。
通过上面的示例二可知,手机是以30fps播放视频的,但是由于录制的视频文件时,图像采集帧率不同,所以相同时间内录制的视频文件包含的图像的帧数不同,所以手机播放录制的视频文件时,需要的时长不同,所以呈现出不同的播放效果(快速或者慢速)。
应理解,上面的示例二中,手机录制视频时的图像采集帧率与快速播放或者慢速播放相关,这部分内容将在后文介绍。
通过上面的例子可知,无论快速播放、慢速播放或者正常播放,都是相同的播放帧率,但是快速播放的情况下,终端设备需要播放的图像帧数小于慢速播放(或者正常播放)情况下要播放的图像帧数,所以对比来看,同一个视频文件,快速播放比慢速播放(或者正常播放)需要的时间较短,给人呈现快速播放的效果。
本申请实施例涉及的视频文件,可以是移动终端从网络侧下载的视频文件,也可以是移动终端自身录制的视频文件,还可以是网络在线加载的视频文件等。本申请实施例提供的视频处理方法可以适用于各种格式的视频文件,比如,rmvb,avi,MP4等等格式的视频文件。而且,本申请实施例提供的视频处理方法可以适用于通过任何视频编码方式获得的视频,比如动态图像专家组(moving picture experts group,MPEG)等视频编码方式。
本申请实施例涉及的多个,是指大于或等于两个。
需要说明的是,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,如无特殊说明,一般表示前后关联对象是一种“或”的关系。且在本申请实施例的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
以下介绍移动设备、用于这样的移动设备的图形用户界面(graphical user interface,GUI)、和用于使用这样的移动设备的实施例。在本申请一些实施例中,移动设备可以是手机、平板电脑、笔记本计算机或具备无线通讯功能的可穿戴设备(如智能手表或智能眼镜等)等。该移动设备包含能够采集图像或视频的图像采集模块,以及能够运行本申请实施例提供的图像处理算法的器件(比如应用处理器,或,图像处理器,或,其他处理器)。该移动设备的示例性实施例包括但不限于搭载
Figure PCTCN2019076360-appb-000001
或者其它操作系统的设备。上述移动设备也可以是其它便携式设备,只要能够采集图像或视频,并运行本申请实施例提供的图像处理算法即可。还应当理解的是,在本申请其他一些实施例中,上述移动设备也可以不是便携式移动设备,而是能够采集图像或视频,并运行本申请实施例提供的图像处理算法的台式计算机。
当然,在本申请另一些实施例中,移动设备也可以无需具有图像采集功能,只需具有运行本申请实施例提供的图像处理算法的能力即可,可以使用本申请实施例提供的图像处理算法处理其它设备发送的图像。在下文中,以移动设备自己采集图像,并运行图像处理算法为例。
以移动设备是手机为例,图3示出了手机100的结构示意图。
手机100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142, 天线1,天线2,移动通信模块151,无线通信模块152,音频模块191(包括扬声器,受话器,麦克风,耳机接口等图中未示出),传感器模块180,按键190,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,距离传感器180F,接近光传感器180G,指纹传感器180H,触摸传感器180K等(手机100还可包括其他传感器比如温度传感器、环境光传感器、气压计,重力传感器,陀螺仪传感器等,图中未示出)。
可以理解的是,本申请实施例示意的结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
下面对图3示出的手机100中的部件进行介绍。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是手机100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
其中,处理器110可以运行本申请实施例提供的视频播放算法的代码,实现视频中用户感兴趣的内容快速播放,用户不感兴趣的内容慢速播放。以处理器110集成GPU为例,GPU可以运行视频播放算法的代码。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行手机100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储手机100使用过程中所创建的数据(比如相机应用拍摄的图像,视频)等。
内部存储器121还可以用于存储本申请实施例提供的视频播放算法的代码。处理器100访问并运行内部存储器121中的所述代码,实现相关功能。当然,本申请实施例提供的视频播放算法的代码也可以存储在处理器110自身的内存中(比如,处理器110是CPU时,本申请实施例提供的视频播放算法的代码可以存储在CPU缓存中)。
其中,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
下面介绍传感器模块180的功能。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。
距离传感器180F,用于测量距离。手机100可以通过红外或激光测量距离。在一些 实施例中,拍摄场景,手机100可以利用距离传感器180F测距以实现快速对焦。在另一些实施例中,手机100还可以利用距离传感器180F检测是否有人或物体靠近。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。手机100通过发光二极管向外发射红外光。手机100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定手机100附近有物体。当检测到不充分的反射光时,手机100可以确定手机100附近没有物体。手机100可以利用接近光传感器180G检测用户手持手机100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
指纹传感器180H用于采集指纹。手机100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器180K可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于手机100的表面,与显示屏194所处的位置不同。
触摸传感器180K可以用于检测用户的触摸操作。比如,手机100显示主界面,主界面中包括多个应用程序的图标(比如,微信、相机、电话、备忘录等)。触摸传感器180K检测到用户在主界面中的触摸操作后,将触摸操作发送给处理器110。处理器110将基于该触摸操作确定该触摸操作的触摸位置,并确定该触摸位置对应的图标。假设处理器110确定该触摸操作对应的图标是相机应用的图标,则手机100启动相机应用。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,手机100可以包括1个或N个显示屏194,N为大于1的正整数。
另外,手机100可以通过音频模块191(扬声器,受话器,麦克风,耳机接口),以及处理器110等实现音频功能。例如音乐播放,录音等。手机100可以接收按键190输入,产生与手机100的用户设置以及功能控制有关的键信号输入。手机100中的SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和手机100的接触和分离。
尽管图3未示出,手机100还可以包括摄像头,例如前置摄像头、后置摄像头;还可以包括马达,用于产生振动提示(比如来电振动提示);还可以包括指示器比如指示灯,用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
如前述内容可知,现有技术中,手机是对整个视频画面的快速或慢速呈现。本申请实施例提供的视频播放方式中,视频播放画面中不同区域(或不同对象)的播放方式可以不同。比如,视频播放画面中的区域A(或对象A)快速播放,而区域B(或对象B)慢速播放或正常速度。通过这种方式,能够实现比如视频中的精彩部分或者用户感兴趣 的部分可以慢速播放,将其他内容可以快速播放,有助于提升视频播放的趣味性,提升用户体验,吸引广大消费者。
为了便于理解,本申请以下实施例将以具有图3所示结构的手机为例,结合附图对本申请实施例提供的视频处理方法进行具体阐述。
图4中的(a)示出了手机100的一GUI,该GUI为手机的桌面401。当手机100检测到用户点击桌面401上的相册(或图库、照片)应用的图标402的操作后,可以启动相册应用,显示如图4中的(b)所示的另一GUI,该GUI中包括多个视频文件(图中以2个视频文件为例)。当手机100检测到用户点击视频404的操作后,显示如图4中的(c)所示的又一GUI,该GUI包括视频404的第一帧图像的预览界面405。通常,预览界面405中显示一个播放控件406,当该播放控件405被触发时,手机100开始播放视频404(比如从第一帧图像开始播放)。
在一些实施例中,在图4中的(c)所示的GUI中,当手机100检测到用户用于指示对视频404“智能播放”的操作后,手机100可以对视频404中不同对象以不同的播放速度播放。
为了帮助用户了解智能播放,手机100可以提供智能播放的相关信息。作为一种示例,请参见图5中的(a)所示,在预览界面405中包括播放选项控件501,当该播放选项控件501被触发时,手机100显示播放模式选择框502,该播放模式选择框502中包括普通播放选项503和智能播放选项504。
当手机100检测到普通播放选项503被触发时,手机100按照现有技术方式播放视频(比如,视频中每帧图像连续播放,每个对象播放速度相同)。当用户触发智能播放选项504时,手机100显示如图5中的(b)所示的界面。手机100显示详情控件505,完成控件506,以及提示信息“请选择目标对象”(后文介绍)。当用户触发详情控件505时,手机100显示如图5中的(c)所示的界面,即手机100提供智能播放模式的相关信息的示例,以帮助用户了解智能播放的功能。图5中的(c)仅是一种智能播放模式的简介的示例,并不是限定。
在一些实施例中,目标对象可以是用户指定的。比如,用户可以在图5中的(b)所示的界面中手动选择目标对象。下面介绍几种示例。
示例一,请参见图6中的(a)所示,手机100显示文字信息“请您点击画面中的目标对象”。当手机100检测到用户在第一帧图像中的点击操作后,基于该点击操作确定目标对象。假设用户点击第一帧图像中的天鹅,手机100确定天鹅是目标对象。
为了方便提示用户已选择的目标对象,手机100可以显示已选择的目标对象的标识信息。比如,请参见图6中的(a)中的标识信息601。需要说明的是,用户点击一个目标对象,则标识信息601中包括一个目标对象的标识信息。当用户继续点击其它目标对象时,标识信息601中新增其它目标的标识信息。请对比图6中的(a)和(b)可知,当用户只点击天鹅时,标识信息601中只包括黑天鹤,当用户继续点击鱼时,标识信息601中新增鱼。
当然,当用户选择多个目标对象后,若想要删除一个目标对象,可以再次点击该目标对象(单击或者双击),标识信息601中可以删除该目标对象的标识信息。
当手机100检测到确定按钮602的操作时,手机100确定标识信息601中包括的对象是目标对象。
示例二,请参见图7中的(a)所示,手机100显示选择框以及提示信息“请您移动选择框以选择目标对象”。手机100将选择框中包含的对象,确定为目标对象。该选择框的位置可以移动,大小可以变化。
具体而言,当选择框包括一个对象的部分区域(比如图6中选择框包括天鹅的部分身体)时,手机100可以确定该对象处于选择框内的区域的面积占该对象的整个面积的比例,若比例大于预设比例,则确定选择框包括这个对象。
当然,为了方便提示用户已选择的目标对象,手机100可以显示已选择的目标对象的标识信息701。比如,当手机100检测到选择框被放大,则选择框中包括的对象增多,标识信息701中包括的对象增多。类似的,用户也可以删除目标对象(比如缩小选择框的大小)。当手机100检测到确定按钮702的操作时,手机100确定标识信息701中包括的对象是目标对象。
示例三,请参见图8中的(a)所示,手机100识别第一帧图像中所包含的对象,并显示识别出的每个对象的编号。手机100检测到用户选择某个编号的操作时,手机100确定该编号对应的对象是目标对象。
当然,手机100可以显示已选择的目标对象的标识信息801。比如,当手机100检测到用户选择编号1的对象之后,标识信息801包括编号1、当用户继续选择编号2的对象之后,标识信息802中增加编号2。类似的,用户也可以删除目标对象(比如再次点击编号1的对象,标识信息801中删除编号1)。当手机100检测到确定按钮802的操作时,手机100确定标识信息801中包括的编号对应的对象是目标对象。
示例四,请参见图9中的(a)所示,手机100检测到用户在第一帧图像上的画圈操作后,确定该画圈操作的轨迹所围成的区域内的对象为目标对象。当然,用户画圈操作的轨迹所围成的区域的大小,显示位置可以变化。比如,请对比图9中的(a)和(b),当画圈操作形的轨迹所围成区域被放大,该区域内的对象增多,标识信息901中增加鱼。当然,用户也可以删除目标对象(比如缩小选择框的大小),标识信息901中删除移出选择框的目标对象的标识信息。
当然,请参见图9中的(c)所示,用户也可以在第一帧图像上,进行多次画圈操作,每个画圈操作所围区域的对象都是目标对象。用户也可以删除目标对象。比如,长按某个画的圈,删除该圈,请参见图9中的(d)所示。
在另一些实施例中,目标对象还可以是手机100自动选择的。举例来说,手机100按照预设策略识别第一帧图像中的目标对象。比如,手机100中预设有物体类型“人物”、“动物”、“建筑物”等。手机100识别第一帧图像中一个或多个对象属于预设的物体类型,则手机100确定一个或多个对象是目标对象。手机100中预设的物体类型可以是手机100出厂之前设置好的,也可以是用户自定义的。
应理解,第一帧图像上包含的物体种类可能较多。因此,手机100可以设置有多个物体类型的优先级顺序:人物高于动物,动物高于建筑。示例性的,若第一帧图像上包括人物、动物和建筑,则优先级最高的物体类型人物为目标对象,或者优先级最高的两个物体类型人物和动物为目标对象。再示例性的,若第一帧图像上不包括人物,而包括动物和其他物体,则原始图像上包括的优先级最高的动物为目标对象。
当然,手机100还可以有其它方式选择目标对象。比如,第一帧图像中处于中间位置的对象确定为目标对象,第一帧图像中的人物默认为目标对象,第一帧图像所占面积最 大的对象作为目标对象等等。
当手机100确定目标对象之后,还可以确定目标对象的播放速度。
在一些实施例中,请参见图10所示,手机100显示已选择的目标对象,以及速度播放选项,包括:2倍速选项,1.5倍速选项,0.5倍速选项。假设用户触发2倍速选项,当手机100检测到触发完成控件1001的操作时,手机100开始播放视频404,其中,目标对象以2倍速播放,其它对象以正常速度播放。
在另一些实施例,手机100确定出的目标对象有多个时,可以确定每个目标对象的播放速度。比如,请参见图11所示,手机100显示两个目标对象各自的播放速度选项。假设用户选择天鹅的播放速度是2倍速,鱼的播放速度是0.5倍速,当手机100检测到触发完成控件1101的操作时,开始播放视频404,其中,天鹅快速播放,鱼慢速播放。
在另一些实施例中,手机100还可以确定除去目标对象之外的背景的播放速度。比如,请参见图12A所示,手机100还显示背景(除去天鹅和鱼之外的其它对象)的播放速度选项。假设用户选择背景的播放速度是1.5倍速,手机100以1.5倍速播放背景。
当然,在实际应用中,手机100还可以有其它方式确定目标对象的播放速度,本申请实施例不限定。
在本申请另一些实施例中,手机100播放视频时,无需用户选择目的对象,也无需用户选择播放速度。比如,手机100进入智能播放模式后,手机100自动识别视频文件中的目标对象,然后将目标对象以默认的速度(比如2倍速)播放。再比如,手机100进入智能播放模式时,播放视频,但是视频播放界面中包括一窗口,该窗口中的内容以默认速度(比如2倍速)播放,窗口外的内容以正常速度播放。
在以上的实施例(图4-图10)中,是以视频404的第一帧图像为例的,即目标对象的选择、播放速度的选择都是针对视频404的第一帧图像的。在实际应用中,若手机100当前正在播放一个视频,若手机100检测到暂停播放的控件时,手机100暂停到当前播放的一帧图像。用户可以在当前帧图像上选择目标对象、以及目标对象的播放速度等操作。手机100从当前帧图像开始到播放后续帧图像,在后续帧图像的播放过程中,不同对象以不同播放速度显示。
在一些实施例中,视频文件的播放过程中,还可以切换目标对象。比如,请参见图12B(a)所示,手机100当前选择的目标对象是天鹅(比如用户指定的天鹅,或者手机100自动确定的天鹅),所以手机100播放视频的过程中,天鹅快速播放,在播放界面中显示用于指示切换目标对象的控件1203,当该控件1203被触发时,手机100将目标对象由天鹅切换为鱼,请参见图12B(b)所示。在后续播放过程中,手机100快速播放鱼;或者,从头开始播放视频文件,在从头播放的过程中,鱼被快速播放。
应理解,手机100检测到用户触发控件1203的操作时,可以自动确定其他目标对象,以替换当前的目标对象,比如手机100从视频文件中的多个对象中选择与当前的目标对象不同的对象。
当然,手机100也可以在检测到用户触发控件1203的操作时,暂停播放视频文件,用户可以在暂停的画面中选择新的目标对象(点击、圈选等);或者,手机100暂停播放视频文件时,若暂停画面中没有用户要选择的目标对象时,用户可以将暂停画面中切换到其它画面,在其它画面中选择新的目标对象。
当然,图12B中还可以包括添加目标对象的控件(图中未示出),当该控件被触发时, 手机100增加目标对象的数量;或者,图12B中还可以包括减少目标对象的控件(图中未示出),当该控件被触发时,手机100减少目标对象的数量。
在另一些实施例中,手机100无需确定目标对象,而是确定目标区域,视频播放过程中,播放界面中处于目标区域内的内容快速播放。具体而言,目标区域可以是用户指定的,比如在视频文件的一帧图像上选择的区域;或者目标区域是手机100自动确定的区域,比如目标对象较多的区域或面积较大的目标对象所在的区域等;或者,目标区域是预先设置好。
示例性的,手机100在播放视频文件的过程中,还可以切换目标区域。请参见图12C(a)所示,手机100在播放视频文件的过程中,矩形框(第一区域)中的内容快速播放,当手机100检测到用于指示切换目标区域的控件1205的操作时,手机100继续播放视频,在继续播放视频文件的过程中,第三区域中的内容快速播放;或者,当手机100检测到用于指示切换目标区域的控件1205的操作时,手机100从头播放视频文件,在从头播放视频文件的过程中,第三区域中的内容快速播放。
应理解,手机100检测到用户触发控件1205的操作时,可以自动确定新的目标区域;或者,手机100也可以在检测到用户触发控件1205的操作时,暂停播放视频文件,用户可以在暂停的画面中指定新的目标区域(比如圈选新的目标区域等);或者,手机100暂停播放视频文件时,用户可以将暂停画面中切换到其它画面,在其它画面中指定新的目标区域。
当然,图12C中还可以包括增大目标区域的控件(图中未示出),当该控件被触发时,手机100增加目标区域所占的面积;或者,图12C中还可以包括缩小目标区域的控件(图中未示出),当该控件被触发时,手机100缩小目标区域所占的面积。
通过以上描述可知,本申请实施例提供的视频处理方法中,视频播放画面中不同区域(或不同对象)的播放方式可以不同。比如,视频播放画面中的区域A(或对象A)快速播放,而区域B(或对象B)慢速播放或正常速度。因此,这段视频的播放过程中,区域A(或对象A)以较慢的速度呈现给用户,而区域B(或对象B)以较快的速度呈现给用户。通过这种方式,能够实现视频播放画面中不同区域或对象以不同的播放速度呈现给用户,比如视频中的精彩部分或者用户感兴趣的部分可以慢速播放,将其他内容可以快速播放,有助于提升视频播放的趣味性,提升用户体验,吸引广大消费者。
下面以视频画面中目标对象以2倍速播放,其它对象正常播放为例,介绍手机100如何快速播放目标对象。
请参见图13所示,为本申请实施例提供的视频处理方法的流程的示意图。如图13所示,假设视频文件包括1200帧原图像,手机100确定目标对象。且目标对象以2倍速播放之后,从1200帧原图像中每2帧图像提取一帧图像,得到600帧原图像,然后从这600帧原图像中提取目标对象(比如将目标对象分割出来),得到600帧目标对象图像(比如将分割出的目标对象作为单独的图像,即目标对象图像)。手机100可以将1200帧原图像中的目标对象使用背景覆盖,使得使用背景覆盖后的原图像中不包括目标对象。手机100将提取出的600帧目标对象图像和使用背景覆盖后的1200帧中前600帧图像对应融合,得到600帧新图像,然后将600帧新图像和剩余的600帧使用背景覆盖后的原图像合成一个新的视频文件,手机100在播放该新的视频文件时,呈现出目标对象快速播放,而其它对象正常播放的效果。
在一些实施例中,手机100还可以将目标对象图像和得到的新视频文件对应存储。比如,手机100将目标对象图像存在图库中,当手机100检测到用户打开图库中的该目标对象图像的操作时,该目标对象图像上显示一标记,当该标记被触发时,手机100打开与该目标对象对应存储的视频文件,在该视频文件中目标对象快速播放。
在另一些实施例中,以图13所示的实施例为例,手机100还可以将提取出的600帧目标对象图像和使用背景覆盖之后的1200帧原图像,对应存储。当手机100检测到用户指示该视频文件中的目标对象(用户指示的目标对象与存储的600帧目标对象图像中的目标对象相同)以2倍速播放时,将存储的600帧目标对象图像和使用背景覆盖后的1200帧原图像中的前600帧原图像对应融合,得到新视频,然后播放该视频。这个过程中,手机100无需执行从原图像中抽帧、提取目标对象图像的过程,只需执行融合过程即可,有助于提升效率。
当然,在另一些实施例中,当手机100再次检测到用户指示该视频文件中的目标对象以2倍速播放时,也可以仍然按照图13所示的流程,再次执行抽帧、提取目标对象图像、然后图像融合合成新视频等过程。
应理解,在图13所示的示例中,手机100是先抽取出600帧原图像,然后从600帧原图像中提取目标对象图像。当然,手机100可以先从1200帧图像中的每帧图像中提取目标对象图像,然后再从提出出的目标对象图像中抽帧(比如从1200个目标对象图像中抽取出600个目标对象图像)。因为,在实际应用中,1200帧图像中可能部分图像中没有目标对象,假设只有1000帧图像中有目标对象,那么手机100可以先从1000帧图像中提取目标对象图像,得到1000帧目标对象图像,然后从1000帧目标对象图像中抽出600帧目标对象图像。当然,手机100还可以有其它方式来提取目标对象,只要满足目标对象的帧数少于原图像的帧数即可,本申请实施例不限定。下文中,以手机100先抽取出部分原图像,然后从抽取出的原图像中提取目标对象图像为例。
请参见图14所示,为本申请实施例提供的视频处理方法的流程示意图,如图14所示,该方法的流程包括:
S1401:手机100获取待处理视频文件,所述视频文件包括N帧原图像,N为大于等于2的整数。
手机100可以拍摄得到视频文件(比如通过相机应用拍摄得到视频),或者从网络侧下载视频文件(比如从爱艺奇、腾讯等客户端下载的视频文件),或者从其它设备接收到的视频文件(比如手机100通过微信应用接收其它设备发送的视频)等等,本申请实施例不限定。
应理解,本申请实施例的视频处理方法,不仅可以适用于视频文件的播放,还可以应用在动图文件,或者利用应用程序合成的一组图片文件等的播放。
S1402:手机100确定视频文件中的目标帧原图像,所述目标帧原图像是所述N帧原图像中的一帧图像。
可选的,目标帧图像可以是待处理视频文件包含的多帧图像中的一帧图像。下面介绍手机100确定待播放视频文件中目标帧图像的几种方式。
作为一种示例,手机100并未播放该待处理视频文件时,手机100可以将视频文件的封面作为目标帧原图像。请参见图4中的(c)所示,手机100还未播放视频404时,则手机100确定视频404的封面就是目标帧原图像。
作为另一种示例,手机100当前正在播放待处理视频文件,若手机100检测到用户触发暂停播放的操作,则暂停播放该视频文件,暂停时显示的一帧图像可以作为目标帧原图像。
作为又一种示例,手机100暂停视频播放时,暂停时显示的一帧图像中可以显示一第一控件和第二控件,当第一控件被触发时,手机100切换到上一帧图像,当第二控件被触发时,手机100切换到下一帧图像,即用户可以通过这两个控件,切换图像来选择目标帧原图像。
作为另一种示例,手机100暂停视频播放时,手机100可以自动显示该视频文件中包含较多目标对象的一帧图像,该图像为目标帧原图像。
当然,目标帧图像的确定方式不限于上述列举的几种,在此不再一一列举。
S1403:手机100确定目标帧原图像中的目标对象,并确定目标对象的第一播放速度,所述第一播放速度大于所述待播放视频的正常播放速度,所述目标对象是所述当前帧图像中的至少一个对象。
关于手机100确定目标对象的过程,前文已经描述过(用户指定目标对象或者手机100自动识别目标对象等),在此不再赘述。
S1404:手机100基于第一播放速度,确定图像抽取帧数。
需要说明的是,由于第一播放速度是快速播放目标对象,所以手机100需要抽取出部分原图像。
作为一种示例,手机100中可以存储有播放速度和图像抽取帧数之间的对应关系,基于该对应关系,确定要抽取多少帧图像。请参见表1,为本申请实施例提供的播放速度与图像抽取帧数之间的对应关系的示例。
表1
播放速度 图像抽取帧数
2倍速度 每隔2帧抽取一帧图像
3倍速度 每隔3帧抽取一帧图像
4倍速度 每隔4帧抽取一帧图像
若用户选择以2倍速播放目标对象,手机100基于表1可以确定每隔2帧抽取一帧图像。
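表1的对应关系可以用一个简单的查表来表达,以下为示意性草图(映射值取自表1,函数名与未命中时的回退策略为本文示例假设):

```python
# 表1的示意性实现:倍速 -> 每隔多少帧抽取一帧
SPEED_TO_INTERVAL = {2: 2, 3: 3, 4: 4}

def extraction_interval(speed):
    """根据播放倍速查表得到图像抽取帧数;未在表中配置的倍速
    按四舍五入取整作为回退策略(回退策略仅为示例假设)。"""
    return SPEED_TO_INTERVAL.get(speed, max(1, round(speed)))
```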
S1405:手机100按照确定出的图像抽取帧数从待处理视频文件的N帧原图像中抽取M帧原图像,M为大于等于1小于N的整数。
S1405的实现方式可以有多种,下文将列举几种。
方式一,手机100可以按照确定出的图像抽取帧数从待处理视频文件中的所有原图像中抽取部分图像。
举例来说,假设视频文件包括1200帧原图像,而目标帧原图像是视频文件的封面,且封面是第一帧原图像,若手机100确定每隔2帧抽取一帧,那么手机100从第一帧原图像开始每隔2帧抽取一帧。
方式二,若手机100暂停播放视频文件,且暂停时显示的一帧图像是目标帧原图像,则手机100可以从暂停时显示的图像开始,抽取图像,即无需从目标帧图像之前的原图像中抽取图像。举例来说,暂停时显示的是第200帧,若手机100确定每隔2帧抽取一帧时, 可以从第200帧开始每隔2帧抽取一帧。
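上述方式一、方式二可以统一为“从某个起始帧开始、按固定间隔抽帧”,以下为示意性草图(函数名为示例假设;下标从0计,第200帧对应下标199):

```python
def sample_frames(frames, interval, start=0):
    """从下标start开始,每隔interval帧抽取一帧。
    start=0 对应方式一,从封面(第一帧)开始抽取;
    start=199 对应方式二,从暂停时显示的第200帧开始抽取。"""
    return frames[start::interval]
```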
S1406:手机100从抽取出的M帧原图像中提取K帧包含目标对象的图像,K为大于等于1小于N的整数。
可选的,手机100抽取的原图像中可能部分原图像中存在目标对象,部分原图像中不存在目标对象,对于不存在目标对象的原图像,手机100可以不进行目标对象的提取步骤。
可选的,手机100提取目标对象之后,可以将目标对象从图像中分割出来,这样的话,手机100将得到目标对象图像(将分割出的目标对象称为目标对象图像)。应理解,手机100从原图像中提取目标对象时,可以以目标对象的边缘轮廓提取目标对象,也可以是提取目标对象所在区域(该区域的面积可以大于目标对象的边缘轮廓所围区域的面积)。
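S1406中“按边缘轮廓提取”与“按目标所在区域提取”两种方式可以用如下草图示意(假设目标对象的uint8二值掩膜 mask 已由某种分割算法给出,OpenCV 的 boundingRect 用于求外接矩形,仅为一种可行做法):

```python
import numpy as np
import cv2

def extract_target(frame, mask, by_region=False):
    """S1406的草图:mask为目标对象的二值掩膜(假设已由分割算法给出)。
    by_region=False:按边缘轮廓提取,掩膜外像素置0;
    by_region=True:提取目标对象所在的矩形区域(面积可大于轮廓所围面积)。"""
    if by_region:
        x, y, w, h = cv2.boundingRect(mask)      # 掩膜非零像素的外接矩形
        return frame[y:y + h, x:x + w].copy()
    target = np.zeros_like(frame)
    target[mask > 0] = frame[mask > 0]           # 仅保留轮廓内的像素
    return target
```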
S1407:手机100将视频文件中N帧原图像中的目标对象去除,得到处理后的N帧原图像,处理后的N帧原图像中没有目标对象。
在前面的步骤中,手机100从原图像中抽取部分原图像,并从抽取的原图像中提取目标对象,得到目标对象图像。手机100已得到目标对象图像,所以手机100可以将原图像中的目标对象去除。其中,去除目标对象的方式可以是使用背景覆盖。以一帧原图像为例,手机100可以使用这帧原图像中的背景覆盖这帧原图像中的目标对象,比如使用该帧原图像中目标对象所在区域以外的其它区域拷贝覆盖到目标对象所在的区域,即将目标对象所在区域的内容使用其他区域的内容来填充。当然,实际应用中,还可以有其它的去除目标对象的方式,本申请实施例不作限定。
应理解,部分原图像中可能不包括目标对象,所以手机100可以只对包含目标对象的原图像中的目标对象使用背景覆盖,对于不包含目标对象的原图像可以不作背景覆盖处理。
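S1407中“用其它区域的内容填充目标对象所在区域”的一个可行近似是图像修复(inpainting)。以下草图基于 OpenCV 的 inpaint 接口,仅为示例假设的一种做法,并非本申请限定的实现:

```python
import cv2

def cover_with_background(frame, mask):
    """S1407的草图:将目标对象所在区域用周围背景的内容填充。
    mask为8位单通道掩膜;不包含目标对象的帧直接原样返回,不作处理。"""
    if mask is None or mask.max() == 0:
        return frame
    return cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
```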
S1408:手机100将提取出的K帧目标对象图像和处理后的N帧原图像中的K帧原图像对应融合,得到K帧新图像;处理后的N帧原图像中的K帧原图像是连续的。
示例性的,由于处理后的原图像中没有目标对象,目标对象图像和处理后的图像的融合方式可以有多种,比如将目标对象图像覆盖在处理后的原图像上的一个区域,得到一张新图像,其中,该区域可以是原图像上的任意一个区域。具体的融合算法可以是小波融合算法、比值(Brovey)变换法等等,本申请实施例不限定。
示例性的,手机100可以将提取出的K帧目标对象图像和处理后的N帧原图像中的前K帧原图像对应融合。
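S1408的融合可以简单到按掩膜把目标对象图像贴回背景覆盖后的原图像;小波融合、Brovey变换等更复杂的算法同样适用。以下为最简化的示意草图(函数与参数名为示例假设):

```python
def fuse(covered_frame, target_img, target_mask):
    """S1408的草图:把目标对象图像按掩膜贴回到背景覆盖后的原图像上,
    得到一帧新图像。"""
    out = covered_frame.copy()
    out[target_mask > 0] = target_img[target_mask > 0]
    return out
```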
S1409:手机100将K帧新图像,和剩余的处理后的N-K帧原图像合成新视频文件。
手机100得到新的视频文件之后,若该新的视频文件被播放,则目标对象快速播放。
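S1409合成新视频文件的一个示意草图如下(基于 OpenCV 的 VideoWriter,mp4v 编码与 30fps 帧率均为示例假设):

```python
import cv2

def write_video(frames, path, fps=30):
    """S1409的草图:把K帧新图像与剩余N-K帧背景覆盖后的图像
    按顺序写成一个新的视频文件。"""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        writer.write(f)
    writer.release()
```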
下面举一个例子来介绍本申请实施例提供的视频处理过程。
请参见图15所示,为本申请实施例提供的一种视频处理方法的示例。如图15所示,假设视频文件共有11帧原图像,当前帧是第1帧原图像。手机100确定目标对象,且目标对象的播放速度为4倍速。手机100基于表1可知,从11帧图像中每隔4帧图像抽取一帧图像,即抽取第1、5、9帧原图像。
若抽取出的第1、5、9帧原图像中每帧原图像中均包含目标对象,则手机100从抽取出的3帧原图像中的每张原图像中分割出目标对象,得到目标对象1-3(应理解,这里的目标对象1-3是为了区分从不同帧原图像上分割出的同样的对象,不是指3个不同的对象)。其中,目标对象1是从第1帧图像上分割出的(即目标对象1是第1帧原图像上的 一个或多个对象),目标对象2是从第5帧图像中分割出的(即目标对象2是第5帧原图像上的一个或多个对象),目标对象3是从第9帧图像上分割出的(即目标对象3是第9帧原图像上的一个或多个对象)。
应理解,手机100得到目标对象1-3之后,可以将第1-11帧原图像中的目标对象去除(可以仅对11帧原图像中包括目标对象的原图像作去除目标对象处理,对于11帧原图像中不包括目标对象的原图像可以不作去除目标对象的处理)。示例性的,请继续参见图15所示,由于目标对象1是从第1帧原图像上分割出的,所以第1帧原图像上存在空白地方,手机100可以使用背景填充该空白区域(比如,手机100将第1帧原图像中非空白区域的内容填充到该空白区域),然后手机100将背景填充后的图像和目标对象1融合得到一帧新图像,播放该新图像。当然,由于目标对象1是从第1帧原图像上分割出的,手机100也可以无需填充空白区域,可以将目标对象1填充到该空白地方,得到一帧新图像。
由于目标对象2是从第5帧原图像中提取出的,不是从第2帧原图像中提取出的,因此,手机100可以将第2帧原图像本身包括的目标对象用背景覆盖(比如使用第2帧原图像中目标对象所在区域以外的其它区域拷贝覆盖到目标对象所在的区域,即将目标对象所在区域的内容使用其他区域的内容来填充),然后将背景覆盖之后的第2帧原图像和目标对象2融合,得到一帧新图像,该新图像中只包括从第5帧原图像中提取的目标对象2,不包含第2帧原图像本身包括的目标对象。
由于目标对象3是从第9帧原图像中提取出的,不是从第3帧原图像中提取出的,因此,手机100可以将第3帧原图像本身包括的目标对象使用背景覆盖,然后将背景覆盖之后的第3帧原图像和目标对象3融合,得到一帧新图像,该新图像中只包括从第9帧原图像中提取的目标对象3,不包括第3帧原图像本身包括的目标对象。
由此可见,本申请实施例中,手机100在播放第2帧原图像时,同步显示的是第5帧原图像中的目标对象2,在播放第3帧原图像时,同步显示的是第9帧原图像中的目标对象3。因此,手机100呈现出将目标对象以较快的速度播放,而背景正常播放的效果。
需要说明的是,图15所示的过程是以视频文件中目标对象如何快速播放为例的,如前述内容可知,本申请实施例中,视频文件的播放界面中目标区域内的内容也可以实现快速播放,其中,目标区域可以是预设的区域,也可以是用户指定的区域等。类似的,对于这种情况,手机100也可以按照图15所示的流程来处理,只需将图15所示的目标对象替换为目标区域内的图像即可,假设目标区域是矩形,那么目标区域内的图像是目标区域所围成的矩形图像,然后将该矩形图像作为目标对象图像进行处理即可。
以图12B(a)和图12B(b)为例,视频播放过程中,目标对象由天鹅切换成鱼,这个过程中,用户可以执行两次图15所示的过程。比如,在手机100确定目标对象是天鹅时,执行一次图15所示的流程,得到一个视频文件,播放该视频文件的过程中,天鹅被快速播放。
在播放到某一帧画面时,手机100检测到用户触发控件1203的操作时,手机100对所述某一帧画面的后续帧图像重新执行图15所示的流程,得到另一个视频文件,手机100播放该另一个视频文件的过程中鱼被快速播放;或者,手机100检测到用户触发控件1203的操作时,手机100对该视频文件的所有图像重新执行图15所示的流程,得到另一个视频文件,手机100播放该视频文件时,鱼被快速播放。
以图12C为例,视频播放过程中,目标区域由第一区域切换为第三区域,这个过程 中,用户可以执行两次图15所示的过程。比如,在手机100确定目标区域是第一区域时,执行一次图15所示的流程,得到一个视频文件,播放该视频文件的过程中,第一区域中的内容快速播放。
在播放到某一帧画面时,手机100检测到用户触发控件1205的操作时,手机100基于所述某一帧画面的后续帧图像重新执行图15所示的流程,得到另一个视频文件,手机100播放该另一个视频文件的过程中,第三区域中的内容快速播放;或者,手机100检测到用户触发控件1205的操作时,手机100基于该视频文件的所有图像重新执行图15所示的流程,得到另一个视频文件,手机100播放该另一个视频文件时,第三区域中的内容被快速播放。
应理解,手机100确定出目标对象之后,假设目标对象快速播放,那么手机100播放完目标对象之后,还未播放完其它对象。请继续参见图15所示,手机100从第1帧原图像播放到第3帧原图像后,目标对象1-3已经播放完毕,但是后续的原图像即第4帧图像到第11帧图像还未播放。
作为一种示例,手机100播放完目标对象后,可以停留在最后一帧目标对象图像,等待后续的原图像播放完毕。请参见图16所示,手机100播放完第3帧原图像后,仍然将目标对象3和第4帧原图像融合,后续的第5帧原图像也是和目标对象3融合,即目标对象1-3播放完毕后,将目标对象3和后续的每帧原图像融合,直到后续的每帧原图像播放完毕。当然,在这个过程中,后续的原图像中仍然使用背景覆盖本身包含的目标对象,前面已经介绍过,不再赘述。
作为另一种示例,手机100可以循环播放目标对象,直到后续的原图像播放完毕。请参见图17所示,手机100播放完第3帧原图像后,将目标对象1和第4帧原图像融合,将目标对象2和第5帧原图像融合,将目标对象3和第6帧原图像融合。也就是说,手机100呈现目标对象1-3循环播放的效果,直到播放完后续的原图像。
作为又一种示例,手机100可以在播放完目标对象之后,停止播放后续的原图像。请继续参见图15所示,手机100播放完由第3帧原图像和目标对象3融合得到的新图像之后,停止视频文件的播放。
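上述三种处理方式(停留在最后一帧目标对象、循环播放目标对象、停止播放)可以归结为“当前播放到第 i 帧原图像时选用哪一帧目标对象图像”的下标策略,以下为示意草图(mode 的取值与函数名为本文示例假设):

```python
def target_frame_index(i, k, mode="hold"):
    """下标策略草图:i为当前播放到的原图像下标,k为目标对象图像的帧数。
    i < k 时正常取第i帧目标对象;否则:
    mode="hold" 停留在最后一帧目标对象(对应图16);
    mode="loop" 循环播放目标对象(对应图17);
    mode="stop" 返回None,表示停止播放视频文件。"""
    if i < k:
        return i
    if mode == "hold":
        return k - 1
    if mode == "loop":
        return i % k
    return None
```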
上述实施例中,手机100提取出目标对象图像之后,原图像(背景)仍然是播放的,在实际应用中,手机100也可以不播放原图像。请继续参见图15所示,手机100也可以将目标对象1-3均和第1帧原图像融合,得到融合后的三帧图像,然后播放这三帧图像。即手机100呈现出目标对象在播放,而背景停留在第1帧原图像的效果,即背景处于静止状态,只有目标对象处于播放状态。当然,手机100也可以将背景填充为其它内容。比如,请参见图18所示,手机100显示背景选项,背景选项中包括原背景(即视频本身的背景)、填充色、指定图像。当填充色被选中时,手机100显示颜色选项供用户选择。假设用户选择黑色,那么手机100播放视频时,目标对象之外的对象即背景全部填充黑色,呈现出画面中只有目标对象、背景为黑色的效果。
当然,手机100也可以将背景替换为其它图像(比如不属于该视频的图像)。比如,请参见图19所示,手机100显示背景选项。背景选项中包括原背景(即视频本身的背景)、填充色,指定图像。当指定图像被选中时,手机100显示图像的存储路径。手机100可以根据图像的存储路径指定图像。这样的话,手机100播放视频时,呈现出目标对象在用户指定的图像中播放的效果(目标对象位于上层,指定图像位于下层,即指定图像是目标对象的背景)。
应理解,图19中指定的图像可以是手机100中存储的任一张图像。作为一种示例,请参见图15所示,手机100提取出目标对象1-3之后,可以将目标对象1-3的图像存储下来,那么手机100指定图像时,可以使用存储下来的目标对象1-3。也就是说,手机100对某一个视频使用本申请实施例提供的视频处理方法进行处理时,可以将提取出的目标对象的图像存储,手机100对另一个视频文件使用本申请实施例的视频处理方法进行处理时,若要指定图像,可以指定所述一个视频上提取出的目标对象,这样的话,可以实现,两个视频文件中的内容互换的效果。
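图18、图19所示的背景选项(原背景、填充色、指定图像)可以用如下草图示意:目标对象位于上层,背景由所选选项决定(函数名、默认黑色背景为示例假设;指定图像需与帧同尺寸,此处假设已满足):

```python
import numpy as np

def compose_with_background(target_img, target_mask, background=None):
    """背景选项的草图:target_img/target_mask为目标对象图像及其掩膜。
    background=None 时背景填充为黑色;background为一帧图像时,
    既可以传入原背景(背景覆盖后的原图像),也可以传入用户指定的图像。"""
    if background is None:
        canvas = np.zeros_like(target_img)    # 纯色(黑色)背景
    else:
        canvas = background.copy()            # 原背景或指定图像
    canvas[target_mask > 0] = target_img[target_mask > 0]
    return canvas
```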
在上面的实施例中,介绍手机100的智能播放模式,下面介绍另一实施例,在该实施例中,手机100可以实现智能录制视频。
图20中的(a)示出了手机100的一GUI,该GUI为手机的桌面2001。当手机100检测到用户点击桌面2001上的相机应用的图标2002的操作后,可以启动相机应用,显示如图20中的(b)所示的另一GUI,该GUI中包括图像拍摄预览界面2003。预览界面2003中包括预览图像、拍照选项、视频选项、智能录像选项2004。当智能录像选项2004被选中时,手机100进入智能录像模式。当录制按键2005被触发时,手机100开始录制视频。
为了帮助用户了解智能录像模式的功能和实现方式,手机100进入智能录像模式时,可以显示如图21中的(a)所示的界面,该界面中包括详情控件2101,当详情控件2101被触发时,手机100显示智能录像模式的相关信息,请参见图21中的(b)所示。
手机100进入智能录像模式后,可以确定目标对象,以及目标对象的播放速度,以便根据目标对象的播放速度确定视频录制方式(后文介绍)。
在一些实施例中,请参见图22中的(a)所示,手机100显示文字信息“请您点击画面中的目标对象”。假设用户点击预览界面2003中的天鹅,手机100可以显示标识信息2201“已选择天鹅”。此时,用户可以在预览界面2003中继续点击其它对象。假设用户继续点击鱼,手机100可以在标识信息2201中添加“鱼”。当手机100检测到点击确定控件2202的操作时,手机100确定天鹅和鱼是目标对象。
在另一些实施例中,请参见图22中的(b)所示,手机100确定选择框中包含的所有对象为目标对象。假设选择框中包括两个对象天鹅和鱼,则手机100显示标识信息2201中包括“天鹅”和“鱼”。当选择框放大时,选择框中包含的对象个数可能增多,那么标识信息2201中包含的对象增多。当手机100检测到触发确定控件2202的操作时,确定标识信息2201中包括的对象是目标对象。
在另一些实施例中,请参见图22中的(c)所示,手机100检测到用户选择编号1的对象时,手机100显示标识信息2201中包括编号1。当用户继续选择编号2的对象时,手机100在标识信息2201中增加编号2。当手机100检测到用户触发确定控件2202的操作时,确定标识信息2201中包括的对象是目标对象。
手机100确定目标对象之后,还可以确定目标对象的播放速度。
在一些实施例中,请参见图23中的(a)所示,手机100显示速度播放选项,包括:2倍速选项,1.5倍速选项,0.5倍速选项。假设用户触发2倍速选项,当手机100检测到触发拍摄控件2005的操作时,手机100开始录制视频。该视频被播放时,以2倍速播放选择的目标对象,以正常速度播放其它对象。
在另一些实施例中,手机100确定出的目标对象有多个时,可以确定每个目标对象的播放速度。比如,请参见图23中的(b)所示,手机100显示两个目标对象各自的播放速度选项。假设用户选择天鹅的播放速度是2倍速,鱼的播放速度是0.5倍速,当手机100检测到触发拍摄控件2005的操作时,开始录制视频。该视频被播放时,天鹅快速播放,鱼慢速播放。
在另一些实施例中,手机100还可以确定除去目标对象之外的背景的播放速度。比如,请参见图23中的(c)所示,手机100还显示背景(除去天鹅和鱼之外的其它对象)的播放速度选项。假设用户选择背景的播放速度是1.5倍速,手机100检测到触发拍摄控件2005的操作时,开始录制视频。该视频被播放时,背景以1.5倍速播放。
在一些实施例中,手机100在录制视频时,可以根据目标对象的播放速度确定视频的录制方式。比如,手机100可以参考最慢的播放速度来采集图像。如前述内容可知,慢速时,手机100可以以高帧率(比如60fps)采集图像,然后将采集的图像以正常播放速度(比如30fps)播放。因此,为了保证慢速播放情况下手机100有足够的图像来播放,手机100可以以慢速播放的播放速率来确定图像采集帧率。因此,若手机100确定目标对象慢速播放,可以以高帧率采集图像。比如,1秒采集30帧图像,播放30帧需1秒;以0.5倍速播放的话,还是1秒播放30帧,播放时间要翻倍,就要播放60帧图像。因此,手机100可以以60fps的帧率采集图像,即1秒采集60帧图像,还是1秒播放30帧,需要播放2秒,播放时间延长,呈现慢速播放的效果。因此,手机100在录制视频时,可以参考慢速播放的播放速度,以采集足够的图像。
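上述“按最慢播放速度确定采集帧率”的计算可以示意如下(播放帧率30fps、最慢0.5倍速等数值取自上文示例,函数名为示例假设):

```python
def capture_fps_for_slow(play_fps=30, slowest_speed=0.5):
    """慢速播放场景的草图:0.5倍速意味着同样的内容要用2倍时间播完,
    仍按play_fps播放就需要2倍的帧数,因此采集帧率 = 播放帧率 / 最慢倍速
    (例:30 / 0.5 = 60fps)。"""
    return int(play_fps / slowest_speed)
```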
在上述实施例中,由于手机100每秒采集的图像帧数变多,是为了保证目标对象能够慢速播放,若背景(除去目标对象之外的对象)是正常播放的,那么手机100仍然需要从视频中分割出目标对象图像,然后将目标对象图像以正常速度播放,将背景快速播放。其中,背景快速播放的方式可以参见图15所示。这样的话,目标对象播放的过程中播放的帧数较多,而背景是抽取播放即播放帧数较少,所以视频中目标对象呈现慢速播放的效果。
在一些实施例中,手机100在录制视频时,可以根据目标对象的播放速度确定视频录制方式。比如,手机100可以参考最快的播放速度来采集图像。假设用户选择目标对象的播放速度为2倍速,即用户希望录制视频后,将目标对象以2倍速播放。手机100基于上述表1可确定2倍速播放时,要每隔2帧抽取一帧图像。因此,手机100在录制视频时,以正常采集帧率的2倍帧率来采集图像。假设正常采集帧率为60fps,那么手机100以60*2=120fps的帧率采集图像。这样的话,手机100在播放视频时,每隔2帧抽取一帧图像(为了提取目标对象)时,可以保证抽取出的目标对象图像足够使用,即保证抽取出的目标对象图像能够连续播放。
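类似地,“按最快播放速度确定采集帧率”可示意如下(抽帧间隔按表1取值,数值取自上文示例,函数名为示例假设):

```python
def capture_fps_for_fast(normal_fps=60, frame_interval=2):
    """快速播放场景的草图:frame_interval为最快倍速在表1中对应的抽帧间隔,
    采集帧率按该倍数提高,保证抽帧提取目标对象后仍有足够的图像连续播放
    (例:60fps × 2 = 120fps)。"""
    return normal_fps * frame_interval
```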
下面介绍手机100录制得到视频文件的过程。
在一些实施例中,手机100检测到用于拍摄的指示后,可以采集多帧原图像,然后手机100可以按照图15所示的流程,对多帧原图像进行处理,得到新视频文件,存储该新视频文件(具体过程,请参见前文描述)。当手机100播放新视频时,即可呈现目标对象快速播放的效果。
可选的,目标对象图像的数量少于原图像的数量,请继续参见图15所示,目标对象图像有3个,而原图像有11帧,手机100将目标对象图像和原图像对应融合,只能得到3帧新图像。一种实现方式为,新视频只包括融合得到的3帧新图像。另一种实现方式为, 手机100将目标对象1和第1帧原图像融合得到第一帧新图像,将目标对象2和第2帧原图像融合,得到第二帧新图像,将目标对象3和第3帧图像融合,得到第三帧新图像,继续将目标对象1和第4帧原图像融合,得到第四帧新图像,将目标对象2和第5帧原图像融合,得到第五帧新图像,依次类推,直到得到11帧新图像。这种方式中,新视频中包括11帧新图像,该新视频中目标对象循环播放。又一种实现方式为,手机100将目标对象1和第1帧原图像融合得到第一帧新图像,将目标对象2和第2帧原图像融合,得到第二帧新图像,将目标对象3和第3帧图像融合,得到第三帧新图像,继续将目标对象3和第4帧原图像、第5帧原图像以及后续的每帧原图像融合。这种方式中,新视频中当目标对象播放完毕之后,最后一帧目标对象停留直到整个视频播放完毕。
在另一些实施例中,手机100检测到用于拍摄的指示后,可以采集多帧原图像,得到一个原始视频文件,然后存储该原始视频文件。当手机100检测到触发播放该原始视频文件的操作时,按照图15所示的方式对该原始视频进行处理。在这个实施例中,手机100存储的是原始视频,只有在播放该原始视频时,才执行图15所示的过程。
应理解,手机100在录制视频之前,用户可以在预览界面中指定目标对象和目标对象的播放速度,若手机100存储原始视频,还存储该原始视频的播放信息(比如,录制该原始视频之前,用户指定的目标对象,以及目标对象的播放速度)。当手机100检测到触发播放该原始视频文件的操作时,按照图15所示的方式对该原始视频进行处理(图15所示的过程中,目标对象是用户在预览界面指定的目标对象,抽帧数可以根据用户指定的播放速度和表1确定)。
当然,在另一些实施例中,手机100可以同时存储原始视频文件和新视频文件(对原始视频处理得到的新视频文件)。请参见图24所示,手机100中存储的新视频文件可以设置有标识信息2401,该标识信息2401用于指示该新视频是对原始视频处理得到的新视频。标识信息2401可以是文字信息也可以是图标等,本申请实施例不限定。在这个实施例中,当原始视频文件播放时,并未呈现目标对象快速播放或者慢速播放的效果(即没有执行图15所示的流程),当新视频文件播放时,呈现目标对象快速播放或者慢速播放的效果。
通过以上描述可知,本申请实施例提供的视频处理方法,能够实现同一视频中不同区域或对象以不同的播放速度呈现给用户。该方法可以适用于多种领域或场景,比如,本申请实施例提供的视频处理方法可以适用于视频监控领域,即将特定的人(即目标对象)快速播放,将其它对象静止或慢速播放等,以便跟踪特定的人。该方法还可以适用于其它场景,比如视频播放app播放电影的场景,或者微信或者QQ视频通话场景、如微信表情包制作等任何可以录制或播放视频或动图的场景。
本申请的各个实施方式可以任意进行组合,以实现不同的技术效果。
在另一些实施例中,手机100可以采用与图14或图15类似的方式对视频进行处理,比如,手机100将一个目标对象
结合上述实施例及相关附图,本申请实施例提供了一种视频处理方法,该方法可以在包括显示屏的电子设备(例如手机、平板电脑等)中实现。示例性的,电子设备的结构可以如图1所示。如图25所示,该方法可以包括以下步骤:
S2501:检测到用于播放视频文件的第一操作;所述视频文件是所述电子设备中存储的视频文件。
示例性的,视频文件可以是图4(b)所示界面中的视频404,第一操作可以是点击 视频404的操作;或者第一操作可以是点击图4(c)所示的界面中的播放按键406的操作。
S2502:响应于所述第一操作,播放所述视频文件,所述视频文件中第一目标对象的播放速度大于所述视频文件中所述第一目标对象以外的其它对象的播放速度;
示例性的,第一目标对象可以是用户选择的,或者电子设备自动确定的,或者预设的。以图6为例,用户可以点击视频文件中一帧图像上的至少一个对象,点击的对象即目标对象。
示例性的,第一目标对象可以默认播放速度大于其它对象的播放速度,或者,第一目标对象的播放速度可以是用户选择的。以图10为例,用户可以通过速度播放选项(2倍选项、1.5倍选项等)来选择第一目标对象的播放速度。
S2503:检测到用于更换目标对象的第二操作;
示例性的,以图12B为例,当前选择的第一目标对象是天鹅,在视频播放的过程中,用户可以点击切换目标对象的控件,则手机更换目标对象,将第一目标对象切换为第二目标对象。
当然,第二操作也可以是暂停视频文件,然后从暂停画面中重新手动选择第二目标对象,然后继续播放视频的一系列操作,前文已经描述过,在此不再重复赘述。
S2504:响应于所述第二操作,继续播放所述视频文件,在继续播放所述视频文件的过程中,所述视频文件中的第二目标对象的播放速度大于所述第二目标对象以外的其它对象的播放速度;所述第二目标对象是所述视频文件中与所述第一目标对象不同的至少一个对象。
示例性的,以图12B(a)为例,视频播放的过程,视频文件中的天鹅快速播放,当目标对象切换之后,视频文件中的鱼快速播放。
结合上述实施例及相关附图,本申请实施例提供了一种视频处理方法,该方法可以在包括显示屏的电子设备(例如手机、平板电脑等)中实现。示例性的,电子设备的结构可以如图1所示。如图26所示,该方法可以包括以下步骤:
S2601:检测到用于播放视频文件的第一操作;所述视频文件是所述电子设备中存储的视频文件;
示例性的,视频文件可以是图4(b)所示界面中的视频404,第一操作可以是点击视频404的操作;或者第一操作可以是点击图4(c)所示的界面中的播放按键406的操作。
S2602:响应于所述第一操作,播放所述视频文件,所述视频文件中处于第一区域中的内容的播放速度大于第二区域的内容的播放速度;所述第二区域是所述播放界面中所述第一区域以外的其它区域;
示例性的,第一区域可以是用户选择的,或者电子设备自动确定的,或者预设的。以图12C为例,用户可以点击视频文件中一帧图像选择第一区域。
S2603:检测到用于重新确定第一区域的第二操作;
示例性的,以图12C为例,当前选择的是第一区域,在视频播放的过程中,用户可以点击用于指示切换快速播放区域的控件,则手机100将第一区域切换为第三区域。
当然,第二操作也可以是暂停视频文件,然后从暂停画面中重新手动选择第三区域,然后继续播放视频的一系列操作,前文已经描述过,在此不再重复赘述。
S2604:响应于所述第二操作,继续播放所述视频文件,在继续播放所述视频文件的过程中,所述视频文件中处于第三区域中的内容的播放速度大于第四区域的内容的播放速度;所述第四区域是所述播放界面中所述第三区域以外的其它区域。
示例性的,以图12C(a)为例,视频播放的过程,视频文件中的第一区域中的内容快速播放,当快速播放区域切换之后,视频文件中的第三区域中的内容快速播放,请参见图12C(b)所示。
结合上述实施例及相关附图,本申请实施例提供了一种视频处理方法,该方法可以在包括摄像头和显示屏的电子设备(例如手机、平板电脑等)中实现。示例性的,电子设备的结构可以如图1所示。如图27所示,该方法可以包括以下步骤:
S2701:检测到用于启动所述摄像头的第一操作;
示例性的,第一操作可以是点击图20(a)中相机应用202的操作,或者是能够启动摄像头的其它操作,本申请实施例不作限定。
S2702:响应于所述第一操作,显示所述相机应用的取景界面,所述取景界面中包括预览图像,所述预览图像中包括至少一个对象;
示例性的,取景界面可以是图20(b)所示的界面2003。
S2703:检测到用于拍摄的第二操作;
示例性的,第二操作可以是点击图20(b)所示的控件2005的操作。
S2704:响应于所述第二操作,所述摄像头采集N帧原始图像;N为大于等于2的整数;
S2705:从所述N帧原始图像中每隔M帧抽取一帧原始图像,从抽取出的原始图像中提取K帧第一目标对象图像;M是大于等于1小于N的整数,K是大于等于1小于N的整数;所述第一目标对象是抽取出的原始图像上的至少一个对象;
S2706:将所述N帧原始图像中的第一目标对象使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中不包括所述第一目标对象;
S2707:将提取出的所述K帧第一目标对象图像和使用背景覆盖后的N帧原始图像中的K帧原始图像对应融合,得到K帧新图像;其中,使用背景覆盖后的N帧原始图像中的K帧原始图像是连续的图像;
S2708:将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成目标视频文件;
示例性的,以图15为例,电子设备检测到用于拍摄的指示后,采集11帧原始图像后,从11帧原始图像中每隔4帧图像抽取一帧图像,共抽取出3帧原始图像(即抽取出第1帧原始图像、第5帧原始图像和第9帧原始图像)。示例性的,手机100抽取图像时,可以从第1帧开始抽取,也可以从第2帧开始抽取等等,只要保证从第i帧开始抽取时,后续帧数足够即可。
电子设备从抽取出的原始图像中提取3帧包括第一目标对象的图像,然后将11帧原始图像中的第一目标对象使用背景覆盖(可以仅对11帧原始图像中包括第一目标对象的原始图像中的第一目标对象使用背景覆盖,对于不包含第一目标对象的原始图像可以不作背景覆盖处理),使得使用背景覆盖之后的图像上不包括第一目标对象。
电子设备将提取出的3帧包含第一目标对象的图像和使用背景覆盖之后的11帧原始图像中的3帧原始图像对应融合,得到3帧新图像。其中,使用背景覆盖之后的11帧原 始图像中的3帧原始图像是连续的,比如可以是使用背景覆盖之后的11帧原始图像中的前3帧图像。
电子设备将3帧新图像和剩余的8帧使用背景覆盖之后的图像合成目标视频文件,当该目标视频文件被播放时,第一目标对象快速播放。电子设备通过该方式录制得到的视频文件,是经过特殊处理的视频文件,当该视频文件在播放时,目标对象可以快速播放。通过这种方式,有助于提升视频录制的趣味性,提升用户体验。
结合上述实施例及相关附图,本申请实施例提供了一种视频处理方法,该方法可以在包括显示屏的电子设备(例如手机、平板电脑等)中实现。示例性的,电子设备的结构可以如图1所示。如图28所示,该方法可以包括以下步骤:
S2801:检测到用于启动摄像头的第一操作;
示例性的,第一操作可以是点击图20(a)中相机应用202的操作,或者是能够启动摄像头的其它操作,本申请实施例不作限定。
S2802:响应于所述第一操作,显示所述相机应用的取景界面,所述取景界面中包括预览图像,所述预览图像中包括至少一个对象;
示例性的,取景界面可以是图20(b)所示的界面2003。
S2803:检测到用于拍摄的第二操作;
示例性的,第二操作可以是点击图20(b)所示的控件2005的操作。
S2804:响应于所述第二操作,所述摄像头采集N帧原始图像;N为大于等于2的整数;
S2805:从所述N帧原始图像中每隔M帧抽取一帧原始图像,抽取出K帧原始图像;M为大于等于1小于N的整数,K为大于等于1小于N的整数;
S2806:从抽取出的每帧原始图像中提取第一图像,共得到K帧第一图像;第一图像是抽取出的每帧原始图像上的第一区域内的图像;
S2807:将所述N帧原始图像中每帧原始图像中的第一区域的第一图像使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中每帧原始图像上不包括所述第一图像;
S2808:将提取出的所述K帧第一图像填充在使用背景覆盖后的所述N帧原始图像中的K帧原始图像中的所述第一区域,得到K帧新图像;其中,使用背景覆盖后的所述N帧原始图像中的K帧原始图像是连续的;
S2809:将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成目标视频文件。
示例性的,以图15为例,电子设备检测到用于拍摄的指示之后,采集11帧原始图像后,从11帧原始图像中每隔4帧图像抽取一帧图像,共抽取出3帧原始图像,(即抽取出第1帧原始图像、第5帧原始图像和第9帧原始图像)。示例性的,手机100抽取图像时,可以从第1帧开始抽取,也可以从第2帧开始抽取等等,只要保证从第i帧开始抽取时,后续帧数足够即可。
电子设备从抽取出的每帧原始图像中提取第一图像(即每帧原始图像上的第一区域内的内容),然后将11帧原始图像中每帧原始图像上的第一图像使用背景覆盖,使得11帧原始图像上不包括第一图像。
电子设备将提取出的3帧第一图像和使用背景覆盖之后的11帧原始图像中的3帧原始图像对应融合,得到3帧新图像。其中,使用背景覆盖之后的11帧原始图像中的3帧 原始图像是连续的,比如可以是使用背景覆盖之后的11帧原始图像中的前3帧图像。
电子设备将3帧新图像和剩余的8帧使用背景覆盖之后的图像合成目标视频文件。当该目标视频文件被播放时,第一区域内的内容快速播放。电子设备通过该方式录制得到的视频文件,是经过特殊处理的视频文件,当该视频文件在播放时,第一区域的内容可以快速播放。通过这种方式,有助于提升视频录制的趣味性,提升用户体验。
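图28的按区域处理流程可以用如下草图示意(rect 为第一区域的矩形坐标;此处的背景覆盖用区域像素均值近似,仅为示例假设,并非本申请限定的实现):

```python
def process_region(frames, rect, interval):
    """图28流程的草图:rect=(x, y, w, h)为第一区域。每隔interval帧抽取
    该区域内的第一图像,贴回到“第一区域已被背景覆盖”的前K帧上。"""
    x, y, w, h = rect
    crops = [f[y:y + h, x:x + w].copy() for f in frames[::interval]]   # K帧第一图像
    covered = []
    for f in frames:
        g = f.copy()
        # 用该区域的像素均值近似“使用背景覆盖”
        g[y:y + h, x:x + w] = g[y:y + h, x:x + w].mean(axis=(0, 1)).astype(f.dtype)
        covered.append(g)
    new_frames = []
    for i, crop in enumerate(crops):
        base = covered[i].copy()
        base[y:y + h, x:x + w] = crop           # 将第一图像对应填充回第一区域
        new_frames.append(base)
    return new_frames + covered[len(crops):]    # K帧新图像 + 剩余N-K帧
```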
上述本申请提供的实施例中,从移动设备(手机100)作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,移动设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
本发明实施例还提供一种计算机存储介质,该存储介质可以包括存储器,该存储器可存储有程序,该程序被执行时,使得终端执行如前的图25、图26、图27、图28所示的方法实施例中记载的电子设备所执行的全部或部分步骤。
本发明实施例还提供一种计算机程序产品,当所述计算机程序产品在终端上运行时,使得所述终端执行如前的图25、图26、图27、图28所示的方法实施例中记载的电子设备所执行的全部或部分步骤。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请实施例可以用硬件实现,或软件实现,或固件实现,或它们的组合方式来实现。当使用软件实现时,可以将上述功能存储在计算机可读介质中或作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是计算机能够存取的任何可用介质。以此为例但不限于:计算机可读介质可以包括RAM、ROM、电可擦可编程只读存储器(electrically erasable programmable read only memory,EEPROM)、只读光盘(compact disc read-Only memory,CD-ROM)或其他光盘存储、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质。此外,任何连接可以适当地称为计算机可读介质。例如,如果软件是使用同轴电缆、光纤光缆、双绞线、数字用户线(digital subscriber line,DSL)或者诸如红外线、无线电和微波之类的无线技术从网站、服务器或者其他远程源传输的,那么同轴电缆、光纤光缆、双绞线、DSL或者诸如红外线、无线电和微波之类的无线技术包括在所属介质的定义中。如本申请实施例所使用的,盘(disk)和碟(disc)包括压缩光碟(compact disc,CD)、激光碟、光碟、数字通用光碟(digital video disc,DVD)、软盘和蓝光光碟,其中盘通常磁性地复制数据,而碟则用激光来光学地复制数据。上面的组合也应当包括在计算机可读介质的保护范围之内。
总之,以上所述仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡根据本申请的揭露,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (31)

  1. 一种视频处理方法,其特征在于,应用于一具有显示屏的电子设备,所述方法包括:
    检测到用于播放视频文件的第一操作;所述视频文件是所述电子设备中存储的视频文件;
    响应于所述第一操作,播放所述视频文件,所述视频文件中第一目标对象的播放速度大于所述视频文件中所述第一目标对象以外的其它对象的播放速度;
    检测到用于更换目标对象的第二操作;
    响应于所述第二操作,继续播放所述视频文件,在继续播放所述视频文件的过程中,所述视频文件中的第二目标对象的播放速度大于所述第二目标对象以外的其它对象的播放速度;所述第二目标对象是所述视频文件中与所述第一目标对象不同的至少一个对象。
  2. 如权利要求1所述的方法,其特征在于,所述第一目标对象包括:预设的目标对象;或者,所述电子设备根据所述视频文件中的多个对象自动确定的对象;或者,所述电子设备根据用户在所述视频文件中的一帧图像上的选定操作,确定的对象;
    所述第二目标对象包括:预设的目标对象;或者,所述电子设备根据所述视频文件中的多个对象自动确定的对象;或者,所述电子设备根据用户在所述视频文件中的一帧图像上的选定操作,确定的对象。
  3. 如权利要求1或2所述的方法,其特征在于,所述第一目标对象或者所述第二目标对象的播放速度为预设的播放速度;或者,所述第一目标对象或者所述第二目标对象的播放速度为所述电子设备根据用户在所述视频文件中的一帧图像上的选择操作,确定的播放速度。
  4. 如权利要求2或3所述的方法,其特征在于,所述第二目标对象为所述电子设备根据用户在所述视频文件中的一帧图像上的选定操作,确定的对象,包括:
    检测到作用在所述一帧图像上的至少一个点击操作,确定每个点击操作所在位置对应的对象为所述第二目标对象;或者
    检测到作用在所述一帧图像上的至少一个圈选操作,确定每个圈选操作所围成的区域内包括的对象为所述第二目标对象;或者
    检测到作用在所述一帧图像上的每个对象的标识信息中选择至少一个目标标识信息的操作,确定所述一帧图像上每个目标标识信息对应的对象为所述第二目标对象。
  5. 如权利要求2-4任一所述的方法,其特征在于,所述第二目标对象的播放速度为所述电子设备根据用户在所述视频文件中的一帧图像上的选择操作,确定的播放速度,包括:
    响应于触发显示播放速度选项的操作,显示多个播放速度选项;
    检测到用于在多个播放速度选项中选择目标播放速度选项的选择操作,确定所述目标播放速度选项对应的播放速度。
  6. 如权利要求1-5任一所述的方法,其特征在于,所述方法还包括:
    当所述第二目标对象播放完毕时,停止播放所述视频文件;或者
    当所述第二目标对象播放完毕时,显示所述第二目标对象的最后一帧画面,继续播放所述视频文件的其它对象。
  7. 如权利要求1-6任一所述的方法,其特征在于,在所述播放所述视频文件之前,还包括:
    从所述视频文件的N帧原始图像中每隔M帧抽取一帧原始图像,从抽取出的原始图像中提取K帧包含第一目标对象的图像;其中;N为大于等于2的整数;M是大于等于1小于N的整数,K是大于等于1小于N的整数;所述第一目标对象是抽取出的原始图像上的至少一个对象;
    将所述N帧原始图像中的所述第一目标对象使用背景覆盖,使得所述N帧原始图像中不包括所述第一目标对象;
    将提取出的所述K帧包含第一目标对象的图像和使用背景覆盖后的N帧原始图像中的K帧原始图像对应融合,得到K帧新图像;其中,使用背景覆盖后的N帧原始图像中的K帧原始图像是连续的图像;
    将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成第一目标视频文件;
    所述播放所述视频文件,包括:
    播放所述第一目标视频文件。
  8. 如权利要求1-7任一所述的方法,其特征在于,在所述继续播放所述视频文件之前,还包括:
    从所述视频文件的N帧原始图像中每隔P帧抽取一帧原始图像,从抽取出的原始图像中提取Q帧包含第二目标对象的图像;其中;N为大于等于2的整数;P是大于等于1小于N的整数,Q是大于等于1小于N的整数;所述第二目标对象是抽取出的原始图像上的至少一个对象;
    将所述N帧原始图像中的所述第二目标对象使用背景覆盖,使得所述N帧原始图像中不包括所述第二目标对象;
    将提取出的所述Q帧包含第二目标对象的图像和使用背景覆盖后的N帧原始图像中的Q帧原始图像对应融合,得到Q帧新图像;其中,使用背景覆盖后的N帧原始图像中的Q帧原始图像是连续的图像;
    将所述Q帧新图像和剩余的N-Q帧使用背景覆盖后的原始图像合成第二目标视频文件;
    所述继续播放所述视频文件,包括:
    播放所述第二目标视频文件。
  9. 一种视频处理方法,其特征在于,应用于一具有显示屏的电子设备,所述方法包括:
    检测到用于播放视频文件的第一操作;所述视频文件是所述电子设备中存储的视频文件;
    响应于所述第一操作,播放所述视频文件,播放界面中处于第一区域中的第一内容的播放速度大于第二区域中的第二内容的播放速度;所述第二区域是播放界面中所述第一区域以外的其它区域;所述第一内容和所述第二内容属于所述视频文件中的播放内容;
    检测到用于切换快速播放区域的第二操作;
    响应于所述第二操作,继续播放所述视频文件,在继续播放所述视频文件的过程中,播放界面中处于第三区域中的第三内容的播放速度大于第四区域中第四内容的播放速度; 所述第四区域是播放界面中所述第三区域以外的其它区域,所述第三内容和所述第四内容属于所述视频文件中的播放内容。
  10. 如权利要求9所述的方法,其特征在于,所述第一区域或者所述第三区域包括:
    预先设置的区域;或者,所述电子设备根据所述视频文件的多个对象自动确定的区域;或者,所述电子设备根据用户在所述视频文件的一帧图像上的选定操作,确定的区域。
  11. 如权利要求9或10所述的方法,其特征在于,所述第一区域或所述第三区域中内容的播放速度为预设的播放速度;或者,所述第一区域或者所述第三区域中内容的播放速度为所述电子设备根据用户在视频文件的一帧图像上的选择操作,确定的播放速度。
  12. 如权利要求9或10所述的方法,其特征在于,所述第一区域或所述第三区域为所述电子设备根据用户在所述视频文件的一帧图像上的选定操作,确定的区域,包括:
    检测到作用在所述一帧图像上的至少一个圈选操作,确定所述至少一个圈选操作所围成的区域为确定的区域中的部分区域或全部区域。
  13. 如权利要求9-12任一所述的方法,其特征在于,所述第一区域或第三区域中内容的播放速度为所述电子设备根据用户在所述视频文件的一帧图像上的选择操作,确定的播放速度,包括:
    响应于触发显示播放速度选项的操作,显示多个播放速度选项;
    检测到用于在多个播放速度选项中选择目标播放速度选项的选择操作,确定所述目标播放速度选项对应的播放速度。
  14. 如权利要求9-13任一所述的方法,其特征在于,所述方法还包括:
    当所述第三区域内的第三内容播放完毕时,停止播放所述视频文件;或者
    当所述第三区域内的第三内容播放完毕时,显示所述第三区域的最后一帧画面,继续播放第四区域的第四内容。
  15. 如权利要求9-14任一所述的方法,其特征在于,在所述播放所述视频文件之前,还包括:
    从所述视频文件的N帧原始图像中每隔M帧抽取一帧原始图像,抽取出K帧原始图像;其中,N为大于等于2的整数,M为大于等于1小于N的整数,K为大于等于1小于N的整数;
    从抽取出的每帧原始图像中提取第一图像,共得到K帧第一图像;第一图像是抽取出的每帧原始图像上处于所述第一区域内的第一内容;
    将所述N帧原始图像中每帧原始图像上的第一区域内的第一图像使用背景覆盖,使得所述N帧原始图像中每帧原始图像上不包括所述第一图像;
    将提取出的所述K帧第一图像填充在使用背景覆盖后的N帧原始图像中的K帧原始图像中的所述第一区域内,得到K帧新图像;其中,所述使用背景覆盖后的N帧原始图像中的K帧原始图像是连续的;
    将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成第一目标视频文件;
    所述播放所述视频文件,包括:
    播放所述第一目标视频文件。
  16. 如权利要求9-15任一所述的方法,其特征在于,在所述继续播放所述视频文件 之前,还包括:
    从所述视频文件的N帧原始图像中每隔P帧抽取一帧原始图像,抽取出Q帧原始图像;其中,N为大于等于2的整数,P为大于等于1小于N的整数,Q为大于等于1小于N的整数;
    从抽取出的每帧原始图像中提取第三图像,共得到Q帧第三图像;第三图像是抽取出的每帧原始图像上处于所述第三区域内的第三内容;
    将所述N帧原始图像中每帧原始图像上处于所述第三区域内的第三图像使用背景覆盖,使得所述N帧原始图像中每帧原始图像上不包括所述第三图像;
    将提取出的所述Q帧第三图像填充在使用背景覆盖后的N帧原始图像中的Q帧原始图像中的所述第三区域内,得到Q帧新图像;其中,所述使用背景覆盖后的N帧原始图像中的Q帧原始图像是连续的;
    将所述Q帧新图像和剩余的N-Q帧使用背景覆盖后的原始图像合成第二目标视频文件;
    所述继续播放所述视频文件,包括:
    播放所述第二目标视频文件。
  17. 一种视频处理方法,其特征在于,应用于具有摄像头和显示屏的电子设备,所述方法包括:
    检测到用于启动所述摄像头的第一操作;
    响应于所述第一操作,所述显示屏显示所述相机应用的取景界面,所述取景界面中包括预览图像,所述预览图像中包括至少一个对象;
    检测到用于拍摄的第二操作;
    响应于所述第二操作,所述摄像头采集N帧原始图像;N为大于等于2的整数;
    从所述N帧原始图像中每隔M帧抽取一帧原始图像,从抽取出的原始图像中提取K帧包含第一目标对象的图像;M是大于等于1小于N的整数,K是大于等于1小于N的整数;所述第一目标对象是抽取出的原始图像上的至少一个对象;
    将所述N帧原始图像中的第一目标对象使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中不包括所述第一目标对象;
    将提取出的所述K帧包含第一目标对象的图像和使用背景覆盖后的N帧原始图像中的K帧原始图像对应融合,得到K帧新图像;其中,使用背景覆盖后的N帧原始图像中的K帧原始图像是连续的图像;
    将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成目标视频文件。
  18. 如权利要求17所述的方法,其特征在于,所述第一目标对象为预设的目标对象;或者,所述第一目标对象为所述电子设备根据所述预览图像上的至少一个待拍摄对象自动确定的对象;或者,所述第一目标对象为所述电子设备根据用户在所述预览图像上的选定操作,确定的对象。
  19. 如权利要求17或18所述的方法,其特征在于,所述第一目标对象的播放速度为预设的播放速度;或者,所述第一目标对象的播放速度为所述电子设备根据用户在所述预览图像上的选择操作,确定的播放速度。
  20. 如权利要求18或19所述的方法,其特征在于,所述第一目标对象为所述电子设备根据用户在所述预览图像上的选定操作,确定的对象,包括:
    检测到作用在所述预览图像上的至少一次点击操作,确定每次点击操作所在位置对应的对象为所述第一目标对象;或者
    检测到作用在所述预览图像上的至少一个圈选操作,确定每个圈选操作所围成的区域内包括的对象为所述第一目标对象;或者
    检测到作用在所述预览图像上的每个对象的标识信息中选择至少一个目标标识信息的操作,确定每个目标标识信息对应的对象为所述第一目标对象。
  21. 如权利要求17-20任一所述的方法,其特征在于,所述方法还包括:
    保存所述目标视频文件,所述目标视频文件的封面上显示标识,所述标识用于指示所述目标视频文件中第一目标对象播放速度大于其它对象的播放速度,所述其它对象是所述目标视频文件中除所述第一目标对象之外的对象。
  22. 一种视频处理方法,其特征在于,应用于具有摄像头和显示屏的电子设备,所述方法包括:
    检测到用于启动所述摄像头的第一操作;
    响应于所述第一操作,所述显示屏显示所述相机应用的取景界面,所述取景界面中包括预览图像,所述预览图像中包括至少一个对象;
    检测到用于拍摄的第二操作;
    响应于所述第二操作,所述摄像头采集N帧原始图像;N为大于等于2的整数;
    从所述N帧原始图像中每隔M帧抽取一帧原始图像,抽取出K帧原始图像;M为大于等于1小于N的整数,K为大于等于1小于N的整数;
    从抽取出的每帧原始图像中提取第一图像,共得到K帧第一图像;第一图像是抽取出的每帧原始图像上的第一区域内的图像;
    将所述N帧原始图像中每帧原始图像中的第一区域的第一图像使用背景覆盖,使得使用背景覆盖后的所述N帧原始图像中每帧原始图像上不包括所述第一图像;
    将提取出的所述K帧第一图像填充在使用背景覆盖后的N帧原始图像中的K帧原始图像中的所述第一区域,得到K帧新图像;其中,所述使用背景覆盖后的N帧原始图像中的K帧原始图像是连续的;
    将所述K帧新图像和剩余的N-K帧使用背景覆盖后的原始图像合成目标视频文件。
  23. 如权利要求22所述的方法,其特征在于,所述第一区域为预先设置的区域;或者,所述第一区域为根据所述预览图像上的至少一个待拍摄对象自动确定的区域;或者,所述第一区域为所述电子设备根据用户在所述预览图像上的选定操作,确定的区域。
  24. 如权利要求22或23所述的方法,其特征在于,所述第一区域中内容的播放速度为预设的播放速度;或者,所述第一区域中内容的播放速度为所述电子设备根据用户在所述预览图像上的选定操作,确定的播放速度。
  25. 如权利要求23或24所述的方法,其特征在于,所述第一区域为所述电子设备根据用户在所述预览图像上的选定操作,确定的区域,包括:
    检测到作用在所述预览图像上的至少一个圈选操作,确定每个圈选操作所围成的区域为所述第一区域。
  26. 如权利要求22-25任一所述的方法,其特征在于,所述方法还包括:
    保存所述目标视频文件,所述目标视频文件的封面上显示标识,所述标识用于指示所述目标视频文件中第一区域播放速度大于其它区域的播放速度,所述其它区域是所述 目标视频文件中除所述第一区域之外的区域。
  27. 一种电子设备,其特征在于,包括:显示屏;一个或多个处理器;存储器;一个或多个应用程序;一个或多个程序,其中所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如权利要求1-16中任一所述的方法步骤。
  28. 一种电子设备,其特征在于,包括:显示屏;摄像头;一个或多个处理器;存储器;一个或多个应用程序;一个或多个程序,其中所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述移动设备执行时,使得所述移动设备执行如权利要求17-26中任一所述的方法步骤。
  29. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-26中任一项所述的方法。
  30. 一种程序产品,其特征在于,当所述程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1-26中任一项所述的方法。
  31. 一种电子设备上的图形用户界面,其特征在于,所述电子设备具有显示屏、摄像头、存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述存储器中的一个或多个计算机程序,其特征在于,所述图形用户界面包括所述电子设备执行如权利要求1至26中任意一项所述的方法时显示的图形用户界面。
PCT/CN2019/076360 2019-02-27 2019-02-27 一种视频处理方法和移动设备 WO2020172826A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/076360 WO2020172826A1 (zh) 2019-02-27 2019-02-27 一种视频处理方法和移动设备
CN201980093133.9A CN113475092B (zh) 2019-02-27 2019-02-27 一种视频处理方法和移动设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/076360 WO2020172826A1 (zh) 2019-02-27 2019-02-27 一种视频处理方法和移动设备

Publications (1)

Publication Number Publication Date
WO2020172826A1 true WO2020172826A1 (zh) 2020-09-03

Family

ID=72238779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076360 WO2020172826A1 (zh) 2019-02-27 2019-02-27 一种视频处理方法和移动设备

Country Status (2)

Country Link
CN (1) CN113475092B (zh)
WO (1) WO2020172826A1 (zh)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580067A (zh) * 2020-11-30 2021-03-30 郑州信大捷安信息技术股份有限公司 一种视频文件监管方法及系统
CN112653920A (zh) * 2020-12-18 2021-04-13 北京字跳网络技术有限公司 视频处理方法、装置、设备、存储介质及计算机程序产品
CN113238698A (zh) * 2021-05-11 2021-08-10 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN113643728A (zh) * 2021-08-12 2021-11-12 荣耀终端有限公司 一种音频录制方法、电子设备、介质及程序产品
CN113691721A (zh) * 2021-07-28 2021-11-23 浙江大华技术股份有限公司 一种缩时摄影视频的合成方法、装置、计算机设备和介质
CN113825023A (zh) * 2021-11-02 2021-12-21 户龙辉 视频文件处理方法、装置、设备及存储介质
CN113918766A (zh) * 2021-08-12 2022-01-11 荣耀终端有限公司 视频缩略图的显示方法、设备和存储介质
CN114143398A (zh) * 2021-11-17 2022-03-04 西安维沃软件技术有限公司 视频播放方法、装置
CN114173177A (zh) * 2021-12-03 2022-03-11 北京百度网讯科技有限公司 一种视频处理方法、装置、设备及存储介质
CN114374868A (zh) * 2022-01-04 2022-04-19 网易传媒科技(北京)有限公司 调节视频播放速度的方法、装置、介质和计算设备
US11310553B2 (en) * 2020-06-19 2022-04-19 Apple Inc. Changing resource utilization associated with a media object based on an engagement score
CN114419198A (zh) * 2021-12-21 2022-04-29 北京达佳互联信息技术有限公司 一种帧序列处理方法、装置、电子设备及存储介质
CN115190363A (zh) * 2021-04-02 2022-10-14 花瓣云科技有限公司 视频播放方法、装置及存储介质
CN116245708A (zh) * 2022-12-15 2023-06-09 江苏北方湖光光电有限公司 一种红外图像目标轮廓勾勒ip核的设计方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115543137A (zh) * 2022-09-23 2022-12-30 维沃移动通信有限公司 视频播放方法和装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4280613B2 (ja) * 2003-12-08 2009-06-17 キヤノン株式会社 動画像再生装置及びその制御方法、並びに、コンピュータプログラム及びコンピュータ可読記憶媒体
CN107426612A (zh) * 2017-05-31 2017-12-01 厦门幻世网络科技有限公司 一种控制播放速度的方法及介质
CN107801100A (zh) * 2017-09-27 2018-03-13 北京潘达互娱科技有限公司 一种视频定位播放方法及装置
CN109361939B (zh) * 2018-11-15 2021-01-08 维沃移动通信有限公司 一种视频播放方法及终端设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2015570A2 (en) * 2007-03-30 2009-01-14 Alpine Electronics, Inc. Video player and video playback control method
CN103856819A (zh) * 2012-11-30 2014-06-11 腾讯科技(深圳)有限公司 播放速度调整装置及方法
CN103873741A (zh) * 2014-04-02 2014-06-18 北京奇艺世纪科技有限公司 一种用于替换视频中感兴趣区域的方法及装置
CN106507170A (zh) * 2016-10-27 2017-03-15 宇龙计算机通信科技(深圳)有限公司 一种视频处理方法及装置
CN106851128A (zh) * 2017-03-20 2017-06-13 努比亚技术有限公司 一种基于双摄像头的视频数据处理方法和装置

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11310553B2 (en) * 2020-06-19 2022-04-19 Apple Inc. Changing resource utilization associated with a media object based on an engagement score
CN112580067B (zh) * 2020-11-30 2022-03-25 郑州信大捷安信息技术股份有限公司 一种视频文件监管方法及系统
CN112580067A (zh) * 2020-11-30 2021-03-30 郑州信大捷安信息技术股份有限公司 一种视频文件监管方法及系统
CN112653920A (zh) * 2020-12-18 2021-04-13 北京字跳网络技术有限公司 视频处理方法、装置、设备、存储介质及计算机程序产品
US12003884B2 (en) 2020-12-18 2024-06-04 Beijing Zitiao Network Technology Co., Ltd. Video processing method and apparatus, device, storage medium and computer program product
CN115190363A (zh) * 2021-04-02 2022-10-14 花瓣云科技有限公司 视频播放方法、装置及存储介质
CN115190363B (zh) * 2021-04-02 2024-05-17 花瓣云科技有限公司 视频播放方法、装置及存储介质
CN113238698A (zh) * 2021-05-11 2021-08-10 北京达佳互联信息技术有限公司 图像处理方法、装置、电子设备及存储介质
CN113691721A (zh) * 2021-07-28 2021-11-23 浙江大华技术股份有限公司 一种缩时摄影视频的合成方法、装置、计算机设备和介质
CN113643728A (zh) * 2021-08-12 2021-11-12 荣耀终端有限公司 一种音频录制方法、电子设备、介质及程序产品
CN113918766B (zh) * 2021-08-12 2023-10-13 荣耀终端有限公司 视频缩略图的显示方法、设备和存储介质
CN113918766A (zh) * 2021-08-12 2022-01-11 荣耀终端有限公司 视频缩略图的显示方法、设备和存储介质
CN113643728B (zh) * 2021-08-12 2023-08-22 荣耀终端有限公司 一种音频录制方法、电子设备、介质及程序产品
CN113825023A (zh) * 2021-11-02 2021-12-21 户龙辉 视频文件处理方法、装置、设备及存储介质
CN113825023B (zh) * 2021-11-02 2023-12-05 户龙辉 视频文件处理方法、装置、设备及存储介质
CN114143398B (zh) * 2021-11-17 2023-08-25 西安维沃软件技术有限公司 视频播放方法、装置
CN114143398A (zh) * 2021-11-17 2022-03-04 西安维沃软件技术有限公司 视频播放方法、装置
CN114173177B (zh) * 2021-12-03 2024-03-19 北京百度网讯科技有限公司 一种视频处理方法、装置、设备及存储介质
CN114173177A (zh) * 2021-12-03 2022-03-11 北京百度网讯科技有限公司 一种视频处理方法、装置、设备及存储介质
CN114419198A (zh) * 2021-12-21 2022-04-29 北京达佳互联信息技术有限公司 一种帧序列处理方法、装置、电子设备及存储介质
CN114374868A (zh) * 2022-01-04 2022-04-19 网易传媒科技(北京)有限公司 调节视频播放速度的方法、装置、介质和计算设备
CN116245708A (zh) * 2022-12-15 2023-06-09 江苏北方湖光光电有限公司 一种红外图像目标轮廓勾勒ip核的设计方法

Also Published As

Publication number Publication date
CN113475092A (zh) 2021-10-01
CN113475092B (zh) 2022-10-04

Similar Documents

Publication Publication Date Title
WO2020172826A1 (zh) 一种视频处理方法和移动设备
CN115002340B (zh) 一种视频处理方法和电子设备
CN111526314B (zh) 视频拍摄方法及电子设备
CN111147878B (zh) 直播中的推流方法、装置及计算机存储介质
WO2021190078A1 (zh) 短视频的生成方法、装置、相关设备及介质
WO2021238943A1 (zh) Gif图片生成方法、装置及电子设备
US10749923B2 (en) Contextual video content adaptation based on target device
CN112261481B (zh) 互动视频的创建方法、装置、设备及可读存储介质
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
CN111935516B (zh) 音频文件的播放方法、装置、终端、服务器及存储介质
JP4768846B2 (ja) 電子機器及び画像表示方法
WO2023160241A1 (zh) 一种视频处理方法及相关装置
CN114979785B (zh) 视频处理方法、电子设备及存储介质
CN115002359A (zh) 视频处理方法、装置、电子设备及存储介质
JP2015103968A (ja) 画像処理装置、画像処理方法及び画像処理プログラム
KR102138835B1 (ko) 정보 노출 방지 영상 제공 장치 및 방법
US20210377454A1 (en) Capturing method and device
CN115484423A (zh) 一种转场特效添加方法及电子设备
CN116033261B (zh) 一种视频处理方法、电子设备、存储介质和芯片
WO2024103633A1 (zh) 一种视频播放方法、装置、电子设备和存储介质
CN114546229B (zh) 信息处理方法、截屏方法及电子设备
CN116132790B (zh) 录像方法和相关装置
WO2023226725A9 (zh) 录像方法和相关装置
WO2023226699A1 (zh) 录像方法、装置及存储介质
US20240031655A1 (en) Video Playback Method, Terminal Device, Apparatus, System, and Storage Medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19917122

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19917122

Country of ref document: EP

Kind code of ref document: A1