WO2022088776A1 - Video display method and video display apparatus

Video display method and video display apparatus

Info

Publication number
WO2022088776A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
content area
preset direction
enlargement
length
Application number
PCT/CN2021/107455
Other languages
English (en)
French (fr)
Inventor
周静
王慧
刘付家
袁勇
李新
Original Assignee
北京达佳互联信息技术有限公司
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2022088776A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 - Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/47 - End-user applications
    • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/4728 - End-user interface for interacting with content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region

Definitions

  • the present disclosure relates to the technical field of video processing, and in particular, to a video display method and a video display device.
  • the user can browse the video through the electronic device to obtain related information, and the electronic device can simultaneously play the related information of the video, such as the comments of the video, during the process of playing the video.
  • the user can perform a zoom operation on the video, so that the display screen of the electronic device can simultaneously display the video-related information during the video playing process, thereby meeting the user's requirement of watching the video and the video-related information at the same time.
  • the present disclosure provides a video display method and a video display device.
  • the technical solutions of the present disclosure are as follows:
  • a video display method which is applied to an electronic device.
  • the video display method includes: receiving a video zoom operation performed on a video playback interface, and acquiring operation information of the video zoom operation; acquiring video information of a first video displayed in the video playback interface, the video information including at least a display size and a key content area of the first video; determining a scaling method and a scaling parameter of the first video according to the operation information and the video information of the first video; and scaling the first video according to the scaling method and the scaling parameter to obtain a second video (a minimal sketch of this flow follows this item).
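To make the claimed flow concrete, here is a minimal Python sketch. The data shapes, the helper names (`determine_scaling`, `apply_scaling`) and the choice to scale only along the preset (vertical) direction are illustrative assumptions, not the disclosed implementation.

```python
# Sketch of the claimed client-side flow; names, data shapes and the preset-direction-only
# scaling used here are assumptions, not the actual implementation of the disclosure.
from dataclasses import dataclass

@dataclass
class VideoInfo:
    display_size: tuple[int, int]                 # (width, height) of the first video
    key_content_area: tuple[int, int, int, int]   # (x, y, w, h) of the key content area

@dataclass
class OperationInfo:
    op_type: str        # "zoom_in" or "zoom_out"
    op_distance: float  # projection of the gesture onto the preset direction

def determine_scaling(op: OperationInfo, info: VideoInfo) -> dict:
    """Pick a scaling method and parameters from the operation and video information."""
    return {"type": op.op_type, "length": op.op_distance, "keep": info.key_content_area}

def apply_scaling(size: tuple[int, int], params: dict) -> tuple[int, int]:
    """Scale only along the preset (vertical) direction, leaving the other axis alone."""
    w, h = size
    delta = params["length"] if params["type"] == "zoom_in" else -params["length"]
    return (w, max(1, int(h + delta)))

def on_video_zoom(op: OperationInfo, info: VideoInfo) -> tuple[int, int]:
    params = determine_scaling(op, info)          # scaling method + parameters
    return apply_scaling(info.display_size, params)

# Example: shrink a 1080x1920 portrait video by a 300-px upward swipe -> (1080, 1620).
print(on_video_zoom(OperationInfo("zoom_out", 300), VideoInfo((1080, 1920), (0, 500, 1080, 920))))
```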
  • performing scaling processing on the first video according to the operation information and the video information of the first video to obtain a second video includes: determining a zoom mode and a zoom parameter of the first video according to the operation information and the video information of the first video; and zooming the first video according to the zoom mode and the zoom parameter to obtain the second video.
  • the operation information includes at least an operation type and an operation distance; determining the zoom mode and the zoom parameter of the first video according to the operation information and the video information of the first video includes: in a case where the operation type is determined to be a zoom-out operation, determining a reduction mode and a reduction parameter of the first video according to the operation distance and the video information of the first video, where the reduction mode includes at least whether to reduce the key content area and a reduction type, the reduction type includes reduction in a preset direction or overall reduction, and the reduction parameter includes at least a reduction length in the preset direction; and in a case where the operation type is determined to be a zoom-in operation, determining an enlargement mode and an enlargement parameter of the first video according to the operation distance and the video information of the first video, where the enlargement mode includes at least whether to enlarge the key content area and an enlargement type, the enlargement type includes enlargement in the preset direction or overall enlargement, and the enlargement parameter includes at least an enlargement length in the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; in a case where the operation type is determined to be a zoom-out operation, determining the reduction mode and the reduction parameter of the first video according to the operation distance and the video information of the first video includes: determining, according to the video information, a background content area of the first video outside the key content area; in a case where the operation distance is not greater than a length of the background content area in the preset direction, determining that the reduction mode includes not reducing the key content area, reducing the background content area, and a reduction type of the background content area being reduction in the preset direction or overall reduction; in a case where the reduction type of the background content area is reduction in the preset direction, determining the reduction length in the preset direction based on the operation distance; and in a case where the reduction type of the background content area is overall reduction, determining a first reduction ratio and the reduction length in the preset direction based on the operation distance and a size of the background content area, where the first reduction ratio is a ratio of the length of the background content area in the preset direction to its length in the direction perpendicular to the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; in a case where the operation type is determined to be a zoom-out operation, determining the reduction mode and the reduction parameter of the first video according to the operation distance and the video information of the first video includes: determining, according to the video information, a background content area of the first video outside the key content area; and in a case where the operation distance is greater than the length of the background content area in the preset direction, determining that the reduction mode includes not reducing the key content area and reducing the background content area, and determining, based on the size of the background content area, a reduction length of the background content area in the preset direction and a reduction length in the direction perpendicular to the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; in a case where the operation type is determined to be a zoom-out operation, determining the reduction mode and the reduction parameter of the first video according to the operation distance and the video information of the first video includes: determining, according to the video information, a background content area of the first video outside the key content area; in a case where the operation distance is greater than the length of the background content area in the preset direction, determining that the reduction mode includes reducing the key content area, a reduction type of the key content area being reduction in the preset direction or overall reduction, and reducing the background content area; determining, based on the size of the background content area, a reduction length of the background content area in the preset direction and a reduction length in the direction perpendicular to the preset direction; and in a case where the reduction type of the key content area is reduction in the preset direction, determining a reduction length of the key content area in the preset direction based on the length of the background content area in the preset direction and the operation distance (a sketch of this zoom-out decision follows this item).
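A small sketch of the zoom-out branch above. The one-dimensional region model (lengths measured along the preset direction) and the rule used when the gesture exceeds the background length are assumptions chosen for illustration; the disclosure also allows alternatives such as leaving the key content area untouched.

```python
# Sketch of the zoom-out decision described above; region model and names are assumptions.
def determine_reduction(op_distance: float,
                        bg_len: float,     # background length along the preset direction
                        key_len: float):   # key content area length along that direction
    """Return (shrink_key_area, bg_reduction, key_reduction) along the preset direction."""
    if op_distance <= bg_len:
        # Only the background shrinks; the key content area is untouched.
        return False, op_distance, 0.0
    # One disclosed alternative: remove the background entirely and let the key
    # content area absorb the remainder (never below zero in this toy model).
    return True, bg_len, min(op_distance - bg_len, key_len)

print(determine_reduction(op_distance=200, bg_len=500, key_len=900))  # background only
print(determine_reduction(op_distance=700, bg_len=500, key_len=900))  # key area shrinks too
```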
  • the operation distance is a projection distance of the video zoom operation in the preset direction; in a case where the operation type is determined to be a zoom-in operation, determining the enlargement mode and the enlargement parameter of the first video according to the operation distance and the video information of the first video includes: in a case where the length of the key content area in the preset direction is equal to the original length of the key content area, determining, according to the video information, an original size corresponding to a background content area of the first video outside the key content area; determining that the enlargement mode includes not enlarging the key content area, enlarging the background content area, and an enlargement type of the background content area being enlargement in the preset direction or overall enlargement; in a case where the enlargement type of the background content area is enlargement in the preset direction, determining, based on the operation distance, an enlargement length of the background content area in the preset direction; and in a case where the enlargement type of the background content area is overall enlargement, determining, based on the operation distance and the original size of the background content area, the enlargement length of the background content area in the preset direction and a first enlargement ratio, where the first enlargement ratio is a ratio of the original length of the background content area in the preset direction to its original length in the direction perpendicular to the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; in a case where the operation type is determined to be a zoom-in operation, determining the enlargement mode and the enlargement parameter of the first video according to the operation distance and the video information of the first video includes: in a case where the length of the key content area in the preset direction is less than or equal to the difference between the original length of the key content area and the operation distance, determining that the enlargement mode includes enlarging the key content area, and an enlargement type of the key content area being enlargement in the preset direction or overall enlargement; in a case where the enlargement type of the key content area is enlargement in the preset direction, determining an enlargement length of the key content area in the preset direction based on the operation distance; and in a case where the enlargement type of the key content area is overall enlargement, determining, based on the operation distance and the size of the key content area, the enlargement length of the key content area in the preset direction and a second enlargement ratio, where the second enlargement ratio is a ratio of the length of the key content area in the preset direction to its length in the direction perpendicular to the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; in a case where the operation type is determined to be a zoom-in operation, determining the enlargement mode and the enlargement parameter of the first video according to the operation distance and the video information of the first video includes: in a case where the length of the key content area in the preset direction is greater than the difference between the original length of the key content area and the operation distance, determining, according to the video information, an original size corresponding to a background content area of the first video outside the key content area; determining that the enlargement mode includes enlarging the key content area, an enlargement type of the key content area being enlargement in the preset direction or overall enlargement, enlarging the background content area, and an enlargement type of the background content area being enlargement in the preset direction or overall enlargement; in a case where the enlargement type of the key content area is enlargement in the preset direction, determining the enlargement length of the key content area in the preset direction based on the length of the key content area in the preset direction and the original length of the key content area in the preset direction; in a case where the enlargement type of the key content area is overall enlargement, determining, based on the size of the key content area and the original length of the key content area in the preset direction, the enlargement length of the key content area in the preset direction and a third enlargement ratio, where the third enlargement ratio is a ratio of the length of the key content area in the preset direction to its length in the direction perpendicular to the preset direction; in a case where the enlargement type of the background content area is enlargement in the preset direction, determining the enlargement length of the background content area in the preset direction based on the operation distance; and in a case where the enlargement type of the background content area is overall enlargement, determining, based on the operation distance and the original size of the background content area, the enlargement length of the background content area in the preset direction and a fourth enlargement ratio, where the fourth enlargement ratio is a ratio of the original length of the background content area in the preset direction to its original length in the direction perpendicular to the preset direction (a sketch of this zoom-in decision follows this item).
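The zoom-in branch can be sketched the same way; the three cases mirror the text above, while the variable names and the one-dimensional length model are assumptions for illustration.

```python
# Sketch of the zoom-in decision; the three branches follow the cases described above.
def determine_enlargement(op_distance: float,
                          key_len: float,        # current key-area length (preset direction)
                          key_len_orig: float):  # original key-area length
    """Return (key_area_gain, background_gain) along the preset direction."""
    if key_len >= key_len_orig:
        # Key area already at its original length: only the background grows.
        return 0.0, op_distance
    if key_len <= key_len_orig - op_distance:
        # The whole gesture fits inside the key area's missing length.
        return op_distance, 0.0
    # Restore the key area to its original length, spend the rest on the background.
    key_gain = key_len_orig - key_len
    return key_gain, op_distance - key_gain

print(determine_enlargement(300, 900, 900))   # (0.0, 300)
print(determine_enlargement(300, 500, 900))   # (300, 0.0)
print(determine_enlargement(300, 700, 900))   # (200, 100)
```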
  • the background content area includes a first sub-background content area and a second sub-background content area, and the first video sequentially includes the first sub-background content area, the key content area and the second sub-background content area in the preset direction; the reduction parameter further includes a reduction ratio of the first sub-background content area in the preset direction and a reduction ratio of the second sub-background content area in the preset direction, where a reduction ratio in the preset direction refers to the ratio of the length of the corresponding sub-area in the preset direction to the length of the background content area in the preset direction; and the enlargement parameter further includes an enlargement ratio of the first sub-background content area in the preset direction and an enlargement ratio of the second sub-background content area in the preset direction, where an enlargement ratio refers to the ratio of the length of the corresponding sub-area in the preset direction to the length of the background content area in the preset direction.
  • performing scaling processing on the first video according to the operation information and the video information of the first video to obtain the second video includes: determining an operation type corresponding to the operation information; in response to the operation type being a one-key zoom-out operation, removing the background content area other than the key content area in the first video to obtain the second video; and in response to the operation type being a one-key zoom-in operation, enlarging the key content area to the original size of the key content area and enlarging the background content area to the original size of the background content area to obtain the second video (a sketch of the one-key operations follows this item).
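A sketch of the two one-key operations. Cropping to the key content area and a nearest-neighbour resize are assumed stand-ins for "removing the background content area" and "restoring the original sizes"; they are not the disclosed rendering code.

```python
# Illustrative one-key zoom operations; array-based stand-ins, not the disclosed implementation.
import numpy as np

def one_key_zoom_out(frame: np.ndarray, key_area: tuple[int, int, int, int]) -> np.ndarray:
    """Drop everything outside the key content area."""
    x, y, w, h = key_area
    return frame[y:y + h, x:x + w]

def one_key_zoom_in(frame: np.ndarray, original_size: tuple[int, int]) -> np.ndarray:
    """Restore the frame to its original (height, width) with a nearest-neighbour resize."""
    oh, ow = original_size
    ys = np.arange(oh) * frame.shape[0] // oh
    xs = np.arange(ow) * frame.shape[1] // ow
    return frame[ys][:, xs]

frame = np.zeros((1920, 1080, 3), dtype=np.uint8)
print(one_key_zoom_out(frame, (0, 500, 1080, 920)).shape)    # (920, 1080, 3)
print(one_key_zoom_in(frame[::2, ::2], (1920, 1080)).shape)  # (1920, 1080, 3)
```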
  • the step of acquiring the video information of the first video displayed in the video playback interface includes: acquiring multiple frames of video images from the first video; and determining, based on the multiple frames of video images, the key content area contained in the first video.
  • the step of determining the key content area contained in the first video based on the multiple frames of video images includes: for every two adjacent video images among the multiple frames of video images, acquiring a difference image of the two frames of video images to obtain at least one frame of difference image; obtaining a target image based on the at least one frame of difference image, where the pixel value at each position in the target image is the average of the pixel values at that position in the at least one frame of difference image; and determining the image area with the largest area, among at least one image area included in the target image, as the key content area.
  • the step of obtaining the target image based on the at least one frame of difference image includes: processing each frame of difference image to obtain a first image corresponding to that frame of difference image, where one frame of first image includes multiple image areas that are not connected to each other and at least one of the multiple image areas is a multiply connected area; obtaining a second image based on the at least one frame of first image, where the pixel value at each position in the second image is the average of the pixel values at that position in the at least one frame of first image; and processing the second image to obtain the target image, where at least one image area included in the target image is a simply connected area (a sketch of this pipeline follows this item).
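A sketch of the difference-image pipeline described above, using NumPy/SciPy as an assumed toolset. The binarisation threshold, the hole filling used to obtain simply connected areas, and the 0.5 majority vote are illustrative choices, not parameters from the disclosure.

```python
# Sketch of the difference-image key-content-area detection; thresholds and morphology
# are illustrative assumptions, not taken from the patent.
import numpy as np
from scipy import ndimage

def key_content_area(frames: list[np.ndarray], thresh: int = 10) -> tuple[int, int, int, int]:
    """Return (x, y, w, h) of the largest stable-motion region across grayscale frames."""
    # Difference image for every pair of adjacent frames.
    diffs = [np.abs(frames[i + 1].astype(int) - frames[i].astype(int))
             for i in range(len(frames) - 1)]
    # "First images": binarise and fill holes so each region becomes simply connected.
    firsts = [ndimage.binary_fill_holes(d > thresh) for d in diffs]
    # "Second image": per-pixel average of the first images, then re-binarise ("target image").
    target = ndimage.binary_fill_holes(np.mean(firsts, axis=0) > 0.5)
    # Keep the connected region with the largest area as the key content area.
    labels, n = ndimage.label(target)
    if n == 0:
        h, w = frames[0].shape
        return 0, 0, w, h
    sizes = ndimage.sum(target, labels, index=list(range(1, n + 1)))
    ys, xs = np.nonzero(labels == (1 + int(np.argmax(sizes))))
    return int(xs.min()), int(ys.min()), int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)

# Toy example: changing picture content inside rows 40..80, cols 10..70 of black frames.
rng = np.random.default_rng(0)
frames = []
for _ in range(4):
    f = np.zeros((120, 90), dtype=np.uint8)
    f[40:80, 10:70] = rng.integers(0, 256, size=(40, 60), dtype=np.uint8)
    frames.append(f)
print(key_content_area(frames))  # roughly (10, 40, 60, 40)
```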
  • the video display method applied to an electronic device further includes: converting the image located in the target image area of the target image into a grayscale image; acquiring a first number of pixels in the grayscale image whose pixel values are greater than or equal to a first threshold; and determining the ratio of the first number to a second number of pixels included in the grayscale image as a first probability.
  • the video display method applied to the electronic device further includes: in response to the first probability being greater than or equal to a second threshold, performing the step of scaling the first video according to the operation information and the video information of the first video to obtain the second video (a sketch of this check follows this item).
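A sketch of the first-probability check. The grayscale conversion weights and the thresholds are common defaults, assumed here for illustration.

```python
# Illustrative first-probability computation; weights and thresholds are assumptions.
import numpy as np

def first_probability(target_region: np.ndarray, first_threshold: int = 15) -> float:
    """Fraction of pixels in the candidate region brighter than the first threshold."""
    if target_region.ndim == 3:  # RGB -> grayscale
        gray = target_region @ np.array([0.299, 0.587, 0.114])
    else:
        gray = target_region.astype(float)
    return float(np.count_nonzero(gray >= first_threshold) / gray.size)

region = np.zeros((40, 60, 3), dtype=np.uint8)
region[10:30, :, :] = 128           # half the rows carry real picture content
p = first_probability(region)
print(p, p >= 0.5)                  # proceed with the zoom when p >= the second threshold
```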
  • the video display method applied to the electronic device further includes: in response to the first probability being less than the second threshold, acquiring the ordinates of the horizontal straight line segments respectively contained in the multiple frames of video images to obtain a straight line segment position set; determining a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set; determining the area enclosed by a first horizontal line whose ordinate is the first ordinate, a second horizontal line whose ordinate is the second ordinate and the vertical boundaries of the video image as a candidate key content area; and, in response to the candidate key content area being the same as the target image area, performing the step of scaling the first video according to the operation information and the video information of the first video to obtain the second video.
  • the step of determining the key content area contained in the first video based on the multiple frames of video images includes: acquiring the ordinates of the horizontal straight line segments respectively contained in the multiple frames of video images to obtain a straight line segment position set; determining a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set; and determining the area enclosed by a first horizontal line whose ordinate is the first ordinate, a second horizontal line whose ordinate is the second ordinate and the vertical boundaries of the video image as the key content area (a sketch of this line-based determination follows this item).
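A sketch of the straight-line-segment fallback. A simple row-gradient test stands in for a full line detector (for example a Hough transform), and taking the extreme ordinates stands in for the first/second ordinate selection; both are assumptions for illustration.

```python
# Illustrative horizontal-line-based boundary detection; not the disclosed detector.
import numpy as np

def horizontal_edge_rows(gray: np.ndarray, min_run: int = 50) -> list[int]:
    """Ordinates of rows that look like long horizontal edges."""
    row_grad = np.abs(np.diff(gray.astype(int), axis=0))   # vertical gradient
    strong = (row_grad > 40).sum(axis=1)                    # strong-edge pixels per row
    return [y + 1 for y in np.nonzero(strong >= min_run)[0]]

def key_area_from_lines(frames: list[np.ndarray]) -> tuple[int, int]:
    """First and second ordinate: the top and bottom boundary of the key content area."""
    ys = sorted(y for f in frames for y in horizontal_edge_rows(f))
    if not ys:
        return 0, frames[0].shape[0]
    return ys[0], ys[-1]   # take the extreme ordinates of the collected set

frames = [np.zeros((120, 90), dtype=np.uint8) for _ in range(3)]
for f in frames:
    f[40:80, :] = 180      # a bright letterboxed band: edges at y = 40 and y = 80
print(key_area_from_lines(frames))   # (40, 80)
```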
  • the step of acquiring the video information of the first video displayed in the video playback interface includes: sending, to a server, an instruction for acquiring the video information of the first video; and receiving the video information of the first video sent by the server.
  • a video display method applied to a server, including: receiving an instruction for acquiring a video sent by an electronic device; acquiring at least one video from stored videos, the at least one video including a first video; acquiring video information corresponding to the at least one video, where the video information of a video includes a display size of the video and a key content area in the video; and sending the at least one video and the video information of the at least one video to the electronic device; where the video information of a video is the basis on which the electronic device performs scaling processing on the video when it detects a video zoom operation performed on the video playback interface displaying the video, and the video obtained by scaling the video includes the key content area in the video (a sketch of this exchange follows this item).
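A sketch of the server-side exchange. The message shapes, the in-memory store and the example URL are assumptions for illustration, not the disclosed protocol.

```python
# Illustrative server-side handler; data shapes and the store are assumptions.
STORE = {
    "video_001": {
        "url": "https://example.com/video_001.mp4",   # hypothetical URL
        "display_size": (1080, 1920),
        "key_content_area": (0, 500, 1080, 920),       # precomputed on the server
    },
}

def handle_get_videos(request: dict) -> list[dict]:
    """Return at least one video together with its video information."""
    count = request.get("count", 1)
    response = []
    for video_id, info in list(STORE.items())[:count]:
        response.append({
            "video": info["url"],
            "video_info": {                 # the client uses this when it zooms the video
                "display_size": info["display_size"],
                "key_content_area": info["key_content_area"],
            },
        })
    return response

print(handle_get_videos({"count": 1}))
```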
  • the video display method further includes: acquiring multiple frames of video images from the video; and determining, based on the multiple frames of video images, the key content area contained in the video.
  • the step of determining the key content area contained in the video based on the multiple frames of video images includes: for every two adjacent video images among the multiple frames of video images, acquiring a difference image of the two frames of video images to obtain at least one frame of difference image; obtaining a target image based on the at least one frame of difference image, where the pixel value at each position in the target image is the average of the pixel values at that position in the at least one frame of difference image; and determining the image area with the largest area, among at least one image area included in the target image, as the key content area.
  • the step of obtaining the target image based on the at least one frame of difference image includes: processing each frame of difference image to obtain a first image corresponding to that frame of difference image, where one frame of first image includes multiple image areas that are not connected to each other and at least one of the multiple image areas is a multiply connected area; obtaining a second image based on the at least one frame of first image, where the pixel value at each position in the second image is the average of the pixel values at that position in the at least one frame of first image; and processing the second image to obtain the target image, where at least one image area included in the target image is a simply connected area.
  • the video display method applied to the server further includes: converting the image located in the target image area of the target image into a grayscale image; acquiring a first number of pixels in the grayscale image whose pixel values are greater than or equal to a first threshold; and determining the ratio of the first number to a second number of pixels included in the grayscale image as a first probability.
  • the step of sending the at least one video and the video information of the at least one video to the electronic device includes: determining, from the at least one video, a video whose corresponding first probability is greater than or equal to a second threshold; and sending the at least one video and the video information of the video whose corresponding first probability is greater than or equal to the second threshold to the electronic device.
  • the video display method applied to the server further includes: in response to the first probability being less than the second threshold, acquiring the ordinates of the horizontal straight line segments respectively contained in the multiple frames of video images to obtain a straight line segment position set; determining a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set; determining the area enclosed by a first horizontal line whose ordinate is the first ordinate, a second horizontal line whose ordinate is the second ordinate and the vertical boundaries of the video image as a candidate key content area; and sending the video information of a video in which the candidate key content area is the same as the target image area to the electronic device.
  • the step of determining the key content area contained in the video based on the multiple frames of video images includes: acquiring the ordinates of the horizontal straight line segments respectively contained in the multiple frames of video images to obtain a straight line segment position set; determining a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set; and determining the area enclosed by a first horizontal line whose ordinate is the first ordinate, a second horizontal line whose ordinate is the second ordinate and the vertical boundaries of the video image as the key content area.
  • a video display apparatus applied to an electronic device, including: a first acquisition module, configured to receive a video zoom operation performed on a video playback interface and acquire operation information of the video zoom operation; a second acquisition module, configured to acquire video information of a first video displayed in the video playback interface, the video information including at least a display size of the first video and a key content area; a scaling module, configured to perform scaling processing on the first video according to the operation information acquired by the first acquisition module and the video information of the first video acquired by the second acquisition module to obtain a second video, the second video including the key content area; and a display module, configured to, in response to the video zoom operation, display the second video obtained by the scaling module in the video playback interface.
  • the scaling module specifically includes: a first determining unit, configured to determine a scaling method and a scaling parameter of the first video according to the operation information and the video information of the first video; and a scaling unit, configured to scale the first video according to the scaling method and the scaling parameter determined by the first determining unit to obtain the second video.
  • the operation information includes at least an operation type and an operation distance; the first determining unit specifically includes: a first determining subunit, configured to, in a case where the operation type is determined to be a zoom-out operation, determine a reduction mode and a reduction parameter of the first video according to the operation distance and the video information of the first video, where the reduction mode includes at least whether to reduce the key content area and a reduction type, the reduction type includes reduction in a preset direction or overall reduction, and the reduction parameter includes at least a reduction length in the preset direction; and a second determining subunit, configured to, in a case where the operation type is determined to be a zoom-in operation, determine an enlargement mode and an enlargement parameter of the first video according to the operation distance and the video information of the first video, where the enlargement mode includes at least whether to enlarge the key content area and an enlargement type, the enlargement type includes enlargement in the preset direction or overall enlargement, and the enlargement parameter includes at least an enlargement length in the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; the first determining subunit specifically includes: a first determining submodule, configured to determine, according to the key content area included in the video information, a background content area of the first video outside the key content area; a second determining submodule, configured to, in a case where the operation distance is not greater than the length of the background content area in the preset direction, determine that the reduction mode includes not reducing the key content area, reducing the background content area, and a reduction type of the background content area being reduction in the preset direction or overall reduction; a third determining submodule, configured to determine the reduction length in the preset direction based on the operation distance in a case where the reduction type of the background content area is reduction in the preset direction; and a fourth determining submodule, configured to determine a first reduction ratio and the reduction length in the preset direction based on the operation distance and the size of the background content area in a case where the reduction type of the background content area is overall reduction, where the first reduction ratio is a ratio of the length of the background content area in the preset direction to its length in the direction perpendicular to the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; the first determining subunit specifically includes: a fifth determining submodule, configured to determine, according to the video information, a background content area of the first video outside the key content area; a sixth determining submodule, configured to, in a case where the operation distance is greater than the length of the background content area in the preset direction, determine that the reduction mode includes not reducing the key content area and reducing the background content area; and a seventh determining submodule, configured to determine, based on the size of the background content area, a reduction length of the background content area in the preset direction and a reduction length in the direction perpendicular to the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; the first determining subunit specifically includes: an eighth determining submodule, configured to determine, according to the video information, a background content area of the first video outside the key content area; a ninth determining submodule, configured to, in a case where the operation distance is greater than the length of the background content area in the preset direction, determine that the reduction mode includes reducing the key content area, a reduction type of the key content area being reduction in the preset direction or overall reduction, and reducing the background content area, and determine, based on the size of the background content area, a reduction length of the background content area in the preset direction and a reduction length in the direction perpendicular to the preset direction; a tenth determining submodule, configured to, in a case where the reduction type of the key content area is reduction in the preset direction, determine a reduction length of the key content area in the preset direction based on the length of the background content area in the preset direction and the operation distance; and an eleventh determining submodule, configured to, in a case where the reduction type of the key content area is overall reduction, determine the reduction length of the key content area in the preset direction and a corresponding reduction ratio based on the length of the background content area in the preset direction, the operation distance and the size of the key content area.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; the second determining subunit specifically includes: a twelfth determining submodule, configured to determine, according to the video information, an original size corresponding to a background content area of the first video outside the key content area; a thirteenth determining submodule, configured to, in a case where the length of the key content area in the preset direction is equal to the original length of the key content area, determine that the enlargement mode includes not enlarging the key content area, enlarging the background content area, and an enlargement type of the background content area being enlargement in the preset direction or overall enlargement; a fourteenth determining submodule, configured to determine the enlargement length of the background content area in the preset direction based on the operation distance in a case where the enlargement type of the background content area is enlargement in the preset direction; and a fifteenth determining submodule, configured to, in a case where the enlargement type of the background content area is overall enlargement, determine, based on the operation distance and the original size of the background content area, the enlargement length of the background content area in the preset direction and a first enlargement ratio, where the first enlargement ratio is a ratio of the original length of the background content area in the preset direction to its original length in the direction perpendicular to the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; the second determining subunit specifically includes: a sixteenth determining submodule, configured to, in a case where the length of the key content area in the preset direction is less than or equal to the difference between the original length of the key content area and the operation distance, determine that the enlargement mode includes enlarging the key content area and an enlargement type of the key content area being enlargement in the preset direction or overall enlargement; a seventeenth determining submodule, configured to determine the enlargement length of the key content area in the preset direction based on the operation distance in a case where the enlargement type of the key content area is enlargement in the preset direction; and an eighteenth determining submodule, configured to, in a case where the enlargement type of the key content area is overall enlargement, determine, based on the operation distance and the size of the key content area, the enlargement length of the key content area in the preset direction and a second enlargement ratio, where the second enlargement ratio is a ratio of the length of the key content area in the preset direction to its length in the direction perpendicular to the preset direction.
  • the operation distance is a projection distance of the video zoom operation in the preset direction; the second determining subunit specifically includes: a nineteenth determining submodule, configured to, in a case where the length of the key content area in the preset direction is greater than the difference between the original length of the key content area and the operation distance, determine, according to the video information, an original size corresponding to a background content area of the first video outside the key content area; a twentieth determining submodule, configured to determine that the enlargement mode includes enlarging the key content area, an enlargement type of the key content area being enlargement in the preset direction or overall enlargement, enlarging the background content area, and an enlargement type of the background content area being enlargement in the preset direction or overall enlargement; a twenty-first determining submodule, configured to, in a case where the enlargement type of the key content area is enlargement in the preset direction, determine the enlargement length of the key content area in the preset direction based on the length of the key content area in the preset direction and the original length of the key content area in the preset direction; a twenty-second determining submodule, configured to, in a case where the enlargement type of the key content area is overall enlargement, determine, based on the size of the key content area and the original length of the key content area in the preset direction, the enlargement length of the key content area in the preset direction and a third enlargement ratio, where the third enlargement ratio is a ratio of the enlargement length of the key content area in the preset direction to the enlargement length in the direction perpendicular to the preset direction; a twenty-third determining submodule, configured to determine the enlargement length of the background content area in the preset direction based on the operation distance in a case where the enlargement type of the background content area is enlargement in the preset direction; and a twenty-fourth determining submodule, configured to, in a case where the enlargement type of the background content area is overall enlargement, determine, based on the operation distance and the original size of the background content area, the enlargement length of the background content area in the preset direction and a fourth enlargement ratio.
  • the background content area includes a first sub-background content area and a second sub-background content area, and the first video sequentially includes the first sub-background content area, the key content area and the second sub-background content area in the preset direction; the reduction parameter further includes a reduction ratio of the first sub-background content area in the preset direction and a reduction ratio of the second sub-background content area in the preset direction, where a reduction ratio in the preset direction refers to the ratio of the length of the corresponding sub-area in the preset direction to the length of the background content area in the preset direction; and the enlargement parameter further includes an enlargement ratio of the first sub-background content area in the preset direction and an enlargement ratio of the second sub-background content area in the preset direction, where an enlargement ratio refers to the ratio of the length of the corresponding sub-area in the preset direction to the length of the background content area in the preset direction.
  • the video display apparatus applied to an electronic device further includes: a first determining module, configured to determine an operation type corresponding to the video zoom operation; a one-key zoom-out module, configured to, in response to the operation type being a one-key zoom-out operation, remove the background content area other than the key content area in the first video to obtain the second video; and a one-key zoom-in module, configured to, in response to the operation type being a one-key zoom-in operation, enlarge the key content area to the original size of the key content area and enlarge the background content area to the original size of the background content area to obtain the second video.
  • the second acquisition module specifically includes: a first acquisition unit, configured to acquire multiple frames of video images from the first video; and a second acquisition unit, configured to determine, based on the multiple frames of video images, the key content area contained in the first video.
  • the second acquisition unit specifically includes: a first acquisition subunit, configured to, for every two adjacent video images among the multiple frames of video images, acquire a difference image of the two frames of video images to obtain at least one frame of difference image; a second acquisition subunit, configured to obtain a target image based on the at least one frame of difference image, where the pixel value at each position in the target image is the average of the pixel values at that position in the at least one frame of difference image; and a third determining subunit, configured to determine the image area with the largest area, among at least one image area included in the target image, as the key content area.
  • the second acquisition subunit specifically includes: a first acquisition submodule, configured to process each frame of difference image to obtain a first image corresponding to that frame of difference image, where one frame of first image includes multiple image areas that are not connected to each other and at least one of the multiple image areas is a multiply connected area; a second acquisition submodule, configured to obtain a second image based on the at least one frame of first image, where the pixel value at each position in the second image is the average of the pixel values at that position in the at least one frame of first image; and a third acquisition submodule, configured to process the second image to obtain the target image, where at least one image area included in the target image is a simply connected area.
  • the video display apparatus applied to an electronic device further includes: a first conversion module, configured to convert the image located in the target image area of the target image into a grayscale image; a third acquisition module, configured to acquire a first number of pixels in the grayscale image whose pixel values are greater than or equal to a first threshold; and a second determining module, configured to determine the ratio of the first number to a second number of pixels included in the grayscale image as a first probability.
  • the video presentation apparatus for an electronic device further includes a first triggering module configured to trigger the scaling module in response to the first probability being greater than or equal to a second threshold.
  • the video display apparatus applied to an electronic device further includes: a fourth acquisition module, configured to, in response to the first probability being less than the second threshold, acquire the ordinates of the horizontal straight line segments respectively contained in the multiple frames of video images to obtain a straight line segment position set; a third determining module, configured to determine a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set; a fourth determining module, configured to determine the area enclosed by a first horizontal line whose ordinate is the first ordinate, a second horizontal line whose ordinate is the second ordinate and the vertical boundaries of the video image as a candidate key content area; and a second triggering module, configured to trigger the scaling module in response to the candidate key content area being the same as the target image area.
  • the second acquisition unit specifically includes: a third acquisition subunit, configured to acquire the ordinates of the horizontal straight line segments respectively contained in the multiple frames of video images to obtain a straight line segment position set; a fourth determining subunit, configured to determine a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set; and a fifth determining subunit, configured to determine the area enclosed by a first horizontal line whose ordinate is the first ordinate, a second horizontal line whose ordinate is the second ordinate and the vertical boundaries of the video image as the key content area.
  • the second acquisition module specifically includes: a first sending module, configured to send, to a server, an instruction for acquiring the video information of the first video; and a first receiving module, configured to receive the video information of the first video sent by the server.
  • a video display apparatus applied to a server, including: a second receiving module, configured to receive a video acquisition instruction sent by an electronic device; a fifth acquisition module, configured to acquire at least one video from stored videos, the at least one video including a first video; a sixth acquisition module, configured to acquire video information corresponding to the at least one video, where the video information of a video includes a display size of the video and a key content area in the video; and a second sending module, configured to send the at least one video and the video information of the at least one video to the electronic device; where the video information of a video is the basis on which the electronic device performs scaling processing on the video when it detects a video zoom operation performed on the video playback interface displaying the video, and the video obtained after scaling the video includes the key content area in the video.
  • the video display apparatus applied to a server further includes: a seventh acquisition module, configured to acquire multiple frames of video images from the video; and a fifth determining module, configured to determine, based on the multiple frames of video images, the key content area contained in the video.
  • the fifth determining module specifically includes: a third acquisition unit, configured to, for every two adjacent video images among the multiple frames of video images, acquire a difference image of the two frames of video images to obtain at least one frame of difference image; a second determining unit, configured to obtain a target image based on the at least one frame of difference image, where the pixel value at each position in the target image is the average of the pixel values at that position in the at least one frame of difference image; and a third determining unit, configured to determine the image area with the largest area, among at least one image area included in the target image, as the key content area.
  • the second determining unit specifically includes: a fourth acquisition subunit, configured to process each frame of difference image to obtain a first image corresponding to that frame of difference image, where one frame of first image includes multiple image areas that are not connected to each other and at least one of the multiple image areas is a multiply connected area; a fifth acquisition subunit, configured to obtain a second image based on the at least one frame of first image, where the pixel value at each position in the second image is the average of the pixel values at that position in the at least one frame of first image; and a sixth acquisition subunit, configured to process the second image to obtain the target image, where at least one image area included in the target image is a simply connected area.
  • the video display apparatus applied to a server further includes: a second conversion module, configured to convert the image located in the target image area of the target image into a grayscale image; an eighth acquisition module, configured to acquire a first number of pixels in the grayscale image whose pixel values are greater than or equal to a first threshold; and a sixth determining module, configured to determine the ratio of the first number to a second number of pixels included in the grayscale image as a first probability.
  • the second sending module specifically includes: a fourth determining unit, configured to determine, from the at least one video, a video whose corresponding first probability is greater than or equal to a second threshold; and a first sending unit, configured to send the at least one video and the video information of the video whose corresponding first probability is greater than or equal to the second threshold to the electronic device.
  • the video display apparatus applied to a server further includes: a ninth acquisition module, configured to, in response to the first probability being less than the second threshold, acquire the ordinates of the horizontal straight line segments respectively contained in the multiple frames of video images to obtain a straight line segment position set; a seventh determining module, configured to determine a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set; an eighth determining module, configured to determine the area enclosed by a first horizontal line whose ordinate is the first ordinate, a second horizontal line whose ordinate is the second ordinate and the vertical boundaries of the video image as a candidate key content area; and a third sending module, configured to send the video information of a video in which the candidate key content area is the same as the target image area to the electronic device.
  • the fifth determining module specifically includes: a fourth acquisition unit, configured to acquire the ordinates of the horizontal straight line segments respectively contained in the multiple frames of video images to obtain a straight line segment position set; a fifth determining unit, configured to determine a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set; and a sixth determining unit, configured to determine the area enclosed by a first horizontal line whose ordinate is the first ordinate, a second horizontal line whose ordinate is the second ordinate and the vertical boundaries of the video image as the key content area.
  • an electronic device, including: a processor; and a first memory for storing instructions executable by the processor; where the processor is configured to execute the instructions to implement the video display method according to the above first aspect.
  • a server, including: a processor; and a second memory for storing instructions executable by the processor; where the processor is configured to execute the instructions to implement the video display method according to the above second aspect.
  • a video presentation system comprising: the server according to the fifth aspect and at least one electronic device according to the fourth aspect.
  • a non-volatile computer-readable storage medium, where, in response to instructions in the non-volatile computer-readable storage medium being executed by an electronic device, the electronic device is capable of executing the video display method described in the above first aspect.
  • a non-volatile computer-readable storage medium, where, in response to instructions in the non-volatile computer-readable storage medium being executed by a server, the server is capable of executing the video display method described in the above second aspect.
  • a computer program product, which can be directly loaded into the internal memory of a computer and contains software code, where the computer program, after being loaded and executed by the computer, can implement the video display method described in the above first aspect.
  • a computer program product, which can be directly loaded into the internal memory of a computer and contains software code, where the computer program, after being loaded and executed by the computer, can implement the video display method described in the above second aspect.
  • in a case where a video zoom operation performed on the video playback interface is received, the operation information of the video zoom operation and the video information of the first video displayed in the video playback interface are acquired and the first video is scaled accordingly; because the video information includes the key content area, the scaled second video includes the key content area, which avoids the loss of key content caused by the limited display space of the video playback interface during video zooming and improves the video display effect during the video zooming process.
  • FIGS. 1a to 1b are schematic diagrams illustrating a related technology involved in an embodiment of the present disclosure according to an exemplary embodiment
  • FIG. 2 is an architectural diagram of an implementation environment according to an exemplary embodiment
  • FIG. 3 is a flowchart of a video display method applied to an electronic device according to an exemplary embodiment
  • FIG. 4 is a schematic diagram of a display manner of a video zoom button in a display interface shown according to an exemplary embodiment
  • FIGS. 5a to 5d are schematic diagrams showing a video reduction process according to an exemplary embodiment
  • FIG. 6 is a schematic diagram showing the positional relationship between a background content area and a key content area according to an exemplary embodiment
  • FIGS. 7a to 7d are schematic diagrams illustrating a method for determining a reduced length of a background content area according to an exemplary embodiment
  • FIG. 8 is a schematic diagram illustrating another manner of determining the reduced length of a background content area according to an exemplary embodiment
  • FIGS. 9a to 9b are schematic diagrams showing a reduction manner of the first video according to an exemplary embodiment
  • FIGS. 10a to 10b are schematic diagrams showing another reduction manner of the first video according to an exemplary embodiment
  • FIG. 11 is a schematic diagram showing still another reduction manner of the first video according to an exemplary embodiment
  • FIG. 12 is a schematic diagram showing still another reduction manner of the first video according to an exemplary embodiment
  • FIG. 13 is a schematic diagram showing still another reduction manner of the first video according to an exemplary embodiment
  • FIGS. 14a to 14b are schematic diagrams of multi-frame difference images according to an exemplary embodiment
  • FIGS. 15a to 15d are schematic diagrams showing a first image obtained by processing a difference image according to an exemplary embodiment
  • FIGS. 16a to 16b are schematic diagrams illustrating the purpose of processing the second image according to an exemplary embodiment
  • FIGS. 17a to 17c are schematic diagrams showing the relative positions of the target contour area and the real key content area according to an exemplary embodiment
  • FIG. 18 is a schematic diagram of three frames of third images obtained through edge detection according to an exemplary embodiment
  • FIG. 19 is a schematic diagram of a fourth image obtained through line detection processing according to an exemplary embodiment
  • FIGS. 20a to 20c are schematic diagrams illustrating a clustering process according to an exemplary embodiment
  • FIG. 21 is a flow chart of a video display method applied to a server according to an exemplary embodiment
  • FIG. 22 is a structural diagram of a video display apparatus applied to an electronic device according to an exemplary embodiment
  • FIG. 23 is a structural diagram of a video presentation apparatus applied to a server according to an exemplary embodiment
  • FIG. 24 is a block diagram of an electronic device according to an exemplary embodiment
  • FIG. 25 is a block diagram of a server according to an exemplary embodiment.
  • the video playing client can run in the electronic device, and the electronic device can display the video playing interface and the content display interface in the process of running the video playing client.
  • the video playing interface is used to display the video
  • the content display interface is used to display the content related to the video.
  • the video-related content may include one or more of: user comment content for the video, a list of episodes of the video, links to other videos related to the video, and comment content of other videos related to the video.
  • the video playback client mentioned in the embodiment of the present disclosure may be an application client or a web client.
  • the video playback client (hereinafter referred to as the client) has a video zoomable playback function, and the video zoomable playback function enables the video playback client to display video-related content while displaying the video.
  • in the related art, however, the key content contained in the video may be missing during such zooming.
  • FIGS. 1a to 1b are schematic diagrams illustrating a related technology involved in an embodiment of the present disclosure, according to an exemplary embodiment.
  • FIGS. 1a to 1b are described by taking the video displayed on the video playing interface as the first video, and the content display interface displaying user comment content for the first video, as an example.
  • the display screen of the electronic device displays the video playback interface in full screen. Since the video playback interface displays the first video, the display screen of the electronic device displays the first video in full screen, and the first video includes the background content area 11 and the key content area 12.
  • the background content area 11 in the first video may be an image filled with black, or a Gaussian blurred image.
  • the background content area 11 is an image filled with black as an example for illustration.
  • the key content area 12 in the first video is an area in the first video that actually has picture content.
  • the display screen of the electronic device in FIG. 1a displays the video playback interface in full screen
  • the content display interface is not displayed.
  • the user may perform a zoom-out operation on the video playback interface; as shown in FIG. 1b, a video zoom-out operation is performed by sliding up.
  • after detecting the zoom-out operation, the client zooms out the first video, and after the first video is zoomed out, the corresponding video playback interface is zoomed out accordingly.
  • FIG. 1b is a schematic diagram of the video playback interface 10 after being reduced. After the video playback interface and the first video are zoomed out, the electronic device displays the content display interface 13.
  • the length in the vertical direction and the length in the horizontal direction of the first video are reduced at the same time.
  • the key content area will also shrink.
  • the reduced first video is called the second video
  • the area 14 framed by a white dotted line as shown in FIG. 1b is the area where the second video is located
  • the black image located outside the second video in the video playback interface 10 is a background image added by the client.
  • the length of the first video in the vertical direction is A1, and the length in the horizontal direction is B1; the length of the key content area in the first video in the vertical direction is A2, and the length in the horizontal direction is B2 .
  • the length of the second video in the vertical direction is A3 (A3 is smaller than A1), and the length in the horizontal direction is B3 (B3 is smaller than B1); the length of the key content area in the second video in the vertical direction is A4 (A4 is smaller than A2), and the length in the horizontal direction is B4 (B4 is smaller than B2).
  • Fig. 2 is an architectural diagram of an implementation environment according to an exemplary embodiment.
  • the following video presentation method can be applied to the implementation environment, and the implementation environment includes: a server 21 and at least one electronic device 22 .
  • the electronic device 22 and the server 21 may establish a connection and communicate through a wireless network.
  • the electronic device 22 may be any electronic product that can interact with the user in one or more ways such as a keyboard, a touchpad, a touchscreen, a remote control, voice interaction or a handwriting device, for example, a mobile phone, a tablet, a PDA, a PC, a wearable device, a smart TV, and the like.
  • in response to the client being an application client, the electronic device 22 may have the client installed; in response to the client being a web client, the electronic device 22 may display the web client through a browser.
  • the video display apparatus applied to the electronic device provided by the embodiment of the present disclosure may be a plug-in of the client.
  • the server 21 may be a server, a server cluster composed of multiple servers, or a cloud computing service center.
  • the server 21 may include a processor, memory, and a network interface, among others.
  • the server 21 stores one or more videos uploaded by the user, and the server 21 can send one or more videos to the electronic device 22 .
  • Electronic device 22 may display one or more videos.
  • FIG. 2 is just an example, and FIG. 2 shows three electronic devices 22 .
  • the number of electronic devices 22 can be set according to actual requirements, and the embodiment of the present disclosure does not limit the number of electronic devices 22 .
  • This implementation environment involves two application scenarios.
  • the electronic device 22 is used to run a video client, the electronic device can obtain the video from the server 21, the electronic device itself obtains the video information of the video, and executes the video display method provided by the embodiment of the present disclosure.
  • the server 21 is used for sending video to the electronic device 22 running the video client.
  • the electronic device 22 is used to run a video client, and the electronic device can obtain the video and the video information of the video from the server 21, and execute the video display method provided by the embodiment of the present disclosure.
  • the server 21 is configured to send the video and video information of the video to the electronic device 22 running the video client.
  • FIG. 3 is a flow chart of a video display method applied to an electronic device according to an exemplary embodiment, and the method includes the following steps S31 to S34 in the implementation process.
  • step S31 a video zooming operation implemented on the video playing interface is received, and operation information of the video zooming operation is acquired.
  • step S32 video information of the first video displayed in the video playback interface is acquired.
  • the video information includes at least the display size and key content area of the first video.
  • step S33 scaling processing is performed on the first video according to the operation information and the video information of the first video to obtain a second video.
  • the second video includes the key content area.
  • step S34 in response to the video zooming operation, the second video is displayed in the video playing interface.
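  • the overall flow of steps S31 to S34 can be sketched as follows; this is only a minimal illustration, the class Video, the function handle_zoom and the specific field names are hypothetical rather than part of the disclosure, and the sketch assumes the simplest zoom-out behaviour described later (shortening only the background content area in the vertical direction).
      from dataclasses import dataclass, replace

      @dataclass
      class Video:
          height: float      # length of the video in the vertical direction (cm)
          width: float       # length of the video in the horizontal direction (cm)
          key_height: float  # length of the key content area in the vertical direction (cm)
          key_width: float   # length of the key content area in the horizontal direction (cm)

      def handle_zoom(first_video, op_type, op_distance):
          # S31/S32: the operation information and video information are assumed to be given.
          bg_height = first_video.height - first_video.key_height
          if op_type == "zoom_out":
              # S33: shrink only the background content area (never below zero),
              # so the key content area is fully preserved in the second video.
              cut = min(op_distance, bg_height)
              second_video = replace(first_video, height=first_video.height - cut)
          else:
              second_video = first_video  # enlargement is handled in step A12 below
          # S34: the caller would display second_video in the video playing interface.
          return second_video

      print(handle_zoom(Video(16.0, 9.0, 9.0, 9.0), "zoom_out", 5.0))
      # -> Video(height=11.0, width=9.0, key_height=9.0, key_width=9.0)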
  • the client running on the electronic device 22 at least includes a video playback interface and a content display interface.
  • the video playback interface and the content display interface belong to the same window; exemplarily, the video playback interface and the content display interface belong to different windows.
  • the first video includes at least the key content area 12 .
  • the embodiments of the present disclosure do not limit the relative positional relationship between the video playback interface and the content display interface.
  • the video playback interface is located on the left side of the content display interface, or the video playback interface is located on the right side of the content display interface, or the video playback interface is located above the content display interface, or the video playback interface is located below the content display interface.
  • the following describes the video zoom operation, the first video, the display size of the first video, and the key content area provided by the embodiments of the present disclosure.
  • step S31 there are various operation modes of the video zooming operation implemented on the video playing interface.
  • the operation method of the video zoom operation is a key operation.
  • the video zoom key may be a physical key in an electronic device, such as one or more keys in a keyboard.
  • the video zoom button may be a virtual button in a display interface (the display interface includes at least one of a video playback interface and a content display interface), for example, the display interface displays a video zoom button.
  • FIG. 4 is a schematic diagram showing a display manner of a video zoom button in a display interface according to an exemplary embodiment. As shown in FIG. 4 , a video zoom-out button 41 and a video zoom-in button 43 are displayed at a fixed position on the display interface.
  • the video zoom-out button 41 and the video zoom-in button 43 may also be displayed in the display menu 42 suspended on the display interface.
  • the display menu 42 has movable and hideable features.
  • in response to detecting a first preset operation for the display interface, the display menu 42 that was previously in the hidden state is displayed, and in response to detecting a second preset operation for the display interface, the display menu 42 that was previously in the displayed state is hidden.
  • the first preset operation and the second preset operation may be the same or different.
  • the display menu 42 may be suspended at different positions of the display interface to avoid the display menu 42 from obscuring key content areas of the first video.
  • the display mode of the video zoom button on the display interface may include various modes.
  • FIG. 4 in the embodiment of the present disclosure provides two display modes on the display interface, but the embodiment of the present disclosure is not limited to the display modes shown in FIG. 4; any display mode is within the protection scope of the embodiments of the present disclosure.
  • the operation method of the video zoom operation is a sliding operation.
  • swipe action is “swipe up” or “swipe down”.
  • the embodiment of the present disclosure provides two sliding operations of "slide up" and "slide down", but the embodiment of the present disclosure is not limited to the above-mentioned sliding operations, and any sliding operation is within the protection scope of the embodiments of the present disclosure.
  • the sliding operation can be "Draw a circle” or "Draw a checkmark”.
  • the operation mode of the video zoom operation is a voice operation.
  • the voice operation may be an operation of zooming the first video displayed on the video playback interface, such as "reduce the video", or an operation of enlarging the content display interface, such as "display more user comment content".
  • the voice command in the voice operation may carry the length of the first video that needs to be scaled, for example, the voice command is: "shorten the video by 5 cm”.
  • the first video is an original video that has not undergone scaling processing or a video that has undergone scaling processing one or more times.
  • in response to the first video being an unscaled original video, the first video may be the video uploaded by the user to the server 21, or a video obtained by the server 21 processing the video uploaded by the user.
  • in response to the first video being an unscaled original video, the video playback interface occupies the entire area of the display screen, that is, the electronic device displays the first video in full screen, as shown in FIG. 1a.
  • in response to the first video being a video that has undergone scaling processing, the video playing interface is a partial area of the display screen.
  • the display size of the first video includes at least the length in the vertical direction and the length in the horizontal direction of the first video.
  • the key content area of the first video is an area in the first video that actually has picture content.
  • the above-mentioned key content area of the first video refers to the location area in the first video where the key content is located.
  • the size of the key content area included in the second video may be the same as the size of the key content area included in the first video, or the second video is composed of key content areas in the vertical direction.
  • the size of the key content area includes the length of the key content area in the vertical direction and the length of the key content area in the horizontal direction.
  • the size of the key content area included in the above-mentioned second video may be the same as the size of the key content area included in the first video, which means that the length of the key content area in the second video in the horizontal direction is the same as the length of the key content area in the first video in the horizontal direction, and the length of the key content area in the second video in the vertical direction is the same as the length of the key content area in the first video in the vertical direction.
  • the area other than the key content area in the first video is reduced while the size of the key content area is kept unchanged, so the key content in the key content area of the obtained second video will not be missing, improving the video display effect during video scaling.
  • the vertical length of the key content area in the second video is the same as the vertical length of the key content area in the first video, or the vertical length of the key content area in the second video is smaller than the vertical length of the key content area in the first video.
  • the second video may or may not include the background content area in the horizontal direction.
  • the zoomed second video includes the key content area, so as to avoid the lack of key content during the video zooming process due to the limited display space of the video playback interface.
  • by obtaining the operation information of the video zoom operation and the video information of the first video displayed in the video playback interface in the case of receiving the video zoom operation implemented on the video playback interface, the video is zoomed; since the video information includes the key content area information, the zoomed second video includes the key content area, so as to avoid the lack of key content during the video zooming process due to the limited display space of the video playback interface, thereby improving the video display effect during video scaling.
  • step S33 includes steps A1 to A2 in a specific implementation process.
  • step A1 a scaling method and scaling parameters of the first video are determined according to the operation information and video information of the first video.
  • step A2 the first video is scaled according to the scaling method and the scaling parameter to obtain the second video.
  • the zooming manner of the first video is overall zooming or preset direction zooming.
  • the preset direction may be a horizontal direction or a vertical direction.
  • in response to the preset direction being the horizontal direction, the preset direction scaling is to scale the length in the horizontal direction; in response to the preset direction being the vertical direction, the preset direction scaling is to scale the length in the vertical direction.
  • Overall scaling refers to scaling the length in the vertical direction as well as the length in the horizontal direction.
  • in response to the zooming manner of the first video being overall zooming, the zoom parameters include the zoom length in the preset direction and the zoom ratio; in response to the zooming manner of the first video being preset direction zooming, the zoom parameters include the zoom length in the preset direction.
  • the above scaling ratio refers to the ratio of the length in the preset direction to the length in the direction perpendicular to the preset direction.
  • in response to the first video including the background content area and the key content area, and in response to the background content area and the key content area being reduced as a whole, the size of the key content area of the reduced first video may be too small, and the key content cannot be viewed clearly by the user, thereby affecting the video display effect during the video zooming process.
  • FIGS. 5a to 5d are schematic diagrams of a video reduction process according to an exemplary embodiment; the display interface of the electronic device in FIG. 5a displays the first video in full screen.
  • the user can perform a zoom-out operation on the video playback interface. As shown in Figure 5b, a sliding operation of sliding upward is performed.
  • FIG. 5b is a schematic diagram of the first video after the first reduction. After the first video is zoomed out, the electronic device displays the content display interface 13.
  • the area 14 framed by the white dashed line shown in FIG. 5b is the area where the second video obtained after the first video is reduced is located, and the black image displayed outside the area 14 framed by the white dashed line in the video playback interface 10 is a background image added by the client.
  • the zoom-out operation can be performed again, as shown in FIG. 5c.
  • the video displayed on the video playback interface in FIG. 5c continues to shrink, and the content display interface 13 continues to expand. It can be understood that since the content display interface 13 is enlarged, the content display interface 13 can display more content. Exemplarily, the content displayed in the second display area may not be updated, and the existing content displayed in the second display area may be enlarged.
  • the background content area and the key content area are scaled as a whole.
  • the key content area in the first video will be reduced. If the size is too small, the user cannot see the key content displayed in the key content area, which affects the video display effect during the video zooming process.
  • the background content area and the key content area are reduced as two independent entities; for example, the background content area can be reduced while the key content area is not reduced, or, after the background content area is reduced, the key content area is reduced.
  • the operation information of the video zoom operation implemented in the video playback interface includes at least the operation type and the operation distance.
  • in the first implementation manner, a fixed length corresponding to a video zoom operation is preset, and the fixed length is the operation distance.
  • the video zoom operation may be any one of a key operation, a sliding operation, and a voice operation without a zoom length.
  • the first video is zoomed by the fixed length in the preset direction, for example, 1 cm.
  • the fixed length may be determined based on the actual situation, and the embodiment of the present disclosure does not limit the specific value of the fixed length.
  • in the second implementation manner, the operation distance is determined based on the video zoom operation.
  • in response to the video zoom operation being a sliding operation, the operation distance can be calculated based on the length of the sliding track; in response to the zoom operation being a voice operation, the operation distance is the length carried by the voice command, for example, if the voice command is "zoom out the video by 5 cm", the operation distance is 5 cm; in response to the zoom operation being a key operation, the operation distance is calculated based on the duration and/or strength of pressing the video zoom key.
  • exemplarily, the operation distance may be obtained by multiplying the length of the sliding track by a first preset ratio; the first preset ratio may be less than 1, or any value greater than 1.
  • the first preset ratio can be automatically changed based on the user's operating habits. For example, when the user zooms out the video, the user often performs multiple zoom-out operations before the background content area in the preset direction can be completely zoomed out, which indicates that the user's actions are conservative, for example, the sliding length is small, or the strength and/or duration of pressing the video zoom-out button is small; in this case, the electronic device 22 can set the first preset ratio to be greater than 1, and the specific value of the first preset ratio can be determined statistically over multiple operations.
  • conversely, the electronic device 22 may set the first preset ratio to be less than 1, and the specific value of the first preset ratio may likewise be determined statistically over multiple operations.
  • the operation type may be a zoom-out operation or a zoom-in operation.
  • step A1 includes step A11 and step A12.
  • in step A11, when it is determined that the operation type is a zoom-out operation, a zoom-out mode and zoom-out parameters of the first video are determined according to the operation distance and the video information of the first video; the zoom-out mode at least includes whether to shrink the key content area and the reduction type; the reduction type includes reduction in a preset direction or overall reduction; and the reduction parameter at least includes a reduction length in the preset direction.
  • in this way, the key content area may not be reduced, so it will not occur that, after the first video is reduced to the second video, the size of the key content area in the second video is too small for the user to see the key content clearly, which would affect the video display effect during the video zooming process.
  • in step A12, when it is determined that the operation type is an enlargement operation, the enlargement method and enlargement parameters of the first video are determined according to the operation distance and the video information of the first video; the enlargement method at least includes whether to enlarge the key content area and the enlargement type; the enlargement type includes preset direction enlargement or overall enlargement; and the enlargement parameter at least includes the enlargement length in the preset direction.
  • Step A11 may involve three situations, and the three situations will be described below.
  • the first case of step A11 includes steps B1 to B4.
  • step B1 a background content area of the first video outside the key content area is determined according to the video information.
  • determining the background content area of the first video outside the key content area includes: determining at least one of the length of the background content area in a preset direction, the length of the background content area in a direction perpendicular to the preset direction, and the location information of the background content area in the first video.
  • in step B2, under the condition that the operating distance is not greater than the length of the background content area in the preset direction, it is determined that the reduction mode includes not reducing the key content area, reducing the background content area, and the reduction type of the background content area being a preset direction reduction or an overall reduction.
  • step B3 in the case that the reduction type of the background content area is reduction in a preset direction, a reduction length in the preset direction is determined based on the operation distance.
  • the reduced length of the background content area in the preset direction is equal to the operation distance.
  • in step B4, when the reduction type of the background content area is overall reduction, a first reduction ratio and a reduction length in the preset direction are determined based on the operation distance and the size of the background content area, and the first reduction ratio is the ratio of the length of the background content area in the preset direction to its length in the direction perpendicular to the preset direction.
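  • as a rough illustration of this first case (hypothetical names, preset direction assumed to be the vertical direction), the preset-direction reduction of steps B2 and B3 can be computed as follows; the overall-reduction variant of step B4 is sketched after the FIG. 10 example below.
      # Minimal sketch of steps B2 and B3: the key content area is not reduced and the
      # background content area is reduced by the operation distance in the preset direction.
      def case1_preset_direction_reduction(op_distance, bg_len):
          # bg_len: length of the background content area in the preset direction (cm)
          if op_distance > bg_len:
              raise ValueError("the second or third case of step A11 applies instead")
          reduced_len = op_distance            # step B3: reduced length equals the operation distance
          remaining_bg = bg_len - reduced_len  # background left after the reduction
          return reduced_len, remaining_bg

      print(case1_preset_direction_reduction(2, 7))  # (2, 5): reduce 2 cm, 5 cm of background remains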
  • the background content area is described below.
  • the background content area has various representation forms, and the embodiments of the present disclosure provide but are not limited to the following two:
  • in the first type, the background content contained in the background content area is a black background image or a color image.
  • the background content included in the first video is added by the user before the first video is uploaded to the server; or added by the server after the server receives the first video.
  • in the second type, the background content contained in the background content area is an image after Gaussian blurring.
  • the background content in the background content area may further include video content corresponding to the first video, such as a video title, or narration or subtitles corresponding to the first video.
  • the location of the background content area and the key content area is described below.
  • the relative positions of the background content area and the key content area include various, and the embodiments of the present disclosure provide, but are not limited to: the background content area is located above the key content area, and/or the background content area is located below the key content area , and/or the background content area is located to the left of the key content area, and/or the background content area is located to the right of the key content area.
  • the background content area 11 includes a first sub-background content area and a second sub-background content area, one sub-background content area is located above the key content area 12, and the other sub-background content area is located below the key content area 12.
  • the first video includes a background content area and a key content area. That is, in the vertical direction, the length of the first video is equal to the sum of the length of the background content area and the length of the key content area; in the horizontal direction, the length of the first video is equal to the sum of the length of the background content area and the length of the key content area.
  • the first video includes the background content area and the key content area
  • in response to the length of the first video in the preset direction being greater than the length of the key content area in the preset direction, it is determined that the first video has a background content area in the preset direction.
  • the length of the background content area in the preset direction is equal to the length of the first video in the preset direction minus the length of the key content area in the preset direction.
  • in response to the length of the first video in the preset direction being equal to the length of the key content area in the preset direction, it is determined that there is no background content area in the first video in the preset direction.
  • the length of the first video in the preset direction and the length of the key content area in the preset direction are described below with specific examples.
  • in response to the vertical length of the first video being 10 cm and the vertical length of the key content area being 10 cm, it is determined that the first video does not have a background content area in the vertical direction.
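  • a small sketch of this determination (hypothetical names; lengths measured in the preset direction):
      def background_length(video_len, key_len):
          # video_len: length of the first video in the preset direction
          # key_len:   length of the key content area in the preset direction
          if video_len > key_len:
              return video_len - key_len  # a background content area exists in the preset direction
          return 0.0                      # equal lengths: no background content area

      print(background_length(10, 10))  # 0.0 -> no background content area, as in the 10 cm example
      print(background_length(16, 9))   # 7.0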
  • the "preset direction" mentioned in the embodiments of the present disclosure may be a vertical direction or a horizontal direction.
  • the following describes the method of reducing the background content area in the embodiment of the present disclosure in combination with the positional relationship between the background content area and the key content area.
  • the method of reducing the background content area includes but is not limited to the following two situations.
  • the background content area is located on one side of the key content area as a whole, for example, the background content area is located above the key content area as a whole, or the background content area is located below the key content area as a whole, or the background content area as a whole is located on the left side of the key content area, or the background content area as a whole is located on the right side of the key content area.
  • the way to reduce the background content area is to reduce the background content area as a whole.
  • FIG. 6 is a schematic diagram showing the positional relationship between the background content area and the key content area according to an exemplary embodiment.
  • FIG. 6 takes an example in which the background content area 11 is located below the key content area 12 as a whole.
  • the electronic device 22 displays the first video in full screen.
  • the right side of FIG. 6 shows the second video obtained after the background content area is reduced in a preset direction.
  • the background content area is on both sides of the key content area. It is assumed that the background content area includes a first sub-background content area and a second sub-background content area. In the preset direction, the first sub-background content area, the key content area and the second sub-background content area are sequentially included.
  • in the first method, the reduced length of the first sub-background content area on one side of the key content area is determined first, and the first sub-background content area is reduced; after the first sub-background content area has been reduced, in response to the need to continue reducing the first video, the reduced length of the second sub-background content area on the other side of the key content area is determined, and the second sub-background content area is reduced.
  • FIGS. 7a to 7d are schematic diagrams illustrating a manner of determining the reduced length of a background content area according to an exemplary embodiment.
  • FIGS. 7a to 7d are described by taking the background content area being located above and below the key content area as an example.
  • exemplarily, the first sub-background content area located on the lower side of the key content area may be reduced first, and then the second sub-background content area located on the upper side of the key content area is reduced.
  • the second sub-background content area is first reduced, and after the second sub-background content area is reduced, in response to the need to reduce the first video, the first sub-background content area is reduced.
  • the first video is displayed in full screen; in response to detecting the zoom-out operation, the reduced length of the first sub-background content area located on the lower side of the key content area is determined, and the first sub-background content area is reduced based on the reduced length, as shown in FIG. 7b.
  • FIG. 7b is a schematic diagram after the first sub-background content area is zoomed out; after the first sub-background content area 11 on the lower side of the key content area 12 has been zoomed out, in response to the need to continue zooming out the first video, the reduced length of the second sub-background content area located on the upper side of the key content area is determined, and the second sub-background content area is reduced based on the reduced length, as shown in FIG. 7c, which is a schematic diagram after this reduction is completed.
  • in the second method, the corresponding reduced lengths of the background content areas located on both sides of the key content area are determined, and the background content areas located on both sides of the key content area are reduced at the same time.
  • the manners of determining the reduced length of the first sub-background content area and determining the reduced length of the second sub-background content area include but are not limited to the following two.
  • the reduction ratio of the first sub-background content area in the preset direction is determined.
  • a reduction ratio of the second sub-background content area in the preset direction is determined.
  • the reduction ratio refers to the ratio of the length of itself in the preset direction to the length of the background content area in the preset direction.
  • the length of the background content area in the preset direction = the length of the first sub-background content area in the preset direction + the length of the second sub-background content area in the preset direction.
  • the method further includes the step of: determining the reduced length of the first sub-background content area in the preset direction and the reduced length of the second sub-background content area in the preset direction based on the reduced length of the background content area in the preset direction, the reduction ratio of the first sub-background content area in the preset direction, and the reduction ratio of the second sub-background content area in the preset direction.
  • the reduced length of the background content area in the preset direction is the sum of the reduced length of the first sub-background area in the preset direction and the reduced length of the second sub-background area in the preset direction.
  • for example, assuming that the reduced length of the background content area in the preset direction is 3 cm in total, the reduced length of the first sub-background content area in the preset direction and the reduced length of the second sub-background content area in the preset direction are obtained by multiplying 3 cm by their respective reduction ratios in the preset direction.
  • the first sub-background content area and the second sub-background content area of the background content area are scaled down in the preset direction, which improves the video display effect during the video zooming process, makes the video display better suit the user's viewing habits, and improves the user experience.
  • in the second manner, the reduced length of the first sub-background content area is the same as the reduced length of the second sub-background content area.
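  • the two manners of splitting the total reduced length between the first and second sub-background content areas can be sketched as follows (hypothetical names; the first manner uses the reduction ratios described above, the second manner splits the length equally).
      def split_by_ratio(total_reduction, first_len, second_len):
          # First manner: each sub-background content area is reduced in proportion to its own
          # length in the preset direction relative to the whole background content area.
          bg_len = first_len + second_len
          return (total_reduction * first_len / bg_len,
                  total_reduction * second_len / bg_len)

      def split_equally(total_reduction):
          # Second manner: both sub-background content areas are reduced by the same length.
          return total_reduction / 2, total_reduction / 2

      print(split_by_ratio(3, 2, 1))  # 3 cm in total -> (2.0, 1.0)
      print(split_equally(3))         # (1.5, 1.5)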
  • FIG. 8 is a schematic diagram illustrating another manner of determining the reduced length of a background content area according to an exemplary embodiment.
  • FIG. 8 illustrates an example in which the first sub-context content area is located below the key content area and the second sub-context content area is located above the key content area.
  • the first video is displayed in full screen in the left figure of FIG. 8; in response to detecting the zoom-out operation, the reduced length of the first sub-background content area and the reduced length of the second sub-background content area are determined, and the video in which both the first sub-background content area and the second sub-background content area have been reduced is shown on the right side of FIG. 8.
  • the reduction type of the background content area in step B3 may be a preset direction reduction, that is, the background content area is reduced in the preset direction, and the size of the background content area is kept unchanged in the direction perpendicular to the preset direction.
  • FIG. 9a to FIG. 9b are schematic diagrams showing a zoom-out manner of the first video according to an exemplary embodiment.
  • as shown in FIG. 9a, it is assumed that the reduced length of the background content area in the preset direction is 5 cm, the length of the background content area in the preset direction is 5 cm, and the length of the key content area in the preset direction is 4 cm.
  • FIG. 9b is a second video after the background content area is reduced by 5 cm in the preset direction, and the background content area is not shortened in the direction perpendicular to the preset direction.
  • the length of the key content area in the preset direction is 4 cm unchanged.
  • the reduction type of the background content area in step B4 is overall reduction, that is, the background content area is reduced in a preset direction, and the background content area is reduced in a direction perpendicular to the preset direction.
  • the first reduction ratio = the length of the background content area in the preset direction/the length of the background content area in the direction perpendicular to the preset direction.
  • the method further includes: determining a reduced length of the background content area in a direction perpendicular to the preset direction based on the first reduction ratio and the reduced length of the background content area in a preset direction.
  • the reduced length of the background content area in the direction perpendicular to the preset direction = the reduced length of the background content area in the preset direction/the first reduction ratio.
  • FIG. 10a to FIG. 10b are schematic diagrams showing another reduction manner of the first video according to an exemplary embodiment.
  • FIG. 10b shows the display interface after the background content area is reduced by 5 cm in the preset direction and by 5/(7/6) cm in the horizontal direction.
  • the dotted frame area in FIG. 10b is the area where the second video is located after the background content area is reduced as a whole.
  • the gray area outside the dotted box is the background image added by the client after the background content area is reduced as a whole. It can be seen from the comparison of FIG. 10a and FIG. 10b that in this embodiment, after the reduction processing, the size of the key content area of the video remains unchanged, and the background content area is reduced as a whole.
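  • a minimal sketch of the overall reduction of the background content area (step B4 together with the two relations above), using hypothetical names and example values:
      def overall_background_reduction(reduce_preset, bg_len, bg_perp_len):
          # reduce_preset: reduced length of the background content area in the preset direction
          # bg_len / bg_perp_len: lengths of the background content area in the preset
          # direction and perpendicular to it
          first_reduction_ratio = bg_len / bg_perp_len
          return reduce_preset, reduce_preset / first_reduction_ratio

      # Consistent with FIG. 10b: 5 cm in the preset direction and 5/(7/6) cm (about 4.29 cm)
      # in the horizontal direction when the first reduction ratio is 7/6 (lengths 7 cm and 6 cm assumed).
      print(overall_background_reduction(5, 7, 6))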
  • the second case involved in step A11 includes steps C1 to C3.
  • step C1 a background content area of the first video outside the key content area is determined according to the video information.
  • step C2 when the operation distance is greater than the length of the background content area in the preset direction, determining the reduction mode includes not reducing the key content area and reducing the background content area.
  • step C3 based on the size of the background content area, a reduced length of the background content area in the preset direction and a reduced length in a direction perpendicular to the preset direction are determined.
  • exemplarily, the length of the background content area in the preset direction is determined as the reduced length of the background content area in the preset direction, and the length of the background content area in the direction perpendicular to the preset direction is determined as the reduced length of the background content area in the direction perpendicular to the preset direction.
  • the reduction type of the background content area may be a preset direction reduction or an overall reduction.
  • Fig. 11 is a schematic diagram showing another reduction manner of the first video according to an exemplary embodiment. As shown on the left side of Figure 11, assuming that the operation distance is 5cm, the length of the background content area in the preset direction is 3cm, and the length of the key content area in the preset direction is 4cm. The figure on the right side of FIG. 11 is a schematic diagram of the second video after the background content area is reduced by 3 cm in the preset direction. The length of the key content area shown on the right side of Figure 11 remains unchanged at 4 cm in the preset direction.
  • in the case that the operation distance is greater than the length of the background content area in the preset direction, exemplarily, the size of the key content area is kept unchanged, and the background content area is completely reduced.
  • the size of the key content area is guaranteed to remain unchanged, which not only avoids the situation where the key content is missing during the video zooming process due to the limited display space of the video playback interface, but also avoids the situation where the size of the key content area becomes too small. This improves the video display effect during video scaling.
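  • a rough sketch of this second case (steps C2 and C3), with hypothetical names and example values:
      def case2_reduction(op_distance, bg_len, bg_perp_len):
          # Applies when the operation distance exceeds the background length in the preset direction.
          assert op_distance > bg_len
          # Step C3: the background content area is reduced completely in both directions,
          # while the key content area is kept unchanged.
          return bg_len, bg_perp_len

      # Consistent with FIG. 11: operation distance 5 cm, background 3 cm in the preset
      # direction (perpendicular length assumed to be 9 cm) -> the background is removed entirely.
      print(case2_reduction(5, 3, 9))  # (3, 9)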
  • the third case involved in step A11 includes steps D1 to D5.
  • step D1 a background content area of the first video outside the key content area is determined according to the video information.
  • in step D2, in the case that the operation distance is greater than the length of the background content area in the preset direction, it is determined that the reduction method includes reducing the key content area, the reduction type of the key content area being preset direction reduction or overall reduction, and reducing the background content area.
  • step D3 based on the size of the background content area, a reduced length in the preset direction and a reduced length in a direction perpendicular to the preset direction of the background content area are determined.
  • in step D4, in the case that the reduction type of the key content area is a preset direction reduction, the reduced length of the key content area in the preset direction is determined based on the length of the background content area in the preset direction and the operation distance.
  • in step D5, in the case where the reduction type of the key content area is overall reduction, the reduced length of the key content area in the preset direction and a second reduction ratio are determined based on the length of the background content area in the preset direction, the operation distance and the size of the key content area, where the second reduction ratio is the ratio of the length of the key content area in the preset direction to its length in the direction perpendicular to the preset direction.
  • for the description of step D3, reference may be made to the description of step C3 in the second case involved in step A11, which will not be repeated here.
  • the implementation manner of determining the reduced length of the key content area in the preset direction provided by the embodiments of the present disclosure includes, but is not limited to, two situations.
  • Case 1 The minimum length of the key content area in the preset direction is preset.
  • the key content area is set with a minimum length in the preset direction, that is, in response to the length of the key content area in the preset direction being the minimum length, even if a reduction operation is received, the key content area is not reduced further.
  • the content in the content display area may be updated while keeping the size of the key content area unchanged.
  • the reduced length of the key content area in the preset direction = the length of the key content area in the preset direction - the minimum length.
  • or, the reduced length of the key content area in the preset direction = the operation distance - the length of the background content area in the first video in the preset direction.
  • in the second case, the minimum length of the key content area in the preset direction is not preset; then, the reduced length of the key content area in the preset direction = the operation distance - the length of the background content area in the first video in the preset direction.
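  • the two cases above can be sketched together as follows (hypothetical names; lengths measured in the preset direction):
      def key_area_reduction(op_distance, bg_len, key_len, min_len=None):
          # op_distance: operation distance; bg_len / key_len: lengths of the background and
          # key content areas in the preset direction; min_len: optional preset minimum length.
          wanted = op_distance - bg_len       # reduction left over after the background is used up
          if min_len is None:
              return wanted                   # case 2: no minimum length is preset
          allowed = key_len - min_len         # case 1: never reduce below the minimum length
          return min(wanted, allowed)

      print(key_area_reduction(5, 3, 4))             # 2, consistent with the FIG. 12 example
                                                     # (operation distance assumed to be 5 cm)
      print(key_area_reduction(9, 3, 4, min_len=3))  # capped at 4 - 3 = 1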
  • the implementation process of step D3 and step D4 is described below with a specific example.
  • Fig. 12 is a schematic diagram showing still another reduction manner of the first video according to an exemplary embodiment.
  • the length of the background content area in the preset direction (assuming the vertical direction) is 3cm
  • the length of the key content area in the vertical direction is 4cm.
  • FIG. 12 is a schematic diagram corresponding to the second video after the background content area is reduced by 3 cm in the vertical direction and the key content area is reduced by 2 cm (that is, to 4 cm - 2 cm = 2 cm) in the vertical direction.
  • in response to the length of the key content area in the preset direction (for example, the vertical direction) being shortened while the length in the direction perpendicular to the preset direction (for example, the horizontal direction) remains unchanged, the displayed picture may appear "flat", which affects the video display effect during the video scaling process.
  • step D5 the key content area is reduced in an overall reduction manner.
  • the second reduction ratio = the length of the key content area in the preset direction/the length of the key content area in the direction perpendicular to the preset direction.
  • the method further includes: determining the reduced length of the key content area in a direction perpendicular to the preset direction based on the second reduction ratio and the reduced length of the key content area in the preset direction.
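  • a small sketch of the overall reduction of the key content area in step D5, with hypothetical names and example values:
      def key_area_overall_reduction(reduce_preset, key_len, key_perp_len):
          # reduce_preset: reduced length of the key content area in the preset direction
          # key_len / key_perp_len: lengths of the key content area in the preset direction
          # and perpendicular to it
          second_reduction_ratio = key_len / key_perp_len
          return reduce_preset, reduce_preset / second_reduction_ratio

      # Consistent with FIG. 13: reduced by 2 cm vertically and 4 cm horizontally when the
      # second reduction ratio is 0.5 (for example, a key content area of 4 cm x 8 cm).
      print(key_area_overall_reduction(2, 4, 8))  # (2, 4.0)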
  • Steps D3 and D5 will be described below with specific examples.
  • Fig. 13 is a schematic diagram showing still another reduction manner of the first video according to an exemplary embodiment.
  • the preset direction is assumed to be the vertical direction.
  • FIG. 13 is a schematic diagram corresponding to the second video after the background content area is reduced by 7 cm in the vertical direction, and the key content area is reduced by 2 cm in the vertical direction and 4 cm in the horizontal direction.
  • the first video is reduced to obtain the second video by reducing the length of the background content area in the preset direction or by reducing the background content area as a whole. That is, when the first video is zoomed out, the zooming of the first video is realized by reducing the background content area. At this time, the display size of the key content area may remain unchanged, so as to avoid the lack of key content caused by the limited display space of the video playback interface.
  • after the background content area has been completely reduced, the key content area can then be reduced.
  • the key content area in the embodiment of the present disclosure may not be reduced, or the degree of reduction may be relatively small, thereby improving the video display effect during the video zooming process.
  • step A12 involves the following three cases. It can be understood that the first video involved in step A12 may be a video reduced by the above-mentioned step A11.
  • the first video involved in step A12 may be composed of key content areas in a preset direction. Assuming that the preset direction is the vertical direction, the first video involved in step A12 may be as shown in the right side of FIG. 11, or as shown in the right side of FIG. 12, or as shown in the right side of FIG. 13; exemplarily, the first video involved in step A12 may be composed of a key content area and a background content area, as shown in the right side of FIG. 6, as shown in the right side of FIG. 8, as shown in FIG. 9b, or as shown in FIG. 10b.
  • the first case involved in step A12 includes steps F1 to F4.
  • the first video consists of a key content area and a background content area, and the key content area is not reduced; or, the first video consists of a key content area, and the key content area is not reduced.
  • in step F1, in the case that the length of the key content area in the preset direction is equal to the original length of the key content area, the original size of the background content area of the first video outside the key content area is determined according to the video information of the first video.
  • the original length of the key content area in the preset direction mentioned in the embodiment of the present disclosure is the length of the unscaled key content area in the preset direction. That is, after the client receives the video from the server, when the video is not scaled, the length of the key content area in the video in the preset direction.
  • in response to the length of the key content area in the preset direction being equal to the original length of the key content area, it means that the key content area has not been reduced, and therefore it is not necessary to enlarge the key content area.
  • the original size of the background content area of the video refers to the size of the background content area in the video that has not been scaled, that is, after the electronic device receives the video from the server, the video has not been scaled The size of the middle background content area.
  • the original size of the background content area of the video includes the original length of the background content area in the preset direction and the original length in the direction perpendicular to the preset direction.
  • it can be understood that step F1 may determine the display size corresponding to the background content area of the first video outside the key content area, that is, the size of the background content area currently displayed in the first video. In response to the first video being composed of the key content area, the display size corresponding to the background content area outside the key content area determined in step F1 is 0, so it is necessary to determine the original size corresponding to the background content area.
  • in step F2, it is determined that the enlargement mode includes not enlarging the key content area, enlarging the background content area, and the enlargement type of the background content area being a preset direction enlargement or an overall enlargement.
  • step F3 if the enlargement type of the background content area is enlargement in a preset direction, the enlargement length of the background content area in the preset direction is determined based on the operation distance.
  • the enlarged length of the background content area in the preset direction is equal to the operation distance.
  • the preset direction may be a vertical direction or a horizontal direction.
  • in response to the preset direction being the vertical direction, the preset direction enlargement is vertical direction enlargement; in response to the preset direction being the horizontal direction, the preset direction enlargement is horizontal direction enlargement.
  • enlarging in the vertical direction refers to enlarging the length in the vertical direction
  • enlarging in the horizontal direction refers to enlarging the length in the horizontal direction
  • the process of enlarging the background content area in the preset direction is the opposite process to the process of reducing the background content area in the preset direction.
  • changing "reduction type" in the foregoing description of reducing the background content area in the preset direction to "enlargement type", and changing "reduced length" to "enlarged length", yields a description of the process of enlarging the background content area in the preset direction.
  • in step F4, when the enlargement type of the background content area is overall enlargement, the enlargement length of the background content area in the preset direction and a first enlargement ratio are determined based on the operation distance and the original size of the background content area, where the first enlargement ratio is the ratio of the original length of the background content area in the preset direction to its original length in the direction perpendicular to the preset direction.
  • exemplarily, determining the enlargement length of the background content area in the preset direction and the first enlargement ratio includes: determining the enlarged length of the background content area in the preset direction based on the operation distance; and determining the first enlargement ratio based on the ratio of the original length of the background content area in the preset direction to its original length in the direction perpendicular to the preset direction.
  • the first enlargement ratio = the original length of the background content area in the preset direction/the original length of the background content area in the direction perpendicular to the preset direction.
  • the method further includes: determining an enlarged length of the background content area in a direction perpendicular to the preset direction based on the first enlargement ratio and the enlarged length of the background content area in a preset direction.
  • the overall enlarging process of the background content area and the overall reducing process of the background content area are opposite processes.
  • for details, reference may be made to the description of step B4; changing "reduction type" to "enlargement type", and changing "reduced length" to "enlarged length", yields a description of the process of enlarging the background content area as a whole.
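  • a rough sketch of steps F3 and F4 (hypothetical names and example values; only the background content area is enlarged in this first case):
      def enlarge_background(op_distance, bg_orig_len, bg_orig_perp_len, overall=False):
          # bg_orig_len / bg_orig_perp_len: original lengths of the background content area
          # in the preset direction and perpendicular to it
          enlarged = op_distance                       # step F3: enlargement length = operation distance
          if not overall:
              return enlarged, 0.0                     # preset direction enlargement only
          first_enlargement_ratio = bg_orig_len / bg_orig_perp_len  # step F4
          return enlarged, enlarged / first_enlargement_ratio

      print(enlarge_background(3, 4, 8, overall=True))  # (3, 6.0) with an assumed ratio of 0.5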
  • the second case involved in step A12 includes steps G1 to G3.
  • the first video is composed of the key content area, and the key content area has been reduced.
  • in step G1, in the case that the length of the key content area in the preset direction is less than or equal to the difference between the original length of the key content area and the operation distance, it is determined that the enlargement mode includes not enlarging the background content area, enlarging the key content area, and the enlargement type of the key content area being a preset direction enlargement or an overall enlargement.
  • since the operation distance can only enlarge the length of the key content area in the preset direction to a value less than or equal to the original length, the enlarged first video still does not include the background content area in the preset direction at this time.
  • for example, assuming that the length of the key content area of the first video in the preset direction is 3 cm, the original length of the key content area in the preset direction is 7 cm, and the operation distance is 2 cm. That is, the operation distance of 2 cm is less than the difference (7 cm - 3 cm = 4 cm) between the original length of the key content area in the preset direction and the current length of the key content area in the preset direction. It can be seen that the operation distance is not enough to enlarge the length of the key content area of the first video in the preset direction to its original length. At this time, the key content area is enlarged first, and the background content area is not enlarged.
  • step G2 when the enlargement type of the key content area is an enlargement in a preset direction, the enlargement length of the key content area in the preset direction is determined based on the operation distance.
  • the enlarged length of the key content area in the preset direction is equal to the operation distance.
  • in step G3, when the enlargement type of the key content area is overall enlargement, the enlargement length of the key content area in the preset direction and a second enlargement ratio are determined based on the operation distance and the size of the key content area, where the second enlargement ratio is the ratio of the length of the key content area in the preset direction to its length in the direction perpendicular to the preset direction.
  • the enlarged length of the key content area in the preset direction is equal to the operation distance.
  • the second enlargement ratio = the length of the key content area in the preset direction/the length of the key content area in the direction perpendicular to the preset direction.
  • the method further includes: determining the enlarged length of the key content area in the direction perpendicular to the preset direction based on the second enlargement ratio and the enlarged length of the key content area in the preset direction.
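  • a rough sketch of steps G2 and G3 (hypothetical names and example values; only the key content area is enlarged in this second case):
      def enlarge_key_area(op_distance, key_len, key_perp_len, overall=False):
          # key_len / key_perp_len: current lengths of the key content area in the preset
          # direction and perpendicular to it
          enlarged = op_distance                       # steps G2/G3: enlargement length = operation distance
          if not overall:
              return enlarged, 0.0
          second_enlargement_ratio = key_len / key_perp_len
          return enlarged, enlarged / second_enlargement_ratio

      # Consistent with the 3 cm / 7 cm / 2 cm example above: the key content area is enlarged
      # by the full 2 cm operation distance (perpendicular length of 6 cm assumed, ratio 0.5).
      print(enlarge_key_area(2, 3, 6, overall=True))  # (2, 4.0)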
  • the third case involved in step A12 includes steps H1 to H6.
  • the first video is composed of the key content area, and the key content area has been reduced.
  • in step H1, in the case that the length of the key content area in the preset direction is greater than the difference between the original length of the key content area and the operation distance, the original size corresponding to the background content area of the first video outside the key content area is determined according to the video information.
  • the original size of the background content area of the video includes the original length of the background content area in the preset direction and the original length in the direction perpendicular to the preset direction.
  • determining the background content area of the first video outside the key content area includes determining a positional relationship between the key content area and the background content area.
  • exemplarily, the background content area includes a first sub-background content area and a second sub-background content area in the preset direction, and the first sub-background content area and the second sub-background content area are located on both sides of the key content area; or, the background content area is located above the key content area in the preset direction, or the background content area is located below the key content area in the preset direction.
  • since the first video is composed of the key content area, the first video does not include a background content area. Therefore, the display size of the background content area here is 0, that is, there is currently no background content area, and the subsequent process of enlarging the background content area is a process of restoring it from nothing.
  • in step H2, it is determined that the enlargement method includes enlarging the key content area, the enlargement type of the key content area being preset direction enlargement or overall enlargement, enlarging the background content area, and the enlargement type of the background content area being preset direction enlargement or overall enlargement.
  • in step H3, when the enlargement type of the key content area is enlargement in a preset direction, the enlarged length of the key content area in the preset direction is determined based on the length of the key content area in the preset direction and the original length.
  • for example, assuming that the original length of the key content area in the preset direction is 10 cm and the length (that is, the display length) of the key content area in the preset direction is 7 cm, the enlarged length of the key content area in the preset direction is 10 cm - 7 cm = 3 cm.
  • in step H4, when the enlargement type of the key content area is overall enlargement, the enlargement length of the key content area in the preset direction and a third enlargement ratio are determined based on the size of the key content area and the original length, where the third enlargement ratio is the ratio of the length of the key content area in the preset direction to its length in the direction perpendicular to the preset direction.
  • the enlarged length of the key content area in the preset direction is determined based on the original length of the key content area in the preset direction and the length of the key content area in the preset direction.
  • the third enlargement ratio is determined based on the size of the key content area.
  • the third enlargement ratio = the length of the key content area in the preset direction / the length of the key content area in the direction perpendicular to the preset direction.
  • the original length of the key content area in the preset direction is 10cm
  • the length of the key content area in the preset direction is 4cm
  • the length of the key content area in the direction perpendicular to the preset direction is 2cm
  • based on this, the third enlargement ratio is 4cm/2cm = 2, and the enlarged length of the key content area in the preset direction is 10cm − 4cm = 6cm.
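  • As a minimal illustrative sketch (not the claimed implementation), the calculations of step H3 and step H4 could look like the following, assuming the enlarged length is simply the original length minus the current display length, and the third enlargement ratio is the displayed length in the preset direction divided by the displayed length in the perpendicular direction:

      def enlarge_key_area(original_len, display_len, perp_display_len=None, overall=False):
          """Return (enlarged length in the preset direction, third enlargement ratio or None)."""
          # step H3 / H4: the length still to be recovered in the preset direction (assumption)
          enlarged_len = original_len - display_len            # e.g. 10 - 7 = 3, or 10 - 4 = 6
          if not overall:
              return enlarged_len, None
          # step H4: overall enlargement additionally needs the third enlargement ratio
          third_ratio = display_len / perp_display_len         # e.g. 4 / 2 = 2
          return enlarged_len, third_ratio

      print(enlarge_key_area(10, 7))                   # preset-direction enlargement -> (3, None)
      print(enlarge_key_area(10, 4, 2, overall=True))  # overall enlargement -> (6, 2.0)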
  • step H5 when the enlargement type of the background content area is a preset direction enlargement, the enlargement length of the background content area in the preset direction is determined based on the operation distance.
  • the enlarged length of the background content area in the preset direction is determined based on the operation distance and the enlarged length of the key content area in the preset direction.
  • the enlarged length of the key content area in the preset direction is 4cm; in that case, the enlarged length of the background content area in the preset direction is the operation distance minus 4cm.
  • step H6 when the enlargement type of the background content area is overall enlargement, the enlargement length of the background content area in the preset direction and a fourth enlargement ratio are determined based on the operation distance and the original size of the background content area, where the fourth enlargement ratio is a ratio of the original length of the background content area in the preset direction to the original length of the background content area in the direction perpendicular to the preset direction.
  • the original size of the background content area of the video includes the original length of the background content area in the vertical direction and the original length of the background content area in the horizontal direction.
  • the enlarged length of the background content area in the preset direction is determined based on the operation distance and the enlarged length of the key content area in the preset direction.
  • the fourth enlargement ratio is determined based on the original length of the background content area in the preset direction and the original length of the background content area in the direction perpendicular to the preset direction.
  • the fourth enlargement ratio is 2.
  • the enlarged length of the background content area in the preset direction is determined.
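  • The corresponding sketch for steps H5 and H6, under the same assumptions (the background content area absorbs whatever part of the operation distance is not consumed by the key content area, and the fourth enlargement ratio is the original preset-direction length divided by the original perpendicular length), could be:

      def enlarge_background_area(operation_distance, key_enlarged_len,
                                  bg_orig_len=None, bg_orig_perp_len=None, overall=False):
          """Return (enlarged length of the background area in the preset direction, fourth ratio or None)."""
          # step H5 / H6: remaining part of the operation distance (assumption)
          bg_enlarged_len = operation_distance - key_enlarged_len
          if not overall:
              return bg_enlarged_len, None
          # step H6: overall enlargement additionally needs the fourth enlargement ratio
          fourth_ratio = bg_orig_len / bg_orig_perp_len        # e.g. a ratio of 2 as in the text above
          return bg_enlarged_len, fourth_ratio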
  • in response to the background content area being located on both sides of the key content area (as shown in FIG. 7), it is assumed that the background content area includes a first sub-background content area and a second sub-background content area.
  • the enlargement parameter of the first video in the enlargement process further includes the enlargement ratio of the first sub-background content area in the preset direction and the enlargement ratio of the second sub-background content area in the preset direction.
  • the enlargement ratio refers to the ratio of the original length of the sub-background content area itself in the preset direction to the original length of the background content area in the preset direction.
  • the original length of the first sub-background content area in the preset direction refers to the original length of the first sub-background content area in the preset direction in the video that has not undergone scaling processing, that is, the original length of the first sub-background content area in the preset direction after the electronic device receives the video from the server and before the video is scaled.
  • the original length of the second sub-background content region in the preset direction refers to the original length of the second sub-background content region in the preset direction (for example, the vertical direction) in the video that has not been scaled.
  • the first sub-background content area and the second sub-background content area of the background content area are proportionally enlarged in the vertical direction, and the obtained video display effect is more in line with the user's viewing habits, thereby improving the user experience.
  • step S33 in order to achieve the purpose of rapidly zooming the first video, an operation of zooming the first video with one key may be performed, and the implementation of step S33 may include steps E1 to E3.
  • step E1 the operation type corresponding to the operation information is determined.
  • step E2 in response to the operation type being a one-key zoom-out operation, the background content area other than the key content area in the first video is removed to obtain the second video.
  • step E3 in response to the operation type being a one-key enlargement operation, the key content area is enlarged to the original size of the key content area, and the background content area is enlarged to the original size of the background content area , to get the second video.
  • the operation mode of the one-key zoom-out operation or the one-key zoom-in operation is a key operation.
  • the operation mode of the one-key zoom-out operation or the one-key zoom-in operation is a sliding operation.
  • the operation mode of the one-key zoom-out operation or the one-key zoom-in operation is a voice operation.
  • the content display interface is increased in size, and the method for increasing the content display interface includes step I11 and step I12.
  • step I11 the content display interface in the display interface is controlled to increase at least the reduced length of the video playback interface in the preset direction in the preset direction.
  • the reduced length of the video playback interface in the preset direction is equal to the reduced length of the first video in the preset direction.
  • step I12 the content display interface is controlled to display the updated content related to the second video.
  • steps I21 to I22 are further included.
  • step I21 multiple frames of video images are acquired from the first video.
  • step I22 the key content area included in the first video is determined based on the multiple frames of the video images.
  • the above steps I21 to I22 may be specific implementations of the step S32, or the above steps I21 to I22 may be performed before the step S31.
  • step I22 There are multiple implementation manners of step I22 provided by the embodiments of the present disclosure, and the embodiments of the present disclosure provide but are not limited to the following three.
  • the implementation manner of the first step I22 includes: step J1 to step J3.
  • step J1 for any two positionally adjacent frames of video images among the multiple frames of video images, a difference image of the two frames of video images is obtained, so as to obtain at least one frame of difference image.
  • multiple frames of video images may be extracted from the video.
  • the embodiments of the present disclosure do not limit the number of video images obtained from the video.
  • the longer the total duration of the video, the greater the number of video images extracted from the video.
  • the number of video images obtained from the video is greater than or equal to the preset number of frames, for example, the preset number of frames is 20.
  • the number of frames of the extracted video images can be set based on the actual situation, so as to not only ensure the accuracy of the obtained video information of the video, but also improve the data processing speed.
  • multiple frames of video images can be uniformly extracted from the video, for example, one frame of video image is extracted every 10 frames, or one frame of video image is extracted every preset duration.
  • multiple frames of video images may be randomly selected from the video.
  • the sequence of multiple frames of video images extracted from the video may be shuffled, and the two frames of video images that are adjacent in position may be two frames of video images that are adjacent in time, or may not be two frames of video images that are adjacent in time.
  • alternatively, sorting can be performed based on the time of the multiple frames of video images in the video, in which case the above-mentioned two "positionally adjacent" video images are two frames of video images that are adjacent in time.
  • the difference image can be obtained by using the adaptive Gaussian mixture background modeling method MOG2.
  • the difference image may be a difference mask FrameMask between two frames of video images.
  • step J1 specifically includes: reducing the multiple frames of video images by a target multiple; and, for any two positionally adjacent video images among the multiple frames of reduced video images, obtaining a difference image representing the difference between the two frames of video images, so as to obtain multiple frames of difference images.
  • the target multiple is less than 1, exemplarily, the target multiple can be any value less than 1, such as 0.4, 0.5, 0.6, etc.
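  • A minimal sketch of this downscale-then-difference step, assuming OpenCV's MOG2 background subtractor is used to produce the FrameMask for each adjacent pair, and that the difference mask for a pair is taken from the later frame of the pair:

      import cv2

      def frame_difference_masks(frames, target_multiple=0.5):
          """Downscale frames by the target multiple, then compute MOG2 difference masks (step J1)."""
          small = [cv2.resize(f, None, fx=target_multiple, fy=target_multiple) for f in frames]
          subtractor = cv2.createBackgroundSubtractorMOG2()   # adaptive mixture-of-Gaussians model
          masks, prev = [], None
          for frame in small:
              mask = subtractor.apply(frame)                  # foreground mask w.r.t. running model
              if prev is not None:
                  masks.append(mask)                          # one difference mask per adjacent pair
              prev = frame
          return masks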
  • FIGS. 14a to 14b are schematic diagrams of multi-frame difference images according to an exemplary embodiment.
  • FIG. 14a is a difference image obtained based on the difference between the first frame video image and the second frame video image
  • FIG. 14b is a difference image obtained based on the difference between the 12th frame video image and the 13th frame image.
  • step J2 a target image is obtained based on the at least one frame of difference image, and the pixel value of each position in the target image is an average value of the pixel values corresponding to the position in the at least one frame of difference image.
  • step J2 includes steps J21 to J23.
  • step J21 the at least one frame of difference image is processed respectively to obtain a first image corresponding to each frame of difference image, where one frame of the first image includes a plurality of image areas that are not connected to each other, and at least one of the plurality of image areas is a multi-connected region.
  • Exemplarily, a morphological opening operation is performed on the at least one frame of difference image respectively to obtain multiple frames of first images.
  • the morphological opening operation can remove small objects in the difference image, separate objects in thin places and smooth the boundaries of larger objects.
  • the background content area may include content corresponding to the key content area, for example, the title corresponding to the real content displayed in the key content area, or content such as narration or subtitles; the content contained in the background content area may be very close to the key content area, and by performing morphological processing on the difference image, the content contained in the background content area that is adjacent to the key content area can be separated from the key content area, so that the boundary of the key content area can be determined more accurately.
  • FIGS. 15a to 15d are schematic diagrams showing a first image obtained by processing a difference image according to an exemplary embodiment.
  • the area 151 shown in Figure 15a is the key content area
  • the area 152 is the content related to the key content area contained in the background content area, such as subtitles
  • Fig. 15b shows the structuring element used for processing the difference image.
  • the process of processing the difference image based on the structuring element shown in Fig. 15b is as follows: the central cell of the structuring element (the cell marked with a thick black line in Fig. 15b) is the moving cell; in response to the central cell of the structuring element moving to a cell in the difference image shown in Fig. 15a, if the intersection of the structuring element and the difference image shown in Fig. 15a is exactly equal to the structuring element, it is determined that the cell meets the requirement, and that cell of the difference image shown in Fig. 15a is retained.
  • the cells retained in this way are the dark black cells shown in Fig. 15c; then, in response to the central cell of the structuring element shown in Fig. 15b moving to any cell located at a peripheral position of the image composed of the dark black cells shown in Fig. 15c, if the structuring element shown in Fig. 15b and the image composed of the dark black cells shown in Fig. 15c have an intersection, it is determined that the cell meets the requirement and the cell is retained; all cells that meet this requirement, together with the dark black cells shown in Fig. 15c, constitute the first image shown in Fig. 15d.
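  • In practice, the erosion-then-dilation process illustrated in FIGS. 15a to 15d is the standard morphological opening; a minimal sketch, assuming a rectangular structuring element of an arbitrary example size, could be:

      import cv2

      def open_difference_mask(diff_mask, kernel_size=5):
          """Morphological opening (erosion followed by dilation) of a difference mask, as in step J21."""
          kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
          return cv2.morphologyEx(diff_mask, cv2.MORPH_OPEN, kernel)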
  • step J22 a second image is obtained based on the at least one frame of first image, and the pixel value at each position in the second image is the average value of the pixel values at the same position in the at least one frame of first image.
  • step J22 specifically includes steps J221 to J222.
  • step J221 for each frame of the first image, determine the (pixel position, pixel value) pair corresponding to each pixel included in the first image, so as to obtain the (pixel position, pixel value) pairs corresponding to the pixels included in each first image.
  • step J222 for each pixel point position, the average value of each pixel value having the pixel point position is obtained to obtain the pixel average value corresponding to the pixel point position, ie (pixel point position, pixel average value).
  • the pixel value corresponding to any pixel in the second image is the average value of the pixel corresponding to the pixel position of the pixel.
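  • A minimal sketch of this per-position averaging (step J22), assuming the first images are NumPy arrays of identical shape:

      import numpy as np

      def average_first_images(first_images):
          """Second image: per-position average of the pixel values of all first images."""
          stack = np.stack([img.astype(np.float32) for img in first_images], axis=0)
          return stack.mean(axis=0)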
  • for example, if two frames of video images are obtained from the video, one frame of difference image can be obtained in step J1, and one frame of first image can be obtained by processing that one frame of difference image.
  • in response to there being only one frame of the first image, if the pixel value of one or more pixels in the first image is wrong (referred to as abnormal pixels in the embodiments of the present disclosure), the accuracy of determining the key content area will be affected.
  • N frames of video images may be obtained from the video, where N is a positive integer greater than 2.
  • the pixel corresponding to pixel position 1 in one frame of the first image may be an abnormal pixel
  • the pixel corresponding to pixel position 1 in another frame of the first image may be a non-abnormal pixel.
  • the probability that the pixels at the same pixel position are abnormal pixels is very small, so taking the average value can eliminate the influence of abnormal pixels on data processing, so that the obtained second image can show the boundary of the key content area more clearly.
  • step J23 the second image is processed to obtain a target image, wherein at least one image area included in the target image is a single connected area.
  • morphological closing operation processing and binarization processing are performed on the second image to obtain the target image.
  • Morphological closure can fill small spaces in objects, connect adjacent objects and smooth boundaries.
  • FIGS. 16a to 16b are schematic diagrams illustrating the process of processing the second image according to an exemplary embodiment.
  • Fig. 16a is the second image. It can be seen from Fig. 16a that there are still many independent small holes in the key content area 1601 to be obtained (framed by a white solid line), such as the black small hole 1602 and the black small hole 1603 circled in Fig. 16a. These small holes will reduce the accuracy of subsequently obtaining the key content area. Therefore, it is necessary to connect the small holes in the key content area 1601 (for example, the black holes and the areas around the black holes); by performing a morphological closing operation on the second image, the pixels in the key content area can be made connected.
  • Figure 16b can be obtained after the morphological closing operation is performed on Figure 16a.
  • the key content area does not include independent small spaces, and the key content area is a large single-connected area as a whole.
  • the second image can be used as the target image, or the target image can be obtained by performing binarization processing on the second image, so that the target image presents a black-and-white effect, from which the contour of the key content area can be obtained more accurately.
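  • A minimal sketch of step J23, assuming a rectangular closing kernel and a mid-range binarization threshold (both example values, not taken from the disclosure):

      import cv2

      def make_target_image(second_image, kernel_size=15, binarize=True):
          """Morphological closing to connect small holes, then optional binarization (step J23)."""
          kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
          closed = cv2.morphologyEx(second_image.astype('uint8'), cv2.MORPH_CLOSE, kernel)
          if not binarize:
              return closed
          _, target = cv2.threshold(closed, 127, 255, cv2.THRESH_BINARY)
          return target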
  • step J3 the target image area with the largest area in at least one image area included in the target image is determined as the key content area.
  • the target image may include multiple image areas.
  • the target image area with the largest area is determined as the key content area.
  • the position coordinates corresponding to each image area are obtained, and the position coordinates of an image area are (top, left, bottom, right), where top is the position coordinate of the upper boundary line of the image area, left is the position coordinate of the left boundary line of the image area, bottom is the position coordinate of the lower boundary line of the image area, and right is the position coordinate of the right boundary line of the image area.
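  • A minimal sketch of step J3, assuming the target image is a binary (uint8) image and connected-component statistics are used to pick the largest image area and return its (top, left, bottom, right):

      import cv2
      import numpy as np

      def largest_area_bbox(target_image):
          """Pick the image area with the largest area and return its (top, left, bottom, right)."""
          num, labels, stats, _ = cv2.connectedComponentsWithStats(target_image, connectivity=8)
          if num <= 1:
              return None                                   # no foreground area found
          areas = stats[1:, cv2.CC_STAT_AREA]               # skip label 0 (background)
          best = 1 + int(np.argmax(areas))
          left = int(stats[best, cv2.CC_STAT_LEFT])
          top = int(stats[best, cv2.CC_STAT_TOP])
          right = left + int(stats[best, cv2.CC_STAT_WIDTH])
          bottom = top + int(stats[best, cv2.CC_STAT_HEIGHT])
          return top, left, bottom, right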
  • an embodiment of the present disclosure further provides a method for determining the probability that a target image area is a key content area.
  • FIGS. 17a to 17c are schematic diagrams showing the relative positions of the target image area and the real key content area according to an exemplary embodiment.
  • the target image area 1701 (framed with a black dotted line) may not only contain the key content area 1702 (framed with a black solid line), but also A background content area 1703 may be included (figures 17a-17c represent the background content area with black images).
  • the target image area 1701 (framed with a black dot-dash line) includes a part of the key content area 1702 (framed with a black solid line) and a part of the background Content area 1703.
  • an embodiment of the present disclosure provides a method for determining the probability that a target image area is a key content area, and the method includes the following steps K1 to K3 in the implementation process.
  • step K1 the image located in the target image area is converted into a grayscale image.
  • the "grayscale image” mentioned in the embodiments of the present disclosure only includes images inside the target image area, and does not include images other than the target image area in the image.
  • step K2 a first number of pixels whose pixel values are greater than or equal to a first threshold in the grayscale image are acquired.
  • step K3 the ratio of the first number to the second number of pixels included in the grayscale image is determined as a first probability.
  • the pixel values of the pixels contained in the key content area should all be 255; in response to the pixel value of a pixel in the grayscale image being greater than the first threshold, the pixel is regarded as a "white point"; in response to the pixel value of a pixel in the grayscale image being less than or equal to the first threshold, the pixel is regarded as a "black point". Therefore, based on the first probability, the proportion of pixels that can be regarded as "white points" in the target image area can be determined.
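  • A minimal sketch of steps K1 to K3, assuming the grayscale conversion is applied to the region of the target image delimited by the target image area's (top, left, bottom, right), and using an arbitrary example value for the first threshold:

      import cv2
      import numpy as np

      def first_probability(target_image, bbox, first_threshold=200):
          """Share of 'white points' inside the target image area (steps K1 to K3)."""
          top, left, bottom, right = bbox
          region = target_image[top:bottom, left:right]
          gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY) if region.ndim == 3 else region  # step K1
          first_number = int(np.count_nonzero(gray >= first_threshold))                    # step K2
          second_number = gray.size
          return first_number / second_number                                              # step K3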
  • in response to the first probability being greater than or equal to the second threshold, the accuracy of the determined key content area is high, and a zoom-out operation on the first video based on the position of the key content area will not have the problems shown in FIG. 17a to FIG. 17c; therefore, step S33 can be executed.
  • the first threshold and the second threshold may be determined based on the actual situation. For example, in order to ensure that the reduction accuracy rate of the electronic device is greater than or equal to 95% in the process of reducing the background content area based on the video information, the second threshold is determined to be 0.9.
  • the electronic device may feed back information to the server indicating that a zoom-out error occurred.
  • the reduction accuracy rate at which the electronic device reduces the background content area based on the video information may be determined by the server based on the total number A of pieces of video information of one or more videos sent to one or more electronic devices and the number B of received feedback messages indicating that a zoom-out error occurred.
  • in order to expand the recall rate, for a first video whose corresponding first probability is smaller than the second threshold, the video display method further includes steps L1 to L4.
  • step L1 the ordinates of the horizontal straight line segments respectively included in the multiple frames of the video images are acquired, so as to obtain a set of straight line segment positions.
  • step L2 a first ordinate and a second ordinate are determined from the plurality of ordinates included in the straight line segment position set.
  • step L3 the area enclosed by the first horizontal line whose ordinate is the first ordinate, the second horizontal line whose ordinate is the second ordinate, and the boundaries of the video image in the vertical direction is determined as the candidate key content area.
  • step L4 in response to the candidate key content area being the same as the target image area, step S33 is performed.
  • step L1 includes steps L11 to L13.
  • step L11 edge detection is performed on the multiple frames of the video images respectively, so as to obtain multiple frames of third images.
  • one video image corresponds to one third image.
  • points with obvious brightness changes included in the video image can be identified, for example, the boundary between the background content area and the key content area in the video image.
  • the edge detection can be performed by the Canny edge detection algorithm.
  • FIG. 18 is a schematic diagram of three frames of third images obtained through edge detection according to an exemplary embodiment.
  • the boundaries of the pictures in the third images of the three frames in FIG. 18 are clear and obvious, which makes it easier to obtain the boundaries of the background content area and the key content area in the video image.
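  • A minimal sketch of step L11, assuming Canny edge detection with arbitrary example thresholds:

      import cv2

      def third_image(video_image, low=50, high=150):
          """Step L11: edge detection on a video image to obtain the third image."""
          gray = cv2.cvtColor(video_image, cv2.COLOR_BGR2GRAY)
          return cv2.Canny(gray, low, high)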
  • step L12 for each third image, the curves and vertical straight lines in the third image are removed, and the horizontal straight lines are retained to obtain a fourth image.
  • in the embodiment of the present disclosure, the key content area can be obtained by determining the horizontal boundary of the key content area, so step L12 may keep only the horizontal straight lines; in other embodiments, both the horizontal boundary and the vertical boundary of the key content area need to be determined, that is, both the vertical straight lines and the horizontal straight lines need to be retained in step L12.
  • step L12 a horizontal straight line is taken as an example for description, and the same is true for a vertical straight line, and details are not repeated here.
  • the processing of removing curves and retaining straight lines may be referred to as straight line detection processing.
  • the line detection process may be Hough transform line detection.
  • FIG. 19 is a schematic diagram of a fourth image obtained through line detection processing according to an exemplary embodiment.
  • the three frames of the fourth images in FIG. 19 correspond to the three frames of the third images in FIG. 18 one-to-one. Comparing FIG. 19 with FIG. 18 , it can be seen that straight lines in the horizontal direction are retained in the fourth image of the three frames in FIG. 19 .
  • the left diagram of Figure 19 retains two horizontal straight lines
  • the middle diagram of Figure 19 retains one horizontal straight line
  • the right side of Figure 19 retains two horizontal straight lines.
  • the boundary of the key content area in some third images may be very similar to the background content area, resulting in only one or zero horizontal straight lines being retained; or, in some third images, two or more horizontal straight line segments may remain.
  • step L13 the ordinates of the horizontal straight lines respectively included in the multiple frames of the fourth images are obtained, so as to obtain a set of straight line segment positions.
  • assuming that the multiple frames of fourth images include a total of n horizontal straight lines, and the ordinates of the n horizontal straight lines are y1, y2, y3, ..., yn, the straight line segment position set may be (y1, y2, y3, ..., yn).
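  • A minimal sketch of steps L12 and L13, assuming probabilistic Hough line detection and a small slope tolerance for "horizontal" (the tolerance and Hough parameters are example values):

      import cv2
      import numpy as np

      def horizontal_line_ordinates(third_image, max_slope=0.01):
          """Steps L12/L13: keep near-horizontal straight lines and collect their ordinates."""
          lines = cv2.HoughLinesP(third_image, 1, np.pi / 180, threshold=100,
                                  minLineLength=third_image.shape[1] // 2, maxLineGap=10)
          ordinates = []
          if lines is None:
              return ordinates
          for x1, y1, x2, y2 in lines[:, 0]:
              if abs(y2 - y1) <= max_slope * max(abs(x2 - x1), 1):   # nearly horizontal
                  ordinates.append((y1 + y2) / 2.0)
          return ordinates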
  • step L2 There are various implementation manners of step L2.
  • the embodiment of the present disclosure provides but is not limited to the following clustering manner, and the clustering manner includes step L21 to step L24.
  • FIGS. 20a to 20c are schematic diagrams of a clustering process according to an exemplary embodiment.
  • each black circle represents a yi, and i is any positive integer greater than or equal to 1 and less than or equal to n.
  • the black circles are arranged from left to right according to their corresponding yi from small to large.
  • step L21 two cluster center positions are randomly initialized based on the ordinates included in the straight line segment position set, which are the cluster center position 201 and the cluster center position 202 respectively.
  • assuming the maximum ordinate among the ordinates included in the straight line segment position set is ordinate 1 and the minimum ordinate is ordinate 2, each cluster center position is less than or equal to ordinate 1 and greater than or equal to ordinate 2.
  • Circles filled with grids as shown in Figure 20a represent cluster center locations 201 and cluster center locations 202.
  • the cluster center position may be any ordinate included in the straight line segment position set, or not any ordinate included in the straight line segment position set, such as the cluster center position 202 shown in FIG. 20a.
  • step L22 for each ordinate yi, d(yi, c1) and d(yi, c2) are calculated, where d(yi, c1) refers to the distance between the ordinate yi and the cluster center position 201, and d(yi, c2) refers to the distance between the ordinate yi and the cluster center position 202; in response to d(yi, c1) being less than or equal to d(yi, c2), yi is added to the first set, otherwise yi is added to the second set.
  • step L23 the cluster center position 201 is updated based on each ordinate included in the first set; the cluster center position 202 is updated based on each ordinate included in the second set.
  • step L24 the process returns to step L22 until the number of iterations reaches L, and then terminates.
  • the cluster center position 201 and the cluster center position 202 obtained after L iterations are the first ordinate and the second ordinate.
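  • A minimal sketch of the clustering of steps L21 to L24, assuming a plain two-center (k = 2) mean clustering of the ordinates with an arbitrary example iteration count L:

      import random

      def cluster_two_ordinates(ordinates, iterations=10):
          """Steps L21 to L24: cluster the line ordinates into two centers (first/second ordinate)."""
          lo, hi = min(ordinates), max(ordinates)
          c1, c2 = random.uniform(lo, hi), random.uniform(lo, hi)          # step L21: random init
          for _ in range(iterations):                                       # step L24: iterate L times
              first = [y for y in ordinates if abs(y - c1) <= abs(y - c2)]  # step L22: assignment
              second = [y for y in ordinates if abs(y - c1) > abs(y - c2)]
              if first:
                  c1 = sum(first) / len(first)                              # step L23: update centers
              if second:
                  c2 = sum(second) / len(second)
          return c1, c2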
  • the first implementation manner: in response to the position coordinates (top1, left1, bottom1, right1) of the candidate key content area being the same as the position coordinates (top2, left2, bottom2, right2) of the target image area, it is determined that the candidate key content area is the same as the target image area; otherwise, they are different.
  • the second implementation manner: in response to the absolute value of the difference between top2 of the target image area and top1 of the candidate key content area being less than or equal to the third threshold, the absolute value of the difference between bottom2 of the target image area and bottom1 of the candidate key content area being less than or equal to the fourth threshold, ratio 1 being greater than or equal to the fifth threshold, and ratio 2 being greater than or equal to the sixth threshold, it is determined that the candidate key content area is the same as the target image area; otherwise, they are different.
  • ratio 1 = the number of ordinates in the straight line segment position set whose absolute difference from top1 is less than or equal to the seventh threshold / half of the number of multi-frame video images obtained in step I21.
  • ratio 2 = the number of ordinates in the straight line segment position set whose absolute difference from bottom1 is less than or equal to the eighth threshold / half of the number of multi-frame video images obtained in step I21.
  • the values of the third threshold, the fourth threshold, the fifth threshold, the sixth threshold, the seventh threshold, and the eighth threshold may be determined based on the actual situation, which will not be repeated here.
  • Ratio 1 is represented by upLineProb
  • ratio 2 is represented by downLineProb.
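  • A minimal sketch of the second implementation manner (all threshold values below are placeholder assumptions, since the disclosure leaves them to the actual situation):

      def candidate_matches_target(top1, bottom1, top2, bottom2, ordinates, num_frames,
                                   third=5, fourth=5, fifth=0.5, sixth=0.5, seventh=3, eighth=3):
          """Compare the candidate key content area with the target image area (step L4)."""
          up_line_prob = sum(1 for y in ordinates if abs(y - top1) <= seventh) / (num_frames / 2)     # ratio 1
          down_line_prob = sum(1 for y in ordinates if abs(y - bottom1) <= eighth) / (num_frames / 2)  # ratio 2
          return (abs(top2 - top1) <= third and abs(bottom2 - bottom1) <= fourth
                  and up_line_prob >= fifth and down_line_prob >= sixth)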
  • the implementation manner of the second step I22 includes step J1, step J2, step J4, step L1, step L2, step L3 and step L5.
  • step J4 a target image area with the largest area is determined from at least one image area included in the target image.
  • step L5 in response to the candidate key content area being the same as the target image area, it is determined that the target image area is the key content area.
  • the implementation manner of the third step I22 includes step L1, step L2 and step L6.
  • step L6 the area enclosed by the first horizontal line whose ordinate is the first ordinate, the second horizontal line whose ordinate is the second ordinate, and the boundaries of the video image in the vertical direction is determined as the key content area.
  • the video presentation method includes the following steps M1 to M2.
  • step M1 an instruction for acquiring video information of the first video is sent to the server.
  • step M2 video information of the first video sent by the server is received.
  • the above-mentioned steps M1 and M2 may be specific implementations of the step S32.
  • the above-mentioned steps M1 and M2 may be executed before the step S31.
  • the video information of a video can be represented in various forms, for example, any one of a table, a structure, a number, a queue, a linked list, and a function.
  • the video information of the video includes the original information of the video.
  • the original information refers to data of the video before it is scaled, for example, including at least one of the following contents.
  • topRatio: 0.18671875, // the proportion of the black border on the top
  • widthRatio: 0.6805556, // the ratio of the width of the key content area
  • height refers to the original length of the video in the vertical direction
  • width refers to the original length of the video in the horizontal direction.
  • leftRatio refers to the length, in the horizontal direction, of the sub-background content area on the left side of the key content area / the sum of the lengths, in the horizontal direction, of the sub-background content areas on both sides of the key content area.
  • the background image supplemented in the horizontal direction in FIG. 5b is a sub-background content area.
  • the length in the horizontal direction of the sub-background content area on the left side of the key content area may be the length in the horizontal direction of the sub-background content area on the left side of the key content area as shown in FIG. 5b; the sum of the lengths of the sub-background content areas on both sides of the key content area in the horizontal direction may be the sum of the length in the horizontal direction of the sub-background content area on the left side of the key content area and the length in the horizontal direction of the sub-background content area on the right side of the key content area, as shown in FIG. 5b.
  • topRatio refers to the vertical length of the sub-background content area above the key content area in the vertical direction / the sum of the vertical length of the sub-background content areas on both sides of the key content area in the vertical direction.
  • the vertical length of the sub-background content area positioned above the key content area in the vertical direction may be the vertical length of the sub-background content area positioned above the key content area as shown in FIG. 5a.
  • the sum of the lengths in the vertical direction of the sub-background content areas located on both sides of the key content area in the vertical direction = the length in the vertical direction of the sub-background content area above the key content area as shown in FIG. 5a + the length in the vertical direction of the sub-background content area below the key content area as shown in FIG. 5a.
  • widthRatio refers to the ratio of the original length of the key content area in the horizontal direction to the original length of the video in the horizontal direction; heightRatio refers to the ratio of the original length of the key content area in the vertical direction to the original length of the video in the vertical direction.
  • the proportion of black borders on the right = 1 − the proportion of black borders on the left.
  • the proportion of black border on the right = the length in the horizontal direction of the sub-background content area on the right side of the key content area / the sum of the lengths in the horizontal direction of the sub-background content areas on both sides of the key content area.
  • based on widthRatio*width, the original length of the key content area in the horizontal direction can be obtained; based on heightRatio*height, the original length of the key content area in the vertical direction can be obtained.
  • width − the original length of the key content area in the horizontal direction = the sum of the lengths in the horizontal direction of the sub-background content areas on both sides of the key content area.
  • height − heightRatio*height = the sum of the lengths in the vertical direction of the sub-background content areas on both sides of the key content area in the vertical direction.
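  • A minimal sketch of how these fields can be combined, where the field names (width, height, topRatio, leftRatio, widthRatio, heightRatio) follow the original information described above, while the concrete width/height and the leftRatio/heightRatio values are hypothetical example values:

      video_info = {                      # hypothetical values for illustration only
          "width": 720, "height": 1280,   # original horizontal/vertical length of the video
          "topRatio": 0.18671875,         # proportion of the black border on the top
          "leftRatio": 0.5,
          "widthRatio": 0.6805556,        # key content area width / video width
          "heightRatio": 0.45,            # key content area height / video height
      }

      key_w = video_info["widthRatio"] * video_info["width"]     # original horizontal length of the key content area
      key_h = video_info["heightRatio"] * video_info["height"]   # original vertical length of the key content area
      side_w = video_info["width"] - key_w    # total horizontal length of the sub-background content areas
      side_h = video_info["height"] - key_h   # total vertical length of the sub-background content areas
      left_w = video_info["leftRatio"] * side_w   # horizontal length of the left sub-background content area
      top_h = video_info["topRatio"] * side_h     # vertical length of the top sub-background content area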
  • the above-mentioned original information may also include the relative position of the key content area and the background content area.
  • for example, the relative position may be characterized by the length in the horizontal direction of the sub-background content area located on the left side of the key content area, the length in the horizontal direction of the sub-background content area located on the right side of the key content area, the length in the vertical direction of the sub-background content area located above the key content area, and the length in the vertical direction of the sub-background content area located below the key content area.
  • the video information of the video may further include: after the video is zoomed, the display size of the video, the display size of the key content area, and the display size of the background content area.
  • the display size refers to the size of the current display, and the display size includes the length in the horizontal direction and the length in the vertical direction.
  • Fig. 21 is a flow chart of a video presentation method applied to a server according to an exemplary embodiment. The method includes steps S210 to S213.
  • step S210 a video acquisition instruction sent by the electronic device is received.
  • step S211 at least one video is obtained from the stored videos, and the at least one video includes the first video.
  • step S212 video information corresponding to at least one of the videos is obtained; the video information of one of the videos includes the display size of the video and the key content area in the video;
  • step S213 the at least one video and the video information of the at least one video are sent to the electronic device.
  • The video information of one of the videos serves as the basis for the electronic device to scale the video when the electronic device detects a video scaling operation performed on the video playback interface displaying the video, and the scaled video includes the key content area in the video.
  • before step S210, or before step S211, or before step S212, the following steps N11 to N12 are performed for each of the stored videos.
  • step S212 includes steps N11 to N12.
  • step N11 multiple frames of video images are acquired from the video.
  • step N12 the key content area included in the video is determined based on the multiple frames of the video images.
  • step N11 For the description of step N11, please refer to the description of step I21, and for the description of step N12, please refer to the description of step I22.
  • step N12 There are three implementation manners of step N12, and the first implementation manner includes steps J1 to J3. For the description of steps J1 to J3, reference may be made to corresponding parts, and details are not repeated here.
  • the second implementation manner includes step J1, step J2, step J4, step L1, step L2, step L3 and step L5.
  • the third implementation includes step L1, step L2 and step L6.
  • step N12 For the three implementation manners of step N12, reference may be made to the implementation manner of step I22, which will not be repeated here.
  • the video display method applied to the server also includes a method for determining the probability that the target image area is a key content area, such as steps K1 to K3, please refer to the corresponding description, which will not be repeated here.
  • Step S213 includes steps N21 to N22.
  • step N21 from the at least one video, determine the corresponding video whose first probability is greater than or equal to a second threshold.
  • step N22 video information of the at least one video and the corresponding video whose first probability is greater than or equal to the second threshold is sent to the electronic device.
  • the following steps L1, L2, L3 and N23 are performed for each video whose corresponding first probability is smaller than the second threshold.
  • step N23 the video information of the video in which the candidate key content area is the same as the target image area is sent to the electronic device.
  • in response to the key content area determined by the method of steps J1 to J3 being the same as the key content area determined by steps L1 to L3, the determined key content area has high accuracy, so that the corresponding video information can be sent to the electronic device.
  • FIG. 22 is a structural diagram of a video display apparatus applied to an electronic device according to an exemplary embodiment.
  • the electronic device includes: a first acquisition module 2001 , a second acquisition module 2002 , a zoom module 2003 and a display module 2004 .
  • a first acquisition module configured to receive a video zoom operation implemented on the video playback interface, and acquire operation information of the video zoom operation
  • a second obtaining module configured to obtain video information of the first video displayed in the video playback interface, where the video information at least includes the display size and key content areas of the first video;
  • a scaling module configured to perform scaling processing on the first video according to the operation information obtained by the first obtaining module and the video information of the first video obtained by the second obtaining module, so as to obtain a second video, where the second video includes the key content area;
  • a presentation module configured to display the second video obtained by the scaling module in the video playing interface in response to the video scaling operation.
  • the scaling module is specifically configured to:
  • a first determining unit configured to determine a scaling mode and scaling parameters of the first video according to the operation information and video information of the first video
  • a scaling unit configured to scale the first video according to the scaling manner and the scaling parameter determined by the first determining unit, to obtain the second video.
  • the operation information includes at least an operation type and an operation distance
  • the first determining unit is specifically configured to:
  • a first determination subunit configured to determine a reduction mode and a reduction parameter of the first video according to the operation distance and video information of the first video when the operation type is determined to be a reduction operation;
  • the shrinking method at least includes whether to shrink the key content area and the shrinking type;
  • the shrinking type includes shrinking in a preset direction or an overall shrinking;
  • the shrinking parameter includes at least a shrinking length in the preset direction;
  • a second determination subunit configured to determine an enlargement mode and an enlargement parameter of the first video according to the operation distance and the video information of the first video when the operation type is determined to be an enlargement operation;
  • the enlargement method at least includes whether to enlarge the key content area and the enlargement type;
  • the enlargement type includes a preset direction enlargement or an overall enlargement;
  • the enlargement parameter at least includes an enlargement length in the preset direction.
  • the operation distance is the projection distance of the video zoom operation in the preset direction
  • the first determination subunit is specifically configured as: a first determination submodule, a second determination submodule, a third determination sub-module, and a fourth determination sub-module.
  • the first determination submodule is configured to determine a background content area of the first video outside the key content area according to the key content area included in the video information.
  • the second determination submodule is configured to, in the case that the operation distance is not greater than the length of the background content area in the preset direction, determine that the reduction mode includes not reducing the key content area, reducing the background content area, and the reduction type of the background content area being a preset direction reduction or an overall reduction.
  • the third determination submodule is configured to determine a reduction length in the preset direction based on the operation distance when the reduction type of the background content area is reduction in a preset direction.
  • the fourth determination sub-module is configured to, when the reduction type of the background content area is overall reduction, determine, based on the operation distance and the size of the background content area, the reduced length in the preset direction and a first reduction ratio, where the first reduction ratio is the ratio of the length of the background content area in the preset direction to the length in the direction perpendicular to the preset direction.
  • the operation distance is the projection distance of the video zoom operation in the preset direction
  • the first determination subunit is specifically configured as: a fifth determination submodule, a sixth determination submodule, and a seventh determination submodule.
  • a fifth determining submodule is configured to determine, according to the video information, a background content area of the first video outside the key content area.
  • a sixth determining submodule configured to, in the case that the operating distance is greater than the length of the background content area in the preset direction, determine the shrinking mode including not shrinking the key content area, shrinking the background content area.
  • a seventh determination submodule is configured to determine, based on the size of the background content area, a reduced length of the background content area in the preset direction and a reduced length in a direction perpendicular to the preset direction.
  • the operation distance is the projection distance of the video zoom operation in the preset direction
  • the first determination subunit is specifically configured as: an eighth determination submodule, a ninth determination submodule, a tenth determination sub-module, and an eleventh determination sub-module.
  • the eighth determination sub-module is configured to determine, according to the video information, a background content area of the first video outside the key content area.
  • a ninth determination submodule is configured to determine, based on the size of the background content area, a reduced length of the background content area in the preset direction and a reduced length in a direction perpendicular to the preset direction.
  • a tenth determination sub-module configured to determine, based on the length of the background content area in the preset direction and the operation distance, in the case that the reduction type of the key content area is a preset direction reduction, the The reduced length of the key content area in the preset direction.
  • the eleventh determination sub-module is configured to, when the reduction type of the key content area is overall reduction, based on the length of the background content area in the preset direction, the operation distance and the key content The size of the area, determine the reduced length of the key content area in the preset direction and a second reduction ratio, where the second reduction ratio is the length of the key content area in the preset direction and the The ratio of the length of the direction perpendicular to the preset direction.
  • the second determination subunit is specifically configured as: a twelfth determination submodule, a thirteenth determination submodule, a fourteenth determination submodule, and a fifteenth determination submodule.
  • the twelfth determination submodule is configured to determine, according to the video information, the original size of the first video corresponding to the background content area outside the key content area.
  • the thirteenth determining submodule is configured to, in the case that the length of the key content area in the preset direction is equal to the original length of the key content area, determine the enlarging manner includes not enlarging the key content area , enlarge the background content area, and the enlargement type of the background content area is a preset direction enlargement or an overall enlargement.
  • the fourteenth determination submodule is configured to determine, based on the operation distance, an enlargement length of the background content area in the preset direction when the enlargement type of the background content area is a preset direction enlargement.
  • the fifteenth determination submodule is configured to, in the case that the enlargement type of the background content area is overall enlargement, based on the operation distance and the original size of the background content area, determine that the background content area is in the The enlargement length in the preset direction and the first enlargement ratio, where the first enlargement ratio is the ratio of the original length of the background content area in the preset direction to the original length of the direction perpendicular to the preset direction .
  • the operation distance is the projection distance of the video zoom operation in the preset direction;
  • the second determination subunit is specifically configured as: a sixteenth determination submodule, a seventeenth determination submodule, and an eighteenth determination submodule.
  • the sixteenth determination submodule is configured to, in the case that the length of the key content area in the preset direction is less than or equal to the difference between the original length of the key content area and the operation distance, determine that the enlarging manner includes enlarging the key content area and that the enlarging type of the key content area is a preset direction enlargement or an overall enlargement.
  • the seventeenth determination submodule is configured to determine the enlargement length of the key content area in the preset direction based on the operation distance when the enlargement type of the key content area is a preset direction enlargement.
  • the eighteenth determination sub-module is configured to, in the case that the enlargement type of the key content area is overall enlargement, determine, based on the operation distance and the size of the key content area, the enlargement length of the key content area in the preset direction and a second enlargement ratio, where the second enlargement ratio is the ratio of the length of the key content area in the preset direction to the length in the direction perpendicular to the preset direction.
  • the operation distance is the projection distance of the video zoom operation in the preset direction;
  • the second determination subunit is specifically configured as: a nineteenth determination sub-module, a twentieth determination sub-module, a twenty-first determination sub-module, a twenty-second determination sub-module, a twenty-third determination sub-module, and a twenty-fourth determination sub-module.
  • the nineteenth determination sub-module is configured to, in the case that the length of the key content area in the preset direction is greater than the difference between the original length of the key content area and the operation distance, determine, according to the video information, the original size corresponding to the background content area of the first video outside the key content area.
  • a twentieth determining submodule, configured to determine that the zoom-in mode includes zooming in on the key content area, the zoom-in type of the key content area being a preset direction zoom or an overall zoom, zooming in on the background content area, and the zoom-in type of the background content area being a preset direction enlargement or an overall enlargement.
  • the twenty-first determination sub-module is configured to, when the enlargement type of the key content area is a preset direction enlargement, determine the enlarged length of the key content area in the preset direction based on the length of the key content area in the preset direction and the original length in the preset direction.
  • the twenty-second determining sub-module is configured to, when the enlargement type of the key content area is overall enlargement, determine, based on the size of the key content area and the original length of the key content area in the preset direction, the enlarged length of the key content area in the preset direction and a third enlargement ratio, where the third enlargement ratio is the ratio of the enlarged length of the key content area in the preset direction to the enlarged length in the direction perpendicular to the preset direction.
  • the twenty-third determination sub-module is configured to determine, based on the operation distance, the enlargement length of the background content area in the preset direction when the enlargement type of the background content area is a preset direction enlargement .
  • the twenty-fourth determination sub-module is configured to, when the enlargement type of the background content area is overall enlargement, determine, based on the operation distance and the original size of the background content area, the enlargement length of the background content area in the preset direction and a fourth enlargement ratio, where the fourth enlargement ratio is the ratio of the original length of the background content area in the preset direction to the original length in the direction perpendicular to the preset direction.
  • the background content area includes a first sub-background content area and a second sub-background content area, and the first video sequentially includes the first sub-background content area, the key content area, and the second sub-background content area in the preset direction.
  • the reduction parameter further includes a reduction ratio of the first sub-background content area in the preset direction and a reduction ratio of the second sub-background content area in the preset direction, where the reduction ratio refers to the ratio of the length of the sub-background content area itself in the preset direction to the length of the background content area in the preset direction;
  • the enlargement parameter further includes an enlargement ratio of the first sub-background content area in the preset direction and an enlargement ratio of the second sub-background content area in the preset direction, where the enlargement ratio refers to the ratio of the original length of the sub-background content area itself in the preset direction to the original length of the background content area in the preset direction.
  • the device further includes:
  • a first determining module configured to determine an operation type corresponding to the video scaling operation
  • a one-key zoom-out module configured to, in response to the operation type being a one-key zoom out operation, remove the background content area other than the key content area in the first video to obtain the second video;
  • a one-key enlargement module configured to, in response to the operation type being a one-key enlargement operation, enlarge the key content area to the original size of the key content area and enlarge the background content area to the original size of the background content area, to obtain the second video.
  • the second obtaining module is specifically configured as:
  • a first acquiring unit configured to acquire multiple frames of video images from the first video
  • the second acquiring unit is configured to determine the key content area included in the first video based on the multiple frames of the video images.
  • the second obtaining unit is specifically configured as:
  • a first acquisition subunit configured to obtain a difference image of the two frames of video images for two adjacent frames of video images at any position in the multiple frames of the video images, so as to obtain at least one frame of difference image
  • the second obtaining subunit is configured to obtain a target image based on the at least one frame of difference image, where the pixel value at each position in the target image is the average of the pixel values at the corresponding position in the at least one frame of difference image;
  • the third determination subunit is configured to determine the target image area with the largest area among at least one image area included in the target image as the key content area.
  • the second obtaining subunit is specifically configured as:
  • a first acquisition sub-module configured to process the at least one frame of difference image respectively, to obtain a first image corresponding to each frame of difference image, where one frame of the first image includes multiple image regions that are not connected to each other, and at least one of the multiple image regions is a multi-connected region;
  • the second obtaining submodule is configured to obtain a second image based on the at least one frame of first image, where the pixel value at each position in the second image is the average of the pixel values at the corresponding position in the at least one frame of first image;
  • the third obtaining sub-module is configured to process the second image to obtain a target image, wherein at least one image area included in the target image is a single connected area.
  • the apparatus further includes:
  • a first conversion module configured to convert the image located in the target image area of the target image into a grayscale image
  • a third acquiring module configured to acquire the first number of pixels whose pixel values are greater than or equal to the first threshold in the grayscale image
  • the second determination module is configured to determine the ratio of the first number to the second number of pixels included in the grayscale image as the first probability.
  • the apparatus further includes: a first judgment sub-module configured to trigger the scaling module in response to the first probability being greater than or equal to a second threshold.
  • the device further includes:
  • a fourth acquiring module configured to acquire, in response to the first probability being less than the second threshold, the ordinates of the horizontal straight line segments respectively included in the multiple frames of the video images, so as to obtain a set of straight line segment positions;
  • a third determining module configured to determine a first ordinate and a second ordinate from among a plurality of ordinates included in the straight line segment position set;
  • the fourth determination module is configured to determine, as a candidate key content area, the area enclosed by the first horizontal line whose ordinate is the first ordinate, the second horizontal line whose ordinate is the second ordinate, and the boundaries of the video image in the vertical direction;
  • a second triggering module configured to trigger the scaling module in response to the candidate key content region being the same as the target image region.
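When the first probability is below the second threshold, the candidate key content area is built from the ordinates of detected horizontal line segments and only accepted if it matches the target image area. A rough sketch (choosing the minimum and maximum ordinate and using a pixel tolerance for the comparison are assumptions; the clustering step described later in the disclosure is omitted):

```python
from typing import Optional

def candidate_key_region(ordinates: list, frame_width: int,
                         target_box: tuple, tol: int = 5) -> Optional[tuple]:
    """Form a candidate key content area bounded by two horizontal lines and the
    vertical boundaries of the frame, and accept it only if it matches target_box."""
    if not ordinates:
        return None
    first_y, second_y = min(ordinates), max(ordinates)
    candidate = (first_y, 0, second_y, frame_width)   # (top, left, bottom, right)
    same = all(abs(a - b) <= tol for a, b in zip(candidate, target_box))
    return candidate if same else None
```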
  • the second obtaining unit is specifically configured as:
  • a third acquiring subunit configured to acquire the ordinates of the horizontal straight line segments respectively included in the multiple frames of the video images, so as to obtain a set of straight line segment positions
  • a fourth determining subunit configured to determine a first ordinate and a second ordinate from the plurality of ordinates included in the set of straight line segment positions
  • a fifth determination subunit configured to determine, as the key content area, the area enclosed by the first horizontal line whose ordinate is the first ordinate, the second horizontal line whose ordinate is the second ordinate, and the boundaries of the video image in the vertical direction.
  • the second obtaining module is specifically configured as:
  • a first sending module configured to send an instruction to acquire the video information of the first video to the server
  • the first receiving module is configured to receive video information of the first video sent by the server.
  • FIG. 23 is a structural diagram of a video presentation apparatus applied to a server according to an exemplary embodiment.
  • the video display apparatus includes: a receiving module 2101 , a fifth obtaining module 2102 , a sixth obtaining module 2103 , and a first sending module 2104 .
  • the second receiving module is configured to receive the video acquisition instruction sent by the electronic device
  • a fifth obtaining module configured to obtain at least one video from the stored videos, the at least one video including the first video
  • the sixth acquisition module is configured to acquire video information corresponding to at least one of the videos; the video information of one video includes: the display size of the video and the key content area in the video;
  • a second sending module configured to send the at least one video and the video information of the at least one video to the electronic device
  • the video information of a video is the basis on which the electronic device performs scaling processing on the video when it detects a video scaling operation performed on the video playback interface displaying the video, and the video obtained after the scaling processing includes the key content area in the video.
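Server side, the flow is: receive the acquisition instruction, look up at least one stored video together with its video information, and send both back. A minimal sketch (all container and field names are assumptions made for illustration):

```python
from dataclasses import dataclass

@dataclass
class VideoInfo:
    width: int               # display size of the video
    height: int
    key_content_box: tuple   # key content area: (top, left, bottom, right)

def handle_video_acquisition(instruction: dict, store: dict, info_index: dict) -> list:
    """Return at least one stored video and its video information for the instruction."""
    requested = instruction.get("video_ids") or list(store)[:1]
    response = []
    for video_id in requested:
        if video_id in store:
            response.append({"video": store[video_id], "info": info_index[video_id]})
    return response
```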
  • the device further includes:
  • a seventh acquisition module configured to acquire multiple frames of video images from the video
  • the fifth determining module is configured to determine the key content area included in the video based on the multiple frames of the video images.
  • the fifth determining module is specifically configured as:
  • a third obtaining unit configured to obtain, for any two adjacent frames of video images among the multiple frames of video images, a difference image of the two frames of video images, so as to obtain at least one frame of difference image;
  • the second determination unit is configured to obtain a target image based on the at least one frame of difference image, where the pixel value at each position in the target image is the average of the pixel values at the corresponding position in the at least one frame of difference image;
  • the third determining unit is configured to determine the target image area with the largest area among at least one image area included in the target image as the key content area.
  • the second determining unit is specifically configured as:
  • a fourth obtaining subunit configured to process the at least one frame of the difference image respectively, to obtain a first image corresponding to each frame of the difference image, where one frame of the first image includes multiple image regions that are not connected to each other, and at least one of the multiple image regions is a multiply-connected region;
  • a fifth obtaining subunit configured to obtain a second image based on at least one frame of the first image, where the pixel value at each position in the second image is the average of the pixel values at the corresponding position in the at least one frame of the first image;
  • the sixth obtaining subunit is configured to process the second image to obtain a target image, wherein each of at least one image area included in the target image is a simply connected region.
  • the device further includes:
  • a second conversion module configured to convert an image in the target image located in the target image area into a grayscale image
  • an eighth acquisition module configured to acquire the first number of pixels whose pixel values are greater than or equal to the first threshold in the grayscale image
  • the sixth determination module is configured to determine the ratio of the first number to the second number of pixels included in the grayscale image as the first probability.
  • the first sending module is specifically configured as:
  • a fourth determining unit configured to determine, from the at least one video, the corresponding video whose first probability is greater than or equal to a second threshold
  • a first sending unit configured to send the at least one video and video information of the corresponding video whose first probability is greater than or equal to the second threshold to the electronic device.
  • the device further includes:
  • a ninth obtaining module configured to obtain the ordinates of the horizontal straight line segments respectively included in the multiple frames of the video images in response to the first probability being less than the second threshold, so as to obtain a set of straight line segment positions;
  • a seventh determining module configured to determine a first ordinate and a second ordinate from the plurality of ordinates included in the straight line segment position set
  • the eighth determination module is configured to determine, as a candidate key content area, the area enclosed by the first horizontal line whose ordinate is the first ordinate, the second horizontal line whose ordinate is the second ordinate, and the boundaries of the video image in the vertical direction;
  • the third sending module is configured to send the video information of the video in which the candidate key content area is the same as the target image area to the electronic device.
  • the fifth determining module is specifically configured as:
  • a fourth acquiring unit configured to acquire the ordinates of the horizontal straight line segments respectively included in the multiple frames of the video images, so as to obtain a set of straight line segment positions
  • a fifth determining unit configured to determine a first ordinate and a second ordinate from among the plurality of ordinates included in the straight line segment position set;
  • the sixth determining unit is configured to determine, as the key content area, the area enclosed by the first horizontal line whose ordinate is the first ordinate, the second horizontal line whose ordinate is the second ordinate, and the boundaries of the video image in the vertical direction.
  • an embodiment of the present disclosure further provides a video presentation system, where the video presentation system includes: a server and at least one electronic device.
  • the interaction process between the electronic device 22 and the server 21 is described below in conjunction with the first application scenario and the second application scenario involved in the implementation environment disclosed in FIG. 2 .
  • the electronic device 22 sends a video acquisition instruction to the server 21; the server 21 receives the video acquisition instruction sent by the electronic device 22 and, based on the video acquisition instruction, obtains from the stored videos at least one video corresponding to the video acquisition instruction.
  • the electronic device 22 receives at least one video sent by the server, displays the first video in the at least one video in the video playing interface based on the video display requirement, and parses the first video to obtain video information of the first video.
  • the electronic device 22 sends a video acquisition instruction to the server 21 , and the server 21 receives the video acquisition instruction sent by the electronic device 22 .
  • at least one video corresponding to the video acquisition instruction and video information corresponding to the at least one video are acquired from the stored videos based on the video acquisition instruction.
  • FIG. 24 is a block diagram illustrating an electronic device according to an exemplary embodiment.
  • the electronic device includes but is not limited to components such as the input unit 241 , the first memory 242 , the display unit 243 , and the processor 244 .
  • FIG. 24 is only an example of an implementation and does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown in the figure, combine some components, or have a different arrangement of components.
  • the input unit 241 may be configured to receive information input by the user, such as a zoom operation.
  • the input unit 241 may include a touch panel 2411 and other input devices 2412 .
  • the touch panel 2411, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on the touch panel 2411 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program (for example, drive the video scaling function in the processor 244).
  • the touch panel 2411 may include two parts, a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends them to the processor 244.
  • the touch panel 2411 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 241 may also include other input devices 2412 .
  • other input devices 2412 may include, but are not limited to, one or more of physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, joysticks, and the like.
  • the first memory 242 may be used to store software programs and modules, and the processor 244 executes various functional applications and data processing of the electronic device by running the software programs and modules stored in the first memory 242 .
  • the first memory 242 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data ( For example, the length of the key content area of the first video in the vertical direction, the length of the background content area in the vertical direction, etc.).
  • the first memory 242 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the display unit 243 may be used to display information input by the user or information provided to the user (eg, display a video) and various menus of the electronic device.
  • the display unit 243 may include a display panel 2431.
  • the display panel 2431 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • the touch panel 2411 can cover the display panel 2431; when the touch panel 2411 detects a touch operation on or near it, it transmits the operation to the processor 244 to determine the type of the touch event, and the processor 244 then provides a corresponding visual output on the display panel 2431 according to the type of the touch event.
  • the touch panel 2411 and the display panel 2431 can serve as two independent components to realize the input and output functions of the electronic device 22, but in some embodiments the touch panel 2411 and the display panel 2431 can be integrated to realize the input and output functions of the electronic device.
  • the processor 244 is the control center of the electronic device; it uses various interfaces and lines to connect the various parts of the entire electronic device, and performs various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the first memory 242 and calling data stored in the first memory 242, so as to monitor the electronic device as a whole.
  • the processor 244 may include one or more processing units; for example, the processor 244 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 244.
  • the electronic device also includes a power source 245 (such as a battery) for supplying power to the various components; the power source may be logically connected to the processor 244 through a power management system, so as to implement functions such as charging management, discharging management and power consumption management through the power management system.
  • the electronic device may further include a camera, a Bluetooth module, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi (Wireless Fidelity) module, a network unit, an interface unit, and the like.
  • Electronic devices provide users with wireless broadband Internet access through network units, such as access to servers.
  • the interface unit is an interface for connecting an external device with an electronic device.
  • external devices may include wired or wireless headset ports, external power (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting devices with identification modules, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • the interface unit may be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic device, or may be used to transfer data between the electronic device and the external device.
  • the processor 244 included in the electronic device may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure.
  • the processor 244 included in the electronic device has the following functions: receiving a video zooming operation performed on the video playing interface and acquiring operation information of the video zooming operation; acquiring the video information of the first video displayed in the video playing interface, where the video information includes at least the display size and the key content area of the first video; performing scaling processing on the first video according to the operation information and the video information of the first video to obtain a second video, where the second video includes the key content area; and in response to the video zooming operation, displaying the second video in the video playing interface.
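The overall client-side behaviour, reduced to the preset (vertical) direction only, can be sketched as follows; the policy of consuming the background area before the key content area comes from the disclosure, while the layout type, its field names and everything else here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class VideoLayout:
    key_height: float          # current height of the key content area (preset direction)
    background_height: float   # current height of the background content area

def apply_zoom(layout: VideoLayout, op_type: str, distance: float,
               original_key_height: float) -> VideoLayout:
    """Scale the first video without losing the key content area.

    Reduction consumes the background first and only then the key content area;
    enlargement restores the key content area to its original length before
    re-growing the background."""
    key, bg = layout.key_height, layout.background_height
    if op_type == "reduce":
        bg_cut = min(distance, bg)
        key_cut = min(distance - bg_cut, key)      # only once the background is exhausted
        return VideoLayout(key - key_cut, bg - bg_cut)
    key_growth = min(distance, original_key_height - key)
    return VideoLayout(key + key_growth, bg + (distance - key_growth))
```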
  • FIG. 25 is a block diagram illustrating a server according to an exemplary embodiment.
  • the server includes, but is not limited to, a processor 251 , a second memory 252 , a network interface 253 , an I/O controller 254 and a communication bus 255 .
  • the structure of the server shown in FIG. 25 does not constitute a limitation on the server; the server may include more or fewer components than those shown in FIG. 25, combine some components, or have a different arrangement of components.
  • the processor 251 is the control center of the server; it uses various interfaces and lines to connect the various parts of the entire server, and performs various functions of the server and processes data by running or executing the software programs and/or modules stored in the second memory 252 and calling data stored in the second memory 252, so as to monitor the server as a whole.
  • the processor 251 may include one or more processing units; for example, the processor 251 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 251.
  • the processor 251 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure;
  • the second memory 252 may include memory, such as a high-speed random access memory (RAM) 2521 and a read-only memory (ROM) 2522, and may also include a mass storage device 2525, such as at least one disk storage device.
  • the server may also include hardware required by other services.
  • the above-mentioned second memory 252 is used for storing the above-mentioned executable instructions of the processor 251 .
  • the above-mentioned processor 251 has the following functions: receiving a video acquisition instruction sent by the electronic device; acquiring at least one video from the stored videos, where the at least one video includes the first video; acquiring video information corresponding to the at least one video, where the video information of a video includes: the display size of the video and the key content area in the video; and sending the at least one video and the video information of the at least one video to the electronic device; the video information of a video is the basis on which the electronic device performs zoom processing on the video when it detects a video zoom operation performed on the video playback interface displaying the video, and the video obtained after the zoom processing includes the key content area in the video.
  • a wired or wireless network interface 253 is configured to connect the server to the network.
  • the processor 251, the second memory 252, the network interface 253 and the I/O controller 254 can be connected to each other through a communication bus 255; the communication bus can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like.
  • the server may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for implementing the above-described video display method.
  • an embodiment of the present disclosure provides a storage medium including instructions, for example, the first memory 242 including instructions, and the instructions can be executed by the processor 244 of the electronic device to complete the above method.
  • the storage medium may be a non-transitory computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, and optical disk data storage devices, etc.
  • an embodiment of the present disclosure provides a storage medium including instructions, for example, a second memory 252 including instructions, and the above-mentioned instructions can be executed by the processor 251 of the server to complete the above-mentioned method.
  • the storage medium may be a non-transitory computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, and optical disk data storage devices, etc.
  • a non-volatile computer-readable storage medium is also provided, which can be directly loaded into the internal memory of a computer, such as the above-mentioned first memory 242, and contains software code; after the computer program is loaded and executed by the computer, the steps of any of the foregoing embodiments of the video display method applied to an electronic device can be implemented.
  • a non-volatile computer-readable storage medium is also provided, which can be directly loaded into the internal memory of a computer, such as the above-mentioned second memory 252, and contains software code; after the computer program is loaded and executed by the computer, the steps of any of the foregoing embodiments of the video display method applied to the server can be implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application provides a video display method and a video display apparatus. When a video scaling operation performed on a video playback interface is received, operation information of the video scaling operation and video information of a first video displayed in the video playback interface are acquired, and the first video is scaled based on the operation information and the video information. Because the video information includes key content area information, the second video obtained by scaling the first video includes the complete key content area; that is, the key content displayed by the second video is complete and no key content is missing. This avoids the loss of key content during video scaling caused by the limited display space of the video playback interface, and improves the video display effect during video scaling.

Description

视频展示方法和视频展示装置
相关申请的交叉引用
本公开基于2020年10月30日提交的申请号为“202011191485.9”中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此并入本公开作为参考。
技术领域
本公开涉及视频处理技术领域,尤其涉及一种视频展示方法和视频展示装置。
背景技术
用户可以通过电子设备浏览视频,以获得相关信息,电子设备在播放视频的过程中,可以同时播放视频的相关信息,例如视频的评论。用户可以通过对视频进行缩放操作,实现电子设备的显示屏幕在播放视频的过程中同时播放视频的相关信息,以满足用户同时观看视频和视频的相关信息的需求。
发明内容
本公开提供一种视频展示方法和视频展示装置。本公开的技术方案如下:
根据本公开的一些实施例,提供一种视频展示方法,应用于电子设备,所述视频展示方法包括:接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数;将所述第一视频按照所述缩放方式以及所述缩放参数进行缩放,得到所述第二视频。
在一些实施例中,所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,包括:根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数;将所述第一视频按照所述缩放方式以及所述缩放参数进行缩放,得到所述第二视频。
在一些实施例中,所述操作信息至少包括操作类型以及操作距离;所述根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数,包括:在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数;所述缩小方式至少包括是否缩小所述关键内容区域以及缩小类型;所述缩小类型包括预设方向缩小或整体缩小;所述缩小参数至少包括在所述预设方向上的缩小长度;在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数;所述放大方式至少包括是否放大所述关键内容区域以及放大类型;所述放大类型包括预设方向放大或整体放大;所述放大参数至少包括在所述预设方向上的放大长度。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数,包括:根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;在所述操作距离不大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域以及所述背景内容区域的缩小类型为预设方向缩小或整体缩小;在所述背景内容区域的缩小类型为预设方向缩小的情况下,基于所述操作距离确定在所述预设方向上的缩小长度;在所述背景内容区域的缩小类型为整体缩小的情况下,基于所述操作距离以及所述背景内容区域的尺寸,确定第一缩小比例以及在所述预设方向上的缩小长度,所述第一缩小比例为所述背景内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数,包括:根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域以及缩小所述背景内容区域;基于所述背景内容区域的尺寸,确定所述背 景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数,包括:根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括缩小所述关键内容区域、所述关键内容区域的缩小类型为预设方向缩小或整体缩小以及缩小所述背景内容区域;基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度;在所述关键内容区域的缩小类型为预设方向缩小的情况下,基于所述背景内容区域在所述预设方向的长度以及所述操作距离,确定所述关键内容区域在所述预设方向上的缩小长度;在所述关键内容区域的缩小类型为整体缩小的情况下,基于所述背景内容区域在所述预设方向的长度、所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的缩小长度以及第二缩小比例,所述第二缩小比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数,包括:在所述关键内容区域在所述预设方向的长度等于所述关键内容区域的原始长度的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;确定所述放大方式包括不放大所述关键内容区域、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第一放大比例,所述第一放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数,包括:在所述关键内容区域在所述预设方向的长度小于或等于所述关键内容区域的原始长度与所述操作距离的差值的情况下,确定所述放大方式包括放大所述关键内容区域以及所述关键内容区域的放大类型为预设方向放大或整体放大;在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述关键内容区域在所述预设方向上的放大长度;在所述关键内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的放大长度以及第二放大比例,所述第二放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数,包括:在所述关键内容区域在所述预设方向的长度大于所述关键内容区域的原始长度与所述操作距离的差值的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;确定所述放大方式包括放大所述关键内容区域、所述关键内容区域的放大类型为预设方向放大或整体放大、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述关键内容区域在所述预设方向的长度以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度;在所述关键内容区域的放大类型为整体放大的情况下,基于所述关键内容区域的尺寸以及所述关键内容区域在所述预设方向上的原始长度确定所述关键内容区域在所述预设方向上的放大长度以及第三放大比例,所述第三放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值;在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容 区域在所述预设方向上的放大长度以及第四放大比例,所述第四放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
在一些实施例中,所述背景内容区域包括第一子背景内容区域以及第二子背景内容区域,所述第一视频在所述预设方向上依次包括所述第一子背景内容区域、所述关键内容区域以及所述第二子背景内容区域;所述缩小参数还包括所述第一子背景内容区域在所述预设方向上的缩小比例,以及所述第二子背景内容区域在所述预设方向上的缩小比例,所述缩小比例是指自身在所述预设方向上的长度与所述背景内容区域在所述预设方向上的长度的比值;所述放大参数还包括所述第一子背景内容区域在所述预设方向上的放大比例,以及所述第二子背景内容区域在所述预设方向上的放大比例,所述放大比例是指自身在所述预设方向上的原始长度与所述背景内容区域在所述预设方向上原始长度的比值。
在一些实施例中,所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,包括:确定所述操作信息对应的操作类型;响应于所述操作类型为一键缩小操作,去除所述第一视频中所述关键内容区域之外的背景内容区域,得到所述第二视频;响应于所述操作类型为一键放大操作,将所述关键内容区域放大至所述关键内容区域的原始尺寸,将所述背景内容区域放大至所述背景内容区域的原始尺寸,以得到所述第二视频。
在一些实施例中,所述获取所述视频播放界面中展示的第一视频的视频信息步骤包括:从所述第一视频中获取多帧视频图像;基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域。
在一些实施例中,所述基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域步骤包括:针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
在一些实施例中,所述基于所述至少一帧差异图像,获得目标图像步骤包括:分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,一帧所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
在一些实施例中,所述应用于电子设备的视频展示方法还包括:将所述目标图像中位于所述目标图像区域中的图像转换成灰度图;获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
在一些实施例中,所述应用于电子设备的视频展示方法还包括:响应于所述第一概率大于或等于第二阈值,执行所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频步骤。
在一些实施例中,所述应用于电子设备的视频展示方法还包括:响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围城的区域,确定为候选关键内容区域;响应于所述候选关键内容区域与所述目标图像区域相同,执行所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频步骤。
在一些实施例中,所述基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域步骤包括:获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围城的区域,确定为所述关键内容区域。
在一些实施例中,所述获取所述视频播放界面中展示的第一视频的视频信息步骤包括:向服务器发送获取所述第一视频的视频信息的指令;接收服务器发送的所述第一视频的视频信息。
根据本公开的一些实施例,提供一种视频展示方法,用于服务器,包括:接收电子设备发送的获取 视频指令;从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频;获取至少一个所述视频对应的视频信息;一个所述视频的视频信息包括;所述视频的展示尺寸以及所述视频中的关键内容区域;将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备;其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
在一些实施例中,针对已存储的每一所述视频,所述视频展示方法还包括:从所述视频中获取多帧视频图像;基于多帧所述视频图像,确定所述视频包含的所述关键内容区域。
在一些实施例中,所述基于多帧所述视频图像,确定所述视频包含的所述关键内容区域步骤包括:针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
在一些实施例中,所述基于所述至少一帧差异图像,获得目标图像步骤包括:分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,一帧所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
在一些实施例中,所述应用于服务器的视频展示方法还包括:将所述目标图像中位于所述目标图像区域中的图像转换成灰度图;获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
在一些实施例中,所述将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备步骤包括:从所述至少一个视频中,确定对应的所述第一概率大于或等于第二阈值的视频;将所述至少一个视频以及对应的所述第一概率大于或等于所述第二阈值的视频的视频信息发送至所述电子设备。
在一些实施例中,所述应用于服务器的视频展示方法还包括:响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围城的区域,确定为候选关键内容区域;将所述候选关键内容区域与所述目标图像区域相同的视频的视频信息发送至所述电子设备。
在一些实施例中,所述基于多帧所述视频图像,确定所述视频包含的所述关键内容区域步骤包括:获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围城的区域,确定为所述关键内容区域。
根据本公开的一些实施例,提供一种视频展示装置,用于电子设备,包括:第一获取模块,被配置为接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;第二获取模块,被配置为获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;缩放模块,被配置为根据所述第一获取模块获得的所述操作信息以及所述第二获取模块或的所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,所述第二视频包括所述关键内容区域;展示模块,被配置为响应于所述视频缩放操作,在所述视频播放界面中展示所述缩放模块得到的所述第二视频。
在一些实施例中,所述缩放模块具体被配置为:第一确定单元,被配置为根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数;缩放单元,被配置为根据所述第一确定单元确定的所述缩放方式以及所述缩放参数对所述第一视频进行缩放,得到所述第二视频。
在一些实施例中,所述操作信息至少包括操作类型以及操作距离,所述第一确定单元具体被配置为: 第一确定子单元,被配置为在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数;所述缩小方式至少包括是否缩小所述关键内容区域以及缩小类型;所述缩小类型包括预设方向缩小或整体缩小;所述缩小参数至少包括在所述预设方向上的缩小长度;第二确定子单元,被配置为在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数;所述放大方式至少包括是否放大所述关键内容区域以及放大类型;所述放大类型包括预设方向放大或整体放大;所述放大参数至少包括在所述预设方向上的放大长度。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离,所述第一确定子单元具体被配置为:第一确定子模块,被配置为根据所述视频信息中包括的关键内容区域,确定所述第一视频在所述关键内容区域之外的背景内容区域;第二确定子模块,被配置为在所述操作距离不大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域以及所述背景内容区域的缩小类型为预设方向缩小或整体缩小;第三确定子模块,被配置为在所述背景内容区域的缩小类型为预设方向缩小的情况下,基于所述操作距离确定在所述预设方向上的缩小长度;第四确定子模块,被配置为在所述背景内容区域的缩小类型为整体缩小的情况下,基于所述操作距离以及所述背景内容区域的尺寸,确定第一缩小比例以及在所述预设方向上的缩小长度,所述第一缩小比例为所述背景内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离,所述第一确定子单元具体被配置为:第五确定子模块,被配置为根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;第六确定子模块,被配置为在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域;第七确定子模块,被配置为基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离,所述第一确定子单元具体被配置为:第八确定子模块,被配置为第三确定子模块被配置为根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;第九确定子模块,被配置为基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度;第十确定子模块,被配置为在所述关键内容区域的缩小类型为预设方向缩小的情况下,基于所述背景内容区域在所述预设方向的长度以及所述操作距离,确定所述关键内容区域在所述预设方向上的缩小长度;第十一确定子模块,被配置为在所述关键内容区域的缩小类型为整体缩小的情况下,基于所述背景内容区域在所述预设方向的长度、所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的缩小长度以及第二缩小比例,所述第二缩小比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述第二确定子单元具体被配置为:第十二确定子模块,被配置为根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;第十三确定子模块,被配置为在所述关键内容区域在所述预设方向的长度等于所述关键内容区域的原始长度的情况下,确定所述放大方式包括不放大所述关键内容区域、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;第十四确定子模块,被配置为在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;第十五确定子模块,被配置为在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第一放大比例,所述第一放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述第二确定子单元具体被配置为:第十六确定子模块,被配置为在所述关键内容区域在所述预设方向的长度小于或等于所述关键内容区域的原始长度与所述操作距离的差值的情况下,确定所述放大方式包括放大所述关键 内容区域以及所述关键内容区域的放大类型为预设方向放大或整体放大;第十七确定子模块,被配置为在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述关键内容区域在所述预设方向上的放大长度;第十八确定子模块,被配置为在所述关键内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的放大长度以及第二放大比例,所述第二放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一些实施例中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述第二确定子单元具体被配置为:第十九确定子模块,被配置为在所述关键内容区域在所述预设方向的长度大于所述关键内容区域的原始长度与所述操作距离的差值的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;第二十确定子模块,被配置为确定所述放大方式包括放大所述关键内容区域、所述关键内容区域的放大类型为预设方向放大或整体放大、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;第二十一确定子模块,被配置为在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述关键内容区域在所述预设方向的长度以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度;第二十二确定子模块,被配置为在所述关键内容区域的放大类型为整体放大的情况下,基于所述关键内容区域的尺寸以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度以及第三放大比例,所述第三放大比例为所述关键内容区域在所述预设方向上的放大长度和与所述预设方向垂直的方向的放大长度的比值;第二十三确定子模块,被配置为在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;第二十四确定子模块,被配置为在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第四放大比例,所述第四放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
在一些实施例中,所述背景内容区域包括第一子背景内容区域以及第二子背景内容区域,所述第一视频在所述预设方向上依次包括所述第一子背景内容区域、所述关键内容区域以及所述第二子背景内容区域;所述缩小参数还包括所述第一子背景内容区域在所述预设方向上的缩小比例,以及所述第二子背景内容区域在所述预设方向上的缩小比例,所述缩小比例是指自身在所述预设方向上的长度与所述背景内容区域在所述预设方向上的长度的比值;所述放大参数还包括所述第一子背景内容区域在所述预设方向上的放大比例,以及所述第二子背景内容区域在所述预设方向上的放大比例,所述放大比例是指自身在所述预设方向上的原始长度与所述背景内容区域在所述预设方向上原始长度的比值。
在一些实施例中,所述用于电子设备的视频展示装置还包括:第一确定模块,被配置为确定所述视频缩放操作对应的操作类型;一键缩小模块,被配置为响应于所述操作类型为一键缩小操作,去除所述第一视频中所述关键内容区域之外的背景内容区域,得到所述第二视频;一键放大模块,被配置为响应于所述操作类型为一键放大操作,将所述关键内容区域放大至所述关键内容区域的原始尺寸,将所述背景内容区域放大至所述背景内容区域的原始尺寸,以得到所述第二视频。
在一些实施例中,所述第二获取模块具体被配置为:第一获取单元,被配置为从所述第一视频中获取多帧视频图像;第二获取单元,被配置为基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域。
在一些实施例中,所述第二获取单元具体被配置为:第一获取子单元,被配置为针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;第二获取子单元,被配置为基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;第三确定子单元,被配置为将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
在一些实施例中,所述第二获取子单元具体被配置为:第一获取子模块,被配置为分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,一帧所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;第二获取子模块,被配置为基于 至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;第三获取子模块,被配置为对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
在一些实施例中,所述用于电子设备的视频展示装置还包括:第一转换模块,被配置为转换将所述目标图像中位于所述目标图像区域中的图像转换成灰度图;第三获取模块,被配置为获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;第二确定模块,被配置为将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
在一些实施例中,所述用于电子设备的视频展示装置还包括:第一触发模块,被配置为响应于所述第一概率大于或等于第二阈值,触发所述缩放模块。
在一些实施例中,所述用于电子设备的视频展示装置还包括:第四获取模块,被配置为响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;第三确定模块,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;第四确定模块,被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围城的区域,确定为候选关键内容区域;第二触发模块,被配置为响应于所述候选关键内容区域与所述目标图像区域相同,触发所述缩放模块。
在一些实施例中,所述第二获取单元具体被配置为:第三获取子单元,被配置为获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;第四确定子单元,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;第五确定子单元,被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围城的区域,确定为所述关键内容区域。
在一些实施例中,所述第二获取模块具体被配置为:第一发送模块,被配置为向服务器发送获取所述第一视频的视频信息的指令;第一接收模块,被配置为接收服务器发送的所述第一视频的视频信息。
根据本公开的一些实施例,提供一种视频展示装置,用于服务器,包括:第二接收模块,被配置为接收电子设备发送的获取视频指令;第五获取模块,被配置为从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频;第六获取模块,被配置为获取至少一个所述视频对应的视频信息;一个所述视频的视频信息包括;所述视频的展示尺寸以及所述视频中的关键内容区域;第二发送模块,被配置为将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备;其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
在一些实施例中,所述用于服务器的视频展示装置还包括:第七获取模块,被配置为从所述视频中获取多帧视频图像;第五确定模块,被配置为基于多帧所述视频图像,确定所述视频包含的所述关键内容区域。
在一些实施例中,所述第五确定模块具体被配置为:第三获取单元,被配置为针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;第二确定单元,被配置为基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;第三确定单元,被配置为将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
在一些实施例中,所述第二确定单元具体被配置为:第四获取子单元,被配置为分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,一帧所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;第五获取子单元,被配置基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;第六获取子单元,被配置对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
在一些实施例中,所述用于服务器的视频展示装置还包括:第二转换模块,被配置将所述目标图像中位于所述目标图像区域中的图像转换成灰度图;第八获取模块,被配置获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;第六确定模块,被配置将所述第一数目与所述灰度图包含的各像素 的第二数目的比值,确定为所述第一概率。
在一些实施例中,所述第一发送模块具体被配置为:第四确定单元,被配置为从所述至少一个视频中,确定对应的所述第一概率大于或等于第二阈值的视频;第一发送单元,被配置为将所述至少一个视频以及对应的所述第一概率大于或等于所述第二阈值的视频的视频信息发送至所述电子设备。
在一些实施例中,所述用于服务器的视频展示装置还包括:第九获取模块,被配置为响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;第七确定模块,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;第八确定模块,被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围城的区域,确定为候选关键内容区域;第三发送模块,被配置为将所述候选关键内容区域与所述目标图像区域相同的视频的视频信息发送至所述电子设备。
在一些实施例中,所述第五确定模块具体被配置为:第四获取单元,被配置为获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;第五确定单元,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;第六确定单元,被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围城的区域,确定为所述关键内容区域。
根据本公开的一些实施例,提供一种电子设备,包括:处理器;用于存储所述处理器可执行指令的第一存储器;其中,所述处理器被配置为执行所述指令,以实现如上述第一方面所述的视频展示方法。
根据本公开的一些实施例,提供一种服务器,包括:处理器;用于存储所述处理器可执行指令的第二存储器;其中,所述处理器被配置为执行所述指令,以实现如上述第二方面所述的视频展示方法。
根据本公开的一些实施例,提供一种视频展示系统,包括:如第五方面所述的服务器以及至少一个如第四方面所述的电子设备。
根据本公开的一些实施例,提供一种非易失性计算机可读存储介质,响应于所述非易失性计算机可读存储介质中的指令由电子设备执行,所述电子设备能够执行如上述第一方面所述的视频展示方法。
根据本公开的一些实施例,提供一种非易失性计算机可读存储介质,响应于所述非易失性计算机可读存储介质中的指令由服务器执行,所述服务器能够执行如上述第二方面所述的视频展示方法。
根据本公开的一些实施例，提供一种计算机程序产品，可直接加载到计算机的内部存储器中，并含有软件代码，该计算机程序经由计算机载入并执行后能够实现第一方面所示的视频展示方法。
根据本公开的一些实施例，提供一种计算机程序产品，可直接加载到计算机的内部存储器中，并含有软件代码，该计算机程序经由计算机载入并执行后能够实现第二方面所示的视频展示方法。
在本公开实施例中,通过在接收实施于视频播放界面的视频缩放操作的情况下,获取视频缩放操作的操作信息以及视频播放界面中所展示的第一视频的视频信息,来对视频进行缩放处理,通过视频信息中包括关键内容区域信息,使得缩放后的第二视频包括关键内容区域,避免受限于视频播放界面的展示空间导致视频缩放过程中的关键内容缺失,改善视频缩放过程中的视频展示效果。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理,并不构成对本公开的不当限定。
图1a至图1b是根据一示例性实施例示出的一种本公开实施例涉及的相关技术的示意图;
图2是根据一示例性实施例示出的一种实施环境的架构图;
图3是根据一示例性实施例示出的一种应用于电子设备的视频展示方法的流程图;
图4是根据一示例性实施例示出的显示界面中视频缩放按键的显示方式示意图;
图5a至图5d根据一示例性实施例示出的一种视频缩小过程示意图;
图6是根据一示例性实施例示出的背景内容区域与关键内容区域的位置关系的示意图;
图7a至图7d是根据一示例性实施例示出的一种确定背景内容区域缩小长度的方式的示意图;
图8是根据一示例性实施例示出的另一种确定背景内容区域缩小长度的方式的示意图;
图9a至图9b是根据一示例性实施例示出的第一视频的一种缩小方式的示意图;
图10a至图10b是根据一示例性实施例示出的第一视频的另一种缩小方式的示意图;
图11是根据一示例性实施例示出的第一视频的又一种缩小方式的示意图;
图12是根据一示例性实施例示出的第一视频的又一种缩小方式的示意图;
图13是根据一示例性实施例示出的第一视频的又一种缩小方式的示意图;
图14a至图14b是根据一示例性实施例示出的多帧差异图像的示意图;
图15a至15d根据一示例性实施例示出的对差异图像进行处理得到第一图像的示意图;
图16a至图16b是根据一示例性实施例示出的对第二图像进行处理的目的示意图;
图17a至图17c是根据一示例性实施例示出的目标轮廓区域与真正的关键内容区域相对位置示意图;
图18是根据一示例性实施例示出的经过边缘检测得到的三帧第三图像的示意图;
图19是根据一示例性实施例示出的经过直线检测处理得到的第四图像的示意图;
图20a至图20c是根据一示例性实施例示出的聚类过程示意图;
图21是根据一示例性实施例示出的一种应用于服务器的视频展示方法的流程图;
图22是根据一示例性实施例示出的一种应用于电子设备的视频展示装置的结构图;
图23是根据一示例性实施例示出的一种应用于服务器的视频展示装置的结构图;
图24是根据一示例性实施例示出的一种电子设备的框图;
图25是根据一示例性实施例示出的一种服务器的框图。
具体实施方式
为了使本领域普通人员更好地理解本公开的技术方案,下面将结合附图,对本公开实施例中的技术方案进行清楚、完整地描述。
需要说明的是,本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。
在对本公开实施例提供的视频展示方法、视频展示装置、电子设备、服务器以及存储介质进行详细介绍之前,这里先对本公开实施例涉及的相关技术和实施环境进行简单介绍。
首先对本公开实施例涉及的相关技术进行简单介绍。
视频播放类客户端可以运行于电子设备中,电子设备在运行视频播放类客户端的过程中,可以展示视频播放界面以及内容显示界面。视频播放界面用于显示视频,内容显示界面用于显示与视频相关的内容。
示例性的,与视频相关的内容可以包括:针对视频的用户评论内容,该视频的剧集列表,与视频相关的其他视频的链接,以及与视频相关的其他视频的评论内容中的一种或多种。
本公开实施例提及的视频播放类客户端可以为应用程序客户端,或网页版客户端。
视频播放类客户端（后续称为客户端）具备视频可缩放播放功能，视频可缩放播放功能使得视频播放类客户端在展示视频的同时可以展示与视频相关内容。目前在视频缩放过程中，会出现视频包含的关键内容缺失的情况。
下面举例对此种情况进行说明。
图1a至图1b是根据一示例性实施例示出的一种本公开实施例涉及的相关技术的示意图。
图1a至图1b是以视频播放界面显示的视频为第一视频,内容显示界面显示针对第一视频的用户评论内容为例进行说明的。
在图1a中,电子设备的显示屏幕全屏展示视频播放界面,由于视频播放界面显示有第一视频,所以电子设备的显示屏幕全屏展示第一视频,该第一视频包含有背景内容区域11以及关键内容区域12。
示例性的,第一视频中的背景内容区域11可以为用黑色填充的图像,或者,高斯模糊图像。图1a中以背景内容区域11为用黑色填充的图像为例进行说明。
示例性的,第一视频中的关键内容区域12为第一视频中真正有画面内容的区域。
由于图1a中电子设备的显示屏幕全屏展示视频播放界面,所以未显示内容显示界面。响应于用户需要看到内容显示界面,例如,用户需要查看用户评论内容,那么,用户可以针对视频播放界面执行缩小操作。如图1b所示,执行向上滑动的视频缩小操作。
客户端检测到缩小操作后会缩小第一视频,第一视频缩小后,相应的视频播放界面会相应缩小。
如图1b所示,为视频播放界面10经过缩小后的示意图。视频播放界面和第一视频缩小后,电子设备会显示出内容显示界面13。
示例性的,第一视频在缩小时,同时会缩小第一视频在垂直方向上的长度和在水平方向上的长度。第一视频在缩小的过程中,关键内容区域也会缩小。
示例性的,假设将缩小后的第一视频称为第二视频,图1b所示用白色虚线框出的区域14为第二视频所在区域,位于第二视频外侧的黑色图像为客户端补入的背景图像10。
如图1a所示,假设第一视频在垂直方向上的长度为A1,水平方向上的长度为B1;第一视频中关键内容区域在垂直方向上的长度为A2,水平方向上的长度为B2。如图1b所示,第二视频在垂直方向上的长度为A3(A3小于A1),水平方向上的长度为B3(B3小于B1);第二视频中关键内容区域在垂直方向上的长度为A4(A4小于A2),水平方向上的长度为B4(B4小于B2)。
在对第一视频进行缩小的过程中,受限于视频播放界面的展示空间,可能出现如图1b所示的第二视频包含的关键内容缺失的情况。
其次对本公开实施例涉及的实施环境进行简单介绍。
图2是根据一示例性实施例示出的一种实施环境的架构图。下述视频展示方法可以应用于该实施环境中,该实施环境包括:服务器21以及至少一个电子设备22。
示例性的,电子设备22与服务器21可以通过无线网络建立连接并通信。
示例性的,电子设备22可以为任何一种可与用户通过键盘、触摸板、触摸屏、遥控器、语音交互或手写设备等一种或多种方式进行人机交互的电子产品,例如,手机、平板电脑、掌上电脑、个人计算机、可穿戴设备、智能电视等。
示例性的，电子设备22中有客户端，响应于该客户端为应用程序客户端，那么电子设备22可以安装有该客户端；响应于客户端为网页版客户端，那么电子设备22可以通过浏览器展示网页版客户端。
示例性的，本公开实施例提供的应用于电子设备的视频展示装置可为客户端中的插件。
示例性的,服务器21可以是一台服务器,也可以是由多台服务器组成的服务器集群,或者,是一个云计算服务中心。服务器21可以包括处理器、存储器以及网络接口等。
示例性的,服务器21存储有用户上传的一个或多个视频,服务器21可以将一个或视频发送至电子设备22中。电子设备22可以显示一个或多个视频。
图2仅仅是一种示例,图2示出了3个电子设备22,实际应用中电子设备22的数量可以按照实际需求设定,本公开实施例不对电子设备22的数目进行限定。
本实施环境涉及两种应用场景。
在第一种应用场景中,电子设备22用于运行视频类客户端,电子设备可以从服务器21获得视频,电子设备自身获得视频的视频信息,并执行本公开实施例提供的视频展示方法。服务器21用于向运行有视频类客户端的电子设备22发送视频。
在第二种应用场景中,电子设备22用于运行视频类客户端,电子设备可以从服务器21获得视频以及视频的视频信息,并执行本公开实施例提供的视频展示方法。服务器21用于向运行有视频类客户端的电子设备22发送视频以及视频的视频信息。
本领域技术人员应能理解上述电子设备和服务器仅为举例,其他现有的或今后可能出现的电子设备或服务器如可适用于本公开,也应该包含在本公开保护范围以内,并在此以引用方式包含于此。
下面结合附图对本公开提供的技术方案进行介绍。
图3是根据一示例性实施例示出的一种应用于电子设备的视频展示方法的流程图,该方法在实施过程中包括以下步骤S31至步骤S34。
在步骤S31中,接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息。
在步骤S32中,获取所述视频播放界面中展示的第一视频的视频信息。
其中,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域。
在步骤S33中,根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频。
其中,所述第二视频包括所述关键内容区域。
在步骤S34中,响应于所述视频缩放操作,在所述视频播放界面中展示所述第二视频。
电子设备22运行的客户端至少包括视频播放界面和内容显示界面。示例性的,视频播放界面与内容显示界面属于同一窗口;示例性的,视频播放界面与内容显示界面属于不同窗口。
示例性的,第一视频至少包括关键内容区域12。
示例性的,本公开实施例并不限定视频播放界面与内容显示界面的相对位置关系,例如,视频播放界面位于内容显示界面的左侧,或者,视频播放界面位于内容显示界面的右侧,或者,视频播放界面位于内容显示界面的上方,或者,视频播放界面位于内容显示界面的下方。
下面对本公开实施例提供的视频缩放操作、第一视频、第一视频的展示尺寸以及关键内容区域进行说明。
示例性的,在步骤S31中实施于视频播放界面的视频缩放操作的操作方式有多种。
本公开实施例提供但不限于以下三种。
第一种方式,视频缩放操作的操作方式为按键操作。
示例性的,视频缩放按键可以为电子设备中的物理按键,如键盘中的一个或多个按键。
示例性的,视频缩放按键可以为显示界面(显示界面包括视频播放界面和内容显示界面中的至少一个)中的虚拟按键,如显示界面显示有视频缩放按键。图4是根据一示例性实施例示出的显示界面中视频缩放按键的显示方式示意图。如图4所示,显示于显示界面的固定位置的视频缩小按键41以及视频放大按键43。
示例性的,视频缩小按键41以及视频放大按键43还可显示在悬浮在显示界面的显示菜单42中。显示菜单42中具有可移动和可隐藏的特性。
示例性的,响应于检测到针对显示界面的第一预设操作,则显示之前处于隐藏状态的显示菜单42,响应于检测到针对显示界面的第二预设操作,则隐藏处于之前处于显示状态的显示菜单42。
示例性的,第一预设操作与第二预设操作可以相同,可以不同。
例如,响应于检测到针对显示界面的“触按”操作,则确定显示之前处于隐藏状态的显示菜单42;响应于再次检测到针对显示界面的“触按”操作,则隐藏菜单42。
示例性的,响应于检测到针对显示菜单42的“拖动”操作,显示菜单42可悬浮于显示界面的不同位置,以避免显示菜单42遮挡第一视频的关键内容区域。
视频缩放按键在显示界面上的显示方式可以包括多种方式,本公开实施例的图4提供了两种位于显示界面上方的显示方式,但本公开实施例并不局限于图4所示的显示方式,任意一种显示方式均在本公开实施例的保护范围内。
第二种方式,视频缩放操作的操作方式为滑动操作。
例如,该滑动操作为“向上滑动”或“向下滑动”。可以理解的是,本公开实施例提供了两种“向上滑动”和“向下滑动”的滑动操作,但本公开实施例并不局限于上述滑动操作,任意一种滑动操作均在本公开实施例的保护范围内,如滑动操作可以为“画圆”或“画对勾”等。
第三种方式,视频缩放操作的操作方式为语音操作。
示例性的，该语音操作可以为缩放视频播放界面显示的第一视频的操作，如“缩小视频”，或者，增大内容显示界面的操作，如“显示更多用户评论内容”。
示例性的,语音操作中的语音指令可以携带第一视频需要缩放的长度,例如,语音指令为:“视频缩短5cm”。
示例性的,在步骤S31中,第一视频为未经过缩放处理的原始视频或已经经过一次或多次缩放处理的视频。
例如,响应于第一视频为未经过缩放处理的原始视频,第一视频可为用户上传至服务器21的视频, 或,第一视频为服务器21接收到用户上传的视频后,针对视频进行处理得到的视频。
示例性的,响应于第一视频为未经过缩放处理的原始视频,视频播放界面为显示屏幕的整个区域,即电子设备全屏显示第一视频,如图1a所示。
示例性的,响应于第一视频为未经过缩放处理的视频,视频播放界面为显示屏幕的局部区域。
示例性的,在步骤S32中,第一视频的展示尺寸至少包括第一视频在垂直方向上的长度和在水平方向上的长度。第一视频的关键内容区域为第一视频中真正有画面内容的区域。
示例性的，上述第一视频的关键内容区域是指关键内容区域位于第一视频中的位置区域。
示例性的,在步骤S33中第二视频包含的关键内容区域的尺寸可能与第一视频包含的关键内容区域的尺寸相同,或者,第二视频在垂直方向上由关键内容区域组成。
示例性的,关键内容区域的尺寸包括关键内容区域在垂直方向上的长度和关键内容区域在水平方向上的长度。
上述第二视频包含的关键内容区域的尺寸可能与第一视频包含的关键内容区域的尺寸相同是指,第二视频中关键内容区域在水平方向上的长度与第一视频中关键内容区域在水平方向上的长度相同,且,第二视频中关键内容区域在垂直方向上的长度与第一视频中关键内容区域在垂直方向上的长度相同。
示例性的,在对第一视频缩小的过程中,对第一视频中除关键内容区域以外的区域进行缩小,保持关键内容区域尺寸不变,所以得到的第二视频中的关键内容区域中的关键内容不会缺失,改善视频缩放过程中的视频展示效果。
示例性的,响应于第二视频在垂直方向上由关键内容区域组成,第二视频中的关键内容区域在垂直方向上的长度与第一视频中关键内容区域在垂直方向上的长度相同,或者,第二视频中关键内容区域在垂直方向上的长度小于第一视频中关键内容区域在垂直方向上的长度。第二视频在水平方向上可能包括背景内容区域或不包括背景内容区域。
在本公开实施例提供的视频展示方法中,通过在接收实施于视频播放界面的视频缩放操作的情况下,获取视频缩放操作的操作信息以及视频播放界面中所展示的第一视频的视频信息,来对视频进行缩放处理,通过视频信息中包括关键内容区域信息,使得缩放后的第二视频包括关键内容区域,避免受限于视频播放界面的展示空间导致视频缩放过程中的关键内容缺失,改善视频缩放过程中的视频展示效果。
在一可选实施例中,步骤S33在具体实现过程中包括步骤A1至步骤A2。
在步骤A1中,根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数。
在步骤A2中,将所述第一视频按照所述缩放方式以及所述缩放参数进行缩放,得到所述第二视频。
示例性的,第一视频的缩放方式为整体缩放或预设方向缩放。
其中,预设方向可以为水平方向或垂直方向,响应于预设方向为水平方向,预设方向缩放为缩放水平方向上的长度;响应于预设方向为垂直方向,预设方向缩放为缩放垂直方向上的长度。整体缩放是指缩放垂直方向上的长度以及水平方向上的长度。
相应的,响应于第一视频的缩放方式为整体缩放,则缩放参数包括在预设方向上的缩放长度,以及缩放比例,响应于第一视频的缩放方式为预设方向缩放,则缩放参数包括在预设方向上的缩放长度。
其中,上述缩放比例是指在预设方向上的长度和在与预设方向垂直的方向上的长度的比值。
可以理解的是,响应于第一视频包括背景内容区域和关键内容区域,响应于背景内容区域和关键内容区域作为整体进行缩小,可能会出现响应于第一视频被缩小的较小,会导致第一视频的关键内容区域尺寸过小,用户观看不清楚关键内容,从而影响视频缩放过程中的视频展示效果。
如图5a至图5d根据一示例性实施例示出的一种视频缩小过程示意图,由于图5a中电子设备的显示界面全屏展示第一视频。响应于用户需要看到内容显示界面13,例如,用户需要查看用户评论内容,那么,用户可以针对视频播放界面执行缩小操作。如图5b所示,执行向上滑动的滑动操作。
如图5b所示,为第一视频第一次缩小后的示意图。第一视频缩小后,电子设备会显示出内容显示界面13。
从图5b中可以看出第一视频在缩小时,是背景内容区域和关键内容区域作为一个整体进行缩小的,假设第一视频缩小至第二视频时,在垂直方向上的长度由长度B1缩小至长度A2,假设第一视频在垂直 方向上的长度为长度B1、在水平方向上的长度为长度B2;那么,第一视频缩小至第二视频时,在水平方向上的长度由B2缩小至B2*A2/B1。
示例性的,图5b所示用白色虚线框出的区域14为第一视频缩小后的第二视频所在区域,视频播放界面10中白色虚线框出的区域14外显示的黑色图像为客户端补入的背景图像。
响应于用户需要继续缩小视频播放界面展示的视频,可以再次执行缩小操作,如图5c所示,为第二次执行缩小操作后,电子设备展示的视频播放界面10和内容显示界面13的示意图。图5c中视频播放界面展示的视频继续缩小,内容显示界面13继续增大。可以理解的是,由于内容显示界面13增大,内容显示界面13可以显示更多内容,示例性的,可以不更新第二显示区域显示的内容,可以放大第二显示区域显示的已有内容。
假设图5c中第一视频已经缩小到最小,响应于再次检测到缩小操作时,仅更新显示内容显示界面中的内容,如图5d所示。图5d中内容显示界面显示的内容相对于图5c中内容显示界面显示的内容而言,已经更新了。
综上,第一视频在缩放时是背景内容区域以及关键内容区域作为一个整体进行缩放的,响应于背景内容区域以及关键内容区域作为一个整体进行缩小,会导致第一视频中的关键内容区域的尺寸过小,使得用户观看不清楚关键内容区域展示的关键内容的情况,影响视频缩放过程中的视频展示效果。
为了防止在缩小第一视频时,背景内容区域和关键内容区域作为一个整体进行缩放,导致缩小得到的第二视频中关键内容区域的尺寸过小,使得用户观看不清楚关键内容的情况,在缩小第一视频的过程中,将背景内容区域和关键内容区域作为两个独立的个体分别进行缩小,例如,可以缩小背景内容区域,但不缩小关键内容区域,或者,在将背景内容区域缩小完毕后,可以缩小关键内容区域。
下面对上述技术方案进行说明。在一可选实施例中,实施于视频播放界面的视频缩放操作的操作信息至少包括操作类型和操作距离。
在一可选实现方式中,操作距离的确定方式有多种,本公开实施例提供但不限于以下两种方式。
第一种实现方式:预先设置一次视频缩放操作对应的固定长度,该固定长度即为操作距离。
示例性的,视频缩放操作可以为:按键操作、滑动操作、未携带缩放长度的语音操作中的任一种。
响应于预先设置一次视频缩放操作对应的固定长度为1cm,那么,每次执行视频缩放操作后,均固定缩放第一视频在预设方向上的固定长度,例如1cm。
示例性的,固定长度可以基于实际情况而定,本公开实施例并不对固定长度的具体值进行限定。
第二种实现方式:基于视频缩放操作,确定操作距离。
示例性的,响应于视频缩放操作为滑动操作,则操作距离可以基于滑动轨迹的长度计算得到;响应于缩放操作为语音操作,则操作距离为语音指令携带的长度,假设该语音指令为“缩小视频5cm”,则操作距离为5cm;响应于缩放操作为按键操作,则操作距离基于触按视频缩放按键的时长和/或力度计算得到。
示例性的,操作距离=滑动轨迹在预设方向上的投影距离,或者,操作距离=滑动轨迹的长度,或者,操作距离=滑动轨迹在预设方向上的投影距离*第一预设比例,或者,操作距离=滑动轨迹的长度*第一预设比例。
示例性的,第一预设比例可以小于1,或者,大于1的任意数值。
示例性的,第一预设比例可以基于用户的操作习惯自动变更,例如,用户在对视频进行缩小时,经常执行多次缩小操作,才能够将预设方向上的背景内容区域缩小完,说明用户的动作保守,例如,滑动长度较小,或者,触按视频缩小按键的力度和/或时长较小,那么,电子设备22可以设置第一预设比例大于1,第一预设比例的具体值可以经过多次统计确定。
示例性的,响应于用户在对视频进行缩小时,经常执行一次缩小操作后,视频缩小至最小尺寸,说明用户动作较大,例如,滑动长度较大,或者,触按视频缩小按键的力度和/或时长较大,那么,电子设备22可以设置第一预设比例小于1。示例性的,第一预设比例的具体值可以经过多次统计确定。
示例性的,操作类型可以为缩小操作或放大操作。
示例性的,步骤A1的具体实现过程包括步骤A11和步骤A12。
在步骤A11中,在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数;所述缩小方式至少包括是否缩小所述关键内容区域以及缩小类型;所述缩小类型包括预设方向缩小或整体缩小;所述缩小参数至少包括在所述预设方向上的缩小长度。
示例性的,在缩小第一视频的过程中,可能不会缩小关键内容区域,所以不会出现第一视频缩小至第二视频后,由于第二视频中关键内容区域的尺寸过小,出现观看不清楚关键内容的情况,影响视频缩放过程中的视频展示效果。
在步骤A12中,在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数;所述放大方式至少包括是否放大所述关键内容区域以及放大类型;所述放大类型包括预设方向放大或整体放大;所述放大参数至少包括在所述预设方向上的放大长度。
在一可选实施例中,在确定第一视频的缩小方式和缩小参数时,需要综合考虑对第一视频缩放操作的操作距离以及第一视频的视频信息,这样可以提升的用户的操作感。
下面对步骤A11的具体实现过程进行介绍,步骤A11可能涉及三种情况,下面对这三种情况进行说明。
步骤A11的第一种情况包括步骤B1至步骤B4。
在步骤B1中,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域。
示例性的,确定所述第一视频在所述关键内容区域之外的背景内容区域包括:确定背景内容区域在预设方向上的长度、背景内容区域在与预设方向垂直的方向上的长度和所述背景内容区域位于第一视频的位置信息中的至少一个。
在步骤B2中,在所述操作距离不大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域以及所述背景内容区域的缩小类型为预设方向缩小或整体缩小。
在步骤B3中,在所述背景内容区域的缩小类型为预设方向缩小的情况下,基于所述操作距离确定在所述预设方向上的缩小长度。
示例性的,背景内容区域在所述预设方向上的缩小长度等于所述操作距离。
在步骤B4中,在所述背景内容区域的缩小类型为整体缩小的情况下,基于所述操作距离以及所述背景内容区域的尺寸,确定第一缩小比例以及在所述预设方向上的缩小长度,所述第一缩小比例为所述背景内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
下面对背景内容区域进行介绍。
示例性的,所述背景内容区域的表现形式有多种,本公开实施例提供但不限于以下两种:
第一种:背景内容区域中包含的背景内容为黑底图像或彩色图像。
示例性的,第一视频包含的背景内容是在将第一视频上传至服务器之前,由用户添加的;或,服务器接收到第一视频后,由服务器添加的。
第二种:背景内容区域中包含的背景内容为高斯模糊处理后的图像。
示例性的,背景内容区域中的背景内容还可以包括第一视频对应的视频内容,例如视频标题,或第一视频对应的旁白或字幕。
下面对背景内容区域与关键内容区域的位置进行介绍。
示例性的,背景内容区域与关键内容区域的相对位置包括多种,本公开实施例提供但不限于:背景内容区域位于关键内容区域的上方,和/或,背景内容区域位于关键内容区域的下方,和/或,背景内容区域位于关键内容区域的左侧,和/或,背景内容区域位于关键内容区域的右侧。
示例性的,如图1a所示,背景内容区域11包括第一子背景内容区域和第二子背景内容区域,一个子背景内容区域位于关键内容区域12的上方,一个子背景内容区域位于关键内容区域12的下方。
下面对本公开实施例提出的第一视频的背景内容区域以及关键内容区域关系进行说明。
第一视频包括背景内容区域和关键内容区域。即,在垂直方向上,第一视频的长度等于背景内容区 域的长度与关键内容区域的长度的和;在水平方向上,第一视频的长度等于背景内容区域的长度与关键内容区域的长度的和。
由于第一视频包括背景内容区域和关键内容区域,因而可基于第一视频的展示尺寸和第一视频中关键内容区域的展示尺寸,确定第一视频是否包括背景内容区域;响应于第一视频包括背景内容区域,可以基于第一视频中关键内容区域在第一视频的位置区域以及第一视频的展示尺寸,确定背景内容区域在第一视频中的位置区域。
示例性的,可以基于第一视频在预设方向上长度以及关键内容区域在预设方向上长度,确定第一视频在预设方向上是否存在背景内容区域。且响应于第一视频在预设方向上存在背景内容区域,则可以计算得到背景内容区域在预设方向上的长度。
其中,响应于第一视频在预设方向上的长度大于关键内容区域在预设方向上的长度,则确定第一视频在预设方向上存在背景内容区域。背景内容区域在预设方向上的长度等于第一视频在预设方向上的长度减去关键内容区域在预设方向上的长度。响应于第一视频在预设方向上的长度等于关键内容区域在预设方向上的长度,则确定第一视频在预设方向上不存在背景内容区域。
下面以具体示例介绍第一视频在预设方向上长度和关键内容区域在预设方向上的长度。
响应于视频缩放操作为针对视频播放界面在预设方向上的滑动操作,假设预设方向为如图1b所示的垂直方向。那么,在检测到针对视频播放界面的向上滑动操作后,响应于第一视频在垂直方向上的长度为10cm,关键内容区域在垂直方向上的长度为6cm,则确定第一视频在垂直方向上存在背景内容区域,且背景内容区域在垂直方向上的长度为:10cm-6cm=4cm。
响应于第一视频在垂直方向上的长度为10cm,关键内容区域在垂直方向上的长度为10cm,则确定第一视频在垂直方向上不存在背景内容区域。
本公开实施例提及的“预设方向”可以为垂直方向或水平方向。
下面结合背景内容区域与关键内容区域的位置关系,对本公开实施例中缩小背景内容区域的方式进行介绍,缩小背景内容区域的方式包括但不限于以下两种情况。
第一种情况:背景内容区域整体位于关键内容区域的一侧,如背景内容区域整体位于关键内容区域的上方,或,背景内容区域整体位于关键内容区域的下方,或,背景内容区域整体位于关键内容区域的左侧,或,背景内容区域整体位于关键内容区域的右侧。缩小背景内容区域的方式即为缩小背景内容区域这一个整体。
图6是根据一示例性实施例示出的背景内容区域与关键内容区域的位置关系示意图。图6以背景内容区域11整体位于关键内容区域12的下方为例进行说明。
在图6左侧图中电子设备22全屏显示第一视频。
假设背景内容区域在垂直方向上的缩小长度小于背景内容区域在垂直方向上的长度,图6右侧图示出了在对背景内容区域在预设方向上缩小后得到的第二视频。
通过图6左侧图和右侧图的比对可以看出,第一视频和第二视频的关键内容区域的尺寸相同,第二视频的背景内容区域的尺寸小于第一视频背景内容区域的尺寸。
第二种情况:背景内容区域位于关键内容区域的两侧。假设,背景内容区域包括第一子背景内容区域以及第二子背景内容区域。在预设方向上,依次包括第一子背景内容区域、关键内容区域以及第二子背景内容区域。
示例性的,在第二种情况中涉及两种缩小背景内容区域的方式。
第一种方式:先确定关键内容区域一侧的第一子背景内容区域的缩小长度,并进行缩小,待第一子背景内容区域缩小完后,响应于仍需要缩小第一视频,则再确定位于关键内容区域另一侧的第二子背景内容区域的缩小长度,并进行缩小。
图7a至图7d是根据一示例性实施例示出的一种确定背景内容区域缩小长度的方式的示意图。图7a至图7d是以背景内容区域位于关键内容区域的上方和下方为例进行说明,示例性的,可以先对位于关键内容区域下侧的第一子背景内容区域进行缩小,待第一子背景内容区域缩小完后,响应于仍需要缩小第一视频,则在缩小关键内容区域上侧的第二子背景内容区域。或者,先缩小第二子背景内容区域,待第二子背景内容区域缩小完后,响应于仍需要缩小第一视频,则缩小第一子背景内容区域。
在图7a中全屏展示第一视频;响应于检测到缩小操作,则确定位于关键内容区域下侧的第一子背景内容区域的缩小长度,并基于该缩小长度对位于第一子背景内容区域进行缩小,如图7b所示,为缩小完毕于第一子背景内容区域后的示意图;响应于关键内容区域12下侧的第一子背景内容区域11缩小完后,响应于仍需要缩小第一视频,则确定位于关键内容区域上侧的第二子背景内容区域的缩小长度,并基于该缩小长度对第二子背景内容区域进行缩小,如图7c所示,为缩小完毕第一子背景内容区域后的示意图。
图7d中随着位于关键内容区域上侧的第二子背景内容区域的缩小,关键内容区域在显示界面中的位置也随之上升,关键内容区域在显示界面中位置上升的高度等于关键内容区域上侧的第二子背景内容区域的缩小长度。
第二种方式:确定位于关键内容区域两侧的背景内容区域分别对应的缩小长度,并同时缩小位于关键内容区域两侧的背景内容区域。
在一可选实现方式中,确定第一子背景内容区域的缩小长度和确定第二子背景内容区域的缩小长度的方式包括但不限于以下两种。
第一种实现方式,确定第一子背景内容区域在所述预设方向上的缩小比例。确定第二子背景内容区域在所述预设方向上的缩小比例。
所述缩小比例是指自身在所述预设方向上的长度与所述背景内容区域在所述预设方向上的长度的比值。
其中,背景内容区域在预设方向上的长度=第一子背景内容区域在预设方向上的长度+第二子背景内容区域在预设方向上的长度。
示例性的,还包括步骤:基于背景内容区域在预设方向上的缩小长度、第一子背景内容区域在所述预设方向上的缩小比例以及第二子背景内容区域在所述预设方向上的缩小比例,确定第一子背景内容区域在预设方向上的缩小长度和第二子背景内容区域在预设方向上的缩小长度。
示例性的,计算公式可以为:第一子背景区域在预设方向上的缩小长度=第一子背景区域的缩小比例*背景内容区域在预设方向上的缩小长度。
其中,背景内容区域在预设方向上的缩小长度即为第一子背景区域在预设方向上的缩小长度与第二子背景区域在预设方向上的缩小长度的和。
例如,图8左图,背景内容区域在预设方向上的缩小长度=第一子背景区域在预设方向上的缩小长度+第二子背景区域在预设方向上的缩小长度=2cm+3cm=5cm。
下面以具体示例,介绍确定第一子背景内容区域在预设方向上的缩小长度和第二子背景内容区域在预设方向上的缩小长度的过程。
假设，第一子背景内容区域在预设方向上的长度为3cm，第二子背景内容区域在预设方向上的长度为6cm，则第一子背景内容区域的缩小比例等于3cm/(6+3)cm=1/3，第二子背景内容区域的缩小比例等于6cm/(6+3)cm=2/3。
假设背景内容区域在预设方向上的缩小长度总共为3cm,则第一子背景内容区域在预设方向上缩小长度为3*1/3=1cm,第二子背景内容区域在预设方向上缩小长度为3*2/3=2cm。
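A minimal sketch of this proportional split (illustrative only; lengths are assumed to share one unit, e.g. cm):

```python
def split_background_reduction(first_len: float, second_len: float,
                               total_reduction: float) -> tuple:
    """Split the background reduction length between the first and second sub-background
    content areas in proportion to their lengths in the preset direction."""
    background_len = first_len + second_len
    first_part = (first_len / background_len) * total_reduction
    second_part = (second_len / background_len) * total_reduction
    return first_part, second_part

# Worked example from the text: 3 cm and 6 cm sub-areas, 3 cm total reduction -> 1 cm and 2 cm.
print(split_background_reduction(3, 6, 3))
```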
在本公开实施例中背景内容区域的第一子背景内容区域和第二子背景内容区域在预设方向上按照比例缩小,改善了视频缩放过程中的视频展示效果,使得视频展示效果更加符合用户的观看习惯,提高用户感受。
第二种实现方式,确定第一子背景内容区域的缩小长度和第二子背景内容区域的缩小长度相同。
下面以具体示例对第一子背景内容区域和第二子背景内容区域的同时缩小的过程进行说明。
图8是根据一示例性实施例示出的另一种确定背景内容区域缩小长度的方式的示意图。图8是以第一子背景内容区域位于关键内容区域的下方,第二子背景内容区域位于关键内容区域的上方为例进行说明。
在图8左侧图中全屏展示第一视频;响应于检测到缩小操作,确定第一子背景内容区域的缩小长度和第二子背景内容区域的缩小长度,第一子背景内容区域和第二子背景内容区域均缩小后的视频如图8右侧图所示。
在步骤B3中背景内容区域的缩小类型可以为预设方向缩小，即在预设方向上缩小背景内容区域，在与预设方向垂直的方向上，保持背景内容区域的尺寸不变。
下面以具体示例介绍步骤B3的实现过程。图9a至图9b是根据一示例性实施例示出的第一视频的一种缩小方式的示意图。如图9a所述,假设背景内容区域在预设方向上长度为7.5cm(0.5+7=7.5),背景内容区域在预设方向上的缩小长度为5cm,关键内容区域在预设方向上的长度为4cm。图9b为背景内容区域在预设方向上缩小掉5cm后的第二视频,背景内容区域在与预设方向垂直的方向上并未缩短。图9b中关键内容区域在预设方向上的长度4cm不变。
在步骤B4中背景内容区域的缩小类型为整体缩小,即在预设方向上缩小背景内容区域,以及在与预设方向垂直的方向上缩小背景内容区域。
示例性的,第一缩小比例=背景内容区域在预设方向的长度/背景内容区域在与所述预设方向垂直的方向的长度。
示例性的,还包括:基于第一缩小比例以及背景内容区域在预设方向的缩小长度,确定背景内容区域在与预设方向的垂直方向的缩小长度。
示例性的,可以基于公式:背景内容区域在与预设方向垂直的方向上的缩小长度=背景内容区域在预设方向的缩小长度/第一缩小比例。
下面以具体示例介绍步骤B4的实现过程。图10a至图10b是根据一示例性实施例示出的第一视频的另一种缩小方式的示意图。
如图10a所示,假设背景内容区域在预设方向(假设为垂直方向)上的长度为7cm,背景内容区域在与预设方式垂直的方向(假设为水平方向)上的长度为6cm,第一缩小比例=7/6,背景内容区域在垂直方向上的缩小长度为5cm,那么,背景内容区域在水平方向上的缩小长度=5/(7/6)。
图10b为缩小背景内容区域在预设方向上5cm,且,在水平方向上缩小5/(7/6)后的显示界面,图10b中虚线框区域表示为背景内容区域整体缩小后的所在区域,虚线框以外的灰色区域为背景内容区域整体缩小后,客户端添加的背景图像。由图10a和图10b对比可知,在本实施例中,在经过缩小处理后,视频的关键内容区域的尺寸保持不变,背景内容区域为整体缩小。
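The overall-reduction case above (step B4) can be captured in a short helper; a sketch under the same assumptions as the example (vertical preset direction, lengths in cm, names illustrative):

```python
def overall_background_reduction(bg_preset_len: float, bg_perp_len: float,
                                 preset_reduction: float) -> tuple:
    """Overall reduction of the background content area: shrink the preset direction by the
    requested length, and shrink the perpendicular direction so that the first reduction
    ratio (preset length / perpendicular length) is preserved."""
    first_ratio = bg_preset_len / bg_perp_len        # e.g. 7 cm / 6 cm
    perp_reduction = preset_reduction / first_ratio  # e.g. 5 / (7/6) cm
    return preset_reduction, perp_reduction
```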
步骤A11涉及的第二种情况包括步骤C1至步骤C3。
在步骤C1中,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域。
在步骤C2中,在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域以及缩小所述背景内容区域。
在步骤C3中,基于所述背景内容区域的尺寸,确定所述背景内容区域在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
示例性的,将背景内容区域在预设方向上的长度确定为背景内容区域在预设方向上的缩小长度,背景内容区域在与预设方向垂直的方向上的长度确定为背景内容区域在与预设方向垂直的方向上的缩小长度。
示例性的,由于背景内容区域均被去除掉,所以背景内容区域的缩小类型可以为预设方向缩小或整体缩小。
假设背景内容区域的缩小类型为预设方向缩小,假设预设方向为垂直方向,下面以具体示例介绍步骤C3的实现过程。图11是根据一示例性实施例示出的第一视频的另一种缩小方式的示意图。如图11左侧图所示,假设操作距离5cm,背景内容区域在预设方向上的长度为3cm,关键内容区域在预设方向上的长度为4cm。图11右侧图为缩小掉背景内容区域在预设方向上3cm后的第二视频的示意图。图11右侧图所示关键内容区域在预设方向上的长度4cm不变。
在本公开实施例中,由于操作距离大于背景内容区域在预设方向上的长度,示例性的,保持关键内容区域的尺寸不变,将背景内容区域全部缩小完毕。
上述两种实现方式中，仅对背景内容区域进行缩小，不对关键内容区域进行缩小，即采用保持关键内容区域尺寸不变的方式对第一视频进行缩小。在对视频缩小过程中，保证了关键内容区域的尺寸不变，既避免出现受限于视频播放界面的展示空间导致视频缩放过程中的关键内容缺失的情况，又避免出现关键内容区域的尺寸过小的情况，从而改善了视频缩放过程中的视频展示效果。
步骤A11涉及的第三种情况包括步骤D1至步骤D5。
在步骤D1中,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域。
在步骤D2中,在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括缩小所述关键内容区域、所述关键内容区域的缩小类型为预设方向缩小或整体缩小以及缩小所述背景内容区域。
在步骤D3中,基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
在步骤D4中,在所述关键内容区域的缩小类型为预设方向缩小的情况下,基于所述背景内容区域在所述预设方向的长度以及所述操作距离,确定所述关键内容区域在所述预设方向上的缩小长度。
在步骤D5中,在所述关键内容区域的缩小类型为整体缩小的情况下,基于所述背景内容区域在所述预设方向的长度、所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的缩小长度以及第二缩小比例,所述第二缩小比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
对步骤D3的说明可以参见步骤A11涉及的第二种情况中对步骤C3的说明,这里不再赘述。
本公开实施例提供的确定关键内容区域在预设方向上的缩小长度的实现方式包括但不限于以两种情况。
第一种情况:预先设置了关键内容区域在预设方向上的最小长度。
为了避免关键内容区域尺寸过小,关键内容区域在预设方向上设置有最小长度,即响应于关键内容区域在预设方向上的长度为所述最小长度时,即使接收到缩小操作,也不会对关键内容区域进行缩小。
示例性的,响应于关键内容区域在预设方向上的长度为所述最小长度,且再次接收到缩小操作,则可以更新内容显示区域中的内容,且保持关键内容区域的尺寸不变。
响应于操作距离≥第一视频中背景内容区域在预设方向上的长度+(关键内容区域在预设方向上的长度-最小长度),那么,关键内容区域在预设方向上的缩小长度=关键内容区域在预设方向上的长度-最小长度。
响应于操作距离<第一视频中背景内容区域在预设方向上的长度+(关键内容区域在预设方向上的长度-最小长度),那么,关键内容区域在预设方向上的缩小长度=操作距离-第一视频中背景内容区域在预设方向上的长度。
第二种情况:没有预先设置关键内容区域在预设方向上的最小长度,那么,关键内容区域在预设方向上的缩小长度=操作距离-第一视频中背景内容区域在预设方向上的长度。
下面以具体示例介绍步骤D3和步骤D4的实现过程。
图12是根据一示例性实施例示出的第一视频的又一种缩小方式的示意图。
如图12左侧图所示,假设操作距离为5cm,背景内容区域在预设方向(假设为垂直方向)上长度为3cm,关键内容区域在垂直方向上的长度为4cm。图12右侧图为缩小掉背景内容区域在垂直方向上3cm,且将关键内容区域在垂直方向上缩小2cm(即4cm-2cm)后的第二视频对应的示意图。
在本实施例中,关键内容区域在预设方向(例如垂直方向)上的长度缩短,在与预设方向垂直的方向(例如水平方向)上的长度保持不变,因而会造成关键内容区域所展现的画面出现“扁平”的情况,影响视频缩放过程中的视频展示效果。
为此在步骤D5中,关键内容区域在缩小时采用整体缩小的方式。
示例性的,第二缩小比例=关键内容区域在预设方向的长度/关键内容区域在与所述预设方向垂直的方向的长度。
示例性的,还包括:基于第二缩小比例以及关键内容区域在预设方向的缩小长度,确定关键内容区域在与预设方向的垂直方向的缩小长度。
示例性的,公式如下:关键内容区域在与预设方向的垂直方向的缩小长度=关键内容区域在预设方向的缩小长度/第二缩小比例。
下面以具体示例对步骤D3和步骤D5进行说明。
图13是根据一示例性实施例示出的第一视频的又一种缩小方式的示意图。假设预设方向为垂直方 向。
如图13左侧图所示,假设操作距离为9cm,背景内容区域在垂直方向上长度为7cm,关键内容区域在垂直方向上的长度为4cm,在水平方向上的长度为8cm。即第二缩小比例为4cm/8cm=1/2。响应于关键内容区域在预设方向(假设为垂直方向)上缩小2cm,即9-7=2cm,那么,关键内容区域在水平方向上的缩放长度=关键内容区域在预设方向的缩小长度/第二缩小比例=2/(1/2)=4cm。
图13右侧图为缩小掉背景内容区域在垂直方向上7cm,且将关键内容区域在垂直方向上缩小2cm,在水平方向上缩小4cm后的第二视频对应的示意图。
由于经过缩小处理的关键内容区域的显示比例为2cm/4cm=1/2,与关键内容区域未被缩放时的显示比例相同。这样,保障了关键内容区域在经过缩小处理前后的显示比例相同,避免了关键内容区域出现“扁平”的情况,改善了视频缩放过程中的视频展示效果。
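Steps D3 to D5 for the worked example above reduce to the following sketch (vertical preset direction assumed; names are illustrative):

```python
def key_overall_reduction(op_distance: float, bg_preset_len: float,
                          key_preset_len: float, key_perp_len: float) -> tuple:
    """Once the background content area is fully consumed, reduce the key content area as a
    whole: the leftover operation distance shrinks the preset direction, and the perpendicular
    direction shrinks so the second reduction ratio (preset/perpendicular) is preserved."""
    second_ratio = key_preset_len / key_perp_len                              # e.g. 4/8 = 1/2
    preset_cut = min(max(op_distance - bg_preset_len, 0.0), key_preset_len)   # e.g. 9 - 7 = 2
    perp_cut = preset_cut / second_ratio                                      # e.g. 2 / (1/2) = 4
    return preset_cut, perp_cut
```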
以上公开实施例提供的视频展示方法中,针对由背景内容区域和关键内容区域组成的第一视频而言,检测针对视频播放界面的缩小操作后,响应于第一视频存在背景内容区域,可以通过缩小背景内容区域在预设方向上的长度的方式或整体缩小背景内容区域的方式,缩小第一视频,得到第二视频。即在对第一视频进行缩小时,通过缩小背景内容区域的方式实现第一视频的缩放,此时,关键内容区域的展示尺寸可能维持不变,避免受限于视频播放界面的展示空间,出现视频缩放过程中的关键内容缺失的情况;或者,响应于背景内容区域都被缩小掉后还需要缩小,可以再缩小关键内容区域。本公开实施例中的关键内容区域可能不会缩小,或者,缩小程度相对于较小,从而改善了视频缩放过程中的视频展示效果。
在一可选实现方式中,响应于关键内容区域的缩小方式为整体缩小,那么,关键内容区域的缩小参数包括:关键内容区域在预设方向上的缩小长度,以及,关键内容区域在与预设方向的垂直方向的缩小长度,其中,关键内容区域在预设方向上的缩小长度=关键内容区域在与预设方向的垂直方向的缩小长度。
在一可选实施例中,步骤A12涉及以下三种情况。可以理解的是,在步骤A12中涉及的第一视频可以是通过上述步骤A11缩小后的视频。
示例性的,步骤A12涉及的第一视频在预设方向上可以由关键内容区域组成,假设预设方向为垂直方向,那么,步骤A12涉及的第一视频可以如图11右侧图所示,或,如图12右侧图所示,或,如图13右侧图所示;示例性的,步骤A12涉及的第一视频可以由关键内容区域以及背景内容区域组成,如图6右侧图所示、如图8右侧图所示、如图9b所示或如图10b所示。
步骤A12涉及的第一种情况包括步骤F1至步骤F4。在第一种情况中,第一视频由关键内容区域以及背景内容区域组成,且,关键内容区域未被缩小;或者,第一视频由关键内容区域组成,且关键内容区域未被缩小。
在步骤F1中,在所述关键内容区域在所述预设方向的长度等于所述关键内容区域的原始长度的情况下,根据第一视频的视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸。
示例性的,本公开实施例中提及的关键内容区域的在预设方向的原始长度为未经过缩放的关键内容区域在预设方向的长度。即,客户端从服务器接收到的视频后,未对视频进行任何缩放处理时,视频中关键内容区域在预设方向的长度。
响应于关键内容区域在所述预设方向的长度等于所述关键内容区域的原始长度,说明针对关键内容区域未执行过缩小操作,因此,无需放大关键内容区域。
示例性的，视频的背景内容区域的原始尺寸是指未经过缩放处理的视频中背景内容区域的尺寸，即电子设备从服务器接收到视频后，未对视频进行任何缩放处理时，该视频中背景内容区域的尺寸。
其中,视频的背景内容区域的原始尺寸包括背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度。
示例性的，响应于第一视频由关键内容区域以及背景内容区域组成，且，关键内容区域未被缩小，那么，步骤F1可以确定所述第一视频在所述关键内容区域之外的背景内容区域对应的展示尺寸，即第一视频当前展示的背景内容区域的尺寸。
示例性的,响应于第一视频由关键内容区域组成,且,关键内容区域未被缩小,那么,步骤F1确定的所述第一视频在所述关键内容区域之外的背景内容区域对应的展示尺寸均为0,所以需要确定背景内容区域对应的原始尺寸。
在步骤F2中,确定所述放大方式包括不放大所述关键内容区域、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大。
在步骤F3中,在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度。
示例性的,所述背景内容区域在所述预设方向上的放大长度等于操作距离。
预设方向可以为垂直方向或水平方向,响应于预设方向为垂直方向,预设方向放大为垂直方向放大,响应于预设方向为水平方向,预设方向放大为水平方向放大。
其中,垂直方向放大是指将在垂直方向上的长度放大;水平方向放大是指将在水平方向上的长度放大。
其中,对背景内容区域在预设方向上放大的过程与对背景内容区域在预设方向上的缩小过程是相反的过程,详细可参见对背景内容区域在预设方向缩小过程,例如,参见针对步骤B3的说明,将此部分中的“缩小类型”变更为“放大类型”,将“缩小长度”变更为“放大长度”,即为针对背景内容区域在预设方向上放大的过程的描述。
在步骤F4中,在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第一放大比例,所述第一放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
示例性的,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第一放大比例包括:基于操作距离确定所述背景内容区域在所述预设方向上的放大长度;基于背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值,确定第一放大比例。
示例性的,第一放大比例=背景内容区域在预设方向上的原始长度/背景内容区域在与所述预设方向垂直的方向的原始长度。
示例性的,还包括:基于第一放大比例以及背景内容区域在预设方向的放大长度,确定背景内容区域在与预设方向的垂直方向的放大长度。
示例性,公式为:背景内容区域在与预设方向的垂直方向的放大长度=背景内容区域在预设方向的放大长度/第一放大比例。
其中,对背景内容区域的整体放大过程与对背景内容区域的整体缩小过程是相反的过程,详细可参见对背景内容区域整体缩小的过程,例如,参见针对步骤B4的说明,将此部分中的“缩小类型”变更为“放大类型”,将“缩小长度”变更为“放大长度”,即为针对背景内容区域整体放大的过程的描述。
步骤A12涉及的第二种情况包括步骤G1至步骤G3。在第二种情况中,第一视频由关键内容区域组成,此时关键内容区域已被缩小。
在步骤G1中,在所述关键内容区域在所述预设方向的长度小于或等于所述关键内容区域的原始长度与所述操作距离的差值的情况下,确定所述放大方式包括不放大所述背景内容区域、放大所述关键内容区域以及所述关键内容区域的放大类型为预设方向放大或整体放大。
可以理解的是，此时操作距离仅能使关键内容区域在所述预设方向的长度放大至小于或等于原始长度，因此经过放大后的第一视频在预设方向上仍不包括背景内容区域。
假如，第一视频的关键内容区域在预设方向上的长度为3cm，关键内容区域在预设方向上的原始长度为7cm，操作距离为2cm。即，操作距离2cm小于（关键内容区域在预设方向上的原始长度7cm-关键内容区域在预设方向上的长度3cm）=4cm。可知，操作距离不足以使第一视频的关键内容区域在预设方向上的长度放大至关键内容区域在预设方向上的原始长度。此时，先放大关键内容区域，不放大背景内容区域。
在步骤G2中,在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述关键内容区域在所述预设方向上的放大长度。
示例性的,关键内容区域在所述预设方向上的放大长度等于所述操作距离。
在步骤G3中,在所述关键内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的放大长度以及第二放大比例,所述第二放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
示例性的,关键内容区域在所述预设方向上的放大长度等于所述操作距离。
示例性的,第二放大比例=关键内容区域在所述预设方向上的长度/关键内容区域在与所述预设方向垂直的方向的长度。
示例性的，还包括：基于第二放大比例以及关键内容区域在预设方向的放大长度，确定关键内容区域在与预设方向垂直的方向的放大长度。
示例性的，公式为：关键内容区域在与预设方向垂直的方向的放大长度=关键内容区域在预设方向的放大长度/第二放大比例。
步骤A12涉及的第三种情况包括步骤H1至步骤H6。在第三种情况中,第一视频由关键内容区域组成,此时关键内容区域已被缩小。
在步骤H1中,在所述关键内容区域在所述预设方向的长度大于所述关键内容区域的原始长度与所述操作距离的差值的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸。
其中,视频的背景内容区域的原始尺寸包括背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度。
示例性的,确定所述第一视频在所述关键内容区域之外的背景内容区域包括确定关键内容区域和背景内容区域的位置关系。
示例性的，背景内容区域在预设方向上包括第一子背景内容区域以及第二子背景内容区域，且第一子背景内容区域以及第二子背景内容区域位于关键内容区域两侧；或，背景内容区域在预设方向上位于关键内容区域上侧；或，背景内容区域在预设方向上位于关键内容区域下侧。
可以理解的是,由于第一视频由关键内容区域组成,所以第一视频并不包括背景内容区域,因此,这里的背景内容区域的展示尺寸为0,即没有背景内容区域。后续放大背景内容区域的过程为从无到有的过程。
在步骤H2中,确定所述放大方式包括放大所述关键内容区域、所述关键内容区域的放大类型为预设方向放大或整体放大、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大。
在步骤H3中,在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述关键内容区域在所述预设方向的长度以及所述原始长度,确定所述关键内容区域在所述预设方向上的放大长度。
示例性的,公式为:关键内容区域在预设方向上的放大长度=关键内容区域在预设方向上的原始长度-关键内容区域在预设方向上的长度。
例如,关键内容区域在预设方向上的原始长度为10cm,关键内容区域在预设方向上的长度(即展示长度)为7cm,则关键内容区域在预设方向上的放大长度为10cm-7cm=3cm。
在步骤H4中,在所述关键内容区域的放大类型为整体放大的情况下,基于所述关键内容区域的尺寸以及所述原始长度确定所述关键内容区域在所述预设方向上的放大长度以及第三放大比例,所述第三放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
示例性的,基于关键内容区域在预设方向上的原始长度和关键内容区域在预设方向上的长度,确定关键内容区域在预设方向上的放大长度。
示例性的,公式为:关键内容区域在预设方向上的放大长度=关键内容区域在预设方向上的原始长度-关键内容区域在预设方向上的长度。
示例性的,基于关键内容区域的尺寸确定第三放大比例。
示例性的，第三放大比例=关键内容区域在所述预设方向上的长度/关键内容区域在与所述预设方向垂直的方向的长度。
例如,关键内容区域在预设方向上的原始长度为10cm,关键内容区域在预设方向上的长度4cm,关键内容区域在与所述预设方向垂直的方向的长度为2cm,则关键内容区域在预设方向上的放大长度=(10cm-4cm)=6cm,第三放大比例=4cm/2cm=2。
示例性,基于关键内容区域在预设方向上的放大长度和第三放大比例,确定关键内容区域在与所述预设方向垂直的方向的放大长度。
示例性,公式为:关键内容区域在与所述预设方向垂直的方向的放大长度=关键内容区域在所述预设方向上的放大长度/第三放大比例。
仍以关键内容区域在预设方向上的原始长度为10cm,关键内容区域在预设方向上的长度4cm,关键内容区域在与所述预设方向垂直的方向的长度为2cm为例,则关键内容区域在与所述预设方向垂直的方向的放大长度=6cm/2=3cm。
在步骤H5中,在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度。
示例性,基于操作距离和关键内容区域在预设方向上的放大长度,确定背景内容区域在所述预设方向上的放大长度。
示例性,公式为:背景内容区域在所述预设方向上的放大长度=操作距离-关键内容区域在预设方向上的放大长度。
例如,操作距离为10cm,关键内容区域在预设方向上的放大长度4cm,则背景内容区域在所述预设方向上的放大长度为10cm-4cm=6cm。
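示例性的，步骤H3与步骤H5的计算可以用如下示意性Python片段表示（函数名与注释中的数值假设均为举例）：
def enlarge_split(op_dist, key_len, key_orig_len):
    """先将关键内容区域放大回原始长度，再把剩余操作距离分给背景内容区域（示意）。"""
    key_enlarge = key_orig_len - key_len     # 关键内容区域在预设方向上的放大长度
    bg_enlarge = op_dist - key_enlarge       # 背景内容区域在预设方向上的放大长度
    return key_enlarge, bg_enlarge
# 例如 enlarge_split(10, 6, 10) 返回 (4, 6)（假设原始长度10cm、当前长度6cm），与上述10cm、4cm、6cm的例子一致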
在步骤H6中,在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第四放大比例,所述第四放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
其中，视频的背景内容区域的原始尺寸包括背景内容区域在垂直方向上的原始长度和背景内容区域在水平方向上的原始长度。
示例性,基于操作距离和关键内容区域在预设方向上的放大长度,确定背景内容区域在所述预设方向上的放大长度。
示例性,公式为:背景内容区域在所述预设方向上的放大长度=操作距离-关键内容区域在预设方向上的放大长度。
示例性的，基于背景内容区域在预设方向上的原始长度和背景内容区域在与预设方向垂直的方向的原始长度，确定第四放大比例。示例性的，公式为：第四放大比例=背景内容区域在预设方向上的原始长度/背景内容区域在与预设方向垂直的方向的原始长度。
例如，背景内容区域在预设方向上的原始长度为10cm，背景内容区域在与预设方向垂直的方向的原始长度为5cm，则第四放大比例为2。
示例性的,基于背景内容区域在所述预设方向上的放大长度和第四放大比例,确定背景内容区域在与预设方向垂直的方向的放大长度。
示例性的，公式为：背景内容区域在与预设方向垂直的方向的放大长度=背景内容区域在所述预设方向上的放大长度/第四放大比例。
仍以背景内容区域在预设方向上的原始长度为10cm，背景内容区域在与预设方向垂直的方向的原始长度为5cm为例，响应于背景内容区域在所述预设方向上的放大长度为4cm，则背景内容区域在与预设方向垂直的方向的放大长度为4cm/2=2cm。
在一可选实施例中,响应于背景内容区域位于关键内容区域的两侧(如图7所示),假设背景内容区域包括第一子背景内容区域以及第二子背景内容区域。
第一视频在放大过程中的所述放大参数还包括所述第一子背景内容区域在所述预设方向上的放大比例,以及所述第二子背景内容区域在所述预设方向上的放大比例。
其中,所述放大比例是指自身在所述预设方向上的原始长度与所述背景内容区域在所述预设方向上的原始长度的比值。
示例性的，第一子背景内容区域在预设方向上的原始长度是指未经过缩放处理的视频中第一子背景内容区域在预设方向上的原始长度，即电子设备从服务器接收到视频后，未对视频进行任何缩放处理时，该视频中第一子背景内容区域在预设方向上的原始长度。
同样，第二子背景内容区域在预设方向上的原始长度是指未经过缩放处理的视频中第二子背景内容区域在预设方向上的原始长度。
假设，第一子背景内容区域在垂直方向上的长度为3cm，第二子背景内容区域在垂直方向上的长度为6cm，则第一子背景内容区域对应的放大比例等于3cm/(6+3)cm=1/3，第二子背景内容区域对应的放大比例等于2/3。
设定背景内容区域在垂直方向上的放大长度总共为3cm,则第一子背景内容区域在垂直方向上放大长度为3*(1/3)=1cm,第二子背景内容区域在垂直方向上放大长度为3*(2/3)=2cm。
在本公开实施例中背景内容区域的第一子背景内容区域和第二子背景内容区域在垂直方向上按照比例放大，得到的视频展示效果更加符合用户观看习惯，从而提高用户体验。
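示例性的，两个子背景内容区域按原始长度占比分配放大长度的过程，可用如下示意性Python片段表示（函数名为举例假设）：
def split_bg_enlarge(total_enlarge, first_orig_len, second_orig_len):
    """按第一、第二子背景内容区域的原始长度占比分配预设方向上的放大长度（示意）。"""
    total = first_orig_len + second_orig_len
    first = total_enlarge * first_orig_len / total
    second = total_enlarge * second_orig_len / total
    return first, second
# 例如 split_bg_enlarge(3, 3, 6) 返回 (1.0, 2.0)，与上述示例一致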
在一可选实施例中,为了实现快速缩放第一视频的目的,可以执行一键缩放第一视频的操作,步骤S33的实现方式可以包括步骤E1至步骤E3。
在步骤E1中,确定所述操作信息对应的操作类型。在步骤E2中,响应于所述操作类型为一键缩小操作,去除所述第一视频中所述关键内容区域之外的背景内容区域,得到所述第二视频。
在步骤E3中,响应于所述操作类型为一键放大操作,将所述关键内容区域放大至所述关键内容区域的原始尺寸,将所述背景内容区域放大至所述背景内容区域的原始尺寸,以得到所述第二视频。
示例性的，一键缩小操作或一键放大操作的操作方式有多种。
本公开实施例提供但不限于以下三种。
第一种方式,一键缩小操作或一键放大操作的操作方式为按键操作。
第二种方式,一键缩小操作或一键放大操作的操作方式为滑动操作。
第三种方式,一键缩小操作或一键放大操作的操作方式为语音操作。
上述三种一键缩小操作或一键放大操作的操作方式，可参见步骤S31中实施于视频播放界面的视频缩放操作的操作方式，在此不再赘述。
在一可选实施例中,在视频播放界面在预设方向上缩小的过程中,内容显示界面在增大,内容显示界面增大的方法包括步骤I11和步骤I12。
在步骤I11中,控制所述显示界面中的内容显示界面在所述预设方向上至少增加视频播放界面在预设方向上的缩小长度。
示例性的,视频播放界面在预设方向上缩小多少长度,内容显示界面在预设方向上增大相应长度。示例性的,视频播放界面在预设方向上的缩小长度等于第一视频在预设方向上的缩小长度。
在步骤I12中,控制所述内容显示界面显示更新后的与所述第二视频相关的内容。
在一可选实施例中,还包括步骤I21至步骤I22。
在步骤I21中,从所述第一视频中获取多帧视频图像。
在步骤I22中,基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域。
示例性的,上述步骤I21至步骤I22可以为步骤S32的具体实现方式,或者,上述步骤I21至步骤I22可以在步骤S31之前执行。
本公开实施例提供的步骤I22的实现方式有多种,本公开实施例提供但不限于以下三种。
第一种步骤I22的实现方式包括:步骤J1至步骤J3。
在步骤J1中,针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像。
示例性的,可从视频中抽取多帧视频图像。本公开实施例并不限定从视频中获得的视频图像的数目。
示例性的,视频的总时长越长,从视频中抽取的视频图像的数目越多。
示例性的,在实际应用中为了保障后续获得视频的视频信息的准确性,从视频中获得的视频图像的数目大于或等于预设帧数,例如预设帧数为20。示例性的,所抽取的视频图像的帧数,可基于实际情况设定,以便既保障所获得所述视频的视频信息的准确性,又能提高数据处理速度。
示例性的,可以从视频中均匀抽取多帧视频图像,例如,每隔10帧抽取一帧视频图像,或,每隔预设时长抽取一帧视频图像。示例性的,可以从视频中随机抽取多帧视频图像。
示例性的,可以将从视频中抽取的多帧视频图像打乱顺序,位置相邻的两帧视频图像可以为时间相邻的两帧视频图像,也可以不是时间相邻的两帧视频图像。
示例性的,从视频中抽取多帧视频图像后,可以基于多帧视频图像在视频中的时间进行排序,上述“位置相邻”两帧视频图像即为在时间上相邻的两帧视频图像。
示例性的,可以通过自适应混合高斯背景建模方法MOG2计算得到差异图像。
示例性的,差异图像可以为两帧视频图像之间的差异掩膜FrameMask。
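示例性的，利用OpenCV中的MOG2求相邻两帧差异掩膜的一种可能写法如下（仅为示意，逐对新建背景模型的做法是举例假设，并非本公开的限定实现）：
import cv2

def diff_masks(frames):
    """对按时间排序的视频帧列表，逐对计算相邻两帧的差异掩膜FrameMask（示意）。"""
    masks = []
    for prev, curr in zip(frames, frames[1:]):
        mog2 = cv2.createBackgroundSubtractorMOG2()
        mog2.apply(prev)                 # 用前一帧更新背景模型
        masks.append(mog2.apply(curr))   # 当前帧的前景掩膜作为两帧的差异图像
    return masks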
示例性的,为了提高数据处理速度,步骤J1具体包括:将多帧视频图像缩小目标倍数;针对多帧缩小目标倍数的视频图像中任意位置相邻的两帧视频图像,获得表征所述两帧视频图像的差异的差异图像,以得到多帧差异图像。
其中,目标倍数小于1,示例性的,目标倍数可以为0.4、0.5、0.6等小于1的任意数值。
图14a至图14b是根据一示例性实施例示出的多帧差异图像的示意图。
假如，获取了20帧视频图像，针对20帧视频图像中任意位置相邻的两帧视频图像，采用自适应混合高斯背景建模方法，计算相邻两帧的差异掩膜，以得到多帧差异图像。图14a为基于第1帧视频图像和第2帧视频图像的差异得到的差异图像，图14b为基于第12帧视频图像和第13帧图像的差异得到的差异图像。
在步骤J2中,基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值。
在一可选实现方式中,步骤J2的具体实现过程包括步骤J21至步骤J23。
在步骤J21中,分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,一帧所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域。
示例性的,对至少一帧所述差异图像分别进行形态学开操作处理,得到多帧第一图像。
需要说明的是，对差异图像进行先腐蚀后膨胀的操作称为形态学开操作。形态学开操作具有消除差异图像中的细小物体、在纤细处分离物体以及平滑较大物体边界的作用。
示例性的,背景内容区域可能包括与关键内容区域对应的内容,例如,包含关键内容区域显示的真正内容对应的标题,或者,旁白,或者,字幕等内容;背景内容区域包含的内容可能距离关键内容区域很近,对差异图像进行形态学开操作处理,可以将背景内容区域包含的与关键内容区域临近的内容与关键内容区域分离,从而能够更加准确的确定关键内容区域的边界。
为了本领域技术人员更加理解本公开实施例提供的对差异图像进行处理得到第一图像的过程,下面举例进行说明。
图15a至图15d是根据一示例性实施例示出的对差异图像进行处理得到第一图像的示意图。其中，图15a所示的区域151（用点划线框标出）为关键内容区域，区域152（用虚线框标出）为背景内容区域中包含的与关键内容区域相关的内容，例如字幕，差异图像中这两部分区域还是有连接的，如图15a中由三个单元格连接。图15b为对差异图像进行处理所使用的结构。
示例性的，基于图15b所示的结构对差异图像进行处理的过程如下：结构的中心单元格（如图15b中用黑色粗线标注的单元格）为移动单元格，在图15a所示的差异图像包含的各单元格中移动该结构的中心单元格，响应于该结构的中心单元格移动到图15a所示的差异图像中的一个单元格时，该结构与图15a所示的差异图像的交集完全等于该结构，则确定该单元格满足要求，保存图15a所示的差异图像中的该单元格。确定图15a所示的差异图像中满足上述要求的所有单元格，上述所有满足要求的单元格构成了如图15c所示深黑色单元格组成的图像，图15c中浅黑色单元格是图15a所示的差异图像中不满足上述要求的单元格。
示例性的，以图15b所示的结构的中心单元格为移动单元格，在图15c所示深黑色单元格组成的图像的周边单元格中移动该结构的中心单元格，响应于图15b所示的结构的中心单元格移动到位于图15c所示深黑色单元格组成的图像的周边位置的任一单元格时，图15b所示的结构与图15c所示深黑色单元格组成的图像存在交集，则确定该单元格满足要求，保留该单元格。所有满足上述要求的单元格以及如图15c所示的深黑色单元格共同构成了图15d所示的第一图像。
从图15d可以看出区域151与区域152已经不相连了。
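示例性的，上述先腐蚀后膨胀的处理对应OpenCV中的形态学开操作，一个示意性写法如下（结构元素大小为假设值）：
import cv2
import numpy as np

def to_first_image(diff_mask):
    """对差异图像做形态学开操作，得到第一图像，以分离背景中贴近关键内容区域的字幕等内容（示意）。"""
    kernel = np.ones((3, 3), np.uint8)   # 结构元素，大小为示意性假设
    return cv2.morphologyEx(diff_mask, cv2.MORPH_OPEN, kernel)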
在步骤J22中,基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值。
在一可选实现方式中,步骤J22的实现过程具体包括步骤J221至步骤J222。
在步骤J221中,针对每一帧第一图像,确定第一图像包含的各像素对应的(像素点位置,像素值),以得到各第一图像分别包含各像素对应的(像素点位置,像素值)。
在步骤J222中,针对每一像素点位置,获得具有该像素点位置的各像素值的平均值,以得到该像素点位置对应的像素平均值,即(像素点位置,像素平均值)。
第二图像中任一像素对应的像素值为该像素的像素点位置对应的像素平均值。
可以理解的是，步骤"从所述视频中获取多帧视频图像"中可以仅从视频中得到两帧视频图像，此时步骤J1只能得到一帧差异图像，对该差异图像进行处理也只能得到一帧第一图像。响应于仅有一帧第一图像，且该第一图像中一个或多个像素的像素值是错误的（本公开实施例称为异常像素点），那么会影响确定关键内容区域的准确性。
为了解决上述问题,示例性的,步骤“从所述视频中获取多帧视频图像”中可以从视频中得到N帧视频图像,其中N为大于2的正整数。这样步骤J1可以得到至少两帧差异图像,步骤J21可以得到至少两帧第一图像,在步骤J22中将多帧第一图像中每一像素点位置对应的像素值为相应的像素平均值,可以消除异常像素点带来的影响。
可以理解的是,一帧第一图像中像素点位置1对应的像素可能为异常像素点,另一帧第一图像中像素位置1对应的像素可能为非异常像素点;且不同第一图像在同一像素点位置的像素都为异常像素点的概率很小,所以求取平均值可以消除异常像素点对数据处理的影响,从而使得到的第二图像更加清晰的展现关键内容区域的边界。
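示例性的，步骤J22的逐像素求平均可以用如下示意性Python片段实现（假设各第一图像尺寸一致，函数名为举例）：
import numpy as np

def to_second_image(first_images):
    """对多帧第一图像逐像素求平均得到第二图像，以抑制个别异常像素点的影响（示意）。"""
    stack = np.stack(first_images).astype(np.float32)
    return stack.mean(axis=0).astype(np.uint8)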
在步骤J23中,对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
示例性的，对第二图像进行形态学闭操作处理和二值化处理，以获得目标图像。
其中，对第二图像进行先膨胀后腐蚀的操作称为形态学闭操作。形态学闭操作具有填充物体内细小空间、连接临近物体以及平滑边界的作用。
下面举例对第二图像进行形态学闭操作处理的目的进行说明。
图16a至图16b是根据一示例性实施例示出的对第二图像进行处理的目的示意图。图16a为第二图像,从图16a中可以看出,在需要获得的关键内容区域(用白色实线框框出)1601中还有很多独立的细小空间,如图16a中用圆圈圈出的黑色小孔1602、黑色小孔1603等等。这些细小空间会降低后续得到关键内容区域的准确性,因此,需要将关键内容区域1601中细小空间(例如黑色小孔与黑色小孔边缘区域)连通,通过对第二图像进行形态学闭操作处理可以使得关键内容区域中的像素连通。对图16a进行形态学闭操作处理后,可以得到图16b。
从图16b可以看出关键内容区域不包括独立的细小空间了,关键内容区域整体为一个大的单连通区域。
在一可选实现方式中,可以将第二图像作为目标图像,或者,对第二图像进行二值化处理得到目标图像,以使得目标图像呈现黑白效果,可以更加准确的从中获得关键内容区域的轮廓。
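示例性的，步骤J23中先膨胀后腐蚀再二值化的处理，可参考如下示意性写法（结构元素大小与二值化阈值均为假设值）：
import cv2
import numpy as np

def to_target_image(second_image):
    """对第二图像做形态学闭操作并二值化，得到目标图像（示意）。"""
    kernel = np.ones((5, 5), np.uint8)                               # 结构元素，大小为示意性假设
    closed = cv2.morphologyEx(second_image, cv2.MORPH_CLOSE, kernel)
    _, binary = cv2.threshold(closed, 127, 255, cv2.THRESH_BINARY)   # 阈值127为示意性假设
    return binary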
在步骤J3中，将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
示例性的,目标图像中可能包括多个图像区域,示例性的,将面积最大的目标图像区域确定为关键内容区域。
示例性的,获取每个图像区域对应的位置坐标,图像区域位置坐标为(top,left,bottom,right),其中,top为形成图像区域的上方边界线的位置坐标,left为形成图像区域的左侧边界线的位置坐标,bottom为形成图像区域的下方边界线的位置坐标,right为形成图像区域的右侧边界线的位置坐标。基于每个图像区域对应的位置坐标计算每个图像区域面积,以得面积最大的目标图像区域。
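示例性的，取面积最大的图像区域并得到其位置坐标(top,left,bottom,right)的过程，可用如下示意性片段表示（假设使用OpenCV 4.x，输入目标图像为二值图）：
import cv2

def largest_region_box(target_image):
    """在目标图像中取面积最大的图像区域，返回(top, left, bottom, right)（示意）。"""
    contours, _ = cv2.findContours(target_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return y, x, y + h, x + w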
在一可选实施例中,本公开实施例还提供了确定目标图像区域为关键内容区域的概率的方法。
下面结合图17a至图17c对确定目标图像区域为关键内容区域的概率的意义进行说明。图17a至图17c是根据一示例性实施例示出的目标轮廓区域与真正的关键内容区域相对位置示意图。
响应于目标图像区域为关键内容区域的概率较低,如图17a所示,说明目标图像区域1701(用黑色虚线框出)可能除了包含关键内容区域1702(用黑色实线框出)外,还可能包括背景内容区域1703(图17a至图17c用黑色图像表示背景内容区域)。
响应于目标图像区域为关键内容区域的概率较低,如图17b所示,目标图像区域1701(用黑色点划线框出)包括部分关键内容区域1702(用黑色实线框出)以及部分背景内容区域1703。
响应于基于图17b对应的视频信息缩小背景内容区域，可能会出现关键内容区域缺失的情况，如图17c所示。
针对于此，本公开实施例提供了一种确定目标图像区域为关键内容区域的概率的方法，该方法在实施过程中包括以下步骤K1至步骤K3。
在步骤K1中,将位于所述目标图像区域中的图像转换成灰度图。
示例性的,本公开实施例提及的“灰度图”仅包括目标图像区域内部的图像,不包括图像中除目标图像区域以外的图像。
在步骤K2中,获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目。
在步骤K3中,将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为第一概率。
示例性的,响应于目标图像经过二值化处理,关键内容区域包含的像素的像素值应该均为255,响应于灰度图中的一个像素点的像素值大于第一阈值,说明该像素点为“白点”,响应于灰度图中的一个像素点的像素值小于或等于第一阈值,说明该像素点为“黑点”。因此可以基于第一概率,确定能够作为“白点”的像素在目标图像区域中所占的比例。
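示例性的，步骤K1至步骤K3可以用如下示意性片段表示（第一阈值取值为假设，输入为已二值化的目标图像及目标图像区域的位置坐标）：
def first_probability(target_image, box, first_threshold=200):
    """统计目标图像区域内像素值不小于第一阈值的像素占比，作为第一概率（示意，target_image为numpy数组）。"""
    top, left, bottom, right = box
    gray = target_image[top:bottom, left:right]      # 目标图像为单通道图，裁剪后即可视为灰度图
    white = int((gray >= first_threshold).sum())      # 第一数目：可视为"白点"的像素个数
    return white / gray.size                          # 第一概率 = 第一数目 / 第二数目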
示例性的,响应于所述第一概率大于或等于第二阈值,说明确定的关键内容区域的准确度较高,基于该关键内容区域的位置对第一视频进行缩小操作不会出现如图17a至图17c所示的问题,因此可以执行步骤S33。
示例性的，第二阈值可以基于实际情况而定，例如，为了保证电子设备在基于视频信息对背景内容区域进行缩小的过程中，缩小准确率大于或等于95%，确定第二阈值为0.9。
示例性的，在电子设备检测到缩小操作后，若在缩小过程中缺失了关键内容（即出现图17c所示的情况），或者，在背景内容区域未缩小完毕的情况下就开始对整个视频进行缩放，出现这两种情况中任一种均说明出现缩小错误，电子设备可以向服务器反馈表征出现缩小错误的信息。
电子设备基于视频信息对背景内容区域进行缩小的缩小准确率是指,服务器可以基于发送给一个或多个电子设备的一个或多个视频的视频信息的总数目A,以及,接收到反馈的表征缩小错误的信息的数目B,确定缩小准确率。
示例性的，缩小准确率=(总数目A-数目B)/总数目A，即1-数目B/总数目A。
在一可选实施例中,为了扩大召回率,对于对应的第一概率小于第二阈值的第一视频,所述视频展示方法还包括:步骤L1至步骤L4。
在步骤L1中,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合。
在步骤L2中,从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标。
在步骤L3中，将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域。
在步骤L4中,响应于所述候选关键内容区域与所述目标图像区域相同,执行步骤S33。
在一可选实现方式中,步骤L1的实现过程包括步骤L11至步骤L13。
在步骤L11中,对多帧所述视频图像分别进行边缘检测,以得到多帧第三图像。
其中,一个视频图像对应一个第三图像。
示例性的,对视频图像进行边缘检测,可以识别出视频图像包含的亮度变化明显的点,例如,视频图像中背景内容区域和关键内容区域的边界。示例性的,边缘检测可以为Canny边缘算法。
图18是根据一示例性实施例示出的经过边缘检测得到的三帧第三图像的示意图。图18中三帧第三图像中画面的边界清晰明显,从而使得更加容易得到视频图像中背景内容区域和关键内容区域的边界。
在步骤L12中,对于每一第三图像,去除第三图像中的曲线、垂直直线,保留水平直线,以得到第四图像。
示例性的，响应于关键内容区域为矩形区域，且，关键内容区域在垂直方向的边界即为视频在垂直方向的边界，那么，本公开实施例只需确定关键内容区域在水平方向上的边界，即可得到关键内容区域，所以步骤L12可以仅保留水平直线。
示例性的,响应于关键内容区域为矩形区域,且,关键内容区域在垂直方向的边界,不是视频在垂直方向的边界,那么,本公开实施例需要确定关键内容区域在水平方向上的边界,以及,关键内容区域在垂直方向上的边界。即步骤L12中需要保留垂直直线和水平直线。
示例性的,步骤L12中以得到水平直线为例进行说明,垂直直线同理,这里不再赘述。
示例性的,可以称“去除第三图像中的曲线、垂直直线,保留水平直线”为直线检测处理。示例性的,该直线检测处理可以为霍夫变换直线检测。
图19是根据一示例性实施例示出的经过直线检测处理得到的第四图像的示意图。
图19中3帧第四图像与图18中3帧第三图像一一对应。将图19与图18比对可知,图19中3帧第四图像保留了水平方向上的直线。
其中,图19左侧图保留了2条水平直线,图19中间图保留了1条水平直线;图19右侧图保留了2条水平直线。
综上,虽然每一帧第四图像应该保留至少2条水平直线,但是在实际应用中,有的第三图像中关键内容区域的边界处可能与背景内容区域非常相似,导致仅能保留一条或0条水平直线;或者,有的第三图像中可能保留2条或2条以上的水平直线段。
在步骤L13中,获得多帧所述第四图像分别包含的水平直线的纵坐标,以得到直线段位置集合。
示例性的，假设多帧第四图像总共包括n条水平直线，假设这n条水平直线在第三图像中的纵坐标分别为：y1,y2,y3,…,yn，那么，直线段位置集合可以为(y1,y2,y3,…,yn)。
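示例性的，步骤L11至步骤L13可以用如下示意性片段表示（Canny与霍夫变换的各项参数均为假设值，并非本公开的限定实现）：
import cv2
import numpy as np

def line_y_positions(frames):
    """边缘检测后用霍夫变换保留近似水平的直线，收集其纵坐标得到直线段位置集合（示意）。"""
    ys = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)                          # 对应第三图像
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                                minLineLength=gray.shape[1] // 2, maxLineGap=10)
        if lines is None:
            continue
        for x1, y1, x2, y2 in lines[:, 0]:
            if abs(int(y1) - int(y2)) <= 2:                       # 仅保留水平直线，对应第四图像
                ys.append((int(y1) + int(y2)) // 2)
    return ys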
步骤L2的实现方式有多种,本公开实施例提供但不限于以下聚类方式,该聚类方式包括步骤L21至步骤L24。
示例性的,图20a至图20c是根据一示例性实施例示出的聚类过程示意图。
如图20a至图20c所示,每一黑色圆圈表征一个yi,i为大于或等于1小于或等于n的任意正整数。图20a至图20c中各黑色圆圈按照自身对应的yi由小至大从左向右排列。
图20a至图20c中以n=10为例进行说明。
示例性的,设定聚类数目为2,聚类迭代次数为L,L为大于1的任一正整数,示例性的,L=10。
在步骤L21中,基于直线段位置集合包含的各纵坐标,随机初始化两个聚类中心位置,分别为聚类中心位置201以及聚类中心位置202。
示例性的，假设直线段位置集合包含的各纵坐标中最大纵坐标为纵坐标1，最小纵坐标为纵坐标2，每一聚类中心位置大于或等于纵坐标2，且，小于或等于纵坐标1。
如图20a所示用网格填充的圆圈表征聚类中心位置201和聚类中心位置202。
示例性的,聚类中心位置可以为直线段位置集合包含的任一纵坐标,或者,不是直线段位置集合包含的任一纵坐标,如图20a所示的聚类中心位置202。
在步骤L22中,计算d(yi,c1)以及d(yi,c2),响应于d(yi,c1)<=d(yi,c2),确定纵坐标yi属于聚类中心位置201对应的第一集合;响应于d(yi,c1)>d(yi,c2),确定纵坐标yi属于聚类中心位置202对应的第二集合,i=1,…,n。
其中,d(yi,c1)是指纵坐标yi与聚类中心位置201的距离,d(yi,c2)是指纵坐标yi与聚类中心位置202的距离。
在步骤L23中,基于第一集合包含的各纵坐标更新聚类中心位置201;基于第二集合包含的各纵坐标更新聚类中心位置202。
假设确定y1,y2,y3属于第一集合,y4,y5,y6,y7,y8,y9,…,yn属于第二集合。示例性的,聚类中心位置201=(y1+y2+y3)/3;聚类中心位置202=(y4+y5+y6+y7+y8+y9+,…,+yn)/(n-3)。
第一次迭代之后,聚类中心位置201和聚类中心位置202的位置如图20b所示。
在步骤L24中，返回步骤L22，直至迭代次数到达L终止。
示例性的,经过L次迭代后得到的聚类中心位置201和聚类中心位置202为第一纵坐标以及第二纵坐标。
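示例性的，步骤L21至步骤L24的两类一维聚类可以用如下示意性片段表示（聚类中心的初始化方式与迭代次数均为举例假设）：
def cluster_two_centers(ys, iters=10):
    """对直线段位置集合做两类聚类，返回第一纵坐标与第二纵坐标（示意）。"""
    c1, c2 = float(min(ys)), float(max(ys))        # 以最小、最大纵坐标初始化两个聚类中心
    for _ in range(iters):
        g1 = [y for y in ys if abs(y - c1) <= abs(y - c2)]
        g2 = [y for y in ys if abs(y - c1) > abs(y - c2)]
        if g1:
            c1 = sum(g1) / len(g1)                 # 更新聚类中心位置201
        if g2:
            c2 = sum(g2) / len(g2)                 # 更新聚类中心位置202
    return c1, c2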
本公开实施例中确定所述候选关键内容区域与所述目标图像区域是否相同的方式有多种,本公开实施例提供但不限于以下两种实现方式。
第一种实现方式:响应于候选关键内容区域的位置坐标(top1,left1,bottom1,right1)与所述目标图像区域的位置坐标(top2,left2,bottom2,right2)相同,确定所述候选关键内容区域与所述目标图像区域相同,否则不同。
其中,top1和bottom1为关键内容区域的水平边界的纵坐标;left1、right1为关键内容区域的垂直边界的横坐标,top2和bottom2为目标图像区域的水平边界的纵坐标;left2、right2为目标图像区域的垂直边界的横坐标。
第二种实现方式:响应于目标图像区域的top2与候选关键内容区域的top1的差值的绝对值小于或等于第三阈值,且,目标图像区域的bottom2与候选关键内容区域的bottom1的差值的绝对值小于或等于第四阈值,且,比值1大于或等于第五阈值,且,比值2大于或等于第六阈值,确定所述候选关键内容区域与所述目标图像区域相同,否则,不同。
其中,比值1=所述直线段位置集合包含各纵坐标中与top1的差值的绝对值小于或等于第七阈值的位置的数目/步骤I21中获得的多帧视频图像的数目的一半。
比值2=所述直线段位置集合包含的各纵坐标中与bottom1的差值的绝对值小于或等于第八阈值的位置的数目/步骤I21中获得的多帧视频图像的数目的一半。
示例性的,第三阈值、第四阈值、第五阈值、第六阈值、第七阈值、第八阈值的取值可以基于实际情况而定,这里不再赘述。
假设，第三阈值为3个像素，第四阈值为3个像素，第五阈值为0.4，第六阈值为0.4，第七阈值和第八阈值均为2个像素。比值1用upLineProb表示，比值2用downLineProb表示。
示例性的，upLineProb=numUpCount/(0.5*n)，其中，针对每一yi，响应于abs(yi-top1)<=2，numUpCount=numUpCount+1，numUpCount的初始值为0。
示例性的，downLineProb=numBottomCount/(0.5*n)，其中，针对每一yi，响应于abs(yi-bottom1)<=2，numBottomCount=numBottomCount+1，numBottomCount的初始值为0。
响应于abs(top2-top1)<=3且upLineProb>=0.4且abs(bottom2-bottom1)<=3且downLineProb>=0.4,确定目标图像区域与所述候选关键内容区域相同。
响应于上述4个条件任一个不满足,则确定目标图像区域与所述候选关键内容区域不同。
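示例性的，第二种实现方式中的四个判断条件可以用如下示意性片段表示（各阈值取自上述举例）：
def same_region(top1, bottom1, top2, bottom2, ys, n):
    """判断候选关键内容区域与目标图像区域是否相同（示意，ys为直线段位置集合，n为视频图像帧数）。"""
    num_up = sum(1 for y in ys if abs(y - top1) <= 2)
    num_bottom = sum(1 for y in ys if abs(y - bottom1) <= 2)
    up_line_prob = num_up / (0.5 * n)
    down_line_prob = num_bottom / (0.5 * n)
    return (abs(top2 - top1) <= 3 and up_line_prob >= 0.4
            and abs(bottom2 - bottom1) <= 3 and down_line_prob >= 0.4)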
第二种步骤I22的实现方式包括步骤J1、步骤J2、步骤J4、步骤L1、步骤L2、步骤L3以及步骤L5。
在步骤J4中,从所述目标图像包含的至少一个图像区域中确定面积最大的目标图像区域。
在步骤L5中,响应于所述候选关键内容区域与所述目标图像区域相同,确定所述目标图像区域为所述关键内容区域。
第三种步骤I22的实现方式包括步骤L1、步骤L2以及步骤L6。
在步骤L6中，将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
在一可选实施例中,视频展示方法包括以下步骤M1至步骤M2。
在步骤M1中,向服务器发送获取所述第一视频的视频信息的指令。
在步骤M2中,接收服务器发送的所述第一视频的视频信息。
示例性的,上述步骤M1和步骤M2可以为步骤S32的具体实现方式,示例性的,上述步骤M1和步骤M2可以在步骤S31之前执行。
本公开实施例提供的视频展示方法中，一个视频的视频信息的表现形式有多种，例如，表格、结构体、数组、队列、链表、函数中任一种。
下面以视频的视频信息的表现形式为函数进行举例说明。示例性的,视频的视频信息包括该视频的原始信息。其中,原始信息是指在视频未被缩放之前的数据,例如,包括以下内容中至少一个。
"feeds":[{height:720,width:360,
"photoDisplayLocationInfo":{
"leftRatio":0.11805555,//左边黑边占比
"topRatio":0.18671875,//上边黑边占比
"widthRatio":0.6805556,//关键内容区域宽度占比
"heightRatio":0.51953125//关键内容区域高度占比},}]}
其中,height是指视频在垂直方向上的原始长度,width是指视频在水平方向上的原始长度。
leftRatio是指在水平方向位于关键内容区域左侧的子背景内容区域的长度/在水平方向上位于关键内容区域两侧的子背景内容区域在水平方向的长度之和。
示例性的,假设图5b中在水平方向上补入的背景图像为子背景内容区域。在水平方向位于关键内容区域左侧的子背景内容区域在水平方向的长度可以为,如图5b所示在水平方向中位于关键内容区域左侧的子背景内容区域在水平方向上的长度。
示例性的,在水平方向上位于关键内容区域两侧的子背景内容区域在水平方向的长度之和可以为,如图5b所示在水平方向中位于关键内容区域左侧的子背景内容区域在水平方向的长度与在水平方向中位于关键内容区域右侧的子背景内容区域在水平方向的长度之和。
topRatio是指在垂直方向上位于关键内容区域上方的子背景内容区域在垂直方向上的长度/在垂直方向上位于关键内容区域两侧的子背景内容区域在垂直方向上的长度之和。
示例性的,在垂直方向上位于关键内容区域上方的子背景内容区域在垂直方向上的长度,可以为如图5a所示位于关键内容区域上方的子背景内容区域在垂直方向上的长度。
示例性的,在垂直方向上位于关键内容区域两侧的子背景内容区域在垂直方向上的长度之和=如图5a所示位于关键内容区域上方的子背景内容区域在垂直方向上的长度+如图5a所示位于关键内容区域下方的子背景内容区域在垂直方向上的长度。
widthRatio是指关键内容区域在水平方向上的原始长度与视频在水平方向上的原始长度的比值;heightRatio是指关键内容区域在垂直方向上的原始长度与视频在垂直方向上的原始长度的比值。
示例性的,右边黑边占比=1-左边黑边占比。其中,右边黑边占比=在水平方向位于关键内容区域右侧的子背景内容区域的长度/在水平方向上位于关键内容区域两侧的子背景内容区域在水平方向的长度之和。
示例性的,基于关键内容区域的宽度占比*width,即可得到关键内容区域在水平方向的原始长度;基于关键内容区域高度占比*height,即可得到关键内容区域在垂直方向的原始长度。基于width-关键内容区域在水平方向的原始长度,可得到在水平方向上位于关键内容区域两侧的子背景内容区域在水平方向的长度之和。基于height-关键内容区域高度占比*height,可得到在垂直方向上位于关键内容区域两侧的子背景内容区域在垂直方向上的长度之和。
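示例性的，基于上述字段反推各原始长度的计算可以用如下示意性片段表示（字段名与上面的示例一致，函数名为举例假设）：
def original_sizes(info):
    """由视频信息计算关键内容区域及各子背景内容区域的原始长度（示意）。"""
    width, height = info["width"], info["height"]
    key_w = info["widthRatio"] * width        # 关键内容区域在水平方向的原始长度
    key_h = info["heightRatio"] * height      # 关键内容区域在垂直方向的原始长度
    side_w = width - key_w                    # 水平方向两侧子背景内容区域长度之和
    side_h = height - key_h                   # 垂直方向两侧子背景内容区域长度之和
    left = info["leftRatio"] * side_w         # 左侧子背景内容区域在水平方向的长度
    top = info["topRatio"] * side_h           # 上方子背景内容区域在垂直方向的长度
    return key_w, key_h, left, side_w - left, top, side_h - top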
示例性的,上述原始信息还可以包括关键内容区域与背景内容区域的相对位置,例如,响应于关键内容区域位于视频的中间位置,即位于关键内容区域左侧的子背景内容区域在水平方向上的长度=位于关键内容区域右侧的子背景内容区域在水平方向上的长度,且,位于关键内容区域上方的子背景内容区域在垂直方向上的长度=位于关键内容区域下方的子背景内容区域在垂直方向上的长度。
示例性的,视频的视频信息还可以包括:在视频进行缩放后,视频的展示尺寸、关键内容区域的展示尺寸、背景内容区域的展示尺寸。
示例性的,展示尺寸是指当前展示的尺寸,展示尺寸包括在水平方向上的长度以及在垂直方向上的长度。
图21是根据一示例性实施例示出的一种应用于服务器的视频展示方法的流程图。该方法包括步骤S210至步骤S213。
在步骤S210中,接收电子设备发送的获取视频指令。
在步骤S211中,从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频。
在步骤S212中,获取至少一个所述视频对应的视频信息;一个所述视频的视频信息包括所述视频的展示尺寸以及所述视频中的关键内容区域;
在步骤S213中,将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备。
其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
在一可选实现方式中，在步骤S210之前、步骤S211之前或步骤S212之前，针对已存储的每一所述视频，执行以下步骤N11至步骤N12；或者，步骤S212包括步骤N11至步骤N12。
在步骤N11中,从所述视频中获取多帧视频图像。
在步骤N12中,基于多帧所述视频图像,确定所述视频包含的所述关键内容区域。
针对步骤N11的说明,请参见步骤I21的说明,针对步骤N12的说明,请参见步骤I22的说明。
步骤N12有三种实现方式，第一种实现方式包括步骤J1至步骤J3，针对步骤J1至步骤J3的说明可以参见相应部分，这里不再赘述。第二种实现方式包括步骤J1、步骤J2、步骤J4、步骤L1、步骤L2、步骤L3以及步骤L5。第三种实现方式包括步骤L1、步骤L2以及步骤L6。
步骤N12的三种实现方式可以参见步骤I22的实现方式，这里不再赘述。
在应用于服务器的视频展示方法实施例中,还包括确定目标图像区域为关键内容区域的概率的方法,如步骤K1至步骤K3,请参见相应描述,这里不再赘述。
在一可选实现方式中,可参见图17a至图17c对确定目标图像区域为关键内容区域的概率的意义的说明,为了提高客户端基于视频信息缩放视频的准确度,将对应的第一概率大于或等于第二阈值的视频的视频信息发送至电子设备,不将对应的第一概率小于第二阈值的视频的视频信息发送至电子设备。步骤S213包括步骤N21至步骤N22。
在步骤N21中,从所述至少一个视频中,确定对应的所述第一概率大于或等于第二阈值的视频。
在步骤N22中,将所述至少一个视频以及对应的所述第一概率大于或等于所述第二阈值的视频的视频信息发送至所述电子设备。
在一可选实施例中,为了扩大召回率,对于对应的第一概率小于第二阈值的每一视频执行以下步骤L1、步骤L2、步骤L3以及步骤N23。
步骤N23中,将所述候选关键内容区域与所述目标图像区域相同的视频的视频信息发送至所述电子设备。
可以理解的是，由于通过两种计算方式（一种为步骤J1至步骤J3的方法，另一种为步骤L1至步骤L3的方法）确定的关键内容区域相同，所以得到的关键内容区域的准确性较高，因此可以将该视频的视频信息发送至电子设备。
针对步骤L1至步骤L3的说明可以参见上述相应说明,这里不再赘述。
在一可选实施例中,图22是根据一示例性实施例示出的一种应用于电子设备的视频展示装置的结构图。
该视频展示装置包括：第一获取模块2001、第二获取模块2002、缩放模块2003以及展示模块2004。
第一获取模块,被配置为接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;
第二获取模块,被配置为获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;
缩放模块，被配置为根据所述第一获取模块获得的所述操作信息以及所述第二获取模块获取的所述第一视频的视频信息，对所述第一视频进行缩放处理，得到第二视频，所述第二视频包括所述关键内容区域；
展示模块,被配置为响应于所述视频缩放操作,在所述视频播放界面中展示所述缩放模块得到的所述第二视频。
在一可选实现方式中,所述缩放模块具体被配置为:
第一确定单元,被配置为根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数;
缩放单元,被配置为根据所述第一确定单元确定的所述缩放方式以及所述缩放参数对所述第一视频进行缩放,得到所述第二视频。
在一可选实现方式中,所述操作信息至少包括操作类型以及操作距离,所述第一确定单元具体被配置为:
第一确定子单元,被配置为在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数;所述缩小方式至少包括是否缩小所述关键内容区域以及缩小类型;所述缩小类型包括预设方向缩小或整体缩小;所述缩小参数至少包括在所述预设方向上的缩小长度;
第二确定子单元,被配置为在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数;所述放大方式至少包括是否放大所述关键内容区域以及放大类型;所述放大类型包括预设方向放大或整体放大;所述放大参数至少包括在所述预设方向上的放大长度。
在一可选实现方式中，所述操作距离为所述视频缩放操作在所述预设方向的投影距离，所述第一确定子单元具体被配置为：第一确定子模块、第二确定子模块、第三确定子模块以及第四确定子模块。
第一确定子模块,被配置为根据所述视频信息中包括的关键内容区域,确定所述第一视频在所述关键内容区域之外的背景内容区域。
第二确定子模块,被配置为在所述操作距离不大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域以及所述背景内容区域的缩小类型为预设方向缩小或整体缩小。
第三确定子模块,被配置为在所述背景内容区域的缩小类型为预设方向缩小的情况下,基于所述操作距离确定在所述预设方向上的缩小长度。
第四确定子模块,被配置为在所述背景内容区域的缩小类型为整体缩小的情况下,基于所述操作距离以及所述背景内容区域的尺寸,确定第一缩小比例以及在所述预设方向上的缩小长度,所述第一缩小比例为所述背景内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一可选实现方式中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离,所述第一确定子单元具体被配置为:第五确定子模块、第六确定子模块和第七确定子模块。
第五确定子模块,被配置为根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域。
第六确定子模块，被配置为在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下，确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域。
第七确定子模块,被配置为基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
在一可选实现方式中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离,所述第一确定子单元具体被配置为:第八确定子模块、第九确定子模块、第十确定子模块以及第十一确定子模块。
第八确定子模块，被配置为根据所述视频信息，确定所述第一视频在所述关键内容区域之外的背景内容区域。
第九确定子模块,被配置为基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
第十确定子模块,被配置为在所述关键内容区域的缩小类型为预设方向缩小的情况下,基于所述背景内容区域在所述预设方向的长度以及所述操作距离,确定所述关键内容区域在所述预设方向上的缩小长度。
第十一确定子模块,被配置为在所述关键内容区域的缩小类型为整体缩小的情况下,基于所述背景内容区域在所述预设方向的长度、所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的缩小长度以及第二缩小比例,所述第二缩小比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一可选实现方式中,所述第二确定子单元具体被配置为:第十二确定子模块、第十三确定子模块、第十四确定子模块和第十五确定子模块。
第十二确定子模块,被配置为根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸。
第十三确定子模块,被配置为在所述关键内容区域在所述预设方向的长度等于所述关键内容区域的原始长度的情况下,确定所述放大方式包括不放大所述关键内容区域、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大。
第十四确定子模块,被配置为在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度。
第十五确定子模块,被配置为在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第一放大比例,所述第一放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
在一可选实现方式中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述第二确定子单元具体被配置为:第十六确定子模块、第十七确定子模块和第十八确定子模块。
第十六确定子模块,被配置为在所述关键内容区域在所述预设方向的长度小于或等于所述关键内容区域的原始长度与所述操作距离的差值的情况下,确定所述放大方式包括放大所述关键内容区域以及所述关键内容区域的放大类型为预设方向放大或整体放大。
第十七确定子模块,被配置为在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述关键内容区域在所述预设方向上的放大长度。
第十八确定子模块,被配置为在所述关键内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的放大长度以及第二放大比例,所述第二放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
在一可选实现方式中，所述操作距离为所述视频缩放操作在所述预设方向的投影距离；所述第二确定子单元具体被配置为：第十九确定子模块、第二十确定子模块、第二十一确定子模块、第二十二确定子模块、第二十三确定子模块和第二十四确定子模块。
第十九确定子模块,被配置为在所述关键内容区域在所述预设方向的长度大于所述关键内容区域的原始长度与所述操作距离的差值的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸。
第二十确定子模块,被配置为确定所述放大方式包括放大所述关键内容区域、所述关键内容区域的放大类型为预设方向放大或整体放大、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大。
第二十一确定子模块,被配置为在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述关键内容区域在所述预设方向的长度以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度。
第二十二确定子模块,被配置为在所述关键内容区域的放大类型为整体放大的情况下,基于所述关键内容区域的尺寸以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度以及第三放大比例,所述第三放大比例为所述关键内容区域在所述预设方向上的放大长度和与所述预设方向垂直的方向的放大长度的比值。
第二十三确定子模块,被配置为在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度。
第二十四确定子模块,被配置为在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第四放大比例,所述第四放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
在一可选实现方式中,所述背景内容区域包括第一子背景内容区域以及第二子背景内容区域,所述第一视频在所述预设方向上依次包括所述第一子背景内容区域、所述关键内容区域以及所述第二子背景内容区域;
所述缩小参数还包括所述第一子背景内容区域在所述预设方向上的缩小比例,以及所述第二子背景内容区域在所述预设方向上的缩小比例,所述缩小比例是指自身在所述预设方向上的长度与所述背景内容区域在所述预设方向上的长度的比值;
所述放大参数还包括所述第一子背景内容区域在所述预设方向上的放大比例,以及所述第二子背景内容区域在所述预设方向上的放大比例,所述放大比例是指自身在所述预设方向上的原始长度与所述背景内容区域在所述预设方向上原始长度的比值。
在一可选实现方式中,该装置还包括:
第一确定模块,被配置为确定所述视频缩放操作对应的操作类型;
一键缩小模块,被配置为响应于所述操作类型为一键缩小操作,去除所述第一视频中所述关键内容区域之外的背景内容区域,得到所述第二视频;
一键放大模块,被配置为响应于所述操作类型为一键放大操作,将所述关键内容区域放大至所述关键内容区域的原始尺寸,将所述背景内容区域放大至所述背景内容区域的原始尺寸,以得到所述第二视频。
在一可选实现方式中,第二获取模块具体被配置为:
第一获取单元,被配置为从所述第一视频中获取多帧视频图像;
第二获取单元,被配置为基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域。
在一可选实现方式中,第二获取单元具体被配置为:
第一获取子单元,被配置为针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;
第二获取子单元,被配置为基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;
第三确定子单元,被配置为将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
在一可选实现方式中,第二获取子单元具体被配置为:
第一获取子模块,被配置为分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,一帧所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;
第二获取子模块,被配置为基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;
第三获取子模块,被配置为对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
在一可选实现方式中,还包括:
第一转换模块，被配置为将所述目标图像中位于所述目标图像区域中的图像转换成灰度图；
第三获取模块,被配置为获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;
第二确定模块,被配置为将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
在一可选实现方式中，该装置还包括：第一触发模块，被配置为响应于所述第一概率大于或等于第二阈值，触发所述缩放模块。
在一可选实现方式中,该装置还包括:
第四获取模块,被配置为响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
第三确定模块,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
第四确定模块，被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域；
第二触发模块,被配置为响应于所述候选关键内容区域与所述目标图像区域相同,触发所述缩放模块。
在一可选实现方式中,第二获取单元具体被配置为:
第三获取子单元,被配置为获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
第四确定子单元,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
第五确定子单元，被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
在一可选实现方式中,所述第二获取模块具体被配置为:
第一发送模块,被配置为向服务器发送获取所述第一视频的视频信息的指令;
第一接收模块,被配置为接收服务器发送的所述第一视频的视频信息。
在一可选实施例中,图23是根据一示例性实施例示出的一种应用于服务器的视频展示装置的结构图。
该视频展示装置包括：第二接收模块2101、第五获取模块2102、第六获取模块2103以及第二发送模块2104。
第二接收模块,被配置为接收电子设备发送的获取视频指令;
第五获取模块,被配置为从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频;
第六获取模块，被配置为获取至少一个所述视频对应的视频信息；一个所述视频的视频信息包括：所述视频的展示尺寸以及所述视频中的关键内容区域；
第二发送模块,被配置为将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备;
其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
在一可选实现方式中,该装置还包括:
第七获取模块,被配置为从所述视频中获取多帧视频图像;
第五确定模块,被配置为基于多帧所述视频图像,确定所述视频包含的所述关键内容区域。
在一可选实现方式中,第五确定模块具体被配置为:
第三获取单元,被配置为针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;
第二确定单元,被配置为基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;
第三确定单元,被配置为将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
在一可选实现方式中,第二确定单元具体被配置为:
第四获取子单元,被配置为分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,一帧所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;
第五获取子单元，被配置为基于至少一帧所述第一图像，获得第二图像，所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值；
第六获取子单元，被配置为对所述第二图像进行处理，获得目标图像，所述目标图像包含的至少一个图像区域均为单连通区域。
在一可选实现方式中,该装置还包括:
第二转换模块，被配置为将所述目标图像中位于所述目标图像区域中的图像转换成灰度图；
第八获取模块，被配置为获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目；
第六确定模块，被配置为将所述第一数目与所述灰度图包含的各像素的第二数目的比值，确定为所述第一概率。
在一可选实现方式中，第二发送模块具体被配置为：
第四确定单元,被配置为从所述至少一个视频中,确定对应的所述第一概率大于或等于第二阈值的视频;
第一发送单元,被配置为将所述至少一个视频以及对应的所述第一概率大于或等于所述第二阈值的视频的视频信息发送至所述电子设备。
在一可选实现方式中,该装置还包括:
第九获取模块,被配置为响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
第七确定模块,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
第八确定模块，被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域；
第三发送模块,被配置为将所述候选关键内容区域与所述目标图像区域相同的视频的视频信息发送至所述电子设备。
在一可选实现方式中,所述第五确定模块具体被配置为:
第四获取单元,被配置为获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
第五确定单元,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
第六确定单元，被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
在一可选实施例中,本公开实施例还提供了一种视频展示系统,该视频展示系统包括:服务器以及至少一个电子设备。
下面结合图2公开的实施环境所涉及的第一种应用场景和第二种应用场景,对电子设备22以及服务器21之间的交互过程进行介绍。
在第一种应用场景中，电子设备22向服务器21发送获取视频指令，服务器21接收电子设备22发送的获取视频指令，并基于该获取视频指令从已存储的各视频中获得与该获取视频指令对应的至少一个视频。电子设备22接收服务器发送的至少一个视频，基于视频展示需求在视频播放界面中展示至少一个视频中的第一视频，并对第一视频进行解析，以得到第一视频的视频信息。
在第二种应用场景中，电子设备22向服务器21发送获取视频指令，服务器21接收电子设备22发送的获取视频指令。基于该获取视频指令从已存储的各视频中获得与该获取视频指令对应的至少一个视频以及至少一个视频各自对应的视频信息。将至少一个视频以及至少一个视频各自对应的视频信息发送至电子设备22，以便电子设备基于视频展示需求在视频播放界面中展示至少一个视频中的第一视频。
图24是根据一示例性实施例示出的一种电子设备的框图。电子设备包括但不限于输入单元241、第一存储器242、显示单元243以及处理器244等部件。本领域技术人员可以理解，图24中示出的结构只做实现方式的举例，并不构成对电子设备的限定，电子设备可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。
下面结合图24对电子设备的各个构成部件进行具体的介绍:
示例性的,输入单元241可用于接收用户输入的信息,例如缩放操作。
示例性的,输入单元241可以包括触控面板2411以及其他输入设备2412。触控面板2411,也称为触摸屏,可收集用户在其上的触摸操作(比如用户使用手指、触控笔等任何适合的物体或附件在触控面板2411上的操作),并根据预先设定的程式驱动相应的连接装置(例如驱动处理器244中的视频缩放功能)。可选的,触控面板2411可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器244,并能接收处理器244发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板2411。除了触控面板2411,输入单元241还可以包括其他输入设备2412。具体地,其他输入设备2412可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
示例性的,第一存储器242可用于存储软件程序以及模块,处理器244通过运行存储在第一存储器242的软件程序以及模块,从而执行电子设备的各种功能应用以及数据处理。第一存储器242可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序等;存储数据区可存储根据电子设备的使用所创建的数据(比如第一视频的关键内容区域在垂直方向上的长度、背景内容区域在垂直方向上的长度等)。此外,第一存储器242可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
示例性的，显示单元243可用于显示由用户输入的信息或提供给用户的信息（例如显示视频）以及电子设备的各种菜单。显示单元243可包括显示面板2431，可选的，可以采用LCD（Liquid Crystal Display，液晶显示器）、OLED（Organic Light-Emitting Diode，有机发光二极管）等形式来配置显示面板2431。进一步的，触控面板2411可覆盖显示面板2431，当触控面板2411检测到在其上或附近的触摸操作后，传送给处理器244以确定触摸事件的类型，随后处理器244根据触摸事件的类型在显示面板2431上提供相应的视觉输出。
示例性的，触控面板2411与显示面板2431可作为两个独立的部件来实现电子设备22的输出和输入功能，但是在某些实施例中，可以将触控面板2411与显示面板2431集成而实现电子设备的输入和输出功能。
处理器244是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在第一存储器242内的软件程序和/或模块,以及调用存储在第一存储器242内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。示例性的,处理器244可包括一个或多个处理单元;示例性的,处理器244可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器244中。
电子设备还包括给各个部件供电的电源245（比如电池），示例性的，电源可以通过电源管理系统与处理器244逻辑相连，从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出，电子设备还可以包括摄像头、蓝牙模块、RF（Radio Frequency，射频）电路、传感器、音频电路、WiFi（wireless fidelity，无线保真）模块、网络单元、接口单元等等。
电子设备通过网络单元为用户提供了无线的宽带互联网访问,如访问服务器。
接口单元为外部装置与电子设备连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到电子设备内的一个或多个元件或者可以用于在电子设备和外部装置之间传输数据。
在本公开实施例中,该电子设备所包括处理器244可能是一个中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路。
该电子设备所包括处理器244具有以下功能:接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,所述第二视频包括所述关键内容区域;响应于所述视频缩放操作,在所述视频播放界面中展示所述第二视频。
图25是根据一示例性实施例示出的一种服务器的框图。服务器包括但不限于：处理器251、第二存储器252、网络接口253、I/O控制器254以及通信总线255。
需要说明的是,本领域技术人员可以理解,图25中示出的服务器的结构并不构成对服务器的限定,服务器可以包括比图25所示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图25对服务器的各个构成部件进行具体的介绍:
处理器251是服务器的控制中心,利用各种接口和线路连接整个服务器的各个部分,通过运行或执行存储在第二存储器252内的软件程序和/或模块,以及调用存储在第二存储器252内的数据,执行服务器的各种功能和处理数据,从而对服务器进行整体监控。处理器251可包括一个或多个处理单元;示例性的,处理器251可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器251中。
处理器251可能是一个中央处理器(Central Processing Unit,CPU),或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路等;
第二存储器252可能包含内存,例如高速随机存取存储器(Random-Access Memory,RAM)2521和只读存储器(Read-Only Memory,ROM)2522,也可能还包括大容量存储设备2525,例如至少1个磁盘存储器等。当然,该服务器还可能包括其他业务所需要的硬件。
其中，上述的第二存储器252，用于存储上述处理器251可执行指令。上述处理器251具有以下功能：接收电子设备发送的获取视频指令；从已存储的各视频中获得至少一个视频，所述至少一个视频包括第一视频；获取至少一个所述视频对应的视频信息；一个所述视频的视频信息包括：所述视频的展示尺寸以及所述视频中的关键内容区域；将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备；其中，一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时，对所述视频进行缩放处理的基础，对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
一个有线或无线网络接口253被配置为将服务器连接到网络。
处理器251、第二存储器252、网络接口253和I/O控制器254可以通过通信总线255相互连接,该通信总线可以是ISA(Industry Standard Architecture,工业标准体系结构)总线、PCI(Peripheral Component Interconnect,外设部件互连标准)总线或EISA(Extended Industry Standard Architecture,扩展工业标准结构)总线等。所述总线可以分为地址总线、数据总线、控制总线等。
在示例性实施例中，服务器可以被一个或多个应用专用集成电路（ASIC）、数字信号处理器（DSP）、数字信号处理设备（DSPD）、可编程逻辑器件（PLD）、现场可编程门阵列（FPGA）、控制器、微控制器、微处理器或其他电子元件实现，用于执行上述视频展示方法。
在示例性实施例中，本公开实施例提供了一种包括指令的存储介质，例如包括指令的第一存储器242，上述指令可由电子设备的处理器244执行以完成上述方法。在一些实施例中，存储介质可以是非临时性计算机可读存储介质，例如，所述非临时性计算机可读存储介质可以是ROM、随机存取存储器（RAM）、CD-ROM、磁带、软盘和光数据存储设备等。
在示例性实施例中,本公开实施例提供了一种包括指令的存储介质,例如包括指令的第二存储器252,上述指令可由服务器的处理器251执行以完成上述方法。在一些实施例中,存储介质可以是非临时性计算机可读存储介质,例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,可直接加载到计算机的内部存储器,例如上述第一存储器242中,并含有软件代码,该计算机程序经由计算机载入并执行后能够实现上述应用于电子设备的视频展示方法任一实施例所示步骤。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,可直接加载到计算机的内部存储器,例如上述第二存储器252中,并含有软件代码,该计算机程序经由计算机载入并执行后能够实现上述应用于服务器的视频展示方法任一实施例所示步骤。
本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围。
本领域技术人员在考虑说明书及实践本公开后,将容易想到本公开的其它实施方案。本申请旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (83)

  1. 一种视频展示方法,包括:
    接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;
    获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;
    根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,所述第二视频包括所述关键内容区域;
    响应于所述视频缩放操作,在所述视频播放界面中展示所述第二视频。
  2. 根据权利要求1所述视频展示方法,其中,所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,包括:
    根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数;
    将所述第一视频按照所述缩放方式以及所述缩放参数进行缩放,得到所述第二视频。
  3. 根据权利要求2所述视频展示方法,其中,所述操作信息至少包括操作类型以及操作距离;所述根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数,包括:
    在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数;所述缩小方式至少包括是否缩小所述关键内容区域以及缩小类型;所述缩小类型包括预设方向缩小或整体缩小;所述缩小参数至少包括在所述预设方向上的缩小长度;
    在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数;所述放大方式至少包括是否放大所述关键内容区域以及放大类型;所述放大类型包括预设方向放大或整体放大;所述放大参数至少包括在所述预设方向上的放大长度。
  4. 根据权利要求3所述视频展示方法,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数,包括:
    根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;
    在所述操作距离不大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域以及所述背景内容区域的缩小类型为预设方向缩小或整体缩小;
    在所述背景内容区域的缩小类型为预设方向缩小的情况下,基于所述操作距离确定在所述预设方向上的缩小长度;
    在所述背景内容区域的缩小类型为整体缩小的情况下,基于所述操作距离以及所述背景内容区域的尺寸,确定第一缩小比例以及在所述预设方向上的缩小长度,所述第一缩小比例为所述背景内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  5. 根据权利要求3所述视频展示方法,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数,包括:
    根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;
    在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域以及缩小所述背景内容区域;
    基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
  6. 根据权利要求3所述视频展示方法,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数,包括:
    根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;
    在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下，确定所述缩小方式包括缩小所述关键内容区域、所述关键内容区域的缩小类型为预设方向缩小或整体缩小以及缩小所述背景内容区域；
    基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度;
    在所述关键内容区域的缩小类型为预设方向缩小的情况下,基于所述背景内容区域在所述预设方向的长度以及所述操作距离,确定所述关键内容区域在所述预设方向上的缩小长度;
    在所述关键内容区域的缩小类型为整体缩小的情况下,基于所述背景内容区域在所述预设方向的长度、所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的缩小长度以及第二缩小比例,所述第二缩小比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  7. 根据权利要求3所述视频展示方法,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数,包括:
    在所述关键内容区域在所述预设方向的长度等于所述关键内容区域的原始长度的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;
    确定所述放大方式包括不放大所述关键内容区域、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;
    在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;
    在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第一放大比例,所述第一放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
  8. 根据权利要求3所述视频展示方法,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数,包括:
    在所述关键内容区域在所述预设方向的长度小于或等于所述关键内容区域的原始长度与所述操作距离的差值的情况下,确定所述放大方式包括放大所述关键内容区域以及所述关键内容区域的放大类型为预设方向放大或整体放大;
    在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述关键内容区域在所述预设方向上的放大长度;
    在所述关键内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的放大长度以及第二放大比例,所述第二放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  9. 根据权利要求3所述视频展示方法,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数,包括:
    在所述关键内容区域在所述预设方向的长度大于所述关键内容区域的原始长度与所述操作距离的差值的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;
    确定所述放大方式包括放大所述关键内容区域、所述关键内容区域的放大类型为预设方向放大或整体放大、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;
    在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述关键内容区域在所述预设方向的长度以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度;
    在所述关键内容区域的放大类型为整体放大的情况下，基于所述关键内容区域的尺寸以及所述关键内容区域在所述预设方向上的原始长度确定所述关键内容区域在所述预设方向上的放大长度以及第三放大比例，所述第三放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值；
    在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;
    在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第四放大比例,所述第四放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
  10. 根据权利要求4至9任一所述视频展示方法,其中,所述背景内容区域包括第一子背景内容区域以及第二子背景内容区域,所述第一视频在所述预设方向上依次包括所述第一子背景内容区域、所述关键内容区域以及所述第二子背景内容区域;
    所述缩小参数还包括所述第一子背景内容区域在所述预设方向上的缩小比例,以及所述第二子背景内容区域在所述预设方向上的缩小比例,所述缩小比例是指自身在所述预设方向上的长度与所述背景内容区域在所述预设方向上的长度的比值;
    所述放大参数还包括所述第一子背景内容区域在所述预设方向上的放大比例,以及所述第二子背景内容区域在所述预设方向上的放大比例,所述放大比例是指自身在所述预设方向上的原始长度与所述背景内容区域在所述预设方向上原始长度的比值。
  11. 根据权利要求1所述视频展示方法,其中,所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,包括:
    确定所述操作信息对应的操作类型;
    响应于所述操作类型为一键缩小操作,去除所述第一视频中所述关键内容区域之外的背景内容区域,得到所述第二视频;
    响应于所述操作类型为一键放大操作,将所述关键内容区域放大至所述关键内容区域的原始尺寸,将所述背景内容区域放大至所述背景内容区域的原始尺寸,以得到所述第二视频。
  12. 根据权利要求1至9任一所述视频展示方法,其中,所述获取所述视频播放界面中展示的第一视频的视频信息包括:
    从所述第一视频中获取多帧视频图像;
    基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域。
  13. 根据权利要求12所述视频展示方法,其中,所述基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域包括:
    针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;
    基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;
    将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
  14. 根据权利要求13所述视频展示方法,其中,所述基于所述至少一帧差异图像,获得目标图像包括:
    分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;
    基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;
    对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
  15. 根据权利要求14所述视频展示方法,其中,还包括:
    将所述目标图像中位于所述目标图像区域中的图像转换成灰度图;
    获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;
    将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
  16. 根据权利要求15所述视频展示方法,其中,还包括:
    响应于所述第一概率大于或等于第二阈值,执行所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频。
  17. 根据权利要求16所述视频展示方法,其中,还包括:
    响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域；
    响应于所述候选关键内容区域与所述目标图像区域相同,执行所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频。
  18. 根据权利要求12所述视频展示方法,其中,所述基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域包括:
    获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
  19. 一种视频展示方法,其中,应用于服务器,所述视频展示方法包括:
    接收电子设备发送的获取视频指令;
    从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频;
    获取至少一个所述视频对应的视频信息;一个所述视频的视频信息包括所述视频的展示尺寸以及所述视频中的关键内容区域;
    将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备;
    其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
  20. 根据权利要求19所述视频展示方法,其中,针对已存储的每一所述视频,所述视频展示方法还包括:
    从所述视频中获取多帧视频图像;
    基于多帧所述视频图像,确定所述视频包含的所述关键内容区域。
  21. 根据权利要求20所述视频展示方法,其中,所述基于多帧所述视频图像,确定所述视频包含的所述关键内容区域包括:
    针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;
    基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;
    将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
  22. 根据权利要求21所述视频展示方法,其中,所述基于所述至少一帧差异图像,获得目标图像包括:
    分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;
    基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;
    对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
  23. 根据权利要求22所述视频展示方法,其中,还包括:
    将所述目标图像中位于所述目标图像区域中的图像转换成灰度图;
    获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;
    将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
  24. 根据权利要求23所述视频展示方法,其中,所述将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备包括:
    从所述至少一个视频中,确定对应的所述第一概率大于或等于第二阈值的视频;
    将所述至少一个视频以及对应的所述第一概率大于或等于所述第二阈值的视频的视频信息发送至所述电子设备。
  25. 根据权利要求24所述视频展示方法,其中,还包括:
    响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域；
    将所述候选关键内容区域与所述目标图像区域相同的视频的视频信息发送至所述电子设备。
  26. 根据权利要求20所述视频展示方法,其中,所述基于多帧所述视频图像,确定所述视频包含的所述关键内容区域包括:
    获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
  27. 一种视频展示装置,其中,所述视频展示装置包括:
    第一获取模块,被配置为接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;
    第二获取模块,被配置为获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;
    缩放模块，被配置为根据所述第一获取模块获得的所述操作信息以及所述第二获取模块获取的所述第一视频的视频信息，对所述第一视频进行缩放处理，得到第二视频，所述第二视频包括所述关键内容区域；
    展示模块,被配置为响应于所述视频缩放操作,在所述视频播放界面中展示所述缩放模块得到的所述第二视频。
  28. 根据权利要求27所述视频展示装置,其中,所述缩放模块具体被配置为:
    第一确定单元,被配置为根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数;
    缩放单元,被配置为根据所述第一确定单元确定的所述缩放方式以及所述缩放参数对所述第一视频进行缩放,得到所述第二视频。
  29. 根据权利要求28所述视频展示装置,其中,所述操作信息至少包括操作类型以及操作距离,所述第一确定单元具体被配置为:
    第一确定子单元,被配置为在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数;所述缩小方式至少包括是否缩小所述关键内容区域以及缩小类型;所述缩小类型包括预设方向缩小或整体缩小;所述缩小参数至少包括在所述预设方向上的缩小长度;
    第二确定子单元,被配置为在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数;所述放大方式至少包括是否放大所述关键内容区域以及放大类型;所述放大类型包括预设方向放大或整体放大;所述放大参数至少包括在所述预设方向上的放大长度。
  30. 根据权利要求29所述视频展示装置,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离,所述第一确定子单元具体被配置为:
    第一确定子模块,被配置为根据所述视频信息中包括的关键内容区域,确定所述第一视频在所述关键内容区域之外的背景内容区域;
    第二确定子模块,被配置为在所述操作距离不大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域以及所述背景内容区域的缩小类型为预设方向缩小或整体缩小;
    第三确定子模块,被配置为在所述背景内容区域的缩小类型为预设方向缩小的情况下,基于所述操作距离确定在所述预设方向上的缩小长度;
    第四确定子模块,被配置为在所述背景内容区域的缩小类型为整体缩小的情况下,基于所述操作距离以及所述背景内容区域的尺寸,确定第一缩小比例以及在所述预设方向上的缩小长度,所述第一缩小比例为所述背景内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  31. 根据权利要求29所述视频展示装置,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离,所述第一确定子单元具体被配置为:
    第五确定子模块,被配置为根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;
    第六确定子模块,被配置为在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域;
    第七确定子模块,被配置为基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
  32. 根据权利要求29所述视频展示装置,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离,所述第一确定子单元具体被配置为:
    第八确定子模块，被配置为根据所述视频信息，确定所述第一视频在所述关键内容区域之外的背景内容区域；
    第九确定子模块,被配置为基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度;
    第十确定子模块,被配置为在所述关键内容区域的缩小类型为预设方向缩小的情况下,基于所述背景内容区域在所述预设方向的长度以及所述操作距离,确定所述关键内容区域在所述预设方向上的缩小长度;
    第十一确定子模块,被配置为在所述关键内容区域的缩小类型为整体缩小的情况下,基于所述背景内容区域在所述预设方向的长度、所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的缩小长度以及第二缩小比例,所述第二缩小比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  33. 根据权利要求29所述视频展示装置,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述第二确定子单元具体被配置为:
    第十二确定子模块,被配置为根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;
    第十三确定子模块,被配置为在所述关键内容区域在所述预设方向的长度等于所述关键内容区域的原始长度的情况下,确定所述放大方式包括不放大所述关键内容区域、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;
    第十四确定子模块,被配置为在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;
    第十五确定子模块,被配置为在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第一放大比例,所述第一放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
  34. 根据权利要求29所述视频展示装置,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述第二确定子单元具体被配置为:
    第十六确定子模块,被配置为在所述关键内容区域在所述预设方向的长度小于或等于所述关键内容区域的原始长度与所述操作距离的差值的情况下,确定所述放大方式包括放大所述关键内容区域以及所述关键内容区域的放大类型为预设方向放大或整体放大;
    第十七确定子模块,被配置为在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述关键内容区域在所述预设方向上的放大长度;
    第十八确定子模块,被配置为在所述关键内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的放大长度以及第二放大比例,所述第二放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  35. 根据权利要求29所述视频展示装置,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述第二确定子单元具体被配置为:
    第十九确定子模块,被配置为在所述关键内容区域在所述预设方向的长度大于所述关键内容区域的原始长度与所述操作距离的差值的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;
    第二十确定子模块,被配置为确定所述放大方式包括放大所述关键内容区域、所述关键内容区域的放大类型为预设方向放大或整体放大、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;
    第二十一确定子模块,被配置为在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述关键内容区域在所述预设方向的长度以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度;
    第二十二确定子模块,被配置为在所述关键内容区域的放大类型为整体放大的情况下,基于所述关键内容区域的尺寸以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度以及第三放大比例,所述第三放大比例为所述关键内容区域在所述预设方向上的放大长度和与所述预设方向垂直的方向的放大长度的比值;
    第二十三确定子模块,被配置为在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;
    第二十四确定子模块,被配置为在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第四放大比例,所述第四放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
  36. 根据权利要求30至35任一所述视频展示装置,其中,所述背景内容区域包括第一子背景内容区域以及第二子背景内容区域,所述第一视频在所述预设方向上依次包括所述第一子背景内容区域、所述关键内容区域以及所述第二子背景内容区域;
    所述缩小参数还包括所述第一子背景内容区域在所述预设方向上的缩小比例,以及所述第二子背景内容区域在所述预设方向上的缩小比例,所述缩小比例是指自身在所述预设方向上的长度与所述背景内容区域在所述预设方向上的长度的比值;
    所述放大参数还包括所述第一子背景内容区域在所述预设方向上的放大比例,以及所述第二子背景内容区域在所述预设方向上的放大比例,所述放大比例是指自身在所述预设方向上的原始长度与所述背景内容区域在所述预设方向上原始长度的比值。
  37. 根据权利要求27所述视频展示装置,其中,还包括:
    第一确定模块,被配置为确定所述视频缩放操作对应的操作类型;
    一键缩小模块,被配置为响应于所述操作类型为一键缩小操作,去除所述第一视频中所述关键内容区域之外的背景内容区域,得到所述第二视频;
    一键放大模块,被配置为响应于所述操作类型为一键放大操作,将所述关键内容区域放大至所述关键内容区域的原始尺寸,将所述背景内容区域放大至所述背景内容区域的原始尺寸,以得到所述第二视频。
  38. 根据权利要求27至35任一所述视频展示装置,其中,所述第二获取模块具体被配置为:
    第一获取单元,被配置为从所述第一视频中获取多帧视频图像;
    第二获取单元,被配置为基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域。
  39. 根据权利要求38所述视频展示装置,其中,所述第二获取单元具体被配置为:
    第一获取子单元,被配置为针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;
    第二获取子单元,被配置为基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;
    第三确定子单元,被配置为将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
  40. 根据权利要求39所述视频展示装置,其中,所述第二获取子单元具体被配置为:
    第一获取子模块,被配置为分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;
    第二获取子模块,被配置为基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;
    第三获取子模块,被配置为对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
  41. 根据权利要求40所述视频展示装置,其中,还包括:
    第一转换模块，被配置为将所述目标图像中位于所述目标图像区域中的图像转换成灰度图；
    第三获取模块,被配置为获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;
    第二确定模块,被配置为将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
  42. 根据权利要求41所述视频展示装置,其中,还包括:
    第一触发模块,被配置为响应于所述第一概率大于或等于第二阈值,触发所述缩放模块。
  43. 根据权利要求42所述视频展示装置,其中,还包括:
    第四获取模块,被配置为响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    第三确定模块,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    第四确定模块，被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域；
    第二触发模块,被配置为响应于所述候选关键内容区域与所述目标图像区域相同,触发所述缩放模块。
  44. 根据权利要求38所述视频展示装置,其中,所述第二获取单元具体被配置为:
    第三获取子单元,被配置为获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    第四确定子单元,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    第五确定子单元，被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
  45. 一种视频展示装置,其中,所述视频展示装置包括:
    第二接收模块,被配置为接收电子设备发送的获取视频指令;
    第五获取模块,被配置为从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频;
    第六获取模块，被配置为获取至少一个所述视频对应的视频信息；一个所述视频的视频信息包括：所述视频的展示尺寸以及所述视频中的关键内容区域；
    第二发送模块,被配置为将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备;
    其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
  46. 根据权利要求45所述视频展示装置,其中,还包括:
    第七获取模块,被配置为从所述视频中获取多帧视频图像;
    第五确定模块,被配置为基于多帧所述视频图像,确定所述视频包含的所述关键内容区域。
  47. 根据权利要求46所述视频展示装置，其中，所述第五确定模块具体被配置为：
    第三获取单元,被配置为针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;
    第二确定单元,被配置为基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;
    第三确定单元,被配置为将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
  48. 根据权利要求47所述视频展示装置,其中,所述第二确定单元具体被配置为:
    第四获取子单元,被配置为分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;
    第五获取子单元，被配置为基于至少一帧所述第一图像，获得第二图像，所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值；
    第六获取子单元，被配置为对所述第二图像进行处理，获得目标图像，所述目标图像包含的至少一个图像区域均为单连通区域。
  49. 根据权利要求48所述视频展示装置,其中,还包括:
    第二转换模块，被配置为将所述目标图像中位于所述目标图像区域中的图像转换成灰度图；
    第八获取模块，被配置为获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目；
    第六确定模块，被配置为将所述第一数目与所述灰度图包含的各像素的第二数目的比值，确定为所述第一概率。
  50. 根据权利要求49所述视频展示装置,其中,所述第一发送模块具体被配置为:
    第四确定单元,被配置为从所述至少一个视频中,确定对应的所述第一概率大于或等于第二阈值的视频;
    第一发送单元,被配置为将所述至少一个视频以及对应的所述第一概率大于或等于所述第二阈值的视频的视频信息发送至所述电子设备。
  51. 根据权利要求50所述视频展示装置,其中,还包括:
    第九获取模块,被配置为响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    第七确定模块,被配置为从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    第八确定模块，被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域；
    第三发送模块,被配置为将所述候选关键内容区域与所述目标图像区域相同的视频的视频信息发送至所述电子设备。
  52. 根据权利要求46所述视频展示装置,其中,所述第五确定模块具体被配置为:
    第四获取单元,被配置为获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    第五确定单元，被配置为从所述直线段位置集合包含的多个纵坐标中，确定第一纵坐标以及第二纵坐标；
    第六确定单元，被配置为将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
  53. 一种电子设备,包括:
    处理器;
    用于存储所述处理器可执行指令的第一存储器;
    其中,所述处理器被配置为执行所述指令,以实现以下步骤:
    接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;
    获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;
    根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,所述第二视频包括所述关键内容区域;
    响应于所述视频缩放操作,在所述视频播放界面中展示所述第二视频。
  54. 根据权利要求53所述的电子设备,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    根据所述操作信息以及所述第一视频的视频信息,确定所述第一视频的缩放方式以及缩放参数;
    将所述第一视频按照所述缩放方式以及所述缩放参数进行缩放,得到所述第二视频。
  55. 根据权利要求54所述的电子设备,其中,所述操作信息至少包括操作类型以及操作距离;所述处理器被配置执行所述可执行指令,实现以下步骤:
    在确定所述操作类型为缩小操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的缩小方式以及缩小参数;所述缩小方式至少包括是否缩小所述关键内容区域以及缩小类型;所述缩小类型包括预设方向缩小或整体缩小;所述缩小参数至少包括在所述预设方向上的缩小长度;
    在确定所述操作类型为放大操作的情况下,根据所述操作距离以及所述第一视频的视频信息,确定所述第一视频的放大方式以及放大参数;所述放大方式至少包括是否放大所述关键内容区域以及放大类型;所述放大类型包括预设方向放大或整体放大;所述放大参数至少包括在所述预设方向上的放大长度。
  56. 根据权利要求55所述的电子设备,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述处理器被配置执行所述可执行指令,实现以下步骤:
    根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;
    在所述操作距离不大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域、缩小所述背景内容区域以及所述背景内容区域的缩小类型为预设方向缩小或整体缩小;
    在所述背景内容区域的缩小类型为预设方向缩小的情况下,基于所述操作距离确定在所述预设方向上的缩小长度;
    在所述背景内容区域的缩小类型为整体缩小的情况下,基于所述操作距离以及所述背景内容区域的尺寸,确定第一缩小比例以及在所述预设方向上的缩小长度,所述第一缩小比例为所述背景内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  57. 根据权利要求55所述的电子设备,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述处理器被配置执行所述可执行指令,实现以下步骤:
    根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;
    在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括不缩小所述关键内容区域以及缩小所述背景内容区域;
    基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度。
  58. 根据权利要求55所述的电子设备,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述处理器被配置执行所述可执行指令,实现以下步骤:
    根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域;
    在所述操作距离大于所述背景内容区域在所述预设方向的长度的情况下,确定所述缩小方式包括缩小所述关键内容区域、所述关键内容区域的缩小类型为预设方向缩小或整体缩小以及缩小所述背景内容区域;
    基于所述背景内容区域的尺寸,确定所述背景内容区域的在所述预设方向上的缩小长度和与所述预设方向垂直的方向上的缩小长度;
    在所述关键内容区域的缩小类型为预设方向缩小的情况下,基于所述背景内容区域在所述预设方向的长度以及所述操作距离,确定所述关键内容区域在所述预设方向上的缩小长度;
    在所述关键内容区域的缩小类型为整体缩小的情况下,基于所述背景内容区域在所述预设方向的长度、所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的缩小长度以及第二缩小比例,所述第二缩小比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  59. 根据权利要求55所述的电子设备,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述处理器被配置执行所述可执行指令,实现以下步骤:
    在所述关键内容区域在所述预设方向的长度等于所述关键内容区域的原始长度的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;
    确定所述放大方式包括不放大所述关键内容区域、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;
    在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;
    在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第一放大比例,所述第一放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
  60. 根据权利要求55所述的电子设备,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述处理器被配置执行所述可执行指令,实现以下步骤:
    在所述关键内容区域在所述预设方向的长度小于或等于所述关键内容区域的原始长度与所述操作距离的差值的情况下,确定所述放大方式包括放大所述关键内容区域以及所述关键内容区域的放大类型为预设方向放大或整体放大;
    在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述关键内容区域在所述预设方向上的放大长度;
    在所述关键内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述关键内容区域的尺寸,确定所述关键内容区域在所述预设方向上的放大长度以及第二放大比例,所述第二放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值。
  61. 根据权利要求55所述的电子设备,其中,所述操作距离为所述视频缩放操作在所述预设方向的投影距离;所述处理器被配置执行所述可执行指令,实现以下步骤:
    在所述关键内容区域在所述预设方向的长度大于所述关键内容区域的原始长度与所述操作距离的差值的情况下,根据所述视频信息,确定所述第一视频在所述关键内容区域之外的背景内容区域对应的原始尺寸;
    确定所述放大方式包括放大所述关键内容区域、所述关键内容区域的放大类型为预设方向放大或整体放大、放大所述背景内容区域以及所述背景内容区域的放大类型为预设方向放大或整体放大;
    在所述关键内容区域的放大类型为预设方向放大的情况下,基于所述关键内容区域在所述预设方向的长度以及所述关键内容区域在所述预设方向上的原始长度,确定所述关键内容区域在所述预设方向上的放大长度;
    在所述关键内容区域的放大类型为整体放大的情况下,基于所述关键内容区域的尺寸以及所述关键内容区域在所述预设方向上的原始长度确定所述关键内容区域在所述预设方向上的放大长度以及第三放大比例,所述第三放大比例为所述关键内容区域在所述预设方向上的长度和与所述预设方向垂直的方向的长度的比值;
    在所述背景内容区域的放大类型为预设方向放大的情况下,基于所述操作距离确定所述背景内容区域在所述预设方向上的放大长度;
    在所述背景内容区域的放大类型为整体放大的情况下,基于所述操作距离以及所述背景内容区域的原始尺寸,确定所述背景内容区域在所述预设方向上的放大长度以及第四放大比例,所述第四放大比例为所述背景内容区域在所述预设方向上的原始长度和与所述预设方向垂直的方向的原始长度的比值。
  62. 根据权利要求55至61任一所述的电子设备,其中,所述背景内容区域包括第一子背景内容区域以及第二子背景内容区域,所述第一视频在所述预设方向上依次包括所述第一子背景内容区域、所述关键内容区域以及所述第二子背景内容区域;
    所述缩小参数还包括所述第一子背景内容区域在所述预设方向上的缩小比例,以及所述第二子背景内容区域在所述预设方向上的缩小比例,所述缩小比例是指自身在所述预设方向上的长度与所述背景内容区域在所述预设方向上的长度的比值;
    所述放大参数还包括所述第一子背景内容区域在所述预设方向上的放大比例,以及所述第二子背景内容区域在所述预设方向上的放大比例,所述放大比例是指自身在所述预设方向上的原始长度与所述背景内容区域在所述预设方向上原始长度的比值。
  63. 根据权利要求53所述的电子设备,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    确定所述操作信息对应的操作类型;
    响应于所述操作类型为一键缩小操作,去除所述第一视频中所述关键内容区域之外的背景内容区域,得到所述第二视频;
    响应于所述操作类型为一键放大操作,将所述关键内容区域放大至所述关键内容区域的原始尺寸,将所述背景内容区域放大至所述背景内容区域的原始尺寸,以得到所述第二视频。
  64. 根据权利要求53至61任一所述的电子设备,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    从所述第一视频中获取多帧视频图像;
    基于多帧所述视频图像,确定所述第一视频包含的所述关键内容区域。
  65. 根据权利要求64所述的电子设备,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;
    基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;
    将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
  66. 根据权利要求65所述的电子设备,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;
    基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;
    对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
  67. 根据权利要求66所述的电子设备,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    将所述目标图像中位于所述目标图像区域中的图像转换成灰度图;
    获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;
    将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
  68. 根据权利要求67所述的电子设备，其中，所述处理器被配置执行所述可执行指令，实现以下步骤：
    响应于所述第一概率大于或等于第二阈值,执行所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频。
  69. 根据权利要求68所述的电子设备,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域；
    响应于所述候选关键内容区域与所述目标图像区域相同,执行所述根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频步骤。
  70. 根据权利要求64所述的电子设备,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
  71. 一种服务器,包括:
    处理器;
    用于存储所述处理器可执行指令的第二存储器;
    其中,所述处理器被配置为执行所述指令,以实现以下步骤:
    接收电子设备发送的获取视频指令;
    从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频;
    获取至少一个所述视频对应的视频信息;一个所述视频的视频信息包括所述视频的展示尺寸以及所述视频中的关键内容区域;
    将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备;
    其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
  72. 根据权利要求71所述的服务器,其中,针对已存储的每一所述视频,所述处理器被配置执行所述可执行指令,实现以下步骤:
    从所述视频中获取多帧视频图像;
    基于多帧所述视频图像,确定所述视频包含的所述关键内容区域。
  73. 根据权利要求72所述的服务器,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    针对多帧所述视频图像中任意位置相邻的两帧视频图像,获得所述两帧视频图像的差异图像,以得到至少一帧差异图像;
    基于所述至少一帧差异图像,获得目标图像,所述目标图像中的每一位置的像素值为所述至少一帧差异图像中所述位置对应的像素值的平均值;
    将所述目标图像包含的至少一个图像区域中面积最大的目标图像区域确定为所述关键内容区域。
  74. 根据权利要求73所述的服务器,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    分别对所述至少一帧差异图像进行处理,获得每一帧所述差异图像对应的第一图像,所述第一图像包括互不相连的多个图像区域,多个所述图像区域中至少一个图像区域为多连通区域;
    基于至少一帧所述第一图像,获得第二图像,所述第二图像中的每一位置的像素为所述至少一帧第一图像中所述位置处的像素值的平均值;
    对所述第二图像进行处理,获得目标图像,所述目标图像包含的至少一个图像区域均为单连通区域。
  75. 根据权利要求74所述的服务器，其中，所述处理器被配置执行所述可执行指令，还实现以下步骤：
    将所述目标图像中位于所述目标图像区域中的图像转换成灰度图;
    获取所述灰度图中像素值大于或等于第一阈值的像素的第一数目;
    将所述第一数目与所述灰度图包含的各像素的第二数目的比值,确定为所述第一概率。
  76. 根据权利要求75所述的服务器,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    从所述至少一个视频中,确定对应的所述第一概率大于或等于第二阈值的视频;
    将所述至少一个视频以及对应的所述第一概率大于或等于所述第二阈值的视频的视频信息发送至所述电子设备。
  77. 根据权利要求76所述的服务器,其中,所述处理器被配置执行所述可执行指令,还实现以下步骤:
    响应于所述第一概率小于所述第二阈值,获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为候选关键内容区域；
    将所述候选关键内容区域与所述目标图像区域相同的视频的视频信息发送至所述电子设备。
  78. 根据权利要求72所述的服务器,其中,所述处理器被配置执行所述可执行指令,实现以下步骤:
    获取多帧所述视频图像分别包含的水平直线段的纵坐标,以得到直线段位置集合;
    从所述直线段位置集合包含的多个纵坐标中,确定第一纵坐标以及第二纵坐标;
    将纵坐标为所述第一纵坐标的第一水平直线、纵坐标为所述第二纵坐标的第二水平直线与所述视频图像的垂直方向的边界围成的区域，确定为所述关键内容区域。
  79. 一种视频展示系统，包括：如权利要求53至70中任一项所述的电子设备以及如权利要求71至78中任一项所述的服务器。
  80. 一种非易失性计算机可读存储介质,其中,响应于所述非易失性计算机可读存储介质中的指令由电子设备执行,所述电子设备能够执行以下步骤:
    接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;
    获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;
    根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,所述第二视频包括所述关键内容区域;
    响应于所述视频缩放操作,在所述视频播放界面中展示所述第二视频。
  81. 一种非易失性计算机可读存储介质,其中,响应于所述非易失性计算机可读存储介质中的指令由服务器执行,所述服务器能够执行以下步骤:
    接收电子设备发送的获取视频指令;
    从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频;
    获取至少一个所述视频对应的视频信息;一个所述视频的视频信息包括所述视频的展示尺寸以及所述视频中的关键内容区域;
    将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备;
    其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
  82. 一种计算机程序产品，可直接加载到计算机的内部存储器中，并含有软件代码，该计算机程序经由计算机载入并执行后能够实现以下步骤：
    接收实施于视频播放界面的视频缩放操作,获取所述视频缩放操作的操作信息;
    获取所述视频播放界面中展示的第一视频的视频信息,所述视频信息至少包括所述第一视频的展示尺寸以及关键内容区域;
    根据所述操作信息以及所述第一视频的视频信息,对所述第一视频进行缩放处理,得到第二视频,所述第二视频包括所述关键内容区域;
    响应于所述视频缩放操作,在所述视频播放界面中展示所述第二视频。
  83. 一种计算机程序产品，可直接加载到计算机的内部存储器中，并含有软件代码，该计算机程序经由计算机载入并执行后能够实现以下步骤：
    接收电子设备发送的获取视频指令;
    从已存储的各视频中获得至少一个视频,所述至少一个视频包括第一视频;
    获取至少一个所述视频对应的视频信息;一个所述视频的视频信息包括所述视频的展示尺寸以及所述视频中的关键内容区域;
    将所述至少一个视频以及至少一个所述视频的视频信息发送至所述电子设备;
    其中,一个所述视频的视频信息是所述电子设备在检测到实施于展示有所述视频的视频播放界面的视频缩放操作时,对所述视频进行缩放处理的基础,对所述视频进行缩放处理后得到的视频包括所述视频中的关键内容区域。
PCT/CN2021/107455 2020-10-30 2021-07-20 视频展示方法和视频展示装置 WO2022088776A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011191485.9A CN112367559B (zh) 2020-10-30 2020-10-30 视频展示方法、装置、电子设备、服务器及存储介质
CN202011191485.9 2020-10-30

Publications (1)

Publication Number Publication Date
WO2022088776A1 true WO2022088776A1 (zh) 2022-05-05

Family

ID=74513856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107455 WO2022088776A1 (zh) 2020-10-30 2021-07-20 视频展示方法和视频展示装置

Country Status (2)

Country Link
CN (1) CN112367559B (zh)
WO (1) WO2022088776A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112367559B (zh) * 2020-10-30 2022-10-04 北京达佳互联信息技术有限公司 视频展示方法、装置、电子设备、服务器及存储介质
CN113891040A (zh) * 2021-09-24 2022-01-04 深圳Tcl新技术有限公司 视频处理方法、装置、计算机设备和存储介质
CN117459662A (zh) * 2023-10-11 2024-01-26 书行科技(北京)有限公司 一种视频播放方法、识别方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102111601A (zh) * 2009-12-23 2011-06-29 大猩猩科技股份有限公司 内容可适性的多媒体处理系统与处理方法
CN104822088A (zh) * 2015-04-16 2015-08-05 腾讯科技(北京)有限公司 视频图像缩放方法和装置
CN110784754A (zh) * 2019-10-30 2020-02-11 北京字节跳动网络技术有限公司 视频显示方法、装置和电子设备
US10580453B1 (en) * 2015-12-21 2020-03-03 Amazon Technologies, Inc. Cataloging video and creating video summaries
CN110941378A (zh) * 2019-11-12 2020-03-31 北京达佳互联信息技术有限公司 视频内容显示方法及电子设备
CN111562895A (zh) * 2020-03-25 2020-08-21 北京字节跳动网络技术有限公司 多媒体信息的展示方法、装置以及电子设备
CN112367559A (zh) * 2020-10-30 2021-02-12 北京达佳互联信息技术有限公司 视频展示方法、装置、电子设备、服务器及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201141205A (en) * 2010-05-14 2011-11-16 Univ Nat Cheng Kung System for transforming video outputting format
CN106803234B (zh) * 2015-11-26 2020-06-16 腾讯科技(深圳)有限公司 图片编辑中的图片显示控制方法及装置
WO2018049321A1 (en) * 2016-09-12 2018-03-15 Vid Scale, Inc. Method and systems for displaying a portion of a video stream with partial zoom ratios
CN107562877A (zh) * 2017-09-01 2018-01-09 北京搜狗科技发展有限公司 图像数据的显示方法、装置和用于图像数据显示的装置
CN108062364A (zh) * 2017-12-05 2018-05-22 优酷网络技术(北京)有限公司 信息展示方法及装置
CN110691259B (zh) * 2019-11-08 2022-04-22 北京奇艺世纪科技有限公司 视频播放方法、系统、装置、电子设备及存储介质
CN111083568A (zh) * 2019-12-13 2020-04-28 维沃移动通信有限公司 视频数据处理方法及电子设备

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102111601A (zh) * 2009-12-23 2011-06-29 大猩猩科技股份有限公司 内容可适性的多媒体处理系统与处理方法
CN104822088A (zh) * 2015-04-16 2015-08-05 腾讯科技(北京)有限公司 视频图像缩放方法和装置
US10580453B1 (en) * 2015-12-21 2020-03-03 Amazon Technologies, Inc. Cataloging video and creating video summaries
CN110784754A (zh) * 2019-10-30 2020-02-11 北京字节跳动网络技术有限公司 视频显示方法、装置和电子设备
CN110941378A (zh) * 2019-11-12 2020-03-31 北京达佳互联信息技术有限公司 视频内容显示方法及电子设备
CN111562895A (zh) * 2020-03-25 2020-08-21 北京字节跳动网络技术有限公司 多媒体信息的展示方法、装置以及电子设备
CN112367559A (zh) * 2020-10-30 2021-02-12 北京达佳互联信息技术有限公司 视频展示方法、装置、电子设备、服务器及存储介质

Also Published As

Publication number Publication date
CN112367559B (zh) 2022-10-04
CN112367559A (zh) 2021-02-12

Similar Documents

Publication Publication Date Title
WO2022088776A1 (zh) 视频展示方法和视频展示装置
WO2020259651A1 (zh) 一种控制用户界面的方法及电子设备
US8760557B2 (en) User interface for a digital camera
WO2018177379A1 (zh) 手势识别、控制及神经网络训练方法、装置及电子设备
CN110471596B (zh) 一种分屏切换方法、装置、存储介质及电子设备
US20120174029A1 (en) Dynamically magnifying logical segments of a view
JP7181375B2 (ja) 目標対象の動作認識方法、装置及び電子機器
EP3547218A1 (en) File processing device and method, and graphical user interface
CN112099707A (zh) 显示方法、装置和电子设备
CN110796664B (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
US20190278426A1 (en) Inputting information using a virtual canvas
CN112328353B (zh) 子应用播放器的展示方法、装置、电子设备和存储介质
CN112068698A (zh) 一种交互方法、装置及电子设备、计算机存储介质
CN108737739A (zh) 一种预览画面采集方法、预览画面采集装置及电子设备
CN109743566A (zh) 一种用于识别vr视频格式的方法与设备
CN112911147A (zh) 显示控制方法、显示控制装置及电子设备
CN109873980B (zh) 视频监控方法、装置及终端设备
CN107357422A (zh) 摄像机‑投影交互触控方法、装置及计算机可读存储介质
WO2024067512A1 (zh) 视频密集预测方法及其装置
WO2011096571A1 (ja) 入力装置
CN109246468B (zh) 一种基于教育系统的视频列表切换方法、设备及存储介质
CN110047126B (zh) 渲染图像的方法、装置、电子设备和计算机可读存储介质
CA2807866C (en) User interface for a digital camera
CN113457117B (zh) 游戏中的虚拟单位选取方法及装置、存储介质及电子设备
CN115599484A (zh) 一种截屏方法、截屏装置、截屏设备及计算机存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884512

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10-08-2023)