WO2022022689A1 - Interaction method, apparatus and electronic device - Google Patents

Interaction method, apparatus and electronic device

Info

Publication number
WO2022022689A1
Authority
WO
WIPO (PCT)
Prior art keywords
size, target video, video, target, transformation
Prior art date
Application number
PCT/CN2021/109648
Other languages
English (en), French (fr)
Inventor
吴安妮
李笑林
杨弘宇
陈宇鹏
孙英杰
杨盼祎
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Priority to EP21850635.0A (EP4175307A4)
Publication of WO2022022689A1
Priority to US17/887,077 (US11863835B2)
Priority to US18/514,931 (US20240089551A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25808: Management of client data
    • H04N 21/25825: Management of client data involving client display capabilities, e.g. screen resolution of a mobile phone
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440227: Reformatting by decomposing into layers, e.g. base layer and one or more enhancement layers
    • H04N 21/440263: Reformatting by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N 21/440272: Reformatting by altering the spatial resolution for performing aspect ratio conversion
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/4508: Management of client data or end-user data
    • H04N 21/4518: Management of client data or end-user data involving characteristics of one or more peripherals, e.g. peripheral type, software version, amount of memory available or display capabilities
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47217: End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N 21/485: End-user interface for client configuration
    • H04N 21/4854: End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast
    • H04N 21/4858: End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows

Definitions

  • the present disclosure relates to the field of computer technology, and in particular, to an interaction method, apparatus, and electronic device.
  • an embodiment of the present disclosure provides an interaction method, the method comprising: in response to detecting a predefined size transformation operation, determining target transformation information of the target video based on whether the current size of the target video is a preset anchor point size, wherein the target video is a video played in a video playback area; and transforming the target video based on the target transformation information and playing the transformed target video.
  • an embodiment of the present disclosure provides a video transformation method, the method comprising: encapsulating a first layer into a second layer, wherein the playback progress of the target video in the second layer is consistent with that of the target video in the first layer, and the layer corresponding to the player is the first layer; and transforming the target video in the second layer according to a predefined transformation operation.
  • an embodiment of the present disclosure provides an interaction apparatus, the apparatus comprising: a determination unit, configured to determine, in response to detecting a predefined size transformation operation, target transformation information of the target video based on whether the current size of the target video is a preset anchor point size, wherein the target video is a video played in the video playback area; and a transformation unit, configured to transform the target video based on the target transformation information and play the transformed target video.
  • an embodiment of the present disclosure provides an interaction apparatus, applied to a first electronic device, the apparatus comprising: an encapsulation module, configured to encapsulate a first layer into a second layer, wherein the playback progress of the target video in the second layer is consistent with that of the target video in the first layer, and the layer corresponding to the player is the first layer; and a transformation module, configured to transform the target video in the second layer according to a predefined transformation operation.
  • embodiments of the present disclosure provide an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the interaction method as described in the first aspect or the video transformation method as described in the second aspect.
  • embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, and when the program is executed by a processor, the steps of the interaction method described in the first aspect or the video transformation method described in the second aspect are implemented.
  • the interaction method, apparatus and electronic device provided by the embodiments of the present disclosure can, when a size transformation operation is detected, determine the target transformation information of the target video by referring to whether the current size of the target video is the preset anchor point size. The size transformation operation can therefore accommodate the user's need to scale to a preset anchor point size, and the target video can be quickly transformed to a size commonly used by the user, reducing user operations and improving interaction efficiency.
  • the target video picture in the video playback area can thus better meet the user's information needs, improving the efficiency of the user's information acquisition.
  • FIG. 1 is a flowchart of one embodiment of an interaction method according to the present disclosure
  • FIG. 2 is a schematic diagram of an application scenario of the interaction method according to the present disclosure
  • FIG. 3 is a flowchart of an optional implementation of step 101 of the interaction method according to the present disclosure.
  • FIG. 4 is a flowchart of yet another embodiment of an interaction method according to the present disclosure.
  • FIG. 5 is a flowchart of one embodiment of a video transformation method according to the present disclosure.
  • FIG. 6 is a schematic structural diagram of an embodiment of an interaction apparatus according to the present disclosure.
  • FIG. 7 is a schematic structural diagram of an embodiment of a video transformation apparatus according to the present disclosure.
  • FIG. 8 is an exemplary system architecture in which the interaction method of an embodiment of the present disclosure may be applied.
  • FIG. 9 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
  • the term “including” and variations thereof are open-ended inclusions, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 shows the flow of an embodiment of the interaction method according to the present disclosure.
  • the interaction method is applied to a terminal device.
  • the interaction method includes the following steps:
  • Step 101: in response to detecting a predefined size transformation operation, determine target transformation information of the target video based on whether the current size of the target video is a preset anchor point size.
  • the first execution body of the interaction method may, in response to detecting a predefined size transformation operation, determine the target transformation information of the target video based on whether the current size of the target video is the preset anchor point size.
  • the above-mentioned predefined size transformation operation may be a predefined operation, and the predefined operation may be used to perform size transformation on an image.
  • the specific method of the predefined transformation operation can be set according to the actual application scenario, and is not limited here.
  • the implementation location of the above-mentioned predefined transformation operation may be in the video playing area, or may be an area outside the video playing area; it may also be implemented through voice control.
  • the above predefined size transformation operation may include a two-finger operation, which can be intuitively understood as an operation performed with two fingers; of course, during the actual operation, the user may use any body part to simulate the effect of two fingers when performing a two-finger operation.
  • for example, in a two-finger operation, increasing the distance between the two fingers can be treated as a zoom-in operation, and reducing the distance between the two fingers can be treated as a zoom-out operation.
  • if the distance between the two fingers does not change and the two fingers move together, this can be treated as a translation (pan) operation.
  • a size transformation operation may be a process from the user's start of the operation to the release of the operation. It can be understood that in one size transformation operation, one or more specific transformation operations may be implemented. For example, in a size transformation operation, an enlargement operation and a reduction operation may be included.
  • for example, the user may first enlarge the target video and then reduce it without releasing the size transformation operation; in this case, the above size transformation operation includes both the enlargement operation type and the reduction operation type.
  • the target transformation information may be determined according to the operation type first identified on the current video picture and whether the current picture size is the preset anchor point size.
  • for example, if the current video picture is at the original video size and a zoom-in operation is recognized on it, the current video picture can be adjusted to full-screen size regardless of whether the operation has been released.
  • the operation type of the size transformation operation can be determined according to a vector between the operation position where the user starts the operation and the operation position where the user releases the operation.
  • the user's operation trajectory from the start of the operation to the release operation may not be used as the basis for determining the operation type.
  • for example, the user may resize the target video as follows: the current size of the target video is A, and the user zooms in and then zooms out to B, where B is smaller than A. The picture of the target video is enlarged and then reduced following the user's operation, and the operation type of this operation is a reduction operation. A sketch of determining the operation type from the start and release positions follows.
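  • As a hedged illustration of how the operation type might be derived from the two-finger positions at the start and at the release of the operation (rather than from the full trajectory), the following Kotlin sketch compares the finger spacing at those two moments; the type names, the TouchPoint class and the tolerance value are illustrative assumptions, not details taken from the disclosure.

```kotlin
import kotlin.math.hypot

// Hypothetical touch point in screen coordinates.
data class TouchPoint(val x: Float, val y: Float)

enum class OperationType { ZOOM_IN, ZOOM_OUT, PAN }

/**
 * Derives the operation type of a two-finger gesture from the finger spacing
 * when the gesture starts and when it is released; the intermediate trajectory
 * is ignored, as described above.
 */
fun operationType(
    startA: TouchPoint, startB: TouchPoint,
    endA: TouchPoint, endB: TouchPoint,
    tolerance: Float = 0.01f                 // assumed tolerance for "spacing unchanged"
): OperationType {
    val startSpan = hypot(startA.x - startB.x, startA.y - startB.y)
    val endSpan = hypot(endA.x - endB.x, endA.y - endB.y)
    return when {
        endSpan > startSpan * (1 + tolerance) -> OperationType.ZOOM_IN
        endSpan < startSpan * (1 - tolerance) -> OperationType.ZOOM_OUT
        else -> OperationType.PAN            // spacing unchanged, fingers moved together
    }
}
```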
  • the target video is the video played in the video playback area.
  • the current size of the target video may be the size of the target video when the size conversion operation starts.
  • the size transformation may include increasing the size of the video.
  • the size transformation may include downsizing the video.
  • panning may include moving the video frame in the video playback area.
  • the preset anchor point size may be a preset, commonly used size. It should be noted that the size of the video can change continuously; the expression “preset anchor point size” singles out certain sizes within that continuous range, which serve as fixed points representing commonly used video sizes.
  • the preset anchor point size may include at least one of the following, but is not limited to: original image size, full-screen image size.
  • the above-mentioned original image size may be the original size of the video frame in the target video displayed to the terminal device.
  • the size of the full-screen image may be the size of the target video in a full-screen playback state on the terminal device.
  • the full-screen playback state, also known as the full-screen state, usually refers to a state in which the picture is larger than the original image size and is maximized within the preset display area; for example, it may be a state in which the video picture fills the video playback area.
  • the target transformation information of the target video may be information used to transform the target video.
  • the target transformation information may indicate a transformation method, and the target transformation information corresponding to the free scaling mode can be understood as transforming the current video size according to the scale of the user operation.
  • the target transformation information may include, but is not limited to, at least one of the following: target transformation ratio, target transformation size.
  • the target transformation ratio may indicate the ratio of transforming the current size, for example, transforming to twice the current size.
  • the target transform size can indicate the size to which the target video is expected to be transformed.
  • the sizes in this application may be absolute sizes or relative sizes; a relative size may be, for example, the ratio of a given size to the original image size.
  • the sizes involved (such as the anchor point size and the target video size) may also vary with the device model.
  • Step 102: transform the target video based on the target transformation information, and play the transformed target video.
  • the above-mentioned execution body may transform the target video based on the target transformation information, and play the transformed target video.
  • the transformed target video can be played in the video playback area.
  • the size of the video playback area may change.
  • the size of the video playback area can be full screen size or three-quarter screen size.
  • the size of the video playback area may switch between at least two sizes.
  • as shown in FIG. 2, the video playback area 201 is a rectangle, and the target video picture 202 is also a rectangle. It can be understood that the horizontal side of a rectangle in FIG. 2 is referred to as its length, and the vertical side as its width.
  • the target video picture can be understood as the target video displayed in the video playback area.
  • in FIG. 2, the target video picture 202 is much smaller than the video playback area 201.
  • the size of the target video screen can be set to be the same as the size of the target video.
  • the target video picture is part of the video frame of the target video.
  • the target conversion information of the target video can be determined by referring to whether the current size of the target video is the preset anchor point size.
  • in this way, the size transformation operation can accommodate the user's need to scale to a preset anchor point size, and the target video can be quickly transformed to a size commonly used by the user, reducing user operations and improving interaction efficiency.
  • the target video picture in the video playing area can meet the needs of the user to obtain information, and the efficiency of the user's information acquisition is improved.
  • the above method may include: in response to detecting a predefined pan operation in the video playback area, and in response to determining that the current size of the target video is greater than the size of the video playback area, moving the target video in the video playback area.
  • the above-defined panning operations can be used to pan the target video.
  • the specific implementation manner of the above-defined translation operation can be set according to an actual application scenario, which is not limited herein.
  • the above-defined translation operation may be a drag operation, and the number of trigger points between the human body and the screen when the drag operation is performed may not be limited.
  • the above-mentioned predefined panning operation may be a single-finger drag operation, a two-finger drag operation, or a three-finger drag operation.
  • the current size of the target video is greater than the size of the video playback area, which can be understood as the side length of the target video is greater than the side length of the video playback area in at least one direction.
  • the length of the target video may be greater than the length of the video playing area, or the width of the target video may be greater than the width of the video playing area.
  • the target video can be moved in the video playback area, so that the user can move the desired viewing part to a convenient viewing position, thereby improving the user's information acquisition efficiency.
  • for example, the video picture around a beauty product being shown can be enlarged, and the user can also move that part of the picture to the middle of the video playback area, so that the product information can be viewed clearly.
  • determining the target transformation information of the target video based on whether the current size of the target video is a preset anchor point size includes: based on the operation type of the size transformation operation and the preset anchor point size, determining the corresponding target transformation information.
  • in some embodiments, determining the corresponding target transformation information based on the operation type of the size transformation operation and the preset anchor point size includes: in response to the operation type and the preset anchor point size satisfying a first preset relationship, using first target transformation information, wherein the first target transformation information indicates switching the size of the target video between different preset anchor point sizes; and in response to the operation type and the preset anchor point size satisfying a second preset relationship, using second target transformation information, wherein the second target transformation information corresponds to a free scaling mode.
  • in some embodiments, the first preset relationship indicates that the operation type, the current size of the target video, and the preset anchor point size satisfy a preset first correspondence, or do not satisfy a preset second correspondence.
  • in some embodiments, the second preset relationship indicates that the operation type, the current size of the target video, and the preset anchor point size do not satisfy the preset first correspondence, or satisfy the preset second correspondence.
  • in some embodiments, the first preset relationship includes: the current size of the target video is the original image size, the operation type is a zoom-in operation, and the target transformation information includes transforming the target video to the full-screen image size; and/or, the current size of the target video is the full-screen image size, the operation type is a reduction operation, and the target transformation information includes transforming the target video to the original image size.
  • in some embodiments, the second preset relationship includes: the current size of the target video is the original image size, the operation type is a reduction operation, and the target transformation information includes freely scaling the target video according to the operation information; and/or, the current size of the target video is the full-screen image size, the operation type is a zoom-in operation, and the target transformation information includes freely scaling the target video according to the operation information.
  • the first target transformation information indicates to switch the size of the target video between different preset anchor point sizes.
  • the second target transformation information corresponds to the free scaling mode.
  • in the free scaling mode, the size of the target video can be continuously enlarged or reduced following the user operation. For example, the size of the target video is gradually enlarged as the distance between the two touch points operated by the user becomes larger, or gradually reduced as that distance becomes smaller. A sketch of the decision between anchor switching and free scaling follows.
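  • The following Kotlin sketch summarizes the decision described above, mapping the current size (anchor or not) and the operation type either to an anchor-to-anchor switch or to the free scaling mode; the enum, class and function names are illustrative assumptions rather than terms used in the disclosure.

```kotlin
enum class OperationType { ZOOM_IN, ZOOM_OUT }

enum class AnchorSize { ORIGINAL, FULL_SCREEN }

sealed class TargetTransformInfo {
    data class SwitchToAnchor(val target: AnchorSize) : TargetTransformInfo()
    object FreeScaling : TargetTransformInfo()
}

/**
 * Determines the target transformation information from the current anchor size
 * (null when the current size is not an anchor point size) and the operation type:
 *  - original size + zoom-in  -> switch to the full-screen image size,
 *  - full screen   + zoom-out -> switch to the original image size,
 *  - every other combination  -> free scaling mode.
 */
fun determineTargetTransform(
    currentAnchor: AnchorSize?,
    operation: OperationType
): TargetTransformInfo = when {
    currentAnchor == AnchorSize.ORIGINAL && operation == OperationType.ZOOM_IN ->
        TargetTransformInfo.SwitchToAnchor(AnchorSize.FULL_SCREEN)
    currentAnchor == AnchorSize.FULL_SCREEN && operation == OperationType.ZOOM_OUT ->
        TargetTransformInfo.SwitchToAnchor(AnchorSize.ORIGINAL)
    else -> TargetTransformInfo.FreeScaling
}
```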
  • in some embodiments, the above step 101 may include: in response to determining that the current size is the preset anchor point size, and in response to the operation type of the size transformation operation corresponding to the preset anchor point size, determining the preset transformation information corresponding to the preset anchor point size as the target transformation information.
  • the operation type of the size transformation operation may include at least one of the following: reduction operation, enlargement operation.
  • each preset anchor point size has corresponding preset transformation information and also has a corresponding operation type.
  • otherwise, the free scaling mode may be entered.
  • the preset transformation information may be set according to an actual application scenario, which is not limited herein.
  • the preset transformation information may be a preset transformation ratio or a preset size.
  • in some embodiments, the size transformation operation includes a zoom-in operation, the preset anchor point size includes the original image size, the preset transformation information corresponding to the original image size indicates the full-screen image size, and the operation type corresponding to the original image size is the zoom-in operation.
  • the preset transformation information corresponding to the original image size indicates the full-screen image size, which can be understood as: the preset transformation information corresponding to the original image size may indicate transformation information for converting the original image size to the full-screen image size.
  • in these embodiments, determining the preset transformation information corresponding to the preset anchor point size as the target transformation information may include: in response to determining that the current size is the original image size, and in response to detecting a zoom-in operation in the video playback area, determining the preset transformation information indicating the full-screen image size as the target transformation information.
  • in other words, when the zoom-in operation is performed on a target video at the original image size, the target video can be adjusted directly to the full-screen image size.
  • the number of user operations can be reduced, and the computing resources and display resources consumed by the user operations can be reduced.
  • in some embodiments, the size transformation operation includes a reduction operation, the preset anchor point size includes the full-screen image size, the preset transformation information corresponding to the full-screen image size indicates the original image size, and the operation type corresponding to the full-screen image size is the reduction operation.
  • the preset transformation information corresponding to the full-screen image size indicating the original image size can be understood as follows: the preset transformation information corresponding to the full-screen image size may indicate transformation information for converting the full-screen image size to the original image size.
  • in these embodiments, determining the preset transformation information corresponding to the preset anchor point size as the target transformation information may include: in response to determining that the current size is the full-screen image size, and in response to detecting a reduction operation in the video playback area, determining the preset transformation information indicating the original image size as the target transformation information.
  • in other words, when the reduction operation is performed on a target video at the full-screen image size, the target video can be adjusted directly to the original image size.
  • the number of user operations can be reduced, and the computing resources and display resources consumed by user operations can be reduced.
  • FIG. 3 shows an optional implementation manner of the foregoing step 101 .
  • the flow shown in FIG. 3 may include step 1011 and step 1012 .
  • Step 1011: in response to detecting a size transformation operation in the video playback area, and in response to a preset free scaling condition being satisfied, enter the free scaling mode.
  • Step 1012: in response to determining that the free scaling mode has been entered, determine the target transformation information according to the transformation information of the size transformation operation and the current size of the target video.
  • the target transformation information may be determined according to the transformation information of the size transformation operation and the current size of the target video.
  • the size of the enlarged video picture may be larger than the size of the screen; at this time, only part of the enlarged video picture can be displayed on the screen, and other parts of it can be displayed by dragging the video picture, for example.
  • the above free scaling conditions may include, but are not limited to, at least one of the following: the target video is not at a preset anchor point size, and it is determined that the operation type of the size transformation operation does not correspond to the preset anchor point size.
  • for a picture that has already been zoomed out, further scaling can be performed in the free zoom mode, that is, the video picture can be freely enlarged or reduced.
  • likewise, a picture that has already been zoomed in can enter the free zoom mode, and the video picture can be freely enlarged or reduced.
  • in some embodiments, the free scaling conditions include: the current size of the target video is a preset anchor point size, and the operation type indicated by the ratio of the real-time size of the target video to the current size when the size transformation operation is released does not correspond to the preset anchor point size.
  • the execution subject can determine the real-time size of the target video in real time as the size transformation operation proceeds.
  • the size of the target video is converted in real time based on the conversion information indicated by the size conversion operation.
  • the target video displayed on the screen can be enlarged or reduced following the instruction of the size conversion operation.
  • the size transformation effect is displayed in real time, and the progress of the size transformation can be shown to the user in time, so that the user can continue or terminate the size transformation operation according to the scaling effect achieved so far.
  • the real-time size can be used as the basis for judging the type of the user operation.
  • if the ratio of the real-time size of the target video to the current size when the size transformation operation is released is greater than 1, the zoom-in operation type may be indicated; if the ratio is less than 1, the reduction operation type may be indicated.
  • using the real-time size at the moment the operation is released can improve the accuracy of determining the operation type, thereby improving the accuracy of the transformation.
  • the user is watching the video in real time, and determines whether to release the size conversion operation according to the real-time screen. Therefore, the real-time size of the target video when the user releases the size conversion operation can more accurately reflect the user's expectations for the degree of size conversion.
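  • A minimal Kotlin sketch of the ratio test described above, classifying the operation from the ratio of the real-time size at release to the size when the operation started; the function and type names are assumptions for illustration.

```kotlin
enum class OperationType { ZOOM_IN, ZOOM_OUT, NONE }

/**
 * Classifies the size transformation operation from the ratio of the target
 * video's real-time size at the moment the operation is released to its size
 * when the operation started: a ratio greater than 1 indicates a zoom-in
 * operation, and a ratio less than 1 indicates a reduction operation.
 */
fun operationTypeAtRelease(sizeAtRelease: Float, sizeAtStart: Float): OperationType {
    require(sizeAtStart > 0f) { "size at start must be positive" }
    val ratio = sizeAtRelease / sizeAtStart
    return when {
        ratio > 1f -> OperationType.ZOOM_IN
        ratio < 1f -> OperationType.ZOOM_OUT
        else -> OperationType.NONE
    }
}
```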
  • the target transformation information of the target video is determined by referring to whether the current size of the target video is the preset anchor point size, so as to be compatible with the transformation operations commonly used by users and meet the operation requirements of most users;
  • on the other hand, under certain conditions (for example, when a zoom-in operation is received in the full-screen state, or a zoom-out operation is received at the original image size, where the zoom-in or zoom-out operation may be judged from the real-time operation result or from the operation result when the operation is released), the free zoom mode is triggered, which provides more precise scaling and meets the further needs of some users.
  • combining the above two aspects can cover the different operation scenarios of a large number of users and maximize operation efficiency and experience.
  • the above method may further include: displaying corresponding prompt information based on the zoom mode performed on the target video, wherein the prompt information is used to prompt the zoom mode.
  • the display manner and content of the above-mentioned prompt information can be set according to the actual application scenario, which is not limited here.
  • the prompt information can be displayed in the form of a toast.
  • the zoom mode may indicate that the image size is adjusted to full screen, and the corresponding prompt information may be "switched to full screen".
  • the zoom mode may indicate the free zoom mode, and the corresponding prompt information may be "entered the free zoom mode"
  • the zoom mode may indicate resizing to the original image size, and the corresponding prompt information may be "restored to the original size".
  • in some embodiments, displaying corresponding prompt information based on the zoom mode performed on the target video includes: in response to adjusting the target video to the full-screen image size, displaying first prompt information, wherein the first prompt information is used to indicate that the target video is in the full-screen playback state.
  • the displaying corresponding prompt information based on the zooming manner of the target video includes: in response to adjusting the target video to the original image size, displaying second prompt information, wherein the second prompt information Used to indicate that the target video is playing at its original size.
  • the displaying corresponding prompt information based on the zooming method performed on the target video includes: in response to determining that the free zoom mode is entered, displaying third prompt information, wherein the third prompt information is used to indicate The target video can be freely scaled based on size transformation operations.
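  • As an illustration only, the prompt texts quoted above could be shown as a toast on Android roughly as in the following Kotlin sketch; the ZoomMode enum and the exact strings are assumptions based on the examples given in this description.

```kotlin
import android.content.Context
import android.widget.Toast

// Hypothetical zoom modes corresponding to the three prompts described above.
enum class ZoomMode { TO_FULL_SCREEN, TO_ORIGINAL_SIZE, FREE_ZOOM }

/** Shows a short toast describing the zoom mode that has just been applied. */
fun showZoomPrompt(context: Context, mode: ZoomMode) {
    val message = when (mode) {
        ZoomMode.TO_FULL_SCREEN -> "Switched to full screen"
        ZoomMode.TO_ORIGINAL_SIZE -> "Restored to the original size"
        ZoomMode.FREE_ZOOM -> "Entered the free zoom mode"
    }
    Toast.makeText(context, message, Toast.LENGTH_SHORT).show()
}
```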
  • the method further includes: in response to determining that the transformed target video is not the original image size, presenting a restore control, wherein the restore control is used to transform the target video to the original image size.
  • the above method may further include: in response to the end of the size transformation operation and/or the panning operation, detecting whether the target video is moved out of the video playing area; in response to determining that the target video is moved out, correcting the video picture in the video playing area to obtain the target video image that matches the size of the video playback area.
  • whether the video has been moved out of the video playback area can be judged using specific conditions set according to the actual application scenario. In some embodiments, if the video playback area has no picture in a certain direction while the target video still has undisplayed content along that direction, this situation can be understood as the video having been moved out of the video playback area.
  • the correction of the video picture may include operations such as panning the target video, and pulling the target video back to the video playing area.
  • the target video can be pulled back to the video playing area.
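  • One possible correction is sketched below in Kotlin, under the assumption that the video picture and the playback area are axis-aligned rectangles in the same coordinate space: when a gap has opened along an axis on which the video is at least as large as the area, the picture is translated so that it covers the area again. The Rect class and function name are illustrative assumptions.

```kotlin
// Hypothetical axis-aligned rectangle, in playback-area coordinates.
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    val width get() = right - left
    val height get() = bottom - top
}

/**
 * Pulls the target video picture back so that, along every axis where the video
 * is at least as large as the playback area, the video fully covers the area.
 * Returns the corrected position of the video picture.
 */
fun pullBackIntoArea(video: Rect, area: Rect): Rect {
    var dx = 0f
    var dy = 0f
    if (video.width >= area.width) {
        if (video.left > area.left) dx = area.left - video.left                 // gap on the left
        else if (video.right < area.right) dx = area.right - video.right        // gap on the right
    }
    if (video.height >= area.height) {
        if (video.top > area.top) dy = area.top - video.top                     // gap at the top
        else if (video.bottom < area.bottom) dy = area.bottom - video.bottom    // gap at the bottom
    }
    return Rect(video.left + dx, video.top + dy, video.right + dx, video.bottom + dy)
}
```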
  • the introduction of the embodiment of FIG. 4 can solve a further technical problem, that is, the problem that the video size conversion operation may cause confusion in the layout of the video playback screen.
  • the method provided in the embodiment of FIG. 4 can be applied to any transformation scene of the target video involved in this application.
  • the method provided in the embodiment of FIG. 4 can be applied to a size transformation scene and/or a translation scene.
  • the application to the size transformation field may include the application to the free scaling scene, or the application to the scene where the size transformation is performed based on the preset transformation information.
  • FIG. 4 shows the flow of an embodiment of the interaction method according to the present disclosure.
  • it can include:
  • Step 401: encapsulate the first layer into the second layer.
  • the first execution body (for example, a terminal device) of the interaction method may encapsulate the first layer into the second layer.
  • the layer where the acquired target video is located is the first layer.
  • the player on the terminal can acquire the target video locally or from another electronic device, and the target video parsed by the player is drawn onto a layer, which can then be displayed on the screen.
  • the player may further include playback logic information to control the playback of the video, for example, to control the switching of the video, to control the playback progress, and so on.
  • the layer corresponding to the player can be recorded as the first layer.
  • the layer corresponding to the player can be understood as the layer onto which the video parsed by the player is drawn.
  • the target video in the second layer has the same playback progress as the target video in the first layer.
  • the second layer can be understood as a new layer encapsulated outside the first layer.
  • the transformation of the second layer does not affect the playback logic of the player.
  • the playback progress of the second layer is consistent with the playback progress of the first layer. In other words, it can be understood as drawing the target video on the first layer onto the second layer.
  • the number of first layers may be one or at least two; if there are at least two, the display content of each of them can be drawn onto the second layer.
  • the target video in the second layer is used for transformation according to a size transformation operation and/or a translation operation.
  • the image displayed on the screen can be obtained from the second layer.
  • step 401 can be applied to the scene of size transformation and/or the scene of translation.
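  • The following framework-agnostic Kotlin sketch models the encapsulation described above: a hypothetical TransformLayer wraps the player's layer, always presents the frame the player layer currently holds (so the playback progress stays consistent), and keeps its own scale and translation, so transforming it never touches the player's playback logic. All class names here are assumptions for illustration, not the implementation of the disclosure.

```kotlin
// Hypothetical video frame; in a real player this would be a texture or bitmap.
data class Frame(val presentationTimeMs: Long)

/** The first layer: the layer onto which the player draws its decoded frames. */
class PlayerLayer {
    var currentFrame: Frame = Frame(0)
        private set

    fun onFrameDecoded(frame: Frame) {
        currentFrame = frame
    }
}

/**
 * The second layer, encapsulating the first one. It re-presents whatever frame
 * the player layer currently shows (same playback progress) and applies its own
 * scale and translation, so size transformation and panning never affect the player.
 */
class TransformLayer(private val playerLayer: PlayerLayer) {
    var scale = 1f
        private set
    var translationX = 0f
        private set
    var translationY = 0f
        private set

    fun applyTransform(scaleFactor: Float, dx: Float, dy: Float) {
        scale *= scaleFactor
        translationX += dx
        translationY += dy
    }

    /** Frame to present on screen: always the player layer's current frame. */
    fun frameToPresent(): Frame = playerLayer.currentFrame
}
```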
  • in some embodiments, the above first execution body may transform the target video in the second layer by: determining the transform coefficients according to the operation position information of the size transformation operation and/or the translation operation, wherein the transform coefficients include at least one of the following: size transform information, translation coefficients; and transforming the target video in the second layer according to the transform coefficients.
  • Step 402: determine the transform coefficients according to the operation position information of the size transformation operation and/or the translation operation.
  • the transform coefficients include at least one of the following: size transform information, translation coefficients.
  • predefined transformation operations may include, but are not limited to, at least one of the following: size transformation operations, translation operations.
  • for example, the user touches the screen, and the terminal converts the touch signal into logical coordinate points. The terminal can then calculate how the distance between the coordinate points changes as the fingers slide, thereby calculating the displacement, and from the displacement determine the size transform information and/or the translation coefficients.
  • Step 403: transform the target video in the second layer according to the transform coefficients.
  • the interaction method provided by the embodiment corresponding to FIG. 4 can encapsulate a new layer outside the layer corresponding to the player, and perform transformation on the new layer.
  • video playback and video transformation can be isolated to ensure that the video transformation will not affect the player's processing (including playback, transformation, etc.) of the target video. Therefore, it can be ensured that video playback and video conversion are performed simultaneously, the interaction efficiency and the information display efficiency are improved, and the information acquisition efficiency of the user is improved at the same time.
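  • A hedged sketch of steps 402 and 403 in Kotlin: the logical touch coordinates of a two-finger gesture are turned into a scale factor (the size transform information) and a midpoint displacement (the translation coefficients), which can then be applied to the second layer. The Point and TransformCoefficients types are illustrative assumptions.

```kotlin
import kotlin.math.hypot

data class Point(val x: Float, val y: Float)

/** Transform coefficients derived from the gesture: a scale factor plus a translation. */
data class TransformCoefficients(val scaleFactor: Float, val dx: Float, val dy: Float)

/**
 * Computes the transform coefficients from the logical coordinates of a two-finger
 * gesture: the change in finger spacing gives the size transform information, and
 * the displacement of the midpoint between the fingers gives the translation.
 */
fun computeCoefficients(
    prevA: Point, prevB: Point,
    currA: Point, currB: Point
): TransformCoefficients {
    val prevSpan = hypot(prevA.x - prevB.x, prevA.y - prevB.y)
    val currSpan = hypot(currA.x - currB.x, currA.y - currB.y)
    val scaleFactor = if (prevSpan > 0f) currSpan / prevSpan else 1f
    val dx = (currA.x + currB.x) / 2f - (prevA.x + prevB.x) / 2f
    val dy = (currA.y + currB.y) / 2f - (prevA.y + prevB.y) / 2f
    return TransformCoefficients(scaleFactor, dx, dy)
}
```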
  • the method may further include: from the target video of the second layer, acquiring a target video picture matching the video playback area; and, in the video playback area, playing the target video picture matching the video playback area.
  • the preset area of the second layer may correspond to the video playback area.
  • the images in the preset area need to be displayed in the video playback area.
  • the coordinates of the image on the second layer may be translated and transformed, and it can be understood that at this time, the image in the preset area will be changed.
  • the image on the second layer may be transformed, and it can be understood that the image in the preset area will be changed at this time.
  • the image displayed in the video playback area may be obtained from the above-mentioned preset area.
  • the size of the video playback area may be smaller than the size of the target video, so that the video playback area cannot display the complete picture of the target video.
  • the acquired picture of the target video that matches the video playing area can be displayed in the video playing area as the target video picture.
  • whether a picture matches the video playback area can be judged using matching conditions set according to the actual application scenario.
  • the complete picture of the target video can be used as the target video picture matching the video playing area.
  • the picture displayed on the screen is a part of the complete video picture. Therefore, it is necessary to acquire a part of the video picture that matches the video playing area as the target video picture.
  • the length of the target video picture may be consistent with the length of the video playing area, and the width of the target video picture may be consistent with the width of the video playing area.
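  • Under the assumption that both the transformed target video and the video playback area can be described as axis-aligned rectangles in the same coordinate space, the target video picture matching the playback area can be sketched as the intersection of the two rectangles; when the video is smaller than the area, the intersection is simply the complete video picture. The Rect class and function name below are illustrative assumptions.

```kotlin
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float)

/**
 * Returns the part of the transformed target video (on the second layer) that
 * should be shown in the video playback area, i.e. the intersection of the two
 * rectangles, or null when the video lies completely outside the playback area.
 */
fun targetVideoPicture(videoOnSecondLayer: Rect, playbackArea: Rect): Rect? {
    val left = maxOf(videoOnSecondLayer.left, playbackArea.left)
    val top = maxOf(videoOnSecondLayer.top, playbackArea.top)
    val right = minOf(videoOnSecondLayer.right, playbackArea.right)
    val bottom = minOf(videoOnSecondLayer.bottom, playbackArea.bottom)
    return if (left < right && top < bottom) Rect(left, top, right, bottom) else null
}
```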
  • the video picture that matches the video playing area can be obtained without changing the playback logic of the player.
  • playing and transforming the video can thus be performed at the same time, thereby improving the efficiency of information display and the efficiency of the user's information acquisition.
  • FIG. 5 shows the flow of one embodiment of the video transformation method according to the present disclosure.
  • the video transformation method is applied to a terminal device.
  • the video conversion method includes the following steps:
  • Step 501: encapsulate the first layer into the second layer.
  • the second execution body of the video transformation method (for example, a terminal device) may encapsulate the first layer into the second layer.
  • the target video in the second layer has the same playback progress as the target video in the first layer, and the layer corresponding to the player is the first layer.
  • Step 502: transform the target video in the second layer according to a predefined transformation operation.
  • in some embodiments, the above step 501 may include: determining transform coefficients according to the operation position information of the predefined transformation operation, wherein the transform coefficients include at least one of the following: size transform information, translation coefficients; and transforming the target video in the second layer according to the size transform information and the translation coefficients.
  • the above method may further include: acquiring a target video picture matching the video playing area from the target video of the second layer; and playing the target video picture matching the video playing area.
  • the present disclosure provides an embodiment of an interaction apparatus, which corresponds to the method embodiment shown in FIG. 1 , and the apparatus can be specifically applied in various electronic devices.
  • the interaction apparatus in this embodiment includes: a determining unit 601 and a transforming unit 602 .
  • the determining unit is configured to, in response to detecting a predefined size transformation operation, determine the target transformation information of the target video based on whether the current size of the target video is a preset anchor point size, wherein the target video is a video played in the video playback area;
  • a transforming unit configured to transform the target video based on the target transform information, and play the transformed target video.
  • the specific processing of the determining unit 601 and the transforming unit 602 of the interaction apparatus, and the technical effects they bring, can refer to the relevant descriptions of steps 101 and 102 in the embodiment corresponding to FIG. 1, respectively, and will not be repeated here.
  • in some embodiments, the preset anchor point size corresponds to an operation type and preset transformation information; and determining, in response to detecting a size transformation operation, the target transformation information of the target video based on whether the current size of the target video is a preset anchor point size includes: in response to determining that the current size is the preset anchor point size, and in response to the operation type of the size transformation operation corresponding to the preset anchor point size, determining the preset transformation information corresponding to the preset anchor point size as the target transformation information.
  • in some embodiments, the size transformation operation includes a zoom-in operation, the preset anchor point size includes the original image size, the preset transformation information corresponding to the original image size indicates the full-screen image size, and the operation type corresponding to the original image size is the zoom-in operation; and determining the preset transformation information corresponding to the preset anchor point size as the target transformation information includes: in response to determining that the current size is the original image size, and in response to detecting a zoom-in operation, determining the preset transformation information indicating the full-screen image size as the target transformation information.
  • in some embodiments, the size transformation operation includes a reduction operation, the preset anchor point size includes the full-screen image size, the preset transformation information corresponding to the full-screen image size indicates the original image size, and the operation type corresponding to the full-screen image size is the reduction operation; and determining the preset transformation information corresponding to the preset anchor point size as the target transformation information includes: in response to determining that the current size is the full-screen image size, and in response to detecting a reduction operation, determining the preset transformation information indicating the original image size as the target transformation information.
  • in some embodiments, determining, in response to detecting a size transformation operation in the video playback area, the target transformation information of the target video based on whether the current size of the target video is a preset anchor point size includes: in response to detecting the size transformation operation, and in response to the preset free scaling condition being satisfied, entering the free scaling mode; and in response to determining that the free scaling mode has been entered, determining the target transformation information according to the transformation information indicated by the size transformation operation and the current size of the target video.
  • the free scaling condition includes at least one of the following: the current size of the target video is not a preset anchor point size, and the operation type of the size transformation operation does not correspond to the preset anchor point size.
  • in some embodiments, the free scaling conditions include: the current size of the target video is a preset anchor point size, and the operation type indicated by the ratio of the real-time size of the target video to the current size when the size transformation operation is released does not correspond to the preset anchor point size.
  • the size of the target video is converted in real time based on the conversion information indicated by the size conversion operation.
  • the apparatus is further configured to: in response to determining that the transformed target video is not of the original image size, displaying a restore control, wherein the restore control is used to transform the target video to the original image size.
  • the apparatus is further configured to: in response to detecting a predefined pan operation, and in response to determining that the current size of the target video is greater than the size of the video playback area, move the target video in the video playback area.
  • the apparatus is further configured to: in response to the end of the size transformation operation and/or the panning operation, detect whether the target video is moved out of the video play area; in response to determining that the target video is moved out, perform a step on the video picture in the video play area. Correction.
  • in some embodiments, the apparatus is further configured to: encapsulate the first layer into a second layer, wherein the layer corresponding to the player is the first layer, the target video in the second layer has the same playback progress as the target video in the first layer, and the target video in the second layer is used for transformation according to a size transformation operation and/or a translation operation.
  • in some embodiments, the target video in the second layer is transformed by: determining transformation coefficients according to the operation position information of the size transformation operation and/or the translation operation, wherein the transformation coefficients include at least one of the following: size transformation information, translation coefficients; and transforming the target video in the second layer according to the transformation coefficients.
  • in some embodiments, the apparatus is further configured to: obtain a target video picture matching the video playback area from the target video of the second layer; and play, in the video playback area, the target video picture matching the video playback area.
  • determining the target transformation information of the target video based on whether the current size of the target video is a preset anchor point size includes: based on the operation type of the size transformation operation and the preset anchor point size, Determine the corresponding target transformation information.
  • the determining the corresponding target transformation information based on the operation type of the size transformation operation and the preset anchor point size includes: in response to the operation type and the preset anchor point size satisfying The first preset relationship uses first target transformation information, wherein the first target transformation information indicates that the size of the target video is switched between different preset anchor point sizes; in response to the operation type and the preset anchor point The point size satisfies the second preset relationship, and the second target transformation information is used, wherein the second target transformation information corresponds to the free scaling mode.
  • the first preset relationship indicates that the operation type, the current size of the target video, and the preset anchor point size satisfy a preset first correspondence relationship or do not satisfy a preset the second corresponding relationship;
  • the second preset relationship indicates that the operation type, the current size of the target video, and the preset anchor point size do not satisfy the preset first correspondence Two correspondences.
  • the first preset relationship includes: the current size of the target video is the original image size and the operation type is a zoom-in operation, and the target transformation information includes transforming the target video to the full-screen image size; and/or, the current size of the target video is the full-screen image size and the operation type is a reduction operation, and the target transformation information includes transforming the target video to the original image size; and/or, the second preset relationship includes: the current size of the target video is the original image size and the operation type is a reduction operation, and the target transformation information includes freely scaling the target video according to the operation information; and/or, the current size of the target video is the full-screen image size and the operation type is a zoom-in operation, and the target transformation information includes freely scaling the target video according to the operation information.
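The sketch below is one possible, purely illustrative reading of this decision logic: when the current size and the operation type correspond (the first preset relationship), the size switches between anchor sizes; otherwise (the second preset relationship), the free scaling mode is used. The enum and class names are assumptions, not the claimed implementation.

```kotlin
// Minimal sketch of mapping (current size, operation type) onto either an anchor-to-anchor
// switch or the free scaling mode.
enum class OperationType { ZOOM_IN, ZOOM_OUT }
enum class AnchorSize { ORIGINAL, FULL_SCREEN }

sealed class TargetTransformation {
    data class SwitchToAnchor(val target: AnchorSize) : TargetTransformation()
    object FreeScaling : TargetTransformation()
}

fun decide(current: AnchorSize?, op: OperationType): TargetTransformation = when {
    current == AnchorSize.ORIGINAL && op == OperationType.ZOOM_IN ->
        TargetTransformation.SwitchToAnchor(AnchorSize.FULL_SCREEN)   // first preset relationship
    current == AnchorSize.FULL_SCREEN && op == OperationType.ZOOM_OUT ->
        TargetTransformation.SwitchToAnchor(AnchorSize.ORIGINAL)      // first preset relationship
    else ->
        TargetTransformation.FreeScaling                              // second preset relationship
}

fun main() {
    println(decide(AnchorSize.ORIGINAL, OperationType.ZOOM_IN))    // switch to full screen
    println(decide(AnchorSize.FULL_SCREEN, OperationType.ZOOM_IN)) // free scaling
    println(decide(null, OperationType.ZOOM_OUT))                  // not at an anchor: free scaling
}
```

With this split, a single gesture performed at an anchor size reaches the other anchor in one step, which is the reduction in user operations the embodiments describe.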
  • the present disclosure provides an embodiment of a video conversion apparatus.
  • the apparatus embodiment corresponds to the method embodiment shown in FIG. 5 .
  • the apparatus may be used in various electronic devices.
  • the interaction apparatus in this embodiment includes: an encapsulation module 701 and a transformation module 702 .
  • the encapsulation module is used to encapsulate the first layer into the second layer, wherein the target video in the second layer has the same playback progress as the target video in the first layer, and the layer corresponding to the player is the first layer; the transformation module is used to transform the target video in the second layer according to a predefined transformation operation.
  • the encapsulation module 701 described above may be configured to: determine transformation coefficients according to the operation position information of the predefined transformation operation, wherein the transformation coefficients include at least one of the following: size transformation information, a translation coefficient; and transform the target video in the second layer according to the size transformation information and the translation coefficient.
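A minimal sketch of the second step, applying the determined coefficients to the video rectangle, is given below; scaling about the gesture's focal point is an assumption made for the example rather than something stated in the disclosure.

```kotlin
// Minimal sketch: apply size transformation information (scale) and translation coefficients
// (dx, dy) to a video rectangle, scaling about a chosen focal point.
data class VideoRect(val left: Float, val top: Float, val right: Float, val bottom: Float)

fun applyCoefficients(
    video: VideoRect,
    scale: Float,          // size transformation information
    dx: Float, dy: Float,  // translation coefficients
    focusX: Float, focusY: Float
): VideoRect = VideoRect(
    left = focusX + (video.left - focusX) * scale + dx,
    top = focusY + (video.top - focusY) * scale + dy,
    right = focusX + (video.right - focusX) * scale + dx,
    bottom = focusY + (video.bottom - focusY) * scale + dy
)

fun main() {
    val original = VideoRect(0f, 0f, 1080f, 608f)
    println(applyCoefficients(original, scale = 1.5f, dx = 0f, dy = 0f, focusX = 540f, focusY = 304f))
    // VideoRect(left=-270.0, top=-152.0, right=1350.0, bottom=760.0)
}
```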
  • the above apparatus is further configured to: obtain a target video picture matching the video playing area from the target video of the second layer; and play the target video picture matching the video playing area in the video playing area.
  • the layer corresponding to the player is the first layer, so as to isolate video playback from video transformation.
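The sketch below models this two-layer idea with plain Kotlin objects rather than any real rendering API: the player keeps advancing playback in the first layer while the wrapping second layer only holds the current transform, so gestures never touch the player's playback logic. All names are illustrative assumptions.

```kotlin
// Minimal sketch of wrapping the player's layer (first layer) in a transformable second layer.
class PlayerLayer {
    var positionMs: Long = 0
        private set
    fun renderNextFrame() { positionMs += 40 } // playback progress advances regardless of any transform
}

class TransformLayer(private val source: PlayerLayer) {
    var scale: Float = 1f
    var offsetX: Float = 0f
    var offsetY: Float = 0f

    // The wrapped layer and the wrapper always report the same playback progress.
    val positionMs: Long get() = source.positionMs

    fun applyGesture(scaleFactor: Float, dx: Float, dy: Float) {
        scale *= scaleFactor
        offsetX += dx
        offsetY += dy
    }
}

fun main() {
    val first = PlayerLayer()
    val second = TransformLayer(first)
    first.renderNextFrame()
    second.applyGesture(scaleFactor = 2f, dx = 30f, dy = 0f) // transform only touches the second layer
    first.renderNextFrame()
    println("progress=${second.positionMs}ms scale=${second.scale}") // progress=80ms scale=2.0
}
```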
  • FIG. 8 illustrates an exemplary system architecture in which the interaction method of an embodiment of the present disclosure may be applied.
  • FIG. 8 illustrates an exemplary system architecture to which an information processing method according to an embodiment of the present disclosure may be applied.
  • the system architecture may include terminal devices 801 , 802 , and 803 , a network 804 , and a server 805 .
  • the network 804 is a medium used to provide a communication link between the terminal devices 801 , 802 , 803 and the server 805 .
  • Network 804 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the terminal devices 801, 802, and 803 can interact with the server 805 through the network 804 to receive or send messages and the like.
  • Various client applications may be installed on the terminal devices 801 , 802 and 803 , such as web browser applications, search applications, and news information applications.
  • the client applications in the terminal devices 801, 802, and 803 can receive the user's instruction, and complete corresponding functions according to the user's instruction, such as adding corresponding information to the information according to the user's instruction.
  • the terminal devices 801, 802, and 803 may be hardware or software.
  • when the terminal devices 801, 802, and 803 are hardware, they can be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like.
  • when the terminal devices 801, 802, and 803 are software, they can be installed in the electronic devices listed above; they can be implemented as a plurality of software or software modules (e.g., software or software modules for providing distributed services), or as a single software or software module. No specific limitation is imposed here.
  • the server 805 may be a server that provides various services, for example, receiving information acquisition requests sent by the terminal devices 801, 802, and 803, acquiring, in various ways, the display information corresponding to the information acquisition requests, and sending the data related to the display information to the terminal devices 801, 802, and 803.
  • the information processing methods provided by the embodiments of the present disclosure may be executed by terminal devices, and correspondingly, the information processing apparatuses may be set in the terminal devices 801 , 802 , and 803 .
  • the information processing method provided by the embodiment of the present disclosure may also be executed by the server 805 , and accordingly, the information processing apparatus may be provided in the server 805 .
  • the numbers of terminal devices, networks, and servers in FIG. 8 are merely illustrative; there can be any number of terminal devices, networks, and servers according to implementation needs.
  • FIG. 9 shows a schematic structural diagram of an electronic device (eg, the terminal device or the server in FIG. 8 ) suitable for implementing an embodiment of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs, desktop computers, and the like.
  • the electronic device shown in FIG. 9 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903.
  • in the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored.
  • the processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904.
  • An input/output (I/O) interface 905 is also connected to bus 904 .
  • the following devices can be connected to the I/O interface 905: input devices 909 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 907 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 908 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 909.
  • the communication device 909 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While FIG. 9 illustrates an electronic device having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 909, or from the storage device 908, or from the ROM 902.
  • when the computer program is executed by the processing apparatus 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the above. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, they cause the electronic device to: in response to detecting a predefined size transformation operation, determine the target transformation information of the target video based on whether the current size of the target video is a preset anchor point size, wherein the target video is the video played in the video playback area; and, based on the target transformation information, transform the target video and play the transformed target video.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, they cause the electronic device to: encapsulate the first layer into the second layer, wherein the target video in the second layer has the same playback progress as the target video in the first layer, and the layer corresponding to the player is the first layer; and transform the target video in the second layer according to the predefined transformation operation.
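To show how these steps could fit together on a device, the following sketch strings the pieces into one gesture handler: the gesture becomes a scale and translation, a target transformation is chosen, and the second layer is updated. The anchor scales, bounds, and type names are illustrative assumptions, not the patented implementation.

```kotlin
// Minimal end-to-end sketch combining gesture handling, anchor-size switching, free scaling,
// and panning of the second layer. Everything here is an illustrative assumption.
data class Gesture(val scale: Float, val dx: Float, val dy: Float)

class SecondLayer {
    var scale = 1f
    var offsetX = 0f
    var offsetY = 0f
    fun transform(g: Gesture) { scale *= g.scale; offsetX += g.dx; offsetY += g.dy }
}

fun handleGesture(layer: SecondLayer, g: Gesture, originalScale: Float = 1f, fullScale: Float = 2f) {
    when {
        layer.scale == originalScale && g.scale > 1f -> layer.scale = fullScale     // anchor switch: original -> full screen
        layer.scale == fullScale && g.scale < 1f -> layer.scale = originalScale     // anchor switch: full screen -> original
        else -> layer.transform(g)                                                  // free scaling and/or pan
    }
    // Afterwards, the picture matching the video play area would be taken from the
    // transformed layer and handed to the display, as sketched earlier.
}

fun main() {
    val layer = SecondLayer()
    handleGesture(layer, Gesture(scale = 1.4f, dx = 0f, dy = 0f))  // original -> full screen
    handleGesture(layer, Gesture(scale = 1.4f, dx = 10f, dy = 5f)) // free scaling plus pan
    println("scale=${layer.scale} offset=(${layer.offsetX}, ${layer.offsetY})")
}
```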
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider).
  • each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner.
  • the name of the unit does not constitute a limitation of the unit itself under certain circumstances, for example, the transformation unit may also be described as a "unit for playing the target video".
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logical Devices (CPLDs) and more.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • more specific examples of machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • the embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, device, terminal, and storage medium for processing a face image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present disclosure disclose an interaction method and apparatus, and an electronic device. A specific implementation of the method includes: in response to detecting a predefined size transformation operation, determining target transformation information of a target video based on whether the current size of the target video is a preset anchor point size, wherein the target video is the video played in a video playback area; and, based on the target transformation information, transforming the target video and playing the transformed target video. A new interaction mode can thereby be provided, improving interaction efficiency.

Description

交互方法、装置和电子设备
相关申请的交叉引用
本申请要求于2020年07月31日提交的,申请号为202010764798.2、发明名称为“交互方法、装置和电子设备”的中国专利申请的优先权,该申请的全文通过引用结合在本申请中。
技术领域
本公开涉及计算机技术领域,尤其涉及一种交互方法、装置和电子设备。
背景技术
随着计算机技术的发展,人们可以利用计算机进行实现各种功能。例如,人们可以利用终端设备观看视频。在观看视频时,有时需要调整视频画面的显示尺寸。如何调整视频画面的显示尺寸,是当前需要解决的问题之一。
发明内容
提供该公开内容部分以便以简要的形式介绍构思,这些构思将在后面的具体实施方式部分被详细描述。该公开内容部分并不旨在标识要求保护的技术方案的关键特征或必要特征,也不旨在用于限制所要求的保护的技术方案的范围。
第一方面,本公开实施例提供了一种交互方法,所述方法包括:响应于检测到预定义的尺寸变换操作,基于目标视频的当前 尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,其中,所述目标视频为视频播放区域中播放的视频;基于所述目标变换信息,对所述目标视频进行变换,以及播放变换后的目标视频。
第二方面,本公开实施例提供了一种视频变换方法,所述方法包括:将第一图层封装到第二图层,其中,第二图层中的目标视频与第一图层中的目标视频播放进度一致,播放器对应的图层为第一图层;根据预定义变换操作,对第二图层中的目标视频进行变换。
第三方面,本公开实施例提供了一种交互装置,所述装置包括:确定单元,用于响应于检测到预定义的尺寸变换操作,基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,其中,所述目标视频为视频播放区域中播放的视频;变换单元,用于基于所述目标变换信息,对所述目标视频进行变换,以及播放变换后的目标视频。
第四方面,本公开实施例提供了一种交互装置,应用于第一电子设备,所述装置包括:封装模块,用于将第一图层封装到第二图层,其中,第二图层中的目标视频与第一图层中的目标视频播放进度一致,播放器对应的图层为第一图层;变换模块,用于根据预定义变换操作,对第二图层中的目标视频进行变换。
第五方面,本公开实施例提供了一种电子设备,包括:一个或多个处理器;存储装置,用于存储一个或多个程序,当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如第一方面所述的交互方法或者如第二方面所述的视频变换方法。
第六方面,本公开实施例提供了一种计算机可读介质,其上存储有计算机程序,该程序被处理器执行时实现如第一方面所述的交互方法或者如第二方面所述的视频变换方法的步骤。
本公开实施例提供的交互方法、装置和电子设备,可以在检测到尺寸变换操作的时候,参考目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息。由此,可以在尺寸变 换操作中,兼容用户对于预设锚点尺寸缩放需求,将目标视频快速变换到用户常用的尺寸,减少用户操作,提高交互效率。并且,使得视频播放区域中的目标视频画面可以满足用户获取信息的需要,提高用户的信息获取效率。
附图说明
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,原件和元素不一定按照比例绘制。
图1是根据本公开的交互方法的一个实施例的流程图;
图2是根据本公开的交互方法的一个应用场景的示意图;
图3是根据本公开的交互方法的步骤101的一种可选的实现方式;
图4是根据本公开的交互方法的又一个实施例的流程图;
图5是根据本公开的视频变换方法的一个实施例的流程图;
图6是根据本公开的交互装置的一个实施例的结构示意图;
图7是根据本公开的交互装置的一个实施例的结构示意图;
图8是本公开的一个实施例的交互方法可以应用于其中的示例性系统架构;
图9是根据本公开实施例提供的电子设备的基本结构的示意图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
请参考图1,其示出了根据本公开的交互方法的一个实施例的流程。该交互方法应用于终端设备。如图1所示该交互方法,包括以下步骤:
步骤101,响应于检测到预定义的尺寸变换操作,基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息。
在本实施例中,交互方法的第一执行主体(例如终端设备)可以响应于检测到预定义的尺寸变换操作,基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息。
在本实施例中,上述预定义的尺寸变换操作,可以是预先定义的操作,该预先定义的操作可以用于对图像进行尺寸变换。预定义变换操作的具体方式,可以根据实际应用场景设置,在此不 做限定。
在本实施例中,上述预定义变换操作的实施位置可以是在视频播放区域中,也可以是在视频播放区域之外的区域;还可以是通过语音控制实施的。
作为示例,上述预定义的尺寸变换操作可以包括二指操作,二指操作可以形象地理解为通过两个手指进行操作;当然,具体操作的时候,用户可以用任何部位模拟二指的效果,进行二指操作。例如,两个手指之间的距离拉大,可以作为放大操作;两个手指之间的距离减小,可以作为缩小操作。另外,两个手指之间的距离不变,两个手指一起移动,可以作为平移操作。
在一些应用场景中,一次尺寸变换操作,可以是用户从开始操作到释放操作的过程。可以理解,在一次尺寸变换操作中,可能实施一种或者多种具体的变换操作。例如,在一次尺寸变换操作中,可能包括放大操作和缩小操作。
作为示例,用户开始尺寸变换操作的过程中,可以先放大目标视频,在不释放尺寸变换操作的过程中,再缩小目标视频;这种情况下,上述尺寸变换操作可以包括放大操作类型和缩小操作类型。例如可以根据在当前视频画面上首先识别到的操作类型,以及当前画面尺寸是否是预设锚点尺寸,确定目标变换信息。比如,在一些实施例中,当前视频画面是原始视频尺寸、且在当前视频画面上识别到了放大操作,无论该操作是否已释放,都可以将当前视频画面调整为满屏尺寸。
在一些实施例中,对于尺寸变换操作的操作类型的确定,可以根据用户开始操作的操作位置与释放操作的操作位置之间的矢量确定。换句话说,开始操作到释放操作的过程中用户的操作轨迹,可以不作为确定操作类型的依据。
作为示例,用户可以对目标视频进行尺寸变换。目标视频的当前尺寸为A,用户可以先放大再缩小至B,B小于A。在用户先放大再缩小的过程中,目标视频的画面可以跟随用户的操作先放大再缩小。当用户释放这次操作的时候,可以确定此次操作的操 作类型为缩小操作。在这里,目标视频为视频播放区域中播放的视频。
在这里,上述目标视频的当前尺寸,可以是尺寸变换操作开始的时候目标视频的尺寸。
在本实施例中,尺寸变换可以包括将视频尺寸变大。
在本实施例中,尺寸变换可以包括将视频尺寸缩小。
在一些实施例中,平移可以包括将视频画面在视频播放区域进行移动。在这里,预设锚点尺寸可以是预设的常用尺寸。需要说明的,视频的尺寸可以是连续变化的,利用预设锚点尺寸这种表述,可以形象地表示在连续变化的尺寸中的一些尺寸,可以作为定点,来表示常用的视频尺寸。
在一些实施例中,预设锚点尺寸可以包括以下至少一项但不限于:原始图像尺寸、满屏图像尺寸。
在这里,上述原始图像尺寸,可以是目标视频中的视频帧显示到终端设备的原始尺寸。
在这里,上述满屏图像尺寸,可以是上述目标视频在终端设备上全屏播放状态下的尺寸。其中,全屏播放状态,也可以称为满屏播放状态,通常是指大于原始图像尺寸的、能在预设显示区域内最大化显示的状态,例如,可以是将视频画面充满视频播放区域的状态。
在这里,目标视频的目标变换信息,可以是用于对目标视频进行变换的信息。作为示例,目标变换信息可以指示变换方式,对应自由缩放模式的目标变换信息,可以理解为根据用户操作的比例变换当前视频尺寸。作为示例,目标变换信息可以包括但是不限于以下至少一项:目标变换比例、目标变换尺寸。
在这里,目标变换比例,可以指示对当前尺寸进行变换的比例,例如,变换至当前尺寸的二倍。
在这里,目标变换尺寸,可以指示期望将目标视频变换至的尺寸。
需要说明的是,本申请中的尺寸,可以是绝对尺寸,也可以 是相对尺寸。在一些应用场景中,尺寸(例如锚点尺寸、目标视频尺寸)也可能根据机型的不同而不同。作为示例,如果将原始图像的尺寸作为分子,那其它尺寸与原始图像的尺寸的比值(即相对尺寸),也可以称为本申请中的尺寸。
步骤102,基于目标变换信息,对目标视频进行变换,以及播放变换后的目标视频。
在本实施例中,上述执行主体可以基于所述目标变换信息,对所述目标视频进行变换,以及播放变换后的目标视频。
在这里,可以在视频播放区域中,播放变换后的目标视频。
在一些应用场景中,视频播放区域的尺寸可能发生变换。例如,视频播放区域的尺寸,可以是满屏尺寸,也可以是四分之三的屏幕尺寸。换句话说,视频播放区域的尺寸,可能在至少两种尺寸之间切换。
作为示例,请参考图2,视频播放区域201为矩形,目标视频画面202也为矩形。可以理解,我们可以将图2中矩形沿着横向的边,称为矩形的长;将矩形沿着纵向的边,称为矩形的宽。目标视频画面可以理解为显示在视频播放区域中的目标视频。
需要说明的是,图2中为了方便示意,将目标视频画面202的图像设置为远小于视频播放区域201的图像。
在实际应用场景中,如果目标视频的长小于视频播放区域的长,并且目标视频的宽小于视频播放区域的宽,可以将目标视频画面的尺寸设置为目标视频的尺寸一致。
在实际应用场景中,如果目标视频的长大于视频播放区域的长,或者目标视频的宽大于视频播放区域的宽,可以将目标视频画面的尺寸设置为视频播放区域的尺寸一致。换句话说,目标视频画面是目标视频的视频帧的一部分。
需要说明的是,本实施例提供的交互方法,可以在检测到尺寸变换操作的时候,参考目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息。由此,可以在尺寸变换操作中,兼容用户对于预设锚点尺寸缩放需求,将目标视频快速变换 到用户常用的尺寸,减少用户操作,提高交互效率。并且,使得视频播放区域中的目标视频画面可以满足用户获取信息的需要,提高用户的信息获取效率。
在一些实施例中,上述方法可以包括:响应于检测到在视频播放区域的预定义的平移操作,以及响应于确定目标视频的当前尺寸大于视频播放区域的尺寸,在所述视频播放区域中移动目标视频。
在这里,上述预定义的平移操作可以用于对目标视频进行平移。上述预定义的平移操作的具体实施方式,可以根据实际应用场景设置,在此不做限定。
作为示例,上述预定义的平移操作可以是拖动操作,实施拖动操作时人体与屏幕的触发点的个数,可以不做限定。作为示例,上述预定义的平移操作可以是单指拖动操作,可以是双指拖动操作,还可以是三指拖动操作。在这里,上述目标视频的当前尺寸大于视频播放区域的尺寸,可以理解为至少在一个方向上,目标视频的边长大于视频播放区域的边长。请参考图2,可以是目标视频的长大于视频播放区域的长,或者,可以是目标视频的宽大于视频播放区域的宽。
需要说明的是,通过预定义的平移操作,可以实现目标视频在视频播放区域进行移动,由此,用户可以将期望观看的部分移动到便于观看的位置,提高用户的信息获取效率。
作为示例,用户在观看美妆类视频时,可能会关注博主使用的美妆产品,当视频没有给到产品特写镜头时会较难看清。通过本申请提供的尺寸变换操作和/或平移操作,可以将产品处的视频画面放大,用户还可以将产品处的视频画面移动到视频播放区域的中间位置,从而可以清楚地查看美妆产品信息。
在一些实施例中,基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,包括:基于所述尺寸变换操作的操作类型和所述预设锚点尺寸,确定对应的目标变换信息。
在一些实施例中,所述基于所述尺寸变换操作的操作类型和所述预设锚点尺寸,确定对应的目标变换信息,包括:响应于所述操作类型和所述预设锚点尺寸满足第一预设关系,采用第一目标变换信息,其中,第一目标变换信息指示将目标视频的尺寸在不同的预设锚点尺寸之间切换;响应于所述操作类型和所述预设锚点尺寸满足第二预设关系,采用第二目标变换信息,其中,第二目标变换信息对应自由缩放模式。
在一些实施例中,所述第一预设关系指示所述操作类型、所述目标视频的当前尺寸、以及预设锚点尺寸之间,满足预设的第一对应关系、或者不满足预设的第二对应关系。
在一些实施例中,第二预设关系指示所述操作类型、所述目标视频的当前尺寸、以及预设锚点尺寸之间,不满足预设的第一对应关系、或者满足预设的第二对应关系。
在一些实施例中,所述第一预设关系包括:目标视频的当前尺寸是原始图像尺寸、且所述操作类型是放大操作,所述目标变换信息包括将所述目标视频变换到满屏图像尺寸;和/或,目标视频的当前尺寸是满屏图像尺寸、且所述操作类型是缩小操作,所述目标变换信息包括将所述目标视频变换到原始图像尺寸。
在一些实施例中,所述第二预设关系包括:目标视频的当前尺寸是原始图像尺寸、且所述操作类型是缩小操作,所述目标变换信息包括根据操作信息将所述目标视频进行自由缩放;和/或,目标视频的当前尺寸是满屏图像尺寸、且所述操作类型是放大操作,所述目标变换信息包括根据操作信息将所述目标视频进行自由缩放。
在这里,第一目标变换信息指示将目标视频的尺寸在不同的预设锚点尺寸之间切换。
在这里,第二目标变换信息对应自由缩放模式。在自由缩放模式下,可根据用户操作对目标视频的尺寸进行连续性的放大或缩小,例如,在用户操作的两个触点之间的距离越来越大时,逐渐将目标视频的尺寸放大,或者,在用户操作的两个触点之间的距离越来越小时,逐渐将目标视频的尺寸缩小。
在一些实施例中,上述步骤101,可以包括:响应于确定当前尺寸是预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与预设锚点尺寸对应,将所述预设锚点尺寸对应的预设变换信息,确定为所述目标变换信息。
在这里,尺寸变换操作的操作类型可以包括以下至少一项:缩小操作、放大操作。
在这里,每个预设锚点尺寸,均有相对应的预设变换信息,也具有相对应的操作类型。
在一些应用场景中,如果响应于确定当前尺寸是预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与预设锚点尺寸不对应,可以进入自由缩放模式。
在这里,预设变换信息可以根据实际应用场景设置,在此不做限定。预设变换信息的,可以是预设的变换比例,也可以是预设的尺寸。
需要说明的是,通过设置预设锚点尺寸与操作类型的对应关系,以及设置预设锚点尺寸与预设变换信息的对应的关系,可以在目标视频处于锚点尺寸的情况下,通过一次对应的操作,即可实现视频画面快速变换至常用的尺寸,例如其它预设锚点尺寸,避免了用户多次操作才能变换至期望的尺寸。由此,减少了用户操作,进而减少因用户操作而耗费的计算资源和显示资源。
在一些实施例中,所述尺寸变换操作包括放大操作,所述预 设锚点尺寸包括原始图像尺寸,与原始图像尺寸对应的预设变换信息指示满屏图像尺寸,与原始图像尺寸对应的操作类型为放大操作。
在这里,与原始图像尺寸对应的预设变换信息指示满屏图像尺寸,可以理解为:与原始图像尺寸对应的预设变换信息,可以指示将原始图像尺寸变换到满屏图像尺寸的变换信息。
相应的,响应于确定当前尺寸是预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与预设锚点尺寸对应,将所述预设锚点尺寸对应的预设变换信息,确定为所述目标变换信息,可以包括:响应于确定当前尺寸是所述原始图像尺寸,以及响应于检测到在视频播放区域的放大操作,将指示满屏图像尺寸的预设变换信息确定为目标变换信息。
需要说明的是,在目标视频处于原始图像尺寸的时候,实施放大操作,可以将目标视频调整到满屏图像尺寸。由此,可以减少用户操作次数,减少因用户操作而耗费的计算资源和显示资源。
在一些实施例中,所述尺寸变换操作包括缩小操作,所述预设锚点尺寸包括满屏图像尺寸,与满屏图像尺寸对应的预设变换信息指示原始图像尺寸,与满屏图像尺寸对应的操作类型为缩小类型。
在这里,与满屏图像尺寸对应的预设变换信息指示原始图像尺寸,可以理解为,与满屏图像尺寸对应的预设变换信息,可以指示将满屏图像尺寸变换到原始图像尺寸的变换信息。
相应的,响应于确定当前尺寸是预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与预设锚点尺寸对应,将所述预设锚点尺寸对应的预设变换信息,确定为所述目标变换信息,可以包括:响应于确定当前尺寸是满屏图像尺寸,以及响应于检测到在视频播放区域的缩小操作,将所述指示原始图像尺寸的预设变换信息确定为目标变换信息。
需要说明的是,在目标视频处于满屏图像尺寸的时候,实施缩小操作,可以将目标视频调整到原始图像尺寸。由此,可以减 少用户操作次数,减少因用户操作而耗费的计算资源和显示资源。
在一些实施例中,请参考图3,其示出上述步骤101的一种可选的实现方式。图3中示出的流程可以包括步骤1011和步骤1012。
步骤1011,响应于检测到在视频播放区域的尺寸变换操作,以及响应于预设的自由缩放条件满足,进入自由缩放模式。
步骤1012,响应于确定进入自由缩放模式,根据所述尺寸变换操作的变换信息和目标视频的当前尺寸,确定所述目标变换信息。
换句话说,自由缩放模式下,可以根据所述尺寸变换操作的变换信息和所述目标视频的当前尺寸,确定所述目标变换信息。
在一些应用场景中,自由缩放模式下,目标视频在放大的过程中,可能视频图像被放大后的尺寸大于屏幕尺寸,此时,屏幕中仅能显示被放大后的视频图像尺寸的一部分,可以通过拖动视频图像等操作来显示放大后的视频图像的其他部分。
在一些实施例中,上述所述自由缩放条件可以包括以下至少一项但不限于:目标视频不是预设锚点尺寸,确定所述尺寸变换操作的操作类型与所述预设锚点尺寸不对应。
作为示例,在目标视频为原始图像尺寸的基础上,缩小画面可以进行自由缩放模式,即可以自由放大或者缩小视频画面。
作为示例,在目标视频为满屏图像尺寸的基础上,放大画面可以进入自由缩放模式,可以自由放大或者缩小视频画面。
需要说明的是,通过设置上述自由缩放条件,可以贴合用户使用场景,满足用户在自由缩放和锚点尺寸变换之间的切换需求,进而可以减少用户用于尺寸变换操作的操作次数和操作时间,提高操作效率。
在一些实施例中,所述自由缩放条件包括:目标视频的当前尺寸是预设锚点尺寸,并且,尺寸变换操作释放时目标视频的实时尺寸与所述当前尺寸的比例所指示的操作类型,与预设锚点尺寸对应。
在一些应用场景中,随着尺寸变换操作的进行,执行主体可 以实时确定目标视频的实时尺寸。
在一些实施例中,在所述尺寸变换操作释放前,基于尺寸变换操作指示的变换信息,实时变换目标视频的尺寸。换句话说,随着尺寸变换操作的进行,屏幕上显示的目标视频可以跟随尺寸变换此操作的指示,进行放大或者缩小。
需要说明的是,根据用户的尺寸变换操作,实时显示尺寸变换效果,可以及时向用户展示尺寸变换的进度,便于用户根据实施缩放至的效果,继续或者终止尺寸变换操作。
在一些应用场景中,可以将实时尺寸作为判断用户操作类型的依据。
在这里,如果上述尺寸变换操作释放时目标视频的实时尺寸与所述当前尺寸的比例,大于1,可以指示放大操作类型。如果尺寸变换操作释放时目标视频的实时尺寸与所述当前尺寸的比例,小于1,可以指示缩小操作类型。
需要说明的是,将操作释放时的实时尺寸作为判断用户操作类型的依据,可以提高确定操作类型的准确性,进而提高变换的准确性。具体来说,用户在实时观看视频,并且根据实时画面确定是否释放尺寸变换操作,因此,用户释放尺寸变换操作时目标视频的实时尺寸,可以较为准确地体现用户对于尺寸变换程度的期望。
在一些实施例中,一方面,通过参考目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,来兼容用户常用的变换操作,满足大部分用户的操作需求,另一方面,通过设置自由缩放模式的触发条件,在特定条件下(例如在满屏状态下接收到放大操作,或者在原始图像尺寸条件下接收到缩小操作,其中,放大操作和缩小操作可以是实时的操作结果,也可以是操作释放时的操作结果)触发自由缩放模式,能够提供更精细的缩放处理,满足部分用户的进一步需求,上述两方面的融合,可以满足海量用户的不同操作场景、最大化操作效率和体验。
在一些实施例中,上述方法还可以包括:基于对目标视频进 行的缩放方式,展示对应的提示信息,其中,所述提示信息用于提示缩放方式。
在这里,上述提示信息的展示方式和内容,可以根据实际应用场景设置,在此不做限定。
作为示例,可以利用横幅(toast)的方式,展示提示信息。
作为示例,缩放方式可以指示调整至满屏图像尺寸,对应的提示信息可以是“已切换至满屏”。
作为示例,缩放方式可以指示自由缩放模式,对应的提示信息可以是“已进入自由缩放模式”
作为示例,缩放方式可以指示调整至原始图像尺寸,对应的提示信息可以是“已恢复原始大小”。
在一些实施例中,所述基于对目标视频进行的缩放方式,展示对应的提示信息,包括:响应于将目标视频调整至满屏图像尺寸,展示第一提示信息,其中,所述第一提示信息用于指示目标视频为满屏播放状态。
在一些实施例中,所述基于对目标视频进行的缩放方式,展示对应的提示信息,包括:响应于将目标视频调整至原始图像尺寸,展示第二提示信息,其中,所述第二提示信息用于指示目标视频为原始尺寸播放状态。
在一些实施例中,所述基于对目标视频进行的缩放方式,展示对应的提示信息,包括:响应于确定进入自由缩放模式,展示第三提示信息,其中,所述第三提示信息用于指示目标视频能够基于尺寸变换操作自由缩放。
在一些实施例中,所述方法还包括:响应于确定变换后的目标视频不是原始图像尺寸,展示还原控件,其中,所述还原控件用于将目标视频变换至原始图像尺寸。
换句话说,如果视频画面是非原始比例,可以展示标示有“还原屏幕”字样的按钮,点击“还原屏幕”的按钮后,可以切回原始图像尺寸的视频画面。
需要说明的是,通过还原控件的设置,可以使得用户便捷地 切换原始图像画面,减少用户期望回到原始图像画面时的操作,即可以提高交互效率。
在一些实施例中,上述方法还可以包括:响应于尺寸变换操作和/或平移操作结束,检测目标视频是否被移出视频播放区域;响应于确定被移出,对视频播放区域中的视频画面进行校正,得到与视频播放区域尺寸匹配的目标视频画面。
在这里,是否移出视频播放区域的判断,可以根据实际应用场景,设置具体的判断条件进行判断。在一些实施例中,如果在一方向上,视频播放区域中没有画面,但是沿着该方向目标视频还具有没有显示的图像,那么这种情况可以理解为视频被移出了视频播放区域。
在这里,对视频画面进行校正,可以包括将目标视频进行平移等操作,将目标视频拉回到视频播放区域。
作为示例,如果目标视频的画面被移出视频播放区域,可以将目标视频拉回到视频播放区域。
需要说明的是,通过设置校正环节,可以减少目标视频被移出视频播放区域的情况发生,提高信息展示效率。
图4实施例的引入,可以解决进一步的技术问题,即视频尺寸变换操作可能造成视频播放画面布局混乱的问题。
需要说明的是,图4实施例提供的方法,可以应用于本申请所涉及的任何对目标视频的变换场景当中。图4实施例提供的方法,可以应用于尺寸变换场景和/或平移场景当中。应用到尺寸变换场中,可以包括应用到自由缩放场景当中,也可以应用到基于预设变换信息进行尺寸变换的场景当中。
请参考图4,其示出了根据本公开的交互方法的一个实施例的流程。在图4所示流程中,可以包括:
步骤401,将第一图层封装到第二图层。
在本实施例中,交互方法的第一执行主体(例如终端设备)可以将第一图层封装到第二图层。
在这里,所获取的目标视频所在的图层为第一图层。
通常,终端上的播放器可以从本地或者其它电子设备,获取目标视频。并且播放器解析出的目标视频绘制于图层上,此图层可以供屏幕进行显示。在一些应用场景中,播放器还可以包括播放逻辑信息,来控制视频的播放,例如控制切换视频、控制播放进度等。
在这里,可以将播放器对应的图层记为第一图层。在这里,播放器对应的图层,可以理解为播放器解析出的视频可以绘制到该图层上。
在这里,第二图层中的目标视频与第一图层中的目标视频播放进度一致。
在这里,第二图层可以理解为第一图层外封装的一个新图层。第二图层的变换不会影响播放器的播放逻辑。并且,第二图层的播放进度与第一图层的播放进度一致。换句话说,可以理解为将第一图层上的目标视频,再绘制到第二图层上。
在一些应用场景中,第一图层中图层的数量可以是一个,也可以是至少两个。如果是至少两个,可以将各个图层的显示内容都可以绘制到第二图层。
在这里,所述第二图层中的目标视频用于根据尺寸变换操作和/或平移操作进行变换。换句话说,在第一图层外封装了第二图层后,屏幕显示的图像,可以从第二图层中获取。
需要说明的是,步骤401提供的方式,可以应用于尺寸变换场景和/或平移场景当中。
在一些实施例中,上述第一执行主体可以通过以下方式,对第二图层中的目标视频进行变换:根据尺寸变换操作和/或平移操作的操作位置信息,确定变换系数,其中,所述变换系数包括以下至少一项:尺寸变换信息、平移系数;根据变换系数,对第二图层中的目标视频进行变换。
步骤402,根据尺寸变换操作和/或平移操作的操作位置信息,确定变换系数。
在这里,在这里,所述变换系数包括以下至少一项:尺寸变 换信息、平移系数。
在这里,上述预定义变换操作可以包括但是不限于以下至少一项:尺寸变换操作、平移操作。
在这里,用户可以触摸屏幕,终端将触摸信号转换为逻辑坐标点。然后终端可以计算手指滑动时坐标点距离的变化,从而计算出位移。然后根据位移,确定变换信息和/或平移系数。
步骤403,根据变换系数,对第二图层中的目标视频进行变换。
需要说明的是,图4对应的实施例提供的交互方法,可以在播放器对应的图层之外,封装新的图层;并且在新的图层上进行变换。由此,可以将视频播放与视频变换进行隔离,保证视频变换不会影响播放器对于目标视频的处理(包括播放、变换等)。从而,可以保证视频播放和视频变换同时进行,提高交互效率和信息展示效率,同时提高用户的信息获取效率。
在一些实施例中,所述方法还可以包括:从第二图层的目标视频中,获取与视频播放区域匹配的目标视频画面;在视频播放区域中,播放与视频播放区域匹配的目标视频画面。
在这里,可以将第二图层的预设区域与视频播放区域对应。该预设区域中的图像是需要显示到视频播放区域的。
作为示例,对于平移操作,可以将第二图层上的图像进行坐标的平移变换,可以理解,此时预设区域中的图像将会发生改变。
作为示例,对于尺寸变换操作,可以对第二图层上的图像进行变换,可以理解,此时预设区域中的图像将会发生改变。
在这里,第二图层中的图像无论是否改变,在视频播放区域显示的图像,均可以是从上述预设区域获取的。
在一些应用场景中,视频播放区域的尺寸可以小于目标视频的尺寸,由此视频播放区域不能显示目标视频的完整画面。这种情况下,可以将获取与视频播放区域匹配的目标视频的画面,作为目标视频画面显示在视频播放区域中。
在这里,与视频播放区域匹配的判断,可以根据实际应用场景设置匹配判断条件进行判断。
作为示例,如果目标视频的长小于视频播放区域的长,并且目标视频的宽小于视频播放区域的宽,可以将目标视频的完整画面作为与视频播放区域匹配的目标视频画面。
作为示例,如果用户在满屏基础上继续放大,则显示在屏幕上的画面,是完整视频画面的一部分。因此,需要获取与视频播放区域匹配的部分视频画面,作为目标视频画面。目标视频画面的长可以与视频播放区域的长一致,目标视频画面的宽可以与视频播放区域的宽一致。
需要说明的是,通过从第二图层中获取目标视频画面,可以在不改变播放器播放逻辑的情况下,获取与视频播放区域匹配的视频画面,对于用户来说,播放和对视频的变换可以同时进行,由此,可以提高信息展示效率,同时提高用户的信息获取效率。
请继续参考图5,其示出了根据本公开的视频变换方法的一个实施例的流程。该交互方法应用于终端设备,如图所示该视频变换方法,包括以下步骤:
步骤501,将第一图层封装到第二图层。
在本实施例中,交互方法的第二执行主体(例如第一电子设备)可以向支持无线网络的数据转发设备,发送数据转发请求。
在这里,第二图层中的目标视频与第一图层中的目标视频播放进度一致,播放器对应的图层为第一图层。
步骤502,根据预定义变换操作,对第二图层中的目标视频进行变换。
在一些实施例中,上述步骤501,可以包括:根据所述预定义变换操作的操作位置信息,确定变换系数,其中,所述变换系数包括以下至少一项:尺寸变换信息、平移系数;根据尺寸变换信息和平移系数,对第二图层中的目标视频进行变换。
在一些实施例中,上述方法还可以包括:从第二图层的目标视频中,获取与视频播放区域匹配的目标视频画面;播放与视频播放区域匹配的目标视频画面。
需要说明的是,图5对应的实施例所提供的交互方法中,各 个步骤的实现细节和技术效果,可以参考本申请中相关部分的说明,在此不再赘述。
进一步参考图6,作为对上述各图所示方法的实现,本公开提供了一种交互装置的一个实施例,该装置实施例与图1所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图6所示,本实施例的交互装置包括:确定单元601和变换单元602。其中,确定单元,用于响应于检测到预定义的尺寸变换操作,基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,其中,所述目标视频为视频播放区域中播放的视频;变换单元,用于基于所述目标变换信息,对所述目标视频进行变换,以及播放变换后的目标视频。
在本实施例中,交互装置的确定单元601和变换单元602的具体处理及其所带来的技术效果可分别参考图1对应实施例中步骤101和步骤102的相关说明,在此不再赘述。
在一些实施例中,所述预设锚点尺寸与操作类型、预设变换信息对应;以及所述响应于检测到尺寸变换操作,基于所述目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,包括:响应于确定所述当前尺寸是所述预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与所述预设锚点尺寸对应,将所述预设锚点尺寸对应的预设变换信息,确定为所述目标变换信息。
在一些实施例中,所述尺寸变换操作包括放大操作,所述预设锚点尺寸包括原始图像尺寸,与原始图像尺寸对应的预设变换信息指示满屏图像尺寸,与原始图像尺寸对应的操作类型为放大操作;以及所述响应于确定所述当前尺寸是所述预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与所述预设锚点尺寸对应,将所述预设锚点尺寸对应的预设变换信息,确定为所述目标变换信息,包括:响应于确定当前尺寸是所述原始图像尺寸,以及响应于检测到放大操作,所述指示满屏图像尺寸的预设变换信 息,确定为目标变换信息。
在一些实施例中,所述尺寸变换操作包括缩小操作,所述预设锚点尺寸包括满屏图像尺寸,与满屏图像尺寸对应的预设变换尺寸指示原始图像尺寸,与满屏图像尺寸对应的操作类型为缩小类型;以及所述响应于确定所述当前尺寸是所述预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与所述预设锚点尺寸对应,将所述预设锚点尺寸对应的预设变换信息,确定为所述目标变换信息,包括:响应于确定当前尺寸是满屏图像尺寸,以及响应于检测到缩小操作,将所述指示原始图像尺寸的预设变换信息,确定为目标变换信息。
在一些实施例中,所述响应于检测到在视频播放区域的尺寸变换操作,基于所述目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,包括:响应于检测到尺寸变换操作,以及响应于预设的自由缩放条件满足,进入自由缩放模式;响应于确定进入自由缩放模式,根据所述尺寸变换操作指示的变换信息和所述目标视频的当前尺寸,确定所述目标变换信息。
在一些实施例中,所述自由缩放条件包括以下至少一项:目标视频的当前尺寸不是预设锚点尺寸,所述尺寸变换操作的操作类型与所述预设锚点尺寸不对应。
在一些实施例中,所述自由缩放条件包括:目标视频的当前尺寸是预设锚点尺寸,并且尺寸变换操作释放时目标视频的实时尺寸与所述当前尺寸的比例所指示的操作类型,与预设锚点尺寸对应。
在一些实施例中,在所述尺寸变换操作释放前,基于尺寸变换操作指示的变换信息,实时变换目标视频的尺寸。
在一些实施例中,所述装置还用于:响应于确定变换后的目标视频不是原始图像尺寸,展示还原控件,其中,所述还原控件用于将目标视频变换至原始图像尺寸。
在一些实施例中,所述装置还用于:响应于检测到预定义的平移操作,以及响应于确定目标视频的当前尺寸大于视频播放区 域的尺寸,在所述视频播放区域中移动目标视频。
在一些实施例中,所述装置还用于:响应于尺寸变换操作和/或平移操作结束,检测目标视频是否被移出视频播放区域;响应于确定被移出,对视频播放区域中的视频画面进行校正。
在一些实施例中,所述装置还用于:将第一图层封装到第二图层,其中,播放器对应的图层为第一图层,第二图层中的目标视频与第一图层中的目标视频播放进度一致,其中,所述第二图层中的目标视频用于根据尺寸变换操作和/或平移操作进行变换。
在一些实施例中,通过以下方式,对第二图层中的目标视频进行变换:根据尺寸变换操作和/或平移操作的操作位置信息,确定变换系数,其中,所述变换系数包括以下至少一项:尺寸变换信息、平移系数;根据变换系数,对第二图层中的目标视频进行变换。
在一些实施例中,所述装置还用于:从第二图层的目标视频中,获取与视频播放区域匹配的目标视频画面;在视频播放区域中,播放与视频播放区域匹配的目标视频画面。
在一些实施例中,所述基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,包括:基于所述尺寸变换操作的操作类型和所述预设锚点尺寸,确定对应的目标变换信息。
在一些实施例中,所述基于所述尺寸变换操作的操作类型和所述预设锚点尺寸,确定对应的目标变换信息,包括:响应于所述操作类型和所述预设锚点尺寸满足第一预设关系,采用第一目标变换信息,其中,第一目标变换信息指示将目标视频的尺寸在不同的预设锚点尺寸之间切换;响应于所述操作类型和所述预设锚点尺寸满足第二预设关系,采用第二目标变换信息,其中,第二目标变换信息对应自由缩放模式。
在一些实施例中,所述第一预设关系指示所述操作类型、所述目 标视频的当前尺寸、以及预设锚点尺寸之间,满足预设的第一对应关系、或者不满足预设的第二对应关系;第二预设关系指示所述操作类型、所述目标视频的当前尺寸、以及预设锚点尺寸之间,不满足预设的第一对应关系、或者满足预设的第二对应关系。
在一些实施例中,所述第一预设关系包括:目标视频的当前尺寸是原始图像尺寸、且所述操作类型是放大操作,所述目标变换信息包括将所述目标视频变换到满屏图像尺寸;和/或,目标视频的当前尺寸是满屏图像尺寸、且所述操作类型是缩小操作,所述目标变换信息包括将所述目标视频变换到原始图像尺寸;和/或,所述第二预设关系包括:目标视频的当前尺寸是原始图像尺寸、且所述操作类型是缩小操作,所述目标变换信息包括根据操作信息将所述目标视频进行自由缩放;和/或,目标视频的当前尺寸是满屏图像尺寸、且所述操作类型是放大操作,所述目标变换信息包括根据操作信息将所述目标视频进行自由缩放。
进一步参考图7,作为对上述各图所示方法的实现,本公开提供了一种视频变换装置的一个实施例,该装置实施例与图5所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图7所示,本实施例的交互装置包括:封装模块701和变换模块702。其中,封装模块,用于将第一图层封装到第二图层,其中,第二图层中的目标视频与第一图层中的目标视频播放进度一致,播放器对应的图层为第一图层;变换模块,用于根据预定义变换操作,对第二图层中的目标视频进行变换。
在本实施例中,交互装置的:封装模块701和变换模块702的具体处理及其所带来的技术效果可分别参考图5对应实施例中 步骤501和步骤502的相关说明,在此不再赘述。
在一些实施例中,上述封装模块701,可以用于:根据所述预定义变换操作的操作位置信息,确定变换系数,其中,所述变换系数包括以下至少一项:尺寸变换信息、平移系数;根据尺寸变换信息和平移系数,对第二图层中的目标视频进行变换。
在一些实施例中,上述装置还用于:从第二图层的目标视频中,获取与视频播放区域匹配的目标视频画面;在视频播放区域中,播放与视频播放区域匹配的目标视频画面。
在一些实施例中,播放器对应的图层为第一图层以将视频播放与视频变换进行隔离。
请参考图8,图8示出了本公开的一个实施例的交互方法可以应用于其中的示例性系统架构。
请参考图8,图8示出了本公开的一个实施例的信息处理方法可以应用于其中的示例性系统架构。
如图8所示,系统架构可以包括终端设备801、802、803,网络804,服务器805。网络804用以在终端设备801、802、803和服务器805之间提供通信链路的介质。网络804可以包括各种连接类型,例如有线、无线通信链路或者光纤电缆等等。
终端设备801、802、803可以通过网络804与服务器805交互,以接收或发送消息等。终端设备801、802、803上可以安装有各种客户端应用,例如网页浏览器应用、搜索类应用、新闻资讯类应用。终端设备801、802、803中的客户端应用可以接收用户的指令,并根据用户的指令完成相应的功能,例如根据用户的指令在信息中添加相应信息。
终端设备801、802、803可以是硬件,也可以是软件。当终端设备801、802、803为硬件时,可以是具有显示屏并且支持网页浏览的各种电子设备,包括但不限于智能手机、平板电脑、电子书阅读器、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture  Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、膝上型便携计算机和台式计算机等等。当终端设备801、802、803为软件时,可以安装在上述所列举的电子设备中。其可以实现成多个软件或软件模块(例如用来提供分布式服务的软件或软件模块),也可以实现成单个软件或软件模块。在此不做具体限定。
服务器805可以是提供各种服务的服务器,例如接收终端设备801、802、803发送的信息获取请求,根据信息获取请求通过各种方式获取信息获取请求对应的展示信息。并展示信息的相关数据发送给终端设备801、802、803。
需要说明的是,本公开实施例所提供的信息处理方法可以由终端设备执行,相应地,信息处理装置可以设置在终端设备801、802、803中。此外,本公开实施例所提供的信息处理方法还可以由服务器805执行,相应地,信息处理装置可以设置于服务器805中。
应该理解,图8中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。
下面参考图9,其示出了适于用来实现本公开实施例的电子设备(例如图8中的终端设备或服务器)的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图9示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图9所示,电子设备可以包括处理装置(例如中央处理器、图形处理器等)901,其可以根据存储在只读存储器(ROM)902中的程序或者从存储装置908加载到随机访问存储器(RAM)903 中的程序而执行各种适当的动作和处理。在RAM 903中,还存储有电子设备900操作所需的各种程序和数据。处理装置901、ROM 902以及RAM 903通过总线904彼此相连。输入/输出(I/O)接口905也连接至总线904。
通常,以下装置可以连接至I/O接口905:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置909;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置907;包括例如磁带、硬盘等的存储装置908;以及通信装置909。通信装置909可以允许电子设备与其他设备进行无线或有线通信以交换数据。虽然图9示出了具有各种装置的电子设备,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置909从网络上被下载和安装,或者从存储装置908被安装,或者从ROM 902被安装。在该计算机程序被处理装置901执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储 程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:响应于检测到预定义的尺寸变换操作,基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,其中,所述目标视频为视频播放区域中播放的视频;基于所述目标变换信息,对所述目标视频进行变换,以及播放变换后的目标视频。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:将第一图层封装到第二图层,其中,第二图层中的目标视频与第一图层中的目标视频播放进度一致,播放器对应的图层为第一图层;根据 预定义变换操作,对第二图层中的目标视频进行变换。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定,例如,变换单元还可以被描述为“播放目标视频的单元”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件 逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。
本公开实施例涉及计算机技术领域,尤其涉及一种人脸图像的处理方法、装置、终端及存储介质。

Claims (24)

  1. 一种交互方法,其特征在于,包括:
    响应于检测到预定义的尺寸变换操作,基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,其中,所述目标视频为视频播放区域中播放的视频;
    基于所述目标变换信息,对所述目标视频进行变换,以及播放变换后的目标视频。
  2. 根据权利要求1所述的方法,其特征在于,所述预设锚点尺寸与操作类型、预设变换信息对应;以及
    所述响应于检测到尺寸变换操作,基于所述目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,包括:
    响应于确定所述当前尺寸是所述预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与所述预设锚点尺寸对应,将所述预设锚点尺寸对应的预设变换信息,确定为所述目标变换信息。
  3. 根据权利要求2所述的方法,其特征在于,所述尺寸变换操作包括放大操作,所述预设锚点尺寸包括原始图像尺寸,与原始图像尺寸对应的预设变换信息指示满屏图像尺寸,与原始图像尺寸对应的操作类型为放大操作;以及
    所述响应于确定所述当前尺寸是所述预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与所述预设锚点尺寸对应,将所述预设 锚点尺寸对应的预设变换信息,确定为所述目标变换信息,包括:
    响应于确定当前尺寸是所述原始图像尺寸,以及响应于检测到放大操作,所述指示满屏图像尺寸的预设变换信息,确定为目标变换信息。
  4. 根据权利要求2所述的方法,其特征在于,所述尺寸变换操作包括缩小操作,所述预设锚点尺寸包括满屏图像尺寸,与满屏图像尺寸对应的预设变换尺寸指示原始图像尺寸,与满屏图像尺寸对应的操作类型为缩小类型;以及
    所述响应于确定所述当前尺寸是所述预设锚点尺寸,以及响应于所述尺寸变换操作的操作类型与所述预设锚点尺寸对应,将所述预设锚点尺寸对应的预设变换信息,确定为所述目标变换信息,包括:
    响应于确定当前尺寸是满屏图像尺寸,以及响应于检测到缩小操作,将所述指示原始图像尺寸的预设变换信息,确定为目标变换信息。
  5. 根据权利要求1所述的方法,其特征在于,所述响应于检测到在视频播放区域的尺寸变换操作,基于所述目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,包括:
    响应于检测到尺寸变换操作,以及响应于满足预设的自由缩放条件,进入自由缩放模式;
    响应于确定进入自由缩放模式,根据所述尺寸变换操作指示的变换信息和所述目标视频的当前尺寸,确定所述目标变换信息。
  6. 根据权利要求5所述的方法,其特征在于,
    所述自由缩放条件包括以下至少一项:
    目标视频的当前尺寸不是预设锚点尺寸;
    所述尺寸变换操作的操作类型与所述预设锚点尺寸不对应。
  7. 根据权利要求5所述的方法,其特征在于,所述自由缩放条件包括:目标视频的当前尺寸是预设锚点尺寸,并且尺寸变换操作释放时目标视频的实时尺寸与所述当前尺寸的比例所指示的操作类型,与预设锚点尺寸不对应。
  8. 根据权利要求7所述的方法,其特征在于,在所述尺寸变换操作释放前,基于尺寸变换操作指示的变换信息,实时变换目标视频的尺寸。
  9. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    响应于确定变换后的目标视频不是原始图像尺寸,展示还原控件,其中,所述还原控件用于将目标视频变换至原始图像尺寸。
  10. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    响应于检测到预定义的平移操作,以及响应于确定目标视频的当前尺寸大于视频播放区域的尺寸,在所述视频播放区域中移动目标视频。
  11. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    响应于尺寸变换操作和/或平移操作结束,检测目标视频是否被移出视频播放区域;
    响应于确定被移出,对视频播放区域中的视频画面进行校正。
  12. 根据权利要求1-11中任一项所述的方法,其特征在于,所述方法还包括:
    将第一图层封装到第二图层,其中,播放器对应的图层为第一图层,第二图层中的目标视频与第一图层中的目标视频播放进度一致,其中,所述第二图层中的目标视频用于根据尺寸变换操作和/或平移操作进行变换。
  13. 根据权利要求12所述的方法,其特征在于,通过以下方式,对第二图层中的目标视频进行变换:
    根据尺寸变换操作和/或平移操作的操作位置信息,确定变换系数,其中,所述变换系数包括以下至少一项:尺寸变换信息、平移系数;
    根据变换系数,对第二图层中的目标视频进行变换。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    从第二图层的目标视频中,获取与视频播放区域匹配的目标视频画面;
    在视频播放区域中,播放与视频播放区域匹配的目标视频画面。
  15. 根据权利要求1所述的方法,其特征在于,所述基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,包括:
    基于所述尺寸变换操作的操作类型和所述预设锚点尺寸,确定对应的目标变换信息。
  16. 根据权利要求15所述的方法,其特征在于,所述基于所述尺寸变换操作的操作类型和所述预设锚点尺寸,确定对应的目标变换信息,包括:
    响应于所述操作类型和所述预设锚点尺寸满足第一预设关系,采用第一目标变换信息,其中,第一目标变换信息指示将目标视频的尺寸在不同的预设锚点尺寸之间切换;
    响应于所述操作类型和所述预设锚点尺寸满足第二预设关系,采用第二目标变换信息,其中,第二目标变换信息对应自由缩放模式。
  17. 根据权利要求16所述的方法,其特征在于,
    所述第一预设关系指示所述操作类型、所述目标视频的当前尺寸、以及预设锚点尺寸之间,满足预设的第一对应关系、或者不满足预设的第二对应关系;
    第二预设关系指示所述操作类型、所述目标视频的当前尺寸、以及预设锚点尺寸之间,不满足预设的第一对应关系、或者满足预设的 第二对应关系。
  18. 根据权利要求16所述的方法,其特征在于,
    所述第一预设关系包括:目标视频的当前尺寸是原始图像尺寸、且所述操作类型是放大操作,所述目标变换信息包括将所述目标视频变换到满屏图像尺寸;和/或,目标视频的当前尺寸是满屏图像尺寸、且所述操作类型是缩小操作,所述目标变换信息包括将所述目标视频变换到原始图像尺寸;
    和/或,所述第二预设关系包括:目标视频的当前尺寸是原始图像尺寸、且所述操作类型是缩小操作,所述目标变换信息包括根据操作信息将所述目标视频进行自由缩放;和/或,目标视频的当前尺寸是满屏图像尺寸、且所述操作类型是放大操作,所述目标变换信息包括根据操作信息将所述目标视频进行自由缩放。
  19. 一种视频变换方法,其特征在于,所述方法包括:
    将第一图层封装到第二图层,其中,第二图层中的目标视频与第一图层中的目标视频播放进度一致,播放器对应的图层为第一图层;
    根据预定义变换操作,对第二图层中的目标视频进行变换。
  20. 根据权利要求19所述的方法,其特征在于,播放器对应的图层为第一图层,以将视频播放与视频变换进行隔离。
  21. 一种交互装置,其特征在于,所述装置包括:
    确定单元,用于响应于检测到预定义的尺寸变换操作,基于目标视频的当前尺寸是否是预设锚点尺寸,确定目标视频的目标变换信息,其中,所述目标视频为视频播放区域中播放的视频;
    变换单元,用于基于所述目标变换信息,对所述目标视频进行变换,以及播放变换后的目标视频。
  22. 一种视频变换装置,其特征在于,所述装置包括:
    封装模块,用于将第一图层封装到第二图层,其中,第二图层中的目标视频与第一图层中的目标视频播放进度一致,播放器对应的图层为第一图层;
    变换模块,用于根据预定义变换操作,对第二图层中的目标视频进行变换。
  23. 一种电子设备,其特征在于,包括:
    一个或多个处理器;
    存储装置,用于存储一个或多个程序,
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-18中任一所述的方法或者如权利要求19-20所述的方法。
  24. 一种计算机可读介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如权利要求1-18中任一所述的方法或者 如权利要求19-20所述的方法。
PCT/CN2021/109648 2020-07-31 2021-07-30 交互方法、装置和电子设备 WO2022022689A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21850635.0A EP4175307A4 (en) 2020-07-31 2021-07-30 INTERACTION METHOD AND DEVICE AND ELECTRONIC DEVICE
US17/887,077 US11863835B2 (en) 2020-07-31 2022-08-12 Interaction method and apparatus, and electronic device
US18/514,931 US20240089551A1 (en) 2020-07-31 2023-11-20 Interaction method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010764798.2 2020-07-31
CN202010764798.2A CN111935544B (zh) 2020-07-31 2020-07-31 交互方法、装置和电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/887,077 Continuation US11863835B2 (en) 2020-07-31 2022-08-12 Interaction method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2022022689A1 true WO2022022689A1 (zh) 2022-02-03

Family

ID=73315591

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109648 WO2022022689A1 (zh) 2020-07-31 2021-07-30 交互方法、装置和电子设备

Country Status (4)

Country Link
US (2) US11863835B2 (zh)
EP (1) EP4175307A4 (zh)
CN (1) CN111935544B (zh)
WO (1) WO2022022689A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111935544B (zh) 2020-07-31 2022-03-08 北京字节跳动网络技术有限公司 交互方法、装置和电子设备
CN112712395A (zh) * 2021-01-08 2021-04-27 北京有竹居网络技术有限公司 展示信息生成方法、装置和电子设备
CN112887768B (zh) * 2021-01-12 2023-06-16 京东方科技集团股份有限公司 投屏显示方法、装置、电子设备及存储介质
CN114071214A (zh) * 2021-11-17 2022-02-18 上海哔哩哔哩科技有限公司 视频展示方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101576996A (zh) * 2009-06-05 2009-11-11 腾讯科技(深圳)有限公司 一种实现图像缩放中的处理方法及装置
CN102890816A (zh) * 2011-07-20 2013-01-23 深圳市快播科技有限公司 视频图像缩放处理方法以及视频图像缩放处理装置
WO2017184241A1 (en) * 2016-04-18 2017-10-26 Qualcomm Incorporated Methods and systems for auto-zoom based adaptive video streaming
CN107741815A (zh) * 2017-10-26 2018-02-27 上海哔哩哔哩科技有限公司 用于播放器的手势操作方法及设备
CN110446110A (zh) * 2019-07-29 2019-11-12 深圳市东微智能科技股份有限公司 视频的播放方法、视频播放设备及存储介质
CN110944186A (zh) * 2019-12-10 2020-03-31 杭州当虹科技股份有限公司 一种视频局部区域高质量查看方法
CN111935544A (zh) * 2020-07-31 2020-11-13 北京字节跳动网络技术有限公司 交互方法、装置和电子设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003108976A (ja) * 2001-09-27 2003-04-11 Canon Inc 画像管理システム、画像表示方法切替方法、記憶媒体、及びプログラム
CN101616281A (zh) * 2009-06-26 2009-12-30 中兴通讯股份有限公司南京分公司 一种将手机电视播放画面局部放大的方法及移动终端
CN102890603B (zh) * 2011-07-20 2015-05-27 深圳市快播科技有限公司 视频图像处理方法以及视频图像处理装置
CN106802759A (zh) * 2016-12-21 2017-06-06 华为技术有限公司 视频播放的方法及终端设备
EP3583780B1 (en) * 2017-02-17 2023-04-05 InterDigital Madison Patent Holdings, SAS Systems and methods for selective object-of-interest zooming in streaming video
CN111355998B (zh) * 2019-07-23 2022-04-05 杭州海康威视数字技术股份有限公司 视频处理方法及装置
CN111031398A (zh) * 2019-12-10 2020-04-17 维沃移动通信有限公司 一种视频控制方法及电子设备
CN111147911A (zh) * 2019-12-27 2020-05-12 北京达佳互联信息技术有限公司 视频裁剪方法、装置、电子设备和存储介质


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4175307A4 *

Also Published As

Publication number Publication date
CN111935544B (zh) 2022-03-08
US20230171472A1 (en) 2023-06-01
EP4175307A1 (en) 2023-05-03
US11863835B2 (en) 2024-01-02
CN111935544A (zh) 2020-11-13
US20240089551A1 (en) 2024-03-14
EP4175307A4 (en) 2024-03-13

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21850635

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021850635

Country of ref document: EP

Effective date: 20230130

NENP Non-entry into the national phase

Ref country code: DE