CN111107418B - Video data processing method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111107418B
CN111107418B (application CN201911319354.1A, published as CN111107418A)
Authority
CN
China
Prior art keywords
window
area
video frame
played
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911319354.1A
Other languages
Chinese (zh)
Other versions
CN111107418A
Inventor
王健 (Wang Jian)
庹虎 (Tuo Hu)
周霆 (Zhou Ting)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority claimed from CN201911319354.1A
Publication of CN111107418A
Application granted
Publication of CN111107418B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application relates to a video data processing method and apparatus, a computer device, and a storage medium. The method includes: receiving a video frame to be played; cropping a partial area from the video frame to be played as a focus area; and rendering the focus area in a first window of an image display layer and rendering the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window displays the focus area enlarged according to a first preset proportion, and the second window displays the video frame to be played at a reduced size. By arranging two different areas on the same screen, one enlarging detailed information (the focus area) and the other showing the complete picture at a reduced size (the video frame to be played), this viewing mode provides the user with more complete video information and improves the user experience.

Description

Video data processing method, video data processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video data processing method and apparatus, a computer device, and a storage medium.
Background
With the development of video technology, watching video has become part of everyday life. The two main viewing modes today are landscape and portrait, both of which display the complete video. Every video frame contains key picture areas and non-key picture areas. Displaying the complete picture lets the user understand the full content, but it does not let the user concentrate on the details of the key picture within the frame, which degrades the viewing experience.
Disclosure of Invention
To solve this technical problem, the present application provides a video data processing method and apparatus, a computer device, and a storage medium.
In a first aspect, the present application provides a video data processing method, including:
receiving a video frame to be played;
cropping a partial area from the video frame to be played as a focus area;
and rendering the focus area in a first window of an image display layer and rendering the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window is used for displaying the focus area enlarged according to a first preset proportion, and the second window is used for displaying the video frame to be played at a reduced size.
In a second aspect, the present application provides a video data processing apparatus comprising:
the data receiving module is used for receiving the video frame to be played;
the area cropping module is used for cropping a partial area from the video frame to be played as a focus area;
and the rendering module is used for rendering the focus area in a first window of an image display layer and rendering the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window is used for displaying the focus area enlarged according to a first preset proportion, and the second window is used for displaying the video frame to be played at a reduced size.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
receiving a video frame to be played;
cropping a partial area from the video frame to be played as a focus area;
and rendering the focus area in a first window of an image display layer and rendering the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window is used for displaying the focus area enlarged according to a first preset proportion, and the second window is used for displaying the video frame to be played at a reduced size.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
receiving a video frame to be played;
cropping a partial area from the video frame to be played as a focus area;
and rendering the focus area in a first window of an image display layer and rendering the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window is used for displaying the focus area enlarged according to a first preset proportion, and the second window is used for displaying the video frame to be played at a reduced size.
The video data processing method and apparatus, computer device, and storage medium described above include: receiving a video frame to be played; cropping a partial area from the video frame to be played as a focus area; and rendering the focus area in a first window of an image display layer and rendering the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window displays the focus area enlarged according to a first preset proportion, and the second window displays the video frame to be played at a reduced size. By arranging two different areas on the same screen, one enlarging detailed information (the focus area) and the other showing the complete picture at a reduced size (the video frame to be played), this viewing mode provides the user with more complete video information and improves the user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; it is obvious that, for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a diagram of an exemplary video data processing system;
FIG. 2 is a flow diagram illustrating a method for video data processing according to one embodiment;
FIG. 3 is an interface diagram of a vertical screen display interface in one embodiment;
FIG. 4 is a schematic diagram of a display interface in one embodiment;
FIG. 5 is a schematic interface diagram of a landscape display interface in one embodiment;
FIG. 6 is a flow diagram illustrating a method for video data processing according to one embodiment;
FIG. 7 is a block diagram showing the structure of a video data processing apparatus according to one embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a diagram of an application environment of a video data processing method according to an embodiment. Referring to fig. 1, the video data processing method is applied to a video data processing system. The video data processing system includes a terminal 110 and a server 120. The terminal 110 and the server 120 are connected through a network.
The terminal 110 receives a video frame to be played of a video sent by the server 120 and crops a partial area from the video frame as a focus area; it then renders the focus area in a first window of an image display layer and renders the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window displays the focus area enlarged according to a first preset proportion, and the second window displays the video frame to be played at a reduced size.
The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
As shown in fig. 2, in one embodiment, a video data processing method is provided. The embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in fig. 1. Referring to fig. 2, the video data processing method specifically includes the following steps:
step S201, receiving a video frame to be played.
Specifically, a video frame to be played is a video frame that is about to be played. It may come from a video actively delivered by the server, from a video delivered in response to a user request, or from a local video uploaded by the user.
Step S202: crop a partial area from the video frame to be played as a focus area.
Specifically, the focus area is a partial area cropped from the video frame to be played. Cropping may follow a preset cropping rule, or may be customized according to the user's needs. A preset cropping rule can be defined in advance, for example cropping the area where a target object is located, such as the area where the protagonist of a TV series or movie appears. For user-defined cropping, the user may, for example, specify the coordinates of the center point of the focus area, and the frame is then cropped around that center point according to a preset cropping proportion.
Step S203: render the focus area in a first window of the image display layer, and render the video frame to be played in a second window of the image display layer.
In this embodiment, the second window is located above the first window, the first window displays the focus area enlarged according to a first preset proportion, and the second window displays the video frame to be played at a reduced size.
Specifically, the image display layer is a layer used for displaying video frames; text, pictures, tables, and plug-ins can be added to a layer, and layers can be nested. The first window and the second window are both located on the same image display layer, with the second window above the first; that is, two areas are arranged in the same layer. The first window displays the focus area enlarged according to the first preset proportion, where the first preset proportion is the magnification of the focus area, determined by the size of the first window and the size of the focus area. The first window may match the terminal's display screen, or be half its size, and can be set as required. The area of the second window is smaller than that of the first window: the larger window shows the information of the smaller area, that is, the focus area enlarged, while the smaller window shows the whole picture, that is, the video frame to be played at a reduced size. As each video frame of the video to be played is received, a focus area is cropped from it, and the video frame and its focus area are rendered in the second window and the first window respectively.
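The relationship between the window sizes and the first preset proportion can be sketched as follows. This is a minimal illustration, not the patent's implementation; all function and parameter names are assumptions:

```python
def magnification_scale(first_window_size, focus_size):
    """First preset proportion: the uniform scale at which the focus
    area is enlarged to fit inside the (larger) first window."""
    ww, wh = first_window_size
    fw, fh = focus_size
    return min(ww / fw, wh / fh)

def reduction_scale(second_window_size, frame_size):
    """Scale at which the complete video frame is shrunk to fit
    inside the (smaller) second window."""
    ww, wh = second_window_size
    vw, vh = frame_size
    return min(ww / vw, wh / vh)

# A 540x960 focus area shown in a 1080x1920 first window is enlarged 2x,
# while the full 1080x1920 frame is shrunk to 1/3 in a 360x640 second window.
```

Taking the minimum of the width and height ratios keeps the aspect ratio of the source region intact, which matches the description of enlarging and reducing by a single proportion.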
In one embodiment, after the focus area and the video frame to be played have been rendered in the first and second windows, the user may interact through the terminal's display screen or shortcut keys, for example by sliding or tapping on the screen. Such operations trigger corresponding instructions, such as moving the second window, resizing it, changing its aspect ratio, or switching the playback mode.
In one embodiment, after the second window has been rendered, an identifier corresponding to the focus area is drawn on the second window; the identifier may be a marker box.
In one embodiment, as shown in FIG. 3, in portrait mode the first window 310 plays an enlarged view of the focus area 321 shown in the second window 320. FIG. 4 is a schematic diagram of the terminal's display interface.
In another embodiment, as shown in FIG. 5, in landscape mode the first window 310 plays an enlarged view of the focus area 321 shown in the second window 320.
In a specific embodiment, the video data processing method includes: receiving a video frame to be played; cropping a partial area from the video frame to be played as a focus area; and rendering the focus area in a first window of the image display layer and rendering the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window displays the focus area enlarged according to a first preset proportion, and the second window displays the video frame to be played at a reduced size. By arranging two different areas on the same screen, one enlarging detailed information (the focus area) and the other showing the complete picture at a reduced size (the video frame to be played), this viewing mode provides the user with more complete video information and improves the user experience.
In an embodiment, the video data processing method further includes: receiving a user's slide operation on the first window and the corresponding slide distance; obtaining the coordinates of the center point of the focus area in the video frame to be played; computing the current center-point coordinates from the slide distance and the existing center-point coordinates; cropping the area corresponding to the current center point from the video frame according to a preset cropping proportion as the current focus area; and rendering the current focus area in the first window of the image display layer and the video frame to be played in the second window.
Specifically, the slide operation is a slide performed by the user on the terminal's display screen. The slide distance is the distance covered by the actual slide; it is converted into a slide distance for the second window according to the ratio between the screen and the second window, and the slide direction follows the direction of the operation. The focus area is a rectangular area in the video frame to be played whose center is the center point of the focus area; the coordinates of this center point in the video frame give the center-point coordinates. The center point is moved according to the slide distance and direction of the second window to obtain the current center-point coordinates. A corresponding area is then cropped from the video frame according to the current center-point coordinates and the preset cropping proportion to obtain the current focus area; the current focus area is rendered in the first window and the video frame to be played in the second window. Dynamically adjusting the focus area according to the user's operation helps the user follow the enlarged focus picture and further improves the user experience.
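The slide handling described above (convert the on-screen slide into a movement of the focus-area center, then keep the crop rectangle inside the frame) can be sketched as follows. The function signature and the two-step ratio conversion are illustrative assumptions, not the patent's actual formulas:

```python
def updated_center(center, slide, screen_size, window_size, frame_size, crop_size):
    """Move the focus-area center by a user slide.

    The slide distance on the screen is first scaled by the ratio
    between the second window and the screen, then mapped from
    window coordinates into full-frame coordinates; finally the
    center is clamped so the crop rectangle stays inside the frame.
    """
    rx = window_size[0] / screen_size[0]
    ry = window_size[1] / screen_size[1]
    dx = slide[0] * rx * frame_size[0] / window_size[0]
    dy = slide[1] * ry * frame_size[1] / window_size[1]
    cx, cy = center[0] + dx, center[1] + dy
    hw, hh = crop_size[0] / 2, crop_size[1] / 2  # half the crop extents
    cx = min(max(cx, hw), frame_size[0] - hw)
    cy = min(max(cy, hh), frame_size[1] - hh)
    return cx, cy
```

The clamping step reflects the fact that the cropped focus area must always lie entirely within the video frame, no matter how far the user slides.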
In one embodiment, after the center point is moved according to the slide distance and direction of the second window to obtain the current center-point coordinates, the marker box identifying the focus area is redrawn at the current center-point coordinates, or moved by the slide distance in the slide direction. Because the position of the marker box follows the slide operation, the user is better informed of the position of the focus area relative to the complete video. While viewing the enlarged focus area, the user also sees the full picture of the video and the real-time position of the focus area within it, and can therefore understand the video content more accurately, conveniently, and comprehensively while watching immersively.
In one embodiment, step S202 includes: cropping a partial area from the video frame to be played as the focus area according to a preset cropping rule.
Specifically, the preset cropping rule is a rule set in advance for cropping the focus area. It may include automatically cropping the area where a preset subject, such as a person, an animal, or a cartoon character, is located, or cropping the area where a preset action occurs.
In an embodiment, when no picture matching the preset cropping rule exists in the video frame to be played, the focus area keeps the same position it had in the most recent video frame that did contain a matching picture.
In one embodiment, when no picture matching the preset cropping rule exists in the video frame to be played, the area around a specified point is used as the focus area; for example, the specified point may be the center of the frame, or a point one third of the way in from the left or right edge.
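The fallback behaviour of the two embodiments above (use the matched area when the preset rule fires, otherwise reuse the last known position, otherwise a specified default point) can be sketched as follows; the detector output format and all names are assumptions for illustration:

```python
def focus_center(matched_box, last_center, frame_size):
    """Choose the focus-area center for the current frame.

    matched_box: (x, y, w, h) of the area found by the preset cropping
    rule (e.g. where a person or a preset action appears), or None
    when no picture in the frame matches the rule.
    """
    if matched_box is not None:
        x, y, w, h = matched_box
        return (x + w / 2, y + h / 2)
    if last_center is not None:  # reuse the position from the last match
        return last_center
    return (frame_size[0] / 2, frame_size[1] / 2)  # default: frame center
```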
In an embodiment, the video data processing method further includes: receiving a first preset operation from the user; obtaining the position information and size information of the second window; updating the second window according to the first preset operation and its position and size information to obtain an updated second window; and rendering the video frame to be played in the updated second window.
Specifically, the first preset operation adjusts the size and position of the second window. When the first preset operation is detected on the terminal's display screen, the change to the second window, that is, its change in position and size, is computed from the operation; the updated position is obtained from the current position plus the position change, the updated size from the current size plus the size change, and the updated second window is generated from the updated size and position. In this way the user can freely resize and move the window.
In one embodiment, it is determined whether the updated area of the second window is greater than or equal to a preset area. If so, the size and orientation of the second window are adjusted according to a first preset size, a first rotation direction, and a first preset angle to obtain a third window; the first window is closed, or covered by the third window; and the video frame to be played is rendered in the third window.
Specifically, the preset area may be set by the user according to the terminal, or derived from the area of the first window, for example two thirds or four fifths of it. The size and orientation of the second window are adjusted according to the first preset size, the first rotation direction, and the first preset angle, where the first preset size is preconfigured size information and the rotation direction and angle are likewise preconfigured, for example a clockwise or counterclockwise rotation by the first preset angle. After the second window is rotated by the first preset angle, the resulting window is the third window, whose size matches the first preset size. The first window is then closed, or covered by the third window. The first preset size may be customized, for example to equal the size of the first window. The video frame to be played is rendered in the third window.
In a specific embodiment, when the first window and the second window are different regions of the same layer, the first window disappears and only the third window remains; the aspect ratio of the third window is the inverse of that of the first window, that is, the width of the third window equals the height of the first window and the height of the third window equals the width of the first window. Expressed in terms of landscape and portrait: when the first window is playing video in portrait mode and the area of the second window becomes greater than or equal to the preset area, the second window is resized according to the size of the first window, then rotated by the first rotation direction and the first preset angle to obtain a landscape third window, and the video frame to be played is rendered in that landscape third window; that is, playback becomes ordinary landscape video playback.
In a specific embodiment, when the updated area of the second window is greater than or equal to the preset area, the second window is resized according to the preset size, the first window is closed, and the resized window renders the video frame to be played; that is, the video frame is shown in ordinary portrait playback mode.
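The mode switch triggered by enlarging the second window past the preset area can be sketched as follows, taking the two-thirds-of-the-first-window example from the text as the threshold. Modelling the rotated third window simply as the first window's size transposed (portrait to landscape) is an assumption for illustration:

```python
def apply_resize(second_size, first_size, threshold=2 / 3):
    """Check the updated second window against the preset area.

    If its area reaches the threshold fraction of the first window's
    area, switch to full-screen playback in a rotated "third window"
    (here: the first window's size transposed, i.e. landscape);
    otherwise remain in the interactive two-window mode.
    """
    second_area = second_size[0] * second_size[1]
    first_area = first_size[0] * first_size[1]
    if second_area >= threshold * first_area:
        return "fullscreen", (first_size[1], first_size[0])
    return "interactive", second_size
```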
In one embodiment, after the rotated third window renders the video frame to be played, the method further includes: receiving a second preset operation from the user in the third window; reducing the area of the third window according to the second preset operation; and, when the area of the third window becomes smaller than the preset area and the first window was closed, reopening the first window on the image display layer, adjusting the size and orientation of the third window according to a second preset size, a second rotation direction, and a second preset angle to obtain a fourth window, rendering the focus area in the first window of the image display layer, and rendering the video frame to be played in the fourth window.
When the area of the third window becomes smaller than the preset area and the first window was covered, the size and orientation of the third window are adjusted according to the second preset size, the second rotation direction, and the second preset angle to obtain a fifth window; the focus area is rendered in the first window of the image display layer and the video frame to be played in the fifth window.
Specifically, while the whole video is being rendered through the rotated third window: if the third window covers the first window, the second preset operation reduces the area of the rotated third window, and once that area falls below the preset area, the size and orientation of the third window are adjusted according to the second preset size, the second rotation direction, and the second preset angle to obtain the fifth window, and the first window and the fifth window are shown on the layer.
If the first window was closed and the area of the rotated third window falls below the preset area, the size and orientation of the third window are adjusted according to the second preset size, the second rotation direction, and the second preset angle to obtain the fourth window.
And rendering a video frame to be played in a fourth window of the image display layer, intercepting a focus area from the current frame of the video played in the fourth window, and rendering the focus area in the first window of the image display layer. That is, the user can adjust the play mode by enlarging or reducing the size of the window for playing the video frame to be played, and switch from the interactive display mode to the ordinary viewing mode, or switch from the ordinary viewing mode to the interactive display mode. And in the interactive display mode, two different windows are adopted to display the video at the same time, one window plays the complete video frame, and the other window plays a part of the region in the complete video frame in an amplified manner.
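The shrink-direction transitions above can be summarized as a branch on the third window's area and on the state of the first window. A minimal sketch with assumed names and return values, not the patent's code:

```python
def shrink_mode_transition(third_area, preset_area, first_window_closed):
    # Decide what happens when the rotated third window is shrunk.
    # Returns (target_window, reopen_first_window).
    if third_area >= preset_area:
        return "third", False    # still above the threshold: keep landscape mode
    if first_window_closed:
        return "fourth", True    # reopen the first window, rotate back to portrait
    return "fifth", False        # first window resurfaces from underneath
```

The first window is reopened only in the closed case; in the covered case it was never destroyed, so uncovering it is enough.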
In one embodiment, the position information of the second window is acquired; a third preset operation is received, the current position information of the second window is calculated from the third preset operation and the position information, and the second window is displayed in the area corresponding to the current position information.
Specifically, the third preset operation only moves the second window; its size is not adjusted. The position offset of the second window is determined from the third preset operation, the current position information of the second window is calculated from its position information and the offset, the second window is constructed in the region corresponding to the current position information, and the video frame to be played is rendered in that second window.
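Assuming the third preset operation reports a drag offset in screen pixels, the position update can be sketched as follows. Function and parameter names are illustrative, and the clamp that keeps the window on screen is an added assumption:

```python
def move_second_window(pos, drag, screen, win_size):
    # pos: (x, y) top-left corner of the second window; drag: (dx, dy) offset
    # from the third preset operation. Only position changes, never size.
    x = min(max(pos[0] + drag[0], 0), screen[0] - win_size[0])
    y = min(max(pos[1] + drag[1], 0), screen[1] - win_size[1])
    return (x, y)
```

With this clamp the second window can be dragged to any position of the portrait display area but never off its edges.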
The first preset operation, the second preset operation and the third preset operation in this embodiment may follow existing interactive gestures or be custom-designed, for example double-clicking the screen, single-clicking, or circling on the screen.
In one embodiment, the video may be a landscape video or a portrait video; for convenience of illustration, a landscape video is taken as an example. As shown in fig. 6, the video data processing method includes:
step S301, receiving a horizontal version video frame.
Step S302, intercepting a vertical screen area (the focus area) from the landscape video frame. The focus area is intercepted according to a preset proportion; for example, if the aspect ratio of the vertical screen area is determined by the aspect ratio of the terminal's display interface, the intercepted height is the full height of the landscape video picture, and the intercepted width is adjusted so that the aspect ratio of the intercepted picture matches the aspect ratio of the display interface.
Step S303, rendering the vertical screen area full screen, i.e. rendering the vertical screen area in the full-screen portrait window (the first window).
Step S304, determining a thumbnail display area of the horizontal video frame.
In step S305, the landscape video frame is rendered a second time, in the thumbnail display area (the second window). The full-screen display of the vertical screen area and the thumbnail display of the landscape video frame are synchronized. After rendering, the display interface of the device is as shown in fig. 4.
Step S306, dynamically drawing the vertical-screen-area capture frame in the thumbnail display area. The capture frame identifies the exact position from which the vertical screen area is intercepted, and dynamic drawing means drawing the capture frame in real time according to the actual position of the vertical screen area in each frame.
In step S307, the thumbnail display area is adjusted; adjusting it includes moving its position and/or resizing it. After the adjustment, the process returns to steps S304 and S306, and the landscape video frame is rendered in the adjusted thumbnail display area.
Step S308, determining the vertical screen area. The vertical screen area may be determined in advance according to a preset interception rule, or designated manually by the user on the screen. After it is determined, the process returns to steps S302 and S306.
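The width computation of step S302 can be sketched as follows. This is an illustrative reconstruction under assumptions: pixel coordinates, a horizontal-only clamp, and hypothetical names:

```python
def portrait_crop(frame_w, frame_h, display_w, display_h, cx):
    # Step S302: the crop height is the full landscape-frame height; the crop
    # width is chosen so the crop's aspect ratio matches the portrait display.
    crop_h = frame_h
    crop_w = round(crop_h * display_w / display_h)
    left = min(max(cx - crop_w // 2, 0), frame_w - crop_w)  # keep crop inside frame
    return (left, 0, crop_w, crop_h)
```

For a 1920×1080 frame on a 1080×1920 display, the crop is 608×1080, centered on the focus point where the frame boundary allows.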
When watching a video, two windows on the mobile phone screen synchronously display the focused local picture and the complete thumbnail picture at all times, providing the user with more complete video information while viewing. By dynamically drawing the focus-area capture frame on the complete thumbnail and displaying the focused local picture and the complete thumbnail synchronously in the two windows, the user can more easily relate the content of the large-screen focus picture to the whole, the interactive capability of the mobile phone screen is fully exploited, the user is given the ability to change both the focus picture content and the display area of the global thumbnail, and more viewing choices suited to the user's habits are provided.
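Drawing the capture frame over the thumbnail amounts to mapping the focus rectangle from frame coordinates into the thumbnail window's coordinates. A sketch under assumed names, with one uniform scale factor per axis:

```python
def capture_frame_rect(focus_rect, frame_size, thumb_rect):
    # Map the focus area (x, y, w, h in frame pixels) onto the thumbnail
    # display area so the vertical-screen capture frame can be drawn over it.
    fx, fy, fw, fh = focus_rect
    tx, ty, tw, th = thumb_rect
    sx, sy = tw / frame_size[0], th / frame_size[1]
    return (tx + fx * sx, ty + fy * sy, fw * sx, fh * sy)
```

Because the mapping uses the thumbnail's own position and scale, the capture frame stays correct even after the thumbnail window is moved or resized in step S307.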
In a specific embodiment, the video data processing method includes: analyzing the current video picture content by AI to obtain the center point coordinate of the focus area; according to the center point coordinate, intercepting at the full height of the landscape video picture and adjusting the intercepted width so that the aspect ratio of the intercepted picture is consistent with the portrait aspect ratio of the playing device; rendering the intercepted area full screen, enlarged, in portrait on the playing device; calculating, within the portrait display area of the playing device, a small window area whose aspect ratio equals the actual aspect ratio of the landscape video; rendering the complete landscape picture in the small window area while rendering the portrait full screen; when the user slides in the portrait display area, moving the center point coordinate of the focus area left and right so that the full-screen intercepted area reveals picture content outside the AI focus area; allowing the user to drag the small window area to any position of the portrait display area, the dragged small window still rendering the complete landscape picture; and dynamically drawing the portrait capture frame in the small window area on the UI layer according to the sliding of the finger.
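The small-window computation in this embodiment keeps the landscape video's true aspect ratio inside the portrait display. A sketch; the 40% width fraction is an invented default for illustration, not a value from the patent:

```python
def small_window_size(display_w, video_w, video_h, width_frac=0.4):
    # The small window spans a fraction of the portrait display's width and
    # keeps the landscape video's actual aspect ratio for its height.
    w = round(display_w * width_frac)
    h = round(w * video_h / video_w)
    return (w, h)
```

On a 1080-pixel-wide portrait display with a 16:9 landscape source, this yields a 432×243 small window.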
In this multi-window interactive display example of the terminal's video player, a local focus picture intercepted from the landscape video is displayed enlarged, full screen, on the mobile phone; the complete picture is displayed reduced in a small window area; and the small window can be dragged in real time to any position of the portrait display area, making it convenient for the user to grasp the full content of the landscape video and the relative position of the focus while watching in portrait. Through the two synchronized renderings, more complete video content can thus be presented to the user flexibly and synchronously.
Fig. 2 and 3 are schematic flow diagrams of a video data processing method according to an embodiment. It should be understood that although the steps in the flowcharts of fig. 2 and 3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 and 3 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a video data processing apparatus 200 comprising:
the data receiving module 201 is configured to receive a video frame to be played.
And the region intercepting module 202 is configured to intercept a partial region from the video frame to be played as a focus region.
The rendering module 203 is configured to render a focus area in a first window of the image display layer, and render a video frame to be played in a second window of the image display layer, where the second window is located above the first window.
In an embodiment, the video data processing apparatus 200 further includes:
and the sliding data receiving module is used for receiving the sliding operation of the user on the first window and the corresponding sliding distance.
And the central point coordinate acquisition module is used for acquiring the coordinate of the central point of the focus area in the video frame to be played to obtain the coordinate of the central point.
And the current central point coordinate calculation module is used for calculating to obtain the current central point coordinate according to the sliding distance and the central point coordinate.
The region capture module 202 is further configured to capture a region corresponding to the current center point coordinate from the video frame to be played according to a preset capture proportion, and use the region as the current focus region.
The rendering module 203 is further configured to render the current focus area in a first window of the image display layer, and render the video frame to be played in a second window of the image display layer.
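The slide-to-center update handled by these modules can be sketched as a horizontal shift of the focus center, clamped so the current focus area stays inside the frame. The names and the horizontal-only assumption are illustrative:

```python
def current_center(cx, cy, slide_dx, frame_w, crop_w):
    # New center = old center + slide distance, clamped so the intercepted
    # region of width crop_w never leaves the landscape frame.
    half = crop_w / 2
    return (min(max(cx + slide_dx, half), frame_w - half), cy)
```

The current focus area is then re-intercepted around this updated center at the preset proportion and rendered in the first window.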
In an embodiment, the region capture module 202 is specifically configured to capture a partial region from the video frame to be played as a focus region according to a preset capture rule.
In an embodiment, the video data processing apparatus 200 further includes:
the operation receiving module is used for receiving a first preset operation of a user.
The window information acquisition module is used for acquiring the position information and the size information of the second window;
the window updating module is used for updating the second window according to the first preset operation, the position information and the size information of the second window to obtain an updated second window;
the rendering module 203 is further configured to render the video frame to be played in the updated second window.
In an embodiment, the video data processing apparatus 200 further includes:
and the judging module is used for judging whether the updated area of the second window is larger than or equal to the area of the preset area.
The adjusting module is used for adjusting the size and the direction of the second window according to the first preset size, the first rotating direction and the first preset angle to obtain a third window when the updated area of the second window is larger than or equal to the area of the preset area, and closing the first window or covering the first window by adopting the third window;
the rendering module 203 is further configured to render the video frame to be played in a third window.
In an embodiment, the video data processing apparatus 200 further includes:
the operation receiving module is further used for receiving a second preset operation of the user in the third window;
the window adjusting module is used for reducing the area of the third window according to a second preset operation;
and the fourth window generation module is used for opening the first window on the image display layer when the area of the third window is smaller than the preset area and the first window is closed, and adjusting the size and the direction of the third window according to a second preset size, a second rotation direction and a second preset angle to obtain a fourth window.
The rendering module 203 is further configured to render the focus area in the first window of the image display layer, and render the video frame to be played in the fourth window.
And the fifth window generation module is used for adjusting the size and the direction of the third window according to a third preset size, a third rotation direction and a second preset angle when the area of the third window is smaller than the preset area and the first window is covered, so that the fifth window is obtained.
The rendering module 203 is further configured to render a focus area in the first window of the image display layer, and render a video frame to be played in the fifth window.
In an embodiment, the video data processing apparatus 200 further includes:
the position information acquisition module is used for acquiring the position information of the second window;
the position information calculation module is used for receiving a third preset operation and calculating according to the third preset operation and the position information to obtain the current position information of the second window;
and the window moving module is used for displaying the second window in the area corresponding to the current position information.
FIG. 8 is a diagram that illustrates an internal structure of the computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in fig. 1. As shown in fig. 8, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected via a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the video data processing method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform a video data processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 8 is a block diagram of only part of the configuration relevant to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the video data processing apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 8. The memory of the computer device may store various program modules constituting the video data processing apparatus, such as a data receiving module 201, a region intercepting module 202, and a rendering module 203 shown in fig. 7. The computer program constituted by the respective program modules causes the processor to execute the steps in the video data processing method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 8 may receive a video frame to be played through the data receiving module 201 of the video data processing apparatus shown in fig. 7, intercept a partial area from the video frame to be played as a focus area through the area interception module 202, and, through the rendering module 203, render the focus area in a first window of the image display layer and the video frame to be played in a second window of the image display layer, where the second window is located above the first window, the first window is configured to display the focus area enlarged at a first preset proportion, and the second window is configured to display the video frame to be played reduced.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: receiving a video frame to be played; intercepting a partial area from the video frame to be played as a focus area; and rendering the focus area in a first window of the image display layer and the video frame to be played in a second window of the image display layer, wherein the second window is located above the first window, the first window is used for displaying the focus area enlarged at a first preset proportion, and the second window is used for displaying the video frame to be played reduced.
In one embodiment, the processor, when executing the computer program, further performs the steps of: receiving sliding operation of a user on a first window and a corresponding sliding distance; acquiring the coordinate of the central point of the focus area in a video frame to be played to obtain the coordinate of the central point; calculating to obtain the current center point coordinate according to the sliding distance and the center point coordinate; intercepting an area corresponding to the current central point coordinate from a video frame to be played according to a preset interception proportion to serve as a current focus area; and rendering the current focus area in a first window of the image display layer, and rendering the video frame to be played in a second window of the image display layer.
In one embodiment, intercepting a partial area from a video frame to be played as a focus area includes: and intercepting a partial area from the video frame to be played as a focus area according to a preset intercepting rule.
In one embodiment, the processor, when executing the computer program, further performs the steps of: receiving a first preset operation of a user; acquiring position information and size information of a second window; updating the second window according to the first preset operation, the position information and the size information of the second window to obtain an updated second window; and rendering the video frame to be played in the updated second window.
In one embodiment, the processor, when executing the computer program, further performs the steps of: judging whether the updated area of the second window is greater than or equal to the area of the preset region; when it is, adjusting the size and direction of the second window according to a first preset size, a first rotation direction and a first preset angle to obtain a third window, and closing the first window or covering the first window with the third window; and rendering the video frame to be played in the third window.
In one embodiment, after the rotated third window renders the video frame to be played, the processor executes the computer program to further implement the following steps: receiving a second preset operation of the user in a third window; reducing the area of the third window according to a second preset operation; when the area of the third window is smaller than the preset area and the first window is closed, opening the first window on the image display layer, adjusting the size and the direction of the third window according to a second preset size, a second rotation direction and a second preset angle to obtain a fourth window, rendering a focus area on the first window of the image display layer, and rendering a video frame to be played on the fourth window; when the area of the third window is smaller than the preset area and the first window is covered, adjusting the size and the direction of the third window according to a third preset size, a third rotating direction and a second preset angle to obtain a fifth window, rendering a focus area on the first window of the image display layer, and rendering a video frame to be played on the fifth window.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring position information of a second window; receiving a third preset operation, and calculating according to the third preset operation and the position information to obtain the current position information of the second window; and displaying a second window in an area corresponding to the current position information.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the steps of: receiving a video frame to be played; intercepting a partial area from the video frame to be played as a focus area; and rendering the focus area in a first window of the image display layer and the video frame to be played in a second window of the image display layer, wherein the second window is located above the first window, the first window is used for displaying the focus area enlarged at a first preset proportion, and the second window is used for displaying the video frame to be played reduced.
In one embodiment, the computer program when executed by the processor further performs the steps of: receiving sliding operation of a user on a first window and a corresponding sliding distance; acquiring the coordinate of the central point of the focus area in a video frame to be played to obtain the coordinate of the central point; calculating to obtain the current center point coordinate according to the sliding distance and the center point coordinate; intercepting an area corresponding to the current central point coordinate from a video frame to be played according to a preset intercepting proportion, and taking the area as a current focus area; and rendering the current focus area in a first window of the image display layer, and rendering the video frame to be played in a second window of the image display layer.
In one embodiment, intercepting a partial area from a video frame to be played as a focus area includes: and intercepting a partial area from the video frame to be played as a focus area according to a preset intercepting rule.
In one embodiment, the computer program when executed by the processor further performs the steps of: receiving a first preset operation of a user; acquiring position information and size information of a second window; updating the second window according to the first preset operation, the position information and the size information of the second window to obtain an updated second window; and rendering the video frame to be played in the updated second window.
In one embodiment, the computer program when executed by the processor further performs the steps of: judging whether the updated area of the second window is larger than or equal to the area of a preset area or not; when the updated area of the second window is larger than or equal to the area of the preset area, adjusting the size and the direction of the second window according to a first preset size, a first rotating direction and a first preset angle to obtain a third window, and closing the first window or covering the first window by adopting the third window; and rendering the video frame to be played in the third window.
In one embodiment, after the rotated third window renders the video frame to be played, the computer program when executed by the processor further performs the steps of: receiving a second preset operation of the user in a third window; reducing the area of the third window according to a second preset operation; when the area of the third window is smaller than the preset area and the first window is closed, opening the first window on the image display layer, adjusting the size and the direction of the third window according to a second preset size, a second rotation direction and a second preset angle to obtain a fourth window, rendering a focus area on the first window of the image display layer, and rendering a video frame to be played on the fourth window; when the area of the third window is smaller than the preset area and the first window is covered, adjusting the size and the direction of the third window according to a third preset size, a third rotating direction and a second preset angle to obtain a fifth window, rendering a focus area on the first window of the image display layer, and rendering a video frame to be played on the fifth window.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring position information of a second window; receiving a third preset operation, and calculating according to the third preset operation and the position information to obtain the current position information of the second window; and displaying a second window in an area corresponding to the current position information.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a non-volatile computer readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A video data processing method, applied to a terminal, the method comprising:
receiving a video frame to be played; the video frame to be played comprises a horizontal screen video frame or a vertical screen video frame;
intercepting a partial area from the video frame to be played as a focus area; when the video frame to be played is a horizontal screen video frame, the focus area is a vertical screen area;
rendering the focus area in a first window of an image display layer, and rendering the video frame to be played in a second window of the image display layer, wherein the second window is located above the first window, the first window is used for displaying the focus area in an enlarged manner according to a first preset proportion, and the second window is used for displaying the video frame to be played in a reduced manner;
wherein the method further comprises:
after the size and the position of the second window are updated, when the updated area of the second window is larger than or equal to the area of a preset area, adjusting the size and the direction of the second window according to a first preset size, a first rotating direction and a first preset angle to obtain a third window, and closing the first window or covering the first window by adopting the third window; rendering the video frame to be played at the third window; wherein the aspect ratio of the third window and the aspect ratio of the first window are reciprocal.
2. The method of claim 1, further comprising:
receiving sliding operation of a user on the first window and a corresponding sliding distance;
acquiring the coordinate of the central point of the focus area in the video frame to be played to obtain the coordinate of the central point;
calculating to obtain the current central point coordinate according to the sliding distance and the central point coordinate;
intercepting an area corresponding to the current central point coordinate from the video frame to be played according to a preset interception proportion to serve as a current focus area;
and rendering the current focus area in a first window of the image display layer, and rendering the video frame to be played in a second window of the image display layer.
3. The method according to claim 1 or 2, wherein said intercepting a partial area from the video frame to be played as a focus area comprises:
and intercepting a partial area from the video frame to be played as the focus area according to a preset intercepting rule.
4. The method of claim 3, wherein the updating the size and position of the second window comprises:
receiving a first preset operation of a user;
acquiring position information and size information of the second window;
updating the second window according to the first preset operation, the position information and the size information of the second window to obtain an updated second window;
and rendering the video frame to be played in the updated second window.
5. The method of claim 4, wherein after rendering the video frame to be played in the third window, the method further comprises:
receiving a second preset operation of the user in the third window;
reducing the area of the third window according to the second preset operation;
when the area of the third window is smaller than the preset area and the first window is closed, opening the first window on the image display layer, adjusting the size and the direction of the third window according to a second preset size, a second rotation direction and a second preset angle to obtain a fourth window, rendering the focus area on the first window of the image display layer, and rendering the video frame to be played on the fourth window;
and when the area of the third window is smaller than the preset area and the first window is covered, adjusting the size and the direction of the third window according to a third preset size, a third rotation direction and a second preset angle to obtain a fifth window, and rendering the video frame to be played in the fifth window.
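Claim 5 branches on how the third window originally displaced the first one: if the first window was closed, it must be reopened alongside a rotated "fourth" window; if it was only covered, rotating back to a "fifth" window reveals it again. A sketch of that decision logic (the string labels are illustrative, not from the patent):

```python
def demote_third_window(third_area, preset_area, first_window_closed):
    """Decide which window renders the full video frame after the user
    shrinks the third window, following the two branches of claim 5."""
    if third_area >= preset_area:
        return "third"   # still at or above the preset area: no change
    if first_window_closed:
        # Reopen the focus (first) window; rotate the third window back
        # into a small "fourth" window for the full frame.
        return "fourth"
    # First window was merely covered: rotating into a "fifth" window
    # uncovers the focus window beneath it.
    return "fifth"
```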
6. The method of claim 1, further comprising:
acquiring the position information of the second window;
receiving a third preset operation, and calculating current position information of the second window according to the third preset operation and the position information;
and displaying the second window in an area corresponding to the current position information.
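Claim 6 is the floating-window drag: current position = stored position combined with the "third preset operation". Assuming that operation is a drag offset, and adding the common (but here hypothetical) policy of keeping the window fully on screen:

```python
def move_second_window(x, y, w, h, drag, screen_w, screen_h):
    """Compute the second window's current position from an assumed
    (dx, dy) drag offset, clamped so the window stays on screen."""
    nx = min(max(x + drag[0], 0), screen_w - w)
    ny = min(max(y + drag[1], 0), screen_h - h)
    return (nx, ny)
```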
7. A video data processing apparatus, characterized in that the apparatus comprises:
the data receiving module is used for receiving the video frame to be played; the video frame to be played comprises a horizontal screen video frame or a vertical screen video frame;
the area intercepting module is used for intercepting a part of area from the video frame to be played as a focus area; when the video frame to be played is a horizontal screen video frame, the focus area is a vertical screen area;
the rendering module is used for rendering the focus area in a first window of an image display layer and rendering the video frame to be played in a second window of the image display layer, wherein the second window is located above the first window, the first window is used for displaying the focus area enlarged according to a first preset proportion, and the second window is used for displaying the video frame to be played at a reduced size;
wherein the rendering module is further configured to:
after the size and the position of the second window are updated, when the updated area of the second window is larger than or equal to a preset area, adjust the size and the direction of the second window according to a first preset size, a first rotation direction and a first preset angle to obtain a third window, and close the first window or cover the first window with the third window; render the video frame to be played in the third window; wherein the aspect ratio of the third window and the aspect ratio of the first window are reciprocal.
8. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN201911319354.1A 2019-12-19 2019-12-19 Video data processing method, device, computer equipment and storage medium Active CN111107418B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911319354.1A CN111107418B (en) 2019-12-19 2019-12-19 Video data processing method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111107418A CN111107418A (en) 2020-05-05
CN111107418B true CN111107418B (en) 2022-07-12

Family

ID=70422701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911319354.1A Active CN111107418B (en) 2019-12-19 2019-12-19 Video data processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111107418B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111866590B (en) * 2020-07-17 2022-12-23 海信视像科技股份有限公司 Display device
CN111757162A (en) * 2020-06-19 2020-10-09 广州博冠智能科技有限公司 High-definition video playing method, device, equipment and storage medium
CN112003875A (en) * 2020-09-03 2020-11-27 北京云石海慧软件有限公司 Video focus content transmission system and method
CN112218160A (en) * 2020-10-12 2021-01-12 北京达佳互联信息技术有限公司 Video conversion method and device, video conversion equipment and storage medium
CN112565839B (en) * 2020-11-23 2022-11-29 青岛海信传媒网络技术有限公司 Display method and display device of screen projection image
CN112333395B (en) * 2020-12-01 2022-05-06 维沃移动通信(杭州)有限公司 Focusing control method and device and electronic equipment
CN112650467B (en) * 2020-12-24 2023-12-19 深圳市富途网络科技有限公司 Voice playing method and related device
CN114022590B (en) * 2020-12-30 2023-03-24 万翼科技有限公司 Picture rendering method and related equipment
CN112911371B (en) * 2021-01-29 2023-05-05 Vidaa美国公司 Dual-channel video resource playing method and display equipment
CN112819536B (en) * 2021-02-01 2023-09-01 北京奇艺世纪科技有限公司 Method, device, computer equipment and storage medium for displaying effect advertisement
CN113608644A (en) * 2021-08-13 2021-11-05 北京仁光科技有限公司 Multi-window adjusting method, readable storage medium, electronic device and system
CN115022683A (en) * 2022-05-27 2022-09-06 咪咕文化科技有限公司 Video processing method, device, equipment and readable storage medium
CN114942711A (en) * 2022-05-31 2022-08-26 北京字节跳动网络技术有限公司 Data playing method and device, computer equipment and storage medium
CN115190351B (en) * 2022-07-06 2023-09-29 Vidaa国际控股(荷兰)公司 Display equipment and media resource scaling control method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102106145A (en) * 2008-07-30 2011-06-22 三星电子株式会社 Apparatus and method for displaying an enlarged target region of a reproduced image
CN107396165A (en) * 2016-05-16 2017-11-24 杭州海康威视数字技术股份有限公司 A kind of video broadcasting method and device
CN107852531A (en) * 2015-08-25 2018-03-27 Lg电子株式会社 Display device and its control method
CN108182016A (en) * 2016-12-08 2018-06-19 Lg电子株式会社 Mobile terminal and its control method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5659510B2 (en) * 2010-03-10 2015-01-28 ソニー株式会社 Image processing apparatus, image processing method, and program
US10622023B2 (en) * 2016-07-01 2020-04-14 Snap Inc. Processing and formatting video for interactive presentation


Also Published As

Publication number Publication date
CN111107418A (en) 2020-05-05

Similar Documents

Publication Publication Date Title
CN111107418B (en) Video data processing method, device, computer equipment and storage medium
US11532072B2 (en) Multifunctional environment for image cropping
US11500513B2 (en) Method for icon display, terminal, and storage medium
CN111866423B (en) Screen recording method for electronic terminal and corresponding equipment
US9026938B2 (en) Dynamic detail-in-context user interface for application access and content access on electronic displays
CN111414225B (en) Three-dimensional model remote display method, first terminal, electronic device and storage medium
US9792268B2 (en) Zoomable web-based wall with natural user interface
US11825177B2 (en) Methods, systems, and media for presenting interactive elements within video content
CN110825289A (en) Method and device for operating user interface, electronic equipment and storage medium
CN113892129B (en) Creating virtual parallax for three-dimensional appearance
CN107870703B (en) Method, system and terminal equipment for full-screen display of picture
CN113282262A (en) Control method and device for screen projection display picture, mobile terminal and storage medium
CN113342247B (en) Material processing method and device, electronic equipment and storage medium
CN114003160B (en) Data visual display method, device, computer equipment and storage medium
CN114756159B (en) Intelligent interaction panel, data processing method and device thereof, and computer storage device
CN115658196A (en) Page display method and device, electronic equipment and storage medium
CN117425875A (en) Operation method, apparatus, electronic device, and computer-readable storage medium
KR20090000107A (en) Representation method on map for video image taken by mobile camera
US20240020910A1 (en) Video playing method and apparatus, electronic device, medium, and program product
WO2023130544A1 (en) Three-dimensional scene interaction video generation method and apparatus
CN115988261A (en) Video playing method, device, equipment and storage medium
CN116055783A (en) Video playing method and device, electronic equipment and storage medium
WO2020062681A1 (en) Eyeball motion trajectory-based test question magnifying method and system, and device
CN117319549A (en) Multimedia data selection method and device
CN117939236A (en) Special effect interaction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant