CN112243148A - Display device and video picture scaling method - Google Patents


Info

Publication number
CN112243148A
CN112243148A (application CN201910642088.XA; granted as CN112243148B)
Authority
CN
China
Prior art keywords
display
video
time point
preset
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910642088.XA
Other languages
Chinese (zh)
Other versions
CN112243148B (en)
Inventor
张会岳
杨依灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vidaa Netherlands International Holdings BV
Original Assignee
Qingdao Hisense Media Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Media Network Technology Co Ltd filed Critical Qingdao Hisense Media Network Technology Co Ltd
Priority to CN201910642088.XA priority Critical patent/CN112243148B/en
Publication of CN112243148A publication Critical patent/CN112243148A/en
Application granted granted Critical
Publication of CN112243148B publication Critical patent/CN112243148B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4438Window management, e.g. event handling following interaction with the user interface

Abstract

The application discloses a display device and a video picture scaling method. After the controller of the display device receives an instruction to scale a video picture, it updates the user layer picture according to a first scaling rule; it also acquires the position information of a transparent window in the user layer within a preset time period, generates a second scaling rule, and controls the video picture to be enlarged according to the second scaling rule. Because the position information of the transparent window corresponds to the preset display parameters of the video picture within the preset time period, the position of the transparent window at a future time point can be predicted from its positions within that period; that is, the display parameters of the video picture to be displayed at a future time point, i.e. the second scaling rule, can be predicted, so that the video picture to be displayed can be cached in advance. This avoids delayed display of the video picture caused by time-consuming caching and the like, prevents the edge-cutting and black-edge phenomena, and makes the visual experience more friendly to the user.

Description

Display device and video picture scaling method
Technical Field
The present application relates to the field of display screen zooming technologies, and in particular, to a display device and a method for zooming a video screen in a user interface displayed by the display device.
Background
With the popularization of networks and intelligent terminals, users have more and more channels and modes for watching video programs. For example, video display applications such as iQIYI and Tencent Video are installed on devices such as smart televisions or mobile phones, and a user can select and watch videos by running these applications on the device.
The user interface typically includes a user layer (UI layer), provided by a browser, which presents video profiles and other information, and a video layer (Video layer), which displays the video pictures frame by frame. Currently, video display applications also provide a video picture zooming function; with it, a user can switch a video picture between at least two modes, full-screen playing and small-window playing. The zooming process continuously changes the display parameters of the video picture: at each preset time point t_i (1 ≤ i ≤ n) in the zooming process, the preset display parameters of the frame to be displayed are applied to a bottom-layer module of the video layer, and the bottom-layer module caches the corresponding video picture P_i according to the preset display parameters and displays it in the video layer. At the same time, the user layer loads the corresponding picture to adapt to the scaling of the video picture.
In the above process, suppose the buffering time of one frame of video picture is Δt_i; the total buffering time of n frames is then ΣΔt_i. Because the accumulated buffering time ΣΔt_i is likely to be longer than the loading time of the user layer picture, the video layer picture will be displayed with a delay compared to the user layer picture. Moreover, to avoid stuttering of the scaling caused by frame dropping, the bottom-layer module usually has the video layer display P_(i-k) only after P_i has finished buffering; that is, the (i-k)-th frame is displayed only after the k frames following it have also been buffered. This further increases the delay of the video layer pictures.
The delayed display of the video layer pictures causes the phenomenon of black edge or edge cutting to often occur in the zooming process. Specifically, for the zoom-in process, black borders appear because the video layer picture is slower than the user layer picture; for the zoom-out process, since the video layer picture is slower than the user layer picture, a cut edge occurs, i.e., the video picture is not fully displayed. The black edge or edge trimming phenomenon seriously affects the visual experience of the user.
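The timing problem described above can be illustrated with a toy simulation. All numbers below are hypothetical (the patent gives no concrete timings): each of the n frames costs Δt_i to buffer, so the video layer finishes ΣΔt_i after scaling starts, while the user layer only pays its own load time once.

```python
# Toy illustration of the delay analysis above (all timings are hypothetical).
# The video layer must buffer every frame before showing it, so its finish
# time is the cumulative sum of per-frame buffering costs; the user layer
# only pays its own load time once.

def video_layer_ready_times(buffer_costs):
    """Return the time at which each frame P_i becomes displayable."""
    ready, elapsed = [], 0.0
    for dt in buffer_costs:
        elapsed += dt          # sigma(delta t_i): buffering accumulates
        ready.append(elapsed)
    return ready

buffer_costs = [0.02] * 30     # 30 frames, 20 ms of buffering each (assumed)
ui_load_time = 0.3             # user layer picture load time (assumed)

ready = video_layer_ready_times(buffer_costs)
total_video_time = ready[-1]   # sigma(delta t_i) = 0.6 s

# The video layer lags the user layer; this lag is when black edges
# (zoom-in) or cut edges (zoom-out) appear.
lag = total_video_time - ui_load_time
```

With these assumed numbers the video layer finishes 0.3 s after the user layer, which is exactly the window in which the two layers disagree about the window geometry.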
Disclosure of Invention
The application provides a method for zooming a video picture in a video display interface presented by display equipment, a video picture zooming service device based on a parameter prediction mechanism and the display equipment, so as to solve the problem of edge cutting or black edges in the process of zooming the video picture.
In a first aspect, the present application provides a display device comprising:
a display configured to present a user interface, the user interface including a video layer video picture and a user layer picture, the user layer being presented floating on the video layer, the user layer being provided by a browser, the user layer being a transparent window at the video picture such that the video picture is presented through the user layer;
a controller in communication with the display, the controller being configured, while presenting the user interface, to perform:
receiving an instruction of amplifying the video picture, and updating the user layer picture according to a first scaling rule in response to the instruction of amplifying the video picture, wherein the transparent window is amplified according to the first scaling rule;
acquiring position information of the transparent window in the user layer within a preset time period, and generating a second scaling rule;
and controlling the video picture to be amplified to a target display position according to the second scaling rule.
Further, the controller obtains position information of the transparent window in the user layer within a preset time period, and generates a second scaling rule, including:
in response to an instruction of amplifying the video picture, starting a video picture zooming service based on a parameter prediction mechanism, wherein the video picture zooming service is used for acquiring user layer picture data in a preset time period;
determining the position information of the transparent window in the user layer within the preset time period according to the user layer picture data;
generating the second scaling rule according to the position information of the transparent window in the user layer;
the video scaling service continues to run after startup until the video is zoomed in to a target display position.
Further, the generating the second scaling rule according to the position information of the transparent window in the user layer specifically includes:
according to the current time point, acquiring position information of the transparent window corresponding to at least one time point in the preset time period, wherein the position information of the transparent window is used for acquiring preset display parameters corresponding to the video picture;
generating a display parameter predicted value corresponding to a target time point according to the acquired preset display parameters, wherein the target time point is a time point which is behind the current time point and is away from the current time point by a preset interval, and the display parameter predicted value is used for caching a video picture to be displayed at the target time point in advance;
and generating the second scaling rule according to the target time point and the display parameter predicted value corresponding to the target time point.
Further, the controller generates a display parameter predicted value corresponding to the target time point according to the acquired preset display parameter, and the method comprises the following steps:
calculating the variation of the preset parameters according to the acquired preset display parameters;
calculating the display parameter prediction variable quantity corresponding to the target time point by using the prediction weight and the predetermined parameter variable quantity;
and calculating a display parameter predicted value corresponding to the target time point according to the display parameter predicted variation.
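The three calculation steps above can be sketched as follows. The function name and the numeric values are illustrative, not from the patent: the variation of a predetermined parameter is computed from two recently acquired display parameters, multiplied by the prediction weight to obtain the predicted variation Δp, and Δp is added to the current value.

```python
# Sketch of the three-step calculation described above (values hypothetical).

def predict_display_parameter(prev_value, curr_value, weight):
    """Predict a display parameter's value at the target time point.

    prev_value, curr_value: the predetermined display parameter (e.g. the
    window height) at two consecutive preset time points;
    weight: the current prediction weight.
    """
    variation = curr_value - prev_value  # predetermined parameter variation
    delta_p = weight * variation         # predicted variation of the parameter
    return curr_value + delta_p          # predicted value at the target point

# e.g. the transparent window's height grew from 300 px to 320 px; with a
# prediction weight of 1.5 the height predicted for the target time point
# is 320 + 1.5 * 20 = 350 px.
h_predicted = predict_display_parameter(300.0, 320.0, 1.5)
```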
Further, after the controller calculates the predetermined parameter variation, the method further includes:
judging whether the number of times the prediction weight has been used exceeds a preset number of uses;
if so, updating the prediction weight to the product of the current prediction weight and a preset updating coefficient, wherein the preset updating coefficient is used for updating the prediction weight according to the preset number of uses;
if not, using the current prediction weight for the next calculation.
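The weight-update rule above can be sketched as a small helper. The initial value, usage limit, and update coefficient are placeholders (the patent does not fix them), and resetting the use counter after a decay is an assumption about behavior the text leaves open.

```python
# Sketch of the prediction-weight update rule described above.
# Concrete numbers are hypothetical; resetting the counter after a decay
# is an assumed detail, not stated in the patent.

class PredictionWeight:
    def __init__(self, initial, max_uses, update_coeff):
        self.value = initial              # current prediction weight
        self.max_uses = max_uses          # preset number of uses before decay
        self.update_coeff = update_coeff  # preset updating coefficient
        self.uses = 0

    def get(self):
        """Return the weight for the next calculation, multiplying it by
        the update coefficient once its use count exceeds the preset limit."""
        self.uses += 1
        if self.uses > self.max_uses:
            self.value *= self.update_coeff
            self.uses = 1                 # assumed: restart the count
        return self.value

w = PredictionWeight(initial=2.0, max_uses=3, update_coeff=0.5)
first_four = [w.get() for _ in range(4)]  # weight decays on the 4th use
```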
Further, when the current time point is the first time point of the scaling process, the method further includes:
if the zooming process is judged to be the amplifying process according to the display parameters corresponding to the zooming process starting point and the preset display parameters corresponding to the current time point, selecting the initial value of the prediction weight corresponding to the amplifying process for calculating the prediction variation of the display parameters corresponding to the first target time point in the amplifying process;
and if the zooming process is judged to be a zooming-out process according to the display parameters corresponding to the zooming-out process starting point and the preset display parameters corresponding to the current time point, selecting the initial value of the prediction weight corresponding to the zooming-out process for calculating the prediction variation of the display parameters corresponding to the first target time point in the zooming-out process.
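The direction test above can be sketched as follows. The two initial weight values are placeholders; the patent only states that zoom-in and zoom-out use different initial prediction weights, and comparing heights is one plausible way to decide the direction.

```python
# Sketch of selecting the initial prediction weight by scaling direction.
# The concrete initial values are hypothetical; the comparison of display
# heights as the direction test is also an assumption.

ZOOM_IN_INITIAL_WEIGHT = 2.0    # assumed value
ZOOM_OUT_INITIAL_WEIGHT = 0.5   # assumed value

def initial_prediction_weight(start_height, current_height):
    """Choose the initial weight from the display parameters at the scaling
    start point and at the current (first) time point."""
    if current_height > start_height:   # parameters growing: zoom-in
        return ZOOM_IN_INITIAL_WEIGHT
    return ZOOM_OUT_INITIAL_WEIGHT      # otherwise: zoom-out

w_in = initial_prediction_weight(300.0, 320.0)   # window growing
w_out = initial_prediction_weight(320.0, 300.0)  # window shrinking
```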
Further, the controller calculates a predetermined parameter variation according to the acquired predetermined display parameter, including:
the predetermined display parameters comprise a predetermined display height, a predetermined display width and predetermined reference point coordinates;
and calculating the variation of the preset display height or the variation of the preset display width corresponding to at least one time point in a preset time period before the current time point as the variation of the preset parameter.
Further, the controller calculates a predicted variation of the display parameter corresponding to the target time point using the prediction weight and a predetermined parameter variation, including:
and calculating the product of the prediction weight and the preset parameter variation as the display parameter prediction variation corresponding to the target time point.
Further, the controller calculates the predicted value of the display parameter corresponding to the target time point from the predicted variation of the display parameter using the following formulas:

(The formulas for x_G, y_G and w_G are rendered as images in the original publication and are not reproduced here.)

h_G = h_i + Δp;

where x_G, y_G, w_G and h_G respectively represent the predicted reference abscissa, predicted reference ordinate, predicted display width and predicted display height corresponding to the target time point; x_i, y_i, w_i and h_i respectively represent the predetermined reference abscissa, predetermined reference ordinate, predetermined display width and predetermined display height corresponding to the current time point; and Δp represents the predicted variation of the display parameter.
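Only the height formula survives in the text; the formulas for x_G, y_G and w_G appear as images in the source. The sketch below is therefore a plausible reconstruction, assuming the window scales about its center with the aspect ratio preserved; everything except h_G = h_i + Δp is an assumption, not the patent's stated formulas.

```python
# Sketch of the predicted display parameters at the target time point.
# h_G = h_i + delta_p is given in the text; the expressions for x_G, y_G
# and w_G below (aspect-ratio-preserving, center-anchored growth) are
# assumptions, since those formulas are only images in the source.

def predict_parameters(x_i, y_i, w_i, h_i, delta_p):
    h_g = h_i + delta_p                # given: h_G = h_i + delta_p
    w_g = w_i + delta_p * (w_i / h_i)  # assumed: width grows in proportion
    x_g = x_i - (w_g - w_i) / 2        # assumed: reference point shifts so
    y_g = y_i - (h_g - h_i) / 2        # the window stays centred as it grows
    return x_g, y_g, w_g, h_g

# A 16:9 window of 960x540 px with reference point (100, 200), predicted
# to grow 54 px taller at the target time point:
x_g, y_g, w_g, h_g = predict_parameters(100.0, 200.0, 960.0, 540.0, 54.0)
```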
In a second aspect, the present application provides a method for zooming a video frame in a user interface presented by a display device, including:
receiving an instruction of amplifying a video picture in a video layer;
in response to a received instruction for enlarging a video picture in the video layer, updating the user layer picture according to the first scaling rule, wherein the user layer is displayed floating on the video layer, the user layer is provided by a browser, and a transparent window is arranged at the video picture so that the video picture is displayed through the user layer, the transparent window being enlarged according to the first scaling rule;
acquiring position information of the transparent window in the user layer within a preset time period, and generating a second scaling rule;
controlling the video picture to be enlarged to a target display position according to the second scaling rule.
Compared with the prior art, the technical solutions proposed in the exemplary embodiments of the present application have the following beneficial effects:
in the display device of the embodiment of the application, after receiving an instruction for enlarging a video picture, the controller updates the picture of the user layer according to a first scaling rule on the one hand, and acquires the position information of the transparent window in the user layer within a preset time period on the other hand to generate a second scaling rule, and controls the video picture to be enlarged to a target display position according to the second scaling rule. In the preset time period, the position information of the transparent window in the user layer corresponds to the preset display parameters of the video picture in the preset time period, so that the position information of the transparent window in the user layer at a certain future time point can be predicted according to the position information of the transparent window in the user layer in the preset time period, namely the display parameters of the video picture to be displayed at a certain future time point are predicted, namely a second scaling rule is obtained, the video picture to be displayed can be cached in advance, the delayed display of the video picture due to time-consuming caching and other reasons is avoided, the picture to be displayed in the video layer can be synchronously displayed with the picture to be displayed in the user layer, the phenomena of edge cutting and black edge can not occur, and the visual experience of a user is more friendly.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. It is obvious that, for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a diagram illustrating an operational scenario between a display device and a control apparatus according to an exemplary embodiment of the present application;
fig. 2 is a block diagram illustrating a hardware configuration of a display device 200 according to an exemplary embodiment of the present application;
fig. 3 is a block diagram illustrating a hardware configuration of the control apparatus 100 according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a functional configuration of a display device 200 shown in the present application according to an exemplary embodiment;
FIG. 5a is a block diagram of a software configuration of a display device 200 according to an exemplary embodiment of the present application;
FIG. 5b is a schematic diagram illustrating an application center in the display device 200 according to an exemplary embodiment of the present application;
FIG. 6a is a schematic diagram of a user interface of a display device 200 according to an exemplary embodiment of the present application;
FIG. 6b is a schematic view of a video display interface of the display device 200 according to an exemplary embodiment of the present application;
FIG. 7a is a schematic view of a user interface when a user selects a video display application with VOD functionality in the application center;
FIG. 7b is a schematic view of a user interface when a user selects a video program in a video display application;
FIG. 7c is a schematic view of a video display interface entered by a user selecting a video program in a video display application;
FIG. 7d is a schematic view of another video display interface entered by the user clicking "full screen" in the video display interface shown in FIG. 7c;
FIG. 7e is a schematic view of another video display interface entered by the user clicking "small-window play" in the video display interface shown in FIG. 7d;
FIG. 7f is a schematic diagram illustrating the black border phenomenon occurring while the video picture of the video display interface is enlarged from FIG. 7c to FIG. 7d;
FIG. 7g is a schematic diagram illustrating the edge-cutting phenomenon occurring while the video picture of the video display interface is reduced from FIG. 7d to FIG. 7e;
FIG. 8a is a diagram illustrating another application scenario in accordance with an exemplary embodiment of the present application;
FIG. 8b is a schematic view of another video display interface entered by the user clicking "full screen" in the video display interface shown in FIG. 8a;
FIG. 8c is a schematic view of yet another video display interface entered by the user clicking the "small-window interface" in the video display interface shown in FIG. 8b;
FIG. 8d is a schematic diagram illustrating the black border phenomenon occurring while the video picture of the video display interface is enlarged from FIG. 8a to FIG. 8b;
FIG. 9a is a diagram illustrating a hardware configuration of a display device according to an exemplary embodiment of the present application;
FIG. 9b is a flow chart illustrating a method performed by a display device controller according to an exemplary embodiment of the present application;
FIG. 10 is a flowchart illustrating one implementation of S924 according to an example embodiment of the present application;
FIG. 11 is a flowchart illustrating one implementation of S103 according to an exemplary embodiment of the present application;
FIG. 12 is a flowchart illustrating one implementation of S112 according to an exemplary embodiment of the present application;
fig. 13 is a schematic diagram illustrating a video frame zooming method according to an exemplary embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure can also be utilized independently and separately from the other aspects.
It should be understood that the terms "first," "second," "third," and the like in the description, in the claims and in the drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The term "remote control" as used in this application refers to a component of an electronic device (such as the display device disclosed in this application) that can typically control the device wirelessly over a relatively short range. The component is generally connected to the electronic device using infrared and/or radio frequency (RF) signals and/or Bluetooth, and may also include functional modules such as WiFi, wireless USB, Bluetooth, and motion sensors. For example, a hand-held touch remote control replaces most of the physical built-in hard keys of a common remote control device with a user interface on a touch screen.
The term "gesture" as used in this application refers to a user's behavior through a change in hand shape or an action such as hand motion to convey a desired idea, action, purpose, or result.
Before a detailed description is given to a specific implementation of the technical solution of the present application, a basic application scenario of the technical solution of the present application is described.
Fig. 1 is a diagram illustrating an operation scenario between a display device and a control apparatus according to an exemplary embodiment. As shown in fig. 1, a user may operate the display apparatus 200 through the control device 100.
The control device 100 may be a remote controller 100A, which includes infrared protocol communication, bluetooth protocol communication, other short-distance communication methods, and the like, and controls the display apparatus 200 in a wireless or other wired manner. The user may input a user instruction through a key on a remote controller, voice input, control panel input, etc., to control the display apparatus 200. Such as: the user can input a corresponding control command through a volume up/down key, a channel control key, up/down/left/right moving keys, a voice input key, a menu key, a power on/off key, etc. on the remote controller, to implement the function of controlling the display device 200.
The control device 100 may also be an intelligent device, such as a mobile terminal 100B, a tablet computer, a notebook computer, and the like. For example, the display device 200 is controlled using an application program running on the smart device. The application may provide the user with various controls through an intuitive User Interface (UI) on a screen associated with the smart device.
For example, the mobile terminal 100B may install a software application with the display device 200, implement connection communication through a network communication protocol, and implement the purpose of one-to-one control operation and data communication. Such as: the mobile terminal 100B and the display device 200 may establish a control instruction protocol, synchronize the remote control keyboard to the mobile terminal 100B, and control the function of the display device 200 by controlling the user interface on the mobile terminal 100B. The audio and video content displayed on the mobile terminal 100B may also be transmitted to the display device 200, so as to implement a synchronous display function.
As shown in fig. 1, the display apparatus 200 also performs data communication with the server 300 through various communication means. The display device 200 may be allowed to be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 300 may provide various contents and interactions to the display apparatus 200. Illustratively, the display device 200 receives software program updates, or accesses a remotely stored digital media library, by sending and receiving information, as well as Electronic Program Guide (EPG) interactions. The servers 300 may be a group or groups, and may be one or more types of servers. Other web service contents such as a video on demand and an advertisement service are provided through the server 300.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device. The specific display device type, size, resolution, etc. are not limiting, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired.
The display apparatus 200 may additionally provide an intelligent network tv function that provides a computer support function in addition to the broadcast receiving tv function. Examples include a network television, a display device, an Internet Protocol Television (IPTV), and the like.
Fig. 2 is a block diagram illustrating a hardware configuration of a display device 200 according to an exemplary embodiment of the present application. As shown in fig. 2, the display apparatus 200 may include a tuner demodulator 220, a communicator 230, a detector 240, an external device interface 250, a controller 210, a memory 290, a user input interface, a video processor 260-1, an audio processor 260-2, a display 280, an audio output interface 272, and a power supply.
The tuning demodulator 220 receives the broadcast television signals in a wired or wireless manner, may perform modulation and demodulation processing such as amplification, mixing, resonance, and the like, and is configured to demodulate, from a plurality of wireless or wired broadcast television signals, an audio/video signal carried in a frequency of a television channel selected by a user, and additional information (e.g., an EPG data signal).
The tuner demodulator 220 is responsive to the user-selected television channel frequency and the television signal carried thereby, as selected by the user and as controlled by the controller 210.
The tuner demodulator 220 may receive signals according to different broadcasting systems of television signals, such as: terrestrial broadcasting, cable broadcasting, satellite broadcasting, internet broadcasting, or the like; and according to different modulation types, the digital modulation mode and the analog modulation mode can be adopted; and can demodulate the analog signal and the digital signal according to different types of the received television signals.
In other exemplary embodiments, the tuner/demodulator 220 may be in an external device, such as an external set-top box. In this way, the set-top box outputs television audio/video signals after modulation and demodulation, and the television audio/video signals are input into the display device 200 through the external device interface 250.
The communicator 230 is a component for communicating with an external device or an external server according to various communication protocol types. For example: the communicator 230 may include a WIFI module 231, a bluetooth communication protocol module 232, a wired ethernet communication protocol module 233, and other network communication protocol modules or near field communication protocol modules.
The display apparatus 200 may establish a connection of a control signal and a data signal with an external control apparatus or a content providing apparatus through the communicator 230. For example, the communicator may receive a control signal of the remote controller 100 according to the control of the controller.
The detector 240 is a component of the display apparatus 200 for collecting signals of an external environment or interaction with the outside. The detector 240 may include a light receiver 242, a sensor for collecting the intensity of ambient light, which may be used to adapt to display parameter changes, etc.; the system can further include an image collector 241, such as a camera, etc., which can be used for collecting external environment scenes, collecting attributes of the user or interacting gestures with the user, adaptively changing display parameters, and recognizing user gestures, so as to realize the function of interaction with the user.
In some other exemplary embodiments, the detector 240 may further include a temperature sensor. By sensing the ambient temperature, the display device 200 may adaptively adjust the display color temperature of the image. For example, when the ambient temperature is high, the display apparatus 200 may be adjusted to display images at a cooler color temperature; when the ambient temperature is low, the display device 200 may be adjusted to display images at a warmer color temperature.
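The temperature-driven color-temperature adjustment described above can be sketched as a simple mapping. A minimal sketch in Python; the thresholds and Kelvin values are illustrative assumptions, as the patent specifies no concrete numbers:

```python
def adapt_color_temperature(ambient_temp_c):
    """Map ambient temperature (degrees C) to a display color temperature (K).

    Higher ambient temperature -> cooler (higher Kelvin) image;
    lower ambient temperature -> warmer (lower Kelvin) image.
    The thresholds and Kelvin values are illustrative only.
    """
    if ambient_temp_c >= 30:      # hot room: cooler image
        return 9300
    elif ambient_temp_c <= 10:    # cold room: warmer image
        return 5000
    else:                         # comfortable range: neutral
        return 6500
```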
In some other exemplary embodiments, the detector 240 may further include a sound collector, such as a microphone, which may be used to receive the user's voice, including a voice signal carrying a control instruction for controlling the display device 200, or to collect ambient sound for identifying the type of the ambient scene, so that the display device 200 can adapt to ambient noise.
The external device interface 250 provides a component for the controller 210 to control data transmission between the display apparatus 200 and other external apparatuses. The external device interface may be connected with an external apparatus such as a set-top box, a game device, a notebook computer, etc. in a wired/wireless manner, and may receive data such as a video signal (e.g., moving image), an audio signal (e.g., music), additional information (e.g., EPG), etc. of the external apparatus.
The external device interface 250 may include: a High Definition Multimedia Interface (HDMI) terminal 251, a Composite Video Blanking Sync (CVBS) terminal 252, an analog or digital component terminal 253, a Universal Serial Bus (USB) terminal 254, a red, green, blue (RGB) terminal (not shown), and the like.
The controller 210 controls the operation of the display device 200 and responds to the operation of the user by running various software control programs (such as an operating system and various application programs) stored on the memory 290.
As shown in fig. 2, the controller 210 includes a random access memory RAM213, a read only memory ROM214, a graphics processor 216, a CPU processor 212, a communication interface 218, and a communication bus. The RAM213 and the ROM214, the graphic processor 216, the CPU processor 212, and the communication interface 218 are connected via a bus.
The ROM 214 is used to store instructions for various system boots. When the display device 200 is powered on upon receipt of a power-on signal, the CPU processor 212 executes the system boot instructions in the ROM 214 and copies the operating system stored in the memory 290 into the RAM 213 to begin running the operating system. After the operating system has finished starting, the CPU processor 212 copies the various application programs in the memory 290 into the RAM 213, and then launches and runs the various application programs.
The graphics processor 216 is used to generate various graphics objects, such as icons, operation menus, and graphics displaying user-input instructions. It includes an operator, which performs operations based on the various interactive instructions input by the user and displays the various objects according to their display attributes, and a renderer, which generates the various objects based on the operator and displays the rendered result on the display 280.
A CPU processor 212 for executing operating system and application program instructions stored in memory 290. And executing various application programs, data and contents according to various interactive instructions received from the outside so as to finally display and play various audio and video contents.
In some exemplary embodiments, the CPU processor 212 may include a plurality of processors. The plurality of processors may include one main processor and a plurality of or one sub-processor. A main processor for performing some operations of the display apparatus 200 in a pre-power-up mode and/or operations of displaying a screen in a normal mode. A plurality of or one sub-processor for performing an operation in a standby mode or the like.
The communication interfaces may include a first interface 218-1 through an nth interface 218-n. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 210 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 280, the controller 210 may perform an operation related to the object selected by the user command.
The object may be any selectable object, such as a hyperlink or an icon. Operations related to the selected object include, for example, displaying the page, document, or image connected to a hyperlink, or executing the program corresponding to an icon. The user command for selecting the UI object may be a command input through various input means (e.g., a mouse, keyboard, or touch pad) connected to the display apparatus 200, or a voice command corresponding to speech uttered by the user.
The memory 290 includes a memory for storing various software modules for driving and controlling the display apparatus 200. Such as: various software modules stored in memory 290, including: the system comprises a basic module, a detection module, a communication module, a display control module, a browser module, various service modules and the like.
The basic module is a bottom-layer software module for signal communication between the hardware components of the display device 200 and for sending processing and control signals to the upper-layer modules. The detection module is a management module for collecting various information from sensors or the user input interface, and for performing digital-to-analog conversion and analysis management.
For example: the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module is a module for controlling the display 280 to display image content, and may be used to play information such as multimedia image content and UI interface. The communication module is used for carrying out control and data communication with external equipment. And the browser module is used for executing data communication between the browsing servers. The service module is a module for providing various services and various application programs.
Meanwhile, the memory 290 also stores received external data and user data, images of the respective items in the various user interfaces, visual-effect maps of the focus object, and the like.
The user input interface transmits a user's input signal to the controller 210, or transmits a signal output by the controller 210 to the user. For example, the control device (e.g., a mobile terminal or a remote controller) may send an input signal entered by the user, such as a power switch signal, a channel selection signal, or a volume adjustment signal, to the user input interface, which forwards it to the controller; alternatively, the control device may receive an output signal such as audio, video, or data that the controller outputs via the user input interface, and display the received output signal or output it in audio or vibration form.
In some embodiments, a user may enter a user command on a Graphical User Interface (GUI) displayed on the display 280, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
The video processor 260-1 is configured to receive a video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a video signal that is directly displayed or played on the display 280.
Illustratively, the video processor 260-1 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module demultiplexes the input audio/video data stream; for example, an input MPEG-2 stream is demultiplexed into a video signal and an audio signal.
And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like.
And the image synthesis module superimposes and mixes the GUI signal generated by the graphics generator, based on user input or otherwise, with the scaled video image, so as to generate an image signal for display.
The frame rate conversion module converts the frame rate of the input video, for example converting a 24 Hz, 25 Hz, 30 Hz, or 60 Hz video to a 60 Hz, 120 Hz, or 240 Hz frame rate, where the input frame rate may be related to the source video stream and the output frame rate may be related to the refresh rate of the display. The conversion is commonly implemented by frame interpolation.
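One way to picture frame rate conversion is to ask how the output frame slots are distributed across the input frames. A minimal sketch, assuming a simple repetition/interpolation cadence; this is an illustration, not the module's actual implementation:

```python
def pulldown_cadence(in_rate, out_rate):
    """Distribute out_rate output frame slots across in_rate input frames.

    Each entry says how many output slots the corresponding input frame
    occupies (via repetition or as anchors for interpolated frames).
    E.g. 24 fps -> 60 fps yields the alternating 2:3 pattern.
    """
    cadence = []
    prev = 0
    for i in range(1, in_rate + 1):
        # cumulative output slots covered by the first i input frames
        cur = (i * out_rate) // in_rate
        cadence.append(cur - prev)
        prev = cur
    return cadence

# 24 -> 60: alternating 2, 3, 2, 3, ... totalling 60 slots per second
assert sum(pulldown_cadence(24, 60)) == 60
```

For integer ratios such as 25 Hz to 50 Hz the cadence is uniform (every input frame fills exactly two slots), which is why those conversions show no judder.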
And a display formatting module for converting the signal output by the frame rate conversion module into a signal conforming to a display format of a display, such as converting the format of the signal output by the frame rate conversion module to output an RGB data signal.
And a display 280 for receiving the image signal input from the video processor 260-1 and displaying video content, images, and the menu manipulation interface. The display 280 includes a display component for presenting a picture and a driving component for driving the display of the image. The displayed video content may come from the video in the broadcast signal received by the tuner demodulator 220, or from video content input via the communicator or the external device interface. The display 280 also displays the user manipulation interface (UI) generated in the display apparatus 200 and used to control the display apparatus 200.
And, a driving component for driving the display according to the type of the display 280. Alternatively, in case the display 280 is a projection display, it may also comprise a projection device and a projection screen.
The audio processor 260-2 is configured to receive an audio signal, decompress and decode the audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, amplification and other audio data processing to obtain an audio signal that can be played in the speaker 272.
An audio output interface 270 for receiving the audio signal output by the audio processor 260-2 under the control of the controller 210. The audio output interface may include a speaker 272, or an external sound output terminal 274 for outputting to a sound-generating device of an external apparatus, such as an external sound terminal or an earphone output terminal.
In other exemplary embodiments, video processor 260-1 may comprise one or more chip components. The audio processor 260-2 may also include one or more chips.
And, in other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips or may be integrated in one or more chips with the controller 210.
And a power supply for supplying power supply support to the display apparatus 200 from the power input from the external power source under the control of the controller 210. The power supply may include a built-in power supply circuit installed inside the display apparatus 200, or may be a power supply installed outside the display apparatus 200, such as a power supply interface for providing an external power supply in the display apparatus 200.
Fig. 3 is a block diagram illustrating a hardware configuration of the control apparatus 100 according to an exemplary embodiment of the present application. As shown in fig. 3, the control device 100 includes a controller 110, a communicator 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control apparatus 100 is configured to control the display device 200: it may receive a user's input operation instruction and convert the operation instruction into an instruction that the display device 200 can recognize and respond to, serving as an interaction intermediary between the user and the display device 200. For example, the user operates the channel up/down keys on the control device 100, and the display device 200 responds with the channel up/down operation.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications that control the display device 200 according to user demands.
In some embodiments, as shown in fig. 1, the mobile terminal 100B or another intelligent electronic device may perform a function similar to the control apparatus 100 after an application for manipulating the display device 200 is installed. For example, by installing the application, the user may use the various function keys or virtual buttons of the graphical user interface available on the mobile terminal 100B or other intelligent electronic device to implement the functions of the physical keys of the control apparatus 100.
The controller 110 includes a processor 112, a RAM 113, a ROM 114, a communication interface, and a communication bus. The controller 110 is used to control the operation of the control device 100, the communication and coordination among its internal components, and external and internal data processing functions.
The communicator 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display apparatus 200. The communicator 130 may include at least one of a WIFI module 131, a bluetooth module 132, an NFC module 133, and the like.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touch pad 142, a sensor 143, a key 144, and the like. Such as: the user can realize a user instruction input function through actions such as voice, touch, gesture, pressing, and the like, and the input interface converts the received analog signal into a digital signal and converts the digital signal into a corresponding instruction signal, and sends the instruction signal to the display device 200.
The output interface includes an interface that transmits the received user instruction to the display apparatus 200. In some embodiments, it may be an infrared interface or a radio frequency interface. For example, when the infrared signal interface is used, the user input instruction is converted into an infrared control signal according to the infrared control protocol and sent to the display device 200 through the infrared sending module. As another example, when the radio frequency signal interface is used, the user input instruction is converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then transmitted to the display device 200 through the radio frequency transmitting terminal.
In some embodiments, the control device 100 includes at least one of a communicator 130 and an output interface. The communicator 130 is configured in the control device 100, such as: the modules of WIFI, bluetooth, NFC, etc. may send the user input command to the display device 200 through the WIFI protocol, or the bluetooth protocol, or the NFC protocol code.
And a memory 190 for storing various operation programs, data and applications for driving and controlling the control apparatus 100 under the control of the controller 110. The memory 190 may store various control signal commands input by a user.
And a power supply 180 for providing operational power support to the components of the control device 100 under the control of the controller 110, which may include a battery and associated control circuitry.
Fig. 4 is a functional configuration diagram of a display device 200 according to an exemplary embodiment of the present application. As shown in fig. 4, the memory 290 is used to store an operating system, an application program, contents, user data, and the like, and performs system operations for driving the display device 200 and various operations in response to a user under the control of the controller 210. The memory 290 may include volatile and/or nonvolatile memory.
The memory 290 is specifically used for storing an operating program for driving the controller 210 in the display device 200, and storing various applications installed in the display device 200, various applications downloaded by a user from an external device, various graphical user interfaces related to the applications, various objects related to the graphical user interfaces, user data information, and internal data of various supported applications. The memory 290 is used to store system software such as an Operating System (OS) kernel, middleware, and applications, and to store input video data and audio data, and other user data.
The memory 290 is specifically used for storing drivers and related data such as the video processor 260-1 and the audio processor 260-2, the display 280, the communication interface 230, the tuner demodulator 220, the detector 240, the input/output interface, etc.
In some embodiments, memory 290 may store software and/or programs, software programs for representing an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (e.g., the middleware, APIs, or applications), and the kernel may provide interfaces to allow the middleware and APIs, or applications, to access the controller to implement controlling or managing system resources.
The memory 290, for example, includes a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. The external command recognition module 2907 includes a pattern recognition module 2907-1, a voice recognition module 2907-2, and a key command recognition module 2907-3. The controller 210 performs functions such as: a broadcast television signal reception demodulation function, a television channel selection control function, a volume selection control function, an image control function, a display control function, an audio control function, an external instruction recognition function, a communication control function, an optical signal reception function, an electric power control function, a software control platform supporting various functions, a browser function, and the like.
Fig. 5a is a block diagram illustrating a software configuration of a display device 200 according to an exemplary embodiment of the present application. As shown in fig. 5 a:
the operating system 210, which includes operating software for handling various basic system services and performing hardware-related tasks, and which acts as an intermediary for data processing between application programs and hardware components.
In some embodiments, portions of the operating system kernel may contain a series of software to manage the display device hardware resources and provide services to other programs or software code.
In other embodiments, portions of the operating system kernel may include one or more device drivers, which may be a set of software code in the operating system that assists in operating or controlling the devices or hardware associated with the display device. The drivers may contain code that operates the video, audio, and/or other multimedia components. Examples include a display screen, a camera, Flash, WiFi, and audio drivers.
An accessibility module 211 for modifying or accessing applications so as to achieve the accessibility of the applications and the operability of their displayed content. A communication module 212 for connection with other peripherals via the associated communication interfaces and the communication network. A user interface module 213 for providing the objects that display the user interface, for access by each application program, so that user operability can be achieved; the user interface module 213 further includes a user layer, which provides a plurality of user-operable controls, and a video layer for displaying video pictures. An interface bottom-layer module 214, which comprises a bottom-layer module of the user layer and a bottom-layer module of the video layer, the latter providing the video picture to be displayed to the video layer. Control applications 215 for controlling process management, including runtime applications and the like.
The interface layout management module 220 is configured to manage a user interface object to be displayed, and control a user interface layout so that the user interface can be displayed in response to a user operation.
The event delivery system 230 may be implemented within the operating system 210 or in the application layer 240. In some embodiments, it is implemented partly within the operating system 210 and partly in the application layer 240, where it listens for various user input events and, upon recognition of various events or sub-events, implements one or more sets of predefined operations in response.
Illustratively, the video zoom service 231 runs on the event delivery system and receives user-input instructions for triggering the zoom function.
Fig. 5b is a schematic diagram of an application center in the display device 200 according to an exemplary embodiment of the present application. As shown in FIG. 5b, the application layer 240 contains various applications that may be executed on the display device. The application may include, but is not limited to, one or more applications such as: live television applications, Video On Demand (VOD) applications, media center applications, application centers, gaming applications, and the like.
The live television application program can provide live television through different signal sources. For example, a live television application may provide television signals using input from cable television, radio broadcasts, satellite services, or other types of live television services. And, the live television application may display a video of the live television signal on the display device.
A video-on-demand application may provide video from different storage sources. Unlike a live television application, video on demand provides a video display from a storage source; for example, the video on demand may come from the server side of cloud storage, or from local hard disk storage containing stored video programs.
The media center application program can provide various applications for playing multimedia content. For example, a media center may provide services other than live television or video on demand, through which a user may access various images or audio via the media center application.
At least one application program of the live television application program, the video-on-demand application program and the media center application program is provided with a zooming function module which is used for responding to a zooming instruction triggered by a user to realize zooming function.
The application program center can provide and store various application programs. The application may be a game, an application, or some other application associated with a computer system or other device that may be run on the smart television. The application center may obtain these applications from different sources, store them in local storage, and then be operable on the display device.
FIG. 6a is a schematic diagram of a user interface of a display device 200 according to an exemplary embodiment of the present application. As shown in fig. 6a, the user interface is displayed by overlapping display frames of different levels, and the multiple layers of frames are respectively used for presenting different image contents, for example, a first layer 610 may present system layer project contents, such as current attributes, etc., a second layer 620 may be used for presenting application layer project contents, such as web videos, VOD presentations, application program frames, etc., and a third layer 630 may be further included.
Since the images presented by the user interface, the content of the image presented by each layer, the source of the image, and the like are different according to different operations performed by the user on the display device 200, for convenience of clear description of the technical solution of the present application, the user interface when the display device presents the video images is referred to as a video display interface in the present application.
Fig. 6b is a schematic view of a video display interface of the display device 200 according to an exemplary embodiment of the present application. As shown in fig. 6b, the video display interface includes at least two layers, i.e., a video layer and a user layer, wherein the video layer is used for displaying a video frame, and the user layer is used for providing at least a video profile, recommendation information, and other information.
Further, the user layer can be displayed by overlapping a plurality of sub-layer pictures, and each sub-layer is used for displaying a part of content of the video display interface, so that other pictures except the video pictures can be displayed in a layered mode, and the display effect is guaranteed.
FIG. 7a is a schematic view of a user interface when a user selects a video display with video-on-demand functionality at an application center. The video display application (such as a browser and the like) is a non-system program and can be downloaded in an application store or transmitted and installed for use through an external storage device. Alternatively, the video display application is a system program presenting system parameters to the user. In one possible implementation, the video display application is pre-installed in the display device and is presented in a user interface corresponding to the application center, so that the user can click the video display application.
Fig. 7b is a schematic view of a user interface when a user selects a video program in a video display application. As shown in fig. 7b, the user interface at this time provides a plurality of user-selectable video program selection indicators, which the user can select to cause the display device to start playing the selected video program.
Fig. 7c is a schematic diagram of a user interface entered by a user selecting a video program in a video display application. As shown in fig. 7c, in the user interface at this time, the user layer screen presents information such as "home page", "hot", "recommended", "personal center", "program profile", "comment", "full screen", and the like; the video layer presents video pictures.
FIG. 7d is a further user interface entered when the user clicks "full screen" in the video display interface shown in fig. 7c. As shown in fig. 7d, compared with the presentation effect of fig. 7c, the user layer now presents only "small window playing", and the video picture presented by the video layer changes in display position and size as the display parameters change; most noticeably, the video picture is enlarged.
Fig. 7e is a schematic diagram of another user interface entered when the user clicks "small window playing" in the user interface shown in fig. 7d. It can be seen that, under the user's operation of the display device, the video picture in the user interface is reduced, the user layer picture adapts to the change of the video picture and presents more content, and the final presentation effect of fig. 7e is consistent with that of fig. 7c.
As described in the background of the present application, in the existing video scaling method, due to the delayed display of the video layer picture, a black edge or a cut edge often occurs during the scaling process.
Fig. 7f shows the black border phenomenon occurring while the video picture is enlarged from fig. 7c to fig. 7d. Specifically, the user layer picture is enlarged faster than the video picture, so the sizes of the two simultaneously displayed layers do not match, producing a black border.
Fig. 7g illustrates the edge-clipping phenomenon occurring while the video picture is reduced from fig. 7d to fig. 7e. Specifically, the user layer picture is reduced faster than the video picture, so the sizes of the two simultaneously displayed layers do not match, and the user layer picture covers the edge of the video picture, producing the clipped edge.
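The speed mismatch behind both defects can be illustrated numerically. A minimal sketch, assuming linear animations; the durations and pixel sizes are hypothetical, chosen only to show that the two layers disagree mid-animation:

```python
def animate(start, end, duration_ms, t_ms):
    """Linear size animation, clamped to the animation duration."""
    p = min(t_ms / duration_ms, 1.0)
    return start + (end - start) * p

# Hypothetical zoom-in: window width 640 -> 1920 pixels.
# The user-layer window animates in 200 ms while the video layer lags
# and takes 300 ms, so mid-animation the transparent window is already
# wider than the video picture -- the gap appears as a black border.
ui_w = animate(640, 1920, 200, 150)     # user layer at t = 150 ms
video_w = animate(640, 1920, 300, 150)  # video layer at t = 150 ms
assert ui_w > video_w
```

During a zoom-out the inequality reverses: the user layer shrinks first and its opaque regions cover the still-larger video picture, which is the clipped edge of fig. 7g.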
Fig. 8a is a diagram illustrating another application scenario according to an exemplary embodiment of the present application. As shown in fig. 8a, the mobile terminal runs a pre-installed video display application, and the user watches a video in a small window play mode.
FIG. 8b is a schematic diagram of another user interface entered by the user clicking "full screen" on the video display interface shown in FIG. 8 a.
Fig. 8c is a further user interface entered when the user clicks "small window playing" in the user interface shown in fig. 8b; fig. 8c is consistent with the presentation effect of fig. 8a.
Fig. 8d illustrates a black border phenomenon occurring in the process of enlarging a video screen from fig. 8a to fig. 8b in the user interface.
In order to solve the above technical problem, the present application provides a display device. Fig. 9 is a schematic diagram illustrating a hardware structure of a display device according to an exemplary embodiment of the present application.
As shown in fig. 9a, the display apparatus may include:
the display 91 is configured to present a user interface comprising a video picture of a video layer and a user layer picture, where the user layer is presented floating on the video layer and is provided by the browser; the user layer has a transparent window at the position of the video picture, so that the video picture can be presented through the user layer.
A controller 92 in communication with the display 91, the controller 92 configured to perform presenting a user interface, including in particular:
step 921: presenting a user interface including a video frame on a display is performed.
Before step 921, the user selects a video display application in the application center (as shown in fig. 7a), enters the user interface of the video display application, selects the video program to watch (as shown in fig. 7b), and then enters the user interface that plays the video program (as shown in fig. 7c or fig. 7d).
Optionally, for presenting the video picture, the display device presets at least two display modes: in one display mode the video picture is larger, for example maximized full-screen playing, and in another the video picture is smaller, for example minimized small-window playing. Of course, in each mode the display size of the video picture also needs to be adapted to the size of the display, and so on.
Further optionally, the display device may also preset an intermediate display mode, i.e., a mode in which the size of the video picture is between the maximized and minimized states.
Step S922, receiving a zoom instruction, input by the user through a control device, which instructs the display device to zoom in or zoom out the video picture in the currently presented video display interface; for example, the zoom instruction may be an instruction to zoom in the video picture;
optionally, after receiving the zoom instruction, determining whether the zoom instruction is a zoom-in instruction or a zoom-out instruction.
Step S923, in response to the received zoom instruction, updating the user layer picture according to a first zoom rule, where the transparent window is zoomed according to the first zoom rule.
The first scaling rule is used for scaling a user layer picture, and the process of scaling the user layer picture according to the first scaling rule comprises the steps of obtaining user layer picture data according to the first scaling rule and displaying the user layer picture data.
If it is an enlargement process, the transparent window at the video picture in the user layer is enlarged according to the first scaling rule; if it is a reduction process, the transparent window is reduced according to the first scaling rule. During scaling, as the transparent window is enlarged or reduced over time, the position information of the transparent window in the user layer changes correspondingly.
It should be noted that, the display parameters of the video image to be displayed at the transparent window are adapted to the size of the transparent window and the position information of the transparent window in the user layer, that is, the display parameters of the video image to be displayed at the transparent window need to be changed along with the change of the transparent window.
Step S924, obtaining position information of the transparent window in the user layer within a preset time period, and generating a second scaling rule.
And step S925, controlling the video picture to be enlarged to a target display position according to the second scaling rule.
In this embodiment, the preset time period is a past time period before the current time point, for example a past period separated from the current time point by a preset interval, and the past time period is composed of several consecutive past time points.
In this embodiment, the second scaling rule is generated according to the position information of the transparent window in the user layer within the preset past time period. Within that period, the position information of the transparent window in the user layer corresponds to the predetermined display parameters of the video picture in the same period. Therefore, the position information of the transparent window in the user layer at a certain future time point can be predicted from its position information within the preset time period; that is, the display parameters of the video picture to be displayed at a certain future time point are predicted, which yields the second scaling rule. The video picture to be displayed can thus be cached in advance, which avoids delayed display of the video picture caused by time-consuming caching and the like, allows the picture to be displayed on the video layer to be shown synchronously with the picture to be displayed on the user layer, prevents the edge-cutting and black border phenomena, and makes the visual experience friendlier to the user.
In this embodiment, the target display position is a display position of the video image after being zoomed, which is determined according to the zoom instruction. For example, if the zoom instruction is a zoom-out instruction, the target display position may be a video screen display position corresponding to the small window playing mode and a position of the transparent window in the user layer in the small window playing mode, and if the zoom instruction is a zoom-in instruction, the target display position may be a video screen display position corresponding to the full screen playing mode and a position of the transparent window in the user layer in the full screen playing mode.
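The mapping from zoom direction to target display position described above can be sketched as follows (Python; the function name and the rectangle representation are illustrative assumptions, not from the disclosure — rectangles are taken as hypothetical (x, y, w, h) tuples):

```python
def target_display_position(is_zoom_in, full_screen_rect, small_window_rect):
    """Per the text: a zoom-in instruction targets the full-screen display
    position, a zoom-out instruction targets the small-window display
    position. Rects are hypothetical (x, y, w, h) tuples."""
    return full_screen_rect if is_zoom_in else small_window_rect
```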
As can be seen from the foregoing, an embodiment of the present application provides a display device. After receiving an instruction to enlarge the video picture, the controller of the display device, on one hand, updates the picture of the user layer according to a first scaling rule; on the other hand, it obtains the position information of the transparent window in the user layer within a preset time period to generate a second scaling rule, and controls the video picture to be enlarged to the target display position according to the second scaling rule. Within the preset time period, the position information of the transparent window in the user layer corresponds to the predetermined display parameters of the video picture in the same period, so the position information of the transparent window at a certain future time point, i.e., the display parameters of the video picture to be displayed at that future time point, can be predicted, yielding the second scaling rule. The video picture to be displayed can therefore be cached in advance, which avoids delayed display caused by time-consuming caching and the like, allows the picture on the video layer to be shown synchronously with the picture on the user layer, prevents the edge-cutting and black border phenomena, and makes the visual experience friendlier to the user.
A specific implementation of step S924 will be described in detail below.
Referring to fig. 10, in an alternative embodiment, step S924 further includes:
step S101, responding to a received instruction for amplifying the video picture, and starting a video picture zooming service based on a parameter prediction mechanism, wherein the video picture zooming service is used for acquiring user layer picture data in a preset time period.
The video picture scaling service based on the parameter prediction mechanism may be provided by a separate application program on the display device; when it is a separate application, it provides the scaling service together with the video display application by interacting with it. The video picture scaling service may also be provided by a program configured in the video display application itself, either independently or by calling a system layer program.
Alternatively, there may be at least two sets of the video picture scaling service based on the parameter prediction mechanism: one suited to the reduction process, i.e., a video picture reduction service, and one suited to the enlargement process, i.e., a video picture enlargement service.
Optionally, in response to a zoom-out instruction, a video picture reduction service based on a parameter prediction mechanism is started.
Optionally, in response to a zoom-in instruction, a video picture enlargement service based on a parameter prediction mechanism is started.
Step S102, determining the position information of the transparent window in the user layer within the preset time period according to the user layer picture data.
Step S103, generating the second scaling rule according to the position information of the transparent window in the user layer. The video scaling service continues to run after startup until the video is zoomed in to a target display position.
Referring to fig. 11, in an alternative embodiment, step S103 further includes:
step S111, according to the current time point, obtaining position information of the transparent window corresponding to at least one time point in the preset time period, wherein the position information of the transparent window is used for obtaining preset display parameters corresponding to the video picture.
In the process of zooming the video picture, the display parameters of the video picture, such as its size and reference point coordinates, change continuously over time. The reference point coordinates are used to determine the display position of the video picture on the video layer, and may be the coordinates of the top-left vertex or of the center point of the video picture on the video layer.
The predetermined display parameters are the parameters that, during zooming, the video display application issues to the bottom layer module of the video layer, so that the bottom layer module caches the corresponding picture to be displayed according to these parameters and displays it on the video layer. The predetermined display parameters may specifically include a predetermined display height, a predetermined display width, and predetermined reference point coordinates.
According to the existing scaling implementation, the continuous scaling process can be visualized as the video display application sequentially issuing the changing predetermined display parameters to the bottom layer module of the video layer at n consecutive time points, where one time point corresponds to one group of display parameters, i.e., to one frame of the video picture. The bottom layer module caches the corresponding video pictures according to the predetermined display parameters, the video pictures with different display parameters are displayed sequentially and continuously on the video layer, and at the same time the user layer loads and displays the corresponding pictures to adapt to the zooming of the video picture, presenting a smooth, continuous zoom-in or zoom-out process to the user. It is understood that the size of n depends on the frame rate of the video: the higher the frame rate, the larger n, and the lower the frame rate, the smaller n.
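Assuming the zoom animation runs for a fixed duration (an illustrative assumption; the disclosure only states that n grows with the frame rate), the relation between frame rate and n can be sketched as:

```python
def num_time_points(frame_rate_hz: float, zoom_duration_s: float) -> int:
    """Each frame of the zoom animation gets one group of display parameters,
    so n is roughly frame rate x animation duration."""
    return round(frame_rate_hz * zoom_duration_s)
```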
The user can trigger the zooming process by clicking the corresponding control provided by the user layer; in response to the click, the video display application sequentially issues, at the n time points, the predetermined display parameters corresponding to each time point to the bottom layer module of the video layer.
The parameter issuing follows a strict time sequence and order, so the current time point can be understood as the past time point closest to the present moment, and the predetermined display parameter corresponding to the current time point is the parameter issued this time.
In this embodiment, the purpose of step S111 is to obtain, according to the position information of the transparent window, the predetermined display parameters that the application has already issued, i.e., the display parameters corresponding to past time points, in order to predict the display parameters corresponding to a future time point (a time point after the current time point). The more similar the display parameters of the selected time points, the more precisely their implicit change rule reflects the actual change rule. Therefore, in this embodiment, each time point is associated with a time period, namely the preset time period, and the interval covered by that period is made sufficiently small. Then, for the current time point, if the parameters used for predicting the future time point are obtained from the preset time period, their implicit change rule is more representative of the actual change rule, and the accuracy of the prediction result in step S112 can be improved to a certain extent.
The preset time period includes a plurality of time points, e.g., [t_(i-m), t_i]; this time period contains m + 1 time points. In step S111, the preset time period associated with the current time point may be determined, and then the predetermined display parameter corresponding to at least one time point in that period is obtained.
As a possible implementation, the current time point is the right boundary of its associated preset time period. For example, if the current time point is t_i, the associated time period is [t_(i-m), t_i]. In step S111, the predetermined display parameters corresponding to the current time point and to at least one time point in the preset period before it are obtained; that is, the obtained predetermined display parameters are those corresponding to t_i and to at least one time point in [t_(i-m), t_i). This implementation uses the predetermined display parameter corresponding to the current time point to predict the display parameters of the future time point, which can further improve the accuracy of the prediction result.
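The sliding-window selection, with the current time point t_i as the right boundary of its window [t_(i-m), t_i], can be sketched as follows (Python; the function name is illustrative):

```python
def preset_period_indices(i: int, m: int) -> list:
    """Indices of the time points in the preset period associated with t_i:
    the window [t_(i-m), t_i], which contains m + 1 time points
    (fewer near the start of the zoom, when i < m)."""
    return list(range(max(0, i - m), i + 1))
```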
Of course, in other possible implementations, the predetermined display parameters corresponding to more than two time points may also be used, which is not limited in this application.
Through step S111, the predetermined display parameters corresponding to at least part of the issued time points are obtained, and data capable of representing the variation trend of the parameters of the zooming process are selected as much as possible, so that in the subsequent steps, the display parameters corresponding to the future time points are predicted by using the variation trends implied by the parameters.
Step S112, generating a display parameter predicted value corresponding to a target time point according to the acquired predetermined display parameter, where the display parameter predicted value is used for a video layer to pre-cache a video frame to be displayed at the target time point, and the target time point is a time point after the current time point and at a preset interval from the current time point.
The target time point refers to a future time point that is a preset interval from the current time point. Wherein, the interval between two time points can be measured by the number of time points existing between the two time points. For example, if there are j time points between the current time point and the target time point, the interval is j.
Step S112 generates a display parameter prediction value corresponding to the target time point using the predetermined display parameters obtained according to the current time point, so that the bottom layer module of the video layer can pre-cache the corresponding video picture according to the prediction value. That is, by the time the application has issued only the predetermined display parameters corresponding to the current time point, the bottom layer module of the video layer has already finished caching the video picture corresponding to the future time point that is a preset interval away from the current time point. This compensates for the accumulated caching time ΣΔt_i of the n frames of video pictures, allows a preset number of frames to be cached in advance, solves the delayed display of the video layer picture in the existing scaling method, and avoids the black border or edge-cutting phenomenon.
It should be noted that the size of the preset interval determines the number of frames of the video picture that the bottom layer module of the video layer can cache in advance. Specifically, assuming the interval is j, the target time point corresponding to the current time point t_i is t_(i+j); that is, after the application issues the predetermined display parameters corresponding to t_i, the bottom layer module of the video layer can cache the (i+j)-th frame of the video picture according to the display parameter prediction value corresponding to t_(i+j), instead of the i-th frame, so the immediate display of the i-th frame is not affected.
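A toy model of this pre-caching offset (Python; all names are hypothetical, and the cache is reduced to a set of frame indices): when parameters for time point i are issued, the frame for i + j is cached, so frame i itself was already cached j steps earlier.

```python
class FramePrecache:
    """Toy model of the video layer's bottom module with preset interval j:
    issuing parameters for t_i pre-caches frame i + j, and frame i is ready
    only if it was pre-cached when t_(i-j) was issued (the first j frames of
    a zoom are therefore not yet cached at startup)."""

    def __init__(self, j: int):
        self.j = j
        self.cached = set()

    def on_parameters_issued(self, i: int) -> bool:
        self.cached.add(i + self.j)   # pre-cache the frame j steps ahead
        return i in self.cached       # is frame i itself ready to display?
```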
As a possible implementation, step S112 may include the refinement steps shown in fig. 12:
step S121, calculating a predetermined parameter variation according to the acquired predetermined display parameter.
The predetermined parameter change amount may be a quantized representation of the magnitude of the predetermined display parameter change. Since the change of the predetermined display parameter includes a height change, a width change, and the like, a change amount of the predetermined display height or a change amount of the predetermined display width is calculated as the predetermined parameter change amount.
If the predetermined display parameters corresponding to the current time point and the previous time point are acquired in step S111, the predetermined parameter variation may be calculated using the following equation:

Δp = Δh = h_i - h_(i-1), or Δp = Δw = w_i - w_(i-1);

where Δp denotes the predetermined parameter variation; Δh denotes the predetermined height variation; h_i denotes the predetermined display height corresponding to the current time point; h_(i-1) denotes the predetermined display height corresponding to the previous time point; Δw denotes the predetermined width variation; w_i denotes the predetermined display width corresponding to the current time point; and w_(i-1) denotes the predetermined display width corresponding to the previous time point.
Step S122, using the prediction weight and the predetermined parameter variation, calculates a display parameter prediction variation corresponding to the target time point.
The display parameter predicted variation represents the predicted variation of the parameters at the target time point relative to the current time point. Since the parameter change rule of the scaling process is not necessarily linear, i.e., the parameter variation of each time point relative to the past time point is not constant, the present application adjusts the predetermined parameter variation with a prediction weight to obtain the display parameter predicted variation corresponding to the target time point. Specifically, the prediction weight may be multiplied by the predetermined parameter variation, and the product is used as the display parameter predicted variation.
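Steps S121 and S122 can be sketched together as follows (Python; function and variable names are illustrative), using the height delta as the predetermined parameter variation:

```python
def display_param_predicted_variation(h_prev: float, h_cur: float,
                                      weight: float) -> float:
    """Predicted variation for the target time point:
    prediction weight x predetermined parameter variation,
    with delta_p = h_i - h_(i-1) as in the text."""
    delta_p = h_cur - h_prev   # predetermined parameter variation (step S121)
    return weight * delta_p    # weighted prediction (step S122)
```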
At the beginning of the scaling process, or when the current time point is the first time point of the scaling process, the display parameter prediction variation at the first target time point is calculated by using the preset initial value of the prediction weight. In addition, since the delay time generated by the video layer picture is different in the enlargement process and the reduction process, different initial values of the prediction weights are set for the enlargement process and the reduction process, respectively.
Based on this, if it is determined that the current time point is the first time point of the zooming process, it is necessary to determine whether the zooming process is an enlarging process or a reducing process according to the display parameter corresponding to the starting point of the zooming process and the predetermined display parameter corresponding to the current time point, so as to select the corresponding initial value of the prediction weight. The starting point refers to a time point when the user triggers the zooming process and the video display interface is still in the original state, and the display parameters corresponding to the starting point at least comprise display parameters in a full-screen playing mode and a small-window playing mode. Specifically, if the display height corresponding to the starting point is greater than the predetermined display height corresponding to the current time point, it is seen that there is a reduction trend, and therefore, the process is a reduction process, and otherwise, the process is an enlargement process.
If the zooming process is an amplifying process, selecting a prediction weight initial value corresponding to the amplifying process for calculating the display parameter prediction variation corresponding to the first target time point in the amplifying process; and if the zooming process is a reducing process, selecting a prediction weight initial value corresponding to the reducing process for calculating the display parameter prediction variation corresponding to the first target time point in the reducing process.
In step S122, the prediction weight is used to adjust the predetermined parameter variation to obtain the display parameter variation corresponding to the target time point. Because the variation trends of the display parameters are different in different time periods during the zooming process, if the same prediction weight is always adopted, the accuracy of the obtained display parameter prediction variation is insufficient, and the accuracy of the target time point display parameter prediction value is influenced.
In order to further improve the accuracy of the prediction result, in a preferred implementation, a counter is used to record the number of times of use of the current prediction weight, and before step S122 is executed, it is determined whether the number of times of use of the prediction weight exceeds a preset number of times of use, and if so, the prediction weight is re-assigned by using a preset update coefficient, that is, the prediction weight is made to be the prediction weight × the preset update coefficient, so as to use the new prediction weight for the calculation in step S122; if not, the calculation of step S122 is performed using the current prediction weight. Wherein, the preset update coefficient is generally between 0 and 1.
By adopting this preferred implementation, the prediction weight is continuously updated over the time points, so the prediction mechanism has higher flexibility and the accuracy of the display parameter prediction value at the target time point is improved.
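The counter-based weight refresh described above can be sketched as follows (Python; the class name and concrete values are illustrative):

```python
class PredictionWeight:
    """Counter-based refresh: once the current prediction weight has been
    used a preset number of times, multiply it by a preset update
    coefficient (between 0 and 1) and reset the counter."""

    def __init__(self, initial: float, max_uses: int, update_coeff: float):
        self.value = initial
        self.max_uses = max_uses
        self.update_coeff = update_coeff
        self.uses = 0

    def next_weight(self) -> float:
        if self.uses >= self.max_uses:          # preset use count exceeded
            self.value *= self.update_coeff     # re-assign the weight
            self.uses = 0
        self.uses += 1
        return self.value
```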
And S123, calculating a display parameter predicted value corresponding to the target time point according to the display parameter predicted variation.
The display parameter predicted variation is the predicted variation of the parameters at the target time point relative to the current time point. Therefore, in a specific implementation, the display parameter predicted values corresponding to the target time point may be calculated using the following formulas:

x_G = [formula rendered as an image in the original];

y_G = [formula rendered as an image in the original];

w_G = [formula rendered as an image in the original];

h_G = h_i + Δp;

where x_G, y_G, w_G and h_G respectively denote the predicted reference abscissa, the predicted reference ordinate, the predicted display width and the predicted display height corresponding to the target time point; x_i, y_i, w_i and h_i respectively denote the predetermined reference abscissa, the predetermined reference ordinate, the predetermined display width and the predetermined display height corresponding to the current time point; and Δp = Δh.
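Only the height formula h_G = h_i + Δp is given in the text; the formulas for x_G, y_G and w_G are images in the original. The sketch below (Python; illustrative) therefore ASSUMES aspect-ratio-preserving scaling about the picture centre for those three values — an assumption for illustration, not the patented formulas:

```python
def predict_display_params(x_i, y_i, w_i, h_i, delta_p):
    """h_G = h_i + delta_p is from the text; the other three lines are an
    ASSUMED reconstruction (keep aspect ratio, keep the centre fixed)."""
    h_g = h_i + delta_p
    w_g = w_i + delta_p * (w_i / h_i)   # assumed: width scales with height
    x_g = x_i - (w_g - w_i) / 2         # assumed: centre stays fixed
    y_g = y_i - delta_p / 2
    return x_g, y_g, w_g, h_g
```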
Step S113, generating the second scaling rule according to the target time point and the display parameter prediction value corresponding to the target time point.
In summary, in this embodiment, first, according to the current time point, the predetermined display parameters corresponding to at least one time point in the preset time period are obtained for prediction. Since these parameters are acquired according to the current time point, their representativeness can be ensured. A display parameter prediction value corresponding to the target time point is then generated from the acquired predetermined display parameters; the target time point is a time point after the current time point and a preset interval away from it, so the display parameters of a future time point are available at the current time. The prediction value is used by the video layer to cache in advance the video picture to be displayed at the target time point, i.e., at the future time point. Because every frame of the video picture to be displayed is cached in advance, the picture to be displayed on the video layer can be shown synchronously with the picture to be displayed on the user layer according to the preset rule, without the edge-cutting or black border phenomenon, so the visual experience is friendlier to the user.
In addition, an embodiment of the present application further provides a method for zooming a video frame in a user interface presented by a display device, and referring to fig. 13, the method may include:
step 131, receiving an instruction for amplifying a video picture in a video layer;
step 132, in response to the received instruction for enlarging a video picture in the video layer, updating the picture of the user layer according to a first scaling rule, where the user layer is presented floating on the video layer and is provided by a browser, a transparent window is provided at the video picture so that the video picture is presented through the user layer, and the transparent window is enlarged according to the first scaling rule;
step 133, obtaining position information of the transparent window in the user layer within a preset time period, and generating a second scaling rule;
and step 134, controlling the video image to be enlarged to the target display position according to the second scaling rule.
The specific implementation manner of any step in the steps 131-134 may refer to the above embodiment of the display device, and is not described herein again.
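Steps 131-134 can be condensed into a toy end-to-end sketch (Python; names are illustrative, only the height parameter is modelled, and combining the weighted variation with h_G = h_i + Δp is a simplification of the full four-parameter rule):

```python
def second_scaling_rule_height(window_heights, weight):
    """Toy sketch of steps 133-134: from the transparent window's heights
    recorded over the preset past period, predict the height at which the
    video layer should pre-cache the picture for the target time point,
    as h_G = h_i + weight * delta_h."""
    h_prev, h_cur = window_heights[-2], window_heights[-1]
    return h_cur + weight * (h_cur - h_prev)
```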
In a specific implementation, the present invention further provides a computer storage medium that may store a program; when the program is executed, it may carry out some or all of the steps of the embodiments of the scaling method provided by the present invention. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, for the embodiments of the zooming apparatus and the display device, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the description in the method embodiments.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (10)

1. A display device, comprising:
a display configured to present a user interface, the user interface including a video layer video picture and a user layer picture, the user layer being presented floating on the video layer, the user layer being provided by a browser, the user layer being a transparent window at the video picture such that the video picture is presented through the user layer;
a controller in communication with the display, the controller configured to perform presenting a user interface:
receiving an instruction of amplifying the video picture, and updating the user layer picture according to a first scaling rule in response to the instruction of amplifying the video picture, wherein the transparent window is amplified according to the first scaling rule;
acquiring position information of the transparent window in the user layer within a preset time period, and generating a second scaling rule;
and controlling the video picture to be amplified to a target display position according to the second scaling rule.
2. The display device according to claim 1, wherein the controller obtains position information of the transparent window in the user layer within a preset time period, and generates a second scaling rule, including:
in response to an instruction of amplifying the video picture, starting a video picture zooming service based on a parameter prediction mechanism, wherein the video picture zooming service is used for acquiring user layer picture data in a preset time period;
determining the position information of the transparent window in the user layer within the preset time period according to the user layer picture data;
generating the second scaling rule according to the position information of the transparent window in the user layer;
the video scaling service continues to run after startup until the video is zoomed in to a target display position.
3. The display device according to claim 1, wherein the controller generates the second scaling rule according to the position information of the transparent window in the user layer, specifically including:
according to the current time point, acquiring position information of the transparent window corresponding to at least one time point in the preset time period, wherein the position information of the transparent window is used for acquiring preset display parameters corresponding to the video picture;
generating a display parameter predicted value corresponding to a target time point according to the acquired preset display parameters, wherein the target time point is a time point which is behind the current time point and is away from the current time point by a preset interval, and the display parameter predicted value is used for caching a video picture to be displayed at the target time point in advance;
and generating the second scaling rule according to the target time point and the display parameter predicted value corresponding to the target time point.
4. The apparatus according to claim 3, wherein the controller generates the predicted display parameter value corresponding to the target time point according to the acquired predetermined display parameter, and includes:
calculating the variation of the preset parameters according to the acquired preset display parameters;
calculating the display parameter prediction variable quantity corresponding to the target time point by using the prediction weight and the predetermined parameter variable quantity;
and calculating a display parameter predicted value corresponding to the target time point according to the display parameter predicted variation.
5. The display device according to claim 4, wherein after the controller calculates the predetermined parameter variation, the method further comprises:
judging whether the use times of the prediction weight exceed a preset use time or not;
if so, enabling the prediction weight to be the prediction weight multiplied by a preset updating coefficient, wherein the preset updating coefficient is used for updating the prediction weight according to the preset using times;
if not, the current prediction weight is used for the next calculation.
6. The display device according to claim 5, wherein, when the current time point is the first time point of a zooming process, the controller is further configured to:
if the zooming process is determined to be a zoom-in process according to the display parameters corresponding to the start of the zooming process and the predetermined display parameters corresponding to the current time point, select the initial prediction weight corresponding to the zoom-in process for calculating the display parameter prediction variation corresponding to the first target time point of the zoom-in process;
and if the zooming process is determined to be a zoom-out process according to the display parameters corresponding to the start of the zooming process and the predetermined display parameters corresponding to the current time point, select the initial prediction weight corresponding to the zoom-out process for calculating the display parameter prediction variation corresponding to the first target time point of the zoom-out process.
7. The display device according to claim 6, wherein the controller calculates the predetermined parameter variation according to the acquired predetermined display parameters by:
the predetermined display parameters comprising a predetermined display height, a predetermined display width and predetermined reference point coordinates;
calculating the variation of the predetermined display height, or the variation of the predetermined display width, corresponding to at least one time point within a preset time period before the current time point as the predetermined parameter variation.
8. The display device according to claim 4, wherein the controller calculates the display parameter prediction variation corresponding to the target time point using the prediction weight and the predetermined parameter variation by:
calculating the product of the prediction weight and the predetermined parameter variation as the display parameter prediction variation corresponding to the target time point.
9. The display device according to claim 4, wherein the controller calculates the display parameter predicted value corresponding to the target time point from the display parameter prediction variation using the following formulas:
[The formulas for xG, yG and wG appear only as embedded images in the source (FDA0002132228830000021 through FDA0002132228830000023) and cannot be recovered from the text;]
hG = hi + Δp;
wherein xG, yG, wG and hG respectively represent the prediction reference abscissa, the prediction reference ordinate, the display width predicted value and the display height predicted value corresponding to the target time point; xi, yi, wi and hi respectively represent the predetermined reference abscissa, the predetermined reference ordinate, the predetermined display width and the predetermined display height corresponding to the current time point; and Δp represents the display parameter prediction variation.
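Taken together, claims 7-9 describe a simple linear extrapolation. A minimal sketch for the height parameter only (the width and coordinate formulas are not legible in the source), assuming the variation is taken between the two most recent sampled heights (the claims only require at least one time point within the preset period):

```python
def predict_display_height(preset_heights: list, prediction_weight: float) -> float:
    """Illustrative sketch of claims 7-9 for the height parameter.

    claim 7: the predetermined parameter variation is the change of the
             predetermined display height over recent sampled time points;
    claim 8: the prediction variation Δp is the product of the prediction
             weight and that variation;
    claim 9: the predicted height is hG = hi + Δp.
    """
    variation = preset_heights[-1] - preset_heights[-2]   # claim 7 (assumed sampling)
    delta_p = prediction_weight * variation               # claim 8
    return preset_heights[-1] + delta_p                   # claim 9: hG = hi + Δp
```

For instance, with heights sampled at 100 px and then 110 px and a prediction weight of 1.5, the predicted height at the target time point is 110 + 1.5 × 10 = 125 px.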
10. A method for scaling a video picture in a user interface presented by a display device, the method comprising:
receiving an instruction to enlarge a video picture in a video layer;
in response to the received instruction to enlarge the video picture in the video layer, updating the user layer picture according to a first scaling rule, wherein the user layer is displayed floating above the video layer and is provided by a browser, the user layer has a transparent window at the video picture so that the video picture is shown through the user layer, and the transparent window is enlarged according to the first scaling rule;
acquiring position information of the transparent window in the user layer within a preset time period, and generating a second scaling rule;
and controlling the video picture to be enlarged to a target display position according to the second scaling rule.
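The last two steps of claim 10 can be illustrated with a deliberately simplified second-rule policy. The (x, y, w, h) tuples and the "move the video picture to the final sampled window rectangle" policy are assumptions for illustration; the claim does not fix a concrete rule.

```python
def scale_video_to_window(window_positions: list) -> tuple:
    """Illustrative sketch of claim 10's second stage: after the transparent
    window in the user layer has been enlarged under the first scaling rule,
    its positions sampled over the preset period determine the second scaling
    rule, here simplified to 'target the window's final sampled rectangle'."""
    if not window_positions:
        raise ValueError("no window positions sampled in the preset period")
    return window_positions[-1]   # target display position for the video picture
```

In a real device, the video layer (hardware-composited beneath the browser-drawn user layer) would then be scaled toward this target rectangle rather than jumping to it.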
CN201910642088.XA 2019-07-16 2019-07-16 Display device and video picture scaling method Active CN112243148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910642088.XA CN112243148B (en) 2019-07-16 2019-07-16 Display device and video picture scaling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910642088.XA CN112243148B (en) 2019-07-16 2019-07-16 Display device and video picture scaling method

Publications (2)

Publication Number Publication Date
CN112243148A true CN112243148A (en) 2021-01-19
CN112243148B CN112243148B (en) 2022-10-11

Family

ID=74167321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910642088.XA Active CN112243148B (en) 2019-07-16 2019-07-16 Display device and video picture scaling method

Country Status (1)

Country Link
CN (1) CN112243148B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923379A (en) * 2021-09-30 2022-01-11 广州市保伦电子有限公司 Multi-picture synthesis method and processing terminal for self-adaptive window

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010117446A (en) * 2008-11-11 2010-05-27 Sony Computer Entertainment Inc Image processing apparatus and image processing method
JP2010117828A (en) * 2008-11-12 2010-05-27 Sony Computer Entertainment Inc Information processing apparatus and information processing method
US20120056903A1 (en) * 2009-03-25 2012-03-08 Sony Computer Entertainment Inc. Information Processing Device And Information Processing Method
CN103703785A (en) * 2011-08-01 2014-04-02 索尼电脑娱乐公司 Video data generation unit, video image display device, video data generation method, video image display method, and video image file data structure
CN103988164A (en) * 2011-12-09 2014-08-13 索尼电脑娱乐公司 Image processing apparatus and image processing method
US20150341654A1 (en) * 2014-05-22 2015-11-26 Apple Inc. Video coding system with efficient processing of zooming transitions in video
US20160127772A1 (en) * 2014-10-29 2016-05-05 Spotify Ab Method and an electronic device for playback of video
US20170180807A1 (en) * 2015-12-18 2017-06-22 Le Holdings (Beijing) Co., Ltd. Method and electronic device for amplifying video image
CN106980510A (en) * 2017-04-14 2017-07-25 宁波视睿迪光电有限公司 The form adaptive approach and device of a kind of player
US20170302719A1 (en) * 2016-04-18 2017-10-19 Qualcomm Incorporated Methods and systems for auto-zoom based adaptive video streaming
CN107786904A (en) * 2017-10-30 2018-03-09 深圳Tcl数字技术有限公司 Picture amplification method, display device and computer-readable recording medium
CN109547838A (en) * 2018-12-06 2019-03-29 青岛海信传媒网络技术有限公司 The processing method and processing device of video window
CN109640158A (en) * 2019-02-02 2019-04-16 北京字节跳动网络技术有限公司 A kind of control method of video playing, device, equipment and medium

Also Published As

Publication number Publication date
CN112243148B (en) 2022-10-11

Similar Documents

Publication Publication Date Title
CN109618206B (en) Method and display device for presenting user interface
CN112214189B (en) Image display method and display device
CN111208969A (en) Selection control method of sound output equipment and display equipment
CN111031375B (en) Method for skipping detailed page of boot animation and display equipment
CN112118400B (en) Display method of image on display device and display device
CN111954059A (en) Screen saver display method and display device
CN112243141A (en) Display method and display equipment for screen projection function
CN111757181B (en) Method for reducing network media definition jitter and display device
CN110602540B (en) Volume control method of display equipment and display equipment
CN112243148B (en) Display device and video picture scaling method
CN114430492A (en) Display device, mobile terminal and picture synchronous zooming method
CN112243147B (en) Video picture scaling method and display device
CN111078926A (en) Method for determining portrait thumbnail image and display equipment
CN113259733B (en) Display device
CN111935530B (en) Display equipment
CN111479146B (en) Display apparatus and display method
CN111988646B (en) User interface display method and display device of application program
WO2021196432A1 (en) Display method and display device for content corresponding to control
CN111259639B (en) Self-adaptive adjustment method of table and display equipment
CN114417035A (en) Picture browsing method and display device
CN112367550A (en) Method for realizing multi-title dynamic display of media asset list and display equipment
CN113495654A (en) Control display method and display device
CN113971049A (en) Background service management method and display device
CN113453056B (en) Display method and display device for photo album control
CN111970554B (en) Picture display method and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221013

Address after: 83 Intekte Street, Devon, Netherlands

Patentee after: VIDAA (Netherlands) International Holdings Ltd.

Address before: 266061 room 131, 248 Hong Kong East Road, Laoshan District, Qingdao City, Shandong Province

Patentee before: QINGDAO HISENSE MEDIA NETWORKS Ltd.