CN113014977B - Display device and volume display method - Google Patents

Display device and volume display method

Info

Publication number
CN113014977B
CN113014977B (application CN202110175536.7A)
Authority
CN
China
Prior art keywords
volume
picture
layer
display
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110175536.7A
Other languages
Chinese (zh)
Other versions
CN113014977A (en)
Inventor
程霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vidaa Netherlands International Holdings BV
Original Assignee
Vidaa Netherlands International Holdings BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vidaa Netherlands International Holdings BV filed Critical Vidaa Netherlands International Holdings BV
Priority to CN202110175536.7A priority Critical patent/CN113014977B/en
Publication of CN113014977A publication Critical patent/CN113014977A/en
Priority to PCT/CN2021/133766 priority patent/WO2022160910A1/en
Application granted granted Critical
Publication of CN113014977B publication Critical patent/CN113014977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4852End-user interface for client configuration for modifying audio parameters, e.g. switching between mono and stereo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the application provide a display device and a volume display method. The display device comprises a display and a controller, and the controller is configured to: receive a volume adjustment request; when the top-level interface of the display is a voice interaction interface, respond to the volume adjustment request and acquire a target volume picture corresponding to the request; and control the display to display the target volume picture above the voice interaction interface. The application solves the technical problem of the volume bar being blocked by the voice interaction interface.

Description

Display device and volume display method
Technical Field
The present application relates to the field of display technologies, and in particular, to a display device and a volume display method.
Background
With the rapid development of artificial intelligence in the field of display devices, more and more display devices such as smart televisions support a voice control function: a user can input voice instructions to the television to control it. For example, the user may ask the television to play a weather forecast; the television then obtains the forecast for the current day or the near future from a weather application or the internet and displays it to the user, or broadcasts it by voice. If, during voice interaction, the user feels that the volume of the broadcast is too low, the user can press a volume button on the remote control to adjust it. Typically, after the user presses the volume button, the television adds a volume bar to the current interface to indicate the current volume. However, to guarantee the voice interaction experience, the voice interaction window usually needs to be displayed on the top layer, which may cover the volume bar, so that the user cannot see the current volume and can only adjust it by feel.
Disclosure of Invention
The application provides a display device and a volume display method to solve the technical problem of a poor volume display effect.
In a first aspect, the present application provides a display device comprising:
a display;
a controller configured to:
receiving a volume adjustment request;
when the top layer interface of the display is a voice interaction interface, responding to the volume adjustment request, and acquiring a target volume picture corresponding to the volume adjustment request;
and controlling the display to display the target volume picture above the voice interaction interface.
In some embodiments, the obtaining the target volume picture corresponding to the volume adjustment request includes:
calculating a target volume value according to the volume adjustment request and the current volume value;
and selecting a target volume picture corresponding to the target volume value from a volume picture library, wherein the volume picture library comprises volume pictures corresponding to each volume value of the display equipment.
In some embodiments, the controlling the display to display the target volume picture over the voice interactive interface includes:
transmitting first data including the volume picture and a first handle into a DFB;
the DFB writes the first data in a picture cache region of a first picture layer according to the first handle;
and superposing the volume picture above the voice interaction interface according to the layer sequence corresponding to the picture buffer area.
In some embodiments, the controller is further configured to:
when the top layer interface of the display is a non-voice interaction interface, responding to the volume adjustment request, and generating a volume bar control corresponding to the volume adjustment request;
and controlling the display to display the volume bar control on the top-layer interface.
In some embodiments, the controlling the display to display the volume bar control on the top-level interface includes:
forming a picture layer of the volume bar control and a picture layer of the current image of the display into a picture to be updated;
and controlling the display to refresh the top-layer interface to display the picture to be updated.
In a second aspect, the present application provides a volume display method, including:
receiving a volume adjustment request;
when the top layer interface of the display is a voice interaction interface, responding to the volume adjustment request, and acquiring a target volume picture corresponding to the volume adjustment request;
and controlling the display to display the target volume picture above the voice interaction interface.
The display device and the volume display method provided by the application have the beneficial effects that:
according to the embodiment of the application, the volume picture is preset, so that after the volume adjustment request is received, the volume picture can be determined as the target volume picture according to the volume adjustment request, the target volume picture is directly displayed above the voice interaction interface, and the volume picture is not required to be overlapped in the picture layer below the voice interaction interface, so that the problem that the volume picture is blocked by the voice interaction interface is avoided, the current volume can be seen by a user, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solution of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
A schematic diagram of an operational scenario between a display device and a control apparatus according to some embodiments is schematically shown in fig. 1;
a hardware configuration block diagram of the control apparatus 100 according to some embodiments is exemplarily shown in fig. 2;
a hardware configuration block diagram of a display device 200 according to some embodiments is exemplarily shown in fig. 3;
a schematic diagram of the software configuration in a display device 200 according to some embodiments is exemplarily shown in fig. 4;
a composite schematic of a volume bar and UI interface according to some embodiments is shown schematically in fig. 5;
a synthetic schematic of a voice interaction interface and UI interface according to some embodiments is shown schematically in fig. 6;
a schematic diagram in which the volume bar is occluded according to some embodiments is exemplarily shown in fig. 7;
a layer sequence diagram of a volume bar, UI interface, and voice interaction interface according to some embodiments is shown schematically in fig. 8;
a flow diagram of a method of volume display of a display device according to some embodiments is shown schematically in fig. 9;
a data transmission schematic of display data of a display device according to some embodiments is exemplarily shown in fig. 10;
a schematic diagram of a volume bar not occluded according to some embodiments is illustrated in fig. 11.
Detailed Description
For the purposes of making the objects and embodiments of the present application more apparent, exemplary embodiments of the present application will be described in detail below with reference to the accompanying drawings in which they are illustrated; it is apparent that the described exemplary embodiments are only some, and not all, of the embodiments of the present application.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms first, second, third and the like in the description, in the claims and in the above-described figures are used to distinguish between similar or like objects or entities and do not necessarily describe a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication and other short-range communication modes, and the display device 200 is controlled wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote controller, voice input, control panel input, etc.
In some embodiments, a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the display device 200 may also be controlled in ways other than by the control apparatus 100 and the smart device 300. For example, the user's voice commands may be received directly through a module configured inside the display device 200 for acquiring voice commands, or through a voice control device configured outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may communicate via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 in accordance with an exemplary embodiment. As shown in fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive a user's input operation instruction, convert the operation instruction into an instruction that the display device 200 can recognize and respond to, and thereby mediate the interaction between the user and the display device 200.
Fig. 3 shows a hardware configuration block diagram of the display device 200 in accordance with an exemplary embodiment.
In some embodiments, display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, memory, a power supply, a user interface.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first through nth interfaces for input/output.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display; it receives image signals output by the controller and displays video content, image content and menu manipulation interfaces, as well as a UI interface manipulated by the user.
In some embodiments, the display 260 may be a liquid crystal display, an OLED display, or a projection device with a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display device 200 may establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220.
In some embodiments, the user interface may be configured to receive control signals from the control device 100 (e.g., an infrared remote control, etc.).
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; alternatively, the detector 230 includes an image collector such as a camera, which may be used to collect external environmental scenes, user attributes, or user interaction gestures, or alternatively, the detector 230 includes a sound collector such as a microphone, or the like, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
In some embodiments, the modem 210 receives broadcast television signals via wired or wireless reception and demodulates audio-video signals, such as EPG data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any selectable object, such as a hyperlink, an icon, or another operable control. The operation related to the selected object is, for example, displaying the linked hyperlink page, document or image, or launching the program corresponding to the icon.
In some embodiments, the controller includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), random access memory (RAM), read-only memory (ROM), first through nth interfaces for input/output, a communication bus (Bus), and the like.
The CPU executes the operating system and application program instructions stored in the memory, and executes various applications, data and content according to the interactive instructions received from the outside, so as finally to display and play various audio and video content. The CPU may include a plurality of processors, such as one main processor and one or more sub-processors.
In some embodiments, the graphics processor is used to generate various graphical objects, such as icons, operation menus, and graphics displayed for user input instructions. The graphics processor comprises an arithmetic unit, which operates on the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which renders the objects produced by the arithmetic unit for display on the display.
In some embodiments, the video processor is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion and image composition according to the standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the display device 200.
In some embodiments, the video processor includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module demultiplexes the input audio/video data stream. The video decoding module processes the demultiplexed video signal, including decoding and scaling. The image synthesis module, such as an image synthesizer, superimposes and mixes the GUI signal, input by the user or generated by a graphics generator, with the scaled video image to generate an image signal for display. The frame rate conversion module converts the frame rate of the input video. The display formatting module converts the received video frames into an output signal conforming to the display format, for example an RGB data signal.
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the audio signal according to a standard codec protocol of an input signal, and perform noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in a speaker.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments, a system of display devices may include a Kernel (Kernel), a command parser (shell), a file system, and an application program. The kernel, shell, and file system together form the basic operating system architecture that allows users to manage files, run programs, and use the system. After power-up, the kernel is started, the kernel space is activated, hardware is abstracted, hardware parameters are initialized, virtual memory, a scheduler, signal and inter-process communication (IPC) are operated and maintained. After the kernel is started, shell and user application programs are loaded again. The application program is compiled into machine code after being started to form a process.
As shown in fig. 4, the system of the display device is divided into three layers, an application layer, a middleware layer, and a hardware layer, from top to bottom.
The application layer mainly comprises the common applications on the television and an application framework (Application Framework). The common applications are mainly applications developed based on a browser, such as HTML5 apps, and native applications (Native APPs).
The application framework (Application Framework) is a complete program model with all the basic functions required by standard application software, such as file access and data exchange, and the interfaces for using these functions (toolbar, status bar, menu, dialog box).
Native applications (Native APPs) may support online or offline, message pushing, or local resource access.
The middleware layer includes middleware such as various television protocols, multimedia protocols, and system components. The middleware can use basic services (functions) provided by the system software to connect various parts of the application system or different applications on the network, so that the purposes of resource sharing and function sharing can be achieved.
The hardware layer mainly comprises a HAL interface, hardware and drivers. The HAL interface is a unified interface to which all the television chips are docked, and the specific logic is implemented by each chip. The drivers mainly comprise: an audio driver, a display driver, a Bluetooth driver, a camera driver, a Wi-Fi driver, a USB driver, an HDMI driver, sensor drivers (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), a power supply driver, and so on.
The hardware or software architecture in some embodiments may be based on the description in the foregoing embodiments, and in some embodiments may be based on other similar hardware or software architectures, so long as the technical solution of the present application may be implemented.
In some embodiments, the image displayed by the display device includes multiple layers, and the display device may combine the multiple layers in advance and then display the combined image so that the user sees an image containing the content of all the layers. For example, when the display device is showing a UI interface, such as its home page, and the user presses a volume key on the remote controller, as shown in fig. 5 the display device may combine the layer of the volume bar with the layer of the home page and then display the combined image so that the user can see the volume bar.
In some embodiments, the display device supports a voice control function: it receives voice commands input by the user and responds to them, for example by displaying tomorrow's weather forecast information on the display in response to a voice command asking for tomorrow's weather. To ensure the voice interaction experience, the display device generally places the layer of the voice interaction interface at the uppermost position. Referring to fig. 6, a composition schematic of a voice interaction interface and a UI interface according to some embodiments: when a user performs voice interaction with the display device, in the user's viewing direction the first layer displayed by the display device may be the layer of the voice interaction interface, such as the OSD layer1 layer in fig. 6, and the second layer may be the layer of another interface, such as the OSD layer2 layer in fig. 6, which may be the layer of a UI interface such as the home page.
In some embodiments, when the image displayed by the display device includes the layer of the voice interaction interface shown in fig. 6 and other layers, and the user presses the volume key on the remote controller, the display device generates a layer for the volume bar, composes it with the layer of the home page, and composes the layer of the voice interaction interface above the layer of the home page. When the default position of the volume bar overlaps or partially overlaps the position of the voice interaction interface, the volume bar is blocked by the voice interaction interface.
Referring to fig. 7, a schematic diagram of the volume bar being occluded according to some embodiments: in fig. 7, the voice interaction interface displays weather forecast information. The voice interaction interface is a half-screen interface, and the position of the volume bar partially overlaps it, so the volume bar is blocked by the voice interaction interface and the volume display effect is affected.
In some embodiments, the voice interaction interface occupies more than half of the display, and at this time, the probability and area of the volume bar being blocked by the voice interaction interface become large, so that the effect of volume display becomes poor.
To solve the above technical problem, the display sequence shown in fig. 8 needs to be implemented: as shown in fig. 8, the layer of the volume bar needs to be the OSD layer1 layer, the layer of the intelligent voice interface needs to be the OSD layer2 layer, and the layer of the UI interface is the OSD layer3 layer. With this layer order, the volume bar is displayed in front of the intelligent voice interface, and the intelligent voice interface is displayed in front of the UI interface.
In some embodiments, the present application provides a volume display method that implements the display sequence shown in fig. 8 based on a volume bar application. The volume bar application selects a volume picture from a volume picture library as the target volume picture to be displayed.
In some embodiments, the volume bar application may be written in the C language or the C++ language, which has the advantage of a higher execution speed than JavaScript.
In some embodiments, the volume bar application may be started after the display device is powered on, and listen for volume adjustment instructions on the display device after the start-up.
In some embodiments, a volume picture is a preset picture corresponding to one volume value and may include a volume bar and the volume value. The volume picture may be a picture in a non-compressed format, i.e., a raw-format picture, so no encoding or decoding is needed. With a picture size of 522 x 120, one hundred volume pictures can be preset for the display device's volume values of 0-100 to form the volume picture library; each picture is about 0.48 MB, so 100 volume pictures occupy about 48 MB of storage.
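As a concrete illustration of such a library, the following C sketch (the description notes the volume bar application may be written in C or C++) preloads one raw picture per volume value and selects the target picture by a direct index lookup. The file naming, directory layout, and the decision to keep all pictures resident in memory are assumptions made for illustration, not details given in the patent.

```c
/* Minimal sketch (assumption): a volume picture library indexed by volume value.
 * The raw, uncompressed format and the 0-100 volume range come from the
 * description; the file naming and loading scheme are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define VOL_MAX 100

typedef struct {
    uint8_t *pixels;   /* raw pixel data -- no decoding needed before display */
    size_t   size;     /* number of bytes in the picture */
} volume_picture;

static volume_picture volume_picture_library[VOL_MAX + 1];

/* Load the preset raw pictures once when the volume bar application starts. */
static int load_volume_picture_library(const char *dir)
{
    for (int v = 0; v <= VOL_MAX; v++) {
        char path[256];
        snprintf(path, sizeof(path), "%s/volume_%03d.raw", dir, v); /* hypothetical names */

        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        fseek(f, 0, SEEK_END);
        long n = ftell(f);
        fseek(f, 0, SEEK_SET);

        volume_picture_library[v].pixels = malloc((size_t)n);
        if (!volume_picture_library[v].pixels) { fclose(f); return -1; }
        volume_picture_library[v].size = fread(volume_picture_library[v].pixels, 1, (size_t)n, f);
        fclose(f);
    }
    return 0;
}

/* Selecting the target volume picture is then a direct lookup by target volume value. */
static const volume_picture *get_target_volume_picture(int target_volume)
{
    return &volume_picture_library[target_volume];
}
```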
In some embodiments, the volume picture may be updated to enhance the display effect.
In some embodiments, the volume display method based on the volume bar application can be seen in fig. 9, and includes the following steps:
step S110: a volume adjustment request is received.
In some embodiments, a user may press the volume-up key or the volume-down key on the remote control to send a volume adjustment request to the display device. The display device receives the trigger signal of the volume-up key or the volume-down key and generates a volume adjustment instruction according to the trigger signal and the current volume; the volume adjustment instruction may be used to control the speaker of the display device to adjust the current volume value to a target volume value. The volume value of the display device may range from 0 to 100, and the target volume value is one of these values.
Step S120: and when the top layer interface of the display is a voice interaction interface, responding to the volume adjustment request, and acquiring a target volume picture corresponding to the volume adjustment request.
In some embodiments, the controller of the display device may broadcast the volume adjustment instruction after generating it; the volume bar application obtains the instruction and derives the target volume value from it.
In some embodiments, the controller of the display device may broadcast the trigger signal of the volume-up key or the volume-down key; the volume bar application obtains the trigger signal and calculates the target volume value from the trigger signal and the current volume value, as in the sketch below.
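A minimal C sketch of this calculation follows. The key identifiers and the step of one unit per key press are assumptions; only the 0-100 volume range is taken from the description.

```c
/* Minimal sketch (assumption): deriving the target volume value from the
 * trigger signal of a volume key and the current volume value. */
enum volume_key { VOLUME_KEY_UP, VOLUME_KEY_DOWN };

static int compute_target_volume(enum volume_key key, int current_volume)
{
    const int step = 1;   /* volume change per key press (assumption) */
    int target = (key == VOLUME_KEY_UP) ? current_volume + step
                                        : current_volume - step;

    if (target < 0)   target = 0;     /* clamp to the display device's 0-100 range */
    if (target > 100) target = 100;
    return target;
}
```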
In some embodiments, when the user interacts with the display device by voice, the display device may create a voice interaction process that responds to the user's voice instructions, for example by searching for information related to keywords in those instructions. From the existence of the current voice interaction process, the volume bar application can determine that the top-level interface currently displayed is a voice interaction interface.
In some embodiments, when the top-level interface of the display is a voice interaction interface, the volume bar application obtains the target volume value and then selects the target volume picture corresponding to that value from the volume picture library.
Step S130: and controlling the display to display the target volume picture above the voice interaction interface.
Referring to fig. 10, a schematic diagram of data transmission of the display data of a display device according to some embodiments: as shown in fig. 10, when a user performs voice interaction with the display device, the interface data of the display device may come from three sources, namely the volume bar APP (application), the UIAPP, and the intelligent voice APP.
In some embodiments, the UIAPP may utilize the UI framework to expose a variety of interfaces, such as a home interface of a display device, an interface of a video application, an interface of a game application, and so forth.
In some embodiments, the intelligent voice APP may be used to generate an interface for intelligent voice, i.e., a voice interactive interface.
In some embodiments, after acquiring the target volume picture, the volume bar APP may generate a first handle and transmit first data containing the target volume picture and the first handle to the DFB (DirectFB, a direct frame buffer framework). The DFB is a program framework for managing frame buffers; it may generate frame buffer1 according to the first handle contained in the first data and write the first data into frame buffer1. Frame buffer1 is the picture buffer of the first layer, where the first layer is the top layer of the image to be displayed, the second layer is the layer of the image to be displayed below the first layer, and so on.
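The following C sketch illustrates, under assumptions, how a volume bar application might push a raw volume picture into the picture buffer of a top OSD layer through the DirectFB API. The choice of DLID_PRIMARY, the on-screen position, and the 4-bytes-per-pixel ARGB layout are illustrative; the patent does not specify them, nor the exact DFB calls its implementation uses.

```c
/* Minimal sketch (assumption): handing a raw volume picture to DirectFB (DFB)
 * so that it lands in the picture buffer of a top OSD layer. */
#include <directfb.h>
#include <stdint.h>
#include <string.h>

static IDirectFB             *dfb;
static IDirectFBDisplayLayer *top_layer;     /* plays the role of OSD layer1 */
static IDirectFBSurface      *top_surface;   /* its picture buffer ("frame buffer1") */

int init_volume_layer(void)
{
    if (DirectFBInit(NULL, NULL) != DFB_OK)                            return -1;
    if (DirectFBCreate(&dfb) != DFB_OK)                                return -1;
    if (dfb->GetDisplayLayer(dfb, DLID_PRIMARY, &top_layer) != DFB_OK) return -1;
    top_layer->SetCooperativeLevel(top_layer, DLSCL_ADMINISTRATIVE);
    return top_layer->GetSurface(top_layer, &top_surface) == DFB_OK ? 0 : -1;
}

/* Copy the pre-decoded picture into the layer surface, then flip it on screen. */
int show_volume_picture(const uint8_t *raw_argb, int w, int h, int x, int y)
{
    void *buf;
    int   pitch;

    if (top_surface->Lock(top_surface, DSLF_WRITE, &buf, &pitch) != DFB_OK)
        return -1;
    for (int row = 0; row < h; row++)
        memcpy((uint8_t *)buf + (size_t)(y + row) * pitch + (size_t)x * 4,
               raw_argb + (size_t)row * w * 4,
               (size_t)w * 4);
    top_surface->Unlock(top_surface);

    return top_surface->Flip(top_surface, NULL, DSFLIP_NONE) == DFB_OK ? 0 : -1;
}
```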
In some embodiments, the DFB may be replaced with another framework, such as Wayland (a display server protocol).
In some embodiments, after obtaining the data of the UI interface to be displayed, the UIAPP may generate a second handle; the second data, containing the data of the UI interface to be displayed and the second handle, passes sequentially through the framework and OpenGL ES/EGL and is then transmitted to the DFB, and the DFB may generate frame buffer2 according to the second handle contained in the second data and write the second data into frame buffer2. The framework may be a UI framework, and the browser kernel may be used to load the UI framework; OpenGL ES is a subset of the OpenGL three-dimensional graphics API, and EGL is the bridge between OpenGL ES and the local window system of the display device.
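To contrast this path with the volume bar APP's direct one, the sketch below shows a typical EGL/OpenGL ES 2.0 setup and buffer swap that such a rendering path involves. It is a generic illustration only: the native display and window handles, the config attributes, and the draw calls are placeholders, not details taken from the patent.

```c
/* Minimal sketch (assumption): generic EGL/OpenGL ES 2.0 setup and frame
 * handoff for the UI/voice rendering path. The native handles are supplied
 * by the platform's window system on a real device. */
#include <EGL/egl.h>
#include <GLES2/gl2.h>

int present_ui_frame(EGLNativeDisplayType native_dpy, EGLNativeWindowType native_win)
{
    const EGLint cfg_attrs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                                 EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
                                 EGL_NONE };
    const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLDisplay dpy = eglGetDisplay(native_dpy);
    EGLConfig  cfg;
    EGLint     n_cfg;

    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL))   return -1;
    if (!eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n_cfg) || !n_cfg) return -1;

    EGLSurface surf = eglCreateWindowSurface(dpy, cfg, native_win, NULL);
    EGLContext ctx  = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
    if (surf == EGL_NO_SURFACE || ctx == EGL_NO_CONTEXT) return -1;
    eglMakeCurrent(dpy, surf, surf, ctx);

    /* ... GL ES draw calls that render the UI or voice interface go here ... */
    glClearColor(0.f, 0.f, 0.f, 0.f);
    glClear(GL_COLOR_BUFFER_BIT);

    /* The swap pushes the rendered frame toward the compositor / frame buffer;
     * this is the extra stage the volume bar APP's direct path skips. */
    return eglSwapBuffers(dpy, surf) ? 0 : -1;
}
```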
In some embodiments, after obtaining the data of the UI interface to be displayed, the intelligent voice APP may generate a third handle; the third data, containing the data of the UI interface to be displayed and the third handle, passes sequentially through the intermediate framework (the intelligent voice framework) and OpenGL ES/EGL and is then transmitted to the DFB, and the DFB may generate frame buffer3 according to the third handle contained in the third data and write the third data into frame buffer3.
In some embodiments, the display device may compose the layers according to the layer sequence corresponding to the picture buffers to obtain the image to be displayed; for example, the corresponding layers are composed in the order of frame buffer1, frame buffer2, and frame buffer3, so that in the image to be displayed the voice interaction interface is displayed above the UI interface and the volume picture is displayed above the voice interaction interface. After the image to be displayed is obtained, the image currently shown by the display can be refreshed to the image to be displayed.
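For clarity, the sketch below shows the composition conceptually as back-to-front alpha blending of the three picture buffers, with the UI interface at the back, the voice interaction interface in the middle, and the volume picture on top, matching the order of fig. 8. On a real device this blending is normally done by the DFB or the display hardware rather than in application code; the ARGB pixel layout is an assumption.

```c
/* Conceptual sketch (assumption): back-to-front alpha composition of three
 * ARGB picture buffers into one output frame. */
#include <stdint.h>

static inline uint32_t blend_argb(uint32_t dst, uint32_t src)
{
    uint32_t a = src >> 24;                                            /* source alpha, 0..255 */
    uint32_t r = (((src >> 16) & 0xff) * a + ((dst >> 16) & 0xff) * (255 - a)) / 255;
    uint32_t g = (((src >>  8) & 0xff) * a + ((dst >>  8) & 0xff) * (255 - a)) / 255;
    uint32_t b = (( src        & 0xff) * a + ( dst        & 0xff) * (255 - a)) / 255;
    return 0xff000000u | (r << 16) | (g << 8) | b;
}

/* layers[0] = UI interface (bottom), layers[1] = voice interaction interface,
 * layers[2] = volume picture (top). */
void compose_frame(uint32_t *out, uint32_t *const layers[3], int n_pixels)
{
    for (int i = 0; i < n_pixels; i++) {
        uint32_t px = layers[0][i];          /* start from the bottom layer */
        px = blend_argb(px, layers[1][i]);   /* voice interaction interface above the UI */
        px = blend_argb(px, layers[2][i]);   /* volume picture on top */
        out[i] = px;
    }
}
```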
In some embodiments, the resulting interface, in which the volume bar is not obscured by the voice interaction interface, can be seen in fig. 11: as shown in fig. 11, the volume bar is displayed above the voice interaction interface, so the user can conveniently view the real-time volume.
In some embodiments, when the top-level interface of the display is a non-voice interactive interface, the volume bar application may control the display to display the volume bar in a conventional manner after obtaining the target volume value. For example, a volume bar control may be generated from a target volume value and then the display may be controlled to display the volume bar control on the top-level interface.
In some embodiments, when the top-level interface of the display is a non-voice interactive interface, the method for controlling the display to display the volume bar control on the top-level interface may include: and forming a layer of the volume bar control and a layer of the current image of the display into a picture to be updated, and controlling the display to refresh the top-layer interface to display the picture to be updated.
In some embodiments, when the top-level interface of the display is a non-voice interactive interface, the volume value displayed by the volume bar control may change following the current volume change of the display device.
As can be seen from the above embodiments, by presetting the volume pictures, after a volume adjustment request is received a volume picture can be determined as the target volume picture according to the request and displayed directly above the voice interaction interface; the volume picture does not need to be superimposed into a layer below the voice interaction interface, so the problem of the volume picture being blocked by the voice interaction interface is avoided and the user can see the current volume. Moreover, by providing a dedicated volume bar application, the target volume picture is transmitted from that application directly to the DFB, without passing the data through an intermediate framework and the OpenGL library, which greatly reduces the operating cost and further improves the display speed of the volume bar. In addition, because no data transmission through the intermediate framework and the OpenGL library is needed when the volume bar is displayed, in a practical example only 48 MB of flash is required to ensure smooth display of the volume bar; this saves a large amount of DDR memory, keeps the display device running smoothly, and improves the user experience.
Since the foregoing embodiments refer to one another in their descriptions, the same and similar parts among the embodiments in this specification may be referred to each other and are not described again in detail here.
It should be noted that in this specification, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a circuit structure, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such circuit structure, article, or apparatus. Without further limitation, the statement "comprises a ..." does not exclude the presence of other identical elements in the circuit structure, article or apparatus that comprises the element.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure of the application herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
The above embodiments of the present application do not limit the scope of the present application.

Claims (7)

1. A display device, characterized by comprising:
a display;
a controller configured to:
receiving a volume adjustment request;
when the top layer interface of the display is a voice interaction interface, responding to the volume adjustment request, and acquiring a target volume picture corresponding to the volume adjustment request; calculating a target volume value according to the volume adjustment request and the current volume value;
selecting a target volume picture corresponding to the target volume value from a volume picture library, wherein the volume picture library comprises volume pictures corresponding to each volume value of display equipment;
transmitting first data containing the volume picture and the first handle to a DFB direct frame buffer; the first handle is generated by the volume bar application according to the acquired target volume picture; the DFB direct frame buffer area writes the first data into a picture buffer area of a first picture layer according to the first handle;
transmitting second data containing data of the UI interface to be displayed and the second handle to the DFB direct frame buffer; the second handle is generated by the UI application according to the data of the UI interface to be displayed; the DFB direct frame buffer area writes the second data in a picture buffer area of a second picture layer according to the second handle;
transmitting third data containing data of the UI interface to be displayed and a third handle to the DFB direct frame buffer; the third handle is generated by the voice application according to the data of the UI interface to be displayed; the DFB direct frame buffer area writes the third data in a picture buffer area of a third picture layer according to the third handle; the first layer is a top layer of the image to be displayed, the second layer is a layer of the image to be displayed below the first layer, and the third layer is a layer of the image to be displayed below the second layer;
superposing the volume picture above the voice interaction interface according to the picture layer sequence corresponding to the picture buffer area; and synthesizing the corresponding layers according to the sequence of the picture buffer areas of the first layer, the second layer and the third layer to obtain the volume picture to be displayed.
2. The display device of claim 1, wherein the volume picture is a picture in a non-compressed format.
3. The display device of claim 1, wherein the volume picture includes a volume bar.
4. The display device of claim 1, wherein the controller is further configured to:
when the top layer interface of the display is a non-voice interaction interface, responding to the volume adjustment request, and generating a volume bar control corresponding to the volume adjustment request;
and controlling the display to display the volume bar control on the top-layer interface.
5. The display device of claim 4, wherein the controlling the display to display the volume bar control on the top-level interface comprises:
forming a picture layer of the volume bar control and a picture layer of the current image of the display into a picture to be updated;
and controlling the display to refresh the top-layer interface to display the picture to be updated.
6. A volume display method, comprising:
receiving a volume adjustment request;
when the top layer interface of the display is a voice interaction interface, responding to the volume adjustment request, and acquiring a target volume picture corresponding to the volume adjustment request; calculating a target volume value according to the volume adjustment request and the current volume value;
selecting a target volume picture corresponding to the target volume value from a volume picture library, wherein the volume picture library comprises volume pictures corresponding to each volume value of display equipment;
transmitting first data containing the volume picture and the first handle to a DFB direct frame buffer; the first handle is generated by the volume bar application according to the acquired target volume picture; the DFB direct frame buffer area writes the first data into a picture buffer area of a first picture layer according to the first handle;
transmitting second data containing data of the UI interface to be displayed and the second handle to the DFB direct frame buffer; the second handle is generated by the UI application according to the data of the UI interface to be displayed; the DFB direct frame buffer area writes the second data in a picture buffer area of a second picture layer according to the second handle;
transmitting third data containing data of the UI interface to be displayed and a third handle to the DFB direct frame buffer; the third handle is generated by the voice application according to the data of the UI interface to be displayed; the DFB writes the third data in a picture cache region of a third picture layer according to the third handle; the first layer is a top layer of the image to be displayed, the second layer is a layer of the image to be displayed below the first layer, and the third layer is a layer of the image to be displayed below the second layer;
superposing the volume picture above the voice interaction interface according to the picture layer sequence corresponding to the picture buffer area; and synthesizing the corresponding layers according to the sequence of the picture buffer areas of the first layer, the second layer and the third layer to obtain the volume picture to be displayed.
7. The volume display method according to claim 6, further comprising:
when the top layer interface of the display is a non-voice interaction interface, responding to the volume adjustment request, and generating a volume bar control corresponding to the volume adjustment request;
and controlling the display to display the volume bar control on the top-layer interface.
CN202110175536.7A 2021-01-28 2021-02-07 Display device and volume display method Active CN113014977B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110175536.7A CN113014977B (en) 2021-02-07 2021-02-07 Display device and volume display method
PCT/CN2021/133766 WO2022160910A1 (en) 2021-01-28 2021-11-27 Display device and volume display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110175536.7A CN113014977B (en) 2021-02-07 2021-02-07 Display device and volume display method

Publications (2)

Publication Number Publication Date
CN113014977A CN113014977A (en) 2021-06-22
CN113014977B (en) 2023-08-11

Family

ID: 76384028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110175536.7A Active CN113014977B (en) 2021-01-28 2021-02-07 Display device and volume display method

Country Status (1)

Country Link
CN (1) CN113014977B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022160910A1 (en) * 2021-01-28 2022-08-04 青岛海信传媒网络技术有限公司 Display device and volume display method


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246477A (en) * 2007-12-03 2008-08-20 深圳市茁壮网络技术有限公司 Method and system for modifying and indicating attribute of interface constituent
CN102033800A (en) * 2009-10-08 2011-04-27 捷讯研究有限公司 Method for indicating volume of audio sink of portable electronic device
CN103209341A (en) * 2013-03-19 2013-07-17 深圳市龙视传媒有限公司 Volume bar display method and system and digital television terminal
CN106155640A (en) * 2015-03-24 2016-11-23 海信集团有限公司 A kind of volume display methods and device
CN107580696A (en) * 2015-05-12 2018-01-12 Lg 电子株式会社 Image display and its control method
CN108182047A (en) * 2016-12-08 2018-06-19 武汉斗鱼网络科技有限公司 The display methods and device of a kind of information volume
CN108182097A (en) * 2016-12-08 2018-06-19 武汉斗鱼网络科技有限公司 The implementation method and device of a kind of volume bar
CN106792101A (en) * 2017-01-03 2017-05-31 青岛海信电器股份有限公司 Home interface method of adjustment, device and intelligent television
CN107861665A (en) * 2017-11-16 2018-03-30 珠海市魅族科技有限公司 Reminding method and device, terminal, the readable storage medium storing program for executing of a kind of volume adjusting
CN109271230A (en) * 2018-10-10 2019-01-25 深圳Tcl新技术有限公司 Display methods, intelligent terminal and the computer readable storage medium of volume bar
JP2020039974A (en) * 2019-12-19 2020-03-19 株式会社大一商会 Game machine
CN111343492A (en) * 2020-02-17 2020-06-26 海信电子科技(深圳)有限公司 Display method and display device of browser in different layers

Also Published As

Publication number Publication date
CN113014977A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN112672195A (en) Remote controller key setting method and display equipment
CN114302021A (en) Display device and sound picture synchronization method
CN117981331A (en) Display device and display method thereof
CN113794914B (en) Display equipment and method for configuring startup navigation
CN113014977B (en) Display device and volume display method
CN113111214A (en) Display method and display equipment for playing records
CN112911371B (en) Dual-channel video resource playing method and display equipment
CN115190351B (en) Display equipment and media resource scaling control method
CN113064691B (en) Display method and display equipment for starting user interface
CN112363683B (en) Method and display device for supporting multi-layer display by webpage application
CN115103144A (en) Display device and volume bar display method
CN113784203A (en) Display device and channel switching method
CN113286185A (en) Display device and homepage display method
CN112882631A (en) Display method of electronic specification on display device and display device
CN113490030A (en) Display device and channel information display method
CN113064534A (en) Display method and display equipment of user interface
CN112882780A (en) Setting page display method and display device
CN112817679A (en) Display device and interface display method
CN113038193B (en) Method for automatically repairing asynchronous audio and video and display equipment
CN112883302B (en) Method for displaying page corresponding to hyperlink address and display equipment
CN113038221B (en) Double-channel video playing method and display equipment
CN113766164B (en) Display equipment and signal source interface display method
CN113676782B (en) Display equipment and interaction method for coexisting multiple applications
CN113709557B (en) Audio output control method and display device
CN113689856B (en) Voice control method for video playing progress of browser page and display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20221017
Address after: 83 Intekte Street, Devon, Netherlands
Applicant after: VIDAA (Netherlands) International Holdings Ltd.
Address before: 266100 Songling Road, Laoshan District, Qingdao, Shandong Province, No. 399
Applicant before: QINGDAO HISENSE MEDIA NETWORKS Ltd.
GR01 Patent grant