CN117956213A - Voice broadcasting method and display device

Info

Publication number: CN117956213A
Application number: CN202211295368.6A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: interface, updated, interface element, JavaScript file, instruction
Inventors: 易舟, 蔡培玲, 王小伟
Original and current assignee: Vidaa Netherlands International Holdings BV
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a voice broadcasting method and a display device, relating to the technical field of intelligent terminals. The method reduces the hardware requirements of voice broadcasting and thereby makes a voice broadcasting scheme easier to realize. The specific scheme is as follows: receiving a first instruction by which a user triggers the display device to display an interface of an application program; in response to the first instruction, displaying the interface and running a JavaScript file; receiving a second instruction by which the user triggers an update of the interface focus; in response to the second instruction, determining, through the JavaScript file, the target text content indicated by the updated interface focus; and broadcasting the target text content.

Description

Voice broadcasting method and display device
Technical Field
The application relates to the technical field of intelligent terminals, and in particular to a voice broadcasting method and a display device.
Background
Currently, most display devices (e.g., cell phones, televisions, etc.) provide a function of voice broadcasting the content (e.g., text) of the user interface they display. Because a user interface contains a large amount of content, the user switches the interface focus while browsing it. A related scheme uses a voice broadcasting plug-in to acquire the content indicated by the updated focus and broadcast that content by voice.
However, the voice broadcasting plug-in can only be used on a personal computer, and it occupies a large amount of memory, so the hardware conditions required to use it are demanding. As a result, voice broadcasting is difficult to implement.
Disclosure of Invention
The embodiments of the application provide a voice broadcasting method and a display device, which reduce the hardware requirements of voice broadcasting and thereby make a voice broadcasting scheme easier to implement.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
In a first aspect, there is provided a display device including: display, voice broadcast module, communicator and controller.
Wherein the display is configured to display a user interface. And the voice broadcasting module is configured to broadcast the text content.
A communicator configured to: receiving a first instruction of a user triggering display equipment to display an interface in an application program; and receiving a second instruction triggered by the user to update the interface focus.
A controller configured to: responding to the first instruction, controlling the display to display the interface, and running a JavaScript file; responding to the second instruction through the JavaScript file, and determining the target text content indicated by the updated interface focus; and controlling the voice broadcasting module to broadcast the target text content.
With reference to the first aspect, in one possible implementation manner, the controller is specifically configured to: responding to the second instruction through the JavaScript file, determining an updated interface focus, and acquiring at least one first interface element indicated by the updated interface focus; determining updated first interface elements from at least one first interface element through JavaScript files; and acquiring the updated target text content stored in the first interface element through the JavaScript file.
With reference to the first aspect, in one possible implementation manner, the controller is specifically configured to: acquiring at least one second interface element indicated by the interface focus before updating through a JavaScript file; acquiring element attributes of at least one first interface element and element attributes of at least one second interface element through a JavaScript file; and determining the updated first interface element from the at least one first interface element through the JavaScript file according to the element attribute of the at least one first interface element and the element attribute of the at least one second interface element.
Wherein the updated element attribute of the first interface element is different from the element attribute of the at least one second interface element.
With reference to the first aspect, in one possible implementation manner, the interface is a tree-structured document formed by a plurality of nodes. A controller specifically configured to: acquiring at least one first node indicated by the updated interface focus and at least one second node indicated by the interface focus before updating through a JavaScript file; determining updated first nodes from at least one first node through JavaScript files; at least one second node does not include the updated first node; and determining that the updated interface element on the first node is the updated first interface element through the JavaScript file.
With reference to the first aspect, in one possible implementation manner, the controller is specifically configured to: acquiring at least one second interface element indicated by the interface focus before updating through a JavaScript file; acquiring text content in at least one first interface element and text content in at least one second interface element through a JavaScript file; determining an updated first interface element from at least one first interface element through a JavaScript file according to text content in the at least one first interface element and text content in at least one second interface element; wherein the text content in the updated first interface element is different from the text content in the at least one second interface element.
With reference to the first aspect, in one possible implementation manner, if the first instruction is an instruction for triggering starting of the application program, the controller is further configured to, before controlling the display to display the interface and running the JavaScript file, obtain the JavaScript file in response to the first instruction and load the JavaScript file into a process of the application program.
Or if the first instruction does not trigger the launch of the application, the communicator is further configured to: before receiving a first instruction of triggering the display equipment to display an interface in the application program by a user, receiving a third instruction of triggering the starting application program by the user; and the controller is further configured to respond to the third instruction, start the application program, acquire the JavaScript file and load the JavaScript file into the process of the application program.
In a second aspect, a voice broadcasting method is provided, and the method includes: receiving a first instruction of a user triggering display equipment to display an interface in an application program; responding to the first instruction, displaying an interface and running a JavaScript file; receiving a second instruction of updating the interface focus triggered by a user; responding to the second instruction through the JavaScript file, and determining the target text content indicated by the updated interface focus; and broadcasting the target text content.
With reference to the second aspect, in one possible implementation manner, the determining, by using the JavaScript file, the target text content indicated by the updated interface focus in response to the second instruction includes: responding to the second instruction through the JavaScript file, determining an updated interface focus, and acquiring at least one first interface element indicated by the updated interface focus; determining updated first interface elements from at least one first interface element through JavaScript files; and acquiring the updated target text content stored in the first interface element through the JavaScript file.
With reference to the second aspect, in one possible implementation manner, determining, by the JavaScript file, an updated first interface element from at least one first interface element includes: acquiring at least one second interface element indicated by the interface focus before updating through a JavaScript file; acquiring element attributes of at least one first interface element and element attributes of at least one second interface element through a JavaScript file; determining an updated first interface element from at least one first interface element through a JavaScript file according to the element attribute of the at least one first interface element and the element attribute of the at least one second interface element; wherein the updated element attribute of the first interface element is different from the element attribute of the at least one second interface element.
With reference to the second aspect, in one possible implementation manner, the interface is a tree-structured document formed by a plurality of nodes. The determining, by the JavaScript file in response to the second instruction, the target text content indicated by the updated interface focus includes: acquiring at least one first node indicated by the updated interface focus and at least one second node indicated by the interface focus before updating through a JavaScript file; determining updated first nodes from at least one first node through JavaScript files; at least one second node does not include the updated first node; and determining that the updated interface element on the first node is the updated first interface element through the JavaScript file.
With reference to the second aspect, in one possible implementation manner, the determining, by using the JavaScript file, the target text content indicated by the updated interface focus in response to the second instruction includes: acquiring at least one second interface element indicated by the interface focus before updating through a JavaScript file; acquiring text content in at least one first interface element and text content in at least one second interface element through a JavaScript file; determining an updated first interface element from at least one first interface element through a JavaScript file according to text content in the at least one first interface element and text content in at least one second interface element; wherein the text content in the updated first interface element is different from the text content in the at least one second interface element.
With reference to the second aspect, in one possible implementation manner, the method further includes: if the first instruction is an instruction for triggering starting the application program, before the interface is displayed and the JavaScript file is operated, responding to the first instruction, acquiring the JavaScript file, and loading the JavaScript file into the process of the application program.
Or if the first instruction does not trigger the application program to be started, receiving a third instruction for triggering the application program to be started by the user before receiving the first instruction for triggering the display equipment to display the interface in the application program by the user; and responding to the third instruction, starting the application program, acquiring the JavaScript file, and loading the JavaScript file into the process of the application program.
In a third aspect, a display device is provided, which has the functionality to implement the method according to the second aspect described above. The functions can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In a fourth aspect, there is provided a display device including: a processor and a memory; the memory is configured to store computer instructions that, when executed by the display device, cause the display device to perform the voice broadcast method of any of the second aspects described above.
In a fifth aspect, a computer readable storage medium is provided, in which instructions are stored which, when run on a display device, cause the display device to perform the method of voice broadcasting according to any of the second aspects above.
In a sixth aspect, there is provided a computer program product comprising computer instructions which, when run on a display device, cause the display device to perform the method of voice broadcasting of any of the second aspects above.
In a seventh aspect, there is provided an apparatus (e.g. the apparatus may be a system-on-a-chip) comprising a processor for supporting a display device to implement the functions referred to in the second aspect above. In one possible design, the apparatus further includes a memory for storing program instructions and data necessary for the display device. When the device is a chip system, the device can be formed by a chip, and can also comprise the chip and other discrete devices.
The embodiment of the application provides a voice broadcasting method in which, when a display device displays any interface of an application program, a JavaScript file (which may be called a JS file) is run. The display device then receives a second instruction by which the user triggers an update of the interface focus, and, in response to the second instruction, determines through the JS file the target text content indicated by the updated interface focus. The display device controls the voice broadcasting module to broadcast the target text content. Because the JS file occupies little space and can determine the target text content indicated by the updated interface focus without relying on a large number of APIs, the memory space required for using the JS file is small. Secondly, the JS file can be used on various types of display devices (e.g., PC, mobile phone, television, etc.), i.e., the JS file does not restrict the type of display device. Because the memory space required for using the JS file is small and the type of display device is not restricted, the method provided by the embodiment of the application reduces the hardware requirements of voice broadcasting and makes a voice broadcasting scheme easy to realize.
Drawings
Fig. 1 is a schematic view of a scenario of a voice broadcasting method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a control device according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a display device according to an embodiment of the present application;
Fig. 4 is a flowchart of a voice broadcasting method according to an embodiment of the present application;
FIG. 5 is a schematic software diagram of a browser application triggering display interface according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a display device according to a second embodiment of the present application;
fig. 7 is a schematic structural diagram III of a display device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a display device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a display device according to a second embodiment of the present application;
fig. 10 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects and embodiments of the present application more apparent, exemplary embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be apparent that the described exemplary embodiments are only some, not all, of the embodiments of the present application.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms first, second, third and the like in the description, in the claims and in the above-described figures are used for distinguishing between similar objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and any variations thereof herein are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
Currently, more and more display devices (e.g., mobile phones, televisions, etc.) are provided with a function of voice broadcasting the content (e.g., text) of the user interface they display. Since a user interface contains more content than can be shown at once, the display device displays only part of the content and provides the user with a function of updating the focus of the interface (which may be referred to as the interface focus). After browsing the part of the content currently displayed, the user may input an operation to the display device to trigger an update of the interface focus. The display device receives and responds to the operation, acquires the content indicated by the updated interface focus, and displays it. The display device then voice-broadcasts the content indicated by the updated interface focus.
The related scheme provides a voice broadcast plug-in (e.g., Screen Reader™) for acquiring the content indicated by the updated focus and voice-broadcasting that content. However, the voice broadcast plug-in can only be used on a personal computer (Personal Computer, PC), and the plug-in, distributed in crx format, occupies a large amount of space. Secondly, the implementation of the voice broadcast plug-in relies on a large number of application program interfaces (Application Program Interface, API), which also occupy a certain amount of space.
In summary, the hardware conditions required to use the voice broadcast plug-in are demanding, and voice broadcasting is therefore difficult to implement.
In view of the above problem, an embodiment of the present application provides a voice broadcasting method in which, when a display device displays any interface of an application program, a JavaScript file (which may be called a JS file) is run. The display device then receives a second instruction by which the user triggers an update of the interface focus, and, in response to the second instruction, determines through the JS file the target text content indicated by the updated interface focus. The display device controls the voice broadcasting module to broadcast the target text content. Because the JS file occupies little space and can determine the target text content indicated by the updated interface focus without relying on a large number of APIs, the memory space required for using the JS file is small. Secondly, the JS file can be used on various types of display devices (e.g., PC, mobile phone, television, etc.), i.e., the JS file does not restrict the type of display device. Because the memory space required for using the JS file is small and the type of display device is not restricted, the method provided by the embodiment of the application reduces the hardware requirements of voice broadcasting and makes a voice broadcasting scheme easy to realize.
The following describes a voice broadcasting method provided by the embodiment of the application.
The display device provided by the embodiment of the application can have various implementation forms, for example, can be a display device with a display and a voice broadcasting module, such as a mobile phone, a tablet personal computer, a PC, a television, an intelligent television, a laser projection device, an electronic desktop (electronic table) and the like. The embodiment of the application does not limit the specific form of the display device. In the embodiment of the application, the display device is taken as a television set as an example for schematic description. Fig. 1 and 2 are specific embodiments of a display device of the present application.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the television 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control device 100 may be a remote controller, and the communication between the remote controller and the television 200 includes infrared protocol communication, and other short-range communication modes, and the television 200 is controlled by a wireless or wired mode. The user may control the television 200 by inputting user instructions through keys on a remote control, voice input, control panel input, etc.
It should be noted that, in the embodiment of the present application, the television 200 and the control device 100 may use an infrared protocol for communication, or may use other communication protocols for communication. The communication protocol between the television 200 and the control device 100 is not limited in the embodiments of the present application, and the following embodiments will exemplify the use of infrared protocol communication between the television 200 and the control device 100. Where the television 200 and the control device 100 communicate using an infrared protocol, the instruction (e.g., the first instruction, the second instruction) sent by the control device 100 to the television 200 includes any system code currently used by the control device 100.
In some embodiments, the user may also control the television 200 using a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.). For example, the television 200 is controlled using an application running on a smart device.
In some embodiments, the television set 200 may not receive instructions using the above-described smart device 300 or the control apparatus 100, but receive control of the user through touch or gesture, or the like.
In some embodiments, the television 200 may also be controlled in ways other than through the control apparatus 100 and the smart device 300. For example, a voice-command module configured inside the television 200 may directly receive the user's voice commands, or the user's voice commands may be received through a voice control device set up outside the television 200. The method according to the embodiment of the present application is described below taking the control device 100 as an example.
In some embodiments, the television 200 is also in data communication with a server 400. Television set 200 may be permitted to communicate via a local area network (Local Area Network, LAN), a wireless local area network (Wireless Local Area Networks, WLAN) and other networks. The server 400 may provide various content and interactions to the television 200. The server 400 may be a cluster, or may be multiple clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 in accordance with an exemplary embodiment. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 120, a user input/output interface 130, a memory, and a power supply. The control device 100 may receive an operation instruction input by a user, and convert the operation instruction into an instruction that the television 200 can recognize and respond to, and may mediate interactions between the user and the television 200.
Fig. 3 is a schematic structural diagram of a television according to an embodiment of the present application.
As shown in fig. 3, the television 200 includes at least one of a modem 210, a communicator 220 (which may also be referred to as a communication module 220), a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface 280.
In some embodiments, the controller 250 includes: a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), at least one of a first interface to an nth interface for input/output, a communication Bus (Bus), and the like.
The display 260 includes a display screen component for presenting pictures and a driving component for driving image display. It receives image signals output from the controller 250 and displays video content, image content, menu manipulation interfaces, and the user manipulation user interface (UI).
The display 260 may be a liquid crystal display, an Organic Light-Emitting Diode (OLED) display, or a projection device with a projection screen.
The communicator 220 is a component for communicating with external devices according to various communication protocol types. For example: the communicator 220 may comprise at least one of a wireless network communication technology Wi-Fi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared module (e.g., an infrared receiver and an infrared transmitter). The television set 200 can establish transmission and reception of control signals and data signals with the control device 100 through the communicator 220.
The user interface 280 may be used to receive control signals from the control apparatus 100.
The detector 230 is used to collect signals of the external environment or of interaction with the outside. For example, the detector 230 may include a light receiver, a sensor for capturing the intensity of ambient light; or an image collector, such as a camera, which may be used to collect external environment scenes, user attributes or user interaction gestures; or a sound collector, such as a microphone, for receiving external sounds.
The external device interface 240 may include, but is not limited to, the following: high definition multimedia interface (High Definition Multimedia Interface, HDMI), analog or data high definition component input interface (which may be referred to as a component), composite video input interface CVBS, universal serial bus (Universal Serial Bus, USB) input interface (which may be referred to as a USB port), and the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The modem 210 receives broadcast television signals through a wired or wireless reception manner, and demodulates audio and video signals, such as EPG data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
The controller 250 controls the operation of the television 200 and responds to the user's operations through various software control programs stored in the memory. The controller 250 controls the overall operation of the television 200. For example, the controller 250 acquires audio/video data and subtitle data in response to a user-triggered instruction to play a video, and controls the display 260 to play the video according to the audio/video data.
The user may input user commands through a user interface UI displayed on the display 260, and the user input interface receives the user input commands through the user interface UI.
A "user interface UI" is a media interface for interaction and exchange of information between an application or operating system and a user, which enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of a user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a graphically displayed user interface that is related to computer operations. It may be an interface element such as an icon, a window, a control, etc. displayed in the display screen of the television 200, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
The methods in the following embodiments may be implemented in a display device having the above-described hardware structure.
The following describes the voice broadcasting method provided by the embodiment of the present application in detail with reference to fig. 4. As shown in fig. 4, continuing to illustrate the television 200 as a display device, the voice broadcasting method provided in the embodiment of the present application may include the following S401 to S405.
S401, the television 200 receives a first instruction that the user triggers the display device to display an interface in the application program.
The television 200 is installed with the application. The user may input a first operation to the control device 100 triggering the display of one of the interfaces in the application. The control device 100 generates the first instruction in response to the first operation, and transmits the first instruction to the television 200.
In some embodiments, the application may be any application capable of browsing an interface, such as a browser application or a chat application.
In some embodiments, if the first instruction is an instruction to trigger starting an application, the first instruction is used to trigger starting the application and trigger displaying a first interface in the application (e.g., a main interface or a default interface in the application).
Or if the first instruction does not trigger the application to be started, the television 200 may receive the first instruction in the case that the application is run in the foreground.
S402, the television 200 responds to the first instruction, displays an interface and runs the JS file.
In response to the first instruction, the television 200 can control the display to display the interface and run the JS file via the application. Once the JS file starts to run, it monitors changes of the interface focus. The JS file provides the functions of monitoring focus changes and acquiring the text content indicated by the updated focus.
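As a minimal illustrative sketch only (the application does not publish the JS file itself), such a monitoring script could register listeners for user-interface events and read the element that currently holds the focus; the names onPossibleFocusChange, determineTargetText and broadcast are hypothetical placeholders.

// Hypothetical sketch of a JS file that monitors interface focus changes; the function
// names are placeholders and do not appear in this application.
function determineTargetText() {
  var focused = document.activeElement;                    // element holding the updated interface focus
  return focused ? (focused.textContent || '').trim() : '';
}
function broadcast(text) {
  console.log('broadcast:', text);                         // stand-in for the voice broadcast module
}
function onPossibleFocusChange() {
  var targetText = determineTargetText();
  if (targetText) broadcast(targetText);
}
// UI events such as remote-control keys or clicks may move the interface focus.
document.addEventListener('keydown', onPossibleFocusChange, true);
document.addEventListener('click', onPossibleFocusChange, true);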
In some embodiments, the interface contains more content than can be shown at once, and the television 200 displays a portion of the content in the interface in response to the first instruction.
In some embodiments, if the first instruction is an instruction to trigger starting the application, the television 200 responds to the first instruction by first obtaining the JavaScript file and loading it into the process of the application. The television 200 then displays the interface and runs the JavaScript file.
Or if the first instruction does not trigger the application to be started, before S401, the television 200 obtains the JS file and loads the JS file into the process of the application. The process of the television 200 obtaining the JS file and loading the JS file into the application may include: receiving a third instruction for triggering and starting an application program by a user; responding to a third instruction, starting an application program, and acquiring a JS file; the JS file is loaded into the process of the application.
The voice broadcasting method according to the embodiment of the present application is described below taking the example in which the first instruction does not trigger starting of the application and the application is a browser application. As shown in fig. 5, the voice broadcasting method includes S501-S503 before S401, and S402 includes S504-S506.
S501, the television 200 receives a third instruction that the user triggers to start the application.
The control device 100 may receive a third operation of the user input triggering the television 200 to start the application, and generate a third instruction in response to the third operation. The control device 100 sends the third instruction to the television 200 in the on state.
S502, the television 200 responds to the third instruction, starts a browser process and obtains the JS file.
In response to the third instruction, the television 200 initiates a process of the browser application (i.e., the browser process), which may control the display to display a default interface or a main interface of the browser application.
In some embodiments, the browser process controls the display to display each interface (e.g., a main interface) and initiates an interface generation process (e.g., a render process) corresponding to each interface. The interface generation process is used to generate the corresponding interface. As shown in fig. 6, the browser process may control the display to display a plurality of interfaces in sequence, and may therefore start a plurality of interface generation processes in sequence. The plurality of interface generation processes and the plurality of interfaces are in one-to-one correspondence. The browser process and each interface generation process communicate using an Inter-Process Communication (IPC) mechanism.
Secondly, at any moment the browser process displays one interface and runs the one interface generation process corresponding to that interface.
For example, each interface of the browser application may be a web page or tab page in the browser application.
In some embodiments, the JS file is stored locally by the television 200, and the television 200 can obtain the JS file directly. Alternatively, the television 200 may download the JS file from the server 400.
S503, the television 200 loads the JS file into a browser process.
The television 200 may upload the JS file into a configuration storage path (also referred to as a configuration address) provided by the browser application to effect loading of the JS file into the application's process.
S504, the browser process in the television 200 responds to the first instruction to start an interface generation process corresponding to the interface.
S505, the interface generating process in the television 200 generates the interface, and controls the display to display the interface.
S506, the interface generating process in the television 200 triggers the JS file to be run.
After the interface generating process triggers and runs the JS file, the JS file can monitor the focus change and acquire the text content indicated by the updated focus.
S403, the television 200 receives a second instruction of triggering updating of the interface focus by the user.
The television 200 displays a portion of the content in the interface. When the user needs to view other contents not displayed in the interface, a second operation triggering updating of the focus of the interface may be input to the control apparatus 100. The control device 100 generates the second instruction in response to the second operation, and transmits the second instruction to the television 200.
For example, when the control device 100 is a remote controller, the second operation may be a pressing operation of any one of the up, down, left, and right keys included in the remote controller. When the control device 100 is a mouse, the second operation may be an operation of a sliding interface.
S404, the television 200 responds to the second instruction through the JS file to determine the target text content indicated by the updated interface focus.
When the interface focus in the interface displayed by the television 200 changes, the television 200 determines the updated interface focus through the JS file, and then determines the target text content indicated by the updated interface focus.
In some embodiments, the television 200 may determine, via the JS file, all text contents indicated by the updated interface focus, where all text contents indicated by the updated interface focus are target text contents.
In other embodiments, as the interface focus changes, some text content indicated by the updated interface focus may be the same as text content indicated by the interface focus before the update. Since unchanged text content does not need to be broadcast, the television 200 can determine the target text content from all the text content indicated by the updated interface focus. The target text content may refer to the text content indicated by the updated interface focus that differs from the text content indicated by the interface focus before the update (i.e., the changed text content).
Illustratively, taking an application as a browser application, each interface in the browser application may be made up of a plurality of interface elements (which may also be referred to as page elements). The content in the interface is stored in each interface element. For example, an interface generated using a document object model (Document Object Model, DOM) is made up of multiple interface elements, and interface elements in an interface generated using a DOM may also be referred to as DOM elements.
Further, the process of the television 200 determining all text contents indicated by the updated interface focus may include: at least one first interface element indicated by the updated interface focus is determined. The text content in all interface elements indicated by the updated interface focus is all text content indicated by the updated interface focus. The television set 200 may further determine the target text content from all text contents indicated by the updated interface focus, including: an updated first interface element is determined from the at least one first interface element indicated by the updated interface focus. The updated first interface element is different from the interface element indicated by the interface focus before updating.
In addition, the process by which the television 200 determines all interface elements indicated by the updated interface focus may be referred to as "coarse positioning interface elements"; the process by which the television set 200 determines target text content from all text content indicated by the updated interface focus may be referred to as "pinpointing interface element". As shown in fig. 7, S404 may include a "coarse positioning interface element" and a "fine positioning interface element".
Optionally, the process of implementing "coarse positioning of interface elements" by the television 200 through the JS file may include: the JS file listens for user interface (UI) events using a first listening function (e.g., an activeElement function); at the moment a UI event is detected, the listening function acquires the updated interface focus; the JS file then acquires at least one first interface element indicated by the updated interface focus. A UI event may refer to an interaction with the television 200 triggered by the control device 100, for example a mouse click, a mouse movement, or a keyboard key being pressed.
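A minimal sketch of this "coarse positioning" step, under the assumption that the interface is a DOM document and that document.activeElement serves as the first listening target, might look as follows (collectFirstInterfaceElements is a hypothetical name):

// Hypothetical sketch of coarse positioning: on a UI event, take the element that holds
// the focus and treat it together with its descendants as the first interface elements.
function collectFirstInterfaceElements() {
  var focusHolder = document.activeElement;            // updated interface focus
  if (!focusHolder) return [];
  var elements = [focusHolder];
  var descendants = focusHolder.querySelectorAll('*'); // further elements indicated by the focus
  for (var i = 0; i < descendants.length; i++) {
    elements.push(descendants[i]);
  }
  return elements;
}
document.addEventListener('keydown', function () {
  var firstElements = collectFirstInterfaceElements();
  console.log('first interface elements:', firstElements.length);
  // firstElements is then handed to the "fine positioning" step described below.
}, true);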
Optionally, the process of implementing "precise positioning of interface elements" by the television 200 through the JS file may include: the JS file listens for changes in the interface using a second listening function (e.g., a mutation observer, MutationObserver). For example, changes of an interface generated using the DOM (i.e., DOM changes) may include: the addition and deletion of nodes, changes of element attributes, changes of text content, and the like.
It is appreciated that the element attributes of different interface elements differ; therefore, the JS file can monitor changes of element attributes using the second listening function. An interface element whose element attributes have changed is the updated first interface element, which differs from the interface elements indicated by the interface focus before the update. If the text content in the updated first interface element differs from the text content indicated by the interface focus before the update, the text content in the updated first interface element can be voice-broadcast.
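A minimal sketch of this "fine positioning" step with the standard MutationObserver interface is shown below; which mutation types are observed, and how a candidate element is recorded, are assumptions made for illustration:

// Hypothetical sketch of fine positioning: a MutationObserver watches the interface for
// attribute changes, text changes, and node addition/deletion.
var observer = new MutationObserver(function (mutations) {
  mutations.forEach(function (mutation) {
    if (mutation.type === 'attributes' || mutation.type === 'characterData' || mutation.type === 'childList') {
      var changed = mutation.target.nodeType === Node.ELEMENT_NODE
        ? mutation.target
        : mutation.target.parentElement;        // for text nodes, report their owning element
      console.log('candidate updated element:', changed);
    }
  });
});
observer.observe(document.body, {
  subtree: true,        // watch the whole interface
  attributes: true,     // changes of element attributes
  characterData: true,  // changes of text content
  childList: true       // addition and deletion of nodes
});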
Alternatively, the television 200 may employ the following four implementations to determine all text contents indicated by the updated interface focus, and then determine the target text content from all text contents indicated by the updated interface focus.
In a first implementation, the interface is composed of a plurality of interface elements, for example, an interface generated using DOM is composed of a plurality of interface elements, and the television set 200 may perform the following steps through the JS file to determine the target text content: responding to the second instruction, determining an updated interface focus, and acquiring at least one first interface element indicated by the updated interface focus; determining an updated first interface element from the at least one first interface element; and acquiring the target text content stored in the updated first interface element. The updated first interface element may be a different interface element than the at least one second interface element indicated by the pre-update interface focus.
Further, the television 200 may determine, via the JS file, the updated first interface element by at least the following steps: acquiring at least one second interface element indicated by the interface focus before updating; acquiring element attributes of at least one first interface element and element attributes of at least one second interface element; and determining the updated first interface element from the at least one first interface element according to the element attribute of the at least one first interface element and the element attribute of the at least one second interface element through the JS file. Wherein the updated element attribute of the first interface element is different from the element attribute of the at least one second interface element.
Illustratively, the element attributes of any interface element may include at least one of: a first attribute for representing an element identification (e.g., the identifier (id) attribute), a second attribute for representing an element type (e.g., the class attribute), a third attribute for representing a shortcut key that brings an element into focus (e.g., the accesskey attribute), a fourth attribute for representing an element style (e.g., the style attribute, which holds the Cascading Style Sheets (CSS) style of an element), and so on.
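A minimal sketch of this attribute comparison is shown below; the choice of attributes that form the comparison key is an illustrative assumption, and attributeKey / findUpdatedElements are hypothetical names:

// Hypothetical sketch of the first implementation: compare element attributes of the
// elements indicated before and after the focus update.
function attributeKey(el) {
  // Build a comparable key from attributes such as id, class, accesskey and style.
  return [el.id, el.className, el.getAttribute('accesskey') || '', el.getAttribute('style') || ''].join('|');
}
function findUpdatedElements(firstElements, secondElements) {
  var seen = {};
  secondElements.forEach(function (el) { seen[attributeKey(el)] = true; });
  // An element whose attributes do not appear among the pre-update elements is treated as updated.
  return firstElements.filter(function (el) { return !seen[attributeKey(el)]; });
}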
In a second implementation, the television 200 may perform the following steps through the JS file to determine the target text content: acquiring at least one second interface element indicated by the interface focus before updating; acquiring text content in at least one first interface element and text content in at least one second interface element; determining an updated first interface element from the at least one first interface element according to the text content in the at least one first interface element and the text content in the at least one second interface element; acquiring target text content stored in the updated first interface element; wherein the text content in the updated first interface element is different from the text content in the at least one second interface element.
In a third implementation manner, the interface is a document with a tree structure formed by a plurality of nodes, for example, a document with a tree structure formed by a plurality of nodes is generated by adopting a DOM, and then the television 200 may perform the following steps through the JS file to determine the target text content: acquiring at least one first node indicated by the updated interface focus and at least one second node indicated by the interface focus before updating; determining an updated first node from the at least one first node; at least one second node does not include the updated first node; determining that the interface element on the updated first node is the updated first interface element; and acquiring the target text content stored in the updated first interface element.
If the interface is a tree-structured document composed of a plurality of nodes, all contents (such as labels, interface elements, text contents stored in interface elements) in the interface can be considered as nodes, and each content in the interface is a node.
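A minimal sketch of this node comparison, assuming the nodes indicated before and after the update are available as arrays, is:

// Hypothetical sketch of the third implementation: an updated first node is a node indicated
// by the updated focus that was not indicated by the focus before the update.
function findUpdatedNodes(firstNodes, secondNodes) {
  return firstNodes.filter(function (node) {
    return secondNodes.indexOf(node) === -1;   // not among the pre-update nodes
  });
}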
In a fourth implementation manner, the interface is a document with a tree structure formed by a plurality of nodes, for example, a document with a tree structure formed by a plurality of nodes is generated by adopting a DOM, and then the television 200 may perform the following steps through the JS file to determine the target text content: acquiring at least one first node indicated by the updated interface focus and the acquisition time of each first node in the at least one first node; determining updated first nodes from at least one first node according to the acquisition time of each first node; the updated interface element on the first node is the updated first interface element; and acquiring the target text content stored in the updated first interface element. The updated acquisition time of the first node is later than the acquisition time of other first nodes; the other first nodes are first nodes except the updated first node in the at least one first node. For example, the updated first node is the first node with the latest acquisition time in the at least one first node.
It should be noted that, when the interface focus changes, the content in the interface displayed by the television 200 may change by scrolling, so the first nodes indicated by the updated interface focus may be acquired by the television 200 at different times between the moment the interface focus starts to change and the moment it finishes changing.
In a fifth implementation, the interface includes some interface elements that do not need to be broadcast, for example interface elements that store no text content (e.g., interface elements that store video) and interface elements whose text content need not be broadcast (e.g., the content of an input box may not be broadcast). Therefore, the JS file can judge, according to the tag of an interface element, whether the interface element needs to be broadcast, and filter out the interface elements that do not need broadcasting.
Specifically, the television 200 may perform the following steps through the JS file to determine the target text content: first, obtain the updated first interface elements using any one of the first, second, third and fourth implementations; then delete, from the updated first interface elements, the interface elements whose tag is a preset tag, to obtain the filtered first interface elements; and acquire the target text content stored in the filtered first interface elements. The preset tags may include tags that indicate that the content of an interface element is not to be broadcast, for example the input tag representing an input control, the select tag defining a selection list, the div block tag, and the video tag defining a video.
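A minimal sketch of the tag-based filter is shown below; the list of preset tags simply mirrors the examples above and is not exhaustive:

// Hypothetical sketch of the tag-based filter: drop elements whose tag indicates that
// their content should not be broadcast.
var PRESET_TAGS = ['INPUT', 'SELECT', 'DIV', 'VIDEO'];
function filterByTag(elements) {
  return elements.filter(function (el) {
    return PRESET_TAGS.indexOf(el.tagName) === -1;   // keep only elements that should be broadcast
  });
}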
In some embodiments, if the interface is a document with a tree structure formed by a plurality of nodes, after the television 200 obtains the updated first interface element, the updated first interface element may be further filtered according to the node relationship of the updated first interface element. The television 200 obtains text content in the screened first interface element. The text content in the first interface element after screening is the target text content.
For example, the updated first interface elements may include a plurality of first interface elements. The television 200 may perform the following steps on the updated first interface elements to obtain the filtered first interface elements: deleting redundant first interface elements that lie on parent-child nodes; and, for the plurality of first interface elements on sibling nodes, deleting the first interface elements with repeated node patterns.
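A minimal sketch of this node-relationship filtering is shown below; keeping the ancestor element for parent-child pairs and dropping structurally identical siblings are illustrative assumptions, not the only possible reading:

// Hypothetical sketch of node-relationship filtering: drop an element when another candidate
// already contains it (parent-child), and drop sibling elements with a repeated node pattern.
function filterByNodeRelation(elements) {
  return elements.filter(function (el, i) {
    for (var j = 0; j < elements.length; j++) {
      if (i === j) continue;
      if (elements[j].contains(el)) return false;              // child of another candidate
      if (j < i && elements[j].isEqualNode(el)) return false;  // duplicate sibling pattern
    }
    return true;
  });
}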
The voice broadcasting method provided by the embodiment of the present application is described below taking the example in which the television 200 obtains the target text content through the JS file using the fifth implementation, and, within that fifth implementation, obtains the updated first interface element using the fourth implementation. As shown in fig. 8, the voice broadcasting method may include S401 to S403, and S404 may include S801 to S805.
S801, acquiring at least one first node indicated by the updated interface focus and acquisition time of each first node in the at least one first node through the JS file.
S802, determining updated first nodes from at least one first node according to the acquisition time of each first node through the JS file; the updated interface element on the first node is the updated first interface element.
S803, deleting the interface element with the label being the preset label in the updated first interface element through the JS file to obtain the filtered first interface element.
S804, judging whether the filtered first interface element is in the viewport or not through the JS file.
If the filtered first interface element is within the viewport, S805 is performed. If it is not within the viewport, the flow ends. The viewport refers to the window of the display in the television 200.
For example, the JS file may determine whether the filtered first interface element is in the viewport according to the size of the display and the coordinates of the filtered first interface element in the interface.
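A minimal sketch of such a viewport check, assuming the coordinates come from getBoundingClientRect and the window size from window.innerWidth / window.innerHeight, is:

// Hypothetical sketch of the viewport check: compare the element's coordinates in the
// interface with the size of the display window.
function isInViewport(el) {
  var rect = el.getBoundingClientRect();
  return rect.bottom > 0 && rect.right > 0 &&
         rect.top < window.innerHeight && rect.left < window.innerWidth;
}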
In some embodiments, the filtered first interface element may include a plurality of first interface elements, and if any of the filtered first interface elements is within the viewport, then it is determined that the filtered first interface element is within the viewport.
Or if all the first interface elements in the filtered first interface elements are in the viewport, determining that the filtered first interface elements are in the viewport.
S805, obtaining the target text content stored in the filtered first interface element through the JS file.
In some embodiments, if a portion of the first interface elements in the filtered first interface elements are within the viewport, the stored text content (i.e., the target text content) may be obtained for the first interface elements in the viewport in the filtered first interface elements.
It should be noted that, in addition to executing S804 after S803 as shown in fig. 8, S804 may also be executed before S803, and the execution sequence of executing S804 is not limited in the embodiment of the present application.
S405, the television 200 broadcasts the target text content.
After the JS file in the television 200 determines the target text content, the voice broadcast module in the television 200 may be invoked to broadcast the target text content.
In some embodiments, the voice broadcast module may include a text conversion module and a speaker. The text conversion module converts the target text content into voice; the speaker plays the voice.
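The embodiment does not fix how the text conversion module is implemented. As one possible illustration only, in a browser environment the standard Web Speech API could perform the conversion and playback:

// One possible realization of the text conversion step, using the standard Web Speech API;
// this is an assumption for illustration, not the voice broadcast module defined above.
function broadcastTargetText(targetText) {
  if (window.speechSynthesis) {
    var utterance = new SpeechSynthesisUtterance(targetText);
    window.speechSynthesis.speak(utterance);   // the speaker plays the synthesized voice
  }
}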
In some embodiments, besides determining the target text content indicated by the updated interface focus in response to the second instruction, the television 200 may also determine the target text content indicated by the updated interface focus when other signals are detected. For example, after the television 200 runs the JS file, the JS file may begin detecting whether a CSS style change has occurred and/or whether a UI event has occurred. If at least one of a CSS style change and a UI event is detected, the JS file may determine the target text content indicated by the updated interface focus.
The foregoing description of the solution provided by the embodiments of the present application has been mainly presented in terms of a method. To achieve the above functions, it includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the present application may divide functional modules of a display device (e.g., the television 200) according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
The embodiment of the application also provides a display device. As shown in fig. 9, the display device 900 includes: a display 901, a voice broadcast module 902, a communicator 903, and a controller 904.
Wherein the display 901 is configured to display a user interface. The voice broadcasting module 902 is configured to broadcast text content.
A communicator 903 configured to: receiving a first instruction of a user triggering the display device to display an interface in an application program; and receiving a second instruction triggered by the user to update the interface focus.
A controller 904 configured to: responding to the first instruction, controlling the display 901 to display an interface, and running a JavaScript file; responding to the second instruction through the JavaScript file, and determining the target text content indicated by the updated interface focus; the voice broadcast module 902 is controlled to broadcast the target text content.
In one possible implementation, the controller 904 is specifically configured to: responding to the second instruction through the JavaScript file, determining the updated interface focus, and acquiring at least one first interface element indicated by the updated interface focus; determining an updated first interface element from the at least one first interface element through the JavaScript file; and acquiring the target text content stored in the updated first interface element through the JavaScript file.
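For illustration only, the overall flow run by the JavaScript file may be sketched as follows. Locating the focused elements through document.activeElement and its descendants is an assumption of this sketch, and findUpdatedElement stands for any one of the three variants sketched below.

let previousElements = [];

// Sketch of: determine the updated focus, get its first interface elements,
// determine the updated first interface element, read its stored text.
function handleFocusUpdate(findUpdatedElement) {
  const focusNode = document.activeElement || document.body;
  const currentElements = [focusNode, ...focusNode.querySelectorAll('*')];
  const updated = findUpdatedElement(currentElements, previousElements);
  previousElements = currentElements;
  return updated ? updated.textContent.trim() : null;
}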
In one possible implementation, the controller 904 is specifically configured to: acquiring at least one second interface element indicated by the interface focus before updating through a JavaScript file; acquiring element attributes of at least one first interface element and element attributes of at least one second interface element through a JavaScript file; and determining the updated first interface element from the at least one first interface element through the JavaScript file according to the element attribute of the at least one first interface element and the element attribute of the at least one second interface element.
Wherein the updated element attribute of the first interface element is different from the element attribute of the at least one second interface element.
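For illustration only, the attribute-based variant may be sketched as follows; serializing each element's attribute list for comparison is a simplification assumed by this sketch.

// Serialize the element attributes so two elements can be compared.
function attributesOf(element) {
  return Array.from(element.attributes)
    .map((attr) => `${attr.name}=${attr.value}`)
    .sort()
    .join(';');
}

// The updated first interface element is the one whose attributes differ
// from the attributes of every second interface element.
function findUpdatedByAttributes(firstElements, secondElements) {
  const previous = new Set(secondElements.map(attributesOf));
  return firstElements.find((el) => !previous.has(attributesOf(el))) || null;
}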
In one possible implementation, the interface is a tree-structured document composed of a plurality of nodes. The controller 904 is specifically configured to: acquiring at least one first node indicated by the updated interface focus and at least one second node indicated by the interface focus before updating through the JavaScript file; determining an updated first node from the at least one first node through the JavaScript file, wherein the at least one second node does not include the updated first node; and determining that the interface element on the updated first node is the updated first interface element through the JavaScript file.
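For illustration only, the node-based variant may be sketched as follows, treating the interface as a DOM tree; the set-based comparison is an assumption of this sketch.

// The updated first node is a first node that is not among the second nodes;
// the interface element on that node is the updated first interface element.
function findUpdatedByNodes(firstNodes, secondNodes) {
  const previous = new Set(secondNodes);
  return firstNodes.find((node) => !previous.has(node)) || null;
}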
In one possible implementation, the controller 904 is specifically configured to: acquiring at least one second interface element indicated by the interface focus before updating through a JavaScript file; acquiring text content in at least one first interface element and text content in at least one second interface element through a JavaScript file; determining an updated first interface element from at least one first interface element through a JavaScript file according to text content in the at least one first interface element and text content in at least one second interface element; wherein the text content in the updated first interface element is different from the text content in the at least one second interface element.
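For illustration only, the text-based variant may be sketched as follows; trimming the text content before comparison is an assumption of this sketch.

// The updated first interface element is the one whose text content differs
// from the text content of every second interface element.
function findUpdatedByText(firstElements, secondElements) {
  const previousTexts = new Set(secondElements.map((el) => el.textContent.trim()));
  return firstElements.find((el) => !previousTexts.has(el.textContent.trim())) || null;
}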
In one possible implementation, if the first instruction is an instruction to trigger starting the application program, the controller 904 is further configured to, before controlling the display 901 to display the interface and running the JavaScript file, obtain the JavaScript file in response to the first instruction, and load the JavaScript file into a process of the application program.
Or, if the first instruction does not trigger the launch of the application, the communicator 903 is further configured to: before receiving the first instruction of the user triggering the display device to display the interface in the application program, receiving a third instruction of the user triggering the starting of the application program; the controller 904 is further configured to start the application program in response to the third instruction, obtain the JavaScript file, and load the JavaScript file into a process of the application program.
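For illustration only, one way to load the JavaScript file into the page of a web application is sketched below; injecting a script element with a hypothetical local URL is an assumption of this sketch, the embodiment only requiring that the file be loaded into the process of the application program.

// Load the voice-broadcast JavaScript file into the application's page
// (the URL here is hypothetical).
function loadBroadcastScript(url) {
  const script = document.createElement('script');
  script.src = url;
  document.head.appendChild(script);
}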
Of course, the display device 900 provided in the embodiment of the present application includes, but is not limited to, the above modules; for example, the display device 900 may further include a memory. The memory may be used to store executable instructions (e.g., software code) of the display device 900, and may also be used to store data generated by the display device 900 during operation, such as the target text content.
The embodiment of the application also provides a display device, which comprises: a processor and a memory; the memory is used for storing computer instructions, and when the display device runs, the processor executes the computer instructions stored in the memory, so that the display device executes the voice broadcasting method provided by the embodiment of the application.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores computer instructions, and when the computer instructions run on the display device, the display device can execute the voice broadcasting method provided by the embodiment of the application.
For example, the computer readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
The embodiment of the application also provides a computer program product containing computer instructions, which enable the display device to execute the voice broadcasting method provided by the embodiment of the application when the computer instructions are run on the display device.
The embodiment of the application also provides a device (for example, the device may be a chip system), which comprises a processor for supporting the display device to realize the voice broadcasting method provided by the embodiment of the application. In one possible design, the device further includes a memory for storing program instructions and data necessary for the display device. When the device is a chip system, the device may be formed by a chip, or may comprise the chip and other discrete devices.
Illustratively, as shown in FIG. 10, a system-on-chip provided by an embodiment of the present application may include at least one processor 1001 and at least one interface circuit 1002. The processor 1001 may be a processor in the television set 200 described above. The processor 1001 and the interface circuit 1002 may be interconnected by wires. The processor 1001 may receive and execute computer instructions from the memory of the television set 200 described above through the interface circuit 1002. The computer instructions, when executed by the processor 1001, may cause the television 200 to perform the steps performed by the television 200 in the above-described embodiments. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated; in practical application, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus (e.g., the display device) may be divided into different functional modules to perform all or part of the functions described above. For the specific working processes of the above-described system, apparatus (e.g., the display device) and unit, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus (e.g., the display device) and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; for instance, the division of the modules or units is merely a logical function division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between the parts may be an indirect coupling or communication connection via some interfaces, devices or units, and may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in whole or in part in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk, and the like.
The foregoing is merely a specific embodiment of the present application, and the present application is not limited thereto; any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A display device, the display device comprising:
A display configured to display a user interface;
The voice broadcasting module is configured to broadcast text contents;
a communicator configured to: receiving a first instruction of a user triggering the display device to display an interface in an application program; receiving a second instruction of updating the interface focus triggered by a user;
a controller configured to:
Responding to the first instruction, controlling the display to display the interface, and running a JavaScript file;
Responding to the second instruction through the JavaScript file, and determining the target text content indicated by the updated interface focus;
And controlling the voice broadcasting module to broadcast the target text content.
2. The display device of claim 1, wherein the controller is specifically configured to:
Determining the updated interface focus through the JavaScript file in response to the second instruction, and acquiring at least one first interface element indicated by the updated interface focus;
determining an updated first interface element from the at least one first interface element through the JavaScript file;
And acquiring the target text content stored in the updated first interface element through the JavaScript file.
3. The display device of claim 2, wherein the controller is specifically configured to:
Acquiring at least one second interface element indicated by the interface focus before updating through the JavaScript file;
acquiring element attributes of the at least one first interface element and element attributes of the at least one second interface element through the JavaScript file;
Determining the updated first interface element from the at least one first interface element according to the element attribute of the at least one first interface element and the element attribute of the at least one second interface element through the JavaScript file; wherein the updated element attribute of the first interface element is different from the element attribute of the at least one second interface element.
4. The display device according to claim 2, wherein the interface is a document of a tree structure composed of a plurality of nodes;
The controller is specifically configured to:
Acquiring at least one first node indicated by the updated interface focus and at least one second node indicated by the interface focus before updating through the JavaScript file;
Determining an updated first node from the at least one first node through the JavaScript file; the at least one second node does not include the updated first node;
And determining that the interface element on the updated first node is the updated first interface element through the JavaScript file.
5. The display device of claim 2, wherein the controller is specifically configured to:
Acquiring at least one second interface element indicated by the interface focus before updating through the JavaScript file;
Acquiring text contents in the at least one first interface element and text contents in the at least one second interface element through the JavaScript file;
determining the updated first interface element from the at least one first interface element according to the text content in the at least one first interface element and the text content in the at least one second interface element through the JavaScript file; wherein the text content in the updated first interface element is different from the text content in the at least one second interface element.
6. The display device of any one of claims 1-5, wherein,
If the first instruction is an instruction for triggering the starting of the application program, the controller is further configured to, before the display is controlled to display the interface and the JavaScript file is run, respond to the first instruction to acquire a JavaScript file and load the JavaScript file into the process of the application program;
Or, if the first instruction does not trigger the launching of the application, the communicator is further configured to: before the first instruction of the user triggering the display device to display the interface in the application program is received, receive a third instruction of the user triggering the starting of the application program; the controller is further configured to respond to the third instruction, start the application program, obtain a JavaScript file, and load the JavaScript file into a process of the application program.
7. A method of voice broadcasting, the method comprising:
receiving a first instruction of a user triggering a display device to display an interface in an application program;
Responding to the first instruction, displaying the interface and running a JavaScript file;
receiving a second instruction of updating the interface focus triggered by a user;
Responding to the second instruction through the JavaScript file, and determining the target text content indicated by the updated interface focus;
and broadcasting the target text content.
8. The method of claim 7, wherein the determining, by the JavaScript file in response to the second instruction, the target text content indicated by the updated interface focus comprises:
Determining the updated interface focus through the JavaScript file in response to the second instruction, and acquiring at least one first interface element indicated by the updated interface focus;
determining an updated first interface element from the at least one first interface element through the JavaScript file;
And acquiring the target text content stored in the updated first interface element through the JavaScript file.
9. The method of claim 8, wherein the determining, by the JavaScript file, an updated first interface element from the at least one first interface element comprises:
Acquiring at least one second interface element indicated by the interface focus before updating through the JavaScript file;
acquiring element attributes of the at least one first interface element and element attributes of the at least one second interface element through the JavaScript file;
Determining the updated first interface element from the at least one first interface element according to the element attribute of the at least one first interface element and the element attribute of the at least one second interface element through the JavaScript file; wherein the updated element attribute of the first interface element is different from the element attribute of the at least one second interface element.
10. The method of claim 8, wherein the interface is a tree-structured document composed of a plurality of nodes; and the determining, by the JavaScript file in response to the second instruction, the target text content indicated by the updated interface focus comprises:
Acquiring at least one first node indicated by the updated interface focus and at least one second node indicated by the interface focus before updating through the JavaScript file;
Determining an updated first node from the at least one first node through the JavaScript file; the at least one second node does not include the updated first node;
And determining that the interface element on the updated first node is the updated first interface element through the JavaScript file.
CN202211295368.6A 2022-10-21 2022-10-21 Voice broadcasting method and display device Pending CN117956213A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211295368.6A CN117956213A (en) 2022-10-21 2022-10-21 Voice broadcasting method and display device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211295368.6A CN117956213A (en) 2022-10-21 2022-10-21 Voice broadcasting method and display device

Publications (1)

Publication Number Publication Date
CN117956213A true CN117956213A (en) 2024-04-30

Family

ID=90800483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211295368.6A Pending CN117956213A (en) 2022-10-21 2022-10-21 Voice broadcasting method and display device

Country Status (1)

Country Link
CN (1) CN117956213A (en)

Similar Documents

Publication Publication Date Title
CN109618206B (en) Method and display device for presenting user interface
CN111405318B (en) Video display method and device and computer storage medium
CN111031375B (en) Method for skipping detailed page of boot animation and display equipment
CN111970549B (en) Menu display method and display device
CN111836115B (en) Screen saver display method, screen saver skipping method and display device
CN111897478A (en) Page display method and display equipment
CN113590059A (en) Screen projection method and mobile terminal
CN111104020B (en) User interface setting method, storage medium and display device
CN111954059A (en) Screen saver display method and display device
CN112087671A (en) Display method and display equipment for control prompt information of input method control
CN112506859B (en) Method for maintaining hard disk data and display device
CN112199560B (en) Search method of setting items and display equipment
CN111935530B (en) Display equipment
CN112235621B (en) Display method and display equipment for visual area
CN114390190B (en) Display equipment and method for monitoring application to start camera
CN117956213A (en) Voice broadcasting method and display device
CN113971049A (en) Background service management method and display device
CN114079827A (en) Menu display method and display device
CN111988649A (en) Control separation amplification method and display device
CN111949179A (en) Control amplifying method and display device
CN113573115B (en) Method for determining search characters and display device
CN112087651B (en) Method for displaying inquiry information and smart television
CN112199612B (en) Bookmark adding and combining method and display equipment
CN113438553B (en) Display device awakening method and display device
CN111966646B (en) File caching method and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination