CN116339570A - Display device and focus control method - Google Patents

Display device and focus control method

Info

Publication number
CN116339570A
Authority
CN
China
Prior art keywords: display window, control, user, display, focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111608799.9A
Other languages
Chinese (zh)
Inventor
刘庆全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Media Network Technology Co Ltd
Original Assignee
Qingdao Hisense Media Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Media Network Technology Co Ltd filed Critical Qingdao Hisense Media Network Technology Co Ltd
Priority to CN202111608799.9A
Publication of CN116339570A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817: Interaction techniques using icons
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a display device and a focus control method. The method includes: receiving a first instruction input by a user, and drawing a first display window on a first layer of a user interface, wherein the first display window comprises at least one first control, the first display window further comprises a focus indicating that the first control is selected, and the first instruction is input by the user through voice; receiving a second instruction input by the user, and drawing a second display window on a second layer of the user interface, wherein the second display window comprises at least one second control; and if the display positions of the first display window and the second display window do not overlap, receiving a third instruction input by the user, and controlling the focus to move from the first control of the first display window to the second control of the second display window. The user can thus move the focus between the first display window and the second display window without exiting the first display window, which improves the user experience.

Description

Display device and focus control method
Technical Field
The present disclosure relates to the field of display technologies, and in particular, to a display device and a focus control method.
Background
With the development of operating systems and smart television platforms, intelligent operating systems have been fully applied to television platforms. As smart television functions grow more complex, the requirements on the various indexes of smart voice keep rising, and with the continuous improvement of smart voice systems, user experience carries more and more weight among the evaluation indexes. In particular, accurately recognizing the semantics of the user's speech during interaction can meet user needs; if the user's operation intention can be better identified, interaction scenarios can be optimized in a user-friendly way, and the user experience can be effectively improved.
In the related art, the smart voice display of most smart televisions is configured on the uppermost layer, or the smart voice assistant has the highest display priority, and the corresponding smart voice content needs to be displayed in any scene of the smart television. As a result, the smart voice display may block the display of other functions of the smart television, and in some complex interaction scenes the display may confuse users, who cannot find where to interact. For example, while smart voice content is being displayed, the user calls up another settings menu with the remote controller, but the interaction focus remains on the voice assistant; if the user wants to operate the settings menu, the user can do so only after manually exiting the voice assistant. After finishing with the settings menu, the voice assistant must be invoked again to display its interface. The operation is cumbersome and the user experience is poor.
Disclosure of Invention
Based on the above technical problems, an object of the present invention is to provide a display device and a focus control method to solve the above problems of existing display devices.
A first aspect of an embodiment of the present application provides a display device, including:
a display for displaying a user interface;
a controller for performing:
receiving a first instruction input by a user, and drawing a first display window on a first layer on the user interface, wherein the first display window comprises at least one first control, the first display window further comprises a focus indicating that the first control is selected, the position of the focus in the first display window can be moved through user input so as to select different first controls, and the first instruction is an instruction input by the user through voice;
receiving a second instruction input by the user, and drawing a second display window on a second layer on the user interface, wherein the second display window comprises at least one second control;
and if the display positions of the first display window and the second display window do not overlap, receiving a third instruction input by the user, and controlling the focus to move from the first control of the first display window to the second control of the second display window.
With reference to the first feasible implementation manner of the first aspect, after the step of drawing the first display window on the first layer on the user interface, the controller is further configured to perform:
if a preset message is received, drawing a third display window on a third layer on the user interface, wherein the third display window comprises a third control;
controlling the focus to move from a first control of the first display window to a third control of the third display window;
and if the preset message is not received, executing the step of receiving a second instruction input by the user and drawing a second display window on a second layer on the user interface.
With reference to the second feasible implementation manner of the first aspect, the third instruction is an instruction input by a user by pressing a preset key of the control device, where the preset key includes a shortcut key for controlling the focus to move on the second display window.
With reference to the third feasible implementation manner of the first aspect, the controller receives the third instruction input by the user and controls the focus to move from the first control of the first display window to the second control of the second display window in the following manner:
when the focus moves to the first control close to the edge adjoining the second display window, receiving a third instruction input by the user, and controlling the focus to move from the first control to the second control of the second display window, wherein the third instruction is an instruction input by the user by pressing a direction key on the control device pointing toward the second display window.
With reference to the fourth feasible implementation manner of the first aspect, that the display position of the first display window does not overlap with that of the second display window includes:
the display area coordinates of the first display window do not overlap with the display area coordinates of the second display window.
A second aspect of the embodiments of the present application provides a focus control method, including:
receiving a first instruction input by a user, and drawing a first display window on a first layer on a user interface, wherein the first display window comprises at least one first control, the first display window further comprises a focus indicating that the first control is selected, the position of the focus in the first display window can be moved through user input so as to select different first controls, and the first instruction is an instruction input by the user through voice;
receiving a second instruction input by the user, and drawing a second display window on a second layer on the user interface, wherein the second display window comprises at least one second control;
and if the display positions of the first display window and the second display window do not overlap, receiving a third instruction input by the user, and controlling the focus to move from the first control of the first display window to the second control of the second display window.
With reference to the first feasible implementation manner of the second aspect, after the step of drawing the first display window on the first layer on the user interface, the method further includes:
if a preset message is received, drawing a third display window on a third layer on the user interface, wherein the third display window comprises a third control;
controlling the focus to move from a first control of the first display window to a third control of the third display window;
and if the preset message is not received, executing the step of receiving a second instruction input by the user and drawing a second display window on a second layer on the user interface.
With reference to the second feasible implementation manner of the second aspect, the third instruction is an instruction input by a user by pressing a preset key of the control device, where the preset key includes a shortcut key for controlling the focus to move on the second display window.
With reference to the third feasible implementation manner of the second aspect, the step of receiving a third instruction input by the user and controlling the focus to move from the first control to the second control of the second display window includes:
when the focus moves to the first control close to the edge adjoining the second display window, receiving a third instruction input by the user, and controlling the focus to move from the first control to the second control of the second display window, wherein the third instruction is an instruction input by the user by pressing a direction key on the control device pointing toward the second display window.
With reference to the fourth feasible implementation manner of the second aspect, that the display position of the first display window does not overlap with that of the second display window includes:
the display area coordinates of the first display window do not overlap with the display area coordinates of the second display window.
The focus control method provided by the embodiments is applicable to a display device that includes at least a controller and a display. A first instruction input by a user is received, and a first display window is drawn on a first layer of the user interface, wherein the first display window comprises at least one first control and a focus indicating that the first control is selected; the position of the focus in the first display window can be moved through user input so as to select different first controls, and the first instruction is an instruction input by the user through voice. A second instruction input by the user is received, and a second display window is drawn on a second layer of the user interface, wherein the second display window comprises at least one second control. If the display positions of the first display window and the second display window do not overlap, a third instruction input by the user is received, and the focus is controlled to move from the first control of the first display window to the second control of the second display window. The user can move the focus between the first display window and the second display window without exiting the first display window, which improves the user experience.
Drawings
FIG. 1 illustrates an operational scenario between a display device and a control apparatus according to some embodiments;
fig. 2 shows a hardware configuration block diagram of the control apparatus 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of a display device 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in a display device 200 according to some embodiments;
FIG. 5 is a flow chart of interactions of a display device with a user provided in accordance with some embodiments;
FIG. 6 is a schematic diagram of a first display window provided in accordance with some embodiments;
FIG. 7 is a schematic diagram of a weather display window provided in accordance with some embodiments;
FIG. 8 is a schematic illustration of a first display window and a second display window overlapping provided in accordance with some embodiments;
FIG. 9 is a schematic diagram of a first display window and a second display window that do not overlap, provided in accordance with some embodiments;
FIG. 10 is a schematic diagram of a rectangular coordinate system setup provided according to some embodiments;
FIG. 11 is a schematic view of focus movement provided in accordance with some embodiments;
FIGS. 12-14 are schematic diagrams of weather display windows and volume display windows provided in accordance with some embodiments;
FIGS. 15-17 are schematic diagrams of first, second, and fourth display windows provided in accordance with some embodiments;
FIG. 18 is a schematic illustration of a second display window and a fourth display window overlapping the first display window provided in accordance with some embodiments;
FIGS. 19-21 are schematic diagrams of second and fourth display windows provided in accordance with some embodiments;
FIG. 22 is a schematic view of a joke display window provided in accordance with some embodiments;
fig. 23 is a schematic diagram of a preset message hint provided in accordance with some embodiments.
Detailed Description
For clarity and ease of implementation of the present application, the following makes a clear and complete description of exemplary implementations of the present application with reference to the accompanying drawings, in which exemplary implementations of the present application are illustrated. It is apparent that the described exemplary implementations are only some, not all, of the examples of the present application.
It should be noted that the brief description of the terms in the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first," second, "" third and the like in the description and in the claims and in the above drawings are used for distinguishing between similar or similar objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
Fig. 1 is a schematic diagram of a usage scenario of a display device according to an embodiment. As shown in fig. 1, the display device 200 is in data communication with a server 400, and a user can operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes at least one of infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, and the display device 200 is controlled wirelessly or by wire. The user may control the display device 200 by inputting user instructions through at least one of keys on the remote controller, voice input, control panel input, and the like.
In some embodiments, the smart device 300 may include any one of a mobile terminal, tablet, computer, notebook, AR/VR device, etc.
In some embodiments, the smart device 300 may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the smart device 300 and the display device may also be used for communication of data.
In some embodiments, the display device 200 may also be controlled in manners other than through the control apparatus 100 and the smart device 300. For example, the user's voice command may be received directly through a module for acquiring voice commands configured inside the display device 200, or through a voice control apparatus configured outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be allowed to make communication connections via a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
In some embodiments, software steps performed by one step execution body may migrate on demand to be performed on another step execution body in data communication therewith. For example, software steps executed by the server may migrate to be executed on demand on a display device in data communication therewith, and vice versa.
Fig. 2 exemplarily shows a configuration block diagram of the control apparatus 100 in accordance with an exemplary embodiment. As shown in fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 can receive a user's input operation instruction and convert the operation instruction into an instruction that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200.
In some embodiments, the communication interface 130 is configured to communicate with the outside, and includes at least one of a WIFI chip, a Bluetooth module, an NFC module, or an alternative module.
In some embodiments, the user input/output interface 140 includes at least one of a microphone, a touchpad, a sensor, keys, or an alternative module.
Fig. 3 shows a hardware configuration block diagram of the display device 200 in accordance with an exemplary embodiment.
In some embodiments, display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, memory, a power supply, a user interface.
In some embodiments, the controller includes a central processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first to nth interfaces for input/output.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used for receiving image signals output from the controller and displaying video content, image content, menu manipulation interface components, a user manipulation UI interface, and the like.
In some embodiments, the display 260 may be at least one of a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, the modem 210 receives broadcast television signals via wired or wireless reception, and demodulates audio/video signals, as well as EPG data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, the communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a WIFI module, a Bluetooth module, a wired Ethernet module, another network communication protocol chip or near field communication protocol chip, and an infrared receiver. The display device 200 may establish transmission and reception of control signals and data signals with the control device 100 or the server 400 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; alternatively, the detector 230 includes an image collector such as a camera, which may be used to collect external environmental scenes, user attributes, or user interaction gestures, or alternatively, the detector 230 includes a sound collector such as a microphone, or the like, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of the selectable objects, such as a hyperlink, an icon, or another operable control. The operation related to the selected object may be, for example, an operation of displaying the linked hyperlink page, document, or image, or an operation of launching the program corresponding to the icon.
In some embodiments, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), RAM (Random Access Memory, RAM), ROM (Read-Only Memory, ROM), first to nth interfaces for input/output, a communication bus (Bus), and the like.
The CPU processor is used for executing operating system and application program instructions stored in the memory, and for executing various applications, data, and content according to various interactive instructions received from the outside, so as to finally display and play various audio and video content. The CPU processor may include a plurality of processors, for example, one main processor and one or more sub-processors.
In some embodiments, the graphics processor is used to generate various graphical objects, such as at least one of icons, operation menus, and graphics displayed for user input instructions. The graphics processor includes an arithmetic unit, which performs operations by receiving the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which renders the various objects obtained by the arithmetic unit for display on the display.
In some embodiments, the video processor is configured to receive an external video signal and perform at least one of decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image composition, and the like according to the standard codec protocol of the input signal, to obtain a signal that can be directly displayed or played on the display device 200.
In some embodiments, the video processor includes at least one of a demultiplexing module, a video decoding module, an image composition module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module demultiplexes the input audio/video data stream. The video decoding module processes the demultiplexed video signal, including decoding, scaling, and the like. The image composition module, such as an image synthesizer, superimposes and mixes the GUI signal input by the user or generated by the graphics generator with the scaled video image to generate an image signal for display. The frame rate conversion module converts the frame rate of the input video. The display formatting module converts the frame-rate-converted video into a video output signal conforming to a display format, such as an output RGB data signal.
In some embodiments, the audio processor is configured to receive an external audio signal, perform decompression and decoding according to the standard codec protocol of the input signal, and perform at least one of noise reduction, digital-to-analog conversion, and amplification, to obtain a sound signal that can be played through the speaker.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include at least one of a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments, the user interface 280 is an interface (e.g., physical keys on a display device body, or the like) that may be used to receive control inputs.
In some embodiments, the system of the display device may include a kernel (Kernel), a command parser (shell), a file system, and application programs. The kernel, shell, and file system together form the basic operating system architecture that allows users to manage files, run programs, and use the system. After power-up, the kernel is started, the kernel space is activated, hardware is abstracted, hardware parameters are initialized, and virtual memory, the scheduler, signals, and inter-process communication (IPC) are operated and maintained. After the kernel is started, the shell and user application programs are loaded. An application program is compiled into machine code after being started, forming a process.
As shown in fig. 4, the system of the display device is divided into three layers, an application layer, a middleware layer, and a hardware layer, from top to bottom.
The application layer mainly includes common applications on the television and an application framework (Application Framework). The common applications are mainly applications developed based on a browser, such as HTML5 apps, and native applications (Native APPs);
the application framework (Application Framework) is a complete program model with all the basic functions required by standard application software, such as file access and data exchange, and the interfaces for using these functions (toolbar, status bar, menu, dialog box).
Native applications (Native APPs) may support online or offline, message pushing, or local resource access.
The middleware layer includes middleware such as various television protocols, multimedia protocols, and system components. The middleware can use basic services (functions) provided by the system software to connect various parts of the application system or different applications on the network, so that the purposes of resource sharing and function sharing can be achieved.
The hardware layer mainly includes the HAL interface, hardware, and drivers. The HAL interface is a unified interface docked by all television chips, and the specific logic is implemented by each chip. The drivers mainly include: the audio driver, display driver, Bluetooth driver, camera driver, WIFI driver, USB driver, HDMI driver, sensor drivers (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), power driver, and the like.
The smart voice display of most display devices is configured on the uppermost layer, or the smart voice assistant has the highest display priority, and the corresponding smart voice content needs to be displayed in any scene of the display device. As a result, the smart voice display may block the display of other functions of the display device, and in some complex interaction scenes the display may confuse users, who often cannot find where the interaction focus is. For example, while smart voice content is being displayed, the user calls up another settings menu with the remote controller, but the interaction focus remains on the voice assistant; if the user wants to operate the settings menu, the user can do so only after manually exiting the voice assistant. After finishing with the settings menu, the voice assistant must be invoked again to display its interface. The operation is cumbersome and the user experience is poor.
In order to solve the above technical problems, the embodiments of the present application provide a display device, and the structure and the functions of each portion of the display device may refer to the above embodiments. In addition, on the basis of the display device shown in the foregoing embodiment, some functions of the display device are further optimized in this embodiment, and specifically, referring to fig. 5, fig. 5 is a flowchart of interaction between the display device and a user provided in a feasible embodiment;
the user performs step S501: inputting a first instruction, wherein the first instruction is an instruction input by a user through voice;
in some embodiments, the voice command input by the user may be received after the display device is triggered into the voice control mode.
In some embodiments, the user may enter the voice control mode by triggering the corresponding voice key of the control device, for example, by pressing the voice key of the control device; or the display device may be triggered to enter the voice control mode by speaking a far-field wake-up word. For example, after the user utters the voice "little focus", the display device may enter the voice control mode. When the display device is triggered into the voice control mode, the voice input module monitors voice data input by the user in real time, and the user may then speak a voice command.
After receiving the first instruction input by the user, the controller executes step S502: drawing a first display window on a first layer on the user interface;
in some embodiments, the controller recognizes the first instruction after receiving it. One recognition mode converts the user's voice into voice text, obtains the operation corresponding to the voice text on the current interface, and executes the corresponding operation. Another recognition mode converts the user's voice into voice text and sends it to the server; the server converts the voice text into standardized interface word text, the operation corresponding to the standardized interface word text is obtained, and the corresponding operation is executed.
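As an illustration of the two recognition modes described above, the following minimal sketch combines them as a local-first lookup with a server fallback. The Recognizer interface and all identifiers are assumptions made for illustration; the actual recognition services are not specified in this application.

    import java.util.Map;
    import java.util.Optional;

    /** Hypothetical speech-to-text services; this application does not name concrete ones. */
    interface Recognizer { String toText(byte[] voice); }

    class VoiceCommandHandler {
        private final Recognizer local;  // mode 1: convert voice to voice text on the device
        private final Recognizer cloud;  // mode 2: server returns standardized interface word text
        private final Map<String, Runnable> currentInterfaceOps; // operations on the current interface

        VoiceCommandHandler(Recognizer local, Recognizer cloud, Map<String, Runnable> ops) {
            this.local = local;
            this.cloud = cloud;
            this.currentInterfaceOps = ops;
        }

        void handle(byte[] voice) {
            // Look up the operation corresponding to the locally recognized voice text.
            Runnable op = currentInterfaceOps.get(local.toText(voice));
            if (op == null) {
                // Otherwise try the server's standardized interface word text.
                op = currentInterfaceOps.get(cloud.toText(voice));
            }
            Optional.ofNullable(op).ifPresent(Runnable::run); // execute the corresponding operation
        }
    }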
In some embodiments, the user interface is located on a video layer, and a first display window is drawn on a first layer on the video layer according to a first instruction input by a user, where content played by the first display window is content indicated by the first instruction. As shown in fig. 6, the first display window 61 includes at least one first control 611, and the first display window 61 further includes a focus 612 indicating that the first control is selected, and a position of the focus 612 in the first display window 61 can be moved by a user input to select a different first control 611.
It should be noted that controls are visual objects displayed in the display areas of the user interface of the display device 200 to represent corresponding content, such as icons, thumbnails, video clips, and links, and can provide the user with various conventional program contents received through data broadcasting, as well as various application and service contents set by content providers.
The presentation form of the control is typically diversified. For example, the controls may include text content and/or images for displaying thumbnails related to the text content, or video clips related to the text. As another example, the control may be text and/or an icon of an application.
It should also be noted that the focus is used to indicate that one of the controls has been selected, for example by a focus object. On the one hand, a control may be selected or controlled by moving the displayed focus object in the display device 200 according to the user's input through the control apparatus 100; for example, the user may select and control controls by pressing direction keys on the control apparatus 100 to move the focus object between controls. On the other hand, the movement of the controls displayed in the display device 200 may be controlled so that the focus object selects or controls a control, according to the user's input through the control apparatus 100; for example, the user can use the direction keys on the control apparatus 100 to move the controls left and right together, so that the focus object selects and controls a control while the position of the focus object remains unchanged.
The focus is typically identified in various forms. For example, the position of the focus object may be indicated by magnifying a control, by setting a control's background color, or by changing the border line, size, color, transparency, outline, and/or font of the text or image of the focused control.
In some embodiments, the user-entered voice command "weather" may be received after the display device is triggered into the voice control mode. After the controller recognizes the user's "weather" voice command, it draws the weather display window 71 on the current user interface, the weather display window 71 includes the first controls 711 to 716, the first controls 711 to 716 are Saturday, sunday, monday, tuesday, 24 hours, and 14 days, respectively, the weather display window 71 further includes the focus 72, and the current focus indicates that Saturday is selected, as shown in FIG. 7. The user may change the position of the focal point and select a different first control by controlling the directional keys of the device.
It should be noted that this embodiment only shows one arrangement and display content of the first controls; the first controls are not limited to the above, and the first controls of the first display window may be arranged and set as needed in the actual application process, which is not limited in this application.
The user performs step S503: a second instruction is input.
In some embodiments, the second instruction input by the user may be input by pressing a key of the control device, or the user may send the second instruction through voice.
After receiving the second instruction input by the user, the controller executes step S504: drawing a second display window on a second layer on the user interface;
in some embodiments, upon receiving the second instruction input by the user, the controller recognizes it. If it is a voice instruction, one recognition mode converts the user's voice into voice text, obtains the operation corresponding to the voice text on the current interface, and executes the corresponding operation; another recognition mode converts the user's voice into voice text and sends it to the server, which converts it into standardized interface word text, after which the operation corresponding to the standardized interface word text is obtained and executed. If it is a key instruction, the key value corresponding to the user's key press is identified, and the corresponding operation is executed according to the key value.
In some embodiments, the user interface is located on a video layer, and a second display window is drawn on a second layer above the video layer according to the second instruction input by the user, where the content played in the second display window is the content indicated by the second instruction.
In some embodiments, the first layer and the second layer are not the same layer, and the display positions of the first display window on the first layer and the second display window on the second layer may overlap. If the first instruction is a voice instruction and the second instruction is a non-voice instruction, then, since the first display window is drawn under a voice instruction and has a higher priority than the second display window drawn under a non-voice instruction, the first display window on the first layer is located on the uppermost layer, and the portion of the second display window overlapping the first display window is covered by the first display window, as shown in fig. 8. If the first instruction and the second instruction are both voice instructions, the top-to-bottom order of the layers may be determined by the order of the instructions. For example, the second layer corresponding to the second voice instruction is above the first layer corresponding to the first voice instruction; that is, the portion of the first display window overlapping the second display window is covered by the second display window, and the focus is then on the second display window.
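Assuming the stacking rule just described can be expressed as a comparator (windows drawn under non-voice instructions sort below windows drawn under voice instructions, and among voice-drawn windows the later instruction sorts on top), a minimal sketch follows; the LayerInfo record and its field names are illustrative assumptions, not structures named in this application.

    import java.util.Comparator;

    /** Illustrative per-layer attributes (assumed names). */
    record LayerInfo(boolean drawnByVoice, long drawOrder) {}

    class LayerStacking {
        /** Sorting with this comparator places later entries on top:
         *  non-voice layers sort below voice layers, and a larger
         *  drawOrder (a later instruction) sorts higher. */
        static final Comparator<LayerInfo> BOTTOM_TO_TOP =
                Comparator.comparing(LayerInfo::drawnByVoice)
                          .thenComparingLong(LayerInfo::drawOrder);
    }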
In some embodiments, the first layer and the second layer are not in the same layer, and the first display window of the first layer and the second display window of the second layer may not overlap, as shown in fig. 9.
As shown in fig. 9, the current interface includes a first display window 61 and a second display window 62, the first display window 61 including at least one first control 611, the first display window 61 further including a focus 612, the current focus 612 indicating that the first control is selected; the second display window 62 includes at least one second control 621.
In some embodiments, a method of determining whether a display position of a first display window of a first layer coincides with a display position of a second display window of a second layer includes:
a rectangular coordinate system is established according to the method shown in fig. 10.
Determining a first coordinate range of a first display window of a first layer;
determining a second coordinate range of a second display window of the second layer;
it is determined whether the first coordinate range coincides with the second coordinate range.
For example, a first display window with a first coordinate range of (0, 0)-(1080, 200) is drawn on the first layer, and a second display window with a second coordinate range of (800, 400)-(1000, 800) is drawn on the second layer. No coordinate point lies in both the first coordinate range and the second coordinate range; that is, there is no coordinate point that is within both ranges, so the display positions of the first display window and the second display window do not overlap.
For another example, a first display window with a first coordinate range of (0, 0)-(1080, 200) is drawn on the first layer, and a second display window with a second coordinate range of (800, 100)-(1000, 800) is drawn on the second layer. There exist coordinate points lying in both the first coordinate range and the second coordinate range; the existence of at least one coordinate point that is within both ranges indicates that the display positions of the first display window and the second display window overlap.
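The coordinate-range comparison above is a standard axis-aligned rectangle intersection test. The sketch below reproduces the two examples; the WindowRect type is an illustrative assumption, not a structure named in this application.

    /** An axis-aligned window rectangle in screen coordinates. */
    record WindowRect(int left, int top, int right, int bottom) {
        /** True if the two rectangles share at least one coordinate point. */
        boolean overlaps(WindowRect o) {
            return left <= o.right && o.left <= right
                    && top <= o.bottom && o.top <= bottom;
        }
    }

    public class OverlapCheck {
        public static void main(String[] args) {
            WindowRect first = new WindowRect(0, 0, 1080, 200);
            // (800, 400)-(1000, 800): no shared coordinate point with the first window.
            System.out.println(first.overlaps(new WindowRect(800, 400, 1000, 800))); // false
            // (800, 100)-(1000, 800): shares points with the first window.
            System.out.println(first.overlaps(new WindowRect(800, 100, 1000, 800))); // true
        }
    }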
In some embodiments, the display positions of the first display window on the first layer and the second display window on the second layer overlap, and the first display window covers the overlapping portion of the second display window, as shown in fig. 8. In this case, when the user issues a third instruction to move the focus, the focus 612 can move only among the first controls 611 of the first display window 61.
The user performs step S505: a third instruction is input.
After receiving the third instruction input by the user, if the display positions of the first display window of the first layer and the second display window of the second layer do not coincide, the controller executes step S506: and controlling the focus to move from the first control of the first display window to the second control of the second display window.
In fig. 9, the user inputs a third instruction by pressing an up key on the control device, and the controller moves the focus 612 from the first control 611 of the first display window 61 to the second control 621 of the second display window 62 in response to the third instruction, as shown in fig. 11.
In some embodiments, the movement of the focus is determined by key distribution, which is managed by an application management (Application Manager, APM) module in the controller. When the first display window and the second display window are both fully displayed, i.e., the display positions of the first display window on the first layer and the second display window on the second layer do not overlap, then in response to the user's key input, key distribution first gives the key to the first display window. If the first display window has event processing for it, such as operable content, the focus stays on the first display window; for example, in fig. 9, when the user inputs a third instruction by pressing the right key on the control device, the controller, in response, moves the focus 612 rightward among the first controls 611 of the first display window 61. If the first display window has no event processing for the key, such as no operable content, the focus moves from the first control of the first display window to the second control of the second display window. Again taking fig. 9 as an example, when the user inputs a third instruction by pressing the up key on the control device, the controller responds by first giving the key to the first display window 61; but the first display window 61 has no operation content corresponding to the up key, so key distribution continues to the second display window 62, which does have operation content corresponding to the up key, and the focus 612 moves from the first control 611 of the first display window 61 to the second control 621 of the second display window 62.
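As an illustration of the key-distribution chain just described, a minimal sketch follows; the DisplayWindow interface and KeyDistributor class are assumed names for illustration and are not taken from this application.

    import java.util.List;

    interface DisplayWindow {
        /** Returns true if this window has operation content bound to the key. */
        boolean dispatchKey(int keyCode);
    }

    /** Sketch of an APM-like distributor: offer the key to each window in order. */
    class KeyDistributor {
        private final List<DisplayWindow> windows; // first display window first

        KeyDistributor(List<DisplayWindow> windows) {
            this.windows = windows;
        }

        void onKey(int keyCode) {
            for (DisplayWindow w : windows) {
                if (w.dispatchKey(keyCode)) {
                    return; // the window consumed the key; the focus stays or moves within it
                }
            }
            // No window had operation content for this key: nothing to do.
        }
    }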
In some embodiments, the display device may receive the voice command "weather" input by the user. After recognizing the "weather" voice command, the controller draws the weather display window 71 on the current user interface, as shown in fig. 12. After the display device receives a volume display command input by the user through a key of the control apparatus, the controller draws the volume display window 73, which includes the second control 731, on the current user interface. The display positions of the weather display window 71 and the volume display window 73 do not overlap. After the display device receives an instruction of the user pressing a key on the control apparatus, the focus 72 moves from the first control 711 of the weather display window 71 to the second control 731 of the volume display window 73, as shown in fig. 13.
In some embodiments, when the focus moves to the first control near the edge of the second display window, a third instruction input by the user is received, and the focus is controlled to move from the first control near the edge to the second control of the second display window, where the third instruction is an instruction input by the user by pressing a direction key pointing to the second display window on the control device.
In some embodiments, after it is determined that the display positions of the first display window on the first layer and the second display window on the second layer do not overlap, the positional relationship between the first display window and the second display window is determined. For example, in fig. 12, the weather display window 71 and the volume display window 73 are horizontally adjacent, i.e., the volume display window 73 is on the right side of the weather display window 71. The user moves the focus 72 from the Saturday control to the Tuesday control via the direction keys of the control device, as shown in fig. 14. After the display device receives an instruction of the user pressing the right key on the control device, the focus 72 moves from the first control 714 of the weather display window 71 to the second control 731 of the volume display window 73, as shown in fig. 13. The Tuesday control is the edge control closest to the volume display window.
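As an illustration of the edge-control rule above, the sketch below reduces the layout to a single row of controls with an adjacent window on the right; all names are assumptions, and a real implementation would consult the actual window geometry.

    /** One-dimensional sketch: focus moves right along the current window's
     *  controls and crosses into the adjacent window from the edge control. */
    class EdgeFocusMover {
        private final int controlCount;    // controls in the current window, indexed 0..n-1
        private int focusIndex = 0;        // current focus position
        private boolean inRightWindow = false;

        EdgeFocusMover(int controlCount) {
            this.controlCount = controlCount;
        }

        /** Handle a press of the right direction key. */
        void onRightKey() {
            if (inRightWindow) {
                return;                    // already in the adjacent window
            } else if (focusIndex < controlCount - 1) {
                focusIndex++;              // still inside the current window
            } else {
                inRightWindow = true;      // on the edge control: cross over
            }
        }
    }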
In some embodiments, the third instruction is an instruction input by the user by pressing a preset key of the control device, where the preset key includes a shortcut key for controlling the focus to move on the second display window.
In some embodiments, the shortcut key includes a volume up key, a volume down key, a channel up key, a channel down key, a fast forward key, a fast reverse key, a play/pause key, and the like.
In some embodiments, the display device may receive the voice command "weather" input by the user. After recognizing the "weather" voice command, the controller draws the weather display window 71 on the current user interface, as shown in fig. 12. After the display device receives a volume display command input by the user through a key of the control apparatus, the controller draws the volume display window 73, which includes the second control 731, on the current user interface. The display positions of the weather display window 71 and the volume display window 73 do not overlap. After the display device receives an instruction of the user pressing the volume up key on the control apparatus, the focus 72 moves from the first control 711 of the weather display window 71 to the second control 731 of the volume display window 73, as shown in fig. 13.
In some embodiments, after drawing a second display window on a second layer on the user interface at step S504, the user inputs a fourth instruction.
It should be noted that the fourth instruction in the present application comes after the second instruction and before the third instruction.
In some embodiments, the fourth instruction input by the user may be input by pressing a key of the control device, or the user may send the fourth instruction through voice.
After receiving the fourth instruction input by the user, the controller performs the step of drawing a fourth display window on a fourth layer on the user interface;
in some embodiments, the controller recognizes the fourth instruction after receiving it. If it is a voice instruction, one recognition mode converts the user's voice into voice text, obtains the operation corresponding to the voice text on the current interface, and executes the corresponding operation; another recognition mode converts the user's voice into voice text and sends it to the server, which converts it into standardized interface word text, after which the corresponding operation is obtained and executed. If it is a key instruction, the key value corresponding to the user's key press is identified, and the corresponding operation is executed according to the key value.
In some embodiments, the user interface is located on a video layer, and a fourth display window is drawn on a fourth layer above the video layer according to the fourth instruction input by the user, where the content played in the fourth display window is the content indicated by the fourth instruction.
In some embodiments, the first layer, the second layer, and the fourth layer are all different layers. The method for determining whether the display positions of the first display window, the second display window, and the fourth display window overlap has been described in detail above and is not repeated here.
In some embodiments, the first instruction is a voice instruction, and the second instruction and the fourth instruction are non-voice instructions. The display positions of the first display window, the second display window, and the fourth display window do not overlap, as shown in fig. 15. Since the first display window is drawn under a voice instruction and has a higher priority than the second display window and the fourth display window drawn under non-voice instructions, the first display window on the first layer is located on the uppermost layer, and the focus 612 is located on the first control 611 of the first display window 61. The positional relationship between the first display window 61 and the fourth display window 64 is vertical adjacency, and the first display window 61 and the fourth display window 64 are each horizontally adjacent to the second display window 62.
In fig. 15, when the user inputs a third instruction by pressing an up key on the control device, the controller moves the focus 612 from the first control 611 of the first display window 61 to the fourth control 641 of the fourth display window 64 in response to the third instruction, as shown in fig. 16.
In fig. 15, when the user inputs a third instruction by pressing the right key on the control device, the controller moves the focus 612 from the first control 611 of the first display window 61 to the second control 621 of the second display window 62 in response to the third instruction, as shown in fig. 17.
In fig. 17, when the user inputs a third instruction by pressing the left key on the control device, the controller needs to determine whether to move the focus to the first display window or the fourth display window in response to the third instruction, since the first display window and the fourth display window are both located on the left side of the second display window 62.
In some embodiments, the display areas of the first display window and the fourth display window may be compared to determine the focus movement position. For example, if the area of the fourth display window is greater than the area of the first display window, the focus 612 is moved from the second control 621 of the second display window 62 to the fourth control 641 of the fourth display window 64, as shown in fig. 16.
In some embodiments, the area refers to the region enclosed by the coordinate parameters of at least two vertices of the window. The controller displays the image in a designated region by setting the coordinate parameters of the at least two vertices to the designated coordinate values.
In some embodiments, the priorities of the first display window and the fourth display window may be compared to determine the target of the focus movement. For example: if the first display window has a higher priority than the fourth display window, the focus 612 is moved from the second control 621 of the second display window 62 to the first control 611 of the first display window 61, as shown in FIG. 15.
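The two tie-breaking strategies just described might be sketched as follows, reusing the hypothetical Window model above; both are illustrative readings of the text, not a definitive implementation:

```kotlin
// Display area enclosed by the window's vertex coordinates.
fun area(w: Window): Int =
    (w.rect.right - w.rect.left) * (w.rect.bottom - w.rect.top)

// Strategy 1: move the focus to the candidate with the larger display area.
fun pickByArea(candidates: List<Window>): Window? = candidates.maxByOrNull(::area)

// Strategy 2: move the focus to the candidate with the higher priority.
fun pickByPriority(candidates: List<Window>): Window? =
    candidates.maxByOrNull { it.priority }
```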
In some embodiments, the display positions of the second display window and the fourth display window both overlap with the display position of the first display window; that is, the overlapping parts of the second and fourth display windows are covered by the first display window on the first layer, as shown in fig. 18. In this case, when the user issues a third instruction to move the focus, the focus 612 can only move within the first display window 61, among its first controls 611.
In some embodiments, neither the second display window nor the fourth display window overlaps with the display position of the first display window, but the display positions of the second display window and the fourth display window overlap with each other, as shown in fig. 19.
In some embodiments, the display areas of the second display window and the fourth display window may be compared to determine which display window the focus can move to. For example: in fig. 19, when the user inputs the third instruction by pressing the up key on the control device and the area of the fourth display window is larger than that of the second display window, the fourth layer carrying the fourth display window is placed above the second layer carrying the second display window, and in response to the third instruction, the focus 612 moves from the first control 611 of the first display window 61 to the fourth control 641 of the fourth display window 64, as shown in fig. 20.
In some embodiments, the priorities of the second display window and the fourth display window may be compared to determine which display window the focus can move to. For example: in fig. 19, when the user inputs the third instruction by pressing the up key on the control device and the second display window has a higher priority than the fourth display window, the second layer carrying the second display window is placed above the fourth layer carrying the fourth display window, and in response to the third instruction, the focus 612 moves from the first control 611 of the first display window 61 to the second control 621 of the second display window 62.
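The same two comparisons can also decide which of two overlapping windows is raised to the top and receives the focus. A sketch under the same assumptions (raiseLayer is a hypothetical call for placing the winner's layer above the other's):

```kotlin
// Resolve which of two overlapping windows ends up on top: by priority, or by area.
fun resolveOverlap(a: Window, b: Window, byPriority: Boolean): Window {
    val top = if (byPriority) maxOf(a, b, compareBy { it.priority })
              else maxOf(a, b, compareBy(::area))
    // raiseLayer(top)  // hypothetical: place the winner's layer above the other's
    return top
}
```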
In some embodiments, after step S502 is performed, the controller performs the following steps: determining whether a preset message is received;
in some embodiments, the preset message is used to promptly notify the user that certain events have occurred, so that the user can handle them in time after receiving the prompt. Preset messages include voice messages from chat applications, alarm messages set by the user, reservation messages set by the user, and the like.
If a preset message is received, drawing a third display window on a third layer on the user interface, wherein the third display window comprises a third control;
in some embodiments, the user interface is located on a video layer, and when the preset message is received, a third display window is drawn on a third layer above the video layer, where the content displayed by the third display window is the prompt content of the preset message.
And controlling the focus to move from the first control of the first display window to the third control of the third display window.
Since the focus is now on the third control, which displays the prompt of the preset message, a user who wants to view the preset message can directly press the confirmation key on the control device to view its specific content.
In some embodiments, whether a preset message is received is detected periodically; once a preset message is received, the focus is moved to the third control displaying the prompt content of the preset message, regardless of which display windows exist or what content they are playing.
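A minimal sketch of such periodic detection, assuming a polling source; poll, drawPromptWindow, and moveFocusTo are hypothetical stand-ins for the device's actual message and window APIs:

```kotlin
import java.util.Timer
import kotlin.concurrent.timer

fun watchPresetMessages(
    poll: () -> String?,               // returns a preset message if one has arrived, else null
    drawPromptWindow: (String) -> Int, // draws the third display window, returns its third control's id
    moveFocusTo: (Int) -> Unit,        // moves the focus to the given control
    periodMs: Long = 1_000
): Timer = timer(period = periodMs) {
    poll()?.let { message ->
        // Once a message arrives, the focus moves to the third control that displays
        // the prompt, regardless of which windows are currently shown.
        moveFocusTo(drawPromptWindow(message))
    }
}
```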
In some embodiments, the user may input the voice command "joke" while the display device is able to receive user input. After recognizing the "joke" voice instruction, the controller draws the joke display window 71 on the current user interface, with the current focus 72 on the first control 711 of the joke display window 71, as shown in FIG. 22. When the display device detects that a voice message is received, the controller draws the voice prompt display window 73 on the current user interface; the voice prompt display window 73 includes a third control 731, and the focus 72 moves from the first control 711 of the joke display window 71 to the third control 731 of the voice prompt display window 73, as shown in fig. 23. The user may then press the confirmation key on the control device to play the voice message directly.
The focus control method provided by this embodiment is applicable to a display device that includes at least a controller and a display. A first instruction input by a user is received, and a first display window is drawn on a first layer on the user interface; the first display window includes at least one first control and a focus indicating that a first control is selected, the position of the focus in the first display window being movable through user input so as to select different first controls, and the first instruction being an instruction input by the user through voice. A second instruction input by the user is received, and a second display window is drawn on a second layer on the user interface, the second display window including at least one second control. If the display positions of the first display window and the second display window do not overlap, a third instruction input by the user is received, and the focus is controlled to move from the first control of the first display window to the second control of the second display window. The user can thus move the focus between the first display window and the second display window without exiting the first display window, which improves the user experience.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description has, for purposes of explanation, been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the underlying principles and their practical application, thereby enabling others skilled in the art to best utilize the various embodiments, with such modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising:
a display for displaying a user interface;
A controller for performing:
receiving a first instruction input by a user, drawing a first display window on a first layer on the user interface, wherein the first display window comprises at least one first control, the first display window further comprises a focus indicating that the first control is selected, the position of the focus in the first display window can be moved through the input of the user so as to select different first controls, and the first instruction is an instruction input by the user through voice;
receiving a second instruction input by a user, and drawing a second display window on a second layer on the user interface, wherein the second display window comprises at least one second control;
and if the display positions of the first display window and the second display window do not overlap, receiving a third instruction input by a user, and controlling the focus to move from the first control of the first display window to the second control of the second display window.
2. The display device of claim 1, wherein after the step of drawing a first display window on a first layer on the user interface, the controller is further configured to perform:
if a preset message is received, drawing a third display window on a third layer on the user interface, wherein the third display window comprises a third control;
Controlling the focus to move from a first control of the first display window to a third control of the third display window;
and if the preset message is not received, executing a second instruction input by the user, and drawing a second display window on a second layer on the user interface.
3. The display apparatus according to claim 1, wherein the third instruction is an instruction input by a user by pressing a preset key of a control device, the preset key including a shortcut key that controls the focus to move on the second display window.
4. The display device of claim 1, wherein the controller is configured to perform the step of receiving a third instruction input by the user and controlling the focus to move from a first control of the first display window to a second control of the second display window by:
and when the focus moves to the first control close to the edge of the second display window, receiving a third instruction input by a user, and controlling the focus to move from the first control to the second control of the second display window, wherein the third instruction is an instruction input by the user by pressing a direction key pointing to the second display window on a control device.
5. The display device according to claim 1, wherein the first display window not overlapping with the display position of the second display window comprises:
and the display area coordinates of the first display window are not overlapped with the display area coordinates of the second display window.
6. A focus control method, characterized by comprising:
receiving a first instruction input by a user, drawing a first display window on a first layer on the user interface, wherein the first display window comprises at least one first control, the first display window further comprises a focus indicating that the first control is selected, the position of the focus in the first display window can be moved through the input of the user so as to select different first controls, and the first instruction is an instruction input by the user through voice;
receiving a second instruction input by a user, and drawing a second display window on a second layer on the user interface, wherein the second display window comprises at least one second control;
and if the display positions of the first display window and the second display window do not overlap, receiving a third instruction input by a user, and controlling the focus to move from the first control of the first display window to the second control of the second display window.
7. The method of claim 6, further comprising, after the step of drawing a first display window on a first layer on the user interface:
if a preset message is received, drawing a third display window on a third layer on the user interface, wherein the third display window comprises a third control;
controlling the focus to move from a first control of the first display window to a third control of the third display window;
and if the preset message is not received, executing a second instruction input by the user, and drawing a second display window on a second layer on the user interface.
8. The method of claim 6, wherein the third instruction is an instruction input by a user by pressing a preset key of a control device, the preset key including a shortcut key that controls the focus to move on the second display window.
9. The method of claim 6, wherein the step of receiving a third instruction input by the user and controlling the focus to move from the first control to the second control of the second display window comprises:
and when the focus moves to the first control close to the edge of the second display window, receiving a third instruction input by a user, and controlling the focus to move from the first control to the second control of the second display window, wherein the third instruction is an instruction input by the user by pressing a direction key pointing to the second display window on a control device.
10. The method of claim 6, wherein the first display window not overlapping with the display position of the second display window comprises:
and the display area coordinates of the first display window are not overlapped with the display area coordinates of the second display window.
CN202111608799.9A 2021-12-23 2021-12-23 Display device and focus control method Pending CN116339570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111608799.9A CN116339570A (en) 2021-12-23 2021-12-23 Display device and focus control method

Publications (1)

Publication Number Publication Date
CN116339570A true CN116339570A (en) 2023-06-27

Family

ID=86890315

Country Status (1)

Country Link
CN (1) CN116339570A (en)

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination