CN113849664A - Display device, server and media asset searching method

Info

Publication number
CN113849664A
Authority
CN
China
Prior art keywords
media asset
operation data
user
display
server
Prior art date
Legal status
Pending
Application number
CN202111098395.XA
Other languages
Chinese (zh)
Inventor
戴磊
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd
Priority to CN202111098395.XA
Publication of CN113849664A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 - Querying
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 - Querying
    • G06F16/438 - Presentation of query results
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a display device, a server, and a media asset searching method. The display device includes a display, a sound collector, and a controller. The sound collector is configured to receive a voice instruction input by a user. The controller is configured so that, when a media asset search instruction input by the user to request a media asset does not contain a media asset type, the display device sends the media asset search instruction to the server. The server obtains, according to the media asset search instruction, all user operation data of the display device related to the media asset search instruction within a preset period, and returns the user operation data to the display device. The display device obtains the user operation data with the highest priority from the media asset operation data according to a preset priority and obtains the target media asset, so as to control the display to display the target media asset. In this way, the specific media asset type is determined by taking into account the user's recent preference related to the media asset search instruction, so that the corresponding media asset is played on the display and the user experience is improved.

Description

Display device, server and media asset searching method
Technical Field
The present application relates to the technical field of display devices, and in particular, to a display device, a server, and a media asset searching method.
Background
A display device is a terminal device capable of outputting a specific display picture, such as a smart television, a mobile terminal, a smart advertising screen, or a projector. With the rapid development of display devices, their functions have become increasingly rich and their performance increasingly powerful. They can realize two-way human-computer interaction and integrate functions such as audio and video, entertainment, and data, so as to meet users' diversified and personalized needs.
Intelligent voice interaction is one of the main functions of a display device. To realize human-computer voice interaction, the display device is configured with a voice search function. Using the voice search function, the user can search for desired media assets through voice input. For example, the user may say "I want to watch the TV series You of Teenagers." The display device can analyze the user's voice, determine the media asset referred to in the voice, and play that media asset.
However, the user may not state a specific media asset type when inputting the voice; for example, the user may say "I want to watch You of Teenagers." In this case, the display device can only parse the media asset name from the voice, but the name may correspond to multiple media asset types, such as a TV series or a movie. The display device therefore cannot determine the specific media asset type and cannot play the corresponding media asset, resulting in a poor user experience.
Disclosure of Invention
The present application provides a display device, a server, and a media asset searching method, which solve the problem in the related art that the display device cannot determine the media asset type, resulting in a poor user experience.
In a first aspect, the present application provides a display device including a display, a sound collector, and a controller. The sound collector is configured to receive a voice instruction input by a user; the controller is configured to perform the following steps:
determining whether a media asset search instruction input by the user to request a media asset contains a media asset type; if the media asset search instruction does not contain a media asset type, sending the media asset search instruction to a server; receiving media asset operation data sent by the server, where the media asset operation data are all user operation data of the display device related to the media asset search instruction within a preset first history period; obtaining, based on a preset priority, the user operation data with the highest priority from the media asset operation data, and obtaining a target media asset from the user operation data with the highest priority; and controlling the display to display the target media asset.
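For illustration only, a minimal Python sketch of how these steps might fit together is given below. The helper names, data shapes, and type vocabulary (for example `contains_asset_type`, `StubServer`, the "tv series"/"movie" vocabulary) are assumptions, not the claimed implementation.

```python
# Minimal sketch (illustrative only) of how the claimed controller steps fit together.
# The helper names, data shapes and type vocabulary below are assumptions.

PRIORITY = {"play": 3, "query": 2, "voice_search": 1}   # preset priority of operation types

def contains_asset_type(search_text, known_types=("tv series", "movie")):
    """Step 1: judge whether the search text already names a media asset type."""
    return any(t in search_text.lower() for t in known_types)

def handle_search(search_text, server, display):
    if contains_asset_type(search_text):
        display(search_text)                              # type known: search and display directly
        return
    records = server.related_operations(search_text)      # send instruction, receive operation data
    if records:
        best = max(records, key=lambda r: PRIORITY[r["operation"]])   # highest-priority record
        display(f'{best["asset_type"]}: {best["asset_name"]}')
    else:
        # no related operation data fed back: ask the server for a target media asset type
        display(f'{server.target_asset_type(search_text)}: {search_text}')

class StubServer:
    """Stand-in for the real server, for demonstration only."""
    def related_operations(self, text):
        return [{"operation": "play", "asset_name": "You of Teenagers", "asset_type": "tv series"}]
    def target_asset_type(self, text):
        return "tv series"

handle_search("I want to watch You of Teenagers", StubServer(), print)  # -> tv series: You of Teenagers
```

The individual steps are described in detail in the embodiments below.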
In some implementations, when performing the step of determining whether the media asset search instruction input by the user to request a media asset contains a media asset type, the controller is further configured to:
convert the media asset search instruction into a media asset search text; and parse the media asset search text to determine whether it contains a media asset type.
In some implementations, when performing the step of obtaining the target media asset from the user operation data with the highest priority, the controller is further configured to:
if there are multiple pieces of user operation data with the highest priority, determine the most recent piece among them as the target user operation data; and obtain the target media asset from the target user operation data.
In some implementations, after performing the step of sending the media asset search instruction to the server, the controller is further configured to:
if no media asset operation data is received from the server, send a request for obtaining the media asset type to the server; receive a target media asset type sent by the server; obtain the media asset name from the media asset search text; obtain the target media asset according to the target media asset type and the media asset name; and perform the step of controlling the display to display the target media asset.
In some implementations, the controller is further configured to:
when an operation of the user on the display device is detected, obtain the user operation data of the current operation and send the user operation data to the server, where the user operation data include: media asset playing operation data, media asset query operation data, and voice search operation data.
In a second aspect, the present application provides a server configured to:
in response to a media asset search instruction sent by a display device, obtain media asset operation data, where the media asset operation data are all user operation data of the display device related to the media asset search instruction within a preset first history period, and the media asset search instruction does not contain a media asset type; and send the media asset operation data to the display device, so that the display device obtains a target media asset according to the media asset operation data and controls its display to display the target media asset.
In some implementations, when performing the step of obtaining the media asset operation data, the server is further configured to:
perform fuzzy matching between the media asset search instruction and a preset database to obtain the media asset operation data; all user operation data of the display device are stored in the preset database, and the user operation data include: media asset playing operation data, media asset query operation data, and voice search operation data.
In some implementations, the server is further configured to:
in response to a request for obtaining the media asset type sent by the display device, obtain, from the preset database, the most recent voice search operation data of the display device within a preset second history period; obtain the media asset type in that most recent voice search operation data and determine it as the target media asset type; and send the target media asset type to the display device.
In some implementations, the server is further configured to:
if no such recent voice search operation data exists in the preset database, obtain the media asset type with the highest popularity related to the media asset search instruction in the network; determine the media asset type with the highest popularity as the target media asset type; and perform the step of sending the target media asset type to the display device.
In a third aspect, the present application provides a media asset searching method applied to a display device, including:
determining whether a media asset search instruction input by a user to request a media asset contains a media asset type; if the media asset search instruction does not contain a media asset type, sending the media asset search instruction to a server; receiving media asset operation data sent by the server, where the media asset operation data are all user operation data of the display device related to the media asset search instruction within a preset first history period; obtaining, based on a preset priority, the user operation data with the highest priority from the media asset operation data, and obtaining a target media asset from the user operation data with the highest priority; and controlling a display to display the target media asset.
According to the above technical solutions, the present application provides a display device, a server, and a media asset searching method. When a media asset search instruction input by the user to request a media asset does not contain a media asset type, the display device sends the media asset search instruction to the server. The server obtains, according to the media asset search instruction, all user operation data of the display device related to the media asset search instruction within a preset period, and returns the user operation data to the display device. The display device obtains the user operation data with the highest priority from the media asset operation data according to a preset priority and obtains the target media asset, so as to control the display to display the target media asset. In this way, the specific media asset type is determined by taking into account the user's recent preference related to the media asset search instruction, so that the corresponding media asset is played on the display and the user experience is improved.
Drawings
In order to more clearly explain the technical solutions of the present application, the drawings needed in the embodiments are briefly described below. It is obvious that, for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 illustrates a usage scenario of a display device according to some embodiments;
fig. 2 illustrates a hardware configuration block diagram of the control apparatus 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of the display apparatus 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in the display device 200 according to some embodiments;
FIG. 5 shows a schematic diagram of a user interface in some embodiments;
FIG. 6 illustrates a schematic diagram of an application list in some embodiments;
FIG. 7 is a diagram illustrating a display of media asset search mode confirmation information in a display in some embodiments;
FIG. 8 illustrates an interaction flow diagram for components of a display device in some embodiments;
FIG. 9 is a flow diagram that illustrates the operation of a server in obtaining media asset operation data in some embodiments;
FIG. 10 is a schematic flow chart illustrating the server obtaining a target asset type in some embodiments;
FIG. 11 is a schematic diagram illustrating a target asset confirmation interface displayed in a display in some embodiments;
FIG. 12 is a flow diagram illustrating one embodiment of a method for media asset search;
FIG. 13 is a flowchart illustrating one embodiment of a method for media asset searching.
Detailed Description
To make the purpose and embodiments of the present application clearer, the exemplary embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only a part, not all, of the embodiments of the present application.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Fig. 1 is a schematic diagram of a usage scenario of a display device according to an embodiment. As shown in fig. 1, the display apparatus 200 is also in data communication with a server 400, and a user can operate the display apparatus 200 through the smart device 300 or the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes at least one of infrared protocol communication, Bluetooth protocol communication, and other short-distance communication methods, and the remote controller controls the display device 200 in a wireless or wired manner. The user may control the display device 200 by inputting user instructions through at least one of keys on the remote controller, voice input, control panel input, and the like.
In some embodiments, the smart device 300 may include any of a mobile terminal, a tablet, a computer, a laptop, an AR/VR device, and the like.
In some embodiments, the smart device 300 may also be used to control the display device 200, for example by using an application running on the smart device.
In some embodiments, the smart device 300 and the display device may also be used for communication of data.
In some embodiments, the display device 200 may also be controlled in a manner other than through the control apparatus 100 and the smart device 300. For example, a voice instruction from the user may be received directly by a module configured inside the display device 200, or by a voice control apparatus provided outside the display device 200.
In some embodiments, the display device 200 is also in data communication with the server 400. The display device 200 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various content and interactions to the display device 200. The server 400 may be one cluster or a plurality of clusters, and may include one or more types of servers.
In some embodiments, software steps executed by one step execution agent may be migrated on demand to another step execution agent in data communication therewith for execution. Illustratively, software steps performed by the server may be migrated to be performed on a display device in data communication therewith, and vice versa, as desired.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 according to an exemplary embodiment. As shown in fig. 2, the control device 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive an input operation instruction from a user and convert the operation instruction into an instruction recognizable and responsive by the display device 200, serving as an interaction intermediary between the user and the display device 200.
In some embodiments, the communication interface 130 is used for external communication, and includes at least one of a WIFI chip, a bluetooth module, NFC, or an alternative module.
In some embodiments, the user input/output interface 140 includes at least one of a microphone, a touchpad, a sensor, a key, or an alternative module.
Fig. 3 shows a hardware configuration block diagram of the display apparatus 200 according to an exemplary embodiment.
In some embodiments, the display apparatus 200 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, a user interface.
In some embodiments the controller comprises a central processor, a video processor, an audio processor, a graphics processor, a RAM, a ROM, a first interface to an nth interface for input/output.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used for receiving image signals output by the controller and displaying video content, image content, menu manipulation interfaces, user manipulation UI interfaces, and the like.
In some embodiments, the display 260 may be at least one of a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, the tuner demodulator 210 receives broadcast television signals via wired or wireless reception and demodulates audio/video signals and EPG data signals from a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the control device 100 or the server 400 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for collecting ambient light intensity; alternatively, the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.
In some embodiments, the controller 250 and the tuner demodulator 210 may be located in different separate devices; that is, the tuner demodulator 210 may also be located in an external device of the main device where the controller 250 is located, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink, an icon, or other actionable control. The operations related to the selected object are: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to the icon.
In some embodiments, the controller comprises at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), a random access memory (RAM), a read-only memory (ROM), first to nth interfaces for input/output, a communication bus (Bus), and the like.
The CPU processor is used for executing the operating system and application instructions stored in the memory, and for executing various applications, data, and content according to various interaction instructions received from the outside, so as to finally display and play various audio and video content. The CPU processor may include a plurality of processors, for example a main processor and one or more sub-processors.
In some embodiments, the graphics processor is used for generating various graphics objects, such as at least one of icons, operation menus, and graphics displayed for user input instructions. The graphics processor includes an arithmetic unit, which performs operations by receiving various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which renders the objects obtained by the arithmetic unit for display on the display.
In some embodiments, the video processor is configured to receive an external video signal and perform, according to the standard codec protocol of the input signal, at least one of decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis, so as to obtain a signal that can be displayed or played directly on the display device 200.
In some embodiments, the video processor includes at least one of a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module is used for demultiplexing the input audio and video data stream. The video decoding module is used for processing the demultiplexed video signal, including decoding, scaling, and the like. The image synthesis module is used for superimposing and mixing the GUI signal, input by the user or generated by the graphics generator, with the scaled video image to generate an image signal for display. The frame rate conversion module is used for converting the frame rate of the input video. The display formatting module is used for converting the frame-rate-converted video output signal into a signal conforming to the display format, such as an output RGB data signal.
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform at least one of noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in the speaker.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on display 260, and the user input interface receives the user input commands through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between a camera application or operating system and a user that enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of the User Interface is a Graphical User Interface (GUI), which refers to a User Interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in the display screen of the electronic device, where the control may include at least one of an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc. visual interface elements.
In some embodiments, user interface 280 is an interface that may be used to receive control inputs (e.g., physical buttons on the body of the display device, or the like).
As shown in fig. 4, the system of the display device may include a kernel, a command parser (shell), a file system, and application programs. The kernel, the shell, and the file system together make up the basic operating system structure that allows users to manage files, run programs, and use the system. After power-on, the kernel is started, kernel space is activated, hardware is abstracted, hardware parameters are initialized, and virtual memory, the scheduler, signals, and inter-process communication (IPC) are operated and maintained. After the kernel is started, the shell and user applications are loaded. An application is compiled into machine code after being started, forming a process.
As shown in fig. 4, the system of the display device is divided into three layers, i.e., an application layer, a middleware layer and a hardware layer from top to bottom.
The Application layer mainly includes common applications on the television and an Application Framework (Application Framework), wherein the common applications are mainly applications developed based on the Browser, such as: HTML5 APPs; and Native APPs (Native APPs);
an Application Framework (Application Framework) is a complete program model, and has all basic functions required by standard Application software, such as: file access, data exchange, and interfaces to use these functions (toolbars, status lists, menus, dialog boxes).
Native APPs (Native APPs) may support online or offline, message push, or local resource access.
The middleware layer comprises various television protocols, multimedia protocols, system components and other middleware. The middleware can use basic service (function) provided by system software to connect each part of an application system or different applications on a network, and can achieve the purposes of resource sharing and function sharing.
The hardware layer mainly comprises an HAL interface, hardware, and drivers. The HAL interface is a unified interface for adapting all the television chips, and the specific logic is implemented by each chip. The drivers mainly include: an audio driver, a display driver, a Bluetooth driver, a camera driver, a Wi-Fi driver, a USB driver, an HDMI driver, sensor drivers (such as fingerprint sensor, temperature sensor, and pressure sensor drivers), a power driver, and the like.
As described above, the display device can analyze the user's voice, determine the media asset referred to in the voice, and play that media asset. However, the user may not state a specific media asset type when inputting the voice; for example, the user may say "I want to watch You of Teenagers." In this case, the display device can only parse the media asset name from the voice, but the name may correspond to multiple media asset types, such as a TV series or a movie. The display device therefore cannot determine the specific media asset type and cannot play the corresponding media asset, resulting in a poor user experience.
The application provides a display device and a server. Wherein the display device includes a display and a controller.
The display is used for displaying a user interface. The user interface may be a specific target image, such as various media assets acquired from a network signal source, including video, pictures, and other content. The user interface may also be some UI interface of the display device.
In some embodiments, the display device has a voice control function, and the user can control the display device by voice input. Specifically, the display device further includes a sound collector, which may be a microphone, configured to receive voice instructions input by the user, such as a control trigger instruction. Using the microphone, the user can send various instructions, such as a media asset search instruction, to the display device by voice input.
In some embodiments, when the user controls the display device to power on, the controller may control the display to display a user interface. Fig. 5 is a schematic diagram of the user interface in some embodiments. The user interface includes a first navigation bar 500, a second navigation bar 510, a function bar 520, and a content display area 530; the function bar 520 includes a plurality of function controls such as "view records", "my favorites", and "my applications". The content displayed in the content display area 530 changes according to the controls selected in the first navigation bar 500 and the second navigation bar 510. To open the application panel page, the user can click the "my applications" control to input a display instruction for the application panel page and trigger entry into the corresponding application panel. It should be noted that the user may also input a selection operation on the function control in other ways to trigger entry into the application panel, for example by using a voice control function or a search function. After the user clicks the "my applications" control, the display device shows the interface corresponding to that control, which may include all applications installed on the display device. Fig. 6 is a diagram illustrating an application list in some embodiments; the interface corresponding to the "my applications" control may include an application list containing all applications installed on the display device.
In some embodiments, the display device provides a media asset search function and can identify the target media asset that the user wants to search for according to the media asset search instruction input by the user. Specifically, the display device may be provided with a media asset search mode. In the media asset search mode, the display device can automatically recognize a media asset search instruction input by the user through voice, so as to obtain the target media asset that the user wants to search for.
In some embodiments, the user may send a media asset search mode instruction to the display device by operating a designated key of the remote control. In practical application, the correspondence between the media asset search mode instruction and the remote control key is bound in advance. For example, a media asset search mode key is arranged on the remote control; when the user touches the key, the remote control sends a media asset search mode instruction to the controller, and the controller controls the display device to enter the media asset search mode. When the user touches the key again, the controller can control the display device to exit the media asset search mode.
In some embodiments, the media asset search mode instruction may also be bound to a combination of remote control keys in advance, so that the remote control sends the media asset search mode instruction when the user touches all of the bound keys. In a feasible embodiment, the keys bound to the media asset search mode instruction are, in sequence, the direction keys (left, down, left, down); that is, when the user touches the keys (left, down, left, down) successively within a preset time, the remote control sends the media asset search mode instruction to the controller. This binding method can avoid sending the media asset search mode instruction due to a user's misoperation. The binding relationships between the media asset search mode instruction and the keys given in the embodiments of the present application are only examples; in practical application, the binding relationship can be set according to the user's habits, and no excessive limitation is made here.
In some embodiments, the user may send a media asset search mode instruction to the display device by voice input, using a sound collector of the display device such as a microphone, to control the display device to enter the media asset search mode. An intelligent voice system may be provided in the display device; the intelligent voice system can recognize the user's voice so as to extract the instruction content input by the user. The user can input a preset wake-up word through the microphone to activate the intelligent voice system, so that the controller can respond to instructions input by the user, and then input the media asset search mode instruction within a certain time to make the display device enter the media asset search mode. For example, the user may first speak the wake-up word to activate the intelligent voice system, and then input "enter media asset search mode" to send the media asset search mode instruction to the display device.
In some embodiments, the user may also send a media asset search mode instruction to the display device through a preset gesture. The display device may detect the user's behavior through an image collector, such as a camera. When the user makes the preset gesture, the user may be considered to have sent a media asset search mode instruction to the display device. For example, it can be set that, when it is detected that the user draws a "V" shape, it is determined that the user has input a media asset search mode instruction to the display device. The user can also send a media asset search mode instruction to the display device through a preset action. For example, it can be set that, when it is detected that the user lifts the left foot and the right hand at the same time, it is determined that the user has input a media asset search mode instruction to the display device.
In some embodiments, when the user controls the display device using a smart device, for example a mobile phone, the media asset search mode instruction may also be sent to the display device from that device. In practical application, a control can be provided in the mobile phone through which the user can choose whether to enter the media asset search mode, so as to send a media asset search mode instruction to the controller; the controller then controls the display device to enter the media asset search mode.
In some embodiments, when the user controls the display device using a mobile phone, a continuous click instruction may be issued to the mobile phone. A continuous click instruction means that, within a preset period, the number of times the user clicks the same area of the mobile phone touch screen exceeds a preset threshold. For example, when the user clicks a certain area of the touch screen 3 times within 1 s, it is regarded as a continuous click instruction. After receiving the continuous click instruction, the mobile phone can send a media asset search mode instruction to the display device, so that the controller controls the display device to enter the media asset search mode. A minimal sketch of this detection rule is given below.
In some embodiments, when the user uses the mobile phone to control the display device, it may also be set that, when the touch pressure exerted by the user on a certain area of the mobile phone touch screen exceeds a preset pressure threshold, the mobile phone sends a media asset search mode instruction to the display device.
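Purely as an illustration of the continuous-click rule described above, a small sketch follows; the 1 s window, 3-tap threshold, and the 50 px notion of "same area" are assumed values, not values defined by the application.

```python
# Illustrative continuous-click detector: returns True when the number of taps in the
# same area of the touch screen within the preset period exceeds the threshold.
# The window, threshold and "same area" radius are assumed example values.

def is_continuous_click(taps, window_s=1.0, min_taps=3, radius_px=50):
    """taps: list of (timestamp_seconds, x, y) tuples, oldest first."""
    if not taps:
        return False
    last_t, last_x, last_y = taps[-1]
    recent = [t for t in taps
              if last_t - t[0] <= window_s
              and abs(t[1] - last_x) <= radius_px
              and abs(t[2] - last_y) <= radius_px]
    return len(recent) >= min_taps

print(is_continuous_click([(0.0, 100, 200), (0.4, 102, 198), (0.8, 99, 201)]))  # True
```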
A media asset search mode option can also be provided in the UI of the display device; when the user clicks this option, the display device can be controlled to enter or exit the media asset search mode.
In some embodiments, to prevent the user from triggering the media asset search mode by mistake, when the controller receives a media asset search mode instruction, it may control the display to display media asset search mode confirmation information, so that the user can confirm a second time whether to control the display device to enter the media asset search mode. Fig. 7 is a diagram illustrating the display of media asset search mode confirmation information in the display in some embodiments.
In some embodiments, after the display device is triggered to enter the media asset search mode, the user may also send instructions to the display device in text form through a mobile phone, a remote control, or the like, in case the microphone fails and the display device cannot receive the user's voice instructions.
When the display device enters the media asset search mode, it can automatically recognize a media asset search instruction input by the user through voice, so as to identify the target media asset that the user wants to search for.
FIG. 8 illustrates a flow diagram for interaction of components of a display device in some embodiments.
In some embodiments, when the display device is in the media asset searching mode, the user can send a media asset searching instruction to the display device through the microphone by means of voice input to search for a target media asset.
After receiving a media asset searching instruction input by a user, the display device can analyze the media asset searching instruction, so that a target media asset which the user wants to search is identified. It should be noted that, in order to accurately show the correct target asset to the user, the display device needs to determine the asset type of the target asset, and therefore, the display device first needs to determine whether the asset search instruction input by the user includes the asset type.
In some embodiments, after the display device receives a media asset search instruction input by the user, the controller may send the received voice data to a voice recognition service so as to convert the voice data into text information and obtain a media asset search text. The voice recognition service is a web service deployable on the display device and may include a voice recognition module and a semantic analysis module. The voice recognition module is used for recognizing the audio as text, and the semantic analysis module is used for performing semantic analysis on the text. For example, the voice recognition module may recognize the media asset search instruction input by the user as a media asset search text, and the semantic analysis module then analyzes the lexical structure, syntax, and semantics of the media asset search text, so as to understand the user's intention and obtain the target media asset.
In some embodiments, the display device may also include a third-party voice recognition interface. After receiving a media asset search instruction input by the user, the controller can send the voice data to the third-party voice recognition interface, and the media asset search instruction is converted into a media asset search text by a third-party voice recognition service. After the media asset search text is obtained, the controller can analyze it and determine whether it contains a media asset type.
In some embodiments, after the media asset search text is obtained, the controller may perform word segmentation on the media asset search text to obtain a word segmentation result containing a plurality of tokens; the open-source word segmentation tool JIEBA may be used for this.
For example, for the media asset search instruction "you i want to watch tv drama teenagers", after performing the word segmentation processing, four words with word segmentation results of "you i want to watch tv drama and teenagers" can be obtained. The specific word segmentation method can refer to the related technology, and is not described in detail in this application. After the word segmentation processing is carried out on the media asset search text, the controller can analyze the word segmentation result and determine the meaning of each word. For example, for the word segmentation result "i want, watch, tv show, you of teenagers", "watch" can be resolved as the control instruction "click", "tv show" can be resolved as the asset type, and "you of teenagers" can be resolved as the asset name.
By analyzing the word segmentation result, whether the media asset searching instruction contains the media asset type can be determined.
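A minimal sketch of the word segmentation and type detection using the JIEBA tool mentioned above follows; the media asset type vocabulary and the example sentence are illustrative assumptions, and the exact segmentation produced by JIEBA may vary.

```python
# Illustrative word segmentation and type detection with the open-source JIEBA tool.
# The type vocabulary below is an assumed example, not an exhaustive list.
import jieba  # pip install jieba

ASSET_TYPES = {"电视剧": "tv series", "电影": "movie"}  # assumed type vocabulary

def parse_search_text(search_text):
    # e.g. "我想看电视剧少年的你" -> ["我", "想", "看", "电视剧", ...] (segmentation may vary)
    tokens = jieba.lcut(search_text)
    asset_type = next((ASSET_TYPES[t] for t in tokens if t in ASSET_TYPES), None)
    return tokens, asset_type

tokens, asset_type = parse_search_text("我想看电视剧少年的你")
print(tokens, asset_type)  # asset_type is "tv series"; it would be None for "我想看少年的你"
```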
In some embodiments, if the media asset type is included in the media asset search instruction, the controller may further obtain the media asset name from the instruction; specifically, the media asset name can be obtained from the word segmentation result of the media asset search text.
The controller may then determine the target media asset according to the media asset type and the media asset name. For example, for the media asset search instruction "I want to watch the TV series You of Teenagers", it can be determined that the media asset the user wants to search for is "You of Teenagers" and the media asset type is TV series. Therefore, the controller can determine the TV series "You of Teenagers" as the target media asset and acquire the corresponding media asset resource from the network signal source. Further, the controller may control the display to display the media asset: the detail page of the media asset resource may be displayed, or the media asset resource may be played directly, which is not specifically limited in the embodiments of the present application.
In some embodiments, if the specific asset type is not included in the asset search instruction, the controller needs to further obtain the corresponding asset type, so as to determine the target asset. The controller may send the media asset search instruction to the server, specifically, may directly send a media asset search text corresponding to the media asset search instruction to the server.
In some embodiments, after converting the media asset search instruction into a media asset search text, the display device may directly send the media asset search text to an ES (Elasticsearch) server. The ES server is a distributed, extensible, real-time search and analysis engine based on the full-text search engine library Apache Lucene(TM) (an open-source search project).
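The application does not specify how the ES server is queried; purely as an illustration, a fuzzy match of the search text against an index of user operation data might look like the sketch below. The index name, field names, time window, and client version are all assumptions.

```python
# Illustrative Elasticsearch lookup for user operation data related to a search text.
# Index name, field names and the 3-month window are assumptions for this sketch.
from elasticsearch import Elasticsearch  # pip install elasticsearch (8.x client assumed)

es = Elasticsearch("http://localhost:9200")

def related_operations(device_id, search_text):
    resp = es.search(
        index="user_operations",
        query={
            "bool": {
                "filter": [
                    {"term": {"device_id": device_id}},
                    {"range": {"time": {"gte": "now-3M/d"}}},  # preset first history period
                ],
                "must": [
                    # fuzzy matching between the search text and stored asset names
                    {"match": {"asset_name": {"query": search_text, "fuzziness": "AUTO"}}}
                ],
            }
        },
    )
    return [hit["_source"] for hit in resp["hits"]["hits"]]
```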
In some embodiments, each operation that the user performs on the display device during use may generate one piece of user operation data. For example, the user may use the display device to watch a media asset, that is, the user operates the display device to perform a media asset playing operation; at this time, one piece of media asset playing operation data may be generated in the display device. The media asset playing operation data may include the media asset name, the media asset type, the playing duration, the playing time, and the like, for example: the user played the TV series "You of Teenagers" for 50 minutes, and the playing time is A.
The user can also click to view the detailed introduction of a certain media asset and enter its detail page, that is, the user operates the display device to perform a media asset query operation; at this time, one piece of media asset query operation data may be generated in the display device. The media asset query operation data may include the media asset name, the media asset type, the query time, and the like, for example: the user clicked to query the TV series "You of Teenagers", and the query time is B.
The user can also search for media assets by voice input, that is, the user performs a voice search operation; at this time, voice search operation data may be generated in the display device. The voice search operation data may include the media asset name, the media asset type, the search time, and the like, for example: the user searched for the TV series "You of Teenagers" by voice, and the search time is C.
The embodiments of the present application introduce user operations only by way of example; the user can also control the display device to perform other operations, which are not described here again.
Each time the user performs an operation on the display device, the display device may generate user operation data corresponding to the current operation. Each time a piece of user operation data is generated, the display device can send it to the server, so that the server stores all the user operation data of the display device. A sketch of such reporting is given below.
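For example, an assumed JSON shape and endpoint (not defined by the application) for reporting one piece of user operation data might look like this:

```python
# Illustrative reporting of one piece of user operation data to the server.
# The endpoint URL and JSON field names are assumptions for this sketch.
import time
import requests  # pip install requests

def report_operation(device_id, operation, asset_name, asset_type, **extra):
    record = {
        "device_id": device_id,   # operation data of each display device is stored separately
        "operation": operation,   # "play", "query" or "voice_search"
        "asset_name": asset_name,
        "asset_type": asset_type, # may be empty for a voice search without a type
        "time": int(time.time()),
        **extra,                  # e.g. playing duration in minutes
    }
    requests.post("https://media-server.example.com/operations", json=record, timeout=5)

# e.g. the user played the TV series "You of Teenagers" for 50 minutes:
# report_operation("device-001", "play", "You of Teenagers", "tv series", duration_min=50)
```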
In some embodiments, a database for storing user operation data may be provided in the server, and this database may store all the user operation data of the display device. It should be noted that the database may store user operation data of a plurality of display devices, with the user operation data of each display device stored separately, so that the related data of each display device are kept apart and data confusion is prevented. All the user operation data of a display device can be regarded as all the user operation data of its user, so that the operation preferences of different users can be better distinguished.
In some embodiments, the display device may send the media asset search instruction to the server by directly sending the media asset search text to the server. After receiving the media asset search text, the server may screen the user operation data stored in the database to obtain all user operation data related to the media asset search text, determine these data as the media asset operation data, and then send the media asset operation data to the display device.
In some embodiments, when acquiring the media asset operation data, the server may first find all user operation data of the current display device in the database, which can be regarded as all the user operation data of the current user. The user operation data may include: media asset playing operation data, media asset query operation data, and voice search operation data.
The server may then filter all the user operation data of the display device by time, to obtain all the user operation data of the display device within a preset first history period. The preset first history period may be set to three months, that is, the server obtains all the user operation data of the display device within the last three months as the screening data.
The server then performs instruction-relevance screening on the screening data to obtain all user operation data related to the media asset search instruction, thereby obtaining the media asset operation data. That is, the media asset operation data are all the user operation data of the display device related to the media asset search instruction within the preset first history period. Fig. 9 is a flow diagram illustrating a process of the server obtaining media asset operation data in some embodiments.
In some embodiments, when performing instruction-relevance screening on the screening data, the server may perform fuzzy matching between the media asset search instruction and the screening data. Specifically, the server may fuzzily match the media asset search text against the screening data to obtain the media asset operation data, for example as sketched below.
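As a simplified stand-in for this screening step (Python's difflib is used here only as an example fuzzy matcher; the application does not specify the matching algorithm, threshold, or exact period):

```python
# Simplified sketch of the server-side screening: keep the device's operation data from
# the preset first history period (assumed 3 months) whose asset name fuzzily matches
# the search text. difflib is only a stand-in for the real fuzzy matcher.
import time
from difflib import SequenceMatcher

FIRST_PERIOD_S = 90 * 24 * 3600  # assumed "three months"

def screen_operations(all_records, device_id, search_text, threshold=0.6):
    now = time.time()
    result = []
    for r in all_records:
        if r["device_id"] != device_id:
            continue
        if now - r["time"] > FIRST_PERIOD_S:          # time screening
            continue
        ratio = SequenceMatcher(None, search_text.lower(), r["asset_name"].lower()).ratio()
        if ratio >= threshold:                        # instruction-relevance (fuzzy) screening
            result.append(r)
    return result
```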
In some embodiments, after the media asset operation data is acquired, the server may send the media asset operation data to the display device, so that the display device acquires the media asset type according to the media asset operation data.
In some embodiments, after receiving the media asset operation data sent by the server, the controller may obtain the type of the media asset according to the media asset operation data, thereby determining the target media asset.
Specifically, the media asset operation data may include various types of user operation data, such as: media asset playing operation data, media asset query operation data and voice search operation data. The user operation data may represent operations of the display device by the user, i.e., each type of user operation data may represent a type of user operation.
For each type of user operation data, a priority may be set in advance to indicate the degree of user preference. In the embodiments of the present application, the priority is set as: media asset playing operation > media asset query operation > voice search operation. A media asset playing operation indicates that the user actually watched the media asset, so the user's preference is considered the highest (very interested). A media asset query operation indicates that the user clicked to view the details of the media asset but did not watch it, so the user's preference is considered moderate (somewhat interested). A voice search operation indicates that the user only expressed an intention by voice, neither watching a specific media asset nor clicking its details, so the user's preference is considered the lowest (least interested).
For the received media asset operation data, the controller can obtain the user operation data with the highest priority, thereby determining the media asset type the user preferred most within the first history period. For example, suppose the media asset operation data include two records. One is a piece of media asset playing operation data: "the user played the TV series You of Teenagers for 50 minutes, and the playing time is A." The other is a piece of media asset query operation data: "the user clicked to query the movie You of Teenagers, and the query time is B." According to the preset priority, the user operation data with the highest priority is the media asset playing operation data. The target media asset, namely the TV series "You of Teenagers", can then be determined according to the media asset name and media asset type contained in that piece of media asset playing operation data.
In some embodiments, there may be multiple pieces of user operation data with the highest priority. For example, suppose the media asset operation data include three records. One is a piece of media asset playing operation data: "the user played the TV series You of Teenagers for 50 minutes, and the playing time is A1." Another is a piece of media asset playing operation data: "the user played the movie You of Teenagers for 70 minutes, and the playing time is A2." The third is a piece of media asset query operation data: "the user clicked to query the movie You of Teenagers, and the query time is B." The two pieces of user operation data with the highest priority are both media asset playing operation data, but they involve two different media asset types.
In this case, the most recent record among all the user operation data with the highest priority is determined as the target user operation data. That is, by comparing times A1 and A2, the media asset playing operation data with the later time is determined as the target user operation data; the media asset name and media asset type contained in the target user operation data are then obtained, thereby determining the target media asset. This selection rule is sketched below.
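The selection rule just described (highest preset priority first, most recent record as the tie-breaker) can be written, for instance, as follows; the record shape and timestamps are illustrative.

```python
# Illustrative selection: highest preset priority first, most recent time as tie-breaker.
PRIORITY = {"play": 3, "query": 2, "voice_search": 1}

def pick_target(records):
    best = max(records, key=lambda r: (PRIORITY[r["operation"]], r["time"]))
    return best["asset_name"], best["asset_type"]

records = [
    {"operation": "play",  "asset_name": "You of Teenagers", "asset_type": "tv series", "time": 1},  # A1
    {"operation": "play",  "asset_name": "You of Teenagers", "asset_type": "movie",     "time": 2},  # A2 (later)
    {"operation": "query", "asset_name": "You of Teenagers", "asset_type": "movie",     "time": 3},  # B
]
print(pick_target(records))  # ('You of Teenagers', 'movie') - the later of the two play records
```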
In some embodiments, the server may not obtain any media asset operation data, that is, the display device has no user operation data related to the media asset search instruction within the preset first history period, because the user has neither searched for nor played related media assets. In this case, the display device needs to obtain the media asset type in another way.
A data feedback period, for example 30 minutes, may be set. If the server does not feed back any media asset operation data within one data feedback period after the display device sends the media asset search instruction, it is considered that no media asset operation data exists. At this time, the controller may send a request for obtaining the media asset type to the server.
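A minimal sketch of this timeout-based fallback on the display device side, assuming hypothetical server.search(...) and server.request_asset_type(...) coroutines and an asyncio-style client; none of these names come from the patent.

```python
import asyncio

FEEDBACK_PERIOD_SECONDS = 30 * 60  # e.g. a 30-minute data feedback period

async def fetch_operation_data_or_fallback(server, search_instruction):
    """Wait one feedback period for operation data; otherwise ask for the asset type."""
    try:
        # Hypothetical call that resolves when the server returns operation data.
        data = await asyncio.wait_for(
            server.search(search_instruction), timeout=FEEDBACK_PERIOD_SECONDS)
        return ("operation_data", data)
    except asyncio.TimeoutError:
        # No operation data arrived in time: fall back to a media asset type request.
        return ("asset_type", await server.request_asset_type(search_instruction))
```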
In some embodiments, upon receiving the request for obtaining the media asset type sent by the display device, the server may determine a specific media asset type. FIG. 10 is a flow diagram illustrating the server obtaining a target media asset type in some embodiments.
The server may perform preliminary screening on all user operation data of the display device stored in the database, and obtain all user operation data within a preset second history period, for example, the last three days.
The server then obtains the most recent user operation data among these records, obtains the media asset type in that record, and determines it as the target media asset type. That is, the user's most recent media-asset-related operation is identified, and the media asset type corresponding to that operation is taken as the target media asset type, i.e., the type the user prefers most in the near term. For example, the most recent user operation data may be "the user played the TV series 'Sweet' for 50 minutes at playing time A3". The user may not have searched for media assets related to the search instruction before this playing operation, but since the user's last operation was playing a TV series, the TV series is considered the type the user currently prefers most, and is therefore determined as the target media asset type.
In some embodiments, during the preliminary screening of user operation data, the server may instead obtain only the media asset playing operation data within the preset second history period, obtain the media asset type in the most recent media asset playing operation data, and determine it as the target media asset type.
Alternatively, the server may obtain the media asset type in the most recent voice search operation data among all voice search operation data within the preset second history period, and determine it as the target media asset type.
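The server-side screening described above could look like the following sketch, reusing the OperationRecord structure assumed earlier; the op_filter parameter and the in-memory list standing in for the database are illustrative assumptions.

```python
from datetime import datetime, timedelta

def target_type_from_history(records: list[OperationRecord],
                             now: datetime,
                             history_days: int = 3,
                             op_filter: str | None = None) -> str | None:
    """Return the asset type of the most recent operation in the second history period.

    op_filter=None uses all operations; "play" or "voice_search" restricts the
    screening to that operation type, as in the variants described above.
    """
    window_start = now - timedelta(days=history_days)
    candidates = [r for r in records
                  if r.timestamp >= window_start
                  and (op_filter is None or r.op_type == op_filter)]
    if not candidates:
        return None  # no recent operations: fall through to the popularity-based path
    return max(candidates, key=lambda r: r.timestamp).asset_type
```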
In some embodiments, the server may not find any recent user operation data, that is, the user has not performed any media-asset-related operation on the display device recently. In this case, the user's preference for a media asset type cannot be determined from this user's own operation data.
The server can then query the user operation data of all display devices in the database, obtain the media asset type with the highest popularity related to the media asset search instruction, and determine it as the target media asset type. Alternatively, the server may obtain the media asset type with the highest popularity related to the media asset search instruction on the network and determine it as the target media asset type. That is, the preferences of all other users are considered, and the most popular media asset type related to the media asset search instruction is determined as the target media asset type. For example, if the media asset search instruction indicates that the user wants to search for "You of Teenagers", the server may obtain the most popular media asset type among all media assets related to "You of Teenagers" on the network and determine it as the target media asset type.
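A sketch of the popularity-based fallback, under the assumption that cross-device operation data is available as the same record list used above; counting how often each type appears for assets matching the search text is one plausible reading of "highest popularity", and the fuzzy matching helper is an illustrative stand-in.

```python
from collections import Counter

def most_popular_type(all_users_records: list[OperationRecord],
                      search_text: str) -> str | None:
    """Pick the asset type that appears most often among operations matching the query."""
    def matches(record: OperationRecord) -> bool:
        # Illustrative fuzzy match: the search text appears in the asset name.
        return search_text.lower() in record.asset_name.lower()

    counts = Counter(r.asset_type for r in all_users_records if matches(r))
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```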
The server may then send the target media asset type to the display device.
In some embodiments, after receiving the target media asset type sent by the server, the controller may obtain the target media asset according to the target media asset type. Specifically, the controller may obtain the media asset name from the media asset search text, determine the media asset ID, i.e., the target media asset, according to the media asset name and the target media asset type, and obtain the corresponding media asset resource from the network signal source.
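A minimal sketch of this name-plus-type lookup on the display device, assuming a hypothetical catalog mapping (name, type) pairs to media asset IDs; the catalog, its entries and the IDs are made up for illustration and are not defined in the patent.

```python
def resolve_target_asset(catalog: dict[tuple[str, str], str],
                         asset_name: str, target_type: str) -> str | None:
    """Map (asset name, target asset type) to a media asset ID, if the catalog has it."""
    return catalog.get((asset_name, target_type))

# Usage sketch with made-up catalog entries.
catalog = {("You of Teenagers", "tv_series"): "asset-001",
           ("You of Teenagers", "movie"): "asset-002"}
asset_id = resolve_target_asset(catalog, "You of Teenagers", "tv_series")
# If asset_id is not None, the corresponding resource can be fetched and displayed.
```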
In some embodiments, after the target media asset is obtained, the controller may control the display to display a media asset detail page of the target media asset, where the user can click to watch it. Alternatively, the controller may play the target media asset directly.
In some embodiments, after the target media asset is obtained, the controller may instead control the display to display a target media asset confirmation interface, so that the user can confirm whether to play it. FIG. 11 is a schematic diagram illustrating a target media asset confirmation interface on the display in some embodiments. When the user clicks Confirm, the target media asset is played on the display.
According to the method and the device, the target media asset is first determined according to the user's preference related to the media asset search instruction within the first history period; if it cannot be determined that way, it is determined according to the user's preference for media asset types within the second history period; and if it still cannot be determined, the media asset type with the highest popularity related to the media asset search instruction is obtained from network-wide data and the target media asset is determined accordingly. In this way, a specific media asset type can always be determined, the corresponding media asset can be played, and the user experience is improved.
An embodiment of the present application further provides a media asset searching method, which is applied to a display device, and as shown in fig. 12, the method includes:
Step 1201: determine whether a media asset search instruction input by a user for requesting media assets contains a media asset type; if the media asset search instruction does not contain a media asset type, send the media asset search instruction to the server.
Step 1202: receive media asset operation data sent by the server, where the media asset operation data is all user operation data of the display device related to the media asset search instruction within a preset first history period.
Step 1203: obtain the user operation data with the highest priority in the media asset operation data based on a preset priority, and obtain the target media asset from the user operation data with the highest priority.
Step 1204: control the display to display the target media asset.
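Tying the four steps together on the display device side, the sketch below reuses the helpers assumed in the earlier sketches (server.search via fetch_operation_data_or_fallback, and pick_target_asset); the search_instruction attributes and display.show call are likewise illustrative assumptions, not names from the patent.

```python
async def handle_voice_search(server, display, search_instruction):
    """Sketch of steps 1201-1204 under the assumptions stated above."""
    if search_instruction.asset_type is not None:
        # Step 1201: the instruction already names a type, so this flow is not needed.
        return
    kind, payload = await fetch_operation_data_or_fallback(server, search_instruction)
    if kind == "operation_data":
        target = pick_target_asset(payload)                  # steps 1202-1203
    else:
        # The server returned only a target asset type; combine it with the spoken name.
        target = (search_instruction.asset_name, payload)
    if target is not None:
        display.show(target)                                 # step 1204
```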
An embodiment of the present application further provides a media asset searching method applied to a server. As shown in fig. 13, the method includes the following steps:
Step 1301: in response to a media asset search instruction sent by the display device, obtain media asset operation data, where the media asset operation data is all user operation data of the display device related to the media asset search instruction within a preset first history period, and the media asset search instruction does not contain a media asset type.
Step 1302: send the media asset operation data to the display device, so that the display device obtains the target media asset according to the media asset operation data and controls the display to display the target media asset.
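On the server side, a corresponding sketch of steps 1301-1302 plus the type-request fallback, assuming the database can be queried as lists of the OperationRecord structure used above and reusing target_type_from_history and most_popular_type from the earlier sketches; the db interface, the one-day first history period and the fuzzy matching are illustrative assumptions.

```python
def fuzzy_match(query: str, name: str) -> bool:
    # Illustrative stand-in for the fuzzy matching against the database.
    q, n = query.lower(), name.lower()
    return q in n or n in q

class MediaAssetServer:
    def __init__(self, db):
        self.db = db  # hypothetical store of per-device OperationRecord lists

    def on_search(self, device_id: str, search_text: str) -> list[OperationRecord]:
        """Steps 1301-1302: return operation data related to the query (first period)."""
        records = self.db.recent_records(device_id, days=1)  # illustrative period
        return [r for r in records if fuzzy_match(search_text, r.asset_name)]

    def on_type_request(self, device_id: str, search_text: str, now) -> str | None:
        """Fallback: most recent operation in the second period, else popularity."""
        own = target_type_from_history(self.db.recent_records(device_id, days=3), now)
        return own or most_popular_type(self.db.all_records(), search_text)
```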
The same and similar parts in the embodiments in this specification may be referred to one another, and are not described herein again.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of software products, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method in the embodiments or some parts of the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, comprising:
a display;
the voice collector is configured to receive a voice instruction input by a user;
a controller configured to:
judging whether a media asset searching instruction which is input by a user and used for requesting media assets contains a media asset type; if the media asset searching instruction does not contain the media asset type, the media asset searching instruction is sent to a server;
receiving media asset operation data sent by a server, wherein the media asset operation data are all user operation data related to the media asset searching instruction in a preset first history period of display equipment;
acquiring user operation data with the highest priority in the media asset operation data based on a preset priority, and acquiring target media assets in the user operation data with the highest priority;
and controlling a display to display the target media assets.
2. The display device of claim 1, wherein the controller is further configured to:
in performing the step of judging whether the media asset searching instruction which is input by the user and used for requesting media assets contains the media asset type,
converting the media resource searching instruction into a media resource searching text;
and analyzing the media asset searching text, and judging whether the media asset searching text contains the media asset type.
3. The display device of claim 1, wherein the controller is further configured to:
when performing the step of acquiring the target media asset in the user operation data with the highest priority,
if the user operation data with the highest priority is multiple, determining the latest user operation data in the user operation data with the highest priority as target user operation data;
and acquiring the target media assets in the operation data of the target user.
4. The display device of claim 2, wherein the controller is further configured to:
after performing the step of sending the media asset search instruction to the server,
if the media asset operation data sent by the server is not received, sending a request for acquiring the media asset type to the server;
receiving a target media asset type sent by a server;
acquiring a media asset name according to the media asset searching text;
and acquiring target media assets according to the target media asset types and the media asset names, and executing a step of controlling a display to display the target media assets.
5. The display device of claim 1, wherein the controller is further configured to:
when the operation of a user on the display equipment is detected, obtaining user operation data of the current operation, and sending the user operation data to a server, wherein the user operation data comprises: media asset playing operation data, media asset query operation data and voice search operation data.
6. A server, wherein the server is configured to:
responding to a media asset searching instruction sent by display equipment, and acquiring media asset operation data, wherein the media asset operation data are all user operation data related to the media asset searching instruction in a preset first history period of the display equipment, and the media asset searching instruction does not contain a media asset type;
and sending the media asset operation data to display equipment so that the display equipment acquires target media assets according to the media asset operation data and controls a display to display the target media assets.
7. The server of claim 6, wherein the server is further configured to:
in performing the step of obtaining the asset operation data,
carrying out fuzzy matching on the media asset searching instruction and a preset database to obtain media asset operation data; all user operation data of the display device are stored in the preset database, and the user operation data comprise: media asset playing operation data, media asset query operation data and voice search operation data.
8. The server of claim 7, wherein the server is further configured to:
responding to a request for acquiring the media asset type sent by the display equipment, and acquiring voice search operation data which is most recent in time of the display equipment in a preset second history period according to the preset database;
acquiring the media asset type in the voice search operation data with the nearest time, and determining the media asset type as a target media asset type;
and sending the target media asset type to display equipment.
9. The server of claim 8, wherein the server is further configured to:
if the voice search operation data with the nearest time does not exist in the preset database, acquiring the media asset type with the highest popularity related to the media asset searching instruction in the network;
and determining the media asset type with the highest popularity as a target media asset type, and executing the step of sending the target media asset type to display equipment.
10. A media asset searching method is applied to display equipment and is characterized by comprising the following steps:
judging whether a media asset searching instruction which is input by a user and used for requesting media assets contains a media asset type; if the media asset searching instruction does not contain the media asset type, the media asset searching instruction is sent to a server;
receiving media asset operation data sent by a server, wherein the media asset operation data are all user operation data related to the media asset searching instruction in a preset first history period of display equipment;
acquiring user operation data with the highest priority in the media asset operation data based on a preset priority, and acquiring target media assets in the user operation data with the highest priority;
and controlling a display to display the target media assets.
Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination