CN115278316A - Prompt language generation method and device and electronic equipment - Google Patents

Prompt language generation method and device and electronic equipment

Info

Publication number
CN115278316A
Authority
CN
China
Prior art keywords
information
actual
user
data
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210761080.7A
Other languages
Chinese (zh)
Inventor
张路伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202210761080.7A priority Critical patent/CN115278316A/en
Publication of CN115278316A publication Critical patent/CN115278316A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/335Filtering based on additional data, e.g. user or group profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a prompt generation method, apparatus, and electronic device, relating to the technical field of data processing. It addresses the prior-art problem that the prompts generated by an electronic device are often inaccurate and cannot accurately reflect the user's true intention, resulting in a poor user experience. The method comprises the following steps: receiving first information sent by an electronic device; when the first information includes operation information, determining a user portrait according to identification information in the first information; determining an actual prompt according to the user portrait together with actual position information and actual time information in the first information; and sending first display information carrying the actual prompt to the electronic device.

Description

Prompt language generation method and device and electronic equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for generating a prompt, and an electronic device.
Background
Currently, an electronic device generates prompts from the historical voice data input by the user, so that the user can conveniently select a desired function, such as watching a movie, watching television, searching for songs, or querying the weather.
However, the prompts generated in this manner are often inaccurate and cannot accurately reflect the user's true intention, resulting in a poor user experience.
Disclosure of Invention
To solve this technical problem, the present disclosure provides a prompt generation method, apparatus, and electronic device, which address the prior-art problem that the prompts generated by an electronic device are often inaccurate and cannot accurately reflect the user's true intention, resulting in a poor user experience.
The technical scheme of the disclosure is as follows:
In a first aspect, the present disclosure provides a prompt generation method, including: receiving first information sent by an electronic device; when the first information comprises operation information, determining a user portrait according to identification information in the first information, where the operation information comprises either voice information lacking an entity word or indication information indicating that the electronic device has received a first operation, and the user portrait is generated based on historical voice data, historical operation data, and historical question-and-answer data corresponding to the identification information; determining an actual prompt according to the user portrait together with actual position information and actual time information in the first information; and sending first display information carrying the actual prompt to the electronic device, the first display information instructing the electronic device to display the actual prompt.
In some implementable examples, the identification information includes a user account, and the prompt generation method provided by the present disclosure further includes: receiving voice information sent by the electronic device, the voice information comprising first voiceprint data; determining, according to the first voiceprint data, whether second voiceprint data bound to the user account matches the first voiceprint data; and, when the second voiceprint data matches the first voiceprint data, sending first indication information to the electronic device, the first indication information instructing the electronic device to remain logged in to the user account.
In some implementable examples, the prompt generation method provided by the present disclosure further includes: when the second voiceprint data does not match the first voiceprint data, sending second indication information to the electronic device, the second indication information instructing the electronic device to prompt for input of theoretical login information of the user account; receiving actual login information sent by the electronic device; when the theoretical login information is the same as the actual login information, sending the first indication information to the electronic device; and when the theoretical login information differs from the actual login information, sending third indication information to the electronic device, the third indication information instructing the electronic device to prompt the user to perform a target operation, where the target operation includes either switching accounts or registering an account.
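The voiceprint login flow above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the function names, the cosine-similarity comparison, and the match threshold are all assumptions introduced for the example.

```python
# Illustrative sketch of the voiceprint-based login decision (names and
# threshold are assumed, not specified by the patent).

def cosine_similarity(a, b):
    """Plain cosine similarity between two voiceprint feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def login_decision(first_voiceprint, second_voiceprint, match_threshold=0.85):
    """Compare the received voiceprint with the one bound to the account."""
    if cosine_similarity(first_voiceprint, second_voiceprint) >= match_threshold:
        return "first_indication"   # matched: stay logged in to the user account
    return "second_indication"      # mismatch: ask the device to prompt for login info

def verify_login(theoretical, actual):
    # same credentials -> stay logged in; otherwise prompt to switch or register
    return "first_indication" if theoretical == actual else "third_indication"
```

A server following this flow would first try the voiceprint match and fall back to credential comparison only when the voiceprint check fails.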
In some implementable examples, before receiving the first information sent by the electronic device, the prompt generation method provided by the present disclosure further includes: acquiring the historical voice data, historical operation data, and historical question-and-answer data corresponding to the identification information; analyzing the historical voice data, historical operation data, and historical question-and-answer data to determine at least one piece of portrait data, where the portrait data includes a user tag weight for each content classification in each time period under the historical location information; and generating the user portrait from the portrait data.
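One plausible shape for the portrait data described above is a nested mapping from location to time period to content classification to weight. The sketch below, under the assumption that each historical record reduces to a (location, time period, category) tuple and that weights are normalized frequencies, shows how such a portrait might be built; none of these names come from the patent.

```python
# Minimal sketch: build a user portrait as nested tag weights
# location -> time period -> content classification -> weight.
from collections import Counter

def build_portrait(history_events):
    """history_events: iterable of (location, time_period, category) tuples
    drawn from historical voice, operation, and question-and-answer data."""
    counts = {}
    for location, period, category in history_events:
        slot = counts.setdefault(location, {}).setdefault(period, Counter())
        slot[category] += 1
    # normalize raw counts into weights within each (location, period) slot
    portrait = {}
    for location, periods in counts.items():
        portrait[location] = {}
        for period, ctr in periods.items():
            total = sum(ctr.values())
            portrait[location][period] = {c: n / total for c, n in ctr.items()}
    return portrait
```

With this structure, looking up the weights for the user's current location and time period is a pair of dictionary accesses.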
In some implementable examples, determining the actual prompt according to the user portrait and the actual position information and actual time information in the first information includes: determining, according to the user portrait, the actual position information, and the actual time information, the portrait data corresponding to both the actual position information and the actual time information; determining, from that portrait data, the content classifications whose user tag weight is greater than or equal to a preset weight; and determining the actual prompt according to those content classifications, where one content classification corresponds to at least one actual prompt.
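The threshold-based selection above can be sketched as a filter followed by a category-to-prompt lookup. The category-to-prompt table and all names here are invented for the example; the patent only states that one content classification corresponds to at least one actual prompt.

```python
# Sketch: keep content classifications whose user tag weight meets the preset
# weight, then expand each into its actual prompts (table is illustrative).

PROMPTS_BY_CATEGORY = {
    "movie": ["Play a movie you might like", "Continue last night's film"],
    "weather": ["Check today's weather"],
}

def prompts_for(portrait, location, period, preset_weight=0.5):
    """portrait: nested dict location -> period -> {category: weight}."""
    weights = portrait.get(location, {}).get(period, {})
    selected = [c for c, w in weights.items() if w >= preset_weight]
    # one content classification corresponds to at least one actual prompt
    return [p for c in selected for p in PROMPTS_BY_CATEGORY.get(c, [])]
```

A category below the preset weight contributes no prompts, so only preferences strong at the current location and time surface to the user.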
In some implementable examples, the first information further includes query information, and one content classification corresponds to at least one theoretical entity word. Determining the actual prompt according to the user portrait and the actual position information and actual time information in the first information includes: determining, according to a preconfigured knowledge graph and the query information, a theoretical prompt corresponding to each actual entity word contained in the query information; determining, according to the user portrait, the actual position information, and the actual time information, the portrait data corresponding to both the actual position information and the actual time information; determining, from that portrait data, the content classifications whose user tag weight is greater than or equal to the preset weight; matching the theoretical entity words of those content classifications against the actual entity words; and, when a theoretical entity word matches an actual entity word, determining the theoretical prompt corresponding to that actual entity word as an actual prompt.
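The knowledge-graph branch above can be sketched as follows. The graph is reduced here to a flat lookup table mapping each actual entity word to a theoretical prompt and its content classification; the table contents, the function names, and the flat representation are all assumptions made for illustration.

```python
# Sketch of the knowledge-graph branch: map each actual entity word in the
# query to a theoretical prompt, then keep it only when its content
# classification is among the high-weight classifications for this user.

KNOWLEDGE_GRAPH = {  # actual entity word -> (theoretical prompt, classification)
    "Inception": ("More films by Christopher Nolan", "movie"),
    "Beijing": ("Weather in Beijing tomorrow", "weather"),
}

def actual_prompts(query_entities, portrait, location, period, preset_weight=0.5):
    weights = portrait.get(location, {}).get(period, {})
    high_weight = {c for c, w in weights.items() if w >= preset_weight}
    prompts = []
    for entity in query_entities:
        if entity in KNOWLEDGE_GRAPH:
            theoretical_prompt, category = KNOWLEDGE_GRAPH[entity]
            if category in high_weight:  # entity's classification matches user preference
                prompts.append(theoretical_prompt)
    return prompts
```

The effect is that the same query yields different prompts for different users, because only entity words whose classification the user actually favors survive the match.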
In some implementable examples, one actual prompt corresponds to one actual control. After the first display information carrying the actual prompt is sent to the electronic device, the prompt generation method provided by the present disclosure further includes: receiving second information sent by the electronic device, the second information indicating that a target prompt has been selected, where the target prompt is any one of the actual prompts; generating display data corresponding to the target prompt according to the target prompt; and sending second display information carrying the display data to the electronic device, the second display information instructing the electronic device to display the display data.
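The selection round trip can be sketched in a few lines. The message shape and the injected `render` callback are assumptions; the patent only specifies that the server generates display data for the selected target prompt and sends it back.

```python
# Hedged sketch of handling the second information: the user selects a target
# prompt, and the server answers with display data for it (names assumed).

def handle_second_info(second_info, render):
    """second_info indicates which actual prompt was selected; `render` is a
    caller-supplied function that builds display data for that prompt."""
    target = second_info["target_prompt"]      # one of the actual prompts
    display_data = render(target)              # build content for that prompt
    return {"type": "second_display_info", "data": display_data}
```

On receipt, the electronic device would display `data` as instructed by the second display information.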
In a second aspect, the present disclosure provides a prompt generation method, including: sending first information to a server in response to a first operation, where the first information includes identification information, actual position information, actual time information, and operation information, and the operation information comprises either voice information lacking an entity word or indication information indicating that the electronic device has received the first operation; receiving first display information carrying an actual prompt sent by the server, where the first display information instructs the electronic device to display the actual prompt, the actual prompt is determined according to a user portrait and the actual position information and actual time information, and the user portrait is generated based on historical voice data, historical operation data, and historical question-and-answer data corresponding to the identification information; and displaying the actual prompt according to the first display information.
In some implementable examples, the identification information includes a user account, and the prompt generation method provided by the present disclosure further includes: sending voice information to the server, the voice information comprising first voiceprint data; and receiving first indication information sent by the server, the first indication information instructing the electronic device to remain logged in to the user account.
In some implementable examples, the prompt generation method provided by the present disclosure further includes: receiving second indication information sent by the server, the second indication information instructing the electronic device to prompt for input of theoretical login information of the user account; sending actual login information to the server in response to a second operation; and receiving third indication information sent by the server, the third indication information instructing the electronic device to prompt the user to perform a target operation, where the target operation includes either switching accounts or registering an account.
In some implementable examples, one actual prompt corresponds to one actual control. After receiving the first display information carrying the actual prompt sent by the server, the prompt generation method provided by the present disclosure further includes: sending second information to the server in response to a third operation of the user, the second information indicating that a target prompt has been selected, where the target prompt is any one of the actual prompts; receiving second display information carrying display data sent by the server, where the display data is generated according to the target prompt and the second display information instructs the electronic device to display the display data; and displaying the display data according to the second display information.
In a third aspect, the present disclosure provides a prompt generation apparatus, including: a receiving unit, configured to receive first information sent by an electronic device; and a processing unit, configured to determine a user portrait according to identification information in the first information received by the receiving unit when the first information includes operation information, where the operation information comprises either voice information lacking an entity word or indication information indicating that the electronic device has received a first operation, and the user portrait is generated based on historical voice data, historical operation data, and historical question-and-answer data corresponding to the identification information. The processing unit is further configured to determine an actual prompt according to the user portrait and the actual position information and actual time information in the first information received by the receiving unit, and to control a sending unit to send, to the electronic device, first display information carrying the actual prompt, the first display information instructing the electronic device to display the actual prompt.
In some practical examples, the identification information includes a user account, and the receiving unit is further configured to receive voice information sent by the electronic device; wherein, the voice information comprises first voiceprint data; the processing unit is further used for determining whether second voiceprint data bound to the user account is matched with the first voiceprint data received by the receiving unit according to the first voiceprint data received by the receiving unit; the processing unit is further used for controlling the sending unit to send the first indication information to the electronic equipment under the condition that the second voiceprint data are matched with the first voiceprint data received by the receiving unit; the first indication information is used for indicating the electronic equipment to continuously log in the user account.
In some practical examples, the processing unit is further configured to control the sending unit to send the second indication information to the electronic device if the second voiceprint data does not match the first voiceprint data received by the receiving unit; the second indication information is used for indicating the electronic equipment to prompt the input of theoretical login information of the user account; the receiving unit is also used for receiving actual login information sent by the electronic equipment; the processing unit is further used for controlling the sending unit to send first indication information to the electronic equipment under the condition that the theoretical login information is the same as the actual login information received by the receiving unit; the processing unit is further used for controlling the sending unit to send third indication information to the electronic equipment under the condition that the theoretical login information is different from the actual login information received by the receiving unit; the third indication information is used for indicating the electronic equipment to prompt the user to execute target operation, and the target operation includes any one of account switching or account registration.
In some implementable examples, the prompt generation apparatus further comprises an acquisition unit. The acquisition unit is configured to acquire the historical voice data, historical operation data, and historical question-and-answer data corresponding to the identification information. The processing unit is further configured to analyze the historical voice data, historical operation data, and historical question-and-answer data acquired by the acquisition unit to determine at least one piece of portrait data, where the portrait data includes a user tag weight for each content classification in each time period under the historical location information. The processing unit is further configured to generate the user portrait from the portrait data.
In some implementable examples, the processing unit is specifically configured to: determine, according to the user portrait and the actual position information and actual time information received by the receiving unit, the portrait data corresponding to both the actual position information and the actual time information; determine, from that portrait data, the content classifications whose user tag weight is greater than or equal to the preset weight; and determine the actual prompt according to those content classifications, where one content classification corresponds to at least one actual prompt.
In some implementable examples, the first information further includes query information, and one content classification corresponds to at least one theoretical entity word. The processing unit is specifically configured to: determine, according to the preconfigured knowledge graph and the query information received by the receiving unit, a theoretical prompt corresponding to each actual entity word contained in the query information; determine, according to the user portrait and the actual position information and actual time information received by the receiving unit, the portrait data corresponding to both the actual position information and the actual time information; determine, from that portrait data, the content classifications whose user tag weight is greater than or equal to the preset weight; match the theoretical entity words of those content classifications against the actual entity words; and, when a theoretical entity word matches an actual entity word, determine the theoretical prompt corresponding to that actual entity word as an actual prompt.
In some implementable examples, one actual hint corresponds to one actual control; the receiving unit is also used for receiving second information sent by the electronic equipment; the second information is used for indicating that the target prompt is selected, and the target prompt is any one of the actual prompts; the processing unit is also used for generating display data corresponding to the target prompt language according to the target prompt language received by the receiving unit; the processing unit is also used for controlling the sending unit to send second display information carrying display data to the electronic equipment; the second display information is used for indicating the electronic equipment to display the display data.
In a fourth aspect, the present disclosure provides a hint language generation apparatus, comprising: a processing unit configured to control the transmission unit to transmit the first information to the server in response to the first operation; the first information comprises identification information, actual position information, actual time information and operation information, and the operation information comprises any one of voice information lacking an entity word and indication information used for indicating that the electronic equipment receives the first operation; the receiving unit is used for receiving first display information which is sent by the server and carries an actual prompt; the first display information is used for indicating the electronic equipment to display an actual prompt language, the actual prompt language is determined according to a user portrait, actual position information and actual time information, and the user portrait is generated based on historical voice data, historical operation data and historical question and answer data of the identification information; and the processing unit is also used for controlling the display unit to display the actual prompt words according to the first display information.
In some implementable examples, the processing unit is further configured to control the sending unit to send the voice information to the server; wherein, the voice information comprises first voiceprint data; the receiving unit is also used for receiving first indication information sent by the server; the first indication information is used for indicating the electronic equipment to continuously log in the user account.
In some practical examples, the receiving unit is further configured to receive second indication information sent by the server; the second indication information is used for indicating the electronic equipment to prompt the input of theoretical login information of the user account; the processing unit is also used for responding to the second operation and controlling the sending unit to send the actual login information to the server; the receiving unit is further used for receiving third indication information sent by the server; the third indication information is used for indicating the electronic equipment to prompt the user to execute target operation, and the target operation includes any one of account switching or account registration.
In some implementable examples, one actual hint corresponds to one actual control; the processing unit is also used for responding to a third operation of the user and controlling the sending unit to send the second information to the server; the second information is used for indicating that the target prompt is selected, and the target prompt is any one of the actual prompts; the receiving unit is also used for receiving second display information which is sent by the server and carries display data; the display data are generated according to the target prompt, and the second display information is used for indicating the electronic equipment to display the display data; and the processing unit is also used for controlling the display unit to display the display data according to the second display information received by the receiving unit.
In a fifth aspect, the present disclosure provides an electronic device, comprising: a memory for storing a computer program, and a processor; the processor is configured to, when executing the computer program, cause the electronic device to implement the prompt generation method of any one of the first aspect.
In a sixth aspect, the present disclosure provides a computer-readable storage medium storing a computer program that, when executed by a processor, performs the prompt generation method of any one of the first aspect.
In a seventh aspect, the present disclosure provides a computer program product which, when run on a computer, causes the computer to perform the prompt generation method of any one of the first aspect.
In an eighth aspect, the present disclosure provides an electronic device, comprising: a memory for storing a computer program, and a processor; the processor is configured to, when executing the computer program, cause the electronic device to implement the prompt generation method of any one of the second aspect.
In a ninth aspect, the present disclosure provides a computer-readable storage medium storing a computer program that, when executed by a processor, performs the prompt generation method of any one of the second aspect.
In a tenth aspect, the present disclosure provides a computer program product which, when run on a computer, causes the computer to perform the prompt generation method of any one of the second aspect.
It should be noted that all or part of the above computer instructions may be stored on the first computer-readable storage medium. The first computer-readable storage medium may be packaged together with the processor of the prompt generation apparatus or packaged separately from it, which is not limited in this disclosure.
For the description of the third, fifth, sixth, and seventh aspects of the present disclosure, reference may be made to the detailed description of the first aspect; likewise, for the beneficial effects of the third, fifth, sixth, and seventh aspects, reference may be made to the beneficial effect analysis of the first aspect, and details are not repeated here.
For the description of the fourth, eighth, ninth, and tenth aspects of the present disclosure, reference may be made to the detailed description of the second aspect; likewise, for the beneficial effects of the fourth, eighth, ninth, and tenth aspects, reference may be made to the beneficial effect analysis of the second aspect, and details are not repeated here.
In the present disclosure, the names of the above-mentioned prompt generation apparatuses do not limit the devices or functional modules themselves; in actual implementation, these devices or functional modules may appear under other names. As long as the functions of the respective devices or functional modules are similar to those of the present disclosure, they fall within the scope of the claims of the present disclosure and their equivalents.
These and other aspects of the disclosure will be more readily apparent from the following description.
Compared with the prior art, the technical scheme provided by the disclosure has the following advantages:
the user portrait is generated based on the historical voice data, the historical operation data, and the historical question and answer data, so that the user's preferences can be analyzed more accurately. Then, after the first information sent by the electronic device is received, and when the first information is determined to include the operation information, the user portrait is determined according to the identification information in the first information. Then, an actual prompt is determined according to the user portrait and the actual position information and actual time information in the first information. Because the accuracy of the user portrait is improved, the actual prompt can be determined more accurately from the user portrait together with the actual position information and actual time information in the first information, which ensures a good user experience. Finally, the first display information carrying the actual prompt is sent to the electronic device, so that the user can view an actual prompt that matches the user's preferences. This solves the problem in the prior art that a prompt generated by the electronic device is often inaccurate and cannot accurately reflect the user's true intention, resulting in a poor user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that, for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a scene schematic diagram of a prompt generation method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a display device in the prompt generation method provided in an embodiment of the present application;
fig. 3 is a second schematic structural diagram of a display device in the prompt generation method provided in an embodiment of the present application;
fig. 4 is a flowchart illustrating a prompt generation method provided in an embodiment of the present application;
fig. 5 is a second scene schematic diagram of a prompt generation method provided in an embodiment of the present application;
fig. 6 is a third scene schematic diagram of a prompt generation method provided in an embodiment of the present application;
fig. 7 is a fourth scene schematic diagram of a prompt generation method provided in an embodiment of the present application;
fig. 8 is a fifth scene schematic diagram of a prompt generation method provided in an embodiment of the present application;
fig. 9 is a sixth scene schematic diagram of a prompt generation method provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a server provided in an embodiment of the present application;
fig. 11 is a schematic diagram of a chip system provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a display device provided in an embodiment of the present application;
fig. 13 is a second schematic diagram of a chip system provided in an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments of the present disclosure may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
It is noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Hive, mentioned in the embodiments of the present disclosure, is a data warehouse tool based on Hadoop, used for data extraction, transformation, and loading.
The HBase referred to in the embodiments of the present disclosure is a distributed, column-oriented open-source database.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control device according to one or more embodiments of the present application. As shown in fig. 1, a user may operate the display device 200 through the mobile terminal 300 and the control device 100. The control device 100 may be a remote controller, which communicates with the display device 200 through infrared protocol communication, bluetooth protocol communication, or other wireless or wired methods to control the display device 200. The user may input a user command through a key on the remote controller, voice input, control panel input, and the like to control the display apparatus 200. In some embodiments, mobile terminals, tablets, computers, laptops, and other smart devices may also be used to control the display device 200.
In some embodiments, the mobile terminal 300 may install a software application associated with the display device 200 and implement connection communication through a network communication protocol, for the purpose of one-to-one control operation and data communication. The audio and video content displayed on the mobile terminal 300 can also be transmitted to the display device 200, so as to realize a synchronous display function. The display device 200 can also perform data communication with the server 400 in multiple communication modes, and may be allowed to communicate through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 400 may provide various content and interactions to the display device 200. The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. In addition to the broadcast receiving television function, the display apparatus 200 may additionally provide an intelligent network television function with computer support.
In some embodiments, the electronic device provided by the embodiment of the present application may be the server 400 described above. The display apparatus 200 transmits the first information to the server 400 in response to a first operation, such as the user performing a preset operation on the control device 100. The server 400 receives the first information transmitted by the display apparatus 200. The server 400 determines the user portrait based on the identification information in the first information if the first information includes operation information. The server 400 determines the actual prompt based on the user portrait and the actual location information and actual time information in the first information. The server 400 transmits first display information carrying the actual prompt to the display device 200. The display device 200 receives the first display information carrying the actual prompt sent by the server, and displays the actual prompt according to the first display information. According to the prompt generation method provided by the embodiment of the disclosure, the user portrait is generated from the historical voice data, historical operation data, and historical question and answer data of the user account currently logged in on the display device 200, so that the accuracy of the user portrait can be improved. Therefore, the actual prompt can be determined more accurately according to the user portrait and the actual position information and actual time information in the first information, which ensures a good user experience.
Fig. 2 shows a hardware configuration block diagram of a display device 200 according to an exemplary embodiment. The display apparatus 200 as shown in fig. 2 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface 280. The controller includes a central processor, a video processor, an audio processor, a graphic processor, a RAM, a ROM, and first to nth interfaces for input/output. The display 260 may be a display with a touch function, such as a touch display. The tuner demodulator 210 receives a broadcast television signal through a wired or wireless reception manner, and demodulates an audio/video signal, such as an EPG data signal, from a plurality of wireless or wired broadcast television signals. The detector 230 is used to collect signals of an external environment or interaction with the outside. The controller 250 and the tuner-demodulator 210 may be located in different separate devices, that is, the tuner-demodulator 210 may also be located in an external device of the main device where the controller 250 is located, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory. The controller 250 controls the overall operation of the display apparatus 200.
In some examples, taking the display device 200 of one or more embodiments as the television set 1, with the operating system of the television set 1 being the Android system, as shown in fig. 3, the television set 1 may be logically divided into an application (Applications) layer (abbreviated as "application layer") 21, a kernel layer 22, and a hardware layer 23.
As shown in fig. 3, the hardware layer may include the communicator 220, the detector 230, the display 260, and the like shown in fig. 2. The application layer 21 includes one or more applications. The application may be a system application or a third party application. For example, the application layer 21 includes a first application that may provide a cue recommendation function. The kernel layer 22 acts as software middleware between the hardware layer and the application layer 21 for managing and controlling hardware and software resources.
In some examples, the kernel layer 22 includes a first driver for sending the user operations collected by the detector 230 to a first application, and a second driver for controlling the display 260 to display information sent by the display module 213. A first application in the television set 1 is started, and the first application calls the communicator 220 to establish a communication connection with the communication module 301 of the server 400; a user portrait corresponding to each television set 1 is stored in the storage module 305 of the server 400. Thereafter, the first driver sends the user operation collected by the detector 230 to the first application for identification. Then, the processing module 211 of the first application controls the sending module 212 to send the first information to the server 400 in response to the first operation (e.g., the user performing a preset operation on the control device 100) received by the obtaining module 210. The receiving module 302 of the server 400 receives the first information sent by the first application of the television set 1. If the first information received by the receiving module 302 includes operation information, the processing module 303 of the server 400 determines the user portrait based on the identification information in that first information. The processing module 303 of the server 400 determines the actual prompt based on the user portrait and the actual location information and actual time information in the first information received by the receiving module 302. The processing module 303 of the server 400 controls the sending module 304 to send the first display information carrying the actual prompt to the first application of the television set 1. The obtaining module 210 of the first application of the television set 1 receives the first display information carrying the actual prompt sent by the server 400.
The processing module 211 of the first application controls the sending module 212 to send the first display information to the display module 213 according to the first display information received by the obtaining module 210. The display module 213 sends the first display information to the second driver, and the second driver controls the display 260 to display the actual prompt.
Specifically, the storage module 214 of the first application may be configured to store the program code of the television set 1, and may also be configured to store data generated by the television set 1 during operation, such as data in a write request.
Specifically, the electronic device provided in the embodiment of the present disclosure may be the server 400 or the display device 200, which is not limited herein.
The user portrait, voiceprint information, position information, time information, login information, historical voice data, historical operation data, and historical question and answer data involved in the present application may all be data authorized by the user or fully authorized by all parties.
In the following embodiments, the method according to the embodiments of the present disclosure is described by taking the server 400 as the execution body of the prompt generation method provided by the embodiments of the present disclosure, and taking the electronic device as the television set 1, as an example.
The embodiment of the present application provides a prompt generation method, which may include S11 to S14, as shown in fig. 4.
S11, receiving first information sent by the television 1.
In some examples, during use of the television set 1, the user usually selects a required function, such as watching a movie, watching TV, searching for a song, or inquiring about the weather, according to the actual prompt recommended by the television set 1. When the user needs the television set 1 to recommend an actual prompt, a first operation may be performed, such as: the television set 1 supports a touch function and the user presses a prompt function on the television set 1; or the user presses a voice button of the remote controller; or the user selects a prompt function on a mobile phone that has established a communication connection with the television set 1; or the user inputs voice information lacking entity words (such as "I want", "I want to watch") through the voice function of the mobile phone or the remote controller. At this time, the television set 1 transmits the first information to the server 400 in response to the first operation. In this way, after receiving the first information, when the server 400 determines that the first information includes the operation information, the server generates the corresponding actual prompt. Then, the server 400 sends the first display information carrying the actual prompt to the television set 1, so that the television set 1 displays the actual prompt. Therefore, the user can select the required function according to the actual prompt displayed by the television set 1, which ensures a good user experience.
And S12, under the condition that the first information comprises the operation information, determining the user portrait according to the identification information in the first information. The operation information comprises any one of voice information lacking entity words and indication information used for indicating that the electronic equipment receives first operation, and the user portrait is generated based on historical voice data, historical operation data and historical question and answer data corresponding to the identification information.
In some examples, the identification information comprises information characterizing the television set 1, or information characterizing the user currently using the television set 1. The memory of the server 400 stores in advance a user portrait corresponding to the identification information of each television set 1. For example, when the identification information includes information characterizing the television set 1, the identification information may be a device identification code, so that the server 400 may query the memory for the user portrait corresponding to the device identification code in the first information. Alternatively, when the identification information includes information characterizing the user currently using the television set 1, the identification information may be a user account, so that the server 400 may query the memory for the user portrait corresponding to the currently logged-in user account in the first information.
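The lookup described above can be sketched as follows. This is a minimal illustration only, assuming a simple in-memory mapping; the function and field names (`lookup_user_portrait`, `user_account`, `device_id`) are hypothetical and not taken from the patent:

```python
def lookup_user_portrait(portrait_store, first_information):
    # Prefer the currently logged-in user account; fall back to the
    # device identification code when no account is present.
    ident = (first_information.get("user_account")
             or first_information.get("device_id"))
    return portrait_store.get(ident)

# Illustrative portrait store keyed by both kinds of identification.
store = {"acct-123": {"preference": "songs of the 80s"},
         "dev-456": {"preference": "weather"}}

# The account takes precedence when both identifiers are present.
print(lookup_user_portrait(store, {"user_account": "acct-123",
                                   "device_id": "dev-456"}))
```

In a real deployment, the portrait store would be backed by the server's memory or a database such as the HBase mentioned earlier, rather than a Python dictionary.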
Specifically, the historical voice data includes voice data historically input by the user; the historical operation data includes historical search and click data. The historical search and click data comprises data corresponding to the user's manual search and manual click behavior, i.e., cases where the user did not obtain the required content by voice but obtained it by manual search. For example: the user does not remember the complete title of a song but knows it is a hit song, and inputs voice data via the remote control, such as "popular songs". At this time, the television set 1 prompts "no corresponding song is found, a recommended song will be played for you" and randomly plays songs from the recommendation list. When the songs in the recommendation list are not the song required by the user, the user's requirement is not met. The user then manually searches for "songs of the 80s", the television set 1 returns songs of the 80s, and among them is the song the user wanted. In this case, "songs of the 80s" is recorded as the data corresponding to the user's manual search and manual click behavior, so that the historical operation data can be continuously updated and the accuracy of the user portrait can be improved.
The historical question and answer data includes content-entity-type question and answer replies, i.e., data from questions and answers between the user and the television set 1. For example: the television set 1 randomly plays a song that the user likes very much but whose title the user does not know, so the user inputs the voice data "what song is this" through the remote controller, and the television set 1 gives a recognition result based on the voice data: "this is XX by XX". In this case, "this is XX by XX" is used as the content-entity-type question and answer reply.
And S13, determining an actual prompt word according to the user portrait and the actual position information and the actual time information in the first information.
And S14, sending first display information carrying the actual prompt to the television 1. The first display information is used to instruct the television 1 to display an actual prompt.
Therefore, in the prompt generation method provided by the embodiment of the disclosure, the user portrait is generated based on the historical voice data, the historical operation data, and the historical question and answer data, so that the user's preferences can be analyzed more accurately. Then, after receiving the first information sent by the electronic device, it is determined whether the first information includes the operation information. If the first information includes operation information, the user portrait is determined based on the identification information in the first information. Then, the actual prompt is determined according to the user portrait and the actual position information and actual time information in the first information. Finally, the first display information carrying the actual prompt is sent to the electronic device, so that the user can view an actual prompt that matches the user's preferences.
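The S11–S14 flow can be sketched end to end as follows. This is a hedged illustration under assumed message shapes; the field names (`operation_info`, `identification`, `location`, `time`) and the way the prompt is composed are placeholders, not the patent's actual data format:

```python
def handle_first_information(first_information, portrait_store):
    # S11: first_information has been received from the television.
    # S12: only proceed when it carries operation information.
    if "operation_info" not in first_information:
        return None
    portrait = portrait_store.get(first_information["identification"])
    # S13: combine the user portrait with the actual position and
    # actual time information to determine the actual prompt.
    prompt = "{}-{}-{}".format(portrait["preference"],
                               first_information["location"],
                               first_information["time"])
    # S14: wrap the actual prompt in the first display information.
    return {"display": prompt}

portraits = {"acct-123": {"preference": "songs of the 80s"}}
msg = {"operation_info": "voice_button", "identification": "acct-123",
       "location": "living-room", "time": "weekday-evening"}
print(handle_first_information(msg, portraits))
```

The actual server would of course apply a trained recommendation model in S13 rather than string concatenation; the sketch only shows the control flow of the four steps.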
In some implementable examples, the identification information includes a user account. In conjunction with fig. 4, as shown in fig. 5, the prompt generation method provided by the embodiment of the present disclosure further includes S15-S17.
And S15, receiving the voice information sent by the television 1. The voice information comprises first voiceprint data.
In some examples, when the television set 1 is used for the first time, or a new user uses the television set 1, the user is required to register a user account and, at that time, to set a user image, attribute information such as user age and user gender, login information, and voiceprint data. The television set 1 then transmits the registration information of the user to the server 400, and the server 400 creates a user account and stores the user image, the attribute information such as user age and gender, the login information, and the voiceprint data corresponding to the user account. In order to provide the actual prompt more accurately, the server 400 needs to determine whether the user currently using the television set 1 is the user corresponding to the user account currently logged in on the television set 1, according to the voice information sent by the television set 1. Specifically, the server 400 makes this determination according to whether the first voiceprint data in the voice message sent by the television set 1 matches the second voiceprint data, stored in the server 400, of the user account currently logged in on the television set 1.
And S16, determining whether the second voiceprint data bound to the user account is matched with the first voiceprint data or not according to the first voiceprint data.
In some examples, the server 400 extracts first feature information from the first voiceprint data and second feature information from the second voiceprint data. The first feature information is then processed to obtain a first feature vector; similarly, the second feature information is processed to obtain a second feature vector. Then, the similarity between the first feature vector and the second feature vector is calculated (for example, by calculating the cosine similarity between the two feature vectors and using it as the similarity, or by calculating a target distance between the two feature vectors and using it as the similarity), and it is determined whether the second voiceprint data matches the first voiceprint data. When the similarity is greater than a similarity threshold, the second voiceprint data matches the first voiceprint data; when the similarity is less than or equal to the similarity threshold, the second voiceprint data does not match the first voiceprint data.
Specifically, the target Distance includes any one of Euclidean Distance (Euclidean Distance), manhattan Distance (Manhattan Distance), and Chebyshev Distance (Chebyshev Distance).
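The similarity measures named above can be sketched as follows — a minimal illustration of cosine similarity and the three target distances, with an assumed threshold of 0.8 (the patent does not specify a threshold value, and the function names are hypothetical):

```python
import math

def cosine_similarity(v1, v2):
    # dot(v1, v2) / (|v1| * |v2|)
    dot = sum(a * b for a, b in zip(v1, v2))
    return dot / (math.sqrt(sum(a * a for a in v1)) *
                  math.sqrt(sum(b * b for b in v2)))

def euclidean(v1, v2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def manhattan(v1, v2):
    return sum(abs(a - b) for a, b in zip(v1, v2))

def chebyshev(v1, v2):
    return max(abs(a - b) for a, b in zip(v1, v2))

def voiceprints_match(v1, v2, threshold=0.8):
    # Matched only when the similarity strictly exceeds the threshold,
    # mirroring the "greater than the similarity threshold" rule above.
    return cosine_similarity(v1, v2) > threshold
```

Note that, unlike cosine similarity, the distances grow as vectors diverge, so a distance-based similarity would invert the comparison (match when the distance is below a threshold).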
And S17, in the case that the second voiceprint data matches the first voiceprint data, sending first indication information to the television 1. The first indication information is used for instructing the television 1 to continue logging in to the user account.
In some examples, the second voiceprint data is matched with the first voiceprint data, which indicates that the user currently using the television 1 is the user corresponding to the user account currently logged in by the television 1, so that when the television 1 generates the actual prompt according to the user portrait and the first information, the actual prompt conforming to the user can be accurately generated, and the user experience is ensured.
In some practical examples, in conjunction with fig. 4, as shown in fig. 5, in the prompt generation method provided in the embodiments of the present disclosure, before S11 is performed, S18-S21 also need to be performed.
And S18, sending second indication information to the television 1 when the second voiceprint data is not matched with the first voiceprint data. The second indication information is used for indicating the television 1 to prompt the input of theoretical login information of the user account.
In some examples, if the second voiceprint data does not match the first voiceprint data, this indicates that the user currently using the television set 1 is not the user corresponding to the user account currently logged in on the television set 1. In this case, if the television set 1 generated the actual prompt according to the user portrait corresponding to that user account and the first information, the actual prompt would not be what the current user requires, which may force the user to search manually. To solve this problem, the prompt generation method provided in the embodiment of the present disclosure performs identity confirmation again for the user currently using the television set 1. For example: the server 400 transmits the second indication information to the television set 1. Thereafter, the television set 1 transmits actual login information (e.g., a login password) to the server 400 in response to a second operation (e.g., the user inputting the actual login information). In this way, the server 400 may match the actual login information against the theoretical login information of the user account to determine whether the user currently using the television set 1 is the user corresponding to the user account currently logged in on the television set 1.
And S19, receiving the actual login information transmitted by the television 1.
And S20, under the condition that the theoretical login information is the same as the actual login information, sending first indication information to the television 1.
In some examples, when the theoretical login information is the same as the actual login information, it is indicated that the user currently using the television 1 is the user corresponding to the user account currently logged in by the television 1, so that when the television 1 generates the actual prompt according to the user portrait and the first information, the actual prompt conforming to the user can be accurately generated, and the user experience is ensured.
And S21, under the condition that the theoretical login information is different from the actual login information, sending third indication information to the television 1. The third indication information is used to instruct the television 1 to prompt the user to execute a target operation, where the target operation includes any one of switching an account or registering an account.
In some examples, when the theoretical login information is different from the actual login information, this indicates that the user currently using the television set 1 is not the user corresponding to the user account currently logged in on the television set 1; since the actual prompt generated by the television set 1 in that case would not be what the current user requires, the problem of the user having to search manually needs to be avoided. The server 400 transmits the third indication information to the television set 1. In this way, after receiving the third indication information sent by the server 400, the television set 1 prompts the user currently using the television set 1 to switch accounts or register an account, so that whenever any user uses the television set 1, the television set 1 can accurately generate the actual prompt required by that user, which ensures a good user experience.
It should be noted that, in the above example, the server 400 determines, by verifying the voiceprint data and the login information, whether the user currently using the television 1 is a user corresponding to the user account currently logged in by the television 1. In some other examples, the television 1 is provided with an image capturing device for capturing a user image, when the user uses the television 1, the image capturing device captures the user image of the user and sends the user image to the server 400, and the server 400 searches the user portrait corresponding to the user image in the memory according to the user image. In this way, no matter which user uses the television 1, the server 400 can determine the user portrait corresponding to the user image according to the user image of the current user collected by the television 1. And then, the actual prompt words required by the user can be generated in real time according to the user portrait, so that the user experience is ensured.
In some practical examples, in conjunction with fig. 4, as shown in fig. 6, in the prompt generation method provided in the embodiments of the present disclosure, before S11 is performed, S22-S24 also need to be performed.
And S22, acquiring historical voice data, historical operation data and historical question and answer data corresponding to the identification information.
In some examples, before the historical voice data, historical operation data, and historical question and answer data are analyzed, they need to be processed. For example: an extract-transform-load (ETL) data warehouse process is used to extract, clean, transform, and load the historical voice data, historical operation data, and historical question and answer data, so as to obtain the required data. Then, by analyzing the processed historical voice data, historical operation data, and historical question and answer data, the portrait data of the user for the same position information and different time periods (such as the morning, noon, and afternoon of a working day, or the morning, noon, and afternoon of a non-working day) can be determined. In this way, a user portrait of the user can be generated from the portrait data.
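The extract/clean/transform part of the ETL step might look like the following. This is a simplified sketch under assumed record fields (`text`, `timestamp`, `location`); a real pipeline would run on Hive/Hadoop rather than in-process Python:

```python
def etl_clean(raw_records):
    """Minimal extract/clean/transform pass over raw history records."""
    cleaned = []
    for rec in raw_records:
        # Extract: skip records missing required fields.
        if "text" not in rec or "timestamp" not in rec:
            continue
        text = rec["text"].strip()
        if not text:  # Clean: drop empty utterances.
            continue
        # Transform: normalized record, ready to be loaded into the
        # warehouse (the load step itself is omitted here).
        cleaned.append({"text": text,
                        "timestamp": rec["timestamp"],
                        "location": rec.get("location", "unknown")})
    return cleaned

raw = [{"text": "  popular songs ", "timestamp": 1},
       {"text": "", "timestamp": 2},
       {"timestamp": 3}]
print(etl_clean(raw))
```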
S23, analyze the historical voice data, the historical operation data, and the historical question and answer data to determine at least one piece of portrait data. The portrait data includes a user tag weight for each content classification in each time period under the historical location information.
In some examples, the server 400 may determine, by analyzing the historical voice data, the historical operation data, and the historical question and answer data, the access probability of the user accessing each content classification in each time period under the historical location information, for example:

access probability ω = a / b

where a represents a first total number of times, within any time period under the historical location information, that the target classification is accessed in the historical voice data, the historical operation data, and the historical question and answer data; b represents a second total number of times that the target classification is accessed under the historical location information in the historical voice data, the historical operation data, and the historical question and answer data; and the target classification is any content classification.
Then, the server 400 determines the user tag weight according to the access probability and the first total number of times, for example: user tag weight = (α × γ) × (ω × a), where α represents the behavior type weight, γ represents the time decay coefficient, ω represents the access probability, and a represents the first total number of times.
The behavior type weight reflects that different behaviors, such as searching and clicking, have different importance to the user, so different behaviors carry different weights. The behavior type weight is generally preset by operation and maintenance personnel after analyzing the historical voice data, the historical operation data, and the historical question and answer data. The time decay coefficient reflects that the influence of some behaviors weakens continuously over time, so the contribution of those behaviors needs to be adjusted by the time decay coefficient to ensure the accuracy of the user tag weight.
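The two formulas above can be sketched directly. The numeric values below (α, γ, a, b) are illustrative only and are not taken from the original.

```python
def access_probability(a, b):
    """ω = a / b, the access probability defined above."""
    return a / b

def user_tag_weight(alpha, gamma, a, b):
    """User tag weight = (α × γ) × (ω × a), with α the behavior type
    weight, γ the time decay coefficient, and a the first total number
    of times the target classification was accessed in the period."""
    omega = access_probability(a, b)
    return (alpha * gamma) * (omega * a)

# Illustrative values: a=8 accesses in the period out of b=40 in total,
# behavior type weight α=1.5, time decay coefficient γ=0.8.
# ω = 8/40 = 0.2, so the weight is (1.5*0.8) * (0.2*8) = 1.92.
w = user_tag_weight(alpha=1.5, gamma=0.8, a=8, b=40)
```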
For example, if the time information adopts a 24-hour format and the content classifications include 5 categories, namely music/radio, hot news, movies, shopping, and beauty, the user tag weight of each content classification in each time period under the same historical location information is shown in Table 1.
TABLE 1
(Table 1 is provided as an image in the original publication; it lists the user tag weight of each of the 5 content classifications in each time period under the same historical location information.)
S24, generate a user portrait according to the portrait data.
In some implementable examples, with reference to fig. 4 and as shown in fig. 7, S13 described above may be specifically implemented by S130-S132 described below.
S130, determine, according to the user portrait, the actual location information, and the actual time information, the portrait data corresponding to both the actual location information and the actual time information.
In some examples, with reference to Table 1 given in S23 above, when the actual location information is "siemens" and the actual time information is "Monday, 8:00", the portrait data corresponding to both the actual location information and the actual time information is shown in Table 2.
TABLE 2
(Table 2 is provided as an image in the original publication; it lists the user tag weight of each content classification for that location and time period.)
S131, determine, according to the portrait data corresponding to the actual location information and the actual time information, the content classifications whose user tag weight is greater than or equal to a preset weight.
In some examples, in combination with the example given in S130 above, the user tag weights of the content classifications differ even within the same time period under the same historical location information. The content classifications the user is interested in can therefore be determined from the relationship between each user tag weight and the preset weight, and actual prompts can then be generated for those classifications, which improves the accuracy of the actual prompts and ensures the user experience.
S132, determine the actual prompts according to the content classifications whose user tag weight is greater than or equal to the preset weight. One content classification corresponds to at least one actual prompt.
In some examples, in combination with the example given in S131 above, when a user tag weight is greater than or equal to the preset weight, it indicates that the user is interested in the content classification corresponding to that user tag weight. Generating actual prompts for the content classifications the user is interested in therefore improves the accuracy of the actual prompts and ensures the user experience.
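A minimal sketch of S131-S132: filter content classifications by the preset weight, then attach at least one actual prompt to each retained classification. The portrait values and prompt strings below are hypothetical.

```python
def select_prompt_categories(portrait_slice, preset_weight):
    """S131: keep only the content classifications whose user tag
    weight is greater than or equal to the preset weight."""
    return [cat for cat, weight in portrait_slice.items()
            if weight >= preset_weight]

# Hypothetical portrait data for one (location, time period) slot,
# mapping content classification -> user tag weight.
portrait_slice = {"music/radio": 1.9, "hot news": 0.4,
                  "movies": 1.2, "shopping": 0.1, "beauty": 0.05}

chosen = select_prompt_categories(portrait_slice, preset_weight=1.0)

# S132: one content classification corresponds to at least one actual prompt.
prompts = {cat: [f"Try some {cat}?"] for cat in chosen}
```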
In some implementable examples, the first information further includes query information, and one content classification corresponds to at least one theoretical entity word. With reference to fig. 4 and as shown in fig. 8, S13 may be specifically implemented by S130, S131, and S133-S135 described below.
S130, determine, according to the user portrait, the actual location information, and the actual time information, the portrait data corresponding to both the actual location information and the actual time information.
S131, determine, according to the portrait data corresponding to the actual location information and the actual time information, the content classifications whose user tag weight is greater than or equal to a preset weight.
S133, determine, according to a preconfigured knowledge graph and the query information, the theoretical prompts corresponding to each actual entity word contained in the query information.
In some examples, when the server 400 determines that the first information includes query information, the query information needs to be recognized, for example by Optical Character Recognition (OCR) or Natural Language Processing (NLP), so that at least one actual entity word contained in the query information can be recognized. Theoretical prompts related to each actual entity word are then computed based on the preconfigured knowledge graph. For example: the query information is "TV drama of AA"; the server 400 recognizes and processes "TV drama of AA" and determines that the actual entity words it contains are "AA" and "TV drama". At least one theoretical prompt for "AA" and at least one theoretical prompt for "TV drama" are then determined based on the preconfigured knowledge graph.
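S133 can be sketched as a lookup in a toy knowledge graph. The graph contents, entity words, and prompt strings below are illustrative assumptions, not the preconfigured knowledge graph of the embodiment.

```python
# Toy knowledge graph: actual entity word -> theoretical prompts.
knowledge_graph = {
    "AA": ["songs by AA", "AA concert videos", "TV dramas starring AA"],
    "TV drama": ["trending TV dramas", "new drama releases"],
}

def theoretical_prompts(query_entities, graph):
    """S133: map each actual entity word recognized in the query
    to its theoretical prompts from the knowledge graph."""
    return {ent: graph.get(ent, []) for ent in query_entities}

# Entity words recognized (e.g. by NLP) from the query "TV drama of AA".
result = theoretical_prompts(["AA", "TV drama"], knowledge_graph)
```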
S134, match the theoretical entity words of the content classifications whose user tag weight is greater than or equal to the preset weight against the actual entity words.
In some examples, the theoretical entity words and the actual entity words cannot be compared directly by the server 400. Therefore, when matching a theoretical entity word against an actual entity word, the word vector corresponding to the theoretical entity word needs to be matched against the word vector corresponding to the actual entity word, for example: whether the theoretical entity word matches the actual entity word is determined by calculating the similarity between the word vector corresponding to the theoretical entity word and the word vector corresponding to the actual entity word. Specifically, the process of calculating this similarity is the same as the process of calculating the similarity between the first feature vector and the second feature vector, and is not repeated here.
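The word-vector matching of S134 can be sketched with a plain cosine similarity. The vectors and the 0.8 threshold below are assumed values for illustration; the embodiment does not specify the similarity measure or threshold.

```python
import math

def cosine_similarity(u, v):
    """Similarity between two word vectors, used here as the
    similarity measure for matching entity words."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm if norm else 0.0

def entities_match(vec_theoretical, vec_actual, threshold=0.8):
    """A theoretical entity word matches an actual entity word when
    their word-vector similarity reaches the similarity threshold."""
    return cosine_similarity(vec_theoretical, vec_actual) >= threshold

# Two near-parallel toy word vectors -> match.
matched = entities_match([1.0, 0.9, 0.1], [0.9, 1.0, 0.0])
```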
Because each content classification corresponds to one or more theoretical entity words, the actual prompts the user is interested in can be found by matching the theoretical entity words against the actual entity words. For example: in combination with the example given in S130 above, the content classification whose user tag weight is greater than or equal to the preset weight is determined to be "music/station", and the similarity between the theoretical entity word corresponding to "music/station" and the actual entity word "AA" is greater than the similarity threshold, which indicates that the theoretical entity word matches the actual entity word "AA". Therefore, the actual prompts related to "music/station" can be found among the theoretical prompts corresponding to "AA".
S135, in a case that a theoretical entity word matches an actual entity word, determine the theoretical prompts corresponding to the actual entity word as actual prompts.
In some examples, one theoretical prompt corresponds to one content classification. With reference to the example given in S134 above, when a theoretical entity word matches the actual entity word, it indicates that the user is interested in "AA", so the theoretical prompts corresponding to "AA" can be recommended to the user as the actual prompts, thereby ensuring that the user can select the desired actual prompt.
In other examples, to further improve the accuracy of the actual prompts, the theoretical prompts corresponding to the content classifications whose user tag weight is greater than or equal to the preset weight are screened out from the theoretical prompts corresponding to "AA", which can greatly improve the accuracy of the actual prompts.
In some implementable examples, one actual prompt corresponds to one actual control. With reference to fig. 4 and as shown in fig. 9, after S14 is executed, S25-S27 may also be executed in the hint generating method provided by the embodiment of the present disclosure.
S25, receive second information sent by the television 1. The second information is used to indicate that a target prompt has been selected, and the target prompt is any one of the actual prompts.
S26, generate display data corresponding to the target prompt according to the target prompt.
S27, send second display information carrying the display data to the television 1. The second display information is used to instruct the television 1 to display the display data.
In some examples, to facilitate the user selecting a desired actual prompt, the server 400 assigns an actual control to each actual prompt when sending the first display information to the television 1. Thus, when the television 1 displays the actual prompts, each actual prompt corresponds to one control, and the user can operate the control corresponding to the desired actual prompt, for example: by pressing the "OK" key on the remote controller, or by voice (such as "select the first one"), which improves the media resource access rate and the user experience. The television 1 then sends the second information to the server 400 in response to the third operation. After receiving the second information sent by the television 1, the server 400 generates the display data of the target prompt indicated by the second information, and then sends the second display information carrying the display data to the television 1, so that the user can view the required display content, ensuring the user experience.
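A minimal sketch of the control assignment and selection flow described above (S25-S27). The control-ID scheme and function names are hypothetical; the embodiment does not define a concrete control format.

```python
def assign_controls(actual_prompts):
    """Give each actual prompt its own control ID, so that one actual
    prompt corresponds to one actual control on the television."""
    return {f"control_{i}": prompt
            for i, prompt in enumerate(actual_prompts, start=1)}

def handle_selection(controls, control_id):
    """Resolve a selection (remote 'OK' press, or a voice command such
    as 'select the first one') into the target prompt for which the
    server then generates display data."""
    return controls[control_id]

controls = assign_controls(["songs by AA", "trending TV dramas"])
target = handle_selection(controls, "control_1")
```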
Specifically, in the hint generating method provided by the embodiment of the present disclosure, the television 1 synchronizes the user's voice data and operation data to the server 400 in real time using a computing engine (such as Apache Spark). After receiving the voice data and the operation data sent by the television 1, the server 400 stores the voice data, the operation data, and the user portrait in a database, such as HBase. The server 400 may then read the user portrait, the historical voice data, the historical operation data, and the historical question and answer data from HBase through Hive.
It should be noted that the above examples are described taking the server 400 as the execution subject of the hint generating method provided by the embodiment of the present disclosure. In other examples, the execution subject of the hint generating method provided by the embodiment of the present disclosure may also be the display device 200, which is not limited herein.
In the following embodiments, the hint generating method provided by the embodiment of the present disclosure is described taking the television 1, interacting with the server 400, as the execution subject.
An embodiment of the present application provides a hint generating method, which may include S31 to S33 as shown in fig. 4.
S31, in response to a first operation, send first information to the server. The first information includes identification information, actual location information, actual time information, and operation information, and the operation information includes any one of voice information lacking an entity word and indication information indicating that the electronic device has received the first operation.
S32, receive first display information carrying an actual prompt sent by the server. The first display information is used to instruct the electronic device to display the actual prompt; the actual prompt is determined according to a user portrait, the actual location information, and the actual time information; and the user portrait is generated based on the historical voice data, historical operation data, and historical question and answer data corresponding to the identification information.
S33, display the actual prompt according to the first display information.
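The first information sent in S31 might be serialized as in the following sketch. Every field name here is an assumption for illustration, since the embodiment does not specify a wire format.

```python
import json

def build_first_information(identification, location, time_info, operation):
    """Client-side payload for S31: identification information, actual
    location information, actual time information, and operation
    information (all field names hypothetical)."""
    return json.dumps({
        "identification": identification,
        "actual_location": location,
        "actual_time": time_info,
        # Either voice information lacking an entity word, or an
        # indication that the first operation (e.g. a key press) occurred.
        "operation": operation,
    })

msg = build_first_information("user-123", "living-room", "Monday 08:00",
                              {"type": "voice", "text": "I want to watch"})
```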
In some implementable examples, the identification information includes a user account. With reference to fig. 4 and as shown in fig. 5, the hint generating method provided by the embodiment of the present disclosure further includes S34 and S35.
S34, send voice information to the server. The voice information includes first voiceprint data.
S35, receive first indication information sent by the server. The first indication information is used to instruct the electronic device to remain logged in to the user account.
In some implementable examples, in combination with fig. 4, as shown in fig. 5, the method for generating a hint provided by the embodiment of the present disclosure further includes: S36-S38.
S36, receive second indication information sent by the server. The second indication information is used to instruct the electronic device to prompt for input of the theoretical login information of the user account.
S37, in response to a second operation, send actual login information to the server.
S38, receive third indication information sent by the server. The third indication information is used to instruct the electronic device to prompt the user to perform a target operation, and the target operation includes any one of switching accounts or registering an account.
In some implementable examples, one actual hint corresponds to one actual control; with reference to fig. 4 and as shown in fig. 9, the method for generating a hint provided by the embodiment of the present disclosure further includes: S39-S41.
S39, in response to a third operation of the user, send second information to the server. The second information is used to indicate that a selection operation has been performed on a target prompt, and the target prompt is any one of the actual prompts.
S40, receive second display information carrying display data sent by the server. The display data is generated according to the target prompt, and the second display information is used to instruct the electronic device to display the display data.
S41, display the display data according to the second display information.
The scheme provided by the embodiment of the application is mainly introduced from the perspective of a method. To implement the above functions, it includes hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed in hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
As shown in fig. 10, an embodiment of the present application provides a schematic structural diagram of a server 400, which includes a communicator 101 and a processor 102.
A communicator 101, configured to receive first information sent by an electronic device; a processor 102 for determining a user representation from identification information in the first information received by the communicator 101, in case the first information received by the communicator 101 comprises operation information; the operation information comprises any one of voice information lacking entity words and indication information used for indicating that the electronic equipment receives first operation, and the user portrait is generated based on historical voice data, historical operation data and historical question and answer data corresponding to the identification information; a processor 102, further configured to determine an actual prompt based on the user representation and actual location information and actual time information in the first information received by the communicator 101; the processor 102 is further configured to control the sending unit to send first display information carrying an actual prompt to the electronic device; the first display information is used for indicating the electronic equipment to display the actual prompt words.
In some practical examples, the identification information includes a user account, the communicator 101, and is further configured to receive voice information sent by the electronic device; wherein, the voice information comprises first voiceprint data; the processor 102 is further configured to determine whether second voiceprint data bound to the user account matches the first voiceprint data received by the communicator 101 according to the first voiceprint data received by the communicator 101; the processor 102 is further configured to control the sending unit to send the first indication information to the electronic device if the second voiceprint data matches the first voiceprint data received by the communicator 101; the first indication information is used for indicating the electronic equipment to continuously log in the user account.
In some practical examples, the processor 102 is further configured to control the sending unit to send the second indication information to the electronic device if the second voiceprint data does not match the first voiceprint data received by the communicator 101; the second indication information is used for indicating the electronic equipment to prompt the input of theoretical login information of the user account; the communicator 101 is further configured to receive actual login information sent by the electronic device; the processor 102 is further configured to control the sending unit to send the first indication information to the electronic device in a case that the theoretical login information is the same as the actual login information received by the communicator 101; the processor 102 is further configured to control the sending unit to send third indication information to the electronic device in a case that the theoretical login information is different from the actual login information received by the communicator 101; the third indication information is used for indicating the electronic equipment to prompt the user to execute target operation, and the target operation includes any one of account switching or account registration.
In some implementable examples, the communicator 101 is configured to obtain the historical voice data, the historical operation data, and the historical question and answer data corresponding to the identification information; the processor 102 is further configured to analyze the historical voice data, the historical operation data, and the historical question and answer data obtained by the communicator 101, and determine at least one piece of portrait data; wherein the portrait data includes a user tag weight for each content classification in each time period under the historical location information; and the processor 102 is further configured to generate a user portrait based on the portrait data.
In some implementable examples, processor 102, specifically to determine, from the user representation, the actual location information received by communicator 101, and the actual time information received by communicator 101, representation data corresponding to both the actual location information and the actual time information; a processor 102, configured to determine a content classification with a user tag weight greater than or equal to a preset weight according to portrait data corresponding to the actual location information received by the communicator 101 and the actual time information received by the communicator 101; the processor 102 is specifically configured to determine an actual prompt according to content classification with a user tag weight greater than or equal to a preset weight; wherein one content classification corresponds to at least one actual prompt.
In some implementable examples, the first information further includes query information, one content classification corresponding to at least one theoretical entity word; the processor 102 is specifically configured to determine, according to a preconfigured knowledge graph and query information received by the communicator 101, a theoretical prompt corresponding to each actual entity word included in the query information; a processor 102, specifically configured to determine portrait data corresponding to both actual location information and actual time information based on the user portrait, the actual location information received by the communicator 101, and the actual time information received by the communicator 101; a processor 102, configured to determine a content classification with a user tag weight greater than or equal to a preset weight according to portrait data corresponding to both the actual location information received by the communicator 101 and the actual time information received by the communicator 101; the processor 102 is specifically configured to match the theoretical entity words and the actual entity words of the content classification with the user tag weight being greater than or equal to the preset weight; the processor 102 is specifically configured to determine that the theoretical cue corresponding to the actual entity word is the actual cue under the condition that the theoretical entity word is matched with the actual entity word.
In some implementable examples, one actual hint corresponds to one actual control; the communicator 101 is further configured to receive second information sent by the electronic device; the second information is used for indicating that the target prompt is selected, and the target prompt is any one of the actual prompts; the processor 102 is further configured to generate display data corresponding to the target prompt according to the target prompt received by the communicator 101; the processor 102 is further configured to control the sending unit to send second display information carrying display data to the electronic device; the second display information is used for indicating the electronic equipment to display the display data.
All relevant contents of the steps related to the above method embodiments may be referred to the functional description of the corresponding functional module, and the functions thereof are not described herein again.
Of course, the server 400 provided in the embodiment of the present application includes, but is not limited to, the above modules; for example, the server 400 may further include the memory 103. The memory 103 may be used to store the program code of the server 400, and may also be used to store data generated during the operation of the server 400, such as data in a write request.
As an example, in conjunction with fig. 3, the functions implemented by the communication module 301, the receiving module 302, and the sending module 304 in the server 400 are the same as those of the communicator 101 in fig. 10, the functions implemented by the processing module 303 are the same as those of the processor 102 in fig. 10, and the functions implemented by the storage module 305 are the same as those of the memory 103 in fig. 10.
The embodiment of the present application further provides a chip system, which can be applied to the server 400 in the foregoing embodiment. As shown in fig. 11, the system-on-chip includes at least one processor 1501 and at least one interface circuit 1502. The processor 1501 may be a processor in the server 400 described above. The processor 1501 and the interface circuit 1502 may be interconnected by wires. The processor 1501 may receive and execute computer instructions from the memory of the server 400 described above via the interface circuit 1502. The computer instructions, when executed by the processor 1501, may cause the server 400 to perform the various steps performed by the server 400 in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
The embodiment of the present application further provides a computer-readable storage medium for storing computer instructions executed by the server 400.
Embodiments of the present application further provide a computer program product, which includes computer instructions executed by the server 400.
As shown in fig. 12, an embodiment of the present application provides a schematic structural diagram of a display device 200, which includes a processor 201, a communicator 202, and a display 203.
A processor 201 for controlling the transmission unit to transmit the first information to the server in response to the first operation; the first information comprises identification information, actual position information, actual time information and operation information, and the operation information comprises any one of voice information lacking an entity word and indication information used for indicating that the electronic equipment receives the first operation; the communicator 202 is used for receiving first display information which is sent by the server and carries an actual prompt; the first display information is used for indicating the electronic equipment to display an actual prompt language, the actual prompt language is determined according to a user portrait, actual position information and actual time information, and the user portrait is generated based on historical voice data, historical operation data and historical question and answer data corresponding to the identification information; the processor 201 is further configured to control the display 203 to display the actual prompt according to the first display information.
In some practical examples, the identification information includes a user account, and the processor 201 is further configured to control the sending unit to send the voice information to the server; wherein, the voice information comprises first voiceprint data; the communicator 202 is further used for receiving first indication information sent by the server; the first indication information is used for indicating the electronic equipment to continuously log in the user account.
In some practical examples, the communicator 202 is further configured to receive second indication information sent by the server; the second indication information is used for indicating the electronic equipment to prompt the input of theoretical login information of the user account; the processor 201 is further configured to control the sending unit to send the actual login information to the server in response to the second operation; the communicator 202 is further used for receiving third indication information sent by the server; the third indication information is used for indicating the electronic equipment to prompt the user to execute target operation, and the target operation includes any one of account switching or account registration.
In some implementable examples, one actual hint corresponds to one actual control; the processor 201 is further configured to control the sending unit to send the second information to the server in response to a third operation by the user; the second information is used for indicating that the target prompt is selected, and the target prompt is any one of the actual prompts; the communicator 202 is further configured to receive second display information carrying display data sent by the server; the display data are generated according to the target prompt words, and the second display information is used for indicating the electronic equipment to display the display data; the processor 201 is further configured to control the display 203 to display the display data according to the second display information received by the communicator 202.
All relevant contents of the steps related to the above method embodiments may be referred to the functional description of the corresponding functional module, and the functions thereof are not described herein again.
Of course, the display device 200 provided in the embodiment of the present application includes, but is not limited to, the above modules; for example, the display device 200 may further include the memory 204. The memory 204 may be used to store the program code of the display device 200, and may also be used to store data generated during the operation of the display device 200, such as data in a write request.
As an example, in conjunction with fig. 3, the functions implemented by both the acquisition module 210 and the transmission module 212 in the display device 200 are the same as those of the communicator 202 in fig. 12, the functions implemented by the processing module 211 are the same as those of the processor 201 in fig. 12, the functions implemented by the display module 213 are the same as those of the display 203 in fig. 12, and the functions implemented by the storage module 214 are the same as those of the memory 204 in fig. 12.
The embodiment of the present application further provides a chip system, which can be applied to the display device 200 in the foregoing embodiments. As shown in fig. 13, the chip system includes at least one processor 2501 and at least one interface circuit 2502. The processor 2501 may be a processor in the display device 200 described above. The processor 2501 and the interface circuit 2502 may be interconnected by wires. The processor 2501 may receive and execute computer instructions from the memory of the display device 200 described above via the interface circuit 2502. The computer instructions, when executed by the processor 2501, may cause the display device 200 to perform the various steps performed by the display device 200 in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
The embodiment of the present application further provides a computer-readable storage medium for storing computer instructions executed by the display device 200.
The embodiment of the present application further provides a computer program product, which includes computer instructions executed by the display device 200.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A prompt generation method, comprising:
receiving first information sent by electronic equipment;
under the condition that the first information comprises operation information, determining a user portrait according to identification information in the first information; the operation information comprises any one of voice information lacking an entity word and indication information used for indicating that the electronic equipment has received a first operation, and the user portrait is generated based on historical voice data, historical operation data and historical question and answer data corresponding to the identification information;
determining an actual prompt according to the user portrait and actual position information and actual time information in the first information;
sending first display information carrying the actual prompt to the electronic equipment; the first display information is used for indicating the electronic equipment to display the actual prompt.
2. The method of claim 1, wherein the identification information comprises a user account, and the method further comprises:
receiving voice information sent by the electronic equipment; wherein, the voice information comprises first voiceprint data;
determining, according to the first voiceprint data, whether second voiceprint data bound to the user account matches the first voiceprint data;
under the condition that the second voiceprint data matches the first voiceprint data, sending first indication information to the electronic equipment; the first indication information is used for indicating the electronic equipment to remain logged in to the user account.
3. The method of claim 2, further comprising:
under the condition that the second voiceprint data does not match the first voiceprint data, sending second indication information to the electronic equipment; the second indication information is used for indicating the electronic equipment to prompt for input of theoretical login information of the user account;
receiving actual login information sent by the electronic equipment;
under the condition that the theoretical login information is the same as the actual login information, sending the first indication information to the electronic equipment;
under the condition that the theoretical login information is different from the actual login information, sending third indication information to the electronic equipment; the third indication information is used for indicating the electronic equipment to prompt the user to execute a target operation, wherein the target operation comprises any one of account switching or account registration.
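The voiceprint-based login flow of claims 2 and 3 can be sketched as a single server-side decision function. All function and variable names below are illustrative assumptions (the patent specifies no API), the equality comparison stands in for a real voiceprint similarity check, and the string return values stand in for the first, second, and third indication information:

```python
def voiceprints_match(first_voiceprint, second_voiceprint):
    # Placeholder comparison; a real system would compare voiceprint
    # embeddings against a similarity threshold rather than exact equality.
    return first_voiceprint == second_voiceprint


def decide_login_indication(first_voiceprint, second_voiceprint,
                            theoretical_login=None, actual_login=None):
    """Decide which indication information the server sends to the device."""
    if voiceprints_match(first_voiceprint, second_voiceprint):
        return "first"   # keep the current user account logged in
    if actual_login is None:
        return "second"  # ask the device to prompt for login information
    if actual_login == theoretical_login:
        return "first"   # credentials confirm the account despite the mismatch
    return "third"       # prompt the user to switch or register an account
```

Under these assumptions, a matching voiceprint short-circuits the credential check, and credentials are only consulted after a voiceprint mismatch, mirroring the order of claims 2 and 3.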
4. The prompt generation method according to claim 1, wherein before receiving the first information sent by the electronic equipment, the method further comprises:
acquiring historical voice data, historical operation data and historical question and answer data corresponding to the identification information;
analyzing the historical voice data, the historical operation data and the historical question and answer data to determine at least one piece of portrait data; wherein the portrait data includes a user tag weight for each content classification in each time period under historical location information;
the user representation is generated based on the representation data.
5. The method of claim 4, wherein determining an actual prompt according to the user portrait and the actual position information and the actual time information in the first information comprises:
determining portrait data corresponding to the actual position information and the actual time information according to the user portrait, the actual position information and the actual time information;
determining a content classification whose user label weight is greater than or equal to a preset weight according to the portrait data corresponding to the actual position information and the actual time information;
determining an actual prompt according to the content classification whose user label weight is greater than or equal to the preset weight; wherein one content classification corresponds to at least one actual prompt.
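The threshold-based selection of claim 5 can be sketched as follows, reusing the portrait shape assumed above. The portrait layout, the prompt lookup table, and all names are illustrative assumptions rather than details fixed by the patent:

```python
def select_actual_prompts(user_portrait, location, time_period,
                          preset_weight, prompts_by_class):
    """Pick content classifications whose user tag weight meets the preset
    threshold for the given (location, time period), then map each selected
    classification to its candidate actual prompts."""
    weights = user_portrait.get((location, time_period), {})
    selected = [c for c, w in weights.items() if w >= preset_weight]
    actual_prompts = []
    for content_class in selected:
        # One content classification corresponds to at least one prompt.
        actual_prompts.extend(prompts_by_class.get(content_class, []))
    return actual_prompts
```

With a preset weight of 0.5, a portrait slot weighting movies at 0.7 and news at 0.3 would yield only movie-related prompts, and an unknown (location, time period) would yield no prompts at all.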
6. A prompt generation method, comprising:
responding to the first operation, and sending first information to a server; the first information comprises identification information, actual position information, actual time information and operation information, and the operation information comprises any one of voice information lacking an entity word and indication information used for indicating that the electronic equipment receives a first operation;
receiving first display information which is sent by the server and carries an actual prompt; the first display information is used for indicating the electronic equipment to display the actual prompt, the actual prompt is determined according to a user portrait, the actual position information and the actual time information, and the user portrait is generated based on historical voice data, historical operation data and historical question and answer data corresponding to the identification information;
and displaying the actual prompt according to the first display information.
7. A prompt generation apparatus, comprising:
the receiving unit is used for receiving first information sent by the electronic equipment;
the processing unit is used for determining the user portrait according to the identification information in the first information received by the receiving unit under the condition that the first information received by the receiving unit comprises operation information; the operation information comprises any one of voice information lacking an entity word and indication information used for indicating that the electronic equipment has received a first operation, and the user portrait is generated based on historical voice data, historical operation data and historical question and answer data corresponding to the identification information;
the processing unit is further used for determining an actual prompt according to the user portrait and actual position information and actual time information in the first information received by the receiving unit;
the processing unit is further used for controlling the sending unit to send first display information carrying the actual prompt to the electronic equipment; the first display information is used for indicating the electronic equipment to display the actual prompt.
8. A prompt generation apparatus, comprising:
a processing unit configured to control the transmission unit to transmit the first information to the server in response to the first operation; the first information comprises identification information, actual position information, actual time information and operation information, and the operation information comprises any one of voice information lacking an entity word and indication information used for indicating that the electronic equipment receives a first operation;
the receiving unit is used for receiving first display information which is sent by the server and carries an actual prompt; the first display information is used for indicating the electronic equipment to display the actual prompt, the actual prompt is determined according to a user portrait, the actual position information and the actual time information, and the user portrait is generated based on historical voice data, historical operation data and historical question and answer data corresponding to the identification information.
9. An electronic device, comprising: a memory for storing a computer program and a processor; wherein the processor is configured to, when executing the computer program, cause the electronic device to implement the prompt generation method of any one of claims 1-5, or cause the electronic device to implement the prompt generation method of claim 6.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a computing device, causes the computing device to implement the prompt generation method of any one of claims 1-5, or causes the computing device to implement the prompt generation method of claim 6.
CN202210761080.7A 2022-06-29 2022-06-29 Prompt language generation method and device and electronic equipment Pending CN115278316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210761080.7A CN115278316A (en) 2022-06-29 2022-06-29 Prompt language generation method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN115278316A 2022-11-01

Family

ID=83762577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210761080.7A Pending CN115278316A (en) 2022-06-29 2022-06-29 Prompt language generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115278316A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination