WO2022083328A1 - Content push method, apparatus, storage medium and chip system - Google Patents

Content push method, apparatus, storage medium and chip system

Info

Publication number
WO2022083328A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
terminal device
user
interface
chat
Prior art date
Application number
PCT/CN2021/116865
Other languages
English (en)
French (fr)
Inventor
马志伟
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202011502425.4A external-priority patent/CN114465975B/zh
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21881747.6A priority Critical patent/EP4213461A4/en
Publication of WO2022083328A1 publication Critical patent/WO2022083328A1/zh
Priority to US18/304,941 priority patent/US20230262017A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1845 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast broadcast or multicast in a specific location, e.g. geocast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/02 User-to-user messaging using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/21 Monitoring or handling of messages
    • H04L 51/216 Handling conversation history, e.g. grouping of messages in sessions or threads
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/21 Monitoring or handling of messages
    • H04L 51/222 Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/21 Monitoring or handling of messages
    • H04L 51/224 Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/52 Network services specially adapted for the location of the user terminal

Definitions

  • the present application relates to the field of communications, and in particular, to a content push method, device, storage medium and chip system.
  • Human-machine dialogue is widely used in people's daily life, for example in chatbots, robot customer service, smart speakers, and voice assistants. It has a wide range of application scenarios and can be used directly in specific business processing, such as hotel reservation services, flight reservation services, and train ticket reservation services.
  • The user needs to wake up the chatbot on the mobile phone in a specific way, and the system provides a fixed interface for human-machine dialogue.
  • The terminal device can open a fixed interface on which the user can have a conversation with the chatbot.
  • the command includes the intent and the content of the slot.
  • the intent corresponds to the function
  • the slot corresponds to the parameters required to complete the function.
  • For example, the user inputs the command "inquire about weather conditions in Jiading District, Shanghai".
  • The user's intent is "inquire about weather conditions".
  • The slot corresponding to the intent includes: location.
  • The content of the slot "location" can be determined as "Jiading District, Shanghai". It can be said that "location" is the slot corresponding to the intent of "inquiring about weather conditions", and the slot may also be called an entity.
  • The chatbot parses the command input by the user to understand the intent of the user command, that is, to understand what function the user wants. Further, it needs to identify the content of the slot.
  • Identifying the content of the slot is a word extraction and matching problem.
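The intent/slot structure described above can be sketched in a few lines. The intent name, slot name, and keyword rules below are illustrative assumptions, not the parser actually used by the application.

```python
import re

# Illustrative intent/slot schema: each intent (function) declares the
# slots (parameters) it needs. These names are hypothetical examples.
INTENT_SLOTS = {
    "query_weather": ["location"],
}

def parse_command(command: str) -> dict:
    """Toy parser: detect the intent by keyword, then extract slot content."""
    result = {"intent": None, "slots": {}}
    if "weather" in command:
        result["intent"] = "query_weather"
        # Naive slot filling: treat the phrase after "in" as the location.
        m = re.search(r"\bin\s+(.+?)(?:\?|$)", command)
        if m:
            result["slots"]["location"] = m.group(1).strip()
    return result

cmd = "inquire about the weather conditions in Jiading District, Shanghai"
parsed = parse_command(cmd)
print(parsed["intent"])             # query_weather
print(parsed["slots"]["location"])  # Jiading District, Shanghai
```

A production system would replace the keyword and regex rules with a trained natural-language-understanding model, but the output contract (an intent plus filled slots) stays the same.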
  • the present application provides a content push method, apparatus, storage medium and chip system, which are used to reduce the number of interactions between a user and a terminal device.
  • the terminal device in this application acquires first information, where the first information includes location information of the terminal device.
  • the terminal device displays the second information.
  • the second information includes the content to be pushed associated with the first information or a link to the content to be pushed.
  • The first condition may include: the location corresponding to the location information of the terminal device is located in the first area, and the type of the first area belongs to one of the preset area types. Since the second information can be pushed according to the location information of the terminal device, the query steps in the process of the user actively querying the second information can be reduced, which reduces the number of commands the user inputs and hence the number of interactions between the user and the terminal device.
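As a sketch of checking this first condition, assuming each candidate area is represented by a hypothetical bounding-box record and the preset area types are a simple set:

```python
# Hypothetical area record: a latitude/longitude bounding box plus a type tag.
AREAS = [
    {"name": "West Lake", "type": "scenic_spot",
     "lat": (30.22, 30.27), "lon": (120.12, 120.17)},
]
PRESET_AREA_TYPES = {"scenic_spot", "shopping_mall"}  # assumed preset types

def first_condition(lat: float, lon: float):
    """Return the matching area if the device location lies in an area
    whose type belongs to the preset area types, else None."""
    for area in AREAS:
        lat_lo, lat_hi = area["lat"]
        lon_lo, lon_hi = area["lon"]
        inside = lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi
        if inside and area["type"] in PRESET_AREA_TYPES:
            return area
    return None

hit = first_condition(30.25, 120.15)
print(hit["name"] if hit else "no push")  # West Lake
```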
  • the terminal device may predict the user's intention according to the first information.
  • the user's intention actively predicted according to the information is referred to as the predicted intention.
  • A first request for requesting the first server to execute the predicted intent may be sent to the first server, and a first response returned by the first server may be received.
  • the first response includes second information obtained after the first server executes the predicted intent.
  • The first message is sent to the interface module of the Changlian application of the terminal device, so that the terminal device displays the second information on the chat interface of the Changlian application. Since the predicted intent of the user can be determined according to the first information of the terminal device, the result of executing the predicted intent can be displayed, which reduces the number of commands the user inputs and hence the number of interactions between the user and the terminal device.
  • The second information includes: a scenic spot guide of the first area. Since it is determined that the location of the terminal device belongs to a scenic spot, the scenic spot guide is actively pushed to the user, for example through the Changlian application. In this way, the step of the user inquiring about the scenic spot guide is omitted, and information related to the user's current situation can be obtained directly.
  • the second information comes from the first server.
  • the terminal device sends a first request to the first server, where the first request is used to request to obtain the second information; the terminal device receives a first response, and the first response includes the second information.
  • the server returns the scenic spot guide of the scenic spot as the second information to the terminal device.
  • Querying the scenic spot guide can be understood as a predicted intent: the terminal device predicts, according to its current location, that the user wants to query the scenic spot guide, and then sends a first request to the server.
  • The first request is used to request the first server to execute the predicted intent, that is, the first server queries the scenic spot guide of the scenic spot, for example from a database.
  • The first server returns the scenic spot guide of the scenic spot to the terminal device as the second information.
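This request/response exchange can be sketched end to end with a stub server; the message fields, intent name, and in-memory guide database are hypothetical stand-ins for the real protocol.

```python
# Stub first server: maps a predicted intent to the second information,
# e.g. a scenic spot guide looked up from an (here in-memory) database.
GUIDE_DB = {"West Lake": "Suggested route: Broken Bridge -> Su Causeway"}

def first_server_handle(first_request: dict) -> dict:
    """Execute the predicted intent and return the first response."""
    if first_request["intent"] == "query_scenic_guide":
        guide = GUIDE_DB.get(first_request["area"], "no guide available")
        return {"second_information": guide}
    return {"second_information": None}

def terminal_push(area: str) -> str:
    # The terminal device predicts the intent from its location, sends the
    # first request, and displays the second information from the response.
    first_request = {"intent": "query_scenic_guide", "area": area}
    first_response = first_server_handle(first_request)
    return first_response["second_information"]

print(terminal_push("West Lake"))  # Suggested route: Broken Bridge -> Su Causeway
```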
  • the second information comes from information pre-stored by the terminal device. In this way, the speed at which the terminal device acquires the second information can be accelerated.
  • the terminal device may display the second information on the chat interface of the Changlian application.
  • the second information may be displayed on the chat interface of the Changlian application of the first user, where the first user is a user who logs in the Changlian application on the terminal device.
  • the intelligent assistant is integrated into the Changlian application.
  • the intelligent assistant can be displayed in the contact information of the Changlian application, in this case, the second information can be displayed on the first chat interface of the terminal device's Changlian application.
  • The second information is displayed on the first chat interface as chat content sent by the intelligent assistant. It can be seen that the intelligent assistant is anthropomorphized in the Changlian application.
  • the present application does not require the user to actively wake up the intelligent assistant, which can further reduce the number of interactions between the user and the terminal device.
  • The method further includes: the terminal device autonomously obtains the chat records in the Changlian application, analyzes the chat records to predict the user's predicted intent, and, according to the predicted intent, displays through the Changlian application the content to be pushed (or a link to the content to be pushed) associated with the predicted intent.
  • In this way, the chat records in the Changlian application can be analyzed autonomously to predict the user's intent and then push content. This solution does not require the user to actively wake up the intelligent assistant and send it an inquiry, which reduces the number of commands the user inputs and the number of interactions between the user and the terminal device.
  • The Changlian application includes one or more chat groups, and one chat group includes at least two users.
  • The terminal device can acquire the chat records in the chat group, analyze them, predict the user's predicted intent, and then push content or content links as an intelligent assistant on the chat interface of the chat group. In this way, the information actively pushed by the intelligent assistant can be seen by every user in the group, which can save communication between the users in the group.
  • The Changlian application includes at least one chat group.
  • the terminal device determines the first chat group that satisfies the preset second condition.
  • the terminal device displays the second information on the chat interface of the first chat group.
  • The second condition may include: the members of the first chat group include the first user and N second users, and for each of M second users among the N second users, the distance from the first user is not greater than a distance threshold, where N is a positive integer greater than 1, M is a positive integer not greater than N, and the ratio of M to N is not less than a preset value.
  • For example, the preset value can be set to 50%. If the positions of at least half of the second users in a group are relatively close to the position of the first user, it can be predicted that most of the people in the group are in the same scene.
  • In this case, the information can be pushed directly to the chat interface of the chat group, so that all members of the chat group can see it, which saves the first user the operation of separately sending the second information to other users, thereby further reducing the number of interactions between the user and the terminal device.
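A minimal check of this second condition, assuming the straight-line distance from each second user to the first user has already been computed (in practice it would be derived from location coordinates):

```python
def second_condition(distances_to_first_user, threshold_m=500.0, ratio=0.5):
    """distances_to_first_user: distances (in meters) from each of the N
    second users to the first user. Returns True when at least `ratio`
    of them (M of N) are within the threshold, with N greater than 1."""
    n = len(distances_to_first_user)
    if n <= 1:
        return False  # the condition requires N to be greater than 1
    m = sum(1 for d in distances_to_first_user if d <= threshold_m)
    return m / n >= ratio

# Four second users, three of them within 500 m of the first user:
print(second_condition([120.0, 80.0, 430.0, 2600.0]))  # True (M/N = 3/4)
```

The threshold of 500 m and the 50% ratio are the configurable "distance threshold" and "preset value" of the condition, not fixed by the text.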
  • the second condition may include: the subscription information corresponding to the first chat group includes the type of the second information.
  • the second condition may include: the chat records within the preset time period of the first chat group involve the first region.
  • The terminal device can autonomously acquire the chat records in the first chat group and perform semantic analysis on them, so as to determine whether words related to the first area have appeared in the chat records of the first chat group within the preset time period. If so, it can be inferred that most of the members of the first chat group may be located in the first area. Based on this, the second information can be pushed in the first chat group, thereby further reducing the number of interactions between the user and the terminal device.
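A toy version of this chat-record analysis, using plain keyword matching as a stand-in for real semantic analysis; the record format and keyword set are assumptions.

```python
def mentions_area(chat_records, area_keywords, now, window_s=3600):
    """chat_records: list of (timestamp_seconds, text) pairs. Return True if
    any message within the time window mentions a word related to the area."""
    for ts, text in chat_records:
        recent = 0 <= now - ts <= window_s
        if recent and any(keyword in text for keyword in area_keywords):
            return True
    return False

records = [
    (1000, "Shall we meet at the West Lake south gate?"),
    (200, "See you tomorrow"),
]
print(mentions_area(records, {"West Lake"}, now=1500))  # True
```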
  • A chat group in the Changlian application can have a tag value, which reflects the social relationship of the members of the group; for example, the tag value can be family group, work group, travel-buddy group, and so on.
  • The tag value may be filled in by the user, inferred from the content of chats between members, or inferred from the social relationship between members.
  • If the tag value of a group matches the type of the information, the information can be published to that group. For example, if the type of the information is family health data, the information can be pushed to the family group among the chat groups. For another example, if the type of the information is a guide for attractions, it can be pushed to the travel-buddy group.
  • the type of information matched by the tag value of a chat group can be preset.
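The tag-to-information-type matching can be implemented as a preset lookup table; the tag values and information types below mirror the examples in the text, while the mapping itself is an illustrative assumption.

```python
# Preset mapping from information type to the group tag values it matches.
TYPE_TO_TAGS = {
    "family_health_data": {"family_group"},
    "scenic_spot_guide": {"travel_buddy_group"},
}

def target_groups(info_type, groups):
    """groups: mapping of group name -> tag value. Return the groups whose
    tag value matches the given information type."""
    tags = TYPE_TO_TAGS.get(info_type, set())
    return [name for name, tag in groups.items() if tag in tags]

groups = {"Ours": "family_group", "Hikers": "travel_buddy_group",
          "Project A": "work_group"}
print(target_groups("scenic_spot_guide", groups))  # ['Hikers']
```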
  • After the terminal device displays the second information on the chat interface of the first chat group, the terminal device sends a second request to the second server, where the second request carries the second information and is used to request the second server to display the second information on the terminal device logged in by each second user among the N second users.
  • the N second users may view the second information on the devices they log in to.
  • the terminal devices logged in by the N second users include at least one of the following: a smart phone, a smart large screen, a smart speaker, a smart bracelet, and a tablet computer. In this way, it can be compatible with more types of terminal equipment.
  • The chat interface of the Changlian application further includes: a third chat interface between the first user and a second device, where the second device is one of a smartphone, a smart screen, a smart speaker, a smart bracelet, or a tablet computer.
  • the method further includes: the first user sends third information on the third chat interface, and the terminal device sends the third information to the second device, so as to display the third information on the display screen of the second device.
  • the terminal device is a user's smartphone
  • the user can add other devices, such as smart screens, smart speakers, smart bracelets, etc.
  • In this way, the screen projection scheme is relatively simple; for the user, it is like chatting with the smart screen, which simplifies the user's operations.
  • The first server receives the first request, where the first request is used to request to obtain the second information; the first server carries the second information in the first response and sends the first response to the terminal device. In this way, a foundation is laid for the terminal device to display the second information.
  • The first request received by the first server may be used to request the first server to execute the predicted intent.
  • The first server executes the predicted intent and obtains the second information.
  • The first server carries the second information in the first response and sends the first response to the terminal device. For example, if the first request is for requesting to query the scenic spot guide of the scenic spot where the terminal device is currently located, the first server returns the scenic spot guide of the scenic spot to the terminal device as the second information.
  • Querying the scenic spot guide can be understood as a predicted intent: the terminal device predicts, according to its current location, that the user wants to query the scenic spot guide, and then sends a first request to the first server.
  • The first request is used to request the first server to execute the predicted intent, that is, the first server queries the scenic spot guide of the scenic spot, for example from a database.
  • The first server returns the scenic spot guide of the scenic spot to the terminal device as the second information.
  • the present application further provides a communication device.
  • the communication apparatus may be any device on the sending end or device on the receiving end that performs data transmission in a wireless manner.
  • For example, the communication apparatus may be a communication chip, a terminal device, or a server (a first server or a second server).
  • The device on the sending end and the device on the receiving end are relative roles.
  • the communication device can be used as the above-mentioned server or a communication chip that can be used for the server; in some communication processes, the communication device can be used as the above-mentioned terminal device or a communication chip that can be used for the terminal device.
  • a communication apparatus including a communication unit and a processing unit, so as to execute any one of the implementation manners of any of the content pushing methods of the first aspect to the second aspect.
  • the communication unit is used to perform functions related to transmission and reception.
  • the communication unit includes a receiving unit and a sending unit.
  • the communication device is a communication chip, and the communication unit may be an input/output circuit or port of the communication chip.
  • The communication unit may be a transceiver, or a transmitter and a receiver.
  • the communication apparatus further includes various modules that can be used to execute any one of the implementation manners of any one of the content pushing methods of the first aspect to the second aspect.
  • a communication device is provided, where the communication device is the above-mentioned terminal device or server (a first server or a second server). Includes processor and memory.
  • the memory is used to store a computer program or instruction
  • The processor is used to call and run the computer program or instruction from the memory, and when the processor executes the computer program or instruction in the memory, the communication apparatus is caused to execute any one of the implementation manners of the content push methods of the first aspect to the second aspect.
  • There are one or more processors and one or more memories.
  • the memory may be integrated with the processor, or the memory may be provided separately from the processor.
  • the transceiver may include a transmitter (transmitter) and a receiver (receiver).
  • a communication apparatus including a processor.
  • The processor, coupled to the memory, is operable to perform the method in any one of the first aspect to the second aspect and any possible implementation thereof.
  • the communication device further includes a memory.
  • the communication device further includes a communication interface, and the processor is coupled to the communication interface.
  • the communication apparatus is a terminal device.
  • the communication interface may be a transceiver, or an input/output interface.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the communication device is a server (either a first server or a second server).
  • the communication interface may be a transceiver, or an input/output interface.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the communication device is a chip or a system of chips.
  • the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin or a related circuit, etc. on the chip or a chip system.
  • a processor may also be embodied as a processing circuit or a logic circuit.
  • a system which includes the above-mentioned terminal device and a server (a first server or a second server).
  • A computer program product includes a computer program (also referred to as code, or instructions) which, when executed, causes the computer to perform the method in any one of the implementation manners of the first aspect to the second aspect.
  • A computer-readable storage medium stores a computer program (also referred to as code, or instructions) which, when run on a computer, causes the computer to perform the method in any one of the implementation manners of the first aspect to the second aspect.
  • a system-on-chip may include a processor.
  • The processor, coupled to the memory, is operable to perform the method in any one of the first aspect to the second aspect and any possible implementation thereof.
  • the chip system further includes a memory.
  • Memory used to store computer programs (also called code, or instructions).
  • A processor is used to invoke and run a computer program from the memory, so that a device on which the chip system is installed performs the method in any one of the first aspect to the second aspect and any possible implementation thereof.
  • a processing device comprising: an input circuit, an output circuit and a processing circuit.
  • the processing circuit is configured to receive the signal through the input circuit and transmit the signal through the output circuit, so that the method of any one of the first aspect to the second aspect, and any one of the possible implementations of the first aspect to the second aspect is implemented.
  • the above-mentioned processing device may be a chip
  • the input circuit may be an input pin
  • the output circuit may be an output pin
  • the processing circuit may be a transistor, a gate circuit, a flip-flop, and various logic circuits.
  • the input signal received by the input circuit may be received and input by, for example, but not limited to, a receiver
  • the signal output by the output circuit may be, for example, but not limited to, output to and transmitted by a transmitter
  • the circuit can be the same circuit that acts as an input circuit and an output circuit at different times.
  • the embodiments of the present application do not limit the specific implementation manners of the processor and various circuits.
  • FIG. 1a is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 1b is a schematic diagram of another system architecture provided by an embodiment of the present application.
  • FIG. 1c is a schematic diagram of another system architecture provided by an embodiment of the present application.
  • FIG. 1d is a schematic diagram of another system architecture provided by an embodiment of the present application.
  • FIG. 1e is a schematic structural diagram of a terminal device according to an embodiment of the application.
  • FIG. 1f is a schematic structural diagram of another terminal device provided by an embodiment of the present application.
  • FIG. 2a is a schematic flowchart of a content push method according to an embodiment of the present application;
  • FIG. 2b is a schematic flowchart of another content push method according to an embodiment of the present application;
  • FIG. 3 is a set of schematic interface diagrams of terminal devices applicable to scenario 1 provided by an embodiment of the present application;
  • FIG. 4 is another set of schematic interface diagrams of terminal devices applicable to scenario 1 provided by an embodiment of the present application;
  • FIG. 5 is a set of schematic interface diagrams of terminal devices applicable to scenario 2 provided by an embodiment of the present application;
  • FIG. 6 is a set of schematic interface diagrams of terminal devices applicable to scenario 3 provided by an embodiment of the present application;
  • FIG. 7 is another set of schematic interface diagrams of terminal devices applicable to scenario 3 provided by an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of a communication device according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a communication device according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a communication device according to an embodiment of the present application.
  • the first type of terminal device needs to have a display screen, which can be used to display the information sent by the intelligent assistant on the display screen.
  • the second type of terminal device can be used to collect user information, that is, the user's information can be obtained from the terminal device, and the second type of terminal device may or may not have a display screen.
  • the first type of terminal device may be a mobile phone, a tablet computer, a computer, a wearable device with a display screen and wireless communication functions (such as a smart watch), a smart screen, a smart router with a display screen, a vehicle-mounted device with a display screen and wireless communication functions, a smart speaker with a display screen and wireless communication functions, and so on.
  • the second type of terminal device may be a mobile phone, a tablet computer, a computer, a wearable device with a wireless communication function (such as a smart watch), a vehicle-mounted device with a wireless communication function, a smart speaker with a wireless communication function, a smart screen, a smart router, and so on.
  • a terminal device may belong to both the first type of terminal device and the second type of terminal device. That is, a terminal device can be used to acquire user information from the terminal device, and can also be used to display the information sent by the intelligent assistant.
  • a terminal device may belong only to the second type of terminal device and not to the first type. That is, the terminal device can only be used to obtain user information, but cannot display the information pushed by the smart assistant. For example, a smart bracelet without a screen can only collect user data such as heartbeat from the smart bracelet, but cannot display the information pushed by the smart assistant.
  • user commands are input by users, and may also be called user requirements, instructions, and so on.
  • the user command may be a combination of one or more of voice, image, video, audio and video, and text.
  • for example, the user command may be voice input by the user through a microphone, in which case the user command may also be called a "voice command"; for another example, the user command may be text input by the user through a keyboard or virtual keyboard, in which case the user command may also be called a "text command".
  • for another example, the user command may be a combination of image and text; for yet another example, the user command may be a piece of audio and video input by the user through a camera and a microphone, in which case the user command may also be called an "audio and video command".
  • Speech recognition technology, also known as automatic speech recognition (ASR), computer speech recognition, or speech to text (STT), is a method of converting human speech into the corresponding text by computer.
  • Natural language understanding aims to give intelligent assistants the language understanding ability of a normal person. One important function within it is intent recognition.
  • the intent corresponds to the function, that is, what kind of function the user needs.
  • intentions are divided into prediction intentions and target intentions.
  • the relevant description of the intention is applicable to both the prediction intention and the target intention. It can also be understood that intention is a superordinate concept of prediction intention and target intention.
  • the prediction intent in the embodiment of the present application refers to a function that the user may want, predicted according to the acquired user data without the user inputting a command. For example, if the user's current location information is obtained and it is analyzed that the user is currently in the Forbidden City, which is a tourist attraction, it can be predicted that the user's intent is "inquire about scenic spots". From the preset correspondence between intents and slots, it can be determined that the slot corresponding to this intent is "location", and the slot can be filled as "Forbidden City" according to the user's current location information. In this example, "inquire about scenic spots" is the predicted intent. It can be seen that a predicted intent does not require the user to input a command; the intent can be inferred only from the obtained user information, thereby reducing the number of interactions between the user and the terminal device.
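The Forbidden City example above can be sketched in code. This is only an illustrative sketch, not the patent's implementation: the point-of-interest table and the function name are invented for illustration.

```python
# Hypothetical point-of-interest lookup table (invented for illustration).
POI_CATEGORIES = {
    "Forbidden City": "tourist attraction",
    "Beijing Capital Airport": "airport",
}

def predict_intent(location):
    """Map the user's current location to a predicted intent and a filled slot."""
    category = POI_CATEGORIES.get(location)
    if category == "tourist attraction":
        # The intent is predicted without any user command; the "location"
        # slot is filled directly from the acquired location information.
        return {"intent": "inquire about scenic spots",
                "slots": {"location": location}}
    # No prediction can be made for this location.
    return None

result = predict_intent("Forbidden City")
```

When no known point of interest matches the acquired location, the sketch simply predicts nothing, mirroring the fact that a predicted intent is only produced when the collected data supports it.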
  • the target intent in the embodiment of the present application refers to an intent that needs to be determined by analyzing according to a user command.
  • a "user command” may be input by the user, and then what function the user wants is identified from the "user command”.
  • Intent recognition can be understood as a semantic expression classification problem. It can also be said that intent recognition is a classifier (also called an intent classifier) that determines which intent a user command belongs to.
  • commonly used intent classifiers for intent recognition include support vector machines (SVM), decision trees, and deep neural networks (DNN).
  • the deep neural network can be a convolutional neural network (CNN) or a recurrent neural network (RNN), etc.
  • the RNN may include a long short-term memory (LSTM) network, a stacked recurrent neural network (SRNN), and so on.
  • the general process of identifying the "target intent" according to the "user command" includes: first, preprocessing the user command (i.e., a set of word sequences), such as removing punctuation marks from the corpus and removing stop words; secondly, generating word embeddings from the preprocessed corpus using a word embedding algorithm such as word2vec; further, using an intent classifier (e.g., LSTM) to perform feature extraction, intent classification, and so on.
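The preprocess-then-classify pipeline above can be illustrated with a deliberately simplified sketch. A real system would use word embeddings (e.g., word2vec) and a trained classifier (e.g., an LSTM); here a keyword-overlap score stands in for the classifier, and the stop-word list and keyword profiles are invented for illustration.

```python
import re

# Invented stop-word list and per-intent keyword profiles (illustrative only).
STOP_WORDS = {"a", "an", "the", "please", "i", "want", "to"}
INTENT_KEYWORDS = {
    "book flight tickets": {"book", "flight", "ticket"},
    "query ticket prices": {"price", "cost", "ticket"},
}

def preprocess(command):
    # Preprocessing step: strip punctuation and remove stop words.
    tokens = re.findall(r"[a-z]+", command.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def classify(command):
    # Stand-in for the intent classifier: pick the intent whose keyword
    # profile overlaps most with the preprocessed command.
    tokens = set(preprocess(command))
    return max(INTENT_KEYWORDS, key=lambda i: len(INTENT_KEYWORDS[i] & tokens))

intent = classify("Please book a flight ticket to Shanghai")
```

The structure (preprocess, featurize, classify) matches the described process even though each stage is reduced to a few lines.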
  • the intent classifier is a trained model, which can identify intents in one or more scenarios, or identify any intent.
  • the intent classifier can identify the intent in the flight booking scenario, including booking flight tickets, filtering tickets, querying ticket prices, querying ticket information, refunding tickets, changing tickets, querying the distance to the airport, etc.
  • the terminal device may store ⁇ intent, slot>, that is, the terminal device stores the correspondence between the intent and the slot, so that the terminal device can quickly determine its corresponding slot according to the intent.
  • an intent may correspond to one or more slots, or may not correspond to a slot.
  • Table 1 exemplarily shows a structure diagram of the corresponding relationship between several possible intentions and slots.
  • Map is a container that stores elements according to keys, and is implemented by means of arrays and linked lists.
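The ⟨intent, slot⟩ correspondence described above can be sketched as a simple key-value map, so that the slots for an intent can be looked up quickly. The intent names and slot lists below are taken from the examples in this description; the variable names are invented for illustration.

```python
# Intent -> slots correspondence, stored as a map keyed by intent.
INTENT_SLOTS = {
    "inquire about scenic spots": ["location"],
    "book air tickets": ["departure time", "starting place", "destination"],
}

# Given an intent, the corresponding slots can be determined directly.
slots = INTENT_SLOTS.get("book air tickets", [])
```

An intent with no corresponding slot simply maps to an empty list, covering the case noted above where an intent may not correspond to any slot.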
  • the above description takes the corresponding relationship between the terminal storage intention and the slot as an example. It should be understood that in another implementation, the corresponding relationship between the intention and the slot may be stored in a server (such as a server in the cloud).
  • both the prediction intention and the target intention belong to the intention.
  • the slot corresponding to the predicted intent may be determined from the correspondence between the intent and the slot.
  • the slot corresponding to the target intent may be determined from the corresponding relationship between the intent and the slot.
  • the slot can be filled according to the obtained information of the user.
  • the information of the "location" of the slot can be filled as "Forbidden City" according to the current location information of the user.
  • if an intent is a target intent, the slots can be filled at least based on the "user command".
  • One or more slots can be configured for an intent. For example, in the intent of "inquiry about scenic spots”, there is one slot, that is, “location”. For another example, in the intent of "booking air tickets”, the slots include “departure time”, “starting place”, and "destination”.
  • slot type (Slot-Type): continuing the above example, to accurately identify the three slots "departure time", "starting place", and "destination", each needs a corresponding slot type behind it, namely "time" and "city name".
  • the slot type is a structured knowledge base of specific knowledge, which is used to identify and transform the slot information expressed by the user orally. From the perspective of programming language, intent+slot can be regarded as a function to describe the user's needs, where “intent corresponds to function”, “slot corresponds to function parameters”, and "slot_type corresponds to parameter type”.
  • the slots configured for an intent can be divided into necessary slots and optional slots.
  • the necessary slot is the slot that must be filled to execute the user command
  • the optional slot is the slot that can be filled or not to execute the user command.
  • unless otherwise stated, in this application a slot may be a necessary slot or an optional slot.
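The programming-language analogy above (intent ≈ function, slot ≈ parameter, slot type ≈ parameter type) can be made concrete. In this sketch, necessary slots map to required parameters and an optional slot maps to a parameter with a default; the function body is invented for illustration.

```python
from datetime import datetime
from typing import Optional

def book_air_tickets(departure_time: datetime,          # necessary slot, slot type "time"
                     starting_place: str,               # necessary slot, slot type "city name"
                     destination: str,                  # necessary slot, slot type "city name"
                     seat_class: Optional[str] = None): # optional slot
    # Build a query string from the filled slots (illustrative behavior only).
    query = f"{starting_place} -> {destination} at {departure_time:%Y-%m-%d %H:%M}"
    if seat_class:
        query += f" ({seat_class})"
    return query

# All necessary slots filled; the optional slot is omitted.
q = book_air_tickets(datetime(2021, 9, 1, 8, 30), "Beijing", "Shanghai")
```

Calling the function without a necessary slot fails, just as a user command that leaves a necessary slot unfilled cannot be executed until the slot is obtained.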
  • Instant Messaging refers to services that can instantly send and receive Internet messages. Users can chat through instant messaging applications.
  • the instant messaging application can support a chat between two users, or a direct single chat between one user and the smart assistant.
  • a group chat can also be supported, and a group includes three or more members.
  • the smart assistant can also participate in a group chat, and the smart assistant can publish information on the group chat interface.
  • the intelligent assistant does not need to be added separately, and can be integrated in the system layer of the terminal device. This embodiment can further reduce the operation steps that the user needs to perform when interacting with the intelligent assistant.
  • the cloud AI engine module or the terminal-device-side AI engine module infers the user's prediction intent according to the acquired user data, and after acquiring the content satisfying the prediction intent from the content server, returns the content to the terminal device; the terminal device side can then display the content on the chat interface as the intelligent assistant.
  • several ways to wake up the smart assistant can be preset (for example, @ the name of the smart assistant in the chat interface, or directly address the smart assistant by name), and the user can wake up the smart assistant in a preset way and issue a user command to it.
  • the cloud AI engine module or the terminal-device-side AI engine module then determines the user's target intent according to the obtained user command, and after obtaining the content satisfying the target intent from the content server, returns the content.
  • the terminal device side can display the content on the chat interface as the intelligent assistant.
  • the intelligent assistant in the embodiment of the present application may also be called a chat robot.
  • the name of the intelligent assistant is "Xiaoyi" as an example for introduction; in practical applications, the intelligent assistant may also have other names, which are not restricted here.
  • User interface is a medium interface for interaction and information exchange between application programs or operating systems and users, and it realizes the conversion between the internal form of information and the form acceptable to users.
  • the user interface of an application is source code written in a specific computer language, such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the terminal device and finally presented as content that the user can recognize, such as pictures, text, buttons, and other controls.
  • a graphical user interface can display multiple cards, which can also be called card-based display of query results.
  • taking a movie theater card as an example of a control, a movie theater card can be used to describe a movie theater.
  • the movie theater information displayed by a movie theater card may not be all the information corresponding to the control.
  • the terminal device can output detailed information describing the movie theater specified by the movie theater card.
  • for example, the information output on the GUI is the detailed information of the cinema.
  • the information of multiple movie theaters can be sorted, for example, according to the theater's rating. As shown in (f) in Figure 5 below, a possible schematic diagram of the interface on which the terminal device displays multiple cinema cards as Xiaoyi is shown. There may also be other forms for rendering the query result, which are not limited in this embodiment of the present application.
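The card ordering described above can be sketched as a simple sort of the query results before rendering. The theater names and ratings below are invented for illustration.

```python
# Illustrative query results for movie-theater cards (invented data).
theaters = [
    {"name": "Theater A", "rating": 4.2},
    {"name": "Theater B", "rating": 4.8},
    {"name": "Theater C", "rating": 4.5},
]

# Sort the cards so the highest-rated theater is displayed first.
cards = sorted(theaters, key=lambda t: t["rating"], reverse=True)
```

Other sort keys (distance, price, etc.) would slot into the same `key` function, which is why the rendering form is left open in the description above.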
  • Fig. 1a exemplarily shows a schematic diagram of a system architecture to which this embodiment of the present application is applied.
  • the system architecture includes one or more terminal devices, such as the terminal device 201, the terminal device 202, and the terminal device 203.
  • the terminal device 201 is taken as an example for displaying the information sent by the intelligent assistant for illustration.
  • the terminal device 201, the terminal device 202, and the terminal device 203 can all be used as terminal devices that collect user data.
  • the system architecture may further include one or more servers, such as the information collection server 241, the application server 242 and the content server 23 as shown in Fig. 1a.
  • the content server 23 can be set as different servers for different types of content, such as the content server 232 and the content server 231. For example, there may be a content server for providing weather services (the data mining module can query the weather status from this content server), a content server that provides encyclopedia services, or a content server that provides content such as video entertainment.
  • a content server may be used to provide one or more types of services, which is not limited in this embodiment of the present application.
  • the information collection server 241 can be used to store data reported by various terminal devices, for example, can collect heartbeat data reported by the terminal device 203 (the terminal device 203 is a smart bracelet).
  • the number of the information collection server 241 may be one, or there may be multiple ones, and only one is exemplarily shown in the figure.
  • the application server 242 may be the application server of the instant messaging application mentioned in the embodiments of this application.
  • a user can chat with a smart assistant.
  • Group chat is also possible between multiple users through instant messaging applications.
  • the smart assistant can also chat with multiple users in a group chat, and in the group, the smart assistant can participate in the group chat as a group chat member.
  • the terminal device can send the information published by the intelligent assistant to the application server 242, which then sends it to the terminal devices of each group member, so that all members of the group can see the information displayed by the smart assistant in the group chat interface.
  • the embodiment of the present application further includes an AI engine module, and the AI engine module can be written as engine in English.
  • the AI engine module may be deployed on the terminal device side, for example, the terminal device side AI engine module 21 deployed on the terminal device 201 shown in FIG. 1a.
  • the terminal-device-side AI engine module may also be deployed on other terminal devices. In the figure, it is only illustrated that the terminal-device-side AI engine module 21 is deployed on the terminal device 201 .
  • the AI engine module may be deployed on the side of a terminal device with relatively strong capabilities, such as a smart phone, a tablet computer, and the like.
  • the AI engine module may also be deployed on the cloud side, for example, the cloud AI engine module 22 .
  • the specific processing flow of the solution can be processed by the terminal device-side AI engine module 21 or by the cloud AI engine module 22 .
  • when an AI engine module is deployed on the terminal device side, processing can be done by the AI engine module 21 on the terminal device side, which can reduce the number of interactions between the terminal device and the cloud, thereby speeding up the processing.
  • the terminal-device-side AI engine module 21 includes a target intent identification module 211 , a predicted intent identification module 212 , and a data mining module 213 .
  • the target intent recognition module 211 may be used to identify the user's target intent according to the commands input by the user, and the target intent recognition module 211 may include a distribution module 2111 , a speech recognition module 2113 and a natural language understanding module 2112 .
  • the distribution module 2111 may be configured to receive a command input by the user, and the command may be voice or text. If it is voice, it can be converted into text by the speech recognition module 2113, and then the recognized text is input into the natural language understanding module 2112. If it is text, it can be directly input into the natural language understanding module 2112.
  • the natural language understanding module 2112 is used to identify the user's target intention according to the input text, and send the target intention to the data mining module 213 .
  • the data mining module 213 can determine the slot corresponding to the target intention according to the corresponding relationship between the intention and the slot, fill in the information of the slot, and then query the corresponding server for the relevant content that needs to satisfy the target intention and the information of the slot, And return the queried related content to the terminal device side so that it can be displayed to the user for viewing.
  • the prediction intention recognition module 212 in this embodiment of the present application may also be referred to as a full-scene smart brain, which may include an acquisition module 2121 and a decision module 2122 .
  • the acquisition module is used to collect the user's information, such as the user's schedule, geographic location, health data and other information. In one possible implementation, the user's authorization may be obtained before the user's data is collected.
  • the acquisition module can collect data on one or more terminal devices. For example, although the acquisition module 2121 belongs to the module on the terminal device 201, in addition to collecting the data on the terminal device 201, it can also collect other terminal devices, such as the data on the terminal device 203. data.
  • the terminal device 203 may report the data to the information collection server 241 in the cloud, and the acquiring module 2121 may acquire the data reported by the terminal device 203 through the network.
  • the decision-making module 2122 determines the prediction intent of the user according to the data obtained by the obtaining module 2121; that is to say, the intent determined by the prediction intent recognition module 212 does not depend entirely on the user's command, but relies on analysis of the collected data.
  • in this embodiment of the present application, the intent predicted by the prediction intent identification module 212 is referred to as a predicted intent.
  • the decision-making module 2122 fills the slot of the prediction intention according to the data obtained by the obtaining module 2121 , and sends the slot to the data mining module 213 after the slot is filled.
  • the data mining module 213 queries the corresponding server for relevant content satisfying the prediction intent and slot information according to the received prediction intent and slot information, and returns the queried relevant content to the terminal device side for display to the user.
  • both the target intent and the predicted intent belong to intents; the predicted intent is a function that the user may want, predicted according to the collected user information.
  • the target intent is obtained after being understood by the natural language understanding module 2112 according to the user command input by the user.
  • functions that the user may want can be predicted according to the user's information, so the steps for the user to input commands to the terminal device can be reduced, thereby reducing the number of interactions between the user and the terminal device.
  • the above content is introduced by taking the AI engine module on the terminal device side as an example.
  • the following describes a possible solution processing flow of the cloud AI engine module 22 .
  • the cloud AI engine module 22 includes a target intention recognition module 221 , a prediction intention recognition module 222 , and a data mining module 223 .
  • the target intent recognition module 221 may be used to recognize the user's target intent according to the commands input by the user, and the target intent recognition module 221 may include a distribution module 2211 , a speech recognition module 2213 and a natural language understanding module 2212 .
  • the distribution module 2211 may be configured to receive a command input by the user, and the command may be voice or text. If it is speech, it can be converted into text by the speech recognition module 2213, and then the recognized text is input into the natural language understanding module 2212. If it is text, it can be directly input to the natural language understanding module 2212.
  • the natural language understanding module 2212 is used to identify the user's target intention according to the input text, and send the target intention to the data mining module 223 .
  • the data mining module 223 can determine the slot corresponding to the target intention according to the corresponding relationship between the intention and the slot, and fill in the information of the slot, and then query the corresponding server for the relevant content that needs to satisfy the target intention and the information of the slot, And return the queried relevant content to the cloud so that it can be displayed to users for viewing.
  • the prediction intention recognition module 222 in this embodiment of the present application may also be referred to as a full-scene smart brain, which may include an acquisition module 2221 and a decision module 2222 .
  • the acquisition module is used to collect the user's information, such as the user's schedule, geographic location, health data and other information. In one possible implementation, the user's authorization may be obtained before the user's data is collected.
  • the acquisition module can collect data on one or more terminal devices, for example, can collect data on the terminal device 201 and can also collect data on the terminal device 203 .
  • the terminal device 203 may report the data to the information collection server 241 in the cloud, and the acquiring module 2221 may acquire the data reported by the terminal device 203 through the network.
  • the decision-making module 2222 determines the prediction intent of the user according to the data obtained by the obtaining module 2221; that is to say, the intent determined by the prediction intent recognition module 222 does not depend entirely on the user's command, but relies on analysis of the collected data.
  • in this embodiment of the present application, the intent predicted by the prediction intent identification module 222 is referred to as a predicted intent.
  • the decision-making module 2222 fills the slot of the prediction intention according to the data obtained by the obtaining module 2221, and sends the slot to the data mining module 223 after the slot is filled.
  • the data mining module 223 queries the corresponding server for relevant content satisfying the prediction intent and slot information according to the received prediction intent and slot information, and returns the queried relevant content to the cloud for display to the user.
  • the above content introduces the terminal-device-side AI engine module 21 and the cloud AI engine module 22 respectively. If, as shown in Figure 1a, AI engine modules are deployed both on the terminal device 201 and in the cloud, some operations can be performed by the terminal-device-side AI engine module and other operations by the cloud AI engine module.
  • the prediction intention determination process may be performed by the prediction intention identification module 212 of the AI engine module 21 on the terminal device side.
  • the target intention determination process is performed by the target intention recognition module 221 of the cloud AI engine module 22 .
  • either the data mining module 213 or the data mining module 223 may be selected for use.
  • the acquisition module 2121 on the terminal device side can collect the user's data, and then report the collected data through the network, and the decision module 2222 in the cloud can infer the user's prediction intention.
  • Each module in the embodiment of the present application can be used in combination, which is relatively flexible, and is not limited in the embodiment of the present application.
  • Fig. 1a shows a schematic diagram of a system architecture with an AI engine module deployed on both the terminal device side and the cloud
  • Fig. 1b exemplarily shows a schematic diagram of a system architecture with an AI engine module deployed only on the cloud
  • Fig. 1c exemplarily shows a schematic diagram of the system architecture in which the AI engine module is deployed only on the terminal device side.
  • the functions and roles of each module shown in FIG. 1b and FIG. 1c can be referred to the corresponding descriptions in FIG. 1a, which will not be repeated here.
  • FIG. 1d exemplarily shows a schematic structural diagram of the terminal device 201 in FIG. 1a .
  • the terminal device 201 may include an instant messaging application module 25 .
  • the instant messaging application module 25 is integrated with an AI interface module 252 . Therefore, the cloud AI engine module 22 or the terminal device side AI engine module 21 can be used in the instant messaging application.
  • the data returned by the data mining module 213 can be transmitted to the instant messaging application module 25 through the AI interface module 252 .
  • the instant messaging application module 25 may further include a rendering module 253 .
  • the rendering module 253 can be used to render the information received by the AI interface module 252, for example, it can render and draw the received "Scenic Spots Guide of the Forbidden City", so that the information displayed to the user can be drawn more beautifully.
  • the instant messaging application module 25 may further include a message processing module 251, and the message processing module 251 may be configured to send a message to the chat interface of the user as an intelligent assistant.
  • the message processing module 251 can send the message to the application server 242, which then transmits the message to the terminal devices of the other group members, so that the purpose of publishing messages in the group chat records as an intelligent assistant can be achieved.
  • FIG. 1e exemplarily shows a schematic structural diagram of a terminal device, and the terminal device may be the terminal device 201 of the above-mentioned FIG. 1a to FIG. 1d.
  • the illustrated terminal device is only an example; the terminal device may have more or fewer components than those shown in the figures, may combine two or more components, or may have different component configurations.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the terminal device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the controller can be the nerve center and command center of the terminal device.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be directly called from the memory, thereby avoiding repeated access, reducing the waiting time of the processor 110, and thus improving the efficiency of the system.
  • the processor 110 may execute the method for adjusting the volume of the touch screen provided by the embodiment of the present application, and the processor may display related prompt information of volume interaction on the side edge of the display screen in response to a touch operation on the display screen.
  • when the processor 110 integrates different devices, such as a CPU and a GPU, the CPU and the GPU may cooperate to execute the operation prompt method provided by the embodiments of the present application. For example, in the operation prompt method, some algorithms are executed by the CPU and other algorithms are executed by the GPU, for faster processing efficiency.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple with the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the terminal device.
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, to sample, quantize, and encode an analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface to implement the shooting function of the terminal device.
  • the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the terminal device.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the terminal device, and can also be used to transmit data between the terminal device and peripheral devices. It can also be used to connect headphones to play audio through the headphones. This interface can also be used to connect other terminal devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the terminal device.
  • the terminal device may also adopt interface connection manners different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the wireless communication function of the terminal device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in a terminal device can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the terminal device.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the terminal device, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the terminal device is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the terminal device can communicate with the network and other devices through wireless communication technology.
  • Wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the terminal device realizes the display function through the GPU, the display screen 194, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), and so on.
  • the display screen 194 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • when the processor 110 executes the volume adjustment method provided by the embodiments of the present application and the display screen 194 is folded, if a touch operation is received on one of the screens, the processor 110 determines the touch position of the touch operation on that screen, and displays the prompt information related to volume interaction at that touch position on the screen.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the terminal device.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the terminal device by executing the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor.
  • the terminal device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • the microphone 170C, also called a “mic” or “sound transducer”, is used to convert sound signals into electrical signals.
  • the user can make a sound by moving the mouth close to the microphone 170C, so as to input the sound signal into the microphone 170C.
  • the terminal device may be provided with at least one microphone 170C.
  • the terminal device may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals.
  • the terminal device may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the terminal device can use the collected fingerprint characteristics to implement fingerprint unlocking, access the application lock, take photos with the fingerprint, answer incoming calls with the fingerprint, and so on.
  • a fingerprint sensor may be configured on the front of the terminal device (below the display screen 194), or a fingerprint sensor may be configured on the back of the terminal device (below the rear camera).
  • the fingerprint identification function can also be realized by configuring a fingerprint sensor in the touch screen, that is, the fingerprint sensor can be integrated with the touch screen to realize the fingerprint identification function of the terminal device.
  • the fingerprint sensor may be configured in the touch screen, may be a part of the touch screen, or may be configured in the touch screen in other ways.
  • the fingerprint sensor can also be implemented as a full-panel fingerprint sensor. Therefore, the touch screen can be regarded as a panel that can perform fingerprint collection at any position.
  • the fingerprint sensor can process the collected fingerprint (for example, whether the fingerprint is verified) and send it to the processor 110, and the processor 110 performs corresponding processing according to the fingerprint processing result.
  • the fingerprint sensor can also send the collected fingerprint to the processor 110, so that the processor 110 can process the fingerprint (eg, fingerprint verification, etc.).
  • the fingerprint sensor in the embodiments of the present application may adopt any type of sensing technology, including but not limited to optical, capacitive, piezoelectric, or ultrasonic sensing technology.
  • the touch sensor 180K is also called a “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the terminal device, at a location different from that of the display screen 194.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the terminal device.
  • the terminal device can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. Multiple cards can be of the same type or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the terminal device interacts with the network through the SIM card to realize functions such as call and data communication.
  • the terminal device adopts an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the terminal device and cannot be separated from the terminal device.
  • the terminal device may also include a Bluetooth device, a positioning device, a flash, a pico-projection device, a near field communication (NFC) device, etc., which will not be described in detail here.
  • the software system of the terminal device may adopt a layered architecture.
  • the embodiments of the present application take the Android system of the layered architecture as an example to illustrate the software structure of the terminal device.
  • Fig. 1f is a software structural block diagram of a terminal device according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as phone, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, SMS, etc.
  • the application package of the Changlian application APP mentioned in the foregoing content may also be located in the application layer.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the AI engine module 21 on the terminal device side mentioned in the foregoing content may also be located at the application framework layer.
  • the application framework layer can include a window manager, content provider, view system, telephony manager, resource manager, notification manager, etc.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • Data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the telephony manager is used to provide the communication function of the terminal device, for example, the management of call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the terminal device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part consists of the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules, for example: a surface manager, media libraries (Media Libraries), a 3D graphics processing library (e.g.: OpenGL ES), a 2D graphics engine (e.g.: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the following embodiments of the present application will take a terminal device having the structures shown in FIG. 1e and FIG. 1f as an example.
  • the terminal-device-side AI engine module 21 deployed on the terminal device 201 will be used as an example for description.
  • the following solutions, described as executed by the AI engine module deployed on the terminal device side, can also be executed by the AI engine module deployed on the cloud, or executed collaboratively by the terminal-device-side AI engine module and the cloud AI engine module (for example, the acquisition module 2121 on the terminal device side of the terminal device 201 can collect user information and upload it through the network to the decision module 2222 of the cloud AI engine module for decision-making); this is not limited in the embodiments of the present application.
  • FIG. 2a exemplarily shows a schematic flowchart of a content push method provided by an embodiment of the present application. As shown in FIG. 2a, the method includes:
  • Step 321: the terminal device acquires first information, where the first information includes location information of the terminal device;
  • Step 322: when the first information satisfies a preset first condition, the terminal device displays second information; the second information includes content to be pushed, or a link to the content to be pushed, associated with the first information; the first condition includes: the location corresponding to the location information is located in a first area, and the type of the first area belongs to one of the preset area types.
  • in this way, the query steps in the process of the user actively querying the second information can be reduced, which reduces the number of times the user inputs commands and thereby the number of times the user interacts with the terminal device.
  • the second information includes: a scenic spot guide of the first area. Since the location of the terminal device is determined to belong to a scenic spot, the scenic spot guide is actively pushed to the user, for example, through the Changlian application. In this way, the step of the user inquiring about the scenic spot guide is omitted, and the user can directly obtain information related to his or her current situation.
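The first condition (the reported location lies in a first area whose type is among the preset area types) can be sketched as a bounding-box lookup. The area database, coordinates, and function name below are illustrative assumptions, not data from the patent:

```python
# Hypothetical area database: name, type, and a lat/lon bounding box per area.
AREAS = [
    {"name": "Forbidden City", "type": "scenic spot",
     "lat": (39.913, 39.922), "lon": (116.390, 116.403)},
]
PRESET_AREA_TYPES = {"scenic spot"}


def first_condition_met(lat, lon):
    """Return the matching area if the location lies in an area of a preset type."""
    for area in AREAS:
        lat_lo, lat_hi = area["lat"]
        lon_lo, lon_hi = area["lon"]
        if (lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi
                and area["type"] in PRESET_AREA_TYPES):
            return area
    return None


match = first_condition_met(39.916, 116.397)   # inside the Forbidden City box
no_match = first_condition_met(31.0, 121.0)    # elsewhere: condition not met
```

Only when a match is found would the terminal device go on to display the second information, such as the scenic spot guide.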
  • the second information comes from the first server.
  • the terminal device sends a first request to the first server, where the first request is used to request to obtain the second information; the terminal device receives a first response, and the first response includes the second information. For example, if the first request is for querying the scenic spot guide of the scenic spot where the terminal device is currently located, the first server returns the scenic spot guide of the scenic spot to the terminal device as the second information.
  • querying the scenic spot guide can be understood as a predicted intention, that is, the terminal device predicts, according to its current location, that the user wants to query the scenic spot guide, and then sends a first request to the first server.
  • the first request is used to request the first server to execute the predicted intention, that is, the first server queries the scenic spot guide of the scenic spot, for example, from a database.
  • the scenic spot guide obtained by the first server is then returned to the terminal device as the second information.
  • the second information comes from information pre-stored by the terminal device. In this way, the speed at which the terminal device acquires the second information can be accelerated.
  • the terminal device may display the second information on the chat interface of the Changlian application.
  • the terminal device may predict the user's intention according to the first information.
  • the user's intention that is actively predicted according to the information is referred to as the predicted intention.
  • a first request for requesting the first server to perform the prediction intent may be sent to the first server, and a first response returned by the first server may be received.
  • the first response includes second information obtained after the first server executes the predicted intent.
  • a first message carrying the second information is sent to the interface module of the Changlian application of the terminal device, so that the terminal device displays the second information on the chat interface of the Changlian application. Since the predicted intention of the user can be determined according to the first information of the terminal device, the result of executing the predicted intention can be displayed directly, which reduces the number of times the user inputs commands and thereby the number of times the user interacts with the terminal device.
  • Fig. 2b exemplarily shows a schematic flowchart of a content push method provided by an embodiment of the present application. As shown in Fig. 2b, the method includes:
  • Step 301: the AI engine module obtains first information of the first terminal device.
  • the terminal device may send the first information of the first terminal device to the AI engine module through the transceiver module.
  • the first terminal device in the embodiment of the present application may be the terminal device 201 in the foregoing FIG. 1a to FIG. 1d.
  • the AI engine module may be an AI engine module on the side of the first terminal device, and the AI engine module may collect the first information of the first terminal device.
  • the AI engine module may be an AI engine module in the cloud, and the AI engine module may query the first information by sending a query request to the first terminal device.
  • the first information belongs to a first type of information.
  • one or several types of information can be preset, and then the specified type of information can be obtained.
  • the preset types of information can include: location information of the terminal device, chat records on the Changlian application, meeting schedules, courier information, and more.
  • the AI engine module can periodically obtain the location information of the terminal device.
  • Step 302: the AI engine module determines the predicted intention of the first user according to the first information.
  • the acquisition module of the AI engine module may acquire the first information and send it to the decision-making module; the decision-making module determines the predicted intention of the first user according to the first information, and sends the predicted intention to the data mining module of the AI engine module.
  • the correspondence among the type of information, the preset condition, and the intent can be preset; in this correspondence, the first type of information, the first preset condition, and the first intent are related to one another, which can also be described as the three having a corresponding relationship.
  • when the AI engine module determines that the first information satisfies the first preset condition, it determines the first intent as the predicted intention of the first user according to the preset corresponding relationship between the first preset condition and the first intent.
  • the corresponding relationship between an intent and a slot is also preset in the embodiments of the present application.
  • according to the preset corresponding relationship between the first intent and the first slot, the first slot corresponding to the predicted intention is determined, and the content of the first slot is determined according to the first information.
  • the first type of information includes: location information of the first terminal device.
  • the first preset condition includes: the area indicated by the first type of information belongs to a scenic spot.
  • the first intention includes: querying the scenic spot guide.
  • the AI engine module can periodically obtain the location information of the terminal device. When it is determined that the location indicated by the current location information of the terminal device belongs to a scenic spot, such as the Forbidden City, the user's predicted intention is: “query the scenic spot guide”, and the first slot “place” is determined to be the “Forbidden City”.
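The preset correspondences of step 302 (information type plus preset condition mapping to an intent, and intent mapping to a slot filled from the first information) can be sketched as a rule table. The rule contents below only restate the Forbidden City example; the structure itself is an illustrative assumption:

```python
# Illustrative correspondence: (information type, condition) -> intent,
# plus intent -> slot and how the slot content is taken from the first information.
RULES = [
    {
        "info_type": "location",
        "condition": lambda info: info.get("area_type") == "scenic spot",
        "intent": "query scenic spot guide",
        "slot": "place",
        "slot_value": lambda info: info["area_name"],
    },
]


def predict_intent(info_type, first_information):
    """Return (predicted intention, slot name, slot content), or None if no rule fires."""
    for rule in RULES:
        if rule["info_type"] == info_type and rule["condition"](first_information):
            return (rule["intent"], rule["slot"], rule["slot_value"](first_information))
    return None


prediction = predict_intent(
    "location", {"area_type": "scenic spot", "area_name": "Forbidden City"})
```

When the condition is not met (for example, the location is an ordinary office building), no intention is predicted and nothing is pushed.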
• the AI engine module can determine the user's predicted intent according to the obtained first information of the terminal device without requiring the user to issue a user command, so the number of times the user inputs commands can be reduced, thereby reducing the number of interactions between the user and the terminal device.
  • Step 303 the AI engine module sends a first request to the first server, where the first request is used to request the first server to execute the prediction intent.
  • the first server receives the first request.
  • the first server may be the content server in the above-mentioned FIG. 1a to FIG. 1c, for example, may be the content server 232.
  • the AI engine module can determine the services provided by each content server, and then query the corresponding content server for the required content according to the services that need to be queried.
  • the first request may be sent by the data mining module in the AI engine module.
  • Step 304 the first server executes the prediction intention to obtain the second information.
• in step 304, if the predicted intent corresponds to the first slot, the server may execute the predicted intent based on the content of the first slot to obtain the second information.
  • the second information is obtained after the first server executes the prediction intention, and the prediction intention is obtained according to the first information.
  • Step 305 the first server sends a first response to the AI engine module, where the first response carries the second information.
  • the AI engine module receives the first response returned by the first server, and the data mining module of the AI engine module may receive the first response.
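Steps 303 to 305 above (first request, execution of the predicted intent, first response carrying the second information) can be sketched as follows. The message shapes, field names and functions are illustrative assumptions, not the patent's actual interface.

```python
import json

# Hypothetical sketch of steps 303-305: the data mining module sends a first
# request carrying the predicted intent and slot content to the first server;
# the server executes the intent (step 304) and returns a first response
# carrying the second information (step 305).

def build_first_request(intent, slots):
    """Step 303: assemble the first request sent to the first server."""
    return json.dumps({"type": "first_request", "intent": intent, "slots": slots})

def content_server_handle(request_json):
    """Stand-in for the first server executing the predicted intent (step 304)."""
    req = json.loads(request_json)
    if req["intent"] == "query_scenic_spot_guide":
        guide = f"Guide for {req['slots']['place']}: opening hours, routes, tickets."
        return json.dumps({"type": "first_response", "second_information": guide})
    return json.dumps({"type": "first_response", "second_information": None})
```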
• Step 306 the AI engine module sends a first message to the interface module of the Changlian application of the first terminal device, where the first message carries the second information.
  • the first message is used to make the first terminal device display the second information on the first chat interface of the Changlian application.
  • the data mining module of the AI engine module may send the first message to the AI interface module integrated with the Changlian application program.
  • the first terminal device receives the first message through the AI interface module integrated in the Changlian application.
  • Step 307 the first terminal device displays the second information on the first chat interface of the Changlian application.
  • a Changlian application can be installed on the first terminal device, and the Changlian application integrates an artificial intelligence AI interface module.
• an AI interface module 252 is integrated in the Changlian application module 25 on the terminal device 201.
• the Changlian application module 25 further includes a message processing module 251, and the message processing module 251 can be used for sending and receiving messages of the Changlian application.
  • the AI interface module 252 is used for sending and receiving messages with the AI engine module.
• the data mining module of the AI engine module can send the first message to the AI interface module integrated in the application module of the Changlian application of the first terminal device; the AI interface module then sends the second information in the received first message to the message processing module 251, which displays it on the chat interface of the Changlian application.
  • the first terminal device may render the received second information, so as to display it in a card form on the chat interface of the first terminal device.
• the first terminal device may include a rendering module. The AI interface module sends the second information to the rendering module, and the rendering module may render the received second information according to a preset template to obtain the third information, which is returned to the AI interface module. Further, the message processing module of the Changlian application receives the third information from the AI interface module; the third information is obtained by rendering the second information.
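The rendering step described above (second information rendered with a preset template to obtain the third information in card form) might look like this minimal sketch; the template layout and names are assumptions.

```python
# Hypothetical sketch of the rendering module: the text-form second information
# is combined with a preset card template to produce the "third information"
# displayed on the chat interface. The template layout is illustrative.

CARD_TEMPLATE = "[{title}]\n{body}\n[View Details]"

def render_card(second_information, title="Xiaoyi"):
    """Return the third information (a card) built from the second information."""
    return CARD_TEMPLATE.format(title=title, body=second_information)
```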
  • the first chat interface is a chat interface between the intelligent assistant and the first user.
  • the first user is a user logged in on the Changlian application of the first terminal device.
  • the first chat interface is a chat interface between the first user and the second user.
  • the second user is a user logged in on the Changlian application of the second terminal device.
  • the first information may include chat records on the first chat interface of the Changlian application.
  • the AI engine module may determine the predicted intention of the first user according to the chat record on the first chat interface.
  • step 308 is further included after step 307 .
• Step 308 when the first chat interface is the chat interface between the first user and the second user, the first terminal device sends a second message to the server of the Changlian application, where the second message carries the second information and is used to make the server of the Changlian application transmit the second information to the second terminal device.
  • Step 309 After the server of the Changlian application program transmits the second information to the second terminal device, the second terminal device displays the second information on the chat interface of the first user and the second user of the Changlian application program.
  • step 308 can also be replaced with the following content:
• the first terminal device sends a second message to the server of the Changlian application, where the second message carries the third information, and the second message is used to make the server of the Changlian application transmit the third information to the second terminal device.
  • the second terminal device displays the third information on the chat interface of the first user and the second user of the Changlian application.
• the above-mentioned steps 308 and 309 only take two users as an example; the first chat interface may also be a chat interface of three or more users. In this case, the third information can be transmitted through the server of the Changlian application to the terminal device of each member in the first chat interface, so that all members participating in the first chat interface can see the third information.
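Steps 308 and 309, generalized to a chat of three or more members, can be sketched as follows; the `ChanglianServer` class and its `relay` method are hypothetical names, not the patent's implementation.

```python
# Hypothetical sketch of steps 308-309 for a group: the first terminal device
# sends the second message to the Changlian application server, which delivers
# the carried information to the terminal device of every other chat member.

class ChanglianServer:
    def __init__(self):
        self.inboxes = {}  # member name -> list of delivered messages

    def relay(self, sender, members, information):
        """Deliver `information` to every member of the chat except the sender."""
        for member in members:
            if member != sender:
                self.inboxes.setdefault(member, []).append(information)
```

For example, relaying a card from one user of a three-person chat delivers it to the other two members, so all participants see the same pushed content.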
• the predicted intent of the user can be determined according to the first information of the terminal device, and the result of executing the predicted intent can be displayed; the number of times the user inputs commands can be reduced, thereby reducing the number of interactions between the user and the terminal device.
• since the user's intent can be actively predicted and displayed according to the first information of the terminal device in the present application, the user does not need to actively wake up the intelligent assistant, which can further reduce the number of interactions between the user and the terminal device. Moreover, the intelligent assistant is integrated at the system layer, without requiring the user to add it in the Changlian application.
  • the smart assistant and the Changlian app technology can be better integrated, making it easier for group users to communicate with each other.
• the data mining module of the AI engine module can search for the corresponding content server according to the intent, so as to obtain the corresponding content from the content server. That is, the intelligent assistant in the embodiment of the present application can query various types of information, such as weather information and epidemic information, without the user adding various types of robots to the group; the user can query various types of information through Xiaoyi alone, which further simplifies the user's operations.
  • the user does not need to input a user command, but the intelligent assistant actively pushes the information that the user may need.
  • several ways of waking up the smart assistant can also be preset, and the user wakes up the smart assistant in a preset way in the Changlian application and sends a user command.
• after the AI engine module obtains the user command, it can identify the user's target intent through the target intent recognition module, fill the slot through the data mining module, query the corresponding content from the server, and return the queried content to the terminal device through the data mining module.
• the way for the user to send the user command may be subscribing to a service in a group. For example, the user may subscribe to the service of querying the weather forecast of Shanghai and, in a possible implementation manner, set a time for the reminder, such as 9 a.m.
• the AI engine module can determine that the user's target intent is to query the weather conditions at 9 a.m. every day, with the slot "location" filled as "Shanghai". The data mining module of the AI engine module can then send the query result to the terminal device, so that the terminal device displays the information in the group that subscribed to it.
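The subscription example above (a group subscribes to the Shanghai weather forecast with a 9 a.m. reminder) can be sketched as follows; the data structures and names are illustrative assumptions, not the patent's implementation.

```python
import datetime

# Hypothetical sketch: each subscription records the subscribing group, the
# target intent, the filled slots (e.g. location = "Shanghai"), and a reminder
# time; due() returns the subscriptions to execute at a given moment.

class SubscriptionService:
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, group, intent, slots, remind_at):
        self.subscriptions.append(
            {"group": group, "intent": intent, "slots": slots, "remind_at": remind_at}
        )

    def due(self, now):
        """Return the subscriptions whose reminder time matches `now`."""
        return [s for s in self.subscriptions if s["remind_at"] == now.time()]
```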
• the user can query the intelligent assistant for all-round life and work information, including but not limited to: convenient life, business and finance, education, food, games and entertainment, health, smart home, children and family, local services, video, music, news reading, native applications & settings, shopping comparison, social communication, sports, travel, question-and-answer search, weather, and more.
  • the content of the query is transmitted to the chat interface in a card format through the Changlian application system, so as to provide an intelligent experience.
• a separate session between the user and the intelligent assistant is also provided; as a resident intelligent assistant, it can perform intelligent recognition of the situation, recommend scene-based services, and the like, including but not limited to: flight and train information, weather warnings, birthday reminders, schedule and meeting reminders, credit card repayment, express delivery reminders, scenic spot information recommendation (travel assistant), sports and health data, etc.
  • the content recommended by the smart assistant can be displayed on a separate chat interface between the user and the smart assistant, or on the chat interface of groups subscribed to these recommendation services.
  • the intelligent assistant is called "Xiaoyi" as an example for introduction. In practical applications, the intelligent assistant may also have other names.
  • the following content refers to Scenario 1, Scenario 2 and Scenario 3.
  • the intelligent assistant will actively push the scenic spot guide to the user when it is determined that the user is visiting the scenic spot according to the obtained location information of the terminal device.
  • the intelligent assistant infers that when the user wants to watch a movie, it will actively push the movie theater information to the user.
• Scenario 3 when two users need to inquire about the nearby cinemas during a conversation, they can directly @Xiaoyi and instruct it to query the surrounding cinemas.
  • Xiaoyi displays the information to be displayed on the chat interface of the Changlian app, so that the smart assistant can be closely integrated with the Changlian app.
• FIG. 3 to FIG. 7 are schematic interface diagrams of several terminal devices provided by the embodiments of the present application.
• Scenario 1 will be described below with reference to FIG. 3 and FIG. 4, Scenario 2 will be introduced with reference to FIG. 5, and Scenario 3 will be introduced with reference to FIG. 6 and FIG. 7.
  • the intelligent assistant will actively push the scenic spot guide to the user when it is determined that the user is visiting the scenic spot.
• the terminal device may display the second information on the chat interface between the first user and the intelligent assistant in the Changlian application, where the first user is the user who logs into the Changlian application on the terminal device.
  • the intelligent assistant is integrated into the Changlian application.
  • the intelligent assistant can be displayed in the contact information of the Changlian application, in this case, the second information can be displayed on the first chat interface of the terminal device's Changlian application.
  • the second information is displayed on the first chat interface as chat content sent by the intelligent assistant.
• the intelligent assistant has been personified in the Changlian application. Users can chat with the intelligent assistant through the Changlian application, and the second information actively pushed by the terminal device can also be displayed as content pushed by the intelligent assistant.
  • the present application does not require the user to actively wake up the intelligent assistant, which can further reduce the number of interactions between the user and the terminal device.
• the following description takes as an example the case where the AI engine module 21 is deployed on the terminal device 201 side and the AI engine module 21 on the terminal device side executes the related solution.
  • the acquisition module 2121 of the predicted intention recognition module 212 can acquire the user's location information, and determine whether the user's location information belongs to a scenic spot according to a preset rule.
  • the information of the scenic spot may be preset, and if the location information of the user matches the information of a preset scenic spot, it is determined that the user is currently in the scenic spot.
• the data mining module 213 may send a query request to the content server, where the query request is used to query the scenic spot guide of the Forbidden City.
  • the data mining module 213 receives the query response returned by the content server, and the query response carries the scenic spot guide of the Forbidden City.
  • the data mining module 213 can send the scenic spot guide of the Forbidden City to the Changlian application module 25 through the AI interface module 252 in FIG. 1d.
• the scenic spot guide of the Forbidden City received by the AI interface module 252 is in text form and can be sent to the rendering module 253 for rendering.
• a template of the scenic spot guide can be preset, and the rendering module 253 processes the text-form scenic spot guide of the Forbidden City in combination with the template, thereby obtaining the rendered scenic spot guide of the Forbidden City, which is returned to the AI interface module 252.
  • the AI interface module 252 returns the obtained scenic spot guide of the Forbidden City to the message processing module 251 .
• the message processing module 251 sends the message, as Xiaoyi, to the user's terminal device in the Changlian application.
• FIG. 3 is a schematic diagram of the interface on which information from Xiaoyi is received when the user's terminal device is in the lock screen mode.
• the lock screen displays the content "You received a message from Xiaoyi". The message can carry some identifiers, such as the icon of the Changlian APP, so that the user knows that the message is received from Xiaoyi through the Changlian APP.
• the terminal device can open the Changlian application and display the single chat interface between the user and Xiaoyi, as shown in (b) of FIG. 3. On this interface, the user can see the scenic spot guide of the Forbidden City actively pushed by Xiaoyi.
• the scenic spot guide pushed by Xiaoyi can be displayed in card form. If the user needs to view the detailed information, the user can click the "View Details" area shown in (b) of FIG. 3.
• the user can also actively send commands to Xiaoyi. As shown in (c) of FIG. 3, the user can send a user command to Xiaoyi on the single chat interface with Xiaoyi: "Xiaoyi, recommend some restaurants near the Forbidden City".
  • (c) in FIG. 3 shows a schematic diagram of the interface for the user to edit the user command. After the user clicks the “send” button on the interface, the schematic interface diagram of the terminal device is shown in (d) in FIG. 3 .
• the target intent recognition module 211 in the AI engine module can obtain the user command through the distribution module 2111 and determine the target intent as "search restaurants" through the natural language understanding module 2112, and then perform slot matching through the data mining module 213, filling the slot "location" as "Forbidden City".
• the data mining module 213 can then query the content server for restaurants near the Forbidden City and return the obtained results to the Changlian application through the AI interface module 252; after rendering through the rendering module 253, the queried restaurants near the Forbidden City are displayed as Xiaoyi, as shown in (e) of FIG. 3.
  • restaurants near the Forbidden City can be displayed in cards, and the name, picture, rating, etc. of the restaurant can be displayed on the chat interface. If the user needs to know more detailed content of a restaurant, he can click on the area where the name of the restaurant belongs. In response to the click operation, the terminal device will display the detailed information of the restaurant, including the restaurant's address, phone number, signature dishes, user evaluations, etc.
  • the user can directly click the notification message on the lock screen to directly open the single chat interface between the user of the Changlian application and Xiaoyi.
  • the embodiment of the present application may additionally provide a method for the user to open an interface for chatting with Xiaoyi.
• as shown in (a) of FIG. 4, the lock screen interface displays "You have received a message from Xiaoyi". The user can unlock the terminal device; the unlocking method can be fingerprint unlocking, face recognition unlocking, password unlocking or the like, which is not limited.
• (b) in FIG. 4 shows a schematic diagram of the interface after the terminal device is unlocked. As shown in (b) of FIG. 4, the user's terminal interface may include multiple applications, such as an application for making calls and the Changlian application. In practical applications, there may also be other applications, which are not limited in the embodiments of the present application.
  • the terminal device can open the Changlian application program APP, and a schematic interface diagram is shown in (c) of FIG. 4 .
• in (c) of FIG. 4, it can be seen that the recently contacted contacts are displayed in the "Changlian application" tab, and the most recently contacted contacts can be displayed at the top.
  • the whole content or part of the content of the last message on the chat interface with the contact may also be displayed beside each contact.
• Xiaoyi's message session can be displayed in the "Changlian application" tab. The user can click the "Xiaoyi" option on the interface shown in (c) of FIG. 4, and in response to this operation, the terminal device opens the single chat interface between the user and Xiaoyi as shown in (b) of FIG. 3 above.
• the chat interface of the Changlian application further includes: a third chat interface between the first user and the second device; the second device is one of a smart phone, a smart screen, a smart speaker, a smart bracelet, and a tablet computer.
  • the method further includes: the first user sends third information on the third chat interface, and the terminal device sends the third information to the second device, so as to display the third information on the display screen of the second device.
• when the terminal device is a user's smartphone, the user can add other devices, such as smart screens, smart speakers, smart bracelets, etc.
• the screen projection scheme implemented in this way is relatively simple; for the user, it is similar to chatting with the smart screen, which can reduce the complexity of the user's operations.
• users can add their smart phones, smart screens, smart speakers, smart bracelets, tablet computers, smart watches, smart TVs, smart cameras, and other devices with communication functions to the instant messaging APP.
• when the instant messaging APP is the Changlian APP, the user can add devices such as smart watches, smart TVs, and smart cameras to the Changlian APP.
  • users can share videos, pictures, audio and other content with other devices through the Changlian app.
  • the user opens the Changlian application APP on the mobile phone, and the user opens the chat interface with "My TV" through the Changlian application APP.
• the content can be displayed on the screen of the smart TV corresponding to "My TV" in real time. It can be seen that the Changlian APP in the embodiment of the present application can realize instant communication between various terminal devices, which simplifies the way of sharing information between devices.
  • the intelligent assistant will actively push the scenic spot guide to the user when it is determined that the user is visiting the scenic spot.
  • the terminal device displays the second information on the chat interface with the user as Xiaoyi.
  • the Connect application includes at least one chat group.
  • the terminal device determines the first chat group that satisfies the preset second condition.
  • the terminal device displays the second information on the chat interface of the first chat group.
• the terminal device may send a second request to the second server, where the second request carries the second information and is used to request the second server to display the second information on the terminal devices logged in by the N second users.
  • the N second users may view the second information on the devices they log in to.
  • the terminal devices logged in by the N second users include at least one of the following: a smart phone, a smart large screen, a smart speaker, a smart bracelet, and a tablet computer. In this way, more types of terminal devices can be compatible.
  • the second condition includes at least one of the following:
• the members of the first chat group include the first user and N second users, the distance between each of M second users among the N second users and the first user is not greater than the distance threshold, N is a positive integer greater than 1, M is a positive integer not greater than N, and the ratio of M to N is not less than a preset value;
  • the subscription information corresponding to the first chat group includes the type of the second information
• the chat records of the first chat group within the preset time period involve the first area;
• the tag value of the first chat group matches the type of the second information.
• in a possible case, the members of the first chat group include the first user and N second users, the distance between each of the M second users among the N second users and the first user is not greater than the distance threshold, N is a positive integer greater than 1, M is a positive integer not greater than N, and the ratio of M to N is not less than a preset value.
• the preset value can be set to 50%. It can be seen that if the positions of at least half of the second users in a group are relatively close to the position of the first user, it can be predicted that most of the people in the group are in the same scene.
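The distance-and-ratio condition above can be sketched as follows, using a haversine great-circle distance; the distance threshold, the 50% preset value, and all names are illustrative assumptions.

```python
import math

# Hypothetical sketch of the second condition: among the N second users of a
# chat group, count M users within the distance threshold of the first user,
# and require M/N to be at least the preset value (50% here).

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def group_qualifies(first_user_pos, second_user_positions, threshold_km=1.0, ratio=0.5):
    n = len(second_user_positions)  # N second users
    m = sum(1 for p in second_user_positions
            if haversine_km(first_user_pos, p) <= threshold_km)  # M nearby users
    return n > 1 and m / n >= ratio
```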
• the information can be directly pushed to the chat interface of the chat group, so that all members of the chat group can see the information; this saves the first user the operation of separately sending the second information to other users, thereby further reducing the number of interactions between the user and the terminal device.
  • the subscription information corresponding to the first chat group includes the type of the second information.
• after the terminal device acquires the second information, it can push the second information to the first chat group. For example, if the scenic spot guide service is subscribed in the first chat group, when the second information is "scenic spot guide of the Forbidden City", the second information is pushed to the first chat group.
• for another example, the subscribed information may be health data of a certain user, such as heartbeat and blood pressure values, or a user health report obtained by analyzing data such as the user's heartbeat and blood pressure.
• the terminal device can autonomously acquire the chat records in the first chat group and perform semantic analysis on them, so as to determine whether words related to the first area have appeared in the chat records of the first chat group within the preset time period. If so, it can be inferred that most of the members in the first chat group may be located in the first area; based on this, the second information can be pushed in the first chat group, thereby further reducing the number of interactions between the user and the terminal device.
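The chat-record condition above can be sketched with simple keyword matching standing in for the semantic analysis; the keyword table and function names are illustrative assumptions.

```python
# Hypothetical sketch: scan the chat records of the first chat group within a
# preset time window for words related to the first area; a match triggers
# pushing the second information to the group. Real semantic analysis would be
# far richer than this keyword lookup.

AREA_KEYWORDS = {"Forbidden City": ["Forbidden City", "Palace Museum"]}

def chat_mentions_area(chat_records, area):
    """Return True if any recent chat record mentions the given area."""
    keywords = AREA_KEYWORDS.get(area, [area])
    return any(kw in record for record in chat_records for kw in keywords)
```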
• a chat group in the Changlian application can have a tag value, which can indicate the social relationship of the members of the group; for example, the tag value can be family group, work group, travel-buddy group, etc.
  • the tag value may be filled in by the user, or may be inferred from the content of chats between members, or may be inferred from the social relationship between members.
• when the tag value of a group matches the type of information, the information can be published to the group. For example, if the type of information is family health data, the information can be pushed to a chat group whose tag value is family group. For another example, when the type of information is a scenic spot guide, it can be pushed to the travel-buddy group.
  • the type of information matched by the tag value of a chat group can be preset.
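The tag-matching condition can be sketched as a preset table from tag values to pushable information types; the table contents and names are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch: a preset table maps each group tag value to the
# information types that may be pushed to groups carrying that tag.

TAG_TO_INFO_TYPES = {
    "family": {"family_health_data", "birthday_reminder"},
    "travel_buddy": {"scenic_spot_guide", "weather_warning"},
    "work": {"schedule_meeting_reminder"},
}

def groups_to_push(info_type, groups):
    """Return the groups (name -> tag mapping) whose tag matches the info type."""
    return [g for g, tag in groups.items()
            if info_type in TAG_TO_INFO_TYPES.get(tag, set())]
```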
  • the intelligent assistant infers that when the user wants to watch a movie, it will actively push the cinema information to the user.
• the terminal device autonomously obtains the chat records in the Changlian application, analyzes the chat records to predict the user's predicted intent, and displays, in the Changlian application according to the predicted intent, the content to be pushed or a link to the content to be pushed associated with the predicted intent.
• the chat records in the Changlian application can be analyzed autonomously so as to predict the user's predicted intent and then push content. It can be seen that this solution does not require the user to actively wake up the intelligent assistant and send an inquiry to it, so the solution can reduce the number of user input commands and the number of interactions between the user and the terminal device.
  • the Connect application includes one or more chat groups, and one chat group includes at least two users.
• the terminal device can acquire the chat records in the chat group, analyze them to predict the user's predicted intent, and then push content or content links as an intelligent assistant on the chat interface of the chat group. In this way, the information actively pushed by the intelligent assistant can be seen by each user in the group, which saves the users in the group the operation of forwarding the information to each other.
  • FIG. 5 shows a schematic diagram of an interface after the terminal device is unlocked.
• the user's terminal interface may include multiple applications, such as an application for making calls and the Changlian application. In practical applications, there may also be other applications, which are not limited in the embodiments of the present application.
  • the terminal device can open the Changlian application program APP, and a schematic interface diagram is shown in (b) of FIG. 5 .
• in (b) of FIG. 5, it can be seen that the recently contacted contacts are displayed in the "Changlian application" tab.
• a contact can correspond to one or more icons, wherein the icon 401 means that two users can conduct a video chat through the Changlian application, and the icon 402 means that two users can chat through the chat interface of the Changlian application, on which content such as text, audio or video can be sent.
  • the interface diagram displayed by the terminal device is shown in (d) in FIG. 5
• (d) in FIG. 5 is the chat interface between the user and Lili, on which the user can send chat content to Lili; as shown in (d) of FIG. 5, the user sends "Lili, let's go to the movies together?".
  • the user sends a chat record of "Lili, let's go to the movies together?” on the chat interface with Lili, which can be acquired by the acquisition module 2121 of the predicted intent recognition module 212 in FIG. 1c.
• the decision-making module determines, according to the chat record, that the predicted intent is "query the movie theater" and that the corresponding slot "location" is "the area near the current location". Further, the acquisition module 2121 can acquire the user's current location information, and the decision module 2122 then fills the slot "location" with the user's current location.
  • the data mining module 213 inquires the content server and returns the result to the terminal device 201 , and transmits it to the Changlian application program application module 25 through the AI interface module 252 .
• the message processing module 251 sends it, as Xiaoyi, to the chat interface between the user of the terminal device 201 and Lili, as shown in (e) of FIG. 5.
• when the message processing module 251 determines that the chat members of the chat interface also include Lili, the query result sent by Xiaoyi can be uploaded to the application server 242 through the network.
  • the application server 242 may also be referred to as the server of the Changlian application, and then the application server 242 sends the query result to Lili's terminal device.
  • the final displayed result is shown in (e) in Figure 5.
• after Xiaoyi sends the query result on the chat interface between the user and Lili, the user can see it on the user's own terminal device, and Lili can also see it on Lili's terminal device.
  • the second server mentioned in the embodiments of this application may refer to an application server.
• FIG. 5 shows a schematic diagram of the interface on which Lili sends the chat content "Wow, this function is really cool!".
• Scenario 3 when two users need to inquire about the nearby movie theaters during a conversation, they can directly @Xiaoyi and instruct it to query the surrounding movie theaters.
• FIG. 6 is a schematic interface diagram of receiving information from Lili when the user's terminal device is in the lock screen mode. As shown in (a) of FIG. 6, the content "You got a message from Lili" is displayed.
• the user can directly click on the piece of information; in response to the user's click operation, the terminal device can open the Changlian application and display the interface on which the user chats with Lili, as shown in (b) of FIG. 6.
• the user can actively send commands to Xiaoyi. As shown in (c) of FIG. 6, the user can directly send the user command to Xiaoyi on the chat interface with Lili: "Okay, @Xiaoyi, recommend nearby cinemas".
  • (c) in FIG. 6 shows a schematic diagram of an interface for a user to edit a user command, after the user clicks the "Send" button on the interface.
  • The target intent recognition module 211 in the AI engine module can obtain the user command through the distribution module 2111 and, through the natural language understanding module 2112, determine the target intent as "query nearby movie theaters".
  • The data mining module 213 performs slot matching.
  • The user's location information can be further obtained through the data mining module 213, and the user's location information is determined as the content of the slot "location". Further, the data mining module 213 can query the content server for nearby movie theaters, return the obtained results to the Changlian application through the AI interface module 252 and, after rendering through the rendering module 253, display the queried nearby movie theaters in the identity of Xiaoyi, as shown in (d) of FIG. 6.
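The slot-matching step described above (fill the "location" slot from the device's own location, then query a content server) can be sketched in a few lines. This is a minimal illustration, not the patent's actual implementation; the function names, the in-memory "catalog" standing in for the content server, and the sample data are all assumptions:

```python
def fill_slots(intent: str, required_slots: list[str], device_location: str) -> dict:
    """Fill each required slot; the 'location' slot falls back to the
    terminal device's own location when the command did not specify one."""
    slots = {}
    for slot in required_slots:
        if slot == "location":
            slots[slot] = device_location
    return slots

def query_content_server(intent: str, slots: dict) -> list[str]:
    # Stand-in for a network call to a content server such as server 23;
    # the catalog entries are invented for illustration.
    catalog = {
        ("query nearby movie theaters", "Jiading, Shanghai"): ["Cinema A", "Cinema B"],
    }
    return catalog.get((intent, slots.get("location", "")), [])

slots = fill_slots("query nearby movie theaters", ["location"], "Jiading, Shanghai")
results = query_content_server("query nearby movie theaters", slots)
```

The results returned here would then be rendered (for example as cards) and posted to the chat interface in the identity of the smart assistant.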
  • When the message processing module 251 determines that the chat members of the chat interface also include Lili, the query result sent by Xiaoyi can be uploaded to the application server 242 through the network.
  • The application server 242 may also be referred to as the server of the Changlian application; the application server 242 then sends the query result to Lili's terminal device.
  • The final displayed result is shown in (d) of FIG. 6. After Xiaoyi sends the query result on the chat interface between the user and Lili, the user can see it on his terminal device, and Lili can also see it on her terminal device.
  • FIG. 6 shows a schematic diagram of the interface where Lili sends the chat content "Wow, this function is really cool!".
  • The user can directly click the notification message on the lock screen to directly open the chat interface between the user and Lili in the Changlian application.
  • The embodiment of the present application can also provide another way for the user to open the single-chat interface with Lili. As shown in (a) of FIG. 7, the user can unlock the terminal device; the unlocking method can be fingerprint unlocking, face recognition unlocking, password unlocking, or the like, and is not limited.
  • (b) of FIG. 7 shows a schematic diagram of an interface after the terminal device is unlocked. As shown in (b) of FIG. 7, the user's terminal interface may include multiple application programs, for example, an app for making calls and the Changlian app.
  • The terminal device can open the Changlian application, and the schematic interface diagram is shown in (c) of FIG. 7.
  • Recently contacted contacts are displayed in the tab of the "Changlian application", and the most recently contacted contacts can be displayed at the top.
  • There may be a mark on the avatar or name of a contact, for example a small black dot or a small bubble, which is not limited in this embodiment of the present application.
  • This mark only prompts the user that there is new unread information.
  • The user can click the "Lili" option on the interface shown in (c) of FIG. 7, and in response to this operation, the terminal device opens the single-chat interface between the user and Lili shown in (b) of FIG. 6 above.
  • "System" and "network" in the embodiments of the present application may be used interchangeably.
  • "At least one" means one or more, and "a plurality of" means two or more.
  • "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural.
  • The character "/" generally indicates that the associated objects are in an "or" relationship.
  • "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
  • For example, at least one of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
  • Ordinal numbers such as "first" and "second" mentioned in the embodiments of the present application are used to distinguish multiple objects, and are not used to limit the order, sequence, priority, or importance of those objects.
  • The terms "first server" and "second server" are only used to distinguish different servers, and do not indicate a difference in the priority or importance of the two servers.
  • Each network element in the above implementations includes corresponding hardware structures and/or software modules for executing each function.
  • Those skilled in the art will readily appreciate that the present invention can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the units and algorithm steps of each example described in the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of the present invention.
  • FIG. 8 is a schematic structural diagram of a communication device provided by an embodiment of the application.
  • The communication device may be a terminal device, or a chip or a circuit, such as a chip or circuit that can be provided in the terminal device.
  • the communication device 1301 may further include a bus system, wherein the processor 1302, the memory 1304, and the transceiver 1303 may be connected through the bus system.
  • the aforementioned processor 1302 may be the aforementioned processor 110 in FIG. 1e.
  • the memory 1304 in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • By way of example but not limitation, random access memory (RAM) is available in many forms, such as dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • the communication apparatus may include a processor 1302 , a transceiver 1303 and a memory 1304 .
  • the memory 1304 is used to store instructions
  • The processor 1302 is used to execute the instructions stored in the memory 1304, so as to implement the solutions relating to the terminal device in any one or more of the corresponding methods shown in FIG. 1a to FIG. 7 above.
  • The processor 1302 is configured to acquire first information, where the first information includes location information of the terminal device, and to display second information when the first information satisfies a preset first condition. The second information includes content to be pushed that is associated with the first information, or a link to the content to be pushed. The first condition includes: the location corresponding to the location information of the terminal device is located in a first area, and the type of the first area belongs to one of the preset area types.
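The "first condition" above combines a geometric test (the device's location lies inside a first area) with a type test (the area's type is one of the preset types). The sketch below is a hedged illustration only, with areas simplified to axis-aligned bounding boxes and entirely invented names, coordinates, and area types:

```python
# Preset area types that trigger a push (illustrative).
PRESET_AREA_TYPES = {"scenic area", "shopping mall", "cinema district"}

# (name, type, (min_lat, min_lon, max_lat, max_lon)) — illustrative data only.
AREAS = [
    ("Forbidden City", "scenic area", (39.913, 116.390, 39.922, 116.404)),
]

def first_condition_met(lat: float, lon: float) -> bool:
    """True when the device location falls inside some known first area
    whose type belongs to the preset area types."""
    for _name, area_type, (lo_lat, lo_lon, hi_lat, hi_lon) in AREAS:
        inside = lo_lat <= lat <= hi_lat and lo_lon <= lon <= hi_lon
        if inside and area_type in PRESET_AREA_TYPES:
            return True
    return False

assert first_condition_met(39.916, 116.397)      # inside the sample box
assert not first_condition_met(31.230, 121.473)  # elsewhere
```

A real deployment would use proper geofence polygons and a point-of-interest database rather than hard-coded boxes.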
  • the second information comes from the first server, or the second information comes from information pre-stored by the terminal device.
  • the processor 1302 is specifically configured to: display the second information on the chat interface of the Changlian application.
  • The Changlian application includes at least one chat group; the processor 1302 is specifically configured to: determine a first chat group that satisfies a preset second condition, and display the second information on the chat interface of the first chat group.
  • A transceiver 1303 is further included, configured to send a second request to the second server, where the second request carries the second information; the second request is used to request the second server to display the second information on the terminal device logged in to by a second user among the N second users.
  • the terminal devices logged in by the N second users include at least one of the following: a smart phone, a smart large screen, a smart speaker, a smart bracelet, and a tablet computer.
  • FIG. 9 is a schematic structural diagram of a communication apparatus provided by an embodiment of the present application.
  • the communication apparatus 1401 may include a communication interface 1403 , a processor 1402 and a memory 1404 .
  • the communication interface 1403 is used for inputting and/or outputting information;
  • The processor 1402 is used for executing computer programs or instructions, so that the communication apparatus 1401 implements the terminal-device-side method in the above-mentioned related solutions of FIG. 1a to FIG. 7, or so that the communication apparatus 1401 implements the server-side method in the above-mentioned related solutions of FIG. 1a to FIG. 7.
  • the communication interface 1403 can implement the solution implemented by the transceiver 1303 in FIG. 8
  • the processor 1402 can implement the solution implemented by the processor 1302 in FIG. 8
  • the memory 1404 can implement the memory 1304 in FIG. 8 .
  • the implemented solution will not be repeated here.
  • FIG. 10 is a schematic diagram of a communication apparatus provided by an embodiment of the present application.
  • The communication apparatus 1501 may be a terminal device, or may be a chip or a circuit, for example, a chip or circuit that can be provided in the terminal device.
  • the communication apparatus may correspond to the terminal device in the above method.
  • the communication apparatus may implement the steps performed by the terminal device in any one or more of the corresponding methods shown in FIG. 1a to FIG. 7 above.
  • the communication apparatus may include a processing unit 1502 , a communication unit 1503 and a storage unit 1504 .
  • The processing unit 1502 may be a processor or a controller, for example, a general-purpose central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with this disclosure.
  • a processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
  • the storage unit 1504 may be a memory.
  • the communication unit 1503 is an interface circuit of the device for receiving signals from other devices. For example, when the device is implemented as a chip, the communication unit 1503 is an interface circuit used by the chip to receive signals from other chips or devices, or an interface circuit used by the chip to send signals to other chips or devices.
  • the communication apparatus 1501 may be a terminal device in any of the foregoing embodiments, and may also be a chip.
  • the processing unit 1502 may be, for example, a processor
  • the communication unit 1503 may be, for example, a transceiver.
  • the transceiver may include a radio frequency circuit
  • the storage unit may be, for example, a memory.
  • the processing unit 1502 may be, for example, a processor
  • the communication unit 1503 may be, for example, an input/output interface, a pin, or a circuit.
  • the processing unit 1502 can execute computer-executed instructions stored in a storage unit.
  • The storage unit may be a storage unit in the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM).
  • The processing unit 1502 is configured to acquire first information, where the first information includes location information of the terminal device, and to display second information when the first information satisfies a preset first condition. The second information includes content to be pushed that is associated with the first information, or a link to the content to be pushed. The first condition includes: the location corresponding to the location information of the terminal device is located in a first area, and the type of the first area belongs to one of the preset area types.
  • each unit in the foregoing communication apparatus 1501 may refer to the implementation of the corresponding method embodiments, and details are not described herein again.
  • the division of the units of the above communication apparatus is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • the communication unit 1503 may be implemented by the transceiver 1303 shown in FIG. 8 above, and the processing unit 1502 may be implemented by the processor 1302 shown in FIG. 8 above.
  • The present application also provides a computer program product, including computer program code or instructions which, when run on a computer, cause the computer to execute the method of any one of the embodiments shown in FIG. 1a to FIG. 7.
  • The present application further provides a computer-readable storage medium that stores program code which, when run on a computer, causes the computer to execute the method of any one of the embodiments shown in FIG. 1a to FIG. 7.
  • the present application further provides a chip system, where the chip system may include a processor.
  • the processor is coupled to the memory and can be used to perform the method of any one of the embodiments shown in FIGS. 1a to 7 .
  • The chip system further includes a memory, used to store computer programs (also called code or instructions).
  • the processor is used to call and run the computer program from the memory, so that the device installed with the chip system executes the method of any one of the embodiments shown in FIG. 1a to FIG. 7 .
  • the present application further provides a system, which includes the aforementioned one or more terminal devices and one or more servers.
  • a computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or by wireless means.
  • a computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • Useful media may be magnetic media (for example, a floppy disk, hard disk, or magnetic tape), optical media (for example, a high-density digital video disc (DVD)), semiconductor media (for example, a solid state disk (SSD)), or the like.
  • The apparatus in each of the above apparatus embodiments corresponds to the terminal device or the server in the method embodiments, and the corresponding steps are performed by the corresponding modules or units; for example, the communication unit (transceiver) performs the receiving or sending steps in the method embodiments, and steps other than sending and receiving can be performed by the processing unit (processor).
  • For the functions of specific units, reference may be made to the corresponding method embodiments.
  • the number of processors may be one or more.
  • a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a computing device and the computing device may be components.
  • One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
  • these components can execute from various computer readable media having various data structures stored thereon.
  • A component may, for example, communicate through local and/or remote processes based on a signal having one or more data packets (for example, data from two components interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet interacting with other systems via signals).
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of units is only a logical function division.
  • There may be other division methods; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium.
  • In essence, the technical solution of the present application, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

A content push method, apparatus, storage medium, and chip system, used to reduce the number of interactions between a user and a terminal device. In this application, the terminal device obtains first information, which includes location information of the terminal device. When the first information satisfies a preset first condition, the terminal device displays second information. The second information includes content to be pushed that is associated with the first information, or a link to that content. The first condition includes: the location corresponding to the location information of the terminal device is within a first area, and the type of the first area belongs to one of the preset area types. Because the second information can be pushed based on the location information of the terminal device, the query steps a user would otherwise perform to actively look up the second information can be reduced, thereby reducing the number of commands the user must enter and, in turn, the number of interactions between the user and the terminal device.

Description

A content push method, apparatus, storage medium, and chip system
Cross-reference to related applications
This application claims priority to the Chinese patent application filed with the China Patent Office on October 22, 2020, with application number 202011142477.5 and entitled "Instant-messaging-based information transmission method, device, and storage medium", the entire contents of which are incorporated herein by reference; this application also claims priority to the Chinese patent application filed with the China Patent Office on December 17, 2020, with application number 202011502425.4 and entitled "Content push method, apparatus, storage medium, and chip system", the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the field of communications, and in particular to a content push method, apparatus, storage medium, and chip system.
Background
Human-machine dialog is widely used in people's daily lives, for example in chatbots, customer-service robots, smart speakers, and voice assistants. Human-machine dialog has a wide range of application scenarios and can be used directly in specific business processes, for example, hotel booking, flight booking, and train-ticket booking services.
In the prior art, the user needs to wake up the chatbot in the mobile phone in a specific way, and the system provides a dedicated, fixed human-machine dialog interface. When the user wakes up the chatbot, the terminal device can open this fixed interface, on which the user can converse with the chatbot.
The specific steps of an existing human-machine interaction scenario are as follows:
(1) The user wakes up the chatbot in the mobile phone in a preset way;
(2) The terminal device opens the fixed interface for chatting with the chatbot;
(3) The user enters a command, either a voice command or a text command.
The command includes an intent and slot content. The intent corresponds to a function, and the slots correspond to the parameters required to complete that function. For example, if the user enters the command "Query the weather in Jiading District, Shanghai", the following can be recognized from the command: the user's intent is "query the weather", and the slots corresponding to this intent include: location. From the command, the content of the slot "location" can be determined to be "Jiading, Shanghai". In other words, "location" is the slot corresponding to the intent "query the weather"; a slot may also be called an entity.
(4) The chatbot parses the command entered by the user to understand the intent of the user command, that is, to understand what function the user wants. Further, the slot content also needs to be recognized. Recognizing the content of a slot is a problem of word extraction and matching.
(5) A response is generated according to the intent of the command entered by the user.
As can be seen from the above, a human-machine interaction scenario requires many operations from the user; for example, the user must enter commands.
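Step (4) above — parsing a command into an intent and slot content — can be illustrated with simple keyword matching. This is only a toy sketch (real assistants use trained NLU models); the keyword table, slot vocabulary, and function name are invented for illustration:

```python
# Illustrative keyword-based intent and slot extraction (not a real NLU model).
INTENT_KEYWORDS = {"weather": "query the weather"}
INTENT_SLOTS = {"query the weather": ["location"]}
KNOWN_LOCATIONS = ["Jiading District, Shanghai", "Beijing"]

def parse_command(command: str):
    """Return (intent, slots) recognized from a text command."""
    lowered = command.lower()
    intent = next((i for k, i in INTENT_KEYWORDS.items() if k in lowered), None)
    slots = {}
    for slot in INTENT_SLOTS.get(intent, []):
        if slot == "location":
            # Slot recognition as word extraction and matching.
            for loc in KNOWN_LOCATIONS:
                if loc.lower() in lowered:
                    slots[slot] = loc
    return intent, slots

intent, slots = parse_command("Query the weather in Jiading District, Shanghai")
```

Here the intent "query the weather" and the slot content {"location": "Jiading District, Shanghai"} are recovered exactly as in the worked example above.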
Summary
This application provides a content push method, apparatus, storage medium, and chip system, used to reduce the number of interactions between a user and a terminal device.
In a first aspect, in this application a terminal device obtains first information, which includes location information of the terminal device. When the first information satisfies a preset first condition, the terminal device displays second information. The second information includes content to be pushed that is associated with the first information, or a link to that content. The first condition may include: the location corresponding to the location information of the terminal device is within a first area, and the type of the first area belongs to one of the preset area types. Because the second information can be pushed based on the location information of the terminal device, the query steps the user would otherwise perform to actively look up the second information can be reduced, thereby reducing the number of commands the user must enter and, in turn, the number of interactions between the user and the terminal device.
In a possible implementation, the terminal device can predict the user's intent from the first information; in the embodiments of this application, a user intent proactively predicted from information is called a predicted intent. The terminal device may further send to the first server a first request for requesting the first server to execute the predicted intent, and receive a first response returned by the first server. The first response includes the second information obtained after the first server executes the predicted intent. The terminal device then sends a first message to the interface module of the Changlian application of the terminal device, so that the terminal device displays the second information on the chat interface of the Changlian application. Because the user's predicted intent can be determined from the first information of the terminal device, and the result of executing the predicted intent can then be displayed, the number of commands the user must enter can be reduced and, in turn, the number of interactions between the user and the terminal device.
In a possible implementation, when the type of the first area is a scenic area, the second information includes: a travel guide for the scenic area of the first area. When it is determined that the terminal device is located within a scenic area, the travel guide for the scenic area is proactively pushed to the user, for example through the Changlian application. This saves the user the steps of looking up the travel guide; the user directly obtains information relevant to his or her current situation.
In a possible implementation, the second information comes from the first server. In a possible implementation, the terminal device sends a first request to the first server, where the first request is used to request the second information; the terminal device receives a first response that includes the second information. For example, if the first request is used to request the travel guide for the scenic area where the terminal device is currently located, the server returns the travel guide for that scenic area to the terminal device as the second information. In yet another possible implementation, looking up the travel guide can be understood as the predicted intent: the terminal device predicts from its current location that the user wants to look up the travel guide, and then sends the first request to the server. This can also be understood as the first request being used to request the first server to execute the predicted intent; that is, the first server looks up the travel guide for the scenic area, for example from a database, and returns the travel guide obtained by executing the predicted intent to the terminal device as the second information. Querying the second information from the first server saves storage space on the terminal device and also yields more up-to-date second information.
In a possible implementation, the second information comes from information pre-stored on the terminal device. This speeds up the terminal device's acquisition of the second information.
In a possible implementation, the terminal device can display the second information on a chat interface of the Changlian application, for example on the chat interface of the Changlian application of a first user, where the first user is the user logged in to the Changlian application on the terminal device. In a possible implementation, the smart assistant is integrated into the Changlian application. The smart assistant can be displayed in the contact information of the Changlian application; in this case, the second information can be displayed on a first chat interface of the Changlian application of the terminal device, where the second information is displayed on the first chat interface as chat content sent by the smart assistant. As can be seen, the smart assistant is personified in the Changlian application: the user can chat with the smart assistant through the Changlian application, and the second information proactively pushed by the terminal device can also be pushed in the identity of the smart assistant. Moreover, this application does not require the user to actively wake up the smart assistant, which further reduces the number of interactions between the user and the terminal device.
In a possible implementation, the method further includes: the terminal device autonomously obtains the chat records in the Changlian application, analyzes the chat records to predict the user's predicted intent, and, according to the predicted intent, displays through the Changlian application the content to be pushed that is associated with the predicted intent, or a link to that content. In this implementation, the chat records in the Changlian application are analyzed autonomously to predict the user's intent and then push content; as can be seen, this solution does not require the user to actively wake up the smart assistant and query it, so it can reduce the number of commands the user enters and the number of interactions between the user and the terminal device.
In a possible implementation, the Changlian application includes one or more chat groups, and a chat group includes at least two users. The terminal device can obtain the chat records in a chat group, analyze them to predict the user's predicted intent, and then push content, or a link to content, in the identity of the smart assistant on the chat interface of that chat group. In this way, the information proactively pushed by the smart assistant can be seen by every user in the group, which saves communication between users of the group.
In a possible implementation, the Changlian application includes at least one chat group. The terminal device determines a first chat group that satisfies a preset second condition, and displays the second information on the chat interface of the first chat group.
In a possible implementation, the second condition may include: the members of the first chat group include the first user and N second users, and the distance between the first user and each of M of the N second users is not greater than a distance threshold, where N is a positive integer greater than 1, M is a positive integer not greater than N, and the ratio of M to N is not less than a preset value. In a possible implementation, the preset value can be set to 50%. As can be seen, if the locations of at least half of the second users in a group are close to the location of the first user, it can be predicted that most people in the group are in the same scenario. In this case, the information can be pushed directly to the chat interface of that chat group so that all members of the group see it, which saves the user the operation of separately forwarding the second information to other users and thus further reduces the number of interactions between the user and the terminal device.
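The distance-based second condition described here can be sketched as a simple ratio test. This is a hedged illustration only: the function name, the hard-coded distances, and the default threshold values are assumptions; a real system would compute distances from reported location data:

```python
def second_condition_met(distances_km: list[float],
                         threshold_km: float = 1.0,
                         preset_ratio: float = 0.5) -> bool:
    """distances_km: distance of each of the N second users to the first user.
    True when M of the N users are within threshold_km and M/N >= preset_ratio."""
    n = len(distances_km)
    if n <= 1:          # the condition requires N > 1
        return False
    m = sum(1 for d in distances_km if d <= threshold_km)
    return m / n >= preset_ratio

assert second_condition_met([0.2, 0.5, 8.0, 0.9])      # 3 of 4 users nearby
assert not second_condition_met([5.0, 7.5, 0.3, 9.0])  # only 1 of 4 nearby
```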
In a possible implementation, the second condition may include: the subscription information corresponding to the first chat group includes the type of the second information. Because the user has subscribed to the type of the second information in the first chat group, the terminal device can push the second information to the first chat group when it obtains that information.
In a possible implementation, the second condition may include: the chat records of the first chat group within a preset time period mention the first area. In a possible implementation, the terminal device can autonomously obtain the chat records in the first chat group and perform semantic analysis on them to determine whether vocabulary related to the first area has appeared in the chat records of the first chat group within the preset time period. If so, it can be inferred that most members of the first chat group may be located in the first area; on this basis, the second information can be pushed in the first chat group, further reducing the number of interactions between the user and the terminal device.
In a possible implementation, the second condition includes: the tag value of the first chat group matches the type of the second information. For example, a chat group in the Changlian application may have a tag value that indicates the social relationship of the members of the group; for example, the tag value may be a family group, a work group, or a travel-buddy group. The tag value may be entered by the user, inferred from the chat content between members, or inferred from the social relationships between members. When the tag value of a group matches the type of a piece of information, that information is suitable for publishing in that group. For example, if the type of a piece of information is family health data, the information can be pushed to the family chat group; as another example, if the type of a piece of information is a scenic-spot guide, it can be pushed to the travel-buddy group. The types of information matched by the tag value of a chat group may be preset.
In a possible implementation, after the terminal device displays the second information on the chat interface of the first chat group, the terminal device sends a second request to the second server, where the second request carries the second information and is used to request the second server to display the second information on the terminal devices logged in to by the second users among the N second users. In this way, the N second users can view the second information on the devices to which they are logged in.
In a possible implementation, the terminal devices logged in to by the N second users include at least one of the following: a smartphone, a smart large screen, a smart speaker, a smart band, and a tablet computer. In this way, many types of terminal devices can be supported.
In a possible implementation, the chat interface of the Changlian application further includes: a third chat interface between the first user and a second device, where the second device is one of a smartphone, a smart large screen, a smart speaker, a smart band, and a tablet computer. The method further includes: the first user sends third information on the third chat interface, and the terminal device sends the third information to the second device so that the third information is displayed on the display screen of the second device. For example, if the terminal device is the user's smartphone, the user can add other devices, such as a smart large screen, a smart speaker, or a smart band, to the Changlian application. When the user wants to display information on the smart large screen, the user can open the chat interface with the smart large screen through the Changlian application on the smartphone and send information, such as pictures, on that chat interface, thereby achieving a screen-casting effect. As can be seen, this way of screen casting is relatively simple; to the user it is like having a chat conversation with the smart large screen, which simplifies the user's operations.
In a second aspect, in this application a first server receives a first request used to request second information; the first server carries the second information in a second response and sends the second response to the terminal device. This lays the foundation for the terminal device to display the second information.
In a possible implementation, the first request received by the first server can be used to request the first server to execute a predicted intent. The first server executes the predicted intent and obtains the second information, carries the second information in a second response, and sends the second response to the terminal device. For example, if the first request is used to request the travel guide for the scenic area where the terminal device is currently located, the first server returns the travel guide for that scenic area to the terminal device as the second information. In yet another possible implementation, looking up the travel guide can be understood as the predicted intent: the terminal device predicts from its current location that the user wants to look up the travel guide and then sends the first request to the first server; this can also be understood as the first request being used to request the first server to execute the predicted intent, that is, the first server looks up the travel guide for the scenic area, for example from a database, and returns the travel guide obtained by executing the predicted intent to the terminal device as the second information.
Corresponding to any content push method of the first and second aspects, this application further provides a communication apparatus. The communication apparatus may be any sending-end or receiving-end device that transmits data wirelessly, for example, a communication chip, a terminal device, or a server (the first server or the second server). During communication, the sending-end device and the receiving-end device are relative. In some communication processes, the communication apparatus can serve as the above server or a communication chip usable for the server; in some communication processes, the communication apparatus can serve as the above terminal device or a communication chip usable for the terminal device.
In a third aspect, a communication apparatus is provided, including a communication unit and a processing unit, to execute any implementation of any content push method of the first and second aspects. The communication unit is used to perform functions related to sending and receiving. Optionally, the communication unit includes a receiving unit and a sending unit. In one design, the communication apparatus is a communication chip, and the communication unit may be an input/output circuit or port of the communication chip.
In another design, the communication unit may be a transmitter and a receiver.
Optionally, the communication apparatus further includes modules that can be used to execute any implementation of any content push method of the first and second aspects.
In a fourth aspect, a communication apparatus is provided, where the communication apparatus is the above terminal device or server (the first server or the second server), and includes a processor and a memory. Optionally, it further includes a transceiver. The memory is used to store a computer program or instructions, and the processor is used to call and run the computer program or instructions from the memory; when the processor executes the computer program or instructions in the memory, the communication apparatus is caused to execute any implementation of any content push method of the first and second aspects.
Optionally, there are one or more processors and one or more memories.
Optionally, the memory may be integrated with the processor, or the memory and the processor may be provided separately.
Optionally, the transceiver may include a transmitter and a receiver.
In a fifth aspect, a communication apparatus is provided, including a processor. The processor is coupled to a memory and can be used to execute the method of any one of the first and second aspects and any possible implementation thereof. Optionally, the communication apparatus further includes a memory. Optionally, the communication apparatus further includes a communication interface coupled to the processor.
In one implementation, the communication apparatus is a terminal device. When the communication apparatus is a terminal device, the communication interface may be a transceiver or an input/output interface. Optionally, the transceiver may be a transceiver circuit. Optionally, the input/output interface may be an input/output circuit.
In another implementation, the communication apparatus is a server (the first server or the second server). When the communication apparatus is a server (the first server or the second server), the communication interface may be a transceiver or an input/output interface. Optionally, the transceiver may be a transceiver circuit. Optionally, the input/output interface may be an input/output circuit.
In yet another implementation, the communication apparatus is a chip or a chip system. When the communication apparatus is a chip or a chip system, the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin, a related circuit, or the like on the chip or chip system. The processor may also be embodied as a processing circuit or a logic circuit.
In a sixth aspect, a system is provided, including the above terminal device and server (the first server or the second server).
In a seventh aspect, a computer program product is provided, including a computer program (also called code or instructions) which, when run, causes a computer to execute the method of any possible implementation of the first aspect, or causes the computer to execute the method of any implementation of the first and second aspects.
In an eighth aspect, a computer-readable storage medium is provided, storing a computer program (also called code or instructions) which, when run on a computer, causes the computer to execute the method of any possible implementation of the first aspect, or causes the computer to execute the method of any implementation of the first and second aspects.
In a ninth aspect, a chip system is provided, which may include a processor. The processor is coupled to a memory and can be used to execute the method of any one of the first and second aspects and any possible implementation thereof. Optionally, the chip system further includes a memory, used to store a computer program (also called code or instructions). The processor is used to call and run the computer program from the memory so that a device installed with the chip system executes the method of any one of the first and second aspects and any possible implementation thereof.
In a tenth aspect, a processing apparatus is provided, including an input circuit, an output circuit, and a processing circuit. The processing circuit is used to receive signals through the input circuit and transmit signals through the output circuit, so that the method of any one of the first and second aspects and any possible implementation thereof is implemented.
In a specific implementation process, the above processing apparatus may be a chip, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be transistors, gate circuits, flip-flops, various logic circuits, and the like. The input signal received by the input circuit may be received and input by, for example but not limited to, a receiver; the signal output by the output circuit may be, for example but not limited to, output to a transmitter and transmitted by the transmitter; and the input circuit and the output circuit may be the same circuit, used as an input circuit and an output circuit at different times. The embodiments of this application do not limit the specific implementation of the processor and the various circuits.
Brief description of the drawings
FIG. 1a is a schematic diagram of a system architecture provided by an embodiment of this application;
FIG. 1b is a schematic diagram of another system architecture provided by an embodiment of this application;
FIG. 1c is a schematic diagram of another system architecture provided by an embodiment of this application;
FIG. 1d is a schematic diagram of another system architecture provided by an embodiment of this application;
FIG. 1e is a schematic structural diagram of a terminal device provided by an embodiment of this application;
FIG. 1f is a schematic structural diagram of another terminal device provided by an embodiment of this application;
FIG. 2a is a schematic flowchart of a content push method provided by an embodiment of this application;
FIG. 2b is a schematic flowchart of a content push method provided by an embodiment of this application;
(a) of FIG. 3 is a schematic interface diagram of a terminal device applicable to scenario 1, provided by an embodiment of this application;
(b) of FIG. 3 is a schematic interface diagram of another terminal device applicable to scenario 1, provided by an embodiment of this application;
(c) of FIG. 3 is a schematic interface diagram of another terminal device applicable to scenario 1, provided by an embodiment of this application;
(d) of FIG. 3 is a schematic interface diagram of another terminal device applicable to scenario 1, provided by an embodiment of this application;
(e) of FIG. 3 is a schematic interface diagram of another terminal device applicable to scenario 1, provided by an embodiment of this application;
(a) of FIG. 4 is a schematic interface diagram of another terminal device applicable to scenario 1, provided by an embodiment of this application;
(b) of FIG. 4 is a schematic interface diagram of another terminal device applicable to scenario 1, provided by an embodiment of this application;
(c) of FIG. 4 is a schematic interface diagram of another terminal device applicable to scenario 1, provided by an embodiment of this application;
(a) of FIG. 5 is a schematic interface diagram of a terminal device applicable to scenario 2, provided by an embodiment of this application;
(b) of FIG. 5 is a schematic interface diagram of another terminal device applicable to scenario 2, provided by an embodiment of this application;
(c) of FIG. 5 is a schematic interface diagram of another terminal device applicable to scenario 2, provided by an embodiment of this application;
(d) of FIG. 5 is a schematic interface diagram of another terminal device applicable to scenario 2, provided by an embodiment of this application;
(e) of FIG. 5 is a schematic interface diagram of another terminal device applicable to scenario 2, provided by an embodiment of this application;
(f) of FIG. 5 is a schematic interface diagram of another terminal device applicable to scenario 2, provided by an embodiment of this application;
(a) of FIG. 6 is a schematic interface diagram of a terminal device applicable to scenario 3, provided by an embodiment of this application;
(b) of FIG. 6 is a schematic interface diagram of another terminal device applicable to scenario 3, provided by an embodiment of this application;
(c) of FIG. 6 is a schematic interface diagram of another terminal device applicable to scenario 3, provided by an embodiment of this application;
(d) of FIG. 6 is a schematic interface diagram of another terminal device applicable to scenario 3, provided by an embodiment of this application;
(e) of FIG. 6 is a schematic interface diagram of another terminal device applicable to scenario 3, provided by an embodiment of this application;
(a) of FIG. 7 is a schematic interface diagram of another terminal device applicable to scenario 3, provided by an embodiment of this application;
(b) of FIG. 7 is a schematic interface diagram of another terminal device applicable to scenario 3, provided by an embodiment of this application;
(c) of FIG. 7 is a schematic interface diagram of another terminal device applicable to scenario 3, provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of a communication apparatus provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of a communication apparatus provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of a communication apparatus provided by an embodiment of this application.
Detailed description
The terms involved in the embodiments of this application are explained first below.
(1) Terminal device.
There can be two types of terminal devices in the embodiments of this application. The first type of terminal device needs a display screen and can be used to display, on the display screen, information sent by the smart assistant. The second type of terminal device can be used to collect the user's information, that is, the user's information can be obtained from this terminal device; the second type of terminal device may or may not have a display screen.
In some embodiments of this application, the first type of terminal device may be a mobile phone, a tablet computer, a computer, a wearable device with a display screen and wireless communication capability (such as a smart watch), a smart screen, a smart router with a display screen, a vehicle-mounted device with a display screen and wireless communication capability, a smart speaker with a display screen and wireless communication capability, and the like. In some embodiments of this application, the second type of terminal device may be a mobile phone, a tablet computer, a computer, a wearable device with wireless communication capability (such as a smart watch), a vehicle-mounted device with wireless communication capability, a smart speaker with wireless communication capability, a smart screen, a smart router, and the like.
In a possible implementation, a terminal device can belong to both the first type and the second type; that is, it can be used both to obtain the user's information and to display information sent by the smart assistant. In another possible implementation, a terminal device may belong only to the second type and not to the first type; that is, it can only be used to obtain the user's information but cannot display the information pushed by the smart assistant. For example, a smart band without a screen can only collect data such as the user's heartbeat, and cannot display the information pushed by the smart assistant.
(2) User command.
In the field of human-machine dialog, a user command is input by the user; it may also be called a user requirement, a command, the user's command, and so on.
In the embodiments of this application, a user command may be one or a combination of voice, image, video, audio-video, text, and the like. For example, the user command is voice input by the user through a microphone, in which case it may also be called a "voice command"; as another example, the user command is text input by the user through a keyboard or virtual keyboard, in which case it may also be called a "text command"; as another example, the user command is an image input by the user through a camera together with "Who is the person in the image?" input through the virtual keyboard, in which case the user command is a combination of image and text; as another example, the user command is a piece of audio-video input by the user through a camera and microphone, in which case it may also be called an "audio-video command".
(3) Speech recognition.
Speech recognition technology, also called automatic speech recognition (ASR), computer speech recognition, or speech-to-text (STT), is a method of converting human speech into corresponding text by computer. When the user command is a voice command or a command containing voice, the user command can be converted into text through ASR.
(4) Natural language understanding (NLU).
Natural language understanding aims for the smart assistant to have the language understanding ability of a normal person, just as people do. One important function is intent recognition.
(5) Intent, predicted intent, and target intent.
An intent corresponds to a function, that is, what function the user needs. For distinction, the embodiments of this application divide intents into predicted intents and target intents. When "intent" is used in the embodiments of this application, the related description applies to both predicted intents and target intents; it can also be understood that "intent" is the superordinate concept of predicted intent and target intent.
A predicted intent in the embodiments of this application refers to a function the user may want that is predicted from the obtained user data without the user entering a command. For example, if the user's current location information is obtained and it is determined that the user is currently in the Forbidden City, which is a tourist scenic area, it can be predicted that the user's intent is "look up the scenic-area guide". According to the preset correspondence between intents and slots, the slot corresponding to this intent can be determined to be "location", and according to the user's current location information, the content of this slot can be determined to be "the Forbidden City". In this example, the predicted intent "look up the scenic-area guide" does not require the user to enter a command; the intent can be inferred purely from the obtained user information, thereby reducing the number of interactions between the user and the terminal device.
A target intent in the embodiments of this application refers to an intent determined by analyzing a user command. In a possible implementation, the user enters a "user command", and the function the user wants is then recognized from this "user command". Intent recognition can be understood as a semantic-expression classification problem; in other words, intent recognition is a classifier (also called an intent classifier) that determines which intent a user command belongs to. Commonly used intent classifiers include support vector machines (SVM), decision trees, and deep neural networks (DNN). The deep neural network may be a convolutional neural network (CNN), a recurrent neural network (RNN), or the like, and the RNN may include a long short-term memory (LSTM) network, a stacked recurrent neural network (SRNN), and the like.
The general flow of recognizing the "target intent" from the "user command" is as follows: first, the user command (a sequence of characters) is preprocessed, for example by removing punctuation and stop words from the corpus; second, a word embedding algorithm such as word2vec generates word vectors from the preprocessed corpus; then, an intent classifier (such as an LSTM) performs feature extraction, intent classification, and so on. In the embodiments of this application, the intent classifier is a trained model that can recognize intents in one or more scenarios, or recognize arbitrary intents. For example, the intent classifier can recognize intents in the flight-booking scenario, including booking a flight, filtering flights, querying flight prices, querying flight information, refunding a ticket, rebooking a ticket, querying the distance to the airport, and so on.
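The preprocess → vectorize → classify pipeline described above can be illustrated with a deliberately tiny stand-in: a bag-of-words overlap score replaces word2vec and the LSTM classifier. The example intents and training utterances are invented for illustration and do not come from the patent:

```python
import string

# Tiny invented training set: intent -> example utterances.
EXAMPLES = {
    "book a flight": ["book a flight to beijing", "i want a plane ticket"],
    "query the weather": ["what is the weather today", "weather in shanghai"],
}

def preprocess(text: str) -> set[str]:
    # Strip punctuation and lowercase; a real pipeline would also drop stop words.
    text = text.translate(str.maketrans("", "", string.punctuation)).lower()
    return set(text.split())

def classify(command: str) -> str:
    """Pick the intent whose examples share the most words with the command."""
    words = preprocess(command)
    def score(intent: str) -> int:
        return max(len(words & preprocess(ex)) for ex in EXAMPLES[intent])
    return max(EXAMPLES, key=score)

assert classify("Please book a flight for me") == "book a flight"
assert classify("How is the weather?") == "query the weather"
```

A production system would replace the overlap score with learned embeddings and a trained classifier, but the shape of the pipeline is the same.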
(6) Slot.
In some embodiments, the terminal device can store <intent, slot>, that is, the terminal device stores the correspondence between intents and slots so that it can quickly determine the slots corresponding to an intent. It should be understood that one intent may correspond to one or more slots, or to no slot. Table 1 exemplarily shows several possible correspondences between intents and slots.
Table 1 Correspondence between intents and slots
Figure PCTCN2021116865-appb-000001
The above stored correspondence between intents and slots can be stored using a Map data structure, where a Map is a container that stores elements by key and is implemented through arrays and linked lists.
The above takes the terminal storing the correspondence between intents and slots as an example; it should be understood that, in another implementation, the correspondence between intents and slots may be stored in a server (such as a cloud server), which is not limited in the embodiments of this application.
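The Map-based <intent, slot> correspondence can be sketched with a plain dictionary playing the role of the Map container. The entries below are illustrative stand-ins for Table 1, not the patent's actual table:

```python
# Illustrative <intent, slot> map; a Python dict stands in for the Map container.
INTENT_SLOT_MAP = {
    "look up the scenic-area guide": ["location"],
    "book a flight": ["departure time", "origin", "destination"],
    "query the weather": ["location"],
}

def slots_for_intent(intent: str) -> list[str]:
    """Key-based lookup: an intent may map to one or more slots, or to none."""
    return INTENT_SLOT_MAP.get(intent, [])

assert slots_for_intent("book a flight") == ["departure time", "origin", "destination"]
assert slots_for_intent("tell a joke") == []  # intent with no configured slots
```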
In the embodiments of this application, both predicted intents and target intents are intents. In a possible implementation, the slots corresponding to a predicted intent can be determined from the correspondence between intents and slots. In another possible implementation, the slots corresponding to a target intent can be determined from the correspondence between intents and slots.
If an intent is a predicted intent, the slots can be filled according to the obtained user information; for example, in the above example, the slot "location" can be filled with "the Forbidden City" according to the user's current location information. If an intent is a target intent, the slots can be filled at least according to the "user command".
One or more slots can be configured for an intent. For example, the intent "look up the scenic-area guide" has one slot, "location". As another example, the intent "book a flight" has the slots "departure time", "origin", and "destination".
Accurately recognizing slots requires slot types (Slot-Type). Taking the above example again: to accurately recognize the three slots "departure time", "origin", and "destination", the corresponding slot types are needed behind them, namely "time" and "city name". It can be said that a slot type is a structured knowledge base of specific knowledge, used to recognize and convert the slot information the user expresses colloquially. From a programming-language perspective, intent+slot can be seen as describing the user's need with a function, where the intent corresponds to the function, the slots correspond to the function's parameters, and slot_type corresponds to the parameter types.
The slots configured for an intent can be divided into required slots and optional slots, where a required slot must be filled to execute the user command, and an optional slot may or may not be filled when executing the user command. Unless otherwise stated, a slot in this application may be a required slot or an optional slot, or may be a required slot.
The above "book a flight" example defines three core slots: "departure time", "origin", and "destination". If the content a user needs to enter to book a flight is considered comprehensively, more slots can certainly be imagined, such as the number of passengers, airline, departure airport, and landing airport; the designer of the slots can design them based on the granularity of the intent.
(7)即时通信。
即时通信(Instant Messaging,IM)是指能够即时发送和接收互联网消息等的业务。用户之间可以通过即时通信的应用程序进行聊天。该即时通信的应用程序可以支持两个人的聊天,也可以支持一个用户与智能助手之间的单聊,还可以支持一个群组的群聊,一个群组包括三个或三个以上的群成员。智能助手也可以参与到一个群组的群聊中,并可以在群聊界面上发布信息。
随着智能电子设备的发展,各种即时通信应用程序(Application,APP)应运而生,用户通过即时通信可以与他人即时沟通。即时通信APP有多种,例如华为的畅连应用程序™、微信™等,本申请实施例中以畅连应用程序APP为例进行介绍。在一种可能的实施方式中,用户可以在畅连应用程序APP上进行注册,比如可以使用手机号进行注册,注册成功之后,可以和其他注册了畅连应用程序的用户互相添加好友,添加好友的用户之间可以通过畅连应用程序进行沟通。
(8)智能助手。
本申请实施例中智能助手无需单独添加,可以集成于终端设备的系统层。该实施方式可以进一步减少用户与智能助手互动时需执行的操作步骤。
在一种可能地实施方式中,云端AI引擎模块或终端设备侧AI引擎模块根据获取到的用户数据,推测出用户的预测意图,并从内容服务器获取到满足预测意图的内容之后,可以将该内容返回至终端设备,终端设备侧可以以智能助手的身份在聊天界面上展示该内容。
在另一种可能地实施方式中,可以预设几种唤醒智能助手的方式(例如可以在聊天界面@智能助手的名字,或者直接称呼智能助手的名字),用户可以采用预设的方式唤醒智能 助手并发布用户命令,进而云端AI引擎模块或终端设备侧AI引擎模块根据获取到的用户命令,确定出用户的目标意图,并从内容服务器获取到满足目标意图的内容之后,可以将该内容返回至终端设备,终端设备侧可以以智能助手的身份在聊天界面上展示该内容。
本申请实施例中智能助手也可以称为聊天机器人,本申请实施例中以智能助手的名字为“小艺”为例进行介绍,实际应用中,智能助手也可以有其他名称,本申请实施例中不做限制。
(9)用户界面(user interface,UI)。
用户界面是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。
应用程序的用户界面是通过java、可扩展标记语言(extensible markup language,XML)等特定计算机语言编写的源代码,界面源代码在电子设备上经过解析,渲染,最终呈现为用户可以识别的内容,比如图片、文字、按钮等控件。
例如,在查询电影院的场景中,图形用户界面(graphical user interface,GUI)可以显示多个卡片,也可以称为卡片化展示查询结果。以一个电影院卡片为一个控件为例来说明,一个电影院卡片可以用于描述一个电影院,一个电影院卡片显示的电影院的信息可能不是该控件对应的全部信息,当点击该电影院卡片时,终端设备可以输出该电影院卡片所指定的电影院的详细信息,控件对应的GUI信息即为该电影院的详细信息。在一种可能的实施方式中,可以将多个电影院的信息进行排序,比如可以依据电影院的评分等等,下述图5中的(f)中展示了一种可能的在终端设备的界面上由小艺卡片化展示多个电影院的界面示意图。关于对查询结果的渲染方式还可以有其他形式,本申请实施例不做限制。
基于上述内容,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
图1a示例性示出了本申请实施例适用的一种系统架构示意图,如图1a所示,该系统架构包括一个或多个终端设备,例如图1a所示的终端设备201、终端设备202和终端设备203。在图1a中以终端设备201为:用于展示智能助手发送的信息的终端设备为例进行示意。终端设备201、终端设备202和终端设备203均可以作为采集用户的数据的终端设备。
如图1a所示,该系统架构还可以包括一个或多个服务器,如图1a所示的信息收集服务器241、应用程序服务器242和内容服务器23。其中,内容服务器23中可以针对不同类型的内容设置不同的服务器,例如内容服务器232和内容服务器231,内容服务器例如可以有用于提供天气服务的内容服务器(数据挖掘模块可以从该内容服务器上查询天气状况)、可以提供百科服务的内容服务器,或者用于提供影视娱乐等内容的内容服务器等等。一个内容服务器可以用于提供一种或多种类型的服务,本申请实施例不做限制。
如图1a所示,信息收集服务器241可以用于存储各个终端设备上报的数据,例如可以采集终端设备203(终端设备203为智能手环)上报的心跳数据。信息收集服务器241可以为一个,也可以为多个,图中仅仅示例性的示出了一个。
应用程序服务器242可以为本申请实施例中提到的即时通信应用的应用服务器。通过即时通信应用,一个用户可以与智能助手之间单聊。通过即时通信应用多个用户之间也可以群聊。智能助手也可以与多个用户群聊,在群组中智能助手可以作为一个群聊成员参与群聊。且在群聊的应用场景中,终端设备可以将智能助手发送的信息发送至应用程序服务器242,进而通过应用程序服务器242发送至各个群组成员的终端设备,以使该群组的各 个群组成员均可以看到智能助手在群聊界面里展示的信息。
如图1a所示,本申请实施例还包括AI引擎模块,AI引擎模块英文可以写为engine。AI引擎模块可以部署在终端设备侧,例如图1a所示部署于终端设备201上的终端设备侧AI引擎模块21。其他终端设备上也可以部署终端设备侧AI引擎模块,图中仅仅以终端设备201部署有终端设备侧AI引擎模块21进行示意。在一种可能地实施方式中,AI引擎模块可以部署在能力比较强的终端设备侧,例如智能手机、平板电脑等。在另一种可能地实施方式中,AI引擎模块也可以部署在云端侧,例如云端AI引擎模块22。方案的具体处理流程可以由终端设备侧AI引擎模块21来处理,也可以由云端AI引擎模块22来处理。当终端设备侧部署有AI引擎模块,则可以在终端设备侧AI引擎模块21处理,如此可以减少终端设备与云端的交互次数,从而加快处理流程。
如图1a所示,终端设备侧AI引擎模块21包括目标意图识别模块211、预测意图识别模块212,以及数据挖掘模块213。其中,目标意图识别模块211可以用于根据用户输入的命令识别出用户的目标意图,目标意图识别模块211可以包括分发模块2111、语音识别模块2113和自然语言理解模块2112。其中,分发模块2111可以用于接收用户输入的命令,该命令可以是语音也可以是文本。如果是语音则可以通过语音识别模块2113将其转换为文本,之后将识别后的文本输入自然语言理解模块2112。如果是文本则可以直接输入自然语言理解模块2112。自然语言理解模块2112用于根据输入的文本识别出用户的目标意图,并将目标意图发送给数据挖掘模块213。数据挖掘模块213可以依据意图与槽位的对应关系,确定出目标意图对应的槽位,并填充槽位的信息,进而向对应的服务器去查询需要满足目标意图和槽位的信息的相关内容,并将查询到的相关内容返回至终端设备侧,以便展示给用户查看。
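上述“分发—语音识别—自然语言理解”的处理顺序可以示意如下。其中各函数均为占位实现,函数名与判断规则为为说明而假设的内容,并非各模块的实际接口:

```python
def asr(audio: bytes) -> str:
    # 语音识别模块的占位:将语音转换为文本(此处仅作示意)
    return audio.decode("utf-8")

def nlu(text: str) -> str:
    # 自然语言理解模块的占位:从输入的文本识别出目标意图
    return "查询景区攻略" if "攻略" in text else "未知意图"

def dispatch(command, is_voice: bool) -> str:
    # 分发模块:语音先经语音识别转为文本,文本则直接送入自然语言理解
    text = asr(command) if is_voice else command
    return nlu(text)
```

识别出的目标意图随后交由数据挖掘模块进行槽位填充与内容查询,此处不再展开。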
本申请实施例中的预测意图识别模块212也可以称为全场景智慧大脑,其可以包括获取模块2121和决策模块2122。获取模块用于收集用户的信息,例如用户的日程安排、地理位置、健康数据等信息。在一种可能的实施方式中,在收集用户的数据之前可以获得用户的授权。获取模块可以收集一个或多个终端设备上的数据,例如,获取模块2121虽然属于终端设备201上的模块,但除了可以收集终端设备201上的数据外,也可以收集其他终端设备(例如终端设备203)上的数据。在一种可能的实施方式中,终端设备203可以将数据上报至云端的信息收集服务器241,获取模块2121可以通过网络获取终端设备203上报的数据。决策模块2122依据获取模块2121获取的数据,确定出用户的预测意图,也就是说预测意图识别模块212确定出的意图并不是完全依靠用户的命令得到的,而是依靠对采集到的数据进行分析,进而预测出的用户的意图,本申请实施例中将预测意图识别模块212预测出的意图称之为预测意图。进一步的,决策模块2122根据获取模块2121获取的数据填充预测意图的槽位,槽位填充之后发送至数据挖掘模块213。数据挖掘模块213依据收到的预测意图和槽位的信息,向对应的服务器去查询需要满足预测意图和槽位的信息的相关内容,并将查询到的相关内容返回至终端设备侧,以便展示给用户查看。
需要说明的是,无论是目标意图还是预测意图,对于数据挖掘模块213来说,都属于意图,而预测意图是需要根据采集到的用户的信息去预测的用户可能想要的功能。而目标意图是根据用户输入的用户命令,进行自然语言理解模块2112的理解之后得到的。本申请实施例中由于可以根据用户的信息去预测用户可能想要的功能,因此,可以减少用户向终端设备输入命令的步骤,进而可以减少用户与终端设备的互动次数。
上述内容以终端设备侧AI引擎模块为例进行了介绍,下面介绍云端AI引擎模块22的一种可能的处理流程。
如图1a所示,云端AI引擎模块22包括目标意图识别模块221、预测意图识别模块222,以及数据挖掘模块223。其中,目标意图识别模块221可以用于根据用户输入的命令识别出用户的目标意图,目标意图识别模块221可以包括分发模块2211、语音识别模块2213和自然语言理解模块2212。其中,分发模块2211可以用于接收用户输入的命令,该命令可以是语音也可以是文本。如果是语音则可以通过语音识别模块2213将其转换为文本,之后将识别后的文本输入自然语言理解模块2212。如果是文本则可以直接输入自然语言理解模块2212。自然语言理解模块2212用于根据输入的文本识别出用户的目标意图,并将目标意图发送给数据挖掘模块223。数据挖掘模块223可以依据意图与槽位的对应关系,确定出目标意图对应的槽位,并填充槽位的信息,进而向对应的服务器去查询需要满足目标意图和槽位的信息的相关内容,并将查询到的相关内容返回至云端,以便展示给用户查看。
本申请实施例中的预测意图识别模块222也可以称为全场景智慧大脑,其可以包括获取模块2221和决策模块2222。获取模块用于收集用户的信息,例如用户的日程安排、地理位置、健康数据等信息。在一种可能的实施方式中,在收集用户的数据之前可以获得用户的授权。获取模块可以收集一个或多个终端设备上的数据,例如可以收集终端设备201上的数据,也可以收集终端设备203上的数据。在一种可能的实施方式中,终端设备203可以将数据上报至云端的信息收集服务器241,获取模块2221可以通过网络获取终端设备203上报的数据。决策模块2222依据获取模块2221获取的数据,确定出用户的预测意图,也就是说预测意图识别模块222确定出的意图并不是完全依靠用户的命令得到的,而是依靠对采集到的数据进行分析,进而预测出的用户的意图,本申请实施例中将预测意图识别模块222预测出的意图称之为预测意图。进一步的,决策模块2222根据获取模块2221获取的数据填充预测意图的槽位,槽位填充之后发送至数据挖掘模块223。数据挖掘模块223依据收到的预测意图和槽位的信息,向对应的服务器去查询需要满足预测意图和槽位的信息的相关内容,并将查询到的相关内容返回至云端,以便展示给用户查看。
上述内容分别介绍了终端设备侧AI引擎模块21和云端AI引擎模块22,若如图1a所示,同时在终端设备201和云端均部署有AI引擎模块,则可以部分操作在终端设备侧做,部分操作在云端AI引擎模块做。举个例子,可以由终端设备侧AI引擎模块21的预测意图识别模块212执行预测意图的确定过程,由云端AI引擎模块22的目标意图识别模块221执行目标意图的确定过程。进行数据挖掘处理时可以选择使用数据挖掘模块213,也可以选择使用数据挖掘模块223。在执行预测意图的确定过程时,可以由终端设备侧的获取模块2121收集用户的数据,进而通过网络将收集到的数据上报,由云端的决策模块2222推断出用户的预测意图。本申请实施例中各个模块可以组合使用,较为灵活,本申请实施例不做限制。
图1a中示出了在终端设备侧和云端均部署有AI引擎模块的系统架构示意图,图1b示例性示出了仅在云端部署有AI引擎模块的系统架构示意图,图1c示例性示出了仅在终端设备侧部署有AI引擎模块的系统架构示意图,图1b和图1c所示的各个模块的功能和作用可参见图1a中的相应描述,在此不再赘述。
图1d示例性示出了图1a中终端设备201的结构示意图,如图1d所示,终端设备201 可以包括即时通信应用模块25。本申请实施例中即时通信应用模块25集成有AI接口模块252。从而可以在即时通信应用中使用云端AI引擎模块22或者终端设备侧AI引擎模块21。数据挖掘模块213返回的数据可以通过AI接口模块252传输至即时通信应用模块25。
如图1d所示,即时通信应用模块25还可以包括渲染模块253。渲染模块253可以用于将AI接口模块252收到的信息进行渲染,例如可以将收到的“故宫的景区攻略”进行渲染绘制,如此可以将展示给用户观看的信息绘制的更加美观。
如图1d所示,即时通信应用模块25还可以包括消息处理模块251,消息处理模块251可以用于将消息以智能助手的身份发送给用户的聊天界面。当需要将消息以智能助手的身份发布至群组的聊天界面时,消息处理模块251可以将该消息发送至应用程序服务器242,继而传输至该群组的其他各个群组成员的终端设备上,从而可以达到以智能助手的身份在群组的聊天记录中发布消息的目的。
图1e示例性示出了一种终端设备的结构示意图,该终端设备可以为上述图1a至图1d的终端设备201。
应理解,图示终端设备仅是一个范例,并且终端设备可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
如图1e所示,终端设备可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
下面结合图1e对终端设备的各个部件进行具体的介绍:
处理器110可以包括一个或多个处理单元,例如,处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是终端设备的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从存储器中直接调用,从而可避免重复存取,可减少处理器110的等待时间,因而可提高系统的效率。
处理器110可以运行本申请实施例提供的内容推送方法。当处理器110集成不同的器件,比如集成CPU和GPU时,CPU和GPU可以配合执行本申请实施例提供的内容推送方法,比如其中部分算法由CPU执行,另一部分算法由GPU执行,以得到较快的处理效率。
在一些实施例中,处理器110可以包括一个或多个接口。比如,接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现终端设备的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。I2S接口和PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现终端设备的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现终端设备的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为终端设备充电,也可以用于终端设备与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接 口还可以用于连接其他终端设备,例如AR设备等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对终端设备的结构限定。在本申请另一些实施例中,终端设备也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
终端设备的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。终端设备中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在终端设备上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在终端设备上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,终端设备的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得终端设备可以通过无线通信技术与网络以及其他设备通信。无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system, GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
终端设备通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。
在本申请实施例中,显示屏194可以是一个一体的柔性显示屏,也可以采用两个刚性屏以及位于两个刚性屏之间的一个柔性屏组成的拼接的显示屏。当处理器110运行本申请实施例提供的内容推送方法时,可以通过显示屏194向用户展示推送的内容。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展终端设备的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储终端设备使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行终端设备的各种功能应用以及数据处理。
终端设备可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。终端设备可以设置至少一个麦克风170C。在另一些实施例中,终端设备可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,终端设备还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
指纹传感器180H用于采集指纹。终端设备可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。例如,可以在终端设备的正面(显示屏194的下方)配置指纹传感器,或者,在终端设备的背面(后置摄像头的下方)配置指纹传感器。另外,也可以通过在触摸屏中配置指纹传感器来实现指纹识别功能,即指纹传感器可以与触摸屏集成在一起来实现终端设备的指纹识别功能。在这种情况下,该指纹传感器可以配置在触摸屏中,可以是触摸屏的一部分,也可以是以其他方式配置在触摸屏中。另外,该指纹传感器还可以被实现为全面板指纹传感器,因此,可以把触摸屏看成是任何位置都可以进行指纹采集的一个面板。在一些实施例中,该指纹传感器可以对采集到的指纹进行处理(例如判断指纹验证是否通过),并将处理结果发送给处理器110,由处理器110根据指纹处理结果做出相应的处理。在另一些实施例中,指纹传感器还可以将采集到的指纹发送给处理器110,以便处理器110对该指纹进行处理(例如指纹验证等)。本申请实施例中的指纹传感器可以采用任何类型的感测技术,包括但不限于光学式、电容式、压电式或超声波传感技术等。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于终端设备的表面,与显示屏194所处的位置不同。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和终端设备的接触和分离。终端设备可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。终端设备通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,终端设备采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在终端设备中,不能和终端设备分离。
尽管图1e中未示出,终端设备还可以包括蓝牙装置、定位装置、闪光灯、微型投影装置、近场通信(near field communication,NFC)装置等,在此不予赘述。
终端设备的软件系统可以采用分层架构,本申请实施例以分层架构的Android系统为例,示例性说明终端设备的软件结构。
图1f是本发明实施例的终端设备的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图1f所示,应用程序包可以包括电话、相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。前述内容中提到的畅连应用程序APP的应用程序包也可位于该应用程序层。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。前述内容中提到的终端设备侧的AI引擎模块21也可位于应用程序框架层。
如图1f所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供终端设备的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,终端设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
为了便于理解,本申请以下实施例将以具有图1e和图1f所示结构的终端设备为例。为了描述方便,且以下需要涉及到AI引擎模块所执行的方案,将以终端设备201部署的终端设备侧AI引擎模块21为例进行描述。本领域技术人员可知,以下终端设备侧部署的AI引擎模块可以执行的方案也可以由云端部署的AI引擎模块来执行,或者由终端设备侧部署的AI引擎模块和云端部署的AI引擎模块协同执行(例如可以由终端设备201的终端设备侧的获取模块2121收集用户的信息,通过网络上传至云端AI引擎模块的决策模块2222进行决策),本申请实施例不做限制。
基于上述内容,图2a示例性示出了本申请实施例提供一种内容推送方法的流程示意图,如图2a所示,该方法包括:
步骤321,终端设备获取第一信息,第一信息包括终端设备的位置信息;
步骤322,当第一信息满足预设的第一条件,终端设备显示第二信息;第二信息包括跟第一信息关联的待推送内容或者待推送内容的链接;第一条件包括:终端设备的位置信息对应的位置位于第一区域,第一区域的类型属于预设的区域类型中的一个。
由于可以根据终端设备的位置信息推送第二信息,因此可以减少用户主动查询第二信息的过程中的查询步骤,从而可以减少用户输入命令的次数,进而可以减少用户与终端设备互动的次数。
在一种可能地实施方式中,当第一区域的类型为景区,第二信息包括:第一区域的景区攻略。由于当确定终端设备所处位置属于景区内,则主动向用户推送景区攻略,比如可以通过畅连应用程序向用户推送景区攻略。如此,省去了用户查询景区攻略的步骤,直接可以得到与自身当前处境相关的信息。
在一种可能地实施方式中,第二信息来自第一服务器。在一种可能地实施方式中,终端设备向第一服务器发送第一请求,第一请求用于请求获取第二信息;终端设备接收第一响应,第一响应包括第二信息。举个例子,比如第一请求用于请求查询终端设备当前所处的景区的景区攻略,则第一服务器将该景区的景区攻略作为第二信息返回给终端设备。在又一种可能地实施方式中,可以将查询景区攻略理解为预测意图,即终端设备根据终端设备的当前位置预测出用户想要查询景区攻略,进而向第一服务器发送第一请求,在一种可能地实施方式中,也可以理解为第一请求用于请求第一服务器执行预测意图,即第一服务器查询该景区的景区攻略,比如可以从数据库中查询,进而将执行预测意图得到的该景区的景区攻略作为第二信息返回给终端设备。通过从第一服务器查询第二信息的方式可以节省终端设备的存储空间,另一方面可以得到内容较新的第二信息。
在一种可能地实施方式中,第二信息来自终端设备预存的信息。如此,可以加快终端设备获取第二信息的速度。
在一种可能地实施方式中,终端设备可以在畅连应用程序的聊天界面上显示第二信息。一种可能地实施方式中,终端设备可以根据第一信息预测用户的意图,本申请实施例中将主动根据信息预测出的用户的意图称为预测意图。进一步可以向第一服务器发送用于请求第一服务器执行预测意图的第一请求,并接收第一服务器返回的第一响应。第一响应包括第一服务器执行预测意图后得到的第二信息。之后向终端设备的畅连应用程序的接口模块发送第一消息,以使终端设备在畅连应用程序的聊天界面上展示第二信息。由于可以根据终端设备的第一信息确定用户的预测意图,进而可以展示执行预测意图的结果,从而可以减少用户输入命令的次数,进而可以减少用户与终端设备互动的次数。
基于上述内容,图2b示例性示出了本申请实施例提供一种内容推送方法的流程示意图,如图2b所示,该方法包括:
步骤301,AI引擎模块获取第一终端设备的第一信息。
在一种可能地实施方式中,终端设备可以通过收发模块向AI引擎模块发送第一终端设备的第一信息。
本申请实施例中的第一终端设备可以是上述图1a至图1d的终端设备201。在一种可能地实施方式中,AI引擎模块可以是第一终端设备侧的AI引擎模块,AI引擎模块可以采集第一终端设备的第一信息。在另一种可能地实施方式中,AI引擎模块可以是云端的AI引擎模块,AI引擎模块可以通过向第一终端设备发送查询请求的方式查询第一信息。
在一种可能地实施方式中,第一信息属于第一类型的信息。在该实施方式中,可以预设一种或几种信息的类型,进而获取该指定类型的信息,比如预设的几种类型的信息可以包括:终端设备的位置信息、畅连应用程序上的聊天记录、会议日程表、快递信息等等。举个例子,第一类型的信息为终端设备的位置信息,则AI引擎模块可以周期性的获取终端设备的位置信息。
步骤302,AI引擎模块根据第一信息,确定第一用户的预测意图。
在步骤302中,一种可能地实施方式中,可以由AI引擎模块的获取模块获取第一信息,并将第一信息发送给决策模块,由决策模块根据第一信息,确定第一用户的预测意图。并将预测意图发送给AI引擎模块的数据挖掘模块。
在步骤302中,一种可能地实施方式中,可以预设消息的类型、预设条件和意图的对应关系,该对应关系中第一类型的信息、第一预设条件和第一意图这三者之间具有关联关系,也可以称这三者之间具有对应关系。AI引擎模块根据预设的第一类型的信息和第一预设条件的对应关系,当确定第一信息满足第一预设条件,则根据预设的第一预设条件与第一意图的对应关系,将第一意图确定为第一用户的预测意图。
在一种可能地实施方式中,本申请实施例中预设有意图和槽位的对应关系,在第一意图对应有槽位的情况下,可以根据预设的第一意图和第一槽位的对应关系,确定出预测意图对应的第一槽位,根据第一信息确定第一槽位的内容。
举个例子,第一类型的信息包括:第一终端设备的位置信息。第一预设条件包括:第一类型的信息指示的区域是否属于景区。第一意图包括:查询景区攻略。AI引擎模块可以周期性获取终端设备的位置信息,当确定终端设备当前的位置信息所指示的位置属于景区,比如为故宫,则预测用户的意图为:“查询景区攻略”,且将第一槽位“地点”确定为“故宫”。
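上述“第一类型的信息 + 第一预设条件 → 第一意图”的判断过程可以示意如下。其中景区范围用简单的经纬度矩形近似,坐标取值为虚构的示例数据,并非实际的景区边界:

```python
# 虚构的景区范围:名称 -> (纬度下限, 纬度上限, 经度下限, 经度上限)
SCENIC_AREAS = {
    "故宫": (39.91, 39.93, 116.39, 116.40),
}

def predict_intent(lat: float, lon: float):
    # 第一预设条件:位置信息指示的区域属于预设的景区
    for name, (lat0, lat1, lon0, lon1) in SCENIC_AREAS.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            # 满足条件:返回预测意图,并填充槽位"地点"
            return ("查询景区攻略", {"地点": name})
    # 不满足第一预设条件,不产生预测意图
    return (None, {})
```

实际实现中,区域判断可以依赖地图服务的逆地理编码等能力,这里仅示意判断与槽位填充的逻辑关系。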
通过步骤301和步骤302的方案可以看出,本申请实施例中,AI引擎模块可以根据获取到的终端设备的第一信息预测出用户的预测意图,而并不需要用户去发布用户命令,从而可以减少用户输入命令的次数,进而可以减少用户与终端设备互动的次数。
步骤303,AI引擎模块向第一服务器发送第一请求,第一请求用于请求第一服务器执行预测意图。
相对应地,第一服务器接收第一请求。
在步骤303中,第一服务器可以是上述图1a至图1c中的内容服务器,例如可以为内容服务器232。AI引擎模块可以确定每个内容服务器所提供的业务,进而根据需要查询的业务向对应的内容服务器查询所需的内容。在一种可能地实施方式中,可以由AI引擎模块中的数据挖掘模块发送第一请求。
步骤304,第一服务器执行预测意图,得到第二信息。
在步骤304中,若预测意图对应有第一槽位,则服务器可以基于第一槽位的内容执行预测意图,从而得到第二信息。第二信息为第一服务器执行预测意图后得到的,预测意图是根据第一信息得到的。
步骤305,第一服务器向AI引擎模块发送第一响应,第一响应携带第二信息。
相对应地,AI引擎模块接收第一服务器返回的第一响应,可以由AI引擎模块的数据挖掘模块接收第一响应。
步骤306,AI引擎模块向第一终端设备的畅连应用程序的接口模块发送第一消息,第 一消息携带第二信息。第一消息用于使第一终端设备在畅连应用程序的第一聊天界面上展示第二信息。
在步骤306中,在一种可能地实施方式中,可以由AI引擎模块的数据挖掘模块向畅连应用程序集成的AI接口模块发送第一消息。
相对应地,第一终端设备通过畅连应用程序集成的AI接口模块接收第一消息。
步骤307,第一终端设备在畅连应用程序的第一聊天界面上展示第二信息。
第一终端设备上可以安装有畅连应用程序,畅连应用程序集成人工智能AI接口模块。如图1d所示,在终端设备的201上的畅连应用程序应用模块25上集成AI接口模块252。其中,畅连应用程序应用模块25还包括消息处理模块251,消息处理模块251可以用于对畅连应用程序应用的消息进行收发处理。AI接口模块252用于与AI引擎模块之间进行消息的收发。
由于AI接口模块集成于畅连应用程序应用模块,因此可以由AI引擎模块的数据挖掘模块向第一终端设备的畅连应用程序的应用模块集成的AI接口模块发送第一消息,进而由AI接口模块将收到的第一消息中的第二信息发送至消息处理模块251,并通过消息处理模块251展示在畅连应用程序的聊天界面上。
在一种可能的实施方式中,第一终端设备可以对接收到的第二信息进行渲染,从而卡片化地展示在第一终端设备的聊天界面上。在一种可能的实施方式中,第一终端设备可以包括渲染模块,AI接口模块将第二信息发送给渲染模块,渲染模块可以根据预设的渲染模板,对接收到的第二信息进行渲染,得到第三信息,并将第三信息返回给AI接口模块。进一步,通过畅连应用程序的消息处理模块从AI接口模块接收第三信息;第三信息是通过对第二信息进行渲染之后得到的。
在一种可能地实施方式中,第一聊天界面为智能助手与第一用户的聊天界面。第一用户为在第一终端设备的畅连应用程序上登录的用户。
在另一种可能地实施方式中,第一聊天界面为第一用户和第二用户的聊天界面。第二用户为在第二终端设备的畅连应用程序上登录的用户。比如,第一信息可以包括在畅连应用程序的第一聊天界面上的聊天记录。AI引擎模块可以根据第一聊天界面上的聊天记录,确定第一用户的预测意图,这种实施方式中,在步骤307之后还包括步骤308。
步骤308,当第一聊天界面为第一用户和第二用户的聊天界面,第一终端设备向畅连应用程序的服务器发送第二消息,第二消息携带第二信息,第二消息用于使畅连应用程序的服务器将第二信息传输至第二终端设备。
步骤309,畅连应用程序的服务器将第二信息传输至第二终端设备之后,第二终端设备在畅连应用程序的第一用户和第二用户的聊天界面展示第二信息。
当需要在第一聊天界面上展示渲染后的第三信息时,上述步骤308也可以替换为如下内容:
当第一聊天界面为第一用户和第二用户的聊天界面,第一终端设备向畅连应用程序的服务器发送第二消息,第二消息携带第三信息,第二消息用于使畅连应用程序的服务器将第三信息传输至第二终端设备。继而,第二终端设备在畅连应用程序的第一用户和第二用户的聊天界面展示第三信息。上述步骤308和步骤309仅仅以两个用户进行举例,上述第一聊天界面也可以是三个用户或三个以上用户的聊天界面,如此,可以通过畅连应用程序的服务器将第三信息传输至第一聊天界面中的每个成员的终端设备上,从而使参与第一聊 天界面的成员均可以看到第三信息。
通过上述内容可以看出,本申请实施例中,由于可以根据终端设备的第一信息确定用户的预测意图,进而可以展示执行预测意图的结果,从而可以减少用户输入命令的次数,进而可以减少用户与终端设备互动的次数。另一方面,由于本申请中可以主动根据终端设备的第一信息去预测用户的意图,并进行展示,因此无需用户主动唤醒该智能助手,可以进一步减少用户与终端设备的互动次数。且智能助手集成于系统层,无需用户在畅连应用程序中进行添加。第三方面,由于可以在畅连应用程序应用的聊天界面上展示执行预测意图后得到的结果,从而可以使智能助手与畅连应用程序技术更好的融合,能让群用户之间可以更加方便、快捷的分享信息。第四方面,本申请实施例中可以由AI引擎模块的数据挖掘模块根据意图去查找对应的内容服务器,从而从内容服务器上获取到对应的内容,也就是说,本申请实施例中的智能助手可以查询多种类型的信息,比如可以查询天气信息,也可以查询疫情信息,而无需用户在群里添加各种类型的机器人,用户仅仅通过小艺就可以实现各种类型的信息的查询,从而可以进一步的简化用户的操作。
上述图2b提供的方式中,无需用户输入用户命令,而是由智能助手主动推送用户可能需要的信息。本申请实施例中,也可以预设几种唤醒智能助手的方式,用户在畅连应用程序应用中通过预设的方式唤醒智能助手,并发送用户命令。AI引擎模块获取到用户命令之后,可以通过目标意图识别模块识别出用户的目标意图,并通过数据挖掘模块进行槽位填充,进而去服务器查询相应的内容,并通过数据挖掘模块向终端设备返回查询到的内容。
在另一种可能地实施方式中,用户发送用户命令的方式可以是:在一个群组中订阅一项服务,例如可以订阅查询上海的天气预报的服务,在一种可能地实施方式中,可以设置提醒的时间,比如上午9点。这种情况下,AI引擎模块可以确定出用户的目标意图为:每天上午9点查询天气情况,槽位“地点”为“上海”。则AI引擎模块的数据挖掘模块可以将查询到的结果发送至终端设备,从而使终端设备在订阅该信息的群组中展示该信息。
通过本申请实施例提供的方案,用户可以询问智能助手全方位的生活、工作信息。包括但不限于:便捷生活、商务金融、教育、美食、游戏乐趣、健康、智能家居、儿童和家庭、本地服务、影像、声乐、新闻阅读、原生应用&设置、购物比价、社交通信、体育运动、旅行运输、问答搜索、天气等等。查询到的内容通过畅连应用程序系统以卡片化的形式传输到聊天界面,从而可以提供智慧化体验。
通过本申请实施例提供的方案,也提供用户与智能助手之间的单独会话,作为一个常驻的智能助手,可以进行情景智能识别,推荐场景化服务等。包括但不限于:航班火车、天气预警、生日提醒、日程会议提醒、信用卡还款、快递接收提醒、所到景区信息推荐(旅行助手)、运动健康数据等等。智能助手推荐的内容可以在用户与智能助手之间的单独的聊天界面上展示,也可以在订阅这些推荐服务的群组的聊天界面上展示。
下面结合附图介绍几种本申请实施例提供的应用场景。本申请实施例中以智能助手称为“小艺”为例进行介绍,实际应用中,智能助手也可以有其他名称。
下述内容涉及场景一、场景二和场景三。其中,场景一中,智能助手根据获取的终端设备的位置信息,在确定用户在景区游览时,将向用户主动推送景区攻略。场景二中,智能助手根据获取的终端设备上的聊天记录,推测用户想要看电影时,将向用户主动推送电影院信息。场景三中,两个用户在交谈过程中,需要查询附近电影院时,可以直接@小艺,命令其查询周围的电影院。在该三种场景中,小艺均是将需展示的信息展示在畅连应用程 序的聊天界面,从而可以实现智能助手与畅连应用程序应用的紧密结合。图3至图7为本申请实施例提供的几种终端设备的界面示意图,其中,下面将结合附图3和图4对场景一进行介绍,结合附图5对场景二进行介绍,结合附图6和图7对场景三进行介绍。
场景一,智能助手根据获取的终端设备的位置信息,在确定用户在景区游览时,将向用户主动推送景区攻略。
在一种可能地实施方式中,终端设备可以在第一用户与畅连应用程序的聊天界面上显示第二信息,第一用户为终端设备上登录畅连应用程序的用户。在一种可能地实施方式中,智能助手集成于畅连应用程序。智能助手可以显示在畅连应用程序的联系人信息中,这种情况下,可以在终端设备畅连应用程序的第一聊天界面上显示第二信息。第二信息在第一聊天界面上显示为智能助手发出的聊天内容。可以看出,智能助手在畅连应用程序中进行了拟人化处理,用户可以通过畅连应用程序与智能助手聊天,且终端设备主动推送的第二信息也可以以智能助手的身份来推送。另一方面,本申请无需用户主动唤醒该智能助手,可以进一步减少用户与终端设备的互动次数。
下面结合附图对该场景一进行介绍。
以上述图1d为例说明,终端设备201侧部署有终端设备侧AI引擎模块21,以终端设备侧AI引擎模块21执行相关方案为例进行说明。预测意图识别模块212的获取模块2121可以获取用户的位置信息,根据预设的规则判断用户的位置信息是否属于景区。景区的信息可以是预设的,若用户的位置信息与一个预设的景区的信息匹配,则确定该用户当前处于景区之中。通过上述图1a中的决策模块2122确定出用户的预测意图为“查询景区攻略”,且该预测意图的槽位“地点”的内容为“故宫”,则可以通过图1a中的数据挖掘模块213向内容服务器发送查询请求,该查询请求用于查询故宫的景区攻略。数据挖掘模块213接收内容服务器返回的查询响应,该查询响应携带故宫的景区攻略。数据挖掘模块213可以通过图1d中的AI接口模块252将故宫的景区攻略发送至畅连应用程序应用模块25。AI接口模块252接收到的故宫的景区攻略为文本形式,可以将其发送至渲染模块253进行渲染绘制。一种可选的实施方式中,在终端设备侧预设有几种模板,例如可以预设有景区攻略的模板,渲染模块253将故宫的景区攻略的文本形式结合景区攻略的模板进行处理,从而得到渲染后的故宫的景区攻略,并将其返回至AI接口模块252。AI接口模块252将得到的故宫的景区攻略返还至消息处理模块251。消息处理模块251以小艺的身份在畅连应用程序中向用户的终端设备发送一条消息。
图3中的(a)为用户的终端设备处于锁屏模式时,接收到来自小艺的信息的界面示意图,如图3中的(a)所示,在用户的锁屏界面出现内容“您收到一条来自小艺的信息”,该信息上可以携带一些标识,比如可以是畅连应用程序APP的图标,从而可以让用户知道该条信息是通过畅连应用程序APP接收到的来自小艺的信息。用户可以直接点击该条信息,则响应于用户的点击操作,终端设备可以打开该畅连应用程序,并展示用户与小艺单聊的界面示意图,该界面示意图可以如图3中的(b)所示,在该界面上可以看到小艺主动推送的故宫的景区攻略。
在一种可能地实施方式中,小艺推送的景区攻略可以卡片化展示,若用户需要查看详细信息,可以点击图3中的(b)所示的“查看详情”的区域。
进一步的,用户还可以主动地向小艺发送命令,如图3中的(c)所示,用户可以在与小艺的单聊界面上向小艺发送用户命令“小艺,推荐一下故宫附近的餐馆”。图3中的(c)展示了用户编辑用户命令的界面示意图,用户点击界面上的“发送”按钮之后,终端设备的界面示意图如图3中的(d)所示。
由于用户发送了用户命令“小艺,推荐一下故宫附近的餐馆”,AI引擎模块里的目标意图识别模块211可以通过分发模块2111获取到该用户命令,并通过自然语言理解模块2112确定目标意图为“查询餐馆”,并通过数据挖掘模块213进行槽位匹配,将槽位“地点”填充为“故宫”。数据挖掘模块213进而可以向内容服务器查询故宫附近的餐馆,并将得到的结果通过AI接口模块252返回至畅连应用程序,并通过渲染模块253进行渲染之后,以小艺的身份展示查询到的故宫附近的餐馆。如图3中的(e)所示,渲染后的故宫附近的餐馆可以卡片化展示,在聊天界面上可以展示餐厅的名称、图片、评分等等。用户若需要了解一个餐厅的更加详细的内容,可以点击该餐厅的名称所属的区域,响应于该点击操作,终端设备将展示该餐厅的详细信息,包括该餐厅的地址、电话、招牌菜、用户评价等等信息。
在场景一中,上述图3中的(a)中,用户可以直接点击锁屏屏幕上的通知消息,以直接打开畅连应用程序应用的用户与小艺的单聊界面。本申请实施例中还可以另外提供一种用户打开与小艺单聊的界面的方法,如图4中的(a)所示,在锁屏界面上显示“您收到一条来自小艺的信息”,用户可以对终端设备进行解锁,解锁方式可以是指纹解锁、人脸识别解锁或者密码解锁等,方式不限。图4中的(b)示出了终端设备解锁之后的一种界面示意图,如图4中的(b)所示,用户的终端界面上可以包括多个应用程序,图中仅仅示出了用于打电话的应用程序和畅连应用程序。实际应用中,也还可以有其他应用程序,本申请实施例不做限制。响应于用户点击畅连应用程序APP的操作,终端设备可以打开畅连应用程序APP,界面示意图如图4中的(c)所示。在图4中的(c)中可以看出在“畅连应用程序”的选项卡里显示近期联系过的联系人,在顶部可以显示最近联系的联系人。如图4中的(c)所示,还可以在每个联系人的旁边显示与该联系人的聊天界面上的最后一条信息的全部内容或部分内容。如图4中的(c)所示,当有新消息时,可以在联系人的头像或名字上有一些标识,例如可以有一个小黑点、或者小气泡等等,本申请实施例不做限制,该标识仅仅是提示用户有新的未读信息。在一种可能的实施方式中,可以在该“畅连应用程序”选项卡中固定显示小艺的消息会话,如图4中的(c)所示,小艺的消息会话显示在“畅连应用程序”选项卡中。用户可以点击图4中的(c)所示界面上的“小艺”选项,响应于该操作,终端设备将打开如上述图3中的(b)所示的用户与小艺的单聊界面。
在一种可能地实施方式中,畅连应用程序的聊天界面还包括:第一用户与第二设备之间的第三聊天界面;第二设备为智能手机、智慧大屏、智能音箱、智能手环、平板电脑中的一项。该方法还包括:第一用户在第三聊天界面发送第三信息,终端设备将第三信息发送至第二设备,以在第二设备的显示屏上显示第三信息。举个例子,若终端设备为用户的一个智能手机,用户可以通过畅连应用程序将其他设备,例如智慧大屏、智慧音箱、智慧手环等等设备加入畅连应用程序中,当用户想要在智慧大屏上展示信息时,可以通过智能手机的畅连应用程序打开与智慧大屏的聊天界面,在该聊天界面上发送信息,比如图片等,从而可以实现投屏的效果,可以看出,通过该方式进行投屏的方案较为简单,对于用户来说类似与智慧大屏进行聊天对话,可以简化用户操作的复杂度。
例如,用户可以在畅连应用程序中将用户的智能手机、智慧大屏、智能音箱、智能手环、平板电脑、智能手表、智能电视、智能摄像机、智能音箱等具有通信功能的设备添加 至即时通信APP中。可以参见图4中的(c)所示的终端的界面图,如图4中的(c)所示,即时通信APP为畅连应用程序APP时,用户可以在畅连应用程序APP上添加智能手表、智能电视、智能摄像机等设备,用户可以通过畅连应用程序APP将视频、图片、音频等内容与其他设备共享。举个例子,用户在手机上打开畅连应用程序APP,用户通过畅连应用程序APP打开与“我的电视”的聊天界面,用户可以在该聊天界面上发送视频、图片或文本等内容,发送的内容可以实时的在“我的电视”所对应的智能电视的屏幕上显示。可以看出,本申请实施例中的畅连应用程序APP可以实现各个终端设备之间的即时通信,且该方式可以简化设备之间共享信息的方式。
针对场景一,还有一种可能地实现方式。
上述内容在场景一中,智能助手根据获取的终端设备的位置信息,在确定用户在景区游览时,将向用户主动推送景区攻略。具体如图3所示,终端设备以小艺的身份在与用户的聊天界面上展示第二信息。
在另一种可能地实施方式中,畅连应用程序包括至少一个聊天群组。终端设备确定出满足预设的第二条件的第一聊天群组。终端设备在第一聊天群组的聊天界面显示第二信息。
进一步,在一种可能的实施方式中,终端设备可以向第二服务器发送第二请求,第二请求携带第二信息,其中,第二请求用于请求第二服务器将第二信息显示在N个第二用户中的第二用户所登录的终端设备上。如此,可以使N个第二用户在其所登录的设备上查看到第二信息。在一种可能的实施方式中,N个第二用户登录的终端设备包括以下内容中的至少一项:智能手机、智慧大屏、智能音箱、智能手环、平板电脑。如此,可以兼容较多的终端设备种类。
在一种可能地实施方式中,第二条件包括以下内容中的至少一项:
第一聊天群组的成员包括第一用户和N个第二用户,N个第二用户中M个第二用户中每个第二用户与第一用户之间的距离不大于距离阈值,N为大于1的正整数,M为不大于N的正整数,M与N的比值不小于预设值;
第一聊天群组对应的订阅信息包括第二信息的类型;
第一聊天群组的预设时间段内的聊天记录中涉及到第一区域;
第一聊天群组的标签值与第二信息的类型匹配。
当第二条件包括:第一聊天群组的成员包括第一用户和N个第二用户,N个第二用户中M个第二用户中每个第二用户与第一用户之间的距离不大于距离阈值,N为大于1的正整数,M为不大于N的正整数,M与N的比值不小于预设值。在一种可能的实施方式中,可以设置预设值为50%,可以看出,若一个群组中至少一半的第二用户的位置都与第一用户的位置比较近,则可以预测这个群组大部分人都位于同一个场景,这种情况下,可以直接向该聊天群组的聊天界面推送该信息,以使该聊天群组的成员都看到该信息,从而可以节省用户再次将第二信息单独发送给其他用户的操作,从而可以进一步节省用户与终端设备之间交互的次数。
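上述“M与N的比值不小于预设值”的判断可以示意如下。其中为简化起见,距离用平面欧氏距离近似(坐标单位为米),距离阈值与预设值均为假设取值:

```python
import math

def should_push_to_group(first_user, second_users,
                         dist_threshold=1000.0, ratio_threshold=0.5):
    # first_user / second_users 中的元素均为 (x, y) 形式的平面坐标(示意)
    n = len(second_users)  # N:第二用户的个数,需大于1
    # M:与第一用户距离不大于距离阈值的第二用户个数
    m = sum(1 for u in second_users
            if math.dist(first_user, u) <= dist_threshold)
    # M 与 N 的比值不小于预设值时,向该聊天群组推送第二信息
    return n > 1 and m / n >= ratio_threshold
```

实际实现中的距离计算通常基于经纬度(如大圆距离),这里仅示意占比判断的逻辑。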
当第二条件包括:第一聊天群组对应的订阅信息包括第二信息的类型。如此,由于用户在第一聊天群组中订阅了第二信息的类型,因此,当终端设备获取到第二信息时,可以向第一聊天群组去推送。举个例子,在第一聊天群组中订阅了景点攻略,当第二信息为“故宫的景点攻略”,则向第一聊天群组推送第二信息。再举个例子,比如在第一聊天群组订阅了健康数据,当获取到第一聊天群组中一个用户的健康数据时在第一聊天群组中推送, 健康数据比如可以是某一个用户的心跳、血压值,还可以是根据用户的心跳血压等数据进行分析后得到的用户健康报告。
当第二条件可以包括:第一聊天群组的预设时间段内的聊天记录中涉及到第一区域。在一种可能地实施方式中,终端设备可以自主获取第一聊天群组内的聊天记录,进而对聊天记录进行语义分析,从而可以确定在第一聊天群组的预设时间段内的聊天记录中是否出现过第一区域相关的词汇。若存在,则可以推测出在第一聊天群组中的成员可能大部分位于第一区域,基于此,可以在第一聊天群组内推送第二信息,从而可以进一步节省用户与终端设备之间交互的次数。
当第二条件包括第一聊天群组的标签值与第二信息的类型匹配。举个例子,比如畅连应用程序中的聊天群组可以有一个标签值,该标签值可以显示该群组的成员的社会关系,比如标签值可以是家庭群、工作群、驴友群等。该标签值可以是用户自己填入的,也可以是根据成员之间聊天的内容推断出的,也可以是根据成员之间的社会关系推测出的。当一个群组的标签值与一个信息的类型匹配,可以将该信息适合发布到该群组,举个例子,若一个信息的类型为家人的健康数据,则可以将该信息推送到家庭群的聊天群组中。再比如,当一个信息的类型为景点攻略,则可以将其推送到驴友群。一个聊天群组的标签值所匹配的信息的类型可以是预设的。
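上述“群组标签值与信息类型匹配”的预设对应关系可以用一个映射表示意。其中标签与类型的取值沿用正文示例,“工作群”对应的类型为假设补充的内容:

```python
# 预设的"群组标签 -> 适合推送的信息类型"映射(示意)
LABEL_TO_INFO_TYPES = {
    "家庭群": {"健康数据"},
    "驴友群": {"景点攻略"},
    "工作群": {"日程会议提醒"},  # 假设的示例
}

def label_matches(label: str, info_type: str) -> bool:
    # 判断一个信息的类型是否适合发布到带有该标签的群组
    return info_type in LABEL_TO_INFO_TYPES.get(label, set())
```

当一条第二信息的类型与某个群组的标签值匹配时,即可将其推送至该群组的聊天界面。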
场景二,智能助手根据获取的终端设备上的聊天记录,推测用户想要看电影时,将向用户主动推送电影院信息。
本申请实施例中,在一种可能地实施方式中,终端设备自主获取畅连应用程序中的聊天记录;对聊天记录进行分析,预测出用户的预测意图,根据预测意图,通过畅连应用程序显示与预测意图关联的待推送内容或待推送内容的链接。该实施方式中可以通过自主对畅连应用程序中的聊天记录进行分析,从而预测出用户的预测意图,继而推送内容,可以看出,该方案无需用户主动唤醒智能助手,对其发出询问,该方案可以减少用户输入命令的次数,减少用户与终端设备的交互次数。
在一种可能地实施方式中,畅连应用程序包括一个或多个聊天群组,一个聊天群组包括至少两个用户。终端设备可以获取聊天群组中的聊天记录,并对其分析,预测出用户的预测意图,继而在该聊天群组的聊天界面上以智能助手的身份推送内容或内容的链接。如此,智能助手主动推送的信息可以被群组中的每个用户看到,可以节省群组的两个用户之间的沟通。
下面结合图5对场景二进行描述。
图5中的(a)示出了终端设备解锁之后的一种界面示意图,如图5中的(a)所示,用户的终端界面上可以包括多个应用程序,图中仅仅示出了用于打电话的应用程序和畅连应用程序。实际应用中,也还可以有其他应用程序,本申请实施例不做限制。响应于用户点击畅连应用程序APP的操作,终端设备可以打开畅连应用程序APP,界面示意图如图5中的(b)所示。在图5中的(b)中可以看出在“畅连应用程序”的选项卡里显示近期联系过的联系人。用户可以直接从“畅连应用程序”选项卡中选择想要进行联系的人,也可以通过点击“通讯录”选项卡来找,例如,用户点击“通讯录”选项卡,响应于该点击操作,终端设备展示“通讯录”的界面示意图如图5中的(c)所示,在图5中的(c)中,用户可以选择“畅连应用程序”选项卡,里面所显示的联系人都是注册了“畅连应用程序”APP的用户,用户可以通过“畅连应用程序”APP与该选项卡里展示的用户进行沟通。畅连应用程序选项卡中,一个联系人可以对应一个或多个图标,其中,图标401是指两个用户可以通过畅连应用程序进行视频聊天,图标402是指两个用户可以通过畅连应用程序的聊天界面进行聊天,在聊天界面上可以发送文字、音频或视频等内容。响应于用户点击丽丽旁边的图标402,终端设备展示的界面示意图如图5中的(d)所示,图5中的(d)为用户与丽丽的聊天界面,用户可以在该界面上发送聊天内容给丽丽,如图5中的(d)所示,用户将发送“丽丽,一起去看电影吧?”。
在一种可能地实施方式中,用户在与丽丽的聊天界面上发送“丽丽,一起去看电影吧?”的聊天记录,上述图1c中的预测意图识别模块212的获取模块2121可以获取到该聊天记录,决策模块根据该聊天记录确定出预测意图为“查询电影院”,且对应的槽位“地点”为“当前的位置的附近区域”,进一步,获取模块2121可以获取到用户当前的位置信息,进而由决策模块2122将槽位“地点”填充为“用户查询到的当前的位置信息”。由数据挖掘模块213去内容服务器查询到结果后返回给终端设备201,并通过AI接口模块252传输至畅连应用程序应用模块25。在一种可能地实施方式中,由渲染模块253对结果进行渲染之后,通过消息处理模块251以小艺的身份将其发送至终端设备201的用户与丽丽的聊天界面上,如图5中的(e)所示。另一方面,消息处理模块251确定该聊天界面的聊天成员还包括丽丽,则可以将小艺发送的查询到的结果通过网络上传至应用程序服务器242,当畅连应用程序应用为畅连应用程序APP时,应用程序服务器242也可以称为畅连应用程序的服务器,进而由应用程序服务器242将该查询到的结果发送至丽丽的终端设备上。最终展示的结果如图5中的(e)所示,小艺在用户与丽丽的聊天界面上发送了查询结果之后,用户可以在自己的终端设备上看到,丽丽也可以在丽丽的终端设备上看到。本申请实施例中提到的第二服务器可以是指应用程序服务器。
进一步,丽丽也可以在该聊天界面进行聊天,图5中的(f)示出了丽丽发送了聊天内容“哇,这个功能可真酷!”的界面示意图。
场景三,两个用户在交谈过程中,需要查询附近电影院时,可以直接@小艺,命令其查询周围的电影院。
图6中的(a)为用户的终端设备处于锁屏模式时,接收到来自丽丽的信息的界面示意图,如图6中的(a)所示,在用户的锁屏界面出现内容“您收到一条来自丽丽的信息”。用户可以直接点击该条信息,则响应于用户的点击操作,终端设备可以打开该畅连应用程序,并展示用户与丽丽聊天的界面示意图,该界面示意图可以如图6中的(b)所示,在该界面上可以看到丽丽发送的聊天记录“梓江,明天周末休息,我们去电影院看个电影吧?”。
用户可以主动地向小艺发送命令,如图6中的(c)所示,用户可以在与丽丽的聊天界面上直接向小艺发送用户命令“好啊,@小艺,推荐一下附近的电影院”。图6中的(c)展示了用户编辑用户命令的界面示意图,用户点击界面上的“发送”按钮之后,AI引擎模块里的目标意图识别模块211可以通过分发模块2111获取到该用户命令,并通过自然语言理解模块2112确定目标意图为“查询电影院”,并通过数据挖掘模块213进行槽位匹配,当确定地点为附近区域时,可以进一步通过数据挖掘模块213获取到用户的位置信息,并将用户的位置信息确定为槽位“地点”的内容。进而数据挖掘模块213可以向内容服务器查询附近的电影院,并将得到的结果通过AI接口模块252返回至畅连应用程序,并通过渲染模块253进行渲染之后,以小艺的身份展示查询到的附近的电影院。如图6中的(d)所示。
另一方面,消息处理模块251确定该聊天界面的聊天成员还包括丽丽,则可以将小艺发送的查询到的结果通过网络上传至应用程序服务器242,当畅连应用程序应用为畅连应用程序APP时,应用程序服务器242也可以称为畅连应用程序的服务器,进而由应用程序服务器242将该查询到的结果发送至丽丽的终端设备上。最终展示的结果如图6中的(d)所示,小艺在用户与丽丽的聊天界面上发送了查询结果之后,用户可以在自己的终端设备上看到,丽丽也可以在丽丽的终端设备上看到。
进一步,丽丽也可以在该聊天界面进行聊天,图6中的(e)示出了丽丽发送了聊天内容“哇,这个功能可真酷!”的界面示意图。
在场景三中,上述图6中的(a)中,用户可以直接点击锁屏屏幕上的通知消息,以直接打开畅连应用程序应用的用户与丽丽的聊天界面。本申请实施例中还可以另外提供一种用户打开与丽丽单聊的界面的方法,如图7中的(a)所示,在锁屏界面上显示“您收到一条来自丽丽的信息”,用户可以对终端设备进行解锁,解锁方式可以是指纹解锁、人脸识别解锁或者密码解锁等,方式不限。图7中的(b)示出了终端设备解锁之后的一种界面示意图,如图7中的(b)所示,用户的终端界面上可以包括多个应用程序,图中仅仅示出了用于打电话的应用程序和畅连应用程序。实际应用中,也还可以有其他应用程序,本申请实施例不做限制。响应于用户点击畅连应用程序APP的操作,终端设备可以打开畅连应用程序APP,界面示意图如图7中的(c)所示。在图7中的(c)中可以看出在“畅连应用程序”的选项卡里显示近期联系过的联系人,在顶部可以显示最近联系的联系人。如图7中的(c)所示,当有新消息时,可以在联系人的头像或名字上有一些标识,例如可以有一个小黑点、或者小气泡等等,本申请实施例不做限制,该标识仅仅是提示用户有新的未读信息。用户可以点击图7中的(c)所示界面上的“丽丽”选项,响应于该操作,终端设备将打开如上述图6中的(b)所示的用户与丽丽的单聊界面。
本申请实施例中的术语“系统”和“网络”可被互换使用。“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
以及,除非有特别说明,本申请实施例提及“第一”、“第二”等序数词是用于对多个对象进行区分,不用于限定多个对象的顺序、时序、优先级或者重要程度。例如,第一服务器和第二服务器,只是为了区分不同的服务器,而并不是表示这两个服务器的优先级或者重要程度等的不同。
需要说明的是,上述各个消息的名称仅仅是作为示例,随着通信技术的演变,上述任意消息均可能改变其名称,但不管其名称如何发生变化,只要其含义与本申请上述消息的含义相同,则均落入本申请的保护范围之内。
上述主要从各个网元之间交互的角度对本申请提供的方案进行了介绍。可以理解的是,上述各网元为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本发明能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
根据前述方法,图8为本申请实施例提供的通信装置的结构示意图,如图8所示,该通信装置可以为终端设备,也可以为芯片或电路,比如可设置于终端设备的芯片或电路。
进一步的,该通信装置1301还可以进一步包括总线系统,其中,处理器1302、存储器1304、收发器1303可以通过总线系统相连。应理解,上述处理器1302可以为前述图1e中的处理器110。
可以理解,本申请实施例中的存储器1304可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。本申请实施例中的存储器1304为前述图1e中的内部存储器121。
该通信装置1301对应上述方法中的终端设备的情况下,该通信装置可以包括处理器1302、收发器1303和存储器1304。该存储器1304用于存储指令,该处理器1302用于执行该存储器1304存储的指令,以实现如上图1a至图7中所示的任一项或任多项对应的方法中终端设备的相关方案。
在一种可能地实施方式中,处理器1302用于获取第一信息,第一信息包括终端设备的位置信息;当第一信息满足预设的第一条件,显示第二信息;第二信息包括跟第一信息关联的待推送内容或者待推送内容的链接;第一条件包括:终端设备的位置信息对应的位置位于第一区域,第一区域的类型属于预设的区域类型中的一个。
在一种可能地实施方式中,第二信息来自第一服务器,或者,第二信息来自终端设备预存的信息。
在一种可能地实施方式中,处理器1302,具体用于:在畅连应用程序的聊天界面上显示第二信息。
在一种可能地实施方式中,畅连应用程序包括至少一个聊天群组;处理器1302,具体用于:确定出满足预设的第二条件的第一聊天群组;在第一聊天群组的聊天界面显示第二信息。
在一种可能地实施方式中,还包括收发器1303,用于:向第二服务器发送第二请求, 第二请求携带第二信息;其中,第二请求用于请求第二服务器将第二信息显示在N个第二用户中的第二用户所登录的终端设备上。
在一种可能地实施方式中,N个第二用户登录的终端设备包括以下内容中的至少一项:智能手机、智慧大屏、智能音箱、智能手环、平板电脑。
该通信装置所涉及的与本申请实施例提供的技术方案相关的概念,解释和详细说明及其他步骤请参见前述方法或其他实施例中关于这些内容的描述,此处不做赘述。
根据前述方法,图9为本申请实施例提供的通信装置的结构示意图,如图9所示,通信装置1401可以包括通信接口1403、处理器1402和存储器1404。通信接口1403,用于输入和/或输出信息;处理器1402,用于执行计算机程序或指令,使得通信装置1401实现上述图1a至图7的相关方案中终端设备侧的方法,或使得通信装置1401实现上述图1a至图7的相关方案中服务器侧的方法。本申请实施例中,通信接口1403可以实现上述图8的收发器1303所实现的方案,处理器1402可以实现上述图8的处理器1302所实现的方案,存储器1404可以实现上述图8的存储器1304所实现的方案,在此不再赘述。
基于以上实施例以及相同构思,图10为本申请实施例提供的通信装置的示意图,如图10所示,该通信装置1501可以为终端设备,也可以为芯片或电路,比如可设置于终端设备的芯片或电路。
该通信装置可以对应上述方法中的终端设备。该通信装置可以实现如上图1a至图7中所示的任一项或任多项对应的方法中终端设备所执行的步骤。该通信装置可以包括处理单元1502、通信单元1503和存储单元1504。
其中,处理单元1502可以是处理器或控制器,例如可以是通用中央处理器(central processing unit,CPU),通用处理器,数字信号处理(digital signal processing,DSP),专用集成电路(application specific integrated circuits,ASIC),现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包括一个或多个微处理器组合,DSP和微处理器的组合等等。存储单元1504可以是存储器。通信单元1503是一种该装置的接口电路,用于从其它装置接收信号。例如,当该装置以芯片的方式实现时,该通信单元1503是该芯片用于从其它芯片或装置接收信号的接口电路,或者,是该芯片用于向其它芯片或装置发送信号的接口电路。
该通信装置1501可以为上述任一实施例中的终端设备,还可以为芯片。例如,当通信装置1501为终端设备时,该处理单元1502例如可以是处理器,该通信单元1503例如可以是收发器。可选的,该收发器可以包括射频电路,该存储单元例如可以是存储器。例如,当通信装置1501为芯片时,该处理单元1502例如可以是处理器,该通信单元1503例如可以是输入/输出接口、管脚或电路等。该处理单元1502可执行存储单元存储的计算机执行指令,可选地,该存储单元为该芯片内的存储单元,如寄存器、缓存等,该存储单元还可以是该会话管理网元内的位于该芯片外部的存储单元,如只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)等。
在一种可能地实施方式中,处理单元1502用于获取第一信息,第一信息包括终端设备的位置信息;当第一信息满足预设的第一条件,显示第二信息;第二信息包括跟第一信 息关联的待推送内容或者待推送内容的链接;第一条件包括:终端设备的位置信息对应的位置位于第一区域,第一区域的类型属于预设的区域类型中的一个。
For the concepts, explanations, detailed descriptions, and other steps of the communication apparatus that relate to the technical solutions provided in the embodiments of this application, refer to the descriptions of these contents in the foregoing methods or other embodiments. Details are not repeated here.
It can be understood that, for the functions of the units in the foregoing communication apparatus 1501, reference may be made to the implementations of the corresponding method embodiments. Details are not repeated here.
It should be understood that the above division of the units of the communication apparatus is merely a division of logical functions; in actual implementation, the units may be wholly or partly integrated into one physical entity, or may be physically separate. In this embodiment of this application, the communication unit 1503 may be implemented by the transceiver 1303 in FIG. 8, and the processing unit 1502 may be implemented by the processor 1302 in FIG. 8.
According to the methods provided in the embodiments of this application, this application further provides a computer program product. The computer program product includes computer program code or instructions; when the computer program code or instructions run on a computer, the computer is caused to perform the method of any one of the embodiments shown in FIG. 1a to FIG. 7.
According to the methods provided in the embodiments of this application, this application further provides a computer-readable storage medium. The computer-readable medium stores program code; when the program code runs on a computer, the computer is caused to perform the method of any one of the embodiments shown in FIG. 1a to FIG. 7.
According to the methods provided in the embodiments of this application, this application further provides a chip system. The chip system may include a processor. The processor is coupled to a memory and may be configured to perform the method of any one of the embodiments shown in FIG. 1a to FIG. 7. Optionally, the chip system further includes a memory. The memory is configured to store a computer program (which may also be referred to as code or instructions). The processor is configured to call and run the computer program from the memory, so that a device on which the chip system is installed performs the method of any one of the embodiments shown in FIG. 1a to FIG. 7.
According to the methods provided in the embodiments of this application, this application further provides a system, which includes one or more of the foregoing terminal devices and one or more servers.
The foregoing embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (digital subscriber line, DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (digital video disc, DVD)), a semiconductor medium (for example, a solid-state drive (solid state disc, SSD)), or the like.
It should be noted that a portion of this patent application document contains material subject to copyright protection. The copyright owner reserves the copyright, except for making copies of the patent documents or the recorded contents of patent documents of the patent office.
The server and the terminal device in the foregoing apparatus embodiments correspond to the server or the terminal device in the method embodiments, and the corresponding steps are performed by the corresponding modules or units. For example, the communication unit (transceiver) performs the receiving or sending steps in the method embodiments, and steps other than sending and receiving may be performed by the processing unit (processor). For the functions of specific units, refer to the corresponding method embodiments. There may be one or more processors.
The terms "component", "module", "system", and the like used in this specification indicate a computer-related entity, hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, a thread of execution, a program, and/or a computer. As an illustration, both an application running on a computing device and the computing device itself may be components. One or more components may reside within a process and/or a thread of execution, and a component may be located on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer-readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes, for example, based on a signal having one or more data packets (such as data from two components interacting with another component in a local system or a distributed system, and/or interacting with other systems by way of the signal across a network such as the Internet).
A person of ordinary skill in the art may be aware that the various illustrative logical blocks (illustrative logical block) and steps (step) described with reference to the embodiments disclosed in this specification can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of this application.
A person skilled in the art may clearly understand that, for the convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments. Details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division of logical functions; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disc.
The foregoing is merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (15)

  1. A content pushing method, characterized by comprising:
    obtaining, by a terminal device, first information, wherein the first information comprises location information of the terminal device; and
    when the first information satisfies a preset first condition, displaying, by the terminal device, second information, wherein the second information comprises to-be-pushed content associated with the first information or a link to the to-be-pushed content;
    wherein the first condition comprises: a location corresponding to the location information of the terminal device is located in a first region, and a type of the first region is one of preset region types.
  2. The method according to claim 1, characterized in that the second information comes from a first server, or the second information comes from information prestored in the terminal device.
  3. The method according to claim 1 or 2, characterized in that the displaying, by the terminal device, the second information comprises:
    displaying, by the terminal device, the second information on a chat interface of the MeeTime application.
  4. The method according to claim 3, characterized in that the MeeTime application comprises at least one chat group;
    the displaying, by the terminal device, the second information comprises:
    determining, by the terminal device, a first chat group that satisfies a preset second condition; and
    displaying, by the terminal device, the second information on a chat interface of the first chat group;
    wherein the second condition comprises at least one of the following:
    members of the first chat group comprise a first user and N second users, a distance between the first user and each of M second users among the N second users is not greater than a distance threshold, N is a positive integer greater than 1, M is a positive integer not greater than N, and a ratio of M to N is not less than a preset value;
    subscription information corresponding to the first chat group comprises a type of the second information;
    chat records of the first chat group within a preset time period relate to the first region;
    a tag value of the first chat group matches the type of the second information.
  5. The method according to claim 4, characterized in that, after the displaying, by the terminal device, the second information on the chat interface of the first chat group, the method further comprises:
    sending, by the terminal device, a second request to a second server, wherein the second request carries the second information;
    wherein the second request is used to request the second server to display the second information on terminal devices logged in to by the second users among the N second users.
  6. The method according to claim 4 or 5, characterized in that the terminal devices logged in to by the N second users comprise at least one of the following:
    a smartphone, a smart screen, a smart speaker, a smart band, or a tablet computer.
  7. A communication apparatus, characterized in that the communication apparatus comprises: one or more processors and one or more memories, wherein the one or more memories store one or more computer-executable programs, and when the one or more computer-executable programs are executed by the one or more processors, the communication apparatus is caused to:
    obtain first information, wherein the first information comprises location information of a terminal device; and
    when the first information satisfies a preset first condition, display second information, wherein the second information comprises to-be-pushed content associated with the first information or a link to the to-be-pushed content;
    wherein the first condition comprises: a location corresponding to the location information of the terminal device is located in a first region, and a type of the first region is one of preset region types.
  8. The apparatus according to claim 7, characterized in that the second information comes from a first server, or the second information comes from information prestored in the terminal device.
  9. The apparatus according to claim 7 or 8, characterized in that the processor is specifically configured to:
    display the second information on a chat interface of the MeeTime application.
  10. The apparatus according to claim 9, characterized in that the MeeTime application comprises at least one chat group;
    the processor is specifically configured to:
    determine a first chat group that satisfies a preset second condition; and
    display the second information on a chat interface of the first chat group;
    wherein the second condition comprises at least one of the following:
    members of the first chat group comprise a first user and N second users, a distance between the first user and each of M second users among the N second users is not greater than a distance threshold, N is a positive integer, M is a positive integer not greater than N, and a ratio of M to N is equal to a preset value;
    subscription information corresponding to the first chat group comprises a type of the second information;
    chat records of the first chat group within a preset time period relate to the first region.
  11. The apparatus according to claim 10, characterized by further comprising a transceiver, configured to:
    send a second request to a second server, wherein the second request carries the second information;
    wherein the second request is used to request the second server to display the second information on terminal devices logged in to by the second users among the N second users.
  12. The apparatus according to claim 10 or 11, characterized in that the terminal devices logged in to by the N second users comprise at least one of the following:
    a smartphone, a smart screen, a smart speaker, a smart band, or a tablet computer.
  13. A communication apparatus, characterized in that the apparatus comprises a processor and a communication interface, wherein
    the communication interface is configured to input and/or output information; and
    the processor is configured to execute a computer-executable program, so that the method according to any one of claims 1 to 6 is performed.
  14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer-executable program, and when the computer-executable program is invoked by a computer, the computer is caused to perform the method according to any one of claims 1 to 6.
  15. A chip system, characterized by comprising:
    a communication interface, configured to input and/or output information; and
    a processor, configured to execute a computer-executable program, so that a device on which the chip system is installed performs the method according to any one of claims 1 to 6.
PCT/CN2021/116865 2020-10-22 2021-09-07 Content pushing method, apparatus, storage medium, and chip system WO2022083328A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21881747.6A EP4213461A4 (en) 2020-10-22 2021-09-07 CONTENT PUSHING METHOD AND APPARATUS, STORAGE MEDIUM AND ELECTRONIC CHIP SYSTEM
US18/304,941 US20230262017A1 (en) 2020-10-22 2023-04-21 Content Pushing Method, Apparatus, Storage Medium, and Chip System

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202011142477 2020-10-22
CN202011142477.5 2020-10-22
CN202011502425.4 2020-12-17
CN202011502425.4A CN114465975B (zh) Content pushing method, apparatus, storage medium, and chip system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/304,941 Continuation US20230262017A1 (en) 2020-10-22 2023-04-21 Content Pushing Method, Apparatus, Storage Medium, and Chip System

Publications (1)

Publication Number Publication Date
WO2022083328A1 true WO2022083328A1 (zh) 2022-04-28

Family

ID=81291600

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116865 WO2022083328A1 (zh) 2020-10-22 2021-09-07 一种内容推送方法、装置、存储介质和芯片系统

Country Status (3)

Country Link
US (1) US20230262017A1 (zh)
EP (1) EP4213461A4 (zh)
WO (1) WO2022083328A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117097793B (zh) * 2023-10-19 2023-12-15 Honor Device Co., Ltd. Message pushing method, terminal, and server

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103369462A (zh) * 2012-04-11 2013-10-23 Tencent Technology (Shenzhen) Co., Ltd. LBS-based reminder information output method and system
CN103379013A (zh) * 2012-04-12 2013-10-30 Tencent Technology (Shenzhen) Co., Ltd. Method and system for providing geographic information based on instant messaging
CN104199936A (zh) * 2014-09-09 2014-12-10 Lenovo (Beijing) Co., Ltd. Information processing method and apparatus
CN108322611A (zh) * 2018-01-31 2018-07-24 Nubia Technology Co., Ltd. Lock-screen information pushing method and device, and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8490003B2 (en) * 2010-12-03 2013-07-16 International Business Machines Corporation Dynamic proximity based text exchange within a group session
US10810322B2 (en) * 2017-12-05 2020-10-20 Microsoft Technology Licensing, Llc Sharing user information with and between bots


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4213461A4 *

Also Published As

Publication number Publication date
US20230262017A1 (en) 2023-08-17
EP4213461A1 (en) 2023-07-19
EP4213461A4 (en) 2024-03-13

Similar Documents

Publication Publication Date Title
WO2021129688A1 (zh) Display method and related product
WO2021013158A1 (zh) Display method and related apparatus
WO2020221072A1 (zh) Semantic parsing method and server
WO2022052776A1 (zh) Human-computer interaction method, electronic device, and system
WO2022262541A1 (zh) Notification display method and electronic device
WO2020207326A1 (zh) Method for sending dialog message, and electronic device
WO2021057408A1 (zh) Method, apparatus, and device for executing commands
CN110910872A (zh) Voice interaction method and apparatus
WO2022037407A1 (zh) Method for replying to message, electronic device, and system
WO2021249281A1 (zh) Interaction method for electronic device, and electronic device
WO2020233556A1 (zh) Call content processing method and electronic device
KR20170100175A (ko) Electronic device and operation method of electronic device
WO2022152024A1 (zh) Widget display method and electronic device
CN109981881B (zh) Image classification method and electronic device
US20220366327A1 (en) Information sharing method for smart scene service and related apparatus
CN111835904A (zh) Method for launching application based on context awareness and user profiling, and electronic device
US20230262017A1 (en) Content Pushing Method, Apparatus, Storage Medium, and Chip System
WO2021218837A1 (zh) Reminder method and related apparatus
CN114465975B (zh) Content pushing method, apparatus, storage medium, and chip system
CN113742460A (zh) Method and apparatus for generating virtual character
WO2023005711A1 (zh) Service recommendation method and electronic device
US20220374465A1 (en) Icon based tagging
EP4336357A1 (en) Message processing method and related apparatus
WO2022089276A1 (zh) Favorites processing method and related apparatus
WO2020216144A1 (zh) Method for adding email contact, and electronic device

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021881747

Country of ref document: EP

Effective date: 20230414

NENP Non-entry into the national phase

Ref country code: DE