CN111597435A - Voice search method and device and electronic equipment

Voice search method and device and electronic equipment

Info

Publication number
CN111597435A
CN111597435A (application CN202010296999.4A, granted as CN111597435B)
Authority
CN
China
Prior art keywords
user
content
information
module
electronic device
Prior art date
Legal status
Granted
Application number
CN202010296999.4A
Other languages
Chinese (zh)
Other versions
CN111597435B (en)
Inventor
黄晓娴
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010296999.4A priority Critical patent/CN111597435B/en
Publication of CN111597435A publication Critical patent/CN111597435A/en
Application granted granted Critical
Publication of CN111597435B publication Critical patent/CN111597435B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9538 Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a voice search method, a voice search apparatus and an electronic device, relates to the field of communication technology, and aims to solve the problem that the voice search function of existing electronic devices is relatively limited. The method comprises: acquiring first voiceprint information and first content from first voice data; determining a first user according to the first voiceprint information, and searching for second content that matches the first content and corresponds to the first user; and displaying the second content and displaying a target identifier, wherein the target identifier is an identifier indicating the first user in the electronic device. The method is applied to voice search scenarios.

Description

Voice search method and device and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a voice search method and device and electronic equipment.
Background
With the rapid development of communication technology, electronic devices are more and more widely used, and users have higher and higher requirements for the performance of the electronic devices.
Currently, electronic devices have a voice search function. Generally, a user may trigger the electronic device to perform voice search through the voice assistant of the electronic device. Specifically, after the user triggers the electronic device to enable the voice assistant, the user may speak to trigger the voice assistant to perform an operation corresponding to the spoken content. For example, when the user speaks a sentence, the voice assistant may search the server for content related to that sentence, such as web pages, articles, or applications.
However, in the above manner, the voice assistant of the electronic device usually searches for relevant content only according to the collected voice content, so the voice search function of the electronic device is relatively limited.
Disclosure of Invention
The embodiment of the invention provides a voice search method, a voice search apparatus and an electronic device, so as to solve the problem that the voice search function of existing electronic devices is relatively limited.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a voice search method, where the method includes: acquiring first voiceprint information and first content in first voice data; determining a first user according to the first voiceprint information, and searching second content which is matched with the first content and corresponds to the first user; and displaying the second content and displaying a target identifier, wherein the target identifier is an identifier indicating the first user in the electronic equipment.
In a second aspect, an embodiment of the present invention provides a voice search apparatus, where the voice search apparatus is applied to an electronic device, and the voice search apparatus includes: the device comprises an acquisition module, a determination module, a search module and a display module. The acquisition module is used for acquiring first voiceprint information and first content in the first voice data; the determining module is used for determining a first user according to the first voiceprint information acquired by the acquiring module; the searching module is used for searching second content which is matched with the first content acquired by the acquiring module and corresponds to the first user determined by the determining module; and the display module is used for displaying the second content searched by the search module and displaying the target identifier, wherein the target identifier is the identifier which indicates the first user in the voice search device.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the voice search method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the voice search method as in the first aspect.
In the embodiment of the invention, first voiceprint information and first content in first voice data can be acquired; a first user is determined according to the first voiceprint information, and second content that matches the first content and corresponds to the first user is searched for; and the second content is displayed together with a target identifier (the identifier indicating the first user in the electronic device). With this scheme, on one hand, during a voice search the electronic device can determine the user providing the first voice data (namely the first user) according to the first voiceprint information in the first voice data, and can then search for content that matches the first content in the first voice data and corresponds to the first user. The electronic device thus searches for related content according to the provider of the voice data, which improves the relevance of the searched content. On the other hand, the electronic device can display the searched second content and display the target identifier indicating the first user, so that the search result presented to the user includes not only the searched content but also the identifier of the provider of the first voice data (namely the first user). It can therefore be clearly shown to the user that the second content is content searched according to the voice data of the first user, so that the voice search function of the electronic device is diversified.
Drawings
Fig. 1 is a schematic diagram of the architecture of an Android operating system according to an embodiment of the present invention;
Fig. 2 is a flowchart of a voice search method according to an embodiment of the present invention;
Fig. 3 is a first schematic diagram of an interface to which the voice search method according to an embodiment of the present invention is applied;
Fig. 4 is a second schematic diagram of an interface to which the voice search method according to an embodiment of the present invention is applied;
Fig. 5 is a third schematic diagram of an interface to which the voice search method according to an embodiment of the present invention is applied;
Fig. 6 is a fourth schematic diagram of an interface to which the voice search method according to an embodiment of the present invention is applied;
Fig. 7 is a fifth schematic diagram of an interface to which the voice search method according to an embodiment of the present invention is applied;
Fig. 8 is a schematic structural diagram of a voice search apparatus according to an embodiment of the present invention;
Fig. 9 is a schematic hardware diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first voiceprint information and the second voiceprint information are for distinguishing different voiceprint information, and are not for describing a specific order of the voiceprint information.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, an illustration or a description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present a concept in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of elements means two or more elements, and the like.
The embodiment of the invention provides a voice search method, a voice search apparatus and an electronic device, which can acquire first voiceprint information and first content in first voice data; determine a first user according to the first voiceprint information, and search for second content that matches the first content and corresponds to the first user; and display the second content and display a target identifier (the identifier indicating the first user in the electronic device). With this scheme, on one hand, during a voice search the electronic device can determine the user providing the first voice data (namely the first user) according to the first voiceprint information in the first voice data, and can then search for content that matches the first content in the first voice data and corresponds to the first user. The electronic device thus searches for related content according to the provider of the voice data, which improves the relevance of the searched content. On the other hand, the electronic device can display the searched second content and display the target identifier indicating the first user, so that the search result presented to the user includes not only the searched content but also the identifier of the provider of the first voice data (namely the first user). It can therefore be clearly shown to the user that the second content is content searched according to the voice data of the first user, so that the voice search function of the electronic device is diversified.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment to which the voice search method provided by the embodiment of the present invention is applied, taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the voice search method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the voice search method may operate based on the android operating system shown in fig. 1. Namely, the processor or the electronic device can implement the voice search method provided by the embodiment of the invention by running the software program in the android operating system.
The electronic device in the embodiment of the invention may be a mobile terminal or a non-mobile terminal. For example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile terminal may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like; the embodiment of the present invention is not specifically limited.
The execution subject of the voice search method provided in the embodiment of the present invention may be the electronic device, a functional module and/or a functional entity capable of implementing the voice search method in the electronic device, or a voice search apparatus in the embodiment of the present invention, which may be specifically determined according to actual use requirements, and the embodiment of the present invention is not limited. The following takes an electronic device as an example to exemplarily describe the voice search method provided by the embodiment of the present invention.
In the embodiment of the invention, after the electronic device starts the voice search function, if the electronic device collects voice data, the electronic device can search in the electronic device according to the voice content in the collected voice data, so as to obtain a search result corresponding to the content in the voice data. Specifically, after the electronic device collects a piece of voice data (e.g., the first voice data in the embodiment of the present invention), the electronic device may obtain the voice content (e.g., the first content in the embodiment of the present invention) and the voiceprint information (e.g., the first voiceprint information in the embodiment of the present invention) in the voice data. The electronic device may then determine the provider of the first voice data (e.g., the first user in the embodiment of the present invention) according to the voiceprint information, and search for content (e.g., the second content in the embodiment of the present invention) that matches the voice content and corresponds to that user. After the content is found, the electronic device may display it together with an identifier (e.g., the target identifier in the embodiment of the present invention) indicating the provider of the first voice data in the electronic device. In this way, the electronic device can inform the user that the content was searched according to the voice data provided by the user indicated by the identifier, so that the voice search function of the electronic device is diversified.
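To make this flow concrete, the following is a minimal, platform-independent Kotlin sketch of the acquire, determine, search and display sequence. The Voiceprint and UserProfile types and the dot-product similarity check are illustrative stand-ins only; the embodiment does not prescribe a particular voiceprint model or matching algorithm.
```kotlin
// Minimal sketch of the voice-search flow: extract voiceprint and text,
// resolve the speaker, search content bound to that speaker, return both
// for display. All types here are illustrative stand-ins, not a platform API.

data class Voiceprint(val features: List<Double>)

data class UserProfile(
    val identifier: String,        // target identifier shown in the UI
    val voiceprint: Voiceprint     // "second voiceprint information" stored in advance
)

data class SearchResult(val ownerIdentifier: String, val items: List<String>)

// Hypothetical similarity check; a real system would use a speaker-verification model.
fun matches(a: Voiceprint, b: Voiceprint, threshold: Double = 0.9): Boolean =
    a.features.zip(b.features).sumOf { (x, y) -> x * y } >= threshold

fun voiceSearch(
    firstVoiceprint: Voiceprint,                 // from the first voice data
    firstContent: String,                        // text recognised from the first voice data
    knownUsers: List<UserProfile>,               // users bound in the electronic device
    localContent: Map<String, List<String>>      // user identifier -> stored content
): SearchResult? {
    // S202: determine the first user from the first voiceprint information.
    val firstUser = knownUsers.firstOrNull { matches(it.voiceprint, firstVoiceprint) }
        ?: return null
    // Search second content that matches the first content and corresponds to that user.
    val hits = localContent[firstUser.identifier].orEmpty()
        .filter { it.contains(firstContent, ignoreCase = true) }
    // S203: the caller displays the hits together with the target identifier.
    return SearchResult(firstUser.identifier, hits)
}
```
The steps S201 to S203 described below correspond to the acquire, determine-and-search, and display stages of this sketch.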
It should be noted that the voice search method provided in the embodiment of the present invention may be applied to a scenario in which a user interacts with another user (for example, a first user in the embodiment of the present invention) through an electronic device.
Certainly, in actual implementation, the voice search method provided in the embodiment of the present invention may also be applied to any other possible scenarios, which may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
The following describes an exemplary speech search method according to an embodiment of the present invention with reference to the drawings.
As shown in fig. 2, an embodiment of the present invention provides a voice search method, which includes the following steps S201 to S203.
S201, the electronic equipment acquires first voiceprint information and first content in first voice data.
In an embodiment of the present invention, in a process of performing a voice search by an electronic device, the electronic device may first obtain the first voiceprint information and the first content from the first voice data, so that the electronic device may determine a provider (for example, a first user in the embodiment of the present invention) of the first voice data according to the first voiceprint information, and search for a content that matches the first content and corresponds to the first user.
It should be noted that, in the embodiment of the present invention, the first user may be a user other than an owner user of the electronic device.
In order to clearly distinguish the user from the first user, in the embodiment of the present invention, unless otherwise specified, "the user" is used to indicate the owner user of the electronic device, and "the first user" is used to indicate a user other than the owner user of the electronic device.
In an embodiment of the present invention, the first voiceprint information may be voiceprint information of a provider of the first voice data, and the first content may be text information in the first voice data.
In the embodiment of the present invention, the electronic device may convert the first voice data into text data, so that the first content may be acquired.
Optionally, in the embodiment of the present invention, before the electronic device acquires the first voiceprint information and the first content in the first voice data, the electronic device may collect the first voice data first.
Optionally, in the embodiment of the present invention, the electronic device may collect the first voice data in either of the following two manners:
Manner one: the electronic device collects the first voice data through a voice assistant in the electronic device.
Manner two: in the case where the user performs voice interaction with the first user through the electronic device, the electronic device collects the first voice data in real time.
Optionally, for manner two, the scenario in which the user performs voice interaction with the first user through the electronic device may include at least one of the following: the user makes a system call of the electronic device (for example, a mobile phone call) with the first user through the electronic device; the user makes a voice call or a video call with the first user through a social application program in the electronic device; or the electronic device receives voice information sent by the first user through a social application program in the electronic device.
Certainly, in actual implementation, the scenario in which the user performs voice interaction with the first user through the electronic device may also include any other possible scenario, which may be determined according to actual usage requirements, and the embodiment of the present invention is not limited.
Optionally, in an embodiment of the present invention, the first user may include a plurality of other users, that is, the first voice data may include voice data provided by a plurality of users, so that the electronic device may respectively obtain voiceprint information and voice content of each piece of voice data in the first voice data, thereby obtaining a plurality of voiceprint information and a plurality of voice content, and each voice content has corresponding voiceprint information.
For example, assume that in the first voice data the user "Xiao Ming" says "send me the file for the XX meeting" and "check Application 1 and get back to me", and the user "Xiao Hong" says "remember to upload today's photos" and "you still need to send that mail". The first user then includes two users, namely "Xiao Ming" and "Xiao Hong": the voiceprint information corresponding to the voice contents "send me the file for the XX meeting" and "check Application 1 and get back to me" is the voiceprint information of "Xiao Ming", and the voiceprint information corresponding to the voice contents "remember to upload today's photos" and "you still need to send that mail" is the voiceprint information of "Xiao Hong".
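As a rough illustration of how the per-speaker contents in this example could be grouped once the first voice data has been segmented, the short Kotlin sketch below groups utterances by a voiceprint label. The Utterance type, the speaker labels and the assumption that segmentation has already produced labelled utterances are illustrative and not part of the embodiment.
```kotlin
// Sketch of splitting one piece of first voice data into per-speaker content,
// as in the multi-speaker example above. Labels and utterances are illustrative;
// a real system would obtain them from speech segmentation and recognition.

data class Utterance(val voiceprintId: String, val text: String)

fun groupBySpeaker(utterances: List<Utterance>): Map<String, List<String>> =
    utterances.groupBy({ it.voiceprintId }, { it.text })

fun main() {
    val firstVoiceData = listOf(
        Utterance("xiaoming", "send me the file for the XX meeting"),
        Utterance("xiaoming", "check Application 1 and get back to me"),
        Utterance("xiaohong", "remember to upload today's photos"),
        Utterance("xiaohong", "you still need to send that mail")
    )
    // Each speaker's voiceprint label is associated with the content they provided.
    groupBySpeaker(firstVoiceData).forEach { (speaker, contents) ->
        println("$speaker -> $contents")
    }
}
```
Running the sketch prints each speaker label followed by the voice contents attributed to it, mirroring how each voice content is associated with the corresponding voiceprint information.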
S202, the electronic equipment determines a first user according to the first voiceprint information, and searches second content which is matched with the first content and corresponds to the first user.
In the embodiment of the present invention, after the electronic device acquires the first voiceprint information and the first content, the electronic device may determine the first user according to the first voiceprint information, and search for a second content that is matched with the first content and corresponds to the first user, so that a search result of performing a voice search according to the first voice data may be obtained.
It is to be understood that, in the embodiment of the present invention, the first user may be a provider of the first voice data.
Optionally, in the embodiment of the present invention, an identifier of the first user (namely the target identifier in the embodiment of the present invention) and voiceprint information of the first user (namely the second voiceprint information in the embodiment of the present invention) may be stored in the electronic device in advance, so that the target identifier and the second voiceprint information have an association relationship, that is, the target identifier and the second voiceprint information are bound. Therefore, after the electronic device obtains the first voiceprint information, the electronic device may determine the second voiceprint information matching the first voiceprint information in the electronic device, and then determine, according to the second voiceprint information, the target identifier associated with it, so that the electronic device can determine the first user.
It is to be understood that the target identifier may be used to indicate the first user, and when the electronic device determines the target identifier, the electronic device may determine the first user.
It should be noted that, in the embodiment of the present invention, matching the first voiceprint information with the second voiceprint information may be understood as: the user indicated by the first voiceprint information and the user indicated by the second voiceprint information are the same user. Specifically, in the embodiment of the present invention, the user indicated by the first voiceprint information and the second voiceprint information may both be the first user.
In addition, for readability, the manner in which the electronic device correspondingly stores the target identifier and the second voiceprint information is described in detail in the following embodiments and is not repeated here.
Optionally, in the embodiment of the present invention, the manner in which the electronic device searches for the second content that matches the first content and corresponds to the first user may include the following first manner and second manner, which are exemplarily described below.
Manner one: the electronic device searches within the electronic device using the keywords or key phrases in the first content to obtain content that is the same as, related to, or similar to those keywords or key phrases, and determines, from that content, the content corresponding to the first user to obtain the second content.
Manner two: the electronic device performs semantic analysis on the first content to obtain first semantics, then searches within the electronic device using the first semantics to obtain content that is the same as, related to, or similar to the first semantics, and determines, from that content, the content corresponding to the first user to obtain the second content.
It should be noted that, in the embodiment of the present invention, the matching between the second content and the first content specifically may be: the second content is the same as, related to, or similar to the first content.
The above-mentioned content corresponding to the first user may be understood as: the specific content indicated by the content includes information related to the first user, such as account information of the first user, an identifier of the first user, and the like.
Certainly, in actual implementation, the electronic device may also search for the second content matched with the first content and corresponding to the first user in any other possible manner, which may be determined specifically according to actual usage requirements, and the embodiment of the present invention is not limited.
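As a rough illustration of the two search manners described above, the sketch below reduces keyword search and semantic search to simple token matching over content already associated with the first user. The stop-word list, the tokenisation and the overlap score are assumptions made only for this example; the embodiment does not limit the concrete search algorithm.
```kotlin
// Sketch of the two matching strategies. Keyword extraction and "semantic"
// matching are reduced to token overlap here purely for illustration.

private val stopWords = setOf("a", "an", "the", "for", "to", "me", "and")

fun keywords(text: String): Set<String> =
    text.lowercase().split(Regex("\\W+")).filter { it.isNotBlank() && it !in stopWords }.toSet()

// Manner one: keyword search over content already corresponding to the first user.
fun keywordSearch(firstContent: String, userContent: List<String>): List<String> {
    val query = keywords(firstContent)
    return userContent.filter { item -> keywords(item).any { it in query } }
}

// Manner two: crude "semantic" search scored by token overlap ratio.
fun semanticSearch(firstContent: String, userContent: List<String>, minScore: Double = 0.3): List<String> {
    val query = keywords(firstContent)
    return userContent.filter { item ->
        val overlap = keywords(item).intersect(query).size.toDouble()
        query.isNotEmpty() && overlap / query.size >= minScore
    }
}
```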
S203, the electronic equipment displays the second content and displays the target identification.
The target identifier may be an identifier indicating a first user in the electronic device.
In this embodiment of the present invention, after the electronic device determines the first user and finds the second content, the electronic device may display the second content and display the target identifier indicating the first user. In this way, the electronic device can clearly present to the user that the second content is the content searched according to the voice data of the first user indicated by the target identifier.
Optionally, in this embodiment of the present invention, the electronic device may display the second content and the target identifier correspondingly. Specifically, displaying the target identifier and the second content correspondingly may mean displaying the target identifier and the second content in the same display area.
In practical implementation, the electronic device may also display the target identifier and the second content in any other possible manner, so as to indicate to the user that the second content is the content searched according to the voice data of the first user indicated by the target identifier. This may be determined according to actual usage requirements, and the embodiment of the present invention is not limited.
Optionally, in this embodiment of the present invention, when the first voice data includes voice data of multiple users, after the electronic device searches for the second content, the electronic device may distinguish (or segment) the second content by using voiceprint information of each of the multiple users, so that the user identifiers matched with the voiceprint information of each user respectively correspond to the corresponding search result content.
Optionally, in the embodiment of the present invention, the electronic device may correspondingly display the user identifiers and the search result contents corresponding to the user identifiers, so that the contents respectively searched by the electronic device according to the voice data of different users can be clearly shown to the user.
In the embodiment of the invention, since the electronic device can distinguish the searched content according to voiceprint information, in a multi-person conversation the electronic device can directly distinguish, according to the voiceprint information, the matters assigned by each user, thereby reducing the memory burden on the user.
Next, the above-mentioned S201 to S203 will be described by way of example with reference to fig. 3.
For example, assume that the first voice data is a piece of voice data from a multi-person conversation, in which the user "Xiao Ming" says "send me the file for the XX meeting" and "check Application 1 and get back to me", and the user "Xiao Hong" says "remember to upload today's photos" and "you still need to send that mail". The electronic device may obtain the voiceprint information of the user "Xiao Ming" and the voiceprint information of the user "Xiao Hong" from the piece of voice data, and obtain the voice contents "send me the file for the XX meeting", "check Application 1 and get back to me", "remember to upload today's photos", and "you still need to send that mail". The electronic device may then search, through a system global search, for content that matches these voice contents and corresponds to the user ("Xiao Ming" or "Xiao Hong") indicated by the corresponding voiceprint information. Finally, as shown in fig. 3, the electronic device may display the identifier of the user "Xiao Ming" and the corresponding search result content on the search result page, and display the identifier of the user "Xiao Hong" and the corresponding search result content on the search result page. In this way, the content searched by the electronic device according to the voice data of each user can be clearly shown to the user.
With the voice search method provided by the embodiment of the invention, on one hand, during a voice search the electronic device can determine the user providing the first voice data (namely the first user) according to the first voiceprint information in the first voice data, and can then search for content that matches the first content in the first voice data and corresponds to the first user. The electronic device thus searches for related content according to the provider of the voice data, which improves the relevance of the searched content. On the other hand, the electronic device can display the searched second content and display the target identifier indicating the first user, so that the search result presented to the user includes not only the searched content but also the identifier of the provider of the first voice data (namely the first user). It can therefore be clearly shown to the user that the second content is content searched according to the voice data of the first user, so that the voice search function of the electronic device is diversified.
Optionally, in this embodiment of the present invention, when the first content is different, the content searched by the electronic device may be different, and thus the second content may also be different.
Accordingly, after the electronic device displays the second content, the user can trigger the electronic device to perform different interactive operations through the input of the second content. Specifically, the method may include two cases, namely a case one and a case two.
The two cases (case one and case two) described above are respectively exemplified below.
Case one: in the case where the first content includes information indicating a first application program, the second content may be an identifier of the first application program. After the electronic device displays the second content, if the user wants to trigger the electronic device to perform a related interactive operation in the first application program, the user may, through an input (e.g., the first input in the embodiment of the present invention) to the identifier of the first application program, trigger the electronic device to display an interface corresponding to the first user in the first application program (e.g., the first application program interface in the embodiment of the present invention), so that the user can directly interact with the first user in that interface.
It can be understood that, in the embodiment of the present invention, the first application program may include content such as account information of the first user.
Optionally, in this embodiment of the present invention, the first application may be an application having an interaction function, for example, a social application, a game application, a shopping application, a financing application, and the like, which may be determined specifically according to actual usage requirements, and this embodiment of the present invention is not limited.
For example, in the embodiment of the present invention, after S203, the voice search method provided in the embodiment of the present invention may further include S204 and S205 described below.
S204, the electronic device receives a first input by the user on the identifier of the first application program.
S205, the electronic equipment responds to the first input and displays a first application program interface corresponding to the first user in the first application program.
In the embodiment of the present invention, after the electronic device displays the second content (specifically, the identifier of the first application program) and displays the target identifier, the user may trigger the electronic device to display the first application program interface through the first input of the identifier of the first application program, so that the user may interact with the first user through the electronic device directly in the first application program interface.
Optionally, in this embodiment of the present invention, the identifier of the first application may be an icon of the first application or a name of the first application. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
Optionally, in this embodiment of the present invention, the first application program interface may be an interactive interface corresponding to a first user (specifically, account information of the first user, for example, the first account information in the embodiment of the present invention) in the first application program. For example, a chat interface corresponding to the first user, or a mail editing interface corresponding to the first user, etc.
Optionally, in the embodiment of the present invention, the first application program interface may include the target identifier and a plurality of quick interaction controls, so that a user may trigger the electronic device to display an interface of an interaction operation corresponding to the interaction control through inputting a quick interaction control of the plurality of quick interaction controls, so that the user may perform the interaction operation with the first user in the interaction interface. Therefore, the mode of interacting with the first user through the electronic equipment can be convenient.
Illustratively, the S205 may be specifically implemented by the S205a described below.
S205a, the electronic equipment responds to the first input, displays the target identification and displays at least one shortcut interaction control corresponding to the target identification.
Each shortcut interaction control in the at least one shortcut interaction control may be used to indicate an interaction operation corresponding to first account information, where the first account information may be account information in the first application program that has an association relationship with the target identifier.
In the embodiment of the present invention, after the electronic device receives the first input, the electronic device may, in response to the first input, display the target identifier and the at least one shortcut interaction control. The user can then select, from the at least one shortcut interaction control, the shortcut interaction control corresponding to the interactive operation the user wants the electronic device to perform, so as to trigger the electronic device to display the interaction page corresponding to that interactive operation between the user and the first user in the first application program. In this way, the user can flexibly interact with the first user through the first application program.
Optionally, in the embodiment of the present invention, the electronic device may display the at least one shortcut interaction control in a floating window form. For example, in the manner shown in FIG. 4, at least one quick interaction control 33 is displayed.
It should be noted that, in the embodiment of the present invention, by displaying the target identifier (for example, the identifier 31 of the user "Xiao Hong" in fig. 4), the electronic device may inform the user that the interactive operations corresponding to the at least one shortcut interaction control are all interactive operations related to the first user.
Optionally, in this embodiment of the present invention, the electronic device may determine, according to the first semantics (the semantics obtained by analyzing the first content), the target interactive operation that the first user wants to perform with the user, and then display the target shortcut interaction control (used for indicating the target interactive operation) among the at least one shortcut interaction control in a preset display manner (for example, highlighted or enlarged, as shown in fig. 4). The user can thus be prompted as to which target interactive operation the first user wants to perform with the user.
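One possible way to realise the preset display manner is to score each shortcut interaction control against the first semantics and mark the best match for highlighted or enlarged display. The sketch below shows only that selection step; the control labels and the token-overlap scoring are assumptions rather than part of the embodiment.
```kotlin
// Sketch of selecting the target shortcut interaction control from the first
// semantics so that the UI can show it in a preset (e.g. highlighted) manner.

data class ControlItem(val label: String, val highlighted: Boolean = false)

fun markTargetControl(firstSemantics: String, controls: List<ControlItem>): List<ControlItem> {
    val tokens = firstSemantics.lowercase().split(Regex("\\W+")).toSet()
    val best = controls.maxByOrNull { control ->
        control.label.lowercase().split(Regex("\\W+")).count { it in tokens }
    }
    // The control indicating the target interactive operation is marked for preset display.
    return controls.map { it.copy(highlighted = it === best) }
}
```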
Optionally, in this embodiment of the present invention, the electronic device may search, in a target database of the electronic device, first account information having an association relationship with the target identifier (identifier indicating the first user).
It can be understood that, the target database may store the target identifier, the account information in the application program, and the voiceprint information in advance, so that, when the electronic device determines the first voiceprint information and the target identifier, the electronic device may determine the first account information corresponding to the target identifier in the first application program.
That is, the voiceprint information, the target identification, and the first account information are correspondingly stored (i.e., bound) in the target database in advance.
Optionally, in the embodiment of the present invention, the step S205a may be specifically implemented by the following steps S205a1 to S205a3.
S205a1, the electronic device responds to the first input, and determines first account information from the first application program according to the target identification.
S205a2, the electronic device creates a shortcut interaction control corresponding to the first account information in the first application program, and obtains at least one shortcut interaction control.
S205a3, the electronic equipment displays at least one shortcut interaction control.
In the embodiment of the present invention, after the electronic device receives the first input, the electronic device responds to the first input, and may determine the first account information from the first application program according to the target identifier, and then the electronic device may create a shortcut interaction control corresponding to the first account information in the first application program, so as to obtain the at least one shortcut interaction control and display the at least one shortcut interaction control, so that the user may select the corresponding shortcut interaction control according to the actual use requirement of the user.
In the embodiment of the invention, after the electronic device determines the first account information, the electronic device can determine the interactive operation which can be performed between the user and the first account information in the first application program, so that the electronic device can create the quick interactive controls corresponding to the interactive operation.
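The control-creation steps S205a1 to S205a3 can be pictured with the hedged Kotlin sketch below. The Account and ShortcutControl types, and the idea that the first application program exposes a list of supported interactive operations, are assumptions used for illustration only.
```kotlin
// Sketch of deriving shortcut interaction controls from the first account
// information (S205a1 to S205a3). The account model is an illustrative stand-in.

data class Account(val app: String, val userIdentifier: String, val accountId: String)

data class ShortcutControl(val label: String, val action: () -> Unit)

// S205a1: look up the account bound to the target identifier in the first application program.
fun findFirstAccount(accounts: List<Account>, app: String, targetIdentifier: String): Account? =
    accounts.firstOrNull { it.app == app && it.userIdentifier == targetIdentifier }

// S205a2: create one control per interactive operation the application supports with that account.
fun createShortcutControls(account: Account, supportedOperations: List<String>): List<ShortcutControl> =
    supportedOperations.map { operation ->
        ShortcutControl("$operation ${account.userIdentifier}") {
            // S205a3 and later input: opening the interaction interface is left to the UI layer.
            println("open '$operation' interface for ${account.accountId} in ${account.app}")
        }
    }
```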
Optionally, in the embodiment of the present invention, if there is only one interactive operation that the user can perform with the first account information in the first application program, the electronic device may, in response to the first input, directly display the interaction interface in which the user performs that interactive operation with the first account information.
Optionally, in this embodiment of the present invention, assume that the first voiceprint information includes voiceprint information of a user A and voiceprint information of a user B, the first content includes an interaction X spoken by user A and related to the first application program and an interaction Y spoken by user B and related to the first application program, and the first application program includes account information a of user A and account information b of user B. Then, when the user clicks the identifier of the first application program (e.g., the "mailbox" identifier shown in fig. 3), the electronic device may, in response to the input, display the identifier of user A (e.g., the identifier 31 of "Xiao Hong" shown in fig. 4), the identifier of user B (e.g., the identifier 32 of "Xiao Ming" shown in fig. 4), and the multiple shortcut interaction controls 33 shown in fig. 4.
It should be noted that the multiple shortcut interaction controls in fig. 4 correspond to the identifier of user A by default. If the user wants to interact with user B, the user may first trigger the electronic device to select the identifier of user B, and then input one of the multiple shortcut interaction controls to trigger the electronic device to display the interaction interface between the user and user B.
Case two: in a case where the first content includes information indicating the first file, the second content may be an identifier of the first file. In this way, after the electronic device displays the second content and displays the target identifier, if the user wants to trigger the electronic device to send the first file to the first user, the user may trigger the electronic device to display account information (e.g., at least one second account information in the embodiment of the present invention) in the electronic device, which has an association relationship with the target identifier, through input of the identifier of the first file, so that the user may trigger the electronic device to send the first file to the first user through one of the account information (e.g., the target account information in the embodiment of the present invention) according to an actual usage requirement, and thus may quickly interact with the first user through the electronic device.
For example, in the embodiment of the present invention, after S203, the voice search method provided in the embodiment of the present invention may further include S206-S209 described below.
S206, the electronic equipment receives second input of the identification of the first file by the user.
And S207, the electronic equipment responds to the second input and displays at least one piece of second account information.
The at least one piece of second account information may be account information in the electronic device, which has an association relationship with the target identifier.
S208, the electronic device receives a third input of the target account information in the at least one second account information by the user.
S209, the electronic device responds to the third input and sends the first file to the target account information.
In the embodiment of the present invention, after the electronic device displays the second content and the target identifier, the user may trigger the electronic device to display the at least one piece of second account information through the second input, and then the user may trigger the electronic device to send the first file to the target account information through a third input on the target account information in the at least one piece of second account information. Specifically, the electronic device may send the first file to the first user through the target account information. Therefore, steps of the user in interaction with other users can be reduced, and the interaction mode through the electronic equipment is convenient.
It can be understood that, in the embodiment of the present invention, at least one piece of account information having an association relationship with the target identifier may be correspondingly stored in the electronic device, so that after the electronic device receives a second input of the identifier of the first file by the user, the electronic device may acquire the at least one piece of account information having an association relationship with the target identifier, and thus may obtain the at least one piece of second account information.
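The file-sending path S206 to S209 can be sketched as follows. The FileSender interface and the selection callback standing in for the user's third input are assumptions; the embodiment only requires that the first file be sent through the target account information selected from the at least one piece of second account information.
```kotlin
// Sketch of the file-sending path (S206 to S209).

data class BoundAccount(val app: String, val accountId: String)

fun interface FileSender {
    fun send(file: String, account: BoundAccount)
}

fun sendFirstFile(
    firstFile: String,
    secondAccounts: List<BoundAccount>,                   // "at least one second account information"
    chooseAccount: (List<BoundAccount>) -> BoundAccount,  // S208: the user's third input
    sender: FileSender
) {
    if (secondAccounts.isEmpty()) return                  // nothing bound to the target identifier
    val targetAccount = chooseAccount(secondAccounts)     // target account information
    sender.send(firstFile, targetAccount)                 // S209: send the first file through it
}
```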
Optionally, in this embodiment of the present invention, after the electronic device receives the second input, the electronic device may further display at least one second application identifier, and the user may select one target application identifier from the at least one second application identifier, so as to trigger the electronic device to send the first file to the first user through the target application program indicated by the target application identifier. Specifically, the electronic device may send the first file to the first user through the third account information in the target application. The third account information may be account information in the electronic device having an association relationship with the target identifier.
It is understood that the application programs indicated by the at least one second application identifier each include account information having an association relationship with the target identifier.
Optionally, in this embodiment of the present invention, after the electronic device displays the second content and the target identifier, if the user does not input anything on the second content within a period of time, the electronic device may create a schedule to remind the user that there are still unprocessed to-do items.
For example, in the embodiment of the present invention, after S203, the voice search method provided in the embodiment of the present invention may further include S210 and S211 described below.
S210, the electronic equipment determines whether the input of the user to the second content is received within a preset time length.
In this embodiment of the present invention, after the electronic device displays the second content and the target identifier, the electronic device may determine whether the user inputs on the second content within a preset time length. If the electronic device receives an input by the user on the second content within the preset time length, the electronic device may determine that the user has handled the interaction related to the second content, so the electronic device may cancel displaying the second content and the target identifier. If the electronic device does not receive an input by the user on the second content within the preset time length, the electronic device may execute S211 described below.
S211, the electronic equipment creates a target schedule.
The target schedule may include the target identifier and the second content, which are stored correspondingly.
In the embodiment of the invention, if the electronic device does not receive the input of the user on the second content within the preset time length, the electronic device can determine that the user has not processed the interaction related to the second content, so that the electronic device can create the target schedule, and the user can directly trigger the electronic device to execute the corresponding interactive operation from the target schedule.
Optionally, in the embodiment of the present invention, the electronic device may create the target schedule in a schedule application program in the electronic device, or create the target schedule in a calendar application program in the electronic device. The method and the device can be determined according to actual use requirements, and the embodiment of the invention is not limited.
For example, assuming that 4 item identifiers are displayed and the electronic device does not receive an input by the user on 3 of the 4 item identifiers within the preset time length, the electronic device may create schedules corresponding to those 3 item identifiers in the calendar application program, and display the interface information 41 shown in fig. 5 to prompt the user that there are 3 unfinished interactions to be handled.
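A minimal sketch of the schedule-creation logic in S210 and S211 is given below, assuming that the preset time length is checked against a timestamp and that the schedule is an in-memory list; neither assumption is mandated by the embodiment.
```kotlin
// Sketch of S210 and S211: if the displayed second content receives no input
// within the preset time length, a schedule entry binding the target identifier
// and the second content is created.

data class ScheduleEntry(val targetIdentifier: String, val secondContent: String)

class PendingInteraction(
    val targetIdentifier: String,
    val secondContent: String,
    private val shownAtMillis: Long,
    private val presetTimeLengthMillis: Long
) {
    var handled: Boolean = false   // set to true when the user inputs on the second content

    fun expired(nowMillis: Long): Boolean =
        !handled && nowMillis - shownAtMillis >= presetTimeLengthMillis
}

// S211: create a target schedule for every pending interaction that expired without input.
fun createSchedulesForExpired(
    pending: List<PendingInteraction>,
    nowMillis: Long,
    schedule: MutableList<ScheduleEntry>
) {
    pending.filter { it.expired(nowMillis) }
        .forEach { schedule.add(ScheduleEntry(it.targetIdentifier, it.secondContent)) }
}
```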
Optionally, in this embodiment of the present invention, before the electronic device acquires the first voiceprint information and the first content in the first voice data, the electronic device may create an association relationship between the user identifier, the account information, and the voiceprint information in the electronic device. In this way, after the electronic device collects voice data (e.g., first voice data), the electronic device may determine, according to first voiceprint information in the first voice data, voiceprint information that matches the first voiceprint information, so as to determine a user identifier (e.g., the target identifier) that matches the first voiceprint information.
For example, in the embodiment of the present invention, before the above step S201, the voice search method provided in the embodiment of the present invention may further include the following steps S212 to S214.
S212, the electronic equipment acquires second voiceprint information in the second voice data.
S213, the electronic equipment acquires the target identification and at least one account information according to the displayed session information or the received user input information.
S214, the electronic device creates an association relation of the second voiceprint information, the target identification and the at least one account information.
The first voiceprint information can be voiceprint information matched with the second voiceprint information.
It is to be understood that the first voice data and the second voice data may be voice data of the same user.
In this embodiment of the present invention, before the electronic device acquires the first voiceprint information and the first content in the first voice data, the electronic device may acquire the second voiceprint information in the second voice data, acquire the target identifier and the at least one piece of account information according to session information displayed by the electronic device or user input information received by the electronic device, and then establish an association relationship between the second voiceprint information, the target identifier and the at least one piece of account information.
It should be noted that, in the embodiment of the present invention, for the description about the manner in which the electronic device acquires the second voice data, reference may be specifically made to the detailed description about the manner in which the electronic device acquires the first voice data in the embodiment, and in order to avoid repetition, details are not described here again.
In an embodiment of the present invention, establishing the association relationship among the second voiceprint information, the target identifier, and the at least one account information may specifically be: correspondingly storing the second voiceprint information, the target identifier, and the at least one account information in a target database. The target database may store at least one group of voiceprint information, user identifier, and account information.
Optionally, in this embodiment of the present invention, when the target identifier and the at least one piece of account information are obtained by the electronic device according to the session information displayed by the electronic device, after the electronic device obtains the second voiceprint information, the electronic device may search in the target database, and determine whether the target database stores voiceprint information matched with the second voiceprint information. If the second voiceprint information is not stored, the electronic device may determine that the voiceprint information corresponding to the first user is empty, and then the electronic device may acquire the target identifier (for example, a nickname of the first user) and at least one piece of account information (for example, account information of the first user corresponding to the session interface) from a currently displayed session interface, and create an association relationship between the second voiceprint information, the target identifier and the at least one piece of account information, that is, bind the second voiceprint information, the target identifier and the at least one piece of account information.
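The binding created in S212 to S214 can be pictured as records in a target database that associate voiceprint information with a target identifier and account information. The sketch below is an in-memory illustration; the similarity threshold and the record layout are assumptions rather than part of the embodiment.
```kotlin
// Sketch of the binding created in S212 to S214: the second voiceprint information,
// the target identifier, and at least one account information are stored together.

data class VoiceprintRecord(
    val voiceprint: List<Double>,          // second voiceprint information
    val targetIdentifier: String,          // identifier of the first user
    val accounts: MutableList<String>      // at least one account information
)

class TargetDatabase(private val records: MutableList<VoiceprintRecord> = mutableListOf()) {

    // Hypothetical similarity test; a real system would use speaker verification.
    private fun similar(a: List<Double>, b: List<Double>, threshold: Double = 0.9): Boolean =
        a.zip(b).sumOf { (x, y) -> x * y } >= threshold

    fun findByVoiceprint(voiceprint: List<Double>): VoiceprintRecord? =
        records.firstOrNull { similar(it.voiceprint, voiceprint) }

    // S214: create the association only if no matching voiceprint is stored yet.
    fun bind(voiceprint: List<Double>, targetIdentifier: String, accounts: List<String>): VoiceprintRecord =
        findByVoiceprint(voiceprint)
            ?: VoiceprintRecord(voiceprint, targetIdentifier, accounts.toMutableList())
                .also { records.add(it) }
}
```
With such a binding in place, determining the first user in S202 amounts to a findByVoiceprint lookup followed by reading the stored target identifier and account information.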
Optionally, in this embodiment of the present invention, after the electronic device creates the association relationship between the second voiceprint information, the target identifier and the at least one piece of account information, the electronic device may display a prompt message (for example, the prompt message 51 shown in fig. 6) to notify the user that the electronic device has bound the second voiceprint information, the target identifier and the at least one piece of account information.
Optionally, in this embodiment of the present invention, the prompt message may further include a "modification" control. If the user wants to modify the information in the prompt message (for example, to unbind it), the user may, through an input on the "modification" control, trigger the electronic device to display an interface corresponding to the second voiceprint information, the target identifier and the at least one piece of account information (for example, an interface corresponding to the target database), and then modify the information on that interface.
Optionally, in this embodiment of the present invention, when the target identifier and the at least one piece of account information are obtained by the electronic device according to received user input information, the electronic device may obtain the second voiceprint information and third content in the second voice data (where the second voice data may be obtained through the voice search function), and search for results according to the third content. When the search results include an application program, if the user triggers the electronic device to run the application program, the electronic device may establish the association relationship between the second voiceprint information, the target identifier and the at least one piece of account information according to the user's operation in the application program. Specifically, two possible implementations, implementation (1) and implementation (2), may be used.
These two implementations are described below by way of example; a code sketch covering both follows implementation (2).
In implementation (1), if the electronic device detects that the user triggers an operation of creating a new contact in the application program (for example, the user taps a "new contact" control in the application program), then, after the information of the contact (which may include the target identifier and account information; a contact here is a user other than the owner of the electronic device) has been created, the electronic device may obtain the information of the contact and bind the second voiceprint information with it, thereby creating the association relationship between the second voiceprint information, the target identifier and the at least one piece of account information.
In implementation (2), if the electronic device detects that the user triggers interaction with a contact already stored in the electronic device (for example, opens a friend chat window), the electronic device may obtain the information of the stored contact and bind the second voiceprint information with it, thereby creating the association relationship between the second voiceprint information, the target identifier and the at least one piece of account information.
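The sketch below covers both implementations, again reusing the earlier TargetDatabase sketch; ContactEvent and its variants are assumed names for the user's operation in the application and do not come from the patent.

```kotlin
// What the user did inside the application found by the voice search.
sealed class ContactEvent {
    data class NewContactCreated(val nickname: String, val account: AccountInfo) : ContactEvent()   // implementation (1)
    data class StoredContactOpened(val nickname: String, val account: AccountInfo) : ContactEvent() // implementation (2)
    object None : ContactEvent()                                                                    // no contact-related operation
}

// Bind the second voiceprint to the contact involved in the user's operation;
// when no contact-related operation is detected, the device would instead show
// a dialog asking whether to create an association (see the next paragraph).
fun bindFromUserInput(db: TargetDatabase, secondVoiceprint: String, event: ContactEvent): VoiceprintBinding? =
    when (event) {
        is ContactEvent.NewContactCreated -> db.createAssociation(secondVoiceprint, event.nickname, listOf(event.account))
        is ContactEvent.StoredContactOpened -> db.createAssociation(secondVoiceprint, event.nickname, listOf(event.account))
        ContactEvent.None -> null
    }
```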
Optionally, in this embodiment of the present invention, if the electronic device does not detect that the user triggers the electronic device to perform an operation related to a contact, the electronic device may pop up a dialog box, so as to prompt the user whether to create an association relationship between the second voiceprint information and a certain contact.
Optionally, in the embodiment of the present invention, after the electronic device creates the association relationship between the second voiceprint information, the target identifier and the at least one piece of account information, the relationship between the voiceprint information and the target identifier and the at least one piece of account information (which may also be referred to as the relationship between the voiceprint information and the contact) can be viewed on a voiceprint binding interface of the electronic device.
Illustratively, as shown in fig. 7, on the "voiceprint binding" interface displayed by the electronic device, it can be seen that voiceprint information 1 has an association relationship with the role "minired" (i.e., the target identifier) and four pieces of account information.
It should be noted that, because one piece of voiceprint information can accurately indicate one natural person (i.e., one user), and one natural person often registers accounts in different social applications, one piece of voiceprint information can correspond to contacts in multiple different social applications. The electronic device can therefore uniformly represent the contacts bound to the voiceprint information, i.e., the natural person indicated by the voiceprint information, through a "role" (i.e., the target identifier in the embodiment of the present invention).
Optionally, in the embodiment of the present invention, the user may further edit the user identifier and the account information (for example, the number of the contact) corresponding to the voiceprint information on the voiceprint binding interface.
Optionally, in this embodiment of the present invention, after the electronic device creates the association relationship between the second voiceprint information, the target identifier and the at least one piece of account information, if the electronic device acquires third voice data, determines that third voiceprint information in the third voice data matches the second voiceprint information, and determines that the account information corresponding to the third voice data (hereinafter referred to as fourth account information) has not been associated with the second voiceprint information, the electronic device may additionally create an association relationship between the second voiceprint information and the fourth account information.
It is understood that after the electronic device creates the association relationship between the second voiceprint information and the fourth account information, the fourth account information also has an association relationship with the target identifier.
Optionally, in the embodiment of the present invention, after the electronic device creates the association relationship between the second voiceprint information and the fourth account information, one piece of account information (that is, the fourth account information) is added to the account information corresponding to the second voiceprint information on the "voiceprint binding" interface; it may also be understood that one contact is added to the contacts corresponding to the second voiceprint information.
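Continuing the same sketch, the following function illustrates how the fourth account information could be appended to an existing binding; the function name and the boolean return convention are assumptions.

```kotlin
// When third voice data matches a stored voiceprint but carries account
// information that is not yet bound, add that account to the existing binding so
// it also appears under the same target identifier ("role") on the
// "voiceprint binding" interface. Sketch only; error handling is omitted.
fun addAccountIfMissing(db: TargetDatabase, voiceprintInfo: String, fourthAccount: AccountInfo): Boolean {
    val binding = db.findByVoiceprint(voiceprintInfo) ?: return false // no matching voiceprint stored
    if (fourthAccount !in binding.accounts) {
        binding.accounts.add(fourthAccount)
    }
    return true
}
```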
As shown in fig. 8, an embodiment of the present invention provides a voice search apparatus 600. The voice search apparatus 600 may be applied to an electronic device and may include an obtaining module 601, a determining module 602, a searching module 603, and a display module 604. The obtaining module 601 is configured to obtain first voiceprint information and first content in first voice data; the determining module 602 is configured to determine a first user according to the first voiceprint information obtained by the obtaining module 601; the searching module 603 is configured to search for second content that matches the first content obtained by the obtaining module 601 and corresponds to the first user determined by the determining module 602; and the display module 604 is configured to display the second content found by the searching module 603 and to display a target identifier, where the target identifier is an identifier indicating the first user in the voice search apparatus.
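As an illustration of how the four modules could be wired together, here is a minimal sketch; the interfaces, their signatures and the use of plain strings for voiceprints and content are assumptions, not the apparatus's actual implementation.

```kotlin
// Each interface mirrors one module of the voice search apparatus 600 (names assumed).
interface ObtainingModule { fun obtain(firstVoiceData: ByteArray): Pair<String, String> }        // (first voiceprint, first content)
interface DeterminingModule { fun determineFirstUser(voiceprint: String): String }               // returns the target identifier
interface SearchingModule { fun search(firstContent: String, firstUser: String): List<String> }  // second content
interface DisplayModule { fun show(secondContent: List<String>, targetIdentifier: String) }

class VoiceSearchApparatus(
    private val obtaining: ObtainingModule,
    private val determining: DeterminingModule,
    private val searching: SearchingModule,
    private val display: DisplayModule
) {
    // End-to-end flow: obtain -> determine -> search -> display together with the target identifier.
    fun onFirstVoiceData(firstVoiceData: ByteArray) {
        val (voiceprint, firstContent) = obtaining.obtain(firstVoiceData)
        val firstUser = determining.determineFirstUser(voiceprint)
        val secondContent = searching.search(firstContent, firstUser)
        display.show(secondContent, firstUser)
    }
}
```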
Optionally, in this embodiment of the present invention, the first content includes information used for indicating the first application program, and the second content is an identifier of the first application program; the voice searching device also comprises a receiving module; the receiving module is used for receiving first input of the identification of the first application program from a user after the second content is displayed and the target identification is displayed on the display module; the display module is further used for responding to the first input received by the receiving module and displaying a first application program interface corresponding to the first user in the first application program.
Optionally, in the embodiment of the present invention, the display module is specifically configured to display the target identifier and to display at least one quick interaction control corresponding to the target identifier, where each quick interaction control is used to indicate one interaction operation corresponding to the first account information; the first account information is account information that has an association relationship with the target identifier in the first application program.
Optionally, in the embodiment of the present invention, the voice search apparatus further includes a creating module; the determining module is further used for determining first account information from the first application program according to the target identifier before the display module displays the at least one quick interaction control corresponding to the target identifier; and the creating module is used for creating the quick interaction control corresponding to the first account information determined by the determining module in the first application program to obtain at least one quick interaction control.
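A small sketch of what the creating module might produce for the first account information; the two interaction operations listed are invented examples, since the patent only says that each control indicates one interaction operation.

```kotlin
// One quick interaction control: a label plus the interaction it triggers for
// the first account information. The concrete operations below are assumed examples.
data class QuickInteractionControl(val label: String, val onTap: () -> Unit)

// Build one control per interaction operation for the first account information.
fun createQuickInteractionControls(firstAccount: AccountInfo): List<QuickInteractionControl> = listOf(
    QuickInteractionControl("Send message") { println("open chat with ${firstAccount.accountId} in ${firstAccount.appName}") },
    QuickInteractionControl("Voice call") { println("start a voice call to ${firstAccount.accountId}") }
)
```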
Optionally, in this embodiment of the present invention, the first content includes information used for indicating the first file, and the second content is an identifier of the first file; the voice search apparatus further includes a receiving module and a sending module. The receiving module is configured to receive a second input of the user on the identifier of the first file after the display module displays the second content and the target identifier; the display module is further configured to display at least one piece of second account information in response to the second input received by the receiving module, where the at least one piece of second account information is account information in the voice search apparatus that has an association relationship with the target identifier; the receiving module is further configured to receive a third input of the user on target account information in the at least one piece of second account information displayed by the display module; and the sending module is configured to send the first file to the target account information in response to the third input received by the receiving module.
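The file-sending interaction can be sketched as follows, reusing the TargetDatabase from the earlier sketch; the send callback is a hypothetical stand-in for whatever messaging interface the sending module would use.

```kotlin
// Second input on the first file's identifier: list every account associated with
// the target identifier; third input: send the file to the chosen account.
class FileSendFlow(
    private val db: TargetDatabase,
    private val send: (AccountInfo, String) -> Unit   // hypothetical stand-in for the sending module
) {
    // Second input: show the accounts that have an association relationship with the target identifier.
    fun onSecondInput(targetIdentifier: String): List<AccountInfo> =
        db.accountsForIdentifier(targetIdentifier)

    // Third input: the user chose one account; send the first file to it.
    fun onThirdInput(targetAccount: AccountInfo, firstFilePath: String) =
        send(targetAccount, firstFilePath)
}
```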
Optionally, in this embodiment of the present invention, the voice search apparatus further includes an establishing module. The obtaining module is further configured to obtain second voiceprint information in second voice data before obtaining the first voiceprint information and the first content in the first voice data, and to obtain a target identifier and at least one piece of account information according to the displayed session information or the received user input information; the establishing module is configured to establish the association relationship between the second voiceprint information, the target identifier and the at least one piece of account information obtained by the obtaining module; and the first voiceprint information matches the second voiceprint information.
It should be noted that the voice search apparatus provided in the embodiment of the present invention can implement each process executed by the electronic device in the embodiment of the voice search method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
In addition, the voice search apparatus in the embodiment of the present invention may be applied to an electronic device, may be applied to any other possible device, or may be an independently operating device (i.e., used on its own). This may be determined according to actual use requirements and is not limited in the embodiment of the present invention.
On the one hand, when the voice search apparatus performs a voice search, it may determine the user providing the first voice data (i.e., the first user) according to the first voiceprint information in the first voice data, and then search for content that matches the first content in the first voice data and corresponds to the first user. In this way, the voice search apparatus searches for related content according to the provider of the voice data, which improves the pertinence of the searched content. On the other hand, the voice search apparatus displays the searched second content together with the target identifier indicating the first user, so the search result presented to the user includes not only the searched content but also the identifier of the provider of the first voice data (i.e., the first user). The search result is therefore displayed clearly, and because the second content is content searched according to the first user's voice data, the voice search function of the voice search apparatus is diversified.
Fig. 9 is a hardware schematic diagram of an electronic device implementing various embodiments of the invention. As shown in fig. 9, electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 may be configured to obtain first voiceprint information and first content in the first voice data, determine a first user according to the first voiceprint information, and search for second content that matches the first content and corresponds to the first user; the display unit 106 may be configured to display the second content searched by the processor 110, and display a target identifier, where the target identifier is an identifier indicating the first user in the electronic device.
It can be understood that, in the embodiment of the present invention, the obtaining module 601, the determining module 602, and the searching module 603 in the structural schematic diagram of the voice searching apparatus (for example, fig. 8) may be implemented by the processor 110; the display module 604 in the schematic structural diagram of the voice search apparatus can be implemented by the display unit 106.
On the one hand, when the electronic device performs a voice search, it may determine the user providing the first voice data (i.e., the first user) according to the first voiceprint information in the first voice data, and then search for content that matches the first content in the first voice data and corresponds to the first user. In this way, the electronic device searches for related content according to the provider of the voice data, which improves the pertinence of the searched content. On the other hand, the electronic device displays the searched second content together with the target identifier indicating the first user, so the search result presented to the user includes not only the searched content but also the identifier of the provider of the first voice data (i.e., the first user). The search result is therefore displayed clearly, and because the second content is content searched according to the first user's voice data, the voice search function of the electronic device is diversified.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call; specifically, downlink data received from a base station is passed to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include an image capturing device (e.g., a camera) 1040, a graphics processing unit (GPU) 1041, and a microphone 1042. The image capturing device 1040 captures image data of a still picture or a video, and the graphics processor 1041 processes that image data in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101 and then output.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 9, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes the processor 110 shown in fig. 9, the memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements each process of the foregoing voice search method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor shown in fig. 9, the computer program implements each process of the foregoing voice search method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A voice search method is applied to electronic equipment, and is characterized by comprising the following steps:
acquiring first voiceprint information and first content in first voice data;
determining a first user according to the first voiceprint information, and searching second content which is matched with the first content and corresponds to the first user;
and displaying the second content and displaying a target identifier, wherein the target identifier is an identifier indicating the first user in the electronic equipment.
2. The method according to claim 1, wherein the first content includes information indicating a first application program, and the second content is an identifier of the first application program;
after the displaying the second content and the displaying the target identifier of the first user, the method further includes:
receiving a first input of an identification of the first application by a user;
and responding to the first input, and displaying a first application program interface corresponding to the first user in the first application program.
3. The method of claim 2, wherein displaying the first application interface corresponding to the first user in the first application comprises:
displaying the target identification, and displaying at least one quick interaction control corresponding to the target identification, wherein each quick interaction control is used for indicating an interaction operation corresponding to the first account information;
the first account information is account information which has an association relationship with the target identifier in the first application program.
4. The method of claim 3, wherein before displaying at least one shortcut interaction identifier corresponding to the target identifier, the method further comprises:
determining the first account information from the first application program according to the target identification;
and in the first application program, creating a quick interaction control corresponding to the first account information to obtain the at least one quick interaction control.
5. The method according to claim 1, wherein the first content includes information indicating a first file, and the second content is an identifier of the first file;
after the displaying the second content and the displaying the target identifier of the first user, the method further includes:
receiving a second input of the user's identification of the first file;
responding to the second input, and displaying at least one piece of second account information, wherein the at least one piece of second account information is account information which is in the electronic equipment and has an association relation with the target identification;
receiving a third input of the target account information in the at least one second account information by the user;
in response to the third input, sending the first file to the target account information.
6. The method of claim 1, wherein before the obtaining the first voiceprint information and the first content in the first speech data, the method further comprises:
acquiring second voiceprint information in second voice data;
acquiring the target identification and at least one account information according to the displayed session information or the received user input information;
establishing an association relationship between the second voiceprint information, the target identification and the at least one account information;
and the first voiceprint information is matched with the second voiceprint information.
7. A voice search device applied to electronic equipment is characterized by comprising: the device comprises an acquisition module, a determination module, a search module and a display module;
the acquisition module is used for acquiring first voiceprint information and first content in first voice data;
the determining module is configured to determine a first user according to the first voiceprint information acquired by the acquiring module;
the searching module is configured to search for second content that matches the first content acquired by the acquiring module and corresponds to the first user determined by the determining module;
the display module is configured to display the second content searched by the search module, and display a target identifier, where the target identifier is an identifier indicating the first user in the voice search apparatus.
8. The apparatus according to claim 7, wherein the first content includes information indicating a first application, and the second content is an identifier of the first application; the voice searching device also comprises a receiving module;
the receiving module is used for receiving a first input of the identifier of the first application program from a user after the display module displays the second content and displays the target identifier;
the display module is further configured to display a first application interface corresponding to the first user in the first application in response to the first input received by the receiving module.
9. The voice search device according to claim 8, wherein the display module is specifically configured to display the target identifier and at least one shortcut interaction control corresponding to the target identifier, where each shortcut interaction control is used to indicate an interaction operation corresponding to the first account information;
the first account information is account information which has an association relationship with the target identifier in the first application program.
10. The voice search device according to claim 9, further comprising a creation module;
the determining module is further configured to determine the first account information from the first application according to the target identifier before the display module displays the at least one shortcut interaction control corresponding to the target identifier;
the creating module is configured to create, in the first application program, a shortcut interaction control corresponding to the first account information determined by the determining module, so as to obtain the at least one shortcut interaction control.
11. The apparatus according to claim 7, wherein the first content includes information indicating a first file, and the second content is an identifier of the first file; the voice searching device also comprises a receiving module and a sending module;
the receiving module is used for receiving a second input of the identifier of the first file by the user after the display module displays the second content and displays the target identifier;
the display module is further configured to display at least one piece of second account information in response to the second input received by the receiving module, where the at least one piece of second account information is account information in the voice search apparatus, which has an association relationship with the target identifier;
the receiving module is further configured to receive a third input of the target account information in the at least one second account information displayed by the display module by the user;
the sending module is configured to send the first file to the target account information in response to the third input received by the receiving module.
12. The voice search device according to claim 7, further comprising a setup module;
the obtaining module is further configured to obtain second voiceprint information in second voice data before obtaining the first voiceprint information and the first content in the first voice data; acquiring the target identification and at least one account information according to the displayed session information or the received user input information;
the establishing module is configured to establish an association relationship between the second voiceprint information, the target identifier, and the at least one account information, which are acquired by the acquiring module;
and the first voiceprint information is matched with the second voiceprint information.
13. An electronic device, characterized in that the electronic device comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the voice search method according to any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the voice search method according to one of claims 1 to 6.
CN202010296999.4A 2020-04-15 2020-04-15 Voice search method and device and electronic equipment Active CN111597435B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010296999.4A CN111597435B (en) 2020-04-15 2020-04-15 Voice search method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111597435A true CN111597435A (en) 2020-08-28
CN111597435B CN111597435B (en) 2023-08-08

Family

ID=72192026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010296999.4A Active CN111597435B (en) 2020-04-15 2020-04-15 Voice search method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111597435B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113849258A (en) * 2021-10-13 2021-12-28 北京字跳网络技术有限公司 Content display method, device, equipment and storage medium
CN115278316A (en) * 2022-06-29 2022-11-01 海信视像科技股份有限公司 Prompt language generation method and device and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100332226A1 (en) * 2009-06-30 2010-12-30 Lg Electronics Inc. Mobile terminal and controlling method thereof
CN105354285A (en) * 2015-10-30 2016-02-24 百度在线网络技术(北京)有限公司 Knowledge search method and apparatus embedded in search engine and search engine
CN106024013A (en) * 2016-04-29 2016-10-12 努比亚技术有限公司 Voice data searching method and system
US20170164049A1 (en) * 2015-12-02 2017-06-08 Le Holdings (Beijing) Co., Ltd. Recommending method and device thereof
CN107357875A (en) * 2017-07-04 2017-11-17 北京奇艺世纪科技有限公司 A kind of voice search method, device and electronic equipment
CN109286726A (en) * 2018-10-25 2019-01-29 维沃移动通信有限公司 A kind of content display method and terminal device
CN109828731A (en) * 2018-12-18 2019-05-31 维沃移动通信有限公司 A kind of searching method and terminal device
CN110990685A (en) * 2019-10-12 2020-04-10 中国平安财产保险股份有限公司 Voice search method, voice search device, voice search storage medium and voice search device based on voiceprint

Also Published As

Publication number Publication date
CN111597435B (en) 2023-08-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant