CN112309387A - Method and apparatus for processing information
- Publication number: CN112309387A
- Application number: CN202010120039.2A
- Authority: CN (China)
- Prior art keywords: target user, user, information, recognition result, intention
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
- G10L2015/221—Announcement of recognition results
- G10L2015/223—Execution procedure of a spoken command
Abstract
Embodiments of the present disclosure disclose methods and apparatus for processing information. One embodiment of the method comprises: acquiring voice information input by a target user; identifying a user intention of the target user based on the voice information; in response to not obtaining a recognition result characterizing the user intention of the target user, acquiring auxiliary information related to the target user; identifying the user intention of the target user based on the voice information and the auxiliary information, and generating candidate recognition results characterizing the user intention of the target user; presenting the obtained candidate recognition results to the target user; and in response to detecting a selection operation of the target user on the presented candidate recognition results, executing the operation corresponding to the candidate recognition result selected by the target user. This embodiment can guide the user to select the candidate recognition result that matches the user's real intention, improving both user experience and dialogue efficiency.
Description
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and an apparatus for processing information.
Background
After acquiring the voice information input by the user, an existing task-oriented dialog system needs to first recognize the user intention and then feed back information corresponding to that intention to the user. For example, if the user inputs the voice message "what's the weather like today", the task-oriented dialog system may recognize from it that the user intends to "query the weather", and may then retrieve and present weather information to the user.
Currently, speech recognition technology is generally used to recognize speech information of a user to determine user intention.
Disclosure of Invention
Embodiments of the present disclosure propose methods and apparatuses for processing information.
In a first aspect, an embodiment of the present disclosure provides a method for processing information, the method including: acquiring voice information input by a target user; identifying a user intention of the target user based on the voice information; in response to not obtaining a recognition result characterizing the user intention of the target user, acquiring auxiliary information related to the target user; identifying the user intention of the target user based on the voice information and the auxiliary information, and generating candidate recognition results characterizing the user intention of the target user; presenting the obtained candidate recognition results to the target user; and in response to detecting a selection operation of the target user on the presented candidate recognition results, executing the operation corresponding to the candidate recognition result selected by the target user.
In some embodiments, the auxiliary information comprises at least one of: user attribute information of the target user; image information obtained by photographing the environment where the target user is located; and text information input by the target user for characterizing the user intention of the target user.
In some embodiments, after identifying the user intent of the target user based on the speech information, the method further comprises: and responding to the obtained recognition result for representing the user intention of the target user, and executing the operation corresponding to the obtained recognition result.
In some embodiments, identifying the user intention of the target user based on the voice information and the auxiliary information and generating candidate recognition results characterizing the user intention of the target user includes: identifying the user intention of the target user based on the voice information and the auxiliary information, and obtaining at least two candidate recognition results characterizing the user intention of the target user. Presenting the obtained candidate recognition results to the target user includes: presenting the obtained at least two candidate recognition results to the target user. Executing the operation corresponding to the candidate recognition result selected by the target user includes: in response to detecting an operation of the target user selecting a candidate recognition result from the at least two candidate recognition results, executing the operation corresponding to the selected candidate recognition result.
In some embodiments, recognizing the user intention of the target user based on the speech information and the auxiliary information, and generating candidate recognition results for characterizing the user intention of the target user includes: recognizing the voice information by utilizing a pre-trained voice recognition model to obtain voice characteristics; identifying the auxiliary information by using a pre-trained auxiliary identification model to obtain auxiliary characteristics; and inputting the obtained voice features and the auxiliary features into a pre-trained intention recognition model, and generating a candidate recognition result for representing the user intention of the target user.
In a second aspect, an embodiment of the present disclosure provides an apparatus for processing information, the apparatus including: a first acquisition unit configured to acquire voice information input by a target user; a first recognition unit configured to recognize a user intention of a target user based on the voice information; a second acquisition unit configured to acquire auxiliary information related to the target user in response to a recognition result for characterizing a user intention of the target user not being obtained; the second recognition unit is configured to recognize the user intention of the target user based on the voice information and the auxiliary information, and generate a candidate recognition result for representing the user intention of the target user; a presentation unit configured to present the obtained candidate recognition result to a target user; and the execution unit is configured to respond to the detection of the selection operation of the target user on the presented candidate recognition result, and execute the operation corresponding to the candidate recognition result selected by the target user.
In a third aspect, an embodiment of the present disclosure provides a system for processing information, the system including an information collection module, a skill analysis module, and a guidance recommendation module, wherein: the information collection module is configured to acquire voice information input by a target user and to acquire auxiliary information related to the target user; the skill analysis module is configured to identify the user intention of the target user based on the voice information sent by the information collection module, and, in response to not obtaining a recognition result characterizing the user intention of the target user, to send an instruction to the information collection module so as to control it to send the voice information and the auxiliary information to the guidance recommendation module; the guidance recommendation module is configured to identify the user intention of the target user based on the received voice information and auxiliary information, generate candidate recognition results characterizing the user intention of the target user, present the obtained candidate recognition results to the target user, and, in response to detecting a selection operation of the target user on the presented candidate recognition results, execute the operation corresponding to the candidate recognition result selected by the target user.
In a fourth aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method of any of the embodiments of the method for processing information described above.
In a fifth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method of any of the above-described methods for processing information.
In the method and apparatus for processing information provided by the embodiments of the present disclosure, voice information input by a target user is acquired, and the user intention of the target user is identified based on that voice information. In response to not obtaining a recognition result characterizing the user intention, auxiliary information related to the target user is acquired; the user intention is then identified based on both the voice information and the auxiliary information, producing candidate recognition results characterizing the user intention. The obtained candidate recognition results are presented to the target user, and in response to detecting a selection operation of the target user on the presented candidate recognition results, the operation corresponding to the selected candidate recognition result is executed. Thus, when the user intention cannot be recognized from the voice information alone, it is identified from the voice information together with the auxiliary information, and the resulting candidate recognition results are presented to the target user, guiding the user to select the candidate recognition result that matches his or her real intention.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for processing information, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for processing information in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a method for processing information according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for processing information according to the present disclosure;
FIG. 6 is a timing diagram for one embodiment of a system for processing information according to the present disclosure;
FIG. 7 is a schematic block diagram of a computer system suitable for use with an electronic device implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the disclosed method for processing information or apparatus for processing information may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various client applications, such as a voice interaction type application, a web browser application, a search type application, an instant messaging tool, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices having a voice acquisition function, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server that provides various services, such as a voice processing server that processes voice information transmitted by users through the terminal apparatuses 101, 102, 103. The voice processing server may perform processing such as analysis on data such as the received voice information, and obtain a processing result (e.g., a recognition result for characterizing the user intention of the target user).
It should be noted that the method for processing information provided by the embodiment of the present disclosure may be executed by the terminal devices 101, 102, and 103, or may be executed by the server 105, and accordingly, the apparatus for processing information may be disposed in the terminal devices 101, 102, and 103, or may be disposed in the server 105.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In particular, in the case where data used in generating the recognition result for characterizing the user's intention of the target user does not need to be acquired from a remote place, the above system architecture may not include a network, but only a terminal device or a server.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing information in accordance with the present disclosure is shown. The method for processing information comprises the following steps:
Step 201, acquiring voice information input by a target user.

In the present embodiment, the execution body of the method for processing information (e.g., a terminal device shown in fig. 1) may acquire the voice information input by the target user through a wired or wireless connection. The target user may be a user whose input voice information is to be recognized, and specifically may be a user who initiates a voice conversation request.
In practice, after the user initiates a voice conversation request, the execution body may acquire the voice information input by the user and respond to the user based on it.
Specifically, when the execution body is a user terminal used by the target user, it may acquire the voice information through a pre-installed voice acquisition device (e.g., a microphone); when the execution body is a server, it may acquire the voice information from the user terminal used by the target user.
Step 202, identifying the user intention of the target user based on the voice information.

In this embodiment, based on the voice information obtained in step 201, the execution body can identify the user intention of the target user.
Specifically, the execution body may first convert the voice information into text information and then identify the user intention of the target user from the converted text. Various methods may be used here. As an example, the execution body may input the text information into a pre-trained intention recognition model. The intention recognition model may be any model capable of generating a recognition result, such as a neural network model or a classifier; it takes the converted text information corresponding to the voice information as input and outputs a recognition result of the user's intention.
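To make this two-stage flow concrete, here is a minimal, hypothetical Python sketch: `transcribe` stands in for any speech-to-text component, and a trivial keyword lookup replaces the trained intention recognition model. All names are illustrative, not from the patent.

```python
from typing import Optional

# Hypothetical stand-in for a speech recognition component that converts
# voice information into text information; in practice a pre-trained ASR model.
def transcribe(voice_information: bytes) -> str:
    return "what's the weather like today"

# Hypothetical stand-in for the pre-trained intention recognition model
# (e.g., a neural network or classifier). Returns a recognition result
# characterizing the user intention, or None when no intention is recognized.
def recognize_intent(text: str) -> Optional[str]:
    keyword_to_intent = {"weather": "query_weather", "song": "play_music"}
    for keyword, intent in keyword_to_intent.items():
        if keyword in text:
            return intent
    return None  # no result obtained -> proceed to acquire auxiliary info

text = transcribe(b"...raw audio bytes...")
print(recognize_intent(text))  # query_weather
```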
In general, the execution body can identify the user intention of the target user in this way; however, considering the influence of colloquial speech, environmental noise, and the like, there may also be cases where the user intention cannot be recognized from the voice information alone.
In some optional implementation manners of the embodiment, the executing body may execute, in response to obtaining a recognition result for characterizing a user intention of the target user, an operation corresponding to the obtained recognition result.
Specifically, a correspondence between recognition results characterizing user intentions and operations may be established in advance (for example, as a correspondence table or as key-value pairs). After obtaining a recognition result characterizing the user intention of the target user, the execution body may first determine the operation corresponding to that result and then execute it.
As an example, in response to obtaining the recognition result "query weather" characterizing the user intention of the target user, the execution body may search for weather information and present it to the target user. Here, searching for and presenting weather information is the operation corresponding to the obtained recognition result "query weather".
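A minimal sketch of such a pre-established correspondence, kept as a key-value mapping as the text suggests, might look as follows. The handler names are hypothetical placeholders, not from the patent.

```python
# Hypothetical operations corresponding to recognition results.
def search_and_present_weather() -> None:
    print("Today: sunny, 22 degrees")  # stand-in for a real weather lookup

def play_music() -> None:
    print("Playing a song...")

# Correspondence between recognition results and operations (key-value pairs).
INTENT_TO_OPERATION = {
    "query_weather": search_and_present_weather,
    "play_music": play_music,
}

def execute_operation(recognition_result: str) -> None:
    operation = INTENT_TO_OPERATION.get(recognition_result)
    if operation is not None:
        operation()  # execute the operation corresponding to the result

execute_operation("query_weather")  # searches for and presents weather info
```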
And step 203, in response to not obtaining the recognition result for representing the user intention of the target user, obtaining auxiliary information related to the target user.
In this embodiment, after step 202, the execution subject may acquire the auxiliary information related to the target user in response to not obtaining the recognition result for characterizing the user intention of the target user. The supplementary information may be information for recognizing a user intention of the target user in combination with the voice information input by the target user.
Specifically, the auxiliary information may be various information related to the target user, and the execution body may acquire different kinds of auxiliary information by different methods.
In some optional implementations of this embodiment, the auxiliary information may include, but is not limited to, at least one of: user attribute information of the target user; image information obtained by photographing the environment where the target user is located; and text information input by the target user for characterizing the user intention of the target user.
Here, the user attribute information may be information characterizing user attributes of the target user, such as age, gender, height, and weight. Specifically, the execution body may use locally pre-stored user attribute information as the auxiliary information, or may acquire user attribute information from another electronic device or from the target user.
In addition, in response to not obtaining a recognition result characterizing the user intention of the target user, the execution body may photograph the environment where the target user is located and use the resulting image information as auxiliary information, or may acquire, as auxiliary information, text information input by the target user for characterizing the target user's intention.
It can be understood that the environment where the user is located, the user's attributes, and similar information may influence the operation instructions the user issues. For example, when the user is in the living room, the user is more likely to issue an instruction for operating a home appliance; when the user is in the bedroom, the user is more likely to ask for a song; when the user is at a desk, the user is more likely to ask to adjust the brightness of the lights. Similarly, the daily operation habits of male and female users differ, so the user's gender attribute may also influence the operation instructions issued.
This implementation can take into account factors such as the user's environment and attributes when identifying the user's intention, which helps recommend candidate recognition results that better match the real scenario.
And step 204, recognizing the user intention of the target user based on the voice information and the auxiliary information, and generating a candidate recognition result for representing the user intention of the target user.
In this embodiment, based on the voice information obtained in step 201 and the auxiliary information obtained in step 203, the executing entity may identify the user intention of the target user, and generate a candidate identification result for representing the user intention of the target user.
Specifically, based on the voice information and the auxiliary information, the execution subject may recognize the user intention of the target user by using various methods.
As an example, the execution body may first convert the voice information into text information, adjust the converted text based on the auxiliary information, and then identify the user intention from the adjusted text. For example, suppose the execution body first converts the voice information into the text "what is the temperature now", and the auxiliary information includes image information obtained by photographing the environment where the target user is located, which indicates that the target user is currently indoors. The execution body may then adjust the converted text to "what is the temperature outside now" and identify the user intention from the adjusted text.
In some optional implementations of this embodiment, based on the voice information and the auxiliary information, the execution body may also identify the user intention of the target user as follows. First, the execution body may process the voice information with a pre-trained speech recognition model to obtain speech features. Then, it may process the auxiliary information with a pre-trained auxiliary recognition model to obtain auxiliary features. Finally, it may input the obtained speech features and auxiliary features into a pre-trained intention recognition model to generate candidate recognition results characterizing the user intention of the target user.
The speech recognition model may be any model capable of extracting speech features. The auxiliary recognition model may be chosen according to the type of information included in the auxiliary information: if the auxiliary information includes text information, the auxiliary recognition model may include models capable of extracting text features; if it includes image information, the auxiliary recognition model may include models capable of extracting image features.
By fusing the speech features of the voice information with the auxiliary features of the auxiliary information, this implementation can generate more accurate candidate recognition results, as illustrated in the sketch below.
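The following hypothetical sketch abstracts each pre-trained model as a function returning a feature vector and fuses the features by simple concatenation; real systems would use neural encoders and possibly a learned fusion. All names and numbers are illustrative assumptions.

```python
from typing import List, Tuple

# Hypothetical output of the pre-trained speech recognition model:
# a feature vector extracted from the voice information.
def speech_features(voice_information: bytes) -> List[float]:
    return [0.2, 0.7, 0.1]

# Hypothetical output of the pre-trained auxiliary recognition model:
# a feature vector extracted from the auxiliary information (text or image).
def auxiliary_features(auxiliary_information: dict) -> List[float]:
    return [0.9, 0.3]

# Stand-in for the pre-trained intention recognition model: consumes the
# fused features and emits candidate recognition results with probabilities.
def intention_model(features: List[float]) -> List[Tuple[str, float]]:
    return [("query_weather", 0.62), ("adjust_thermostat", 0.31)]

# Fusion here is plain concatenation of the two feature vectors.
fused = speech_features(b"...") + auxiliary_features({"scene": "indoor"})
candidates = intention_model(fused)
print(candidates)  # candidate results characterizing the user intention
```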
It should be noted that the models involved in the present disclosure may be trained using various existing training methods, which are not described in detail here.
And step 205, presenting the obtained candidate identification result to the target user.
In this embodiment, based on the candidate recognition result obtained in step 204, the executing entity may present the obtained candidate recognition result to the target user.
Specifically, the execution subject may present the candidate recognition result in various manners, for example, may present the candidate recognition result in an audio output manner, or may present the candidate recognition result in a screen display manner.
And step 206, in response to detecting the selection operation of the target user on the presented candidate recognition result, executing the operation corresponding to the candidate recognition result selected by the target user.
In this embodiment, after presenting the candidate recognition result, the executing entity may detect a selection operation of the target user for the presented candidate recognition result, and in response to detecting the selection operation of the target user for the presented candidate recognition result, the executing entity may execute an operation corresponding to the candidate recognition result selected by the target user.
Specifically, after the execution body presents the candidate recognition results, the target user may perform a selection operation on them. Here, the selection operation allows the target user to confirm a candidate recognition result: when the target user confirms that the user intention characterized by a candidate recognition result is his or her real intention, the target user may select that result. The selection operation may take various forms, for example, inputting voice information or clicking the screen.
As an example, the target user may select a presented candidate recognition result by inputting voice information such as "confirm this candidate recognition result is correct" or "I select the second candidate recognition result", or by clicking a confirmation button on the screen, clicking the second candidate recognition result presented on the screen, and so on.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for processing information according to the present embodiment. In the application scenario of fig. 3, the terminal device 301 may first obtain the voice information 303 (e.g., "it's cold outside today") input by the target user 302. Then, the terminal device 301 can identify the user intention of the target user 302 based on the voice information 303. The terminal device 301 may determine whether a recognition result characterizing the user intention of the target user 302 was obtained, and in response to not obtaining one, acquire the auxiliary information 304 (e.g., an image obtained by photographing the environment where the target user is located) related to the target user 302. Next, the terminal device 301 may identify the user intention of the target user 302 based on the voice information 303 and the auxiliary information 304, and generate a candidate recognition result 305 (e.g., the text "query weather") characterizing the user intention. The terminal device 301 may then present the obtained candidate recognition result 305 to the target user 302. Finally, in response to detecting the selection operation 306 of the target user on the presented candidate recognition result 305, the terminal device 301 may perform the operation 307 (e.g., presenting weather information) corresponding to the candidate recognition result 305 selected by the target user 302.
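The scenario of fig. 3 can be condensed into a minimal end-to-end control-flow sketch. Everything here is a hypothetical placeholder: `recognize` always fails so the auxiliary branch is exercised, and the candidate list and user selection are hard-coded for illustration.

```python
from typing import List, Optional

def recognize(voice: str) -> Optional[str]:
    return None  # assume recognition fails, as in the scenario of FIG. 3

def recognize_with_auxiliary(voice: str, auxiliary: str) -> List[str]:
    return ["query weather"]  # hypothetical candidate recognition results

def handle(voice: str) -> None:
    result = recognize(voice)                                 # steps 201-202
    if result is not None:
        print("executing operation for:", result)
        return
    auxiliary = "image: user is indoors"                      # step 203
    candidates = recognize_with_auxiliary(voice, auxiliary)   # step 204
    print("please choose:", candidates)                       # step 205
    chosen = candidates[0]                                    # step 206
    print("executing operation for:", chosen)

handle("it's cold outside today")
```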
The method provided by the embodiments of the present disclosure, when the user's intention cannot be recognized from the voice information alone, identifies the intention from the voice information together with the auxiliary information, obtains candidate recognition results, and presents them to the target user, thereby guiding the user to select the candidate recognition result that matches his or her real intention.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for processing information is shown. The flow 400 of the method for processing information includes the steps of:
Step 401, acquiring voice information input by a target user.

In the present embodiment, the execution body of the method for processing information (e.g., a terminal device shown in fig. 1) may acquire the voice information input by the target user through a wired or wireless connection. The target user may be a user whose input voice information is to be recognized, and specifically may be a user who initiates a voice conversation request.
Step 402, identifying the user intention of the target user based on the voice information.

In this embodiment, based on the voice information obtained in step 401, the execution body can identify the user intention of the target user.
And step 403, in response to not obtaining the recognition result for representing the user intention of the target user, obtaining auxiliary information related to the target user.
In this embodiment, after step 402, the executing body may acquire auxiliary information related to the target user in response to a recognition result that is not obtained for characterizing the user intention of the target user. The supplementary information may be information for recognizing a user intention of the target user in combination with the voice information input by the target user.
Specifically, the auxiliary information may be various information related to the target user, and the execution body may acquire different kinds of auxiliary information by different methods.
And step 404, recognizing the user intention of the target user based on the voice information and the auxiliary information, and obtaining at least two candidate recognition results for representing the user intention of the target user.
In this embodiment, based on the speech information obtained in step 401 and the auxiliary information obtained in step 403, the executing entity may identify the user intention of the target user, and obtain at least two candidate identification results for characterizing the user intention of the target user.
Specifically, the execution body may obtain the at least two candidate recognition results in various ways. For example, when the intention recognition model processes the voice information and the auxiliary information, it may output multiple candidate recognition results together with a probability for each; the execution body may then select the at least two candidates with the highest probabilities as the candidate recognition results characterizing the user intention of the target user. Alternatively, a probability threshold may be preset, and the execution body may take as candidates the at least two results whose probabilities are greater than or equal to that threshold. Here, each probability characterizes the likelihood that the user intention represented by the corresponding candidate recognition result is the actual intention of the target user. Both strategies are sketched below.
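A minimal sketch of the two selection strategies, assuming the intention recognition model has produced (candidate, probability) pairs; the candidate names and numbers are illustrative only.

```python
# Hypothetical model output: candidate recognition results with probabilities.
candidates = [
    ("query_weather", 0.48),
    ("adjust_thermostat", 0.39),
    ("play_music", 0.13),
]

# Strategy 1: keep the k most probable candidates (here k = 2).
top_k = sorted(candidates, key=lambda c: c[1], reverse=True)[:2]

# Strategy 2: keep every candidate whose probability meets a preset threshold.
PROBABILITY_THRESHOLD = 0.3
above_threshold = [c for c in candidates if c[1] >= PROBABILITY_THRESHOLD]

print(top_k)            # [('query_weather', 0.48), ('adjust_thermostat', 0.39)]
print(above_threshold)  # the same two candidates in this example
```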
And step 405, presenting the obtained at least two candidate recognition results to the target user.
In this embodiment, based on the at least two candidate recognition results obtained in step 404, the executing entity may present the obtained at least two candidate recognition results to the target user, so that the target user selects a recognition result capable of characterizing its true intention from the at least two candidate recognition results.
Step 406, in response to detecting an operation of the target user selecting a candidate recognition result from the at least two candidate recognition results, executing the operation corresponding to the selected candidate recognition result.

In this embodiment, after presenting the at least two candidate recognition results, the execution body may detect an operation of the target user selecting a candidate recognition result from them, and in response to detecting such an operation, execute the operation corresponding to the candidate recognition result selected by the target user.
It can be understood that the at least two candidate recognition results presented to the target user characterize user intentions predicted from the voice information and the auxiliary information, and the candidate recognition result the target user selects from them characterizes the target user's real intention. By presenting at least two candidate recognition results and obtaining the user's selection, the user's real intention can be determined, which improves the accuracy of the executed operation.
As can be seen from fig. 4, compared with the embodiment of fig. 2, the flow 400 of the method for processing information in this embodiment highlights the steps of presenting the generated at least two recognition results to the target user, letting the target user select among them, and then executing the operation corresponding to the selected result. The scheme described in this embodiment thus presents the identified recognition results characterizing the user's intention to the user and guides the user's selection, so that the user's real intention can be determined, the user's voice information can be responded to more accurately, and the accuracy of information processing is improved.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for processing information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for processing information of the present embodiment includes: a first acquisition unit 501, a first recognition unit 502, a second acquisition unit 503, a second recognition unit 504, a presentation unit 505, and an execution unit 506. Wherein, the first obtaining unit 501 is configured to obtain voice information input by a target user; the first recognition unit 502 is configured to recognize the user intention of the target user based on the voice information; the second obtaining unit 503 is configured to obtain auxiliary information related to the target user in response to a recognition result for characterizing the user intention of the target user not being obtained; the second recognition unit 504 is configured to recognize the user intention of the target user based on the voice information and the auxiliary information, and generate a candidate recognition result for characterizing the user intention of the target user; the presentation unit 505 is configured to present the obtained candidate recognition result to the target user; the executing unit 506 is configured to, in response to detecting the operation selected by the target user with respect to the presented candidate recognition result, execute the operation corresponding to the candidate recognition result selected by the target user.
In this embodiment, the first obtaining unit 501 of the apparatus 500 for processing information may acquire the voice information input by the target user through a wired or wireless connection. The target user may be a user whose input voice information is to be recognized, and specifically may be a user who initiates a voice conversation request.
In this embodiment, based on the voice information obtained by the first obtaining unit 501, the first identifying unit 502 may identify the user intention of the target user.
In this embodiment, the second obtaining unit 503 may obtain the auxiliary information related to the target user in response to the first identifying unit 502 not obtaining the identification result for characterizing the user intention of the target user. The supplementary information may be information for recognizing a user intention of the target user in combination with the voice information input by the target user.
Specifically, the auxiliary information may be various information related to the target user. The second obtaining unit 503 may obtain different auxiliary information by using different methods.
In this embodiment, based on the speech information obtained by the first obtaining unit 501 and the auxiliary information obtained by the second obtaining unit 503, the second identifying unit 504 may identify the user intention of the target user, and generate a candidate identification result for characterizing the user intention of the target user.
In this embodiment, based on the candidate recognition results obtained by the second recognition unit 504, the presentation unit 505 may present the obtained candidate recognition results to the target user.
In this embodiment, after presenting the candidate recognition result, the executing unit 506 may detect a selection operation of the target user for the presented candidate recognition result, and the executing unit 506 may execute an operation corresponding to the candidate recognition result selected by the target user in response to detecting the selection operation of the target user for the presented candidate recognition result.
In some optional implementations of this embodiment, the assistance information includes, but is not limited to, at least one of: user attribute information of the target user; image information obtained by shooting the environment where the target user is located; and the text information is input by the target user and used for representing the user intention of the target user.
In some optional implementations of this embodiment, the apparatus 500 further includes: and a first execution unit (not shown in the figure) configured to, in response to obtaining a recognition result for characterizing a user intention of the target user, execute an operation corresponding to the obtained recognition result.
In some optional implementations of this embodiment, the second identifying unit 504 may be further configured to: based on the voice information and the auxiliary information, recognizing the user intention of the target user, and obtaining at least two candidate recognition results for representing the user intention of the target user; and the presentation unit 505 may be further configured to: presenting the obtained at least two candidate recognition results to a target user; and the execution unit 506 may be further configured to: and responding to the detected operation that the target user selects the candidate recognition result from the at least two candidate recognition results, and executing the operation corresponding to the candidate recognition result selected by the target user.
In some optional implementations of this embodiment, the second identifying unit 504 may include: a first recognition module (not shown in the figure) configured to recognize the voice information by using a pre-trained voice recognition model to obtain voice features; a second recognition module (not shown in the figure) configured to recognize the auxiliary information by using a pre-trained auxiliary recognition model to obtain auxiliary features; and a third recognition module (not shown in the figure) configured to input the obtained speech features and the auxiliary features into a pre-trained intention recognition model and generate candidate recognition results for characterizing the user intention of the target user.
It will be understood that the elements described in the apparatus 500 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 500 and the units included therein, and are not described herein again.
When the user's intention cannot be recognized from the voice information alone, the apparatus 500 provided by the above embodiment of the present disclosure identifies the intention from the user's voice information together with the auxiliary information, obtains candidate recognition results, and presents them to the target user, thereby guiding the user to select the candidate recognition result that matches his or her real intention.
Referring to FIG. 6, a timing diagram 600 of one embodiment of a system for processing information is shown, in accordance with the present application.
The system for processing information in the embodiment of the present application may include an information collection module, a skill analysis module, and a guidance recommendation module, wherein: the information collection module is configured to acquire voice information input by a target user and to acquire auxiliary information related to the target user; the skill analysis module is configured to identify the user intention of the target user based on the voice information sent by the information collection module, and, in response to not obtaining a recognition result characterizing the user intention of the target user, to send an instruction to the information collection module so as to control it to send the voice information and the auxiliary information to the guidance recommendation module; the guidance recommendation module is configured to identify the user intention of the target user based on the received voice information and auxiliary information, generate candidate recognition results characterizing the user intention of the target user, present them to the target user, and, in response to detecting a selection operation of the target user on the presented candidate recognition results, execute the operation corresponding to the candidate recognition result selected by the target user.
As shown in fig. 6, in step 601, the information collection module obtains voice information input by a target user and auxiliary information related to the target user.
In this embodiment, the information collection module may acquire, locally or remotely, through a wired or wireless connection, the voice information input by the target user and the auxiliary information related to the target user. The target user may be a user whose input voice information is to be recognized, and specifically may be a user who initiates a voice conversation request. The auxiliary information may be information used, in combination with the voice information input by the target user, to identify the target user's intention.
In step 602, the information collection module sends the acquired voice information to the skill analysis module.
In step 603, the skill resolution module identifies the user intent of the target user based on the voice information.
In step 604, the skill analysis module sends an instruction to the information collection module in response to not obtaining the recognition result for characterizing the user intent of the target user.
It can be understood that, in general, the skill analysis module can obtain a recognition result characterizing the user intention of the target user by identifying that intention. However, considering the influence of colloquial speech, environmental noise, and the like, the skill analysis module may sometimes fail to recognize the user intention from the voice information; in that case it sends the above instruction to the information collection module.
In step 605, the information collection module, in response to receiving the instruction sent by the skill analysis module, sends the voice information and the auxiliary information to the guidance recommendation module.
In step 606, the guidance recommendation module identifies the user intention of the target user based on the received voice information and the auxiliary information, and generates a candidate identification result for characterizing the user intention of the target user.
In step 607, the guidance recommendation module presents the obtained candidate recognition result to the target user.
In step 608, the guidance recommending module, in response to detecting the selection operation of the target user for the presented candidate recognition result, executes the operation corresponding to the candidate recognition result selected by the target user.
In the system provided by the above embodiment of the present disclosure, when the skill analysis module cannot identify the user's intention, the guidance recommendation module is scheduled to do so: it identifies the user's intention from the voice information and the auxiliary information, obtains candidate recognition results, and presents them to the target user, thereby guiding the user to select the candidate recognition result that matches his or her real intention. A minimal sketch of this interaction follows.
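The sketch below models the three modules of fig. 6 as classes, with method calls standing in for the messages exchanged between them. Class names, data, and the hard-coded failure are all hypothetical assumptions for illustration.

```python
from typing import List, Optional

class InformationCollection:
    # Step 601: voice information and auxiliary information are acquired.
    voice = "it's cold outside today"
    auxiliary = "image: user is indoors"

class SkillAnalysis:
    def resolve(self, voice: str) -> Optional[str]:
        # Steps 603-604: recognition fails, so an instruction is implied
        # by the None return value.
        return None

class GuidanceRecommendation:
    def recommend(self, voice: str, auxiliary: str) -> None:
        candidates: List[str] = ["query weather"]  # step 606
        print("candidates:", candidates)           # step 607: present
        print("executing:", candidates[0])         # step 608: user's pick

collection = InformationCollection()
skill = SkillAnalysis()
guide = GuidanceRecommendation()

if skill.resolve(collection.voice) is None:                  # steps 602-604
    guide.recommend(collection.voice, collection.auxiliary)  # steps 605-608
```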
Referring now to fig. 7, a schematic diagram of an electronic device (e.g., the terminal device of fig. 1) 700 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from storage 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire voice information input by a target user; recognize a user intention of the target user based on the voice information; in response to not obtaining a recognition result for characterizing the user intention of the target user, acquire auxiliary information related to the target user; and recognize the user intention of the target user based on the voice information and the auxiliary information, and generate a recognition result for characterizing the user intention of the target user.
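As a concrete illustration of this flow, the following Python sketch shows one way such a program might be organized. It is only a reading of the described fallback logic, not the claimed implementation: the model objects, the device interface, and the confidence threshold are all assumptions introduced for illustration.

```python
from typing import Optional


class IntentPipeline:
    """Hypothetical wrapper around three pre-trained models; none of these
    APIs are fixed by the disclosure."""

    def __init__(self, speech_model, assist_model, intent_model, threshold=0.5):
        self.speech_model = speech_model   # encodes voice information into features
        self.assist_model = assist_model   # encodes auxiliary information into features
        self.intent_model = intent_model   # maps features to intents with a confidence
        self.threshold = threshold         # assumed cut-off for "recognition result obtained"

    def recognize(self, voice_info) -> Optional[str]:
        """First pass: characterize the user intention from voice alone."""
        intent, confidence = self.intent_model.predict(self.speech_model.encode(voice_info))
        return intent if confidence >= self.threshold else None

    def recognize_with_assist(self, voice_info, auxiliary_info) -> list:
        """Fallback pass: fuse voice features with auxiliary features and
        return candidate recognition results."""
        speech_features = self.speech_model.encode(voice_info)
        assist_features = self.assist_model.encode(auxiliary_info)
        return self.intent_model.predict_candidates(speech_features, assist_features)


def process(pipeline: IntentPipeline, device) -> None:
    voice_info = device.acquire_voice()             # voice information input by the target user
    intent = pipeline.recognize(voice_info)
    if intent is not None:
        device.execute(intent)                      # recognition result obtained: act on it directly
        return
    # No recognition result obtained: fall back to auxiliary information,
    # e.g. user attributes, an image of the environment, or typed text.
    auxiliary_info = device.acquire_auxiliary()
    candidates = pipeline.recognize_with_assist(voice_info, auxiliary_info)
    chosen = device.present_and_select(candidates)  # target user picks a candidate
    if chosen is not None:
        device.execute(chosen)
```

In this sketch, a first pass whose confidence falls below the assumed threshold is treated as "no recognition result obtained", which is what triggers the auxiliary-information path.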
Computer program code for carrying out operations of the present disclosure may be written in one programming language or any combination of programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires voice information".
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
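The same flow can also be read as the three-module system claimed below (claim 7): an information acquisition module, a skill analysis module, and a guidance recommendation module. The following is a minimal sketch of how those modules might hand work off to one another; every class and method name is a hypothetical choice rather than anything fixed by the disclosure.

```python
class InformationAcquisitionModule:
    """Collects the target user's voice input and, on request, auxiliary information."""

    def __init__(self, device):
        self.device = device

    def acquire_voice(self):
        return self.device.record_voice()      # voice information input by the target user

    def acquire_auxiliary(self):
        return self.device.collect_context()   # attributes, environment image, or typed text


class SkillAnalysisModule:
    """First-pass intent analysis on voice alone; escalates on failure."""

    def __init__(self, recognizer, guidance):
        self.recognizer = recognizer           # assumed first-pass intent recognizer
        self.guidance = guidance               # the guidance recommendation module

    def analyze(self, acquisition: InformationAcquisitionModule):
        voice = acquisition.acquire_voice()
        result = self.recognizer.recognize(voice)
        if result is not None:
            return result                      # user intention characterized from voice alone
        # No usable recognition result: have the acquisition module forward
        # voice and auxiliary information to the guidance recommendation module.
        self.guidance.recommend(voice, acquisition.acquire_auxiliary())
        return None


class GuidanceRecommendationModule:
    """Second-pass recognition with auxiliary information, plus user selection."""

    def __init__(self, fusion_recognizer, ui):
        self.fusion_recognizer = fusion_recognizer  # assumed voice + auxiliary recognizer
        self.ui = ui

    def recommend(self, voice, auxiliary):
        candidates = self.fusion_recognizer.recognize(voice, auxiliary)
        chosen = self.ui.present_and_select(candidates)
        if chosen is not None:
            self.ui.execute(chosen)
```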
Claims (9)
1. A method for processing information, comprising:
acquiring voice information input by a target user;
identifying a user intention of the target user based on the voice information;
in response to not obtaining a recognition result for characterizing the user intention of the target user, acquiring auxiliary information related to the target user;
identifying, based on the voice information and the auxiliary information, the user intention of the target user, and generating a candidate recognition result for characterizing the user intention of the target user;
presenting the obtained candidate recognition result to the target user;
and in response to detecting a selection operation of the target user on a presented candidate recognition result, executing an operation corresponding to the candidate recognition result selected by the target user.
2. The method of claim 1, wherein the auxiliary information comprises at least one of:
user attribute information of the target user; image information obtained by photographing an environment in which the target user is located; and text information input by the target user for characterizing the user intention of the target user.
3. The method of claim 1, wherein after the identifying the user intention of the target user based on the voice information, the method further comprises:
in response to obtaining a recognition result for characterizing the user intention of the target user, executing an operation corresponding to the obtained recognition result.
4. The method of claim 1, wherein the identifying the user intention of the target user based on the voice information and the auxiliary information and generating a candidate recognition result for characterizing the user intention of the target user comprises:
identifying the user intention of the target user based on the voice information and the auxiliary information to obtain at least two candidate recognition results for characterizing the user intention of the target user; and
the presenting the obtained candidate recognition result to the target user comprises:
presenting the obtained at least two candidate recognition results to the target user; and
the executing, in response to detecting a selection operation of the target user on a presented candidate recognition result, an operation corresponding to the candidate recognition result selected by the target user comprises:
in response to detecting an operation of the target user selecting a candidate recognition result from the at least two candidate recognition results, executing an operation corresponding to the candidate recognition result selected by the target user.
5. The method according to any one of claims 1 to 4, wherein the identifying the user intention of the target user based on the voice information and the auxiliary information and generating a candidate recognition result for characterizing the user intention of the target user comprises:
recognizing the voice information by using a pre-trained voice recognition model to obtain voice features;
recognizing the auxiliary information by using a pre-trained auxiliary recognition model to obtain auxiliary features;
and inputting the obtained voice features and auxiliary features into a pre-trained intention recognition model to generate a candidate recognition result for characterizing the user intention of the target user.
6. An apparatus for processing information, comprising:
a first acquisition unit configured to acquire voice information input by a target user;
a first recognition unit configured to recognize a user intention of the target user based on the voice information;
a second acquisition unit configured to, in response to not obtaining a recognition result for characterizing the user intention of the target user, acquire auxiliary information related to the target user;
a second recognition unit configured to recognize the user intention of the target user based on the voice information and the auxiliary information, and generate a candidate recognition result for characterizing the user intention of the target user;
a presentation unit configured to present the obtained candidate recognition result to the target user;
and an execution unit configured to, in response to detecting a selection operation of the target user on a presented candidate recognition result, execute an operation corresponding to the candidate recognition result selected by the target user.
7. A system for processing information, comprising an information acquisition module, a skill analysis module, and a guidance recommendation module, wherein:
the information acquisition module is configured to acquire voice information input by a target user and to acquire auxiliary information related to the target user;
the skill analysis module is configured to recognize the user intention of the target user based on the voice information sent by the information acquisition module, and, in response to not obtaining a recognition result for characterizing the user intention of the target user, send an instruction to the information acquisition module to control the information acquisition module to send the voice information and the auxiliary information to the guidance recommendation module;
and the guidance recommendation module is configured to recognize the user intention of the target user based on the received voice information and auxiliary information, generate a candidate recognition result for characterizing the user intention of the target user, present the obtained candidate recognition result to the target user, and, in response to detecting a selection operation of the target user on the presented candidate recognition result, execute an operation corresponding to the candidate recognition result selected by the target user.
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
9. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-5.
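Claim 5 above describes a feature-level fusion: features from a pre-trained voice recognition model and from a pre-trained auxiliary recognition model are fed into a pre-trained intention recognition model. The short PyTorch sketch below shows one plausible shape for that last stage; every dimension, layer choice, and name is an illustrative assumption, not something fixed by the claims.

```python
import torch
import torch.nn as nn


class FusionIntentModel(nn.Module):
    """One possible reading of claim 5: concatenate voice features and
    auxiliary features, then classify the fused vector into intent labels."""

    def __init__(self, voice_dim=256, assist_dim=128, hidden_dim=256, num_intents=32):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(voice_dim + assist_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_intents),
        )

    def forward(self, voice_features, assist_features):
        fused = torch.cat([voice_features, assist_features], dim=-1)
        return self.classifier(fused)  # logits over candidate intents


# Illustrative usage with random stand-ins for the upstream model outputs.
model = FusionIntentModel()
voice = torch.randn(1, 256)    # assumed output of the voice recognition model
assist = torch.randn(1, 128)   # assumed output of the auxiliary recognition model
logits = model(voice, assist)
candidates = logits.topk(k=2, dim=-1).indices  # "at least two candidate recognition results"
```

Taking the top-k logits is one way to obtain the at least two candidate recognition results of claim 4, which would then be presented to the target user for selection.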
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010120039.2A | 2020-02-26 | 2020-02-26 | Method and apparatus for processing information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112309387A (en) | 2021-02-02 |
Family
ID=74336691
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010120039.2A | CN112309387A (en), Pending | 2020-02-26 | 2020-02-26 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112309387A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105206266A (en) * | 2015-09-01 | 2015-12-30 | 重庆长安汽车股份有限公司 | Vehicle-mounted voice control system and method based on user intention guess |
US20180330730A1 (en) * | 2017-05-09 | 2018-11-15 | Apple Inc. | User interface for correcting recognition errors |
CN110603586A (en) * | 2017-05-09 | 2019-12-20 | 苹果公司 | User interface for correcting recognition errors |
CN109933198A (en) * | 2019-03-13 | 2019-06-25 | 广东小天才科技有限公司 | Semantic recognition method and device |
CN110349575A (en) * | 2019-05-22 | 2019-10-18 | 深圳壹账通智能科技有限公司 | Method, apparatus, electronic equipment and the storage medium of speech recognition |
CN110334201A (en) * | 2019-07-18 | 2019-10-15 | 中国工商银行股份有限公司 | A kind of intension recognizing method, apparatus and system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113779229A (en) * | 2021-08-31 | 2021-12-10 | 康键信息技术(深圳)有限公司 | User requirement identification matching method, device, equipment and readable storage medium |
CN113901837A (en) * | 2021-10-19 | 2022-01-07 | 斑马网络技术有限公司 | Intention understanding method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |