CN112671973A - Information processing method and device - Google Patents


Info

Publication number
CN112671973A
CN112671973A
Authority
CN
China
Prior art keywords
target
data
text data
text
target text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910936600.1A
Other languages
Chinese (zh)
Inventor
李龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910936600.1A
Publication of CN112671973A
Legal status: Pending


Abstract

The present disclosure provides an information processing method and device. The information processing method includes: converting target audio data into target text data and storing the target text data, where the target audio data is audio data collected during a call; determining identification information corresponding to the target text data according to the content recorded in the target text data; and reviewing the text of the target text data according to the identification information corresponding to the target text data. With this technical scheme, voice data collected during a call is converted into text data and stored, forming a written record of the call content. When the user needs to consult the written call record, the text of the text data can be retrieved according to its identification information, so the user need not worry about forgetting or misremembering the call content.

Description

Information processing method and device
Technical Field
The present disclosure relates to the field of information technologies, and in particular, to an information processing method and device.
Background
Communication by telephone has become an essential part of life. During a call, the listening party often needs to remember what the speaking party says. However, users often forget what the other party said after hanging up, and then have to try to recall it or call again to ask about what was forgotten.
Disclosure of Invention
The embodiments of the present disclosure provide an information processing method and device. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an information processing method including:
converting target audio data into target text data and storing the target text data, wherein the target audio data is audio data collected during a call;
determining identification information corresponding to the target text data according to the content recorded in the target text data;
and reviewing the text of the target text data according to the identification information corresponding to the target text data.
With this technical scheme, voice data collected during a call is converted into text data and stored, forming a written record of the call content. Further, identification information corresponding to the text data is determined, and when the user needs to consult the written call record, the text of the text data can be reviewed according to its identification information. Information mentioned during the call, such as telephone numbers and names of people and places, can be recorded without the trouble of finding paper and pen during the call; the user only needs to check the text of the text data afterwards. If the call content is forgotten after some time, the text can be consulted to retrieve the needed information, so the user need not worry about forgetting or misremembering it.
In one embodiment, the reviewing the text of the target text data according to the identification information corresponding to the target text data includes:
acquiring a retrieval keyword, and outputting a text data list when the retrieval keyword matches the identification information corresponding to the target text data, wherein the text data list includes a title of the target text data;
and outputting the text of the target text data when the title of the target text data is selected.
In one embodiment, the identification information corresponding to the target text data includes a category or a custom keyword corresponding to the target text data.
In one embodiment, the method further includes: setting a schedule reminder according to the date indicated in the text of the target text data;
the reviewing the text of the target text data according to the identification information corresponding to the target text data includes: outputting the text of the target text data when a schedule viewing instruction is received.
In one embodiment, the identification information corresponding to the target text data is a category corresponding to the target text data, and the determining the identification information according to the content recorded in the target text data includes:
acquiring a trained first machine learning model for semantic analysis;
and determining the category corresponding to the target text data through the first machine learning model.
In one embodiment, the method further includes:
training a second machine learning model for identifying useless text data according to the behavior characteristics of the user in reviewing and deleting the stored text data;
screening out text data to be deleted through the second machine learning model;
and deleting the text data to be deleted.
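The three steps of the first aspect can be sketched end to end in a few lines of Python. This is an illustrative sketch, not the patent's implementation: `transcribe` and `classify` are hypothetical stubs standing in for a real speech-to-text engine and a trained semantic model.

```python
# Sketch of the three-step method: transcribe, tag with
# identification info, then review by that info.
# transcribe / classify are illustrative stubs only.

def transcribe(audio):                       # step 1: audio -> text
    return audio["speech_text"]

def classify(body):                          # step 2: content -> identification info
    cats = set()
    if any(ch.isdigit() for ch in body):
        cats.add("number")
    if "meeting" in body:
        cats.add("event")
    return cats

store = []                                   # stored text data with identification info

def process_call(audio):
    body = transcribe(audio)
    store.append({"body": body, "ids": classify(body)})

def review(identifier):                      # step 3: review by identification info
    return [rec["body"] for rec in store if identifier in rec["ids"]]

process_call({"speech_text": "attend a meeting, phone 123"})
print(review("event"))
```

The same record is reachable through every piece of identification information it was tagged with, which is the property the later embodiments build on.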
According to a second aspect of the embodiments of the present disclosure, there is provided an information processing apparatus including:
a transcription module, configured to convert target audio data into target text data and store the target text data, wherein the target audio data is audio data collected during a call;
an identification module, configured to determine identification information corresponding to the target text data according to the content recorded in the target text data;
and a review module, configured to review the text of the target text data according to the identification information corresponding to the target text data.
In one embodiment, the review module includes:
a retrieval submodule, configured to acquire a retrieval keyword and output a text data list when the retrieval keyword matches the identification information corresponding to the target text data, wherein the text data list includes a title of the target text data;
and a selection submodule, configured to output the text of the target text data when the title of the target text data is selected.
In one embodiment, the identification information corresponding to the target text data includes a category or a custom keyword corresponding to the target text data.
In one embodiment, the device further includes a schedule setting module, configured to set a schedule reminder according to the date indicated in the text of the target text data;
the review module includes a schedule management submodule, configured to output the text of the target text data when a schedule viewing instruction is received.
In one embodiment, the identification information corresponding to the target text data is a category corresponding to the target text data, and the identification module includes:
an acquisition submodule, configured to acquire a trained first machine learning model for semantic analysis;
and a category division submodule, configured to determine the category corresponding to the target text data through the first machine learning model.
In one embodiment, the device further includes:
a modeling module, configured to train a second machine learning model for identifying useless text data according to the behavior characteristics of the user in reviewing and deleting the stored text data;
and an updating module, configured to screen out text data to be deleted through the second machine learning model and to delete the screened-out text data.
According to a third aspect of the embodiments of the present disclosure, there is provided an information processing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
converting target audio data into target text data and storing the target text data, wherein the target audio data is audio data collected during a call;
determining identification information corresponding to the target text data according to the content recorded in the target text data;
and reviewing the text of the target text data according to the identification information corresponding to the target text data.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the information processing method provided by the first aspect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating an information processing method according to an exemplary embodiment.
FIG. 2A is an illustration of an operator interface shown in accordance with an exemplary embodiment.
FIG. 2B is an illustration of an operator interface shown in accordance with an exemplary embodiment.
FIG. 2C is an illustration of an operator interface shown in accordance with an exemplary embodiment.
FIG. 3A is an illustration of an operator interface shown in accordance with an exemplary embodiment.
FIG. 3B is an illustration of an operator interface shown in accordance with an exemplary embodiment.
Fig. 4 is a flow chart illustrating an information processing method according to an example embodiment.
FIG. 5 is a block diagram of an electronic device shown in accordance with an example embodiment.
FIG. 6 is a block diagram of an electronic device shown in accordance with an example embodiment.
FIG. 7 is a block diagram of an electronic device shown in accordance with an example embodiment.
FIG. 8 is a block diagram of an electronic device shown in accordance with an example embodiment.
FIG. 9 is a block diagram of an electronic device shown in accordance with an example embodiment.
FIG. 10 is a block diagram of an electronic device shown in accordance with an example embodiment.
Fig. 11 is a block diagram of a terminal device shown according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
During telephone communication, a telephone user often needs to remember certain content, such as telephone numbers and the names of people and places. To do so, the user usually has to find paper and pen to take notes while talking. Without a written record, the content may be misremembered or forgotten over time.
The embodiments of the present disclosure provide an information processing method that produces a written record of the user's voice call, classifies the record according to its content, and supports reviewing the record by content, so that the user need not worry about forgetting or misremembering the call content.
In the embodiments of the present disclosure, the information processing method is described by taking the processing of target audio data as an example. The target audio data is audio data collected during a call, including but not limited to audio data collected during a telephone call, a conference call, or a video conference.
For clarity in describing the technical solutions of the embodiments of the present disclosure, the terms "first", "second", and the like are used to distinguish items that are identical or similar in function; those skilled in the art will understand that these terms imply no limitation on number or execution order.
Fig. 1 is a flowchart illustrating an information processing method according to an exemplary embodiment. The method is applied to an information processing device, which may be any device that supports voice communication, such as a mobile phone or a computer. Referring to fig. 1, the information processing method includes steps 101-103:
in step 101, the target audio data is converted into target text data and stored.
The target audio data may be audio data collected during a call, for example voice data generated by at least one of the two parties during one call.
For example, suppose user A calls user B through a mobile phone P. The phone P can acquire user A's audio data through its microphone while receiving user B's audio data over the communication network. During the call, the phone P may take the audio data of either user A or user B as the target audio data, or take the audio data of both.
In one embodiment, the information processing device converts the target audio data into the target text data by real-time voice transcription. Alternatively, it may temporarily cache the audio data stream generated during the call and perform the transcription after the call ends to obtain the target text data.
The information processing device stores the transcribed target text data locally or uploads it to the cloud for storage.
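The two transcription strategies just described (real-time versus deferred) can be sketched as follows. This is a minimal illustration, not the patent's implementation: `transcribe_chunk` is a hypothetical stand-in for a real speech-to-text engine and simply labels each audio chunk.

```python
# Sketch of the two transcription strategies: transcribe each chunk
# as it arrives, or cache the stream and transcribe after the call.

class CallTranscriber:
    def __init__(self, realtime=False):
        self.realtime = realtime
        self.buffer = []       # cached audio chunks (deferred mode)
        self.text_parts = []   # transcribed text so far

    def transcribe_chunk(self, chunk):
        # Placeholder for an actual ASR call.
        return f"<text of {chunk}>"

    def on_audio(self, chunk):
        if self.realtime:
            self.text_parts.append(self.transcribe_chunk(chunk))
        else:
            self.buffer.append(chunk)

    def on_call_end(self):
        # Deferred mode: transcribe the cached stream after hang-up.
        for chunk in self.buffer:
            self.text_parts.append(self.transcribe_chunk(chunk))
        self.buffer.clear()
        return " ".join(self.text_parts)

t = CallTranscriber(realtime=False)
t.on_audio("chunk1")
t.on_audio("chunk2")
result = t.on_call_end()   # transcription happens only after the call ends
```

Either way, the resulting text (`result` here) is what gets stored locally or uploaded to the cloud.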
In step 102, the identification information corresponding to the target text data is determined according to the content recorded in the target text data.
The information processing device determines the identification information of the newly generated text data based on its content. In one embodiment, the identification information of the text data may be a category corresponding to the text data, indicating the kind of information contained in its body. When the body includes several kinds of information, the text data may correspond to several categories.
For example, in one call, user A tells user B: "The company has decided to send you on a business trip to attend a meeting on Saturday. After arriving at the venue, contact Zhang San, who will arrange the details. Zhang San's phone number is 123." The body of the resulting text data includes a time, an event, a name, and a telephone number; correspondingly, the categories of the text data may include the four categories time, event, name, and number.
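The mapping from one body to several categories can be illustrated with a toy classifier. The patent uses a trained semantic model for this; the rule-based function below is only a hypothetical stand-in to show that a single body can yield multiple categories at once.

```python
import re

def categorize(body):
    """Illustrative rule-based stand-in for the semantic model:
    one body can map to several categories at once."""
    cats = set()
    if re.search(r"\d{3,}", body):            # digit runs -> phone number
        cats.add("number")
    if re.search(r"meeting|business trip", body):
        cats.add("event")
    if "Zhang San" in body:
        cats.add("name")
    if re.search(r"Saturday|month|day", body):
        cats.add("time")
    return cats

body = ("The company has decided to send you on a business trip to attend "
        "a meeting on Saturday. After arriving, contact Zhang San; "
        "his phone number is 123.")
print(sorted(categorize(body)))   # prints ['event', 'name', 'number', 'time']
```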
In step 103, the text of the target text data is reviewed according to the identification information corresponding to the target text data.
In one embodiment, the information processing device provides a function of retrieving text data by category. Referring to fig. 2A, an entry for text data query is provided in the call log; when the user taps "text log", the mobile phone displays a text data list, which may include several columns, each corresponding to one category of text data.
For example, the first column lists text data of the "event" category, the second column lists text data of the "number" category, and the third column lists text data of the "address" category. Each column displays the titles of several pieces of text data. A title may include an abstract of the body of the text data, containing the content in the body that indicates the category of the text data; it may also include the call time.
For example, on d day of m month, user B receives a call from user A, in which user A says: "The company has decided to send you on a business trip to attend a meeting on Saturday. After arriving at the venue, contact Zhang San, who will arrange the details. Zhang San's phone number is 123."
In the text data list of the "event" category, the title of the text data generated by this call may be "m month d day - The company has decided to send you on a business trip to attend a meeting". Referring to fig. 2B, when the user taps the title, the mobile phone displays the body of the text data.
In the text data list of the "number" category, the title of the same text data may be "m month d day - Zhang San's phone number is 123". When the user taps the title, the mobile phone likewise displays the body of the text data.
In other words, one and the same body can be reached from either the "event" category list or the "number" category list.
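The multi-column list view described above amounts to a category index in which one record appears under every category it carries. A minimal sketch of that index (field names are illustrative, not from the patent):

```python
from collections import defaultdict

def build_columns(records):
    """Build one column per category; a record with several categories
    appears in several columns under a '<date> - <abstract>' title."""
    columns = defaultdict(list)
    for rec in records:
        title = f"{rec['date']} - {rec['abstract']}"
        for cat in rec["categories"]:
            columns[cat].append(title)
    return columns

records = [{
    "date": "m month d day",
    "abstract": "the company decides to send you on a business trip",
    "categories": {"event", "number"},
}]
cols = build_columns(records)
assert cols["event"] == cols["number"]   # same record, reachable from both columns
```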
In one embodiment, the identification information of the text data may be a keyword corresponding to the text data. For example, as shown in fig. 2C, after a call ends, the user can edit a custom keyword for the text record of that call, and the information processing device uses the custom keyword as the identification information of that call's text data. Fig. 2C illustrates the case where the custom keyword is "work plan".
The user can then locate text data by entering keywords. Referring to fig. 3A, the user enters a retrieval keyword in the search box. The retrieval keyword may include the other party's phone number, the call date, a category, a custom keyword, and the like; one or more categories or custom keywords may be entered.
For example, the body of the target text data is "The company has decided to send you on a business trip to attend a meeting on Saturday. After arriving at the venue, contact Zhang San, who will arrange the details. Zhang San's phone number is 123."
The categories corresponding to the target text data are time, event, name, and number.
When the retrieval keyword entered by the user is "event time number", the mobile phone outputs a text data list containing the titles of all text data whose categories include event, time, and number, among them the title of the target text data, which may be "m month d day - The company has decided to send you on a business trip to attend a meeting". When the user taps this title, the mobile phone displays the body of the target text data.
When the retrieval keyword entered by the user is "work plan", the mobile phone outputs the title of the text data corresponding to that custom keyword, and when the user taps the title, the mobile phone displays the body of the target text data.
Alternatively, as shown in fig. 3B, the retrieval keyword may be text that appears in the body of the text data. For example, when the retrieval keyword is "Zhang San", the target text data can be found; in this case, the title of the target text data may be an abstract of the body containing "Zhang San".
In one embodiment, after the call ends, the mobile phone may also record information such as the other party's phone number and the call time, and the retrieval keyword entered by the user may likewise include such information.
For example, if the retrieval keyword includes the other party's phone number, the mobile phone displays the text records of the call history with that number, and when the user taps one of the records, the body of the corresponding text data is displayed.
Alternatively, the retrieval keyword may include the call time. For "m month d day", the mobile phone displays the list of text records of calls on d day of m month. For "9 o'clock on m month d day", the mobile phone displays the list of text records of calls within a time length T before and after 9 o'clock on that day, where T may be a preset duration, for example half an hour.
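The retrieval behaviors above, matching on a category or custom keyword, or on a call time within a window of T around the queried moment, can be sketched as follows. A simplified illustration (single keyword per query, illustrative field names), not the patent's implementation:

```python
from datetime import datetime, timedelta

def search(records, keyword=None, query_time=None, window=timedelta(minutes=30)):
    """Return titles matching a category/custom keyword, or whose call
    time falls within +/- `window` (the preset length T) of `query_time`."""
    hits = []
    for rec in records:
        if keyword is not None and keyword in rec["categories"] | rec["keywords"]:
            hits.append(rec["title"])
        elif query_time is not None and abs(rec["call_time"] - query_time) <= window:
            hits.append(rec["title"])
    return hits

records = [{
    "title": "m month d day - business trip to attend a meeting",
    "categories": {"event", "time", "number", "name"},
    "keywords": {"work plan"},
    "call_time": datetime(2019, 9, 30, 9, 10),
}]
print(search(records, keyword="work plan"))
print(search(records, query_time=datetime(2019, 9, 30, 9, 0)))   # within T of 9:10
```

With T set to half an hour, a query for 9 o'clock finds the 9:10 call, while a query for 10 o'clock does not.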
With the information processing method provided by the embodiments of the present disclosure, voice data collected during a call is converted into text data and stored, forming a written record of the call content. Further, identification information corresponding to the text data is determined, and when the user needs to consult the written call record, the text of the text data can be reviewed according to its identification information. Information mentioned during the call, such as telephone numbers and names of people and places, can be recorded without the trouble of finding paper and pen during the call; the user only needs to check the text of the text data afterwards. If the call content is forgotten after some time, the text can be consulted to retrieve the needed information, so the user need not worry about forgetting or misremembering it.
Based on the information processing method provided by the embodiments corresponding to figs. 1 to 3, fig. 4 is a flowchart illustrating an information processing method according to an exemplary embodiment. The embodiment corresponding to fig. 4 illustrates the method by taking as an example the case where text data is classified by a machine learning model and the category corresponding to the text data serves as its identification information.
Some steps are the same as or similar to those in the embodiments corresponding to figs. 1 to 3; only the differences are described in detail below. Referring to fig. 4, the information processing method provided in this embodiment includes steps 401-408:
in step 401, a first machine learning model is obtained.
The first machine learning model is used to perform semantic analysis on the body of text data and to classify the text data. In one embodiment, model training is performed by a server, and the information processing device classifies the text data using the machine learning model trained by the server.
In step 402, the target audio data is converted into target text data and stored.
The target audio data is audio data collected during a call. Taking a mobile phone as the information processing device, during a call the phone takes the voice data of both parties as the target audio data, converts it into target text data by voice transcription, and stores the result.
In step 403, the category corresponding to the target text data is determined through the first machine learning model.
The target text data is fed to the first machine learning model as input, and the model determines the corresponding category.
In step 404, a schedule reminder is set according to the date indicated in the body of the target text data.
The target text data may correspond to one or more categories, and the user can retrieve the body of the target text data according to any of its categories.
In one embodiment, the information processing device sets a schedule reminder when the categories of the target text data include at least the "event" category and the "time" category.
For example, the body of the target text data is: "The company has decided to send you on a business trip to attend a meeting on Saturday. After arriving at the venue, contact Zhang San, who will arrange the details. Zhang San's phone number is 123." The call takes place on d day of m month. The categories determined through the first machine learning model are time, event, name, and number, which include the "event" and "time" categories.
The date indicated by the body of the target text data is "Saturday", so the information processing device takes as the schedule date the first Saturday after d day of m month, denoted S. The schedule content is a summary of what the body says about the event. The schedule reminder is set as follows:
Schedule content: the company has decided to send you on a business trip to attend the meeting.
Schedule time: S.
Reminder time: one day in advance.
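The computation of the schedule date S, the first Saturday after the call date, can be sketched with Python's standard `datetime` module. One assumption here: "after" is taken strictly, so a call that itself falls on a Saturday schedules the following Saturday.

```python
from datetime import date, timedelta

def first_saturday_after(call_date):
    """Schedule date S: the first Saturday strictly after the call date
    (in Python's weekday(), Monday == 0 ... Saturday == 5)."""
    days_ahead = (5 - call_date.weekday()) % 7
    if days_ahead == 0:        # the call itself falls on a Saturday
        days_ahead = 7
    return call_date + timedelta(days=days_ahead)

s = first_saturday_after(date(2019, 9, 30))   # call on a Monday
reminder = s - timedelta(days=1)              # remind one day in advance
print(s, reminder)
```

For a call on 2019-09-30 (a Monday), S is 2019-10-05 and the reminder fires on 2019-10-04.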
In step 405, the body of the target text data is output when a schedule viewing instruction is received.
The user can view the schedule reminder that has been set in the calendar.
When the user taps the schedule reminder on date S, the information processing device outputs the body of the target text data: "The company has decided to send you on a business trip to attend a meeting on Saturday. After arriving at the venue, contact Zhang San, who will arrange the details. Zhang San's phone number is 123."
In step 406, a second machine learning model is trained.
The second machine learning model is used to screen text data to be deleted out of the stored text data. Text data to be deleted may include text data that is out of date or that the user has not reviewed for a long time.
In one embodiment, the information processing device trains the second machine learning model for identifying useless text data according to the behavior characteristics of the user in reviewing and deleting the stored text data. These behavior characteristics include, but are not limited to: the types of text data the user reviews, the review frequency, the types of text data the user deletes manually, and so on.
In step 407, the text data to be deleted is screened out through the second machine learning model.
When the storage space occupied by the stored text data exceeds a certain limit, or at regular intervals, the information processing device runs the stored text data through the second machine learning model and screens out the text data to be deleted.
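The screening trigger (storage limit exceeded, or a regular interval elapsed) and the screening itself can be sketched as follows. The scoring function is a deliberately toy stand-in for the second machine learning model, using the same behavior signals the patent names (age and review history); all names and thresholds are illustrative.

```python
import time

def should_screen(store_bytes, limit_bytes, last_run, interval_s, now=None):
    """Trigger a screening pass when stored text data exceeds the
    storage limit, or when the interval since the last pass elapsed."""
    now = time.time() if now is None else now
    return store_bytes > limit_bytes or (now - last_run) >= interval_s

def score_useless(rec, now_day):
    """Toy stand-in for the second model: older records that were never
    reviewed score higher; a threshold marks them for deletion."""
    age_days = now_day - rec["saved_day"]
    return age_days / 30 + (1 if rec["review_count"] == 0 else 0)

records = [
    {"id": 1, "saved_day": 0,  "review_count": 0},   # old, never reviewed
    {"id": 2, "saved_day": 85, "review_count": 4},   # recent, reviewed often
]
to_delete = []
if should_screen(store_bytes=2_000_000, limit_bytes=1_000_000,
                 last_run=0, interval_s=86_400, now=100):
    to_delete = [r["id"] for r in records if score_useless(r, now_day=90) > 2.0]
print(to_delete)   # prints [1]; deletion may still await user confirmation
```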
In step 408, the text data to be deleted is deleted.
The information processing device may delete the screened-out text data directly, or request user confirmation and delete the data after the confirmation is obtained.
With the information processing method provided by the embodiments of the present disclosure, voice data collected during a call is converted into text data and stored, forming a written record of the call content. Further, identification information corresponding to the text data is determined, and when the user needs to consult the written call record, the text of the text data can be reviewed according to its identification information. Information mentioned during the call, such as telephone numbers and names of people and places, can be recorded without the trouble of finding paper and pen during the call; the user only needs to check the text of the text data afterwards. If the call content is forgotten after some time, the text can be consulted to retrieve the needed information, so the user need not worry about forgetting or misremembering it.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 5 is a block diagram illustrating an electronic device according to an exemplary embodiment. The electronic device may implement some or all of its functions through software, hardware, or a combination of both, and is configured to perform the information processing method described in the embodiments corresponding to figs. 1 to 4. As shown in fig. 5, the electronic device includes:
a transcription module 501, configured to convert target audio data into target text data and store the target text data, where the target audio data is audio data collected during a call;
an identification module 502, configured to determine the category corresponding to the target text data according to the content recorded in the target text data;
and a review module 503, configured to review the text of the target text data according to the identification information corresponding to the target text data.
As shown in FIG. 6, in one embodiment, the review module 503 includes:
the search sub-module 5031 is configured to obtain a search keyword, and output a text data list when the search keyword matches a category corresponding to the target text data, where the text data list includes a title of the target text data.
The selecting sub-module 5032 is configured to output the text of the target text data when the title of the target text data is selected.
As shown in fig. 7, in an embodiment, a schedule setting module 504 is further included, and the schedule setting module 504 is configured to set a schedule reminder according to a date indicated in the text of the target text data.
The review module 503 includes a schedule management submodule 5033, and the schedule management submodule 5033 is configured to output the text of the target text data when receiving the schedule viewing instruction.
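The schedule-setting and schedule-management sub-modules might be sketched as follows. The ISO date pattern, the in-memory reminder list, and the function names are assumptions; the patent does not specify how the date is recognized in the text.

```python
import re
from datetime import datetime

reminders = []  # (date, text) pairs; an in-memory stand-in for a calendar

def set_reminder(text):
    # Look for an ISO-style date (YYYY-MM-DD) indicated in the text.
    match = re.search(r"\d{4}-\d{2}-\d{2}", text)
    if match is None:
        return None
    when = datetime.strptime(match.group(), "%Y-%m-%d")
    reminders.append((when, text))
    return when

def view_schedule():
    # On a schedule-viewing instruction, output the stored texts.
    return [text for _, text in sorted(reminders)]
```

In a real device, a natural-language date parser would likely replace the regular expression, since dates in calls are rarely spoken in ISO form.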
As shown in Fig. 8, in one embodiment, the identification module 502 includes:
The acquisition sub-module 5021 is configured to acquire a trained first machine learning model for semantic analysis.
The category division sub-module 5022 is configured to determine, through the first machine learning model, the category corresponding to the target text data.
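The patent leaves the first machine learning model unspecified; as a stand-in, this sketch classifies a text by keyword overlap with per-category word sets. The categories and keywords are invented, and a trained semantic-analysis model would replace this logic entirely.

```python
# Invented keyword sets standing in for a trained semantic-analysis model.
CATEGORY_KEYWORDS = {
    "schedule": {"meeting", "appointment", "tomorrow"},
    "contact": {"phone", "number", "address"},
}

def classify(text):
    # Pick the category whose keyword set overlaps the text the most;
    # fall back to "general" when nothing matches.
    words = set(text.lower().split())
    best, best_score = "general", 0
    for category, keywords in CATEGORY_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = category, score
    return best
```

The returned category then serves as the identification information under which the text is stored and later retrieved.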
As shown in Fig. 9, in one embodiment, the electronic device further includes:
The modeling module 505 is configured to train a second machine learning model for identifying useless text data according to the behavior characteristics of the user in reviewing and deleting stored text data.
The updating module 506 is configured to screen out text data to be deleted through the second machine learning model.
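One way to read the modeling and updating modules: learn, from logged (category, deleted-by-user) observations, the per-category deletion rate, then flag new records whose category the user usually discards. The feature choice and the 0.5 threshold are assumptions; the patent only states that the model is trained on review-and-delete behavior.

```python
from collections import defaultdict

def train(behavior_log):
    # behavior_log: (category, was_deleted) pairs from past user behavior.
    deleted, total = defaultdict(int), defaultdict(int)
    for category, was_deleted in behavior_log:
        total[category] += 1
        deleted[category] += was_deleted
    # Per-category deletion rate stands in for the second model.
    return {c: deleted[c] / total[c] for c in total}

def screen_useless(model, records, threshold=0.5):
    # Screen out records the model predicts the user would delete.
    return [r for r in records if model.get(r["category"], 0) > threshold]
```

A production system would presumably use richer features (text length, recency, contact identity) and an actual classifier rather than a frequency table.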
According to the electronic device provided by the embodiments of the present disclosure, voice data collected during a call is converted into text data and stored, forming a written record of the call content. Identification information corresponding to the text data is then determined, so that when the user needs to consult the written call record, the text can be reviewed according to that identification information. Information mentioned during the call, such as telephone numbers and the names of people and places, need not be hurriedly written down on paper mid-call; the user can simply review the text of the text data after the call ends. Even if the call content is forgotten after a period of time, the required information can be retrieved from the stored text, so the user need not worry about forgetting or misremembering it.
Fig. 10 is a block diagram of an electronic device according to an exemplary embodiment. Part or all of the device may be implemented by software, hardware, or a combination of the two, and the device is configured to execute the information processing method described in the embodiments corresponding to Figs. 1-4. As shown in Fig. 10, the electronic device 100 includes:
a processor 1001.
A memory 1002 for storing instructions executable by the processor 1001.
Wherein the processor 1001 is configured to:
convert target audio data into target text data and store the target text data, where the target audio data is audio data collected during a call;
determine identification information corresponding to the target text data according to the content recorded by the target text data; and
review the text of the target text data according to the identification information corresponding to the target text data.
In one embodiment, the processor 1001 may be further configured to:
acquire a search keyword, and output a text data list when the search keyword matches the identification information corresponding to the target text data, where the text data list includes the title of the target text data; and
output the text of the target text data when the title of the target text data is selected.
In one embodiment, the identification information corresponding to the target text data includes a category or a custom keyword corresponding to the target text data.
In one embodiment, the processor 1001 may be further configured to:
set a schedule reminder according to a date indicated in the text of the target text data;
wherein reviewing the text of the target text data according to the identification information corresponding to the target text data includes: outputting the text of the target text data when a schedule viewing instruction is received.
In one embodiment, the processor 1001 may be further configured to:
acquire a trained first machine learning model for semantic analysis; and
determine, through the first machine learning model, the category corresponding to the target text data.
In one embodiment, the processor 1001 may be further configured to:
train a second machine learning model for identifying useless text data according to behavior characteristics of the user in reviewing and deleting stored text data;
screen out text data to be deleted through the second machine learning model; and
delete the screened-out text data.
According to the electronic device provided by the embodiments of the present disclosure, voice data collected during a call is converted into text data and stored, forming a written record of the call content. Identification information corresponding to the text data is then determined, so that when the user needs to consult the written call record, the text can be reviewed according to that identification information. Information mentioned during the call, such as telephone numbers and the names of people and places, need not be hurriedly written down on paper mid-call; the user can simply review the text of the text data after the call ends. Even if the call content is forgotten after a period of time, the required information can be retrieved from the stored text, so the user need not worry about forgetting or misremembering it.
The electronic device provided by the embodiments of the present disclosure may be a terminal device as shown in Fig. 11, which is a block diagram of a terminal device according to an exemplary embodiment. The terminal device 110 may be a smartphone, a tablet computer, or the like, and is configured to execute the information processing method described in the embodiments corresponding to Figs. 1-4.
Terminal device 110 may include one or more of the following components: processing component 1101, memory 1102, power component 1103, multimedia component 1104, audio component 1105, input/output (I/O) interface 1106, sensor component 1107, and communication component 1108.
The processing component 1101 generally controls the overall operation of the terminal device 110, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1101 may include one or more processors 11011 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1101 can include one or more modules that facilitate interaction between the processing component 1101 and other components. For example, the processing component 1101 can include a multimedia module to facilitate interaction between the multimedia component 1104 and the processing component 1101.
The memory 1102 is configured to store various types of data to support operation at the terminal device 110. Examples of such data include instructions for any application or method operating on the terminal device 110, contact data, phonebook data, messages, pictures, videos, and so on. The memory 1102 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
Power supply component 1103 provides power to the various components of terminal device 110. Power components 1103 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for terminal device 110.
The multimedia component 1104 includes a screen that provides an output interface between the terminal device 110 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1104 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the terminal device 110 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 1105 is configured to output and/or input audio signals. For example, audio component 1105 may include a Microphone (MIC) configured to receive external audio signals when terminal device 110 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in memory 1102 or transmitted via communications component 1108. In some embodiments, audio component 1105 further includes a speaker for outputting audio signals.
The I/O interface 1106 provides an interface between the processing component 1101 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor component 1107 includes one or more sensors to provide various aspects of state estimation for terminal device 110. For example, sensor component 1107 may detect the open/closed state of terminal device 110, the relative positioning of components, such as a display and keypad of terminal device 110, the change in position of terminal device 110 or a component of terminal device 110, the presence or absence of user contact with terminal device 110, the orientation or acceleration/deceleration of terminal device 110, and the change in temperature of terminal device 110. Sensor assembly 1107 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. Sensor assembly 1107 may also include a photosensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1107 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
Communication component 1108 is configured to facilitate communications between terminal device 110 and other devices in a wired or wireless manner. The terminal device 110 may access a Wireless network based on a communication standard, such as Wireless Fidelity (WiFi), 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component 1108 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the Communication component 1108 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In an exemplary embodiment, the terminal device 110 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the information processing method described in the embodiments corresponding to Figs. 1-4.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 1102, comprising instructions executable by the processing component 1101 of the terminal device 110 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. When the instructions in the storage medium are executed by the processing component 1101 of the terminal device 110, the terminal device 110 is enabled to execute the information processing method described in the embodiments corresponding to Figs. 1-4, the method comprising:
converting target audio data into target text data and storing the target text data, where the target audio data is audio data collected during a call;
determining identification information corresponding to the target text data according to the content recorded by the target text data; and
reviewing the text of the target text data according to the identification information corresponding to the target text data.
In one embodiment, the method comprises:
acquiring a search keyword, and outputting a text data list when the search keyword matches the identification information corresponding to the target text data, where the text data list includes the title of the target text data; and
outputting the text of the target text data when the title of the target text data is selected.
In one embodiment, the identification information corresponding to the target text data includes a category or a custom keyword corresponding to the target text data.
In one embodiment, the method comprises:
setting a schedule reminder according to a date indicated in the text of the target text data.
Reviewing the text of the target text data according to the identification information corresponding to the target text data includes: outputting the text of the target text data when a schedule viewing instruction is received.
In one embodiment, the identification information corresponding to the target text data is a category corresponding to the target text data, and the method includes:
acquiring a trained first machine learning model for semantic analysis; and
determining, through the first machine learning model, the category corresponding to the target text data.
In one embodiment, the method comprises:
training a second machine learning model for identifying useless text data according to behavior characteristics of the user in reviewing and deleting stored text data;
screening out text data to be deleted through the second machine learning model; and
deleting the screened-out text data.
According to the terminal device and the storage medium provided by the embodiments of the present disclosure, voice data collected during a call is converted into text data and stored, forming a written record of the call content. Identification information corresponding to the text data is then determined, so that when the user needs to consult the written call record, the text can be reviewed according to that identification information. Information mentioned during the call, such as telephone numbers and the names of people and places, need not be hurriedly written down on paper mid-call; the user can simply review the text of the text data after the call ends. Even if the call content is forgotten after a period of time, the required information can be retrieved from the stored text, so the user need not worry about forgetting or misremembering it.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the disclosure should be determined with reference to the appended claims.

Claims (12)

1. An information processing method, comprising:
converting target audio data into target text data and storing the target text data, wherein the target audio data is audio data collected during a call;
determining identification information corresponding to the target text data according to content recorded by the target text data; and
reviewing the text of the target text data according to the identification information corresponding to the target text data.
2. The information processing method according to claim 1, wherein the reviewing the text of the target text data according to the identification information corresponding to the target text data comprises:
acquiring a search keyword, and outputting a text data list when the search keyword matches the identification information corresponding to the target text data, wherein the text data list comprises a title of the target text data; and
outputting the text of the target text data when the title of the target text data is selected.
3. The information processing method according to claim 1, further comprising: setting a schedule reminder according to a date indicated in the text of the target text data;
wherein the reviewing the text of the target text data according to the identification information corresponding to the target text data comprises: outputting the text of the target text data when a schedule viewing instruction is received.
4. The information processing method according to claim 1, wherein the identification information corresponding to the target text data is a category corresponding to the target text data, and the determining the identification information corresponding to the target text data according to the content recorded by the target text data comprises:
acquiring a trained first machine learning model for semantic analysis; and
determining, through the first machine learning model, the category corresponding to the target text data.
5. The information processing method according to claim 1, further comprising:
training a second machine learning model for identifying useless text data according to behavior characteristics of a user in reviewing and deleting stored text data;
screening out text data to be deleted through the second machine learning model; and
deleting the text data to be deleted.
6. An information processing apparatus, comprising:
a transcription module configured to convert target audio data into target text data and store the target text data, wherein the target audio data is audio data collected during a call;
an identification module configured to determine identification information corresponding to the target text data according to content recorded by the target text data; and
a review module configured to review the text of the target text data according to the identification information corresponding to the target text data.
7. The information processing apparatus according to claim 6, wherein the review module comprises:
a search sub-module configured to acquire a search keyword and output a text data list when the search keyword matches the identification information corresponding to the target text data, wherein the text data list comprises a title of the target text data; and
a selection sub-module configured to output the text of the target text data when the title of the target text data is selected.
8. The information processing apparatus according to claim 6, further comprising:
a schedule setting module configured to set a schedule reminder according to a date indicated in the text of the target text data;
wherein the review module comprises a schedule management sub-module configured to output the text of the target text data when a schedule viewing instruction is received.
9. The information processing apparatus according to claim 6, wherein the identification information corresponding to the target text data is a category corresponding to the target text data, and the identification module comprises:
an acquisition sub-module configured to acquire a trained first machine learning model for semantic analysis; and
a category division sub-module configured to determine, through the first machine learning model, the category corresponding to the target text data.
10. The information processing apparatus according to claim 6, further comprising:
a modeling module configured to train a second machine learning model for identifying useless text data according to behavior characteristics of a user in reviewing and deleting stored text data; and
an updating module configured to screen out text data to be deleted through the second machine learning model and to delete the text data to be deleted.
11. An information processing apparatus, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
convert target audio data into target text data and store the target text data, wherein the target audio data is audio data collected during a call;
determine identification information corresponding to the target text data according to content recorded by the target text data; and
review the text of the target text data according to the identification information corresponding to the target text data.
12. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the information processing method according to any one of claims 1 to 5.
CN201910936600.1A 2019-09-29 2019-09-29 Information processing method and device Pending CN112671973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910936600.1A CN112671973A (en) 2019-09-29 2019-09-29 Information processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910936600.1A CN112671973A (en) 2019-09-29 2019-09-29 Information processing method and device

Publications (1)

Publication Number Publication Date
CN112671973A true CN112671973A (en) 2021-04-16

Family

ID=75399665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910936600.1A Pending CN112671973A (en) 2019-09-29 2019-09-29 Information processing method and device

Country Status (1)

Country Link
CN (1) CN112671973A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103811009A (en) * 2014-03-13 2014-05-21 华东理工大学 Smart phone customer service system based on speech analysis
CN103916513A (en) * 2014-03-13 2014-07-09 三星电子(中国)研发中心 Method and device for recording communication message at communication terminal
CN106302980A (en) * 2015-06-29 2017-01-04 上海卓易科技股份有限公司 The method of event actively record and terminal unit
CN106778817A (en) * 2016-11-25 2017-05-31 杭州中奥科技有限公司 A kind of automatic classification method of event
CN107302625A (en) * 2017-05-18 2017-10-27 华为机器有限公司 The method and its terminal device of management event
CN109286728A (en) * 2018-11-29 2019-01-29 维沃移动通信有限公司 A kind of dialog context processing method and terminal device
CN109670843A (en) * 2018-11-12 2019-04-23 平安科技(深圳)有限公司 Data processing method, device, computer equipment and the storage medium of complaint business



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210416