CN111327751A - Information recording method based on voice call and related device

Information recording method based on voice call and related device

Info

Publication number
CN111327751A
Authority
CN
China
Prior art keywords
data
voice
information data
user
target information
Prior art date
Legal status
Pending
Application number
CN202010136908.0A
Other languages
Chinese (zh)
Inventor
陈增桂
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010136908.0A
Publication of CN111327751A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/64 Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations
    • H04M 1/65 Recording arrangements for recording a message from the calling party
    • H04M 1/656 Recording arrangements for recording a message from the calling party for recording conversations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/40 Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/60 Details of telephonic subscriber devices logging of communication history, e.g. outgoing or incoming calls, missed calls, messages or URLs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/68 Details of telephonic subscriber devices with means for recording information, e.g. telephone number during a conversation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/74 Details of telephonic subscriber devices with voice recognition means

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the application provides an information recording method based on a voice call and a related device. The method first acquires voice output data of a first user and voice input data of a second user; it then judges whether repeated information data exists in the voice input data and the voice output data; finally, if repeated information data exists, the repeated information data is recorded as target information data. Voice information that needs to be recorded can thus be automatically identified during a call and converted into text for recording, greatly improving the convenience of recording information during a call.

Description

Information recording method based on voice call and related device
Technical Field
The present application relates to the field of internet technologies, and in particular, to an information recording method and related apparatus based on voice call.
Background
During a call, important information often needs to be recorded. Existing approaches generally rely either on another recording tool at hand or on the call device itself: for example, pen and paper can be used to jot down information during the call, or the device can be switched to speaker mode so that it can be used to take notes directly. However, another recording tool is often not available nearby, and switching to speaker mode in a public place may leak the information. Both options are inconvenient.
Disclosure of Invention
In view of the above problems, the application provides an information recording method based on a voice call and a related device, which can identify the voice information that needs to be recorded during a call and convert it into text for recording, greatly improving the convenience of recording information during a call.
In a first aspect, an embodiment of the present application provides an information recording method based on a voice call, where the method includes:
acquiring voice output data of a first user and voice input data of a second user;
judging whether the voice input data and the voice output data have repeated information data or not;
and if the voice input data and the voice output data have repeated information data, recording the repeated information data as target information data.
In a second aspect, an embodiment of the present application provides an information recording apparatus based on a voice call, where the apparatus includes a processing unit, where the processing unit is configured to: acquiring voice output data of a first user and voice input data of a second user; judging whether the voice input data and the voice output data have repeated information data or not; and if the voice input data and the voice output data have repeated information data, recording the repeated information data as target information data.
In a third aspect, an embodiment of the present application provides an electronic device, including an application processor, a memory, and one or more programs, where the one or more programs are stored in the memory, are configured to be executed by the application processor, and include instructions for performing the steps in the method according to any one of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of the first aspect of the embodiments of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
By implementing the embodiment of the application, the following beneficial effects can be obtained:
first, voice output data of a first user and voice input data of a second user are acquired; then it is judged whether repeated information data exists in the voice input data and the voice output data; finally, if repeated information data exists, the repeated information data is recorded as target information data. Voice information that needs to be recorded can thus be automatically identified during a call and converted into text for recording, greatly improving the convenience of recording information during a call.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a system architecture diagram of an information recording method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an information recording method according to an embodiment of the present application;
fig. 3 is a schematic display diagram of an information recording interface according to an embodiment of the present disclosure;
fig. 4 is a schematic display view of another information recording interface provided in the embodiment of the present application;
fig. 5 is a schematic flowchart of another information recording method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a block diagram illustrating functional units of an information recording apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiments of the present application may be any device with communication capability, including handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. It should be noted that the present application may be applied to any scenario in which a target user makes a real-time call through an electronic device, including Plain Old Telephone Service (POTS) and Voice over Internet Protocol (VoIP) calls.
The following describes embodiments of the present application in detail.
Fig. 1 is a system architecture diagram of an information recording method according to an embodiment of the present application. The architecture includes an earphone module 110, a microphone module 120, a voice recognition unit 130, and an information recording unit 140. The earphone module 110 and the microphone module 120 are both connected to the voice recognition unit 130, and the voice recognition unit 130 is connected to the information recording unit 140. The voice recognition unit 130 can acquire the voice output by the earphone module 110 and the voice input through the microphone module 120, and recognize and determine whether there is a repeated portion between them; the information recording unit 140 records and stores the repeated portion. It can be understood that this architecture applies to a real-time voice call scenario: the voice output by the earphone module 110 is the voice of a first user and carries the information to be recorded, while the voice input through the microphone module 120 is the voice of a second user and is used to confirm the recorded information. In other words, the first user is the user who speaks the information to be recorded, and the second user is the target user who needs to record the information. The architecture can be carried on any electronic device with a real-time voice call function that is used by the second user.
Through this system architecture, the voice information to be recorded can be automatically identified and converted into text for recording during the call, greatly improving the convenience of information recording for the target user during the call.
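As a concrete illustration of this data flow, the following Python sketch wires a recording unit to a pluggable duplicate-detection step. It is only a minimal sketch under assumed names (InformationRecordingUnit, process_turn, find_repeated); the application does not prescribe any API.

```python
# Minimal sketch of the Fig. 1 data flow; class and function names are
# illustrative assumptions, not part of the application.
from typing import Callable, List, Optional

class InformationRecordingUnit:
    """Stands in for unit 140: stores repeated fragments as target information data."""
    def __init__(self) -> None:
        self.records: List[str] = []

    def record(self, fragment: str) -> None:
        self.records.append(fragment)

def process_turn(output_text: str,          # first user's speech (earphone module 110)
                 input_text: str,           # second user's speech (microphone module 120)
                 find_repeated: Callable[[str, str], Optional[str]],
                 recorder: InformationRecordingUnit) -> None:
    """One call turn: detect a repeated portion between the two sides and record it.
    The duplicate-detection strategy is injected; a text-based version is
    sketched further below."""
    fragment = find_repeated(output_text, input_text)
    if fragment is not None:
        recorder.record(fragment)
```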
Fig. 2 is a flowchart illustrating an information recording method based on voice call in this embodiment, where the information recording method includes the following steps:
step 201, acquiring voice output data of a first user and voice input data of a second user.
For example, the first voice of the first user may be obtained through the earphone module and the first voice of the second user through the microphone module, then the second voice of the first user through the earphone module and the second voice of the second user through the microphone module, and so on. The voices of the first user are collectively referred to as the voice output data, and the voices of the second user as the voice input data. It should be noted that the information to be recorded comes from the voice output data of the first user, while the voice input data of the second user is used to determine which part of the first user's voice output data needs to be recorded; the voice output data therefore precedes the corresponding voice input data. The second user is the target user who needs to record information, and the first user is not a specific user but any user who is in a call with the target user.
By acquiring the voice output data of the first user and the voice input data of the second user, the content to be recorded can be determined automatically from the real-time voice call, greatly improving the convenience of information recording for the target user during the call.
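The alternating acquisition described above can be pictured as pairing each first-user utterance with the second-user utterance that follows it. The sketch below is one possible buffering scheme; the class name and pairing rule are assumptions made for illustration only.

```python
# Illustrative buffering of alternating utterances: the first user's voice
# output always precedes the second user's voice input that confirms it.
from collections import deque
from typing import Deque, Optional, Tuple

class CallBuffer:
    def __init__(self) -> None:
        self.turns: Deque[Tuple[str, str]] = deque()
        self._pending_output: Optional[str] = None  # latest first-user utterance awaiting a reply

    def on_earphone_utterance(self, text: str) -> None:
        """First user's speech arriving through the earphone module."""
        self._pending_output = text

    def on_microphone_utterance(self, text: str) -> Optional[Tuple[str, str]]:
        """Second user's reply arriving through the microphone module; pair it
        with the preceding first-user utterance so duplicates can be checked."""
        if self._pending_output is None:
            return None
        pair = (self._pending_output, text)
        self.turns.append(pair)
        self._pending_output = None
        return pair
```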
Step 202, determining whether there is repeated information data in the voice input data and the voice output data.
The voice input data and the voice output data may first be preprocessed; the preprocessing may include endpoint detection, pre-emphasis, framing, and windowing, which help ensure recognition accuracy. Voice feature extraction is then performed on the preprocessed voice input data and voice output data to obtain input voice features and output voice features; the voice features may include Mel-frequency cepstral coefficients (MFCC) and the like. Finally, the voice input semantics of the voice input data and the voice output semantics of the voice output data are obtained from the input voice features and the output voice features, and whether a repeated portion exists between the voice input data and the voice output data is judged on the basis of those semantics; in other words, it is determined whether a part of the second user's voice duplicates the first user's voice, and that repeated part is taken as the repeated information data.
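The preprocessing and feature-extraction chain above could look roughly like the following sketch. numpy and librosa are used here only as stand-ins; the application does not name a library, and the frame length, hop length, and coefficient count are assumed values.

```python
# Sketch of endpoint detection, pre-emphasis, and MFCC extraction; all
# parameter values are assumptions chosen for illustration only.
import numpy as np
import librosa

def extract_voice_features(signal: np.ndarray, sr: int = 16000) -> np.ndarray:
    # Endpoint detection: trim leading and trailing silence.
    trimmed, _ = librosa.effects.trim(signal, top_db=30)
    # Pre-emphasis to flatten the spectrum before framing.
    emphasized = np.append(trimmed[0], trimmed[1:] - 0.97 * trimmed[:-1])
    # Framing and windowing happen inside the MFCC computation
    # (25 ms frames, 10 ms hop, Hann window by default).
    return librosa.feature.mfcc(y=emphasized, sr=sr, n_mfcc=13,
                                n_fft=int(0.025 * sr),
                                hop_length=int(0.010 * sr))
```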
Optionally, the voice input data and the voice output data may instead be converted into a voice input text and a voice output text, and text recognition may be used to determine directly whether a repeated portion exists between the two texts; the repeated portion serves as the repeated information data.
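For this text-based option, the repeated portion can be found as the longest fragment shared by the two transcripts. The sketch below uses Python's standard difflib; the minimum-length threshold is an assumption, not a value from the application.

```python
# Text-level duplicate detection with the standard library; min_chars is an
# assumed threshold used to ignore trivially short overlaps.
from difflib import SequenceMatcher
from typing import Optional

def find_repeated_fragment(output_text: str, input_text: str,
                           min_chars: int = 6) -> Optional[str]:
    matcher = SequenceMatcher(None, output_text, input_text, autojunk=False)
    m = matcher.find_longest_match(0, len(output_text), 0, len(input_text))
    if m.size >= min_chars:
        return output_text[m.a:m.a + m.size]
    return None

# e.g. find_repeated_fragment("my phone number is 138xxxxxxxx",
#                             "ok, your phone number is 138xxxxxxxx")
# returns the shared "...phone number is 138xxxxxxxx" fragment.
```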
If the voice input data and the voice output data do not have repeated information data, continuously acquiring the voice input data of the second user and the voice output data of the first user; if there is duplicate information data between the voice input data and the voice output data, step 203 is executed.
By judging whether repeated information data exists in the voice input data and the voice output data, the voice information to be recorded can be automatically identified and converted into text for recording during the call, greatly improving the convenience of information recording for the target user during the call.
Step 203, recording the repeated information data as target information data.
This step is performed when repeated information data exists between the voice input data and the voice output data. The target information data is the data that needs to be recorded, and record data may be generated on the basis of it. The record data may be in text form and may include, but is not limited to, time information, location information, person information, event information, numeric information, and the like. For ease of understanding, the display of the record data is described in detail below.
For example, as shown in fig. 3, which is a schematic display diagram of an information recording interface provided in an embodiment of the present application, the information recording interface may include a call target display frame 310, a call time display frame 320, an information recording frame 330, and the like. The call target display frame displays the call target, i.e. the information of the first user, such as "A"; the call time display frame displays the call time information, such as "19:30 on 1 September 2018"; and the information recording frame displays the text content of the target information data item by item, for example "the phone number is 138xxxxxxxx" and "Shenzhen North Ring Avenue", where the items may be arranged in the order in which the repeated portions between the second user's voice and the first user's voice appeared. After the call ends, the second user can manually edit the text content in the information recording frame, for example editing the recorded "the phone number is 138xxxxxxxx" into "friend A's phone number is 138xxxxxxxx" and the recorded "Shenzhen North Ring Avenue" into "friend A lives on Shenzhen North Ring Avenue", so that the recorded information better matches the user's own needs.
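The record data and its later manual editing can be modelled as a simple data structure, as sketched below; the field and method names are illustrative assumptions that only mirror the display frames of fig. 3.

```python
# Illustrative data structure behind the fig. 3 interface; names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class InformationRecord:
    call_target: str                   # shown in the call target display frame 310
    call_time: datetime                # shown in the call time display frame 320
    entries: List[str] = field(default_factory=list)  # items of the information recording frame 330

    def add_entry(self, text: str) -> None:
        """Append a repeated fragment in the order the repetitions occurred."""
        self.entries.append(text)

    def edit_entry(self, index: int, new_text: str) -> None:
        """Manual edit after the call, e.g. turning 'the phone number is 138xxxxxxxx'
        into "friend A's phone number is 138xxxxxxxx"."""
        self.entries[index] = new_text
```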
Further, the record data may include a call record identifier that indicates the call record to which the record data corresponds. An information recording interface may then be generated on the call record interface based on the call record identifier; this information recording interface is a sub-interface of the call record interface and may also be used to display the record data. It should be noted that the call record interface is the interface that displays all call records between any first user and the second user, including call time, call duration, incoming-call state, outgoing-call state, and the like.
for example, as shown in fig. 4, fig. 4 is a schematic display diagram of another information recording interface provided in the embodiment of the present application, it can be seen that there are four call records between the second user and the first user a, the information recording interface may include a touch stretch button 410, a text editing area 420, a call object area, wherein the touch extension button 410 is displayed as an icon, and can be used to control the display state of the text editing area 420, the second user may control the display state of the text editing region 420 by clicking the touch extension button 410, sliding the touch extension button 410, or the like, and only the call log having the recorded information appears on the touch extension button 410, therefore, the recorded information of the call records can be intuitively reflected, and the second user is prevented from manually searching; the display state of the text editing area 420 may include an expanded state and a collapsed state, the text editing area 420 cannot be viewed in the collapsed state, and the second user may manually edit the display content of the text editing area 420, which is not described herein again. As can be seen from fig. 4, the time of the first call record is "2018/9/119: 31", the call duration of the first user a is 30 minutes and 28 seconds, the text editing area 420 is in the expanded state, and the recorded information is "the telephone number of the a is 138 xxxxxxxx", "the company of the a is in the shenzhen north loop corridor xxx"; the time of the second call record is "2018/8/2310: 27", the call is called by the second user, the call duration is 3 minutes and 28 seconds, and the touch extension button 410 does not exist, namely no recorded information exists; the time of the third call record is '2018/8/2310: 26', and the call record is in an unvoiced state and has no recorded information; the fourth call record is "2018/8/1712: 03", the call duration of the second user is 10 minutes and 18 seconds, the touch extension button 410 exists, that is, the recorded information exists, but the text editing area 420 is in the retracted state, and the specific recorded information content cannot be viewed.
Recording the repeated information data as target information data greatly improves the convenience with which the target user can review the automatically recorded information of a call, and the display mode better matches the target user's habits when reviewing and editing that information.
By the above method, the voice information to be recorded can be automatically identified and converted into text for recording during the call, greatly improving the convenience of information recording during the call.
Another information recording method based on voice call in the embodiment of the present application is described in detail below with reference to fig. 5, where fig. 5 is a schematic flow chart of another information recording method provided in the embodiment of the present application, and specifically includes the following steps:
step 501, acquiring voice output data of a first user and voice input data of a second user.
Step 502, determining whether there is repeated information data in the voice input data and the voice output data.
If there is repeated information data between the voice input data and the voice output data, go to step 503; if there is no duplicate information data between the voice input data and the voice output data, step 501 is continuously executed.
Step 503, recording the repeated information data as target information data.
Step 504, obtaining voice feedback data of the first user corresponding to the target information data.
The voice feedback data appears after the voice output data that shares a repeated portion with the voice input data, and reflects the first user's response to that repeated portion. For example, the first voice of first user A is "my mobile phone number is 138xxxxxxxx", and the first voice of the second user is "OK, your mobile phone number is 138xxxxxxxx, right?"; the repeated content recognized at this point is "138xxxxxxxx". The second voice of first user A is then "No, I said it wrong, my mobile phone number is 139xxxxxxxx", and this second voice of the first user is the voice feedback data. It should be noted that the voice feedback data may be the same as or different from the target information data.
By acquiring the voice feedback data of the first user corresponding to the target information data, the information to be recorded can be recorded more intelligently, and the accuracy of information recording is improved.
And 505, judging whether the semantics of the voice feedback data are the same as the semantics of the target information data.
The voice feedback data may be preprocessed and subjected to voice feature extraction to obtain feedback voice features, from which feedback semantic data is obtained; it is then judged whether the semantics of the feedback semantic data and the semantics of the target information data are the same. Because semantic judgment is complex, it may be performed by a trained neural network model, and the neural network model is continuously analyzed and optimized on the basis of a large number of judgment results to improve judgment accuracy.
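One common way to realise such a comparison is to embed both texts and threshold their cosine similarity, as sketched below; the embedding callable stands in for the trained neural network mentioned above, and the threshold value is an assumption.

```python
# Cosine-similarity sketch of the semantic comparison in step 505; the
# embedding function and the threshold are assumptions standing in for the
# trained model described in the text.
import numpy as np
from typing import Callable, Sequence

def same_semantics(feedback_text: str, target_text: str,
                   embed: Callable[[str], Sequence[float]],
                   threshold: float = 0.85) -> bool:
    a = np.asarray(embed(feedback_text), dtype=float)
    b = np.asarray(embed(target_text), dtype=float)
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return cos >= threshold
```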
If the semantics of the voice feedback data is the same as the semantics of the target information data, the recorded target information data is not modified; if the semantics of the voice feedback data are not the same as the semantics of the target information data, step 506 is executed.
By judging whether the semantics of the voice feedback data are the same as the semantics of the target information data, the intelligent degree of the information recording method can be further improved, and the accuracy of information recording is greatly improved.
Step 506, modifying the target information data according to the voice feedback data, and recording the modified target information data.
For ease of understanding, the example in step 504 is continued. The first voice of first user A is "my mobile phone number is 138xxxxxxxx", and the first voice of the second user is "OK, your mobile phone number is 138xxxxxxxx"; the recognized repeated content is "138xxxxxxxx", which is recorded as target information data to obtain record data. The second voice of first user A is then "No, I said it wrong, my mobile phone number is 139xxxxxxxx", and this second voice is the voice feedback data. At this point it is determined that the semantics of the voice feedback data differ from the semantics of the target information data, so the target information data must be modified according to the voice feedback data to obtain the modified target information data, and the record data is updated based on the modified target information data; the updated record data is the text "the mobile phone number is 139xxxxxxxx". The updating of the information display interface may follow the methods described with reference to fig. 2, fig. 3, and fig. 4, and is not repeated here.
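Continuing the phone-number example, step 506 amounts to replacing the recorded fragment with the value carried by the feedback. The digit-run heuristic in the sketch below is an assumption used only to make the example concrete; the application does not fix an extraction method.

```python
# Sketch of step 506: replace the recorded entry with the corrected value
# found in the feedback. The regular expression is an illustrative heuristic.
import re
from typing import List, Optional

def apply_feedback(entries: List[str], index: int, feedback_text: str) -> Optional[str]:
    numbers = re.findall(r"\d[\dx]{5,}", feedback_text)   # e.g. "139xxxxxxxx"
    if not numbers:
        return None
    entries[index] = f"the mobile phone number is {numbers[-1]}"
    return entries[index]

# apply_feedback(record.entries, 0, "no, I said it wrong, my mobile phone number is 139xxxxxxxx")
# updates the entry to "the mobile phone number is 139xxxxxxxx".
```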
By the above method, the voice information to be recorded can be identified and converted into text for recording during the call, which greatly improves the convenience of information recording during the call, further increases the intelligence of the information recording method, and greatly improves the accuracy of information recording.
The parts of the above steps which are not described in detail can refer to all or part of the steps of the methods in fig. 2, fig. 3 and fig. 4, and are not described again here.
Fig. 6 is a schematic structural diagram of an electronic device 600 provided in an embodiment of the present application. The electronic device 600 includes an application processor 601, a communication interface 602, and a memory 603, which are connected to each other through a bus 604. The bus 604 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, and may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean that there is only one bus or one type of bus. The memory 603 stores a computer program comprising program instructions, and the application processor 601 is configured to invoke the program instructions to perform the following method:
acquiring voice output data of a first user and voice input data of a second user;
judging whether the voice input data and the voice output data have repeated information data or not;
and if the voice input data and the voice output data have repeated information data, recording the repeated information data as target information data.
In one possible embodiment, in the aspect of determining whether there is repeated information data in the voice input data and the voice output data, the instructions in the program are specifically configured to perform the following operations:
performing semantic analysis on the voice input data and the voice output data to obtain voice input semantics and voice output semantics;
and judging, based on the voice input semantics and the voice output semantics, whether the second user is replying to the voice content of the first user.
In one possible embodiment, in the aspect of recording the repeated information data as target information data, the instructions in the program are specifically configured to perform the following operations:
and generating record data based on the target information data, wherein the record data comprises a text corresponding to the target information data.
In a possible embodiment, after the repeated information data is recorded as target information data, the instructions in the program are specifically further configured to perform the following operations:
acquiring voice feedback data of the first user corresponding to the target information data;
judging whether the semantics of the voice feedback data are the same as the semantics of the target information data;
and if the semantics of the voice feedback data are different from the semantics of the target information data, modifying the target information data according to the voice feedback data, and recording the modified target information data.
In a possible embodiment, in the aspect of recording the modified target information data, the instructions in the program are specifically configured to perform the following operations:
and updating the record data based on the modified target information data, wherein the record data comprises a text corresponding to the modified target information data.
In one possible embodiment, the record data includes a call record identifier; in the aspect of generating the record data based on the target information data or updating the record data based on the modified target information data, the instructions in the program are specifically configured to perform the following operations:
and generating an information recording interface on a call record interface based on the call record identifier, wherein the information recording interface is used for displaying the recorded data.
In one possible embodiment, the information recording interface comprises a touch extension button and a text editing area; the touch extension button is used to control the display state of the text editing area, the display state includes a collapsed state and an expanded state, and the text editing area is used to edit and display the text in the record data.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation. It is understood that, to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the units and algorithm steps described in connection with the embodiments provided herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 7 is a block diagram of functional units of an information recording apparatus 700 according to an embodiment of the present application. The information recording apparatus 700 is applied to an electronic device, and includes a processing unit 701, a communication unit 702 and a storage unit 703, where the processing unit 701 is configured to execute any step in the above method embodiments, and when data transmission such as sending is performed, the communication unit 702 is optionally invoked to complete a corresponding operation. The details will be described below.
The processing unit 701 is configured to obtain voice output data of a first user and voice input data of a second user;
judging whether the voice input data and the voice output data have repeated information data or not;
and if the voice input data and the voice output data have repeated information data, recording the repeated information data as target information data.
In a possible embodiment, in the aspect of determining whether there is duplicate information data in the voice input data and the voice output data, the processing unit 701 is specifically configured to:
performing semantic analysis on the voice input data and the voice output data to obtain voice input semantics and voice output semantics;
and judging, based on the voice input semantics and the voice output semantics, whether the second user is replying to the voice content of the first user.
In a possible embodiment, in terms of recording the repeated information data as target information data, the processing unit 701 is specifically configured to:
and generating record data based on the target information data, wherein the record data comprises a text corresponding to the target information data.
In a possible embodiment, after recording the repeated information data as the target information data, the processing unit 701 is further specifically configured to:
acquiring voice feedback data of the first user corresponding to the target information data;
judging whether the semantics of the voice feedback data are the same as the semantics of the target information data;
and if the semantics of the voice feedback data are different from the semantics of the target information data, modifying the target information data according to the voice feedback data, and recording the modified target information data.
In a possible embodiment, in terms of the recording of the modified target information data, the processing unit 701 is specifically configured to:
and updating the record data based on the modified target information data, wherein the record data comprises a text corresponding to the modified target information data.
In one possible embodiment, the record data includes a call record identifier; in the aspect of generating record data based on the target information data or updating the record data based on the modified target information data, the processing unit 701 is specifically configured to:
and generating an information recording interface on a call record interface based on the call record identifier, wherein the information recording interface is used for displaying the recorded data.
In one possible embodiment, the information recording interface comprises a touch extension button and a text editing area; the touch extension button is used to control the display state of the text editing area, the display state includes a collapsed state and an expanded state, and the text editing area is used to edit and display the text in the record data.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An information recording method based on voice call is characterized in that the method comprises the following steps:
acquiring voice output data of a first user and voice input data of a second user;
judging whether the voice input data and the voice output data have repeated information data or not;
and if the voice input data and the voice output data have repeated information data, recording the repeated information data as target information data.
2. The method of claim 1, wherein the determining whether there is repeated information data in the voice input data and the voice output data comprises:
performing semantic analysis on the voice input data and the voice output data to obtain voice input semantics and voice output semantics;
and judging, based on the voice input semantics and the voice output semantics, whether the second user is replying to the voice content of the first user.
3. The method according to claim 1, wherein said recording the repeated information data as target information data comprises:
and generating record data based on the target information data, wherein the record data comprises a text corresponding to the target information data.
4. The method according to claim 3, wherein after the recording of the repeated information data as target information data, the method further comprises:
acquiring voice feedback data of the first user corresponding to the target information data;
judging whether the semantics of the voice feedback data are the same as the semantics of the target information data;
and if the semantics of the voice feedback data are different from the semantics of the target information data, modifying the target information data according to the voice feedback data, and recording the modified target information data.
5. The method of claim 4, wherein the recording the modified target information data comprises:
and updating the record data based on the modified target information data, wherein the record data comprises a text corresponding to the modified target information data.
6. The method of claim 3 or 5, wherein the record data comprises a call record identifier; and the generating record data based on the target information data or the updating the record data based on the modified target information data comprises:
and generating an information recording interface on a call record interface based on the call record identifier, wherein the information recording interface is used for displaying the recorded data.
7. The method of claim 6, wherein the information recording interface comprises a touch extension button and a text editing area; the touch extension button is used to control the display state of the text editing area, the display state comprises a collapsed state and an expanded state, and the text editing area is used to edit and display the text in the record data.
8. An information recording apparatus based on voice call, characterized in that the apparatus comprises a processing unit configured to: acquiring voice output data of a first user and voice input data of a second user; judging whether the voice input data and the voice output data have repeated information data or not; and if the voice input data and the voice output data have repeated information data, recording the repeated information data as target information data.
9. An electronic device comprising an application processor, a memory, and one or more programs stored in the memory and configured to be executed by the application processor, the programs comprising instructions for performing the steps of the method of any of claims 1-7.
10. A computer storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 1-7.
CN202010136908.0A 2020-03-02 2020-03-02 Information recording method based on voice call and related device Pending CN111327751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010136908.0A CN111327751A (en) 2020-03-02 2020-03-02 Information recording method based on voice call and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010136908.0A CN111327751A (en) 2020-03-02 2020-03-02 Information recording method based on voice call and related device

Publications (1)

Publication Number Publication Date
CN111327751A true CN111327751A (en) 2020-06-23

Family

ID=71171376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010136908.0A Pending CN111327751A (en) 2020-03-02 2020-03-02 Information recording method based on voice call and related device

Country Status (1)

Country Link
CN (1) CN111327751A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075611A (en) * 2009-11-23 2011-05-25 英业达股份有限公司 Call record method and handheld communication device

Similar Documents

Publication Publication Date Title
CN100424632C (en) Semantic object synchronous understanding for highly interactive interface
CN103634472B (en) User mood and the method for personality, system and mobile phone is judged according to call voice
CN103888581B (en) A kind of communication terminal and its method for recording call-information
CN110049270A (en) Multi-person conference speech transcription method, apparatus, system, equipment and storage medium
CN110335612A (en) Minutes generation method, device and storage medium based on speech recognition
CN108305626A (en) The sound control method and device of application program
US20040064322A1 (en) Automatic consolidation of voice enabled multi-user meeting minutes
CN107612814A (en) Method and apparatus for generating candidate's return information
CN106302933B (en) Voice information processing method and terminal
CN104468989A (en) Privacy protection device for hands-free function
CN110149805A (en) Double-directional speech translation system, double-directional speech interpretation method and program
CN108874904A (en) Speech message searching method, device, computer equipment and storage medium
CN113488024B (en) Telephone interrupt recognition method and system based on semantic recognition
CN109151148B (en) Call content recording method, device, terminal and computer readable storage medium
CN101867742A (en) Television system based on sound control
CN105550235A (en) Information acquisition method and information acquisition apparatuses
JP2016102920A (en) Document record system and document record program
CN111028834B (en) Voice message reminding method and device, server and voice message reminding equipment
CN115840841A (en) Multi-modal dialog method, device, equipment and storage medium
CN108364638A (en) A kind of voice data processing method, device, electronic equipment and storage medium
JP2024037831A (en) Voice terminal voice verification and restriction method
CN111327751A (en) Information recording method based on voice call and related device
CN103067579A (en) Method and device assisting in on-line voice chat
CN114067842B (en) Customer satisfaction degree identification method and device, storage medium and electronic equipment
CN108831473B (en) Audio processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200623