CN109065066B - Call control method, device and equipment - Google Patents


Info

Publication number
CN109065066B
CN109065066B (application CN201811146936.XA)
Authority
CN
China
Prior art keywords
user
sound
call
voice
call control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811146936.XA
Other languages
Chinese (zh)
Other versions
CN109065066A (en)
Inventor
陈超候
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan ELF Education Software Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201811146936.XA
Publication of CN109065066A
Application granted
Publication of CN109065066B

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 — Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 — Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 — Noise filtering
    • G10L21/0216 — Noise filtering characterised by the method used for estimating noise

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

A call control method comprises the following steps: collecting sound signals of the scene where a user is located through a microphone; extracting, according to a preset user audio interval, the sound signals belonging to the user audio interval from the collected sound signals; and sending the extracted sound signal to the call counterpart after enhancement processing. In this way, the call counterpart can understand the content spoken by the user more clearly, which helps ensure call quality and improves the user's call experience.

Description

Call control method, device and equipment
Technical Field
The present application belongs to the field of communications, and in particular, to a method, an apparatus, and a device for controlling a call.
Background
With the development of smart device technology, the functions of smart devices have become increasingly diverse, bringing great convenience to people's life and work. For example, besides voice calls and text messaging, a smartphone can send and receive mail, shop, live-stream, play videos, make cashless payments, and so on, which brings smart devices ever closer to people's life and work.
When a smart device is used for a voice call, other sounds may exist in the environment, so the sound data collected by the user's smart device includes both the user's voice and environmental noise. As a result, the call counterpart cannot clearly identify the voice content spoken by the user, the user's call quality is low, and the user experience suffers.
Disclosure of Invention
In view of this, embodiments of the present application provide a call control method, apparatus, and device, to solve the prior-art problem that, because other sounds may exist in the environment, the sound data collected by the user's smart device includes environmental noise, so the call counterpart cannot clearly identify the voice content spoken by the user, resulting in low call quality and a poor user experience.
A first aspect of an embodiment of the present application provides a call control method, where the call control method includes:
collecting sound signals of a scene where a user is located through a microphone;
extracting sound signals belonging to a user audio interval from the collected sound signals according to a preset user audio interval;
and sending the extracted sound signal to the call counterpart after enhancement processing.
With reference to the first aspect, in a first possible implementation manner of the first aspect, before the sending the sound signal to the call partner, the method further includes:
acquiring a scene where a user is currently located;
searching for a noise feature corresponding to the scene where the user is currently located according to a preset correspondence between scenes and noise features;
and filtering the collected sound of the scene where the user is located according to the searched noise characteristics.
With reference to the first aspect, in a second possible implementation manner of the first aspect, before the subjecting the extracted sound signal to enhancement processing, the method further includes:
extracting the voice content of the call counterpart;
and determining the intensity of the sound enhancement processing according to the voice content of the call counterpart.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the determining the strength of the sound enhancement processing according to the voice content of the call counterpart includes:
determining the strength of the sound enhancement processing according to the number of times the call counterpart restates the user's speech;
alternatively, determining the strength of the sound enhancement processing according to the frequency of question keywords included in the call counterpart's speech.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, before extracting, according to a preset user audio interval, a sound signal belonging to the user audio interval in the collected sound, the method further includes:
acquiring a plurality of voice recordings of the user during use;
and determining the user audio interval by frequency analysis of the acquired voices.
A second aspect of an embodiment of the present application provides a call control apparatus, including:
the sound collection unit is used for collecting, through a microphone, the sound of the scene where the user is located;
the sound signal extraction unit is used for extracting, according to a preset user audio interval, the sound signals belonging to the user audio interval from the collected sound signals;
and the enhancement processing unit is used for performing enhancement processing on the extracted sound signal and then sending it to the call counterpart.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the apparatus further includes:
the position acquisition unit is used for acquiring the current scene of the user;
the noise feature searching unit is used for searching the noise feature corresponding to the current position of the user according to the corresponding relation between the preset scene and the noise feature;
and the filtering unit is used for filtering the collected sound of the scene where the user is located according to the searched noise characteristics.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the apparatus further includes:
a voice content extraction unit for extracting the voice content of the call counterpart;
and the enhancement processing unit is used for determining the strength of sound enhancement processing according to the voice content of the call counterpart.
A third aspect of embodiments of the present application provides a call control device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the call control method according to any one of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the call control method according to any one of the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages: during a call, the sound signals of the scene where the user is located are collected, the sound signals belonging to a preset user audio interval are extracted from them, and the extracted signals are enhanced, so that the call counterpart can understand the content spoken by the user more clearly, which helps ensure call quality and improves the user's call experience.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a call control method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of another call control method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of another call control method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a call control device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a call control device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart of an implementation of a call control method provided in an embodiment of the present application, which is detailed as follows:
in step S101, a sound signal of a scene where a user is located is collected through a microphone;
specifically, the call control method described in the present application may be used for voice call of a mobile phone network, and may also be used for instant messaging tools, such as WeChat and QQ. When the user is in the voice call process, the call control method can be adopted to improve the call quality of the user.
Besides the user's voice, the sound signal of the scene where the user is located may also include other sound signals. For example, when the user is at a roadside, it may include car horns, car engine noise, and the like; when the user is at the seaside, it may include the sound of sea wind, waves breaking, and so on.
In step S102, extracting a sound signal belonging to a preset user audio interval from the collected sound signals according to the preset user audio interval;
the method and the device can preset the audio frequency interval of the user, and can set the audio frequency interval of the user to be 50-500Hz as a general setting mode. Certainly, in an embodiment preferred in the present application, the voice data of the user may also be counted, the frequency interval where the audio frequency of the user speaking voice is located is calculated according to the speaking voice of the user, and the counted frequency interval is used as the user audio interval, which may be more effectively matched with the user speaking voice, so that the voice of the user may be better extracted, and the enhancement processing may be performed in a targeted manner, so that the voice processing quality may be further improved, and the call definition may be improved.
In step S103, the extracted sound signal is subjected to enhancement processing and then transmitted to the other party of the call.
After the sound signals belonging to the preset user audio interval are extracted from the collected sound, the extracted sound signal is enhanced to obtain clearer speech, so that the call counterpart can understand the content the user wants to express more clearly, which helps improve call quality.
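The patent does not fix a concrete enhancement algorithm; as a minimal sketch, assuming "enhancement" means applying a gain with soft clipping so the amplified voice cannot overload the channel:

```python
import numpy as np

def enhance(signal, gain=2.0, limit=1.0):
    """Amplify the extracted voice signal by `gain`, soft-clipping with
    tanh so the output amplitude stays within [-limit, limit]."""
    return limit * np.tanh(gain * np.asarray(signal, dtype=float) / limit)
```

The `tanh` limiter is this sketch's choice; a real implementation might instead use spectral gain shaping or an automatic gain control loop.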
Fig. 2 is a schematic view of an implementation flow of another call control method provided in an embodiment of the present application, which is detailed as follows:
in step S201, collecting a sound signal of a scene where a user is located through a microphone;
in step S202, a scene where the user is currently located is acquired;
The scene where the user is currently located may be determined from the user's position. The current position of the user can be obtained through one or more of satellite positioning, base station positioning, and motion-sensor positioning. Alternatively, scene information input by the user may be received before the call; for example, the user selects the roadside as the current scene before calling.
The correspondence between positions and scenes may be learned adaptively from the sound data collected while the user uses the device, together with the user's position at the time, so that the correspondence is continuously refined during use.
In step S203, searching for a noise feature corresponding to a current location of the user according to a preset correspondence between the scene and the noise feature;
the characteristics of noise carried in various scenes can be counted in advance, and the corresponding relation between the scenes and the noise characteristics is established. For example, for a roadside scene, the corresponding noise signature may include automobile engine noise, automobile whistle noise, and the like.
In step S204, filtering the collected sound of the scene where the user is located according to the searched noise feature;
according to the noise characteristics corresponding to the current scene of the user, the scene noise included in the current scene of the user can be filtered, for example, when the user is positioned on a roadside, the sound of an automobile engine and the sound of an automobile whistle in the current scene can be filtered, so that the opposite party of the call can more effectively acquire the voice content of the user. Of course, as an optional implementation manner of the present application, in order to ensure the reality of the call scene, the scene noise of the scene where the user is located may be subjected to the enhancement processing.
In step S205, extracting a sound signal belonging to a preset user audio interval from the filtered sound signals according to the preset user audio interval;
in step S206, the extracted sound signal is subjected to enhancement processing and then transmitted to the other party of the call.
Steps S205-S206 are substantially the same as steps S102-S103 in fig. 1.
The call control method shown in fig. 2 further filters noise of a scene where the user is located based on fig. 1, so that the call partner can further conveniently obtain sound content.
Fig. 3 is a schematic view of an implementation flow of another call control method provided in the embodiment of the present application, which is detailed as follows:
in step S301, a sound signal of a scene where a user is located is collected through a microphone;
in step S302, according to a preset user audio interval, extracting a sound signal belonging to the user audio interval from the collected sound signals;
steps S301-S302 are substantially the same as steps S101-S102 in fig. 1.
In step S303, extracting the voice content of the call partner;
in the process of communication, the voice signal of the communication conversation can be extracted in real time, and the voice signal is identified or analyzed to obtain the voice content of the opposite party of the communication.
In step S304, the intensity of the sound reinforcement processing is determined according to the voice content of the call partner;
the voice content of the call counterpart is obtained through analysis, and the strength of the enhancement processing on the extracted voice signal can be determined according to the voice content of the call counterpart, which specifically includes the following processing modes:
the first method is as follows: determining the strength of sound enhancement processing according to the times of the speaking times of the talking counterpart rephrasing the user;
the speaking content of the user and the speaking content of the other party can be compared in real time, whether the speaking content of the other party is reprinted or not is judged, if yes, the fact that the clearness of the other party is not high to the knowledge of the current speaking content is probably indicated, and the current content needs to be confirmed through voice.
Whether the speaking content is restated or not can be judged according to a keyword comparison mode and a semantic recognition mode whether the speaking content of the user is restated or not by the other party of the call.
In addition, the intensity of the voice enhancement processing can be set as twice enhancement by default, and the intensity can be increased correspondingly according to the number of times of restatement of the call partner.
Mode two: determining the strength of the sound enhancement processing according to the frequency of question keywords included in the call counterpart's speech.
Besides judging the current call quality by the number of restatements by the call counterpart, the quality of the current call can be estimated from the frequency of question keywords included in the counterpart's speaking content. For example, when the counterpart's speech includes question keywords such as "Huh?", "What?", or "Pardon?" spoken in a questioning tone, the counterpart can be considered not to have heard the user's current speech clearly. The intensity of the sound enhancement processing can then be increased according to the frequency with which question keywords occur.
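Both modes above can be sketched as simple counters feeding a gain. The question-keyword list, the 0.6 word-overlap threshold for detecting a restatement, and the 0.5 gain step are assumed values for illustration only; the patent leaves these parameters open.

```python
QUESTION_KEYWORDS = ("huh", "what", "pardon", "sorry", "again")  # assumed list

def looks_like_restatement(user_utterance, counterpart_utterance, overlap=0.6):
    """Keyword-comparison check from mode one: treat the counterpart's
    utterance as a restatement if most of its words echo the user's."""
    user_words = set(user_utterance.lower().split())
    other_words = counterpart_utterance.lower().split()
    if not other_words:
        return False
    echoed = sum(1 for w in other_words if w in user_words)
    return echoed / len(other_words) >= overlap

def enhancement_gain(restatements=0, question_keyword_count=0,
                     base_gain=2.0, step=0.5):
    """Default gain of 2.0 (the 'twofold enhancement' default), raised
    per restatement or per question keyword; increments are assumed."""
    return base_gain + step * (restatements + question_keyword_count)
```

Semantic-recognition-based restatement detection, also mentioned in the text, would replace the word-overlap check with a sentence-similarity model.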
In step S305, the extracted sound signal is subjected to enhancement processing according to the determined intensity and then transmitted to the other party of the call.
Fig. 3, building on fig. 1, further analyzes the voice content of the call counterpart during the call and adjusts the strength of the enhancement processing of the voice signal according to that content, so as to adapt more effectively to the requirements of the call scenario.
It is understood that the call control methods described in fig. 2 and 3 may be combined with each other.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 is a schematic structural diagram of a call control device according to an embodiment of the present application, which is detailed as follows:
the call control device includes:
a sound collection unit 401, configured to collect sound of a scene where a user is located through a microphone;
a sound signal extracting unit 402, configured to extract, according to a preset user audio interval, a sound signal belonging to the user audio interval from the collected sound signals;
and a reinforcement processing unit 403, configured to perform reinforcement processing on the extracted sound signal and send the sound signal to a call partner.
Preferably, the apparatus further comprises:
the position acquisition unit is used for acquiring the current scene of the user;
the noise feature searching unit is used for searching the noise feature corresponding to the current position of the user according to the corresponding relation between the preset scene and the noise feature;
and the filtering unit is used for filtering the collected sound of the scene where the user is located according to the searched noise characteristics.
Preferably, the apparatus further comprises:
a voice content extraction unit for extracting the voice content of the call counterpart;
and the enhancement processing unit is used for determining the strength of sound enhancement processing according to the voice content of the call counterpart.
The call control device described in fig. 4 corresponds to the call control methods described in figs. 1 to 3.
Fig. 5 is a schematic diagram of a call control device according to an embodiment of the present application. As shown in fig. 5, the call control device 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52, such as a call control program, stored in said memory 51 and executable on said processor 50. The processor 50 executes the computer program 52 to implement the steps in the above-mentioned call control method embodiments, such as the steps 101 to 103 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of each module/unit in the above-mentioned device embodiments, for example, the functions of the modules 401 to 403 shown in fig. 4.
Illustratively, the computer program 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 52 in the call control device 5. For example, the computer program 52 may be divided into a sound collection unit, a sound signal extraction unit, and a reinforcement processing unit, and each unit has the following specific functions:
the sound collection unit is used for collecting, through a microphone, the sound of the scene where the user is located;
the sound signal extraction unit is used for extracting, according to a preset user audio interval, the sound signals belonging to the user audio interval from the collected sound signals;
and the enhancement processing unit is used for performing enhancement processing on the extracted sound signal and then sending it to the call counterpart.
The call control device 5 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The call control device may include, but is not limited to, a processor 50 and a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the call control device 5 and does not constitute a limitation of it; the device may include more or fewer components than those shown, combine certain components, or use different components. For example, the call control device may also include input/output devices, network access devices, buses, and the like.
The Processor 50 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the call control device 5, such as a hard disk or a memory of the call control device 5. The memory 51 may also be an external storage device of the call control device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the call control device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the call control device 5. The memory 51 is used to store the computer program and other programs and data required by the call control device. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (5)

1. A call control method, characterized by comprising the following steps:
collecting, through a microphone, sound signals of the scene where a user is located;
extracting, from the collected sound signals, the sound signals belonging to a preset user audio interval according to that interval;
sending the extracted sound signals to the call counterpart after enhancement processing;
wherein, before the enhancement processing of the extracted sound signals, the method further comprises:
extracting the voice content of the call counterpart; and
determining the strength of the sound enhancement processing according to the voice content of the call counterpart, which comprises: determining the strength of the sound enhancement processing according to the number of times the call counterpart restates what the user has said; or determining the strength of the sound enhancement processing according to the frequency of question keywords appearing in the call counterpart's speech; wherein whether the call counterpart restates what the user has said is judged by keyword comparison or by semantic recognition.
2. The call control method according to claim 1, wherein before extracting the sound signals belonging to the preset user audio interval from the collected sound signals, the method further comprises:
acquiring a plurality of voice samples of the user during use; and
determining the user audio interval according to a frequency analysis of the acquired voice samples.
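One way the frequency analysis of claim 2 could work is to average the spectra of the collected voice clips and take the band that captures most of the spectral energy. A minimal sketch, assuming NumPy, equal-length clips, and a 90%-energy heuristic (an illustrative choice, not the patent's method):

```python
import numpy as np

def user_audio_interval(clips, sample_rate, energy_fraction=0.9):
    """Estimate the (low_hz, high_hz) band capturing `energy_fraction`
    of the average spectral energy of the user's voice clips.
    Clips are assumed to be the same length for simplicity."""
    spectra = []
    for clip in clips:
        spec = np.abs(np.fft.rfft(clip)) ** 2   # power spectrum
        spectra.append(spec / spec.sum())       # normalize per clip
    avg = np.mean(spectra, axis=0)
    freqs = np.fft.rfftfreq(len(clips[0]), d=1.0 / sample_rate)
    cdf = np.cumsum(avg)
    tail = (1.0 - energy_fraction) / 2.0
    # Cut off `tail` of the energy at each end of the spectrum.
    low = freqs[np.searchsorted(cdf, tail)]
    high = freqs[np.searchsorted(cdf, 1.0 - tail)]
    return low, high
```

Sounds outside the resulting interval would then be treated as background and discarded before the enhancement step.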
3. A call control device, characterized by comprising:
a sound collection unit, configured to collect, through a microphone, the sound of the scene where a user is located;
a sound signal extraction unit, configured to extract, from the collected sound signals, the sound signals belonging to a preset user audio interval;
a voice content extraction unit, configured to extract the voice content of the call counterpart; and
an enhancement processing unit, configured to determine the strength of the sound enhancement processing according to the voice content of the call counterpart, which comprises: determining the strength according to the number of times the call counterpart restates what the user has said; or determining the strength according to the frequency of question keywords appearing in the call counterpart's speech; wherein whether the call counterpart restates what the user has said is judged by keyword comparison or by semantic recognition;
wherein the extracted sound signals are sent to the call counterpart after the enhancement processing.
4. A call control device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the call control method according to claim 1 or 2.
5. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the call control method according to claim 1 or 2.
CN201811146936.XA 2018-09-29 2018-09-29 Call control method, device and equipment Active CN109065066B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811146936.XA CN109065066B (en) 2018-09-29 2018-09-29 Call control method, device and equipment

Publications (2)

Publication Number Publication Date
CN109065066A CN109065066A (en) 2018-12-21
CN109065066B true CN109065066B (en) 2020-03-31

Family

ID=64766831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811146936.XA Active CN109065066B (en) 2018-09-29 2018-09-29 Call control method, device and equipment

Country Status (1)

Country Link
CN (1) CN109065066B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284500B (en) * 2021-05-19 2024-02-06 Oppo广东移动通信有限公司 Audio processing method, device, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103971696A (en) * 2013-01-30 2014-08-06 华为终端有限公司 Method, device and terminal equipment for processing voice
CN104811559A (en) * 2015-05-05 2015-07-29 上海青橙实业有限公司 Noise reduction method, communication method and mobile terminal
CN106297779A (en) * 2016-07-28 2017-01-04 块互动(北京)科技有限公司 A kind of background noise removing method based on positional information and device
CN107277207A (en) * 2017-07-14 2017-10-20 广东欧珀移动通信有限公司 Adaptive call method, device, mobile terminal and storage medium
CN107682547A (en) * 2017-09-29 2018-02-09 努比亚技术有限公司 A kind of voice messaging regulation and control method, equipment and computer-readable recording medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN107995360B (en) * 2017-11-27 2020-08-25 Oppo广东移动通信有限公司 Call processing method and related product
CN108521621B (en) * 2018-03-30 2020-01-10 Oppo广东移动通信有限公司 Signal processing method, device, terminal, earphone and readable storage medium

Similar Documents

Publication Publication Date Title
CN108076226B (en) Method for adjusting call quality, mobile terminal and storage medium
CN107995360B (en) Call processing method and related product
CN111312283B (en) Cross-channel voiceprint processing method and device
CN110769111A (en) Noise reduction method, system, storage medium and terminal
CN110808030B (en) Voice awakening method, system, storage medium and electronic equipment
CN111447327A (en) Fraud telephone identification method, device, storage medium and terminal
CN111343410A (en) Mute prompt method and device, electronic equipment and storage medium
CN111540370A (en) Audio processing method and device, computer equipment and computer readable storage medium
CN109065066B (en) Call control method, device and equipment
CN114822578A (en) Voice noise reduction method, device, equipment and storage medium
CN114338623A (en) Audio processing method, device, equipment, medium and computer program product
CN114333896A (en) Voice separation method, electronic device, chip and computer readable storage medium
CN116312559A (en) Training method of cross-channel voiceprint recognition model, voiceprint recognition method and device
CN107154996B (en) Incoming call interception method and device, storage medium and terminal
CN116110418A (en) Audio noise reduction method and device, storage medium and electronic device
CN113763968B (en) Method, apparatus, device, medium, and product for recognizing speech
CN114944152A (en) Vehicle whistling sound identification method
CN112820298B (en) Voiceprint recognition method and device
CN111899747B (en) Method and apparatus for synthesizing audio
US20200184973A1 (en) Transcription of communications
CN112908339A (en) Conference link positioning method and device, positioning equipment and readable storage medium
CN107819964B (en) Method, device, terminal and computer readable storage medium for improving call quality
CN111401152B (en) Face recognition method and device
CN115376501B (en) Voice enhancement method and device, storage medium and electronic equipment
CN111726283B (en) WeChat receiving method and device for vehicle-mounted intelligent sound box

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210316

Address after: 16 / F, 168 Dongmen Middle Road, Xiaobian community, Chang'an Town, Dongguan City, Guangdong Province, 523000

Patentee after: Dongguan elf Education Software Co.,Ltd.

Address before: 523860 No. 168 Dongmen Middle Road, Xiaobian Community, Chang'an Town, Dongguan City, Guangdong Province

Patentee before: Guangdong Xiaotiancai Technology Co.,Ltd.