CN102710542B - Method and system for processing sounds - Google Patents
- Publication number
- CN102710542B
- Authority
- CN
- China
- Prior art keywords
- sound
- server
- sounds
- bandwidth
- sample rate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Telephonic Communication Services (AREA)
Abstract
The invention provides a method and a system for processing sounds. The method comprises the following steps: a server receives different types of sounds sent by a client, looks up the corresponding grade coefficients in a sound type grade comparison table on the server, looks up the bandwidth coefficients corresponding to the sounds in a sound bandwidth coefficient comparison table on the server according to the actual bandwidths carrying the different sounds, and saves the sound information of the sounds into a sound source table on the server; the server determines the sample rates of all sounds in the sound source table according to the grade coefficients of the sound types and the bandwidth coefficients of the sounds; and the server adds the sounds whose sample rates in the sound source table match the sample rate specified by a user into a target sound table on the server, and processes the sounds in the target sound table.
Description
Technical field
The invention belongs to the field of instant messaging, and in particular relates to a method and system for processing sounds.
Background technology
With the rapid development of the information society, various instant communication tools such as QQ and video conferencing are being applied more and more widely. During the use of web-based video conferencing, sounds from various sources are often present at the same time; if different types of sounds occur simultaneously in a web conference, the quality of the Internet call is severely affected. Therefore, how to capture and filter out the needed sounds has become a hot topic of industry research. In addition, in daily use, because of the limitations of the network environment and the difficulty of distinguishing different types of sounds, the sound screening process is affected by many factors. How to reasonably and effectively achieve the optimal effect the user needs has become a problem to be solved.
Summary of the invention
The invention provides a method and system for processing sounds to solve the above problem.
The invention provides a method for processing sounds, comprising the following steps. A server receives different types of sounds sent by a client, looks up the grade coefficient corresponding to each sound in a sound type grade comparison table on the server, looks up the bandwidth coefficient corresponding to each sound in a sound bandwidth coefficient comparison table on the server according to the actual bandwidth carrying the sound, and then saves the sound information of the sounds into a sound source table on the server. The server determines the sample rates of all sounds in the sound source table according to the grade coefficient of the type to which each sound belongs and the bandwidth coefficient of the sound. The server adds the sounds whose sample rates in the sound source table match the sample rate specified by a user into a target sound table on the server, and processes the sounds in the target sound table.
The invention provides a system for processing sounds, comprising a client and a server, the client being connected to the server. The client is configured to send different types of sounds to the server. The server comprises a memory, a processor and a controller; the memory is connected to the processor, and the processor is connected to the controller. The memory is configured to look up the grade coefficient corresponding to each sound in the sound type grade comparison table on the server, look up the bandwidth coefficient corresponding to each sound in the sound bandwidth coefficient comparison table on the server according to the actual bandwidth carrying the sound, and then save the sound information of the sounds into the sound source table on the server. The processor is configured to determine the sample rates of all sounds in the sound source table according to the grade coefficient of the type to which each sound belongs and the bandwidth coefficient of the sound. The controller is configured to add the sounds whose sample rates in the sound source table match the sample rate specified by a user into the target sound table on the server, and to process the sounds in the target sound table.
Compared with the prior art, according to the method and system for processing sounds provided by the invention, the server looks up the grade coefficient corresponding to each sound in its local sound type grade comparison table, looks up the bandwidth coefficient corresponding to each sound in its sound bandwidth coefficient comparison table according to the actual bandwidth carrying the sound, and saves the sound information into the sound source table on the server. The sample rate of each sound in the sound source table is determined from the grade coefficient of the type to which the sound belongs and its bandwidth coefficient, the sounds whose sample rates match the sample rate specified by the user are added into the target sound table on the server, and finally the sounds in the target sound table are processed. In this way, only the sounds specified by the user are extracted and converted, other interfering sounds are effectively removed, and the quality of Internet calls is improved.
Accompanying drawing explanation
The accompanying drawings described herein are used to provide a further understanding of the present invention and form a part of this application. The schematic embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the accompanying drawings:
Figure 1 is a flow chart of the method for processing sounds provided by a preferred embodiment of the present invention;
Figure 2 is a schematic diagram of the system for processing sounds provided by a preferred embodiment of the present invention.
Embodiment
Hereinafter, the present invention is described in detail with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, as long as they do not conflict, the embodiments in this application and the features in the embodiments can be combined with each other.
Figure 1 is a flow chart of the method for processing sounds provided by a preferred embodiment of the present invention. As shown in Figure 1, the method comprises steps 101 to 103.
Step 101: the server receives different types of sounds sent by the client, looks up the grade coefficient corresponding to each sound in the sound type grade comparison table on the server, looks up the bandwidth coefficient corresponding to each sound in the sound bandwidth coefficient comparison table on the server according to the actual bandwidth carrying the sound, and then saves the sound information of the sounds into the sound source table on the server.
Specifically, after the server receives a new sound sent by the client, it saves the sound information of this sound in the local sound source table. The sound information comprises the unique identifier of the sound (AuSrcID), the format of the sound (AuWFX), and the actual audio data (AuData). In addition, when a sound from the client stops being sent to the server, the server deletes the sound information of that sound from the local sound source table in real time.
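As a minimal sketch of this bookkeeping (the class and field names below are hypothetical, chosen to follow the AuSrcID, AuWFX and AuData identifiers in the text; the actual server implementation is not disclosed):

```python
from dataclasses import dataclass


@dataclass
class SoundInfo:
    au_src_id: str   # unique identifier of the sound (AuSrcID)
    au_wfx: str      # format of the sound (AuWFX)
    au_data: bytes   # actual audio data (AuData)


class SoundSourceTable:
    """Local sound source table, keyed by the sound's unique identifier."""

    def __init__(self):
        self._rows = {}

    def save(self, info: SoundInfo) -> None:
        # Called when a new sound arrives from the client.
        self._rows[info.au_src_id] = info

    def delete(self, au_src_id: str) -> None:
        # Called when the client stops sending this sound;
        # deleting an id that is already gone is a no-op.
        self._rows.pop(au_src_id, None)
```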
The sound types from the client include advertisement sound, background sound, human voice, and other sounds. A sound type grade comparison table recording the correspondence between each sound type and its grade is prestored on the server; the higher the sound grade, the larger the corresponding grade coefficient. The sound type grade comparison table on the server is shown, for example, in Table 1.
Sound type | Sound grade | Sound type grade coefficient
---|---|---
Human voice | Grade four | 9
Background sound | Grade three | 5
Advertisement sound | Grade two | 3
Other sound | Grade one | 1

Table 1
In addition, a sound bandwidth coefficient comparison table recording the correspondence between the network bandwidth carrying a sound and the sound bandwidth coefficient is also prestored on the server; the wider the network bandwidth carrying a sound, the larger the bandwidth coefficient corresponding to that sound. The sound bandwidth coefficient comparison table on the server is shown, for example, in Table 2.
Sound name | Network bandwidth carrying the sound (bit/s) | Sound bandwidth coefficient
---|---|---
Sound 1 | Greater than or equal to 200 | 0.8
Sound 2 | Greater than 10 and less than 200 | 0.5
Sound 3 | Less than or equal to 10 | 0.2

Table 2
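The two lookups in step 101 can be sketched as follows (a sketch only: the dictionary and function names are hypothetical, while the coefficient values and bandwidth thresholds are taken directly from Tables 1 and 2, with bandwidth in bit/s as given in the text):

```python
# Grade coefficients per sound type, from Table 1.
GRADE_COEFFICIENT = {
    "human voice": 9,
    "background sound": 5,
    "advertisement sound": 3,
    "other sound": 1,
}


def bandwidth_coefficient(bandwidth_bps: float) -> float:
    """Bandwidth coefficient for the network bandwidth carrying a sound, per Table 2."""
    if bandwidth_bps >= 200:
        return 0.8
    if bandwidth_bps > 10:
        return 0.5
    return 0.2
```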
Step 102: the server determines the sample rates of all sounds in the sound source table according to the grade coefficient of the type to which each sound belongs and the bandwidth coefficient of the sound.
Specifically, the sample rate of each sound in the sound source table is determined by calculating the product of the grade coefficient corresponding to the type to which the sound belongs and the bandwidth coefficient corresponding to the sound, and then determining the sample rate of the sound according to the range in which the product falls.
For example, combining Table 1 and Table 2, the correspondence between the product of the grade coefficient and the bandwidth coefficient and the sample rate is set as follows: when the product is greater than 4, the sample rate of the corresponding sound is set to 48000 Hz; when the product is greater than or equal to 1.5 and less than or equal to 4, the sample rate is set to 44100 Hz; and when the product is less than 1.5, the sample rate is set to 22050 Hz. This correspondence between product ranges and sample rates is set only to explain the present embodiment; in other embodiments it can be reset as needed, and it does not limit the present invention.
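The product-to-sample-rate mapping of this embodiment can be sketched directly (the function name is hypothetical; the thresholds and rates are those stated above):

```python
def sample_rate_from_product(product: float) -> int:
    """Map the product of grade coefficient and bandwidth coefficient to a sample rate (Hz)."""
    if product > 4:
        return 48000          # product greater than 4
    if product >= 1.5:
        return 44100          # product in [1.5, 4]
    return 22050              # product less than 1.5
```

Note that the boundary product of exactly 4 falls in the middle range, which is why a sound with grade coefficient 5 and bandwidth coefficient 0.8 (product 4) receives 44100 Hz in the example below.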
Step 103: the server adds the sounds whose sample rates in the sound source table match the sample rate specified by the user into the target sound table on the server, and processes the sounds in the target sound table.
Specifically, when the sample rate of a sound in the sound source table of the server equals the specified sample rate, the sound information of that sound is added to the target sound table. When a sound from the client stops being sent to the server, the server deletes the sound information of that sound from the local sound source table and from the target sound table in real time.
Combining the examples in steps 101 and 102, suppose there are three different types of sounds in the sound source table of the server, whose sound type grade coefficients, sound bandwidth coefficients and corresponding sample rates are shown, for example, in Table 3.
Sound name | Sound type grade coefficient | Sound bandwidth coefficient | Product | Sample rate (Hz)
---|---|---|---|---
Sound 1 | 5 | 0.8 | 4 | 44100
Sound 2 | 9 | 0.5 | 4.5 | 48000
Sound 3 | 3 | 0.2 | 0.6 | 22050

Table 3
According to Table 3, if the user specifies a sample rate of 48000 Hz, the sample rate of sound 2 matches the specified sample rate, so the server adds the sound information of sound 2 from the local sound source table into the target sound table and then processes sound 2 in the target sound table accordingly. Because the sample rates of sound 1 and sound 3 do not equal the specified sample rate, the server neither adds their sound information to the target sound table nor processes them.
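The whole selection of the worked example can be sketched end to end (a sketch under the assumptions of Tables 1 to 3; the variable names are hypothetical, the coefficients and thresholds are those of the embodiment):

```python
def sample_rate(grade: int, bandwidth_coeff: float) -> int:
    """Sample rate (Hz) from the product of the two coefficients, per the embodiment."""
    product = grade * bandwidth_coeff
    if product > 4:
        return 48000
    if product >= 1.5:
        return 44100
    return 22050


# (grade coefficient, bandwidth coefficient) per sound, as in Table 3.
source_table = {
    "sound 1": (5, 0.8),   # product 4   -> 44100 Hz
    "sound 2": (9, 0.5),   # product 4.5 -> 48000 Hz
    "sound 3": (3, 0.2),   # product 0.6 -> 22050 Hz
}

user_rate = 48000  # sample rate specified by the user

# Only sounds whose computed sample rate equals the user's choice
# are added to the target sound table.
target_table = {
    name
    for name, (grade, bw) in source_table.items()
    if sample_rate(grade, bw) == user_rate
}
# target_table contains only "sound 2".
```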
Figure 2 is a schematic diagram of the system for processing sounds provided by a preferred embodiment of the present invention. As shown in Figure 2, the system comprises a client 201 and a server 202, the client 201 being connected to the server 202. The client 201 is configured to send different types of sounds to the server 202. The server 202 comprises a memory 203, a processor 204 and a controller 205; the memory 203 is connected to the processor 204, and the processor 204 is connected to the controller 205. The memory 203 is configured to look up the grade coefficient corresponding to each sound in the sound type grade comparison table on the server 202, look up the bandwidth coefficient corresponding to each sound in the sound bandwidth coefficient comparison table on the server 202 according to the actual bandwidth carrying the sound, and then save the sound information of the sounds into the sound source table on the server 202. The processor 204 is configured to determine the sample rates of all sounds in the sound source table according to the grade coefficient of the type to which each sound belongs and the bandwidth coefficient of the sound. The controller 205 is configured to add the sounds whose sample rates in the sound source table match the sample rate specified by the user into the target sound table on the server, and to process the sounds in the target sound table. The specific operation flow of the above system is as described in the above method and is not repeated here.
In summary, in the method and system for processing sounds provided by the preferred embodiments of the present invention, the server looks up the grade coefficient corresponding to each sound in its local sound type grade comparison table, looks up the bandwidth coefficient corresponding to each sound in its sound bandwidth coefficient comparison table according to the actual bandwidth carrying the sound, and saves the sound information into the sound source table on the server. The sample rate of each sound in the sound source table is determined according to the range of the product of the grade coefficient of the type to which the sound belongs and its bandwidth coefficient, the sounds whose sample rates match the sample rate specified by the user are added into the target sound table on the server, and finally the sounds in the target sound table are processed. In this way, only the sounds at the user-specified sample rate are extracted, other interfering sounds are effectively removed, and the quality of Internet calls is improved.
The foregoing describes only the preferred embodiments of the present invention and is not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any amendment, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (3)
1. A method for processing sounds, characterized in that it comprises the following steps:
a server receives different types of sounds sent by a client, looks up the grade coefficient corresponding to each sound in a sound type grade comparison table on the server, looks up the bandwidth coefficient corresponding to each sound in a sound bandwidth coefficient comparison table on the server according to the actual bandwidth carrying the sound, and then saves the sound information of the sounds into a sound source table on the server;
the server determines the sample rates of all sounds in the sound source table according to the grade coefficient of the type to which each sound belongs and the bandwidth coefficient of the sound;
the server adds the sounds whose sample rates in the sound source table match the sample rate specified by a user into a target sound table on the server, and processes the sounds in the target sound table;
wherein the sample rate of each sound in the sound source table is determined by calculating the product of the grade coefficient corresponding to the type to which the sound belongs and the bandwidth coefficient corresponding to the sound, and determining the sample rate of the sound according to the range of the product.
2. The method according to claim 1, characterized in that, when the client stops sending a sound, the server deletes the sound information of that sound from the sound source table or the target sound table.
3. The method according to claim 1, characterized in that the sound types comprise advertisement sound, background sound, human voice, and other sounds.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210137146.1A CN102710542B (en) | 2012-05-07 | 2012-05-07 | Method and system for processing sounds |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102710542A CN102710542A (en) | 2012-10-03 |
CN102710542B true CN102710542B (en) | 2015-04-01 |
Family
ID=46903108
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210137146.1A Expired - Fee Related CN102710542B (en) | 2012-05-07 | 2012-05-07 | Method and system for processing sounds |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102710542B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105632511A (en) * | 2015-12-29 | 2016-06-01 | 太仓美宅姬娱乐传媒有限公司 | Sound processing method |
CN105721590A (en) * | 2016-02-26 | 2016-06-29 | 太仓埃特奥数据科技有限公司 | Sound processing method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101740035A (en) * | 2008-11-04 | 2010-06-16 | Sony Corporation | Call voice processing apparatus, call voice processing method and program |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5849411B2 (en) * | 2010-09-28 | 2016-01-27 | Yamaha Corporation | Masker sound output device |
- 2012-05-07: CN201210137146.1A patent (granted as CN102710542B), status: not active, Expired - Fee Related
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C56 | Change in the name or address of the patentee | ||
CP01 | Change in the name or title of a patent holder |
- Address after: Building 8, Weiting Intelligent Industrial Park, No. 666 Fengting Avenue, Suzhou Industrial Park, Suzhou City, Jiangsu Province, 215121
- Patentee after: CODYY EDUCATION TECHNOLOGY Co.,Ltd.
- Address before: Building 8, Weiting Intelligent Industrial Park, No. 666 Fengting Avenue, Suzhou Industrial Park, Suzhou City, Jiangsu Province, 215121
- Patentee before: SUZHOU CODYY NETWORK SCIENCE & TECHNOLOGY Co.,Ltd.
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20150401 |