CN114464210A - Sound processing method, sound processing device, computer equipment and storage medium - Google Patents


Publication number
CN114464210A
Authority
CN
China
Prior art keywords
sound
information
sound information
negative
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210137734.9A
Other languages
Chinese (zh)
Inventor
崔洋洋 (Cui Yangyang)
余俊澎 (Yu Junpeng)
王星宇 (Wang Xingyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youmi Technology Shenzhen Co ltd
Original Assignee
Youmi Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Youmi Technology Shenzhen Co ltd filed Critical Youmi Technology Shenzhen Co ltd
Priority to CN202210137734.9A
Publication of CN114464210A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present application relates to a sound processing method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: acquiring sound; analyzing the sound to obtain sound information; judging according to the sound information to determine negative sound information; and outputting the adjusted sound after adjusting the negative sound information. By processing sound with this method, negative sound in communication can be avoided and the comfort level of the communication improved.

Description

Sound processing method, sound processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a sound processing method, an apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of artificial intelligence technology, sound optimization technology has emerged. The conventional sound optimization process mainly removes impurities or noise in the sound, but cannot adjust emotion in the sound.
Disclosure of Invention
In view of the above, it is desirable to provide a sound processing method, apparatus, computer device, computer readable storage medium, and computer program product capable of adjusting negative emotion in a sound.
In a first aspect, the present application provides a sound processing method. The method comprises the following steps:
acquiring sound;
analyzing the sound to obtain sound information;
judging according to the sound information to determine negative sound information;
and outputting the adjusted sound after adjusting the negative sound information.
In one embodiment, the analyzing the sound to obtain sound information includes:
carrying out sound quality analysis on the sound to obtain the sound information.
In one embodiment, the determining the negative sound information according to the sound information includes:
capturing sad emotions according to the sound information;
and determining negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound wave vibration frequency.
In one embodiment, the capturing of sad emotions according to the sound information includes:
acquiring emotion judgment information;
capturing sad emotions from the sound information according to the sound information and the emotion judgment information, wherein the emotion judgment information comprises: preset voice emotion judgment information and voice emotion judgment information collected from the Internet.
In one embodiment, the determining the negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound wave vibration frequency comprises:
determining the sadness degree according to the vibrato and the fluctuation degree of the sound wave vibration frequency;
and determining negative sound information according to the sadness degree.
In one embodiment, the adjusting the negative sound information and outputting the adjusted sound includes:
after the sound quality of the negative sound information is adjusted, outputting the adjusted sound.
In a second aspect, the present application further provides a sound processing apparatus. The device comprises:
the voice acquisition module is used for acquiring voice;
the analysis module is used for analyzing the sound to obtain sound information;
the judging module is used for judging according to the sound information and determining negative sound information;
and the output module is used for outputting the adjusted sound after adjusting the negative sound information.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring sound;
analyzing the sound to obtain sound information;
judging according to the sound information to determine negative sound information;
and outputting the adjusted sound after adjusting the negative sound information.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring sound;
analyzing the sound to obtain sound information;
judging according to the sound information to determine negative sound information;
and outputting the adjusted sound after adjusting the negative sound information.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring sound;
analyzing the sound to obtain sound information;
judging according to the sound information to determine negative sound information;
and outputting the adjusted sound after adjusting the negative sound information.
According to the sound processing method, the sound processing apparatus, the computer device, the storage medium and the computer program product, sound is acquired; the sound is analyzed to obtain sound information; judgment is performed according to the sound information to determine negative sound information; and the adjusted sound is output after the negative sound information is adjusted. By acquiring the sound, analyzing it to obtain the negative sound information it contains, and outputting the adjusted sound after optimizing that negative sound information, negative sound in communication can be avoided and the comfort level of the communication improved.
Drawings
FIG. 1 is a diagram of an exemplary sound processing system;
FIG. 2 is a flow diagram of a sound processing method in one embodiment;
FIG. 3 is a flow chart of a sound processing method according to another embodiment;
FIG. 4 is a block diagram showing the structure of a sound processing apparatus according to an embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The sound processing method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. The terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104, or may be located on the cloud or another network server. The server 104 acquires the sound in the terminal 102; analyzes the sound to obtain sound information; judges according to the sound information to determine negative sound information; and outputs the adjusted sound after adjusting the negative sound information. The terminal 102 may be, but is not limited to, any of various personal computers, notebook computers, smart phones, tablet computers, Internet of Things devices and portable wearable devices; the Internet of Things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a sound processing method is provided, which is described by taking the method as an example applied to the terminal 102 or the server 104 in fig. 1, and includes the following steps:
step 202, sound is acquired.
The sound can be acquired through a microphone, through a telephone, through social software with a voice function, or through another tool with a sound collection function. The format of the sound can be MP3, WAV, or another audio format.
Specifically, the processor obtains the voices of the two parties in the communication through a microphone, through another channel such as a telephone, or through a tool with a sound collection function, such as a voice recorder.
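As a concrete sketch of this acquisition step (the function name and the in-memory test tone are illustrative, not from the patent), the following standard-library Python reads mono 16-bit PCM samples from a WAV stream, which could equally come from a microphone capture or a recording file:

```python
import io
import math
import struct
import wave

def read_wav_samples(source):
    """Read a mono 16-bit PCM WAV file or stream; return (sample_rate, samples)."""
    with wave.open(source, "rb") as wf:
        rate = wf.getframerate()
        raw = wf.readframes(wf.getnframes())
    # Unpack little-endian signed 16-bit samples and scale to floats in [-1, 1].
    samples = [s / 32768.0 for s in struct.unpack("<%dh" % (len(raw) // 2), raw)]
    return rate, samples

# Demo: synthesize one second of a 440 Hz tone in memory, then "acquire" it.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)   # mono
    wf.setsampwidth(2)   # 16-bit
    wf.setframerate(8000)
    wf.writeframes(b"".join(
        struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * 440 * t / 8000)))
        for t in range(8000)))
buf.seek(0)
rate, samples = read_wav_samples(buf)
```

In a real deployment `source` would be the recorded file; note the standard library handles only WAV, so an MP3 input would need a separate decoder.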
And step 204, analyzing the sound to obtain sound information.
The sound information is attribute information of the sound, such as its pitch and timbre. Pitch describes how high or low a sound is and is determined by the vibration frequency of the sound wave; male voices are generally lower in pitch and female voices sharper. Timbre is the basic characteristic that distinguishes one sound from another, such as the sounds emitted by different objects, and is the most important characteristic factor in sound recognition.
Specifically, the processor decomposes and analyzes the acquired sound to obtain its attribute information, including pitches with different sound wave vibration frequencies and the characteristic timbres that distinguish sounds.
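The patent does not specify how the sound wave vibration frequency is measured; one common generic technique is autocorrelation over a short frame. The sketch below assumes a plausible 80-500 Hz search range (an assumption, not a value from the patent):

```python
import math

def estimate_pitch(samples, rate, fmin=80.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of a frame by picking the
    lag with the highest autocorrelation inside the [fmin, fmax] range."""
    lag_min = int(rate / fmax)
    lag_max = min(int(rate / fmin), len(samples) - 1)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max):
        # Correlate the frame with a delayed copy of itself.
        corr = sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return rate / best_lag

# A 200 Hz test tone should come back as roughly 200 Hz.
rate = 8000
tone = [math.sin(2 * math.pi * 200 * t / rate) for t in range(rate // 4)]
pitch = estimate_pitch(tone, rate)
```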
And step 206, judging according to the sound information, and determining the negative sound information.
The negative sound information consists of the sound wave vibration frequencies of the pitch and the timbre characteristics that convey emotions such as anger and sadness.
Specifically, the processor acquires the timbre and pitch of the sound, judges them against preset negative-sound judgment information, and determines the sound wave vibration frequencies of the pitch and the timbre characteristics that convey emotions such as anger and sadness.
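The preset judgment can be sketched as a simple interval check. The interval boundaries below are purely hypothetical, since the patent gives no concrete numbers:

```python
# Hypothetical preset intervals for a "sad" sound (the patent specifies none):
SAD_PITCH_INTERVAL = (80.0, 160.0)   # low, flat pitch range in Hz
SAD_TIMBRE_INTERVAL = (0.0, 0.35)    # dull timbre, e.g. a normalized brightness

def is_negative(pitch_hz, timbre_brightness):
    """Flag a sound as negative when both its pitch and its timbre feature
    fall inside the preset sad intervals."""
    lo_p, hi_p = SAD_PITCH_INTERVAL
    lo_t, hi_t = SAD_TIMBRE_INTERVAL
    return lo_p <= pitch_hz <= hi_p and lo_t <= timbre_brightness <= hi_t
```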
And step 208, after the negative sound information is adjusted, the adjusted sound is output.
Specifically, the processor adjusts the timbre characteristics and the pitch sound wave vibration frequency that affect a person's emotion, and outputs the sound with adjusted timbre and pitch.
In the sound processing method, sound is acquired; the sound is analyzed to obtain sound information; judgment is performed according to the sound information to determine negative sound information; and the adjusted sound is output after the negative sound information is adjusted. The sound information is obtained through sound analysis, the negative sound information is determined by judging the sound information, and the sound is output after the negative sound information is adjusted; in this way negative emotion in the sound can be masked, negative sound in communication avoided, and the comfort level of communication improved.
In one embodiment, analyzing the sound to obtain sound information includes: and carrying out sound quality analysis on the sound to obtain sound information.
The sound quality analysis includes, but is not limited to, pitch analysis and timbre analysis. Pitch analysis examines the sound wave vibration frequency of the sound; timbre analysis examines the characteristic timbre features of the sound.
Specifically, the processor recognizes the sound collected by the microphone or produced by the two communicating parties via telephone or the like, analyzes the sound wave vibration frequency to determine the pitch of the sound, and analyzes the timbre characteristics to determine the timbre of the sound.
In this embodiment, the sound information is obtained by analyzing the sound, so that the composition of the sound information can be analyzed more comprehensively, which facilitates the subsequent processing of the sound.
In one embodiment, determining the negative sound information based on the sound information comprises: capturing sad emotions according to the sound information; and determining the negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound wave vibration frequency.
Different sound wave vibration frequency intervals are set according to the fluctuation degrees of the frequencies that characterize pitches with different emotions, and different timbre characteristic intervals are set according to the timbre characteristics of different emotions. A sad emotion refers to a sound whose pitch vibration frequency falls within a first sound wave interval and whose timbre characteristics fall within a first timbre interval.
Specifically, according to the preset first sound wave interval of the pitch and first timbre interval of the timbre, the processor captures from the sound information, as a sad emotion, any sound whose sound wave vibration frequency falls within the first sound wave interval and whose timbre characteristics fall within the first timbre interval. It then obtains the vibrato and sound wave vibration frequency of the sad emotion, and determines the negative sound information in the collected sound according to the vibrato and the fluctuation degree of the sound wave vibration frequency of the sad emotion.
In this embodiment, the sad emotion is captured by means of the sad-emotion sound wave interval and timbre interval, and the negative sound information is determined from the vibrato of the sad emotion and the fluctuation degree of the sound wave vibration frequency, so that negative sound can be further screened.
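The patent does not define the "fluctuation degree" concretely; one simple proxy, used purely as an assumption here, is the mean absolute frame-to-frame change of a pitch track:

```python
def fluctuation_degree(pitch_track):
    """Mean absolute frame-to-frame pitch change (Hz): a crude measure of
    vibrato / vibration-frequency fluctuation over a track of pitch values."""
    diffs = [abs(b - a) for a, b in zip(pitch_track, pitch_track[1:])]
    return sum(diffs) / len(diffs)

steady = [200.0] * 10                                    # flat pitch, no vibrato
wobbly = [200.0, 210.0, 195.0, 212.0, 198.0, 208.0]      # fluctuating pitch
```

A steady track scores zero, while a wobbling track scores high; a threshold on this score could then feed the negative-sound decision.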
In one embodiment, capturing sad emotions from sound information comprises: acquiring emotion judgment information; capturing sad emotions from the sound information according to the sound information and emotion judgment information, wherein the emotion judgment information comprises: preset voice emotion judgment information and voice emotion judgment information collected from the Internet.
The emotion judgment information consists of the timbre characteristic intervals and sound wave vibration frequency intervals used to distinguish the emotion of a sound, representing, but not limited to, happiness, sadness, anger, fear, disgust, surprise, calmness, and apprehension.
Specifically, the processor acquires emotion-bearing sounds collected from the Internet, extracts from them the timbre and pitch vibration frequency representing each emotion, and, by comparing against both the preset timbre characteristic interval and pitch sound wave vibration frequency interval and the emotion judgment information collected from the Internet, identifies in the sound's timbre and pitch those components that fall within the first timbre interval and the first sound wave interval.
In this embodiment, the sad emotion is captured through the preset voice emotion judgment information together with the voice emotion judgment information collected from the Internet, which can improve the accuracy and comprehensiveness of capturing sad emotion.
In one embodiment, the determining the negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound wave vibration frequency comprises: determining the sadness degree according to the vibrato and the fluctuation degree of the sound wave vibration frequency; and determining the negative sound information according to the sadness degree.
The sound that falls within the first sound wave interval and the first timbre characteristic interval is compared with the sound wave vibration frequency interval and timbre characteristic interval of Internet-collected sounds representing sad emotion, and the degree of sadness is determined from the degree of overlap.
Specifically, the processor obtains the vibrato and sound wave vibration frequency from the sound within the first sound wave interval, compares them with the vibrato and sound wave vibration frequency of emotion-bearing sounds collected from the Internet, and determines the sadness degree from the degree of overlap between the first sound wave interval and the Internet-collected interval representing sad emotion: the higher the overlap, the higher the sadness degree.
In this embodiment, the sadness degree is determined from the vibrato and sound wave vibration frequency of the sad emotion, which can improve the accuracy of identifying sad emotion.
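The overlap-based sadness degree can be made concrete as an interval-overlap ratio. This is one plausible reading of the embodiment, with the intervals themselves left as caller-supplied assumptions:

```python
def interval_overlap(observed, reference):
    """Fraction of the observed (lo, hi) interval covered by the reference interval."""
    lo = max(observed[0], reference[0])
    hi = min(observed[1], reference[1])
    width = observed[1] - observed[0]
    return max(0.0, hi - lo) / width if width > 0 else 0.0

def sadness_degree(observed_pitch_interval, internet_sad_interval):
    # Higher overlap with the collected sad-emotion interval means a sadder sound.
    return interval_overlap(observed_pitch_interval, internet_sad_interval)
```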
In one embodiment, after the adjusting of the negative sound information, outputting the adjusted sound comprises: after the sound quality of the negative sound information is adjusted, outputting the adjusted sound.
The sound quality adjustment includes, but is not limited to, timbre adjustment and pitch adjustment. Timbre adjustment means adjusting timbre that falls within the first timbre characteristic interval into a timbre characteristic interval associated with relaxed, cheerful sound; pitch adjustment means adjusting sound waves that fall within the first sound wave interval into a sound wave interval associated with relaxed, cheerful sound.
Specifically, the processor obtains the negative sound information from the sound information, obtains its timbre characteristics and pitch sound wave vibration frequency, adjusts the timbre of the negative sound information into the timbre characteristic interval of relaxed, cheerful sound, adjusts the pitch sound wave vibration frequency of the negative sound into the sound wave vibration frequency interval of relaxed, cheerful sound, and outputs the adjusted sound.
In this embodiment, by adjusting the timbre and pitch of the negative sound information, misunderstandings caused by negative sound information during communication can be avoided.
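One crude way to move a pitch toward a more cheerful interval is resampling. The naive version below also changes the signal's duration (a production system would use a technique such as PSOLA or a phase vocoder to preserve it), and the shift factor is an illustrative choice, not a value from the patent:

```python
import math

def shift_pitch(samples, factor):
    """Naive pitch shift by linear-interpolation resampling: played back at the
    original rate, frequencies are multiplied by `factor` (duration shrinks)."""
    n_out = int(len(samples) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(samples[j] * (1 - frac) + nxt * frac)  # linear interpolation
    return out

rate = 8000
low = [math.sin(2 * math.pi * 100 * t / rate) for t in range(rate // 4)]
brighter = shift_pitch(low, 1.5)   # 100 Hz tone becomes roughly 150 Hz
```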
This avoids the loss of communication effectiveness caused by emotional problems in the voice during communication. The specific steps of the sound processing are described below with reference to a detailed embodiment, as shown in fig. 3:
(1) the processor acquires the voice sent by both parties or any party during the communication process;
(2) the processor unpacks and analyzes the voice sent by both parties or any party to obtain the tone and the tone of the voice;
(3) the processor collects emotion-bearing sounds from the Internet, builds emotion judgment information that distinguishes the collected emotions according to the timbre characteristic intervals and sound wave vibration frequency intervals of emotions such as happiness, sadness, anger, fear, disgust, surprise, calmness and apprehension, and further screens the collected emotion judgment information to obtain sadness judgment information;
(4) the processor compares the timbre and pitch of the sound with the preset emotion judgment information and the emotion judgment information collected from the Internet;
(5) if comparing the timbre and pitch of the sound with the negative sound wave vibration frequency intervals and timbre characteristic intervals in the preset emotion judgment information and in the emotion judgment information collected from the Internet captures no negative emotion, the sound is output directly; if the comparison reveals negative emotion, the sad emotion is captured from the negative emotion;
(6) the processor obtains the vibrato and sound wave vibration frequency from the sad emotion;
(7) the processor compares the vibrato and sound wave vibration frequency of the sad emotion with those of sad emotion collected from the Internet, and determines the sadness degree according to their degree of overlap: the higher the overlap, the higher the sadness degree;
(8) the processor adjusts the sad emotion in the sound according to the sadness degree;
(9) the processor obtains the timbre and pitch of the sad emotion, adjusts the timbre of the sad emotion in the sound into the timbre characteristic interval of relaxed, cheerful sound, and adjusts the sound wave vibration frequency of the sad-emotion pitch into the sound wave vibration frequency interval of relaxed, cheerful sound;
(10) the processor outputs the adjusted sound.
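The judgment-and-adjustment loop above can be condensed into a miniature, single-value sketch. Every numeric threshold here is hypothetical, since the patent defines its intervals only abstractly:

```python
SAD_INTERVAL = (80.0, 160.0)   # hypothetical sad-pitch interval in Hz
CHEERFUL_TARGET = 220.0        # hypothetical centre of the cheerful interval

def process(pitch_hz):
    """Judge one pitch value; if it reads as negative, scale it into the
    cheerful range, otherwise pass it through unchanged."""
    lo, hi = SAD_INTERVAL
    if lo <= pitch_hz <= hi:                   # negative emotion captured
        sadness = (hi - pitch_hz) / (hi - lo)  # deeper pitch reads as sadder
        factor = CHEERFUL_TARGET / pitch_hz    # adjustment toward cheerful pitch
        return {"negative": True, "sadness": sadness,
                "adjusted_pitch": pitch_hz * factor}
    return {"negative": False, "sadness": 0.0, "adjusted_pitch": pitch_hz}

calm = process(300.0)   # outside the sad interval: output unchanged
sad = process(100.0)    # inside the sad interval: adjusted toward the target
```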
It should be understood that, although the steps in the flowcharts related to the embodiments as described above are sequentially displayed as indicated by arrows, the steps are not necessarily performed sequentially as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a part of the steps in the flowcharts related to the embodiments described above may include multiple steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the execution order of the steps or stages is not necessarily sequential, but may be rotated or alternated with other steps or at least a part of the steps or stages in other steps.
Based on the same inventive concept, the embodiment of the present application further provides a sound processing apparatus for implementing the sound processing method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so the specific limitations in one or more embodiments of the sound processing device provided below may refer to the limitations on the sound processing method in the above, and are not described herein again.
In one embodiment, as shown in fig. 4, there is provided a sound processing apparatus including: a sound acquisition module 410, an analysis module 420, a judgment module 430 and an output module 440, wherein:
a sound obtaining module 410, configured to obtain sound;
the analysis module 420 is configured to analyze the sound to obtain sound information;
the judging module 430 is used for judging according to the sound information and determining negative sound information;
and an output module 440, configured to output the adjusted sound after adjusting the negative sound information.
In one embodiment, the analysis module 420 is configured to perform sound quality analysis on the sound to obtain sound information.
In one embodiment, the sound processing apparatus further includes: an emotion capture module and a sound determination module. The emotion capture module is used for capturing sad emotions according to the sound information; the sound determination module is used for determining the negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound wave vibration frequency.
In one embodiment, the sound processing apparatus further includes: and an information acquisition module. The information acquisition module is used for acquiring emotion judgment information; the determining module 430 is configured to capture a sad emotion from the sound information according to the sound information and emotion determining information, where the emotion determining information includes: the system comprises preset sound emotion judgment information and sound emotion judgment information collected by the Internet.
In one embodiment, the sound processing apparatus further includes: a degree determination module and a message determination module. The degree determination module is used for determining the sadness degree according to the vibrato and the fluctuation degree of the sound wave vibration frequency; the message determination module is used for determining the negative sound information according to the sadness degree.
In one embodiment, the output module 440 is configured to output the adjusted sound after performing sound quality adjustment on the negative sound information.
The respective modules in the sound processing apparatus described above may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 5. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a sound processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring sound;
analyzing the sound to obtain sound information;
judging according to the sound information to determine negative sound information;
and after the negative sound information is adjusted, outputting the adjusted sound.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and carrying out sound quality analysis on the sound to obtain sound information.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
capturing sad emotions according to the sound information;
and determining the negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound-wave vibration frequency.
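One crude way to realize this judgment is to track pitch per frame, then test both for a tremor-like wobble and for strong frame-to-frame frequency fluctuation. Everything below is illustrative: the zero-crossing pitch tracker, the 4–8 Hz tremor band, and the thresholds are assumptions, not values from the application.

```python
import numpy as np

def pitch_track(signal, rate, frame=1024):
    """Rough per-frame pitch estimate from the zero-crossing rate."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    crossings = (np.diff(np.signbit(frames), axis=1) != 0).sum(axis=1)
    return crossings * rate / (2 * frame)  # crossings per second / 2 ~ Hz

def is_negative(pitches, tremor_hz=(4, 8), frame_rate=15.625, flux_thresh=5.0):
    """Negative if pitch wobbles in a tremor-like band AND fluctuates strongly."""
    flux = np.abs(np.diff(pitches))  # frame-to-frame pitch change, in Hz
    spectrum = np.abs(np.fft.rfft(pitches - pitches.mean()))
    freqs = np.fft.rfftfreq(len(pitches), d=1.0 / frame_rate)
    band = (freqs >= tremor_hz[0]) & (freqs <= tremor_hz[1])
    has_tremor = band.any() and spectrum[band].sum() > spectrum.sum() * 0.25
    return bool(has_tremor and flux.mean() > flux_thresh)
```

A steady pitch contour yields no fluctuation and is judged non-negative; a contour wobbling at roughly 6 Hz concentrates its spectral energy in the assumed tremor band and is flagged.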
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring emotion judgment information;
capturing sad emotions from the sound information according to the sound information and the emotion judgment information, wherein the emotion judgment information comprises: preset sound emotion judgment information and sound emotion judgment information collected from the Internet.
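The two sources of emotion judgment information can be combined as a simple layered configuration, with the collected values refining the preset defaults. The keys and values below are purely hypothetical placeholders; the application does not specify the form of this information.

```python
# Preset judgment information (hypothetical keys and values).
PRESET_INFO = {"tremor_band_hz": (4, 8), "flux_threshold": 5.0}

def merge_judgment_info(preset, collected):
    """Internet-collected values override preset defaults, key by key;
    entries with no usable value (None) are ignored."""
    merged = dict(preset)
    merged.update({k: v for k, v in collected.items() if v is not None})
    return merged
```

For example, `merge_judgment_info(PRESET_INFO, {"flux_threshold": 4.0})` tightens only the fluctuation threshold while keeping the preset tremor band.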
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the sadness degree according to the vibrato and the fluctuation degree of the sound-wave vibration frequency; and determining the negative sound information according to the sadness degree.
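A minimal way to turn the two cues into a sadness degree is a weighted score with a cutoff. The weights and the 0.5 threshold are arbitrary assumptions made for illustration.

```python
import numpy as np

def sadness_degree(vibrato_strength, freq_fluctuation, w_vibrato=0.6, w_flux=0.4):
    """Combine the two normalized cues (each assumed in [0, 1]) into one degree."""
    score = w_vibrato * vibrato_strength + w_flux * freq_fluctuation
    return float(np.clip(score, 0.0, 1.0))

def is_negative_info(degree, threshold=0.5):
    """Sound information counts as negative once the degree passes the cutoff."""
    return degree >= threshold
```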
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and after the sound quality of the negative sound information is adjusted, outputting the adjusted sound.
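One plausible "sound quality" adjustment is to flatten the amplitude tremor before output. The envelope-smoothing approach below is only an assumed example of such an adjustment; the application does not specify the algorithm.

```python
import numpy as np

def adjust_quality(signal, kernel=64):
    """Smooth the amplitude envelope with a moving average, then divide it out
    so the tremor is evened, and keep the peak level within [-1, 1]."""
    env = np.abs(signal)
    smooth = np.convolve(env, np.ones(kernel) / kernel, mode="same")
    smooth = np.maximum(smooth, 1e-8)            # guard against division by zero
    flattened = signal / smooth * smooth.mean()  # even out the envelope
    peak = np.max(np.abs(flattened))
    return flattened / peak if peak > 1.0 else flattened
```

The output has the same length as the input and never clips, which makes it safe to hand directly to the output stage of the pipeline.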
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring sound;
analyzing the sound to obtain sound information;
determining negative sound information according to the sound information;
and outputting the adjusted sound after adjusting the negative sound information.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and carrying out sound quality analysis on the sound to obtain sound information.
In one embodiment, the computer program when executed by the processor further performs the steps of:
capturing sad emotions according to the sound information;
and determining the negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound-wave vibration frequency.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring emotion judgment information;
capturing sad emotions from the sound information according to the sound information and the emotion judgment information, wherein the emotion judgment information comprises: preset sound emotion judgment information and sound emotion judgment information collected from the Internet.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the sadness degree according to the vibrato and the fluctuation degree of the sound-wave vibration frequency; and determining the negative sound information according to the sadness degree.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and after the sound quality of the negative sound information is adjusted, outputting the adjusted sound.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
acquiring sound;
analyzing the sound to obtain sound information;
determining negative sound information according to the sound information;
and outputting the adjusted sound after adjusting the negative sound information.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and carrying out sound quality analysis on the sound to obtain sound information.
In one embodiment, the computer program when executed by the processor further performs the steps of:
capturing sad emotions according to the sound information;
and determining the negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound-wave vibration frequency.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring emotion judgment information;
capturing sad emotions from the sound information according to the sound information and the emotion judgment information, wherein the emotion judgment information comprises: preset sound emotion judgment information and sound emotion judgment information collected from the Internet.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the sadness degree according to the vibrato and the fluctuation degree of the sound-wave vibration frequency; and determining the negative sound information according to the sadness degree.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and after the sound quality of the negative sound information is adjusted, outputting the adjusted sound.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered to be within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the scope of protection of the present application. Therefore, the protection scope of the present application should be subject to the appended claims.

Claims (10)

1. A method of sound processing, the method comprising:
acquiring sound;
analyzing the sound to obtain sound information;
determining negative sound information according to the sound information;
and outputting the adjusted sound after adjusting the negative sound information.
2. The method of claim 1, wherein analyzing the sound to obtain sound information comprises:
and carrying out sound quality analysis on the sound to obtain the sound information.
3. The method of claim 1, wherein determining negative sound information based on the sound information comprises:
capturing sad emotions according to the sound information;
and determining negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound-wave vibration frequency.
4. The method of claim 3, wherein capturing sad emotions from said sound information comprises:
acquiring emotion judgment information;
capturing sad emotions from the sound information according to the sound information and the emotion judgment information, wherein the emotion judgment information comprises: preset sound emotion judgment information and sound emotion judgment information collected from the Internet.
5. The method as claimed in claim 3, wherein the determining negative sound information according to the vibrato of the sad emotion and the fluctuation degree of the sound-wave vibration frequency comprises:
determining the sadness degree according to the vibrato and the fluctuation degree of the sound-wave vibration frequency;
and determining negative sound information according to the sadness degree.
6. The method of claim 1, wherein the adjusting the negative sound information and outputting the adjusted sound comprises:
and outputting the adjusted sound after adjusting the sound quality of the negative sound information.
7. A sound processing device, the device comprising:
the voice acquisition module is used for acquiring voice;
the analysis module is used for analyzing the sound to obtain sound information;
the judging module is used for judging according to the sound information and determining negative sound information;
and the output module is used for outputting the adjusted sound after adjusting the negative sound information.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
CN202210137734.9A 2022-02-15 2022-02-15 Sound processing method, sound processing device, computer equipment and storage medium Pending CN114464210A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210137734.9A CN114464210A (en) 2022-02-15 2022-02-15 Sound processing method, sound processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114464210A true CN114464210A (en) 2022-05-10

Family

ID=81413451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210137734.9A Pending CN114464210A (en) 2022-02-15 2022-02-15 Sound processing method, sound processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114464210A (en)

Similar Documents

Publication Publication Date Title
US10559323B2 (en) Audio and video synchronizing perceptual model
US20140325408A1 (en) Apparatus and method for providing musical content based on graphical user inputs
WO2020177190A1 (en) Processing method, apparatus and device
CN111798821B (en) Sound conversion method, device, readable storage medium and electronic equipment
CN111782576B (en) Background music generation method and device, readable medium and electronic equipment
US11638873B2 (en) Dynamic modification of audio playback in games
Chen et al. Component tying for mixture model adaptation in personalization of music emotion recognition
WO2020228226A1 (en) Instrumental music detection method and apparatus, and storage medium
CN109189978A (en) The method, apparatus and storage medium of audio search are carried out based on speech message
CN114464210A (en) Sound processing method, sound processing device, computer equipment and storage medium
WO2023000444A1 (en) Method and apparatus for detecting noise of loudspeaker, and electronic device and storage medium
CN115116469A (en) Feature representation extraction method, feature representation extraction device, feature representation extraction apparatus, feature representation extraction medium, and program product
CN116312430B (en) Electric tone key control method, apparatus, computer device, and storage medium
CN107564534A (en) Audio quality authentication method and device
CN114038484A (en) Voice data processing method and device, computer equipment and storage medium
CN115129923B (en) Voice searching method, device and storage medium
JP2021117245A (en) Learning method, evaluation device, data structure and evaluation system
CN116312431B (en) Electric tone key control method, apparatus, computer device, and storage medium
CN114220448A (en) Voice signal generation method and device, computer equipment and storage medium
CN116259292B (en) Method, device, computer equipment and storage medium for identifying basic harmonic musical scale
CN116312636B (en) Method, apparatus, computer device and storage medium for analyzing electric tone key
WO2023273440A1 (en) Method and apparatus for generating plurality of sound effects, and terminal device
CN115510911A (en) Fundamental frequency sequence recognition model training and fundamental frequency sequence recognition method, device and product
CN116030778A (en) Audio data processing method, device, computer equipment and storage medium
CN116708670A (en) User service method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination