TW201426729A - Automatic emotion classification system with gamut-type sound effects - Google Patents

Automatic emotion classification system with gamut-type sound effects

Info

Publication number
TW201426729A
Authority
TW
Taiwan
Prior art keywords
sound
emotional
tonality
classification
sound effect
Prior art date
Application number
TW102113989A
Other languages
Chinese (zh)
Other versions
TWI498880B (en)
Inventor
Pei-Ru Lin
Xuan-Ru Chen
Original Assignee
Univ Southern Taiwan Sci & Tec
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Southern Taiwan Sci & Tec filed Critical Univ Southern Taiwan Sci & Tec
Priority to TW102113989A priority Critical patent/TWI498880B/en
Publication of TW201426729A publication Critical patent/TW201426729A/en
Application granted granted Critical
Publication of TWI498880B publication Critical patent/TWI498880B/en

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides an automatic emotion classification system for gamut-type (i.e., scale-type) sound effects, which can classify by emotion the gamut-type sound effects stored in an electronic device. The automatic emotion classification system comprises a sound effect classification library, a sound effect reading module, a tonality analysis module, a timbre analysis module, and a sound effect emotion classification module, which respectively analyze the gamut-type sound effects. A user may first build a multi-class emotion prediction model specific to that user, based on his or her own listening impressions, so that the emotion classification of gamut-type sound effects more closely matches the user's preferences. The invention can thus satisfy the needs of many kinds of users and presents a novel design for classifying gamut-type sound effects.

Description

Automatic emotion classification system with scale-type sound effects

The present invention relates to a sound classification system, and more particularly to an emotion classification system for scale-type sound effects.

With the popularity of mobile devices such as smartphones and tablets, and the variety of communication software, people now use these devices almost daily to communicate with family and friends and to share everyday moments, for example by attaching a sound effect to a picture message or by uploading a recorded video for sharing. It is often found, however, that when a sound effect is to be attached to an image or video, usually only the stock sound effects built into the mobile device are available; the choice is limited and rather monotonous. Mobile device users therefore often extract scale-type sound effects themselves, such as music clips or video game sounds, to serve as background music for their messages or short videos, as ringback tones, or even as background audio for games they are developing. Yet users usually have little idea which sound effect will echo the mood of a message, video, or game, or which sound effects count as happy, sad, or stirring, so they typically spend a great deal of time on trial and error. Although many vendors are developing software systems that classify music or sound effects by emotion, current systems all use a fixed, one-size-fits-all classification scheme that ignores how differently individuals perceive the emotion of a sound effect, and their applicability is therefore poor.

Accordingly, an object of the present invention is to provide an automatic emotion classification system that can classify scale-type sound effects by emotion according to the user's preferences.

Thus, the automatic emotion classification system for scale-type sound effects of the present invention can be used to classify scale-type sound effects by emotion and comprises a sound effect classification library, a sound effect reading module, a tonality analysis module, a timbre analysis module, and a sound effect emotion classification module. The sound effect classification library has a plurality of emotional sound effect units, each assigned an emotional adjective parameter representing a specific emotional adjective and each able to store the scale-type sound effects. The sound effect reading module can be driven to read every scale-type sound effect stored in the emotional sound effect units. The tonality analysis module analyzes all keys contained in each scale-type sound effect read by the sound effect reading module and statistically determines, among all keys contained in each scale-type sound effect, the proportion of each individual key classified as major, the proportion of each individual key classified as minor, the total proportion accounted for by all major keys, and the total proportion accounted for by all minor keys, and outputs a corresponding tonality parameter. The timbre analysis module analyzes the timbre of every scale-type sound effect read by the sound effect reading module and outputs a timbre parameter for each scale-type sound effect. The sound effect emotion classification module includes a classification model establishing unit and a classification storage unit. The classification model establishing unit collects and analyzes the tonality parameters and timbre parameters of all scale-type sound effects stored in each emotional sound effect unit and, together with the emotional adjective parameters of those emotional sound effect units, builds a multi-class emotion prediction model. The multi-class emotion prediction model can then be executed to analyze a scale-type sound effect newly acquired by the electronic device and pair it with one of the emotional adjective parameters, and the classification storage unit stores the newly acquired scale-type sound effect in the emotional sound effect unit corresponding to the paired emotional adjective parameter.

The effect of the present invention is that, because the user first builds a personal multi-class emotion prediction model based on his or her own listening impressions, the emotion classification of scale-type sound effects more closely matches that user's preferences. The invention can thus satisfy the needs of many kinds of users and presents an innovative design for classifying sound effects.

3‧‧‧Sound effect reading module

5‧‧‧Sound effect classification library

51‧‧‧Emotional sound effect unit

6‧‧‧Tonality analysis module

7‧‧‧Timbre analysis module

8‧‧‧Sound effect emotion classification module

81‧‧‧Classification model establishing unit

82‧‧‧Classification storage unit

900‧‧‧Electronic device

902‧‧‧Speaker

Other features and effects of the present invention will become apparent from the following embodiment described with reference to the drawing, in which: FIG. 1 is a functional block diagram of a preferred embodiment of the automatic emotion classification system for scale-type sound effects according to the present invention.

As shown in FIG. 1, the automatic emotion classification system for scale-type sound effects of the present invention can be used to classify scale-type sound effects by emotion and can be installed in an electronic device 900. The electronic device 900 may be a mobile device such as a smartphone, tablet, or notebook computer, or a desktop computer, and is not limited to these types. The term scale-type sound effect broadly refers to sounds such as instrumental performances, electronic music, and video game music.

The automatic emotion classification system comprises a sound effect reading module 3, a sound effect classification library 5, a tonality analysis module 6, a timbre analysis module 7, and a sound effect emotion classification module 8.

The sound effect reading module 3 can be driven to read the scale-type sound effects already stored in the sound effect classification library 5 and to pass all of the retrieved scale-type sound effects to the tonality analysis module 6 and the timbre analysis module 7 for analysis. The sound effect reading module 3 can also be driven to read a scale-type sound effect and play it through a speaker 902 of the electronic device 900 for listening.

The sound effect classification library 5 has a plurality of emotional sound effect units 51. Each emotional sound effect unit 51 is assigned an emotional adjective parameter corresponding to a particular emotional adjective, and the user of the electronic device 900 can store each scale-type sound effect currently held by the electronic device 900 into the emotional sound effect unit 51 whose emotional adjective matches how that sound makes the user feel, producing an initial classified storage.
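The patent does not prescribe any particular data structure for the sound effect classification library; a minimal sketch, assuming each emotional sound effect unit 51 is simply a folder named after its emotional adjective parameter, is given below. The adjective names and the function name are illustrative placeholders, not part of the patent.

```python
# Hypothetical layout for the sound effect classification library:
# one folder (emotional sound effect unit) per emotional adjective parameter.
from pathlib import Path

EMOTION_ADJECTIVES = ['happy', 'sad', 'passionate', 'calm']   # placeholder adjectives

def initialise_library(library_root):
    """Create one emotional sound effect unit (folder) per emotional adjective."""
    root = Path(library_root)
    for adjective in EMOTION_ADJECTIVES:
        (root / adjective).mkdir(parents=True, exist_ok=True)
    return root
```

Under this assumption, the user's initial manual sorting amounts to copying each existing sound effect into whichever of these folders matches the emotion it evokes.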

The tonality analysis module 6 analyzes all of the scale-type sound effects read by the sound effect reading module 3, performing a key analysis of each one. For each scale-type sound effect it identifies all of the keys the sound effect contains and statistically determines the proportion of each individual key classified as major, the proportion of each individual key classified as minor, the total proportion accounted for by all major keys, and the total proportion accounted for by all minor keys, and outputs a corresponding tonality parameter.
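The patent specifies what the tonality parameter must contain but leaves the key-detection method open. The Python sketch below is one possible realization, assuming Krumhansl-Schmuckler template matching on librosa chroma features computed over fixed-length windows; the library calls, window length, and function names are assumptions rather than the patented method.

```python
# Hypothetical tonality parameter: per-key proportions plus major/minor totals.
import numpy as np
import librosa

# Krumhansl-Schmuckler key profiles (assumed detection method, not from the patent).
MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR_PROFILE = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                          2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
KEY_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_key(chroma_segment):
    """Return (key_name, 'major' | 'minor') for one chroma segment."""
    pitch_hist = chroma_segment.sum(axis=1)                 # 12-bin pitch-class histogram
    best, best_corr = (KEY_NAMES[0], 'major'), -np.inf      # fallback for silent segments
    for mode, profile in (('major', MAJOR_PROFILE), ('minor', MINOR_PROFILE)):
        for shift in range(12):                             # try every possible tonic
            corr = np.corrcoef(np.roll(profile, shift), pitch_hist)[0, 1]
            if corr > best_corr:
                best, best_corr = (KEY_NAMES[shift], mode), corr
    return best

def tonality_parameter(path, segment_seconds=5.0):
    """Proportion of each detected key plus the major and minor totals."""
    y, sr = librosa.load(path, mono=True)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    frames_per_seg = max(1, int(segment_seconds * sr / 512))   # 512 = default hop length
    keys = [estimate_key(chroma[:, i:i + frames_per_seg])
            for i in range(0, chroma.shape[1], frames_per_seg)]
    per_key = {k: keys.count(k) / len(keys) for k in set(keys)}
    major_total = sum(v for (name, mode), v in per_key.items() if mode == 'major')
    minor_total = sum(v for (name, mode), v in per_key.items() if mode == 'minor')
    return {'per_key': per_key, 'major_total': major_total, 'minor_total': minor_total}
```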

The timbre analysis module 7 analyzes the timbre of all of the scale-type sound effects read by the sound effect reading module 3 and outputs timbre data for each scale-type sound effect.
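Likewise, the patent does not say which acoustic features make up the timbre data; a minimal sketch, assuming MFCC and spectral-centroid statistics as the timbre descriptor, could look as follows.

```python
# Hypothetical timbre data: MFCC and spectral-centroid statistics per sound effect.
import numpy as np
import librosa

def timbre_parameter(path):
    """Summarise frame-wise timbre features into one fixed-length vector."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [centroid.mean()], [centroid.std()]])
```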

In practice, because key and timbre analysis of scale-type sound effects is prior art, and many analysis methods and related devices exist, for example a microprocessor chip programmed with MATLAB or other analysis software, these analyses are not described in further detail here.

The sound effect emotion classification module 8 includes a classification model establishing unit 81 and a classification storage unit 82. The classification model establishing unit 81 uses a multi-class support vector machine (SVM) to statistically analyze the tonality parameters and timbre parameters of all scale-type sound effects stored in each emotional sound effect unit 51 and, based on the emotional adjective parameters assigned to those emotional sound effect units 51, builds a multi-class emotion prediction model. The multi-class emotion prediction model can then be executed to perform emotion classification analysis of a scale-type sound effect newly acquired or stored by the electronic device 900 and to assign the newly acquired scale-type sound effect to one of the emotional adjective parameters. The classification storage unit 82 stores the newly acquired scale-type sound effect in the emotional sound effect unit 51 corresponding to that emotional adjective parameter.
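The patent names a multi-class support vector machine but no particular implementation. The training sketch below uses scikit-learn's SVC, which handles the multi-class case internally, and reuses the hypothetical tonality_parameter and timbre_parameter helpers and the folder-per-emotion library layout assumed earlier; none of these names come from the patent itself.

```python
# Hypothetical sketch of the classification model establishing unit 81:
# fit a multi-class SVM on the sound effects the user has already sorted.
from pathlib import Path
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def sound_effect_features(path):
    """Flatten the tonality sketch (24 per-key proportions plus the major and
    minor totals) and the timbre sketch into one fixed-length feature vector."""
    tonality = tonality_parameter(path)
    per_key = np.zeros(24)                              # 12 major keys then 12 minor keys
    for (name, mode), proportion in tonality['per_key'].items():
        offset = 0 if mode == 'major' else 12
        per_key[offset + KEY_NAMES.index(name)] = proportion
    tonal_vec = np.concatenate([per_key,
                                [tonality['major_total'], tonality['minor_total']]])
    return np.concatenate([tonal_vec, timbre_parameter(path)])

def build_emotion_model(library_root):
    """library_root/<emotional adjective>/*.wav  ->  fitted multi-class SVM."""
    X, y = [], []
    for unit in Path(library_root).iterdir():           # one folder per emotional sound effect unit
        if not unit.is_dir():
            continue
        for clip in unit.glob('*.wav'):
            X.append(sound_effect_features(clip))
            y.append(unit.name)                         # the emotional adjective parameter
    model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    model.fit(np.array(X), np.array(y))
    return model
```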

With this design, after the automatic emotion classification system of the invention is installed on the electronic device 900, the user first sorts the scale-type sound effects already stored on the electronic device 900 into the emotional sound effect units 51 corresponding to the emotional adjectives that match his or her listening impressions, and then starts the automatic emotion classification system. The system then performs key analysis and timbre analysis on all of the scale-type sound effects currently stored in the emotional sound effect units 51 and builds a personal multi-class emotion prediction model.

After the multi-class emotion prediction model has been built, whenever the electronic device 900 later acquires or stores a new scale-type sound effect, the sound effect reading module 3 reads the new sound effect, the tonality analysis module 6 and the timbre analysis module 7 extract its tonality parameter and timbre parameter, and the multi-class emotion prediction model uses those parameters to determine the emotional adjective parameter corresponding to the new scale-type sound effect. The classification storage unit 82 then automatically stores the new scale-type sound effect in the corresponding emotional sound effect unit 51, completing the emotion classification of the scale-type sound effect automatically.
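Continuing the same assumptions, classifying and filing a newly acquired sound effect could look like the following; the paths and helper names are placeholders.

```python
# Hypothetical sketch of the classification storage unit 82 acting on a new clip.
import shutil
from pathlib import Path

def classify_and_store(model, new_clip, library_root):
    """Predict the emotional adjective for the new clip and file it under that unit."""
    label = model.predict([sound_effect_features(new_clip)])[0]
    target_unit = Path(library_root) / label
    target_unit.mkdir(parents=True, exist_ok=True)
    shutil.move(str(new_clip), str(target_unit / Path(new_clip).name))
    return label

# Example usage (placeholder paths):
# model = build_emotion_model('emotion_library')
# classify_and_store(model, 'downloads/new_jingle.wav', 'emotion_library')
```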

Later, when the user needs a scale-type sound effect, for example to attach one to a message or to include one in a video or game being produced, the user can, according to the desired emotional character of the sound effect, use the sound effect reading module 3 to read directly from the emotional sound effect unit 51 in which the automatic emotion classification system of the invention has stored sound effects of that emotion, and play them through the speaker 902 to pick out the desired scale-type sound effect, which is quite convenient and practical.

In summary, the automatic emotion classification system of the present invention lets the user first sort the scale-type sound effects already on the electronic device 900 into the emotional sound effect units 51 according to his or her own listening impressions, after which the system analyzes the tonality parameters and timbre parameters of those sound effects to build a personal multi-class emotion prediction model. When the electronic device 900 newly acquires or stores a scale-type sound effect, the tonality parameter and timbre parameter of the new sound effect are analyzed and the multi-class emotion prediction model determines its emotional adjective parameter directly. Because the multi-class emotion prediction model is built from the user's own listening impressions of scale-type sound effects, the resulting emotion classifications are closer to that user's preferences, which is very convenient. With this design, the automatic emotion classification system for scale-type sound effects of the present invention lets users establish their own sound effect emotion classification rules according to their preferences and can therefore satisfy the needs of many kinds of users; it is an innovative design for classifying scale-type sound effects. The object of the present invention is thus indeed achieved.

The foregoing is merely a preferred embodiment of the present invention and should not be taken to limit the scope of the invention; all simple equivalent changes and modifications made in accordance with the claims and the content of the specification of the present invention remain within the scope covered by the patent of this invention.


Claims (2)

1. An automatic emotion classification system for scale-type sound effects, usable for classifying scale-type sound effects by emotion, comprising: a sound effect classification library having a plurality of emotional sound effect units, each assigned an emotional adjective parameter representing a specific emotional adjective and each able to store the scale-type sound effects; a sound effect reading module, drivable to read every scale-type sound effect stored in the emotional sound effect units; a tonality analysis module that analyzes all keys contained in each scale-type sound effect read by the sound effect reading module and statistically determines, among all keys contained in each scale-type sound effect, the proportion of each individual key classified as major, the proportion of each individual key classified as minor, the total proportion accounted for by all major keys, and the total proportion accounted for by all minor keys, and outputs a corresponding tonality parameter; a timbre analysis module that analyzes the timbre of every scale-type sound effect read by the sound effect reading module and outputs a timbre parameter for each scale-type sound effect; and a sound effect emotion classification module including a classification model establishing unit and a classification storage unit, wherein the classification model establishing unit collects and analyzes the tonality parameters and timbre parameters of all scale-type sound effects stored in each emotional sound effect unit and, together with the emotional adjective parameters of the emotional sound effect units, builds a multi-class emotion prediction model, the multi-class emotion prediction model being executable to analyze a scale-type sound effect newly acquired by the electronic device and assign it to one of the emotional adjective parameters, and the classification storage unit stores the newly acquired scale-type sound effect in the emotional sound effect unit corresponding to that emotional adjective parameter.

2. The automatic emotion classification system for scale-type sound effects of claim 1, wherein the classification model establishing unit builds the multi-class emotion prediction model by analyzing, through a multi-class support vector machine, the tonality parameters and timbre parameters of all scale-type sound effects stored in each emotional sound effect unit together with the emotional adjective parameters of the emotional sound effect units.
TW102113989A 2012-12-20 2012-12-20 Automatic Sentiment Classification System with Scale Sound TWI498880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW102113989A TWI498880B (en) 2012-12-20 2012-12-20 Automatic Sentiment Classification System with Scale Sound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW102113989A TWI498880B (en) 2012-12-20 2012-12-20 Automatic Sentiment Classification System with Scale Sound

Publications (2)

Publication Number Publication Date
TW201426729A true TW201426729A (en) 2014-07-01
TWI498880B TWI498880B (en) 2015-09-01

Family

ID=51725631

Family Applications (1)

Application Number Title Priority Date Filing Date
TW102113989A TWI498880B (en) 2012-12-20 2012-12-20 Automatic Sentiment Classification System with Scale Sound

Country Status (1)

Country Link
TW (1) TWI498880B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684634A (en) * 2018-12-17 2019-04-26 北京百度网讯科技有限公司 Sentiment analysis method, apparatus, equipment and storage medium
CN112603266A (en) * 2020-12-23 2021-04-06 新绎健康科技有限公司 Method and system for acquiring target five-tone characteristics
CN113767434A (en) * 2019-04-30 2021-12-07 索尼互动娱乐股份有限公司 Tagging videos by correlating visual features with sound tags

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI377559B (en) * 2008-12-25 2012-11-21 Inventec Besta Co Ltd Singing system with situation sound effect and method thereof
TW201035967A (en) * 2009-03-31 2010-10-01 Univ Nat United Online game speech emotion real-time recognition system and method
TW201113870A (en) * 2009-10-09 2011-04-16 Inst Information Industry Method for analyzing sentence emotion, sentence emotion analyzing system, computer readable and writable recording medium and multimedia device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684634A (en) * 2018-12-17 2019-04-26 北京百度网讯科技有限公司 Sentiment analysis method, apparatus, equipment and storage medium
CN109684634B (en) * 2018-12-17 2023-07-25 北京百度网讯科技有限公司 Emotion analysis method, device, equipment and storage medium
CN113767434A (en) * 2019-04-30 2021-12-07 索尼互动娱乐股份有限公司 Tagging videos by correlating visual features with sound tags
CN113767434B (en) * 2019-04-30 2023-12-08 索尼互动娱乐股份有限公司 Tagging video by correlating visual features with sound tags
CN112603266A (en) * 2020-12-23 2021-04-06 新绎健康科技有限公司 Method and system for acquiring target five-tone characteristics
CN112603266B (en) * 2020-12-23 2023-02-24 新绎健康科技有限公司 Method and system for acquiring target five-tone characteristics

Also Published As

Publication number Publication date
TWI498880B (en) 2015-09-01
