TW201223231A - Handheld device and method for constructing user interface thereof - Google Patents

Handheld device and method for constructing user interface thereof

Info

Publication number
TW201223231A
TW201223231A (application TW99141122A)
Authority
TW
Taiwan
Prior art keywords
user
voice
sound
module
handheld device
Prior art date
Application number
TW99141122A
Other languages
Chinese (zh)
Inventor
Yi-Ching Chen
Original Assignee
Hon Hai Prec Ind Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hon Hai Prec Ind Co Ltd filed Critical Hon Hai Prec Ind Co Ltd
Priority to TW99141122A priority Critical patent/TW201223231A/en
Publication of TW201223231A publication Critical patent/TW201223231A/en

Abstract

A handheld device includes a storage system, a voice capturing module, a voice identifying module, a user interface constructing module, and a display module. The storage system is operable to store a mapping relation between a plurality of voice types and a plurality of user emotion statuses. The voice capturing module is operable to capture voice signals around the handheld device. The voice identifying module is operable to analyze a captured voice signal to obtain a voice type, and determine a user emotion status according to the voice type and the mapping relation. The user interface constructing module is operable to construct a user interface according to the user emotion status. The display module is operable to display the user interface. A method of constructing a user interface is also provided.

Description

VI. Description of the Invention:

[Technical Field of the Invention]

[0001] The present invention relates to handheld devices, and more particularly to a method for constructing the user interface of a handheld device.

[Prior Art]

[0002] Handheld devices such as mobile phones and mobile Internet devices (MIDs) are becoming increasingly powerful, and large display screens have become the trend. The power of these devices and their large screens lead manufacturers to pay ever more attention to the user experience. The user interface of handheld devices has evolved from fixed icons to interfaces in which the user can set the positions of the icons, the background color, and the theme mode according to personal preference. However, once the theme mode of the user interface is set by the user, the interface does not change unless the user changes the theme mode again. Therefore, when the user is in a different mood, the interface displayed by the handheld device is not a theme mode adapted to that mood.

[0003] Accordingly, it is necessary to provide a handheld device that can construct a user interface according to the user's emotion.

[Summary of the Invention]

[0004] In view of the above, the present invention provides a handheld device that can learn the user's emotion by recognizing sounds made by the user, and construct and display a user interface according to that emotion.

[0005] The present invention further provides a user interface construction method for a handheld device, which can learn the user's emotion by recognizing sounds made by the user, and construct and display a user interface accordingly.

[0006] The handheld device provided in an embodiment of the present invention includes a storage unit, a sound collection module, a voice recognition module, an interface construction module, and a display module. The storage unit stores a mapping relation between a plurality of voice types and a plurality of user emotion statuses. The sound collection module captures voice signals from the surroundings of the handheld device. The voice recognition module parses a voice signal to obtain the type of the user's voice, and determines the user's emotion according to the voice type and the mapping relation. The interface construction module constructs a user interface according to the user's emotion. The display module displays the user interface.
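The stored mapping relation and the emotion lookup performed by the voice recognition module can be sketched as follows. This is a minimal illustration, not the patent's implementation; the voice-type names and the fallback to "normal" are assumptions of the sketch.

```python
# Minimal sketch of the stored mapping relation between voice types and
# user emotion statuses ([0006]). The type names are illustrative only.
VOICE_TYPE_TO_EMOTION = {
    "groan": "pain",
    "cough": "sick",
    "panting": "exercising",
    "speech": "normal",
}

def emotion_for_voice_type(voice_type: str) -> str:
    """Return the user emotion mapped to a recognized voice type.

    Unrecognized types fall back to "normal" -- an assumption of this
    sketch, not behavior specified in the patent.
    """
    return VOICE_TYPE_TO_EMOTION.get(voice_type, "normal")

print(emotion_for_voice_type("cough"))   # sick
print(emotion_for_voice_type("humming")) # normal (fallback assumption)
```

In different embodiments the table contents would be user-configurable, matching the patent's statement that the mapping can be set freely according to the user's preferences.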


[0009] Preferably, the storage unit further stores waveform diagrams corresponding to the plurality of voice types; the sound collection module converts the vibration of sounds in the surrounding environment of the handheld device into a corresponding electrical current and samples that current at a predetermined frequency to generate a waveform diagram of the sound; and the voice recognition module compares the waveform diagram of the sound generated by the sound collection module with the waveform diagrams corresponding to the plurality of voice types stored in the storage unit, to obtain the type of the user's voice. Preferably, the voice recognition module removes environmental noise from the voice signal to obtain the user's voice, and then obtains the type of the user's voice from it.
Preferably, the interface construction module includes a positioning module for determining the user's current location.

[0010] Preferably, the interface construction module further includes a network search module for searching, via a network, network information related to the user's emotion within a predetermined geographic area.

[0011] Preferably, the interface construction module includes a number acquisition module for automatically obtaining the telephone number of a predetermined contact from a telephone directory or from the network for the user to dial.

[0012] The user interface construction method provided in an embodiment of the present invention includes the following steps: providing a mapping relation between a plurality of voice types and a plurality of user emotion statuses; capturing voice signals from the surroundings of the handheld device; parsing a voice signal to obtain the type of the user's voice; determining the user's emotion according to the voice type and the mapping relation; constructing a user interface according to the user's emotion; and displaying the user interface.

[0013] Preferably, the method further includes the steps of removing environmental noise from the voice signal to obtain the user's voice, and obtaining the type of the user's voice from it.

[0014] Preferably, the method further includes the step of determining the user's current location.

[0015] Preferably, the method further includes the step of searching, via the network, network information related to the user's emotion within a predetermined geographic area.
[0016] Preferably, the method further includes the step of automatically obtaining the telephone number of a predetermined contact from the telephone directory or from the network for the user to dial.

[0017] The handheld device and the user interface construction method described above recognize the sounds made by the user, learn the user's emotion, and construct and display a user interface according to that emotion, thereby improving the user experience.

[0018] The above and many other advantages of the invention will be readily understood from the following detailed description of embodiments taken in conjunction with the accompanying drawings.

[Embodiments]

[0019] FIG. 1 is a block diagram of an embodiment of a handheld device 10 of the present invention.

[0020] The handheld device 10 includes a processor 100, a storage unit 102, a sound collection module 104, a voice recognition module 106, an interface construction module 108, and a display module 110. In this embodiment, the handheld device 10 may be a mobile terminal device such as a mobile phone or a mobile Internet device (MID). The processor 100 executes the sound collection module 104, the voice recognition module 106, and the interface construction module 108.

[0022] The storage unit 102 stores waveform diagrams corresponding to a plurality of voice types, and the mapping relation between the plurality of voice types and a plurality of user emotions. In this embodiment, the waveform diagram corresponding to a voice type is the sound waveform of one type of sound made by the user. For example, FIG. 2(A) is the waveform corresponding to a groan made by the user, FIG. 2(B) is the waveform corresponding to a cough, FIG. 3(A) is the waveform corresponding to panting, and FIG. 3(B) is the waveform corresponding to the user's speaking voice.
The mapping between voice types and user emotions may be as follows: when the type of the user's voice is a groan, the corresponding user emotion is pain; when it is a cough, the corresponding emotion is sickness; when it is panting, the corresponding emotion is exercising; when it is a normal speaking voice, the corresponding emotion is normal. In different embodiments of the invention, the specific mapping can be set freely according to the user's preferences and is not limited to the examples listed above.

The sound collection module 104 captures voice signals, which include the user's voice, from the environment of the handheld device 10. In this embodiment, the sound collection module 104 may be a microphone. It may capture sound continuously, at predetermined intervals, or when the user presses a predetermined key. Capturing at intervals or on a key press saves the battery of the handheld device 10 and yields a longer operating time. Specifically, the sound collection module 104 converts the vibration of sounds in the surrounding environment of the handheld device 10 into a corresponding electrical current, and then samples the current at a predetermined frequency to generate the waveform diagram of the sound, thereby realizing sound capture.

[0023] The voice recognition module 106 parses the voice signal to obtain the type of the user's voice, and determines the user's emotion according to the voice type and the mapping relation.
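The capture step described above — converting sound vibration into an electrical signal and sampling it at a predetermined frequency to obtain a waveform — can be sketched numerically. The continuous signal here is simulated with a sine function; the tone frequency and sample rate are assumptions for illustration only.

```python
import math

def sample_waveform(signal, duration_s, sample_rate_hz):
    """Sample a continuous signal (a function of time in seconds) at a
    predetermined frequency, returning the list of amplitude samples
    that forms the stored "waveform diagram" of the sound."""
    n = int(duration_s * sample_rate_hz)
    return [signal(i / sample_rate_hz) for i in range(n)]

# Simulated 440 Hz vibration standing in for the microphone current.
tone = lambda t: math.sin(2 * math.pi * 440 * t)
waveform = sample_waveform(tone, duration_s=0.01, sample_rate_hz=8000)
print(len(waveform))  # 80 samples for 10 ms at 8 kHz
```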
In this embodiment, the voice recognition module 106 compares the waveform diagram of the sound generated by the sound collection module 104 with the waveform diagrams corresponding to the plurality of voice types stored in the storage unit 102, obtains the type of the current sound, and then judges the emotion of the user who made the sound from the mapping between voice types and user emotions. Specifically, when the user is sick and coughs, the sound collection module 104 captures the cough and converts it into a waveform diagram. The voice recognition module 106 compares the captured cough with the waveform diagrams of the various sounds stored in the storage unit 102, recognizes that the type of the user's current voice is a cough, and then, from the mapping between that voice type and user emotion, determines that the user is sick.

[0024] The interface construction module 108 constructs a user interface according to the user's emotion. In this embodiment, the interface construction module 108 is preset with user interface construction rules for the various emotions. For example, when the user is determined to be sick, the corresponding functions are started and the user interface is constructed according to the predetermined construction rules for the sick state.

[0025] The display module 110 displays the user interface. In this embodiment, the user interface built by the interface construction module 108 is displayed through the display module 110. As a further improvement of this embodiment, the interface construction module 108 may also generate speech while constructing the user interface screen.
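The waveform comparison performed by the voice recognition module can be sketched as nearest-template matching. The patent does not specify a comparison metric, so the normalized dot-product similarity below is an assumption of this sketch, and the toy template values are invented.

```python
def similarity(a, b):
    """Normalized dot-product similarity between two equal-length waveforms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def classify_voice(waveform, templates):
    """Return the stored voice type whose template waveform best matches
    the captured waveform (the comparison step of module 106)."""
    return max(templates, key=lambda t: similarity(waveform, templates[t]))

# Toy templates standing in for the stored waveform diagrams.
templates = {
    "cough":  [0.0, 0.9, -0.8, 0.1],
    "speech": [0.2, 0.2, 0.2, 0.2],
}
captured = [0.1, 0.8, -0.7, 0.0]
print(classify_voice(captured, templates))  # cough
```

The recognized type would then be looked up in the stored mapping to yield the user emotion, as described for the cough example above.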
[0027] In this embodiment, the voice recognition module 106 identifies the type of the user's voice by directly comparing the voice signal captured by the sound collection module 104, which includes both the user's voice and environmental noise, with the waveform diagrams stored in the storage unit 102. As a further improvement of an embodiment of the invention, the voice recognition module 106 of the handheld device 10 may first remove the environmental noise from the voice signal to obtain the user's voice, and then obtain the type of the user's voice from it. Specifically, the voice signal that the sound collection module 104 captures from the surroundings of the handheld device 10 includes both the user's voice and environmental noise, so the waveform diagram it generates is the superposition of the waveform of the user's voice and the waveform of the environmental noise. Referring to FIG. 4, the waveforms of the groan in FIG. 4(A) and the cough in FIG. 4(B) are the user's voice waveforms obtained after the voice recognition module 106 applies smoothing to remove the waveform of the environmental noise. Using these denoised waveforms increases the accuracy with which the voice recognition module 106 matches the user's voice against the waveform diagrams stored in the storage unit 102, and also speeds up the comparison.

As a further improvement of an embodiment of the invention, the interface construction module 108 of the handheld device 10 includes a positioning module 1080 for determining the user's current location. In this embodiment, the positioning module 1080 may obtain the location of the handheld device 10 through the Global Positioning System (GPS), or determine it through mobile phone base stations.
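The smoothing used to suppress environmental noise can be sketched as a moving-average filter. The patent only says the waveform is smoothed; the centered moving average and its window size are assumptions of this sketch.

```python
def moving_average(samples, window=3):
    """Smooth a sampled waveform with a centered moving average,
    attenuating fast environmental noise while keeping the slower
    envelope of the user's voice."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(moving_average(noisy))  # the sample-to-sample alternation is damped
```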
[0028] As a further improvement of an embodiment of the invention, the interface construction module 108 of the handheld device 10 further includes a network search module 1082 for searching, via the network, network information related to the user's emotion within a predetermined geographic area. In this embodiment, the predetermined geographic area may be worldwide, an area set by the user, or an area within a certain range around the current location determined by the positioning module 1080. Specifically, when the handheld device 10 detects the user's cough and determines that the user is sick, the positioning module 1080 determines the user's current location, and the network search module 1082 searches the network for hospitals and pharmacies near that location and provides the nearest ways and routes to reach them.

[0029] As a further improvement of an embodiment of the invention, the interface construction module 108 of the handheld device 10 further includes a number acquisition module 1084 for obtaining the telephone number of a predetermined contact from the telephone directory or from the network for the user to dial. In this embodiment, the predetermined contact may be a contact stored in the handheld device 10, or a related contact whose telephone number the network search module 1082 finds on the network according to predetermined rules. Specifically, when the handheld device 10 detects that the user is sick, it retrieves the telephone number of the contact, stored in the handheld device 10, whom the user wants to call for help when sick, or the telephone number of a hospital or pharmacy found by the network search module 1082. The user can then establish a voice call with the retrieved contact directly with the dial key.
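The sick-state behavior described above — locate the user, search for nearby hospitals and pharmacies, and fetch a help contact's number to dial — can be sketched as one dispatch routine. Everything below is hypothetical scaffolding: `locate`, `search_nearby`, and the phone-book contents are stand-ins for the positioning, network search, and number acquisition modules, not APIs from the patent.

```python
def build_sick_assistance(locate, search_nearby, phone_book):
    """Assemble the assistance data for the sick state: the user's
    location, nearby hospitals/pharmacies, and a help contact number."""
    location = locate()
    return {
        "location": location,
        "places": search_nearby(location, ["hospital", "pharmacy"]),
        "help_number": phone_book.get("help contact"),
    }

# Stub implementations standing in for GPS, network search, and the
# device phone book.
info = build_sick_assistance(
    locate=lambda: (25.03, 121.56),
    search_nearby=lambda loc, kinds: [f"nearest {k}" for k in kinds],
    phone_book={"help contact": "0912-345-678"},
)
print(info["places"])       # ['nearest hospital', 'nearest pharmacy']
print(info["help_number"])  # 0912-345-678
```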
[0030] FIG. 5 is a flowchart of an embodiment of the user interface construction method of the handheld device 10 of the present invention. In this embodiment, the method is implemented by the functional modules of FIG. 1.

[0031] In step S200, the storage unit 102 stores waveform diagrams corresponding to the plurality of voice types, and the mapping relation between the plurality of voice types and the plurality of user emotions. [0032] In this embodiment, the waveform diagram of a voice type is the sound waveform of one type of sound made by the user. Referring to FIG. 2 and FIG. 3, FIG. 2(A) is the waveform corresponding to a groan made by the user, FIG. 2(B) the waveform corresponding to a cough, FIG. 3(A) the waveform corresponding to panting, and FIG. 3(B) the waveform corresponding to the user's speaking voice. The mapping between voice types and user emotions is as follows: when the type of the user's voice is a groan, the corresponding emotion is pain; when it is a cough, the corresponding emotion is sickness; when it is panting, the corresponding emotion is exercising; when it is a speaking voice, the corresponding emotion is normal.

In step S202, the sound collection module 104 captures a voice signal, including the user's voice, from the surroundings of the handheld device 10. In this embodiment, sound may be captured continuously, at predetermined intervals, or when the user presses a predetermined key.
Specifically, the sound collection module 104 converts the vibration of sounds in the surrounding environment of the handheld device 10 into a corresponding electrical current and samples the current at a predetermined frequency to generate the waveform diagram of the sound, thereby capturing the sound.

[0033] In step S204, the voice recognition module 106 parses the voice signal to obtain the type of the user's voice, and determines the user's emotion according to the voice type and the mapping relation. In this embodiment, the voice recognition module 106 compares the waveform diagram of the sound generated by the sound collection module 104 with the waveform diagrams corresponding to the plurality of voice types stored in the storage unit 102, obtains the type of the current sound, and then judges the emotion of the user who made the sound from the voice type and the mapping between voice types and user emotions. Specifically, when the user is sick and coughs, the sound collection module 104 captures the cough and converts it into a waveform diagram. The voice recognition module 106 compares the captured cough with the waveform diagrams of the various sounds stored in the storage unit 102, recognizes that the type of the user's current voice is a cough, and then determines from the mapping that the user is sick.

[0034] In step S206, the interface construction module 108 constructs a user interface according to the user's emotion. In this embodiment, the interface construction module 108 is preset with user interface construction rules for the various emotions.
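The preset construction rules of step S206 can be sketched as a table from emotion to interface settings. The theme names, shortcut lists, and the fallback to the normal theme are invented for illustration; the patent only specifies that per-emotion rules are preset.

```python
# Hypothetical preset rules mapping each emotion to UI settings (S206).
UI_RULES = {
    "sick":       {"theme": "calm", "shortcuts": ["call help", "find hospital"]},
    "pain":       {"theme": "calm", "shortcuts": ["call help"]},
    "exercising": {"theme": "sport", "shortcuts": ["music", "timer"]},
    "normal":     {"theme": "default", "shortcuts": []},
}

def construct_ui(emotion: str) -> dict:
    """Pick the interface description for the detected emotion, falling
    back to the normal theme (a fallback assumed by this sketch)."""
    return UI_RULES.get(emotion, UI_RULES["normal"])

print(construct_ui("sick")["theme"])  # calm
```

The display module would then render the selected description, which keeps the emotion-detection and presentation concerns separate.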
For example, when the user is determined to be sick, the corresponding functions are started and the user interface is constructed according to the predetermined construction rules for the sick state. The display module 110 displays the user interface built by the interface construction module 108.

[0035] FIG. 6 is a flowchart of another embodiment of the user interface construction method of the handheld device 10 of the present invention.

[0036] In step S300, the storage unit 102 stores waveform diagrams corresponding to the plurality of voice types, and the mapping relation between the plurality of voice types and the plurality of user emotions. In this embodiment, the waveform diagram of a voice type is the sound waveform of one type of sound made by the user. Referring to FIG. 2 and FIG. 3, FIG. 2(A) is the waveform corresponding to a groan made by the user, FIG. 2(B) the waveform corresponding to a cough, FIG. 3(A) the waveform corresponding to panting, and FIG. 3(B) the waveform corresponding to the user's speaking voice. The mapping between voice types and user emotions is as follows: when the type of the user's voice is a groan, the corresponding emotion is pain; when it is a cough, the corresponding emotion is sickness; when it is panting, the corresponding emotion is exercising; when it is a speaking voice, the corresponding emotion is normal.

[0037] In step S302, the sound collection module 104 captures a voice signal, including the user's voice, from the surroundings of the handheld device 10.
In this embodiment, sound may be captured continuously, at predetermined intervals, or when the user presses a predetermined key.

[0038] In step S303, the voice recognition module 106 first removes the environmental noise from the voice signal to obtain the user's voice, and then obtains the waveform of the user's voice. In this embodiment, the waveform generated by the sound collection module 104 is the superposition of the waveform of the user's voice and the waveform of the environmental noise; the voice recognition module 106 first removes the environmental noise from the voice signal to obtain the waveform of the user's voice alone. Referring to FIG. 4, the waveforms of the groan in FIG. 4(A) and the cough in FIG. 4(B) are the user's voice waveforms obtained after the voice recognition module 106 applies smoothing to remove the waveform of the environmental noise. Using the denoised waveforms increases the accuracy with which the voice recognition module 106 matches the user's voice against the waveform diagrams stored in the storage unit 102, and also speeds up the comparison.

[0039] In step S304, the voice recognition module 106 parses the user's voice to obtain its type, and determines the user's emotion according to the voice type. In this embodiment, the voice recognition module 106 compares the denoised waveform of the user's voice with the waveform diagrams corresponding to the voice types stored in the storage unit 102 to obtain the type of the user's voice, and then judges the emotion of the user who made the sound from the mapping between voice types and user emotions.

[0040] In step S306, the positioning module 1080 determines the user's current location.
In this embodiment, the positioning module 1080 can obtain the location information of the handheld device 10 by a global positioning system (GPS) unit, or can determine the location information of the handheld device 10 by means of mobile phone base stations.

[0041] In step S308, the network search module 1082 searches the network for network information related to the user emotion within a predetermined geographic area. In this embodiment, the predetermined geographic area may be the global scope, may be an area set by the user, or may be an area within a certain range around the current location of the user determined by the positioning module 1080.

[0042] FIG. 7 is a flow chart of still another embodiment of the user interface construction method of the handheld device 10 of the present invention. The method in this embodiment is similar to the method of FIG. 6, except that step S310 in this embodiment takes the place of steps S306 and S308 of FIG. 6. Since steps S300, S302, S303, and S304 have been described with reference to FIG. 6, they are not described again.

[0043] In step S310, the number acquisition module 1084 obtains the telephone number of a predetermined contact from the phone book or from the network. In this embodiment, the predetermined contact may be a contact stored in the telephone directory of the handheld device 10, or may be a related contact whose telephone number the network search module 1082 finds on the network.

[0044] Therefore, the handheld device 10 of the present invention and its user interface construction method can recognize the sound emitted by the user, learn the emotion of the user, and construct and display a user interface in accordance with the user emotion.

BRIEF DESCRIPTION OF THE DRAWINGS

[0045] FIG. 1 is a block diagram of an embodiment of a handheld device of the present invention.
[0046] FIG. 2 is a waveform diagram of an embodiment of moaning and coughing sounds stored in the handheld device of the present invention.

[0047] FIG. 3 is a waveform diagram of an embodiment of wheezing and talking sounds stored in the handheld device of the present invention.

[0048] FIG. 4 is a waveform diagram of an embodiment of moaning and coughing sounds after processing by the handheld device of the present invention.

[0049] FIG. 5 is a flow chart of an embodiment of a method for constructing a user interface of a handheld device of the present invention.

[0050] FIG. 6 is a flow chart of another embodiment of a method for constructing a user interface of a handheld device of the present invention.

[0051] FIG. 7 is a flow chart of still another embodiment of a method for constructing a user interface of a handheld device of the present invention.

[Main component symbol description]

[0052] Handheld device: 10
[0053] Processor: 100
[0054] Storage unit: 102
[0055] Sound collection module: 104
[0056] Voice recognition module: 106
[0057] Interface building module: 108
[0058] Display module: 110
[0059] Positioning module: 1080
[0060] Network search module: 1082
[0061] Number acquisition module: 1084
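The interface construction steps S306 through S310 described above — locating the user, searching the network for emotion-related information, and obtaining a predetermined contact's number for dialing — can be sketched as the short pipeline below. Every function name, the placeholder coordinates, and the "family doctor" contact are illustrative assumptions, not details from the specification:

```python
def locate():
    """Positioning module (1080): GPS or base-station fix (step S306).
    Returns placeholder coordinates for illustration."""
    return (25.03, 121.56)

def search_nearby(emotion, position, radius_km=5):
    """Network search module (1082): emotion-related results within a
    predetermined geographic area around the user (step S308)."""
    if emotion == "ill":
        return [f"clinic near {position} (within {radius_km} km)"]
    return []

def contact_number(phone_book, name="family doctor"):
    """Number acquisition module (1084): a predetermined contact's number
    from the phone book, or None if absent (step S310)."""
    return phone_book.get(name)

def assemble_ui(emotion, phone_book):
    """Combine the three modules' outputs into one UI description."""
    position = locate()
    return {
        "emotion": emotion,
        "results": search_nearby(emotion, position),
        "dial": contact_number(phone_book),
    }
```

Under these assumptions, detecting an "ill" emotion surfaces nearby search results and a one-touch dial entry, while a "normal" emotion leaves both fields empty.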

Claims (1)

VII. Claims:

1. A handheld device, comprising: a storage unit for storing a correspondence between a plurality of sound types and a plurality of user emotions; a sound collection module for collecting a sound signal from the surrounding environment of the handheld device, the sound signal including a user voice; a voice recognition module for parsing the sound signal to obtain the type of the user voice, and determining a user emotion according to the type of the user voice and the correspondence; an interface building module for constructing a user interface according to the user emotion; and a display module for displaying the user interface.
2.
The handheld device of claim 1, wherein: the storage unit is further configured to store waveform diagrams corresponding to the sound types; the sound collection module is further configured to convert the vibration of sound in the surrounding environment of the handheld device into an electric current, and to sample the current at a predetermined frequency to generate a waveform diagram corresponding to the sound; and the voice recognition module is further configured to compare the waveform diagram generated by the sound collection module with the waveform diagrams corresponding to the sound types stored in the storage unit, to obtain the type of the user voice.
3. The handheld device of claim 1, wherein the voice recognition module first removes ambient noise in the sound signal to obtain the user voice, and then obtains the type of the user voice according to the user voice.
4. The handheld device of claim 1, wherein the interface building module comprises a positioning module for determining the current location of the user.
5. The handheld device of claim 4, wherein the interface building module further comprises a network search module for searching, via a network, for network information related to the user emotion within a predetermined geographic area.
6. The handheld device of claim 5, wherein the interface building module comprises a number acquisition module for automatically obtaining the telephone number of a predetermined contact from a phone book or from a network for the user to dial.
7. A user interface construction method, applied in a handheld device.
The user interface construction method comprises the following steps: providing a correspondence between a plurality of sound types and a plurality of user emotions; collecting a sound signal from the surrounding environment of the handheld device, the sound signal including a user voice; parsing the sound signal to obtain the type of the user voice; determining a user emotion according to the type of the user voice and the correspondence; constructing a user interface according to the user emotion; and displaying the user interface.
8. The user interface construction method of claim 7, wherein the step of parsing the sound signal to obtain the type of the user voice comprises the following steps: removing ambient noise in the sound signal to obtain the user voice; and obtaining the type of the user voice according to the user voice.
9. The user interface construction method of claim 7, wherein the step of constructing a user interface according to the user emotion comprises the following steps: determining the current location of the user; and searching, via a network, for network information related to the user emotion within a predetermined geographic area.
10. The user interface construction method of claim 7, wherein the step of constructing a user interface according to the user emotion comprises the following step: automatically obtaining the telephone number of a predetermined contact from a phone book or from a network for the user to dial.
TW99141122A 2010-11-26 2010-11-26 Handheld device and method for constructing user interface thereof TW201223231A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99141122A TW201223231A (en) 2010-11-26 2010-11-26 Handheld device and method for constructing user interface thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99141122A TW201223231A (en) 2010-11-26 2010-11-26 Handheld device and method for constructing user interface thereof

Publications (1)

Publication Number Publication Date
TW201223231A true TW201223231A (en) 2012-06-01

Family

ID=46725446

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99141122A TW201223231A (en) 2010-11-26 2010-11-26 Handheld device and method for constructing user interface thereof

Country Status (1)

Country Link
TW (1) TW201223231A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3080678A4 (en) * 2013-12-11 2018-01-24 LG Electronics Inc. Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances
US10269344B2 (en) 2013-12-11 2019-04-23 Lg Electronics Inc. Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances
EP3761309A1 (en) * 2013-12-11 2021-01-06 LG Electronics Inc. Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances

Similar Documents

Publication Publication Date Title
JP2009136456A (en) Mobile terminal device
CN102479024A (en) Handheld device and user interface construction method thereof
WO2016155331A1 (en) Wearable-device-based information delivery method and related device
CN103918284B (en) voice control device, voice control method and program
US9467673B2 (en) Method, system, and computer-readable memory for rhythm visualization
US11281715B2 (en) Associating an audio track with an image
TWI364977B (en) Server apparatus, server control program, and server-client system
CN110992989B (en) Voice acquisition method and device and computer readable storage medium
JPWO2011158418A1 (en) Content processing execution device, content processing execution method, and program
WO2012051910A1 (en) Method for generating map phone book, as well as electronic map and mobile terminal
US10430896B2 (en) Information processing apparatus and method that receives identification and interaction information via near-field communication link
US9377922B2 (en) Aiding people with impairments
JP2010152477A (en) Information processing system, server device, community providing method, program, and recording medium
CN109257498A (en) A kind of sound processing method and mobile terminal
CN109819167A (en) A kind of image processing method, device and mobile terminal
CN110431549A (en) Information processing unit, information processing method and program
CN110706682A (en) Method, device, equipment and storage medium for outputting audio of intelligent sound box
CN109697262A (en) A kind of information display method and device
CN108763475A (en) A kind of method for recording, record device and terminal device
CN106777204B (en) Picture data processing method and device and mobile terminal
CN108549660A (en) Information-pushing method and device
TW201223231A (en) Handheld device and method for constructing user interface thereof
JP5788429B2 (en) Server system
JP2010244190A (en) Communication terminal and information display method
CN109862190A (en) Control method, device, mobile terminal and the storage medium of terminal message memorandum