TW200841255A - Robotic system and method for controlling the same - Google Patents


Info

Publication number
TW200841255A
TW200841255A TW096113013A TW96113013A
Authority
TW
Taiwan
Prior art keywords
sound
expression
unit
signal
robot system
Prior art date
Application number
TW096113013A
Other languages
Chinese (zh)
Other versions
TWI332179B (en)
Inventor
Chyi-Yeu Lin
Original Assignee
Univ Nat Taiwan Science Tech
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Taiwan Science Tech filed Critical Univ Nat Taiwan Science Tech
Priority to TW096113013A priority Critical patent/TWI332179B/en
Priority to US11/806,933 priority patent/US20080255702A1/en
Priority to JP2007236314A priority patent/JP2008259808A/en
Publication of TW200841255A publication Critical patent/TW200841255A/en
Application granted granted Critical
Publication of TWI332179B publication Critical patent/TWI332179B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/008Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Robotics (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

A method for controlling a robotic system. Video and audio information is received by an input unit and input into a processor. The processor transforms the video and audio information into corresponding expressional signals and audio signals. The expressional and audio signals are received, and synchronously output, by an expressional-and-audio synchronized output unit. An expression generation control unit receives the expressional signals and generates corresponding expressional output signals. Multiple actuators enable an imitative face to create facial expressions according to the expressional output signals. A voice generation control unit receives the audio signals and generates corresponding audio output signals. A speaker outputs voice according to the audio output signals. Voice output from the speaker and facial expression creation on the imitative face are performed synchronously.
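As a rough illustration of the pipeline the abstract describes (input unit, processor, synchronized expression/audio output, then actuators and speaker), the sketch below pairs each expression signal with its audio signal and releases both from the same time base. This is a minimal sketch only; every name and data shape here is invented for the example and does not come from the patent.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One synchronized output step: actuator targets plus an audio chunk."""
    time_ms: int
    expression: dict   # hypothetical actuator name -> target position
    audio: bytes       # hypothetical raw audio chunk

def process(info):
    """Stand-in for the processor: turn raw expression-and-sound records
    into time-ordered frames so the two streams stay synchronized."""
    return [Frame(t, expr, audio)
            for t, expr, audio in sorted(info, key=lambda rec: rec[0])]

def synchronized_output(frames):
    """Stand-in for the synchronized output unit: at each time step the
    expression signal goes to the actuators and the audio signal goes
    to the speaker together."""
    events = []
    for f in frames:
        events.append(("actuators", f.time_ms))  # drive the imitative face
        events.append(("speaker", f.time_ms))    # output the voice
    return events
```

Real hardware would replace the event list with actuator and audio driver calls; the point is only that both signals are emitted against one shared clock.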

Description

IX. Description of the Invention

[Technical Field of the Invention]

The present invention relates to a robotic system and a method for controlling a robotic system, and more particularly to a robotic system, and a control method for a robotic system, capable of synchronously outputting facial expressions and voice.

[Prior Art]

Generally speaking, robots capable of performing simple motions and voice output are already available on the market.

Japanese Patent Publication No. 08107983A2 discloses a facial expression changing device for a robot, which includes a head and mechanisms such as artificial resin masks for increasing the variety of facial expressions the robot can present.

U.S. Patent No. 6,760,646 discloses a robot and a method for controlling the motion of the robot. That patent describes the operation of a control device, a detection device, a storage device, and the like, so that the robot can output human-like behavior.

[Summary of the Invention]

To solve the problems described above, the present invention essentially adopts the features detailed below.

Client's Docket No.: 0950080 TT's Docket No: 0912-A50930TW/final/Hawdong/client/inventor

One object of the present invention is to provide a robotic system comprising: a machine head; an imitative face attached to the machine head; a command computing unit; an input unit electrically connected to the command computing unit for receiving expression-and-sound information and inputting the expression-and-sound information into the command computing unit, wherein the command computing unit processes and converts the expression-and-sound information into corresponding expression signals and corresponding sound signals; an expression-and-sound synchronized output unit electrically connected to the command computing unit for receiving and synchronously outputting the expression signals and the sound signals; an expression generation control unit electrically connected to the expression-and-sound synchronized output unit for receiving the expression signals and generating corresponding expression output signals; a plurality of actuators electrically connected to the expression generation control unit and connected to the imitative face for driving the imitative face, according to the expression output signals, to deform and produce expressions; a voice generation control unit electrically connected to the expression-and-sound synchronized output unit for receiving the sound signals and generating corresponding sound output signals; and a speaker electrically connected to the voice generation control unit and connected to the machine head for outputting voice according to the sound output signals, wherein the voice output from the speaker and the expression-producing deformation of the imitative face driven by the actuators are performed synchronously.

According to the above object, the robotic system further comprises an information media input device electrically connected to the input unit, wherein the expression-and-sound information is input into the input unit via the information media input device.

According to the above object, the command computing unit has a timer control device for starting the information media input device at scheduled times.

According to the above object, the robotic system further comprises a network input device electrically connected to the input unit, wherein the expression-and-sound information is input into the input unit via the network input device.

According to the above object, the command computing unit has a timer control device for starting the network input device at scheduled times.

According to the above object, the robotic system further comprises a radio device electrically connected to the input unit, wherein the expression-and-sound information is input into the input unit via the radio device.
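The chain of "electrically connected" relations recited above can be pictured as a small directed graph; the sketch below checks that a signal entering the input unit can reach both the actuators and the speaker. The component names are shorthand invented for this illustration, not identifiers from the patent.

```python
# Shorthand wiring table mirroring the connections recited above
# (input unit -> command computing unit -> synchronized output unit
#  -> expression/voice control units -> actuators/speaker).
CONNECTIONS = {
    "input_unit": ["command_computing_unit"],
    "command_computing_unit": ["sync_output_unit"],
    "sync_output_unit": ["expression_control_unit", "voice_control_unit"],
    "expression_control_unit": ["actuators"],
    "voice_control_unit": ["speaker"],
}

def path_exists(src, dst, graph=CONNECTIONS):
    """Depth-first search: can a signal travel from src to dst?"""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False
```

Note that the flow is one-way: signals fan out from the synchronized output unit to the face and the voice paths, which is what lets the two outputs be driven from a single source in lockstep.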

According to the above object, the command computing unit has a timer control device for starting the radio device at scheduled times.

According to the above object, the robotic system further comprises a sound-and-image analysis unit and a sound-and-image capture device, wherein the sound-and-image analysis unit is electrically connected between the input unit and the sound-and-image capture device, the sound-and-image capture device captures sound and images and transmits the sound and images to the sound-and-image analysis unit, and the sound-and-image analysis unit analyzes and converts the sound and images into the expression-and-sound information and inputs the expression-and-sound information into the input unit.

According to the above object, the sound-and-image capture device comprises a sound pickup device and a camera device.

According to the above object, the robotic system further comprises a memory unit electrically connected between the command computing unit and the expression-and-sound synchronized output unit for storing the expression signals and the sound signals.

According to the above object, the command computing unit has a timer control device for outputting, at scheduled times, the expression signals and the sound signals stored in the memory unit to the expression-and-sound synchronized output unit.

Another object of the present invention is to provide a method for controlling a robotic system, comprising the following steps: providing a machine head, an imitative face, a plurality of actuators, and a speaker, wherein the imitative face is attached to the machine head, the actuators are connected to the imitative face, and the speaker is connected to the machine head; receiving expression-and-sound information with an input unit and inputting the expression-and-sound information into a command computing unit, wherein the command computing unit processes and converts the expression-and-sound information into corresponding expression signals and corresponding sound signals;

receiving and synchronously outputting the expression signals and the sound signals with an expression-and-sound synchronized output unit; receiving the expression signals with an expression generation control unit and generating corresponding expression output signals; driving the imitative face with the plurality of actuators, according to the expression output signals, to deform and produce expressions; receiving the sound signals with a voice generation control unit and generating corresponding sound output signals; and outputting voice with the speaker according to the sound output signals, wherein the voice output from the speaker and the expression-producing deformation of the imitative face driven by the actuators are performed synchronously.
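Read as pseudocode, the claimed control steps amount to: convert the information into two parallel signal streams, then hand each expression/sound pair to its control unit so the face and the voice come out together. A minimal sketch under that reading, with all function names invented for the example:

```python
def convert(info):
    # Stand-in for the command computing unit: split the combined
    # expression-and-sound information into two parallel signal streams.
    return [e for e, _ in info], [s for _, s in info]

def control_robot(info):
    """Walk through the claimed steps; the returned log records what the
    actuators and the speaker would receive, in output order."""
    expr_signals, sound_signals = convert(info)
    log = []
    # Synchronized output unit: release each expression/sound pair together.
    for expr, sound in zip(expr_signals, sound_signals):
        log.append(("actuators", expr))  # expression control unit -> actuators
        log.append(("speaker", sound))   # voice control unit -> speaker
    return log
```

The interleaved log is the key property: each facial expression is issued in the same step as the voice it accompanies, rather than the two streams being played back independently.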

According to the above object, the control method of the robotic system further comprises the following step: inputting the expression-and-sound information into the input unit with an information media input device.

According to the above object, the control method further comprises the following step: starting the information media input device at scheduled times with a timer control device.

According to the above object, the control method further comprises the following step: inputting the expression-and-sound information into the input unit with a network input device.

According to the above object, the control method further comprises the following step: starting the network input device at scheduled times with a timer control device.

According to the above object, the control method further comprises the following step: inputting the expression-and-sound information into the input unit with a radio device.

According to the above object, the control method further comprises the following step: starting the radio device at scheduled times with a timer control device.

According to the above object, the control method further comprises the following steps: capturing sound and images with a sound-and-image capture device, and

transmitting the sound and images to a sound-and-image analysis unit; and analyzing and converting the sound and images into the expression-and-sound information with the sound-and-image analysis unit and inputting the expression-and-sound information into the input unit.

According to the above object, the control method further comprises the following step: storing, with a memory unit, the expression signals and the sound signals converted by the command computing unit.

According to the above object, the control method further comprises the following step: outputting, at scheduled times with a timer control device, the expression signals and the sound signals stored in the memory unit to the expression-and-sound synchronized output unit.

To make the above objects, features, and advantages of the present invention more comprehensible, preferred embodiments are described in detail below in conjunction with the accompanying drawings.

[Embodiments]

Preferred embodiments of the present invention are described with reference to the drawings.

Referring to FIG. 1 and FIG. 2, the robotic system 100 of this embodiment mainly comprises a machine head 110, an imitative face 120, a command computing unit 130, an input unit 135, an expression-and-sound synchronized output unit 140, an expression generation control unit 145, a plurality of actuators 150, a voice generation control unit 155, a speaker 160, an information media input device 171, a network input device 172, a radio device 173, a sound-and-image analysis unit 180, a sound-and-image capture device 185, and a memory unit 190.

The imitative face 120 is attached to the machine head 110. Here, the imitative face 120 may be made of an elastically deformable material such as rubber or synthetic resin, and the imitative face 120 may optionally take the form of a human face, an

animal face, a cartoon character face, or the like.

It is worth noting that structures such as the command computing unit 130, the input unit 135, the expression-and-sound synchronized output unit 140, the expression generation control unit 145, the voice generation control unit 155, the information media input device 171, the network input device 172, the radio device 173, the sound-and-image analysis unit 180, and the memory unit 190 may be disposed inside or outside the machine head 110.

As shown in FIG. 2, the command computing unit 130 has a timer control device 131, and the input unit 135 is electrically connected to the command computing unit 130 and can be used to receive expression-and-sound information.

The expression-and-sound synchronized output unit 140 is electrically connected to the command computing unit 130. The expression generation control unit 145 is electrically connected to the expression-and-sound synchronized output unit 140.

The plurality of actuators 150 are electrically connected to the expression generation control unit 145, and the actuators 150 are respectively connected to the imitative face 120. More specifically, the actuators 150 are separately and appropriately connected to the inner surface of the imitative face 120; for example, the actuators 150 may be respectively connected to the inner surfaces of portions of the imitative face 120 such as the eyebrows, eyes, mouth, and nose.

The voice generation control unit 155 is electrically connected to the expression-and-sound synchronized output unit 140.

The speaker 160 is electrically connected to the voice generation control unit 155, and the speaker 160 is connected to the machine head 110; the speaker 160 may optionally be disposed in a mouth opening 121 of the imitative face 120 (as shown in FIG. 1).

The information media input device 171, the network input device 172, and the radio device 173 are all electrically connected to the input unit 135. In this embodiment, the information media input device 171 may take the form of an optical disc drive or a USB port, and the network input device 172 may be a network port (wired or wireless).

The sound-and-image analysis unit 180 is electrically connected between the input unit 135 and the sound-and-image capture device 185. In this embodiment, the sound-and-image capture device 185 is composed of a sound pickup device 185a and a camera device 185b; for example, the sound pickup device 185a may take the form of a microphone, and the camera device 185b may take the form of a video camera.

The memory unit 190 is electrically connected between the command computing unit 130 and the expression-and-sound synchronized output unit 140.

The robotic system 100 operates in the following performance modes. As shown in step S11 of FIG. 3, expression-and-sound information may be read from an optical disc containing expression and sound data and input into the input unit 135 via the information media input device 171. Next, the input unit 135 inputs the expression-and-sound information into the command computing unit 130, as shown in step S12 of FIG. 3. Here, the command computing unit 130 may process and convert the expression-and-sound information into corresponding expression signals and corresponding sound signals by means of decoding and re-encoding. Then, the expression-and-sound synchronized output unit 140 receives and synchronously outputs the expression signals and the sound signals, as shown in step S13 of FIG. 3. Next, the expression generation control unit 145 receives the expression signals and generates a series of corresponding expression output

signals, as shown in step S14 of FIG. 3. At the same time, the voice generation control unit 155 receives the sound signals and generates a series of corresponding sound output signals, as shown in step S14' of FIG. 3. Next, the plurality of actuators 150 drive the imitative face 120 to deform and produce expressions according to the series of corresponding expression output signals, as shown in step S15 of FIG. 3. Here, the actuators 150 located at different positions on the inner surface of the imitative face 120 each operate according to the expression output signals they receive, so as to drive the imitative face 120 to deform and produce expressions. At the same time, the speaker 160 outputs voice according to the series of corresponding sound output signals, as shown in step S15' of FIG. 3. In particular, through the operation of the expression-and-sound synchronized output unit 140, the voice output from the speaker 160 and the expression-producing deformation of the imitative face 120 driven by the actuators 150 are performed synchronously. For example, the robotic system 100 or the machine head 110 can make the imitative face 120 present corresponding expressions while singing or speaking.

In addition, the expression-and-sound information input from the information media input device 171 into the input unit 135 has been generated or recorded in advance.

Furthermore, the network input device 172 can also input expression-and-sound information into the input unit 135, as shown in step S21 of FIG. 4. For example, the expression-and-sound information may be a file containing expression and sound data, which is transmitted over the Internet, received by the network input device 172, and input into the input unit 135. Next, the input unit 135 inputs the expression-and-sound information into the command computing unit 130, as shown in step S22 of FIG. 4. Here, the command computing unit 130 may process and convert the expression-and-sound information into corresponding expression signals and corresponding sound signals by means of decoding and re-encoding. Then, the expression-and-sound synchronized output unit 140 receives and synchronously outputs the expression signals and the sound signals, as shown in step

S23 of FIG. 4. Next, the expression generation control unit 145 receives the expression signals and generates a series of corresponding expression output signals, as shown in step S24 of FIG. 4. At the same time, the voice generation control unit 155 receives the sound signals and generates a series of corresponding sound output signals, as shown in step S24' of FIG. 4. Next, the plurality of actuators 150 drive the imitative face 120 to deform and produce expressions according to the series of corresponding expression output signals, as shown in step S25 of FIG. 4. Likewise, the actuators 150 located at different positions on the inner surface of the imitative face 120 each operate according to the expression output signals they receive, so as to drive the imitative face 120 to deform and produce expressions. At the same time, the speaker 160 outputs voice according to the series of corresponding sound output signals, as shown in step S25' of FIG. 4. Likewise, through the operation of the expression-and-sound synchronized output unit 140, the voice output from the speaker 160 and the expression-producing deformation of the imitative face 120 driven by the actuators 150 are performed synchronously.

In addition, the expression-and-sound information input from the network input device 172 into the input unit 135 may be real-time or pre-recorded.

Furthermore, the radio device 173 can also input expression-and-sound information into the input unit 135. Here, the expression-and-sound information received and conveyed by the radio device 173 may simply be a broadcast signal that is ultimately output directly by the speaker 160; in this case, the imitative face 120 can still produce particular expressions in coordination with the broadcast.

In addition, the expression-and-sound information input from the radio device 173 into the input unit 135 may also be real-time or pre-recorded.

Furthermore, a user can decide when the robotic system 100 or the machine head

110 performs its performance operations. More specifically, by setting the timer control device 131 of the command computing unit 130, the information media input device 171 can be started under timed control, so that the information media input device 171 inputs expression-and-sound information (from an optical disc containing expression and sound files, for example) into the input unit 135 at a specified time. Likewise, the network input device 172 can input expression-and-sound information from the Internet (files containing expression and sound data) into the input unit 135 at a specified time, and the radio device 173 can receive broadcast signals at a specified time, so that the robotic system 100 or the machine head 110 performs the performance operations described above (for example, broadcasting the news or exchanging greetings with people).

Moreover, after the expression-and-sound information input from the information media input device 171 or the network input device 172 has been processed and converted by the command computing unit 130 into corresponding expression signals and corresponding sound signals, the memory unit 190 may first store the corresponding expression signals and the corresponding sound signals. Likewise, by setting the timer control device 131 of the command computing unit 130, the expression signals and the sound signals in the memory unit 190 can be output under timed control to the expression-and-sound synchronized output unit 140, so that the robotic system 100 or the machine head 110 performs the performance operations described above.

In addition, the expression-and-sound information received by the input unit 135 may already be synchronized, may be not yet synchronized, or may be incompletely synchronized. In any case, the expression-and-sound information may carry built-in time codes so that the command computing unit 130 and the expression-and-sound synchronized output unit 140 can synchronize the information when processing and outputting it.

On the other hand, the robotic system 100 also has the performance mode described below.
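The timed start-up described above (the timer control device 131 starting the disc, network, or radio input at specified times) is, in effect, a small event scheduler. A toy sketch under that reading, with invented names:

```python
import heapq

def run_schedule(events):
    """Fire (start_time, source) entries in time order, standing in for
    a timer control device starting each input device on schedule."""
    heap = list(events)
    heapq.heapify(heap)  # min-heap keyed on start_time
    fired = []
    while heap:
        fired.append(heapq.heappop(heap))  # earliest scheduled source first
    return fired
```

For example, `run_schedule([(900, "radio"), (700, "disc"), (800, "network")])` would start the disc input first, then the network input, then the radio.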

Client's Docket No.: 0950080; TT's Docket No.: 0912-A50930TW/final/Hawdong/client/inventor

The sound and image capture device 185 can capture sound and images, as shown in step S31 of FIG. 5, namely sound and images external to the robotic system 100, for example the voice and image of a performer. Next, the sound and image analysis unit 180 analyzes the sound and images and converts them into expression-sound information, as shown in step S32 of FIG. 5. The input unit 135 then inputs the expression-sound information into the command calculation unit 130, as shown in step S33 of FIG. 5. Here, the command calculation unit 130 processes the expression-sound information, by means of decoding and re-encoding, into a corresponding expression signal and a corresponding sound signal. Next, the expression and sound synchronized output unit 140 receives and synchronously outputs the expression signal and the sound signal, as shown in step S34 of FIG. 5. The expression generation control unit 145 then receives the expression signal and generates a series of corresponding expression output signals, as shown in step S35 of FIG. 5. At the same time, the sound generation control unit 155 receives the sound signal and generates a series of corresponding sound output signals, as shown in step S35' of FIG. 5. The actuators 150 can then drive the lifelike face 120 to deform and produce expressions according to the series of corresponding expression output signals, as shown in step S36 of FIG. 5. Here, the actuators 150 located at different positions on the inner surface of the lifelike face 120 each operate according to the expression output signals they receive, so as to drive the lifelike face 120 to deform and produce expressions. Meanwhile, the speaker 160 outputs sound according to the series of corresponding sound output signals, as shown in step S36' of FIG. 5. Likewise, through the operation of the expression and sound synchronized output unit 140, the speaker outputting sound and the actuators driving the lifelike face to deform are performed synchronously.
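As a rough illustration, the S31 to S36' flow condenses into three stages: analyze the captured sound and image into expression-sound information, convert it into an expression signal and a sound signal, and emit both streams together. The sketch below is a toy model in which every function name and data shape is an assumption made for illustration; it is not the patent's implementation.

```python
def analyze(sound, image):
    # S32: convert raw captured sound and image into expression-sound information.
    expression = "smile" if image == "smiling face" else "neutral"
    return {"sound": sound, "expression": expression}


def convert(info):
    # S33: the command calculation stage decodes and re-encodes the
    # expression-sound information into two separate signals.
    return info["expression"], info["sound"]


def synchronized_output(expression_signal, sound_signal):
    # S34 to S36': emit both signal streams as paired elements so the face
    # actuators deform at the same moment the speaker plays the sound.
    expression_outputs = [f"actuator<-{expression_signal}"]  # S35 / S36
    sound_outputs = [f"speaker<-{sound_signal}"]             # S35' / S36'
    return list(zip(expression_outputs, sound_outputs))


info = analyze("hello", "smiling face")   # S31 capture assumed already done
outputs = synchronized_output(*convert(info))
```

Pairing the two streams element by element, rather than emitting them independently, is one simple way to express the requirement that facial deformation and sound output happen together.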

As described above, the robot head 110 can reproduce the expression and sound of the performer it receives, achieving an entertainment effect. Similarly, after the expression-sound information input from the sound and image analysis unit 180 has been processed and converted by the command calculation unit 130 into a corresponding expression signal and a corresponding sound signal, the memory unit 190 can first store that expression signal and sound signal. Then, by setting the timing control device 131 of the command calculation unit 130, the expression signal and sound signal stored in the memory unit 190 can be output at scheduled times to the expression and sound synchronized output unit 140, so that the robotic system 100 or the robot head 110 performs the performance operations described above.

In summary, the robotic system and robot head disclosed by the present invention can serve as an entertainment center that, while playing a singer's song, presents expressions corresponding to the sound, thereby achieving a lifelike simulated performance.

Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Anyone skilled in the art may make modifications and refinements without departing from the spirit and scope of the invention; accordingly, the scope of protection of the invention shall be defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram showing the outer appearance of the robotic system of the present invention;
FIG. 2 is a schematic diagram showing the internal configuration of the robotic system of the present invention;
FIG. 3 is a flowchart showing one mode of operation of the robotic system of the present invention;
FIG. 4 is a flowchart showing another mode of operation of the robotic system of the present invention; and
FIG. 5 is a flowchart showing still another mode of operation of the robotic system of the present invention.

DESCRIPTION OF THE MAIN REFERENCE NUMERALS

100~robotic system; 110~robot head;
120~lifelike face; 121~mouth opening;
130~command calculation unit; 131~timing control device;
135~input unit; 140~expression and sound synchronized output unit;
145~expression generation control unit; 150~actuator;
155~sound generation control unit; 160~speaker;
171~information media input device; 172~network input device;
173~radio device; 180~sound and image analysis unit;
185~sound and image capture device; 185a~sound pickup device;
185b~camera device; 190~memory unit.


Claims (1)

What is claimed is:

1. A robotic system, comprising:
a robot head;
a lifelike face, attached to the robot head;
a command calculation unit;
an input unit, electrically connected to the command calculation unit, for receiving expression-sound information and inputting it into the command calculation unit, wherein the command calculation unit processes and converts the expression-sound information into a corresponding expression signal and a corresponding sound signal;
an expression and sound synchronized output unit, electrically connected to the command calculation unit, for receiving and synchronously outputting the expression signal and the sound signal;
an expression generation control unit, electrically connected to the expression and sound synchronized output unit, for receiving the expression signal and generating corresponding expression output signals;
a plurality of actuators, electrically connected to the expression generation control unit and connected to the lifelike face, for driving the lifelike face to deform and produce expressions according to the expression output signals;
a sound generation control unit, electrically connected to the expression and sound synchronized output unit, for receiving the sound signal and generating corresponding sound output signals; and
a speaker, electrically connected to the sound generation control unit, for outputting sound according to the sound output signals;
wherein the speaker outputting sound and the actuators driving the lifelike face to deform and produce expressions are performed synchronously.

2. The robotic system as claimed in claim 1, further comprising an information media input device electrically connected to the input unit, wherein the expression-sound information is input into the input unit via the information media input device.

3. The robotic system as claimed in claim 2, wherein the command calculation unit has a timing control device for starting the information media input device at scheduled times.

4. The robotic system as claimed in claim 1, further comprising a network input device electrically connected to the input unit, wherein the expression-sound information is input into the input unit via the network input device.

5. The robotic system as claimed in claim 4, wherein the command calculation unit has a timing control device for starting the network input device at scheduled times.

6. The robotic system as claimed in claim 1, further comprising a radio device electrically connected to the input unit, wherein the expression-sound information is input into the input unit via the radio device.

7. The robotic system as claimed in claim 6, wherein the command calculation unit has a timing control device for starting the radio device at scheduled times.

8. The robotic system as claimed in claim 1, further comprising a sound and image analysis unit and a sound and image capture device, wherein the sound and image analysis unit is electrically connected between the input unit and the sound and image capture device, the sound and image capture device captures sound and images and transmits the sound and images to the sound and image analysis unit, and the sound and image analysis unit analyzes and converts the sound and images into the expression-sound information and inputs the expression-sound information into the input unit.

9. The robotic system as claimed in claim 8, wherein the sound and image capture device comprises a sound pickup device and a camera device.

10. The robotic system as claimed in claim 1, further comprising a memory unit electrically connected between the command calculation unit and the expression and sound synchronized output unit, for storing the expression signal and the sound signal.

11. The robotic system as claimed in claim 10, wherein the command calculation unit has a timing control device for outputting, at scheduled times, the expression signal and the sound signal stored in the memory unit to the expression and sound synchronized output unit.

12. A method for controlling a robotic system, comprising the following steps:
providing a robot head, a lifelike face, a plurality of actuators and a speaker, wherein the lifelike face is attached to the robot head, the actuators are connected to the lifelike face, and the speaker is connected to the robot head;
receiving expression-sound information with an input unit and inputting the expression-sound information into a command calculation unit, wherein the command calculation unit processes and converts the expression-sound information into a corresponding expression signal and a corresponding sound signal;
receiving and synchronously outputting the expression signal and the sound signal with an expression and sound synchronized output unit;
receiving the expression signal with an expression generation control unit and generating corresponding expression output signals;
making the actuators drive the lifelike face to deform and produce expressions according to the expression output signals;
receiving the sound signal with a sound generation control unit and generating corresponding sound output signals; and
making the speaker output sound according to the sound output signals, wherein the speaker outputting sound and the actuators driving the lifelike face to deform and produce expressions are performed synchronously.

13. The robotic system control method as claimed in claim 12, further comprising the step of: inputting the expression-sound information into the input unit with an information media input device.

14. The robotic system control method as claimed in claim 13, further comprising the step of: starting the information media input device at scheduled times with a timing control device.

15. The robotic system control method as claimed in claim 12, further comprising the step of: inputting the expression-sound information into the input unit with a network input device.

16. The robotic system control method as claimed in claim 15, further comprising the step of: starting the network input device at scheduled times with a timing control device.

17. The robotic system control method as claimed in claim 12, further comprising the step of: inputting the expression-sound information into the input unit with a radio device.

18. The robotic system control method as claimed in claim 17, further comprising the step of: starting the radio device at scheduled times with a timing control device.

19. The robotic system control method as claimed in claim 12, further comprising the following steps: capturing sound and images with a sound and image capture device and transmitting the sound and images to a sound and image analysis unit; and analyzing and converting the sound and images into the expression-sound information with the sound and image analysis unit and inputting the expression-sound information into the input unit.

20. The robotic system control method as claimed in claim 19, wherein the sound and image capture device comprises a sound pickup device and a camera device.

21. The robotic system control method as claimed in claim 12, further comprising the step of: storing, with a memory unit, the expression signal and the sound signal converted by the command calculation unit.

22. The robotic system control method as claimed in claim 21, further comprising the step of: outputting, at scheduled times with a timing control device, the expression signal and the sound signal stored in the memory unit to the expression and sound synchronized output unit.
TW096113013A 2007-04-13 2007-04-13 Robotic system and method for controlling the same TWI332179B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
TW096113013A TWI332179B (en) 2007-04-13 2007-04-13 Robotic system and method for controlling the same
US11/806,933 US20080255702A1 (en) 2007-04-13 2007-06-05 Robotic system and method for controlling the same
JP2007236314A JP2008259808A (en) 2007-04-13 2007-09-12 Robot system, and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW096113013A TWI332179B (en) 2007-04-13 2007-04-13 Robotic system and method for controlling the same

Publications (2)

Publication Number Publication Date
TW200841255A true TW200841255A (en) 2008-10-16
TWI332179B TWI332179B (en) 2010-10-21

Family

ID=39854482

Family Applications (1)

Application Number Title Priority Date Filing Date
TW096113013A TWI332179B (en) 2007-04-13 2007-04-13 Robotic system and method for controlling the same

Country Status (3)

Country Link
US (1) US20080255702A1 (en)
JP (1) JP2008259808A (en)
TW (1) TWI332179B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI447660B (en) * 2009-12-16 2014-08-01 Univ Nat Chiao Tung Robot autonomous emotion expression device and the method of expressing the robot's own emotion

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI331931B (en) * 2007-03-02 2010-10-21 Univ Nat Taiwan Science Tech Board game system and robotic device
CN101653660A (en) * 2008-08-22 2010-02-24 鸿富锦精密工业(深圳)有限公司 Type biological device for automatically doing actions in storytelling and method thereof
JP5595101B2 (en) * 2010-04-26 2014-09-24 本田技研工業株式会社 Data transmission method and apparatus
JP6693111B2 (en) * 2015-12-14 2020-05-13 カシオ計算機株式会社 Interactive device, robot, interactive method and program
US9864431B2 (en) 2016-05-11 2018-01-09 Microsoft Technology Licensing, Llc Changing an application state using neurological data
US10203751B2 (en) 2016-05-11 2019-02-12 Microsoft Technology Licensing, Llc Continuous motion controls operable using neurological data
JP6841167B2 (en) 2017-06-14 2021-03-10 トヨタ自動車株式会社 Communication devices, communication robots and communication control programs
CN107833572A (en) * 2017-11-06 2018-03-23 芋头科技(杭州)有限公司 The phoneme synthesizing method and system that a kind of analog subscriber is spoken

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4177589A (en) * 1977-10-11 1979-12-11 Walt Disney Productions Three-dimensional animated facial control
US4775352A (en) * 1986-02-07 1988-10-04 Lawrence T. Jones Talking doll with animated features
US4923428A (en) * 1988-05-05 1990-05-08 Cal R & D, Inc. Interactive talking toy
US5746602A (en) * 1996-02-27 1998-05-05 Kikinis; Dan PC peripheral interactive doll
AUPP170298A0 (en) * 1998-02-06 1998-03-05 Pracas, Victor Manuel Electronic interactive puppet
US6135845A (en) * 1998-05-01 2000-10-24 Klimpert; Randall Jon Interactive talking doll
US6249292B1 (en) * 1998-05-04 2001-06-19 Compaq Computer Corporation Technique for controlling a presentation of a computer generated object having a plurality of movable components
JP2000116964A (en) * 1998-10-12 2000-04-25 Model Tec:Kk Method of driving doll device and the doll device
US6554679B1 (en) * 1999-01-29 2003-04-29 Playmates Toys, Inc. Interactive virtual character doll
AU2002232928A1 (en) * 2000-11-03 2002-05-15 Zoesis, Inc. Interactive character system
JP3632644B2 (en) * 2001-10-04 2005-03-23 ヤマハ株式会社 Robot and robot motion pattern control program
US7209882B1 (en) * 2002-05-10 2007-04-24 At&T Corp. System and method for triphone-based unit selection for visual speech synthesis
US7113848B2 (en) * 2003-06-09 2006-09-26 Hanson David F Human emulation robot system
US7756614B2 (en) * 2004-02-27 2010-07-13 Hewlett-Packard Development Company, L.P. Mobile device control system
WO2005087337A1 (en) * 2004-03-12 2005-09-22 Koninklijke Philips Electronics N.V. Electronic device and method of enabling to animate an object
US20070128979A1 (en) * 2005-12-07 2007-06-07 J. Shackelford Associates Llc. Interactive Hi-Tech doll


Also Published As

Publication number Publication date
TWI332179B (en) 2010-10-21
US20080255702A1 (en) 2008-10-16
JP2008259808A (en) 2008-10-30


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees