201006635

IX. Description of the Invention:

[Technical Field of the Invention]

The present invention relates to a robot, and more particularly to a telepresence robot that can be controlled from a remote end.

[Prior Art]

Neighboring Japan has become a super-aged society in which, on average, roughly every three young people support one elderly person, so that geriatric medical care and nursing have gradually become a heavy burden on society. Facing a shortage of caregivers, Japan has long turned its attention to robots. The robots Japan has developed for the elderly market include robots that help seniors walk, rehabilitate, and strengthen their muscles and bones; robots that assist bedridden seniors with bathing and toileting; robots that, in place of caregivers, carry seniors on their backs or feed those with limited mobility; and even entertainment robots such as robotic pets, piano players, flute players, and dance partners.

To give the elderly a sense of companionship, Republic of China Patent Publication No.
519826, "Personalized Smart Camera System," has been proposed, which mainly uses an auto-tracking PTZ camera together with controllable facial features (for example, movable eyebrows, eyelids, and a mouth) to carry out vivid remote video interaction.

However, the above technique lacks the mobility and body language peculiar to human beings, so its anthropomorphic effect still leaves room for improvement. Moreover, when mobility is increased, autonomous reactions must be taken into account to prevent the overall function from being lost through factors such as the external environment or improper operation; otherwise, a robot intended to provide care and nursing could itself lose its caring function through an accidental failure.

[Summary of the Invention]

The main object of the present invention is to provide a telepresence robot controllable from a remote end, which combines virtual reality, human-machine interface, communication, and robot-motion technologies to achieve remote presence, so that care, nursing, and emotional exchange can be conducted at a distance in a humanized manner.

Based on the above object, the telepresence robot of the present invention comprises a body, a mobile device, a camera, an anthropomorphic module that simulates human emotional expression, an environment sensing module, a physiological information module, and a core control device. Through the operation of a human-machine interface, a remote operator can transmit control commands and voice wirelessly (for example, over the Internet) to the near-end telepresence robot, so as to control the robot's movement, express the operator's emotions, and relay the remote voice. Besides being remotely controlled, the core control device on the robot can also cooperate with the environment sensing module to give the robot autonomous behavior.
At the same time, the telepresence robot can transmit the near-end user's image, voice, and physiological information, together with information about its environment, wirelessly (for example, over the Internet) to the remote human-machine interface, so that the remote operator and the near-end user can interact and communicate with a sense of presence; in addition, the physiological information thus obtained enables the remote operator to provide care for the near-end user at a distance.

The advantages and spirit of the present invention can be further understood from the following detailed description and the accompanying drawings.

[Embodiment]

Please refer to Fig. 1, which schematically shows an implementation of the remotely controllable telepresence robot of the present invention. As shown in Fig. 1, the telepresence robot mainly comprises a body 10, a mobile device 11 disposed at the bottom of the body 10, a camera 12, an anthropomorphic module 14 that simulates human emotional expression, an environment sensing module 20, and a core control device 22. A sound pickup and playback device 18 may be integrated with the camera 12, so that in addition to capturing and transmitting images of its surroundings, the camera 12 can also relay sound between the remote and near ends through the device 18.

Briefly, in this embodiment the remote operator 32 issues commands through a human-machine interface 30 provided by a server 26, and the server 26 relays the control commands and voice wirelessly, for example over the Internet, to the near-end telepresence robot, thereby controlling the robot's movement, expressing the operator's emotions, and carrying voice between the remote and near ends.
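The command path described above, from the operator 32 through the human-machine interface 30 and the server 26, over the Internet, to the core control device 22, can be sketched as a small serialized message format. The patent does not specify any wire format; the message fields, function names, and JSON encoding below are illustrative assumptions only:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ControlMessage:
    """One operator command relayed from the human-machine interface 30,
    via the server 26, to the robot's core control device 22.
    (Hypothetical schema; not taken from the patent disclosure.)"""
    command: str   # e.g. "move", "gesture", "speak"
    params: dict   # command-specific payload

def encode(msg: ControlMessage) -> bytes:
    # The server could forward commands over the Internet as, e.g., JSON.
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(raw: bytes) -> ControlMessage:
    data = json.loads(raw.decode("utf-8"))
    return ControlMessage(command=data["command"], params=data["params"])

# Round trip: the operator asks the robot to wave in greeting.
msg = ControlMessage(command="gesture", params={"name": "wave", "speed": 0.5})
assert decode(encode(msg)) == msg
```

A real deployment would of course add transport, authentication, and error handling; the sketch only shows that each remote action reduces to a command name plus parameters that the core control device dispatches.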
That is, the operator 32 transmits commands through the human-machine interface 30, by wireless transmission (the Internet), to the core control device 22 of the telepresence robot. The core control device 22 provides functions such as transmitting and receiving command signals, data storage, and data computation; with it, the robot can drive the mobile device 11, process the images captured by the camera 12 and the sound of the pickup and playback device 18, and control the anthropomorphic module 14. Alternatively, the operator 32 can transmit his or her own voice and ambient sound through the human-machine interface 30 to the pickup and playback device 18 on the robot. The remote operator 32 can thus move the robot by controlling its mobile device 11, control the anthropomorphic module 14 to simulate human emotions, expressions, and motions, and converse and interact with the robot's near-end user through the sound relayed by the pickup and playback device 18.

Likewise, the image, voice, and environmental information of the robot's near-end user can be transmitted wirelessly (over the Internet) to the remote human-machine interface 30, giving the remote operator 32 a mode of communicating and interacting with the near-end user that approximates face-to-face presence, for humanized remote interpersonal exchange.

To achieve a concrete sense of companionship, the telepresence robot has an anthropomorphic form (for example, with a face and hands), and simulated human-emotion components 16a and 16b are disposed on the body 10; the simulated human-emotion components 16a and 16b can be any of various components that simulate human emotional expression.
Expressions, motions, and sounds conveying human emotion are simulated by the human-emotion components 16a and 16b in conjunction with the aforementioned pickup and playback device 18. In this embodiment, the simulated human-emotion component 16a is a pair of human-like eyebrows or an LED array, and the simulated human-emotion component 16b is a human-like limb or other external component. When the core control device 22 drives the simulated human-emotion components 16a and 16b through the anthropomorphic module 14, it controls the eyebrows (or LED array) and the limb (or external component) to imitate the human form and genuine emotional expression, so that emotions can be expressed richly. For example, when the remote operator 32 wishes to greet the near-end user with a friendly voice and expression, the operator 32 can issue a "dance for joy" control command through the human-machine interface 30; the core control device 22 then drives the human-emotion component 16b (the limb) to wave through the anthropomorphic module 14, and drives the mobile device 11 to move. Sound at both ends can also be picked up and played back through the device 18: when the operator 32 speaks, the core control device 22 drives the device 18 to play the remote operator's words, for example "How are you?", to the near-end user, and the device 18 relays the near-end user's reply, for example "I am fine!", back to the remote operator 32.

To give the telepresence robot more agile mobility, the mobile device 11 in this embodiment may be a three-wheeled, differentially controlled carrier.
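A three-wheeled differentially controlled carrier is typically driven by two independently powered wheels plus a passive caster. As a minimal sketch (the function name, track width, and sign convention are assumptions, not details given in the patent), the usual mapping from a desired body velocity and turn rate to the two wheel speeds is:

```python
def differential_wheel_speeds(v, omega, track_width):
    """Map a desired forward speed v (m/s) and turn rate omega (rad/s)
    to (left, right) wheel speeds for a differential-drive base; the
    third wheel is assumed to be an unpowered caster."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right

# Driving straight (omega = 0): both wheels run at the body speed.
assert differential_wheel_speeds(0.4, 0.0, 0.5) == (0.4, 0.4)
# Turning in place (v = 0): the wheels spin in opposite directions.
assert differential_wheel_speeds(0.0, 0.5, 2.0) == (-0.5, 0.5)
```

This is why a differential base gives the agile mobility the embodiment mentions: it can rotate on the spot by commanding equal and opposite wheel speeds, with no steering linkage.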
In addition, besides being remotely controlled, the core control device 22 of the telepresence robot can cooperate with an environment sensing module 20 to give the robot autonomous behavior, a capability provided to prevent the overall function from being lost through factors such as the external environment or improper operation. To this end, the environment sensing module 20 senses the state of the surroundings and reacts autonomously according to the detected state, so that external factors do not cause a functional failure. For example, the environment sensing module 20 may be a temperature or distance sensor; according to the sensed environmental state, it drives the controlled mobile device 11 to move away from hazards or obstacles, such as a fire source or a wall, as the corresponding autonomous reaction.

Referring to Fig. 2, in order to monitor the physiological state of the near-end user or subject, the present invention further provides a physiological information module 24 on the body 10, which is used to detect the physiological state of the near-end user (the subject).
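The autonomous reaction described above, retreating from heat sources and stopping short of obstacles, amounts to a simple priority rule over the sensor readings. The thresholds, action names, and function below are illustrative assumptions rather than the patent's own logic:

```python
def autonomous_reaction(distance_m, temperature_c,
                        min_clearance=0.5, max_safe_temp=45.0):
    """Choose an evasive action from raw readings of a distance sensor
    and a temperature sensor (thresholds are hypothetical defaults)."""
    if temperature_c > max_safe_temp:
        return "retreat"        # e.g. a fire source ahead
    if distance_m < min_clearance:
        return "stop_and_turn"  # e.g. a wall or other obstacle
    return "proceed"

assert autonomous_reaction(2.0, 22.0) == "proceed"
assert autonomous_reaction(0.3, 22.0) == "stop_and_turn"
assert autonomous_reaction(2.0, 60.0) == "retreat"
```

The point of such a rule is that it runs on the core control device 22 even when the remote link is idle or misused, so an unsafe operator command cannot drive the robot into a hazard.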
The physiological information detected by the module 24 is provided to the remote operator 32 or to monitoring personnel. The physiological information module 24 also has an abnormal-physiological-signal alert function and can send a warning message to the remote operator 32 or to designated personnel, so that remote care can be carried out reliably and effectively.

The above detailed description of the preferred embodiments is intended to describe the features and spirit of the present invention more clearly, and not to limit the scope of the invention to the preferred embodiments disclosed above. On the contrary, all equivalent changes and modifications made in accordance with the scope of the patent application of the present invention are intended to fall within the coverage of its claims.

[Brief Description of the Drawings]

Fig. 1 is a schematic view of an implementation of the remotely controllable telepresence robot of the present invention.

Fig. 2 is a schematic view of another implementation of the remotely controllable telepresence robot of the present invention.

[Main Component Symbol Description]

10 body
11 mobile device
12 camera
14 anthropomorphic module
16a simulated human-emotion component (eyebrows or LED array)
16b simulated human-emotion component (limb or external component)
18 sound pickup and playback device
20 environment sensing module
22 core control device
24 physiological information module
26 server
30 human-machine interface
32 operator