TWI682354B - Cognitive learning system and method for learning system thinking using the same - Google Patents

Cognitive learning system and method for learning system thinking using the same

Info

Publication number
TWI682354B
TWI682354B
Authority
TW
Taiwan
Prior art keywords
participant
signal
thinking
participants
audio
Prior art date
Application number
TW108106382A
Other languages
Chinese (zh)
Other versions
TW202032494A (en)
Inventor
吳孝三
Original Assignee
山衛科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 山衛科技股份有限公司
Priority to TW108106382A
Priority to CN201910528044.4A
Application granted
Publication of TWI682354B
Publication of TW202032494A

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides a cognitive learning system for assisting systematic thinking, through which empathic thinking and meta-thinking, both part of thinking education, can be developed so as to improve systematic thinking. In one embodiment, an image acquisition system capable of detecting the participants' bio-signals is used to record a topic discussion held by a facilitator and the participants as a streaming medium. The system also provides an operation interface through which the facilitator sets at least one pause point within the streaming medium and a plurality of questions respectively corresponding to the pause points. The participants then use a terminal device to replay the streaming medium and answer the questions at the respective pause points. By repeating the foregoing procedure, the participants develop meta-thinking and empathic thinking, thereby achieving the systematic thinking component of the core competencies.

Description

Cognitive learning system and systematic thinking learning method using the same

The present invention relates to a cognitive learning system, and more particularly to a cognitive learning system and a systematic thinking learning method using the same, in which participants practice empathic thinking and meta-thinking through video playback so as to improve the effectiveness of thinking education.

In conventional education and training, thinking has typically been trained by guiding students through face-to-face conversation, or by having students answer designed questions on paper or on a computer screen and using the results as the basis for teaching and training logical thinking. For example, Taiwan (R.O.C.) Utility Model Publication No. M562480 teaches a system for assessing logical thinking ability, which comprises an interactive device operated by the subject during a test, an eye tracker for measuring and recording the subject's eye position and eye movement, and an analysis module that receives the recorded eye position and eye movement data, looks up the corresponding analysis result in a behavior database, and displays the result on the interactive device. As another example, Taiwan (R.O.C.) Invention Patent Publication No. I623847 teaches a computer program product for evaluating logical thinking ability. At least one test control parameter is set, which controls the test rules of the test procedure presented in the test interface. At least one physiological stimulation parameter is then set. The test subject next undergoes the test procedure, which is configured to require the subject to make judgments about a plurality of cognitive objects arranged in a sequence, while the computer controls a peripheral device to apply physiological stimulation to the subject according to the physiological stimulation parameter. After the test, a scoring mode computes the score obtained by the subject during the test procedure. Finally, an analysis mode generates an analysis report from the score, the test control parameters and the physiological stimulation parameters, and stores the report in a storage device.

The conventional logical thinking training described above is subjective training of the test subject. Because it does not analyze the different types of human thinking, its depth and breadth as thinking education are insufficient. In recent years, educators have therefore promoted an education centered on core competencies. Core competencies emphasize the value and function of education; their three dimensions and nine items cover knowledge, ability and attitude, and the underlying idea is to foster whole-person development and lifelong learning through competencies acquired during the learning process.

Core competencies emphasize attitude. Most major countries in the world, including the United States and mainland China, are pursuing core-competency education, whose main goal is lifelong learning built on three dimensions: autonomous action, communication and interaction, and social participation. The nine items unfold from these three dimensions. Autonomous action comprises physical and mental wellness with self-improvement, systematic thinking with problem solving, and planning and execution with innovative adaptation. Communication and interaction comprises symbol use and expression, technology and information with media literacy, and artistic appreciation with aesthetic literacy. Social participation comprises moral practice and civic consciousness, interpersonal relationships and teamwork, and cultural diversity with international understanding, giving three dimensions and nine items in total.

Among the three dimensions of core competencies, autonomous action is the most important, because only those who take the initiative themselves can develop in multiple directions afterwards. Within the three items of autonomous action, systematic thinking is the core: under the current education system, children do not act autonomously and are directed from an early age, so their development in this respect is incomplete. Of the items under autonomous action, the two most important abilities are systematic thinking and problem solving.

System thinking was proposed by Peter Senge. He proposed the learning organization and the so-called fifth discipline (The Fifth Discipline), of which the most important element is systems thinking. Earlier approaches mostly addressed critical thinking or logical thinking, which alone are incomplete. Systems thinking is mainly about forming a feedback loop that achieves the goal the thinker wants to reach.

Core competencies are a theory and a concept, but there is currently no systematic solution for putting them into practice, or for cultivating them with the assistance of a system, particularly for the systematic thinking part of autonomous action. A cognitive learning system and a systematic thinking learning method are therefore needed to address these shortcomings of current flipped education.

The core of the present invention is to propose a system and an operation method for the systematic thinking part of autonomous action that can actually be carried out and that effectively help participants develop systematic thinking. The system and operation method capture the participants' discussion of a topic with an imaging system that tracks the participants' physiological state and record it as a video stream. An operation interface is then provided so that a facilitator can set pause points in the video stream and have corresponding questions displayed at those pause points. Afterwards, the participants can play back the video stream with its pause points and questions on another terminal device and answer the questions. Repeating this procedure trains the participants in empathic thinking and meta-thinking, achieving the training effect of systematic thinking within the core competencies.

The present invention proposes a practicable architecture for systematic thinking that is realized in three steps: the first step is logical thinking, the second step is empathic thinking, and the third step is meta-thinking. An audio-visual system assists the learners, who then revisit the process they took part in from a transposed and a meta perspective, thereby achieving the effect of training systematic thinking.

In a specific embodiment, the present invention provides a cognitive learning system comprising an image capture system, a sensing device, an arithmetic processor, a storage device, a first terminal device and a second terminal device. The image capture system records a teaching scenario. The sensing device senses, in the teaching scenario, a physiological signal of each of a plurality of participants. The arithmetic processor receives the physiological signal of each participant and judges its status; when a physiological signal meets a specific condition, the processor controls the image capture system to capture images of the participant corresponding to that physiological signal. The storage device stores the audio/video streaming signal recorded by the image capture system. The first terminal device provides a first operation interface to a facilitator, who uses it to configure the audio/video streaming signal and generate setting information; the setting information includes pause points set at specific time points of the stream and at least one question designed for each pause point. The second terminal device provides a second operation interface to the participants, through which the stream is played back; during playback, and according to the setting information, the corresponding questions are displayed at each pause point for the participant to answer, and the second operation interface stores the participant's answers.
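
To make the relationships described in this embodiment concrete, the following is a minimal data-model sketch in Python. The class and field names (PausePoint, SettingInfo, Answer and their attributes) are illustrative assumptions and are not taken from the patent itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PausePoint:
    time_sec: float        # time point within the audio/video stream at which playback pauses
    questions: List[str]   # at least one question is associated with every pause point

@dataclass
class SettingInfo:
    # Produced by the facilitator through the first operation interface.
    stream_id: str
    pause_points: List[PausePoint] = field(default_factory=list)

@dataclass
class Answer:
    # Stored by the second operation interface after a participant replies at a pause point.
    participant_id: str
    pause_time_sec: float
    question: str
    reply: str             # typed text or a transcription of a spoken reply
```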

In a specific embodiment, the present invention provides a systematic thinking learning method comprising the following steps. First, step (a) provides a cognitive learning system including an image capture system, a sensing device, an arithmetic processor, a first terminal device and a second terminal device. In step (b), a facilitator has a plurality of participants discuss and interact about a subject to construct a teaching scenario. In step (c), the image capture system records the teaching scenario. In step (d), the sensing device senses, in the teaching scenario, a physiological signal of each of the participants. In step (e), the arithmetic processor receives the physiological signal of each participant and judges its status; when a physiological signal meets a specific condition, it controls the image capture system to capture images of the corresponding participant, and the images are stored in a storage device to form an audio/video streaming signal. In step (f), the facilitator uses a first operation interface provided by the first terminal device to configure the stream and generate setting information, which includes pause points at specific time points of the stream and at least one question designed for each pause point. In step (g), the participant uses a second operation interface provided by the second terminal device to play back the stream; during playback, and according to the setting information, the corresponding questions are displayed at each pause point for the participant to answer, and the second operation interface stores the answers. Finally, in step (h), after step (g), the facilitator gathers the participants again to discuss the subject anew, and steps (c) to (g) are repeated at least once.

Various exemplary embodiments are described more fully below with reference to the accompanying drawings, in which some exemplary embodiments are shown. The inventive concept may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the inventive concept to those skilled in the art. Like numerals refer to like elements throughout. The cognitive learning system and its systematic thinking learning method are described below by way of several embodiments and the accompanying drawings; the following embodiments, however, do not limit the present invention.

Please refer to FIG. 1, which is a schematic diagram of an embodiment of the cognitive learning system of the present invention. In this embodiment, the cognitive learning system 2 includes an image capture system 20, a sensing device 21, an arithmetic processor 22, a storage device 23, a first terminal device 24 and a second terminal device 25. The image capture system 20 includes at least one image capture device 200, such as a camera, arranged around a classroom so that its field of view covers every area of the classroom R. The image capture device 200 can zoom its field of view according to control signals generated by the arithmetic processor 22. In addition, the image capture device 200 is mounted on a base capable of multi-dimensional movement; the base can rotate or translate according to the control signals so that the image capture device 200 is moved to a suitable viewing angle for capturing images.

Within the space of the classroom R there are a plurality of participants and a facilitator. The facilitator mainly controls how the discussion proceeds, while the participants can be divided into groups of several people to discuss a given subject. There may be a plurality of image capture devices 200; if there are many, each can be assigned a different role. For example, one or more image capture devices 200 record images covering the whole classroom space, while one or more others record close-up images of specific persons. The full-classroom images represent the entire course of the discussion and all participants, including full-length footage from different viewing angles, which can be used for meta-thinking training; the close-up images of specific persons, which may also include close-ups from different angles, can be used for empathic thinking training. In some embodiments, some image capture devices 200 can switch between capturing the whole scene and capturing close-ups.

In one embodiment, the sensing device has a plurality of sensing modules 210 worn by each participant 90 to sense their physiological state and generate corresponding physiological signals. The sensed physiological signals are transmitted wirelessly to a signal receiving unit 211. The sensing module 210 may include a Bluetooth, infrared, RFID, ZigBee, UWB or ultrasonic transmission module for sending the signals to the signal receiving unit 211, which then forwards the received physiological signals to the arithmetic processor 22 by wire or wirelessly. The physiological state includes information such as heartbeat, pulse, blood pressure or sound. In another embodiment, the sensing device may be formed by the image capture device 200 together with the arithmetic processor 22: the images captured by the image capture device 200 are transmitted to the arithmetic processor 22, which analyzes the participant's physiological state from the images, including expressions such as smiling, anger or sadness, and generates the corresponding physiological signals. In yet another embodiment, the sensing modules 210, the image capture device 200 and the arithmetic processor 22 may be combined, and the combination may be changed according to design needs.
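
As a rough illustration of how a wearable module's readings could be packaged and handed from the signal receiving unit 211 to the arithmetic processor 22, here is a small sketch; the message fields and the in-memory queue are assumptions for illustration only and say nothing about the actual wireless protocol used.

```python
import queue
from dataclasses import dataclass

@dataclass
class PhysiologicalReading:
    participant_id: str    # which wearer the sensing module is bound to
    timestamp: float       # seconds since the start of the session
    heart_rate: float      # beats per minute
    blood_pressure: float  # systolic pressure, mmHg
    audio_level: float     # normalized microphone loudness, 0.0 to 1.0

class SignalReceivingUnit:
    """Collects readings arriving over the wireless link and hands them to the processor."""

    def __init__(self):
        self._buffer = queue.Queue()

    def on_wireless_packet(self, reading: PhysiologicalReading) -> None:
        # Called whenever a packet from a sensing module has been decoded.
        self._buffer.put(reading)

    def drain(self) -> list:
        # The arithmetic processor polls this to fetch all pending readings.
        readings = []
        while not self._buffer.empty():
            readings.append(self._buffer.get())
        return readings
```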

The physiological signals measured by the sensing modules 210 are analyzed and judged by the arithmetic processor 22. When the arithmetic processor 22 determines that a physiological signal meets a specific condition, it controls the image capture device 200 to capture images of the participant corresponding to that signal. For example, when a physiological signal of participant 90A, such as sound, meets a specific condition, the arithmetic processor judges that the participant is currently speaking and therefore controls an image capture device to take a close-up of participant 90A. There may be several image capture devices capturing participant 90A from different angles.
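
A minimal sketch of the decision just described, assuming a reading shaped like the one sketched earlier: when the audio level of a participant exceeds a threshold, a close-up of that participant is requested, and when it falls back below the threshold the close-up may be stopped. The threshold value and the two dispatch callbacks are illustrative assumptions.

```python
AUDIO_CLOSEUP_THRESHOLD = 0.6  # assumed normalized loudness above which a speaker gets a close-up

def handle_reading(reading, dispatch_closeup, stop_closeup):
    """Start or stop a close-up recording based on one physiological reading.

    dispatch_closeup(participant_id) and stop_closeup(participant_id) stand in for the
    control signals the arithmetic processor sends to the image capture devices.
    """
    if reading.audio_level >= AUDIO_CLOSEUP_THRESHOLD:
        # The participant is judged to be speaking: point an available camera at them.
        dispatch_closeup(reading.participant_id)
    else:
        # Below the level: the close-up for this participant may be stopped,
        # while the full-scene recording continues unaffected.
        stop_closeup(reading.participant_id)
```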

In addition, while a close-up of participant 90A is being recorded, the image capture device 200 that records the whole scene continues to record all the participants. In another embodiment, if the physiological signals of other participants also reach the recording criterion while participant 90A is being recorded in close-up, the arithmetic processor 22 can simultaneously control other image capture devices 200 to take close-ups of those participants. For example, if another participant 90B shows a rising heart rate or blood pressure, a notable facial expression, or whispers to the person beside them, and the detected signal meets the criterion for being recorded, a close-up of participant 90B is recorded as well. In another embodiment, when a participant's physiological signal is absent or falls below a specific level, the arithmetic processor 22 controls the image capture device 200 to stop the close-up recording. Of course, while a close-up recording is stopped, the image capture device 200 recording the whole scene keeps recording.

In one embodiment, the arithmetic processor 22 has image processing capability and can receive the streaming video signals generated by each image capture device 200 and combine them along the time axis to form an audio/video streaming signal. The arithmetic processor 22 may be a workstation, a desktop computer, a notebook computer or a cloud server. In another embodiment, the arithmetic processor 22 can combine the streams generated by the image capture devices into an audio/video streaming signal for each participant. For example, taking participants 90A to 90D, there may be several audio/video streaming signals, each corresponding to participant 90A, 90B, 90C or 90D respectively. This is possible because, while a close-up image capture device is recording a specific participant, the arithmetic processor already knows which device is currently recording which participant; when the recording ends, the arithmetic processor 22 can therefore tag that close-up stream segment with the corresponding participant when storing it. After the whole discussion is finished, the arithmetic processor can store the streams corresponding to each participant separately. Each stream contains the full-length meta footage, that is, the recording of the entire session, and the transposed footage of the user (participant and/or facilitator), that is, the close-up images of that user.
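
The per-participant assembly could look roughly like the following sketch, in which each recorded segment carries a tag naming the close-up subject (an empty tag marking full-scene footage); the segment representation is an assumption made for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Segment:
    source_camera: str    # which image capture device produced the footage
    participant_id: str   # "" for full-scene footage, otherwise the close-up subject
    start_sec: float
    end_sec: float
    file_path: str

def assemble_per_participant(segments: List[Segment]) -> Dict[str, List[Segment]]:
    """Group segments so each participant gets the full-scene footage plus their own close-ups."""
    full_scene = [s for s in segments if s.participant_id == ""]
    streams: Dict[str, List[Segment]] = {}
    for seg in segments:
        if seg.participant_id == "":
            continue
        per = streams.setdefault(seg.participant_id, list(full_scene))
        per.append(seg)
    for per in streams.values():
        per.sort(key=lambda s: s.start_sec)  # keep every timeline in chronological order
    return streams
```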

The storage device 23 stores the audio/video streaming signal recorded by the image capture system. In one embodiment, the storage device receives the streaming signal through the arithmetic processor. The storage device 23 may be a storage device inside the arithmetic processor 22, such as a hard disk, or a storage device external to the arithmetic processor 22, such as a hard disk array or a NAS array. In another embodiment, the storage device 23 may be a remote cloud storage server electrically connected to the arithmetic processor through a network. In yet another embodiment, the storage device 23 may be directly connected to the image capture devices 200 of the image capture system to receive and store the audio/video streaming signals generated by each image capture device 200.

The first terminal device 24 contains an executable program which, when run, produces a first operation interface with which the user can perform various settings and operations. The first terminal device 24 may be a desktop computer, a notebook computer or a smart handheld device such as a smartphone or a tablet, but is not limited to these. In one embodiment, the facilitator uses the first operation interface on the first terminal device 24 to retrieve the audio/video streaming signal from the storage device 23 and to configure it, thereby generating setting information; the setting information includes pause points set at specific time points of the stream and at least one question designed for each pause point. In one embodiment, a pause point may be a moment at which a particular participant appears in close-up, or a moment during the discussion that attracted everyone's attention or caused a commotion.
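
The setting information itself could be persisted as something as simple as a JSON file stored next to the stream, as in this hedged sketch; the file name, keys and example questions are all invented for illustration.

```python
import json

def add_pause_point(setting: dict, time_sec: float, questions: list) -> None:
    """Append a pause point with its questions to the facilitator's setting information."""
    setting.setdefault("pause_points", []).append(
        {"time_sec": time_sec, "questions": list(questions)}
    )

# A facilitator marking two moments in a recorded stream (values are illustrative).
setting_info = {"stream_id": "session-01-participant-90A"}
add_pause_point(setting_info, 754.0, ["What were you trying to say at this moment?"])
add_pause_point(setting_info, 1312.5, ["How do you think the others perceived this exchange?"])

# The setting information is stored alongside the stream for later playback.
with open("session-01-participant-90A.settings.json", "w", encoding="utf-8") as fh:
    json.dump(setting_info, fh, ensure_ascii=False, indent=2)
```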

In one embodiment, the facilitator can design different pause points and questions for each participant. For example, the facilitator can set pause points in the close-up footage of participant 90A and design related questions, and likewise set individual pause points and questions for participants 90B to 90D. In another embodiment, the pause points and questions may be the same for every participant.

The second terminal device 25 contains an executable program which, when run, produces a second operation interface with which a participant can perform various operations. The second terminal device 25 may be a desktop computer, a notebook computer or a smart handheld device such as a smartphone or a tablet, but is not limited to these. In this embodiment, after returning home, a participant can use the second terminal device 25 and the second operation interface to retrieve the corresponding audio/video streaming signal from the storage device 23 and play it back. During playback, the second operation interface reads the setting information previously prepared by the facilitator, pauses at each pause point and displays the corresponding questions. The participant then answers the questions, and the second operation interface stores the answers back to the storage device 23.
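
What the second operation interface does during playback can be summarized by the following sketch: play up to each pause point, stop, present the questions, collect the replies and keep them for storage. The player and dialog callbacks are placeholders, not an actual media API.

```python
def run_playback(setting_info: dict, participant_id: str, play_until, pause, ask) -> list:
    """Replay the stream, stopping at each pause point to collect answers.

    play_until(time_sec), pause() and ask(question) are placeholders for the media
    player and question dialog of the second operation interface.
    """
    answers = []
    points = sorted(setting_info.get("pause_points", []), key=lambda p: p["time_sec"])
    for point in points:
        play_until(point["time_sec"])      # play from the current position to the pause point
        pause()
        for question in point["questions"]:
            reply = ask(question)          # typed text or a transcribed voice reply
            answers.append({"participant_id": participant_id,
                            "time_sec": point["time_sec"],
                            "question": question,
                            "reply": reply})
    play_until(None)                       # play the remainder of the stream
    return answers
```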

When a participant plays back the streaming video signal through the second operation interface of the second terminal device 25 as described above, the effects of meta-thinking and empathic thinking can be achieved. Systematic thinking involves three levels: the first is logical thinking, the second is empathic thinking, and the third is meta-thinking. Logic centers on the self, empathic thinking centers on the other party, and meta-thinking centers on everyone. The logical thinking part is the thinking the participants do while discussing the subject, and it belongs to logical thinking training; because logical thinking is self-centered, it is easy to train. Empathic and meta-thinking, however, are generally hard to learn, because learners usually look only at outcomes, typically learning from failures and reverse-engineering them to find causes; this can only analyze causes and does not by itself produce improvement. By playing back the streaming video on the second terminal device 25, pausing at one or more specific time points and posing related questions, the present invention lets participants step outside the frame of traditional logical thinking and train empathic thinking and meta-thinking.

It should be noted that, when answering, participants can respond by voice or text to all the questions designed at the pause points. Moreover, because the system 2 has a sensing device, it can control the image capture devices 200, based on the sensed physiological signals, to record close-ups of participants whose signals exceed a certain level. During subsequent playback, a participant can watch their own close-up footage and see themselves from a second-person perspective, and by answering the preset questions at the pause points they carry out a reflective exercise in empathic thinking. Likewise, because the image capture devices 200 of the present invention also record the whole classroom, a participant can watch not only their own close-ups but also the full-length recording, viewing the entire session from a bystander's perspective; answering the questions at the pause points then builds a reflective exercise in meta-thinking.

In one embodiment, the storage device 23 is located in a cloud server and every participant has a corresponding account; when a participant connects to the cloud server with the second terminal device 25, they obtain the corresponding audio/video streaming signal by entering their account and password.

Please refer to FIG. 2, which is a schematic flowchart of an embodiment of the systematic thinking learning method of the present invention. The process 3 includes step 30 of providing a cognitive learning system, which in one embodiment may have the architecture shown in FIG. 1, including an image capture system 20 with a plurality of image capture devices 200, a sensing device with a plurality of sensing modules 210, an arithmetic processor 22, a first terminal device 24 and a second terminal device 25. The cognitive learning system is set up in a space, for example a classroom R, in which there is a facilitator, such as a teacher, and several participants, such as students. The features of these elements are as described above and are not repeated here. The following process is explained with reference to the system of FIG. 1.

After the system is set up, step 31 is performed: the facilitator has the participants discuss and interact about a subject to construct a teaching scenario. In this step the facilitator can set a discussion topic for the participants to discuss and express opinions on. In one embodiment, the facilitator divides the participants into several groups to make the discussion easier to run. In one embodiment, every participant wears a sensing module 210 that senses their physiological signals during the discussion. In another embodiment, the sensing device may also include the image capture device 200: the captured images of the participants are sent to the arithmetic processor 22 for facial expression detection, and the detection result serves as the physiological signal. In yet another embodiment, the wearable sensing modules 210 and the image capture device 200 may be combined for detection.

During the discussion, step 32 is performed: the image capture system records the teaching scenario. Since the image capture system has a plurality of image capture devices 200, some of them can be placed at different positions in the classroom to record the whole discussion from different viewing angles. Step 33 follows: the sensing device senses, in the teaching scenario, a physiological signal of each of the participants. The main purpose of step 33 is to detect, while the participants are discussing, whether each participant shows emotional or physiological reactions such as speaking, anger, anxiety, joy or tension, which then serve as the basis for deciding what to record next.

The detected physiological signals are passed to the arithmetic processor 22 for processing. In step 34, the arithmetic processor 22 receives the physiological signal of each participant and judges its status; when a physiological signal meets a specific condition, it controls a specific image capture device 200 to capture images of the corresponding participant, and the images are stored in a storage device 23 to form an audio/video streaming signal. In one embodiment there are several image capture devices 200, so some of them perform the recording of step 32 while others perform the close-up recording of step 34. In other words, when the arithmetic processor 22 determines that certain physiological signals meet the criterion for close-up recording, it controls an image capture device 200 to take close-up images of the participant corresponding to those signals.

A few examples illustrate this:

Scenario 1: When a participant speaks, a sound signal is produced, so the sensing module 210 they wear detects it. When the arithmetic processor 22 judges that the sound signal exceeds a certain level, it controls an image capture device 200 to track the sound signal and take close-up images of that participant.

Scenario 2: The images obtained by the image capture device 200 recording the whole scene are transmitted to the arithmetic processor 22 for image recognition; when a participant is found to stand up, an image capture device is controlled to take a close-up of the person standing.

Scenario 3: When a participant has an emotional change, the physiological signal carries blood pressure, pulse or heartbeat information; when the arithmetic processor 22 judges from this information that a participant is having an emotional reaction, it likewise controls the image capture device 200 to take a close-up of that participant.

The foregoing close-up capture scenarios are only illustrative embodiments, and the invention is not limited to these examples; a rule-style sketch combining them is given below.
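
A compact way to read the three scenarios together is as a small rule set evaluated by the arithmetic processor, as in this sketch; the thresholds and the particular fields inspected are illustrative assumptions rather than values taken from the patent.

```python
def should_record_closeup(audio_level: float, heart_rate_bpm: float,
                          is_standing: bool, facial_expression: str) -> bool:
    """Return True when any of the illustrative trigger conditions is met."""
    speaking = audio_level >= 0.6               # scenario 1: the wearer's voice exceeds a level
    standing = is_standing                      # scenario 2: image recognition sees someone stand up
    agitated = heart_rate_bpm >= 110            # scenario 3: vital signs suggest an emotional reaction
    expressive = facial_expression in {"smile", "anger", "sadness"}
    return speaking or standing or agitated or expressive
```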

It should also be explained how an image capture device 200 can correctly capture the participant whose physiological signal meets the close-up recording criterion. In one embodiment, the image capture device 200 is mounted on a drive mechanism that controls its spatial position, such as a six-axis motion base, so the control signal from the arithmetic processor 22 can drive the device through rotations about three axes and translations along three axes. There are several ways to determine the position of the participant to be captured. In one embodiment, when the image capture devices 200 are installed, a chosen location in the classroom serves as the origin for calibration, so each image capture device can change the direction in which it captures images under the control of the arithmetic processor. The position can then be found by image-based localization: for example, in Scenario 2 above and as shown in FIG. 3A, as long as at least three image capture devices 200 simultaneously capture the participant who raised a hand to speak, the participant's position can be determined from the current orientations of those three devices, and with the position known the close-up recording can proceed. Such localization methods are conventional techniques and are not described in detail here; a bearing-intersection sketch follows below.
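
One standard way to realize the bearing-intersection idea is a least-squares estimate of the point closest to all the viewing rays, sketched below with NumPy; this is a generic formulation offered as an assumption about how such localization might be implemented, not text from the patent.

```python
import numpy as np

def locate_from_bearings(camera_positions, bearing_vectors):
    """Estimate the 2D point closest (in least squares) to all camera viewing rays.

    camera_positions: (x, y) coordinates of the calibrated cameras in the classroom frame.
    bearing_vectors:  (dx, dy) vectors pointing from each camera toward the target.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(camera_positions, bearing_vectors):
        p = np.asarray(p, dtype=float)
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)  # projector onto the direction orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Illustrative example: three cameras whose viewing rays all pass through the point (4, 4).
cameras = [(0.0, 0.0), (8.0, 0.0), (0.0, 6.0)]
bearings = [(1.0, 1.0), (-1.0, 1.0), (2.0, -1.0)]
print(locate_from_bearings(cameras, bearings))  # approximately [4. 4.]
```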

In another localization embodiment, each sensing module additionally has a wireless transmitting element, which may use Bluetooth, infrared, RFID, ZigBee, UWB or ultrasound, and indoor positioning techniques such as triangulation algorithms are applied to the signals produced by these transmitting elements. Commonly used signal-processing localization methods include received signal strength indication (RSSI) positioning, which estimates the distance between the signal point and the receiving point from the strength of the received signal and then computes a position from that data; angle of arrival (AOA) positioning, which uses measurements from directional antennas to determine the direction from which an active tag's signal arrives; and time of arrival (TOA) positioning, which relies on the time each transmitter's signal takes to reach a receiver: from that time, the distance between the receiver and each transmitter can be computed and the receiver's coordinates obtained with a triangulation formula. These positioning techniques are well known to those skilled in the positioning field and are not described further here. As shown in FIG. 3B, each sensing module 210 has a wireless transmitting element; with at least three signal receiving units 211, the above positioning techniques can locate where the signal is emitted, and the image capture device is then controlled to take a close-up of the corresponding participant. A distance-based trilateration sketch is given below.
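
In the spirit of the TOA approach, the receiving units can be treated as anchors at known positions and the time of flight converted to distances, after which the transmitter's coordinates follow from a linearized least-squares solve; the sketch below shows that standard computation with invented example numbers.

```python
import numpy as np

def trilaterate(anchor_positions, distances):
    """Estimate the 2D position of a transmitter from its distances to known anchors.

    anchor_positions: (x, y) coordinates of the signal receiving units.
    distances:        distance from the transmitter to each anchor, in the same order.
    """
    p = np.asarray(anchor_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the last circle equation from the others yields a linear system.
    A = 2.0 * (p[-1] - p[:-1])
    b = (d[:-1] ** 2 - d[-1] ** 2
         - np.sum(p[:-1] ** 2, axis=1) + np.sum(p[-1] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative example: three receiving units and distances derived from time of arrival.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
true_pos = np.array([4.0, 3.0])
dists = [float(np.linalg.norm(true_pos - np.array(a))) for a in anchors]
print(trilaterate(anchors, dists))  # approximately [4. 3.]
```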

Through steps 32 to 34, once the discussion is over, a full-length recording of the entire discussion and close-up footage of the individual participants are obtained and integrated into a streaming video signal. In one embodiment, each participant can correspond to one streaming video signal that contains the recording of the overall discussion together with the close-ups of that particular participant, presented either as picture-in-picture (PIP), as shown in FIG. 4A, or as a split screen, as shown in FIG. 4B. FIG. 4B shows a three-way split screen in which the leftmost pane is the view of all participants and the upper and lower panes on the right are close-ups, from two different viewing angles, taken when the physiological signal of participant 90A met the close-up recording criterion. In another embodiment, the streaming video signal may simply be the recording of the overall discussion, with the close-ups presented through hyperlinks overlaid on the video, as shown in FIG. 5. The order of steps 32 to 34 is not limited to that of the foregoing embodiment; their order of execution may be changed.
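
Before rendering, the composition of one participant's stream could be described by a simple plan like the sketch below, with the full-scene recording as the base layer and each close-up segment as an overlay; whether an overlay is rendered as the picture-in-picture of FIG. 4A or the split screen of FIG. 4B is a downstream choice. The field names and file names are invented for illustration.

```python
def plan_layout(full_scene_clip: str, closeup_clips: list) -> dict:
    """Describe how one participant's playback stream should be composed.

    closeup_clips is a list of (start_sec, end_sec, file_path) close-up segments for this
    participant; the full-scene recording always forms the base layer of the timeline.
    """
    plan = {"base": full_scene_clip, "overlays": []}
    for start, end, path in closeup_clips:
        plan["overlays"].append({
            "start_sec": start,
            "end_sec": end,
            "clip": path,
            "style": "picture_in_picture",  # a renderer could switch to "split_screen" here
        })
    return plan

example = plan_layout("session-01-full.mp4",
                      [(120.0, 150.0, "closeup-90A-cam2.mp4"),
                       (120.0, 150.0, "closeup-90A-cam3.mp4")])
print(example)
```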

After the whole discussion has been recorded and one or more streaming video signals have been formed, step 35 is performed: the facilitator uses a first operation interface provided by the first terminal device to configure the audio/video streaming signal and generate setting information, which includes pause points set at specific time points of the stream and at least one question designed for each pause point. The main purpose of this step is to let the participants, after the discussion activity, reflect further through playback of the streaming audio/video so as to practice empathic thinking and meta-thinking. The facilitator therefore edits the video streaming signal through the first operation interface generated after the first terminal device runs an application; the editing settings include setting the pause points and the questions with which participants watching the playback will interact at those pause points.

In one embodiment of step 35, the system generates a corresponding streaming audio/video signal for each participant after the discussion activity, so the facilitator can set different pause points for each participant's stream, and the questions set at each pause point can differ from participant to participant.

After step 35, step 36 is performed: the participant uses a second operation interface provided by the second terminal device to play back the audio/video streaming signal; during playback, the second operation interface can, according to the setting information, display the corresponding questions at each pause point for the participant to answer, and it stores the participant's answers. The playback includes the panoramic recording of the overall discussion as well as the close-ups whose recording was triggered at specific moments by the sensed physiological signals. By watching the panoramic recording afterwards and answering the related questions at the pause points, the participant reflects and is trained in meta-thinking; by watching their own close-up footage afterwards and answering the related questions at the pause points, the participant is trained in empathic thinking. Step 36 thus gives the participant, while watching the playback and answering questions, training opportunities in both meta-thinking and empathic thinking. Through this system, every participant receives logical thinking training from the discussion activity itself and empathic and meta-thinking training from the subsequent playback, achieving the goal of training the systematic thinking part of autonomous action within the core competencies.

To further improve the effectiveness of the systematic thinking training, step 37 can be performed after step 36: the facilitator gathers the participants again to discuss and interact about the subject anew, and steps 32 to 36 are repeated at least once. After the second round, another streaming video signal is obtained, together with the corresponding pause points and the answers to the related questions. The facilitator or the participants can then compare the results of the two rounds to further improve the training effectiveness of the participants' systematic thinking.
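
The comparison between the two rounds could start from something as simple as pairing the answers by question, as in this sketch; the answer record layout follows the earlier sketches and is an assumption.

```python
def pair_rounds(answers_round1: list, answers_round2: list) -> list:
    """Pair answers from two rounds by question text so they can be reviewed side by side."""
    first_round = {a["question"]: a["reply"] for a in answers_round1}
    comparison = []
    for a in answers_round2:
        comparison.append({
            "question": a["question"],
            "first_round_reply": first_round.get(a["question"]),
            "second_round_reply": a["reply"],
        })
    return comparison
```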

In summary, the cognitive learning system and systematic thinking learning method of the present invention provide, for the systematic thinking part of autonomous action, an approach that can actually be carried out and that effectively helps participants develop systematic thinking. An imaging system that tracks the participants' physiological state captures the participants' discussion of a topic and records it as a video stream. An operation interface then lets the facilitator set pause points in the stream, with corresponding questions displayed at those pause points. Afterwards, participants can play back the stream with its pause points and questions on another terminal device and answer the questions. Repeating this procedure trains the participants in empathic thinking and meta-thinking, achieving the training effect of systematic thinking within the core competencies.

The above merely describes preferred embodiments or examples of the technical means adopted by the present invention to solve the problem and is not intended to limit the scope of the patent. All equivalent changes and modifications consistent with the meaning of the claims of the present invention, or made within the scope of the claims, are covered by the scope of this patent.

2‧‧‧Cognitive learning system 20‧‧‧Image capture system 200‧‧‧Image capture device 21‧‧‧Sensing device 210‧‧‧Sensing module 211‧‧‧Signal receiving unit 22‧‧‧Arithmetic processor 23‧‧‧Storage device 24‧‧‧First terminal device 25‧‧‧Second terminal device 90, 90A~90D‧‧‧Participants R‧‧‧Classroom 3‧‧‧Systematic thinking learning method 30~37‧‧‧Steps

FIG. 1 is a schematic diagram of an embodiment of the cognitive learning system of the present invention. FIG. 2 is a schematic flowchart of an embodiment of the systematic thinking learning method of the present invention. FIG. 3A and FIG. 3B are schematic diagrams of how the cognitive learning system of the present invention locates a participant's position. FIG. 4A and FIG. 4B are schematic diagrams of different types of streaming video signals of the cognitive learning system of the present invention. FIG. 5 is a schematic diagram of another type of streaming video signal of the cognitive learning system of the present invention.

Claims (10)

1. A cognitive learning system, comprising:
an image capture system for recording a teaching scenario;
a sensing device for sensing, in the teaching scenario, a physiological signal of each of a plurality of participants;
an arithmetic processor for receiving the physiological signal of each participant and judging the status of the physiological signal, and, when a physiological signal meets a specific condition, controlling the image capture system to capture images of the participant corresponding to that physiological signal;
a storage device for storing an audio/video streaming signal recorded by the image capture system;
a first terminal device for providing a first operation interface to a facilitator, allowing the facilitator to configure the audio/video streaming signal to generate setting information, the setting information including a pause point set at a specific time point of the audio/video streaming signal and at least one question designed for each pause point; and
a second terminal device for providing a second operation interface to the participants, through which the audio/video streaming signal is played back and, during playback and according to the setting information, the corresponding at least one question is displayed at each pause point for the participant to answer, the second operation interface storing the content answered by the participant.

2. The cognitive learning system of claim 1, wherein the sensing device comprises a plurality of sensing modules respectively worn by each participant.

3. The cognitive learning system of claim 1, wherein the physiological signal comprises one of pulse, heartbeat, sound and facial expression, or any combination thereof.

4. The cognitive learning system of claim 1, wherein the storage device is disposed in a cloud storage system, and the first video streaming signal is transmitted to the storage device through a network for storage.
5. A system thinking learning method, comprising the following steps:
(a) providing a cognitive learning system that includes an image capture system, a sensing device, an arithmetic processor, a first terminal device, and a second terminal device;
(b) having a facilitator lead a plurality of participants in discussing and interacting on a topic, so as to construct a teaching scenario;
(c) video-recording the teaching scenario with the image capture system;
(d) sensing, with the sensing device, a physiological signal of each of the plurality of participants in the teaching scenario;
(e) having the arithmetic processor receive the physiological signal of each participant and determine its status, and, when a physiological signal meets a specific condition, control the image capture system to capture an image of the participant corresponding to that physiological signal and store the image in a storage device to form an audio-video streaming signal;
(f) having the facilitator, through a first operation interface provided by the first terminal device, configure the audio-video streaming signal to generate setting information, the setting information including at least one pause point set at a specific time point of the audio-video streaming signal and at least one question designed for each pause point;
(g) having the participant, through a second operation interface provided by the second terminal device, play back the audio-video streaming signal, wherein during playback and according to the setting information the corresponding at least one question is displayed at each pause point for the participant to answer, the second operation interface storing the content answered by the participant; and
(h) after step (g), having the facilitator gather the plurality of participants again to discuss and interact on the topic anew, and repeating steps (c)-(g) at least once.

6. The system thinking learning method of claim 5, further comprising, after step (h) has been performed once, a step of comparing the two audio-video streaming signals thus obtained, together with the answers the participants gave at the pause points contained in those two signals.

7. The system thinking learning method of claim 5, wherein step (h) further comprises a step of playing back the previous audio-video streaming signal together with the content each participant answered for the questions at the corresponding pause points.
8. The system thinking learning method of claim 5, wherein the sensing device comprises a plurality of sensing modules respectively disposed on each participant.

9. The system thinking learning method of claim 5, wherein the physiological signal includes one of pulse, heartbeat, voice, and facial expression, or any combination thereof.

10. The system thinking learning method of claim 5, wherein the storage device is disposed in a cloud storage system, and the audio-video streaming signal is transmitted over a network to the storage device for storage.
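Claim 1 describes two cooperating data flows: a trigger loop in which the arithmetic processor watches each participant's physiological signal and directs the image capture system toward whoever meets the specific condition, and the facilitator's setting information, which attaches at least one question to each pause point of the recorded stream. The Python sketch below is only an illustration of those two ideas, not the patented implementation; the names `PausePoint`, `SettingInfo`, `monitor_signals`, `play_back`, and the numeric threshold are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class PausePoint:
    time_sec: float          # position within the audio-video streaming signal
    questions: List[str]     # at least one question per pause point


@dataclass
class SettingInfo:
    stream_id: str
    pause_points: List[PausePoint] = field(default_factory=list)


def monitor_signals(signals: Dict[str, float],
                    threshold: float,
                    capture_participant: Callable[[str], None]) -> None:
    """Direct the image capture system at any participant whose physiological
    signal (e.g. heart rate) satisfies the 'specific condition' of claim 1,
    here simplified to exceeding a fixed threshold."""
    for participant_id, value in signals.items():
        if value >= threshold:
            capture_participant(participant_id)


def play_back(stream_frames: List[str],
              fps: float,
              setting: SettingInfo,
              ask: Callable[[str], str]) -> Dict[float, List[str]]:
    """Replay the stream and, at each pause point, collect the participant's
    answers to the attached questions, keyed by the pause-point time."""
    answers: Dict[float, List[str]] = {}
    pending = sorted(setting.pause_points, key=lambda p: p.time_sec)
    for frame_index, _frame in enumerate(stream_frames):
        t = frame_index / fps
        while pending and pending[0].time_sec <= t:
            point = pending.pop(0)
            answers[point.time_sec] = [ask(q) for q in point.questions]
    return answers
```

The capture and questioning steps are injected as callables so the sketch stays self-contained; a real deployment would read the signals from the wearable sensing modules and drive an actual camera and the second operation interface.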
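Steps (c)-(h) of claim 5, together with claim 6, describe an iteration: record a discussion, annotate it with pause points and questions, collect answers, then repeat and compare the two rounds. A minimal sketch of the comparison step follows, assuming the answers of each round are kept as a dictionary keyed by pause-point time; the function name `compare_rounds` and the sample answers are illustrative only.

```python
from typing import Dict, List, Tuple

# answers for one round: pause-point time -> list of answers given at that point
Answers = Dict[float, List[str]]


def compare_rounds(first: Answers,
                   second: Answers) -> List[Tuple[float, List[str], List[str]]]:
    """Pair up the answers given at the same pause point in two successive
    discussion rounds, so the facilitator and participants can review how
    their reasoning changed between iterations of steps (c)-(g)."""
    report = []
    for pause_time in sorted(set(first) | set(second)):
        report.append((pause_time,
                       first.get(pause_time, []),
                       second.get(pause_time, [])))
    return report


if __name__ == "__main__":
    round_1 = {12.0: ["I thought the group ignored B's idea."]}
    round_2 = {12.0: ["This time we paused and let B finish speaking."]}
    for t, before, after in compare_rounds(round_1, round_2):
        print(f"pause @ {t:>5.1f}s  before: {before}  after: {after}")
```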
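Claims 4 and 10 only require that the recorded stream reach a cloud-hosted storage device over a network; no protocol is specified. One possible sketch is shown below, assuming a hypothetical HTTP endpoint (`https://storage.example.com/streams/`) and the third-party `requests` library; authentication, chunked upload, and retries are omitted.

```python
import requests  # third-party HTTP client, assumed available


def upload_stream(local_path: str, stream_id: str,
                  base_url: str = "https://storage.example.com/streams/") -> bool:
    """Send a locally recorded audio-video stream to cloud storage over the
    network. Returns True on a 2xx response."""
    with open(local_path, "rb") as f:
        response = requests.put(base_url + stream_id, data=f,
                                headers={"Content-Type": "video/mp4"})
    return response.ok
```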
TW108106382A 2019-02-25 2019-02-25 Cognitive learning system and method for learning system thinking using the same TWI682354B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW108106382A TWI682354B (en) 2019-02-25 2019-02-25 Cognitive learning system and method for learning system thinking using the same
CN201910528044.4A CN111613104B (en) 2019-02-25 2019-06-18 Cognitive learning system and system thinking learning method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW108106382A TWI682354B (en) 2019-02-25 2019-02-25 Cognitive learning system and method for learning system thinking using the same

Publications (2)

Publication Number Publication Date
TWI682354B true TWI682354B (en) 2020-01-11
TW202032494A TW202032494A (en) 2020-09-01

Family

ID=69942475

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108106382A TWI682354B (en) 2019-02-25 2019-02-25 Cognitive learning system and method for learning system thinking using the same

Country Status (2)

Country Link
CN (1) CN111613104B (en)
TW (1) TWI682354B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102485165A (en) * 2010-12-02 2012-06-06 财团法人资讯工业策进会 Physiological signal detection system and device capable of displaying emotions, and emotion display method
TW201300081A (en) * 2011-06-17 2013-01-01 Ind Tech Res Inst System, method, recording medium and computer program product for calculating physiological index
CN103055403A (en) * 2013-02-01 2013-04-24 四川大学 Mood training usage method and device
TWI516247B (en) * 2013-05-07 2016-01-11 南臺科技大學 Method for analyzing emotional physiological signals of depressive tendency for home care
TWI522958B (en) * 2014-06-11 2016-02-21 國立成功大學 Method for physiological signal analysis and its system and computer program product storing physiological signal analysis program
TW201822134A (en) * 2016-12-01 2018-06-16 易思醫創有限公司 Physiological sensor system for distinguishing personal characteristic
TWM581261U (en) * 2019-02-25 2019-07-21 山衛科技股份有限公司 Cognitive learning system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005275290A (en) * 2004-03-26 2005-10-06 Mitsubishi Electric Information Systems Corp Training system and program for training
JP4631014B2 (en) * 2004-07-07 2011-02-16 学校法人東海大学 Electronic teaching material learning support device, electronic teaching material learning support system, electronic teaching material learning support method, and electronic learning support program
CN101458715A (en) * 2008-12-31 2009-06-17 北京大学 Information publishing and playing method synchronized with video
CN102129644A (en) * 2011-03-08 2011-07-20 北京理工大学 Intelligent advertising system having functions of audience characteristic perception and counting
CN102270397B (en) * 2011-08-08 2013-04-24 福州锐达数码科技有限公司 System for realizing design of classroom question and classroom answering under projection state
CN104732823A (en) * 2013-12-19 2015-06-24 鸿合科技有限公司 Interaction type teaching method and device
CN203968235U (en) * 2014-06-25 2014-11-26 洋铭科技股份有限公司 Intelligent image tracing system
CN106792215A (en) * 2016-12-12 2017-05-31 福建天晴数码有限公司 Education video order method and its system
CN207752445U (en) * 2017-12-08 2018-08-21 李少锋 Study condition monitors system
CN108257056A (en) * 2018-01-23 2018-07-06 余绍志 A kind of classroom assisted teaching system for the big data for being applied to teaching industry
CN108447329A (en) * 2018-05-11 2018-08-24 上海陌桥网络科技有限公司 Learning effect test method, learning resource manager device, system and client
CN108810638A (en) * 2018-07-06 2018-11-13 合肥明高软件技术有限公司 A kind of on-line study monitor system


Also Published As

Publication number Publication date
TW202032494A (en) 2020-09-01
CN111613104A (en) 2020-09-01
CN111613104B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
US11798431B2 (en) Public speaking trainer with 3-D simulation and real-time feedback
US11744495B2 (en) Method for objectively tracking and analyzing the social and emotional activity of a patient
CN110349667B (en) Autism assessment system combining questionnaire and multi-modal model behavior data analysis
Hüttenrauch et al. Investigating spatial relationships in human-robot interaction
WO2018171223A1 (en) Data processing method and nursing robot device
US20160042648A1 (en) Emotion feedback based training and personalization system for aiding user performance in interactive presentations
US10474793B2 (en) Systems, apparatus and methods for delivery and augmentation of behavior modification therapy and teaching
Bidwell et al. Classroom analytics: Measuring student engagement with automated gaze tracking
US20220309947A1 (en) System and method for monitoring and teaching children with autistic spectrum disorders
US9355366B1 (en) Automated systems for improving communication at the human-machine interface
CN111477055A (en) Virtual reality technology-based teacher training system and method
JP2018180503A (en) Public speaking assistance device and program
TWI682354B (en) Cognitive learning system and method for learning system thinking using the same
TWI687904B (en) Interactive training and testing apparatus
TWM581261U (en) Cognitive learning system
Peng et al. Reading Students' Multiple Mental States in Conversation from Facial and Heart Rate Cues.
Jung et al. Mobile eye-tracking for research in diverse educational settings
López et al. EMO-Learning: Towards an intelligent tutoring system to assess online students’ emotions
Webb et al. SoGrIn: a non-verbal dataset of social group-level interactions
KR102383457B1 (en) Active artificial intelligence tutoring system that support teaching and learning and method for controlling the same
Artiran et al. Analysis of Gaze, Head Orientation and Joint Attention in Autism with Triadic VR Interviews
TWI780405B (en) Learning trajectory analysis system
Shoukry et al. ClasScorer: Towards a Gamified Smart Classroom
TWI715079B (en) Network learning system and method thereof
CN101799863A (en) Image recognition algorithm and application thereof