TWM528481U - Systems and applications for generating augmented reality images - Google Patents
Systems and applications for generating augmented reality images
- Publication number: TWM528481U
- Application number: TW105202133U
- Authority
- TW
- Taiwan
- Prior art keywords
- module
- augmented reality
- generation system
- image
- reality image
- Prior art date
Landscapes
- Processing Or Creating Images (AREA)
Description
The present utility model relates to a system for interaction between a user and an object to be viewed under magnification, and to applications thereof; in particular, it relates to an augmented reality image system for generating and manipulating images of microscopic objects, and to applications of that system.
As the computing and processing power of computer systems has grown, and as users demand more in terms of visualization, intuitive operation, quality, response and transmission time, and interactivity of input/output data and images, much multimedia image content has shifted from flat, static presentation to stereoscopic, dynamic virtual or augmented reality (AR) images. Augmented reality is an imaging technology derived from virtual reality (VR). The chief difference between the two is that virtual reality creates a virtual environment to simulate the real world, whereas augmented reality takes objects or scenes that are not actually present and displays them in a designated real space, materializing virtual objects in real life. In other words, augmented reality uses the real world as its basis and augments it with virtual information: by combining "real environment images" with "computer-generated virtual images," it allows users to obtain relevant information and to see with their own eyes how they manipulate virtual three-dimensional objects in a real environment. Accordingly, if augmented reality applications can offer greater interactivity while reducing latency and alignment error, users' motivation to learn and to use such systems will benefit considerably.
Known approaches to interactive augmented reality applications can be classified as marker-based or markerless, and as optical see-through or video see-through. Marker-based augmented reality provides a computer-recognizable identification tag, such as the commonly used interactive card, whose content is designed according to the intended AR application and function. The user reads the information on the card through a camera or mobile phone, and the corresponding augmented reality image is then superimposed onto the real world shown on the display; that is, the display presents the three-dimensional AR image on top of the interactive card. To be recognized correctly, however, the card's appearance and dimensions must satisfy specific conditions: for example, the card must be rectangular, must have a continuous border (usually solid black or solid white), and the marker image inside the border must not be rotationally symmetric. The approach is therefore constrained by the interactive card. Markerless augmented reality instead stores predefined static images in a feature database; when one of these images is detected in the display or video frame, the corresponding augmented feature is superimposed on the frame as a virtual object. However, both prior-art techniques share a limitation: they cannot provide augmented reality effects when the source material is dynamic or multimedia, and they lack a sense of visual depth.
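The requirement that the marker image inside the border must not be rotationally symmetric can be illustrated with a short sketch (an illustrative toy, not the patent's method): a detector must be able to tell the four 90° orientations of a square marker apart, which is only possible when no rotation maps the pattern onto itself.

```python
def rotate90(pattern):
    """Rotate a square binary pattern (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*pattern[::-1])]

def is_orientation_unambiguous(pattern):
    """True if no 90/180/270-degree rotation maps the pattern onto itself,
    so a detector can always recover the marker's orientation."""
    rotated = pattern
    for _ in range(3):
        rotated = rotate90(rotated)
        if rotated == pattern:
            return False
    return True

symmetric = [[1, 0], [0, 1]]    # maps onto itself under a 180-degree rotation
asymmetric = [[1, 1], [0, 1]]   # all four orientations are distinct
print(is_orientation_unambiguous(symmetric))   # False
print(is_orientation_unambiguous(asymmetric))  # True
```

A marker failing this check could still be detected, but its pose could not be determined unambiguously, which is why such patterns are excluded.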
As for so-called optical see-through technology, a half-silvered mirror presents the real environment, image, or object directly, while the virtual environment or object is presented by reflection off that mirror. Video see-through technology instead superimposes virtual environment images or objects onto a sequence of images of the real environment captured by a camera. The advantage of the former is that there is no display delay in showing the user the real environment; however, because the real and virtual content are not synchronized, alignment errors and display delay arise, and brightness is reduced by the half-silvered mirror. The latter has no such asynchrony at display time, so there is no alignment error or relative display delay, but a display delay does occur when the augmented reality image is presented to the user or viewer.
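The core of the video see-through approach described above — superimposing a virtual object onto a captured camera frame — reduces to per-pixel alpha compositing. A minimal sketch, with hypothetical grayscale frame values chosen purely for illustration:

```python
def composite(real_frame, virtual_frame, alpha_mask):
    """Video see-through compositing: blend a virtual layer over a real
    camera frame, pixel by pixel. alpha=1 shows the virtual object,
    alpha=0 shows the captured real environment."""
    out = []
    for real_row, virt_row, a_row in zip(real_frame, virtual_frame, alpha_mask):
        out.append([a * v + (1 - a) * r
                    for r, v, a in zip(real_row, virt_row, a_row)])
    return out

real = [[100, 100], [100, 100]]        # grayscale camera frame
virtual = [[255, 255], [255, 255]]     # rendered virtual object
mask = [[1.0, 0.0], [0.5, 0.0]]        # where the virtual object covers the frame
print(composite(real, virtual, mask))  # [[255.0, 100.0], [177.5, 100.0]]
```

Because both layers pass through the same output path, they can be synchronized before display, which is why video see-through avoids the real/virtual alignment error of the optical approach at the cost of an overall display delay.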
Moreover, for users who must observe, study, or operate on an object through a conventional electronic or optical microscope, consulting books or reference materials, or recording observations and results, often forces frequent interruptions away from the eyepiece or the instrument, and the needed references may not be found quickly; other computers, web pages, or databases may have to be opened to obtain the required non-flat, non-static multimedia material. Conventional microscopes also lack mechanisms for sharing, assessment, warning, two-way real-time interactive guidance, remote control, and video streaming; they are inconvenient, inefficient, and fail to meet user needs; they lack visual effects such as graphics and images; and their interfaces are unfriendly and unintuitive. The absence of interaction and sharing mechanisms further suppresses learning motivation and limits the fields of application. Even where non-traditional or conventional optical, stereoscopic, or surgical microscope devices might be used for such activities, no technical solution has yet appeared that targets real microscopic objects requiring micromanipulation, rather than simulation or training prostheses and models (for example, artificial eyes used for cataract or eye-surgery training, or animal eyeballs); that allows the instruments already in use in the field (for example capsulorhexis forceps, scissors, or a laser scalpel) to be operated instead of simulated operating objects; that avoids the environment or the workstation being soaked and contaminated by blood and fluids from living biological tissue; that reduces the cost of specially sourcing or ordering such biological tissue material; that correctly recognizes the instruments' appearance; that reduces computational complexity; and that produces high-quality binocular stereoscopic augmented reality images.
A primary objective of the present utility model is to provide an augmented reality image generation system that remedies the lack of interactivity and augmented reality imaging in conventional microscope devices, which fail to satisfy the needs for observing, studying, and interacting bidirectionally with microscopic objects.
Another primary objective of the present utility model is to provide a microsurgery teaching or training system applying a video see-through technique, so as to effectively broaden the application fields and interactivity of augmented reality technology.
A further primary objective of the present utility model is to provide an electronic component assembly training and inspection system applying a video see-through technique, so as to effectively broaden the application fields and interactivity of augmented reality technology.
A further primary objective of the present utility model is to provide an object microscopic observation and interaction system applying a video see-through technique, so as to effectively broaden the application fields and interactivity of augmented reality technology.
A further primary objective of the present utility model is to provide a system-on-chip (SoC) system, so as to effectively reduce system construction cost, simplify the control flow, and miniaturize the system.
A further primary objective of the present utility model is to provide a digital microscope module that effectively integrates microscopy- and processing-related devices into a modular, systematized unit.
To achieve the above objectives, the augmented reality image generation system provided by the present utility model comprises a processing module and a digital microscope module with a plurality of camera units, the latter capturing instantaneous images of an object according to a control signal and transmitting them to the processing module, where the object is a miniature object whose volume or mass makes it suitable for observation and interactive operation through microscopy and convergence processing. The processing module tracks or detects and interprets the user's manipulation actions to generate corresponding control signals; receives an instantaneous image, captured in response to a manipulation action or control signal, containing at least one object and/or the state of at least one operation or feature region thereof; processes that instantaneous image to produce at least one virtual object; and generates an augmented reality image with the virtual object superimposed. If the manipulation action triggers an interactive application — including switching the object's display mode or enabling and disabling real-time guidance or sharing — the processing module generates the augmented reality image with part or all of the object rendered transparent, solid, or animated and/or with the changed state of the operation or feature region, and/or before or after superimposing, invoking, or displaying interfaces, images, objects, videos, or information associated with the interactive application and the object. The user can thereby obtain relevant information through augmented reality technology and, by seeing with their own eyes how they manipulate virtual three-dimensional objects in the real environment, gains an immersive sense of realism and a good user experience, effectively improving motivation for interaction, learning, and use.
In the above embodiment of the present utility model, the digital microscope module may include a convergence module in which a convergence controller unit and a mirror unit are assembled. In response to a control signal or an automatic adjustment rule, the convergence controller unit adjusts the relative or geometric relationship between the mirror unit and the camera units when capturing instantaneous images, so as to eliminate the blur and related problems caused by insufficient convergence when an observer views fine objects at close range.
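The convergence adjustment described here can be illustrated with standard stereo toeing-in geometry (a generic sketch, not the patent's formula): given the spacing between the two camera units and the working distance to the object, each camera or mirror must be angled inward so the two optical axes meet at the object; the baseline and distance values below are hypothetical.

```python
import math

def convergence_angle_deg(baseline_mm, working_distance_mm):
    """Inward rotation, in degrees, for each of two cameras separated by
    `baseline_mm` so that their optical axes converge on a point
    `working_distance_mm` in front of the baseline's midpoint."""
    return math.degrees(math.atan((baseline_mm / 2) / working_distance_mm))

# e.g. camera units 60 mm apart converging on an object 100 mm away
angle = convergence_angle_deg(60, 100)
print(round(angle, 2))  # 16.7
```

At microscope working distances the required angle grows quickly, which is why close-range stereo capture without such adjustment produces the convergence failure and blur the text mentions.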
In the above embodiment of the present utility model, the augmented reality image generation system may further comprise a light source module primarily providing ambient illumination when the object is photographed; a single-chip microcontroller interface module that actuates the digital microscope module according to the control signal; and a display module, which may be a head-mounted display, a stereoscopic display, or a flat-panel display for showing the augmented reality image. The host computer or portable electronic device and the display module may be arranged in a local, remote, or cloud architecture for control and interaction. In other embodiments, the system may further comprise an operating platform and a positioning module, allowing the digital microscope module to be mounted and moved bidirectionally along at least one axis of an operating space in response to the control signal. It should be noted that a manipulation action may consist of the user operating a simulated operating object, a hand, or a real surgical or laboratory instrument of the kind ordinarily applied to the object, moving it into and out of the operating space; approaching, touching, leaving, operating on, inserting into, or fastening to part or all of the object; changing the state of an operation or feature region; or selecting or applying the manipulation action through a user control interface assembled with or coupled to the processing module. The user control interface module may be a foot pedal device, a manual joystick device, a hand-held, head-mounted, or wearable input/output interface device, or a mobile communication device, and may further be provided with operating parameter adjustment objects and/or display-mode switching objects for the digital microscope module, allowing the user to adjust the focal length, zoom ratio, movement distance, rotation angle, or light source parameters of the digital microscope module, and offering a choice of augmented reality image display and arrangement modes, for example single display, side-by-side display, and array display. Furthermore, in other embodiments the processing module also performs image feature tracking, color detection, or motion detection on the instantaneous image, in order to obtain virtual objects or to decide whether an interactive application should be, or has been, triggered. Together with an assessment module and/or an error warning module, the system can produce or output an evaluation result, a misalignment response, or an interactive application trigger prompt when the augmented reality image manipulated by the user does or does not conform to a preset specification. A learning feedback or community sharing module may also be provided, allowing the user to store, edit, transmit, or share augmented reality images, evaluation results, misalignment responses, or trigger prompts.
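The motion detection mentioned above can be sketched in its simplest form as frame differencing: pixels whose intensity changes between two instantaneous images by more than a threshold count as motion, and a trigger fires when enough pixels change. This is a generic illustration rather than the patent's detection algorithm, and the threshold values are arbitrary:

```python
def motion_detected(prev_frame, curr_frame, pixel_threshold=20, count_threshold=2):
    """Frame-differencing motion detector on grayscale frames (lists of rows).
    Returns True when at least `count_threshold` pixels changed by more than
    `pixel_threshold` — e.g. as a cue that an interactive application
    should be triggered."""
    changed = sum(
        1
        for prev_row, curr_row in zip(prev_frame, curr_frame)
        for p, c in zip(prev_row, curr_row)
        if abs(c - p) > pixel_threshold
    )
    return changed >= count_threshold

still = [[10, 10], [10, 10]]
moved = [[10, 90], [95, 10]]           # two pixels changed markedly
print(motion_detected(still, still))   # False
print(motion_detected(still, moved))   # True
```

Feature tracking and color detection would replace the per-pixel difference with a descriptor match or a color-range test, but the trigger structure stays the same.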
To achieve the above objectives, the microsurgery teaching or training system applying a video see-through technique provided by the present utility model is equipped with at least the aforementioned augmented reality image generation system, or executes the aforementioned method of operation. The object is a miniature, real body or tissue of an organism, or a specimen or prosthesis model used for simulation; examples include observing the activity of insects such as butterflies, or animating static specimens or historical artifacts through augmented reality image processing (for example, superimposing animation or video options and content showing a specimen butterfly or a fossil flapping its wings or rotating) to bring them to life. Using the video see-through technique, the digital microscope module captures instantaneous images that the processing module processes to generate virtual objects, superimposes, and outputs synchronously to the display module, thereby achieving the technical effects of eliminating alignment error and reducing latency.
In the above embodiment of the present utility model, the object may be the body, tissue, specimen, or prosthesis model of the eye, brain, skin, or bone of an animal or human. When the object is an eye body, tissue, specimen, or prosthesis model and the microsurgery is a cataract, retinal, macular, or corneal operation, the simulated operating object may be a probe device with a prompting mechanism, or one of the instruments physicians actually use in daily surgery or experiments, such as capsulorhexis forceps, scissors, or an electrocautery device. This brings training closer to actual practice and effectively increases physicians' training experience and their capacity to perform related operations and research.
To achieve the above objectives, the electronic component assembly training and inspection system applying a video see-through technique provided by the present utility model is equipped with at least the aforementioned augmented reality image generation system, or executes the aforementioned method of operation. The object is a circuit board, a carrier, or an electronic device into which the user inserts, or onto which the user fastens, an electronic component. Using the video see-through technique, the digital microscope module captures instantaneous images containing the object and/or the state of at least one operation or feature region thereof; the processing module processes these images to generate virtual objects, superimposes them, and outputs them synchronously to the display module, eliminating alignment error and display delay. In this embodiment, the operating object is a probe device with a prompting mechanism, and the actual surgical or laboratory instruments include real tools such as a soldering iron or tweezers.
To achieve the above objectives, the object microscopic observation and interaction system applying a video see-through technique provided by the present utility model is equipped with at least the aforementioned augmented reality image generation system, or executes the aforementioned method of operation. The object is selected from miniature organisms, plants, minerals, organic matter, inorganic matter, chemical elements, or compounds suitable for micromanipulation. Using the video see-through technique, the digital microscope module captures instantaneous images containing the object and/or the state of at least one operation or feature region thereof; the processing module processes these images to generate virtual objects, superimposes them, and outputs them synchronously to the display module, eliminating alignment error and reducing latency.
In the above embodiment of the present utility model, when the manipulation action triggers an interactive application, the processing module further generates an instantaneous image containing at least part or all of the object rendered transparent or solid, and/or an augmented reality image produced after superimposing, invoking, or displaying an interface, images, objects, and/or information associated with the interactive application and the object. When the manipulation action triggers an interactive application that switches the display mode, the display mode is a single display mode, a side-by-side display mode, or an array display mode, and the processing module further generates, according to the mode the user selects, part or all of the object rendered transparent or solid and/or an augmented reality image with an associated interface, image, object, and/or information superimposed and displayed, so as to produce augmented reality images in which single or multiple identical or different objects are displayed or arranged simultaneously.
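The single, side-by-side, and array display modes described above amount to tiling one or more object images into an output layout. A minimal sketch of the arrangement logic (illustrative only; real frames would be 2-D images rather than string labels, and the mode names are ours):

```python
def arrange(images, mode, columns=2):
    """Arrange image identifiers according to the selected display mode:
    'single' shows one image, 'side_by_side' shows one row,
    'array' tiles the images into a grid of `columns` columns."""
    if mode == "single":
        return [images[:1]]
    if mode == "side_by_side":
        return [list(images)]
    if mode == "array":
        return [list(images[i:i + columns])
                for i in range(0, len(images), columns)]
    raise ValueError(f"unknown display mode: {mode}")

frames = ["obj1", "obj2", "obj3", "obj4"]
print(arrange(frames, "single"))        # [['obj1']]
print(arrange(frames, "side_by_side"))  # [['obj1', 'obj2', 'obj3', 'obj4']]
print(arrange(frames, "array"))         # [['obj1', 'obj2'], ['obj3', 'obj4']]
```

The same layout function works whether the tiles are identical views of one object or views of several different objects, matching the "single or multiple identical or different objects" behavior described above.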
To achieve the above objectives, the present utility model further provides a system-on-chip (SoC) system comprising at least the processing module, to emulate the aforementioned systems or implement the aforementioned method of operation.
To achieve the above objectives, the present utility model further provides a digital microscope module for coupling or electrically connecting to the processing module of the aforementioned systems, or to the host computer or portable electronic device with which that module is assembled, electrically connected, or coupled, or for implementing the aforementioned method of operation. The digital microscope module comprises at least camera units that, in response to a manipulation action or a control signal generated by the processing module, capture instantaneous images containing at least the object and/or the state of at least one operation or feature region thereof and transmit them to the processing module, the object being a miniature object whose volume or mass makes it suitable for observation and interactive operation through microscopy and convergence processing.
In the above embodiment of the present utility model, the digital microscope module further comprises a convergence module, a positioning module, a user control interface module, and a display module. At least part of the convergence module is a beam-splitting element, so that each camera unit obtains an instantaneous image of the object and/or the state of the operation or feature region transmitted through and reflected by the beam splitter. Alternatively, a separate beam-splitting element is arranged between the convergence module and the camera units, so that each camera unit obtains an instantaneous image of the object and/or the state of the operation or feature region that is reflected by the convergence module and then transmitted through, or reflected again by, the beam splitter.
1‧‧‧Augmented reality image generation system
11‧‧‧Host computer or portable electronic device
121~125‧‧‧Objects
13‧‧‧Positioning module
131‧‧‧Operating platform
132‧‧‧Digital microscope module
133‧‧‧X-axis
14‧‧‧Single-chip microcontroller interface module
151, 152‧‧‧User control interface modules
16‧‧‧Network
171, 172‧‧‧Display modules
181‧‧‧Simulated operating object
182‧‧‧Hand
191, 192, 1003‧‧‧Augmented reality images
22‧‧‧Display mode switching object
231, 232‧‧‧Camera units
24‧‧‧Light source module
241‧‧‧LED
31‧‧‧Convergence module
311‧‧‧Mirror unit
400‧‧‧User's eyes
411‧‧‧First face of the mirror unit
412‧‧‧Second face of the mirror unit
421‧‧‧Light or image signal
422, 424~426‧‧‧Light or images
423‧‧‧Virtual object
61~64‧‧‧Operating parameter adjustment objects
70‧‧‧Manipulation action or command interpretation
71‧‧‧Digital microscope module focal length, movement distance, or rotation angle adjustment
72‧‧‧Light source parameter adjustment
73‧‧‧Zoom ratio adjustment
74‧‧‧Feature tracking
75‧‧‧Color detection
76‧‧‧Motion detection
77‧‧‧Interactive application
78‧‧‧Assessment module
79‧‧‧Error warning module
81‧‧‧Menu
821~823‧‧‧Operation or feature regions
T1~T4‧‧‧Time points or intervals
922~924, 1002‧‧‧Operation or feature regions
1001‧‧‧Artificial eye
1004‧‧‧Capsulorhexis forceps
FIG. 1 is a functional block diagram of an embodiment of the augmented reality image generation system of the present utility model.
FIG. 2 is a schematic diagram of the microscopic image convergence principle of an embodiment of the augmented reality image generation system.
FIG. 3 is an architecture diagram of the digital microscope module and the light source module of an embodiment of the augmented reality image generation system.
FIG. 4A is an architecture diagram of the digital microscope module, the positioning module, and the light source module of an embodiment of the augmented reality image generation system.
FIGS. 4B and 4C are partial component architecture and beam-splitting operation diagrams of different embodiments of the mirror unit of the augmented reality image generation system.
FIGS. 4D to 4F are partial component architecture and beam-splitting operation diagrams of different embodiments of the convergence module and the beam-splitting element of the digital microscope module.
FIG. 5 is an architecture diagram of the digital microscope module, the convergence module, the positioning module, and the light source module of an embodiment of the augmented reality image generation system.
FIGS. 6A and 6B are architecture diagrams of the user control interface and the simulated operating object of an embodiment of the augmented reality image generation system.
FIG. 7 is a flow chart of processing module function execution in an embodiment of the augmented reality image generation system.
FIGS. 8A to 8D are schematic diagrams of augmented reality images in different embodiments of the microsurgery teaching or training system applying a video see-through technique.
FIGS. 9A and 9B are schematic diagrams of augmented reality images in different embodiments of the electronic component assembly training and inspection system applying a video see-through technique.
FIGS. 10A and 10B are schematic diagrams of augmented reality images in different embodiments of the object microscopic observation and interaction system applying a video see-through technique.
Please refer to FIG. 1, which is a functional block diagram of an embodiment of the augmented reality image generation system of the present invention. As shown in FIG. 1, in this embodiment the augmented reality image generation system 1 comprises a processing module (not shown) assembled with or coupled to a computer host or portable electronic device 11, a positioning module 13, an operating platform 131, a digital microscope module 132, a single-chip microcontroller interface module 14, user manipulation interface modules 151 and 152, and display modules 171 and 172. The computer host or portable electronic device 11 may be electrically connected to the display module 171, or arranged with it in a local architecture, to display an augmented reality image 191; it may also be electrically connected to the display module 172 through a network 16, or arranged in a remote or cloud architecture, to display an augmented reality image 192, thereby enabling applications such as remote transmission, control, and sharing. The display modules 171 and 172 may be head-mounted displays, stereoscopic displays, or flat-panel displays for presenting augmented reality or stereoscopic images, and the display module 172 may further be paired with a terminal or server host having computing capability to perform remote control of the system of the present invention. The single-chip microcontroller interface module 14 and the processing module, or the computer host or portable electronic device 11 in which the processing module is built, may be integrated or coupled to each other, or may be arranged or coupled between the processing module and the digital microscope module 132, so as to actuate the digital microscope module 132 according to control signals sent from the computer host or portable electronic device 11 or from the single-chip microcontroller interface module 14. The operating platform 131, the digital microscope module 132, and the positioning module 13 may further be assembled individually or together with the display module 171. The operating platform 131 lets the user place a real body, tissue, training specimen, or prosthetic model of a miniature organism suitable for micromanipulation, or a circuit board, carrier, or electronic device into which electronic components can be inserted or fixed, as the object 121 to be observed and operated on (here, a butterfly). The positioning module 13, whose frame may be built from solid, load-bearing material, carries the digital microscope module 132 and, in response to control signals sent from the computer host or portable electronic device 11 or the single-chip microcontroller interface module 14, drives mechanisms such as MEMS and motor controls, so that the digital microscope module 132 can move bidirectionally and be positioned along at least one axis (for example, the X-axis 133) of the operating space defined with the operating platform 131 (that is, the space extending along, for example, the X-axis 133, Y-axis, and Z-axis of a three-dimensional Cartesian coordinate system whose origin is the upper surface of the operating platform 131).
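The control path above (host to single-chip microcontroller interface to positioning module) can be sketched as a small command encoder that keeps every requested move inside the operating space. This is a minimal illustration only: the command format, axis names, and workspace limits below are assumptions, as the document does not disclose the actual protocol of modules 13 and 14.

```python
# Hypothetical sketch of encoding axis-motion commands for the positioning
# module. Command syntax and axis limits are assumed for illustration; the
# patent does not specify the microcontroller protocol.

# Operating space assumed: origin at the platform's upper surface.
AXIS_LIMITS_MM = {"X": (0.0, 150.0), "Y": (0.0, 150.0), "Z": (0.0, 80.0)}

def encode_move(axis: str, target_mm: float) -> str:
    """Clamp the target position to the operating space and build a
    plain-text command that microcontroller firmware could parse."""
    lo, hi = AXIS_LIMITS_MM[axis]
    clamped = min(max(target_mm, lo), hi)
    return f"MOVE {axis} {clamped:.2f}"

# Bidirectional moves along the X-axis (133) stay inside the workspace:
print(encode_move("X", 200.0))  # clamped to "MOVE X 150.00"
print(encode_move("X", -10.0))  # clamped to "MOVE X 0.00"
```

Clamping at the encoder keeps the microscope head from being driven past the frame regardless of what the host requests.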
Please refer again to FIG. 1, together with FIGS. 2 to 4C. FIG. 2 is a schematic diagram of the microscopic image convergence principle of an embodiment of the augmented reality image generation system of the present invention; FIG. 3 is a block diagram of the digital microscope module and the light source module of an embodiment; FIG. 4A is a block diagram of the digital microscope module, the positioning module, and the light source module of an embodiment; and FIGS. 4B and 4C are partial component diagrams and light-splitting operation diagrams of different embodiments of the mirror unit of the augmented reality image generation system. The left side of FIG. 2 illustrates the interpupillary distance problem that arises when the left and right eyes image a miniature creature such as an ant. By convergence processing, applied gradually or by an automatic rule, the capture line of sight of one or both cameras can be adjusted until the result shown on the right side of FIG. 2 is obtained, eliminating the aforementioned blur and the mismatch with viewing requirements. Accordingly, in the embodiments shown in FIGS. 1, 3, and 4A, the digital microscope module 132 is assembled to include at least two camera units 231 and 232, which, according to the control signals, capture an instantaneous image of a miniature object whose volume or mass suits microscopic and convergence processing for observation and interaction, and/or of the state of an operation or feature region on that object (for example, interactive image content changed by an interactive application, which is explained in later paragraphs and not repeated here), and transmit it to the processing module. In addition, the digital microscope module may further include a convergence module 31 to carry out the convergence operation described above.
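The convergence adjustment of FIG. 2 amounts to toeing the two capture lines of sight inward until they intersect at the object. A minimal geometric sketch follows; the baseline and working-distance values are assumed examples for illustration, not figures taken from the document.

```python
import math

def vergence_angle_deg(baseline_mm: float, working_distance_mm: float) -> float:
    """Inward rotation needed by each camera so that both optical axes
    meet at a point centered between them at the given working distance."""
    return math.degrees(math.atan2(baseline_mm / 2.0, working_distance_mm))

# The closer the object (short working distance at high magnification),
# the larger the toe-in each camera unit needs:
print(vergence_angle_deg(24.0, 100.0))  # distant object, small angle
print(vergence_angle_deg(24.0, 30.0))   # near object, larger angle
```

This is why convergence matters far more for microscopy than for ordinary stereo rigs: at millimeter-scale working distances the required angle grows quickly, and uncorrected parallel cameras produce the double or blurred view shown on the left of FIG. 2.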
Continuing from the above embodiment, please also refer to FIG. 5, which is a block diagram of the digital microscope module, convergence module, positioning module, and light source module of an embodiment of the augmented reality image generation system of the present invention. In the embodiment shown in FIG. 4A, the convergence module 31 is assembled with a convergence controller unit (integrated as a control circuit and its controlled pivoting elements, and therefore not labeled separately) and a mirror unit 311 (implemented here as a V shape; the face of each side that receives reflected light may be designed or assembled with different or identical materials or functions according to the light-splitting requirements and the application). In response to the control signals, the convergence controller unit adjusts the relative or geometric relationship between the mirror unit 311 and the camera units 231 and 232 when capturing instantaneous images, thereby achieving the convergence operation. The light source module 24 is assembled as a ring of, for example, LEDs 241 with a hollow center, so that light can be reflected and the optical path for capturing instantaneous images remains unobstructed. In this embodiment, the light source module 24 is therefore assembled to project light toward the center point of the object; the light is reflected to the mirror unit 311 (implemented here as a V shape with a mirrored back face) for splitting and refraction toward the camera units 231 and 232, and it produces substantially no shadow or occlusion on the object. Furthermore, in the embodiments shown in FIGS. 4B and 4C, the first face 411 and the second face 412 of the mirror unit 311 may each be further assembled or coated as mirror surfaces; the difference is that a light or image signal 421 reflected over different paths or distances, and the refracted or transmitted light or images 422 and 424, may cause the captured instantaneous image to appear more blurred or sharper when viewed by the user's eyes 400.
Following the above, please also refer to FIGS. 6A, 6B, and 7. FIGS. 6A and 6B are block diagrams of the user manipulation interface and the simulated operating object of an embodiment of the augmented reality image generation system of the present invention, and FIG. 7 is a flowchart of the functions executed by the processing module of an embodiment. In the foregoing and present embodiments, the processing module may be assembled to track, detect, and parse (70) the manipulation actions or commands that the user selects or applies through the user manipulation interface 151 assembled with or coupled to the processing module, so as to generate corresponding control signals to, for example, the light source module 24 or the positioning module 13; to receive the instantaneous images, captured and transmitted by the digital microscope module 132 in response to the manipulation actions or control signals, containing the object and/or the state of the operation or feature regions; and, after processing, to produce augmented reality images 191 and 192 in which the generated virtual objects are superimposed, sent for display to, for example, the display module 171. A manipulation action is the user, by means of a simulated operating object 181, a hand 182, or an actual surgical or experimental instrument for the object (not shown), entering or leaving the operating space, and/or approaching, contacting, leaving, operating on, inserting, or fixing part or all of the object, and/or changing the state of at least one operation or feature region. If the manipulation action includes triggering an interactive application 77, and the interactive application 77 at least includes switching the display mode of the object or turning real-time guidance or sharing on or off, the processing module further generates an instantaneous image in which part or all of the object and/or the changed state of at least one operation or feature region is made transparent or solid, and/or an augmented reality image in which an interface, image, object, and/or information associated with the interactive application 77 and the object is superimposed, invoked, and/or displayed. The processing module may further perform image feature tracking 74, color detection 75, or motion detection 76 on the instantaneous images captured by the digital microscope module 132 to obtain virtual objects, or evaluate the instantaneous images to decide whether the interactive application should be, or has been, triggered. Alternatively, the processing module may further include an evaluation module 78 and/or an error warning module 79, assembled so that when an augmented reality image produced by the processing module meets or fails to meet a preset criterion, a corresponding evaluation result, misalignment response, or trigger prompt for the interactive application is generated or output, for example as sound, light, speech, or a video display. The processing module may further include a learning feedback or community sharing module (not shown), assembled for the user to store, edit, transmit, or share augmented reality images, evaluation results, misalignment responses, or trigger prompts.
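The motion detection step (76) can be illustrated with a frame-differencing sketch over grayscale pixel lists. The threshold is an assumed value for illustration; the document does not specify the detection algorithm or its parameters.

```python
def motion_detected(prev_frame, curr_frame, threshold=10.0):
    """Trigger signal for the interactive application: mean absolute
    per-pixel difference between two consecutive grayscale frames."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, curr_frame)]
    return (sum(diffs) / len(diffs)) > threshold

# A still object yields no trigger; a large change (e.g. a hand or an
# instrument entering an operation or feature region) does.
still = [100] * 16
moved = [100] * 8 + [180] * 8   # half the region changed brightness
print(motion_detected(still, still))  # False
print(motion_detected(still, moved))  # True
```

Feature tracking (74) and color detection (75) would feed the same decision point: whichever signal fires determines whether the interactive application 77 is triggered.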
Continuing from the above embodiment, the user manipulation interface module 151 may be a foot pedal device, a manual joystick device, a handheld input/output interface device, or a mobile communication device. As shown in FIG. 6A, the user manipulation interface module 151 is further provided with operation parameter adjustment objects 61 to 64 for the digital microscope module 132 and a display mode switching object 22. The operation parameter adjustment objects 61 to 64 are assembled to let the user adjust the values of the focal length, travel distance, or rotation angle 71, the light source parameters 72, and the zoom magnification 73 of the digital microscope module 132. The display mode switching object 22 is assembled to let the user select an augmented reality image in which a single object, or several identical or different objects 122 to 125, are displayed or arranged simultaneously, the display mode being selected from a single display mode (shown in FIG. 6B), a side-by-side display mode (not shown, but similar to FIG. 8C with only two of the objects displayed), and an array display mode (see FIG. 8C).
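The three display modes reduce to a tiling computation over the selected object views. The following sketch assumes a fixed cell size and mode names chosen for illustration; neither is specified in the document.

```python
import math

def layout_positions(n_objects, mode, cell_w=320, cell_h=240):
    """Top-left pixel positions for n object views under the selected
    display mode: 'single' stacks one column, 'side_by_side' uses one
    row, and 'array' an approximately square grid (as in FIG. 8C)."""
    if mode == "single":
        cols = 1
    elif mode == "side_by_side":
        cols = max(n_objects, 1)
    elif mode == "array":
        cols = math.ceil(math.sqrt(n_objects))
    else:
        raise ValueError(f"unknown display mode: {mode}")
    return [((i % cols) * cell_w, (i // cols) * cell_h)
            for i in range(n_objects)]

# Four butterfly objects (122-125) in array mode form a 2x2 grid:
print(layout_positions(4, "array"))
```

Switching modes then only changes the column count; the same captured views are re-tiled without re-capturing.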
Please refer to FIGS. 4D to 4F, which are partial component diagrams and light-splitting operation diagrams of different embodiments of the convergence module and the light-splitting element of the digital microscope module of the present invention. The placement of the camera units 231 to 233 can be adjusted to suit different convergence modules 31 and mirror units 311 and 312, which may be assembled or coated as mirror surfaces depending on the application and requirements. From the light or images 425 and 426 reflected, refracted, and transmitted over different paths or distances, together with the convergence operation and principle shown in FIG. 2, it can be seen that the design and assembly of the system or module architecture that captures the instantaneous image (in this embodiment a miniature object such as an ant) determines whether the subsequently generated augmented reality image is blurred or sharp.
Since the system-on-chip (SOC) system of the present invention includes at least one processing module that emulates the processing module of the system described above, it should be understandable and realizable from the full disclosure and description of the foregoing embodiments and corresponding drawings, and is not described further here.
Please refer to FIGS. 8A to 8D, which are schematic diagrams of augmented reality images of different embodiments of an object microscopic observation and interaction system applying an image see-through technique according to the present invention. In this embodiment, the object microscopic observation and interaction system is equipped with at least the aforementioned augmented reality image generation system, or executes the aforementioned operating method. The object is selected from miniature creatures, plants, minerals, organic matter, inorganic matter, chemical elements, or compounds suitable for micromanipulation, for example the butterfly objects 122 to 125 of this embodiment. The image see-through technique uses the digital microscope module to capture instantaneous images containing the object and/or the state of at least one operation or feature region 821 to 823 thereof, which the processing module processes to produce virtual objects: see, for example, the menu icon shown at the top of the rightmost view of FIG. 8A and the butterfly larva virtual object superimposed on the butterfly image, or the menu, information, array display mode, snapshot, and share-to-community images or objects of FIG. 8B. After superimposition they can be output synchronously to the display module to eliminate an alignment error and display delay, and they enrich the user's or viewer's learning experience by visualizing and animating the butterfly's ecological processes in this embodiment. In addition, as shown in FIG. 8C, after the instantaneous image of the object 121 and of the state of the operation or feature regions is captured, in the array display mode, information related to the object 121 (for example, butterflies of similar, identical, or different biological classification can coexist and be displayed on the operating platform for observation and operation) and menus can also be superimposed and displayed, or temporarily hidden for later invocation.
Furthermore, continuing the embodiment as shown in FIG. 8D, an object 121 that is, for example, a specimen or still object (here a butterfly) can be used to generate an augmented reality image according to the present invention. Specifically, a live image of the object is captured and used as texture material for a 3D animation, giving the observer the illusion that the real specimen or still object is moving. As the timeline advances from time point or interval T1 to T4, an animation or movie of the state of the object 121 can be presented continuously: here, for instance, the butterfly flapping its wings, or flying toward or away from the viewer so that its apparent size changes. The presentation may also represent progression over time, in which the miniature creature under microscopic observation changes into the same or a different species, so the present invention offers diverse applications together with engaging, cross-domain learning benefits.
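Driving a 3D animation from the captured live texture as the timeline advances from T1 to T4 comes down to mapping elapsed time to a frame index. A minimal sketch follows; the frame rate is an assumed value, not one from the document.

```python
def frame_index(elapsed_s, n_frames, fps=12.0, loop=True):
    """Map elapsed timeline time to an animation frame index: looping
    for a continuous effect (e.g. wing flapping) or clamping at the
    last frame for a one-shot sequence (e.g. flying out of view)."""
    idx = int(elapsed_s * fps)
    return idx % n_frames if loop else min(idx, n_frames - 1)

print(frame_index(1.0, 8))              # looping: 12 % 8 = 4
print(frame_index(1.0, 8, loop=False))  # clamped to the last frame, 7
```

Because the texture comes from the live capture rather than a stored model, the animation inherits the real specimen's appearance, which is what produces the illusion of the still object moving.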
Please refer to FIGS. 9A and 9B, which are schematic diagrams of augmented reality images of different embodiments of an electronic component assembly training and inspection system applying an image see-through technique according to the present invention. In this embodiment, the electronic component assembly training and inspection system is equipped with at least the aforementioned augmented reality image generation system, and the object is a circuit board, carrier, or electronic device into which the user can insert or fix electronic components. The image see-through technique uses the digital microscope module to capture instantaneous images containing the object and/or the state of at least one operation or feature region, which the processing module processes to produce virtual objects that, after superimposition, are output synchronously to the display module to eliminate an alignment error and display delay. As shown in FIG. 9A, the object here is an IC with multiple pins, each of which can send or receive different input/output or control signals. According to this embodiment, the interface, images, information, and menus associated with the IC can be superimposed and displayed on an augmented reality image for the user or viewer to operate or invoke, without leaving the system separately or repeatedly to look up the required reference information and thus being forced to interrupt the observation and learning process.
Following the above, in the embodiment shown in FIG. 9B the object is a circuit board into which electronic components can be inserted or fixed. The digital microscope module captures instantaneous images containing the object and/or the state of at least one operation or feature region 922 to 924: that is, the real-environment image of the circuit board, and images of the solder pads or holes in the operation or feature regions 922 to 924 before and after the state of the electronic components to be inserted or soldered changes. Before the user begins actually inserting components, all of the virtual objects are presented. Then, as the user's manipulation proceeds from time point or interval T1 to T3, the system produces, according to the operator's actions, augmented reality images and related information (for example resistance values or colors) for all of the virtual objects to be inserted into the circuit board, such as resistors, capacitors, and ICs. When the operator brings a hand-held IC near operation or feature region 923, the virtual object of that region disappears automatically; the difference can be seen by comparison with the augmented reality image in which the virtual object is still visible in operation or feature region 924, and with the augmented reality image after actual assembly containing both real and virtual objects. It should be mentioned that in other embodiments of the present invention, to keep the operator's view from being blocked or restricted and to avoid interference with the work, when the processing module detects the operator's hand partially or gradually appearing in the captured image or in an operation or feature region, or detects a motion trend or a related image there, the system can generate an augmented reality image different from that at T2 of FIG. 9B. For example, in response to the change of state before and after the hand-held IC approaches operation or feature region 923, or under other conditions, a temporary disable mechanism can temporarily remove, make transparent, or disable the real-time guidance or sharing and the corresponding augmented reality image, or correspondingly generate the augmented reality image without the superimposed virtual object, so as not to interfere with the user's operation. In other words, certain real-time guidance or sharing functions and the related interactive guidance interfaces and information content are removed, made transparent, disabled, or closed so that the operator is not disturbed, while the evaluation, warning, or sharing modules and mechanisms provided in other embodiments of the present invention remain available. The system thus not only meets the need to eliminate alignment errors and delay, but also gains the benefits of customization and of applying information technology to assisted learning, industry, and commerce.
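The temporary disable mechanism described above, hiding a region's virtual object as the operator's hand-held component approaches it, can be sketched as distance-based gating. The coordinates and radius below are assumed example values; the document does not state how proximity is measured.

```python
import math

def visible_overlays(regions, tool_xy, disable_radius=40.0):
    """Return only the overlays for regions the tool is NOT near, so
    guidance graphics vanish where they would block the operator's view."""
    return [name for name, (x, y) in regions.items()
            if math.hypot(x - tool_xy[0], y - tool_xy[1]) > disable_radius]

# Assumed pixel positions for operation or feature regions 922-924:
regions = {"922": (50, 50), "923": (150, 50), "924": (250, 50)}

# Hand-held IC near region 923: its overlay is suppressed, 922/924 remain.
print(visible_overlays(regions, (155, 55)))
```

Gating by distance rather than a global on/off switch matches the behavior in FIG. 9B: only the region being worked on loses its overlay, while neighboring guidance stays visible.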
Please refer to FIGS. 10A and 10B, which are schematic diagrams of augmented reality images of different embodiments of a microsurgery teaching or training system applying an image see-through technique according to the present invention. In this embodiment, the microsurgery teaching or training system is equipped with at least the aforementioned augmented reality image generation system, or executes the aforementioned operating method, and the object is a miniature, real body or tissue of a living organism, or a training specimen or prosthetic model. In this embodiment, the object is a body, tissue, specimen, or prosthetic model of an eye, brain, skin, or bone of an animal or human; for an eye, the microsurgery includes, but is not limited to, cataract, retinal, macular, or corneal surgery. For example, given a prosthetic eye 1001, the processing module generates, at least according to the state of the prosthetic eye 1001 and of the operation or feature region 1002 (that is, the rim around the eyeball plus the marked patch above the eye), a virtual object 1003 that can serve as a capsulorhexis marker or real-time guide for cataract surgery. The operator can then use a probe device with a prompting mechanism, or an actual surgical or experimental instrument such as capsulorhexis forceps 1004, scissors, or an electrocautery device, and practice capsulorhexis in this teaching or training system according to the superimposed augmented reality image produced by the system described herein.
The embodiments described above are merely examples and do not limit the scope of the present invention; simple or equivalent changes and modifications that do not depart from the spirit and scope of the present invention shall remain within its scope.
1‧‧‧Augmented reality image generation system
11‧‧‧Computer host or portable electronic device
121‧‧‧Object
13‧‧‧Positioning module
131‧‧‧Operating platform
132‧‧‧Digital microscope module
133‧‧‧X-axis
14‧‧‧Single-chip microcontroller interface module
151, 152‧‧‧User manipulation interface modules
16‧‧‧Network
171, 172‧‧‧Display modules
191, 192‧‧‧Augmented reality images
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW105202133U TWM528481U (en) | 2016-02-05 | 2016-02-05 | Systems and applications for generating augmented reality images |
Publications (1)
Publication Number | Publication Date |
---|---|
TWM528481U true TWM528481U (en) | 2016-09-11 |
Family
ID=57444087
Country Status (1)
Country | Link |
---|---|
TW (1) | TWM528481U (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI576787B (en) * | 2016-02-05 | 2017-04-01 | 黃宇軒 | Systems and applications for generating augmented reality images |
TWI603227B (en) * | 2016-12-23 | 2017-10-21 | 李雨暹 | Method and system for remote management of virtual message for a moving object |
US10890751B2 (en) | 2016-02-05 | 2021-01-12 | Yu-Hsuan Huang | Systems and applications for generating augmented reality images |
TWI769400B (en) * | 2019-09-26 | 2022-07-01 | 崑山科技大學 | Visual monitoring device and method for virtual image |