TWI808017B - Portable auxiliary system for visual impairment - Google Patents

Portable auxiliary system for visual impairment

Info

Publication number
TWI808017B
TWI808017B
Authority
TW
Taiwan
Prior art keywords
image
unit
information
user
visually impaired
Prior art date
Application number
TW111137418A
Other languages
Chinese (zh)
Other versions
TW202415356A (en)
Inventor
傅旭文
印秉宏
鍾潤文
張嘉豪
Original Assignee
大陸商廣州印芯半導體技術有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商廣州印芯半導體技術有限公司
Priority to TW111137418A
Application granted
Publication of TWI808017B
Publication of TW202415356A


Abstract

A portable auxiliary system for visual impairment includes a main body, a first image sensor unit, a control unit, a processing unit and an output unit. The main body is worn on a user. The first image sensor unit is disposed on the main body and detects an environment image. The control unit is disposed on the main body, is electrically connected to the first image sensor unit, and receives the environment image. The processing unit is disposed in the control unit, or in a portable electronic device connected to the control unit; it processes the environment image to obtain object information of an object existing in the surrounding environment. The output unit is disposed on the main body, is electrically connected to the control unit, and receives the object information; it converts the object information into a cognitive guidance and transmits the cognitive guidance to the user through the main body.

Description

Portable Assistive System for the Visually Impaired

The present invention relates to an assistive system for the visually impaired, and in particular to a portable assistive system that helps visually impaired users obtain object information about objects in their surrounding environment.

Visually impaired people are people with impaired vision. According to the international scale of visual disability, they can be graded into the totally blind, the severely low-sighted (severe low vision) and the mildly low-sighted (mild low vision). Physical mobility is one of the greatest challenges the visually impaired face; they can only rely on acute hearing, touch and smell to judge the state of their surroundings. Common aids include guide dogs and white canes. With advances in technology, portable global positioning system (GPS) devices have been developed that connect to laptops or other devices dedicated to the visually impaired, in order to guide them to their destinations.

However, neither guide dogs, white canes nor portable GPS devices can fully convey environmental information, and each has its own problems. Guide dogs, for example, are expensive and cumbersome to maintain. A white cane can only report obstacles within its reach; obstacles beyond its length go undetected because they cannot be touched. GPS can provide a heading, but it does not account for obstacles along the way.

In addition, products on the market embed distance sensors in earphones, eyeglass frames, collars, necklaces or clothing, using the sensors to detect the distance to obstacles. Such devices have several drawbacks: first, they sense only the distance to an obstacle and provide no other information; second, they are bulky and uncomfortable to wear; third, they are easily recognized as aids for the visually impaired, exposing the wearer to discrimination.

The main purpose of the present invention is to provide a portable assistive system for the visually impaired that helps visually impaired users learn object information about objects in the surrounding environment.

To achieve the foregoing objective, the present invention provides a portable assistive system for the visually impaired, which includes a main body, a first image sensing unit, a control unit, a processing unit and an output unit. The main body is worn on the user's body. The first image sensing unit is disposed on the main body and senses an environment image. The control unit is disposed on the main body, is electrically connected to the first image sensing unit, and receives the environment image. The processing unit is disposed in the control unit, or in a portable electronic device connected to the control unit; it processes the environment image to obtain object information of an object existing in the environment image. The output unit is disposed on the main body, is electrically connected to the control unit, and receives the object information; it converts the object information into a cognitive guidance and conveys the cognitive guidance to the user through the main body.

In some embodiments, the portable assistive system further includes a second image sensing unit disposed on the main body, electrically connected to the control unit, and used to sense the user's gestures so as to capture a gesture image.

In some embodiments, the processing unit further includes an image database and a comparison program; the object information includes orientation information, distance information and shape information. The processing unit obtains the shape information by comparing the object against the image database through the comparison program, and the output unit outputs the corresponding cognitive guidance.

In some embodiments, the comparison program further includes a deep learning algorithm; the shape information is computed by the deep learning algorithm, and the corresponding cognitive guidance is output through the output unit.

In some embodiments, the system further includes an operation interface disposed on the main body and electrically connected to the processing unit, or disposed on the portable electronic device. Through settings the user makes on the operation interface, the processing unit further obtains target information, travel information and obstacle information for the object.

In some embodiments, the system further includes an input unit disposed on the main body and electrically connected to the control unit, or disposed in a portable electronic device connected to the control unit, through which the user inputs a search command. The processing unit, using the first image sensing unit and the comparison program, finds the object in the environment image that matches the search command and converts the search command into a command guidance; the output unit receives the command guidance and conveys it to the user through the main body.

In some embodiments, the first image sensing unit senses a reaction behavior of the user, the control unit receives the reaction behavior, and the processing unit compares the reaction behavior against the image database through the comparison program to obtain the shape information.

In some embodiments, the shape information further includes an abstract category of the object.

In some embodiments, the first image sensing unit senses the environment image by LiDAR, structured light, indirect time-of-flight, direct time-of-flight or image recognition technology.

In some embodiments, the cognitive guidance is a microcurrent, a sound, a voice or a vibration, and the output unit adjusts the amplitude or frequency of the microcurrent, sound, voice or vibration, and the volume of the sound or voice, according to the object information.

The benefit of the present invention is that its portable assistive system helps visually impaired users learn object information about objects in the surrounding environment, and is easy to wear, so that visually impaired users can understand their surroundings, carry out ordinary daily activities, and even exercise.

1: portable electronic device

10: main body

11: headband

111: first strap

112: second strap

12: hat

121: through hole

20: first image sensing unit

30: control unit

40: processing unit

41: image database

42: comparison program

50: output unit

60: operation interface

70: second image sensing unit

80: input unit

FIG. 1 is a schematic diagram of a first embodiment of the portable assistive system for the visually impaired of the present invention.

FIG. 2A is an exploded view of the main body of the present invention in a headband-and-hat structure.

FIG. 2B is a schematic diagram of the main body of the present invention in a headband-and-hat structure worn on the head.

FIG. 3 is a schematic diagram of the portable assistive system of the present invention sensing an environment image, receiving a search command, and providing the corresponding functions.

FIG. 4 is a schematic diagram of the portable assistive system of the present invention sensing a user's gesture and providing the corresponding functions.

FIG. 5 is a schematic diagram of a second embodiment of the portable assistive system for the visually impaired of the present invention.

The embodiments of the present invention are described in more detail below with reference to the drawings and reference numerals, so that those skilled in the art can implement the invention after studying this specification.

As shown in FIG. 1, FIG. 2A and FIG. 2B, the present invention provides a portable assistive system for the visually impaired, which includes a main body 10, a first image sensing unit 20, a control unit 30, a processing unit 40, an output unit 50, an operation interface 60, a second image sensing unit 70 and an input unit 80. The main body 10 is worn on a user's body. The first image sensing unit 20 is disposed on the main body 10. The control unit 30 is disposed on the main body 10 and is electrically connected to the first image sensing unit 20. The processing unit 40 is disposed in the control unit 30. The output unit 50 is disposed on the main body 10 and is electrically connected to the control unit 30. The operation interface 60, the second image sensing unit 70 and the input unit 80 are all disposed on the main body 10 and electrically connected to the control unit 30.

FIG. 3 is a schematic diagram of the portable assistive system of the present invention sensing an environment image, receiving a search command, and providing the corresponding functions. As shown in FIG. 3, the first image sensing unit 20 senses an environment image; the control unit 30 receives the environment image; the processing unit 40 processes the environment image to obtain object information of an object existing in the environment image; and the output unit 50 receives the object information, converts it into a cognitive guidance, and conveys the cognitive guidance to the user through the main body 10. In this way, the user learns the object information of objects in the surrounding environment through the cognitive guidance.

In some embodiments, the first image sensing unit 20 senses the environment image by LiDAR, structured light, indirect time-of-flight (iToF), direct time-of-flight (dToF) or image recognition technology.
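
For context on how a direct time-of-flight sensor of this kind turns a measurement into distance (the patent does not spell out the computation), a minimal Python sketch: the emitted light pulse travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. The function name and example value are illustrative assumptions.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def dtof_distance_m(round_trip_time_s: float) -> float:
    """Distance from a direct time-of-flight measurement.

    The pulse travels out and back, so the one-way distance is half the
    round trip. Hypothetical helper, not taken from the patent.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# Example: a round trip of about 66.7 ns corresponds to roughly 10 m.
print(dtof_distance_m(66.7e-9))  # ~10.0
```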

In some embodiments, the output unit 50 is an electrode, a speaker, a vibrator or a combination thereof; the cognitive guidance produced by the electrode is a microcurrent, that produced by the speaker is a sound or voice, and that produced by the vibrator is a vibration.

As shown in FIG. 1 and FIG. 3, in a preferred embodiment, the processing unit 40 further includes an image database 41 and a comparison program 42, and the comparison program 42 includes a deep learning algorithm. The object information includes orientation information, distance information and shape information. The comparison program 42 compares the object against the image database 41 to obtain the shape information; the shape information is computed by the deep learning algorithm, and the corresponding cognitive guidance is output through the output unit 50. The user can therefore learn the orientation, distance and shape of the object through the cognitive guidance.
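
The patent does not disclose the internals of the comparison program, so the following Python sketch only illustrates one plausible shape for such a pipeline: a detection step (standing in for the deep learning algorithm and image database) yields labeled objects with positions and depths, from which orientation, distance and shape information are assembled. The detection tuple format is an assumption.

```python
from dataclasses import dataclass

@dataclass
class ObjectInfo:
    orientation: str   # bearing of the object relative to the sensing unit
    distance_m: float  # from the depth measurement of the environment image
    shape: str         # label matched against the image database

def analyze_environment(detections, image_width):
    """Assemble per-object information from raw detections.

    Each detection is (label, center_x_px, distance_m) — the assumed output
    of the deep-learning comparison step, not a format from the patent.
    """
    objects = []
    for label, center_x, distance in detections:
        if center_x < image_width / 3:
            orientation = "left"
        elif center_x > 2 * image_width / 3:
            orientation = "right"
        else:
            orientation = "front"
        objects.append(ObjectInfo(orientation, distance, label))
    return objects

# Example: a car on the left at 8 m and a dog ahead at 3 m in a 640 px frame.
for info in analyze_environment([("car", 90, 8.0), ("dog", 320, 3.0)], 640):
    print(info)
```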

The orientation information of an object defines the object's bearing in the environment image relative to the first image sensing unit 20. For example, if a car is parked on the user's left, the first image sensing unit 20 can sense that a car is parked on the user's left. This description of orientation information is only an example and is not limiting.

The distance information of an object defines the object's distance in the environment image relative to the first image sensing unit 20. For example, the user can set the sensing range of the first image sensing unit 20 through the operation interface 60 to objects within 10 meters; if a dog lover leads a puppy into that range, the first image sensing unit 20 senses that there is a person and a dog within 10 meters. This description of distance information is only an example and is not limiting.

In some embodiments, the shape information of an object further includes the object's abstract category or specific form. The user can therefore learn the specific form or abstract category of the object through the cognitive guidance.

The specific form of an object is the individual noun that names the object in the environment image. For example, individual nouns include vehicles such as buses, taxis, motorcycles and bicycles; traffic-control facilities such as crosswalks, sidewalks, pedestrian signals, traffic signals, and "arterial road ahead", "beware of falling rocks" and "left turn" signs; animals such as people, cats, dogs and birds; indoor objects such as tables, chairs, cups, computers, keyboards, mice, televisions and washing machines; and moving conveyances such as elevators and escalators. These descriptions of specific forms are only examples and are not limiting. The specific form lets the user know exactly which objects are in the surrounding environment.

The abstract category of an object is a collective noun covering objects in the environment image, such as vehicles, traffic-control facilities, animals, indoor objects or moving conveyances. This description of abstract categories is only an example and is not limiting. Through the operation interface 60, the user can set the object information that the processing unit 40 provides to the output unit 50 to the abstract category. The abstract category gives the user a rough idea of what kind of objects are nearby, and prevents the confusion that would arise if too many objects delivered too much information at once.
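
A minimal sketch of folding specific forms into abstract categories — a plain lookup table whose groupings merely echo the examples above and are not prescribed by the patent:

```python
# Collective-noun lookup: specific form -> abstract category (illustrative).
ABSTRACT_CATEGORY = {
    "bus": "vehicle", "taxi": "vehicle", "bicycle": "vehicle",
    "crosswalk": "traffic-control facility",
    "traffic signal": "traffic-control facility",
    "person": "animal", "dog": "animal",
    "chair": "indoor object", "keyboard": "indoor object",
    "elevator": "moving conveyance", "escalator": "moving conveyance",
}

def to_abstract(specific_form: str) -> str:
    # Fall back to the specific form itself when no category is known.
    return ABSTRACT_CATEGORY.get(specific_form, specific_form)

print(to_abstract("taxi"))  # vehicle
print(to_abstract("dog"))   # animal
```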

In some embodiments, after the user configures the system through the operation interface 60, the processing unit 40 further obtains target information, travel information and obstacle information for the object. Target information is the place to reach or the item to fetch. Travel information is the real-time direction toward the target. Obstacle information warns the user of objects to avoid so as to prevent collisions. For example, the user can set the object through the operation interface 60 to a water cup in the meeting room; the target information obtained by the processing unit 40 then includes the meeting room (the place to reach) and the water cup (the item to fetch), and the travel information is "go straight ahead 10 meters, then turn right into the meeting room". If there is a pillar on the way, the obstacle information is "there is a pillar 3 meters ahead, please take care", and the user can easily step around the pillar and continue. After the user enters the meeting room, the travel information becomes "go straight ahead 3 meters"; after the user has walked those 3 meters, the travel information becomes "destination reached, the water cup is on the table to your left". For target information and travel information, the output unit 50 conveys voice or sound as the cognitive guidance to the user through the main body 10; for obstacle information, the output unit 50 conveys vibration or microcurrent. These descriptions of target, travel and obstacle information are only examples and are not limiting.
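
The convention just described — voice or sound for target and travel information, vibration or microcurrent for obstacle warnings — can be captured in a small dispatch rule. This is only a sketch of that convention; the information-type names are assumptions:

```python
def guidance_modality(info_type: str) -> str:
    """Choose an output modality for a piece of object information.

    Mirrors the convention above: speech for target and travel information,
    tactile feedback for obstacle warnings.
    """
    if info_type in ("target", "travel"):
        return "voice"
    if info_type == "obstacle":
        return "vibration"  # or "microcurrent", depending on the hardware
    raise ValueError(f"unknown information type: {info_type}")

print(guidance_modality("travel"))    # voice
print(guidance_modality("obstacle"))  # vibration
```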

As shown in FIG. 2A, in a preferred embodiment, the main body 10 is a headband-and-hat structure including a headband 11 and a hat 12. The headband 11 includes a first strap 111 and a second strap 112; the first strap 111 is ring-shaped, and the second strap 112 is arched with its two ends connected to the first strap 111. Four first image sensing units 20 are disposed on the front, rear, left and right sides of the first strap 111, and one first image sensing unit 20 is disposed on top of the second strap 112. The front, rear, left and right sides and the top of the hat 12 each have a through hole 121. As shown in FIG. 2B, the headband 11 is worn on the user's head, the hat 12 is worn over the headband 11, and the positions of the first image sensing units 20 correspond to the through holes 121 of the hat 12. The first image sensing units 20 can therefore see through the through holes 121 and sense environment images in different directions. For example, FIG. 2B uses phantom lines to represent the field of view of each first image sensing unit 20 (that is, the directions it can sense): the front unit senses objects directly ahead and to the front right, the left unit senses objects to the left and rear left, and together they sense objects to the front left.

In some embodiments, the headband 11 is made of elastic cloth and the lengths of the first strap 111 and the second strap 112 are adjustable, so that the headband 11 fits a head of any size.

In some embodiments, the first strap 111 and the hat 12 can be fastened to or detached from each other by hook-and-loop fasteners, buttons, snaps or other common fastening structures.

In some embodiments, the hat 12 has no through holes 121; the user first puts on the hat 12 and then fits the headband 11 over it.

In some embodiments, the main body 10 is a ring structure (not shown) including a ring body and a mount; the ring body is worn on the user's finger, the mount is disposed on the ring body, and a first image sensing unit 20 is disposed on the front of the mount. In other words, the first image sensing unit 20 faces the fingertip and can sense the environment image in the direction the finger points. In some embodiments, the mount can rotate horizontally or vertically relative to the ring body, which enlarges the sensing range of the first image sensing unit 20 and helps collect environment images in all directions.

In some embodiments, the main body 10 is an ear-hook structure (not shown) including an ear hook and an earphone; the ear hook hangs on the user's ear, the earphone is mounted on the ear hook, and a first image sensing unit 20 is disposed on the earphone. The earphone may be a bone-conduction earphone or an ordinary earphone with a speaker.

In some embodiments, the main body 10 is a neck-hung structure (not shown) including a neck strap and a housing; the neck strap hangs around the user's neck, the housing is disposed on the neck strap, and a first image sensing unit 20 is disposed on the housing.

However, the specific form of the main body 10 is not limited to the above four; any accessory wearable on the user's body can serve as the main body 10, such as a white cane, gloves, a watch, a wristband, a necklace, glasses, earrings or a hat.

It should be noted that because the first image sensing unit 20 includes a substrate (not shown) and a plurality of unit pixels (not shown) disposed on the substrate, its volume is very small. Such a compact first image sensing unit 20 suits the specific forms of the main body 10 described above: it does not burden the user, and it will not be recognized as an aid for the visually impaired and so expose the wearer to discrimination.

In some embodiments, the output unit 50 adjusts the amplitude or frequency of the microcurrent, sound, voice or vibration, and the volume of the sound or voice, according to the object information. For example, as the user gets closer to an object, the amplitude or frequency of the microcurrent, sound, voice or vibration increases and the volume of the sound or voice grows louder; conversely, as the user moves away from the object, the amplitude or frequency decreases and the volume falls. The abstract category of an object is better conveyed with cognitive guidance such as microcurrent or vibration, and the specific form with cognitive guidance such as sound or voice, so that the output unit 50 does not confuse the user by issuing several complicated cognitive guidances at once.
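
One way to realize this closer-means-stronger rule is an inverse mapping from distance to feedback intensity; the linear ramp and the 10-meter default range below are illustrative assumptions, not values taken from the patent:

```python
def feedback_level(distance_m: float, max_range_m: float = 10.0) -> float:
    """Map object distance to a normalized feedback intensity in [0, 1].

    Closer objects yield stronger amplitude/frequency and louder volume;
    objects at or beyond the sensing range yield none.
    """
    if distance_m <= 0:
        return 1.0
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

for d in (1.0, 5.0, 9.0):
    print(f"{d} m -> intensity {feedback_level(d):.2f}")
```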

In some embodiments, there are a plurality of output units 50 disposed at different positions on the main body 10. When only one object is near the user, the processing unit 40 sends the orientation information, according to the object's bearing, through the control unit 30 to the output unit 50 at the corresponding position on the main body 10, and the user learns the object's actual bearing from the cognitive guidance. When several objects surround the user, the present invention provides two approaches. First, the processing unit 40 determines from the distance information which object is closest to the user and sends that object's information through the control unit 30 to the output unit 50 at the corresponding position, so the user learns the object information of the closest object from the cognitive guidance. Second, the processing unit 40 sends the several pieces of orientation information through the control unit 30 to the output units 50 at the corresponding positions; those output units emit microcurrents, sounds, voices or vibrations of differing amplitude or frequency, and sounds or voices of differing volume, from which the user learns the object information of the several objects.
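
A sketch of the first approach — selecting the nearest object and routing its guidance to the output unit on the matching side of the main body. The four-unit layout and the tuple format are assumptions for illustration:

```python
# Hypothetical layout: one output unit on each side of the main body.
OUTPUT_UNITS = {"front": "front vibrator", "rear": "rear vibrator",
                "left": "left vibrator", "right": "right vibrator"}

def route_nearest(objects):
    """Pick the nearest object and the output unit facing it.

    `objects` is a list of (orientation, distance_m, shape) tuples — the
    assumed output of the processing unit.
    """
    orientation, distance, shape = min(objects, key=lambda o: o[1])
    return OUTPUT_UNITS[orientation], f"{shape} at {distance} m ({orientation})"

unit, message = route_nearest([("left", 8.0, "car"), ("front", 3.0, "dog")])
print(unit, "->", message)  # front vibrator -> dog at 3.0 m (front)
```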

Generally speaking, the user is a visually impaired person, who may wear several of the portable assistive systems of the present invention at once, as needed.

For example, when the user wears the ear-hook structure and the neck-hung structure together, the first image sensing unit 20 in the ear-hook structure senses near-field environment images in the horizontal and vertical directions, while the first image sensing unit 20 in the neck-hung structure senses far-field environment images in the horizontal direction.

Take badminton as an example. The first image sensing unit 20 in the ear-hook structure senses the image of the badminton court (the environment image); the processing unit 40 in the ear-hook structure determines from that image the orientation and distance information of the shuttlecock (the object); and the speaker or vibrator (output unit 50) in the ear-hook structure conveys sound or vibration (the cognitive guidance) to the user through the ear-hook structure. The user thereby learns the shuttlecock's orientation and distance, and knows the shuttlecock is approaching as the amplitude or frequency of the sound or vibration, or the volume of the sound, grows. The first image sensing unit 20 in the neck-hung structure likewise senses the court image; its processing unit 40 determines the orientation and distance information of the opponent (the object); and the speaker (output unit 50) in the neck-hung structure conveys voice (the cognitive guidance) to the user, who learns the opponent's orientation and distance and can tell from the frequency or volume of the voice whether the opponent is advancing or retreating.

Take walking to a meeting room (the target information of the object) as an example. The first image sensing unit 20 in the ear-hook structure senses images along the way (the environment images); the processing unit 40 in the ear-hook structure determines from those images the orientation and distance information of several obstacles (the obstacle information); and the electrode or vibrator (output unit 50) in the ear-hook structure conveys microcurrent or vibration (the cognitive guidance) to the user, who learns from the amplitude or frequency of the microcurrent or vibration the orientation and distance of the different obstacles, including those of the closest obstacle. The first image sensing unit 20 in the neck-hung structure senses the image of the meeting-room doorway; its processing unit 40 determines from that image the orientation and distance information of the meeting-room door or door plate (the object); and the speaker (output unit 50) in the neck-hung structure conveys sound or voice (the cognitive guidance) to the user, who thereby learns the orientation and distance of the door or door plate.

For example, when the user wears the ring structure, the ear-hook structure and the neck-hung structure at the same time, the first image sensing unit 20 in the ring structure senses the environment image in the direction the finger points, the one in the ear-hook structure senses the far-field environment image, and the one in the neck-hung structure senses the near-field environment image.

Take crossing a road as an example. The first image sensing unit 20 in the ear-hook structure senses the image of the road (the environment image); the processing unit 40 in the ear-hook structure determines from that image the walk state and remaining crossing time of the pedestrian signal (the object; a specific form of object information); and the speaker (output unit 50) in the ear-hook structure conveys voice (the cognitive guidance) to the user, who learns the walk state and the remaining time. The first image sensing unit 20 in the neck-hung structure also senses the road image; its processing unit 40 determines the orientation and distance information of the crosswalk (the object); and the electrode (output unit 50) in the neck-hung structure conveys microcurrent (the cognitive guidance) to the user, who learns the crosswalk's orientation and distance, is reminded whether they are within or have strayed outside the crosswalk, and can be guided back within it. The first image sensing unit 20 in the ring structure senses the image of obstacles in the direction the finger points; its processing unit 40 determines the orientation and distance information of the obstacle (the object); and the electrode or vibrator (output unit 50) in the ring structure conveys microcurrent or vibration (the cognitive guidance) to the user, who learns the obstacle's orientation and distance. For example, if the finger points ahead and the obstacle is a pedestrian, the user can dodge in advance; if the finger points left or right and the obstacle is a vehicle, the user can stop and wait for the vehicle to pass before proceeding.

FIG. 4 is a schematic diagram of the portable assistive system of the present invention sensing a user's gesture and providing the corresponding functions. As shown in FIG. 4, the second image sensing unit 70 senses the user's gesture to capture a gesture image, and the input unit is a microphone that receives a voice. The user can issue commands by gesture or voice. For example, the user can point the system at an object by gesture or voice; the output unit 50 conveys the cognitive guidance to the user through the main body 10, and the user learns the object's distance or shape information. The system can also pose a question through the output unit 50, which the user answers by gesture or voice. The user can likewise set the system's operating mode by gesture or voice.

As shown in FIG. 3, the input unit 80 lets the user input a search command, and the control unit 30 receives the search command. The processing unit 40, using the first image sensing unit 20 and the comparison program 42, finds the object in the environment image that matches the search command and converts the search command into a command guidance; the output unit 50 receives the command guidance and conveys it to the user through the main body 10. Specifically, the input unit 80 is a microphone and the search command is a voice, which the microphone receives. The second image sensing unit 70 can also accept a search command in the form of a user gesture. The voice or gesture may, for instance, ask whether an object is a target or an obstacle. If the processing unit 40 finds that the matching object is a target, the output unit 50 conveys voice or sound to the user through the main body 10 as the command guidance; if the matching object is an obstacle, the output unit 50 conveys vibration or microcurrent.
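
A sketch of resolving a spoken search command against the detected objects and choosing the guidance modality according to whether the match is a target or an obstacle; the scene structure and the matching rule are illustrative assumptions:

```python
def handle_search(command: str, detections):
    """Find the detected object named in a search command.

    `detections` is a list of (shape, kind, orientation, distance_m) tuples,
    where `kind` is "target" or "obstacle" — an assumed structure.
    Returns (modality, message), or None when nothing matches.
    """
    for shape, kind, orientation, distance in detections:
        if shape in command.lower():
            modality = "voice" if kind == "target" else "vibration"
            return modality, f"{shape}: {orientation}, {distance} m"
    return None

scene = [("door", "target", "front", 5.0), ("pillar", "obstacle", "front", 3.0)]
print(handle_search("where is the door", scene))  # ('voice', 'door: front, 5.0 m')
```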

As shown in FIG. 3, the first image sensing unit 20 senses a reaction behavior of the user, the control unit 30 receives the reaction behavior, and the processing unit 40 compares the reaction behavior against the image database 41 through the comparison program 42 to obtain shape information. Specifically, a reaction behavior is the response the user makes on encountering an object. While the processing unit 40 compares the reaction behavior against the image database 41 through the comparison program 42, the deep learning algorithm in the comparison program 42 self-learns, determines the specific forms of the objects involved, and automatically assigns their abstract categories. For example, a user typically sits on a chair with hands on a table while operating a keyboard and mouse; the first image sensing unit 20 senses this reaction behavior, the control unit 30 receives it, and during the comparison the deep learning algorithm self-learns and determines that the specific forms of the objects are a chair, a table, a keyboard and a mouse, automatically assigning their abstract categories.

FIG. 5 is a schematic diagram of a second embodiment of the portable assistive system of the present invention. As shown in FIG. 5, the second embodiment differs from the first in that the processing unit 40, the operation interface 60 and the input unit 80 are all disposed in a portable electronic device 1; the processing unit 40 is connected to the control unit 30, and the operation interface 60 and the input unit 80 are electrically connected to the processing unit 40. The processing unit 40 may be a microprocessor or an application built into the portable electronic device 1, and the operation interface 60 may be the touch screen or buttons of the portable electronic device 1. A portable electronic device 1 (for example, a smartphone or tablet) can carry a high-capacity lithium battery that powers the more power-hungry components (for example, the processing unit 40), and its processing unit 40 typically has the computing power for more complex, power-intensive procedures (for example, the analysis-and-comparison procedure and the learning procedure). The processing unit 40 in the portable electronic device 1 also exchanges messages with the control unit 30 in the main body 10. The main body 10 therefore carries fewer components, is lighter, and burdens the user less.

In summary, the portable assistive system for the visually impaired of the present invention helps visually impaired users learn object information about objects in the surrounding environment, and is easy to wear, so that visually impaired users can understand their surroundings, carry out ordinary daily activities, and even exercise.

The foregoing describes only preferred embodiments intended to explain the present invention and is not intended to limit the invention in any form; accordingly, any modification or variation made in the same inventive spirit shall remain within the scope the invention intends to protect.

10: main body

20: first image sensing unit

30: control unit

40: processing unit

41: image database

42: comparison program

50: output unit

60: operation interface

70: second image sensing unit

80: input unit

Claims (10)

1. A portable assistive system for the visually impaired, comprising: a main body, worn on a user's body; a first image sensing unit, disposed on the main body and sensing an environment image; a control unit, disposed on the main body, electrically connected to the first image sensing unit, and receiving the environment image; a processing unit, disposed in the control unit, or disposed in a portable electronic device and connected to the control unit, the processing unit processing the environment image to obtain object information of an object existing in the environment image; and an output unit, disposed on the main body, electrically connected to the control unit, and receiving the object information, the output unit converting the object information into a cognitive guidance and conveying the cognitive guidance to the user through the main body.

2. The portable assistive system for the visually impaired of claim 1, further comprising a second image sensing unit disposed on the main body, electrically connected to the control unit, and sensing the user's gestures so as to capture a gesture image.

3. The portable assistive system for the visually impaired of claim 1, wherein the processing unit further comprises an image database and a comparison program; the object information comprises orientation information, distance information and shape information; the processing unit obtains the shape information by comparing the object against the image database through the comparison program; and the output unit outputs the corresponding cognitive guidance.

4. The portable assistive system for the visually impaired of claim 3, wherein the comparison program further comprises a deep learning algorithm, the shape information is computed by the deep learning algorithm, and the corresponding cognitive guidance is output through the output unit.

5. The portable assistive system for the visually impaired of claim 3, further comprising an operation interface disposed on the main body and electrically connected to the control unit, or disposed on the portable electronic device and electrically connected to the processing unit; through settings made by the user on the operation interface, the processing unit further obtains target information, travel information and obstacle information of the object.

6. The portable assistive system for the visually impaired of claim 3, further comprising an input unit disposed on the main body and electrically connected to the control unit, or disposed in a portable electronic device and electrically connected to the processing unit, through which the user inputs a search command; the processing unit, using the first image sensing unit and the comparison program, finds the object in the environment image that matches the search command and converts the search command into a command guidance, and the output unit receives the command guidance and conveys it to the user through the main body.

7. The portable assistive system for the visually impaired of claim 3, wherein the first image sensing unit senses a reaction behavior of the user, the control unit receives the reaction behavior, and the processing unit compares the reaction behavior against the image database through the comparison program to obtain the shape information.

8. The portable assistive system for the visually impaired of claim 3, wherein the shape information further comprises an abstract category of the object.

9. The portable assistive system for the visually impaired of claim 1, wherein the first image sensing unit senses the environment image by LiDAR, structured light, indirect time-of-flight, direct time-of-flight or image recognition technology.

10. The portable assistive system for the visually impaired of claim 1, wherein the cognitive guidance is a microcurrent, a sound, a voice or a vibration, and the output unit adjusts the amplitude or frequency of the microcurrent, sound, voice or vibration and the volume of the sound or voice according to the object information.
TW111137418A 2022-09-30 2022-09-30 Portable auxiliary system for visual impairment TWI808017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111137418A 2022-09-30 2022-09-30 Portable auxiliary system for visual impairment


Publications (2)

Publication Number Publication Date
TWI808017B true TWI808017B (en) 2023-07-01
TW202415356A TW202415356A (en) 2024-04-16

Family

ID=88149195

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111137418A TWI808017B (en) 2022-09-30 2022-09-30 Portably auxiliary system for visual impairment

Country Status (1)

Country Link
TW (1) TWI808017B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM537892U (en) * 2016-10-27 2017-03-11 正修學校財團法人正修科技大學 Indoor portable visual impairment voice navigation system
TW202007383A (en) * 2018-07-25 2020-02-16 南臺學校財團法人南臺科技大學 Smart aid system for visually impaired

Also Published As

Publication number Publication date
TW202415356A (en) 2024-04-16
