TWI806036B - Electronic device and method for monitoring posture - Google Patents

Electronic device and method for monitoring posture

Info

Publication number: TWI806036B
Authority: TW (Taiwan)
Prior art keywords: user, image, distance, eye, head
Application number: TW110114181A
Other languages: Chinese (zh)
Other versions: TW202242709A
Inventor: 江定原
Original Assignee: 宏碁股份有限公司 (Acer Incorporated)
Application filed by 宏碁股份有限公司
Priority to TW110114181A priority Critical patent/TWI806036B/en
Publication of TW202242709A publication Critical patent/TW202242709A/en
Application granted
Publication of TWI806036B publication Critical patent/TWI806036B/en

Landscapes

  • Length Measuring Devices With Unspecified Measuring Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An electronic device and a method for monitoring sitting posture are provided. The method includes: pre-storing a plurality of interpupillary distances (IPDs) respectively corresponding to a plurality of groups; capturing an image of a user by a camera; determining, by a first machine learning model, that the user corresponds to a first group of the plurality of groups, wherein the first group corresponds to a first IPD of the plurality of IPDs; obtaining an in-image IPD of the user according to the image; calculating a distance between the user and the camera according to the first IPD and the in-image IPD; and outputting a warning message in response to the distance exceeding a default range.

Description

Electronic device and method for monitoring posture

The present invention relates to an electronic device and a method for monitoring sitting posture.

At present, many jobs require personnel to stay in an office for long hours operating a computer. Maintaining an improper sitting posture for a long time may not only reduce work efficiency but also negatively affect a person's health. To help people maintain a good sitting posture, many sitting-posture warning systems have appeared on the market. For example, a traditional sitting-posture warning system may use hardware devices installed on the desk, on the chair, or on the user's body to detect whether the user's sitting posture is correct. However, such a system relies on additional hardware devices, so it is not only inconvenient to use but also relatively costly.

Another traditional sitting-posture warning system detects the user's sitting posture through two or more video cameras. However, adding cameras increases the cost of the system, and it also reduces the layout space available for the other components of the system.

The present invention provides an electronic device and a method for monitoring sitting posture that can use a single camera to monitor whether the user's sitting posture is correct.

An electronic device for monitoring sitting posture of the present invention includes a processor, a storage medium, a transceiver, and a camera. The camera captures an image of a user. The storage medium stores a plurality of modules and a plurality of interpupillary distances (IPDs) respectively corresponding to a plurality of groups. The processor is coupled to the storage medium, the transceiver, and the camera, and accesses and executes the plurality of modules, which include a first machine learning model and a computing module. The first machine learning model determines, according to the image, that the user corresponds to a first group of the plurality of groups, wherein the first group corresponds to a first IPD of the plurality of IPDs. The computing module obtains the user's in-image IPD according to the image, calculates the distance between the user and the camera according to the first IPD and the in-image IPD, and outputs a warning message through the transceiver in response to the distance exceeding a preset range.

In an embodiment of the present invention, the modules further include a feature extraction module. The feature extraction module obtains a plurality of feature points of the user's face according to the image, and the computing module calculates the in-image IPD according to the feature points.

In an embodiment of the present invention, the computing module determines the user's head movement according to the feature points, corrects the first IPD according to the head movement to generate a first corrected IPD, and calculates the distance according to the first corrected IPD and the in-image IPD.

In an embodiment of the present invention, the head movement includes at least one of a pitch movement, a roll movement, and a yaw movement, and the computing module reduces the first IPD according to the head movement to generate the first corrected IPD.

In an embodiment of the present invention, the first machine learning model determines the user's race and age according to the image, and determines that the user corresponds to the first group according to the race and the age.

In an embodiment of the present invention, the modules further include a second machine learning model. The second machine learning model determines the user's head position and shoulder position according to the image, and the computing module determines, according to the head position and the shoulder position, whether to output a second warning message through the transceiver.

In an embodiment of the present invention, the computing module obtains a head horizontal axis according to the head position, obtains a shoulder line according to the shoulder position, and determines whether to output the second warning message according to the head horizontal axis and the shoulder line.

In an embodiment of the present invention, the computing module determines to output the second warning message in response to the head horizontal axis not being parallel to the shoulder line.

A method for monitoring sitting posture of the present invention includes: pre-storing a plurality of IPDs respectively corresponding to a plurality of groups; capturing an image of a user through a camera; determining, by a first machine learning model according to the image, that the user corresponds to a first group of the plurality of groups, wherein the first group corresponds to a first IPD of the plurality of IPDs; obtaining the user's in-image IPD according to the image; calculating the distance between the user and the camera according to the first IPD and the in-image IPD; and outputting a warning message in response to the distance exceeding a preset range.

Based on the above, the electronic device of the present invention can determine whether the user's sitting posture is correct from information such as the user's IPD, head position, and shoulder position, and can output a warning message to prompt the user to adjust the sitting posture when it is incorrect.

100: electronic device

110: processor

120: storage medium

121: first machine learning model

122: second machine learning model

123: feature extraction module

124: computing module

130: transceiver

140: camera

20: first IPD

30: in-image IPD

40: distance

50: head position

51: head horizontal axis

60: shoulder position

61: shoulder line

F: focal length

S601, S602, S603, S604, S605, S606: steps

FIG. 1 is a schematic diagram of an electronic device for monitoring sitting posture according to an embodiment of the present invention.

FIG. 2 is a schematic diagram of a user's first IPD according to an embodiment of the present invention.

FIG. 3 is a schematic diagram of the distance between a user and a camera according to an embodiment of the present invention.

FIG. 4A, FIG. 4B, and FIG. 4C are schematic diagrams of head movements according to an embodiment of the present invention.

FIG. 5 is a schematic diagram of a user's head position and shoulder position according to an embodiment of the present invention.

FIG. 6 is a flowchart of a method for monitoring sitting posture according to an embodiment of the present invention.

To make the content of the present invention easier to understand, the following embodiments are given as examples by which the present invention can actually be implemented. In addition, wherever possible, elements/components/steps with the same reference numerals in the drawings and embodiments represent the same or similar parts.

FIG. 1 is a schematic diagram of an electronic device 100 for monitoring sitting posture according to an embodiment of the present invention. The electronic device 100 may include a processor 110, a storage medium 120, a transceiver 130, and a camera 140. The electronic device 100 is, for example, a notebook computer, a desktop computer, or a tablet computer, but the invention is not limited thereto.

The processor 110 is, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose micro control unit (MCU), microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), graphics processing unit (GPU), image signal processor (ISP), image processing unit (IPU), arithmetic logic unit (ALU), complex programmable logic device (CPLD), field-programmable gate array (FPGA), or other similar element, or a combination of the above elements. The processor 110 may be coupled to the storage medium 120, the transceiver 130, and the camera 140, and may access and execute a plurality of modules and various application programs stored in the storage medium 120.

The storage medium 120 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive (HDD), solid state drive (SSD), or similar element, or a combination of the above elements, and is used to store a plurality of modules or various application programs executable by the processor 110. In this embodiment, the storage medium 120 may store a plurality of modules including a first machine learning model 121, a second machine learning model 122, a feature extraction module 123, and a computing module 124, whose functions are described below.

The transceiver 130 transmits and receives signals in a wireless or wired manner. The transceiver 130 may also perform operations such as low-noise amplification, impedance matching, frequency mixing, up- or down-conversion, filtering, and amplification.

The camera 140 is used to capture an image of the user. For example, if the electronic device 100 is a notebook computer, the camera 140 may be a webcam disposed around the display of the notebook computer. When the user operates the electronic device 100, the user faces the camera 140, and the camera 140 can capture the user's image.

The interpupillary distance (IPD), also referred to herein as the eye distance, is the distance between the pupil centers of the two eyes. Statistically, the IPD differs among people of different races and ages; for example, according to the description, the IPD becomes shorter as a person ages. The storage medium 120 may pre-store a plurality of IPDs respectively corresponding to a plurality of groups, where the groups correspond to different races and/or ages.
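
As a concrete illustration of such a pre-stored lookup table, the sketch below maps a race/age group to an average IPD in millimetres. The group labels and the numeric values are placeholder assumptions for demonstration; only the idea of storing one IPD per group comes from the description.

```python
# Illustrative sketch only: group labels and millimetre values are assumptions.
AVERAGE_IPD_MM = {
    ("asian", "child"): 55.0,
    ("asian", "adult"): 62.0,
    ("caucasian", "child"): 57.0,
    ("caucasian", "adult"): 64.0,
}

def lookup_first_ipd(race: str, age_group: str) -> float:
    """Return the pre-stored IPD (in mm) for the group the user is assigned to."""
    return AVERAGE_IPD_MM[(race, age_group)]
```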

The first machine learning model 121 can be used to determine, according to the user's image, which of the plurality of groups the user corresponds to. For example, the first machine learning model 121 may determine the user's race and age from the image, and thereby determine that the user belongs to a first group corresponding to that race and age, where the first group corresponds to a first IPD among the plurality of IPDs stored in the storage medium 120. The first machine learning model 121 can therefore conclude that the user's IPD is the first IPD 20, as shown in FIG. 2. FIG. 2 is a schematic diagram of a user's first IPD 20 according to an embodiment of the present invention.

In an embodiment, the computing module 124 may train and generate the first machine learning model 121 according to a known machine learning algorithm; the invention is not limited in this respect. For example, the computing module 124 may obtain a plurality of pieces of training data through the transceiver 130, where each piece of training data may include an image of a person, a race label of the person, and an age label of the person. The computing module 124 may generate the first machine learning model 121 from the training data based on a supervised learning algorithm.
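
A minimal sketch of this supervised training step follows. It assumes each training image has already been reduced to a fixed-length face feature vector and that the race and age labels have been combined into a single group label; the choice of classifier is likewise an assumption, since the description only calls for "a supervised learning algorithm".

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_first_model(face_embeddings: np.ndarray, group_labels: np.ndarray):
    """face_embeddings: (N, D) feature vectors, one per training image.
    group_labels: (N,) group IDs derived from each person's race and age labels."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(face_embeddings, group_labels)
    return model
```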

On the other hand, the computing module 124 may obtain the user's in-image IPD (that is, the user's IPD as it appears in the image) according to the image. Specifically, the feature extraction module 123 may obtain a plurality of feature points of the user's face from the image, and the computing module 124 may calculate the user's in-image IPD from those feature points.
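
A minimal sketch of the in-image IPD computation, assuming the feature extraction step already yields the two pupil-center feature points as pixel coordinates (how those points are detected is left open by the description):

```python
import math

def in_image_ipd(left_pupil_xy, right_pupil_xy) -> float:
    """Pixel distance between the two pupil-center feature points."""
    dx = right_pupil_xy[0] - left_pupil_xy[0]
    dy = right_pupil_xy[1] - left_pupil_xy[1]
    return math.hypot(dx, dy)
```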

After obtaining the user's in-image IPD, the computing module 124 may calculate the distance between the user and the camera 140 according to the first IPD 20 and the in-image IPD. FIG. 3 is a schematic diagram of the distance 40 between the user and the camera 140 according to an embodiment of the present invention, where F is the focal length of the camera 140, 30 is the user's in-image IPD, and 20 is the first IPD. With the focal length F, the in-image IPD 30, and the first IPD 20 known, the computing module 124 may calculate the distance 40 between the user and the camera 140 according to formula (1) below.

distance 40 = (first IPD 20 / in-image IPD 30) * focal length F ... (1)
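
Formula (1) translates directly into code. Expressing the focal length in pixels, so that the result carries the same unit as the first IPD, is an added convention rather than something the description specifies.

```python
def distance_to_camera(first_ipd_mm: float, in_image_ipd_px: float,
                       focal_length_px: float) -> float:
    """Formula (1): distance = (first IPD / in-image IPD) * focal length.
    With the focal length given in pixels, the result is in millimetres."""
    return (first_ipd_mm / in_image_ipd_px) * focal_length_px
```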

Since the user's head movement may affect the measurement of the IPD, in an embodiment the computing module 124 may adjust the calculation of the distance 40 according to the user's head movement. Specifically, the computing module 124 may determine the user's head movement from the facial feature points produced by the feature extraction module 123. FIG. 4A, FIG. 4B, and FIG. 4C are schematic diagrams of head movements according to an embodiment of the present invention. The head movement may include a pitch movement as shown in FIG. 4A, a roll movement as shown in FIG. 4B, and a yaw movement as shown in FIG. 4C.

The computing module 124 may correct the first IPD 20 according to the user's head movement to generate a first corrected IPD. Taking the yaw movement as an example, if the computing module 124 determines from the facial feature points that the user's head is turned to the left or right, the computing module 124 may reduce the first IPD 20 according to the yaw angle to generate the first corrected IPD. After generating the first corrected IPD, the computing module 124 may calculate the distance 40 according to the first corrected IPD, as shown in formula (2).

distance 40 = (first corrected IPD / in-image IPD 30) * focal length F ... (2)
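
A sketch of the correction step, assuming a simple cosine foreshortening model for the yaw angle; the description only states that the first IPD is reduced according to the head movement, without fixing the exact correction function.

```python
import math

def corrected_first_ipd(first_ipd_mm: float, yaw_deg: float) -> float:
    """Reduce the pre-stored IPD by the apparent foreshortening of a yawed head
    (cosine model assumed for illustration)."""
    return first_ipd_mm * math.cos(math.radians(yaw_deg))

def distance_with_head_correction(first_ipd_mm: float, yaw_deg: float,
                                  in_image_ipd_px: float,
                                  focal_length_px: float) -> float:
    # Formula (2): use the corrected IPD in place of the first IPD.
    return (corrected_first_ipd(first_ipd_mm, yaw_deg) / in_image_ipd_px) * focal_length_px
```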

After the distance 40 is calculated, the computing module 124 may output a warning message through the transceiver 130 in response to the distance 40 exceeding a preset range, so as to prompt the user to adjust the distance 40 between the user and the camera 140. For example, the storage medium 120 may pre-store a preset lower bound and a preset upper bound for the distance 40. If the distance 40 is smaller than the preset lower bound, the user is too close to the camera 140, and the computing module 124 may output a warning message prompting the user to move away from the camera 140 to improve the sitting posture. If the distance 40 is larger than the preset upper bound, the user is too far from the camera 140, and the computing module 124 may output a warning message prompting the user to move closer to the camera 140 to improve the sitting posture.
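
A sketch of the range check; the numeric bounds are placeholders, since the description leaves the preset lower and upper bounds to the implementation.

```python
from typing import Optional

def posture_distance_warning(distance_mm: float,
                             lower_mm: float = 450.0,
                             upper_mm: float = 750.0) -> Optional[str]:
    """Return a warning string when the distance 40 leaves the preset range.
    The 450-750 mm bounds are placeholder values, not taken from the description."""
    if distance_mm < lower_mm:
        return "Too close to the camera: please sit farther back."
    if distance_mm > upper_mm:
        return "Too far from the camera: please sit closer."
    return None
```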

In addition to the distance between the user and the camera 140, the electronic device 100 may also determine whether the user's sitting posture needs improvement according to the relative relationship between the user's head and shoulders. Specifically, the second machine learning model 122 may determine the user's head position 50 and shoulder position 60 from the image, as shown in FIG. 5. FIG. 5 is a schematic diagram of a user's head position 50 and shoulder position 60 according to an embodiment of the present invention.

In an embodiment, the computing module 124 may train and generate the second machine learning model 122 according to a known machine learning algorithm; the invention is not limited in this respect. For example, the computing module 124 may obtain a plurality of pieces of training data through the transceiver 130, where each piece of training data may include an image of a person, a head position label of the person, and a shoulder position label of the person. The computing module 124 may generate the second machine learning model 122 from the training data based on a supervised learning algorithm.

After obtaining the user's head position 50 and shoulder position 60, the computing module 124 may determine, according to the head position 50 and the shoulder position 60, whether to output a second warning message, which prompts the user to adjust the head position 50 or the shoulder position 60 to improve the sitting posture. Specifically, the computing module 124 may obtain a head horizontal axis 51 according to the head position 50 and a shoulder line 61 according to the shoulder position 60, and then determine whether the head horizontal axis 51 is parallel to the shoulder line 61. If the head horizontal axis 51 is not parallel to the shoulder line 61, the computing module 124 may output the second warning message through the transceiver 130 to prompt the user to adjust the head position 50 or the shoulder position 60 so that the head horizontal axis 51 becomes parallel to the shoulder line 61.
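
A sketch of the parallelism test between the head horizontal axis and the shoulder line. The angular tolerance is an added practical assumption; the description only requires deciding whether the two lines are parallel.

```python
import math

def needs_second_warning(head_axis, shoulder_line, tolerance_deg: float = 5.0) -> bool:
    """head_axis and shoulder_line are ((x1, y1), (x2, y2)) segments in image
    coordinates. Returns True when the two lines are not (approximately) parallel."""
    def angle(segment):
        (x1, y1), (x2, y2) = segment
        return math.degrees(math.atan2(y2 - y1, x2 - x1))
    diff = abs(angle(head_axis) - angle(shoulder_line)) % 180.0
    diff = min(diff, 180.0 - diff)  # parallel lines differ by roughly 0 or 180 degrees
    return diff > tolerance_deg
```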

FIG. 6 is a flowchart of a method for monitoring sitting posture according to an embodiment of the present invention; the method may be implemented by the electronic device 100 shown in FIG. 1. In step S601, a plurality of IPDs respectively corresponding to a plurality of groups are pre-stored. In step S602, an image of the user is captured. In step S603, the first machine learning model determines from the image that the user corresponds to a first group of the plurality of groups, where the first group corresponds to a first IPD among the plurality of IPDs. In step S604, the user's in-image IPD is obtained from the image. In step S605, the distance between the user and the camera is calculated according to the first IPD and the in-image IPD. In step S606, a warning message is output in response to the distance exceeding a preset range.
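
Composing the sketches above, one pass through steps S601 to S606 might look as follows. The feature_extractor and first_model interfaces, and the reuse of AVERAGE_IPD_MM, in_image_ipd, distance_to_camera, and posture_distance_warning from the earlier sketches, are assumptions for illustration only.

```python
def monitor_once(frame, first_model, feature_extractor, focal_length_px):
    """One pass of steps S601-S606, reusing the helper sketches defined above."""
    left_pupil, right_pupil = feature_extractor(frame)             # feature points (S604)
    group = first_model.predict_group(frame)                       # group assignment (S603)
    first_ipd = AVERAGE_IPD_MM[group]                              # pre-stored IPD (S601)
    ipd_px = in_image_ipd(left_pupil, right_pupil)                 # in-image IPD (S604)
    dist = distance_to_camera(first_ipd, ipd_px, focal_length_px)  # distance (S605)
    return posture_distance_warning(dist)                          # warning check (S606)
```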

In summary, the electronic device of the present invention can determine the user's race or age from the user's facial features. After determining the user's race or age, the electronic device can estimate the distance between the user and the camera from the IPD, and thereby judge from that distance whether the user's sitting posture is correct. The electronic device can also judge whether the user's sitting posture is correct from the user's head position and shoulder position. When the user's sitting posture is incorrect, the electronic device can output a warning message to prompt the user to change the sitting posture. The electronic device of the present invention can therefore assist the user in maintaining a good sitting posture.


Claims (6)

1. An electronic device for monitoring posture, comprising: a camera, capturing an image of a user; a transceiver; a storage medium, storing a plurality of modules and a plurality of interpupillary distances (IPDs) respectively corresponding to a plurality of groups; and a processor, coupled to the storage medium, the transceiver, and the camera, and accessing and executing the plurality of modules, wherein the plurality of modules comprise: a first machine learning model, determining according to the image that the user corresponds to a first group of the plurality of groups, wherein the first group corresponds to a first IPD of the plurality of IPDs; a computing module, obtaining an in-image IPD of the user according to the image, calculating a distance between the user and the camera according to the first IPD and the in-image IPD, and outputting a warning message through the transceiver in response to the distance exceeding a preset range; a feature extraction module, obtaining a plurality of feature points of a face of the user according to the image, wherein the computing module calculates the in-image IPD according to the plurality of feature points, determines a head movement of the user according to the plurality of feature points, corrects the first IPD according to the head movement to generate a first corrected IPD, and calculates the distance according to the first corrected IPD and the in-image IPD; and a second machine learning model, determining a head position and a shoulder position of the user according to the image, wherein the computing module determines, according to the head position and the shoulder position, whether to output a second warning message through the transceiver.

2. The electronic device according to claim 1, wherein the head movement comprises at least one of a pitch movement, a roll movement, and a yaw movement, and wherein the computing module reduces the first IPD according to the head movement to generate the first corrected IPD.

3. The electronic device according to claim 1, wherein the first machine learning model determines a race and an age of the user according to the image, and determines according to the race and the age that the user corresponds to the first group.

4. The electronic device according to claim 1, wherein the computing module obtains a head horizontal axis according to the head position, obtains a shoulder line according to the shoulder position, and determines according to the head horizontal axis and the shoulder line whether to output the second warning message.

5. The electronic device according to claim 4, wherein the computing module determines to output the second warning message in response to the head horizontal axis not being parallel to the shoulder line.

6. A method for monitoring posture, comprising: pre-storing a plurality of interpupillary distances (IPDs) respectively corresponding to a plurality of groups; capturing an image of a user through a camera; determining, by a first machine learning model according to the image, that the user corresponds to a first group of the plurality of groups, wherein the first group corresponds to a first IPD of the plurality of IPDs; obtaining an in-image IPD of the user according to the image, comprising: obtaining a plurality of feature points of a face of the user according to the image; determining a head movement of the user according to the plurality of feature points; correcting the first IPD according to the head movement to generate a first corrected IPD; and calculating the distance according to the first corrected IPD and the in-image IPD; calculating the distance between the user and the camera according to the first IPD and the in-image IPD; outputting a warning message in response to the distance exceeding a preset range; and determining, by a second machine learning model according to the image, a head position and a shoulder position of the user, wherein whether to output a second warning message is determined according to the head position and the shoulder position.
TW110114181A 2021-04-20 2021-04-20 Electronic device and method for monitoring posture TWI806036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110114181A TWI806036B (en) 2021-04-20 2021-04-20 Electronic device and method for monitoring posture


Publications (2)

Publication Number Publication Date
TW202242709A TW202242709A (en) 2022-11-01
TWI806036B true TWI806036B (en) 2023-06-21

Family

ID=85793301

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110114181A TWI806036B (en) 2021-04-20 2021-04-20 Electronic device and method for monitoring posture

Country Status (1)

Country Link
TW (1) TWI806036B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295370A (en) * 2012-09-13 2013-09-11 上海工融贸易有限公司 Method and system for preventing myopia by monitoring distance between eyes and screen
CN110969099A (en) * 2019-11-20 2020-04-07 湖南检信智能科技有限公司 Threshold value calculation method for myopia prevention and early warning linear distance and intelligent desk lamp


Also Published As

Publication number Publication date
TW202242709A (en) 2022-11-01
