TWI764393B - Manufacturing method of pressure garment - Google Patents

Manufacturing method of pressure garment

Info

Publication number
TWI764393B
Authority
TW
Taiwan
Prior art keywords
model file
dimensional
file
camera
point
Prior art date
Application number
TW109141684A
Other languages
Chinese (zh)
Other versions
TW202221650A (en)
Inventor
郭子綺
俊霖 李
Original Assignee
郭子綺
俊霖 李
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 郭子綺, 俊霖 李 filed Critical 郭子綺
Priority to TW109141684A priority Critical patent/TWI764393B/en
Application granted granted Critical
Publication of TWI764393B publication Critical patent/TWI764393B/en
Publication of TW202221650A publication Critical patent/TW202221650A/en

Abstract

The invention discloses a manufacturing method of a pressure garment comprising a scanning step and a manufacturing step. The scanning step includes: using a processing device to control a plurality of depth cameras to photograph the user, each generating a first model file; rotating the depth cameras and the carrying device relative to each other by a predetermined angle and photographing the user again, each camera generating a second model file; and converting each first model file and each second model file into the same world coordinates through two coordinate conversions, then fusing all model files into a three-dimensional model file. The manufacturing step calculates the size of the injured area from the three-dimensional model file and the injured-area data in the user's demand information, then controls production equipment to produce the pressure garment based on that size and the pressure data in the user's demand information.

Description

Manufacturing method of pressure fabric

The invention relates to a method of manufacturing a pressure fabric, and in particular to one that does not require manually measuring the position of the user's affected area in order to manufacture the pressure fabric.

Because a pressure garment works by applying negative pressure to the body surface to create a compressive effect, it must be customized to a certain degree to the individual user's body shape if the pressure distribution over the body surface is to achieve the intended medical effect. To do this, the garment makers must measure the patient's affected area many times, and the patient must undergo multiple fittings. This process places considerable physiological and psychological stress on the patient, and the staff must touch the affected area while measuring its dimensions, which may cause further discomfort.

The invention discloses a manufacturing method of a pressure fabric, mainly intended to improve on the traditional manufacturing approach, in which the affected area of the patient must be touched and measured manually, causing the patient discomfort.

One embodiment of the present invention discloses a method for manufacturing a pressure fabric, comprising the following steps. A scanning system performs a scanning step to carry out a three-dimensional scan of a user. The scanning system includes a processing device, a carrying device, a plurality of supporting structures, a plurality of depth cameras, and a rotating mechanism. The processing device is electrically connected to the rotating mechanism and can control it so that the supporting structures and the carrying device rotate relative to each other. The carrying device has a bearing surface on which the user stands; the supporting structures are arranged around the carrying device, each supporting structure carries one depth camera, and every depth camera is at the same horizontal distance from the carrying device. The scanning system performs the scanning step while the user stands on the bearing surface, as follows. A first model file generating step: each depth camera photographs the user standing on the bearing surface and generates a first model file. A second model file generating step, performed M times, where M is a positive integer greater than or equal to 1: the rotating mechanism first rotates the depth cameras and the carrying device relative to each other by a predetermined angle, then each depth camera photographs the user again and generates a second model file. A camera-and-carrying-device coordinate conversion step: the processing device applies, to each depth camera's model files, that camera's own camera-and-carrying-device coordinate conversion procedure, converting the first model file and second model file generated by each depth camera into a first conversion model file and a second conversion model file, respectively. A world coordinate conversion step: the processing device applies a camera-and-camera coordinate conversion procedure to each depth camera's first and second conversion model files, converting them into a first world coordinate model file and a second world coordinate model file, respectively. A fusion step: all first world coordinate model files and all second world coordinate model files are fused into a three-dimensional model file. A manufacturing step: an affected-area size is calculated from the three-dimensional model file and the affected-area data in the user's demand information, and production equipment is then controlled to produce a pressure fabric according to that size and the applied-pressure data in the user's demand information.

In summary, the manufacturing method of the pressure fabric of the present invention builds a three-dimensional model file of the user in a non-contact manner using depth cameras, and then manufactures the pressure garment from that model file. Throughout the manufacturing process, staff essentially never need to touch the patient's affected area, avoiding the discomfort caused by contact.

For a further understanding of the features and technical content of the present invention, refer to the following detailed description and accompanying drawings. These descriptions and drawings serve only to illustrate the present invention and in no way limit its scope of protection.

In the following description, a reference to a specific figure (or a statement that something is shown in a specific figure) merely emphasizes that most of the related content appears in that figure; the subsequent description is not limited to referring only to that figure.

Referring to FIG. 1 and FIG. 2 together, in the manufacturing method of the pressure fabric of the present invention, a scanning system A first performs a scanning step S1 to carry out a three-dimensional scan of a user and obtain a three-dimensional model file A11; a production device (not shown, e.g., various textile machines) is then controlled to manufacture a pressure fabric according to the three-dimensional model file A11 and a piece of user demand information.

As shown in FIG. 2, the scanning system A may include a processing device A1, a carrying device A2, a rotating mechanism (not shown), four supporting structures A3, and four depth cameras A4. The processing device A1 is electrically connected to the rotating mechanism and can control it so that the four supporting structures A3 and the carrying device A2 rotate relative to each other. The processing device A1 may, for example, comprise various microprocessors; in other embodiments it may be a computer, a server, or the like.

The carrying device A2 has a bearing surface A21 on which the user stands. The four supporting structures A3 are arranged around the carrying device A2; the appearance of the carrying device A2 is not limited to what is shown in the figure. The rotating mechanism may, for example, include a turntable (not shown) and a motor (not shown): the turntable is connected to the carrying device A2 and to the motor, and the processing device A1 can control the motor so that the turntable rotates by a predetermined angle. The components of the rotating mechanism are not limited to the above; any arrangement that rotates the carrying device A2 and the supporting structures A3 relative to each other falls within its practical scope. In this embodiment the carrying device A2 is driven by the rotating mechanism and rotates relative to the four supporting structures A3, but this is only an example; in other embodiments the carrying device A2 may instead be fixed while the four supporting structures A3 are rotated by the rotating mechanism relative to it.

As shown in FIG. 1 and FIG. 3 (FIG. 3 is a top view of the scanning system A), each supporting structure A3 carries one depth camera A4, and the four depth cameras A4 lie roughly in the same plane; specifically, their optical axes are coplanar. In practice, the four supporting structures A3 may all be attached to a fixed structure A5, and in the top view the four depth cameras A4 may sit roughly at the 12, 3, 6, and 9 o'clock positions around the carrying device A2; that is, the included angle P1 between the optical axes OA of any two adjacent depth cameras A4 is 90 degrees.

In an embodiment where the scanning system A has three supporting structures A3, each carrying one depth camera A4, the angle between the optical axes of two adjacent depth cameras A4 may instead be 120 degrees. In other words, in embodiments where each supporting structure A3 carries only one depth camera A4, the angle P1 between the optical axes of two adjacent depth cameras and the number Q of supporting structures A3 in the scanning system A may satisfy: 360/Q = P1.
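The 360/Q = P1 relationship can be sketched in a few lines of Python. The function names and the plan-view position helper are illustrative assumptions, not part of the patent:

```python
import math

def camera_angles(q: int) -> list[float]:
    """Optical-axis bearing (degrees) of each of the q depth cameras,
    spaced evenly so adjacent cameras are P1 = 360 / Q apart."""
    p1 = 360.0 / q
    return [i * p1 for i in range(q)]

def camera_positions(q: int, radius: float) -> list[tuple[float, float]]:
    """Plan-view (x, y) positions of the cameras, all at the same
    horizontal distance `radius` from the central axis C."""
    return [
        (radius * math.cos(math.radians(a)), radius * math.sin(math.radians(a)))
        for a in camera_angles(q)
    ]

# Four cameras -> adjacent optical axes 90 degrees apart.
print(camera_angles(4))   # [0.0, 90.0, 180.0, 270.0]
# Three cameras -> 120 degrees apart.
print(camera_angles(3))   # [0.0, 120.0, 240.0]
```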

As shown in FIG. 1 and FIG. 3, when the processing device A1 actuates the rotating mechanism, the carrying device A2 rotates about a central axis C relative to the four supporting structures A3, and each depth camera A4 is at the same horizontal distance L from the central axis C. Specifically, the four depth cameras A4 may be identical models, and the photosensitive element of each is at the same horizontal distance from the central axis C.

In the drawings of this embodiment each supporting structure A3 carries a single depth camera A4, but the number of depth cameras per supporting structure is not limited to one. In other embodiments a single supporting structure A3 may carry two or more depth cameras A4. With two cameras, they may be mounted one above the other on the supporting structure A3 and used to scan the user's upper body and lower body respectively; with three, each camera may scan a different part of the body.

As shown in FIG. 1 and FIG. 2, the scanning system A can perform the scanning step S1 while the user stands on the bearing surface A21. In practice, the carrying device A2 may be fitted with a detector that senses whether a user is standing on the bearing surface A21, and the processing device A1 can then perform the scanning step S1 and a manufacturing step S2 in sequence according to the detector's result. The scanning step S1 includes:

A first model file generating step S11: each depth camera A4 photographs the user standing on the bearing surface A21, so that each depth camera A4 generates a corresponding first model file A41;

A second model file generating step S12, performed M times, where M is a positive integer greater than or equal to 1: the rotating mechanism is first controlled so that the depth cameras A4 and the carrying device A2 rotate relative to each other by a predetermined angle, and each depth camera A4 then photographs the user, generating a corresponding second model file A42;

A camera-and-carrying-device coordinate conversion step S13: the processing device A1 applies, to the model files of each depth camera A4, that camera's own camera-and-carrying-device coordinate conversion procedure, converting the first model file A41 and the second model file A42 of each depth camera A4 into a first conversion model file A12 and a second conversion model file A13, respectively;

A world coordinate conversion step S14: the processing device A1 applies a camera-and-camera coordinate conversion procedure to the first conversion model file A12 and the second conversion model file A13 of each depth camera A4, converting them into a first world coordinate model file A14 and a second world coordinate model file A15, respectively;

A fusion step S15: all first world coordinate model files A14 and all second world coordinate model files A15 are fused into a three-dimensional model file A11;

A manufacturing step S2: an affected-area size is calculated from the three-dimensional model file A11 and the affected-area data in the user demand information, and the production equipment is then controlled to produce the pressure fabric according to that size and the applied-pressure data in the user demand information.
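The flow of steps S11 through S15 can be sketched as a minimal point-cloud pipeline in Python/NumPy. Everything here (function names, the use of 4x4 homogeneous matrices, fusion by simple concatenation) is an illustrative assumption rather than the patent's actual implementation:

```python
import numpy as np

def apply_rigid(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid transformation matrix T to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
    return (homo @ T.T)[:, :3]

def scan_and_fuse(cameras, T_cam_carrier, T_cam_world, rotations=1):
    """Sketch of steps S11-S15.

    cameras       : list of callables, each returning an (N, 3) cloud (a 'model file')
    T_cam_carrier : per-camera 4x4 camera-and-carrying-device matrices (step S13)
    T_cam_world   : per-camera 4x4 camera-and-camera matrices into world
                    coordinates (step S14)
    rotations     : M, the number of repetitions of step S12
    """
    clouds = []
    for shot in range(rotations + 1):          # S11 once, then S12 M times
        for i, capture in enumerate(cameras):
            model = capture()                              # first/second model file
            converted = apply_rigid(T_cam_carrier[i], model)  # S13
            world = apply_rigid(T_cam_world[i], converted)    # S14
            clouds.append(world)
    return np.vstack(clouds)                   # S15: naive fusion by concatenation
```

A real system would register the rotated shots against each other before merging; concatenation stands in for that here only to make the data flow visible.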

In practice, after the processing device A1 controls each depth camera A4 to generate the first model file A41, it may then perform a noise removal operation and an outlier removal operation on the point cloud contained in the first model file; likewise, after each depth camera A4 generates the second model file A42, the processing device A1 may perform the same noise and outlier removal on the point cloud of the second model file. This helps the fusion step S15 produce a better three-dimensional model file. In addition, a surface reconstruction step may follow the fusion step S15, applying a 3D surface reconstruction procedure to the three-dimensional model to further improve the resulting three-dimensional model file.
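The patent does not specify which outlier-removal algorithm is used; a common choice for depth-camera clouds is statistical outlier removal, sketched below as an assumed example:

```python
import numpy as np

def remove_outliers(points: np.ndarray, k: int = 8, std_ratio: float = 2.0) -> np.ndarray:
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds the global mean of that statistic
    by more than `std_ratio` standard deviations."""
    # Full pairwise distance matrix; fine for small clouds,
    # use a KD-tree for large ones.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is distance to self
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]
```

With a dense cluster plus one stray point, the stray point's neighbour distances dominate the statistic and it is filtered out while the cluster survives.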

The number of times the second model file generating step S12 is performed may be chosen based on the supporting structures A3, the depth cameras A4, the predetermined angle, and the cameras' field of view (FOV). Assuming each supporting structure A3 carries only a single depth camera A4: the more supporting structures A3 there are, the smaller the predetermined angle can be, and conversely, the fewer there are, the larger the predetermined angle must be. Likewise, with more supporting structures A3 a depth camera A4 with a relatively small field of view can be used, whereas with fewer supporting structures a camera with a relatively large field of view is needed.

To help the fusion step S15 produce a more accurate three-dimensional model file A11, in a preferred embodiment the overlap ratio between the point cloud of the first model file A41 and the point cloud of the second model file A42 generated by each depth camera A4 is greater than 60%. In embodiments where the second model file generating step is performed two or more times, the overlap ratio between the point clouds of two successively generated second model files A42 is likewise greater than 60%. Similarly, if two adjacent depth cameras A4 are defined as a first depth camera and a second depth camera, the overlap ratio between the point cloud of the first model file generated by the first depth camera and the point cloud of the first model file A41 generated by the second depth camera is greater than 60%, and the overlap ratio between the point cloud of the second model file generated by the first depth camera and the point cloud of the first model file generated by the second depth camera is greater than 60%. All of this helps the fusion step S15 produce a more accurate three-dimensional model file A11.
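The patent states a >60% overlap criterion without defining the overlap metric. One simple proxy, offered here purely as an assumption, is the fraction of points in one cloud that have a near neighbour in the other:

```python
import numpy as np

def overlap_ratio(cloud_a: np.ndarray, cloud_b: np.ndarray, tol: float = 0.01) -> float:
    """Fraction of points in cloud_a with at least one point of cloud_b
    within `tol` (same length unit as the clouds). An illustrative
    stand-in for the patent's unspecified overlap metric."""
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=-1)
    return float((d.min(axis=1) <= tol).mean())
```

Two clouds of ten collinear points shifted by four positions, for instance, share six points, giving a ratio of 0.6, exactly the stated threshold.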

The first model file A41 and the second model file A42 are data sets output directly by the depth cameras A4; each may contain, for every pixel of the captured image, an X coordinate, a Y coordinate, a Z coordinate, a depth value, and so on. In the manufacturing step S2, either the processing device A1 or the production equipment may be controlled to calculate the affected-area size from the three-dimensional model file A11 and the affected-area data; this is not limited here. The depth cameras A4 may capture images using, for example, structured light or photogrammetry.

In the embodiment where the scanning system A has four depth cameras A4, the processing device A1 stores four camera-and-carrying-device rigid transformation matrices and four camera-and-camera rigid transformation matrices. In the camera-and-carrying-device coordinate conversion step S13, when the processing device A1 runs the conversion procedure for each depth camera A4, it uses that camera's camera-and-carrying-device rigid transformation matrix to convert the camera's first model file A41 and second model file A42 into the first conversion model file A12 and the second conversion model file A13, respectively. In the world coordinate conversion step S14, when the processing device A1 runs the camera-and-camera conversion procedure for each depth camera A4, it uses that camera's camera-and-camera rigid transformation matrix to convert the first conversion model file A12 and the second conversion model file A13 into the first world coordinate model file A14 and the second world coordinate model file A15, respectively.
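A rigid transformation matrix of the kind stored per camera is conventionally a 4x4 homogeneous matrix combining rotation and translation. The sketch below, with made-up example values (the patent's matrices would come from camera calibration), shows the two conversion steps chained:

```python
import numpy as np

def rigid_matrix(rotation_deg: float, translation) -> np.ndarray:
    """Build a 4x4 rigid transformation matrix: rotation about the
    vertical z axis plus a translation. Illustrative only."""
    t = np.radians(rotation_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation
    return T

def transform(T: np.ndarray, cloud: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid matrix to an (N, 3) point cloud."""
    homo = np.hstack([cloud, np.ones((len(cloud), 1))])
    return (homo @ T.T)[:, :3]

# Chaining the two conversion steps: camera frame -> carrier frame -> world frame.
T_cam_carrier = rigid_matrix(90.0, (0.0, 0.0, 0.0))   # step S13 matrix (example values)
T_carrier_world = rigid_matrix(0.0, (1.0, 0.0, 0.0))  # step S14 matrix (example values)
cloud = np.array([[1.0, 0.0, 0.0]])
world = transform(T_carrier_world, transform(T_cam_carrier, cloud))
print(np.round(world, 6))   # [[1. 1. 0.]]
```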

It is worth mentioning that the affected-area data describes the position, extent, and so on of the affected area of the body, from which the processing device A1 can determine where, and over what area, the pressure fabric must cover the affected area. The applied-pressure data describes the pressure values the fabric should exert on the affected area when worn, from which the processing device A1 can determine how much pressure the fabric needs to apply to the body surface.

As described above, the manufacturing method of the pressure fabric of the present invention, through the design of the scanning step S1 and the manufacturing step S2, can quickly and accurately produce a pressure fabric that meets the user's needs. Traditional pressure fabrics are made by manually measuring the relevant dimensions of the body part to be covered, which is time-consuming and labor-intensive, and measurements taken by different people may differ for various reasons. By contrast, the present method, which scans with multiple depth cameras, avoids these problems of manual measurement; the scanning step S1 can be completed within one minute, whereas manual measurement takes at least 15 to 30 minutes.

Referring to FIG. 4 and FIG. 5 together, FIG. 4 is a flowchart of one embodiment of the manufacturing method of the pressure fabric of the present invention, and FIG. 5 shows a display device displaying a three-dimensional image corresponding to the three-dimensional model file. The main difference between this embodiment and the previous one is that after the fusion step S15 it further includes a feature calibration step S16, a display step S17, and a judgment step S18.

The feature calibration step S16 is: compare the three-dimensional model file with at least one standard three-dimensional model file stored in a database, so as to divide at least part of the point cloud of the three-dimensional model file into a plurality of predetermined parts, and designate some of the point coordinates in each predetermined part as part feature points.

In practice, the database may store standard three-dimensional model files corresponding to different populations, such as Asians, Americans, and Europeans, and in the feature calibration step S16 the processing device may determine, for example from ethnicity data contained in the user demand information, which standard model file to retrieve from the database for comparison. The stored standard model files may of course be further differentiated by region, height, weight, body fat, gender, and so on. For example, one standard three-dimensional model file might be labeled as an Asian male weighing 70-75 kg, with 20-25% body fat and a height of 175-180 cm; when the processing device determines from the user demand information that the user is an Asian male whose weight, body fat, and height all fall within that file's labeled ranges, it retrieves that file from the database for comparison. In other embodiments, in the feature calibration step S16 the processing device may instead use a trained machine learning model to decide which standard three-dimensional model file the current three-dimensional model file should be compared against.
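The range-matching example above can be sketched as follows. The field names, the dataclass schema, and the "first match wins" rule are illustrative assumptions, not the patent's database design:

```python
from dataclasses import dataclass

@dataclass
class StandardModel:
    """One entry in the standard-model database (assumed schema)."""
    label: str
    region: str
    gender: str
    weight_kg: tuple[float, float]
    body_fat_pct: tuple[float, float]
    height_cm: tuple[float, float]

def select_standard_model(db, region, gender, weight, body_fat, height):
    """Return the first stored standard model whose labeled ranges all
    contain the user's values, mirroring the matching example above."""
    for m in db:
        if (m.region == region and m.gender == gender
                and m.weight_kg[0] <= weight <= m.weight_kg[1]
                and m.body_fat_pct[0] <= body_fat <= m.body_fat_pct[1]
                and m.height_cm[0] <= height <= m.height_cm[1]):
            return m
    return None

db = [StandardModel("asian-male-A", "Asia", "male", (70, 75), (20, 25), (175, 180))]
match = select_standard_model(db, "Asia", "male", 72, 22, 178)
print(match.label)   # asian-male-A
```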

As shown in FIG. 5, the display step S17 is: input the three-dimensional model file into a display device A6, so that a display screen A61 of the display device A6 shows a three-dimensional image A62. The image A62 contains a plurality of predetermined regions corresponding to the predetermined parts of the three-dimensional model file; each predetermined region contains a plurality of image points (the black dots inside the contour A623 in FIG. 5), which correspond to the point coordinates of one predetermined part of the three-dimensional model. Specifically, each predetermined region corresponds to a body part, such as the head, chest, waist, hips, left thigh, right thigh, left calf, right calf, left upper arm, left forearm, right upper arm, right forearm, left palm, right palm, left sole, or right sole.

Some of the image points in each predetermined region are presented in a first style, and the remaining image points are presented in a second style; each image point presented in the second style is defined as a feature image point, and each feature image point corresponds to one of the part feature points in the three-dimensional model file. Specifically, the user can view the appearance of the three-dimensional image through the image points presented in the first style, while the feature image points serve as reference coordinate points for subsequent size measurement.

The display device A6 is electrically connected to an input device (not shown). According to the user's operation, the input device can change at least one image point presented in the first style so that it is presented in the second style, or change at least one image point presented in the second style so that it is presented in the first style. When the user modifies the style of any image point through the input device, the input device generates a corresponding modification file and transmits it to the processing device, which can then modify the point coordinates and part feature points in the three-dimensional model file accordingly. The input device may be, for example, a keyboard, a mouse, a touch screen or a writing tablet, without limitation. As for the specific appearance of the first style and the second style, there is no limitation as long as the relevant personnel can clearly tell the two apart on the display screen A61.
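The style-toggle-plus-modification-file mechanism described above can be sketched minimally as follows. The style names, the dictionary layout of an image point, and the shape of the modification record are assumptions for illustration only; the patent does not specify a data format:

```python
# Hypothetical sketch: toggling an image point between the first style
# (ordinary point) and the second style (feature point), and recording the
# change as a "modification file" entry for the processing device.
FIRST_STYLE, SECOND_STYLE = "solid", "hollow"

def toggle_point_style(points, index):
    """Flip one point's style in place and return the modification record."""
    old = points[index]["style"]
    new = SECOND_STYLE if old == FIRST_STYLE else FIRST_STYLE
    points[index]["style"] = new
    return {"index": index, "coord": points[index]["coord"],
            "old_style": old, "new_style": new}

points = [{"coord": (1.0, 2.0, 3.0), "style": "solid"},
          {"coord": (1.5, 2.1, 3.2), "style": "hollow"}]
record = toggle_point_style(points, 0)
print(record["new_style"])  # hollow
```

The processing device would consume such a record to mark (or un-mark) the matching point coordinate in the three-dimensional model file as a part feature point.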

In practical applications, in the display step S17 the relevant personnel can view the three-dimensional image A62 on a display screen A61 of the display device A6. Besides the plurality of image points, the three-dimensional image A62 may also include at least one contour line A623 and a plurality of region contour lines A624. The contour line A623 mainly presents the outward appearance of the three-dimensional image A62, while the area enclosed by each region contour line A624 corresponds to one of the predetermined portions of the aforementioned three-dimensional model file. In this way, by viewing the region contour lines A624 of the three-dimensional image A62 on the display screen A61, the relevant personnel can judge whether the predetermined portions marked by the processing device in the three-dimensional model correctly correspond to the respective parts of the human body (such as the shoulders, chest, waist, hips, thighs, calves, arms and elbows).

The judging step S18 is: judging whether any of the image points included in the three-dimensional image has been modified. If the style of no image point has been modified, the aforementioned manufacturing step S2 is executed; if the style of any image point has been modified, a manufacturing step S2A is executed instead: the processing device first uses the modification file generated by the input device to un-mark at least one of the point coordinates in the three-dimensional model file as a part feature point and to mark one of the point coordinates as a part feature point; the processing device then looks up the corresponding part feature points in the three-dimensional model file according to the affected-part data, and uses the found part feature points to calculate the affected-part size at the position of the three-dimensional model file corresponding to the affected-part data.

Specifically, the modification file records the relevant data of each image point whose style was modified (such as its coordinates and original style). Based on the modification file, the processing device can either find the corresponding point coordinate in the three-dimensional model file and mark it as a part feature point, or find the corresponding part feature point in the three-dimensional model file and remove its designation as a part feature point.

As shown in FIG. 5 and FIG. 6, FIG. 6 is a partial cross-sectional perspective view of the three-dimensional image A62 at the waist. For example, suppose the relevant personnel intend to make a pressure garment for the user to wear on the waist. In the feature calibration step S16, the processing device uses comparison to determine which portion of the three-dimensional model file corresponds to the waist of the human body, and marks some of the point coordinates of that portion as part feature points. Then, when the relevant personnel click on the waist of the three-dimensional image, the display screen A61 of the display device A6 may, for example, switch from FIG. 5 to FIG. 6, and the relevant personnel will see in the display screen A61 that the position of the three-dimensional image A62 corresponding to the waist is framed by a region contour line A624, around which are a plurality of image points A621 shown as solid dots (the first style) and a plurality of feature image points A622 shown as hollow dots (the second style). Each solid dot and each hollow dot corresponds to one of the point coordinates in the three-dimensional model file, while the region contour line A624 may be a waist contour simulated by the processing device, used only to let the user see the outline of the waist more clearly.

In the manufacturing step S2, when calculating the user's waist circumference, the processing device connects the feature image points A622 located at the waist of the three-dimensional image (i.e., the part feature points of the predetermined portion of the three-dimensional model file corresponding to the waist) into a line, and then calculates the length of that line to obtain the user's waist circumference. It should be noted that the processing device may use the three-dimensional model file and the part feature points to calculate the size of the part in any of various mathematical ways, without limitation.
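The circumference computation described above can be sketched as the length of a closed polyline through the feature points. This is a minimal illustration, assuming the points are already ordered along the body contour (the patent leaves the exact mathematical method open):

```python
import math

def girth(feature_points):
    """Approximate a body-part circumference as the length of the closed
    polyline through its feature points (assumed ordered along the contour)."""
    total = 0.0
    n = len(feature_points)
    for k in range(n):
        p1 = feature_points[k]
        p2 = feature_points[(k + 1) % n]  # wrap around to close the loop
        total += math.dist(p1, p2)
    return total

# Four points on a 1 m x 1 m square "waist" at height z = 1 -> perimeter 4.0
square = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print(girth(square))  # 4.0
```

With densely sampled feature points the polyline length converges to the true girth; a sparse sample, as here, underestimates a curved contour.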

Following the above example, in the display step S17 the relevant personnel can see on the display screen A61 the image points A622 that the processing device has marked as part feature points and presented in the second style. At this point, the relevant personnel can selectively modify at least one image point A622 through the input device to change the basis on which the processing device subsequently calculates the size of the affected part; in this way, the processing device can calculate the waist circumference more accurately.

In different embodiments, in the feature calibration step S16 the processing device may instead load the three-dimensional model file A11 into a machine learning model, so as to use the machine learning model to divide at least part of the point cloud contained in the three-dimensional model file into a plurality of predetermined portions and to mark some of the point coordinates contained in each predetermined portion as part feature points. In practical applications, to help the machine learning model mark the parts of the human body in the three-dimensional model file A11 more accurately, the processing device may store multiple machine learning models and select the appropriate one according to the ethnicity, height, weight, gender, body fat, etc. in the user demand information to calibrate the current three-dimensional model file. That is, the multiple machine learning models stored in the processing device are trained with different training files; for example, one of them may be trained with three-dimensional model files of Asian males, another with three-dimensional model files of American females, and so on. In this way, the machine learning models can more accurately mark how the current three-dimensional model file corresponds to the parts of the human body.

As described above, the design of the feature calibration step S16 and the display step S17 enables the production equipment to more precisely manufacture a pressure fabric that matches the size of the part on which the user intends to wear it.

Please refer to FIG. 7, which is a flowchart of one embodiment of the method for manufacturing a pressure fabric of the present invention. The biggest difference between this embodiment and the previous one is that, between the fusion step S15 and the manufacturing step S2, the processing device further divides at least part of the point cloud contained in the three-dimensional model file into a plurality of predetermined portions, and the processing device also performs the following steps:

A display step S16X: inputting the three-dimensional model file into a display device so that a corresponding three-dimensional image is shown on the display device. The three-dimensional image includes a plurality of predetermined regions corresponding to the plurality of predetermined portions of the three-dimensional model file; each predetermined region contains a plurality of image points, and the image points in each predetermined region correspond to the point coordinates contained in one of the predetermined portions of the three-dimensional model. Each image point is presented in a first style, and the display device is electrically connected to an input device.

A calibration step S17X: using the input device, according to the user's operation, to change at least one image point presented in the first style so that it is presented in a second style, and using the input device to generate a corresponding modification file.

A modification step S18X: causing the processing device, according to the modification file, to mark each corresponding point coordinate in the three-dimensional model file as a part feature point.

Another difference between this embodiment and the previous one is that the manufacturing step S2B executed after the modification step S18X is not exactly the same as the aforementioned manufacturing step S2: in the manufacturing step S2B, the corresponding predetermined portion of the three-dimensional model file is first found according to the affected-part data, and the part feature points contained in that predetermined portion are then used to calculate the affected-part size.

In the previous embodiment, the processing device automatically marks the part feature points in the three-dimensional model by comparing it against a standard three-dimensional model in the database or by using a machine learning model. In this embodiment, however, the processing device does not automatically mark the part feature points in the three-dimensional model; the relevant personnel will only see image points presented in the first style on the display screen, and must use the input device to select image points themselves so that those points are presented in the second style and serve as the part feature points.

As described above, the design of the calibration step S17X and the modification step S18X allows the relevant personnel to manufacture a pressure fabric that better meets the user's needs; that is, the calibration step S17X and the modification step S18X of this embodiment let the relevant personnel produce a customized pressure fabric according to the actual location of the user's affected part. For example, the part on which the patient intends to wear the pressure fabric may not be a common one; in that case, the relevant personnel can use the calibration step S17X and the modification step S18X of this embodiment to select part feature points according to the part the patient requires, and thereby manufacture a pressure fabric that fully meets the patient's needs.

Please refer to FIG. 8 to FIG. 10 together. FIG. 8 is a flowchart of one embodiment of the method for manufacturing a pressure fabric of the present invention, FIG. 9 is a schematic view of performing a camera-and-carrying-device calibration step on a depth camera, and FIG. 10 is a top view of the carrying device provided with a reference body. The biggest difference between this embodiment and the previous ones is that, before the scanning step S1 is performed with the scanning system, a camera-and-carrying-device calibration step is further performed for each depth camera, which includes:

A reference-body placement step S011: placing a reference body D on the carrying surface A21 so that a reference surface D1 of the reference body is perpendicular to the carrying surface A21 and parallel to a central axis C passing through the carrying surface A21. When the processing device controls the rotation mechanism, the depth cameras and the carrying device rotate relative to each other about the central axis C, and the intersection of the central axis C and the carrying surface A21 is defined as a world coordinate origin OP, $[O_x, O_y, O_z]^T$.

A first calibration-plane-equation calculation step S012: controlling the depth camera A4 to photograph the reference surface D1 so that the depth camera A4 generates first calibration point-cloud information, which is used to calculate a first calibration plane equation, $A_iX + B_iY + C_iZ + D_i = 0$, and a first plane normal vector, $[A_i, B_i, C_i]^T$. In this step the reference surface is defined as a first reference surface; the first calibration plane equation describes the first reference surface in space, and the first plane normal vector is the normal vector of the first reference surface in space.

A second calibration-plane-equation calculation step S013 is executed n times, where n is a positive integer greater than or equal to 3 and greater than M. The second calibration-plane-equation calculation step S013 includes: first controlling the rotation mechanism so that the depth camera A4 and the carrying device A2 rotate relative to each other by a calibration angle; then controlling the depth camera A4 to photograph the reference surface D1 of the reference body D to generate second calibration point-cloud information, which is used to calculate a second calibration plane equation, $A_jX + B_jY + C_jZ + D_j = 0$, and a second plane normal vector, $[A_j, B_j, C_j]^T$. In this step the reference surface is defined as a second reference surface; the second calibration plane equation describes the second reference surface in space, and the second plane normal vector is the normal vector of the second reference surface in space.

An average-normal-vector calculation step S014: using the first plane normal vector and the n second plane normal vectors to calculate an average normal vector, $[n_x, n_y, n_z]^T$, of the carrying surface A21; the average normal vector is the direction vector of the central axis C.

A carrying-surface plane-equation calculation step S015: defining the world coordinate origin $[O_x, O_y, O_z]^T$ as the origin of the world coordinate system, and using the average normal vector to define a carrying-surface plane equation, $n_xX + n_yY + n_zZ = 0$.

A first intersection-line-equation calculation step S016: using the first calibration plane equation ($A_iX + B_iY + C_iZ + D_i = 0$) and the carrying-surface plane equation ($n_xX + n_yY + n_zZ = 0$) to calculate a first intersection-line equation, which describes a first intersection line CL1 where the first reference surface meets the carrying surface.

A second intersection-line-equation calculation step S017: using the second calibration plane equation ($A_jX + B_jY + C_jZ + D_j = 0$) and the carrying-surface plane equation ($n_xX + n_yY + n_zZ = 0$) to calculate a second intersection-line equation, which describes a second intersection line CL2 where the second reference surface meets the carrying surface.

A rigid-transformation-matrix calculation step S018: using the fact that the vertical distance $d_i$ from the first intersection line CL1 to the world coordinate origin $[O_x, O_y, O_z]^T$ equals the vertical distance $d_j$ from the second intersection line CL2 to the world coordinate origin $[O_x, O_y, O_z]^T$, to derive a coordinate conversion relational expression containing the predetermined angle θ. In the camera-and-carrying-device coordinate conversion procedure, the processing device uses this coordinate conversion relational expression to convert the first model file into the first conversion model file and the second model file into the second conversion model file. The coordinate conversion relational expression is:

$$P_{rot} = R_\theta\left(P - [O_x, O_y, O_z]^T\right) + [O_x, O_y, O_z]^T$$

$P_{rot}$ is the point-coordinate matrix contained in the first conversion model file or the second conversion model file; θ is the predetermined angle; $R_\theta$ is the rotation matrix for a rotation by θ about the central axis C, which, with the unit axis direction $\mathbf{n} = [n_x, n_y, n_z]^T$, can be written in axis–angle (Rodrigues) form as

$$R_\theta = \cos\theta\, I + (1-\cos\theta)\,\mathbf{n}\mathbf{n}^T + \sin\theta\,[\mathbf{n}]_\times ,$$

where $[\mathbf{n}]_\times$ is the skew-symmetric cross-product matrix of $\mathbf{n}$; $P$ is the point-coordinate matrix contained in the first model file or the second model file; and $[O_x, O_y, O_z]^T$ is the world coordinate origin.
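The conversion above is a rigid rotation of each point about the central axis through the world origin. A minimal sketch in pure Python, applying the axis–angle (Rodrigues) rotation point by point (a real implementation would apply the 3×3 matrix to the whole point-coordinate matrix at once):

```python
import math

def rotate_about_axis(p, origin, axis, theta):
    """Rotate point p by angle theta (radians) about the line through
    `origin` with unit direction `axis`: P_rot = R_theta(P - O) + O."""
    ox, oy, oz = origin
    nx, ny, nz = axis
    px, py, pz = (p[0]-ox, p[1]-oy, p[2]-oz)   # shift origin onto the axis
    c, s = math.cos(theta), math.sin(theta)
    dot = nx*px + ny*py + nz*pz                # n . v
    cx = ny*pz - nz*py                          # n x v
    cy = nz*px - nx*pz
    cz = nx*py - ny*px
    # Rodrigues: v*cos + (n x v)*sin + n*(n . v)*(1 - cos)
    rx = px*c + cx*s + nx*dot*(1-c)
    ry = py*c + cy*s + ny*dot*(1-c)
    rz = pz*c + cz*s + nz*dot*(1-c)
    return (rx+ox, ry+oy, rz+oz)               # shift back

# Rotate (1, 0, 0) by 90 degrees about the z-axis through the origin
x, y, z = rotate_about_axis((1, 0, 0), (0, 0, 0), (0, 0, 1), math.pi/2)
print(round(x, 9), round(y, 9), round(z, 9))  # 0.0 1.0 0.0
```

Because the transformation is rigid, applying it to a model file changes the pose of the point cloud but preserves all inter-point distances, which is what allows the subsequent fusion step to work.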

In the average-normal-vector calculation step S014, the average normal vector $[n_x, n_y, n_z]^T$ may be calculated with the following formula:

$$[n_x, n_y, n_z]^T = \frac{1}{n+1}\left([A_i, B_i, C_i]^T + \sum_{j=1}^{n}[A_j, B_j, C_j]^T\right)$$

In the above formula, n is the number of times the second calibration-plane-equation calculation step S013 is executed. In a preferred embodiment, n may be 5 or more; of course, the number n and the calibration angle must be chosen so that the depth camera can photograph the same reference surface D1 of the reference body D every time.
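Averaging the n + 1 measured normals can be sketched as follows; renormalizing the mean to unit length is an assumption of this example (it keeps the axis direction a unit vector), and the noisy measurements are synthetic illustration data:

```python
import math

def average_unit_normal(normals):
    """Average a list of consistently oriented plane normals and
    renormalize; used here as the direction of the central axis."""
    sx = sum(v[0] for v in normals)
    sy = sum(v[1] for v in normals)
    sz = sum(v[2] for v in normals)
    k = len(normals)
    ax, ay, az = sx/k, sy/k, sz/k
    norm = math.sqrt(ax*ax + ay*ay + az*az)
    return (ax/norm, ay/norm, az/norm)

# Noisy measurements of a normal that should point straight up (+z):
# the symmetric errors cancel in the average.
measured = [(0.01, 0.0, 1.0), (-0.01, 0.0, 1.0), (0.0, 0.02, 1.0),
            (0.0, -0.02, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
print(average_unit_normal(measured))  # (0.0, 0.0, 1.0)
```

Averaging several per-pose estimates is what makes the recovered axis direction robust to the noise of any single depth-camera shot, which is why the embodiment prefers n ≥ 5.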

In the rigid-transformation-matrix calculation step S018, the relationship that the vertical distance $d_i$ from the first intersection line CL1 to the world coordinate origin $[O_x, O_y, O_z]^T$ equals the vertical distance $d_j$ from the second intersection line CL2 to the world coordinate origin $[O_x, O_y, O_z]^T$ can be expressed mathematically as:

$$d_i = \frac{\left|A_iO_x + B_iO_y + C_iO_z + D_i\right|}{\sqrt{A_i^2 + B_i^2 + C_i^2}} = \frac{\left|A_jO_x + B_jO_y + C_jO_z + D_j\right|}{\sqrt{A_j^2 + B_j^2 + C_j^2}} = d_j$$

(since each reference surface is perpendicular to the carrying surface, the distance from the origin to the intersection line equals the distance from the origin to the corresponding reference plane).

The above mathematical expression, with the plane normals taken in a consistent orientation so that the absolute values can be dropped, can be written in the following matrix form:

$$\mathbf{A}\begin{bmatrix} O_x \\ O_y \\ O_z \end{bmatrix} = \mathbf{b}$$

In the above expression, the matrix $\mathbf{A}$ is related to $A_i$, $A_j$, $B_i$, $B_j$, $C_i$ and $C_j$, while the vector $\mathbf{b}$ is related to $D_i$ and $D_j$; the system can be solved for the world coordinate origin $[O_x, O_y, O_z]$ with a computer application program (for example, MATLAB) using the singular value decomposition (SVD) method.
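The equal-distance system can be illustrated with a small least-squares solve. Since the intersection lines all lie in the carrying surface, a 2D analogue in that plane captures the idea: each rotated reference face contributes a line $\hat{n} \cdot p + D = 0$ at the same (unknown) distance from the rotation centre, and pairing the lines gives linear equations in the centre. This sketch uses pure Python with normal equations and Cramer's rule in place of an SVD solver; the synthetic tangent-line data are assumptions for illustration:

```python
import math

def recover_center(lines):
    """lines: list of (nx, ny, D) with unit, consistently oriented normals,
    all at the same unknown distance from the centre. Solves the pairwise
    equal-distance equations (n_i - n_j) . c = D_j - D_i by least squares."""
    rows, rhs = [], []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            nix, niy, di = lines[i]
            njx, njy, dj = lines[j]
            rows.append((nix - njx, niy - njy))
            rhs.append(dj - di)
    # normal equations (A^T A) c = A^T b, solved by Cramer's rule (2x2)
    a11 = sum(r[0]*r[0] for r in rows)
    a12 = sum(r[0]*r[1] for r in rows)
    a22 = sum(r[1]*r[1] for r in rows)
    b1 = sum(r[0]*v for r, v in zip(rows, rhs))
    b2 = sum(r[1]*v for r, v in zip(rows, rhs))
    det = a11*a22 - a12*a12
    return ((b1*a22 - b2*a12) / det, (a11*b2 - a12*b1) / det)

# Synthetic data: lines tangent to a circle of radius 1.5 centred at (2, 3),
# normals pointing away from the centre, so n . c + D = -r for every line.
center, r = (2.0, 3.0), 1.5
lines = []
for deg in (0, 45, 120, 200):
    nx, ny = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    d = -(nx*center[0] + ny*center[1]) - r
    lines.append((nx, ny, d))
print(recover_center(lines))  # ≈ (2.0, 3.0)
```

The full 3D version in the patent is solved the same way, with SVD handling the stacked, generally overdetermined system.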

It should be particularly emphasized that the method for manufacturing a pressure fabric of the present invention uses the above approach to perform the coordinate registration between the depth cameras and the carrying device, rather than the commonly used iterative closest point (ICP) algorithm. In the method of the present invention, a good fusion result can be achieved as long as the overlap ratio between the point cloud contained in the first model file and the point cloud contained in the second model file generated by each depth camera is greater than 60%. By contrast, if the traditional ICP algorithm were used for the coordinate registration, the overlap ratio between the two point clouds would have to exceed 80% to achieve a good fusion result. Moreover, because ICP demands such a high point-cloud overlap ratio, the depth cameras and the carrying device could only be rotated by a small angle at a time, which would require a relatively long scanning time.

In a preferred embodiment, each time the second-model-file generation step S12 is executed, while the depth cameras and the carrying device are rotated relative to each other by the predetermined angle, an angle detector is also controlled to detect a real rotation angle of the carrying device relative to the depth cameras. If the difference between the real rotation angle and the predetermined angle is greater than a predetermined range, the processing device substitutes the real rotation angle into the above coordinate conversion relational expression to convert the first model file and the second model file into the first conversion model file and the second conversion model file, respectively.

Specifically, suppose the processing device controls the rotation mechanism to rotate by 45 degrees (the predetermined angle), but the real rotation angle actually detected by the angle detector is 35 degrees. If the processing device substituted θ = 45° into the coordinate conversion relational expression to convert the first model file and the second model file into the first conversion model file and the second conversion model file, it might ultimately be unable to fuse a correct three-dimensional model file. To avoid this situation, the processing device can instead substitute θ = 35° into the coordinate conversion relational expression, which should prevent the problem of failing to fuse a correct three-dimensional model file.
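The angle substitution described above reduces to a simple tolerance check. The 1.0-degree default tolerance below is an assumption for illustration; the patent only speaks of a "predetermined range":

```python
def effective_rotation_angle(commanded_deg, measured_deg, tolerance_deg=1.0):
    """Return the angle to substitute into the coordinate conversion:
    the measured angle when it deviates from the commanded angle by more
    than the tolerance, otherwise the commanded angle."""
    if abs(measured_deg - commanded_deg) > tolerance_deg:
        return measured_deg
    return commanded_deg

print(effective_rotation_angle(45.0, 35.0))  # 35.0 (large deviation)
print(effective_rotation_angle(45.0, 44.8))  # 45.0 (within tolerance)
```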

Another difference between this embodiment and the previous ones is that, before the scanning step S1 is performed with the scanning system, a camera-and-camera calibration step S02 is further performed for all the depth cameras. It includes: using Zhang Zhengyou's calibration method (Zhang's method) to perform coordinate registration between each two adjacent depth cameras, thereby obtaining the rigid transformation matrix required in the camera-and-camera coordinate conversion procedure.

As described above, performing coordinate registration of the depth cameras and the carrying device through the camera-and-carrying-device calibration step S01 and the camera-and-camera calibration step S02 has the advantages of faster scanning and relatively faster registration compared with the traditional ICP-based registration method.

The above are merely preferred feasible embodiments of the present invention and do not thereby limit the patent scope of the present invention; accordingly, all equivalent technical changes made using the contents of the specification and drawings of the present invention are included within the protection scope of the present invention.

A: scanning system A1: processing device A11: three-dimensional model file A12: first conversion model file A13: second conversion model file A14: first world-coordinate model file A15: second world-coordinate model file A2: carrying device A21: carrying surface A3: support structure A4: depth camera A41: first model file A42: second model file A5: fixing structure A6: display device A61: display screen A62: three-dimensional image A621: image point A622: feature image point A623: contour line A624: region contour line C: central axis CL1: first intersection line CL2: second intersection line D: reference body D1: reference surface d_i: vertical distance d_j: vertical distance OA: optical axis OP: world coordinate origin P1: included angle L: horizontal distance

FIG. 1 is a schematic view of a scanning system suitable for the method for manufacturing a pressure fabric of the present invention.

FIG. 2 is a flowchart of one embodiment of the method for manufacturing a pressure fabric of the present invention.

FIG. 3 is a schematic top view of a scanning system suitable for the method for manufacturing a pressure fabric of the present invention.

FIG. 4 is a flowchart of one embodiment of the method for manufacturing a pressure fabric of the present invention.

FIG. 5 is a schematic view of a display device showing a three-dimensional image corresponding to a three-dimensional model file.

圖6顯示為三維立體影像於腰部的局部剖面立體示意圖。FIG. 6 is a schematic perspective view of a partial cross-section of a 3D stereoscopic image at the waist.

圖7顯示為本發明的壓力織物的製造方法的其中一實施例的流程示意圖。FIG. 7 is a schematic flow chart showing one embodiment of the manufacturing method of the pressure fabric of the present invention.

圖8顯示為本發明的壓力織物的製造方法的其中一實施例的流程示意圖。FIG. 8 is a schematic flow chart showing one embodiment of the manufacturing method of the pressure fabric of the present invention.

圖9顯示為對深度相機執行相機及承載裝置校正步驟的示意圖。FIG. 9 is a schematic diagram illustrating the steps of performing camera and carrier calibration on the depth camera.

圖10顯示為設置有參考體的承載裝置的俯視圖。Figure 10 shows a top view of a carrier device provided with a reference body.

S1, S11~S15, S2: process steps

Claims (9)

1. A method of manufacturing a pressure garment, comprising the following steps: using a scanning system to perform a scanning step so as to carry out a three-dimensional scan of a user; the scanning system comprises a processing device, a carrying device, a plurality of support structures, a plurality of depth cameras, and a rotating mechanism; the processing device is electrically connected to the rotating mechanism and can control the rotating mechanism so that the support structures and the carrying device rotate relative to each other; the carrying device has a carrying surface for the user to stand on; the support structures are arranged around the carrying device, each support structure is provided with one of the depth cameras, and every depth camera is at the same horizontal distance from the carrying device; the scanning system can perform the scanning step while the user stands on the carrying surface, the scanning step comprising: a first-model-file generation step: controlling each depth camera to photograph the user standing on the carrying surface, so that each depth camera generates a first model file; performing a second-model-file generation step M times, M being a positive integer greater than or equal to 1, the second-model-file generation step comprising: first controlling the rotating mechanism so that each depth camera and the carrying device rotate relative to each other by a predetermined angle, and then controlling each depth camera to photograph the user again, so that each depth camera generates a second model file; a camera-to-carrying-device coordinate conversion step: using the processing device to apply, to the first model file and the second model file of each depth camera, a camera-to-carrying-device coordinate conversion procedure corresponding to that depth camera, so as to convert the first model file and the second model file generated by each depth camera into a first converted model file and a second converted model file, respectively; a world-coordinate conversion step: using the processing device to apply a camera-to-camera coordinate conversion procedure to the first converted model file and the second converted model file of each depth camera, so as to convert them into a first world-coordinate model file and a second world-coordinate model file, respectively; a fusion step: fusing all of the first world-coordinate model files and all of the second world-coordinate model files into a three-dimensional model file; a manufacturing step: calculating an affected-area size from the three-dimensional model file and affected-area data in a piece of user requirement information, and then controlling production equipment to produce a pressure garment according to the affected-area size and applied-pressure data in the user requirement information; wherein, when the processing device controls the rotating mechanism, the carrying device is driven by the rotating mechanism to rotate relative to the depth cameras; wherein, before the scanning step is performed with the scanning system, the method further comprises performing a camera-and-carrying-device calibration step for each depth camera, comprising: a reference-body placement step: placing a reference body on the carrying surface so that a reference surface of the reference body is perpendicular to the carrying surface and parallel to a central axis passing through the carrying surface; wherein, when the processing device controls the rotating mechanism, the cameras and the carrying device rotate relative to each other about the central axis, and the intersection of the central axis and the carrying surface is defined as a world-coordinate origin; a first calibration-plane-equation calculation step: controlling the camera to photograph the reference surface so that the camera generates first calibration point-cloud information, and using the first calibration point-cloud information to calculate a first calibration plane equation and a first plane normal vector; in this step the reference surface is defined as a first reference surface, the first calibration plane equation describes the first reference surface in space, and the first plane normal vector is the normal vector of the first reference surface in space; performing a second calibration-plane-equation calculation step n times, n being a positive integer greater than or equal to 3 and greater than M: controlling the rotating mechanism so that the depth camera and the carrying device rotate relative to each other by a calibration angle; controlling the depth camera to photograph the reference surface of the reference body to generate second calibration point-cloud information, and using the second calibration point-cloud information to calculate a second calibration plane equation and a second plane normal vector; in this step the reference surface is defined as a second reference surface, the second calibration plane equation describes the second reference surface in space, and the second plane normal vector is the normal vector of the second reference surface in space; an average-normal-vector calculation step: using the first plane normal vector and the P second plane normal vectors to calculate an average normal vector of the carrying surface, the average normal vector being the direction vector of the central axis; a carrying-surface plane-equation calculation step: defining the world-coordinate origin as the origin of the world coordinate system, and using the average normal vector to define a carrying-surface plane equation; a first intersection-line-equation calculation step: using the first plane equation and the carrying-surface plane equation to calculate a first intersection-line equation describing a first intersection line where the first reference surface intersects the carrying surface; a second intersection-line-equation calculation step: using the second plane equation and the carrying-surface plane equation to calculate a second intersection-line equation describing a second intersection line where the second reference surface intersects the carrying surface; a rigid-transformation-matrix calculation step: using the relationship that the perpendicular distance from the first intersection line to the world-coordinate origin equals the perpendicular distance from the second intersection line to the world-coordinate origin, calculating a coordinate conversion relationship that includes the predetermined angle; wherein, in the camera-to-carrying-device coordinate conversion procedure, the processing device uses the coordinate conversion relationship to convert the first model file into the first converted model file, and uses the coordinate conversion relationship to convert the second model file into the second converted model file; the coordinate conversion relationship is:

[the full formula is rendered as images in the original text; the legible fragment reads "… (1 − cos θ) + [O_x, O_y, O_z]"]

where the predetermined angle is θ, the point-coordinate matrix on the right-hand side is the point-coordinate matrix contained in the first model file or the second model file, and P_rot is the point-coordinate matrix contained in the first converted model file or the second converted model file; wherein, before the scanning step is performed with the scanning system, the method further comprises performing a camera-to-camera calibration step for all of the depth cameras, comprising: using Zhengyou Zhang's calibration method to perform coordinate registration on each pair of mutually adjacent depth cameras, so as to obtain the rigid transformation matrix required by each depth camera in the camera-to-camera coordinate conversion procedure.
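The coordinate conversion relationship rotates each captured point about the central axis by the predetermined angle θ and then translates it by [O_x, O_y, O_z]; the legible "(1 − cos θ)" fragment matches the tail of the Rodrigues rotation formula. A minimal sketch under that assumption (the function and variable names are ours, not the patent's):

```python
import math

def rotate_point(p, n, theta, offset):
    """Rotate point p about the unit axis n by theta (Rodrigues' formula),
    then translate by offset -- a sketch of the claimed coordinate conversion."""
    dot = sum(pi * ni for pi, ni in zip(p, n))           # n . p
    cross = (n[1] * p[2] - n[2] * p[1],                  # n x p
             n[2] * p[0] - n[0] * p[2],
             n[0] * p[1] - n[1] * p[0])
    c, s = math.cos(theta), math.sin(theta)
    return tuple(p[i] * c + cross[i] * s + n[i] * dot * (1 - c) + offset[i]
                 for i in range(3))

# Rotating (1, 0, 0) by 90 degrees about the z axis yields (0, 1, 0).
p_rot = rotate_point((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2,
                     (0.0, 0.0, 0.0))
```

In the patent's setting, n would be the averaged normal vector (the direction of the central axis) and offset the translation to the world-coordinate origin.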
2. The method of manufacturing a pressure garment as claimed in claim 1, wherein, for each depth camera, the overlap ratio between the point cloud contained in the first model file and the point cloud contained in the second model file generated by that depth camera is greater than 60%.

3. The method of manufacturing a pressure garment as claimed in claim 1, wherein two mutually adjacent depth cameras are respectively defined as a first depth camera and a second depth camera; the overlap ratio between the point cloud contained in the first model file generated by the first depth camera and the point cloud contained in the first model file generated by the second depth camera is greater than 60%; the overlap ratio between the point cloud contained in the second model file generated by the first depth camera and the point cloud contained in the first model file generated by the second depth camera is greater than 60%; for each depth camera, the overlap ratio between the point cloud contained in the first model file and the point cloud contained in the second model file is greater than 60%; and the second-model-file generation step is performed two or more times, the overlap ratio between the point clouds respectively contained in two successively generated second model files being greater than 60%.
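The 60% overlap conditions in claims 2 and 3 can be checked, for example, by counting the fraction of points in one cloud that have a neighbour in the other cloud within some tolerance. The patent does not define the overlap metric; the brute-force nearest-neighbour criterion below is our assumption:

```python
def overlap_ratio(cloud_a, cloud_b, tol=0.05):
    """Fraction of points in cloud_a that have a point of cloud_b within tol.
    Brute force O(len(a) * len(b)); a real scanner pipeline would use a k-d tree."""
    def near(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) <= tol * tol
    if not cloud_a:
        return 0.0
    hits = sum(1 for p in cloud_a if any(near(p, q) for q in cloud_b))
    return hits / len(cloud_a)

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0),
     (3.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (9.0, 9.0, 9.0)]
ratio = overlap_ratio(a, b)  # 3 of the 5 points of a coincide with b -> 0.6
```

A 0.6 result would sit exactly at the claimed 60% threshold; the claims require strictly greater overlap.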
4. The method of manufacturing a pressure garment as claimed in claim 1, wherein, after the fusion step, the method further comprises a feature calibration step: comparing the three-dimensional model file with at least one standard three-dimensional model file stored in a database, so as to divide at least a part of the point cloud contained in the three-dimensional model file into a plurality of predetermined portions, and to designate a part of the point coordinates contained in each predetermined portion as body-part feature points; the manufacturing step comprises: according to the affected-area data, looking up the corresponding body-part feature points in the three-dimensional model file; and using the found body-part feature points to calculate the affected-area size at the position of the three-dimensional model file corresponding to the affected-area data.
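Claim 4 computes an affected-area size from looked-up feature points. One plausible reading is a girth measurement: summing the distances around an ordered loop of feature points on the model, as with a tape measure. The loop-perimeter sketch below is our illustration, not the patent's stated algorithm:

```python
import math

def girth(points):
    """Perimeter of a closed loop of ordered 3-D feature points."""
    total = 0.0
    for i in range(len(points)):
        p, q = points[i], points[(i + 1) % len(points)]  # wrap to close the loop
        total += math.dist(p, q)
    return total

# A unit square in the z = 0 plane has a girth of 4.
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
perimeter = girth(square)
```

On a scanned body, the loop would be the ring of feature points at the height indicated by the affected-area data (for instance, the waist cross-section shown in FIG. 6).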
5. The method of manufacturing a pressure garment as claimed in claim 4, wherein, after the feature calibration step and before the affected-area size is calculated, the method further comprises a display step, comprising: inputting the three-dimensional model file into a display device so that the display device displays a corresponding three-dimensional image, the three-dimensional image containing a plurality of predetermined regions, the predetermined regions corresponding to the predetermined portions of the three-dimensional model file; each predetermined region contains a plurality of image points, and the image points in each predetermined region correspond to the point coordinates contained in one of the predetermined portions of the three-dimensional model; a part of the image points in each predetermined region is presented in a first style, and the remaining image points are presented in a second style; each image point presented in the second style is defined as a feature image point, and each feature image point corresponds to one of the body-part feature points in the three-dimensional model file; wherein the display device is electrically connected to an input device, and the input device can, according to a user operation, change at least one image point presented in the first style so that it is presented in the second style, or change at least one image point presented in the second style so that it is presented in the first style; if the style of any image point is changed, the input device generates a corresponding modification file, and the processing device modifies at least one of the body-part feature points in the three-dimensional model file according to the modification file.

6. The method of manufacturing a pressure garment as claimed in claim 1, wherein, after the fusion step, the method further comprises a feature calibration step: loading the three-dimensional model file into a machine learning model, so as to use the machine learning model to divide at least a part of the point cloud contained in the three-dimensional model file into a plurality of predetermined portions, and to designate a part of the point coordinates contained in each predetermined portion as body-part feature points; the manufacturing step comprises: according to the affected-area data, looking up the corresponding body-part feature points in the three-dimensional model file; and using the found body-part feature points to calculate the affected-area size at the position of the three-dimensional model file corresponding to the affected-area data.
7. The method of manufacturing a pressure garment as claimed in claim 6, wherein, after the feature calibration step and before the affected-area size is calculated, the method further comprises a display step, comprising: inputting the three-dimensional model file into a display device so that the display device displays a corresponding three-dimensional image, the three-dimensional image containing a plurality of predetermined regions corresponding to the predetermined portions of the three-dimensional model file, each predetermined region containing a plurality of image points; a part of the image points in each predetermined region is presented in a first style, and the remaining image points are presented in a second style; each image point presented in the second style is defined as a feature image point, and each feature image point corresponds to one of the body-part feature points in the three-dimensional model file; wherein the display device is electrically connected to an input device, and the input device can, according to a user operation, change at least one image point presented in the first style so that it is presented in the second style, or change at least one image point presented in the second style so that it is presented in the first style; if the style of any image point is changed, the input device generates a corresponding modification file, and the processing device modifies at least one of the body-part feature points in the three-dimensional model file according to the modification file.

8. The method of manufacturing a pressure garment as claimed in claim 1, wherein at least a part of the point cloud contained in the three-dimensional model file is divided into a plurality of predetermined portions, and between the fusion step and the manufacturing step the method further comprises: a display step: inputting the three-dimensional model file into a display device so that the display device displays a corresponding three-dimensional image, the three-dimensional image containing a plurality of predetermined regions corresponding to the predetermined portions of the three-dimensional model file; each predetermined region contains a plurality of image points, the image points in each predetermined region correspond to the point coordinates contained in one of the predetermined portions of the three-dimensional model, and each image point is presented in a first style; wherein the display device is electrically connected to an input device; a designation step: using the input device, according to a user operation, to change at least one image point presented in the first style so that it is presented in a second style, and using the input device to generate a corresponding modification file; a modification step: causing the processing device, according to the modification file, to designate each corresponding point coordinate in the three-dimensional model file as a body-part feature point; wherein, in the manufacturing step, the corresponding one of the predetermined portions in the three-dimensional model file is first looked up according to the affected-area data, and the affected-area size is then calculated using the body-part feature points contained in that predetermined portion.
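Claims 5, 7 and 8 let an operator toggle image points between two styles on the display, with the input device emitting a modification file that the processing device replays onto the model's feature points. A toy sketch of that replay step (the data layout and names are our assumptions, not the patent's):

```python
def apply_modifications(feature_points, modifications):
    """feature_points: set of point indices currently designated as
    body-part feature points. modifications: iterable of (index, is_feature)
    pairs, as an input device might record style toggles."""
    updated = set(feature_points)
    for index, is_feature in modifications:
        if is_feature:
            updated.add(index)       # first style -> second style: designate
        else:
            updated.discard(index)   # second style -> first style: undesignate
    return updated

# Point 3 is promoted to a feature point, point 1 is demoted.
marked = apply_modifications({1, 2}, [(3, True), (1, False)])
```

The affected-area size would then be recomputed from the updated feature-point set.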
9. The method of manufacturing a pressure garment as claimed in claim 8, wherein, each time the second-model-file generation step is performed, while each depth camera and the carrying device are being rotated relative to each other by the predetermined angle, an angle detector is simultaneously controlled to detect a real rotation angle of the carrying device relative to the depth cameras; if the difference between the real rotation angle and the predetermined angle exceeds a predetermined range, the processing device substitutes the real rotation angle into the coordinate conversion relationship so as to convert the first model file and the second model file into the first converted model file and the second converted model file, respectively.
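The calibration procedure in claim 1 derives a plane equation for each captured view of the reference surface and then compares the perpendicular distances from the resulting intersection lines to the world-coordinate origin. As a hedged sketch (the least-squares-free three-point fit and the variable names are ours): a plane n · x = d through three points, and the distance from the origin to the line where that plane meets a carrying surface taken as z = 0.

```python
import math

def plane_from_points(p0, p1, p2):
    """Normal n and offset d of the plane n . x = d through three points."""
    u = tuple(b - a for a, b in zip(p0, p1))
    v = tuple(b - a for a, b in zip(p0, p2))
    n = (u[1] * v[2] - u[2] * v[1],          # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    d = sum(ni * pi for ni, pi in zip(n, p0))
    return n, d

def line_distance_to_origin(n, d):
    """Distance from the origin to the intersection of the plane n . x = d
    with the carrying-surface plane z = 0 (the line nx*x + ny*y = d)."""
    return abs(d) / math.hypot(n[0], n[1])

# A vertical plane x = 2 meets z = 0 in the line x = 2,
# which lies 2 units from the origin.
n, d = plane_from_points((2, 0, 0), (2, 1, 0), (2, 0, 1))
dist = line_distance_to_origin(n, d)
```

In the claimed method this distance is invariant under the rotation, which is what ties the first and second intersection lines together and lets the predetermined angle be solved for.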
TW109141684A 2020-11-27 2020-11-27 Manufacturing method of pressure garment TWI764393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109141684A TWI764393B (en) 2020-11-27 2020-11-27 Manufacturing method of pressure garment


Publications (2)

Publication Number Publication Date
TWI764393B true TWI764393B (en) 2022-05-11
TW202221650A TW202221650A (en) 2022-06-01

Family

ID=82594057


Country Status (1)

Country Link
TW (1) TWI764393B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI827516B (en) * 2023-06-06 2023-12-21 台達電子工業股份有限公司 Image processing device and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200823689A (en) * 2006-11-21 2008-06-01 Jing-Jing Fang Method of three-dimensional digital human model construction from two photos and obtaining anthropometry information
CN105526913A (en) * 2015-12-01 2016-04-27 北京航空航天大学 Three-dimensional scanning system and method based on TOF camera
CN107464156A (en) * 2016-06-02 2017-12-12 阿迪达斯股份公司 For manufacturing the method and system of clothes


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Online literature: 童晶 (Tong Jing), "低成本三維人體重建工作總結" [A summary of low-cost 3-D human body reconstruction work], 雜七雜八實驗室, 2017/04/14, https://zhuanlan.zhihu.com/p/26328860 *


Also Published As

Publication number Publication date
TW202221650A (en) 2022-06-01

Similar Documents

Publication Publication Date Title
US11017547B2 (en) Method and system for postural analysis and measuring anatomical dimensions from a digital image using machine learning
US10460517B2 (en) Mobile device human body scanning and 3D model creation and analysis
US10881353B2 (en) Machine-guided imaging techniques
US10813715B1 (en) Single image mobile device human body scanning and 3D model creation and analysis
JP6368142B2 (en) Information processing apparatus and information processing method
CN110074788B (en) Body data acquisition method and device based on machine learning
CN108305321B (en) Three-dimensional human hand 3D skeleton model real-time reconstruction method and device based on binocular color imaging system
WO2018087853A1 (en) Stereoscopic image generation system, stereoscopic image generation method, and stereoscopic image generation program
CN111401340B (en) Method and device for detecting motion of target object
TWI764393B (en) Manufacturing method of pressure garment
TWI591514B (en) System and method for generating gestures
CN110477921B (en) Height measurement method based on skeleton broken line Ridge regression
Lim et al. Camera-based hand tracking using a mirror-based multi-view setup
WO2019234293A1 (en) Measuring surface distances on human bodies
KR101398193B1 (en) Device and Method for Calibration
WO2022074886A1 (en) Posture detection device, posture detection method, and sleeping posture determination method
CN113343879A (en) Method and device for manufacturing panoramic facial image, electronic equipment and storage medium
CN112991437A (en) Full-automatic acupuncture point positioning method based on image expansion and contraction technology
CN111860275A (en) Gesture recognition data acquisition system and method
TWI629662B (en) Method for realizing acupoint visualization by AR technology
JP2009095631A (en) Golf swing measuring system
JP2014117409A (en) Method and apparatus for measuring body joint position
JP6465419B2 (en) Measuring apparatus and measuring method
JP7169489B1 (en) Information processing device, information processing method and information processing program
CN110858095A (en) Electronic device capable of being controlled by head and operation method thereof