TWI804506B - Systems and methods for analyzing cutaneous conditions - Google Patents

Systems and methods for analyzing cutaneous conditions

Info

Publication number
TWI804506B
Authority
TW
Taiwan
Prior art keywords
image
pixel
computing device
skin condition
given pixel
Prior art date
Application number
TW107128742A
Other languages
Chinese (zh)
Other versions
TW201922163A (en)
Inventor
捷爾米 馬克 梅西
辛 亥宏 艾蘭
保羅 比利阿爾第
梅 漆
Original Assignee
新加坡商三維醫學影像分析公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 新加坡商三維醫學影像分析公司 filed Critical 新加坡商三維醫學影像分析公司
Publication of TW201922163A
Application granted granted Critical
Publication of TWI804506B

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062 Arrangements for scanning
    • A61B5/0064 Body surface scanning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B5/444 Evaluating skin marks, e.g. mole, nevi, tumour, scar
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/0014 Biomedical image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B5/442 Evaluating skin mechanical properties, e.g. elasticity, hardness, texture, wrinkle assessment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/743 Displaying an image simultaneously with additional graphical information, e.g. symbols, charts, function plots
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30088 Skin; Dermal

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Dermatology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Image Generation (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The following disclosure discusses systems and methods of detecting and analyzing cutaneous conditions. According to one embodiment, a 2D image of the cutaneous condition and a set of 3D point clouds associated with the 2D image are captured using an image capturing device. The 2D image and the set of 3D point clouds are sent to a computing device. The computing device generates a 3D surface according to the set of 3D point clouds. Subsequently, the computing device receives a depth map for the 2D image based upon the 3D surface from another computing device such that the depth map comprises depth data for each pixel of the 2D image. The cutaneous condition may then be measured and analyzed based upon the depth map using the computing device.

Description

Systems and methods for analyzing skin conditions

The present invention relates to processing two-dimensional (2D) images together with three-dimensional (3D) data and, more particularly, to systems and methods for analyzing skin conditions for diagnosis and treatment.

The approaches described in this section could be pursued, but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

In dermatological applications, the most common method for documenting and diagnosing a skin condition is to photograph it with a scale visible in the frame (typically, a ruler is placed beside the condition in the photograph). This makes image acquisition slow and cumbersome. Furthermore, because the photograph contains only a single measurement reference, surface irregularities and variations in camera angle reduce measurement accuracy.

Some systems can capture a complete picture of a subject's skin with 2D and 3D imaging devices, but such devices are typically large and expensive and offer limited measurement options. They often rely heavily on gestalt perception and still require careful examination by a clinician or dermatologist.

The embodiments described herein include a method for analyzing a skin condition. It should be appreciated that these embodiments can be implemented in numerous ways, for example as a process, an apparatus, a system, a device, or a method. Several embodiments are described below.

In one embodiment, a method of analyzing a skin condition is described. The method may include an operation of receiving, using a computing device, a two-dimensional (2D) image of the skin condition and a set of three-dimensional (3D) point clouds associated with the 2D image. The method may further include an operation of generating, using the computing device, a 3D surface from the set of 3D point clouds. The method may also include an operation of receiving, using the computing device, a depth map for the 2D image based on the 3D surface, such that the depth map contains a depth value for each pixel in the 2D image. The method may include an operation of analyzing, using the computing device, the skin condition based on the 2D image and the depth map. In one embodiment, each 3D point in each 3D point cloud of the set corresponds to at least one pixel of the 2D image.

In one embodiment, the depth map is generated using a second computing device. In one embodiment, the method may further include an operation of storing the depth map in a memory device communicatively coupled to the second computing device. In one embodiment, the depth map may be computed by using the second computing device to apply a ray-casting algorithm to the 2D image and the 3D surface. Alternatively, the depth map may be computed by using the second computing device to apply a ray-tracing algorithm to the 2D image and the 3D surface.
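As a hedged illustration of the ray-casting alternative, the sketch below intersects each pixel's viewing ray with a triangulated 3D surface and keeps the nearest hit as that pixel's depth. The Moller-Trumbore intersection routine is a standard choice for this kind of test; the function names, the dictionary-of-rays input, and the `None` sentinel for misses are illustrative assumptions, not details taken from the patent.

```python
def _sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle_depth(origin, direction, tri):
    """Moller-Trumbore ray/triangle intersection; returns distance t or None."""
    v0, v1, v2 = tri
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    p = _cross(direction, e2)
    det = _dot(e1, p)
    if abs(det) < 1e-12:
        return None                       # ray parallel to the triangle plane
    inv = 1.0 / det
    t_vec = _sub(origin, v0)
    u = _dot(t_vec, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(t_vec, e1)
    v = _dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, q) * inv
    return t if t > 0.0 else None

def depth_map(origin, pixel_rays, mesh):
    """Per-pixel depth: nearest surface intersection along each pixel's ray."""
    depths = {}
    for px, ray in pixel_rays.items():
        hits = [t for tri in mesh
                if (t := ray_triangle_depth(origin, ray, tri)) is not None]
        depths[px] = min(hits) if hits else None
    return depths
```

A full implementation would add an acceleration structure (e.g. a bounding-volume hierarchy) so that each ray is tested against only a few triangles of the surface mesh rather than all of them.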

In one embodiment, the 2D image and the set of 3D point clouds are captured using an image capture device. In one embodiment, the method may further include an operation of moving the image capture device along at least one axis around a subject having the skin condition to capture a set of 2D images and an associated set of 3D point clouds. In another embodiment, the method may include an operation of storing the 2D image and the set of 3D point clouds in a memory device communicatively coupled to the image capture device.

In one embodiment, the image capture device may include a two-dimensional (2D) camera. The method may include an operation of using the computing device to generate a set of pixel dimensions for each pixel of the 2D image based on the depth value of each pixel, the angle of the 2D camera's horizontal field of view, and the angle of its vertical field of view. In one embodiment, the 2D camera may capture a color 2D image of the skin condition. Alternatively, the 2D camera may capture a monochrome 2D image of the skin condition. In one embodiment, the 2D camera may have a resolution of at least 8 megapixels.
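One plausible reading of this per-pixel size computation, under a simple pinhole-camera model, is that the image plane at depth d spans 2·d·tan(FOV/2) in each direction, so a pixel's physical footprint is that span divided by the sensor resolution. The sketch below assumes this model; the function name and parameters are illustrative, and the result is in whatever unit the depth values use.

```python
import math

def pixel_size(depth, hfov_deg, vfov_deg, width_px, height_px):
    """Approximate physical footprint of one pixel at a given depth,
    assuming a pinhole camera with the given fields of view."""
    plane_w = 2.0 * depth * math.tan(math.radians(hfov_deg) / 2.0)
    plane_h = 2.0 * depth * math.tan(math.radians(vfov_deg) / 2.0)
    return plane_w / width_px, plane_h / height_px
```

For example, with a 90-degree field of view in both directions and a 2000 by 2000 pixel sensor, a pixel at 1000 mm depth covers roughly 1 mm by 1 mm of surface.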

In one embodiment, the image capture device may include a three-dimensional (3D) device. In one embodiment, the 3D device may be a 3D scanner. Alternatively, the 3D device may be a 3D camera, such that the 3D camera captures the set of 3D point clouds corresponding to the 2D image.

In one embodiment, the 3D surface may be an interpolated 3D surface mesh. In one embodiment, the 3D surface may delineate the contours of the skin condition. In one embodiment, the 3D surface may be generated by using the computing device to filter out a set of aberrations in at least one 3D point cloud of the set. In one embodiment, the 3D surface may be generated by the computing device using at least one interpolation algorithm.
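The patent does not specify which filter removes the aberrations, but a common choice for point clouds is statistical outlier removal: drop points whose mean distance to their k nearest neighbours is far above the average for the whole cloud. The sketch below is a minimal O(n²) version of that idea; the `k` and `std_ratio` parameters are illustrative assumptions.

```python
import math

def filter_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours is unusually large for the cloud."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    mean_knn = []
    for i, p in enumerate(points):
        nearest = sorted(dist(p, q) for j, q in enumerate(points) if j != i)[:k]
        mean_knn.append(sum(nearest) / len(nearest))
    mu = sum(mean_knn) / len(mean_knn)
    sigma = math.sqrt(sum((m - mu) ** 2 for m in mean_knn) / len(mean_knn))
    cutoff = mu + std_ratio * sigma
    return [p for p, m in zip(points, mean_knn) if m <= cutoff]
```

A production implementation would use a spatial index (k-d tree or the voxel-neighbour scheme shown in the figures) instead of the brute-force pairwise distances used here.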

In one embodiment, analyzing the skin condition may further include an operation of measuring a size of the skin condition using the computing device.

In one embodiment, analyzing the skin condition may further include an operation of determining a variance of the size of the skin condition using the computing device.
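For illustration, the size measurement and its variance across visits might be derived from the per-pixel data as sketched below. The inputs `mask` (a flat per-pixel lesion mask) and `pixel_areas` (per-pixel physical areas, such as the pixel dimensions the method generates from the depth map) are hypothetical names, not terms from the patent.

```python
def lesion_area(mask, pixel_areas):
    """Total lesion area: sum of per-pixel physical areas where the mask is set."""
    return sum(area for flag, area in zip(mask, pixel_areas) if flag)

def size_change(current, previous):
    """Absolute and relative change against a previously stored measurement."""
    delta = current - previous
    return delta, delta / previous
```

Summing true per-pixel areas, rather than multiplying a pixel count by one global scale, is what lets the measurement stay accurate on curved or obliquely imaged skin.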

In one embodiment, analyzing the skin condition may further include an operation of using the second computing device to automatically diagnose the skin condition based on the size of the skin condition.

In one embodiment, a system for analyzing a skin condition is disclosed. The system may include an image capture device that captures a two-dimensional (2D) image of the skin condition and a set of three-dimensional (3D) point clouds associated with the 2D image. The system may further include a computing device communicatively coupled to the image capture device and to a second computing device, such that the computing device can receive the 2D image and the set of 3D point clouds from the image capture device. The computing device can also generate a 3D surface from the set of 3D point clouds. The computing device may further receive from the second computing device a depth map for the 2D image based on the 3D surface, such that the depth map contains depth data for each pixel of the 2D image. The computing device can analyze the skin condition based on the depth map.

In one embodiment, the image capture device may further include a 2D camera that captures the 2D image. In one embodiment, the image capture device may further include a 3D device that captures the set of 3D point clouds. In one embodiment, the image capture device may further include a battery that powers the device. Similarly, the image capture device may also include a flash unit comprising at least one light-emitting diode. The image capture device may further include a touch-screen display.

In one embodiment, the system may include at least one storage device communicatively coupled to at least one of the second computing device and the computing device, such that the storage device stores the depth map. In one embodiment, the at least one storage device comprises at least one of the group consisting of an internal hard drive, an external hard drive, a Universal Serial Bus (USB) drive, a solid-state drive, and a network-attached storage device.

In one embodiment, the computing device and the second computing device may each comprise at least one of the group consisting of a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and an Advanced RISC Machine (ARM).

In one embodiment, the computing device is communicatively coupled to the image capture device and the second computing device via at least one network connection. In one embodiment, the network connection may be Wi-Fi, Bluetooth, Ethernet, a fiber-optic connection, infrared, near-field communication, or a coaxial-cable connection.

In one embodiment, a system for automatically detecting a skin condition is disclosed. The system may include an image capture device that automatically captures a set of two-dimensional (2D) images of a subject and a set of three-dimensional (3D) point clouds associated with each 2D image in the set. The system may also include a computing device communicatively coupled to the image capture device and to a second computing device, such that the computing device automatically receives the subject's set of 2D images and the set of 3D point clouds for each 2D image. The computing device can also automatically generate a 3D representation of the subject based on the sets of 3D point clouds. Additionally, the computing device may receive from the second computing device a depth map for each 2D image in the set, such that the depth map contains depth data for each pixel of each 2D image. The system may include a storage device communicatively coupled to the computing device, such that the storage device automatically stores the depth maps.

In one embodiment, the client server may further include an analysis module that can automatically generate measurements of the skin condition based on the depth map and the 2D image. The computing device may also automatically determine a variance of the skin condition by comparing these measurements against previously stored measurements of the skin condition. The computing device can automatically generate a diagnostic recommendation based on the variance of the skin condition.
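As a purely hypothetical example of turning the variance into a recommendation, a simple rule might flag growth beyond some fraction between visits. The 20% threshold and the returned strings below are invented for illustration only; they are not from the patent and are not clinical guidance.

```python
def diagnostic_recommendation(relative_change, threshold=0.2):
    """Hypothetical rule: flag for review when the lesion grew by more
    than `threshold` (here 20%) relative to the stored measurement."""
    if relative_change > threshold:
        return "refer for clinical review"
    return "continue monitoring"
```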

102: subject
103: skin condition
104: image capture device
106: computing device
108: processing server
204: 2D camera
206: 3D device
208: system-on-chip
210: flash device
212: battery
300: computer system
302: bus
304: processor
306: main memory
308: read-only memory (ROM)
310: storage device
312: display
314: input device
316: cursor control
318: communication interface
320: network link
322: local network
324: host computer
326: Internet service provider (ISP)
328: Internet
330: server
502: 3D point
504: 2D pixel
506: 3D surface
508: pixel ray
602: input set of 3D point clouds
604: composite 3D point cloud
606: voxel neighbor location
608: filtered 3D point cloud

In the drawings: FIG. 1 illustrates an exemplary system for analyzing skin conditions according to an embodiment of the present invention; FIG. 2 illustrates an exemplary image capture device according to an embodiment of the present invention; FIG. 3 illustrates an exemplary computer system according to an embodiment of the present invention; FIG. 4 illustrates an exemplary method for analyzing skin conditions according to an embodiment of the present invention; FIGS. 5A, 5B, and 5C illustrate top views of exemplary steps for generating per-pixel depth data according to an embodiment of the present invention; FIGS. 6A, 6B, 6C, 6D, 6E, and 6F illustrate exemplary steps for filtering data noise in a set of point clouds according to an embodiment of the present invention; FIGS. 7A, 7B, and 7C illustrate exemplary steps for generating device-space positions of the pixels of a 2D image according to an embodiment of the present invention; FIGS. 8A, 8B, and 8C illustrate exemplary steps for generating pixel dimensions of the pixels of a 2D image according to an embodiment of the present invention; FIGS. 9A, 9B, and 9C illustrate exemplary steps for measuring sub-pixel regions of a 2D image according to an embodiment of the present invention; and FIG. 10 illustrates an exemplary view of an output 2D image for analyzing skin conditions according to an embodiment of the present invention.

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments of these concepts have been shown by way of example in the drawings and are described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the disclosure to the particular forms described; on the contrary, the intent is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block-diagram form in order to avoid unnecessarily obscuring the invention.

Embodiments of the invention relate to integrated circuits. The integrated circuits may be of any suitable type, such as microprocessors, application-specific integrated circuits, digital signal processors, memory circuits, or other integrated circuits. If desired, the integrated circuits may be programmable integrated circuits containing programmable logic circuitry. In the following description, the terms "circuitry" and "circuit" are used interchangeably.

In the drawings, specific arrangements or orderings of schematic elements, such as those representing devices, modules, instruction blocks, and data elements, may be shown for ease of description. However, those skilled in the art will understand that a specific ordering or arrangement of schematic elements in the drawings is not meant to imply that a particular order or sequence of processing, or a separation of processes, is required. Furthermore, the inclusion of a schematic element in a drawing is not meant to imply that the element is required in all embodiments, or that the features represented by that element may not, in some embodiments, be omitted or combined with other elements.

Furthermore, in the drawings, where connecting elements such as solid or dashed lines or arrows are used to illustrate a connection, relationship, or association between or among two or more other schematic elements, the absence of any such connecting element is not meant to imply that no connection, relationship, or association can exist. In other words, some connections, relationships, or associations between elements may not be shown in the drawings so as not to obscure the invention. In addition, for ease of illustration, a single connecting element may be used to represent multiple connections, relationships, or associations between elements. For example, where a connecting element represents the communication of signals, data, or instructions, those skilled in the art will understand that the element may represent one or more signal paths (e.g., a bus), as may be required, to effect the communication.

Several features are described below, each of which can be used independently of the others or together with any combination of the other features. However, any individual feature may not address any of the problems discussed above, or may address only one of them. Some of the problems discussed above may not be fully addressed by any of the features described herein. Although headings are provided, information related to a particular heading but not found in the section bearing that heading may also be found elsewhere in this description.

The following disclosure discusses systems and methods for detecting and analyzing skin conditions. In one embodiment, a method of analyzing a skin condition is described. A 2D image of the skin condition and a set of 3D point clouds associated with the 2D image are captured using an image capture device. The 2D image and the set of 3D point clouds are sent to a computing device, which generates a 3D surface from the set of 3D point clouds. The 3D surface is sent to a second computing device, which then computes a depth map for the 2D image based on the 3D surface, such that the depth map contains depth data for each pixel of the 2D image. The skin condition can then be measured and analyzed based on the depth map.

As used herein, "skin condition" or "skin feature" refers to any medical or cosmetic condition or response (such as an allergy test or exposure response) affecting the integumentary system, that is, the organ system that encloses the body, including the skin, hair, nails, and mucous membranes, together with associated muscles, fat, glands, and glandular activity (such as sweat and sebum); states such as dry skin, oily skin, and skin temperature; and symptoms such as macules, papules, nodules, vesicles, blisters, pustules, abscesses, infections, inflammation, hives, scabs, peeling, erosion, ulceration, atrophy, hypertrophy, poikiloderma, lichenification, moles (including melanoma and skin cancer), reactions to tests (such as allergy tests, diagnostics, or trials), or other exposures associated with the medical or cosmetic condition. Conditions of the human integumentary system constitute a wide range of diseases, also known as dermatoses, as well as many non-pathological states. Clinically, the diagnosis of any particular skin condition is made by gathering relevant information about the presenting skin lesion, including its location (such as an arm, the head, or a leg), symptoms (itching, pain), duration (acute or chronic), arrangement (solitary, generalized, annular, linear), morphology (macule, papule, vesicle), and color (red, blue, brown, black, white, yellow).

In the following disclosure, the terms "voxel," "volumetric pixel," and "3D pixel" are used interchangeably. As used herein, a "voxel" or "volumetric pixel" refers to a value on a regular grid in three-dimensional space. As with pixels in a bitmap, a voxel does not typically have its position, that is, its coordinates, explicitly encoded with its value. Instead, a voxel's position is inferred from its position relative to other voxels, that is, its position in the data structure that makes up a single volumetric space. In one embodiment, the voxels may be based on different models, for example, a gridded voxel model, a sparse voxel model, or an octree voxel model.
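To make the implicit-coordinates point concrete, the minimal sketch below bins 3D points into a gridded voxel model: each voxel is addressed purely by an integer grid key derived from, rather than stored alongside, the point data. The function name and the choice of a dictionary keyed by grid coordinates are illustrative assumptions.

```python
from collections import defaultdict

def voxelize(points, size):
    """Bin 3D points into voxels of edge length `size`; a voxel's position
    is implicit in its integer grid key, not stored with its contents."""
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(c // size) for c in p)
        grid[key].append(p)
    return grid
```

A voxel grid like this is also the basis for the voxel-neighbour filtering step shown in the figures, since a point's candidate neighbours can be found by checking only the adjacent voxel keys.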

FIG. 1 illustrates an exemplary system for analyzing a skin condition according to an embodiment of the invention. Referring now to FIG. 1, computing device 106 receives, from image capture device 104, 2D images of a skin condition 103 affecting subject 102 together with the associated 3D point clouds, and receives additional data from processing server 108.

In one embodiment, subject 102 is a human affected by a skin disorder (for example, melanoma) that presents itself in the form of skin condition 103 (for example, a cancerous mole or lesion). In another embodiment, subject 102 may be an animal, a plant, or another living specimen. In yet another embodiment, subject 102 may be a mannequin, a cadaver (human or otherwise), or any other object usable for testing purposes. In one embodiment, skin condition 103 may be a mole, lesion, cut, abrasion, boil, or some other condition affecting one or more layers of the subject's skin, hair, or nails. In one embodiment, subject 102 lies prone on a platform. In another embodiment, subject 102 stands on a platform. The platform may be an examination table, a metal platform, a gurney, a bed, or any other structure, of various shapes and sizes and fabricated from various materials, capable of supporting the weight of subject 102. In one embodiment, the platform may be mechanized or motorized; that is, the platform may be adjusted to change its height, orientation, or tilt angle relative to the ground or floor on which it rests. The motorized platform may also be capable of rotation.

In one embodiment, image capture device 104 orbits subject 102 to capture one or more 2D images of skin condition 103 and the associated 3D point clouds. In one embodiment, image capture device 104 is attached to a trolley mounted on a track attached to the platform, and orbits subject 102 along a fixed path. Similarly, image capture device 104 may capture 2D images and 3D points of skin condition 103 by being carried manually around subject 102. In one embodiment, image capture device 104 is orbited around subject 102 by a robot controlled by a user. In one embodiment, the robot may be controlled remotely. In another embodiment, the robot may be automatically controlled and maneuvered by computing device 106 to reduce the time taken to capture a set of 2D images and associated 3D point clouds. In one embodiment, image capture device 104 may be communicatively coupled to computing device 106 via Wi-Fi, Bluetooth, near-field communication (NFC), an Ethernet cable, a fiber-optic cable, or some other means of transmitting data.

In one embodiment, computing device 106 is similar to computer system 300 described below with respect to FIG. 3. In one embodiment, computing device 106 may be communicatively coupled to a second computing device. The second computing device may perform computationally intensive tasks and transmit the results to computing device 106. In one embodiment, the two computing devices may be configured as a client-server system, such that computing device 106 is the client device and the other computing device is processing server 108. In one embodiment, processing server 108 is similar to computer system 300 described below with respect to FIG. 3. In one embodiment, processing server 108 may be a cloud-based server for processing images and associated 3D point clouds, 3D surfaces, or other data related to the 2D images. In one embodiment, computing device 106 and processing server 108 may be one or more special-purpose computing devices implementing the techniques described herein. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other programmable logic devices (PLDs) persistently programmed to perform the techniques, or may include one or more general-purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination thereof. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. A special-purpose computing device may be a desktop computer system, a portable computer system, a handheld device, a networking device, or any other device that incorporates hard-wired and/or program logic to implement the techniques.

FIG. 2 illustrates an exemplary image capture device according to an embodiment of the invention. Referring now to FIG. 2, image capture device 104 may incorporate a 2D camera 204 that is communicatively coupled to a 3D device 206. 2D camera 204 and 3D device 206 may share a flash device 210. Image capture device 104 may be powered by a battery 212 and controlled or operated by a system-on-chip 208.

In one embodiment, 2D camera 204 is a high-resolution 2D color camera. In one embodiment, 2D camera 204 is a digital camera. For example, 2D camera 204 may be a compact camera, a smartphone camera, a mirrorless camera, a digital single-lens reflex camera, an electronic-viewfinder camera, an interchangeable-lens camera, or a medium-format camera. In one embodiment, 2D camera 204 may have a resolution between two megapixels and eight megapixels. In one embodiment, 2D camera 204 may have more than one lens. For example, 2D camera 204 may have one lens for capturing color images and one lens for capturing monochrome images, with the images from the two lenses combined by system-on-chip 208. In another embodiment, 2D camera 204 may be an analog camera. For example, 2D camera 204 may be a film camera or a large-format camera.

In one embodiment, 2D camera 204 is communicatively coupled to 3D device 206. 3D device 206 may be a structured-light depth camera that uses infrared projection for 3D point cloud generation. 3D device 206 may be a non-contact 3D scanner. For example, 3D device 206 may be a time-of-flight 3D laser scanner, a triangulation-based 3D scanner, a modulated-light 3D scanner, or a volumetric 3D scanner.

In one embodiment, 2D camera 204 and 3D device 206 are communicatively coupled to system-on-chip 208. System-on-chip 208 may be one or more special-purpose computing devices as described above. In one embodiment, system-on-chip 208 may include a microcontroller, a microprocessor, a system controller, a graphics processor, memory, a digital signal processor, and other integrated-circuit components. In one embodiment, system-on-chip 208 may be similar to computer system 300 described below with respect to FIG. 3. In one embodiment, system-on-chip 208 is used for real-time processing of the images and data obtained by image capture device 104. In one embodiment, system-on-chip 208 implements an operating environment and a user interface to process and respond to feedback received from a user of image capture device 104.

In one embodiment, battery 212 powers image capture device 104. Battery 212 may be a secondary, or rechargeable, battery. For example, battery 212 may be a lithium-ion battery, a lithium-polymer battery, or a nickel-cadmium battery. In another embodiment, battery 212 is a primary, or non-rechargeable, battery. For example, battery 212 may be an alkaline battery or a zinc-carbon battery. Battery 212 allows the user to capture the set of 2D images and the set of 3D point clouds wirelessly.

In one embodiment, flash device 210 is used to illuminate skin condition 103. In one embodiment, flash device 210 is an array of LED lights connected to system-on-chip 208 on image capture device 104. In another embodiment, flash device 210 is communicatively coupled to, but distinct from, image capture device 104. For example, flash device 210 may include one or more flash bulbs, electronic flash devices, high-speed flash devices, or air-gap flash devices. In one embodiment, flash device 210 is also used to illuminate skin condition 103 during acquisition of the 3D point clouds. In one embodiment, image capture device 104 may also include a touch-screen display usable for user control, input, and feedback. The touch-screen display may be a capacitive touch screen or a resistive touch screen. In one embodiment, image capture device 104 includes a memory storage device for storing the 2D images and 3D point clouds related to skin condition 103. The memory storage device may be a flash storage device or a non-volatile storage device, for example, a Secure Digital (SD) card.

FIG. 3 illustrates a computer system 300 upon which an embodiment of the invention may be implemented. Computer system 300 includes a bus 302 or other communication mechanism for communicating information, and a hardware processor 304 coupled with bus 302 for processing information. Hardware processor 304 may be, for example, a general-purpose microprocessor.

Computer system 300 also includes a main memory 306, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 302 for storing information and instructions to be executed by processor 304. Main memory 306 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 304. Such instructions, when stored in non-transitory storage media accessible to processor 304, render computer system 300 a special-purpose machine customized to perform the operations specified in the instructions.


Computer system 300 further includes a read-only memory (ROM) 308 or other static storage device coupled to bus 302 for storing static information and instructions for processor 304. A storage device 310, such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to bus 302 for storing information and instructions.

Computer system 300 may be coupled via bus 302 to a display 312, such as a cathode-ray tube (CRT), liquid-crystal display (LCD), plasma display, light-emitting diode (LED) display, or organic light-emitting diode (OLED) display, for displaying information to a computer user. An input device 314, including alphanumeric and other keys, is coupled to bus 302 for communicating information and command selections to processor 304. Another type of user input device is cursor control 316, such as a mouse, a trackball, a touch-enabled display, or cursor direction keys, for communicating direction information and command selections to processor 304 and for controlling cursor movement on display 312. This input device typically has two degrees of freedom in two axes, a first axis (for example, x) and a second axis (for example, y), which allows the device to specify positions in a plane.

According to one embodiment, the techniques herein are performed by computer system 300 in response to processor 304 executing one or more sequences of one or more instructions contained in main memory 306. Such instructions may be read into main memory 306 from another storage medium, such as storage device 310. Execution of the sequences of instructions contained in main memory 306 causes processor 304 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

The term "storage media" as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media include, for example, optical disks, magnetic disks, or solid-state drives, such as storage device 310. Volatile media include dynamic memory, such as main memory 306. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid-state drive, magnetic tape or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NV-RAM, or any other memory chip or cartridge.

Storage media are distinct from, but may be used in conjunction with, transmission media. Transmission media participate in transferring information between storage media. For example, transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise bus 302. Transmission media may also take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications.

Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 304 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 300 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector can receive the data carried in the infrared signal, and appropriate circuitry can place the data on bus 302. Bus 302 carries the data to main memory 306, from which processor 304 retrieves and executes the instructions. The instructions received by main memory 306 may optionally be stored on storage device 310 either before or after execution by processor 304.

Computer system 300 also includes a communication interface 318 coupled to bus 302. Communication interface 318 provides a two-way data communication coupling to a network link 320 that is connected to a local network 322. For example, communication interface 318 may be an integrated services digital network (ISDN) card, a cable modem, a satellite modem, or a modem providing a data communication connection to a corresponding type of telephone line. As another example, communication interface 318 may be a local area network (LAN) card providing a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 318 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. For example, wireless links may be implemented using networking technologies such as Wi-Fi, Bluetooth, infrared, and near-field communication (NFC), among others.

Network link 320 typically provides data communication through one or more networks to other data devices. For example, network link 320 may provide a connection through local network 322 to a host computer 324 or to data equipment operated by an Internet service provider (ISP) 326. ISP 326 in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the "Internet" 328. Local network 322 and Internet 328 both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks, and the signals on network link 320 and through communication interface 318 that carry the digital data to and from computer system 300, are example forms of transmission media.

Computer system 300 can send and receive data, including program code, through the network(s), network link 320, and communication interface 318. In the Internet example, a server 330 might transmit requested code for an application program through Internet 328, ISP 326, local network 322, and communication interface 318.

The received code may be executed by processor 304 as it is received, and/or stored in storage device 310 or other non-volatile storage for later execution.

FIG. 4 illustrates an exemplary method for analyzing a skin condition according to an embodiment of the invention. For purposes of illustrating a clear example, FIG. 4 will be discussed with respect to FIGS. 1, 2, and 3.

Referring now to FIG. 4, at block 402, computing device 106 receives a 2D image of skin condition 103 and a set of 3D point clouds associated with the 2D image. In one embodiment, the 2D image and the set of 3D point clouds are captured by image capture device 104. In one embodiment, each 3D point in each 3D point cloud of the set corresponds to at least one pixel of the 2D image. In one embodiment, when a capture is triggered by image capture device 104, the 2D camera captures a 2D color image on image capture device 104, which is stored using system-on-chip 208.

In one embodiment, 3D device 206 captures multiple 3D point clouds during a capture event and stores them on image capture device 104. Ideally, each point cloud would consist of as many points as there are pixels in the corresponding 2D image. In practice, however, owing to limitations of the depth-imaging method caused by sensor resolution constraints, projector resolution limits, and the optics, only points corresponding to certain pixels are captured. In one embodiment, 3D device 206 acquires multiple point clouds for each individual capture event in order to remove noise from the data by comparing the differences among the multiple point clouds. In one embodiment, the operating frequency of 3D device 206 is determined by system-on-chip 208. For example, 3D device 206 may operate at a nominal frequency of sixty hertz to capture many point clouds at minimal time intervals, minimizing potential misalignment caused by movement of image capture device 104 during the data acquisition process while still providing enough data for noise filtering.
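The idea of de-noising by comparing repeated point clouds from one capture event can be sketched as follows. This is an assumed illustration, not the patent's filtering algorithm: each capture yields the same set of candidate points, a point is kept only where its repeated depth samples agree within a tolerance, and the median is taken as the filtered value.

```python
import numpy as np

def filter_point_clouds(clouds, max_spread=0.01):
    """clouds: list of (N, 3) arrays (meters) from one capture event.

    Returns the median point cloud, restricted to points whose depth (z)
    samples agree across captures within max_spread.
    """
    stack = np.stack(clouds)                  # (K, N, 3): K captures of N points
    z = stack[:, :, 2]
    spread = z.max(axis=0) - z.min(axis=0)    # per-point depth disagreement
    stable = spread <= max_spread             # reject flickering/noisy points
    filtered = np.median(stack, axis=0)       # (N, 3) median cloud
    return filtered[stable]

# Example: three noisy captures of two points; the second point flickers.
rng = np.random.default_rng(0)
base = np.array([[0.0, 0.0, 0.50], [0.1, 0.0, 0.80]])
clouds = [base + rng.normal(0, 0.001, base.shape) for _ in range(3)]
clouds[1][1, 2] += 0.05                       # inject an outlier depth sample
print(filter_point_clouds(clouds).shape)      # only the stable point survives
```

The tolerance and the median estimator here are placeholders; any robust per-point statistic over the sixty-hertz burst of clouds would serve the same purpose.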

In one embodiment, image capture device 104 sends the set of 2D images and the set of 3D point clouds to computing device 106. In one embodiment, the 2D images and 3D point clouds are sent over a high-speed wired or wireless network connection (for example, Wi-Fi, Ethernet, or a fiber-optic connection). In one embodiment, image capture device 104 encrypts the set of 2D images and the set of 3D point clouds and transmits the encrypted sets to computing device 106 over a secure communication channel. For example, image capture device 104 may protect the set of 2D images and the set of 3D point clouds using a cryptographic hash function. In one embodiment, image capture device 104 may compress the set of 2D images and the set of 3D point clouds before sending them to computing device 106. Compressing the set of 2D images and the set of 3D point clouds can reduce the resources (for example, time, bandwidth, and processing cycles) needed to transmit the data to computing device 106. Image capture device 104 may use lossy or lossless data compression techniques and compression standards, for example, Moving Picture Experts Group (MPEG) 1, 2, or 4, H.261, H.262, H.264, H.265 High Efficiency Video Coding (HEVC), Joint Photographic Experts Group (JPEG), Portable Network Graphics (PNG), Multiple-image Network Graphics (MNG), or Tagged Image File Format (TIFF).
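The compress-then-verify transfer described above can be sketched as follows. This is an assumed illustration, not the patent's protocol: the capture payload is losslessly compressed, and a cryptographic hash is attached so the receiver can check integrity over the channel (the function names `pack_capture` and `unpack_capture` are hypothetical).

```python
import hashlib
import zlib

def pack_capture(payload_bytes):
    """Compress the payload and attach a SHA-256 digest for integrity checking."""
    digest = hashlib.sha256(payload_bytes).hexdigest()
    return {"data": zlib.compress(payload_bytes), "sha256": digest}

def unpack_capture(packet):
    """Decompress and verify; raise if the payload was corrupted in transit."""
    data = zlib.decompress(packet["data"])
    if hashlib.sha256(data).hexdigest() != packet["sha256"]:
        raise ValueError("integrity check failed")
    return data

raw = b"2D image bytes + serialized 3D point cloud"
packet = pack_capture(raw)
assert unpack_capture(packet) == raw
```

Note that a hash by itself provides integrity rather than confidentiality; an actual secure channel would layer this under transport encryption.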

At step 404, computing device 106 generates a 3D surface from the set of 3D point clouds. In one embodiment, computing device 106 performs 3D point-cloud filtering before generating the 3D surface. The point-cloud filtering process is discussed further below with respect to FIGS. 6A through 6F. In one embodiment, the 3D surface is a high-resolution triangular 3D mesh comprising a set of triangles, connected by common edges or corners, derived from the filtered 3D point-cloud data. In one embodiment, the 3D surface is an interpolated 3D mesh.
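One simple way a triangular mesh of edge-connected triangles can be built from depth data is sketched below. This is an assumed technique for illustration, not the patent's mesher: structured-light depth data often arrives on a regular grid, and each grid cell can be split into two triangles that share edges with their neighbors.

```python
import numpy as np

def grid_to_mesh(depth):
    """depth: (H, W) array of z values on a regular grid.

    Returns vertices as an (H*W, 3) array and faces as an (M, 3) array of
    vertex indices, two triangles per grid cell.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.column_stack([xs.ravel(), ys.ravel(), depth.ravel()]).astype(float)
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x                            # top-left vertex of the cell
            faces.append([i, i + 1, i + w])          # upper-left triangle
            faces.append([i + 1, i + w + 1, i + w])  # lower-right triangle
    return verts, np.array(faces)

depth = np.full((3, 3), 0.5)          # a flat 3x3 depth patch, for illustration
verts, faces = grid_to_mesh(depth)
print(verts.shape, faces.shape)       # (9, 3) (8, 3)
```

An interpolated mesh, as the text describes, would additionally resample or smooth the grid before triangulation; the connectivity scheme stays the same.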

At step 406, computing device 106 receives a depth map for the 2D image based on the 3D surface, such that the depth map contains depth data for every pixel of the 2D image. In one embodiment, computing device 106 generates the depth map. In one embodiment, the interpolated 3D mesh generated in step 404 is then used for ray casting to generate per-pixel depth data for each pixel of the 2D image. As used herein, "ray casting" refers to a computer-graphics algorithm that uses the geometry of ray tracing. The idea of ray casting is to trace rays from the camera, one per pixel, and find the closest surface blocking each ray's path. In one embodiment, the per-pixel depth data also accounts for the resolution of image capture device 104 or 2D camera 204 and for known camera properties. In another embodiment, computing device 106 offloads generation of the depth map to processing server 108, and receives the depth map from processing server 108 via communication interface 318 and network link 320 described above with respect to FIG. 3. In one embodiment, the 3D surface generated at step 404 is transmitted to processing server 108. Processing server 108 then runs a ray-casting algorithm to generate per-pixel depth data for each pixel of the 2D image, and transmits the per-pixel data to computing device 106. In one embodiment, processing server 108 generates per-pixel depth data for every 2D image captured during each capture event.
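The per-pixel ray-casting idea can be sketched as follows. This is an assumed minimal implementation, not the patent's renderer: one ray per pixel is intersected with every mesh triangle using the standard Moeller-Trumbore test, and the nearest hit distance becomes that pixel's depth.

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection; returns hit distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = s.dot(p) * inv
    if not 0.0 <= u <= 1.0:
        return None
    q = np.cross(s, e1)
    v = d.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None        # only hits in front of the camera

def depth_for_ray(orig, d, triangles):
    """Depth of the nearest surface blocking this pixel's ray (inf if none)."""
    hits = [t for tri in triangles if (t := ray_triangle(orig, d, *tri)) is not None]
    return min(hits) if hits else np.inf

# One triangle two units in front of the camera; the pixel ray points along +z.
tri = [np.array([-1.0, -1.0, 2.0]),
       np.array([1.0, -1.0, 2.0]),
       np.array([0.0, 1.0, 2.0])]
print(depth_for_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), [tri]))  # 2.0
```

A production ray caster would use an acceleration structure (for example, a bounding-volume hierarchy) rather than testing every triangle, but the per-pixel depth it produces is the same.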

At step 408, skin condition 103 is analyzed by computing device 106 based on the depth map. In another embodiment, skin condition 103 may also be analyzed by processing server 108. In one embodiment, current measurements of skin condition 103 may be compared, for diagnostic purposes, with older measurements of skin condition 103 stored by computing device 106 or processing server 108. For example, if skin condition 103 is a wound that previously measured two centimeters wide and currently measures four centimeters wide, a clinician may suspect that skin condition 103 present on subject 102 is spreading and may warrant further diagnostic investigation. In one embodiment, computing device 106 or processing server 108 automatically generates recommendations based on comparing the current measurements of skin condition 103 with previously stored measurements. In one embodiment, computing device 106 or the processing server may store the depth map in a storage device similar to storage device 310 described above with respect to FIG. 3.
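The measurement comparison described above can be sketched as follows. This is an assumed illustration, not the patent's diagnostic logic: the newest measurement is compared with a stored one, and a follow-up recommendation is generated when relative growth exceeds a chosen threshold (the function name and the 1.5x threshold are hypothetical).

```python
def growth_recommendation(previous_cm, current_cm, growth_threshold=1.5):
    """Compare a stored measurement with the current one and flag large growth."""
    ratio = current_cm / previous_cm
    if ratio >= growth_threshold:
        return ("flag: condition enlarged %.0f%%, "
                "recommend further investigation" % ((ratio - 1) * 100))
    return "no significant change"

# The example from the text: a wound that grew from 2 cm to 4 cm wide.
print(growth_recommendation(2.0, 4.0))
```

In practice such a rule would be one input among several; the text's point is that longitudinal comparison against stored measurements, not any single snapshot, drives the recommendation.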

In one embodiment, processing server 108 may combine the 3D surfaces generated for each 2D image of each capture event to produce a 3D surface depicting a portion of the body of subject 102. In one embodiment, processing server 108 may be calibrated to automatically detect certain skin conditions. For example, processing server 108 may automatically detect moles larger than five millimeters in size and, based on the camera and surface angles, correct the image regions of the 2D images containing the detected moles to provide a clean plate for image analysis.

In one embodiment, processing server 108 may generate a malignancy score for each skin condition based on pre-programmed criteria. For example, some of the pre-programmed criteria may include: asymmetry of skin condition 103 about a particular axis; the borders of skin condition 103, which may indicate irregular growth; the number, variety, and color variation of skin condition 103; the diameter of skin condition 103; the elevation of skin condition 103 above the epithelial surface; and the evolution of skin condition 103 over time. Those skilled in the art will appreciate that the list of pre-programmed criteria can be expanded or further refined based on evolving medical understanding of skin conditions and indicators of malignancy.
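One simple way such pre-programmed criteria could be combined is sketched below. This is an assumed illustration, not the patent's scoring model: each criterion is normalized to [0, 1] and folded into a single malignancy score by a weighted sum (the criterion names and weights here are hypothetical).

```python
# Hypothetical weights, one per pre-programmed criterion; they sum to 1.0.
CRITERIA_WEIGHTS = {
    "asymmetry": 0.25,
    "border_irregularity": 0.25,
    "color_variation": 0.2,
    "diameter_mm_over_5": 0.15,
    "elevation": 0.05,
    "evolution": 0.1,
}

def malignancy_score(features):
    """features: dict mapping criterion name -> normalized value in [0, 1]."""
    return sum(CRITERIA_WEIGHTS[k] * min(max(v, 0.0), 1.0)
               for k, v in features.items() if k in CRITERIA_WEIGHTS)

lesion = {"asymmetry": 0.8, "border_irregularity": 0.6, "color_variation": 0.4,
          "diameter_mm_over_5": 1.0, "elevation": 0.2, "evolution": 0.5}
print(round(malignancy_score(lesion), 3))  # 0.64
```

As the text notes, the criteria list evolves with medical understanding, so keeping the weights in a data structure separate from the scoring code makes the rule set easy to extend.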

In one embodiment, the processing server 108 generates a combined malignancy score for the subject 102 based on the malignancy scores of the various skin conditions. In one embodiment, the individual malignancy scores and the combined malignancy score are transmitted by the processing server 108 to the computing device 106 for further analysis by a clinician. In one embodiment, the processing server 108 compares the current individual and combined malignancy scores with previously stored individual and combined malignancy scores.

FIGS. 5A, 5B, and 5C illustrate top views of exemplary steps for generating per-pixel depth data according to an embodiment of the invention. For the purpose of illustrating a clear example, FIGS. 5A, 5B, and 5C will be discussed with respect to FIGS. 1 through 4.

Referring now to FIG. 5A, in one embodiment, the image capture device 104 captures and stores a 2D image of the subject 102 affected by the skin condition 103, along with a 3D point cloud. In one embodiment, the 2D image is composed of 2D pixels 504 and the 3D point cloud is composed of 3D points 502. In one embodiment, the image capture device 104 stores a set of 3D point clouds associated with the 2D image, which is subsequently submitted to the computing device 106 for processing at a later stage.

Referring now to FIG. 5B, in one embodiment, the computing device 106 uses the 3D points 502 of the 3D point cloud to generate a 3D surface 506. In one embodiment, the 3D surface 506 is an interpolated 3D surface mesh of the subject 102. In one embodiment, before generating the 3D surface 506, the computing device 106 filters the set of point clouds composed of the 3D points 502, as described below with respect to FIGS. 6A, 6B, 6C, 6D, and 6E. In one embodiment, the computing device 106 uses any of several interpolation techniques to produce a smooth surface geometry corresponding to the surface geometry of the subject 102.

For example, the computing device 106 may interpolate the 3D surface 506 using a spline method, estimating grid cell values by fitting a minimum-curvature surface to the 3D points 502. Similarly, the inverse distance weighting (IDW) method for interpolating a 3D surface estimates a cell value by averaging the values of nearby 3D points 502; the closer a 3D point is to the center of the cell being estimated, the greater the weight given to that point. Another interpolation technique that may be used by the computing device 106 is the natural neighbor technique, which uses a weighted average of neighboring 3D points and produces a 3D surface 506 that does not exceed the minimum or maximum values in the 3D point cloud. Yet another interpolation technique is kriging, which forms weights from surrounding measured values to predict values at unmeasured locations. Those skilled in the art will appreciate that the disclosed invention may be implemented in conjunction with a wide variety of techniques for interpolating the 3D surface 506.
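Of the interpolation techniques listed, IDW is the simplest to sketch. The following is a minimal height-field version, one z value per grid cell, which is a simplification of full 3D surface meshing; the `power` exponent and the grid layout are illustrative assumptions.

```python
import numpy as np

def idw_grid(points, xs, ys, power=2.0, eps=1e-9):
    """Inverse-distance-weighted height field z(x, y) on a grid.

    points: iterable of (x, y, z) samples (the 3D points);
    xs, ys: 1D grid coordinates.
    Returns a (len(ys), len(xs)) array of interpolated z values:
    nearer sample points receive larger weights.
    """
    pts = np.asarray(points, dtype=float)
    gx, gy = np.meshgrid(xs, ys)
    # Distance from every grid cell center to every sample point.
    d = np.hypot(gx[..., None] - pts[:, 0], gy[..., None] - pts[:, 1])
    w = 1.0 / (d + eps) ** power
    return (w * pts[:, 2]).sum(-1) / w.sum(-1)
```

At a sample location the weight is dominated by the coincident point, so the surface passes (numerically) through the data, matching the intuition that closer points get more weight.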

Referring now to FIG. 5C, in one embodiment, the computing device 106 computes, for each of the pixels 504 of the 2D image, the distance to the corresponding point on the 3D surface 506 as observed from the perspective of the image capture device 104. In one embodiment, the computing device 106 transmits the 3D surface 506 to the processing server 108, and the processing server 108 uses any of several techniques to compute those distances. In one embodiment, the computing device 106 or the processing server 108 computes the per-pixel distances using a ray casting technique.

In one embodiment, the computing device 106 traces a ray for each pixel of the 2D image, such that the ray originates at a simulated image capture device and passes through that pixel. The ray may then intersect the 3D surface 506, producing a depth measurement for the pixel. In one embodiment, the computing device 106 uses a ray tracing algorithm to generate depth data for each pixel of the 2D image. Those skilled in the art will appreciate that the methods of computing depth data described above are not limiting; other surface intersection and distance measurement techniques may be used. In one embodiment, the per-pixel depth data generated by the computing device 106 by applying the ray casting technique may be stored in a data structure. For example, the data structure may be an array or a list. Those skilled in the art will appreciate that the per-pixel depth data corresponds to actual distance values in the chosen unit of measurement (for example, meters or inches); a value of 0.63728 would therefore correspond to 0.63728 meters, or 63.728 cm. The actual distance values require no further decoding or interpolation. In one embodiment, the per-pixel depth data may be used by the computing device 106 or the processing server 108 for further measurement, detection, and analysis of skin conditions.
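As a sketch of the per-pixel depth step, the following casts a ray through a pixel and intersects it with a triangulated 3D surface using the standard Moller-Trumbore ray/triangle test. The patent does not specify the intersection routine, so `ray_triangle_depth` and `pixel_depth` are one plausible implementation, not the patent's method.

```python
import numpy as np

def ray_triangle_depth(origin, direction, tri, eps=1e-12):
    """Moller-Trumbore ray/triangle intersection.

    Returns the distance t along `direction` to the hit point,
    or None if the ray misses the triangle.
    """
    v0, v1, v2 = (np.asarray(v, float) for v in tri)
    d = np.asarray(direction, float)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = np.asarray(origin, float) - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

def pixel_depth(origin, direction, mesh_triangles):
    """Depth for one pixel: nearest intersection over the surface mesh."""
    hits = []
    for tri in mesh_triangles:
        t = ray_triangle_depth(origin, direction, tri)
        if t is not None:
            hits.append(t)
    return min(hits) if hits else None
```

The returned t is directly the actual distance value described above, in whatever unit the mesh coordinates use.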

FIGS. 6A, 6B, 6C, 6D, 6E, and 6F illustrate an exemplary method for filtering data noise in a set of point clouds according to an embodiment of the invention. For the purpose of illustrating a clear example, FIGS. 6A, 6B, 6C, 6D, 6E, and 6F will be discussed with reference to FIGS. 1 through 5.

FIGS. 6A, 6B, and 6C depict an input set 602 of 3D point clouds that may be captured by the 3D device 206 of the image capture device 104. In one embodiment, each point cloud in the set is associated with one capture event and with one or more pixels of the 2D image captured by the 2D camera 204.

In one embodiment, the input set 602 of 3D point clouds is then subjected to a noise removal mechanism that employs voxel-based neighbor checking (similar to the natural neighbor technique) and point position averaging (similar to the IDW technique described above with respect to FIGS. 5A through 5C).

Referring now to FIG. 6D, in one embodiment, the computing device 106 combines the input set 602 of point clouds to produce a composite 3D point cloud 604. In one embodiment, the composite 3D point cloud 604 is generated by comparing the variation of regions across the input set 602 of point clouds, using a voxel-based cloud partitioning of the bounds of the input set 602.

Referring now to FIGS. 6E and 6F, in one embodiment, the computing device 106 generates a filtered 3D point cloud 608 using a voxel neighbor location technique 606. For example, for each voxel, a point must be present in a minimum number of the point clouds before the points in that voxel are judged valid and the average of their positions is used for that region. This eliminates 3D points that can be considered data noise, because noise is typically present only in individual clouds and is probabilistically unlikely to persist across all point clouds in the input set 602. In one embodiment, the computing device 106 may perform additional filtering on the filtered 3D point cloud 608 to remove outlier segments, smooth small noise not removed by the multi-frame noise removal, and structure the 3D point data in a manner better suited to triangulation. For example, some techniques for performing the additional filtering may include plane-fit point normal estimation, simplification, and bilateral filtering.
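The multi-frame voxel check described above can be sketched as follows: a voxel's points are kept, and averaged, only if points land in that voxel in at least a minimum number of the input clouds. The voxel size and cloud-count threshold below are illustrative values, not parameters from the patent.

```python
import numpy as np
from collections import defaultdict

def voxel_filter(clouds, voxel_size=0.01, min_clouds=3):
    """Multi-frame voxel noise filter.

    clouds: list of point clouds, each an iterable of (x, y, z) points.
    A voxel survives only if points from at least `min_clouds` distinct
    clouds fall inside it; the surviving point is the average position.
    """
    # voxel key -> (set of contributing cloud ids, list of points)
    buckets = defaultdict(lambda: (set(), []))
    for i, cloud in enumerate(clouds):
        for p in np.asarray(cloud, float):
            key = tuple(np.floor(p / voxel_size).astype(int))
            ids, pts = buckets[key]
            ids.add(i)
            pts.append(p)
    kept = [np.mean(pts, axis=0) for ids, pts in buckets.values()
            if len(ids) >= min_clouds]
    return np.array(kept)
```

A spurious point appearing in only one cloud falls in a voxel with a single contributing cloud id and is discarded, which matches the rationale that noise rarely persists across all captures.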

FIGS. 7A, 7B, and 7C illustrate exemplary steps for generating device-space positions for the pixels of a 2D image according to an embodiment of the invention. For the purpose of illustrating a clear example, FIGS. 7A through 7C will be discussed with respect to FIGS. 1 through 4. As used herein, "device-space" refers to a 3D position relative to a device in a coordinate system in which the device has its position and orientation at the origin (0,0,0). For example, in FIGS. 7A through 7C the image capture device 104 is at the origin.

Referring now to FIGS. 7A through 7C, in one embodiment, the per-pixel depth data or depth map generated by the computing device 106 or the processing server 108 in step 406, as described with respect to FIG. 4, may be used by the computing device 106 or the processing server 108 to compute the dimensions of a given pixel in the 2D image. The pixel dimensions of the pixels that capture the skin condition 103 can be useful for analyzing the skin condition 103.

In one embodiment, the dimensions of a given pixel may be computed from the depth values of the given pixel and four neighboring pixels. In one embodiment, the depth values of the four neighboring pixels help establish the actual, real-world positions of the four corners of the pixel. For example, referring to FIG. 7A, the dimensions of pixel p may be computed based on the depth values of the four neighboring pixels p_1^N, p_2^N, p_3^N, and p_4^N.

To compute the device-space positions of the pixels p, p_1^N, p_2^N, p_3^N, and p_4^N, two-dimensional coordinates are first assigned to those pixels. For example, referring to FIG. 7B, the pixels p, p_1^N, p_2^N, p_3^N, and p_4^N are assigned the coordinates (1,1), (0,2), (2,2), (0,0), and (2,0), respectively. Other embodiments may use other coordinates; any arbitrary coordinates may be used, as long as the same coordinate scheme is applied to all pixels in the 2D image.

Referring to FIG. 7C, in one embodiment, the method for computing the dimensions of a given pixel p may further utilize the known horizontal field of view Θ_H and vertical field of view Θ_V of the 2D camera 204 to compute each pixel's camera-space position in three dimensions. In one embodiment, the computing device 106 or the processing server 108 determines the device-space positions of the pixels p, p_1^N, p_2^N, p_3^N, and p_4^N based on the above parameters. In one embodiment, the device-space positions of the pixels p, p_1^N, p_2^N, p_3^N, and p_4^N are computed according to the following formulas:

[equation images not reproduced: the device-space coordinates x and y are computed from the pixel's depth D, its coordinates C, the image resolution R, and the field-of-view angles Θ_H and Θ_V, with z = D; the variables are defined below]

In the above formulas, D represents the depth of a given pixel, and C represents the coordinates of the pixel as discussed above with respect to FIG. 7B, with C_h and C_v representing the horizontal and vertical coordinate values, respectively. R represents the resolution of the total image, with R_h and R_v representing the horizontal and vertical resolutions, respectively. Θ_H and Θ_V represent the horizontal and vertical field-of-view angles of the 2D camera 204. The device-space positions computed by the computing device 106 for the pixels p, p_1^N, p_2^N, p_3^N, and p_4^N thus allow further measurements for pixel p based on the desired area or path measurement. Those skilled in the art will appreciate that per-pixel evaluation allows accurate surface feature measurement regardless of contour and camera or device angle. Furthermore, when computing the size of a region spanning multiple pixels (for example, the size of the skin condition 103), the same process is used across all relevant pixels, and the sum of the per-pixel results represents the result for the region being measured.
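The exact x and y expressions are in the equation images; as an illustration of the kind of mapping the variables D, C, R, Θ_H, and Θ_V describe, the sketch below uses a standard pinhole-style unprojection from pixel coordinates, resolution, field-of-view angles, and depth. This is an assumed form for illustration, not necessarily the patent's exact expression; only z = D is taken directly from the text.

```python
import math

def device_space_position(depth, coord, res, fov_h, fov_v):
    """Map a pixel to a 3D device-space position.

    depth: D, the pixel's depth value
    coord: (Ch, Cv) pixel coordinates; res: (Rh, Rv) image resolution
    fov_h, fov_v: horizontal/vertical field-of-view angles in radians

    The pinhole-style x/y mapping is an assumption; z = D as stated
    in the text.
    """
    ch, cv = coord
    rh, rv = res
    x = (ch / rh - 0.5) * 2.0 * depth * math.tan(fov_h / 2.0)
    y = (cv / rv - 0.5) * 2.0 * depth * math.tan(fov_v / 2.0)
    return (x, y, depth)
```

Under this mapping the image-center pixel unprojects to a point straight ahead of the camera at distance D, and pixels farther from the center spread out proportionally to depth and to the tangent of half the viewing angle.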

FIGS. 8A, 8B, and 8C illustrate exemplary steps for generating the pixel dimensions of a pixel of a 2D image according to an embodiment of the invention. For the purpose of illustrating a clear example, FIGS. 8A through 8C will be discussed with respect to FIGS. 7A through 7C.

Referring now to FIGS. 8A through 8C, in one embodiment, the computing device 106 or the processing server 108 computes the pixel dimensions of a given pixel based on the device-space positions of the given pixel and its neighboring pixels previously determined as in FIGS. 7A through 7C. Continuing the example from above, after determining the camera-space positions of the pixels p, p_1^N, p_2^N, p_3^N, and p_4^N, the computing device 106 or the processing server 108 may use vector subtraction to generate pixel-space corner vectors. As used herein, "pixel-space" refers to a 3D position relative to the corresponding 3D position of a reference pixel, such that the 3D position of the reference pixel is at the origin (0,0,0).

Furthermore, a "corner vector" refers to the 3D position of each corner of a given pixel in pixel-space. Referring to FIG. 8A, Pos and Pos_n^N refer to the position vectors of the pixel p and of its corner-adjacent neighboring pixels, respectively. Referring to FIG. 8B, V_1 through V_4 refer to the corner vectors of the given pixel p. The corner vectors are computed according to the following formula:

V_n = (Pos_n^N − Pos)/2

Therefore, if Pos = (x_0, y_0, z_0) and Pos_1^N = (x_1, y_1, z_1), then:

V_1 = ((x_1, y_1, z_1) − (x_0, y_0, z_0))/2

V_1 = ((x_1 − x_0)/2, (y_1 − y_0)/2, (z_1 − z_0)/2)

Referring now to FIG. 8C, in one embodiment, the computing device 106 or the processing server 108 may use the corner vectors to compute the horizontal and vertical dimensions of the pixel. In one embodiment, the dimensions are computed according to the following formulas:

[eight equation images not reproduced: the top and bottom horizontal dimensions h_t and h_b and the left and right vertical dimensions v_l and v_r are each computed from a pair of the corner vectors V_1 through V_4]
The corner vectors are combined in pairs for the horizontal and vertical dimensions, where h_t represents the top horizontal dimension, h_b the bottom horizontal dimension, v_l the left vertical dimension, and v_r the right vertical dimension, and x_0, y_0, z_0 represent the position vector of the given pixel. In one embodiment, only two of the four dimensions are needed for further measurement or analysis of the skin condition 103. For example, only an aggregate pixel width and an aggregate pixel height may be required to compute the surface area of the skin condition 103, and the computing device 106 or the processing server 108 may use the average of the corresponding edges to provide a single pixel width/height combination.
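The corner-vector step above can be sketched as follows: given the device-space positions of a pixel and its four corner-adjacent neighbors, each corner vector is V_n = (Pos_n^N − Pos)/2, and each side length is the distance between a pair of corners. The pairing of corners to the h_t, h_b, v_l, and v_r dimensions is an assumption about the corner ordering, which the equation images define precisely.

```python
import math

def _sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

def pixel_dimensions(pos, neighbors):
    """Compute (h_t, h_b, v_l, v_r) for one pixel.

    pos: device-space position of the pixel.
    neighbors: device-space positions of its four corner-adjacent
    pixels, assumed ordered (top-left, top-right, bottom-left,
    bottom-right); that ordering is an illustrative assumption.
    """
    # Corner vectors V_n = (Pos_n^N - Pos) / 2, in pixel-space.
    corners = [tuple(c / 2.0 for c in _sub(n, pos)) for n in neighbors]
    tl, tr, bl, br = corners
    h_top = math.dist(tl, tr)
    h_bottom = math.dist(bl, br)
    v_left = math.dist(tl, bl)
    v_right = math.dist(tr, br)
    return (h_top, h_bottom, v_left, v_right)
```

For a region spanning many pixels, the same computation would be repeated per pixel and the per-pixel areas summed, as the text describes.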

FIGS. 9A, 9B, and 9C illustrate exemplary steps for measuring sub-pixel regions of a 2D image according to an embodiment of the invention. For the purpose of illustrating a clear example, FIGS. 9A through 9C will be discussed with reference to the previously discussed figures. The pixel dimensions of one or more pixels, generated by the computing device 106 or the processing server 108 as described above with respect to FIGS. 8A through 8C, may be used by the computing device 106 or the processing server 108 to measure a sub-pixel region of a given pixel in the 2D image. Sub-pixel measurements of regions of the 2D image can be useful for analyzing the skin condition 103 where the skin condition 103 is not depicted by a whole number of pixels, that is, where the skin condition 103 occupies only parts of certain pixels.

Referring now to FIGS. 9A through 9C, in one embodiment, a pixel-space measurement field may represent a sub-pixel region (for example, the line depicted in FIG. 9A) or a partial pixel region (for example, FIGS. 9B and 9C). In one embodiment, normalized pixel edge intersections are used to derive new camera-space vectors via linear interpolation of the corner vectors discussed above with respect to FIGS. 8A through 8C. In one embodiment, a length is computed by calculating the distance between a pair of intersection point vectors. For example, in FIG. 9A, the two intersection point vectors I_1 and I_2 are used to determine the length of the line shown in FIG. 9A.

Similarly, when a partial extent is required, the "interior" of the pixel is determined, and three triangle point vectors are computed using the relevant edge intersections and inverse trigonometric functions in order to derive the triangle's properties. For example, in FIG. 9B, the partial pixel extent is computed using the triangle formed by the intersection vectors I_2 and I_1 together with a third vector [symbol not reproduced in the source]. Where the intersection triangle does not represent the full pixel "interior", a rectangle is used to compute the remaining extent dimensions, if required. For example, in FIG. 9C, the extent of the sub-pixel region bounded by the intersection vectors is computed using the rectangle formed by the intersection vector I_2, a second vector [symbol not reproduced in the source], and the corner vectors V_3 and V_4.

In one embodiment, the edge intersections are computed using a standard 2D infinite-line intersection formula, given below, and any resulting intersection that falls outside the normalized pixel space is discarded.

[equation image not reproduced: standard 2D infinite-line intersection formula in terms of the line start and end coordinates defined below]

In the above formula, L_s1 represents the start coordinates of line 1 and L_e1 the end coordinates of line 1; similarly, L_s2 represents the start coordinates of line 2 and L_e2 the end coordinates of line 2. In one embodiment, the computations described in FIGS. 5A through 5C, 6A through 6F, 7A through 7C, 8A through 8C, and 9A through 9C are performed by a dedicated computation and analysis module of the computing device 106 or the processing server 108.
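The standard infinite-line intersection referred to above can be sketched as follows, with each line given by its start and end points (L_s1/L_e1 and L_s2/L_e2); intersections outside the normalized pixel square are discarded, as described in the text. The determinant form below is the standard formula, assumed here to match the unreproduced equation image.

```python
def line_intersection(ls1, le1, ls2, le2, eps=1e-12):
    """Intersection of two infinite 2D lines.

    Each line is defined by a start and end point: (Ls1, Le1) for
    line 1 and (Ls2, Le2) for line 2. Returns (x, y), or None for
    parallel (or coincident) lines.
    """
    (x1, y1), (x2, y2) = ls1, le1
    (x3, y3), (x4, y4) = ls2, le2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < eps:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    px = (a * (x3 - x4) - (x1 - x2) * b) / denom
    py = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (px, py)

def inside_unit_pixel(pt):
    """Keep only intersections inside the normalized pixel space [0,1]^2."""
    return pt is not None and 0.0 <= pt[0] <= 1.0 and 0.0 <= pt[1] <= 1.0
```

For example, a pixel's two diagonals intersect at its normalized center (0.5, 0.5), which survives the in-pixel check.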

FIG. 10 illustrates an exemplary view of an output 2D image for analyzing a skin condition according to an embodiment of the invention. For the purpose of illustrating a clear example, FIG. 10 will be discussed with respect to FIGS. 1 through 4.

In FIG. 10, the subject 102 and at least one skin condition 103 are visible via a graphical user interface (GUI). In one embodiment, the GUI is implemented by a standalone application executing on the computing device 106 or the processing server 108. In one embodiment, the GUI is an extension of the user interface implemented on the image capture device 104, which may be manipulated by a user of the image capture device 104 via a touchscreen as described above. In one embodiment, the GUI may allow the user to interact, using the input device 314 described with respect to the computer system 300 of FIG. 3, with the final 2D image of the subject 102 together with the embedded depth data for each pixel of the 2D image. In one embodiment, the user may be able to manipulate the final 2D image of the subject 102 for measurement and analysis of the skin condition 103. For example, the user may be able to rotate, zoom in or out, change the viewpoint, take screenshots, and measure changes in the skin condition 103. In one embodiment, the user can initiate, in the GUI, a side-by-side comparison of the final 2D image and a previous final 2D image stored by the processing server 108. In one embodiment, the GUI allows the user to view the skin condition 103 at different magnifications, which is also useful for detecting other skin conditions of different sizes.

In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what the applicants intend to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent corrections. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of those terms as used in the claims.

102‧‧‧Subject

103‧‧‧Skin condition

104‧‧‧Image capture device

106‧‧‧Computing device

108‧‧‧Processing server

Claims (39)

A method of analyzing a skin condition, comprising: receiving, using a computing device, a two-dimensional (2D) image of the skin condition and a set of three-dimensional (3D) point clouds associated with the 2D image; generating, using the computing device, a 3D surface from the set of 3D point clouds; generating, using a second computing device, a depth map for the 2D image based on the 3D surface, wherein the depth map comprises a depth value for each pixel of the 2D image; and analyzing, using the computing device, the skin condition based on the 2D image and the depth map; wherein the horizontal and vertical dimensions of each pixel over a desired area or path in the 2D image are computed to perform the analysis.

The method of claim 1, wherein the horizontal and vertical dimensions of each pixel over the desired area or path in the 2D image are computed using pixel-space corner vectors for each pixel over the desired area or path in the 2D image, according to the following formulas:

[eight equation images not reproduced]

where: V_1, V_2, V_3, V_4 represent the pixel-space corner vectors for a given pixel; x_0, y_0, z_0 represent the position vector of the given pixel; h_t represents the top horizontal dimension of the given pixel; h_b represents the bottom horizontal dimension of the given pixel; v_l represents the left vertical dimension of the given pixel; and v_r represents the right vertical dimension of the given pixel;

wherein the pixel-space corner vectors for each pixel over the desired area or path in the 2D image are computed using the 3D camera-space position of each pixel over the desired area or path in the 2D image, according to the following formula:

V_n = (Pos_n^N − Pos)/2

where: Pos represents the position vector of the given pixel; Pos_n^N represents the position vector of a pixel located at a corner of the given pixel; and V_n represents a pixel-space vector for the given pixel, with n being 1, 2, 3, or 4;

wherein the 3D camera-space position of each pixel over the desired area or path in the 2D image is computed according to the following formulas:

[equation images for x and y not reproduced]

z = D

where: D represents the depth of the given pixel; C represents the coordinates of the given pixel, with C_h and C_v representing the horizontal and vertical coordinate values, respectively; R represents the resolution of the total image, with R_h and R_v representing the horizontal and vertical resolutions, respectively; and Θ_H and Θ_V represent the horizontal and vertical 2D camera field-of-view angles.
The method of claim 1, wherein each 3D point in each 3D point cloud of the set of 3D point clouds corresponds to at least one pixel of the 2D image.

The method of claim 1, further comprising: storing the depth map in a memory device communicatively coupled to the second computing device.

The method of claim 1, wherein the second computing device is used to compute the depth map by applying a ray casting algorithm to the 2D image and the 3D surface.

The method of claim 1, wherein the second computing device is used to compute the depth map by applying a ray tracing algorithm to the 2D image and the 3D surface.

The method of claim 1, wherein the 2D image and the set of 3D point clouds are captured using an image capture device.

The method of claim 7, further comprising: orbiting the image capture device along at least one axis around a subject having the skin condition to capture a set of 2D images and an associated set of 3D point clouds.

The method of claim 7, further comprising storing the 2D image and the set of 3D point clouds in a memory device communicatively coupled to the image capture device.

The method of claim 7, wherein the image capture device comprises a two-dimensional (2D) camera.

The method of claim 10, further comprising: using the computing device to generate a set of pixel dimensions for each pixel of the 2D image based on the depth value of each pixel, an angle of a horizontal field of view of the 2D camera, and an angle of a vertical field of view of the 2D camera.
The method of claim 10, wherein the 2D camera captures a color 2D image of the skin condition.

The method of claim 10, wherein the 2D camera captures a monochrome 2D image of the skin condition.

The method of claim 10, 11, 12, or 13, wherein the 2D camera has a resolution of at least 8 megapixels.

The method of claim 7, wherein the image capture device comprises a three-dimensional (3D) device.

The method of claim 15, wherein the 3D device is a 3D scanner.

The method of claim 15, wherein the 3D device is a 3D camera, and wherein the 3D camera captures the set of 3D point clouds corresponding to the 2D image.

The method of claim 1, wherein the 3D surface is an interpolated 3D surface mesh.

The method of claim 1, wherein the 3D surface depicts the contours of the skin condition.

The method of claim 1, wherein the 3D surface is generated using the computing device by filtering out a set of aberrations in at least one 3D point cloud of the set of 3D point clouds.

The method of claim 1, wherein the 3D surface is generated using the computing device by using at least one interpolation algorithm.

The method of claim 1, wherein analyzing the skin condition further comprises measuring a size of the skin condition using the computing device.

The method of claim 22, wherein analyzing the skin condition further comprises determining a variance of the size of the skin condition using the computing device.
The method of claim 23, wherein analyzing further comprises using the second computing device to automatically diagnose the skin condition based on the variance of the size of the skin condition.

A system for analyzing a skin condition, comprising: an image capture device that captures a two-dimensional (2D) image of the skin condition and a set of three-dimensional (3D) point clouds associated with the 2D image; and a computing device communicatively coupled to the image capture device and a second computing device, wherein the computing device: receives the 2D image and the set of 3D point clouds from the image capture device; generates a 3D surface from the set of 3D point clouds; uses the second computing device to generate a depth map for the 2D image based on the 3D surface, wherein the depth map includes depth data for each pixel of the 2D image; and analyzes the skin condition based on the depth map; wherein the horizontal and vertical dimensions of each pixel over a desired area or path in the 2D image are computed for the analysis.

The system of claim 25, wherein the horizontal and vertical dimensions of each pixel over the desired area or path in the 2D image are computed using pixel-space corner vectors for each pixel over the desired area or path in the 2D image, according to the following formulas:

[eight equations defining V1, V2, V3, V4 and ht, hb, vl, vr, rendered as images in the original]

where: V1, V2, V3, V4 denote the pixel-space corner vectors for a given pixel; x0, y0, z0 denote the position vector of the given pixel; ht denotes the top horizontal dimension of the given pixel; hb denotes the bottom horizontal dimension of the given pixel; vl denotes the left vertical dimension of the given pixel; and vr denotes the right vertical dimension of the given pixel;

wherein the pixel-space corner vectors for each pixel over the desired area or path in the 2D image are computed using the 3D camera-space position of each pixel over the desired area or path in the 2D image, according to the following formula:

[equation for Vn, rendered as an image in the original]

where: Pos denotes the position vector of the given pixel; [the symbols for the position vectors of the pixels located at the corners of the given pixel are rendered as an image in the original]; and Vn denotes the pixel-space vector for the given pixel, with n equal to 1, 2, 3, or 4;

wherein the 3D camera-space position of each pixel over the desired area or path in the 2D image is computed according to the following formulas:

[equations for x and y, rendered as images in the original]

z = D

where: D denotes the depth of the given pixel; C denotes the coordinates of the given pixel, with Ch and Cv the horizontal and vertical coordinate values respectively; R denotes the resolution of the total image, with Rh and Rv the horizontal and vertical resolutions respectively; and ΘH and ΘV denote the horizontal and vertical 2D camera field-of-view angles.
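The camera-space position computation above back-projects each pixel into 3D using its depth (z = D), its image coordinates, the image resolution, and the camera's field-of-view angles. The x and y equations are only available as images in the source document, so the sketch below uses a standard pinhole-model mapping that is consistent with the recited variables (D, Ch, Cv, Rh, Rv, ΘH, ΘV) but is an assumption, not guaranteed to match the patent's exact equations:

```python
import math

def camera_space_position(D, Ch, Cv, Rh, Rv, theta_h_deg, theta_v_deg):
    """Back-project pixel (Ch, Cv) with depth D into 3D camera space.
    Pinhole-model assumption: a pixel's angular offset from the optical
    axis is its fractional offset from the image centre times the FOV."""
    x = D * math.tan(math.radians(theta_h_deg) * (Ch / Rh - 0.5))
    y = D * math.tan(math.radians(theta_v_deg) * (Cv / Rv - 0.5))
    z = D  # the claim recites z = D directly
    return x, y, z

# The centre pixel maps onto the optical axis at the measured depth.
x, y, z = camera_space_position(D=250.0, Ch=2000, Cv=1500,
                                Rh=4000, Rv=3000,
                                theta_h_deg=60.0, theta_v_deg=45.0)
```

Once every pixel has a camera-space position, the corner vectors V1..V4 and the per-pixel dimensions ht, hb, vl, vr follow from differencing neighbouring positions, which is what makes distance and area measurements along an arbitrary path possible.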
The system of claim 25, wherein the image capture device further comprises a 2D camera, wherein the 2D camera captures the 2D image.

The system of claim 25, wherein the image capture device further comprises a 3D device, wherein the 3D device captures the set of 3D point clouds.

The system of claim 25, wherein the image capture device further comprises a battery, wherein the battery powers the image capture device.

The system of claim 25, wherein the image capture device further comprises a flash device, wherein the flash device comprises at least one light-emitting diode.

The system of claim 25, wherein the image capture device further comprises a touchscreen display.

The system of claim 25, further comprising at least one storage device communicatively coupled to at least the second computing device, wherein the storage device stores the depth map.

The system of claim 32, wherein the at least one storage device comprises at least one of the group consisting of: an internal hard drive, an external hard drive, a universal serial bus (USB) drive, a solid-state drive, and a network-attached storage device.

The system of claim 25, wherein the computing device and the second computing device comprise at least one of the group consisting of: a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and an advanced RISC machine (ARM).
The system of claim 25, wherein the computing device is communicatively coupled to the image capture device and the second computing device via at least one network connection.

The system of claim 35, wherein the at least one network connection comprises at least one of the group consisting of: Wi-Fi, Bluetooth, Ethernet, a fiber-optic connection, infrared, near-field communication, and a coaxial cable connection.

A system for automatically detecting a skin condition, comprising: an image capture device that automatically captures a set of two-dimensional (2D) images of a subject and a set of three-dimensional (3D) point clouds associated with each 2D image in the set of 2D images; a computing device communicatively coupled to the image capture device and a second computing device, wherein the computing device: automatically receives the set of 2D images of the subject and the set of 3D point clouds for each 2D image; and automatically generates a 3D rendering of the subject based on the sets of 3D point clouds; wherein the second computing device generates, based on the 3D rendering, a depth map for each 2D image in the set of 2D images, wherein the depth map includes depth data for each pixel in each 2D image; and a storage device communicatively coupled to the computing device, wherein the storage device automatically stores the depth map; wherein the horizontal and vertical dimensions of each pixel over a desired area or path in the 2D image are computed for the analysis.
The system of claim 37, wherein the horizontal and vertical dimensions of each pixel over the desired area or path in the 2D image are computed using pixel-space corner vectors for each pixel over the desired area or path in the 2D image, according to the following formulas:

[eight equations defining V1, V2, V3, V4 and ht, hb, vl, vr, rendered as images in the original]

where: V1, V2, V3, V4 denote the pixel-space corner vectors for a given pixel; x0, y0, z0 denote the position vector of the given pixel; ht denotes the top horizontal dimension of the given pixel; hb denotes the bottom horizontal dimension of the given pixel; vl denotes the left vertical dimension of the given pixel; and vr denotes the right vertical dimension of the given pixel;

wherein the pixel-space corner vectors for each pixel over the desired area or path in the 2D image are computed using the 3D camera-space position of each pixel over the desired area or path in the 2D image, according to the following formula:

[equation for Vn, rendered as an image in the original]

where: Pos denotes the position vector of the given pixel; [the symbols for the position vectors of the pixels located at the corners of the given pixel are rendered as an image in the original]; and Vn denotes the pixel-space vector for the given pixel, with n equal to 1, 2, 3, or 4;

wherein the 3D camera-space position of each pixel over the desired area or path in the 2D image is computed according to the following formulas:

[equations for x and y, rendered as images in the original]

z = D

where: D denotes the depth of the given pixel; C denotes the coordinates of the given pixel, with Ch and Cv the horizontal and vertical coordinate values respectively; R denotes the resolution of the total image, with Rh and Rv the horizontal and vertical resolutions respectively; and ΘH and ΘV denote the horizontal and vertical 2D camera field-of-view angles.
The system of claim 37, wherein the client server further comprises an analysis module, and wherein the analysis module: automatically generates measurements of the skin condition based on the depth map and the 2D image; automatically determines a variance of the skin condition based on comparing the measurements of the skin condition with previously stored measurements of the skin condition; and automatically generates assessment and diagnosis output according to the variance of the skin condition.
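The analysis module recited in this claim measures the condition, compares the result against stored measurements, and emits an assessment. A minimal sketch of that pipeline; the 20% growth threshold, function names, and output fields are hypothetical illustrations, not taken from the patent:

```python
def assess(current_mm2, previous_mm2, growth_threshold=0.20):
    """Compute the variance of a lesion-size measurement against a
    previously stored one and produce a simple assessment output."""
    variance = current_mm2 - previous_mm2
    relative = variance / previous_mm2 if previous_mm2 else float("inf")
    assessment = ("review recommended" if relative > growth_threshold
                  else "stable")
    return {"variance_mm2": variance,
            "relative_change": relative,
            "assessment": assessment}

# A lesion that grew from 25 mm^2 to 36 mm^2 between scans.
report = assess(current_mm2=36.0, previous_mm2=25.0)
```

Keeping the comparison in absolute units (mm², via the depth map) rather than pixels is what lets measurements from different capture sessions, distances, and cameras be compared at all.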
TW107128742A 2017-08-17 2018-08-17 Systems and methods for analyzing cutaneous conditions TWI804506B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10201706752X 2017-08-17
SG10201706752XA SG10201706752XA (en) 2017-08-17 2017-08-17 Systems and methods for analyzing cutaneous conditions

Publications (2)

Publication Number Publication Date
TW201922163A TW201922163A (en) 2019-06-16
TWI804506B true TWI804506B (en) 2023-06-11

Family

ID=65362686

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107128742A TWI804506B (en) 2017-08-17 2018-08-17 Systems and methods for analyzing cutaneous conditions

Country Status (12)

Country Link
US (1) US11504055B2 (en)
EP (1) EP3668387B1 (en)
KR (1) KR102635541B1 (en)
CN (1) CN111031897B (en)
AU (1) AU2018316381A1 (en)
BR (1) BR112020003278A2 (en)
CA (1) CA3073259A1 (en)
ES (1) ES2950643T3 (en)
NZ (1) NZ761745A (en)
SG (1) SG10201706752XA (en)
TW (1) TWI804506B (en)
WO (1) WO2019035768A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11087523B2 (en) * 2018-10-26 2021-08-10 Autodesk, Inc. Production ray tracing of feature lines
CN112085846A (en) * 2019-06-14 2020-12-15 通用电气精准医疗有限责任公司 Method and system for generating a 3D point cloud of an object in an imaging system
CN111461980B (en) * 2020-03-30 2023-08-29 北京百度网讯科技有限公司 Performance estimation method and device of point cloud stitching algorithm
US10853955B1 (en) 2020-05-29 2020-12-01 Illuscio, Inc. Systems and methods for point cloud encryption
WO2022108067A1 (en) * 2020-11-19 2022-05-27 Samsung Electronics Co., Ltd. Method for rendering relighted 3d portrait of person and computing device for the same
US11823327B2 (en) 2020-11-19 2023-11-21 Samsung Electronics Co., Ltd. Method for rendering relighted 3D portrait of person and computing device for the same
KR102492692B1 (en) * 2020-12-01 2023-01-27 (주)와이즈콘 System and method for analysis of skin image using deep learning and computer program for the same
TWI775229B (en) * 2020-12-02 2022-08-21 碁曄科技股份有限公司 A skin condition detection system and method thereof
KR102472583B1 (en) 2021-01-28 2022-11-30 한림대학교 산학협력단 Artificial intelligence based electronic apparatus for providing diagnosis information on abrasion, control method, and computer program
US11386235B1 (en) 2021-11-12 2022-07-12 Illuscio, Inc. Systems and methods for dynamic checksum generation and validation with customizable levels of integrity verification
WO2023199357A1 (en) * 2022-04-13 2023-10-19 Garg Dr Suruchi A system of identifying plurality of parameters of a subject's skin and a method thereof
US11527017B1 (en) 2022-05-03 2022-12-13 Illuscio, Inc. Systems and methods for dynamic decimation of point clouds and data points in a three-dimensional space
US11468583B1 (en) 2022-05-26 2022-10-11 Illuscio, Inc. Systems and methods for detecting and correcting data density during point cloud generation
CN116320199B (en) * 2023-05-19 2023-10-31 科大乾延科技有限公司 Intelligent management system for meta-universe holographic display information

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090118600A1 (en) * 2007-11-02 2009-05-07 Ortiz Joseph L Method and apparatus for skin documentation and analysis
TWI573100B (en) * 2016-06-02 2017-03-01 Zong Jing Investment Inc Method for automatically putting on face-makeup
US20170084026A1 (en) * 2015-09-21 2017-03-23 Korea Institute Of Science And Technology Method for forming 3d maxillofacial model by automatically segmenting medical image, automatic image segmentation and model formation server performing the same, and storage medium storing the same

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6278460B1 (en) * 1998-12-15 2001-08-21 Point Cloud, Inc. Creating a three-dimensional model from two-dimensional images
CN1218282C (en) * 2001-11-27 2005-09-07 三星电子株式会社 Node structure for representing three-D object by depth image
GB0209080D0 (en) * 2002-04-20 2002-05-29 Virtual Mirrors Ltd Methods of generating body models from scanned data
US7546156B2 (en) * 2003-05-09 2009-06-09 University Of Rochester Medical Center Method of indexing biological imaging data using a three-dimensional body representation
US20110218428A1 (en) * 2010-03-04 2011-09-08 Medical Scan Technologies, Inc. System and Method for Three Dimensional Medical Imaging with Structured Light
CN101966083B (en) * 2010-04-08 2013-02-13 太阳系美容事业有限公司 Abnormal skin area computing system and computing method
US20130235233A1 (en) * 2012-03-09 2013-09-12 Research In Motion Limited Methods and devices for capturing images
CA2930184C (en) * 2013-12-03 2024-04-23 Children's National Medical Center Method and system for wound assessment and management
SG10201405182WA (en) * 2014-08-25 2016-03-30 Univ Singapore Technology & Design Method and system
US10004403B2 (en) * 2014-08-28 2018-06-26 Mela Sciences, Inc. Three dimensional tissue imaging system and method
US10455134B2 (en) * 2014-10-26 2019-10-22 Galileo Group, Inc. Temporal processes for aggregating multi dimensional data from discrete and distributed collectors to provide enhanced space-time perspective
US10007971B2 (en) * 2016-03-14 2018-06-26 Sensors Unlimited, Inc. Systems and methods for user machine interaction for image-based metrology
WO2018185560A2 (en) * 2017-04-04 2018-10-11 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
CN110305140B (en) * 2019-07-30 2020-08-04 上海勋和医药科技有限公司 Dihydropyrrolopyrimidines selective JAK2 inhibitors


Also Published As

Publication number Publication date
EP3668387A1 (en) 2020-06-24
TW201922163A (en) 2019-06-16
ES2950643T3 (en) 2023-10-11
SG10201706752XA (en) 2019-03-28
AU2018316381A1 (en) 2020-03-12
CN111031897A (en) 2020-04-17
KR20200042509A (en) 2020-04-23
CN111031897B (en) 2023-09-29
BR112020003278A2 (en) 2020-09-01
WO2019035768A1 (en) 2019-02-21
KR102635541B1 (en) 2024-02-08
US20200205723A1 (en) 2020-07-02
EP3668387A4 (en) 2021-05-12
NZ761745A (en) 2022-07-01
CA3073259A1 (en) 2019-02-21
EP3668387B1 (en) 2023-05-24
US11504055B2 (en) 2022-11-22

Similar Documents

Publication Publication Date Title
TWI804506B (en) Systems and methods for analyzing cutaneous conditions
US10825198B2 (en) 3 dimensional coordinates calculating apparatus, 3 dimensional coordinates calculating method, 3 dimensional distance measuring apparatus and 3 dimensional distance measuring method using images
US8988317B1 (en) Depth determination for light field images
US9886774B2 (en) Photogrammetric methods and devices related thereto
EP2976748B1 (en) Image-based 3d panorama
US10681269B2 (en) Computer-readable recording medium, information processing method, and information processing apparatus
WO2019015154A1 (en) Monocular three-dimensional scanning system based three-dimensional reconstruction method and apparatus
WO2012165491A1 (en) Stereo camera device and computer-readable recording medium
KR20170086476A (en) Distance measurement device for motion picture camera focus applications
WO2017199696A1 (en) Image processing device and image processing method
JP2019508165A (en) Malignant tissue determination based on thermal image contours
WO2023213252A1 (en) Scanning data processing method and apparatus, and device and medium
US11937967B2 (en) Automating a medical environment
JP7006810B2 (en) 3D measuring device, mobile robot, push wheel type moving device and 3D measurement processing method
CN112258538A (en) Method and device for acquiring three-dimensional data of human body
JP3637416B2 (en) Three-dimensional measurement method, three-dimensional measurement system, image processing apparatus, and computer program
JP5409451B2 (en) 3D change detector
US20240164758A1 (en) Systems and methods for generating patient models based on ultrasound images
JP6257798B2 (en) Image processing apparatus and image processing method
JP2015005200A (en) Information processing apparatus, information processing system, information processing method, program, and memory medium
CN112489745A (en) Sensing device for medical facility and implementation method