TW202242379A - Methods and apparati for nondestructive detection of undissolved particles in a fluid - Google Patents


Info

Publication number
TW202242379A
Authority
TW
Taiwan
Prior art keywords
particles
particle
vessel
container
fluid
Prior art date
Application number
TW111126982A
Other languages
Chinese (zh)
Other versions
TWI840888B (en)
Inventor
葛萊漢 F 米爾內
爾維 福瑞恩德
萊恩 L 史密斯
Original Assignee
美商安美基公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=47008675&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=TW202242379(A). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by 美商安美基公司
Publication of TW202242379A
Application granted
Publication of TWI840888B

Classifications

    • G01N 15/00 Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G06T 7/60 Analysis of geometric attributes
    • G01N 15/1433 Signal processing using image recognition
    • G01N 15/1429 Signal processing
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/31 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N 21/51 Scattering, i.e. diffuse reflection within a body or fluid inside a container, e.g. in an ampoule
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 21/9027 Dirt detection in containers after filling
    • G01N 33/15 Medicinal preparations; Physical properties thereof, e.g. dissolubility
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G01N 15/1427 Optical investigation techniques, e.g. flow cytometry, using an analyser characterised by its control arrangement, with the synchronisation of components, a time gate for operation of components, or suppression of particle coincidences
    • G01N 2015/1027 Determining speed or velocity of a particle
    • G01N 2015/1029 Particle size
    • G01N 2015/144 Imaging characterised by its optical setup
    • G01N 2015/1445 Three-dimensional imaging, imaging in different image planes, e.g. under different angles or at different depths, e.g. by a relative motion of sample and detector, for instance by tomography
    • G01N 2015/1452 Adjustment of focus; Alignment
    • G01N 2015/1472 Optical investigation with spatial resolution of the texture or inner structure of the particle, with colour
    • G01N 2015/1477 Multiparameters
    • G01N 2015/1493 Particle size
    • G01N 2015/1497 Particle shape
    • G01N 2021/1765 Method using an image detector and processing of image signal
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30241 Trajectory
    • G06T 2207/30242 Counting objects in image

Landscapes

  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biochemistry (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • Analytical Chemistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dispersion Chemistry (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Geometry (AREA)
  • Medicinal Chemistry (AREA)
  • Food Science & Technology (AREA)
  • Pharmacology & Pharmacy (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Sampling And Sample Adjustment (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Investigating Or Analyzing Materials By The Use Of Electric Means (AREA)
  • Accessories For Mixers (AREA)
  • Geophysics And Detection Of Objects (AREA)
  • Image Processing (AREA)

Abstract

The apparati, methods, and computer program products disclosed herein can be used to nondestructively detect undissolved particles, such as glass flakes and/or protein aggregates, in a fluid in a vessel, such as, but not limited to, a fluid that contains a drug.

Description

Methods and apparatus for nondestructive detection of undissolved particles in a fluid

The present application relates to methods and apparatus for nondestructive detection of undissolved particles in a fluid.

To characterize the quality of a given formulation of a drug product, differentiating between the various types of particles present is important. For example, a differentiation system with low specificity may misclassify objects such as glass flakes as proteinaceous particulate matter. High specificity is required of the differentiation system in order to support accurate formulation decisions. Without information about the types of particles in a particular drug product, it can be difficult to formulate the drug product properly.
Unfortunately, conventional particle detection techniques are not suitable for detecting protein aggregates and other small and/or delicate particles. Human inspectors generally cannot detect particles smaller than about 100 microns. Automated inspection techniques are typically destructive; that is, they involve removing the inspected fluid from its container, which usually renders the fluid unsuitable for therapeutic use. In addition, conventional nondestructive inspection systems use only a single snapshot of the container to determine whether particles are present, which often leads to inaccurate particle size measurements and/or particle counts. Conventional inspection techniques can also destroy delicate particles such as protein aggregates. For example, spinning a bottled fluid at high speed (e.g., 2,000 rpm or more for several seconds) can tear apart protein aggregates in the fluid.

One embodiment of the technology disclosed herein relates to an apparatus for nondestructive detection of particles (i.e., undissolved particles) in a vessel at least partially filled with a fluid, such as an aqueous fluid, an emulsion, an oil, or an organic solvent.
As used herein, the term "detection" is understood to include detecting, characterizing, distinguishing, differentiating, or identifying the presence, number, position, identity, size, shape (e.g., elongation or roundness), color, fluorescence, contrast, absorbance, reflectance, or other characteristics of a particle, or a combination of two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, or more of these characteristics. In an illustrative embodiment, the apparatus includes an imager that acquires time-series data representing the trajectories of particles in the fluid. A memory operatively coupled to the imager stores the time-series data, and a processor operatively coupled to the memory detects and/or identifies the particles. More specifically, the processor reverses the temporal order of the time-series data to form reversed time-series data, estimates trajectories of the particles from the reversed time-series data, and determines the presence or type of a particle based on its trajectory. As defined herein, reversed time-series data comprises the frames of the time-series data arranged in reverse chronological order, such that the event that occurred last appears first (and vice versa). Other embodiments include a method and a corresponding computer program product for nondestructive detection of undissolved particles in a vessel at least partially filled with a fluid. Implementing the method involves reversing the temporal order of time-series data representing the trajectories of particles in the fluid to form reversed time-series data, for example by a processor executing instructions encoded in non-volatile memory of the computer program product. The method further includes estimating the trajectories of the particles from the reversed time-series data, then detecting and/or identifying the particles based on those trajectories.
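As a rough illustration of the reversed-time-order trajectory estimation described above, the sketch below reverses the frame order and then links per-frame particle centroids by greedy nearest-neighbor matching. The centroid detector, coordinate units, and the `max_jump` threshold are illustrative assumptions, not details from the patent.

```python
# Sketch of trajectory linking on reversed time-series data. Assumes particle
# centroids have already been detected in each frame; max_jump is an assumed
# upper bound on per-frame displacement, in the same units as the centroids.
from math import dist

def link_trajectories(frames, max_jump=10.0):
    """frames: list (in capture order) of lists of (x, y) centroids.
    Returns trajectories, each reported in forward-time order."""
    reversed_frames = list(reversed(frames))   # reverse the temporal order
    tracks = [[p] for p in reversed_frames[0]] # seed tracks from the final frame
    for detections in reversed_frames[1:]:
        unused = list(detections)
        for track in tracks:
            if not unused:
                break
            # greedy nearest-neighbor match against the track's latest point
            nearest = min(unused, key=lambda p: dist(p, track[-1]))
            if dist(nearest, track[-1]) <= max_jump:
                track.append(nearest)
                unused.remove(nearest)
        tracks.extend([p] for p in unused)     # unmatched detections start new tracks
    return [t[::-1] for t in tracks]           # flip back into forward-time order
```

Seeding tracks from the last frame reflects the motivation for time reversal: once the fluid settles, particles are well separated, so the final frame gives clean starting points for each track.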
Another embodiment is an apparatus for nondestructive detection of an undissolved particle in a vessel at least partially filled with a fluid, the apparatus comprising:
(a) at least two imagers positioned to image the particle from different viewing angles, each imager configured to acquire one or more two-dimensional images of the particle in the fluid;
(b) a memory operatively coupled to the imagers and configured to store the acquired images; and
(c) a processor operatively coupled to the memory and configured to detect the particle by (i) combining the two-dimensional images from the at least two imagers to determine three-dimensional data indicating the position of the particle in the vessel, and (ii) detecting the particle based at least in part on the three-dimensional data.
Also contemplated is a method for nondestructive detection of an undissolved particle in a vessel at least partially filled with a fluid, the method comprising:
(a) using at least two imagers to image the particle from different viewing angles, each acquiring a respective one or more two-dimensional images of the particle in the fluid;
(b) combining the two-dimensional images from the at least two imagers to determine three-dimensional data indicating the position of the particle in the vessel; and
(c) detecting the particle based at least in part on the three-dimensional data.
Other embodiments of the invention include an apparatus, method, and computer program product for nondestructive detection of one or more transparent or reflective objects (e.g., glass flakes) in a vessel at least partially filled with a fluid. An imager acquires data representing light reflected, as a function of time, from a plurality of spatial locations in the vessel, and stores the data in a memory operatively coupled to the imager.
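The two-imager embodiment above can be sketched, under strong simplifying assumptions, as a trivial combination of views. The sketch assumes two calibrated imagers at right angles sharing a common vertical axis and common units, so the front view reports (x, z) and the side view reports (y, z); the geometry and the `z_tolerance` value are assumptions for illustration, not details from the patent.

```python
# Minimal sketch of combining two 2-D views into a 3-D particle position.
def combine_views(front_xz, side_yz, z_tolerance=0.5):
    """front_xz: (x, z) from imager 1; side_yz: (y, z) from imager 2.
    Returns (x, y, z) if the two height readings agree within tolerance."""
    x, z1 = front_xz
    y, z2 = side_yz
    if abs(z1 - z2) > z_tolerance:
        # the two detections do not describe the same particle
        raise ValueError("views do not describe the same particle")
    return (x, y, (z1 + z2) / 2.0)  # average the redundant height readings
```

In practice the correspondence problem (which detection in one view matches which in the other) is the hard part; the shared z coordinate used here is one simple consistency check.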
A processor operatively coupled to the memory, for example in response to instructions encoded in the computer program product, detects the objects (e.g., glass flakes) based on that data by identifying, for each of the plurality of spatial locations represented in the data, the respective maximum amount of reflected light. The processor then determines the presence or absence of objects (e.g., glass flakes) in the vessel based on the number of spatial locations whose respective maximum amounts of reflected light exceed a predetermined value.
Another embodiment of the invention is a method of nondestructively counting and sizing undissolved particles in a vessel at least partially filled with a fluid. The method involves:
(a) receiving at least one image, obtained under particular imaging conditions, of the particles in the vessel;
(b) based on the at least one image, detecting the particles and determining information indicating the apparent sizes of the detected particles in the image;
(c) determining apparent-particle-size population information indicating an apparent particle size distribution of the detected particles; and
(d) determining actual-particle-size population information indicating the actual particle size distribution of the detected particles based on (i) the apparent-particle-size population information and (ii) calibration population information indicating the apparent size distributions of one or more sets of standard-size particles imaged under conditions corresponding to the particular imaging conditions.
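Step (d) above can be read as an unfolding problem: calibration runs with standard-size particles tell us how each true size bin smears across apparent size bins, so the actual distribution can be estimated by solving a linear system apparent = C · actual. The sketch below, including the two-bin calibration matrix in the usage note, is an illustrative assumption about one way to implement this, not the patent's specified procedure.

```python
# Sketch of estimating an actual size distribution from an apparent one using
# a calibration matrix. calibration[i][j] is the (measured) probability that a
# particle in true-size bin j appears in apparent-size bin i.
def estimate_actual_distribution(apparent, calibration):
    """apparent: per-bin observed counts. Solves the square linear system
    apparent = calibration @ actual by Gauss-Jordan elimination with
    partial pivoting; returns the estimated true per-bin counts."""
    n = len(apparent)
    # augmented matrix [calibration | apparent]
    aug = [row[:] + [apparent[i]] for i, row in enumerate(calibration)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(n):
            if r != col and aug[col][col] != 0:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]
```

For instance, with an assumed calibration matrix [[0.9, 0.2], [0.1, 0.8]], an observed apparent population of [90, 10] is explained by a true population of roughly [100, 0]: the apparent counts in the second bin are entirely spillover from the first.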
Another embodiment of the invention is an apparatus for counting and sizing undissolved particles in a vessel at least partially filled with a fluid, the apparatus comprising at least one processor configured to:
(a) receive at least one image, obtained under particular imaging conditions, of the particles in the vessel;
(b) based on the at least one image, detect the particles and determine information indicating the apparent sizes of the detected particles in the image;
(c) determine apparent-particle-size population information indicating an apparent particle size distribution of the detected particles; and
(d) determine actual-particle-size population information indicating the actual particle size distribution of the detected particles based on (i) the apparent-particle-size population information and (ii) calibration population information indicating the apparent size distributions of one or more sets of standard-size particles imaged under conditions corresponding to the particular imaging conditions.
A further embodiment of the invention is a computer program product for nondestructively counting and sizing undissolved particles in a vessel at least partially filled with a fluid, the computer program product comprising non-volatile machine-readable instructions that, when executed by a processor, cause the processor to perform the same operations (a) through (d).
A further embodiment of the invention is a method for nondestructive detection of an undissolved particle in a vessel at least partially filled with a fluid, the method comprising:
(a) using at least one imager to image the particle;
(b) processing the image to determine position data indicating the position of the particle in the vessel;
(c) detecting the particle based at least in part on the position data, including identifying the presence of the particle in a sub-region of the vessel;
(d) using a sensor to determine a characteristic of the particle while the particle is located in that sub-region of the vessel;
(e) generating particle-characteristic data indicating the determined characteristic; and
(f) associating the particle-characteristic data with data identifying the particle.
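The sub-region gating in steps (c) through (f) above can be sketched as follows: while a tracked particle sits inside a designated sub-region, a sensor reading is taken and associated with that particle's track. The rectangular gate and the `read_sensor` callable are illustrative assumptions standing in for whatever sensor the apparatus uses.

```python
# Sketch of position-gated sensing: record a sensor reading whenever an
# identified particle's position falls inside a designated sub-region.
def gate_sensor_readings(track, subregion, read_sensor):
    """track: list of (frame_index, x, y) for one identified particle.
    subregion: (xmin, xmax, ymin, ymax) gate in image coordinates.
    read_sensor: callable(frame_index) returning a characteristic value.
    Returns (frame_index, reading) pairs associated with this particle."""
    xmin, xmax, ymin, ymax = subregion
    readings = []
    for frame, x, y in track:
        if xmin <= x <= xmax and ymin <= y <= ymax:
            # particle is inside the gate: sample the sensor for this frame
            readings.append((frame, read_sensor(frame)))
    return readings
```

Because the readings are keyed to one particle's track, the characteristic data stays associated with the data identifying that particle, as step (f) requires.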
A further embodiment of the invention is an apparatus for nondestructive detection of an undissolved particle in a vessel at least partially filled with a fluid, the apparatus comprising:
(a) at least one imager positioned to image the particle;
(b) at least one sensor configured to determine a characteristic of the particle while the particle is located in a sub-region of the vessel; and
(c) at least one processor operatively coupled to the at least one imager and the sensor and configured to: process the image to determine position data indicating the position of the particle in the vessel; detect the particle based at least in part on the position data and identify the presence of the particle in a sub-region of the vessel; use a signal from the sensor to determine a characteristic of the particle while the particle is located in that sub-region; generate particle-characteristic data indicating the determined characteristic; and associate the particle-characteristic data with data identifying the particle.
Another embodiment of the invention is an apparatus for nondestructive detection of undissolved particles in a vessel at least partially filled with a fluid, wherein the vessel includes a transparent tubular vessel wall disposed about a longitudinal axis. The apparatus includes an imager configured to acquire one or more images of a particle in the fluid, the imager including at least one imaging optic positioned to image the particle onto a sensor, and an illumination source positioned at least partially in a plane passing through the vessel and substantially perpendicular to the longitudinal axis of the vessel, the illumination source being configured to substantially eliminate light rays emitted from the source that are reflected or refracted by a surface of the vessel wall and imaged onto the sensor by the at least one optical element.
Another embodiment of the present invention is a method for the non-destructive detection of undissolved particles in a vessel at least partially filled with a fluid, wherein the vessel comprises a transparent tubular vessel wall disposed about a longitudinal axis, the method comprising: acquiring one or more images of particles in the fluid using an imager comprising at least one imaging optic positioned to image the particles onto a sensor; and illuminating the vessel with an illumination source positioned at least partially in a plane passing through the vessel and substantially perpendicular to the longitudinal axis of the vessel, the illumination source being configured to substantially eliminate light emitted from the source that is reflected or refracted from a surface of the vessel wall and imaged onto the sensor by the at least one imaging optic. Unlike other particle detection systems and techniques, the present systems and techniques operate non-destructively: no removal of fluid from the vessel is required to detect, count and identify particles in the vessel. Thus, the present systems and techniques can be used to study the changes and interactions of particles, fluids, and vessels over long time spans (e.g., minutes, hours, days, months, or years). Additionally, the present systems and techniques do not necessarily involve or result in the destruction of even delicate particles, such as small protein aggregates, in the vessel. The present systems and techniques also capture time-series data, that is, data representing the trajectories of particles in a moving fluid. Because the system of the present invention uses time-series data of the vessel instead of a single-frame snapshot, it can more accurately estimate the number and sizes of the particles in the vessel. It can also derive more information about each particle from the particle's motion, such as particle morphology and particle composition. 
For example, falling particles tend to be denser than ascending particles. The above summary is illustrative only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments and features described above, other aspects, embodiments and features will become apparent with reference to the following drawings and detailed description.

併入本說明書中且構成本說明書之一部分的隨附圖式說明所揭示技術之實施例且連同描述用以解釋所揭示技術之原理。 圖1A展示經組態以非破壞性地檢測及/或識別至少部分以流體填充之透明容器10中之粒子的例示性自動視覺檢查單元100,該流體諸如由美國食品及藥物管理局規定之含有蛋白質之醫藥組合物、藥物、生物技術產品、飲料及其他半透明流體。 儘管在典型實施例中,粒子之存在或缺乏的檢測可藉由觀看外部非均一之容器部分(例如,跟座)來實現,但對於粒子特性化量測(諸如計數及定大小),可能有必要經由容器之實質上均一垂直壁來看粒子以便減輕失真。此對最小填充容積具有暗示,因為對單元100可見之容器10中的流體之顯而易見的二維橫截面必須具有適當面積以提供可使用統計。所需填充容積取決於容器之圓直徑(容器愈小,所需填充容積愈少)。在各種實施例中,容器之內部容積可為至少1%、至少5%、至少10%、至少20%、至少30%、至少40%、至少50%、至少60%、至少70%、至少80%、至少90%或至少100%以流體填充。 在各種實施例中,本文中所描述之粒子檢測技術本質上為光學的。因此,在一些實施例中,容器10之壁在照明波長處足夠透明以允許其中含有之液體的視覺化。舉例而言,在一些實施例中,容器10可由透明硼矽酸玻璃製成,但可使用其他合適材料。器皿內含有之流體的渾濁度亦為重要的,且應足夠低以允許所要視覺化程度。在一些實施例中,流體具有在0至100 NTU(濁度單位(Nephelometric Turbidity Unit)),較佳0至20 NTU,且更佳0至10 NTU之範圍中的渾濁度。渾濁度量測之標準實踐可見於(例如)EPA Guidance Manual: Turbidity Provisions第3章(1999年4月)。 說明性系統可檢測及識別折射及/或散射光之透明及/或半透明粒子(例如,蛋白質聚集體、玻璃碎片或薄片及油斑)、反射光之粒子(例如,金屬碎片)及/或基於其不同光學特性而吸收光之粒子(例如,黑碳及塑膠粒子)。一些本發明視覺檢查單元100可藉由使用照明序列(諸如下文描述之照明序列)來檢測所有三類粒子。本發明視覺檢查單元100亦可經特定地組態以檢測、識別及/或追蹤蛋白質,蛋白質可呈現為緻密結合之聚集體、具有高水含量之鬆散結合之棉絨物質、(反射性)晶體、膠質物質,及/或非晶形聚集體。 可與術語「多肽」互換地使用之術語「蛋白質」在其最寬廣意義上指代兩種或兩種以上次單元胺基酸、胺基酸類似物或肽模擬物之化合物。次單元可由肽鍵連結。在另一實施例中,次單元可由其他鍵(例如,酯、醚等)連結。如本文中所使用,術語「胺基酸」指代天然及/或非天然或合成胺基酸,包括甘胺酸以及D及L光學異構體兩者、胺基酸類似物及肽模擬物。若肽鏈為短的,則三種或三種以上胺基酸之肽通常稱作寡肽。若肽鏈為長的,則肽通常稱作多肽或蛋白質。如本文中所使用,術語「肽片段」亦稱作肽鏈。 容器10可為由玻璃或塑膠製成之矩形或圓柱形器皿(例如,光析管、小瓶、安瓿、濾筒、試管或注射器);其亦可具有另一形狀及/或由不同材料製成,只要其在成像波長處提供容器內含物的視覺化。儘管特定實施例提供容器內含物之清楚及未擾動視覺化,但其他實施例可對影像獲取計時以與容器未經擾動時之時段一致及/或使用後處理來補償所記錄資料之失真。 單元100包括具有將容器內含物之影像投影至感測器上之收集光學器件的成像器110。在此狀況下,收集光學器件包括遠心透鏡114,且感測器為電荷耦合器件(CCD) 112。耦接至CCD 112之記憶體140記錄及儲存表示容器內含物之影像的串流,且耦接至記憶體140之處理器130如下文所描述分析所記錄影像序列以檢測及識別容器10中之粒子。如熟習此項技術者所理解,處理器130可與合適地組態之通用電腦(例如,使用Intel® Core™ i5或先進微型器件Athlon™處理器之電腦)、場可程式化閘陣列(例如,Altera® Stratix®或Xilinx® Spartan®-6 FPGA)或特殊應用積體電路一起實施。記憶體140可實施於固態記憶體(例如,快閃記憶體)、光碟(例如,CD或DVD)或磁性媒體中,且可經選擇為任何適當大小(例如,1 GB、10 GB、100 GB或更大)。 包括安置於容器10周圍之一或多個光源122a及122b的照明系統120在影像獲取期間照明容器10及其內含物。視覺檢查單元100可整合至檢查模組160中,檢查模組160亦包括主軸150、震動器、超音波振動器或在成像之前使容器內含物自旋、震動或以其他方式攪動容器內含物且在成像期間固持容器10之其他攪動器,如圖1(b)中。 
圖1(c)展示中等至高輸送量視覺檢查平台170,視覺檢查平台170包括一或多個檢查模組160-1至160-5(總體上,檢查模組160)、機器人180及小瓶盤172,小瓶盤172將未檢查及/或已檢查容器10固持於個別容器井中。在來自使用者或自動控制器(未圖示)之指令後,機器人180即刻將容器10自小瓶盤172移動至檢查模組160,檢查模組160俘獲及記錄在容器10中移動之粒子的時間序列資料。機器人180接著將容器10返回至小瓶盤172。 在一些實例中,小瓶盤172之頂層及/或容器井之輪緣由Delrin®縮醛樹脂或另一類似材料製成,且容器井之內邊緣成斜面以防止容器10在插入至容器井及自容器井移除時變得刮傷。小瓶盤172可包括由鋁或不易於翹曲或破裂之另一類似材料製成之基底層。容器井之壁通常為厚的以在盤172經攜載(例如,藉由人)至視覺檢查平台170及自視覺檢查平台170攜載時緊密地固持小瓶。取決於其構造,小瓶盤172可在微米級容限內將容器10固持於預定義位置以促進藉由機器人180之容器擷取及插入,機器人180可以微米級精度操作。 機器人180為「取放型」系統,其自盤172拔取小瓶,沿著自盤172上方延伸至主軸160上方之軌道182移動每一容器10,且將容器10置放於特定主軸160上。一些機器人亦可經組態以在置放容器10之前使容器10自旋,從而消除對主軸160之需要。或者,機器人180可包括可使容器10自旋、振動及/或震動(例如,執行下文描述之「往返」針震動)之六軸機器人臂,此亦消除對主軸160之需要。熟習此項技術者將易於瞭解其他裝載及攪動機構及序列可與本發明視覺檢查系統及程序一起使用。 視覺檢查平台170如圖2(a)中所展示而操作。在步驟202中,清潔待檢查之容器10(例如,藉由手使用適當溶劑),接著在步驟204中將容器10裝載至盤172中。機器人180自盤172抽出容器10且將容器10置放於主軸160上。接下來,在步驟206中,處理器130自成像器110所獲取之靜態容器10的影像判定彎液面及/或關注區(ROI)(例如,以流體填充之容器10的部分)之大小及位置。或者,若以足夠確定性知道填充容積及容器形狀及容積,使用者可指定彎液面及/或關注區之位置。一旦處理器130已定位ROI,在步驟208中主軸160便使容器10自旋及停止,此使流體移動且使容器10中之粒子變得懸浮於移動流體中。在步驟210中,成像器110將時間序列資料以靜態影像(稱作「圖框」)之序列的形式記錄於記憶體140中,該等靜態影像表示按規則間隔之時間間隔取得之ROI之快照。 在成像器110已獲取足夠時間序列資料之後,處理器130減去可表示容器之表面中之一或多者上的灰塵及/或刮痕之背景資料。處理器130亦可自時間序列資料對雜訊濾波,如由熟習此項技術者所理解,且如下文所描述而執行強度定限。處理器130亦反轉時間序列資料之次序。亦即,若時間序列資料中之每一圖框具有指示其經獲取之次序的索引1, 2, …, n-1, n,則反轉時間序列資料中之圖框以按 n, n-1, …, 2, 1排序之索引配置。若必要,則處理器130亦如下文所描述而選擇待分析之資料的開始及結束點。(熟習此項技術者將易於瞭解處理器130可以任何次序執行背景相減、雜訊濾波、強度定限、時間序列資料反轉,及開始/結束點判定。)處理器130在步驟212中追蹤在流體中或隨流體移動之粒子,接著在步驟214中基於粒子軌跡對粒子定大小、計數及/或以其他方式特性化粒子。 每一檢查模組160可執行同一類型之檢查,從而允許容器10之並行處理;可取決於所要輸送量而調整模組160之數目。在其他實施例中,每一模組160可經組態以執行不同類型之檢查。舉例而言,每一模組160可以不同照明波長檢查粒子:模組160-1可尋找回應可見光(亦即,在約390 nm至約760 nm之波長處輻射)之粒子,模組160-2可使用近紅外線照明(760 nm至1400 nm)檢查容器,模組160-3可使用短波長紅外線照明(1.4 µm至3.0 µm)檢查容器,模組160-4可以紫外線波長(10 nm至390 nm)檢查粒子,且模組160-5可以X射線波長(10 nm以下)檢查粒子。或者,一或多個模組160可尋找偏光效應及/或粒子螢光。 
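The frame-order reversal described above (frames acquired with indices 1, 2, …, n re-ordered to n, n-1, …, 2, 1 so that analysis begins where the fluid has slowed) can be sketched as follows. This is a minimal illustrative sketch; the function and field names are not from the patent.

```python
# Re-order an acquired time series so that tracking starts from the
# slow-moving end of the data (particles appear nearly static first).

def reverse_time_series(frames):
    """Return the frames in reversed acquisition order."""
    return list(reversed(frames))

frames = [{"index": i} for i in range(1, 6)]   # acquired order 1..5
rev = reverse_time_series(frames)
rev_indices = [f["index"] for f in rev]        # [5, 4, 3, 2, 1]
```

Reversing the sequence lets the tracker bootstrap each trajectory with a simple nearest-match search while motion is slow, before switching to predictive tracking as the (reversed) motion speeds up.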
在具有不同類型之模組160的實施例中,第一模組160-1可執行初步檢查,隨初步檢查之結果而定執行後續檢查。舉例而言,第一模組160-1可執行可見光檢查,可見光檢查表明特定容器含有偏光敏感粒子。處理器130接著可指示模組160-2檢查容器以便確認(或反駁)偏光敏感粒子之存在,模組160-2經組態以執行基於偏光之量測。由模組160-1獲取之可見光時間序列資料可指示若干粒子在特定容器10中之存在,但並非該粒子類型,此可導致處理器130命令(例如)模組160-3處之紅外線檢查。 誘發粒子移動之容器攪動如上文所描述,以機械方式攪動容器10使容器10之底部處或容器之內壁的側面上之粒子變得懸浮於容器內之流體中。在特定實施例中,使用者及/或視覺檢查系統選擇並執行使容器中之流體進入層流型態的攪動序列,層流型態為流體以平行層流動而各層之間無渦流、漩渦或破壞之型態。在流體動力學中,層流為由高動量擴散及低動量對流特性化之流動型態-換言之,層流為激擾流之對立物。攪動亦使粒子變得懸浮於移動流體中。最終,摩擦使流體停止移動,此時粒子可黏附至容器之壁或沈澱至容器之底部。 與擾流相比,層流產生較平滑粒子運動,此使得較易於估計粒子軌跡。(當然,處理器亦可經組態以亦估計某些擾流型態中之粒子軌跡,其條件為感測器圖框速率足夠快以俘獲粒子軌跡之「平滑」區段。)若需要,可以產生實質上層流之方式攪動容器。舉例而言,主軸可以特定速度(或速度設定檔)旋轉容器歷時特定時間,該特定時間如自針對不同容器大小及形狀及/或不同液位及黏度之流體行為的量測而判定。 在一個特定實施例中,伺服馬達或步進馬達驅動固持圓柱形容器之主軸,從而使容器繞著其中心軸線自旋,如圖3(a)中所展示。使容器10以足夠速度自旋使甚至重粒子(諸如金屬碎片)自容器10之底部上升至流體中。對於許多流體及粒子,馬達以300 rpm驅動固持容器10之主軸歷時約3秒。(可需要較高自旋速度以激發重粒子。)在自旋3秒之後,馬達突然停止,且允許流體在現在靜止之容器中自由地流動。此時,成像器110開始俘獲旋轉流體之視訊。記憶體140記錄視訊歷時長達約7至15秒,此取決於處於檢視下之容器的大小(記憶體140記錄較小容器中之流體的較少視訊,此係因為在較小容器中流體歸因於壁阻力之增加的影響而較迅速地減速)。 在另一實施例中,主軸以兩階段攪動/成像序列旋轉容器10。在第一階段,主軸使容器10以300 rpm自旋歷時3秒,從而使較不緻密(及較精細)粒子(諸如蛋白質)變得懸浮於移動流體中。成像器110接著俘獲移動流體中之蛋白質的視訊。一旦成像器110已收集足夠時間序列資料,第二階段便開始:主軸以約1600 rpm至1800 rpm旋轉容器10歷時1至3秒,從而使較緻密粒子(諸如金屬碎片)變得懸浮於移動流體中,且成像器110俘獲表示在容器10中移動之較緻密粒子的時間序列資料。第二階段中之高速旋轉可足夠強以暫時溶解蛋白質聚集體或使蛋白質聚集體變性,蛋白質聚集體可在流體減慢或停止移動之後重組。兩階段操作使得有可能檢測可能未由低速旋轉激發之緻密粒子及可由高速旋轉變性之蛋白質兩者。 本發明系統亦可使用其他旋轉序列,此取決於(但不限於)以下參數中之任一者:流體黏度、流體填充位、流體類型、表面張力、容器形狀、容器大小、容器材料、容器紋理、粒子大小、粒子形狀、粒子類型及粒子密度。舉例而言,本發明系統可在使容器內含物成像之前使較大容器自旋歷時較長時段。可藉由常規實驗來計算、特性化及/或判定用於給定流體/容器組合之確切攪動曲線。 若視覺檢查模組針對良好特性化之容器/流體組合使用預定攪動序列,則其可僅在流體(及懸浮粒子)處於層流型態時觸發資料獲取。或者,其可獲取額外時間序列資料,且處理器可基於容器/流體組合及/或攪動序列而自動選擇開始及結束圖框。 上文描述之視覺檢查系統中之任一者亦可用以檢測及/或識別注射器12中的原生及外來粒子,注射器12至少部分以藥物產品32或其他流體填充,如圖3B中所展示。注射器12經常為針朝下貯存的。因而,微粒可沈澱於注射器之針34中。為了使此等粒子視覺化,機器人或人顛倒注射器12—亦即,機器人或人使注射器12繞著與其縱向軸線垂直之軸線旋轉180°使得針34指向上。已沈澱於針34中之微粒垂直地降落,從而藉由成像器110實現視覺化。機器人或人亦可在翻轉期間使注射器自旋以完全移動流體。 許多注射器12具有具相對小內徑(例如,約5 
mm)之筒,此顯著增加壁阻力之效應。對於許多藥物產品32,壁阻力使所有旋轉流體運動在約1秒內停止。對於實際粒子分析,此為極其短之時間窗。幸運地,繞著與其縱向軸線垂直之軸線輕微地搖動注射器12(如圖3(c)中所展示)產生持續長於1秒之粒子運動。可藉由機器人或藉由手進行之橫向搖動經由注射器12之運動及在注射器12之筒內振盪的任何氣泡30之運動而攪動粒子。上文描述之視覺檢查模組、單元及平台經設計以為可重組態的,且可適應此替代攪動方法。 一旦完成攪動,視覺檢查系統應在視訊記錄階段保持靜止。由於通常使用之影像的高解析度,因此影像之空間解析度極其精密(例如,約10微米或更小)且可至少如繞射極限一樣精密。對於某些組態,樣本之小(例如,10微米)移動等同於所檢測影像中之全像素移動。此運動危害靜態特徵移除(背景相減)之有效性,此又使分析工具之效能及輸出資料之完整性降級。 記住此,振動隔離為關鍵設計考量。在特定實施例中,說明性視覺檢查系統之基底(例如)使用振動阻尼衝擊、浮體及/或墊片而與實驗環境機械地隔離。另外,在單元內部,諸如電腦及機器人控制器之振動源可與系統之其餘者機械地隔離。或者,資料獲取可與容器相對於成像器之殘餘運動同步化,或藉由執行像素移位或某一其他運動補償行為之相機執行。此殘餘運動亦可經記錄以用於後處理以移除影像運動之不利效應。 成像器組態說明性視覺檢查系統可使用具有任何合適感測器之標準現成成像器,該感測器包括但不限於電荷耦合器件(CCD)或互補金屬氧化物半導體(CMOS)陣列。感測器之選擇為靈活的且稍微取決於特定應用之要求。舉例而言,具有高圖框速率之感測器使得能夠準確地映射快速移動之粒子(例如,在低黏度流體中)的軌跡。敏感性及雜訊效能亦為重要的,此係因為許多蛋白質粒子在溶液中為透明的且較弱地散射光,從而產生模糊影像。為了改良雜訊效能,可冷卻感測器,如技術中所理解。對於大多數應用,單色感測器提供最佳效能,此歸因於其相比於彩色相機之稍微較高解析度以及擁有較高敏感性。然而,對於應用之小子集,彩色感測器可為較佳的,此係因為其俘獲粒子之顏色,此在確定其來源(例如,衣服纖維)時可極其重要。舉例而言,在產品品質調查(亦稱作鑑識學)方面,彩色感測器可用於在可污染藥物產品之製造設施的不同類型之材料(例如,纖維)之間進行區別。 為了完成容器檢查,成像器之視野應涵蓋整個流體容積。同時,成像器應能夠解析小粒子。視覺檢查系統藉由大格式高解析度感測器(諸如具有3296×2472像素之Allied Vision Technologies (AVT) Prosilica GX3300八百萬像素CCD感測器)來達成大視野及精密解析度。其他合適感測器包括AVT Pike F505-B及Basler Pilot piA2400-17gm五百萬像素相機。當成像光學器件經選擇以完全使1 ml BD Hypak注射器之承載流體的主體成像時,AVT Prosilica GX3300 CCD感測器在兩個橫向尺寸上以每像素約10微米之空間解析度俘獲時間序列資料。高速度及高解析度之組合暗示記錄時間序列資料可涉及大資料傳送速率及大檔案大小。作為推論,下文描述之視訊壓縮技術經特別設計以減少資料儲存要求,同時保留影像中俘獲之粒子之細微細節的完整性。 應選擇使關注區成像至感測器上之收集光學器件以提供整個容積之清晰影像,該等影像具有等於或小於感測器之像素大小的最小點大小以確保系統以最精密可能解析度操作。另外,收集光學器件較佳具有足夠大以配合整個樣本容積之視野深度。 遠心透鏡(諸如圖4中所展示之透鏡114)特別適合於流體容積之視覺檢查,此係因為其經特定地設計以對視野深度不敏感。如由熟習此項技術者所理解,遠心透鏡為多元件透鏡,其中主光線經準直且平行於影像及/或物件空間中的光軸,此導致恆定放大率而不管影像及/或物件位置如何。換言之,對於在離具有遠心透鏡之成像器某一距離範圍內之物件,由成像器俘獲之物件的影像為清晰的且具有恆定放大率而不管物件離成像器之距離如何。此使得有可能俘獲在容器10之「後部」處的粒子呈現為類似於容器10之「前部」處的粒子之影像。使用遠心透鏡亦減少環境光之檢測,只要使用均一黑暗底板。合適遠心透鏡114包括Edmund Optics NT62-901大格式遠心透鏡及Edmund Optics NT56-675 TECHSPEC Silver系列0.16×遠心透鏡。 
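As a rough check of the data-rate claims above, the following sketch computes the raw throughput of a 3296×2472, 8-bit monochrome sensor (the AVT Prosilica GX3300 figures quoted in the text) at an assumed 25 frames per second over a typical 10-second recording. The frame rate and recording length are illustrative values taken from elsewhere in the text, not a measured configuration.

```python
# Back-of-envelope estimate of raw video throughput and file size for the
# large-format sensor discussed above.

width_px, height_px = 3296, 2472   # AVT Prosilica GX3300 resolution
bytes_per_px = 1                   # 8-bit monochrome
frame_rate = 25                    # assumed frames per second
record_seconds = 10                # typical 7-15 s recording window

frame_bytes = width_px * height_px * bytes_per_px          # ~8.1 MB/frame
rate_mb_s = frame_bytes * frame_rate / 1e6                 # ~204 MB/s raw
video_gb = frame_bytes * frame_rate * record_seconds / 1e9 # ~2 GB per video

# At ~10 micrometres per pixel, the horizontal field of view is:
um_per_px = 10
fov_mm = width_px * um_per_px / 1000                       # ~33 mm
```

Multi-gigabyte raw files per container are what motivate the lossless compression scheme described later in the text.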
容器特定盲點幾乎任何視覺檢查系統之一個目標為提供100%容器容積檢查。然而,實際上,可存在不能檢測粒子之固定區帶,如圖5A中所展示。首先,彎液面周圍之液體可難以併入分析中,此係因為彎液面本身以可能使彼位置處之檢測器飽和的方式散射光,從而混淆任何粒子或其他關注特徵。第二,對於小瓶,容器之基底通常在拐角處彎曲,其一般稱作「跟座」。彎曲跟座具有使冒險足夠接近小瓶之底部的任何粒子失真且最終混淆該等粒子之效應。第三,對於注射器而言,橡皮塞以稍微內突至容器容積中之中心圓錐為特徵。此圓錐之尖端可能隱藏粒子,儘管該尖端為小的。最微細盲點歸因於小瓶之曲率而出現。 圓柱形容器亦可引起透鏡化效應,如圖5B中所展示,(由彎曲光線指示18)其用以破壞遠心透鏡之效能。容器之彎曲壁亦產生盲點14。 圖5E展示由圓柱形容器10引起之透鏡化效應的實例。相機/觀測器處於圖之底部。如上文所描述,當使容器10中之粒子成像時可使用遠心透鏡以確保粒子在影像中具有一致外觀,該外觀不取決於粒子在容器中之位置,亦即其離相機之距離。為了實現此,在一些實施例中,遠心透鏡之焦點深度經選擇為大於流體容積之直徑。在一些實施例中,在缺乏校正光學元件的情況下,容器曲率破壞此原理。 如所展示,容器10中之成像粒子的形狀及放大率將取決於粒子在容器中之位置。在容器之正面中心的粒子501完全不失真(頂部插圖)。在後側的相同粒子502失真最多(底部插圖)。注意,對於圓柱形容器而言,失真僅沿著水平軸線發生(如在底部插圖中為明顯的)。 為了減輕此等效應,可選校正光學器件(諸如校正透鏡116)置於遠心透鏡114與容器10之間,如圖5C中所展示。額外空間校正光學器件118可提供用於由容器之形狀引起的失真之額外補償,如圖5D中所展示。在各種實施例中,(例如)基於容器10之曲率及/或流體之折射率定製之任何合適校正光學元件可用於添加至或替代於校正透鏡116及光學器件118。 舉例而言,在一些實施例中,可開發由圓柱形容器10引起之透鏡化效應的模型。該模型可基於特性化光學失真之參數的合適集合,該等參數包括(例如)容器外徑、容器內徑、容器折射率、液體折射率及照明光之波長。可使用此項技術中已知之任何合適技術來開發模型,該等技術包括(例如)光線追蹤技術。圖5F展示用於容器參數之兩個不同集合之透鏡化效應的理論模型(左上、左下),以及用於對應實體情形之實驗資料(右上、右下)的實例。如所展示,理論模型與實驗資料極其一致。 參看圖5G及圖5H,校正光學元件503(如展示為透鏡)用以校正上文描述之透鏡化效應。校正光學元件之設計可基於容器之理論光學模型、指示容器之光學屬性的實驗資料,或其組合。如所展示,校正光學元件503由具有圓柱形前表面及後表面之折射材料製成。在一些實施例中,可使用自由參數判定透鏡之設計,自由參數包括前表面及後表面之半徑、透鏡之厚度、透鏡之折射率,及透鏡相對於容器之位置。 在一些實施例中,其他形狀可用於透鏡之前表面及後表面(例如,抛物線或任意定製形狀)。在一些實施例中,放寬表面為圓柱形之要求將增加校正光學元件503之設計參數空間的大小,藉此允許改良校正。 在一些實施例中,校正光學元件503可包括多個元件,藉此進一步增加設計參數空間。在一些實施例中,校正光學元件503可校正其他類型之光學失真、像差或其他效應。舉例而言,在使用多個波長處之照明的狀況下,校正光學元件503可用以校正色像差。 在一些實施例中,校正光學元件503可經設計以校正由特定容器及/或流體類型引起之失真。因為單一自動化視覺檢查單元100可與多個容器類型一起使用,所以在一些實施例中,可能需要允許校正光學元件503可選擇性地改變以匹配處於檢查下之特定容器10。舉例而言,圖5I展示固持多個校正光學元件503之支架504。可移動(手動或自動)支架以將元件中之選定者置放至用於成像器110之光學鏈中。注意,儘管展示支架,但在各種實施例中,可使用用於自多個光學元件之集合中選出一個光學元件之任何其他合適機構。 替代視覺檢查系統可包括補償歸因於容器之曲率之失真的適應性光學器件。舉例而言,遠心透鏡114可經組態以俘獲自可變形鏡(諸如微機電系統(MEMS)鏡)反射之容器10的影像。感測器112使用背景資料以導出自容器10中之表面曲率、表面缺陷及其他瑕疵產生之像差的性質及量值。感測器112將此資訊反饋至可變形鏡,可變形鏡藉由調整其表面以補償像差來回應。舉例而言,可變形鏡可在一個方向上彎曲或變曲以補償容器曲率。因為可變形鏡動態地回應,所以其可用以補償對每一個別容器10特定之像差。 另外,可調諧粒子追蹤以結合此等盲點之已知位置來檢測粒子消失,從而允許程式預測相同粒子稍後是否可重新出現於視訊序列中及相同粒子稍後可重新出現於視訊序列中何處,如下文所描述。 
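The ray-tracing model class mentioned above (distortion of a telecentric ray by the curved vessel wall) can be sketched in two dimensions as follows. The radii and refractive indices are illustrative assumptions, not values from the patent; total internal reflection and the vertical axis (which is undistorted for a cylinder) are not handled. Because a telecentric system accepts only rays parallel to the optical axis, each sensor column maps to one such ray, and refraction at the outer and inner wall bends it toward the axis, which is why particles appear horizontally magnified.

```python
# Minimal 2D ray trace of the horizontal lensing effect of a cylindrical
# vial: map a sensor column offset x to the horizontal position where the
# accepted ray crosses the vial's mid-plane z = 0.
import math

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incoming ray), from index n1 into n2 (vector Snell's law)."""
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    r = n1 / n2
    cos_t = math.sqrt(1.0 - r * r * (1.0 - cos_i * cos_i))
    return (r * d[0] + (r * cos_i - cos_t) * n[0],
            r * d[1] + (r * cos_i - cos_t) * n[1])

def first_hit(p, d, radius):
    """First intersection of the ray p + s*d (s > 0) with a centred circle."""
    pd = p[0] * d[0] + p[1] * d[1]
    disc = pd * pd - (p[0] ** 2 + p[1] ** 2 - radius ** 2)
    s = -pd - math.sqrt(disc)
    return (p[0] + s * d[0], p[1] + s * d[1])

def sensor_to_fluid_x(x, R=9.0, r=8.0, n_glass=1.51, n_fluid=1.33):
    """Illustrative geometry: outer radius R, inner radius r (mm)."""
    p1 = (x, -math.sqrt(R * R - x * x))        # entry point on outer wall
    d = (0.0, 1.0)                             # telecentric ray along +z
    n_out = (p1[0] / R, p1[1] / R)             # outward surface normal
    d1 = refract(d, n_out, 1.0, n_glass)       # air -> glass
    p2 = first_hit(p1, d1, r)                  # inner wall
    n_in = (p2[0] / r, p2[1] / r)
    d2 = refract(d1, n_in, n_glass, n_fluid)   # glass -> fluid
    s = -p2[1] / d2[1]                         # propagate to z = 0
    return p2[0] + s * d2[0]

x_axis = sensor_to_fluid_x(0.0)       # on-axis ray is undeviated
x_edgeward = sensor_to_fluid_x(3.0)   # bends toward the axis: < 3.0
```

With these assumed indices, a sensor column 3 mm off-axis maps to a fluid position of roughly 2.3 mm, i.e. an apparent horizontal magnification of about 1.3 that grows toward the vessel edge, consistent with the position-dependent stretching described above.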
下文描述用於處理與盲點有關之問題的額外技術(例如,使用多個成像器)。 相機圖框速率下文描述之使用最近匹配(貪心)演算法的有效粒子追蹤可視為隨三個主要因素而變:相機俘獲速率(圖框速率)、粒子密度(在二維影像中)及典型粒子速度。對於使用最近匹配演算法之真正有效的追蹤,相機較佳應足夠快以滿足以下準則:

(典型粒子速度) / (相機圖框速率) ≪ (平均最近相鄰距離)
。 實際上,當將三維容積投影至二維影像上時,當粒子實際上在容器中間隔適宜時,使粒子呈現為彼此極其靠近(甚至彼此遮蔽)係有可能的。當考慮此時,考慮平均最近相鄰距離相比於考慮顯而易見之最小粒子間分離距離較有意義。此處注意,最近相鄰距離為時間序列資料之給定圖框中之鄰近粒子之間的距離,而最近匹配距離指代針對時間序列資料之連續圖框中的單一粒子觀察之位置差異之間的距離。按照最近匹配距離重寫用於攝影速度之準則給出:
(最近匹配距離) ≪ (平均最近相鄰距離)
。 替代視覺檢查系統可使用預測性追蹤技術以代替最近匹配(貪心)粒子追蹤技術。預測性技術使用粒子之已知軌跡之知識,結合容器之空間約束及預期流體行為之知識來估計粒子在後續圖框中之最可能位置。當恰當地實施時,此方法可較準確地追蹤高速移動通過緻密填入之影像的粒子。 當試圖檢測及量測相對大容器中之極其小的粒子時,最大化影像感測器之空間解析度為有利的。一般而言,此具有降低感測器之最大可達成圖框速率的直接效應。 藉由多個成像器之視覺檢查使用單一相機可受已知盲點之存在所危害。另外,將三維粒子分佈映射至二維影像上可歸因於遮蔽(例如,如圖5E中所展示,其中在容器之背部中心的粒子被前部中心的粒子遮蔽)而導致模糊性。原則上,替代視覺檢查系統(例如,如圖6中所見)可藉由使來自兩個或兩個以上成像系統之結果相關來解決此問題。藉由使來自兩個或兩個以上相機之位置軌跡資訊相關,有可能建構詳細三維軌跡映射,三維軌跡映射相比於二維軌跡映射可較穩健且較不易於出現由遮蔽(下文所論述)引起之錯誤。 增加成像器之空間解析度亦針對給定粒子濃度及粒子速度而限制資料獲取速率(圖框速率)。當檢查未知容器時,可不保證粒子濃度將為適度地低的。同時,為了使諸如玻璃或金屬之重粒子懸浮於流體中,容器之旋轉速率可需要相當高,從而導致所俘獲視訊串流中之高粒子速度。解決此衝突之一種方式為使用下文描述之新穎成像硬體組態。假定已使用最佳市售感測器,且容器中之粒子散射大量光,仍可能藉由以來自專用觸發源之恆定可靠觸發而多工兩個或兩個以上感測器來增加資料獲取速率。 另外,例示性視覺檢查系統可經組態以藉由放寬對全容器檢查之要求,且替代地僅考慮容積之子集來提供比10微米更精密之空間解析度。一般而言,對於在顯微鏡下才可見的(sub-visible)粒子,尤其為蛋白質聚集體,此為可接受的,此係因為較小粒子傾向於以較高數目出現且較均勻地分佈遍及容積。或者,例示性視覺檢查系統可藉由使用具有不同放大率之多個成像器以並列地獲取廣泛區域及精密解析度時間序列資料來提供全容器檢查及精密空間解析度。 可同時使用替代放大率(例如,如在圖6A中),其中一成像器1102查看全容器,且具有較高放大率(例如,長工作距離顯微鏡物鏡)之第二成像器1104移向於較小子容積上且檢查(例如)極其小之粒子(例如,具有約10微米、5微米、1微米或更小之直徑的粒子)。其他視覺檢查系統可包括安置於容器10周圍之多個成像器1102、1104及1106,容器10由安裝於容器10上方及下方之發光二極體(LED) 1120的一或多個環照明,如圖6B中所展示。安裝於不同位置之相同成像器1102提供雙目視覺。具有長工作距離顯微鏡物鏡之成像器1104提供用於容器10之子容積的精密解析度,且具有替代感測器(例如,紅外線感測器、輻射熱計等)之成像器1106提供額外時間序列資料。 圖6C及圖6D展示利用遠心成像之屬性的替代成像組態。在遠心透鏡之背部孔隙處,50/50光束分裂方塊1202將投影影像分裂為兩個單獨成像臂。每一成像臂可包括高解析度低速度感測器1222,其與另一臂中之感測器1222(如圖6C中所展示)以交錯方式操作以加倍圖框速率。亦即,以半循環相對相位偏移同時運轉兩個感測器1222改良時間解析度達兩倍。接著可組合影像串流以按標稱感測器圖框速率之兩倍提供單一影片。 或者,每一臂可包括如圖6D中所展示之不同感測器(例如)以補償成像感測器陣列之取捨:相機解析度愈精密,相機之最大可能圖框速率愈慢(例如,在全解析度下每秒10至50或15至25個圖框,在低解析度下每秒50至200個圖框,等)。對於準確粒子追蹤而言,主要感測器效能參數為高時間解析度(高圖框速率)。然而,對於準確粒子定大小而言,主要感測器效能參數為精密空間解析度(影像中儘可能多之像素)。目前,對空間解析度及資料傳送速率之主要限制因素為資料傳送匯流排。對於標準個人電腦匯流排(例如,雙GigE或CameraLink匯流排)而言,可用成像器可藉由每像素約10微米之空間解析度及每秒約25個圖框之資料傳送速率來獲取四公分高容器之時間序列資料。 圖6D說明達成快速圖框速率及精密解析度之一方式:藉由高解析度低速度感測器1222及具有較中等空間解析度但具有較高圖框速率之感測器1224兩者使流體成像。外部觸發可確保兩個相機以同等方式同步化。因為相機觀看相同影像之副本,所以其資料可直接相關以產生改良粒子分析。 圖7A及圖7B說明照明源120及多個相機之時序及控制。在圖7A及圖7B兩者中,觸發控制器702發射藉由抽取主脈衝信號而導出之兩個觸發信號(在圖7A及圖7B中標記為ARM 1及ARM 2)。ARM 1觸發信號驅動第一相機(圖7A中之1102a、圖7B中之1222a),且ARM 
2觸發信號以交錯方式驅動第二相機(圖7A中之1102b、圖7B中之1222b)。亦即,觸發信號使第一相機及第二相機獲取交替之圖框序列。觸發控制器702亦可藉由照明信號驅動照明源120,照明信號在每次第一相機或第二相機獲取影像時使照明源120照明容器。其他觸發序列亦為可能的;舉例而言,觸發控制器702可驅動額外相機及/或以不同圖框速率獲取影像之高解析度與低解析度相機的組合。 其他配置為可能的,如對於熟習此項技術者為明顯的。舉例而言,每一臂上之影像感測器可彼此等效,但收集光學器件可不同。一個臂可包括額外影像放大光學器件,其對影像之特定子集「放大」,從而提供同時廣視野及放大視圖。 照明組態本發明視覺檢查系統利用各種粒子與光互動之方式以檢測及識別承載流體之容器中的粒子。粒子與光之互動為眾多因素之複合函數,該等因素包括粒子之大小、形狀、折射率、反射率及不透明性。蛋白質粒子可主要經由折射來散射光,而層狀玻璃粒子可主要反射光。一些粒子(例如膠原纖維)可修改光之固有物理屬性,諸如偏光之旋轉。定製檢測器、粒子及光幾何形狀以最大化各種粒子類型之間的對比度可導致高度準確的檢測及區分。 圖8至圖12展示經定製或可在特定類型之粒子、容器及/或流體之不同照明模式當中切換/致動的各種照明組態。舉例而言,光源可以一方式照明粒子以致最大化其朝向檢測器反射或折射之光的量,同時將背景保持為黑暗的以最大化粒子與背景之影像之間的對比度。另外,源可發射處於任何合適波長或波長範圍處之輻射。舉例而言,其可發射寬頻白光(390 nm至760 nm)、窄頻光束(例如,632 nm處)或甚至紫外線或X射線輻射。合適範圍包括10 nm至3000 nm、100 nm至390 nm (紫外線)、390 nm至760 nm (可見)、760 nm至1400 nm (近紅外線)及1400 nm至3000 nm(中波長紅外線)。X射線發射(<10 nm)亦為可能的。當視為完整之整體時,本文中所揭示之光照選項的陣列允許本發明視覺檢查系統檢測及識別可能在藥物產品中出現之粒子的全範圍。 因為一些粒子僅極其弱地散射,所以藉由儘可能多之光輻照樣本經常為有益的。樣本輻照之上限主要由處於檢驗下之產品的光敏性驅動。波長之明智選擇亦可為必要的,特別對於生物產品;確切選擇取決於經照明之產品。以630 nm為中心之單色紅光為一理想折衷,且就可負擔光源而言為易於獲得之波長。 LED陣列(諸如來自CCS光照之LDL2系列LED陣列)對於照明在醫藥產品中見到之粒子為有效的;然而,亦可使用準直雷射光束。在一些狀況下,照明光學器件可圖案化或塑形待在流體容積內(與容器外部形成對比)準直之照明光束。對於替代光源,若擔憂來自光源之加熱,則可經由使用光學波導或光學纖維124將光遞送至檢查區域,如圖8中所展示。 可基於經分析之流體及/或粒子的吸收及/或反射性來選擇照明波長;此對於光敏醫藥產品尤其重要。紅光(630 nm)在蛋白質之低吸收與水之低吸收之間提供優良平衡。與時間序列資料獲取同步地選通照明藉由最小化產品對入射光之曝露來進一步保護光敏醫藥產品之完整性。選通具有兩個其他優點:LED以此方式運轉時更有效地操作,且選通減少運動模糊之效應,運動模糊造成未被注意的危害粒子大小量測,如下文所描述。 圖8展示包括若干光源122a至122f(統稱為光源122)之例示性重組態照明系統120,該等光源可為LED、雷射、螢光或白熾燈泡、閃光燈,或任何其他合適光源或合適光源之組合。光源122可發射可見光、紅外線及/或紫外線輻射。其按需要可為窄頻或寬頻,且可使用適當光學濾波器或偏光器來濾波。舉例而言,在圖8中,偏光器129偏光由背面光照容器之光源122f發射的光。除了背光122f之外,照明系統120包括在容器10周圍之矩形稜鏡之拐角處的四個光源122a至122d。另一光源122e經由耦接至指向容器10之底部的準直器126之光學纖維124而自底部照明容器10。在一些狀況下,纖維124及準直器126可容納於用以旋轉器皿之主軸的中空軸128內部。 圖8中所展示之多個光源122可用以基於給定粒子與光之互動來判定給定粒子之光學屬性以獲得差異。如熟習此項技術者所理解,不同粒子以變化之方式與光互動。一般互動模式包括散射、反射、遮蔽或旋轉光之偏光,如表1中所展示,其中「X」指示此類型之粒子將使用給定光照技術出現,如圖9A至圖9D及圖11中例證(下文所描述)。「M」指示此類型之粒子可使用給定技術出現,但仍可使用後處理影像分段及特徵識別技術而經檢測/區分。 

表1:用於各種粒子類型之光互動

光照技術        蛋白質    薄片      不透明    纖維素      空氣
(主要互動)      (散射)    (反射)    (遮蔽)    (偏光改變)  (散射)
後方角度        X         X         X         X           X
底部            -         X         M         -           -
背光            -         -         X         -           -
偏光            M         M         -         X           M

圖9A至圖9C說明可藉由圖8之照明系統120(出於清晰起見省略一些光源122)實施以基於光互動區分粒子類型之不同照明型樣。在圖9A中,光源122a及122b提供後方成角度光照,其用於展示蛋白質,以及散射光之大多數粒子類型。在圖9B中,光源122e提供底部光,其用於展示朝向成像器110反射光之反射性粒子(諸如玻璃薄片)(水平箭頭);散射但不反射光之粒子(例如,蛋白質)可不出現於感測器上(對角線箭頭)。在圖9C中,光源122f提供均一背光,其用於展示遮蔽光之粒子,諸如金屬、深色塑膠及纖維。熟習此項技術者將易於瞭解,其他光源及/或照明型樣及序列亦為可能的。 圖9D展示圖9A至圖9C之光照技術可依序應用以俘獲散射、反射及/或遮蔽粒子的時間序列資料的方式。在此狀況下,含有均一背光、後方成角度光、底部光及單一相機之系統在每一圖框交替光照,使得一次僅一個特定光源122(或光源122之組合)在作用中。對於單一成像器(未圖示),時間序列資料之每一獲取圖框僅使用一組光。重複此序列提供用於每一光照組態之視訊。 依序使用上述光照技術獲取視訊序列為每一光源122提供一近同時視訊。完成時,此提供三個交錯視訊,每一光照技術一個視訊。對於每一視訊,給定圖框中之粒子可使用交替光照技術而與其他兩個視訊中之相同粒子相關(忽略圖框之間的小時間差)。使用自給定粒子與各種光照技術互動之方式含有的相互資訊,可作出關於粒子之材料組成的結論。 此技術可與其他影像特徵提取資訊組合以便增加特異性。舉例而言,視訊可自動分段以判定每一圖框中之特徵。對於每一光照技術,可自動判定用於每一特徵之資訊,諸如大小、形狀、亮度、平滑度等。此可有助於區分在對不同光照技術中之每一者的可見性方面具有類似符號之不同粒子類型。 圖10A至圖10C說明減少由來自容器10外部之光源122之光的不想要之反射/折射引起的眩光之方式。照明容器10使不想要之眩光呈現於由成像器110俘獲之影像中,成像器110之光軸與來自光源122之反射離開容器表面之光的傳播方向對準。眩光可混淆本將可檢測之粒子與感測器之飽和區域。定位成像器110或光源122使得成像器之光軸與由光源122發射之反射離開容器表面的光線不重合或平行減少或消除由感測器檢測到之眩光。舉例而言,將光源122置放於藉由繞著容器10之縱向軸線轉動成像器所界定之排除區帶外部減少由成像器俘獲之不想要的反射及/或折射光之量。或者,區帶1000可界定為與圓柱形容器之中心軸線垂直的平面,其具有等於容器之垂直壁之高度的厚度。如此項技術中所理解,具有較複雜形狀(諸如凹形側壁)之容器可具有不同排除區帶及不同校正光學器件。 自區帶1000上方或下方傾斜地或自容器基底下方直地照明容器側壁亦減少由成像器110檢測到之眩光。自下方(例如,藉由光源122e(圖8))照明容器10亦在反射光之粒子(例如,玻璃薄片)與散射光之粒子(例如,蛋白質)之間提供極佳對比度。 圖10D至圖10E說明用於減少或消除來自容器10之眩光的替代照明方案,其中一或多個光源122置於上文描述之排除區帶中(例如,在容器10之水平平面中)。 圖10D至圖10E展示光線自成像器110之感測器向外,穿過成像器之成像光學器件(如所展示,包括遠心透鏡),且反向穿過容器10之傳播的光線光學模型。沿著自感測器反向傳播之光線的任一者置放之光源將會將光折射或反射至感測器上,藉此可能混淆容器10及其內含物。然而,注意,兩個區1001定位於容器10之水平平面中且靠近容器10之外壁。如圖10E中所展示,若一或多個光源122置於區1001中,則可減少或實質上消除來自光源之眩光。 注意,因為遠心透鏡用於所展示之實例中,所以在光線光學模型中僅需要考慮與感測器垂直地入射之光線。然而,類似方法可應用於其他類型之成像光學器件,從而考慮額外光線。舉例而言,在一些實施例中,可自感測器反向傳播代表性光線組(例如,包括成像系統之主光線)以識別無或實質上無反向傳播之光線的區。照明光源可置於識別區中同時避免眩光。 圖11展示用於藉由偏光來區別細長蛋白質聚集體與纖維素及/或纖維(天然或合成)之裝備。照明系統120朝向容器10發射光,容器10夾於提供缺乏粒子之黑色影像的交叉偏光器900之間。修改(例如,旋轉)入射光之偏光的粒子在由成像器110檢測之時間序列資料中呈現為白色。 若已知關注粒子發螢光,則可使用螢光成像以用於粒子識別,如圖12中所展示。在此狀況下,照明源920發射激勵關注粒子之藍光。置於成像器110前方之窄頻(例如,綠色)濾波器922確保僅來自受激勵粒子之螢光將到達檢測器。可選擇此等照明及濾波器波長以適合特定關注波長。 
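The mutual-information idea above (combining a particle's visibility under each alternated lighting mode into one signature) can be sketched as a toy lookup. The signature table below is an illustrative reading of the interaction patterns discussed in the text, not a validated calibration; real classification would also weigh shape, brightness and other segmented features.

```python
# Toy material classifier: a particle's visibility under the four lighting
# configurations forms a signature that is matched against expected
# interaction patterns (values are illustrative assumptions).

SIGNATURES = {
    # (rear-angled, bottom, backlight, polarised) -> tentative class
    (True,  False, False, False): "protein aggregate (scatters only)",
    (True,  True,  False, False): "glass lamella (scatters and reflects)",
    (True,  False, True,  False): "opaque particle (obscures backlight)",
    (True,  False, False, True):  "cellulose fibre (rotates polarisation)",
}

def classify(rear_angled, bottom, backlight, polarised):
    key = (rear_angled, bottom, backlight, polarised)
    return SIGNATURES.get(key, "ambiguous - needs shape/brightness features")

protein = classify(True, False, False, False)
lamella = classify(True, True, False, False)
unknown = classify(False, False, False, False)
```

Because the three interleaved videos are nearly simultaneous, the same detected particle can be indexed into each video and its per-mode visibility filled into the signature frame by frame.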
最後,檢測(及識別)不散射(折射)亦不反射光之粒子(諸如小塊黑色不透明材料)為可能的。對於此等不透明粒子,應直接自後方背面光照樣本。接著粒子可識別為明亮背景上之黑暗特徵。若需要,可顛倒不透明粒子之影像以將藉由相同極性按比例調整之影像形成為散射及反射粒子之影像(亦即,因此粒子呈現為黑暗背景上之亮點而非亮背景上之黑暗點)。 薄片特定視覺檢查平台如由熟習此項技術者所理解,玻璃薄片為由涉及玻璃容器之內表面的化學反應形成之薄的可撓性玻璃塊或碎片。本發明系統及技術可用以及/或適合於檢測、識別及計數玻璃薄片以最小化施用含有玻璃薄片之藥物的可能性,以便防止含有(過量)玻璃薄片之藥物的施用。本發明系統及技術亦可用以及/或適合於研究玻璃薄片形成,玻璃薄片形成取決於給定配方之構成且與蛋白質及其他類型之微粒物質不同,因為其反射及折射光。在不限於任何特定理論的情況下,某些條件看起來比其他條件更可能促進或妨礙玻璃薄片形成。舉例而言,藉由管製程及/或在較高熱下製造之玻璃小瓶相比於模製玻璃小瓶傾向於較不抵抗薄片形成。在高pH(鹼性)下配製且具有某些緩衝液(諸如檸檬酸鹽及酒石酸鹽)之藥物溶液亦與薄片相關聯。藥物產品保持曝露於容器之內表面的時間長度,及藥物產品溫度亦影響玻璃薄片將形成之機會。為了獲得更多,見(例如)美國食品及藥物管理局,Advisory to Drug Manufacturers: Formation of Glass Lamellae in Certain Injectable Drugs (2011年3月25日)(www.fda.gov/Drugs/ DrugSafety/ucm248490.htm),其全文以引用的方式併入本文中。 為了基於此原理建立用於區分之系統,成像器可以典型方式與小瓶對準且定向成入射光穿過容器之底部進行光照(與攝影軸線垂直)。此產生來自散射之粒子(例如,蛋白質)的極小信號,及來自反射之粒子(例如,玻璃薄片)的大信號。換言之,當薄片浮動通過器皿時,其看起來間歇地閃光。此技術展示為在區分薄片粒子與蛋白質聚集體方面為高度特定的。另外,使用此成像技術獲得之信號與小瓶內之薄片的濃度相關。因此,此技術可不僅用於非破壞性地檢測商業產品中之薄片,亦用作用於判定哪些配方組成物導致增加/減少之薄片存在的工具。 圖13A及圖13B展示藉由說明性視覺檢查系統獲取之玻璃薄片(圖13A)及蛋白質(圖13B)的最大強度投影(MIP)影像。習知MIP影像用於電腦化斷層攝影以視覺化沿著一個空間軸線(例如,z軸)觀看之三維空間。典型習知MIP影像表示沿著與視覺化軸線平行之光線取得之資料的最大值。然而,在此狀況下,圖13A及圖13B中所展示之MIP影像為表示二維影像之時間演變的資料之視覺化-其為沿著時間軸線而非空間軸線之投影。 為了產生圖13A及圖13B中所展示之MIP影像,處理器選擇時間序列資料中之像素之至少一些的最大值,其中每一像素表示自器皿中之各別空間位置反射(及/或透射)之光的量。將所得值繪圖產生表示像素之最亮歷史值之MIP影像,諸如圖13A及圖13B中所展示之MIP影像。處理器藉由對MIP影像中之值超過預定臨限值的像素之數目計數來對MIP影像計分。若得分超過表示類似器皿中之薄片之數目的歷史值,則處理器判定器皿按統計可能含有玻璃薄片。處理器亦可藉由自MIP影像估計玻璃薄片之數目、平均大小及/或大小分佈來判定薄片污染之嚴重性。 本發明系統亦可用以(例如)基於隨時間而變之由粒子反射之光的量之差異及/或基於由粒子透射之光的量之差異來區別玻璃薄片與器皿中之其他粒子。一些非薄片粒子可將來自自下方照明器皿之光源(例如,圖8中之光源122e)的光反射至檢測器。(例如)玻璃大塊、金屬大塊及外來纖維可使用底部光照組態而連續地出現。此等類型之粒子在移動通過容器時將一貫地被檢測到,與取決於定向且每次其對準自身以朝向成像器反射光時僅在幾個圖框可見的薄片形成對比。可對底部光時間序列影像使用粒子追蹤以追蹤一貫可見但仍在移動中之微粒物質。接著可自用於薄片計分之MIP計算消除此等追蹤,或者此等追蹤可包括於相互光資訊技術中以判定給定粒子與其他光照定向互動之方式。舉例而言,可在底部光照組態上追蹤反射光之金屬粒子。彼相同粒子在以背光(例如,圖8中之光源122f)照明時遮蔽光。使用此等量度兩者使得有可能區分金屬粒子與玻璃大塊,玻璃大塊反射底部光照但不遮蔽後部光照。 
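The time-axis maximum-intensity-projection (MIP) scoring described above for glass lamellae can be sketched as follows. This is a pure-Python stand-in for an 8-bit frame stack; the 128 threshold is illustrative.

```python
# MIP over the time axis: keep each pixel's brightest historical value,
# then score the vial by counting pixels above a threshold. A lamella that
# glints in only a few frames still survives into the projection.

def mip_over_time(frames):
    """frames: list of equally-sized 2D lists of 8-bit intensities."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[max(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def mip_score(mip, threshold=128):
    """Number of pixels whose brightest historical value exceeds threshold."""
    return sum(1 for row in mip for v in row if v > threshold)

# A glinting lamella: dark in most frames, bright in exactly one frame.
frames = [
    [[10, 10], [10, 10]],
    [[10, 200], [10, 10]],   # brief glint at pixel (0, 1)
    [[10, 10], [10, 10]],
]
mip = mip_over_time(frames)   # [[10, 200], [10, 10]]
score = mip_score(mip)        # 1 pixel above threshold
```

As described in the text, consistently visible reflective particles (glass chunks, metal) can first be removed via bottom-light particle tracking so that only intermittently glinting features contribute to the lamella score.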
粒子檢測、追蹤及特性化如上文所描述,圖1中所展示之視覺檢查單元100可記錄相對於黑暗背景成像之明亮粒子之影像(時間序列資料)的高品質高解析度單色串流。(或者,粒子可顯示為白色背景上之黑暗點。)因為藥物產品可含有根本上不同之粒子的廣泛分類,所以可使用眾多不同方法分析時間序列資料以區分影像上之特徵與背景。經常,單一影像(時間序列資料之圖框)上之粒子的外觀不足以作出關鍵目標之真正準確的數量估計(例如,計數/大小)。舉例而言,在時間序列資料之一個圖框中呈現為單一粒子之物實際上可能為彼此碰撞或經過彼此之兩個或兩個以上粒子,此可導致準確粒子計數及/或粒子大小之估計。 視訊序列中之圖框之間的影像特徵之時間相關性改良粒子計數及大小量測之精度。將連續圖框中之影像特徵連結於一起以形成用於每一粒子之與時間有關的軌跡之程序稱作粒子追蹤、註冊或指派。粒子追蹤技術用於其他應用(顯著地在流體力學之實驗研究中)。然而,此等應用通常使用良好定義之球形示蹤物粒子。將該原理應用於藥物產品及其他流體需要顯著較複雜之解決方案。另外,對於一些粒子種類,時間(追蹤)分析並不始終為實際的。在此等狀況下,可使用統計方法作為替代方案來產生特性量測。 圖14提供高階粒子檢測及識別1300之概述,其藉由時間序列資料之獲取1310開始。預處理時間序列資料(及/或反轉時間序列資料)1320,且經預處理之反轉時間序列資料用於二維粒子識別及量測1330,二維粒子識別及量測1330可包括反轉時間序列資料之統計分析1340及/或粒子追蹤1350。如上文所解釋,反轉時間序列資料為圖框以相反時間次序重新排序之時間序列資料。粒子報告產生1360在粒子識別及量測1330完成時發生。 時間序列資料預處理預處理1320包括靜態特徵移除(背景相減)1321、影像雜訊抑制/濾波1322及強度定限1323。靜態特徵移除1321利用使容器自旋激發流體及流體內含有之粒子的事實。流體及粒子之動態運動允許其與其他成像特徵區別。由於影像俘獲在容器停止自旋之後開始,因此假定正在移動之每一事物為潛在粒子。靜態特徵隨後為不相關的且可自影像移除以改良清晰性。 在一個實施例中,最小強度投影建立用於影像中之靜態的特徵之近似模板。此包括(例如)可存在於容器壁上之刮痕、灰塵及缺陷。隨後可自整個視訊序列減去此「靜態特徵影像」以產生僅含有相對於黑色背景之移動特徵的新視訊序列。舉例而言,圖15A及圖15B展示在靜態特徵移除之前及之後的時間序列資料之單一圖框。眩光、刮痕及其他靜態特徵混淆圖15A中之容器的部分。背景相減移除靜態特徵中之許多者,從而留下具有較清楚可見之移動粒子的影像(圖15B)。 此方法之警告為大多數玻璃缺陷(諸如表面刮痕)散射相對大量光,從而當檢測器像素飽和時在俘獲影像中顯現為明亮白色。減去此等特徵可導致影像中之「死」區。當粒子移動至此等照明缺陷之後面或前面時,其可被部分遮蔽或甚至完全消失。為了解決此問題,「靜態特徵影像」可經保持、分析且用以使缺陷位置與粒子位置相關以最小化表面缺陷對粒子大小及計數資料的影響。(作為邊註,建議在操作系統之前應用清潔協定以確保儘可能多地移除表面缺陷。)資料亦可經濾波1322(例如)以移除高頻及/或低頻雜訊。舉例而言,將空間帶通濾波器應用於(反轉)時間序列資料移除及/或抑制在第一空間頻率或第二空間頻率上方變化之資料。 一旦已移除背景特徵,便藉由對影像中之每一像素的強度值修整來將時間序列資料定限1323為預定數目個值中之一者。考慮圖16A及圖16C中所展示之灰度影像,其係根據左側所展示之八位元尺度(其他可能尺度包括16位元及32位元)而按比例調整。每一像素具有自0至255之強度值,其中0表示無檢測到之光且255表示檢測到之光的最高量。對127或127以下至0之彼等強度值以及128及128以上至255之彼等強度值修整產生圖16B及圖16D中所展示之黑白影像。熟習此項技術者將易於瞭解,其他臨限值(及多個臨限值)亦為可能的。 
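The preprocessing chain above (minimum-intensity projection as a static-feature template, background subtraction, then intensity thresholding) can be sketched as follows. Pure-Python stand-in for 8-bit frames; the 128 cut matches the example threshold in the text.

```python
# Static feature removal: the per-pixel minimum over the video approximates
# scratches and dust (which never move), and subtracting it leaves only
# moving features, which are then thresholded to black/white.

def static_feature_image(frames):
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[min(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def subtract_and_threshold(frame, static, threshold=128):
    rows, cols = len(frame), len(frame[0])
    return [[255 if frame[r][c] - static[r][c] > threshold else 0
             for c in range(cols)] for r in range(rows)]

# A bright static scratch at (0, 0) and a particle moving along row 1.
frames = [
    [[250, 5], [200, 5]],
    [[250, 5], [5, 200]],
]
static = static_feature_image(frames)            # [[250, 5], [5, 5]]
binary0 = subtract_and_threshold(frames[0], static)
# the scratch is removed, the moving particle survives: [[0, 0], [255, 0]]
```

As the text cautions, subtracting saturated defects creates "dead" zones, so the static-feature image should also be retained and correlated with particle positions rather than simply discarded.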
粒子檢測影像中之有效粒子檢測依賴於多種影像處理及分段技術。分段指代影像中之關注特徵藉以簡化為離散可管理物件之計算程序。用於自影像提取特徵之分段方法廣泛用於(例如)醫學成像領域,且此等技術已用於粒子識別。簡而言之,使用定限、背景(靜態特徵)相減、濾波(例如,帶通濾波)及/或最大化對比度之其他技術來預處理自相機獲取之影像。在完成時,處理器130將影像分段,接著選擇影像之某些區域作為表示粒子且相應地對彼等區域分類。合適分段方法包括(但不限於)信賴連接(confidence-connected)、分水界、水平集(level-set)、圖形分割、基於壓縮、群集、區生長、多尺度、邊緣檢測及基於直方圖之方法。在獲取影像之後,分段可產生額外資訊以使所獲取影像上之給定特徵與粒子類型相關。舉例而言,關於給定分段特徵之資訊(諸如區域、周長、強度、銳度及其他特性)可接著用以判定粒子之類型。 粒子追蹤及時間反轉關鍵地,先前可用之粒子識別工具未十分詳細地考慮粒子在小瓶周圍移動時粒子的時間行為。若僅自單一「快照」量測,則粒子之計數及定大小可能不準確。然而,時間序列資料提供可使用粒子追蹤1340解析之粒子行為的較完整圖像,粒子追蹤使得能夠產生用於每一個別粒子之與時間有關的試算表,從而能夠更穩健及準確地量測其基本屬性。粒子追蹤為廣泛地用於視訊顯微術以及流體動力學工程中之技術(其中其通常稱作粒子追蹤測速法,或PTV)。 儘管PTV為已知的,但大部分粒子追蹤解決方案假定連續視訊圖框之間的粒子之移動為輕微的,且小於給定影像中之粒子之間的典型分離距離。在此等狀況下,藉由識別最近匹配相鄰者來連結粒子位置為足夠的。然而,在許多應用中,此並非適當模型。歸因於自旋速度(例如,約300 rpm、1600 rpm及/或1800 rpm)及可能高粒子濃度,可預期粒子在連續圖框之間比典型粒子間分離距離移動得遠。此可藉由使用預測性追蹤之形式來解決,預測性追蹤涉及在由粒子之先前運動預測之區中搜尋粒子。預測性追蹤包括評估物理方程式以按數學方式預測粒子在後續圖框中之近似未來位置,如圖17中所展示。為了獲得改良效能,預測性追蹤之此階段可與局部流體行為(若已知)之知識耦合,例如,如關於圖21C所描述。 形成用於給定軌跡之準確預測可需要軌跡所基於之一些先前資料點。此呈現難題-在影像序列開始時,當粒子移動最快時,可存在極少至無位置預測所基於之先前資料。然而,隨著時間過去,容器中之壁阻力使旋轉流體減慢且最終停止。記錄時間序列資料足夠久產生粒子顯著減慢且甚至停止之圖框。 反轉視訊之時間線1331,因此粒子最初呈現為靜態的,且在視訊進展時緩慢地加速提供用於判定軌跡之「先前」資料點。在視訊開始時,其中粒子現在幾乎不移動,最近匹配原理可用以建立每一軌跡之初始階段。在適當時間,系統接著可切換至預測性模式。以此方式反轉所獲取資料之時間線動態地改良效能。 圖17展示藉由時間反轉之預測性追蹤之概述。粒子追蹤之目標為追蹤連結粒子在圖框 i中之位置 a i 至其在圖框 i+1中之位置 a i+1 ,如圖17(a)中所展示。若圖框之間的粒子之移動小於至其最近相鄰者(粒子 b)之距離d,則此為直接的。若粒子之移動方向為未知或隨機的,則最簡單方法為具有搜尋區帶(通常為半徑為 r s 之圓),其中 r s 經選擇以便比粒子移動之預期範圍長,但小於典型粒子間分離距離d,如圖17(b)中所展示。在反轉影片時間線之後,如圖17(c)中,粒子呈現為開始緩慢地移動。然而,一會兒之後,粒子呈現為加速,且最近匹配搜尋方法可開始失效。反轉時間序列資料之前幾個圖框部分地建立軌跡,從而產生粒子之速度及加速度之一些知識。此資訊可輸入至適當方程式中以預測粒子在圖框 i+1中之近似位置,如圖17(d)中。此預測性追蹤方法顯著比簡單最近匹配追蹤更有效,尤其在緻密及/或快速移動之樣本中。 質量中心檢測圖18A及圖18B說明在定限之後用於(反轉)時間序列資料中之粒子的質量中心檢測。首先,處理器130將灰度影像(圖18A)轉變為定限影像(圖18B)。每一粒子呈現為二維投影,其形狀及大小取決於圖框經記錄時粒子之形狀、大小及定向。接下來,處理器使用任何合適方法(例如,鉛垂線方法,藉由幾何分解等)計算每一二維投影之幾何中心或質心(例如,如由座標 x i y i 指示)。處理器130可逐圖框地比較特定粒子之質心的位置以判定粒子之軌跡。 
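The nearest-match ("greedy") linking step described above can be sketched as follows: each centroid in frame *i* is linked to the closest unclaimed centroid in frame *i*+1 that falls inside the search radius r_s. Function names are illustrative; predictive tracking replaces the fixed search circle with a window centred on the extrapolated position.

```python
# Greedy nearest-match track linking between two consecutive frames of
# detected centroids (x, y). Matches beyond the search radius are rejected.
import math

def link_nearest(prev_pts, next_pts, r_s):
    """Return {index_in_prev: index_in_next} for matches within r_s."""
    links, taken = {}, set()
    for i, p in enumerate(prev_pts):
        best_j, best_d = None, r_s
        for j, q in enumerate(next_pts):
            if j in taken:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links[i] = best_j
            taken.add(best_j)
    return links

prev_pts = [(0.0, 0.0), (10.0, 10.0)]
next_pts = [(9.5, 10.2), (0.4, 0.1)]   # detections arrive in arbitrary order
links = link_nearest(prev_pts, next_pts, r_s=2.0)
# particle 0 -> detection 1, particle 1 -> detection 0
```

This is why the time-reversed sequence matters: near the (reversed) start the inter-frame displacement is far smaller than the inter-particle separation, so greedy linking is reliable and seeds the velocity estimates needed for the predictive mode.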
粒子遮蔽本文中所揭示之視覺檢查系統的每一者將三維容積(容器及其內含物)投影至影像感測器之二維表面上。對於給定二維感測器,三維容積中之粒子有可能呈現為交叉路徑。當此發生時,一個粒子可部分或完全遮蔽另一粒子,如圖19中所展示。在圖19(1)中,在影像序列中識別到新粒子;追蹤通過影像序列之粒子產生一系列依序步驟,如圖19(2)中所展示。使用搜尋區帶以在連續圖框中尋找可能匹配,如圖19(3)中所展示。有時,一個以上候選粒子將佔據搜尋區帶,如圖19(4)中所展示,在此狀況下,系統選擇最佳匹配。如熟習此項技術者易於瞭解,可使用不同方法之組合中的任一者來決定最佳匹配。舉例而言,表示一個圖框中之候選粒子的資料可與表示先前圖框中之粒子的資料比較及/或相關。包括(但不限於)大小、形狀、亮度及/或外觀改變之比較及/或相關參數導致候選粒子之匹配。說明性視覺檢查系統可對付碰撞、遮蔽及暫時粒子消失,諸如圖19(5)中所展示之遮蔽。當粒子恢復時,如圖19(6)中,可重建構徑跡。說明性系統亦可解決當兩個徑跡(及其搜尋區帶)碰撞時引起的衝突,從而確保形成正確軌跡,如圖19(7)中。 圖20說明二維影像中之粒子遮蔽的另一狀況:(a)懸浮之粒子的典型影像。圖20(b)至圖20(e)展示圖20(a)中之框中區的近視圖,其中兩個粒子自相反方向接近彼此。(反轉)時間序列資料中之接下來的圖框展示遮蔽使兩個粒子呈現為單一假大粒子。若遮蔽為部分的(圖20(c)),則此可導致出現單一假大粒子。若遮蔽為完全的(圖20(d)),則較小粒子可自視野完全失去且粒子計數可減少1。此在檢查藥物產品時可十分重要,此係因為當實際上處於檢視下之產品僅含有可接受在顯微鏡下才可見之粒子時,假增加之大小量測可能足以超過法規臨限。在圖20(e),粒子已移動超過彼此且獨立追蹤可繼續。藉由分析粒子軌跡及隨後之與時間有關的大小分佈,視覺檢查系統可自動校正歸因於遮蔽之錯誤,從而導致較低錯誤拒絕率。 考慮丟失粒子如所論述,粒子可出於眾多理由而自給定視訊序列之一部分消失。其可橫穿「盲點」及/或「死」區,此歸因於如上文論述之靜態特徵移除。最後,一些類型之粒子可展現光學行為,其中該等粒子相對於成像光學器件而出現及消失(閃爍)。在此等狀況下,處理器可如下預測此等「丟失粒子」之移動。若粒子重新出現於某一時間圖框內之預期位置處,則處理器可連結軌跡且內插用於臨時圖框之虛擬粒子資料。注意,自法規立場,清楚虛擬粒子資料經適當地貼標籤使得其可與真正量測之粒子資料加以區別為重要的。 圖21A至圖21C說明用於追蹤及恢復丟失粒子(亦即,在視訊序列之過程中暫時自視野消失之粒子)的一種技術。消失可歸因於遮蔽於另一(較大)粒子後面、遮蔽於表面缺陷後面、過渡穿過已知盲點或僅粒子之光學幾何屬性(舉例而言,一些類型之粒子可僅在特定定向處可見)。找到或恢復自視野消失之粒子改良粒子可被檢測及識別之精度。 圖21A說明找到由容器表面上之缺陷遮蔽的粒子之預測性追蹤。表面缺陷散射大量光,從而使影像之對應區飽和。在使用靜態特徵移除之後,此導致影像中之「死區帶」。橫穿此區帶之任何粒子暫時消失。處理器130可藉由在有限數目個步驟內產生虛擬粒子來恢復「丟失」粒子。若粒子重新出現且被檢測到,則徑跡經聯合。 更具體而言,處理器130使用預測性追蹤以在粒子消失之前判定粒子之速度。其亦可使用預測性追蹤及粒子之速度來外推預期粒子位置。若粒子再次出現於預期位置,則可連結虛擬位置以形成完整軌跡。若粒子於預定義時間窗內未重新出現,則可發信號將其作為永久丟失,且不再追蹤該粒子。 圖21B展示追蹤當不被看見時經歷顯著加速或方向改變之粒子的方式。並非預測粒子軌跡,處理器130使用流體之局部行為的性質追溯地連結片段軌跡。在此狀況下,處理器130藉由按此速度及尺度考慮流體之層流特性來聯合軌跡。 圖21C說明當粒子橫穿已知盲點時粒子消失及重新出現之方式。在此實例中,粒子橫穿在容器之極限邊緣處的已知盲點。藉由關於盲點相對於容器影像之位置的資訊程式化處理器130使得處理器130能夠重建構軌跡。 粒子形狀不規則性一些粒子並非球形的或足夠小以視為點狀,如由大多數粒子追蹤技術假定。實際上,許多粒子為不規則形狀且在移動通過流體時可相對於相機滾轉及旋轉,如圖22A至圖22C所展示。在一些狀況下,不規則形狀粒子可呈現為兩個單獨粒子,每一粒子具有其自己的軌跡,如圖22B中所展示。二維物件之所量測質量中心之此不可預測移動可混淆粒子之真實移動。此行為嚴重地使預測性追蹤之程序變複雜。本文中描述之視覺檢查系統可含有(例如)藉由計算用於不規則形狀粒子之平均軌跡(如圖22A及圖22C中所展示)來對付不規則形狀粒子之顯而易見地擾動運動的功能性。 容器 / 產品 - 
特定流體動力學自旋後容器中之粒子的運動為流體之運動與重力之效應的組合之結果。流體之運動隨流體之黏度、填充容積、容器形狀及大小,及初始自旋速度而變。可藉由將流體系統之物理限制知識併入至軌跡構建中來顯著改良粒子追蹤效能。 在習知容器中自旋之液體的流體動力學在某些環境下可驚人地複雜。將流體動力學知識併入(因為其係關於通常用於藥物產業之容器)至軌跡構建中構成優於先前技術之顯著新穎性及開發區域。 圖23展示典型容器中之流體行為的一些實例,其中來自計算模型之結果與由視覺檢查平台產生之真實世界粒子軌跡比較。研究發現非預期微妙之處:作為實例,在圖23(d)中可看見在小瓶中心沿著窄垂直柱之粒子移動,此歸因於自旋階段期間產生之漩渦的釋放(圖23(a))。當此中心柱中之流體垂直向上移動時,其可向上拂掠通常可預期下沈之重粒子。此可(例如)引起預期將上升之識別氣泡與歸因於容器特定流體運動而上升之外來粒子之間的混淆。 說明性視覺檢查系統可利用藥物產品之預期流體動力學的先前知識來產生比本將可能之結果顯著更準確的結果。以此方式組合實體模型(諸如圖23中說明之模型)與粒子追蹤表示優於現有技術之顯著改良。 錯誤校正儘管本文中所揭示之視覺檢查系統在大多數實驗條件下為穩健的,但追蹤在小的三維容積中移動之大量粒子的挑戰之複雜性意謂始終存在引入一些錯誤之風險,該等錯誤主要呈當粒子「碰撞」時在連續圖框之間形成的不正確軌跡之形式。此現象在圖24A中加以說明。 可有利地使用視覺檢查系統之物理限制的理解。廣泛而言,局部地在每一粒子周圍之流體的主要移動為層狀的(而非擾動或隨機的)。此基本上意謂:在足夠快之相機的情況下,此系統中之自然粒子軌跡應平滑地變化,而無突然急劇的方向改變,特別當在影像中粒子橫穿容器中心時。一旦初始軌跡連結完成,系統便可追溯地分析軌跡是否有此等錯誤。若檢測到錯誤,則系統可比較附近軌跡以確定是否可發現物理上較一致之溶液。此展示於圖24B中。 準確粒子計數可藉由在粒子檢測之後對在單一時間點取得之快照影像中的粒子數目計數(例如,如圖24A中所展示)來推斷粒子計數,其中每一粒子標記有計數編號。此方法為直接的,但具有出於多種理由而在系統上少計數容積中之粒子數目的傾向。舉例而言,一或多個粒子可由另一粒子或表面缺陷遮蔽。粒子可在已知(或未知)盲點中。另外,當粒子移動跨越量測臨限時,極端小或模糊粒子可自視圖間隙地出現及消失。 本文中所論述之粒子追蹤的一個優點為其可考慮所有此等問題。因此,對於穩健粒子追蹤,可藉由計數個別粒子徑跡之數目(如圖24B中),而非單一影像中之粒子的數目或若干影像之統計分析來改良粒子計數。計數粒子軌跡之數目而非單一圖框(或圖框之整體)中之粒子的數目表示優於習知粒子追蹤技術之顯著改良。改良之大小隨存在之粒子的數目及大小而變化。粗略而言,當粒子數目增加時,遮蔽之機會增加且因此歸因於本發明粒子追蹤之時間能力的改良而成比例地增加。 準確粒子定大小習知粒子量測系統量測來自靜態影像之粒子大小。最通常,此係藉由根據法規及/或工業標準量測粒子之最長顯而易見之軸線的長度或斐瑞特(Feret)直徑來進行,如圖25中所展示,斐瑞特直徑可將粒子大小定義為粒子之最長單一尺寸。在此定義下,1 mm頭髮被分類為與具有1 mm直徑之球形粒子相同。記住此,根據二維影像,最大斐瑞特直徑為將使用之合理量測。然而,來自靜態影像之粒子大小的量測遭遇若干關鍵問題。 首先,在三維容積之二維投影中,多個粒子易於可能重疊,從而產生呈現為單一大得多之粒子。在法規對可允許粒子大小設定極其嚴格之上限的產業中,此為關鍵問題,特別對於製造應用,其中其可導致錯誤拒絕,特別對於緻密填入之樣本。 第二,不規則形狀粒子在容器周圍流動時可不可預測地(相對於相機)滾轉。藉由單一二維快照,可能不可能保證給定粒子之最長尺寸垂直於相機之觀看軸線。因此系統可能在系統上將粒子大小定得過小,此在嚴重規定之產業中可具有可怕結果。當粒子在容器周圍流動時經由粒子追蹤檢驗粒子之與時間有關的最大斐瑞特直徑提供粒子之最大尺寸的準確得多之量測。 第三,當粒子在圓柱形容器周圍移動時,其大體上將其長軸線與周圍流體流動之方向對準,如圖25A及圖25B中所展示。一般而言,對於圓柱形容器,此意謂細長粒子在影像之中心可比在極限橫向邊緣呈現得大。通常,當粒子相對於影像感測器之光軸垂直行進時,成像器檢測到最大顯而易見之粒子大小(斐瑞特直徑)。若在單一粒子在容器周圍流動時追蹤單一粒子,則可準確地量測其正確最大伸長率-靜態量測程序難以達成之某物。 
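The time-dependent Feret measurement discussed above can be sketched as follows: the Feret diameter of one detection is the longest distance between any two of its boundary points, and the particle's reported size is the maximum over its whole track rather than from a single frame. The boundary points below are illustrative.

```python
# Maximum Feret diameter over a particle track: an elongated particle seen
# end-on in one frame and side-on in another reports its true long axis.
import itertools, math

def feret_diameter(points):
    """Longest distance between any two boundary points of one detection."""
    return max(math.dist(a, b) for a, b in itertools.combinations(points, 2))

def track_feret(track):
    """track: list of per-frame boundary point sets for one particle."""
    return max(feret_diameter(pts) for pts in track)

track = [
    [(0, 0), (1, 0), (0, 1), (1, 1)],   # frame 0: seen end-on, ~1.4 px
    [(0, 0), (5, 0), (0, 1), (5, 1)],   # frame 1: long axis visible, ~5.1 px
]
per_frame = [feret_diameter(p) for p in track]
size = track_feret(track)               # the frame-1 reading wins
```

In practice, as the text notes, frames with high particle speed (motion blur) and frames near the vessel edge (lensing) would be down-weighted or excluded before taking this maximum.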
最後,儘管努力藉由選通照明來最小化運動模糊之效應(如上文所論述),但某一運動模糊程度仍可能在影像俘獲序列開始時當流體及粒子移動最快時發生。藉由使用粒子大小之與時間有關的分析,可識別及抑制資料中歸因於運動模糊之假影(其傾向於增加所量測粒子大小)。 圖25C至圖25E說明使用時間序列資料來追蹤粒子軌跡以獲得較精確粒子大小量測。圖25C展示在自旋之後在小瓶周圍移動之100微米聚合物微球體之典型徑跡。當粒子呈現為跨越容器之中心,當其速度與觀看方向垂直時,如圖25D中所展示,粒子相對於相機移動最快。舉例而言,若初始自旋速度為300 rpm,且粒子之徑向位置r p為5 mm,則粒子速度v p為約9.4 m/s。按此速度,僅10 µs之相機曝光時間歸因於運動模糊而使顯而易見之粒子大小加倍。圖25E展示運動模糊可影響影像之不良程度:在左側,粒子正快速移動(約300 rpm)且伸直;在右側,相同粒子靜止且呈現為較圓。 圖25F為用於圖25C中所展示之粒子的與時間有關之斐瑞特直徑之圖。歸因於圓柱形容器之透鏡化效應,粒子之顯而易見的大小在容器之邊緣附近減小(右側軸線註記D)。最大粒子大小之最佳估計在粒子以中等速度橫穿容器之中心時發生(右側軸線註記B)。若速度過高(其通常在容器自旋之後的前幾秒期間發生),則運動模糊放大粒子大小(右側軸線註記A)。最終,歸因於流體阻力,粒子將停止一起移動(右側軸線註記C)。在此狀況下,中間範圍峰值(右側軸線註記B)為最大粒子大小之最準確讀數。 粒子特性化圖26A展示具有粒子及其軌跡兩者之時間序列資料的連續圖框。粗略平面徑跡表示模擬蛋白質聚集體之100微米聚合物微球體的軌跡。始終為平衡浮力之此等粒子隨流體移動且不會顯著下沈或上升。垂直下降之徑跡表示100微米玻璃珠之軌跡,玻璃珠最初隨流體旋轉但隨著序列進展而下沈。上升徑跡表示氣泡及具有正浮力之粒子的軌跡。 粒子追蹤使得能夠量測可關於處於檢驗下之粒子的性質給出重要線索之眾多與時間有關之屬性。舉例而言,自法規立場一般可視為良性之氣泡可使當前基於光學之檢查機器混亂,從而導致錯誤肯定及不必要之拒絕。在此狀況下,粒子之與時間有關的運動(當流體開始減慢時氣泡傾向於垂直地上升)導致可易於自粒子追蹤所產生之軌跡識別的極其明顯之特性。類似地,平衡浮力粒子可不上升或降落許多,而緻密粒子下沈至容器之底部。較輕粒子可在由自旋流體形成之漩渦中拂掠掉,且重粒子可具有直線軌跡。 更寬廣而言,粒子追蹤程序產生含有所有相關參數之細節的與時間有關之試算表(諸如圖26B中展示之試算表),該等相關參數包括位置、移動速度、移動方向、加速度、大小(例如,二維面積)、大小(最大斐瑞特直徑)、伸長率、球度、對比度及亮度。此等參數提供可用以將粒子分類為特定種類之符號。經由粒子追蹤解決方案可達成之此方法針對大多數關注粒子工作良好。基於與時間有關之量測的此陣列逐粒子地對粒子分類之能力為本發明之特定益處。 視訊壓縮視覺化比較大容器中之極其小的粒子受益於極其高解析度影像感測器之使用。亦需要最大化影像俘獲速率以確保準確軌跡構建。此等要求之組合導致極端大之視訊檔案,例如,1 GB、2 GB、5 GB、10 GB或更大。對於一些應用,除了封存分析資料之外,可有必要封存原始視訊。對於甚至中等大小之樣本集合,所涉及之大檔案大小可能使資料儲存成本超限。 (反轉)時間序列資料之視訊壓縮可用以減少(反轉)時間序列資料檔案之大小。保護粒子資料完整性可需要使用無損視訊壓縮。研究表明較常使用(及更有效)之有損壓縮技術(例如,MPEG)可嚴重地使影像失真及擾動影像,從而引入眾多不想要之視覺假影。 儘管一般而言,與有損壓縮相比無損壓縮比較低效,但存在可改良其效率之眾多步驟。時間序列資料之大多數圖框展示相對於黑暗背景之少數小的明亮物件集合。黑暗背景不含有有用資訊。其並非真正黑色,實情為,其係由極其模糊之隨機雜訊組成。以純黑色背景替換此背景極大地簡化影像,且使標準無損壓縮技術(例如,zip、Huffyuv)操作有效得多。 已在文獻中其他地方報告此程序。然而,此處新穎之處為實際上構成給定圖框中之背景之物的特定決策。其他壓縮程序設定臨限強度位準且假定影像中低於此位準之所有像素為背景之部分。此為廣泛有效之策略但可導致所保持粒子之大小的輕微減小,且可完全移除其亮度與固有隨機背景「雜訊」之上限為相同階的極其模糊之粒子。 儘管此等習知技術與(反轉)時間序列資料一起工作,但說明性實施例中所使用之壓縮使用在使用破壞性定限之前分析背景是否有模糊粒子之獨立階段。此確保保持粒子完整性同時最大化粒子儲存要求之減小的最佳平衡。 填充容積 / 
彎液面檢測視覺檢查平台之自動實施例準確地檢測樣本之填充容積,此在研究應用中為重要的,其中不保證填充容積跨越特定行程將一致。此在處理極其大之資料檔案(諸如由高解析度影像感測器產生之資料檔案)時尤其有用,從而對資料傳送及儲存產生壓力。出於此理由,可需要限制記錄影像以涵蓋不超過流體容積,此係因為任何進一步資訊為不相關的。 說明性系統可使用(例如)自動邊緣檢測或特徵辨識演算法以檢測如圖27至圖29中所展示及下文描述之影像中之容器的邊界。因為彎液面及小瓶基底兩者為單一獨特特徵,所以可使用眾多可能光照組態及/或影像處理技術來準確地識別其在影像中之位置。量測填充容積及判定影像之由流體佔用的區產生關注區。具體而言,根據圖8,使用光源122f(背光)、122e(底部光)及122a與122b之組合(後方成角度光照)之組態可均用以檢測填充容積,如下文所描述。 圖27A至圖27F說明使用圖8中之後方成角度光照122a及122b自動檢測容器內之關注區。圖27A展示容器之靜態影像,其中器皿之基底及彎液面清晰地可見為相異明亮物件。作為實例,處理器可使用邊緣檢測以識別容器之垂直壁及關注區之寬度 w,如圖27B中所展示。對於彎液面及小瓶基底(其外觀可能較不可預測)之檢測,處理器可(例如)使用強度定限及分段以提供關注區之簡化影像(圖27C中所展示)。在此階段,處理器可自動識別可能不適合用於粒子分析之容器,例如表面經刮傷及/或覆蓋於灰塵中之容器。系統之有效性可受過量渾濁度、容器表面缺陷或過高粒子濃度危害(藉此個別粒子可不再離散化於影像中)。若處理器判定容器為令人滿意的,則可接著隔離及簡化對應於彎液面及小瓶基底之物件,如圖27D中所展示。處理器將關注區之垂直高度 h定義為彎液面之下邊緣與小瓶基底之上邊緣之間的距離,如圖27E中所展示。最後,處理器可使用關注區之寬度及高度尺寸來剪裁原始影像串流,使得僅記錄影像之由可見流體佔用的區域,如圖27F中所展示。 圖28A至圖28C說明藉由使用背光組態(例如,圖8中之光源122f)獲取之資料執行的類似彎液面檢測程序。圖28A展示表示藉由背光成像之典型容器的時間序列資料之圖框。彎液面、壁及基底清晰地可區別,且可使用如圖28B中之邊緣檢測自動識別。然而,諸如大刮痕之缺陷可能危害彎液面位置之準確檢測,無論使用背光(圖28B)抑或後方成角度光(例如,如圖29C中,下文所描述)。在一個實施中,使用影像之強度定限來識別彎液面及小瓶基底。由於此等為相對大物件,且歸因於其形狀朝向檢測器散射相對大量光,因此其可清晰地被識別,與可能存在之任何其他特徵不同。 圖29A至圖29D說明具有粗略平面底部之圓柱形器皿中之彎液面的檢測。自動填充容積檢測以定限(圖29A)開始以檢測彎液面,其接著設定關注區且亦為填充容積之量測。接下來,在圖29B中,傾斜光照反白顯示諸如刮痕(圖示)、灰塵、指紋、玻璃缺陷或凝聚之表面缺陷可使邊緣檢測為困難的。自下方(例如,使用如圖8中之光源122e)光照小瓶(如圖29C中)以對表面缺陷(相對)不敏感之方式照明彎液面-此處,彎液面可見,儘管表面被嚴重刮傷。來自下方之光照亦使得可能難以在空小瓶與滿小瓶(如圖29D中所展示)之間進行區分,及準確地檢測彼等極限之間的所有填充位處之彎液面高度。自下方照明小瓶增加彎液面檢測之有效性,此係因為其減輕歸因於刮痕及其他表面缺陷(圖27C)之錯誤。設定光源122e以按小角度照明器皿進一步減少對表面缺陷之敏感性。對於可歸因於缺乏透明容器基底而難以自下方照明之注射器,可藉由按窄角度傾斜地照明來達成類似效應。 類似於上文描述之彎液面檢測的檢查技術亦可用以篩選將破壞識別及分析懸浮於流體中之粒子的任何後續嘗試之特徵。此可包括識別過度擾動液體、嚴重損壞之容器(包括過量刮傷或表面碎屑)及其中粒子濃度過高而使得粒子可不再離散化之流體。 處理器及記憶體熟習此項技術者將易於瞭解,本文中所揭示之處理器可包含提供執行應用程式及其類似者之處理、儲存及輸入/輸出器件的任何合適器件。例示性處理器可實施於積體電路、場可程式化閘陣列,及/或任何其他合適架構中。說明性處理器亦經由通信網路連結至其他計算器件,包括其他處理器及/或伺服器電腦。通信網路可為遠端存取網路、全球網路(例如,網際網路)、全球電腦集合、區域網路或廣域網路及當前使用各別協定(例如,TCP/IP、藍芽等)之閘道的部分以與彼此進行通信。其他電子器件/電腦網路架構亦為合適的。 
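The fill-volume step described above (locate the meniscus and the vial base as the dominant bright features, then take the region of interest between them) can be sketched on a synthetic, already-thresholded image. The one-pixel-high bright bands and the row-sum test are illustrative simplifications; a real image would be segmented first.

```python
# Find the meniscus and base as the brightest horizontal bands and report
# the ROI height h (fluid rows strictly between them). Row 0 is the top.

def bright_rows(image, threshold=128):
    """Indices of rows whose summed intensity exceeds threshold * width."""
    w = len(image[0])
    return [r for r, row in enumerate(image) if sum(row) > threshold * w]

def roi_height(image):
    rows = bright_rows(image)
    meniscus_lower, base_upper = min(rows), max(rows)
    return base_upper - meniscus_lower - 1    # rows strictly between

dark, bright = [0] * 4, [255] * 4
image = [dark, bright, dark, dark, dark, bright, dark]  # meniscus row 1, base row 5
h = roi_height(image)   # 3 fluid rows (indices 2, 3, 4)
```

The resulting width and height of the ROI are then used to crop the recorded image stream to the fluid-occupied region only, which reduces the data-transfer and storage burden noted above.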
圖30為說明性處理器50之內部結構的圖。處理器50含有系統匯流排79,其中匯流排為用於電腦或處理系統之組件當中的資料傳送之硬體線集合。匯流排79實質上為連接電腦系統之不同元件(例如,處理器、磁碟儲存器、記憶體、輸入/輸出埠、網路埠等)的共用管道,該共用管道實現元件之間的資訊之傳送。用於將各種輸入及輸出器件(例如,鍵盤、滑鼠、顯示器、印表機、揚聲器等)連接至處理器50之I/O器件介面82附接至系統匯流排79。網路介面86允許電腦連接至附接至網路之各種其他器件。記憶體90向用以實施說明性視覺檢查系統及技術之實施例的電腦軟體指令92及資料94提供揮發性及/或非揮發性儲存。磁碟儲存器95向用以實施說明性視覺檢查之實施例的電腦軟體指令92及資料94提供(額外)非揮發性儲存。中央處理器單元84亦附接至系統匯流排79且準備電腦指令之執行。 在一個實施例中,處理器常式92及資料94為電腦程式產品(一般參考92),其包括提供用於說明性視覺檢查系統之軟體指令之至少一部分的電腦可讀媒體(例如,可卸除儲存媒體,諸如一或多個DVD-ROM、CD-ROM、磁片、磁帶等)。電腦程式產品92可藉由任何合適軟體安裝程序安裝,如此項技術中所熟知。在另一實施例中,亦可經由纜線、通信及/或無線連接下載軟體指令之至少一部分。在其他實施例中,例示性程式為體現於傳播媒體上之經傳播信號(例如,無線電波、紅外線波、雷射波、聲波,或經由諸如網際網路之全球網路或其他網路傳播的電子波)上的電腦程式傳播信號產品107。此等載波媒體或信號提供用於說明性常式/程式92之軟體指令之至少一部分。 在替代實施例中,傳播信號為攜載於傳播媒體上之類比載波或數位信號。舉例而言,傳播信號可為經由全球網路(例如,網際網路)、電信網路或其他網路傳播之數位化信號。在一個實施例中,傳播信號為經由傳播媒體在一時段內傳輸之信號,諸如經由網路在幾毫秒、幾秒、幾分鐘或更久之時段內在封包中發送之軟體應用程式的指令。在另一實施例中,電腦程式產品92之電腦可讀媒體為處理器50可(諸如)藉由接收傳播媒體及識別如上文描述之用於電腦程式傳播信號產品的傳播媒體中體現之傳播信號而接收及讀取之傳播媒體。 一般而言,術語「載波媒體」或暫時載波涵蓋上述暫時信號、傳播信號、傳播媒體、儲存媒體及其類似者。 感測器冷卻在上述實施例中,電子感測器用以俘獲粒子之影像。諸如CCD之電子感測器經受若干類型之隨機雜訊,該等雜訊用以危害量測信號(尤其處於低信號強度)之完整性。在一些實施例中,可冷卻感測器以降低雜訊。可使用任何合適技術實現冷卻,包括(例如)使用熱電冷卻器、熱交換器(例如,低溫冷卻器)、液氮冷卻,及其組合。 在各種實施例中,雜訊降低在粒子檢測方面具有優點,尤其與蛋白質聚集體之檢測有關。在典型應用中,蛋白質聚集體可相對大(例如,直徑多達幾百微米),然而此等聚集體粒子之實體結構經常極其鬆散,相比於周圍介質具有低密度(大部分粒子可為多孔的且以周圍介質填充)及低折射率。歸因於此等實體屬性,與其他粒子(諸如玻璃片段或纖維)相比,蛋白質聚集體可散射相對小量光。 影響當代電子影像感測器之雜訊的大多數本質上為熱。此雜訊主要影響感測器之動態範圍的下端。舉例而言,在一些實施例中,動態範圍之下部X%(例如,10%)由雜訊佔用且必須在影像定限程序(例如,如上文所描述)期間移除。用於粒子檢測之臨限值最小必須高於~X%之此值,藉此自信號移除低強度資料。此可防止諸如蛋白質聚集體之模糊粒子的準確檢測。藉由降低雜訊(例如,藉由冷卻感測器),可使用下部臨限值,從而允許低強度信號之改良檢測。 圖31說明上文描述之定限問題。圖31之面板A展示來自使用本文中描述之器件及技術獲取之典型影像序列的剪裁片段。如所展示,影像為8位元灰度影像,亦即,每一像素可具有自0(黑色)線性地變動至255(白色)之強度值。影像含有兩個粒子,一個粒子相對明亮且一個粒子極其模糊。圖31之面板B展示強度直方圖,其展示對應於不含有任何粒子之影像中的框之「背景」之強度值。 感測器在強度直方圖之低端處展現高斯背景雜訊曲線,此至少部分歸因於熱效應。此曲線之寬度判定用於粒子檢測之臨限值。簡而言之,粒子需要顯著比背景雜訊亮以經歷定限後仍然存在(survive)。 圖31之面板C展示用於亮粒子之強度直方圖。粒子影像具有在直方圖中臨限值右側之大量像素且因此可在定限之後將易於檢測到。 相比之下,如圖31之面板D中所展示,較模糊粒子具有在臨限值以上之相對少量像素-其將可能在定限/分段程序期間經掃除。然而,若應用冷卻或其他技術以降低雜訊底限,藉此將臨限值移位至左側,則可能可檢測到較模糊粒子。 基於光之列舉及非破壞性定大小 
(LENS)
In some embodiments, when performing non-destructive sizing and counting of particles within a container, there are appreciable artifacts produced by the container itself. The liquid interface refracts light passing through the vial, which causes appreciable distortion of the image(s) of the particles used for the sizing and counting procedure. As a result, a particle of a given size can appear up to, e.g., four times larger in the image, depending on the particle's spatial position within the vial. Note that for cylindrical containers, the particle image is typically stretched only along the horizontal axis of the vial and not along the vertical axis. (See FIG. 5E for an illustration of these effects.)

As noted above, in some embodiments these distortion effects can be corrected (e.g., mitigated or even eliminated) using corrective optical techniques. However, in some embodiments, such optical correction may be incomplete or unavailable. Under these conditions, a direct correlation between a particle's size and its corresponding image on the detector cannot be performed.

For example, FIG. 32 shows a histogram of detected image sizes for a population of standard-sized (as shown, 100 μm diameter) particles (polymer microspheres) in fluid, acquired using a system in which the distortion from the container has not been corrected (corresponding to the situation shown in FIG. 5E). Significant variation in apparent image size due to container distortion effects is clearly shown.

This variation makes distinguishing between particle populations of different sizes difficult, because the apparent areas on the detector from each size population can overlap substantially. For example, FIG. 33 shows histograms of detected image sizes for two populations of standard-sized (as shown, 100 μm and 140 μm diameter) particles in fluid. Significant overlap between the histograms of the two size populations is clearly shown.

In some embodiments, processing techniques can be applied to recover accurate size information even in the presence of the distortion effects described above. The processing is calibrated using data obtained with known size standards. For example, FIG. 34 shows experimentally acquired apparent-size histograms for four different populations of standard-sized particles (polymer microspheres). Although four calibration curves are shown, in various embodiments any suitable number may be used. In some embodiments, at least two, at least three, at least four, at least five, or at least six curves may be used. In some embodiments, the number of curves is in the range of 2 to 100, or any subrange thereof (such as 4 to 6). In some embodiments, the set of experimental calibration curves can be interpolated to generate additional curves (e.g., corresponding to size values between the experimentally measured values).

In some embodiments, the calibration curves may correspond to particle populations whose actual sizes differ by any suitable amount (e.g., at least 1 μm, at least 5 μm, at least 10 μm, at least 20 μm, or more, for example in the range of 1 μm to 1000 μm or any subrange thereof).

Once the calibration curves have been determined, an apparent-size distribution curve can be obtained (e.g., from one or more static images, or by any other suitable technique) for a sample containing particles of unknown size. The sample curve can be obtained under the same or similar experimental conditions (e.g., the same or similar container size and shape, fluid properties, lighting conditions, imaging conditions, etc.). This sample curve is compared with the calibration curves to determine information indicative of the sizes of the particles in the sample.

For example, in some embodiments, a weighted superposition of the calibration curves is compared with the sample curve. The weights of the superposition are varied to fit the superposition to the sample curve, e.g., using any suitable fitting technique known in the art. The weights of the best fit then provide information about the actual sizes of the particles in the sample. For example, in some embodiments, the number of times each calibration curve appears in the best-fit superposition corresponds to the count of that size class within the sample.

FIG. 35 illustrates the fit of a superposition of calibration curves to an experimental sample curve. In this case, the sample was prepared so that the particle diameters were known to be in the range of 75 μm to 125 μm. FIG. 36 shows the resulting size counts from the fit, compared with the size counts obtained simply by binning the raw apparent sizes from the corresponding images. For the raw data, there are a large number of false counts outside the actual 75 μm to 125 μm size range. In contrast, the results obtained from the calibration-curve fit show a greatly reduced number of false counts.

Note that although one possible method of comparing sample data with calibration data has been described, other suitable techniques may be used. For example, in some embodiments, the sample curve can be decomposed using the calibration curves as basis functions, analogous to the Fourier decomposition of a waveform using sinusoidal basis functions. In general, any suitable convolution, deconvolution, decomposition, or other technique may be used.

In some embodiments, the light-based enumeration and non-destructive sizing ("LENS") techniques can be used in combination with the particle-tracking techniques described previously. For example, LENS techniques will tend to operate better when the shapes of the particles are close to those of the size-standard particles used to generate the calibration data. Moreover, these techniques tend to perform well when the number of particles is high (e.g., greater than 10, greater than 50, greater than 100, or more), providing a larger data set for the algorithm to process.

However, in some applications, the number of particles present may be low. In some applications, the focus may be on the larger particles in the sample. Moreover, in some applications, the sample may include particles with shapes that differ from those of the size-standard particles. For example, fibers are elongated rather than spherical as in many standards. Under these conditions, LENS techniques may not work effectively.

In general, any number of particles can be counted using the techniques described above. In some embodiments, the upper limit on the number of particles that can be counted is determined by particle/particle overlap in the sample. In general, the more particles present in a container, the more likely it is that two particles will appear conjoined to a single 2D detector. This varies with the number of particles per volume and with particle size. Typically, large particles occupy more area on the detector (and therefore overlap more, for a given count/ml, than smaller particles). For example, under certain conditions, in a 10 cc vial filled with 8
ml of fluid, up to about 500 particles 50 µm in diameter can be counted before the effects of undercounting and oversizing due to particle overlap become apparent.

However, the particle-tracking techniques presented above can effectively count and size relatively large particles. Accordingly, in some embodiments, a hybrid of the two methods can be used. FIG. 37 shows an illustrative embodiment of this hybrid procedure. In step 3701, an image sequence is recorded, e.g., using any of the techniques described herein. In step 3702, the image sequence is processed (e.g., filtered, thresholded, segmented, etc.). In step 3703, the particle data generated in step 3702 can be pre-screened for particles above a threshold size. These large particles can be removed from the data set and processed in step 3704 using tracking techniques. This can provide quality, time-dependent size measurements of the large particles. If there is a background of smaller particles (below the size threshold), it can be processed in step 3705 using the LENS techniques. The data generated by the two different techniques can then be combined in step 3706 to produce a single particle report for the container under inspection.

In various embodiments, the size threshold used to determine which technique to apply can be set to any suitable value, or to a minimum of about 1 µm or more (e.g., approximately in the range of 1 µm to 400 µm in particle width or diameter, or any subrange thereof, for example about 1 µm to about 50 µm, about 50 µm to about 250 µm, or about 75 µm to about 100 µm). In some embodiments, criteria other than size (e.g., information related to particle shape) can be used to select the particle data sent to each technique. In general, any suitable combination of criteria may be used.

Three-dimensional Imaging and Particle Detection Techniques
As noted above, in some embodiments, the automated visual inspection unit 100 can include two or more imagers 110, allowing three-dimensional imaging of the contents of the container 10.

For example, FIGS. 38A-38C illustrate a unit 100 featuring three imagers 110. As shown, the imagers 110 are positioned in a circle around the container 10 at 120-degree intervals; however, in various embodiments, more or fewer sensors may be used. The angles between adjacent imaging sensors need not be equal to one another; however, in some embodiments, an equal-angle arrangement simplifies the image-processing techniques described below.

In some embodiments, each imager 110 is substantially identical. The imagers 110 can be aligned so that they are all at the same physical height relative to the container 10, with the container 10 located at the center of each imager's field of view.

In some embodiments, even when care is taken to optimize this physical alignment, small placement errors may occur. To account for this, the imagers 110 can be calibrated by imaging a known calibration fixture. Any sufficiently small lateral or vertical alignment deviations can then be accounted for by resampling and shifting the captured images accordingly. In some embodiments, the images can be processed to correct for variations in sensitivity or other differences in performance characteristics among the sensors used in the imagers 110.

FIG. 38C shows a single imaging arm of the unit 100. As described in detail above, by using a telecentric imaging arrangement, only rays substantially parallel to the imaging axis are guaranteed to reach the sensor surface of the imager 110. As shown in FIG. 39, using geometric ray-optics techniques (or other suitable techniques), a model can be built of the rays within the container 10 that propagate through the container wall and reach the sensor surface.

With the ray vectors known, a point or region can be taken from the two-dimensional image, and that intensity can be propagated back into the container 10. Taking one horizontal row of the two-dimensional image at a time, a two-dimensional horizontal grid can be drawn within the container volume. The horizontal grids associated with each of the three imagers 110 can be superimposed to produce a single map. By repeating the procedure for additional horizontal sensor rows, a vertical stack of two-dimensional grids can be built up to form, e.g., a three-dimensional (3D) structure corresponding to all or part of the volume of the container 10.

Intensity thresholding can be used within the resulting 3D structure to identify particle candidates in a manner similar to that described above. Thresholding can be applied to the raw two-dimensional images from the imagers 110, or to the horizontal maps within the 3D structure after superposition.

Using the thresholded 3D structure, candidate particles can be identified, thereby obtaining a direct measurement of a particle's 3D position within the fluid volume of the container 10. In typical applications, the 3D position measurement is accurate for most of the fluid volume; however, in some situations (e.g., when the imagers 110 include telecentric lenses), blind spots may be experienced due to container curvature and the associated lensing effects (e.g., as shown in the right panel of FIG. 39).

When three imaging arms at 120-degree angles are used, the blind spots are closely correlated in pairs (see FIG. 39, right panel). Accurate 3D localization within the three blind-spot regions 3901 can be ruled out. However, in those regions, position data can be established by examining the two-dimensional data from the nearest imaging arm.

In various embodiments, the blind-spot problem can be mitigated or eliminated by increasing the number of sensor arms to ensure overlapping imaging.

Although one example of using multiple imagers 110 to determine 3D information about the contents of the container 10 has been described, it should be understood that other techniques may be used. For example, in an embodiment using two imagers, stereoscopic imaging techniques can be applied to determine 3D information.

In some embodiments (e.g., embodiments featuring static or slow-moving samples), a rotating imaging arm can be used to obtain 3D information in a manner similar to a medical computed tomography machine. The rotating arm acquires a time series of 2D images from various viewing angles, from which 3D information can be constructed, e.g., using any suitable technique such as those known from medical imaging. If the images are acquired at a speed that is fast relative to the dynamics of the sample, the 3D images can provide accurate 3D information for particle detection.
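The LENS calibration-curve fit described above can be sketched with a simple least-squares fit. This is an illustrative sketch, not the patented algorithm: the calibration histograms and particle counts below are invented, and a plain least-squares solve stands in for whatever fitting technique an implementation would actually use (a production fit would typically constrain the weights to be non-negative).

```python
import numpy as np

# Hypothetical apparent-size histograms (calibration curves) for two
# known size standards, e.g., 100 um and 140 um microspheres. Each row
# is the distribution of apparent image sizes that container distortion
# produces for that true size.
cal = np.array([
    [0.1, 0.5, 0.3, 0.1, 0.0, 0.0],   # 100 um standard
    [0.0, 0.0, 0.2, 0.4, 0.3, 0.1],   # 140 um standard
])

def fit_size_counts(sample_hist, cal_curves):
    """Fit a weighted superposition of calibration curves to a sample's
    apparent-size histogram; the weights estimate how many particles of
    each true size class are present."""
    weights, *_ = np.linalg.lstsq(cal_curves.T, sample_hist, rcond=None)
    return weights

# Synthetic sample: 30 particles of the first standard, 50 of the second.
sample = 30 * cal[0] + 50 * cal[1]
weights = fit_size_counts(sample, cal)
```

On this noiseless synthetic sample the fit recovers the per-class counts exactly; with real data the recovered weights are estimates whose quality depends on how well the calibration standards match the sample's particles.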
In some embodiments, the 3D information generated using the techniques described above may be suitable for detecting candidate particle positions, but not ideal for determining other particle characteristics (e.g., particle size or shape). Accordingly, in some embodiments, a hybrid approach can be used. For example, in some embodiments, the 3D positions of the particles are determined based on the 3D information (e.g., the 3D structure generated as described above). Once the three-dimensional localization of the particles has been determined, size and shape measurements obtained from the two-dimensional images from some or all of the imagers 110 can be associated with those positions.

In some embodiments, particle tracking can be performed on the 3D position data, e.g., using 3D tracking techniques similar to the two-dimensional techniques described above.

In some embodiments, 3D tracking offers advantages, particularly when used in combination with the two-dimensional images obtained from each imager 110.

In 3D tracking, particle-particle occlusion (e.g., as shown in FIG. 5E) is reduced or eliminated. In some embodiments, occlusion may still occur, e.g., for dense samples in the blind spots where true 3D localization fails.

As in the two-dimensional case described above, in some instances, predictive tracking techniques can be used in the 3D context to exploit information related to the fluid dynamics within the container 10.

In some embodiments, once the 3D particle positions have been tracked, information about particle characteristics (e.g., size and shape) can be aggregated from the two-dimensional data from the multiple imagers 110 into multiple time-dependent data sets for each particle. In some embodiments, this can allow measurements of individual particle characteristics (e.g., size and shape) that are more accurate than those possible with a single imaging sensor. For example, in some embodiments, this technique allows clearer detection and size measurement of elongated particles, because a particle's appearance no longer depends strictly on its orientation relative to a single imager 110.

In some embodiments, this approach can be used to mitigate the lensing effects caused by the curvature of the container 10. Using a particle's 3D position, the particle size measured on the two-dimensional images acquired by each of the imagers 110 can be adjusted by correcting for the lensing effect (e.g., by modifying the lateral (horizontal) component of the size measurement by a lensing-effect scale factor). This scale factor can be determined based on an optical model (as described in detail above) of the propagation of light through the container 10 to each of the imagers 110.

Spectral Detection
FIG. 45 shows a sensor 4500 (as shown, a grating spectrometer) that can be used with a visual inspection unit 100 of the type described herein. For example, the sensor 4500 can form a fourth imaging arm used with the embodiment of the unit 100 shown in FIG. 38A.

The sensor 4500 can be used to detect characteristics (e.g., spectral characteristics) of one or more particles in the container 10. For example, as shown, the container 10 is illuminated by a broadband light source 122. The sensor 4500 receives light from the container 10 through distortion-correcting optics 4501 (e.g., of any of the types described above) and a telecentric lens 4501. Light from the lens 4501 is directed onto a diffraction grating 4503, which separates the spectral components of the light, which are then imaged onto an imaging sensor 4504. In some embodiments, the diffraction grating 4503 operates so that the position of incident light along one dimension (e.g., the vertical dimension) of the sensor 4504 corresponds to the wavelength of the light. The other dimension of the imaging sensor 4504 corresponds to different spatial positions within the container 10. That is, the sensor 4500 provides spectral information for a sub-region of the container, e.g., in a configuration in which the sub-region shown is a horizontal "slice" of the container 10.

When a particle passes through this central horizontal plane, its spectral signature can be recorded. Meanwhile, as described in detail above, the conventional imaging arms of the unit 100 can be used to track the positions of particles within the container (e.g., in three dimensions). This information can be used to determine when a given particle enters the detection sub-region covered by the sensor 4500. When a particle enters the sub-region, the sensor 4500 will sense a characteristic of the particle (e.g., its spectral signature). The unit 100 can generate data related to this characteristic and associate it with the data in the tracking data that identifies the particle.

In various embodiments, the characteristic data can be used for any suitable purpose (e.g., identifying the particle type). For example, spectral information about a given particle can be combined with information about the particle's size, shape, movement, or other information in order to determine the type of the particle.

In some embodiments, the sensor 4500 and illumination source 122 can be modified to detect particle fluorescence or any other suitable characteristic. In general, any spectral characteristic of a particle can be detected, including color, absorption spectrum, emission spectrum, or transmission spectrum, or any combination of these.

Although in the example described above the sensor 4500 is included in a unit 100 featuring three imaging arms, in other embodiments, any other suitable number of imaging arms may be used (e.g., one, two, four, five, or more). In some embodiments in which a single imaging arm is used, the sensor 4500 can be aligned with the imaging arm, e.g., by using a beam splitter (not shown) that splits the beam from the container 10 and directs the components to the single imaging arm and to the sensor 4500. In other embodiments (e.g., in which multiple imaging arms are used), the sensor 4500 can be oriented at any suitable angle relative to the imagers.

Examples
The following provides illustrative performance characteristics for embodiments of automated visual inspection units 100 of the type described herein.

Referring to FIG. 40, the unit 100 was presented with containers 10, each containing only a single polymer sphere of known size. Multiple detection runs (n=80) were performed on each container, and the detection percentage was measured (the data bars labeled "APT" in the figure). As shown, the system's detection percentage is greater than 90% for particle sizes ranging from 15 μm to 200 μm in diameter. The detection percentage for the same task performed visually by trained humans is presented for comparison (the data bars labeled "Human"). Note that for particles smaller than 200
μm, human detection capability falls off rapidly.

Referring to FIG. 41, in another test, the unit 100 was presented with containers holding particles with diameters above and below the visible cutoff of 125 μm. The unit 100 detected the particles and also classified them by size as above or below the 125 μm visible cutoff. As shown, the system's detection percentage is greater than 90% for particle sizes ranging from 15 μm to 200 μm in diameter. The unit 100 also correctly classified the detected particles with extremely high accuracy.

Referring to FIG. 42, dilution series were produced for multiple size standards, each series consisting of containers holding particles at a given concentration. The resulting containers were analyzed by the unit 100 to provide particle counts, and regression was used to determine the R-squared ("R^2") value for the linearity of count versus inverse dilution factor. As shown, for particle sizes varying from 15 μm to 200 μm, the "R^2" values were above 0.95, indicating excellent linearity.

Referring to FIG. 43, stressed samples containing protein particles were analyzed by the unit 100 to determine particle counts binned by particle size. The precision of the particle counts for each bin over 10 runs is shown. The protein particle sizes were unknown, making an absolute size-accuracy comparison impossible; however, as shown, the precision of the system for counting and sizing protein is high. The normalized error of the measurements was 3%, indicating excellent precision.

Referring to FIG. 44, the unit 100 was also characterized on detecting blanks versus vials containing protein particles. The performance of the unit 100 was compared with that of certified visual inspectors observing the same set of vials. The unit 100 (labeled "APT" in the figure) correctly detected all 40 protein vials and 80 blanks in triplicate runs. Its score for classifying visible particles versus particles visible only under a microscope was 100%. Humans scored only about 85% in both classifications.

Conclusion
Those of ordinary skill in the art will appreciate that the processes involved in the automated systems and methods for non-destructive particle detection and identification (processing time-series data acquired via visual inspection) may be embodied in an article of manufacture that includes a computer-usable medium. For example, such a computer-usable medium can include a readable memory device, such as a hard drive device, a CD-ROM, a DVD-ROM, a computer diskette, or solid-state memory components (ROM, RAM), having computer-readable program code segments stored thereon. The computer-readable medium can also include a communications or transmission medium, such as a bus or a communications link (optical, wired, or wireless), carrying program code segments as digital or analog data signals.

Flow diagrams are used herein. The use of flow diagrams is not meant to be limiting with respect to the order of operations performed. The herein-described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermediate components. Likewise, any two components so associated can also be viewed as being "operably connected" or "operably coupled" to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably couplable" to each other to achieve the desired functionality. Specific examples of operably couplable include, but are not limited to, physically mateable and/or physically interacting components, and/or wirelessly interactable and/or wirelessly interacting components, and/or logically interacting and/or logically interactable components.

With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for the sake of clarity.

It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims), are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation, no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations.

However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite article "a" limits any particular claim containing such introduced claim recitation to subject matter containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and an indefinite article such as "a" (e.g., "a" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).
Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general, such a construction is intended in the sense that one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general, such a construction is intended in the sense that one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).

It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, the claims, or the drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A," "B," or "A and B."

As used herein, the term optical element may refer to one or more refractive, reflective, diffractive, holographic, polarizing, or filtering elements in any suitable combination. As used herein, terms such as "light," "optical," or other related terms should be understood to refer not only to light visible to the human eye, but also to include, for example, light in the ultraviolet, visible, and infrared portions of the electromagnetic spectrum.

The foregoing description of illustrative embodiments has been presented for purposes of illustration and description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosed technology and, together with the description, serve to explain the principles of the disclosed technology.

FIG. 1A shows an exemplary automated visual inspection unit 100 configured to non-destructively detect and/or identify particles in a transparent container 10 at least partially filled with a fluid, such as a protein-based pharmaceutical composition, drug, biotechnology product, beverage, or other translucent fluid regulated by the U.S. Food and Drug Administration. Although in typical embodiments detection of the presence or absence of particles can be accomplished by viewing the container even through externally non-uniform portions (e.g., its heel), for particle characterization measurements such as counting and sizing it may be necessary to view the particles through the substantially uniform vertical walls of the container in order to mitigate distortion. This has implications for the minimum fill volume, since the apparent two-dimensional cross-section of the fluid in the container 10 visible to the unit 100 must have an appropriate area to provide usable statistics.
The required fill volume depends on the diameter of the container (the smaller the container, the smaller the required fill volume). In various embodiments, the internal volume of the container may be at least 1%, at least 5%, at least 10%, at least 20%, at least 30%, at least 40%, at least 50%, at least 60%, at least 70%, at least 80%, at least 90%, or at least 100% filled with fluid. In various embodiments, the particle detection techniques described herein are optical in nature. Thus, in some embodiments, the walls of the container 10 are sufficiently transparent at the illumination wavelength to allow visualization of the liquid contained therein. For example, in some embodiments, the container 10 may be fabricated from clear borosilicate glass, although other suitable materials may be used. The turbidity of the fluid contained within the vessel is also important and should be low enough to allow the desired degree of visualization. In some embodiments, the fluid has a turbidity in the range of 0 to 100 NTU (Nephelometric Turbidity Units), preferably 0 to 20 NTU, and more preferably 0 to 10 NTU. Standard practice for turbidity measurement can be found, e.g., in the EPA Guidance Manual, Turbidity Provisions, Chapter 3 (April 1999). The illustrative systems can detect and identify transparent and/or translucent particles that refract and/or scatter light (e.g., protein aggregates, glass shards or flakes, and oil droplets), particles that reflect light (e.g., metal shards), and/or particles that absorb light due to their different optical properties (e.g., black carbon and plastic particles). Some inventive visual inspection units 100 can detect all three types of particles by using illumination sequences, such as those described below.
The visual inspection unit 100 of the present invention can also be specifically configured to detect, identify, and/or track proteins, which can appear as densely bound aggregates, loosely bound lint-like material with high water content, (reflective) crystals, gelatinous substances, and/or amorphous aggregates. The term "protein," used interchangeably with the term "polypeptide," refers in its broadest sense to a compound of two or more subunit amino acids, amino acid analogs, or peptidomimetics. The subunits may be linked by peptide bonds. In another embodiment, the subunits may be linked by other bonds (e.g., esters, ethers, etc.). As used herein, the term "amino acid" refers to natural and/or unnatural or synthetic amino acids, including glycine and both the D and L optical isomers, amino acid analogs, and peptidomimetics. A peptide of three or more amino acids is commonly called an oligopeptide if the peptide chain is short. If the peptide chain is long, the peptide is commonly called a polypeptide or a protein. As used herein, a peptide fragment is also referred to as a peptide chain. The container 10 may be a rectangular or cylindrical vessel (e.g., a cuvette, vial, ampoule, cartridge, test tube, or syringe) made of glass or plastic; it may also have another shape and/or be made of a different material, as long as it provides visualization of the container's contents at the imaging wavelength. While certain embodiments provide clear and undisturbed visualization of the container contents, other embodiments may time image acquisition to coincide with periods when the container is undisturbed and/or use post-processing to compensate for distortions in the recorded data. The unit 100 includes an imager 110 with collection optics that project an image of the container's contents onto a sensor. In this case, the collection optics comprise a telecentric lens 114, and the sensor is a charge-coupled device (CCD) 112.
A memory 140 coupled to the CCD 112 records and stores a stream of images representing the container's contents, and a processor 130 coupled to the memory 140 analyzes the recorded sequence of images to detect and identify particles in the container 10, as described below. As will be appreciated by those skilled in the art, the processor 130 may be implemented with a suitably configured general-purpose computer (e.g., one using an Intel® Core™ i5 or Advanced Micro Devices Athlon™ processor), a field-programmable gate array (e.g., an Altera® Stratix® or Xilinx® Spartan®-6 FPGA), or an application-specific integrated circuit. The memory 140 may be implemented in solid-state memory (e.g., flash memory), optical disk (e.g., CD or DVD), or magnetic media, and may be selected to be of any suitable size (e.g., 1 GB, 10 GB, 100 GB, or larger). An illumination system 120 including one or more light sources 122a and 122b disposed about the container 10 illuminates the container 10 and its contents during image acquisition. The visual inspection unit 100 can be integrated into an inspection module 160, as in FIG. 1(b), that also includes a spindle 150, a shaker, an ultrasonic vibrator, or another agitator that spins, shakes, or otherwise agitates the container contents prior to imaging and that holds the container 10 during imaging. FIG. 1(c) shows a medium- to high-throughput visual inspection platform 170, which includes one or more inspection modules 160-1 through 160-5 (collectively, inspection modules 160), a robot 180, and a vial tray 172 that holds uninspected and/or inspected containers 10 in individual container wells. Following instructions from a user or an automated controller (not shown), the robot 180 moves a container 10 from the vial tray 172 to an inspection module 160, which captures and records time-series data of particles moving in the container 10. The robot 180 then returns the container 10 to the vial tray 172.
In some examples, the top layer of the vial tray 172 and/or the rims of the container wells are made of Delrin® acetal resin or another similar material, and the inner edges of the container wells are beveled to prevent the containers 10 from becoming scratched when inserted into and removed from the wells. The vial tray 172 may include a base layer made of aluminum or another similar material that is not prone to warping or breaking. The walls of the container wells are typically thick enough to hold the vials snugly as the tray 172 is carried (e.g., by a person) to and from the visual inspection platform 170. Depending on its configuration, the vial tray 172 can hold the containers 10 in predefined positions within micron-scale tolerances to facilitate container retrieval and insertion by the robot 180, which can operate with micron-scale precision. The robot 180 is a "pick and place" system that pulls vials from the tray 172, moves each container 10 along a track 182 extending from above the tray 172 to above the spindle 160, and places the container 10 on a particular spindle 160. Some robots can also be configured to spin the container 10 before placing it, thereby eliminating the need for the spindle 160. Alternatively, the robot 180 may include a six-axis robotic arm that can spin, shake, and/or vibrate the container 10 (e.g., to perform the syringe agitation described below), which also eliminates the need for the spindle 160. Those skilled in the art will readily understand that other loading and agitation mechanisms and sequences may be used with the inventive visual inspection systems and procedures. The visual inspection platform 170 operates as shown in FIG. 2(a). In step 202, the container 10 to be inspected is cleaned (e.g., by hand using a suitable solvent), and then in step 204 the container 10 is loaded into the tray 172. The robot 180 draws the container 10 from the tray 172 and places the container 10 on the spindle 160.
Next, in step 206, the processor 130 determines the size and location of the meniscus and/or region of interest (ROI) (e.g., the portion of the container 10 filled with fluid) from an image of the static container 10 acquired by the imager 110. Alternatively, the user may specify the location of the meniscus and/or region of interest if the fill volume and the container shape and volume are known with sufficient certainty. Once the processor 130 has located the ROI, the spindle 160 spins and stops the container 10 in step 208, which sets the fluid in motion and causes the particles in the container 10 to become suspended in the moving fluid. In step 210, the imager 110 records time-series data in the memory 140 in the form of a sequence of still images (referred to as "frames") representing snapshots of the ROI taken at regularly spaced time intervals. After the imager 110 has acquired sufficient time-series data, the processor 130 subtracts background data that may represent dust and/or scratches on one or more of the container's surfaces. The processor 130 may also filter noise from the time-series data, as understood by those skilled in the art, and perform intensity thresholding as described below. The processor 130 also reverses the order of the time-series data. That is, if each frame in the time-series data has an index 1, 2, ..., n0-1, n0 indicating the order in which it was acquired, the frames in the time-series data are inverted so as to be arranged with indices n0, n0-1, ..., 2, 1. If desired, the processor 130 also selects the start and end points of the data to be analyzed, as described below. (Those skilled in the art will readily appreciate that the processor 130 may perform background subtraction, noise filtering, intensity thresholding, time-series data reversal, and start/end point determination in any order.)
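The pre-processing steps above (background subtraction, intensity thresholding, and time-series reversal) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented method: the per-pixel minimum over time stands in for whatever static-background estimate an implementation would use, and the noise floor value is invented.

```python
import numpy as np

def preprocess(frames, noise_floor=10):
    """Subtract static features (dust/scratches that appear in every
    frame), threshold away low-intensity sensor noise, and reverse the
    frame order so analysis can run backward in time.

    `frames` is a (T, H, W) uint8 stack; taking the minimum across time
    is a simple stand-in for the static-background estimate.
    """
    background = frames.min(axis=0)
    moving = frames.astype(np.int16) - background.astype(np.int16)
    moving = np.clip(moving, 0, 255).astype(np.uint8)   # keep valid 8-bit range
    moving[moving < noise_floor] = 0                    # intensity thresholding
    return moving[::-1]                                 # frame order n0, n0-1, ..., 1

# Synthetic stack: a fixed "scratch" pixel plus one particle drifting
# one pixel per frame.
frames = np.zeros((3, 4, 4), dtype=np.uint8)
frames[:, 0, 0] = 50               # static scratch present in every frame
for t in range(3):
    frames[t, 1, t] = 200          # moving particle
processed = preprocess(frames)
```

After processing, the scratch pixel is removed everywhere while the moving particle survives, and `processed[0]` corresponds to the last acquired frame.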
In step 212, the processor 130 tracks particles moving in or with the fluid; in step 214, the particles are then sized, counted, and/or otherwise characterized based on the particle trajectories. Each inspection module 160 can perform the same type of inspection, allowing parallel processing of containers 10; the number of modules 160 can be adjusted depending on the desired throughput. In other embodiments, each module 160 can be configured to perform a different type of inspection. For example, each module 160 may inspect particles at a different illumination wavelength: module 160-1 may look for particles that respond to visible light (i.e., radiation at wavelengths from about 390 nm to about 760 nm), module 160-2 may inspect containers with near-infrared illumination (760 nm to 1400 nm), module 160-3 may inspect with short-wavelength infrared illumination (1.4 µm to 3.0 µm), module 160-4 may inspect particles at ultraviolet wavelengths (10 nm to 390 nm), and module 160-5 may inspect particles at X-ray wavelengths (below 10 nm). Alternatively, one or more modules 160 may look for polarization effects and/or particle fluorescence. In an embodiment having different types of modules 160, the first module 160-1 may perform a preliminary inspection, and subsequent inspections may be performed depending on the results of the preliminary inspection. For example, the first module 160-1 may perform a visible-light inspection indicating that a particular container contains polarization-sensitive particles. The processor 130 may then instruct module 160-2, which is configured to perform polarization-based measurements, to inspect the container to confirm (or rule out) the presence of polarization-sensitive particles.
Visible-light time-series data acquired by module 160-1 may indicate the presence of particles in a particular container 10, but not the type of particle, which may cause the processor 130 to order an infrared inspection at, for example, module 160-3.

Container Agitation to Induce Particle Movement
As described above, mechanically agitating the container 10 causes particles at the bottom of the container 10 or on the container's inner walls to become suspended in the fluid within the container. In particular embodiments, the user and/or the visual inspection system selects and executes an agitation sequence that causes the fluid in the container to enter a laminar flow regime, in which the fluid flows in parallel layers without disruptions such as eddies or swirls between the layers. In fluid dynamics, laminar flow is the flow regime characterized by high momentum diffusion and low momentum convection; in other words, laminar flow is the opposite of turbulent flow. Agitation also causes the particles to become suspended in the moving fluid. Eventually, friction stops the fluid from moving, at which point the particles can stick to the container walls or settle to the bottom of the container. Laminar flow produces smoother particle motion than turbulent flow, which makes it easier to estimate particle trajectories. (Of course, the processor can also be configured to estimate particle trajectories in certain turbulent regimes, provided the sensor frame rate is fast enough to capture "smooth" segments of the particle trajectories.) If desired, the container may be agitated in a way that creates substantially laminar flow. For example, the spindle may rotate the container at a specific speed (or speed profile) for a specific time, as determined from measurements of fluid behavior for different container sizes and shapes and/or different liquid levels and viscosities.
In one particular embodiment, a servo motor or a stepper motor drives a spindle holding a cylindrical container so that the container spins about its central axis, as shown in FIG. 3(a). Spinning the container 10 at a sufficient speed causes even heavy particles, such as metal fragments, to rise from the bottom of the container 10 into the fluid. For many fluids and particles, the motor drives the spindle holding the container 10 at 300 rpm for about 3 seconds. (Higher spin speeds may be needed to excite heavy particles.) After 3 seconds of spinning, the motor stops abruptly, and the fluid is allowed to flow freely in the now-stationary container. At this point, the imager 110 begins capturing video of the rotating fluid. The memory 140 records video for up to about 7 to 15 seconds, depending on the size of the container under inspection (the memory 140 records less video of fluid in smaller containers because the fluid in smaller containers decelerates more rapidly due to the effect of increased wall drag). In another embodiment, the spindle rotates the container 10 in a two-phase agitation/imaging sequence. In the first phase, the spindle spins the container 10 at 300 rpm for 3 seconds so that less dense (and finer) particles such as proteins become suspended in the moving fluid. The imager 110 then captures video of the protein in the moving fluid. Once the imager 110 has collected enough time-series data, the second phase begins: the spindle rotates the container 10 at about 1600 rpm to 1800 rpm for 1 to 3 seconds so that denser particles such as metal fragments become suspended in the moving fluid, and the imager 110 captures time-series data representing the denser particles moving in the container 10. The high-speed rotation in the second phase can be strong enough to temporarily dissolve or denature protein aggregates, which can re-form after the fluid slows or stops moving.
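The two-phase sequence above amounts to a small agitation/imaging schedule. The sketch below encodes it as plain data; the rpm values and durations come from the text, while the data layout, field names, and the 10-second recording time (within the stated 7-15 s window) are illustrative assumptions, not part of the patented system.

```python
# Hypothetical encoding of the two-phase agitation/imaging sequence.
TWO_PHASE_SEQUENCE = [
    {"phase": "spin",   "rpm": 300,  "seconds": 3},   # suspend light particles (e.g., protein)
    {"phase": "record", "rpm": 0,    "seconds": 10},  # 7-15 s depending on container size
    {"phase": "spin",   "rpm": 1700, "seconds": 2},   # ~1600-1800 rpm suspends dense particles
    {"phase": "record", "rpm": 0,    "seconds": 10},
]

def total_duration(sequence):
    """Total wall-clock time of an agitation/imaging sequence, in seconds."""
    return sum(step["seconds"] for step in sequence)
```

A controller could iterate over such a table, commanding the spindle during "spin" steps and triggering the imager during "record" steps; the exact profile for a given fluid/vessel combination would be determined by the routine experimentation the text describes.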
The two-phase operation makes it possible to detect both dense particles that may not be excited by low-speed rotation and proteins that may be denatured by high-speed rotation. Other rotation sequences can also be used with the inventive systems, depending on (but not limited to) any of the following parameters: fluid viscosity, fluid fill level, fluid type, surface tension, container shape, container size, container material, container texture, particle size, particle shape, particle type, and particle density. For example, the present systems can spin larger containers for longer periods of time before imaging the container contents. The exact agitation profile for a given fluid/vessel combination can be calculated, characterized, and/or determined by routine experimentation. If the visual inspection module uses a predetermined agitation sequence for a well-characterized container/fluid combination, it can trigger data acquisition only when the fluid (and suspended particles) is in a laminar flow regime. Alternatively, it can acquire additional time-series data, and the processor can automatically select the start and end frames based on the container/fluid combination and/or the agitation sequence. Any of the visual inspection systems described above may also be used to detect and/or identify intrinsic and foreign particles in a syringe 12 at least partially filled with a drug product 32 or other fluid, as shown in FIG. 3B. Syringes 12 are often stored with the needle pointing down, so particles can settle in the needle 34 of the syringe. To visualize these particles, the robot or a human turns the syringe 12 upside down, that is, rotates the syringe 12 by 180° about an axis perpendicular to its longitudinal axis so that the needle 34 points upward. Particles that have settled in the needle 34 fall vertically for visualization by the imager 110.
A robot or human can also spin the syringe during inversion to set the fluid fully in motion. Many syringes 12 have barrels with relatively small inner diameters (e.g., about 5 mm), which significantly increases the effect of wall drag. For many drug products 32, wall drag stops all rotational fluid motion within about 1 second, which is an extremely short time window for practical particle analysis. Fortunately, rocking the syringe 12 slightly about an axis perpendicular to its longitudinal axis (as shown in FIG. 3(c)) produces particle motion lasting longer than 1 second. The particles can be agitated by the movement of the syringe 12 and of any air bubble 30 oscillating within the syringe barrel as the syringe 12 is rocked laterally by a robot or by hand. The visual inspection modules, units, and platforms described above are designed to be reconfigurable and adaptable to this alternative method of agitation. Once agitation is complete, the visual inspection system should remain stationary during the video-recording phase. Because high-resolution imagers are commonly used, the spatial resolution of the images is extremely fine (e.g., on the order of 10 microns or less) and can be at least as fine as the diffraction limit. For some configurations, a small (e.g., 10 micron) movement of the sample is equivalent to a full-pixel shift in the detected image. Such motion compromises the effectiveness of static-feature removal (background subtraction), which in turn degrades the performance of the analysis tools and the integrity of the output data. With this in mind, vibration isolation is a key design consideration. In particular embodiments, the base of the illustrative visual inspection system is mechanically isolated from the laboratory environment, e.g., using vibration-damping shocks, floats, and/or gaskets. Additionally, inside the unit, vibration sources such as computers and robotic controllers can be mechanically isolated from the rest of the system.
Alternatively, data acquisition may be synchronized with the residual motion of the container relative to the imager, or the camera may perform pixel shifting or some other motion-compensating action. This residual motion can also be recorded so that the adverse effects of image motion can be removed in post-processing.

Imager Configuration
The illustrative visual inspection systems may use a standard off-the-shelf imager with any suitable sensor, including, but not limited to, a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) array. The choice of sensor is flexible and depends somewhat on the requirements of the particular application. For example, sensors with high frame rates enable accurate mapping of the trajectories of fast-moving particles (e.g., in low-viscosity fluids). Sensitivity and noise performance are also important because many protein particles are transparent in solution and scatter light weakly, producing faint images. To improve noise performance, the sensor can be cooled, as understood in the art. For most applications, monochrome sensors provide the best performance due to their slightly higher resolution and higher sensitivity compared with color cameras. However, for a small subset of applications, a color sensor may be preferable because it captures the color of a particle, which can be extremely important in determining its source (e.g., clothing fibers). For example, in product-quality investigations (also known as forensics), color sensors can be used to distinguish between different types of materials (e.g., fibers) that can contaminate a manufacturing facility for pharmaceutical products. To accomplish container inspection, the field of view of the imager should cover the entire fluid volume. At the same time, the imager should be able to resolve small particles.
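The tension between covering the whole fluid volume and resolving small particles can be made concrete with a quick calculation. This is illustrative arithmetic only; the field-of-view value below is an assumption chosen to match the roughly 10 µm/pixel figure discussed in this document, not a vendor specification.

```python
def required_pixels(field_of_view_mm, resolution_um_per_px):
    """Pixels needed along one axis to cover a field of view at a
    target spatial resolution (simple unit conversion)."""
    return int(field_of_view_mm * 1000 / resolution_um_per_px)

# e.g., a ~33 mm field of view at 10 um/pixel requires ~3300 pixels
# along that axis, which is why multi-megapixel sensors are needed.
pixels = required_pixels(33, 10)
```

In practice the sensor must meet this requirement on both axes simultaneously, which is what drives the choice of large-format, high-resolution sensors.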
The visual inspection systems achieve a large field of view and fine resolution with large-format, high-resolution sensors, such as the Allied Vision Technologies (AVT) Prosilica GX3300 eight-megapixel CCD sensor, which has 3296×2472 pixels. Other suitable sensors include the AVT Pike F505-B and the Basler Pilot piA2400-17gm five-megapixel cameras. With imaging optics chosen to image the full fluid-bearing body of a 1 ml BD Hypak syringe, the AVT Prosilica GX3300 CCD sensor captures time-series data with a spatial resolution of about 10 microns per pixel in the two lateral dimensions. The combination of high speed and high resolution means that recording time-series data can involve large data-transfer rates and large file sizes. Accordingly, the video-compression techniques described below are specifically designed to reduce data-storage requirements while preserving the integrity of the fine particle details captured in the images. The collection optics that image the region of interest onto the sensor should be chosen to provide sharp images of the entire volume, with a minimum spot size equal to or smaller than the sensor's pixel size, to ensure that the system operates at the finest possible resolution. Additionally, the collection optics preferably have a depth of field large enough to accommodate the entire sample volume. Telecentric lenses, such as the lens 114 shown in FIG. 4, are particularly suitable for visual inspection of fluid volumes because they are specifically designed so that magnification is insensitive to object depth. As understood by those skilled in the art, a telecentric lens is a multi-element lens in which the chief rays are collimated and parallel to the optical axis in image and/or object space, which results in constant magnification regardless of image and/or object position.
In other words, for objects within a certain range of distances from an imager with a telecentric lens, the image of the object captured by the imager is sharp and has constant magnification regardless of the distance of the object from the imager. As a result, the image of a particle at the "back" of the container 10 appears similar to the image of a particle at the "front" of the container 10. The use of telecentric lenses also reduces the detection of ambient light, provided that a uniform dark backdrop is used. Suitable telecentric lenses 114 include the Edmund Optics NT62-901 Large Format Telecentric Lens and the Edmund Optics NT56-675 TECHSPEC Silver Series 0.16x Telecentric Lens.

Container Specific Blind Spots

One goal of almost any visual inspection system is to provide 100% inspection of the container volume. In practice, however, there may be fixed zones in which particles cannot be detected, as shown in Figure 5A. First, the liquid surrounding the meniscus can be difficult to incorporate into the analysis because the meniscus itself scatters light in a way that can saturate the detector at that location, obscuring any particles or other features of interest. Second, for vials, the base of the container is usually curved at the corners, which are commonly referred to as the "heel." The curved heel distorts and ultimately obscures any particles that venture close enough to the bottom of the vial. Third, for syringes, the rubber stopper features a central cone that protrudes slightly into the container volume. The tip of this cone may hide particles, although the tip is small. The most subtle blind spot arises from the curvature of the vial. Cylindrical containers can also cause lensing effects, as shown in Figure 5B (indicated by curved rays 18), which undermine the effectiveness of telecentric lenses. The curved walls of the container also create blind spots 14. FIG.
5E shows an example of the lensing effect caused by the cylindrical container 10. The camera/viewer is at the bottom of the figure. As described above, a telecentric lens can be used when imaging particles in the container 10 to ensure that the particles have a consistent appearance in the image, independent of the particle's position in the container, ie, its distance from the camera. To achieve this, in some embodiments, the depth of focus of the telecentric lens is chosen to be larger than the diameter of the fluid volume. Container curvature, however, defeats this principle in the absence of corrective optics. As shown, the shape and magnification of imaged particles in the container 10 depend on the location of the particles in the container. The particle 501 in the center of the front face of the container is not distorted at all (top inset). The same particle 502 on the rear side is most distorted (bottom inset). Note that for cylindrical vessels, the distortion occurs only along the horizontal axis (as evident in the bottom inset). To mitigate these effects, optional corrective optics, such as corrective lens 116, are placed between telecentric lens 114 and container 10, as shown in Figure 5C. Additional spatial correction optics 118 can provide further compensation for distortion caused by the shape of the container, as shown in Figure 5D. In various embodiments, any suitable corrective optics may be used in addition to or in place of corrective lens 116 and optics 118, eg, tailored based on the curvature of container 10 and/or the refractive index of the fluid. For example, in some embodiments, a model of the lensing effect caused by the cylindrical container 10 can be developed.
The model may be based on a suitable set of parameters characterizing the optical distortion, including, for example, the outer diameter of the container, the inner diameter of the container, the refractive index of the container, the refractive index of the liquid, and the wavelength of the illumination light. The model may be developed using any suitable technique known in the art, including, for example, ray tracing techniques. Figure 5F shows a theoretical model of the lensing effect for two different sets of vessel parameters (upper left, lower left), along with examples of experimental data for the corresponding physical cases (upper right, lower right). As shown, the theoretical model is in excellent agreement with the experimental data. Referring to Figures 5G and 5H, corrective optics 503, shown here as lenses, are used to correct the lensing effect described above. The design of the corrective optics can be based on a theoretical optical model of the container, on experimental data indicative of the optical properties of the container, or on a combination thereof. As shown, corrective optics 503 are made of a refractive material with cylindrical front and back surfaces. In some embodiments, the design of the lens can be determined using free parameters including the radii of the front and back surfaces, the thickness of the lens, the refractive index of the lens, and the position of the lens relative to the container. In some embodiments, other shapes can be used for the front and back surfaces of the lens (eg, parabolic or any custom shape). In some embodiments, relaxing the requirement that the surfaces be cylindrical will increase the size of the design parameter space of the corrective optics 503, thereby allowing improved correction. In some embodiments, corrective optics 503 may include multiple elements, thereby further increasing the design parameter space.
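The ray-tracing approach mentioned above amounts to repeated application of Snell's law at each interface a ray crosses (air to glass, glass to liquid, and back). A minimal helper is sketched below; the function name is an illustrative assumption, and the full two-dimensional trace through the cylinder is omitted.

```python
import math

def refract_angle(theta_i, n1, n2):
    """Snell's law: angle of refraction (radians) for a ray crossing
    from a medium of index n1 into a medium of index n2, or None if
    the ray undergoes total internal reflection."""
    s = (n1 / n2) * math.sin(theta_i)
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.asin(s)
```

Tracing a fan of rays from the imager through the container wall (eg, n ≈ 1.5 for glass) into the liquid (eg, n ≈ 1.33 for water) reproduces the horizontal-only distortion of a cylindrical vessel discussed above.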
In some embodiments, corrective optics 503 can correct other types of optical distortions, aberrations, or other effects. For example, where illumination at multiple wavelengths is used, corrective optics 503 may be used to correct for chromatic aberrations. In some embodiments, corrective optics 503 may be designed to correct for distortions caused by particular container and/or fluid types. Because a single automated visual inspection unit 100 can be used with multiple container types, it may be desirable in some embodiments to allow the corrective optics 503 to be selectively changed to match the particular container 10 under inspection. For example, FIG. 5I shows a holder 504 holding a plurality of corrective optical elements 503. The holder can be moved (manually or automatically) to place a selected one of the elements into the optical chain of the imager 110. Note that although a holder is shown, in various embodiments any other suitable mechanism for selecting an optical element from a set of multiple optical elements may be used. Alternative visual inspection systems may include adaptive optics that compensate for distortions due to the curvature of the container. For example, telecentric lens 114 may be configured to capture an image of container 10 reflected from a deformable mirror, such as a microelectromechanical systems (MEMS) mirror. The sensor 112 uses the background data to derive the nature and magnitude of aberrations arising from surface curvature, surface defects, and other imperfections in the container 10. The sensor 112 feeds this information back to the deformable mirror, which responds by adjusting its surface to compensate for the aberrations. For example, the deformable mirror can bend or warp in one direction to compensate for container curvature. Because the deformable mirror responds dynamically, it can be used to compensate for aberrations specific to each individual container 10.
In addition, particle tracking can be tuned to detect particle disappearance in conjunction with the known locations of these blind spots, allowing the program to predict whether and where the same particle may later reappear in the video sequence, as described below. Additional techniques for addressing issues related to blind spots (eg, using multiple imagers) are described below.

Camera frame rate

Effective particle tracking using the nearest-match (greedy) algorithm described below can be considered a function of three main factors: the camera capture rate (frame rate), the particle density (in the 2D image), and the typical particle velocity. For truly effective tracking using the nearest-match algorithm, the camera should preferably be fast enough to meet the following criterion:
(typical particle velocity) / (camera frame rate) ≪ (apparent minimum separation distance between particles)
In practice, when a 3D volume is projected onto a 2D image, particles can appear to be very close to each other (or even to shadow each other) even when the particles are actually well spaced in the container. With this in mind, it makes sense to consider the average nearest-neighbor distance rather than the apparent minimum separation distance between particles. Note here that the nearest-neighbor distance is the distance between adjacent particles within a given frame of the time-series data, whereas the nearest-match distance refers to the distance between the positions observed for a single particle in consecutive frames of the time-series data. Rewriting the criterion for camera speed in terms of the nearest-match distance gives:
(typical nearest-match distance) = (typical particle velocity) / (camera frame rate) ≪ (average nearest-neighbor distance)
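The nearest-match (greedy) linking step that this criterion supports can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function name is hypothetical, centroids are matched by plain Euclidean distance, and `max_match_dist` plays the role of the nearest-match distance bound discussed above.

```python
import numpy as np

def link_frames(prev_pts, next_pts, max_match_dist):
    """Greedily link particle centroids in consecutive frames.

    prev_pts, next_pts: (N, 2) and (M, 2) arrays of (x, y) centroids.
    max_match_dist: reject links longer than this; per the criterion
    above it should be much smaller than the average nearest-neighbor
    distance within a frame.
    Returns a list of (i, j) index pairs linking prev to next.
    """
    if len(prev_pts) == 0 or len(next_pts) == 0:
        return []
    # Pairwise distances between every candidate link.
    d = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    links, used_i, used_j = [], set(), set()
    # Greedy: accept the globally shortest remaining link first.
    for flat in np.argsort(d, axis=None):
        i, j = np.unravel_index(flat, d.shape)
        if d[i, j] > max_match_dist:
            break  # all remaining candidates are even longer
        if i not in used_i and j not in used_j:
            links.append((int(i), int(j)))
            used_i.add(i)
            used_j.add(j)
    return links
```

Repeating this link step over every consecutive pair of frames, and chaining the resulting index pairs, yields one trajectory per particle.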
Alternative visual inspection systems may use predictive tracking techniques instead of the nearest-match (greedy) particle tracking technique. Predictive techniques use knowledge of the particle's trajectory so far, combined with knowledge of the spatial constraints of the container and the expected fluid behavior, to estimate the most likely position of the particle in subsequent frames. When implemented properly, this method can more accurately track particles moving at high speed through densely populated images. Maximizing the spatial resolution of the image sensor is advantageous when attempting to detect and measure extremely small particles in relatively large containers. In general, however, this has the direct effect of reducing the maximum achievable frame rate of the sensor.

Visual inspection with multiple imagers

Use of a single camera can be compromised by the existence of the known blind spots. In addition, mapping the 3D particle distribution onto a 2D image can result in ambiguity due to occlusion (eg, as shown in Figure 5E, where a particle in the back center of the container is obscured by a particle in the front center). In principle, an alternative visual inspection system (eg, as shown in Figure 6) can solve this problem by correlating the results from two or more imaging systems. By correlating positional trajectory information from two or more cameras, it is possible to construct detailed 3D trajectory maps, which can be more robust and less prone to errors caused by occlusion than the 2D trajectory maps discussed below. Increasing the spatial resolution of the imager also limits the data acquisition rate (frame rate) for a given particle concentration and particle velocity. When inspecting an unknown container, there may be no guarantee that the particle concentration will be reasonably low.
At the same time, in order to suspend heavy particles such as glass or metal in the fluid, the rotation rate of the container may need to be quite high, resulting in high particle velocities in the captured video stream. One way to resolve this conflict is to use the novel imaging hardware configurations described below. Even assuming that the best commercially available sensors are used and that the particles in the container scatter ample light, it is still possible to increase the data acquisition rate by multiplexing two or more sensors with a constant, reliable trigger from a dedicated trigger source. Additionally, the exemplary visual inspection system can be configured to provide spatial resolution finer than 10 microns by relaxing the requirement for full container inspection and instead considering only a subset of the volume. In general, this is acceptable for sub-visible particles, especially protein aggregates, because smaller particles tend to be present in higher numbers and to be more evenly distributed throughout the volume. Alternatively, an exemplary visual inspection system may provide both full container inspection and fine spatial resolution by using multiple imagers with different magnifications to acquire wide-area and fine-resolution time-series data in parallel. The different magnifications can be used simultaneously (eg, as in FIG. 6A), with one imager 1102 viewing the full vessel and a second, higher-magnification imager 1104 (eg, a long working distance microscope objective) trained on a smaller subvolume to examine, for example, extremely small particles (eg, particles having a diameter of about 10 microns, 5 microns, 1 micron, or less). Other visual inspection systems may include multiple imagers 1102, 1104, and 1106 positioned around the container 10, illuminated by one or more rings of light-emitting diodes (LEDs) 1120 mounted above and below the container 10, as shown in Figure 6B.
Identical imagers 1102 mounted at different locations provide binocular vision. Imager 1104, with a long working distance microscope objective, provides fine resolution for a sub-volume of container 10, and imager 1106, with an alternative sensor (eg, an infrared sensor, bolometer, etc.), provides additional time-series data. FIGS. 6C and 6D show alternative imaging configurations that exploit the properties of telecentric imaging. At the back aperture of the telecentric lens, a 50/50 beam-splitting block 1202 splits the projected image into two separate imaging arms. Each imaging arm may include a high-resolution, low-speed sensor 1222 that operates in an interleaved fashion with the sensor 1222 in the other arm (as shown in FIG. 6C) to double the frame rate. That is, operating both sensors 1222 simultaneously with a half-cycle relative phase offset improves the temporal resolution by a factor of two. The video streams can then be combined to provide a single video at twice the nominal sensor frame rate. Alternatively, each arm may include a different sensor, as shown in FIG. 6D, for example, to compensate for a fundamental trade-off of imaging sensor arrays: the finer the camera resolution, the slower the camera's maximum possible frame rate (for example, 10 to 50 or 15 to 25 frames per second at full resolution, and 50 to 200 frames per second at low resolution). For accurate particle tracking, the key sensor performance parameter is high temporal resolution (a high frame rate). However, for accurate particle sizing, the key sensor performance parameter is fine spatial resolution (as many pixels in the image as possible). Currently, the main limiting factor on spatial resolution and data transfer rate is the data transfer bus. For a standard PC bus (eg, a dual GigE or CameraLink bus), available imagers can capture time-series data of a four-centimeter-high container with a spatial resolution of about 10 microns per pixel at a data transfer rate of about 25 frames per second. FIG.
6D illustrates one way to achieve both a fast frame rate and fine resolution: the fluid is imaged by both a high-resolution, low-speed sensor 1222 and a sensor 1224 with moderate spatial resolution but a higher frame rate. External triggering ensures that both cameras are synchronized. Because the cameras view copies of the same image, their data can be directly correlated to produce improved particle analysis. FIGS. 7A and 7B illustrate the timing and control of the illumination source 120 and multiple cameras. In both FIGS. 7A and 7B, the trigger controller 702 emits two trigger signals (labeled ARM 1 and ARM 2 in FIGS. 7A and 7B) derived by decimating a master pulse signal. The ARM 1 trigger signal drives the first camera (1102a in FIG. 7A, 1222a in FIG. 7B), and the ARM 2 trigger signal drives the second camera (1102b in FIG. 7A, 1222b in FIG. 7B) in an interleaved fashion. That is, the trigger signals cause the first camera and the second camera to capture alternating sequences of frames. The trigger controller 702 can also drive the illumination source 120 with an illumination signal that causes the illumination source 120 to illuminate the container each time the first camera or the second camera acquires an image. Other trigger sequences are also possible; for example, the trigger controller 702 can drive additional cameras and/or a combination of high- and low-resolution cameras that acquire images at different frame rates. Other configurations are possible, as will be apparent to those skilled in the art. For example, the image sensors on each arm can be equivalent to each other, but the collection optics can be different. One arm may include additional image magnification optics that "zoom in" on a specific subset of the image, providing simultaneous wide and magnified fields of view.
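The interleaved triggering described above can be sketched as a simple merge of the two half-rate streams into one double-rate video; a minimal illustration, in which the function name and list-of-frames representation are assumptions:

```python
def interleave_streams(frames_a, frames_b):
    """Merge two sensor streams triggered with a half-cycle phase offset.

    frames_a holds frames captured at t = 0, T, 2T, ...;
    frames_b holds frames captured at t = T/2, 3T/2, ...
    The merged stream runs at twice the nominal sensor frame rate.
    """
    merged = []
    for a, b in zip(frames_a, frames_b):
        merged.append(a)  # even ticks: arm 1
        merged.append(b)  # odd ticks: arm 2
    # If arm 1 captured one extra frame, keep it at the end.
    if len(frames_a) > len(frames_b):
        merged.extend(frames_a[len(frames_b):])
    return merged
```

In hardware, the half-cycle offset comes from the external trigger controller, so the merged sequence has uniform frame spacing.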
Lighting configuration

The visual inspection system of the present invention exploits the various ways in which particles interact with light to detect and identify particles in fluid-bearing containers. The interaction of a particle with light is a complex function of many factors, including the particle's size, shape, refractive index, reflectivity, and opacity. Protein particles may scatter light primarily via refraction, while lamellar glass particles may primarily reflect light. Some particles, such as collagen fibers, can modify intrinsic physical properties of the light, such as rotating its polarization. Tailoring the detector, particle, and light geometries to maximize contrast between the various particle types enables highly accurate detection and differentiation. FIGS. 8-12 show various lighting configurations that can be customized, or switched/actuated among different lighting modes, for specific types of particles, containers, and/or fluids. For example, the light source can illuminate the particles in such a way as to maximize the amount of light they reflect or refract toward the detector, while keeping the background dark to maximize the contrast between the images of the particles and the background. Additionally, a source may emit radiation at any suitable wavelength or range of wavelengths. For example, it may emit broadband white light (390 nm to 760 nm), a narrowband beam (eg, at 632 nm), or even ultraviolet or X-ray radiation. Suitable ranges include 10 nm to 3000 nm, 100 nm to 390 nm (ultraviolet), 390 nm to 760 nm (visible), 760 nm to 1400 nm (near infrared), and 1400 nm to 3000 nm (mid-wavelength infrared). X-ray emission (<10 nm) is also possible. Taken as a whole, the array of illumination options disclosed herein allows the visual inspection system of the present invention to detect and identify the full range of particles that may be present in a drug product.
Because some particles scatter light only very weakly, it is often beneficial to irradiate the sample with as much light as possible. The upper limit on sample irradiation is driven primarily by the photosensitivity of the product under inspection. Judicious choice of wavelength may also be necessary, especially for biological products; the exact choice depends on the product being illuminated. Monochromatic red light centered at 630 nm represents a "happy medium" and is a readily available wavelength in terms of affordable light sources. LED arrays, such as the LDL2 series LED arrays from CCS Lighting, are effective for illuminating the particles seen in pharmaceutical products; however, collimated laser beams can also be used. In some cases, the illumination optics can pattern or shape the illumination beam so that it is collimated within the fluid volume (as opposed to outside the container). As an alternative light source, if heating from the light source is a concern, the light can be delivered to the examination region through optical waveguides or optical fibers 124, as shown in FIG. 8. The illumination wavelength can be selected based on the absorbance and/or reflectivity of the fluid and/or particles being analyzed; this is especially important for photosensitive pharmaceutical products. Red light (630 nm) provides an excellent balance between the low absorption of proteins and that of water. Gating the illumination synchronously with time-series data acquisition further protects the integrity of photosensitive pharmaceutical products by minimizing the product's exposure to incident light. Gating has two other advantages: LEDs operate more efficiently when driven in this manner, and gating reduces the effects of motion blur, which can pose an unnoticed hazard to particle size measurements, as described below.
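The motion-blur hazard can be quantified with simple arithmetic: the apparent streak length of a moving particle is its speed multiplied by the effective exposure time. The sketch below is illustrative only; the 10 micron-per-pixel scale is taken from the syringe example earlier, and the function name is hypothetical.

```python
def motion_blur_px(speed_um_per_s, exposure_s, um_per_px=10.0):
    """Streak length, in pixels, traced by a particle moving at
    speed_um_per_s (microns/second) during one exposure of
    exposure_s seconds, at um_per_px microns per pixel."""
    return speed_um_per_s * exposure_s / um_per_px

# A particle drifting at 1 mm/s smears across 4 pixels during a 40 ms
# exposure, but only 0.1 pixel when the light is gated to a 1 ms pulse:
blur_full = motion_blur_px(1000.0, 0.040)   # 4.0 px
blur_gated = motion_blur_px(1000.0, 0.001)  # 0.1 px
```

Gating the light source thus shrinks the streak without changing the camera's frame period.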
FIG. 8 shows an exemplary reconfigurable lighting system 120 that includes a number of light sources 122a through 122f (collectively, light sources 122), which may be LEDs, lasers, fluorescent or incandescent bulbs, flash lamps, or any other suitable light source or combination of light sources. The light sources 122 can emit visible light, infrared, and/or ultraviolet radiation. The light can be narrowband or broadband as desired, and can be filtered using appropriate optical filters or polarizers. For example, in FIG. 8, polarizer 129 polarizes the light emitted by light source 122f, which backlights the container. In addition to the backlight 122f, the lighting system 120 includes four light sources 122a through 122d at the corners of a rectangular enclosure around the container 10. Another light source 122e illuminates the container 10 from the bottom via an optical fiber 124 coupled to a collimator 126 directed toward the bottom of the container 10. In some cases, the fiber 124 and collimator 126 may be housed inside hollow shaft 128, the shaft used to rotate the vessel. The plurality of light sources 122 shown in FIG. 8 can be used to differentiate particles based on the ways in which they interact with light. As understood by those skilled in the art, different particles interact with light in different ways. Common interaction modes include scattering, reflecting, occluding, or rotating the polarization of light, as shown in Table 1, where an "X" indicates that a particle of that type will appear using the given lighting technique, as exemplified in Figures 9A-9D and Figure 11 (described below), and an "M" indicates that particles of that type may appear using the given technique but can still be detected/differentiated using post-processing image segmentation and feature recognition techniques.
Table 1: Light interactions for various particle types

  Particle type:       Protein     Flakes      Opaque     Cellulose            Air
  Main interaction:    Scattering  Reflection  Occlusion  Polarization change  Scattering

  Lighting technique
  Rear angled          X           X           X          X                    X
  Bottom                           X           M
  Backlight                                    X
  Polarized            M           M                      X                    M

FIGS. 9A-9C illustrate different illumination patterns that may be implemented by the illumination system 120 of FIG. 8 (some light sources 122 are omitted for clarity) to distinguish particle types based on their light interactions. In FIG. 9A, light sources 122a and 122b provide rear angled illumination, which is used to visualize proteins, as well as most other particle types that scatter light. In FIG. 9B, light source 122e provides bottom lighting, which is used to reveal reflective particles, such as glass flakes, that reflect light toward the imager 110 (horizontal arrows); particles that scatter but do not reflect light (eg, proteins) may not appear on the sensor (diagonal arrow). In FIG. 9C, light source 122f provides a uniform backlight, which is used to reveal light-occluding particles such as metals, dark plastics, and fibers. Those skilled in the art will readily appreciate that other light sources and/or illumination patterns and sequences are possible. FIG. 9D shows the manner in which the lighting techniques of FIGS. 9A-9C may be applied sequentially to capture time-series data of scattering, reflecting, and/or occluding particles. In this case, a system containing a uniform backlight, rear angled lights, a bottom light, and a single camera alternates the lighting each frame so that only one particular light source 122 (or combination of light sources 122) is active at a time. For a single imager (not shown), only one set of lights is used for each acquired frame of time-series data. This sequence is repeated to provide a video for each lighting configuration. Sequentially applying the lighting techniques described above while acquiring a video sequence thus provides a near-simultaneous video for each light source 122.
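Once such a strobed sequence is acquired, splitting it back into one sub-video per lighting technique is simple demultiplexing. A minimal sketch, assuming the lights cycle in a fixed order and exactly one source (or combination) fires per frame; the function name is hypothetical:

```python
def demux_lighting(frames, num_lights):
    """Separate a frame sequence captured under cyclically alternating
    light sources into one sub-video per lighting technique.

    Frame k was captured under light source k mod num_lights, so each
    sub-video is every num_lights-th frame, offset by the light index.
    """
    return [frames[i::num_lights] for i in range(num_lights)]
```

With three lighting techniques, `demux_lighting(frames, 3)` yields the three interlaced videos discussed next, each at one third of the camera's frame rate.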
When complete, this yields three interlaced videos, one for each lighting technique. For each video, the particles in a given frame can be related to the same particles in the other two videos (neglecting the small time differences between frames) acquired using the alternating lighting techniques. Using the mutual information contained in the way a given particle interacts with the various lighting techniques, conclusions can be drawn about the particle's material composition. This technique can be combined with other image feature extraction information to increase specificity. For example, the video can be automatically segmented to determine the features within each frame. For each lighting technique, information about each feature, such as size, shape, brightness, smoothness, etc., can be automatically determined. This can help distinguish different particle types that have similar signatures in terms of their visibility under each of the different lighting techniques. FIGS. 10A-10C illustrate ways to reduce glare caused by unwanted reflection/refraction of light from a light source 122 external to the container 10. Illumination of the container 10 causes unwanted glare to appear in the image captured by the imager 110 when the imager's optical axis is aligned with the direction of travel of light from the light source 122 that reflects off the container surface. Glare can saturate regions of the sensor, obscuring otherwise detectable particles. Positioning the imager 110 or the light source 122 so that the optical axis of the imager is not aligned with or parallel to light emitted by the light source 122 and reflected off the container surface reduces or eliminates the glare detected by the sensor. For example, placing light source 122 outside an exclusion zone defined by rotating the imager about the longitudinal axis of the container 10 reduces the amount of unwanted reflected and/or refracted light captured by the imager.
Alternatively, the zone 1000 may be defined as a slab perpendicular to the central axis of the cylindrical vessel, with a thickness equal to the height of the vessel's vertical wall. As understood in the art, containers with more complex shapes, such as concave sidewalls, may have different exclusion zones and different corrective optics. Illuminating the container sidewall obliquely from above or below the zone 1000, or straight up through the container base, also reduces the glare detected by the imager 110. Illuminating the container 10 from below (eg, by light source 122e (FIG. 8)) also provides excellent contrast between light-reflecting particles (eg, glass flakes) and light-scattering particles (eg, proteins). FIGS. 10D-10E illustrate alternative lighting schemes for reducing or eliminating glare from the container 10, in which one or more light sources 122 are placed within the exclusion zone described above (eg, in the horizontal plane of the container 10). FIGS. 10D-10E show an optical model of light rays traveling out from the sensor of imager 110, through the imager's imaging optics (including a telecentric lens, as shown), and back through the container 10. A light source placed along any of the rays propagating back from the sensor would refract or reflect light onto the sensor, thereby potentially obscuring the container 10 and its contents. Note, however, that the two regions 1001, located in the horizontal plane of the container 10 and close to the outer wall of the container 10, are not traversed by the back-propagated rays. As shown in Figure 10E, if one or more light sources 122 are placed in the regions 1001, glare from those light sources can be reduced or substantially eliminated. Note that because a telecentric lens is used in the example shown, only rays incident perpendicularly on the sensor need be considered in the ray-optics model. However, a similar approach can be applied to other types of imaging optics by accounting for additional rays.
For example, in some embodiments, a representative set of rays (eg, including the chief rays of the imaging system) may be back-propagated from the sensor to identify regions containing no, or substantially no, back-propagated rays. Lighting sources can then be placed in the identified regions to avoid glare. Figure 11 shows a setup for distinguishing elongated protein aggregates from cellulose and/or fibers (natural or synthetic) by means of polarized light. The illumination system 120 emits light toward the container 10, which is sandwiched between crossed polarizers 900; in the absence of particles, the crossed polarizers yield a black image. Particles that modify (eg, rotate) the polarization of the incident light appear white in the time-series data detected by the imager 110. If the particles of interest are known to fluoresce, fluorescence imaging can be used for particle identification, as shown in FIG. 12. In this case, the illumination source 920 emits blue light that excites the particles of interest. A narrowband (eg, green) filter 922 placed in front of the imager 110 ensures that only fluorescent light from the excited particles reaches the detector. The illumination and filter wavelengths can be selected to suit the particular particles of interest. Finally, it is possible to detect (and identify) particles that neither scatter (refract) nor reflect light, such as small pieces of black, opaque material. For such opaque particles, the sample should be backlit directly from behind. The particles can then be identified as dark features on a bright background. If desired, the image of the opaque particles can be inverted so that it is scaled with the same polarity as the images of scattering and reflecting particles (ie, so that the particles appear as bright spots on a dark background rather than dark spots on a light background).
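For 8-bit frames, the inversion step just mentioned is a single operation; a minimal sketch, with illustrative names:

```python
import numpy as np

def normalize_polarity(frame, backlit):
    """Invert backlit (occlusion) frames so that particles appear as
    bright spots on a dark background, matching the polarity of
    scatter/reflection imagery. frame: 8-bit grayscale NumPy array."""
    return 255 - frame if backlit else frame
```

After this step, the same bright-feature detection and tracking code can run unchanged on all three lighting modes.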
Flake-Specific Visual Inspection Platform

As understood by those skilled in the art, glass flakes (lamellae) are thin, flexible pieces or fragments of glass formed by chemical reactions involving the interior surfaces of glass containers. The present systems and techniques can be used and/or adapted to detect, identify, and count glass flakes in order to prevent the administration of medications containing (excess) glass flakes. The present systems and techniques are also useful and/or adaptable for studying glass flake formation, which depends on the composition of a given formulation, and which differs from that of proteins and other types of particulate matter because flakes reflect and refract light. Without being bound by any particular theory, certain conditions appear more likely than others to promote or hinder glass flake formation. For example, glass vials manufactured by tubing processes and/or at higher heat tend to be less resistant to flake formation than molded glass vials. Drug solutions formulated at high pH (alkaline) and with certain buffers, such as citrate and tartrate, are also associated with flakes. The length of time the drug product remains exposed to the inner surface of the container and the temperature of the drug product also affect the chance that glass flakes will form. For more information, see, eg, U.S. Food and Drug Administration, Advisory to Drug Manufacturers: Formation of Glass Lamellae in Certain Injectable Drugs (March 25, 2011) (www.fda.gov/Drugs/DrugSafety/ucm248490.htm), which is incorporated herein by reference in its entirety. To create a system that discriminates based on this principle, the imager is typically aligned level with the vial and oriented so that the incident light shines up through the bottom of the container (perpendicular to the camera's optical axis).
This produces a very small signal from scattering particles (eg, proteins) and a large signal from reflecting particles (eg, glass flakes). In other words, a flake appears to shimmer intermittently as it floats through the vessel. This technique has been shown to be highly specific in distinguishing flake particles from protein aggregates. Additionally, the signal obtained using this imaging technique correlates with the concentration of flakes within the vial. Therefore, this technique can be used not only for the non-destructive detection of flakes in commercial products, but also as a tool for determining which formulation components lead to increased/decreased flake formation. FIGS. 13A and 13B show maximum intensity projection (MIP) images of glass flakes (FIG. 13A) and proteins (FIG. 13B) acquired by an illustrative visual inspection system. Conventional MIP images are used in computerized tomography to visualize a three-dimensional space viewed along one spatial axis (eg, the z-axis). A typical conventional MIP image represents the maximum value of the data taken along rays parallel to the visualization axis. In this case, however, the MIP images shown in Figures 13A and 13B are visualizations of data representing the time evolution of a two-dimensional image; that is, the projection is taken along the time axis rather than along a spatial axis. To generate the MIP images shown in FIGS. 13A and 13B, the processor selects the maximum value of at least some of the pixels in the time-series data, where each pixel represents the amount of light reflected (and/or transmitted) from a respective spatial location in the vessel. Plotting the resulting values produces a MIP image representing the brightest historical value of each pixel, such as the MIP images shown in FIGS. 13A and 13B. The processor scores the MIP image by counting the number of pixels in the MIP image whose values exceed a predetermined threshold.
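The time-axis MIP and threshold scoring just described can be sketched with NumPy. This is an illustrative sketch: the array shapes, the 8-bit intensity scale, and the function names are assumptions, and in practice the threshold would be calibrated against historical data from similar vessels.

```python
import numpy as np

def mip_over_time(frames):
    """Maximum intensity projection along the time axis.

    frames: array of shape (T, H, W); each pixel records the light
    reflected (and/or transmitted) from one spatial location in the
    vessel over time. Returns an (H, W) image holding each pixel's
    brightest historical value.
    """
    return np.max(frames, axis=0)

def flake_score(frames, threshold):
    """Score the MIP by counting pixels that ever exceeded threshold."""
    return int(np.sum(mip_over_time(frames) > threshold))
```

A vessel whose score exceeds the historical baseline for its container type would then be flagged as statistically likely to contain flakes.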
If the score exceeds a historical value representing the number of flakes in similar vessels, the processor determines that the vessel is statistically likely to contain glass flakes. The processor can also determine the severity of flake contamination by estimating the number, average size, and/or size distribution of the glass flakes from the MIP image. The inventive system can also distinguish glass flakes from other particles in the vessel, for example, based on differences in the amount of light reflected by the particles as a function of time and/or based on differences in the amount of light transmitted by the particles. Some non-flake particles may reflect light to the detector from a light source illuminating the vessel from below (e.g., light source 122e in FIG. 8). For example, glass chunks, metal chunks, and foreign fibers can appear bright in a bottom-lit configuration. These types of particles are detected consistently as they move through the container, in contrast to flakes, which are orientation dependent and are visible for only a few frames each time they align to reflect light toward the imager. Particle tracking can be applied to bottom-lit time-series images to track particulate matter that is consistently visible but still moving. These tracks can then be excluded from the MIP calculations used for flake scoring, or they can be retained and cross-referenced to determine how a given particle responds to other lighting orientations. For example, metal particles that reflect light can be tracked in a bottom-lit configuration; these same particles block light when illuminated with a backlight (e.g., light source 122f in FIG. 8). Using both of these measurements makes it possible to distinguish metal particles from glass chunks, which reflect bottom lighting but do not block rear lighting.

Particle Detection, Tracking, and Characterization

As described above, the visual inspection unit 100 shown in FIG.
1 can record a high-quality, high-resolution monochrome stream of images (time-series data) of bright particles imaged against a dark background. (Alternatively, the particles can appear as dark dots on a white background.) Because pharmaceutical products can contain a broad assortment of fundamentally different particles, the time-series data can be analyzed using many different methods to distinguish features on the image from the background. Often, the appearance of particles in a single image (one frame of time-series data) is insufficient to make truly accurate population estimates (e.g., count and size) of the objects of interest. For example, what appears to be a single particle in one frame of time-series data may actually be two or more particles colliding with or passing by each other, which can lead to inaccurate particle counts and/or particle size estimates. Temporal correlation of image features between frames in a video sequence improves the accuracy of particle counting and size measurement. The process of linking together image features in successive frames to form a time-dependent trajectory for each particle is called particle tracking, registration, or assignment. Particle tracking techniques are used in other applications (notably in the experimental study of fluid mechanics); however, such applications typically use well-defined spherical tracer particles. Applying this principle to pharmaceutical products and other fluids requires significantly more sophisticated solutions. Also, for some particle classes, temporal (tracking) analysis is not always practical; in such cases, statistical methods can be used as an alternative to generate characteristic measurements. FIG. 14 provides an overview of high-level particle detection and identification 1300, which begins with the acquisition 1310 of time-series data.
The time-series data is preprocessed (and/or reversed) 1320, and the preprocessed (reversed) time-series data is used for two-dimensional particle identification and measurement 1330, which may include statistical analysis 1340 and/or particle tracking 1350 of the (reversed) time-series data. As explained above, reversed time-series data is time-series data in which the frames are reordered in reverse time order. Particle report generation 1360 occurs when particle identification and measurement 1330 is complete.

Time-Series Data Preprocessing

Preprocessing 1320 includes static feature removal (background subtraction) 1321, image noise suppression/filtering 1322, and intensity thresholding 1323. Static feature removal 1321 exploits the fact that spinning the container excites the fluid and the particles within it. The dynamic motion of the fluid and particles allows them to be distinguished from other imaged features. Since image capture starts after the container stops spinning, everything that is moving is assumed to be a potential particle. Static features are then irrelevant and can be removed from the image to improve clarity. In one embodiment, a minimum intensity projection creates an approximate template of the static features in the image, including, for example, scratches, dirt, and defects that may be present on the container walls. This "static feature image" can then be subtracted from the entire video sequence to generate a new video sequence containing only moving features against a black background. For example, FIGS. 15A and 15B show a single frame of time-series data before and after static feature removal. Glare, scratches, and other static features obscure portions of the container in FIG. 15A. Background subtraction removes many of the static features, leaving an image in which the moving particles are more clearly visible (FIG. 15B).
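The minimum-intensity-projection background subtraction described above can be sketched as follows (the frame stack, noise levels, and feature positions are synthetic):

```python
import numpy as np

# Synthetic (T, H, W) stack: dim random background noise everywhere.
rng = np.random.default_rng(0)
frames = rng.integers(0, 5, size=(5, 8, 8), dtype=np.uint8)
frames[:, 3, 3] = 200          # static scratch: bright in every frame
frames[2, 6, 1] = 180          # moving particle: bright in one frame only

# Minimum intensity projection approximates the static background template.
static = frames.min(axis=0)

# Subtracting it leaves only moving features on a near-black background.
cleaned = frames.astype(np.int16) - static.astype(np.int16)
cleaned = np.clip(cleaned, 0, 255).astype(np.uint8)

print(cleaned[0, 3, 3])        # 0    (static scratch removed)
print(cleaned[2, 6, 1] > 170)  # True (moving particle survives)
```

The scratch appears in every frame, so it equals the minimum projection and subtracts to zero; the particle is bright in only one frame, so it survives the subtraction almost unchanged.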
A caveat to this approach is that many glass defects, such as surface scratches, scatter a relatively large amount of light, appearing bright white in the captured image where the detector pixels saturate. Subtracting these features can result in "dead" areas in the image: as particles move behind or in front of such defects, they can be partially obscured or even disappear completely. To address this issue, the "static feature image" can be retained, analyzed, and used to correlate defect locations with particle locations so as to minimize the effect of surface defects on particle size and count data. (As a practical note, applying a cleaning protocol before operating the system helps ensure that as many surface defects as possible are removed.) The data may also be filtered 1322, for example, to remove high-frequency and/or low-frequency noise; applying a spatial bandpass filter to the (reversed) time-series data removes and/or suppresses features varying above a first spatial frequency and/or below a second spatial frequency. Once background features have been removed, the time-series data is thresholded 1323 to one of a predetermined number of values by clipping the intensity value of each pixel in the image. Consider the grayscale images shown in FIGS. 16A and 16C, scaled according to the eight-bit scale shown on the left (other possible scales include 16-bit and 32-bit). Each pixel has an intensity value from 0 to 255, where 0 represents no light detected and 255 represents the highest amount of light detected. Clipping intensity values of 127 or below to 0 and values of 128 and above to 255 produces the black-and-white images shown in FIGS. 16B and 16D. Those skilled in the art will readily appreciate that other thresholds (and scales) are possible.

Particle Detection

Efficient particle detection in images relies on various image processing and segmentation techniques.
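The intensity clipping of FIGS. 16A-16D described above amounts to a per-pixel threshold; a minimal sketch, assuming an 8-bit scale and a threshold at 128:

```python
import numpy as np

# Threshold an 8-bit grayscale frame to black and white, as in FIGS. 16A-16D:
# values of 127 or below become 0, values of 128 and above become 255.
gray = np.array([[0, 64, 127],
                 [128, 200, 255]], dtype=np.uint8)
bw = np.where(gray >= 128, 255, 0).astype(np.uint8)
print(bw.tolist())  # [[0, 0, 0], [255, 255, 255]]
```

The same one-liner generalizes to other scales (16-bit, 32-bit) by changing the threshold constant.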
Segmentation partitions the features of interest in an image into discrete, manageable objects to simplify computation. Segmentation methods for extracting features from images are widely used, for example, in the field of medical imaging, and these techniques have been applied to particle recognition. Briefly, images acquired from the camera are preprocessed using clipping, background (static feature) subtraction, filtering (e.g., bandpass filtering), and/or other techniques to maximize contrast. Upon completion, processor 130 segments the image, then selects certain regions of the image as representing particles and classifies those regions accordingly. Suitable segmentation methods include (but are not limited to) confidence-connected, watershed, level-set, graph-partitioning, compression-based, clustering, region-growing, multiscale, edge-detection, and histogram-based methods. After the image is segmented, additional information can be generated to correlate a given feature in the acquired image with a particle type. For example, characteristics of a given segment, such as its area, perimeter, intensity, and sharpness, can then be used to determine the type of particle.

Particle Tracking and Time Reversal

Critically, previously available particle identification tools did not consider in detail the temporal behavior of particles as they move around the vial. Particle counts and sizes may not be accurate if measured from a single "snapshot." Time-series data, however, provide a more complete picture of particle behavior that can be resolved using particle tracking 1350, which enables generation of a time-dependent spreadsheet for each individual particle and thus more robust and accurate measurement of its basic attributes. Particle tracking is a technique widely used in video microscopy and in fluid dynamics engineering (where it is often referred to as particle tracking velocimetry, or PTV).
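As a minimal illustration of the segmentation step described above, a thresholded frame can be partitioned into discrete regions by 4-connected component labeling, and each region then measured (area, centroid) for classification. The binary frame below is synthetic:

```python
from collections import deque

# Toy binary frame (1 = candidate particle pixel) with two separate regions.
frame = [
    [0, 1, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]

def label_regions(img):
    """4-connected component labeling via breadth-first flood fill."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                q, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                regions.append(pixels)
    return regions

regions = label_regions(frame)
print(len(regions), sorted(len(r) for r in regions))  # 2 [3, 4]

# Per-region measurements, e.g. area and centroid of the largest segment.
largest = max(regions, key=len)
cy = sum(p[0] for p in largest) / len(largest)
cx = sum(p[1] for p in largest) / len(largest)
print(len(largest), (cy, cx))  # 4 (2.5, 3.5)
```

Production systems would typically use a library labeling routine and richer per-segment statistics (perimeter, intensity, sharpness), but the structure is the same: label, then measure, then classify.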
Although PTV is well known, most particle tracking solutions assume that the movement of a particle between successive video frames is small compared with the typical separation distance between particles in a given image. In such cases, linking particle positions by identifying nearest-neighbor matches is sufficient. In many applications, however, this is not an appropriate model: because of the spin speeds involved (e.g., about 300 rpm, 1600 rpm, and/or 1800 rpm) and possibly high particle concentrations, particles can be expected to move farther between successive frames than the typical interparticle separation distance. This can be addressed with a form of predictive tracking, which searches for particles in regions predicted from their previous motion. Predictive tracking evaluates equations of motion to mathematically predict the approximate future position of a particle in subsequent frames, as shown in FIG. 17. For improved performance, this phase of predictive tracking can be coupled with knowledge of the local fluid behavior (if known), e.g., as described with respect to FIG. 21C. Forming an accurate prediction for a given trajectory may require some prior data points on which to base the trajectory. This presents a challenge: at the beginning of the image sequence, when the particles are moving fastest, there may be little or no prior data on which to base a position prediction. However, over time, wall drag in the container slows and eventually stops the rotating fluid. Recording the time-series data long enough produces frames in which the particles slow significantly and even stop. Reversing the timeline 1331 of the video, so that the particles initially appear static and slowly accelerate as the video progresses, provides the "prior" data points needed to establish trajectories. At the beginning of the reversed video, where the particles are barely moving, the nearest-match principle can be used to establish the initial phase of each trajectory.
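The two linking regimes just described, nearest-match while the reversed sequence is nearly static and constant-velocity prediction once a track has history, can be sketched as follows (all coordinates and the search radius are illustrative):

```python
# Sketch of trajectory linking on reversed frames: nearest-match while motion
# is small, then search around a constant-velocity prediction.
def nearest_within(pos, candidates, r_s):
    """Return the candidate detection inside search radius r_s, or None."""
    best, best_d = None, r_s
    for c in candidates:
        d = ((c[0] - pos[0]) ** 2 + (c[1] - pos[1]) ** 2) ** 0.5
        if d <= best_d:
            best, best_d = c, d
    return best

track = [(10.0, 10.0)]

# Early (reversed) frames: motion is tiny, so plain nearest-match suffices.
track.append(nearest_within(track[-1], [(10.2, 10.1), (14.0, 9.0)], r_s=1.0))

# Later frames: motion grows, so search around the predicted position
# p_pred = p_last + (p_last - p_prev), i.e. a constant-velocity model.
vx = track[-1][0] - track[-2][0]
vy = track[-1][1] - track[-2][1]
pred = (track[-1][0] + vx, track[-1][1] + vy)
track.append(nearest_within(pred, [(10.5, 10.2), (14.0, 9.0)], r_s=1.0))

print(track[-1])  # (10.5, 10.2)
```

A fuller implementation would also fold in acceleration and, where available, the expected local fluid velocity, but the switch from nearest-match to prediction-centered search is the essential step.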
At the appropriate time, the system can then switch to predictive mode. Reversing the timeline of the acquired data in this way dramatically improves performance. FIG. 17 shows an overview of predictive tracking using time reversal. The goal of particle tracking is to link a particle at position a_i in frame i to its position a_{i+1} in frame i+1, as shown in FIG. 17(a). If the particle moves between frames by less than the distance d to its nearest neighbor (particle b), this is straightforward. If the particle's direction of movement is unknown or random, the simplest approach is to define a search zone (usually a circle of radius r_s), where r_s is chosen to be larger than the expected range of particle movement but smaller than the typical interparticle separation distance d, as shown in FIG. 17(b). After reversing the movie timeline, as in FIG. 17(c), the particles appear to start moving slowly. After a while, however, the particles appear to accelerate, and the nearest-match search method may begin to fail. By then, the first few frames of the reversed time-series data have partially established the trajectory, yielding some knowledge of the particle's velocity and acceleration. This information can be input into appropriate equations of motion to predict the particle's approximate position in frame i+1, as shown in FIG. 17(d). This predictive tracking method is significantly more effective than simple nearest-match tracking, especially for dense and/or fast-moving samples.

Center-of-Mass Detection

FIGS. 18A and 18B illustrate center-of-mass detection for particles in the thresholded (reversed) time-series data. First, the processor 130 converts the grayscale image (FIG. 18A) into a thresholded image (FIG. 18B). Each particle appears as a two-dimensional projection whose shape and size depend on the shape, size, and orientation of the particle when the frame was recorded.
Next, the processor calculates the geometric center or centroid of each two-dimensional projection using any suitable method (e.g., the plumb-line method, geometric decomposition, etc.), yielding coordinates x_i and y_i. The processor 130 can compare the location of a particular particle's centroid on a frame-by-frame basis to determine the particle's trajectory.

Particle Occlusion

Each of the visual inspection systems disclosed herein projects a three-dimensional volume (the container and its contents) onto the two-dimensional surface of an image sensor. On a given two-dimensional sensor, particles in the three-dimensional volume may appear to follow intersecting paths. When this occurs, one particle may partially or completely occlude the other, as shown in FIG. 19. In FIG. 19(1), a new particle is identified in the image sequence; tracking the particle through the image sequence produces a series of linked steps, as shown in FIG. 19(2). Search zones are used to find possible matches in consecutive frames, as shown in FIG. 19(3). Sometimes more than one candidate particle occupies the search zone, as shown in FIG. 19(4), in which case the system selects the best match. As is readily apparent to those skilled in the art, any combination of different methods may be used to determine the best match. For example, data representing candidate particles in one frame may be compared and/or correlated with data representing particles in previous frames; parameters that may be compared and/or correlated include, but are not limited to, changes in size, shape, brightness, and/or appearance. An illustrative visual inspection system can account for collisions, occlusions, and temporary particle disappearances, such as the occlusion shown in FIG. 19(5). When the particle reappears, as in FIG. 19(6), the track can be reconstructed.
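Reconstructing a track across such an occlusion can be sketched as constant-velocity extrapolation of virtual positions, re-linked when a detection reappears near the prediction (the coordinates and search radius below are illustrative):

```python
# Track positions (x, y) observed before the particle was occluded.
track = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
gap_frames = 2                      # frames with no matching detection

# Estimate velocity from the last two observations (constant-velocity model).
vx = track[-1][0] - track[-2][0]
vy = track[-1][1] - track[-2][1]

# Extrapolate "virtual" positions through the gap; in a regulated setting
# these would be flagged as interpolated rather than measured data.
virtual = [(track[-1][0] + vx * k, track[-1][1] + vy * k)
           for k in range(1, gap_frames + 1)]

# A detection reappears here; link it if it lies near the predicted position.
reappeared = (4.95, 2.45)
pred = (track[-1][0] + vx * (gap_frames + 1),
        track[-1][1] + vy * (gap_frames + 1))
dist = ((reappeared[0] - pred[0]) ** 2 + (reappeared[1] - pred[1]) ** 2) ** 0.5
SEARCH_RADIUS = 0.5
if dist <= SEARCH_RADIUS:
    track += virtual + [reappeared]   # merge the segments into one trajectory

print(len(track))  # 6
```

If no detection falls inside the search radius within a predefined number of frames, the track would instead be closed out as permanently lost.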
The illustrative system can also resolve conflicts that arise when two tracks (and their search zones) collide, ensuring that the correct trajectories are formed, as in FIG. 19(7). FIG. 20 illustrates another case of particle occlusion in two-dimensional images: FIG. 20(a) shows a typical image of suspended particles, and FIGS. 20(b)-20(e) show close-up views of the boxed region in FIG. 20(a), where two particles approach each other from opposite directions. In the next frame of the (reversed) time-series data, one particle occludes the other. If the occlusion is partial (FIG. 20(c)), the two particles can appear as a single, spuriously large particle. If the occlusion is complete (FIG. 20(d)), the smaller particle can be lost from view entirely and the particle count reduced by one. This can be very important when inspecting pharmaceutical products, because a falsely inflated size measurement may exceed a regulatory threshold even though the product under inspection actually contains only particles of acceptable size. In FIG. 20(e), the particles have moved past each other and independent tracking can continue. By analyzing particle trajectories and the resulting time-dependent size distributions, the visual inspection system can automatically correct errors due to occlusion, resulting in a lower false rejection rate.

Accounting for Missing Particles

As discussed, particles can disappear from part of a given video sequence for any number of reasons. A particle can traverse "blind spots" and/or the "dead" zones created by static feature removal, as discussed above. Additionally, some types of particles exhibit optical behavior in which they appear and disappear (blink) relative to the imaging optics. In such cases, the processor can predict the movement of these "lost" particles as follows.
If the particle reappears at the expected location within a certain time frame, the processor may link the trajectory segments and interpolate virtual particle data for the intervening frames. Note that, from a regulatory standpoint, it is important that virtual particle data be properly labeled so that it can be distinguished from real, measured particle data. FIGS. 21A-21C illustrate techniques for tracking and recovering lost particles (i.e., particles that temporarily disappear from view during the course of a video sequence). The disappearance can be due to occlusion behind another (larger) particle, occlusion behind a surface defect, transit through a known blind spot, or simply the optical geometry of the particle (for example, some types of particles may be visible only in certain orientations). Finding or restoring particles that disappear from view improves the accuracy with which particles can be detected and identified. FIG. 21A illustrates predictive tracking of a particle obscured by a defect on the container surface. Surface defects scatter large amounts of light, saturating the corresponding areas of the image; this results in "dead" zones in the image after static feature removal, and any particle that crosses such a zone temporarily disappears. Processor 130 can recover a "lost" particle by generating virtual particles for a limited number of steps; if the particle reappears and is detected, the track segments are merged. More specifically, processor 130 uses predictive tracking to determine the velocity of the particle before it disappears, then extrapolates the expected particle positions from that velocity. If the particle reappears at an expected location, the virtual positions can be linked to form a complete trajectory. If the particle does not reappear within a predefined time window, it can be flagged as permanently lost and is no longer tracked. FIG.
21B shows a way of tracking particles that undergo significant acceleration or direction changes while out of view. Rather than predicting the particle's trajectory, processor 130 uses properties of the fluid's local behavior to retroactively link trajectory segments; in this case, the processor 130 merges the trajectories by taking into account the laminar behavior of the fluid at this velocity and scale. FIG. 21C illustrates the manner in which particles disappear and reappear as they traverse known blind spots. In this example, the particle traverses a known blind spot at the extreme lateral edge of the container. Programming the processor 130 with the location of the blind spot relative to the container image enables the processor 130 to reconstruct the trajectory.

Particle Shape Irregularities

Some particles are not spherical, or are not small enough to be considered point-like, as most particle tracking techniques assume. In fact, many particles are irregularly shaped and can roll and rotate relative to the camera as they move through the fluid, as shown in FIGS. 22A-22C. In some cases, an irregularly shaped particle may appear as two separate particles, each with its own trajectory, as shown in FIG. 22B. The resulting unpredictable movement of the measured center of mass of the two-dimensional projection can obscure the true movement of the particle, seriously complicating predictive tracking. The visual inspection systems described herein may include functionality to account for the apparently erratic motion of irregularly shaped particles, for example, by computing an average trajectory for the particle (as shown in FIGS. 22A and 22C).

Container/Product-Specific Fluid Dynamics

The motion of the particles in the container after spinning is the result of a combination of the motion of the fluid and the effect of gravity.
The motion of the fluid is a function of the fluid's viscosity, the fill volume, the shape and size of the container, and the initial spin speed. Particle tracking performance can be significantly improved by incorporating knowledge of the physical constraints of the fluid system into trajectory construction. The hydrodynamics of spinning liquids in conventional containers can be surprisingly complex under certain circumstances, and the incorporation of fluid dynamics knowledge (as it relates to containers commonly used in the pharmaceutical industry) into trajectory construction represents a significant advance over the prior art. FIG. 23 shows some examples of fluid behavior in a typical container, where results from a computational model are compared to real-world particle trajectories produced by a visual inspection platform. The study revealed an unexpected subtlety: in FIG. 23(d), for example, one can see particle movement along a narrow vertical column in the center of the vial, due to the relaxation of the vortex generated during the spin phase (FIG. 23(a)). As the fluid in this central column moves vertically upward, it can sweep up heavy particles that would normally be expected to sink. This can, for example, cause confusion between air bubbles, which are expected to rise, and foreign particles that rise because of this container-specific fluid motion. The illustrative visual inspection system can take advantage of prior knowledge of the expected fluid dynamics of the drug product to produce significantly more accurate results than would otherwise be possible. Combining a fluid model (such as the one illustrated in FIG. 23) with particle tracking in this way represents a significant improvement over the prior art.
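One simple way to fold such fluid-physics knowledge into trajectory construction is a smoothness check: in locally laminar flow, a candidate track whose direction reverses abruptly between frames is more likely a mislink than real particle motion. A minimal sketch, with illustrative angle thresholds:

```python
import math

# Flag physically implausible track links: in locally laminar flow, direction
# should change smoothly between frames (the angle thresholds are illustrative).
def max_turn_angle(track):
    """Largest frame-to-frame change in direction along a track, in radians."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(track, track[1:], track[2:]):
        a = math.atan2(y1 - y0, x1 - x0)
        b = math.atan2(y2 - y1, x2 - x1)
        d = abs(b - a)
        angles.append(min(d, 2 * math.pi - d))
    return max(angles)

smooth = [(0, 0), (1, 0.1), (2, 0.2), (3, 0.35)]
kinked = [(0, 0), (1, 0), (2, 0), (1, 1)]   # sudden reversal: likely mislink
print(max_turn_angle(smooth) < math.pi / 4)  # True
print(max_turn_angle(kinked) > math.pi / 2)  # True
```

A flagged track would then be compared against nearby trajectories to see whether a physically more consistent re-linking exists.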
Error Correction

Although the visual inspection system disclosed herein is robust under most experimental conditions, the complexity of tracking large numbers of particles moving in a small three-dimensional volume means there is always a risk of introducing errors, mainly in the form of incorrect trajectories formed between successive frames when particles "collide." This phenomenon is illustrated in FIG. 24A. An understanding of the physical constraints of the system can be used to advantage here. Broadly speaking, the predominant movement of the fluid locally around each particle is laminar (rather than turbulent or random). This means that, with a sufficiently fast camera, the natural particle trajectories in this system should vary smoothly, without sudden sharp changes of direction, particularly as the particles traverse the center of the container in the image. Once the initial track linking is complete, the system retrospectively analyzes the tracks for such errors. If an error is detected, the system can compare nearby trajectories to determine whether a physically more consistent solution can be found, as shown in FIG. 24B.

Accurate Particle Counting

Particle counts can be inferred by counting the number of particles in a snapshot image taken at a single point in time after particle detection (e.g., as shown in FIG. 24A), where each particle is labeled with a count number. This method is straightforward but tends to systematically undercount the number of particles in the counted volume, for several reasons. For example, one or more particles may be obscured by another particle or by a surface defect; particles may be in known (or unknown) blind spots; and extremely small or faint particles may intermittently appear and disappear from view as they move across the detection threshold.
One advantage of particle tracking as discussed herein is that it can take all of these issues into account. Given robust particle tracking, counting can therefore be improved by counting the number of individual particle tracks (as in FIG. 24B) rather than by counting the particles in a single image or by statistical analysis of several images. Counting particle trajectories rather than the particles in a single frame (or ensemble of frames) represents a significant improvement over conventional particle counting techniques. The size of the correction varies with the number and size of the particles present: roughly speaking, as the number of particles increases, the chance of occlusion increases, and with it the benefit of the temporal analysis provided by the particle tracking of the present invention.

Accurate Particle Sizing

Conventional particle measurement systems measure particle sizes from static images, most commonly by measuring the length of the longest apparent axis of the particle, or Feret diameter, as shown in FIG. 25, which regulatory and/or industry standards define as the longest single dimension of the particle. Under this definition, a 1 mm hair is classified the same as a spherical particle 1 mm in diameter. With this in mind, the maximum Feret diameter is a reasonable measure to extract from two-dimensional images. However, measuring particle size from static images suffers from several key problems. First, in a two-dimensional projection of a three-dimensional volume, multiple particles may overlap, appearing as a single, much larger particle. In industries where regulations set strict upper limits on allowable particle sizes, this is a critical issue: in manufacturing applications it can lead to false rejections, especially for densely populated samples.
Second, irregularly shaped particles can tumble unpredictably (relative to the camera) as they flow around the container. From a single two-dimensional snapshot, it may not be possible to guarantee that the longest dimension of a given particle lies perpendicular to the viewing axis of the camera. The system may therefore systematically undersize the particles, which can have dire consequences in heavily regulated industries. Examining the time-dependent maximum Feret diameter via particle tracking as the particle flows around the container provides a much more accurate measure of the particle's maximum dimension. Third, as a particle moves around a cylindrical container, it generally aligns its long axis with the direction of the surrounding fluid flow, as shown in FIGS. 25A and 25B. For cylindrical vessels, this means that elongated particles generally appear larger in the center of the image than at the extreme lateral edges. Typically, the imager detects the largest apparent particle size (Feret diameter) when the particle travels perpendicular to the optical axis of the image sensor. If a single particle is tracked as it flows around the container, its true maximum elongation can be measured accurately, something difficult to achieve with static measurement procedures. Finally, despite efforts to minimize the effects of motion blur by strobing the lighting (as discussed above), some degree of motion blur may still occur at the beginning of the image-capture sequence, when the fluid and particles are moving fastest. Time-dependent analysis of particle size makes it possible to identify and suppress artifacts in the data due to motion blur (which tends to inflate the measured particle size). FIGS. 25C-25E illustrate the use of time-series data to track particle trajectories for more accurate particle size measurement. FIG. 25C shows a typical trajectory of a 100-micron polymer microsphere moving around a vial after spinning.
Particles appear to move fastest relative to the camera as they cross the center of the container, where their velocity is perpendicular to the viewing direction, as shown in FIG. 25D. For example, if the initial spin speed is 300 rpm and the particle's radial position r_p is 5 mm, then the particle velocity v_p is about 9.4 m/s. At this speed, a camera exposure time of only 10 µs doubles the apparent particle size due to motion blur. FIG. 25E shows how undesirable motion blur can affect an image: on the left, a particle moving quickly (at about 300 rpm) appears elongated; on the right, the same particle at rest appears rounder. FIG. 25F is a graph of the time-dependent Feret diameter of the particle shown in FIG. 25C. Due to the lensing effect of the cylindrical container, the apparent size of the particle decreases near the edge of the container (note D on the right axis). The best estimate of the maximum particle size occurs when the particle traverses the center of the container at moderate velocity (note B on the right axis). If the velocity is too high (as typically occurs during the first few seconds after the container spins), motion blur inflates the apparent particle size (note A on the right axis). Eventually, fluid resistance stops the particle's motion altogether (note C on the right axis). In this case, the mid-range peak (note B on the right axis) is the most accurate reading of the maximum particle size.

Particle Characterization

FIG. 26A shows successive frames of time-series data with both the particles and their trajectories. The roughly horizontal trajectories are those of 100-micron polymer microspheres simulating protein aggregates; these particles, being nearly neutrally buoyant, move with the fluid and do not sink or rise significantly. The vertically descending tracks are the trajectories of 100-micron glass beads that initially rotated with the fluid but sank as the sequence progressed.
Rising tracks represent the trajectories of bubbles and positively buoyant particles. Particle tracking enables the measurement of numerous time-dependent properties that can give important clues about the nature of the particle under examination. For example, air bubbles, which are generally considered benign from a regulatory standpoint, can confound current optics-based inspection machines, leading to false positives and unnecessary rejections. Here, the time-dependent motion of the particles (bubbles tend to rise vertically as the fluid begins to slow) provides a very distinct signature that can be easily identified from the trajectories produced by particle tracking. Similarly, neutrally buoyant particles may neither rise nor fall much, while dense particles sink to the bottom of the container. Lighter particles can be swept along in the eddies formed by the spinning fluid, while heavy particles follow straighter trajectories. More broadly, the particle tracking program produces a time-dependent spreadsheet (such as the spreadsheet shown in FIG. 26B) containing all relevant parameters, including position, velocity, direction of movement, acceleration, size (e.g., two-dimensional area and maximum Feret diameter), elongation, sphericity, contrast, and brightness. These parameters provide signatures that can be used to classify particles into specific classes. This approach, made possible by the particle tracking solution, works well for most particles of interest; the ability to classify particles on a particle-by-particle basis from time-dependent measurements is a particular benefit of the present invention.

Video Compression

Visualizing extremely small particles in larger containers benefits from the use of extremely high-resolution image sensors. It is also necessary to maximize the image capture rate to ensure accurate trajectory construction.
The combination of these requirements results in extremely large video files, for example, 1 GB, 2 GB, 5 GB, 10 GB or larger. For some applications, it may be necessary to archive the raw video in addition to the analysis data. For even moderately sized sample sets, the file sizes involved may make data storage costs prohibitive. Video compression can be used to reduce the file size of the (reversed) time-series data. Protecting the integrity of the particle data may require the use of lossless video compression. Research has shown that the more commonly used (and more efficient) lossy compression techniques (e.g., MPEG) can severely distort and perturb the images, introducing numerous unwanted visual artifacts. Although lossless compression is in general less efficient than lossy compression, there are several steps that can improve its efficiency. Most frames of the time-series data show a few small collections of bright objects against a dark background. The dark background contains no useful information: it is not truly black, but consists of very faint random noise. Replacing this background with a solid black background greatly simplifies the image and makes standard lossless compression techniques (e.g., zip, Huffyuv) operate much more efficiently. This procedure has been reported elsewhere in the literature. What is novel here, however, is the specific decision of what actually constitutes the background in a given frame. Other compression programs set a threshold intensity level and assume that all pixels in the image below this level are part of the background. This is a broadly effective strategy, but it slightly reduces the size of the retained particles and can completely remove very faint particles whose brightness is on the same order as the upper bound of the inherently random background "noise".
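The background-blanking step described above can be sketched as follows. This is a minimal illustration using NumPy; the threshold value and frame contents are arbitrary placeholders, not values from the text.

```python
import numpy as np

def blank_background(frame: np.ndarray, threshold: int) -> np.ndarray:
    """Replace near-black background noise with solid black.

    Pixels at or below `threshold` are assumed to be background noise and
    are zeroed, which makes the frame far more compressible by lossless
    codecs (long runs of identical zeros). Retained pixels are untouched.
    """
    out = frame.copy()
    out[out <= threshold] = 0
    return out

# Toy 8-bit frame: faint random background plus one bright "particle".
rng = np.random.default_rng(0)
frame = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)  # dim noise
frame[30:34, 30:34] = 200                                  # bright particle
cleaned = blank_background(frame, threshold=10)
# The particle survives; the noisy background is now uniformly zero.
```

Note that a fixed threshold like this one exhibits exactly the weakness described above: a particle whose brightness falls below the cutoff is erased along with the noise.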
While these conventional techniques work with the (reversed) time-series data, the compression used in the illustrative embodiment adds a separate stage that analyzes the background for faint particles before the destructive threshold is applied. This ensures an optimal balance between maintaining particle integrity and minimizing data storage requirements.

Fill volume / meniscus detection

An automated embodiment of the visual inspection platform accurately detects the fill volume of a sample, which is important in research applications where there is no guarantee that the fill volume will be consistent across a particular run. Detecting the fill volume is especially useful when dealing with extremely large data files, such as those produced by high-resolution image sensors, which stress data transfer and storage. For this reason, it may be necessary to limit the recorded images to cover no more than the fluid volume, since any further information is irrelevant. The illustrative system may use, for example, automatic edge detection or feature recognition algorithms to detect the boundaries of containers in the images, as shown in FIGS. 27-29 and described below. Because the meniscus and the vial base are each distinctive features, their locations in the image can be accurately identified using many possible lighting configurations and/or image processing techniques. Measuring the fill volume and determining the region of the image occupied by fluid defines a region of interest. Specifically, referring to FIG. 8, configurations using light sources 122f (backlight), 122e (bottom light), and the combination of 122a and 122b (rear angled lighting) can all be used to detect fill volumes, as described below. FIGS. 27A-27F illustrate automatic detection of a region of interest within a container using the rear angled illumination 122a and 122b of FIG. 8. Figure 27A shows a still image of the vessel, in which the base and meniscus of the vessel are clearly visible as distinct bright objects.
As an example, the processor may use edge detection to identify the vertical walls of the container and the width of the region of interest, w, as shown in Figure 27B. For detection of the meniscus and vial base (whose appearance may be less predictable), the processor can, for example, use intensity thresholding and segmentation to produce a simplified image of the region of interest (shown in Figure 27C). At this stage, the processor can automatically identify containers that may not be suitable for particle analysis, such as containers with scratched surfaces and/or covered in dust. The effectiveness of the system can be compromised by excessive turbidity, container surface defects, or too high a particle concentration (whereby individual particles can no longer be discretized in the image). If the processor determines that the container is satisfactory, the objects corresponding to the meniscus and vial base may then be isolated and simplified, as shown in Figure 27D. The processor can then define the vertical height of the region of interest, h, as the distance between the lower edge of the meniscus and the upper edge of the vial base, as shown in Figure 27E. Finally, the processor can use the width and height of the region of interest to crop the raw image stream so that only the area of the image occupied by the visible fluid is recorded, as shown in Figure 27F. FIGS. 28A-28C illustrate a similar meniscus detection procedure performed on data acquired using a backlight configuration (e.g., light source 122f in FIG. 8). Figure 28A shows a frame of time-series data for a typical container imaged with backlighting. The meniscus, wall and base are clearly distinguishable and can be automatically identified using edge detection, as in Figure 28B. However, defects such as large scratches can compromise accurate detection of the meniscus position, whether using backlighting (FIG. 28B) or rear angled light (e.g., as in FIG. 29C, described below).
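A minimal sketch of this kind of region-of-interest detection follows (pure NumPy, with a toy backlit image; the profile-based logic is a simplification of the edge detection and thresholding described above, and the threshold is an arbitrary placeholder):

```python
import numpy as np

def fluid_region_of_interest(img: np.ndarray, thresh: float):
    """Illustrative region-of-interest finder for a backlit container image.

    Assumes the container walls, meniscus and vial base appear as bright
    rows/columns against a dark background. Returns (top, bottom, left,
    right): the rows of the meniscus and vial base bound the height h of
    the region, and the wall columns bound its width w.
    """
    col_profile = img.mean(axis=0)            # bright columns = walls
    row_profile = img.mean(axis=1)            # bright rows = meniscus/base
    bright_cols = np.where(col_profile > thresh)[0]
    bright_rows = np.where(row_profile > thresh)[0]
    left, right = bright_cols[0], bright_cols[-1]   # container width w
    top, bottom = bright_rows[0], bright_rows[-1]   # meniscus .. base = h
    return top, bottom, left, right

# Toy image: two wall columns, a meniscus row and a base row.
img = np.zeros((100, 80))
img[:, 10] = img[:, 70] = 50.0     # vertical container walls
img[20, 10:71] = 200.0             # meniscus
img[90, 10:71] = 200.0             # vial base
print(fluid_region_of_interest(img, thresh=5.0))
```

The returned bounds could then be used to crop the raw image stream to the fluid-occupied area, as in Figure 27F.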
In one implementation, intensity thresholding of the image is used to identify the meniscus and vial base. Since these are relatively large objects whose shapes scatter a relatively large amount of light toward the detector, they can be clearly identified and distinguished from any other features that may be present. FIGS. 29A-29D illustrate the detection of a meniscus in a cylindrical vessel with a roughly planar bottom. Automatic fill volume detection begins with thresholding (FIG. 29A) to detect the meniscus, which then defines the region of interest and also provides a measurement of the fill volume. Next, Figure 29B shows that under oblique lighting, surface defects such as scratches (shown), dust, fingerprints, glass defects, or condensation can make edge detection difficult. Illuminating the vial from below (e.g., using light source 122e as in FIG. 8, and as shown in FIG. 29C) illuminates the meniscus in a manner that is relatively insensitive to surface imperfections; here, the meniscus is visible even though the surface is severely scratched. Illumination from below also makes it possible to distinguish between empty and full vials (as shown in FIG. 29D), and to accurately detect meniscus heights at all fill positions between those limits. Illuminating the vial from below increases the effectiveness of meniscus detection because it mitigates errors due to scratches and other surface defects (FIG. 29C). Setting the light source 122e to illuminate the vessel at a small angle further reduces susceptibility to surface defects. For syringes, which are difficult to illuminate from below because they lack a transparent container base, a similar effect can be achieved by illuminating obliquely at a narrow angle. Inspection techniques similar to the meniscus detection described above can also be used to screen for features that would disrupt any subsequent attempts to identify and analyze particles suspended in the fluid.
This may include identifying excessively turbid liquids, severely damaged containers (including those with excessive scratches or surface debris), and fluids in which the concentration of particles is so high that individual particles can no longer be discretized.

Processor and memory

Those skilled in the art will readily appreciate that the processors disclosed herein may include any suitable device that provides processing, storage, and input/output capabilities for executing applications and the like. Exemplary processors may be implemented in integrated circuits, field-programmable gate arrays, and/or any other suitable architecture. The illustrative processor may also be linked via a communications network to other computing devices, including other processors and/or server computers. The communications network can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, a local area or wide area network, or a set of gateways that currently use respective protocols (e.g., TCP/IP, Bluetooth, etc.) to communicate with one another. Other electronic device/computer network architectures are also suitable. FIG. 30 is a diagram illustrating the internal structure of processor 50. Processor 50 contains system bus 79; a bus is a collection of hardware lines used for data transfer among the components of a computer or processing system. Bus 79 is essentially a shared conduit that connects the different components of a computer system (e.g., processor, disk storage, memory, input/output ports, network ports, etc.) and enables the transfer of information between them. Attached to system bus 79 is an I/O device interface 82 for connecting various input and output devices (e.g., keyboard, mouse, display, printer, speakers, etc.) to processor 50. The network interface 86 allows the computer to connect to various other devices attached to the network.
Memory 90 provides volatile and/or non-volatile storage for the computer software instructions 92 and data 94 used to implement embodiments of the illustrative visual inspection systems and techniques. Disk storage 95 provides additional non-volatile storage for the computer software instructions 92 and data 94 used to implement the illustrative visual inspection embodiments. Central processing unit 84 is also attached to system bus 79 and provides for the execution of computer instructions. In one embodiment, processor routines 92 and data 94 form a computer program product (generally referenced 92) that includes a computer-readable medium (e.g., a removable storage medium such as one or more DVD-ROMs, CD-ROMs, diskettes, tapes, etc.). Computer program product 92 may be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may be downloaded via a cable, communication and/or wireless connection. In other embodiments, the illustrative programs are a computer program propagated-signal product 107 embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)). Such carrier media or signals provide at least a portion of the software instructions for the illustrative routines/program 92. In alternative embodiments, the propagated signal is an analog carrier wave or a digital signal carried on the propagated medium. For example, the propagated signal may be a digitized signal propagated over a global network (e.g., the Internet), a telecommunications network, or another network. In one embodiment, the propagated signal is transmitted over a period of time via the propagation medium, such as the instructions of a software application sent in packets over a network over a period of milliseconds, seconds, minutes, or longer.
In another embodiment, the computer-readable medium of computer program product 92 is a propagation medium that processor 50 may receive and read, for example by receiving the propagation medium and identifying a propagated signal embodied in it, as described above for the computer program propagated-signal product. In general, the term "carrier medium" or transient carrier encompasses the foregoing transient signals, propagated signals, propagation media, storage media, and the like.

Sensor cooling

In the above embodiments, electronic sensors are used to capture images of particles. Electronic sensors such as CCDs are subject to several types of random noise that compromise the integrity of the measured signal, especially at low signal strengths. In some embodiments, the sensor may be cooled to reduce noise. Cooling can be achieved using any suitable technique, including, for example, thermoelectric coolers, heat exchangers (e.g., cryocoolers), liquid nitrogen cooling, and combinations thereof. In various embodiments, noise reduction is advantageous for particle detection, particularly for the detection of protein aggregates. In typical applications, protein aggregates can be relatively large (e.g., up to a few hundred microns in diameter), yet the physical structure of these aggregate particles is often very loose, with low density compared to the surrounding medium (most such particles can be porous and filled with the surrounding medium) and low refractive index contrast. Because of these physical properties, protein aggregates can scatter relatively little light compared to other particles such as glass fragments or fibers. Most of the noise that affects modern electronic image sensors is thermal in nature. This noise primarily affects the lower end of the sensor's dynamic range.
For example, in some embodiments, the lower X% (e.g., 10%) of the dynamic range is occupied by noise and must be removed during image thresholding (e.g., as described above). The threshold for particle detection must be set at least above this ~X% level, thereby removing low-intensity data from the signal. This prevents accurate detection of faint particles such as protein aggregates. By reducing noise (e.g., by cooling the sensor), a lower threshold can be used, allowing improved detection of low-intensity signals. Figure 31 illustrates the thresholding problem described above. Panel A of Figure 31 shows a cropped segment from a typical image sequence acquired using the devices and techniques described herein. As shown, the images are 8-bit grayscale images, i.e., each pixel can have an intensity value that varies linearly from 0 (black) to 255 (white). The image contains two particles, one relatively bright and one very faint. Panel B of Figure 31 shows an intensity histogram of the "background," a box in the image that does not contain any particles. The sensor exhibits a Gaussian background noise profile at the low end of the intensity histogram, at least in part due to thermal effects. The width of this curve determines the threshold value for particle detection. In short, particles need to be significantly brighter than the background noise in order to survive thresholding. Panel C of Figure 31 shows the intensity histogram for the bright particle. The particle image has a large number of pixels to the right of the threshold in the histogram and thus can easily be detected after thresholding. In contrast, as shown in panel D of Figure 31, the fainter particle has relatively few pixels above the threshold and would likely be swept away during the thresholding/segmentation process.
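The relationship between the background noise distribution of Figure 31 and the detection threshold can be sketched as follows. The Gaussian noise model comes from the text; the factor k = 5 and the simulated noise levels are illustrative assumptions.

```python
import numpy as np

def detection_threshold(background: np.ndarray, k: float = 5.0) -> float:
    """Place the particle-detection threshold above the background noise.

    Models the sensor background as roughly Gaussian (as in the intensity
    histogram of Figure 31, panel B) and sets the cutoff k standard
    deviations above the mean. A narrower noise distribution therefore
    yields a lower threshold. k = 5.0 is an arbitrary illustrative choice.
    """
    return float(background.mean() + k * background.std())

rng = np.random.default_rng(1)
warm = rng.normal(20.0, 6.0, size=100_000)   # noisy (warm) sensor
cool = rng.normal(20.0, 2.0, size=100_000)   # cooled sensor: narrower noise
t_warm = detection_threshold(warm)
t_cool = detection_threshold(cool)
# A faint particle at intensity ~35 falls below the warm sensor's
# threshold (~50) but above the cooled sensor's threshold (~30).
print(t_warm, t_cool)
```

This illustrates why narrowing the noise Gaussian (e.g., by cooling) shifts the usable threshold left and lets fainter particles, such as protein aggregates, survive segmentation.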
However, if cooling or other techniques are applied to lower the noise floor, thereby shifting the threshold to the left, fainter particles may become detectable.

Light-based enumeration and non-destructive sizing (LENS)

In some embodiments, when non-destructive sizing and counting of particles within the container is performed, there are considerable artifacts produced by the container itself. The liquid interface refracts light passing through the vial, which causes considerable distortion of the images of the particles used in the sizing and counting procedure. Thus, particles of a given size can appear up to, for example, four times as large in the image, depending on the particle's spatial position within the vial. Note that for cylindrical containers, the distortion typically stretches the particle image only along the transverse axis of the vial, not along the vertical axis. (See Figure 5E for an illustration of these effects.) As noted above, in some embodiments, such distortion effects may be corrected (e.g., mitigated or even eliminated) using corrective optics. However, in some embodiments, this optical correction may not be complete or available. In such cases, the size of a particle cannot be inferred directly from its corresponding image on the detector. For example, FIG. 32 shows a histogram of detected image sizes for a population of standard-sized (as shown, 100 μm diameter) particles (polymer microspheres) in a fluid, acquired using the system without correcting the distortion from the container (corresponding to the situation shown in Figure 5E). It clearly demonstrates the significant variation in apparent image size due to container distortion effects. This variation makes it difficult to distinguish between particle populations of different sizes, since there can be substantial overlap in the apparent areas on the detector from each size population.
For example, Figure 33 shows a histogram of detected image sizes for two standard-sized (as shown, 100 μm and 140 μm diameter) particle populations in a fluid. It clearly demonstrates the significant overlap between the histograms of the two size populations. In some embodiments, processing techniques may be applied to recover accurate size information even in the presence of the distortion effects described above. The calibration process uses data obtained from known size standards. For example, Figure 34 shows the apparent size histograms of four experimentally obtained populations of different standard-sized particles (polymer microspheres). Although four calibration curves are shown, in various embodiments any suitable number can be used. In some embodiments, at least two, at least three, at least four, at least five, or at least six curves may be used. In some embodiments, the number of curves is in the range of 2-100, or any sub-range thereof (such as 4-6). In some embodiments, the set of experimental calibration curves can be interpolated to generate additional curves (e.g., corresponding to size values between the experimentally measured values). In some embodiments, the calibration curves can correspond to particle populations whose actual sizes differ by any suitable amount (e.g., at least 1 μm, at least 5 μm, at least 10 μm, at least 20 μm, or more, such as in the range of 1 μm to 1000 μm or any sub-range thereof). Once the calibration curves have been determined, an apparent size distribution curve can be obtained for a sample containing particles of unknown size (e.g., from one or more still images, or using any other suitable technique). The sample curve should be obtained under the same or similar experimental conditions (e.g., the same or similar vessel size and shape, fluid properties, lighting conditions, imaging conditions, etc.). This sample curve is compared to the calibration curves to determine information indicative of the sizes of the particles in the sample.
For example, in some embodiments, a weighted superposition of the calibration curves is compared to the sample curve. The weights of the superposition are varied to fit it to the sample curve, e.g., using any suitable fitting technique known in the art. The weights of the best fit to the sample curve then provide information about the actual sizes of the particles in the sample. For example, in some embodiments, the weight of each calibration curve in the best-fit superposition corresponds to the count of that size class within the sample. Figure 35 illustrates the superposition of the calibration curves and the fit to an experimental sample curve. In this case, the sample was prepared so that the known particle diameters were in the range of 75 μm to 125 μm. Figure 36 compares the size counts resulting from the fit with those obtained simply by binning the raw apparent sizes from the corresponding images. For the raw data, there are a large number of spurious counts outside the actual 75 μm to 125 μm size range. In contrast, the results obtained from the calibration-curve fit show a greatly reduced number of false counts. Note that although one possible method of comparing sample data to calibration data has been described, other suitable techniques may be used. For example, in some embodiments, the calibration curves may be used as basis functions into which the sample curve is decomposed, analogously to the Fourier decomposition of a waveform using sinusoidal basis functions. In general, any suitable convolution, deconvolution, decomposition, or other technique may be used. In some embodiments, light-based enumeration and non-destructive sizing ("LENS") techniques may be used in combination with the previously described particle tracking techniques. For example, LENS techniques tend to work better when the shape of the particles is close to that of the size standards used to generate the calibration data.
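The weighted-superposition fit described above can be sketched minimally as follows. The histograms are synthetic, and an ordinary least-squares solve stands in for whatever (possibly constrained) fitting technique an implementation might use.

```python
import numpy as np

# Synthetic apparent-size histograms ("calibration curves") for two
# known size standards. Real curves would be measured from standards.
cal_100um = np.array([0, 1, 4, 8, 4, 1, 0, 0, 0, 0], dtype=float)
cal_140um = np.array([0, 0, 0, 1, 3, 6, 8, 5, 2, 0], dtype=float)

# A sample whose apparent-size histogram is a weighted superposition of
# the two standards: 3 parts 100 um particles, 2 parts 140 um particles.
sample = 3.0 * cal_100um + 2.0 * cal_140um

# Solve for the weights that best reproduce the sample histogram; each
# weight corresponds to the count of that size class in the sample.
A = np.column_stack([cal_100um, cal_140um])
weights, *_ = np.linalg.lstsq(A, sample, rcond=None)
print(weights)  # approximately [3.0, 2.0]
```

Treating the calibration curves as columns of a basis matrix also makes the decomposition analogy in the text concrete: the sample curve is expanded in the calibration curves instead of sinusoids.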
Additionally, these techniques tend to perform well when the number of particles is high (e.g., greater than 10, greater than 50, greater than 100, or more), providing a larger data set for the algorithm to process. However, in some applications the number of particles present may be low. In some applications, the focus may be on the larger particles in the sample. Additionally, in some applications, the sample may include particles with shapes different from the size-standard particles. For example, fibers are elongated rather than spherical, as used in many standards. Under these conditions, LENS techniques may not work effectively. In general, any number of particles can be counted using the techniques described above. In some embodiments, the upper limit on the number of countable particles is determined by particle/particle overlap in the sample. In general, the more particles present in the container, the more likely it is that two particles will appear merged into a single object on the 2D detector. This is a function of the number of particles per unit volume and the size of the particles. In general, larger particles occupy more area on the detector (and thus overlap more, for a given count/ml, than smaller particles). For example, under certain conditions, in a 10 cc vial filled with 8 ml of fluid, up to about 500 particles with a diameter of 50 µm can be counted before the effects of undercounting and oversizing due to particle overlap become apparent. However, the particle tracking techniques presented above can effectively count and size relatively large particles. Thus, in some embodiments, a hybrid of the two approaches may be used. Figure 37 shows an exemplary embodiment of this hybrid procedure. In step 3701, a sequence of images is recorded, e.g., using any of the techniques described herein. In step 3702, the image sequence is processed (e.g., filtered, thresholded, segmented, etc.).
In step 3703, the particle data generated in step 3702 may be pre-screened for particles above a threshold size. These large particles can be removed from the data set and processed in step 3704 using tracking techniques. This provides quantitative, time-dependent size measurements of the large particles. If there is a background of smaller particles (below the size threshold), it can be handled in step 3705 using LENS techniques. The data generated by the two different techniques can then be combined in step 3706 to generate a single particle report for the container under inspection. In various embodiments, the size threshold used to determine which technique to apply can be set at any suitable value of about 1 µm or greater (e.g., a particle width or diameter in the range of about 1 µm to 400 µm or any sub-range thereof, e.g., about 1 µm to about 50 µm, about 50 µm to about 250 µm, or about 75 µm to about 100 µm). In some embodiments, criteria other than size (e.g., information about the shape of the particle) may be used to select the particle data sent to each technique. In general, any suitable combination of criteria can be used.

3D imaging and particle detection technology

As noted above, in some embodiments, automated visual inspection unit 100 may include two or more imagers 110, allowing three-dimensional imaging of the contents of container 10. For example, FIGS. 38A-38C illustrate a unit 100 featuring three imagers 110. As shown, the imagers 110 are positioned in a circle around container 10 at 120-degree intervals; in various embodiments, however, more or fewer sensors may be used. The angles between adjacent imaging sensors need not be equal to each other, but in some embodiments an equiangular arrangement simplifies the image processing techniques described below. In some embodiments, each imager 110 is substantially identical.
The imagers 110 can be aligned so that they are all at the same physical height relative to the container 10, with the container 10 at the center of each imager's field of view. In some embodiments, even when care is taken to optimize this physical alignment, small errors in placement may occur. To account for this, the imagers 110 may be calibrated by imaging a known calibration fixture. Any sufficiently small lateral or vertical misalignment can then be accounted for by resampling and shifting the captured images accordingly. In some embodiments, images may be processed to correct for variations in sensitivity or other differences in performance characteristics between the sensors used in the imagers 110. FIG. 38C shows a single imaging arm of unit 100. As described in detail above, using a telecentric imaging configuration ensures that only rays substantially parallel to the imaging axis reach the sensor surface of the imager 110. As shown in FIG. 39, using geometric ray optics (or other suitable techniques), the rays within the vessel 10 that propagate through the vessel wall and reach the sensor surface can be modeled. With the ray vectors known, a point or region can be taken from the 2D image and its intensity propagated back into the container 10. Taking one horizontal row of pixels from each 2D image at a time, a 2D horizontal map can be drawn within the container volume. The horizontal maps associated with each of the three imagers 110 can be superimposed to produce a single map. By repeating the process for additional horizontal sensor rows, a vertical stack of two-dimensional maps can be created, forming, for example, a three-dimensional (3D) structure corresponding to all or part of the volume of container 10. Particle candidates can be identified within the resulting 3D structure using intensity thresholding in a manner similar to that described above.
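The back-projection of 2D sensor rows onto a horizontal grid, as described above, might be sketched as follows. Refraction at the container wall is ignored here for simplicity (the text models it with ray optics), and the three-camera geometry and profile contents are a synthetic example.

```python
import numpy as np

def backproject(profiles, angles_deg, grid):
    """Superimpose 1D telecentric camera profiles onto a 2D horizontal grid.

    profiles: list of 1D intensity arrays (one horizontal sensor row each).
    angles_deg: viewing angle of each camera (e.g. 0, 120, 240 degrees).
    grid: 1D array of lateral coordinates spanning the container interior.
    Returns a 2D map whose peak marks a candidate particle position.
    """
    xx, yy = np.meshgrid(grid, grid, indexing="ij")
    score = np.zeros_like(xx)
    n = len(grid)
    for prof, ang in zip(profiles, angles_deg):
        th = np.radians(ang)
        # Lateral pixel coordinate of point (x, y) as seen by this camera.
        u = -xx * np.sin(th) + yy * np.cos(th)
        idx = np.clip(np.round((u - grid[0]) / (grid[1] - grid[0])), 0, n - 1)
        score += prof[idx.astype(int)]
    return score

# Simulate a particle at (x, y) = (2, -3) mm seen by three cameras at 120 deg.
grid = np.linspace(-5, 5, 101)
true_x, true_y = 2.0, -3.0
profiles = []
for ang in (0.0, 120.0, 240.0):
    th = np.radians(ang)
    u = -true_x * np.sin(th) + true_y * np.cos(th)
    p = np.zeros_like(grid)
    p[np.argmin(np.abs(grid - u))] = 1.0   # bright pixel at the projection
    profiles.append(p)

score = backproject(profiles, (0.0, 120.0, 240.0), grid)
i, j = np.unravel_index(np.argmax(score), score.shape)
print(grid[i], grid[j])   # recovers approximately (2.0, -3.0)
```

Stacking such maps for successive sensor rows yields the vertical stack of 2D grids, i.e., the 3D structure described above.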
Thresholding can be performed on the original 2D images from the imagers 110, or on the horizontal maps within the superimposed 3D structure. Using the thresholded 3D structure, candidate particles can be identified, yielding a direct measurement of each particle's 3D position within the fluid volume of container 10. In typical applications, the 3D position measurement is accurate for most of the fluid volume; however, in some cases (for example, when the imager 110 includes a telecentric lens) blind spots occur due to the lensing effect of the container (e.g., as shown in Figure 39, right panel). When three imaging arms are used at 120-degree angles, the blind spots coincide closely in pairs (see Figure 39, right panel). Accurate 3D positioning within the three blind-spot regions 3901 may not be possible. However, in these regions positional data can be established by examining the 2D data from the nearest imaging arm. In various embodiments, the blind-spot problem can be reduced or eliminated by increasing the number of sensor arms to ensure overlapping imaging coverage. Although one example of using multiple imagers 110 to determine 3D information about the contents of container 10 has been described, it should be understood that other techniques may be used. For example, in an embodiment, stereoscopic imaging techniques can be applied using two imagers to determine 3D information. In some embodiments (e.g., those featuring static or slowly moving samples), a rotating imaging arm may be used to obtain 3D information in a manner similar to a medical computed tomography machine. The rotating arm acquires a time sequence of 2D images from various viewing angles, which can be used to construct 3D information using any suitable technique, such as those known from medical imaging. 3D imaging can provide accurate 3D information for particle detection if the images are acquired at a rate that is fast relative to the dynamics of the sample.
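Candidate 3D positions of the kind just described can be linked frame-to-frame to form trajectories. The following is a minimal greedy nearest-neighbor sketch, not the predictive tracking the text describes elsewhere; the step bound and coordinates are illustrative.

```python
import numpy as np

def link_frames(prev_pts, next_pts, max_step):
    """Greedy nearest-neighbor linking of candidate 3D particle positions
    between two consecutive frames. Returns (i, j) index pairs matching
    prev_pts[i] to next_pts[j]. `max_step` bounds how far a particle may
    plausibly travel between frames.
    """
    links = []
    used = set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        for j in np.argsort(d):
            if d[j] <= max_step and int(j) not in used:
                links.append((i, int(j)))
                used.add(int(j))
                break
    return links

# Two frames with three particles each (x, y, z in mm); positions shift slightly.
frame1 = np.array([[0.0, 0.0, 5.0], [2.0, 1.0, 4.0], [-1.0, 3.0, 2.0]])
frame2 = np.array([[2.1, 1.0, 3.8], [-0.9, 3.1, 1.9], [0.1, 0.0, 4.9]])
print(link_frames(frame1, frame2, max_step=0.5))
# -> [(0, 2), (1, 0), (2, 1)]
```

A production tracker would replace the greedy match with the predictive, fluid-dynamics-informed assignment discussed in the text, but the data flow (per-frame candidate positions in, linked tracks out) is the same.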
In some embodiments, the 3D information generated using the techniques described above may be suitable for detecting candidate particle locations, but not ideal for determining other characteristics of the particles (e.g., particle size or shape). Thus, in some embodiments, a hybrid approach may be used. For example, in some embodiments, the 3D position of a particle is determined from the 3D information (e.g., a 3D structure generated as described above). Once the three-dimensional location of the particle has been determined, size and shape measurements obtained from some or all of the two-dimensional images from the imagers 110 can be correlated with these locations. In some embodiments, particle tracking may be performed on the 3D positional data, for example, using 3D tracking techniques analogous to the 2D techniques described above. In some embodiments, 3D tracking provides advantages, particularly when used in combination with the two-dimensional images obtained from each imager 110. In 3D tracking, particle-to-particle occlusion is reduced or eliminated (cf. the 2D case shown in Figure 5E). In some embodiments, occlusion may still occur, for example, for dense samples in blind spots where true 3D localization fails. As in the two-dimensional situation described above, in some examples, predictive tracking techniques can be used in the 3D context, utilizing information about the fluid dynamics within container 10. In some embodiments, once the 3D particle positions have been tracked, information about the characteristics of the particles (e.g., size and shape) can be gathered from the two-dimensional data from the multiple imagers 110 into a combined, time-dependent data set. In some embodiments, this allows more accurate measurements of individual particle characteristics (e.g., size and shape) than would be possible with a single imaging sensor.
For example, in some embodiments, this technique allows clearer detection and size measurement of elongated particles, because a particle's appearance is no longer strictly dependent on its orientation relative to a single imager 110. In some embodiments, this approach can also be used to mitigate the lensing effects caused by the curvature of the container 10. Using the 3D position of the particle, the particle sizes measured on the two-dimensional images acquired by each of the imagers 110 can be adjusted to correct for lensing effects (e.g., by modifying the lateral (horizontal) component of the size measurement by a lensing scale factor). This scale factor may be determined from an optical model of the propagation of light through container 10 to each of the imagers 110 (as described in detail above).

Spectral detection

FIG. 45 shows a sensor 4500 (as shown, a grating spectrometer) that may be used with a visual inspection unit 100 of the type described herein. For example, sensor 4500 may form a fourth imaging arm for use with the embodiment of unit 100 shown in Figure 38A. Sensor 4500 may be used to detect a characteristic (e.g., a spectral characteristic) of one or more particles in container 10. For example, as shown, container 10 is illuminated by broadband light source 122. The sensor 4500 receives light from the container 10 via distortion-correcting optics 4501 (e.g., of any of the types described above) and a telecentric lens 4502. Light from the lens is directed onto diffraction grating 4503, which separates the spectral components of the light, which are subsequently imaged on imaging sensor 4504. In some embodiments, diffraction grating 4503 operates so that the position of incident light along one dimension (e.g., the vertical dimension) of sensor 4504 corresponds to the wavelength of the light. The other dimension of imaging sensor 4504 corresponds to different spatial locations within container 10.
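The row-to-wavelength mapping just described can be sketched as follows. The 400-700 nm range and the assumption of linear dispersion are purely illustrative; the actual mapping depends on the grating 4503 and source 122.

```python
def pixel_row_to_wavelength_nm(row: int, n_rows: int,
                               lambda_min_nm: float = 400.0,
                               lambda_max_nm: float = 700.0) -> float:
    """Map a vertical sensor row index to a wavelength.

    Assumes, for illustration only, that the grating disperses light
    linearly across the sensor's vertical dimension between
    lambda_min_nm (row 0) and lambda_max_nm (last row). The horizontal
    dimension remains available for spatial position in the container.
    """
    return lambda_min_nm + (lambda_max_nm - lambda_min_nm) * row / (n_rows - 1)

# On a 1024-row sensor, row 0 maps to 400 nm and the last row to 700 nm.
print(pixel_row_to_wavelength_nm(0, 1024))      # -> 400.0
print(pixel_row_to_wavelength_nm(1023, 1024))   # -> 700.0
```

A recorded column of intensities at one instant is then a spectrum for one spatial position, i.e., the spectral signature of whatever passes through that point of the detection plane.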
That is, the sensor 4500 provides spectral information for a sub-region of the container, for example, in a configuration that images the sub-region as a horizontal "slice" of the container 10. As a particle passes through this central horizontal plane, its spectral signature is recorded. Also, as described in detail above, the conventional imaging arms of unit 100 can be used to track the position (e.g., in three dimensions) of the particles within the container. This information can be used to determine when a given particle enters the detection sub-region covered by sensor 4500. When a particle enters the sub-region, the sensor 4500 senses a characteristic of the particle (e.g., its spectral signature). Unit 100 can generate data related to this characteristic and correlate it with the data identifying the particle in the tracking data. In various embodiments, the characteristic data may be used for any suitable purpose (e.g., to identify particle types). For example, spectral information about a given particle can be combined with size, shape, movement, or other information about the particle to determine the type of the particle. In some embodiments, the sensor 4500 and illumination source 122 may be modified to detect particle fluorescence or any other suitable characteristic. In general, any spectral characteristic of a particle can be detected, including color, absorption spectrum, emission spectrum, or transmission spectrum, or a combination of any of these. Although in the examples described above the sensor 4500 is included in a unit 100 featuring three imaging arms, in other embodiments any other suitable number of imaging arms may be used (e.g., one, two, four, five, or more). In some embodiments where a single imaging arm is used, the sensor 4500 can be aligned with the imaging arm, for example by using a beam splitter (not shown) that splits the beam from the container 10 and directs the components to the single imaging arm and to sensor 4500.
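The correlation between the tracking data and the spectrometer's detection sub-region might look like the following sketch. The data layout (per-frame track points and spectrometer images), the tolerance `tol`, and the toy mapping from lateral position to spatial bin are assumptions for illustration only:

```python
def record_spectra(tracks, frames, slice_z, tol=0.5):
    """Correlate tracked 3D particle positions with spectrometer frames.

    tracks: {particle_id: [(t, x, y, z), ...]} from the imaging arms.
    frames: {t: spectrum_image[wavelength_bin][spatial_bin]} from the
            grating spectrometer.
    Whenever a particle's height z is within `tol` of the spectrometer's
    horizontal detection plane `slice_z`, the column of the spectrometer
    image at the particle's lateral bin is stored under its track ID.
    """
    spectra = {}
    for pid, path in tracks.items():
        for (t, x, y, z) in path:
            if abs(z - slice_z) <= tol and t in frames:
                img = frames[t]
                col = int(round(x))  # toy mapping: lateral position -> spatial bin
                spectrum = [row[col] for row in img]  # intensity per wavelength bin
                spectra.setdefault(pid, []).append((t, spectrum))
    return spectra
```

The returned per-particle spectra could then be combined with the size, shape, and motion features of the same track ID to classify the particle type.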
In other embodiments (e.g., where multiple imaging arms are used), the sensor 4500 may be oriented at any suitable angle relative to the imagers.

Examples

Exemplary performance characteristics for an embodiment of an automated visual inspection unit 100 of the type described herein are provided below. Referring to Figure 40, unit 100 was presented with containers 10 each containing only a single polymer sphere of known size. Multiple test runs (n=80) were performed for each container, and the detection percentage was measured (bars labeled "APT" in the figure). As shown, the detection percentage of the system is greater than 90% for particle diameters ranging from 15 μm to 200 μm. The detection percentages for the same task performed visually by trained humans are presented for comparison (bars labeled "Human"). Note that for particles below 200 μm in size, human detection capability drops off rapidly. Referring to FIG. 41, in another test, unit 100 was presented with containers holding particles with diameters above and below the visible cut-off of 125 μm. Unit 100 detected the particles and also classified them according to whether their size was above or below the 125 μm visible cut-off. As shown, the detection percentage of the system is greater than 90% for particle diameters ranging from 15 μm to 200 μm, and unit 100 also correctly classified the detected particles with extremely high accuracy. Referring to Figure 42, dilution series were generated for multiple size standards, each series consisting of containers holding a given concentration of particles. The resulting containers were analyzed by unit 100 to provide particle counts, and regression was used to determine the coefficient of determination ("R^2") for the linearity of the counts versus the dilution factor. As shown, for particle sizes ranging from 15 μm to 200 μm, R^2 values are above 0.95, indicating excellent linearity.
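The two figures of merit used in these tests — detection percentage over repeated runs, and R^2 of particle counts versus dilution factor — can be computed as in this sketch (ordinary least-squares regression; the sample data in the test are illustrative, not the reported measurements):

```python
def detection_percentage(outcomes):
    """Percentage of runs (True/False detection flags) in which the
    known particle was found."""
    return 100.0 * sum(outcomes) / len(outcomes)

def r_squared(xs, ys):
    """Coefficient of determination for an ordinary least-squares line
    through (xs, ys) -- here, particle count versus dilution factor."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

An R^2 near 1.0 means the measured counts scale linearly with concentration across the dilution series, as the patent reports for 15–200 μm particles.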
Referring to Figure 43, a stressed sample containing protein particles was analyzed by unit 100 to determine particle counts binned by particle size. The precision of the particle counts in each bin was demonstrated over more than 10 runs. The protein particle sizes are unknown, which makes absolute size-accuracy comparisons impossible; however, as demonstrated, the precision of the system in counting and sizing proteins is high. The normalized error of the measurements was 3%, indicating excellent precision. Referring to Figure 44, unit 100 was also presented with blank vials and vials containing protein particles. The performance of unit 100 was compared to that of certified visual inspectors viewing the same collection of vials. Unit 100 (labeled "APT" in the figure) correctly detected all 40 protein vials and the 80 blanks. Its accuracy in classifying particles as visible or as visible only under a microscope was 100%, while humans scored only about 85% in both categories.

Conclusion

Those of ordinary skill in the art will understand that the processes involved in automated systems and methods for non-destructive particle detection and identification (processing time-series data obtained by visual inspection) can be embodied in an article of manufacture including a computer-usable medium. For example, the computer-usable medium may include a readable memory device having computer-readable program code segments stored thereon, such as a hard disk drive, CD-ROM, DVD-ROM, computer diskette, or solid-state memory device (ROM, RAM). Computer-readable media can also include communication or transmission media, such as a bus or communication link (optical, wired, or wireless), on which program code segments are carried as digital or analog data signals. Flowcharts are used in this document; their use is not meant to be limiting with respect to the order in which operations are performed.
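A "normalized error" of the kind quoted above can be computed, for example, as the pooled relative standard deviation of the per-bin counts across repeated runs. This sketch is one plausible definition for illustration, not necessarily the one used for the reported 3% figure:

```python
import math

def normalized_error(run_counts):
    """Pooled relative standard deviation across repeated runs.

    run_counts: list of runs, each a list of per-size-bin particle counts.
    Returns the RMS over bins of (per-bin sample std / per-bin mean), as a
    fraction -- e.g. 0.03 means the counts repeat to within about 3%.
    """
    n_runs = len(run_counts)
    n_bins = len(run_counts[0])
    rel_vars = []
    for b in range(n_bins):
        vals = [run[b] for run in run_counts]
        mean = sum(vals) / n_runs
        var = sum((v - mean) ** 2 for v in vals) / (n_runs - 1)  # sample variance
        rel_vars.append(var / mean ** 2)
    return math.sqrt(sum(rel_vars) / n_bins)
```

Because the true protein particle sizes are unknown, this kind of run-to-run repeatability is the appropriate metric here, rather than absolute size accuracy.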
The subject matter described herein sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermediary components. Likewise, any two components so associated can also be viewed as being "operably connected" or "operably coupled" to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably couplable" to each other to achieve the desired functionality. Specific examples of operably couplable include, but are not limited to, physically mateable and/or physically interacting components, and/or wirelessly interactable and/or wirelessly interacting components, and/or logically interacting and/or logically interactable components. With respect to the use of substantially any plural and/or singular terms herein, those skilled in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for the sake of clarity. Those skilled in the art will understand that, in general, terms used herein and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "comprising" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," etc.).
Those skilled in the art will further understand that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite article "a" or "an" limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc."
is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Those skilled in the art will further understand that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A," "B," or "A and B." As used herein, the term optical element may refer to one or more refractive, reflective, diffractive, holographic, polarizing, or filtering elements in any suitable combination. As used herein, terms such as "light," "optical," or other related terms should be understood to refer not only to light visible to the human eye, but also to include, for example, light in the ultraviolet, visible, and infrared portions of the electromagnetic spectrum. The foregoing description of illustrative embodiments has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

10: Transparent container
12: Syringe
14: Blind spots
18: Lensing effect
30: Bubbles
32: Pharmaceutical product
34: Needle
50: Processor
79: System bus
82: I/O device interface
84: Central processing unit
86: Network interface
90: Memory
92: Computer software instructions/processor routines/computer program products
94: Data
95: Disk storage
100: Visual inspection unit
110: Imager
112: Charge-coupled device (CCD)/sensor
114: Telecentric lens
116: Correction lens
118: Spatial correction optics
120: Illumination system/illumination source
122: Broadband light source
122a: Light source
122b: Light source
122c: Light source
122d: Light source
122e: Light source
122f: Light source
124: Optical fiber
126: Collimator
128: Hollow shaft
129: Polarizer
130: Processor
140: Memory
150: Spindle
160: Inspection module
160-1: Inspection module
160-2: Inspection module
160-3: Inspection module
160-4: Inspection module
160-5: Inspection module
170: Visual inspection platform
172: Vial tray
180: Robot
182: Track
501: Particle
502: Particle
503: Correction optics
504: Bracket
702: Trigger controller
900: Crossed polarizer
920: Illumination source
922: Narrowband filter
1000: Zone
1001: Region
1102: Imager
1102a: First camera
1102b: Second camera
1104: Imager
1106: Imager
1120: Light-emitting diode (LED)
1202: Beam-splitting block
1222: Sensor
1222a: First camera
1222b: Second camera
1224: Sensor
4500: Sensor
4501: Distortion-correcting optics/telecentric lens
4503: Diffraction grating
4504: Imaging sensor
ARM 1: Trigger signal
ARM 2: Trigger signal

Figures 1A-1C show a visual inspection unit, a visual inspection imaging module, and a visual inspection platform, respectively, each of which can be used to detect and identify particles in a container at least partially filled with a fluid.
Figure 2A illustrates sample preparation, loading, and operation of the visual inspection system shown in Figures 1A-1C.
Figure 2B shows a processed image of particles and their trajectories in a moving fluid in a vessel, captured by an illustrative visual inspection system.
Figures 3A-3C illustrate three types of agitation of vessels containing fluid and one or more particles in preparation for particle detection and identification: rotation of a cylindrical vessel (Figure 3A), inversion and rotation of a syringe (Figure 3B), and shaking of a syringe (Figure 3C).
Figure 4 is a ray-optics diagram of a telecentric lens used to image a cylindrical vessel.
Figure 5A shows the fluid meniscus and the recorded volume in a cylindrical vessel containing fluid.
Figure 5B illustrates the distortion and blind spots of a cylindrical container created by the shape of the container.
Figures 5C and 5D illustrate techniques for compensating for distortion and blind spots when imaging cylindrical vessels.
Figure 5E illustrates the distortion and blind spots of a cylindrical container, resulting from the shape of the container, for particles at various locations in the container.
Figure 5F illustrates theoretical models of the distortion induced by cylindrical containers, each model corresponding to the same container filled with a fluid of different refractive index. The figure also shows corresponding experimental measurements confirming the theoretical models.
Figure 5G illustrates the use of corrective optics to correct the distortion of a cylindrical container produced by the shape of the container.
Figure 5H is a detail view of the corrective optical element of Figure 5G.
Figure 5I illustrates a device for selecting one of several corrective optical elements.
Figures 6A-6D show particle tracking systems with multiple imagers that capture time-series data of moving particles from many angles (Figures 6A and 6B), from the same angle at a higher frame rate (Figure 6C), and from the same angle at different spatial resolutions (Figure 6D).
Figures 7A and 7B illustrate triggering of image acquisition and illumination for imaging particles with a dual-sensor imager.
Figure 8 is a schematic of a flexible, multi-purpose illumination configuration including light sources positioned in front of, behind, and below the inspected vessel.
Figures 9A-9C illustrate illumination from different angles, using the light sources shown in Figure 8, for distinguishing between different particle species.
Figure 9D shows an illumination sequence and timing diagram for distinguishing between various particle species using the configuration of Figures 9A-9C.
Figures 10A-10C illustrate glare from a vessel partially filled with fluid (Figure 10A) and positioning of the light sources outside the zone defined by rotating the imager about the longitudinal axis of the vessel (Figures 10B and 10C).
Figures 10D and 10E illustrate alternative illumination schemes for reducing or eliminating glare from the vessel.
Figure 11 is a schematic of an imaging configuration suitable for imaging polarizing (e.g., chiral) particles.
Figure 12 is a schematic of an imaging configuration suitable for exciting and imaging fluorescent particles.
Figures 13A and 13B show maximum-intensity projection images of glass flakes (Figure 13A) and protein (Figure 13B) acquired by an illustrative visual inspection system.
Figure 14 includes flowcharts illustrating the overall particle detection and identification process and the image pre-processing, particle tracking, and statistical analysis subroutines.
Figures 15A and 15B show frames of time-series data before (Figure 15A) and after (Figure 15B) background subtraction.
Figure 16A is a frame of time-series data of particles displayed in eight-bit grayscale (shown at left).
Figure 16B is a close-up view of the time-series data frame shown in Figure 16A.
Figures 16C and 16D are thresholded versions of the frames shown in Figures 16A and 16B, respectively.
Figures 17A-17D illustrate how a pair of consecutive frames of time-series data (Figure 17A) can be used to perform predictive tracking (Figures 17B-17D).
Figure 18A shows a grayscale frame of time-series data showing several particles.
Figure 18B shows a thresholded version of Figure 18A used to locate the geometric centers of the particles.
Figure 19 shows consecutive frames of time-series data illustrating particle collision/occlusion.
Figure 20A shows a frame of time-series data with a pair of particles close to each other inside the highlighted region.
Figures 20B-20E are consecutive frames of time-series data showing the particle occlusion that becomes evident as the particles in the highlighted region of Figure 20A propagate past each other.
Figures 21A-21C illustrate the apparent occlusion of moving particles caused by background subtraction of artifacts on the vessel wall (such as scratches or a piece of dust), for straight trajectories (Figure 21A), curved trajectories (Figure 21B), and parabolic trajectories (Figure 21C).
Figures 22A-22C illustrate the use of reversed time-series data to locate the centers of mass of irregularly shaped particles (Figures 22B and 22C) and the use of the center-of-mass positions to determine particle trajectories (Figure 22A).
Figures 23A-23D illustrate observed and modeled fluid dynamics in a cylindrical vessel. Figure 23A shows the change in shape of the meniscus. Figures 23B and 23C illustrate vortex formation in a fluid-filled vessel, and Figure 23D shows particle trajectories in an illustrative vortex.
Figures 24A and 24B show close-up views of consecutive frames of reversed time-series data in which a particle collision has not been correctly resolved (Figure 24A) and the same trajectories after error correction (Figure 24B).
Figures 25A-25E illustrate the time dependence of particle size measurements due to particle motion.
Figure 25F is a graph of the time-dependent Feret diameter for the particle shown in Figure 25C.
Figure 26A shows frames of processed time-series data at different intervals, with traces indicating different particle trajectories.
Figure 26B shows illustrative measurements of numerous time-dependent particle properties from the particle trajectories in Figure 26A.
Figures 27A-27F illustrate detection of a region of interest using rear angled illumination: a raw image (a frame of time-series data, Figure 27A) is subjected to edge detection (Figure 27B), grayscale thresholding (Figure 27C), identification of the meniscus and vial base (Figure 27D), determination of the region of interest (bounded by the dashed line in Figure 27E), and cropping (Figure 27F) to produce an image of the fluid visible in the container.
Figures 28A-28C illustrate fill-volume detection of back-lit vials. Figure 28A shows a raw image of a vial. Figure 28B shows the region of interest (bounded by dashed lines) determined using thresholding and edge detection. Defects on the surface of the vial (shown in Figure 28C) can hinder fill-volume detection.
Figures 29A-29D illustrate fill-volume detection of vials illuminated from below. Figures 29A and 29B are false-color images of a partially full vessel (Figure 29A) and an empty vessel (Figure 29B). Figures 29C and 29D illustrate automatic meniscus detection for partially full, empty, and partially filled vessels.
Figure 30 shows a processor suitable for processing time-series data.
Figure 31 illustrates an example of grayscale thresholding of an image including bright and blurred particles.
Figure 32 shows a histogram of apparent particle sizes for a particle population with a standard size.
Figure 33 shows apparent-particle-size count curves for two particle populations, each with a standard size.
Figure 34 shows apparent-particle-size count calibration curves for four particle populations, each with a standard size.
Figure 35 illustrates fitting a superposition of calibration curves to a sample's apparent-particle-size count curve.
Figure 36 compares the results of two techniques (raw binning and LENS) for particle counting and sizing.
Figure 37 illustrates a particle counting and sizing procedure featuring different sizing techniques for particles below and above a threshold size.
Figures 38A-38C illustrate a particle tracking system with multiple imagers capturing time-series data of moving particles from multiple angles.
Figure 39 illustrates rays propagating through a container and received by each of the two imagers (left panel) and each of the three imagers (right panel) of the particle tracking system of Figures 38A-38C.
Figure 40 shows particle detection results for the automated particle detection system (indicated as "APT") compared to results from human visual inspection.
Figure 41 shows particle detection and classification results for the automated particle detection system.
Figure 42 shows a graph summarizing the linearity of particle counts as a function of sample dilution for the automated particle detection system.
Figure 43 shows the precision of the automated particle detection system in detecting and counting protein aggregate particles.
Figure 44 shows protein aggregate particle detection results for the automated particle detection system (indicated as "APT") compared to results from human visual inspection.
Figure 45 shows a spectrometer for use with a visual inspection unit.

10: Transparent container

100: Visual inspection unit

110: Imager

112: Charge-coupled device (CCD)/sensor

114: Telecentric lens

120: Illumination system

122a: Light source

122b: Light source

130: Processor

140: Memory

Claims (15)

1. A device for nondestructive detection of one or more transparent or reflective objects in a vessel at least partially filled with a fluid, the device comprising: (a) an imager configured to acquire data representing light reflected from a plurality of spatial locations in the vessel as a function of time; (b) a memory operably coupled to the imager and configured to store the data; and (c) a processor operably coupled to the memory and configured to detect the object based on the data by: (i) identifying a respective maximum amount of reflected light at each of the plurality of locations, and (ii) determining a presence or absence of the object in the vessel based on the number of spatial locations at which the respective maximum amount of reflected light exceeds a predetermined value.

2. The device of claim 1, wherein the vessel has a bottom, the device further comprising: a light source configured to illuminate the bottom of the vessel.

3. The device of claim 1, wherein the vessel contains a protein aggregate, and wherein the processor is further configured to distinguish the protein aggregate from the object based on the data.

4. The device of claim 1, wherein the processor is further configured to estimate at least one of an average size of the objects, a size distribution of the objects, and a number of the objects.

5. The device of claim 1, wherein the processor is further configured to distinguish the object from other types of particles based on a change in the amount of reflected light as a function of time.

6. The device of claim 5, wherein the processor is further configured to identify at least one other type of particle in the vessel using the data and additional data representing light transmitted through the vessel.

7. The device of claim 1, wherein the object comprises a glass flake.

8. A method of nondestructive detection of transparent or reflective objects in a vessel at least partially filled with a fluid, the method comprising: (a) identifying a respective maximum amount of reflected light at each of a plurality of spatial locations in the vessel based on data representing light reflected from the plurality of locations as a function of time; and (b) determining a presence or absence of the objects in the vessel based on the number of spatial locations at which the respective maximum amount of reflected light exceeds a predetermined value.

9. The method of claim 8, wherein the vessel has a bottom, the method further comprising: illuminating the bottom of the vessel; and acquiring the data representing light reflected from the plurality of spatial locations.

10. The method of claim 8, wherein the vessel contains a protein aggregate, the method further comprising: distinguishing the objects from the protein aggregate based on the data.

11. The method of claim 8, further comprising: estimating at least one of an average size of the objects, a size distribution of the objects, and a number of the objects.

12. The method of claim 8, further comprising: distinguishing the objects from another type of particle based on a change in the amount of reflected light as a function of time.

13. The method of claim 8, further comprising: identifying at least one other type of particle in the vessel using the data and additional data representing light transmitted through the vessel.

14. The method of claim 8, wherein the objects comprise glass flakes.

15. A computer program product for nondestructive detection of transparent or reflective objects in a vessel at least partially filled with a fluid, the computer program product comprising non-volatile machine-readable instructions that, when executed by a processor, cause the processor to: (a) identify a respective maximum amount of reflected light at each of a plurality of spatial locations in the vessel based on data representing light reflected from the plurality of locations as a function of time; and (b) determine a presence or absence of the objects in the vessel based on the number of spatial locations at which the respective maximum amount of reflected light exceeds a predetermined value.
TW111126982A 2011-08-29 2012-08-28 Methods and apparati for nondestructive detection of undissolved particles in a fluid TWI840888B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201161528589P 2011-08-29 2011-08-29
US61/528,589 2011-08-29
US201161542058P 2011-09-30 2011-09-30
US61/542,058 2011-09-30
US201261691211P 2012-08-20 2012-08-20
US61/691,211 2012-08-20

Publications (2)

Publication Number Publication Date
TW202242379A true TW202242379A (en) 2022-11-01
TWI840888B TWI840888B (en) 2024-05-01


Also Published As

Publication number Publication date
EP4365836A3 (en) 2024-06-19
IL230948A0 (en) 2014-03-31
AU2016100220A4 (en) 2016-03-31
US9892523B2 (en) 2018-02-13
BR112014003321A2 (en) 2017-03-14
KR102433281B1 (en) 2022-08-16
WO2013033253A1 (en) 2013-03-07
FI2751543T3 (en) 2024-07-11
JP2021167821A (en) 2021-10-21
AU2016100220B4 (en) 2016-09-29
CN105136649B (en) 2018-07-31
IL261825A (en) 2018-10-31
CN106442279B (en) 2019-06-14
ZA201400737B (en) 2016-01-27
JP2016197130A (en) 2016-11-24
KR101982365B1 (en) 2019-05-27
EA201790956A2 (en) 2017-09-29
TWI772897B (en) 2022-08-01
KR20200121917A (en) 2020-10-26
CL2014000374A1 (en) 2014-06-27
DK2751543T3 (en) 2024-07-22
US10832433B2 (en) 2020-11-10
MX2014001808A (en) 2014-09-12
CN105136649A (en) 2015-12-09
US11501458B2 (en) 2022-11-15
TWI648532B (en) 2019-01-21
KR102558391B1 (en) 2023-07-24
KR20210107146A (en) 2021-08-31
TW202119011A (en) 2021-05-16
CA3134342A1 (en) 2013-03-07
CA3048942A1 (en) 2013-03-07
HK1199301A1 (en) 2015-06-26
US20210019905A1 (en) 2021-01-21
TWI654419B (en) 2019-03-21
KR102050202B1 (en) 2019-11-28
JP2017201335A (en) 2017-11-09
US9842408B2 (en) 2017-12-12
EA201790956A3 (en) 2018-01-31
EA034584B1 (en) 2020-02-21
EA028127B9 (en) 2018-01-31
TW201315992A (en) 2013-04-16
IL286529B2 (en) 2023-12-01
US9922429B2 (en) 2018-03-20
IL304646A (en) 2023-09-01
SG10201806093UA (en) 2018-08-30
US20160379376A1 (en) 2016-12-29
AR087733A1 (en) 2014-04-16
EP2751543B1 (en) 2024-05-01
TWI582408B (en) 2017-05-11
IL286529A (en) 2021-10-31
US20180150965A1 (en) 2018-05-31
TWI708052B (en) 2020-10-21
IL248337B (en) 2019-03-31
JP2014525583A (en) 2014-09-29
AU2012302036B2 (en) 2015-12-03
CN103765191A (en) 2014-04-30
TW201721124A (en) 2017-06-16
US11803983B2 (en) 2023-10-31
EP2751543A1 (en) 2014-07-09
KR20220116080A (en) 2022-08-19
US20160379377A1 (en) 2016-12-29
TW201721130A (en) 2017-06-16
EA028127B1 (en) 2017-10-31
JP7216146B2 (en) 2023-01-31
JP6902077B2 (en) 2021-07-14
EP4365836A2 (en) 2024-05-08
JP6018208B2 (en) 2016-11-02
IL248338B (en) 2018-10-31
AU2012302036A1 (en) 2014-02-13
CN103765191B (en) 2016-09-28
US20230038654A1 (en) 2023-02-09
JP2016197131A (en) 2016-11-24
US20140177932A1 (en) 2014-06-26
KR102293636B1 (en) 2021-08-24
IL230948B (en) 2018-08-30
IL286529B1 (en) 2023-08-01
CA2843016C (en) 2019-09-17
JP6368341B2 (en) 2018-08-01
EA201490169A1 (en) 2014-08-29
IL261825B (en) 2021-10-31
CA3048942C (en) 2021-11-30
KR20190057426A (en) 2019-05-28
JP2019215376A (en) 2019-12-19
KR102168983B1 (en) 2020-10-22
CN106442279A (en) 2017-02-22
KR20140060300A (en) 2014-05-19
SG10201606760TA (en) 2016-10-28
KR20190132579A (en) 2019-11-27
JP6590876B2 (en) 2019-10-16
TW201903382A (en) 2019-01-16
US9418416B2 (en) 2016-08-16
CA2843016A1 (en) 2013-03-07
US20160379378A1 (en) 2016-12-29
JP6302017B2 (en) 2018-03-28

Similar Documents

Publication Publication Date Title
US11803983B2 (en) Methods and apparati for nondestructive detection of undissolved particles in a fluid
TWI840888B (en) Methods and apparati for nondestructive detection of undissolved particles in a fluid