CN1451230A - Image mosaicing method and apparatus - Google Patents



Publication number
CN1451230A
Authority: CN (China)
Prior art keywords: imager, group, principal ray, measuring, intensity
Legal status: Pending
Application number: CN01813115A
Other languages: English (en)
Inventors: Y. Y. Schechner, S. K. Nayar
Current and original assignee: Columbia University in the City of New York
Application filed by Columbia University in the City of New York
Publication of CN1451230A

Classifications

    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H04N23/55: Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors


Abstract

A method and apparatus are provided for capturing images with a camera or other imager whose imaging sensitivity characteristics vary across its field of view. The imager's characteristics can be non-uniform with respect to exposure, color sensitivity, polarization sensitivity, focal distance, and/or any other aspect of image detection. The imager can be rotated or translated so as to capture different portions of the scene being imaged. Because the imager occupies multiple positions as the scene portions are captured, each scene portion can be imaged through multiple portions of the imager's sensitivity profile.

Description

图象拼合方法与设备
相关申请的交叉引用
本申请要求题为“Multidimensional Image Mosaics”的美国临时专利申请续列号60/220,025的优先权,该申请于2000年7月21日提交,其内容通过引用包括在这里。
关于联邦政府资助研发的声明
本发明部分得到美国政府“国家科学基础研究奖NO.IIS-00-85864”的支持,因而美国政府对本发明拥有一定权利。
发明背景
发明领域
本发明一般涉及图象拼合,尤其涉及应用拼图技术扩展图象动态范围和/或测定接收自景像的幅射信号的附加特性的方法与系统。
相关技术的说明
成像系统(如摄像机)的主要局限性包括视场、动态范围、光谱(如色彩)分辨度和景深(即在图象平面中保持景点正确聚焦的距离范围)的限制。另外,常规成像系统一般只测量作为光接收方向的函数的入射光的光强,不能测量其他特性,诸如深度(如目标与摄像机的距离)、以及光的偏振态,它们适用于远程识别材料,形状与照明条件并适合反射分析,而且,常规摄像机的测量质量较低,如在一般CCD摄像机中,光强分辨率只有8位,光谱分辨率极差,仅包括三条宽带通道,通常为红绿蓝通道。
为克服上述诸局限性,即便作了种种努力,得到的系统仍很复杂,而且只解决了一个小问题而不及其余,如成像分光仪虽在谱维度上有高的分辨率,但未扩展传感器的光强动态范围。
在不损害空间分辨率的情况下,获得大视场图象的一般方法是应用“拼图法”。这种技术将较小的图象组合成视场更宽的较大图象,而各小图象覆盖一不同的景观(view of the scene)。该法一直应用于各种科学领域,诸如射电天文学、合成孔径雷达(SAR)遥感、光学观测天文学和地球与太阳系其他天体的光学遥感。为适应摄像机随机移动,近年开发了若干算法,而且这类算法使拼图应用于电视摄像机。在重叠了较小部分图象的区域中,原始数据经处理可提高其空间分辨率。然而,就各像素的光谱、偏振与亮度而言,常规的拼图技术不能提高分辨率(如动态范围)。若对一系列图象引入视差,则可利用拼图法恢复深度。但与用聚/散焦插入信号估算深度的方法相比,视差法往往不强健,而且更复杂。
非线性检测器可用于扩展图象的光学动态范围,如业已制造的CMOS检测器可以:(1)得到是光强对数的电子输出信号,或(2)组合两副不同积分次数的图象。这类传感器的光强动态范围的量级为1∶106,能对大(即高幅照度)信号作不饱和检测。不过这类器件中的光强信息受压缩,以便对高光强范围作采样(稀疏地),检测器应用了较低光强专用的量化电平,因而输出的光强分辨率仍只有8~12位。
对较高光强具有较低透射比的非线性透射比硬件,可扩展任一给定检测器的动态范围。然而,光强动态范围仍要按检测器有限的分辨率作量化,即普通CCD的8位分辨率经简单的非线性延伸而覆盖更高的幅照度范围,因而非线性压缩牺牲了较低光强范围的分辨率。
自动增益控制(AGC)在视频与数码摄像机中很常见,且类似于静像摄像机的自动曝光,但其一大缺点在于它的作用是全面性的,结果增益设定对图象的某些部分可能过高,而对其他部分则过低。例如,亮点在较暗图象内会饱和,而暗点在较亮图象内则暗得无法正确检出。拼图可由序列构成,当扫描景像时,AGC自适应地改变传感器增益。然而,尽管该技术扩展了一些动态范围,但是这类方法仍无法正确地测量大部分暗图象的亮点和大部分亮图象的暗点。
在业余与专业摄像中,普遍在摄像机上安装空间变动的滤光器,但这种滤光器主要用来改变原始图象以产生特殊的视觉效果,未曾与分辨率提高算法一起用过。
有人曾提出运用一组不同曝光的多幅图象来扩展图象各像素的动态范围。一种这样的方法是对各像素估算与其多个样本的数据最合适的值。另一方法是对各像素选择使局部反差最大的值。然而,这些方法用固定的摄像机拍摄图像序列,所以并不扩展视场。
另一种方法应用了小滤光器的拼合阵列,覆盖成像器的检测器阵列,各滤光器覆盖一特定的检测器像素,得出的空间不均一的遮蔽可调制撞击检测器的光。为扩展光强动态范围,可对传感器阵列覆盖一中性(即与色彩无关)密度滤光器的空间拼合阵列。但这种结构为了扩展动态范围,牺牲了空间分辨率。对检测器覆盖一拼合的滤色器阵列,能获得光谱信息,但这种结构为获得某种光谱分辨率(即色彩信息),牺牲了空间分辨率。此外,还可对检测器覆盖一拼合的以各种不同方向定向的线性偏振器,但这种结构为获得偏振信息,牺牲了空间分辨率。
对检测器阵列覆盖一种空间变化的滤谱器,即其光谱通带在检测器的垂直和/或水平视角上变化的滤谱器,可得到高分辨率滤谱作用。在这种系统中,视场内不同的点被不同地过滤。跨过景象扫描摄像机的视场,得到各点的光谱。但将滤谱器直接放在检波器阵列上,因难以改变滤谱作用的有效特性或测量接收自景象的光的其他特性,而降低了系统的灵活性。
若线性扫描器逐行扫描景象,可不牺牲空间分辨率而得到谱信息,如在三线扫描器中,用红绿蓝滤色器连续检测图象的各线性部分。遥感工作中常用的Pushbroom摄像机也如此操作;色散元件把每条景线衍射到2D检测器阵列上,结果在多条谱通道中同时测出每条线。然而,这类扫描器与Pushbroom限于定速的一维(1D)扫描,而且这种系统形成的图象没有凹陷,整幅图形利用同样的检测器特性扫描。相应地,为拍摄大的视场,要求多次采集,因为每次采集只能捕获1像素宽的柱。
图象一直用不同的聚焦设定值拍摄,然后组合起来生成大景深的图象。一种应用倾斜传感器的方法,在扩展视场的同时成功地拍摄了所有聚焦的景点,但是并未扩展图象动态范围。
在光学行业,在成像系统前面或内部转动空间上变化的遮光器与十字线是常用的方法,但是这类系统要求成像器具有在图象采集期间移动的附加内部或外部部件。
发明内容
因此,本发明的一个目的是提供一种能扩展视场并扩大亮度动态范围的成像技术。
本发明的另一目的是提供一种既可扩展视场,又能提供有关景象的光谱、偏振和/或深度信息的成像技术。
各种目的由下述本发明诸方面实现。
根据本发明的一个方面,一种成像方法包括:用成像器作第一组测量以生成第一像值的第一步骤,第一组测量包括至少测量一次来自第一景区的第一射线束的强度,第一射线束在成像器参考帧内具有第一主射线,成像器对第一主射线射线束具有第一强度灵敏度特性,而且成像器对第一主射线射线束的强度具有第一动态范围;用成像器作第二组测量以生成第二像值的第二步骤,第二组测量包括至少测量一次第一景区发射的第二射线束的强度,第二射线束在成像器参考帧内具有第二主射线,第二主射线与第一主射线不同,成像器对第二主射线射线束具有第二强度灵敏度特性,第二强度灵敏度特性与第一强度灵敏度特性不同,成像器对第二主射线射线束的强度具有第二动态范围;而且对第一和第二像值作拼合操作,以至少对第一与第二射线束的强度之一生成与成像器的第三动态范围相关的第三像值,而第三动态范围至少大于成像器的第一与第二动态范围之一。
根据本发明的另一个方面,成像方法包括:用成像器作第一组测量以生成第一像值的第一步骤,第一组测量包括至少测量一次第一景区发射的第一射线束中至少一个选择偏振分量的强度,第一射线束在成像器参考帧内具有第一主射线,成像器对第一主射线射线束具有第一偏振灵敏度特性,而第一偏振灵敏度特性包括对偏振角在第一角范围之外的信号分量减小的灵敏度,第一射线束至少一个选择的偏振分量在第一角范围内有一偏振角;用成像器作第二组测量以生成第二像值的第二步骤,第二组测量包括至少测量一次第一景区发射的第二射线束中至少一个选择的偏振分量的强度,第二射线束在成像器参考帧内具有第二主射线,第二主射线与第一主射线不同,成像器对第二主射线射线束具有第二偏振灵敏度特性,第二偏振灵敏度特性对偏振角在第二角范围之外的信号分量减小了灵敏度,第二射线束中至少一个选择的偏振分量在第二角范围内有一偏振角,而第二与第一角范围不同;移动成像器的第三步骤,包括下列之一:在第一与第二步骤之间相对第一景区转动成像器;在第一与第二步骤之间相对第一景区平移成像器;和运用第一与第二像值判定第一与第二射线束之一的偏振态。
附图简介
通过结合示明本发明诸示例性实施例的附图所作的详述,将明白本发明的其他目的、特征与优点,其中:
图1是本发明拼图法示例步骤的流程图;
图2是本发明示例校正步骤的流程图;
图3是本发明另一示例校正步骤的流程图;
图4是本发明示例拼图步骤的流程图;
图5是本发明示例拼图技术的框图;
图6是本发明示例拼图系统的框图;
图7是本发明另一示例拼图系统的示图;
图8是本发明又一拼图系统的示图;
图9是本发明再一拼图系统的示图;
图10是本发明另一示例拼图系统的示图;
图11是本发明另一拼图系统的示图;
图12是本发明又一示例拼图系统的示图;
图13是本发明再一示例拼图系统的示图;
图14是本发明又一示例拼图系统的示图;
图15是本发明再一示例拼图系统的示图;
图16是本发明又一示例拼图系统的示图;
图17是本发明另一拼图系统的示图;
图18是本发明又一拼图系统的示图;
图19是本发明再一示例拼图系统的示图;
图20A是本发明示例拼图步骤的空间与强度范围的曲线图;
图20B是本发明另一示例拼图步骤的空间与强度范围的曲线图;
图21是本发明又一示例拼图步骤的空间与强度范围的曲线图;
图22A是本发明示例滤波器的密度分布与相应有效遮蔽特性的曲线图;
图22B是本发明另一示例滤波器的密度分布与相应有效遮蔽特性的曲线图;
图22C是本发明又一示例滤波器的密度分布与相应有效遮蔽特性的曲线图;
图23是本发明示例拼图系统的示图;
图24A是本发明示例成像器附件的有效密度分布曲线图;
图24B是图24A中分布曲线的对数函数曲线图;
图25A是本发明示例滤波器的中心波长分布与相应有效遮蔽特性的曲线图;
图25B是本发明另一示例滤波器的中心波长分布与相应有效遮蔽特性的曲线图;
图25C是本发明再一示例滤波器的中心波长分布与相应有效遮蔽特性的曲线图;
图26A是本发明一示例滤波器的截止波长分布与相应有效遮蔽特性的曲线图;
图26B是本发明另一示例滤波器的截止波长分布与相应有效遮蔽特性的曲线图;
图26C是本发明再一示例滤波器的截止波长分布与相应有效遮蔽特性的曲线图;
图27是本发明一具有高通滤波器阵列的示例成像器的一部分灵敏度特性曲线图;
图28是本发明一具有窄带滤波器的示例成像器的灵敏度特性曲线图;
图29是本发明一具有一组宽带滤波器的示例成像器的灵敏度特性曲线图;
图30A是本发明一示例偏振滤波器阵列示图;
图30B是本发明一示例偏振滤波器制造示图;
图30C是本发明另一示例偏振滤波器制造图;
图31是一示例摄像机的焦点特性图;
图32A是本发明一示例折射元件的折射分布曲线图;
图32B是本发明另一示例折射元件的折射分布曲线图;
图32C是本发明又一示例折射元件的折射分布曲线图;
图33是本发明一组光学元件的示例结构图;
图34是本发明一示例光学元件图;
图35A是本发明一示例图象部分配准步骤图;
图35B是本发明另一示例图象部分配准步骤图;
图36A是本发明一示例遮蔽的衰减分布曲线;
图36B是图36A中衰减分布的对数特性曲线;
图37是本发明执行拼图算法的计算机系统图;和
图38是用于图37中计算机系统的处理器部分的框图。
图中,除非另有说明,一般用同样的标号与字符标明所示实施例相同的特征、单元、元件或部分。而且,现在虽然参照诸附图并结合所示实施例对本发明作详细描述,但是可对描述的实施例作变化与更改而不违背所附权项所限定的本发明的实际范围与精神。
发明的详细描述
根据本发明,成像器经配置,可使其一个或多个检波特性在垂直和/或水平视角而变化;该成像器可以包括电子静像摄像机或视频摄像机等运动图象摄像机。例如,经配置,成像器的视场左侧与右侧的特性可以不同,或者视场上部与下部的特性可以相异。例如,成像器的不均一(即空间变化)灵敏度特性可以包括对景象亮度的灵敏度、对特定色光的灵敏度、对特定偏振角和/或焦距(即目标聚焦距离)光的灵敏度。在连续的快照或帧之间转动和/或平移成像器,可将连续的快照或帧组合成视场更宽的较大图象。这种将不同景观的组合称为“拼图”。此外,若摄像机在快照或帧之间的移动足够小,则在每次通过摄像机的不同视场部分时,可多次拍摄某些景区。由于成像器每一部分的视场有一不同的灵敏度特性,所以可用各种成像灵敏度特性摄得最终的“多采样”景像部分(即多次采样部分),从而得到景像各部分的附加信息。例如,可用宽范围的强度灵敏度特性多次采样某一景部,如在用透镜上装有空间不均一衰减器的摄像机跨越景象作垂直或水平(或任意角度)随动拍摄时,摄取多个帧,这样就有效地摄取各景部,并扩展了动态范围。此外,在用具有空间不均一色彩灵敏度特性的成像器作多次快照时,通过跨越景象的随动拍摄可获得各景部的谱信息。同样地,应用具有不均一偏振灵敏度特性的成像器(如带不均一偏振滤波器的摄像机)可获得偏振信息,应用有不均一焦距的成像器可获得深度(即距离)信息。
如图5的框图所示,本发明的系统一般可看作有两个部件。该系统包括一硬件部件,含有用于摄像504的成像器(如摄像机)502。图象504由图象处理部件执行软件处理,后者包括一种或多种图象处理算法506,提供增强的输出图象508和/或有关被成像景象特性的信息510。图6更详细地展示了此系统。成像器502包括一个或多个滤波器604,滤除进入成像器502的光信号(如光束)632。成像器502包括成像光学元件606和检测器阵列608,诸如CCD或CMOS图象检测阵列。检测器阵列608产生可由电路610处理的信号。为转动和/或平移成像器502,可将成像器502装在机械化的转动和/或平移机架630上,和/或由飞机之类的运动平台载运。
模/数转换器612处理摄像机电路610产生的模拟信号,把模拟信号转换成可存入帧存储器614的数字信号。图象用处理器618分析处理,它可以是计算机的处理器,执行本发明的各种算法。处理器618还可用来“提供”(即产生)新景象。转换器616可将图象转换成可由外部设备602处理的格式而生成输出图象508。外部设备602可以包括例如转换器620,用于将图象转换成视频信号、打印机信号或其他输出格式。处理器618还可提取有关被成像景象特性的信号622。下面详细描述上面诸技术的实例。
图8示出一例按本发明作拼图的成像器。成像器712包括有孔径710、物镜802和可以是如CCD检测器阵列的图象检测器708的摄像机702,还包括一空间变化的中性(即与波长无关)密度滤波器704,在图示实例中,靠近上部816的衰减较小,靠近下部820的衰减较大。成像器712分别从景点A、B、C接收射线束810、812与814(本例为光束)。在成像器712的参考帧内,若干主射线例如射线804、806与808,限定了成像器712接收的射线束810、812与814或其他辐射信号组的各自的方向。如光学中所周知,一般认为,光束的主射线可以限定束传播的路径。在图示实例中,束810、812与814的主射线分别是成像器712的主射线804、806与808。再者,尽管图8只示出三条主射线804、806与808,但是成像器在理论上在其参考帧内拥有无数条主射线。此外,虽然图8的示例成像器712主要用于对光成像,但是本发明的技术还适用于任何电磁辐射或其他辐射的成像,包括但不限于红外(IR)辐射、x射线辐射、合成孔径雷达(SAR)信号、粒子束(如电子显微镜的电子束)与声(如超声)辐射。
分别包括景点A、B、C发射的信号的信号组810、812与814,被物镜802分别聚焦到检测器708上的点A’、B’与C’上。成像器712可以绕一旋转矢量旋转,使成像器712沿任一主射线804、806与808或沿成像器712的视场中任一其他主射线接收任一射线束810、812与814。例如,可用如图8那样定向的成像器712作第一次快照,此时沿图示的主射线804接收信号组810。然后,成像器712以逆时针方向旋转,从而沿主射线806接收射线束810,接着作另一次快照。于是,成像器712再旋转,从而沿主射线808接收信号组810,并可作第三次快照。相应地,在这三次快照中,射线束810通过滤波器704的三个部分816、818与820接收,因而在检测器708的三个点A’、B’与C’接收。另应指出,为了跨越景象随动拍摄,虽然图8示出了旋转的成像器712,但是成像器712也可像图7那样平移。其实可用任一种运动使成像器712跨越景象706随动拍摄,让成像器712沿其视场的任一主射线接收景点A的辐射信号。对景象706作了多次快照后,就可将快照用于本发明的拼图步骤。例如,如下面参照图1详述的那样,可以用拼图步骤扩展代表接收自各景区706的光信号组强度的图象的动态范围。
图1示出本发明示例性拼图步骤。较佳地,该步骤包括校正步骤100,下面详细讨论其实例。图7~19、23和31中标为712的成像器(如带空间变化衰减滤波器的摄像机)对被成像景象作第一组测量(步骤102)。在成像器712的参考帧中,有第一主射线(如图8~18中的主射线804),对于沿第一主射线接收的信号,如第一主射线光束,成像器712具有第一强度灵敏度特性与第一动态范围。例如,在成像器712视场内,第一主射线对应于特定的观察方向,并对该部分视场覆盖一具有第一衰减量的衰减滤波器。成像器对通过衰减滤波器特定部分的光束的灵敏度,取决于滤波器该特定部分的衰减量。成像器对光束的动态范围一般由光束在其上聚焦的检测器部分(如CCD单元)的动态范围决定。
第一组测量的输出是第一测量值,例如可代表第一射线束或景象内第一区或点发射的其他辐射的强度(即亮度)。第一射线束的主射线与成像器的第一主射线相对应或等同。
移动成像器712(如转动和/或平移),可拍摄不同的景部(步骤104)。第二景部与第一景部重叠,使第一景点或景区仍在成像器712的视场内,但是来自第一景区的光射线束沿成像器712的参考帧内的第二主射线(如图8~18的主射线806)接收。成像器作第二组测量(步骤106),产生第二测量值,该值代表来自第一景区的第二光线束的强度。成像器712对沿第二主射线接收的辐射信号组具有第二强度灵敏度特性。例如,若第二主射线通过其第二衰减量不同于第一衰减量的衰减器,则成像器712对第二光线束的灵敏度就不同于它对第一光线束的灵敏度。
对第一和第二测量值作数学运算,就产生第三测量值(步骤108)。例如,若成像器712包括一用于拍摄图像的CCD检测器阵列708,就用该阵列的第一单元或第一组单元测量第一光线束而产生第一测量值,并用第二单元或第二组单元测量第二光线束而产生第二测量值。根据成像器712对沿第一与第二主射线接收的信号的特性,第一或第二单元接收的信号可能过亮或过暗,使信号位于该单元的准确范围以外。若第一测量值表明已准确地测量了第一光线束,但第二测量值指示第二光线束未能准确地测量,则可丢弃第二测量值,而把第一测量值用作第三测量值,并作为代表第一景区的像素的输出值。若成像器712受第一主射线撞击的区域与受第二主射线撞击的区域的特性不同,例如若沿第一主射线接收的光比沿第二主射线的光被衰减得更大,则有效地扩展了成像器712的动态范围,因为高强度光在沿第一主射线接收时可准确地测量,而低强度光在沿第二主射线接收时可准确地测量。由于第一景区发射的光是沿两条主射线捕获的,所以光更容易被检测器两区域的至少一个区域准确地测量。相应地,可将第三测量值视作具有第三有效动态范围,而且大于第一与第二测量值的有效动态范围之一或二者。
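上述“丢弃饱和测量值、按灵敏度归一化后融合”的步骤,可用如下示意性的Python草图说明;其中的函数名、饱和阈值与加权方式均为说明性假设,并非本发明限定的具体实现。

```python
import numpy as np

def fuse_two_measurements(g1, g2, m1, m2, saturation=255):
    """融合同一景区沿两条主射线得到的两个像值(示意)。
    g1, g2: 原始像值; m1, m2: 对应主射线的遮蔽透射比(0 < m <= 1)。
    返回以无遮蔽(M=1)为基准的幅照度估值, 即第三测量值。"""
    est, w = [], []
    for g, m in ((g1, m1), (g2, m2)):
        if g < saturation:            # 丢弃饱和(超出准确范围)的测量值
            est.append(g / m)         # 按透射比归一化为幅照度
            w.append(m)               # 透射比越大, 相对量化噪声越小
    if not est:
        return saturation / min(m1, m2)   # 两者皆饱和: 只能给出下界
    return float(np.average(est, weights=w))

# 强衰减路径(m=1/8)测得64, 无衰减路径饱和于255:
# 融合值为64/0.125=512, 超出了单次8位测量的动态范围
fused = fuse_two_measurements(255, 64, 1.0, 0.125)
```

当两条主射线的测量都不饱和时,该草图按透射比加权平均,以抑制强衰减测量的量化噪声。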
除了上述有效地扩展强度测量的动态范围的步骤以外,图1的示例拼图步骤还可以包括测量接收自第一景区的光线束的其它特性的步骤。这类其它特性可以包括例如谱特性、偏振和/或焦点特性,可用于推导摄像机与各种景象细节的距离。例如,第三组测量可测量来自第一景区的第三光线束中至少一个谱分量的强度(步骤110)。第三组测量产生第四测量值。第三光线束可以沿成像器712的第三主射线接收,该成像器配置成对第三主射线光线束具有第一光谱灵敏度特性。为了选择第三光线束的谱分量,第一光谱灵敏度特性较佳地包括一具有第一波长灵敏度带的带通特性。换言之,所选的谱分量具有位于第一波长灵敏度带内的一个或多个波长,而且可以拥有或可以不拥有产生高于检测器噪声的信号所需的能量。
接着,摄像机转动或平移(步骤112),作第四组测量(步骤114)。产生第五测量值的第四组测量包括至少测量一次来自第一景区的第四光线束某一谱分量的强度。第四光线束沿成像器712的第四主射线接收。成像器712对第四主射线辐射信号具有第二光谱灵敏度特性,它较佳地包括第二波长灵敏度带的带通特性,以便选择波长位于第二波长灵敏度带内的诸分量。
此外,本发明的拼图法还能测量第一景区发射的光的偏振。在这种系统中,第三组测量(步骤110)包括至少测量一次第三光线束中选择的一偏振分量强度。具有上述第三主射线的第三光线束包括来自第一景区的第三辐射信号。成像器712对沿第三主射线接收的射线束具有第一灵敏度特性,该特性包括对偏振角在第一角范围之外的光线降低的灵敏度。检测第三光线束中所选的偏振分量,因其偏振角在第一角范围内。然后转动或平移成像器712(步骤112),执行第四组测量,产生第五测量值(步骤114)。第四组测量包括至少测量一次来自第一景区第四光线束所选偏振分量的强度。第四光线束沿成像器712的第四主射线接收。成像器712对沿第四主射线接收的射线束具有第二偏振灵敏度特性,该特性包括对偏振角在第二角范围外的信号分量减低的灵敏度。成像器712检测第四光线束的所选偏振分量,因为该分量的偏振角在第二角范围内。
另外,可用焦点特性不均一的成像器确定第一景区离成像器712有多远。具体而言,可将成像器712配置成对沿第三主射线接收的光具有第一焦点特性,而对沿第四主射线接收的光具有第二焦点特性。作第三组测量以产生第四测量值,其中第三组测量包括至少测量一次来自第一景区的第三光线束的强度(步骤110)。第三射线束沿第三主射线接收。摄像机的第一焦点特性包括目标聚焦的第一焦距。转动或平移成像器712(步骤112),作第四组测量而产生第五测量值(步骤114),其中第四组测量包括至少测量一次第一景区第四射线束的强度。第四组信号沿第四主射线接收。若沿第四主射线成像,则摄像机的第二焦点特性包括目标聚焦的第二焦距。第二焦距与第一焦距不同。
对于应用强度灵敏度特性不均一的成像器拼图,通过获取摄像机的强度灵敏度特性如何跨越视场而变化的估值有利于对成像器做校正。一种校正技术是用成像器捕获各种不同的景像与景部,再将检测器各部分如各检测器单元产生的测量值相加或求均,结果是一组相对的和/或标定的值,代表沿各主射线的成像器特性。图2示出的一例校正方法100可用于图1的方法。与图1所示方法的其他步骤一样,图2所示一系列步骤100由在成像器712的参考帧中接收第一与第二主射线的成像器执行。成像器712测量第一组第一主射线射线束(如光线束)的强度,产生第一组校正测量值(步骤202),用于确定成像器712对沿第一主射线接收的信号的第一强度灵敏度特性的第一估值(步骤204)。该第一估值通过计算第一组校正测量值的总和和/或平均值而确定。成像器712还测量第二组第二主射线射线束的强度,产生第二组校正测量值(步骤206),用于确定成像器712对沿第二主射线接收的信号的第二强度灵敏度特性的第二估值(步骤208)。第二估算通过计算第二组校正测量值的总和和/或平均值确定。
此外,当景区跨越视场运行时,通过对它进行跟踪可以校正成像器的强度灵敏度特性。例如,图3示出的校正方法300就应用了沿成像器接收的多束主射线从选择的景部接收的射线束(如光线束)的测量值。图3的校正方法300可用于图1的拼图方法。在图示校正方法300中,成像器712作第三组测量,产生第四测量值(步骤302),其中包括至少测量一次第二景区第三光线束的强度。第三光线束沿成像器接收的第一主射线接收。然后,成像器转动或平移,使成像器沿其第二主射线接收来自第二景区的光线(步骤304)。成像器作第四组测量,产生第五测量值(步骤306),其中第四组测量包括至少测量一次第二景区第四光线束的强度。第四光线束沿成像器接收的第二主射线接收。第四与第五测量值用于评估成像器712的第一强度灵敏度特性(即成像器对第一主射线光线束的灵敏度)和该成像器的第二强度灵敏度特性(即成像器相对于沿第二主射线接收的光线的灵敏度)之间的数学关系(步骤308)。该数学关系通过计算第四与第五测量值之差或它们的比率进行评估。
上述校正法可进一步理解如下。考虑一种只沿x轴改变透射比的中性密度遮蔽M(x)。某景点在图象k中表示为像点x_k,像点x_k的线性化强度为ĝ_k;同一景点在图象p中表示为x_p,像点x_p的线性化强度为ĝ_p。这两点都遵循下述关系:

M(x_k)ĝ_p − M(x_p)ĝ_k = 0.    (1)
跟踪若干图象中的众多景点,可得出许多遮蔽在各像素x应遵循的公式,这些公式可用矩阵形式表示为FM=0,其中M是遮蔽在各采样位置取值构成的矢量;F的每一行对应一组点的对应关系,在列x_k处取值ĝ_p,在列x_p处取值−ĝ_k,其余元素为零。    (2)
帧数标为p。有些点可能没有可靠的数据,因而采用附加的平滑度公式有利于使解规范化。例如,可通过惩罚M的二阶差分来寻求LM=0,其中L为离散二阶差分矩阵,每行形如[… 1 −2 1 …]。    (3)
上述公式通常相互矛盾,因而该算法利用最小二乘法、耐用统计法或其他方法寻找优化解。对最小二乘法,优化解为:

M̂ = arg min_M (M′A′AM),    (4)

其中

A = [F; βL],    (5)

而β是对不一致数据的补偿与对不平滑解的补偿之间的相对加权参数。用奇异值分解(SVD)求出非零解,再将M̂的最大值设为1。根据上述公式,将M的协方差矩阵估算为:

Cov(M) = (A′A)⁻¹ M̂′A′AM̂ (n_r + 1 − l)⁻¹    (6)
式中l为M的元数,n_r是A的行数。可以将它视作加权的最小二乘问题:以β加权属于L的行,而属于F的行通常对更强的像素(即更大的g)更准确。这相当于先将各行归一化,然后对每行r加权。相应地,算法对每行r取权值Σ_c A²(r,c),即按该行元素的平方和加权。
Cov(M)的对角线给出M的方差,由此得到M̂的置信区间。注意,该公式不在logM域内,因而并不严厉补偿极低M处数据之间的相对不一致,或在低M处可能相对显著的扰动。作为最终的后处理步骤,还要平滑M̂,这主要对强烈光衰减区域的估算有影响。
应指出,实际上遮蔽自校正以同一图象序列为基础,该图象序列最终经处理而得到代表景象的图象,此时遮蔽在点x的估值在统计上并非与在特定帧中的点x测得的信号无关。但要指出,根据对若干帧中众多点的跟踪,M在各点的估值受到上百甚至上万个公式(A的行)的影响。因此在配准与融合图象时,为实用起见,可假定每次特定采样的信号与估算的遮蔽无关。
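上述自校正可归结为一个线性最小二乘问题:由对应点按公式(1)构造F,由二阶差分平滑度约束构造L,叠成A后取其最小奇异矢量,再把最大值归一化为1。以下为示意性的Python草图(numpy),其接口、合成数据与β取值均为说明性假设。

```python
import numpy as np

def estimate_mask(pairs, n, beta=0.1):
    """由对应点自校正一维遮蔽M。
    pairs: [(xk, gk, xp, gp)], 同一景点在两帧中的像素位置与线性化强度。
    n: 遮蔽的采样点数。返回最大值归一化为1的遮蔽估值。"""
    F = np.zeros((len(pairs), n))
    for r, (xk, gk, xp, gp) in enumerate(pairs):
        F[r, xk] = gp                 # 公式(1): M(xk)·gp − M(xp)·gk = 0
        F[r, xp] -= gk
    L = np.zeros((n - 2, n))          # 离散二阶差分, 惩罚不平滑的解
    for r in range(n - 2):
        L[r, r:r + 3] = [1.0, -2.0, 1.0]
    A = np.vstack([F, beta * L])      # 数据项与平滑项叠加
    _, _, Vt = np.linalg.svd(A)       # 最小奇异矢量即最小二乘意义下的非零解
    M = np.abs(Vt[-1])                # 遮蔽为正, 消去符号不定性
    return M / M.max()                # 将最大值设为1

# 合成数据: 线性遮蔽 + 随机景象强度, 三种帧间位移的对应点
n = 16
M_true = 1.0 - 0.05 * np.arange(n)
rng = np.random.default_rng(0)
I_s = rng.uniform(10.0, 200.0, size=n)
pairs = [(x, M_true[x] * I_s[x], x + t, M_true[x + t] * I_s[x])
         for t in (1, 2, 3) for x in range(n - t)]
M_hat = estimate_mask(pairs, n)
```

对无噪声的合成数据,最小奇异矢量正比于真实遮蔽,归一化后即可复原。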
本发明的自校正方法已作过实验测试,测试应用的市售线性可变密度滤波器,长3cm,固定于线性CCD摄像机25mm透镜的前方约30cm。滤波器的最大密度为2(相当于衰减100倍),尽管有效遮蔽特性因系统的附加渐晕效应而具有较宽范围。在视场中衰减最小的部分,M接近不变。摄像机在帧间转动,各点越过视场成像14次。利用M的粗估值,图象按下述配准方法配准。接着,根据随机对应不饱和点与非暗点,产生5万多个公式,用于以614个像素的分辨率确定遮蔽特性。图36A示出上述自校正方法产生的估算遮蔽函数,其对数示于图36B。
对多项式、S形等参数曲线或仿样曲线的数据,也可把M确定为最佳拟合解。或者,除了应用MSE判据外,可以应用其他优化法,如耐用统计法与迭代投影法等。由于M为相乘,所以可将logM用作相加参数,结合I在各景点的估值利用线性优化形式得以优化。
此外,若景象I只沿x轴变化(一维信号),而且各“帧”p相对球坐标系平移t_p,则方差和为:

Σ_p Σ_x (ĝ_p(x) − M(x)I(x + t_p))²,    (7)

其中ĝ_p(x)不饱和。
该算法对平移参数tp、遮蔽M与变量I优化了上述误差函数。作为优化过程部分作调整的其他参数包括:(1)M的平滑度(即M参数化所需的系数数量),和(2)I对帧间的特定移动量(即tp与tp+1之差)的平滑度。I与M较佳地限于正数。公式(7)里的方差和作为M的函数而加权(使问题非线性),或在对数域中计算该误差,使上述公式对未知量I与M成为线性优化问题。
遮蔽还可以迭代校正,例如校正方法以遮蔽各点的初始估值开始,可用初始评估的遮蔽函数估算各像素强度的融合值。融合像值利用上述校正法对遮蔽计算改进的估值,而改进的遮蔽估值可对融合图象导出较佳的估值,依此类推。
根据本发明,具有空间变化偏振灵敏度特性的成像器可获得景象发射的辐射(如光或其他电磁辐射)的偏振信息,不管该成像器是否也具有空间变化强度灵敏度特性。图4示出一例这样的方法。图示方法400应用的成像器712,对沿成像器712第一主射线接收的辐射信号组具有第一偏振灵敏度特性,对沿成像器712第二主射线接收的辐射具有第二偏振灵敏度特性。例如,成像器712包括带不均一偏振滤波器的摄像机702,该滤波器接纳在第一部分视场中有第一偏振角的光,并且接纳在第二部分视场中有第二偏振角的光。图30A示出一例这样的滤波器。图示滤波器3010包括三个偏振部分3002、3004与3006,分别透射偏振角为3020、3022与3024的光。滤波器3010还包括可通过任何偏振光的部分3008。图30A的滤波器3010可在摄像机702内部或外部安装,形成偏振灵敏度特性在视场左右两侧之间变化的成像器712。当成像器跨越景象随动拍摄而捕获景象的多幅快照或帧时,就通过空间变化偏振滤波器3010的部分3002、3004、3006与3008中的一个或多个捕获各景点。
空间变化偏振滤波器可用几种不同方法制作,如在图30B中,这种偏振滤波器3012从盘3018中切割,透射光的偏振角位于与盘3018水平的方向。在图30C中,空间变化偏振滤波器3016从盘3014中切割,透射偏振角与盘成径向的光。
包括上述滤波器之一的成像器712通常具有空间变化偏振灵敏度特性。在任何场合中,不管成像器如何构成,在图4的方法400中,都用成像器712作第一组测量而产生第一测量值(步骤402),其中包括至少测量一次第一景区第一光线束中至少一个所选偏振分量的强度。第一光线束的主射线对应于成像器接收的第一主射线。成像器对第一主射线光线束的第一偏振灵敏度特性,包括对偏振角在第一角范围外的光分量降低的灵敏度。选择的第一信号偏振分量由成像器检测,因其偏振角在第一角范围内。为捕获景象的不同部分,要转动或平移成像器(步骤404)。用成像器作第二组测量而产生第二测量值(步骤406),其中包括至少测量一次第一景区第二光线束中至少一个所选偏振分量的强度。第二光线束的主射线对应于成像器接收的第二主射线。成像器对第二主射线光线束的第二偏振灵敏度特性包括对偏振角在第二角范围外的信号分量降低的灵敏度,选择的第二光线束的偏振分量要予以检测,因其偏振角在第二角范围内。
捕获了一组快照和/或视频帧后,以几种不同的方法分析得到的图象数据。希望以尽可能高的精确度测量景象特性,还希望以最有效的可能方法作必要的测量与计算。在采集的数据量与数据捕获和分析所需的时间与运算能力之间有一折衷方案。通过在单位视角变化内获取更多的快照或帧,可实现更大的精密度,不过更大数据量的捕获与分析更加耗时费钱。捕获大量图象的回报正在减少,因为在实践中,被测的景象特性的自由度通常有限,因而景象的过分采样会导致冗余数据。再者,外装滤波器的效果一般有点模糊,因为滤波器散焦了。因而应用快照之间极小的角度变化并不能获得足够的附加信息来证明捕获与处理数据所需的附加时间与费用是值得的。因此,本发明的拼图法应该较佳地平衡附加信息与附加时间费用之间的折衷。下面针对应用对强度、波长、偏振和深度具有空间变化灵敏度特性的成像器的拼图法,讨论单位视角变化的有效实用帧速率的测定方法。
例如,考虑一由装在摄像机上的遮蔽形成其空间变化强度灵敏度特性的示例成像器。设遮蔽的透射比为M,检测器无遮蔽的光强为I,则滤波后落在检测器上的光为:
g(x,y)=M(x,y)I(x,y).                 (8)
观看高反差景象时,一般用幅值或倍频程定义强度,为此,通常以倍频程来配置摄像机孔径“f光圈”,使每次“光圈”增加对应于加倍的被测强度。在数码相机传感器中,这在测量的二进制表示法中对应于移1位。例如,若8位摄像机在图象某一像素位置把光强测量为00011010,则增大一个光圈将在第二图象中对应于读数0011010(0),其中新的最低位是新图象添加的信息。
例如,在一种实现偶数幅值分度的优化遮蔽中,衰减倍频程跨越视场线性的变化,即衰减呈指数变化,于是log2M(x)正比于x。在这种结构中,固定扫描增量(如成像器观察方向的一系列等量变化)将产生被测强度幅值的固定变化,同等地采样所有的强度范围。在透镜前方以某一距离将线性可变密度滤波器装到摄像机,可近似实现这一特性。但要指出,鉴于晕像、透视与透镜失真,在logM(x)中不能准确地保持滤波器密度的线性度。
设I是滤波器透射比最大(即M=1)时落在检测器上的光强(即幅照度)。特定的线性可变滤波器与特定的检测器,共同决定了可被系统不饱和检测的景象幅照度的最小与最大界限。设检测器能检测的高于该检测器噪声(暗底)的最小幅照度(对指定的摄像机设定而言)为I_min^detector,这决定了整个系统能检测的最小幅照度。设检测器能不饱和测量的最大幅照度为I_max^detector。于是,检测器以倍频程计的光学动态范围为:

DR_detector = log₂(I_max^detector / I_min^detector)    (9)

通常DR_detector=8位。整个系统能不饱和检测的最大幅照度,是检测器在最强衰减(即最小遮蔽M)下得出的最大输出:I_max^system = I_max^detector / min M。
因此,系统的总光学动态范围为:

DR_system = log₂(I_max^system / I_min^detector) = DR_detector − log₂(min M) = DR_detector + log₂[max(1/M)]    (10)
关于如何最有效地采样景象信息,可假定捕获的图象的幅照度范围在I_min^detector与I_max^system之间。改变观察方向,相当于改变通过其观看某一点的孔径光圈(允许进入检测器的光功率),每改变一次全光圈(即衰减改变2倍),相当于检测器动态范围DR_detector内测量值的二进制表示法中移1位。对某些场合,在扫描中丢失最少信息可能较佳,此时在完成扫描时,不会“遗漏”每个帧增量获得的位。但对不要求冗余的场合而言,应用“有损”扫描就行了:完成扫描时,只要求无饱和点,而且所有点都高于检测器的阈值(只要I ≥ I_min^detector)。这种方法可实现最有效的扫描,每次增量可最大程度地扩展测量的光学动态范围。
例如考虑某一成像系统,其中log2[max(1/M)]<DRdetector,DRdetector=8位,minM=1/8,所以DRsystem=11位。设某景点得出的二进制值在M=1时为11111111,在M=1/2时为10111001,则前者测量饱和,而后者测量不饱和。相应地,M=1/8时(再右移2位),测量得出00101110。这样,幅照度的11位表示是0010111001(0)。假定景象包含系统整个光学动态范围的幅照度值,则保证信息(位)丢失最小的最小冗余扫描是一种以一个倍频程增量对各点作四次采样的扫描。若使用了线性可变滤波器,则在下列二者之间存在线性关系:(1)连续帧之间的位移,和(2)特定景区被成像的衰减量。
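上例中的移位关系可以用整数运算直接验证:8位检测器在M=1、1/2、1/8下的读数,相当于同一11位真值的不同截取。以下为示意性Python草图,函数名为说明性假设。

```python
def reading(irradiance, m_shift, depth=8):
    """模拟8位检测器在遮蔽M=2**(-m_shift)下的读数(带饱和)。"""
    value = irradiance >> m_shift        # 衰减2**m_shift倍 = 右移m_shift位
    return min(value, (1 << depth) - 1)  # 检测器饱和于255

true_value = 0b00101110010               # 假设的11位真实幅照度(=370)
r_full = reading(true_value, 0)          # M=1:   饱和, 读数255
r_half = reading(true_value, 1)          # M=1/2: 读数0b10111001
r_8th = reading(true_value, 3)           # M=1/8: 读数0b00101110
```

三个读数正是正文示例中的三个二进制值;由不饱和的读数即可重建11位真值。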
对动态范围要求不高的场合,例如可对各点只作单次不饱和测量,若 I ≥ I min det ector ,测量值就高于检测器的阈值。此时,只用M=1与M=1/8就够了。这样可对上述示例像素产生值00101110(000),而对景象的暗点产生例如00000000011的值。在这种系统中,任一点可用遮蔽的一个极值测量,然后用另一极值测量,可以跳过中间遮蔽值。换言之,对各点采样两次就够了。帧间增量可以很大,只要它小于帧长一半,而且M=1与M=1/8之间的过渡区小于该帧间增量。
现在考虑一log₂[max(1/M)]≥DR_detector的系统。包括遮蔽在内,系统的光学动态范围至少是检测器本身的两倍。以一个倍频程增量对每点采样,可保证丢失的位最少。对有损但最有效的扫描而言,最好在测量每点时使连续扫描增量之间的强度信息不重叠。例如,若DR_detector=8位而DR_system=16位,则每点两次采样就足以覆盖整个值范围:第一次采样不饱和地检测像值小于256的各点,第二次采样不饱和地捕获像值大于255而小于65536的各点。
通常,对“无损失”的有效扫描,各点应采样1+log2[max(1/M)]次;对最有效扫描,各点应采样ξ=⌈DRsystem/DRdetector⌉次,ξ总是向上舍入到最接近的整数。实践中,希望根据经验应用上述数字,并且应用较为稠密的采样速率,因为与较稠密的采样相关的冗余度可使幅照度估算不那么嘈杂,且便于稳定地配准图象序列内的图象。还要指出,上述景象采样方法不是景象采样的唯一方法,还可采用其他周期和/或非周期的采样间隔。
本发明的拼图方法还可在强度域中实现“超分辨率”,因为平滑变化的遮蔽可起到模拟计算设备的作用,因而在许多场合中能提供优于单次量化图象测量的量化精度。例如,考虑一种其整数量化器区分不出像值96.8与97.2的成像器,即这两个像值都将产生同一输出值97。运用M=1/2将信号衰减一半,成像器分别输出值48与49,因而增强了鉴别力。在另一例中,利用直到M=2⁻⁶的检测结果(即7幅图象),可区分出95.8和96.2。一般而言,应用的图象越多,融合估值的强度分辨率越高。这是对景象比上述最小速率更稠密地采样的另一个优点。
对于测量景象光谱特性的拼图法,所选视角的优化间隔取决于检测器各像素位置的强度与波长函数相对于波长的变化速度,这类似于多类型信号的采样问题:若采样少得难以检测信号扰动,就会出现混淆。因此,若各景点或景区的光谱平滑,如黑体幅射场合,就可稀疏地采样。另一方面,若光谱含有细谱线,如原子蒸气的放射与吸收谱线,该光谱应稠密地采样,相应地,成像器在帧间的转动或平移量要很小。
然而,即使来自景象的光谱具有窄谱带,在测量这些谱带时,通过使外装滤波器散焦模糊,也可平滑测得的分布曲线。例如,研究一种线性变化的滤谱器。可将模糊的测量值模拟成谱s(λ)与宽度Δλblur的窗函数h(λ)的卷积。设S(uλ)与H(uλ)分别是s(λ)与h(λ)的富里叶变换,H是截止频率≈1/Δλblur的低通滤波器。若滤波器的发射特性H在频率上无限伸展,可将该截止频率定义为:(1)信号能量极低的频率,(2)H的第一个零点,或(3)正比于函数H的标准偏差的量。在任一情况下,该“截止”频率将≈1/Δλblur。要求的采样间隔为
Δλ_sample ≈ Δλ_blur/2       (11)
可用系统的物理量纲表示上述采样判据。对于长为L的线性可变滤波器而言,

λ(x) = (λ_max + λ_min)/2 + (B/L)x,  x ∈ [−L/2, L/2],    (12)

其中λ_max与λ_min分别是整个滤波器通过的最大与最小波长,B ≡ λ_max − λ_min是滤波器总带宽。若滤波器的散焦模糊模拟为宽度Δx_blur的窗核的卷积,则Δλ_blur = BΔx_blur/L,因而:

Δλ_sample ≈ BΔx_blur / (2L)    (13)
例如,研究图8的成像器712。若成像器712在无限远聚焦,则外接滤波器704被核模糊,该核的宽度等于通过孔径710的目标光束(如光束810、812或814)的宽度。因此,对透镜802与外接滤波器704的图示结构而言,Δx_blur = D,这里的D为孔径710的直径。因此:

Δλ_sample ≈ BD / (2L)    (14)
上述结果可定性说明如下。若带B在长滤波器上伸展,使模糊核的大小不重要,即D<<L,则原始光谱(可以有锐谱线)的细节得以保留,因而采样较佳地应稠密一些。定量地说,从公式(14)可看出,当D<<L时,采样周期Δλsample变小了。
假定成像器712在图象采集之间以角增量Δφ_sample转动,设滤波器704与转轴的距离为A,转轴例如可以是图8所示系统的投射中心点O。而且,设滤波器704与光轴垂直。于是,从景象传播到投射中心的每条光线在采样之间位移Δx_sample ≈ AΔφ_sample(假定视场全角θ很小,即sinθ≈θ)。参照公式(11),

Δλ_sample ≈ (B/L)Δx_sample ≈ (B/L)AΔφ_sample    (15)

比较公式(14)与(15),并注意f光圈数f# ≡ F/D(F是焦距),可发现:

Δφ_sample ≈ D/(2A) = F/(2Af#),    (16)
与滤波器的量纲或总带宽无关。对应于公式(16)的定性说明是:较大的孔径D引起较大的散焦模糊,相应地要求较少的采样,因而可增大Δφ_sample。再者,对给定的转动增量,滤波器与投射中心之间较小的距离A,导致通过投射中心O的光线的波长λ的变化较小,从而减小了得到的有效波长采样周期。因此对期望的Δλ_sample而言,Δφ_sample较佳地随A减小而增大。
公式(16)示出应用外装滤波器与把滤波器直接放在检测器阵列上的优点:利用外装滤波器,用户只要改变透镜孔径大小或外装滤波器与透镜的距离,就能选择期望的采样速率。若应用对谱内容要求高分辨率,可将滤波器置于远离投射中心,而且孔径很小。另一方面,若用户要快扫景象,须对指定的视场摄较少的帧,则ΔΦsample应较大,因而孔径要大,而且/或者滤波器要靠近透镜。
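公式(16)的数值含义可以直接演算。以下为示意性草图,其中F=25mm、f#=4、A=100mm等数值均为假设,仅说明量级与可调性。

```python
import math

def sampling_increment(F_mm, f_number, A_mm):
    """Δφ_sample ≈ D/(2A) = F/(2·A·f#), 返回弧度。"""
    D = F_mm / f_number        # 孔径直径 D = F/f#
    return D / (2.0 * A_mm)

# 假设 F=25mm, f#=4 (即D=6.25mm), 滤波器距投射中心 A=100mm
dphi = sampling_increment(25.0, 4.0, 100.0)   # 约0.031rad, 即约1.8°
```

缩小孔径(增大f#)或移远滤波器(增大A)都会减小Δφ_sample,即要求更稠密的采样,这正是正文所述用户可调采样速率的来源。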
另一例研究要求360°全景景观的应用,并假定使用的成像器包括一长度为L、能覆盖成像器整个视场的滤波器。检测器长度标为L_d,则F/A = L_d/L,因此:

Δφ_sample ≈ (1/(2f#))·(L_d/L)    (17)

为捕获360°全景,要求的图象数为:

N ≈ 2π/Δφ_sample ≈ 4πf#L/L_d    (18)
须指出,上述分析虽然假设了单一透镜的摄像系统、小角度和作为波长函数呈线性的外部带通滤波器遮蔽,但是上述原理通常可导出其他系统的类似关系。再者,对单透镜成像器或任何其他系统而言,采样间隔可以不同,并且实际上可以是非周期性的。
此外,为测定景象发射幅射(如光)的偏振特性,可分析由具有空间变化偏振灵敏度特性的成像器所捕获的一组图象。来自景点的光的偏振态有四种自由度,即4个Stokes参数,已为众所周知。因此,若检测器对景点发射光作四次或更多次测量,每次测量有不同的偏振滤波,就能评估光的偏振态。在众多场合中,可略去椭圆与圆偏振态,只剩三种自由度:强度、部分偏振和偏振平面(即偏振角)。将线性偏振器调谐成通过与光轴成α角的偏振分量,使入射光偏振平面成θ角,设入射光强度为I,其部分偏振为P,则通过滤波器的光强为:
g=C+Acos[2(α-θ)],                  (19)
式中C=I/2,A=PC。此时,三次测量就足以评估光的偏振态。例如,若各次测量(对应于帧p)之间仅偏振器的角α_p变化,则

[1  cos(2α_p)  sin(2α_p)]·[C, A_c, A_s]′ = g_p,    (20)

式中A_c = Acos(2θ),A_s = Asin(2θ)。因而:

M·[C, A_c, A_s]′ = g⃗,    (21)

其中

M = [1 cos(2α_1) sin(2α_1); 1 cos(2α_2) sin(2α_2); … ; 1 cos(2α_m) sin(2α_m)],    (22)

而m是测量该景点的帧数,而且

g⃗ = [g_1, g_2, …, g_m]′.    (23)

略去椭圆偏振分量,取m=3,公式(21)变为:

[C, A_c, A_s]′ = M⁻¹g⃗    (24)
由公式(24)可计算光强值I、部分偏振P和偏振平面角θ。例如,若成像器应用覆盖视场φ_FOV的滤波器,而且偏振滤波器跨越视场φ_FOV而变化,则在每帧之间,成像器最好将其观察方向改变Δφ_sample ≈ φ_FOV/3。
须指出,在某些情况下,对来自各景点的偏振光作两次测量,足以得到该景象的偏振信息,若有偏振的先验信息,则更是如此。这类先验信息包括景象中出现的典型景物的偏振特性信息。
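上述线性系统可按最小二乘直接求解:解出C、A_c、A_s后,即得I=2C、P=A/C、θ=arctan2(A_s, A_c)/2。以下为示意性Python草图(numpy),测量值为按上式合成的假设数据。

```python
import numpy as np

def recover_polarization(alphas, g):
    """由m>=3个偏振器角alphas(弧度)下的强度测量g,
    解出C、A_c、A_s, 再恢复强度I、部分偏振P与偏振平面角theta。"""
    alphas = np.asarray(alphas, dtype=float)
    M = np.column_stack([np.ones(alphas.size),
                         np.cos(2 * alphas),
                         np.sin(2 * alphas)])
    C, Ac, As = np.linalg.lstsq(M, np.asarray(g, dtype=float), rcond=None)[0]
    A = np.hypot(Ac, As)
    return 2.0 * C, A / C, 0.5 * np.arctan2(As, Ac)   # I, P, theta

# 合成数据: I=100 (C=50), P=0.5 (A=25), theta=0.3rad, 偏振器角各差60°
alphas = [0.0, np.pi / 3, 2 * np.pi / 3]
C0, A0 = 50.0, 25.0
g = [C0 + A0 * np.cos(2 * (a - 0.3)) for a in alphas]
I, P, theta = recover_polarization(alphas, g)
```

m>3时同一式给出最小二乘解,冗余测量可抑制噪声。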
被检辐射的偏振角不一定是帧间变化的唯一灵敏度特性,例如辐射总衰减(影响I)也能变化。另外,成像器使用的偏振器不一定完全偏振,可以只偏重一个偏振分量。此时,分析相似,但矩阵M每行必须归一化,以补偿该行相关的衰减。在一些常规系统中,测量偏振时用偏振分束器分裂光束,并将分离的分量送给分离的传感器。其他系统应用了旋转线性偏振器或液晶单元一类的电控偏振设备。但根据本发明,偏振测量可用具有空间变化偏振灵敏度特性的成像器如配备空间变化偏振滤波器的摄像机执行。这种成像器作偏振测量时,不用附加传感器,成像器的任一内部部件在图象采集时无须移动,也不用电子滤波器。免用附加移动的内部部件,减小了能耗,提高了可靠性。
在实施本发明的拼图法,应用的成像器712最好包括一种图7和图8所示摄像机702外装的空间不均一滤波器704,因为这种结构便于成像器712通过更改或置换外装滤波器704而修改。带外装滤波器的其他成像器实例示于图9和10。图9的成像器712包括弯曲的外装空间变化滤波器902,其每一部分与孔径710的投影中心O的距离几乎一样。在图9系统中,滤波器902投射到孔径710上的任意两个同尺寸部分,如部分904与906,几乎有同一尺寸,因而成像器712的灵敏度特性主要取决于形成滤波器902的材料的特性和/或厚度。
图10系统包括经定向的滤波器1002,滤波器1002与孔径710投射中心O的距离随视角而变。尤其是滤波器1002的第一部分1004比第二部分1006更接近投射中心O。如图10所示,部分1006定向成与通过部分1006和投射中心O的主射线808成一角度1010,部分1004定向成与通过部分1004和投射中心O的主射线804成一不同角度1008。角1010小于角1008,因而尽管部分1006的面积大于部分1004,但是这两个部分1004与1006在孔径710的投射中心O上具有同样的投射面积。
在衰减滤减器的情况下,滤波器特定部分的有效衰减量随光通过滤波器的角的锐度而增大。在干扰滤波器等波选滤波器的情况中,滤波器发射的波长取决于入射角,因而外装滤波器尤其有利,因为若滤波器是柔性或可拆卸安装的,就容易修正其角定向,所以变于修正成像器的特性。
虽然以上强调了应用带外装滤波器的摄像机,但是空间变化滤波器不一定外装于摄像机,如图11示出的成像器712将其滤波器1102直接装在检测器708上,其优点在于,由于检测器708处于透镜802的聚焦平面,而且滤波器1102极靠近检测器708,因而滤波器1102的特性模糊极少。结果,成像器712的特性由滤波器1102更精确地限定,从而可应用滤波特性变化很微细的滤波器。然而,这一优点的代价是更改滤波器的灵活性丧失了,一般与带内部滤波器而不用外装滤波器的摄像机有关。
图12示出的摄像机中,滤波器1202位于摄像机内部,但不与传感器708直接接触。图12的结构可使用弯曲或倾斜的滤波器。
在图13所示带复合透镜的摄像系统中,空间变化滤波器1302可装在复合透镜的元件1306与1304之间。在图13实例中,物镜1306形成目标图象。另一透镜1304将图象透射到检测器708上,滤波器1302较佳地靠近透镜1304。所有摄像机透镜几乎都是用于校正像差等的复合型,通常有两个以上元件。
如图14所示,带复合透镜的成像器还可包括位于物镜1404聚焦平面的扩散器1406。具有空间变化特性的滤波器1402尽量靠近扩散器1406。物镜1404在扩散器1406上形成由空间变化滤波器1402修正的聚焦图象。另一透镜1408将聚焦图象从扩散器1406投射到检测器708。
本发明的成像器可以配置成利用有空间变化反射比的反射器接收景象的辐射(如光)线。在这种系统中,图15的示例用具有空间变化反射比的反射器1502将景象的光信号810、812与814(即从点A、B、C)反射入摄像机孔径710。若空间变化反射比反射器1502部分透射(即发射某种光),它可以置于摄像机孔径710与参照目标1506之间,而参照目标1506可校正系统或吸收摄像机702的杂散反射,例如已知的校正技术可利用已知光源校正摄像机中检测器的特性。图15的结构在功能上相当于虚拟摄像机1504,后者与实际摄像机702的定向角度不一,能直接有效地接收光线束810、812与814,不靠反射。
在成像器712中使用外装反射器1502的另一优点是反射器1502可任意弯曲,便于控制成像器712的视场和/或放大倍数。
图16示出的成像器712,有一位于摄像机702外部的滤波器1602和内部的另一滤波器1604。
图17的成像器712具有在摄像机702外部的滤波器1702与参照目标1780,另一滤波器1704在摄像机702的光学元件1710内,还有一个滤波器1706在摄像机702内部靠近检测器708。
本发明的成像器,如图17的成像器712可以平移而扫描景象706,如图18所示。随着成像器712沿图示方向平移,景象706的任一特定点,如图18的A点,可在成像器712的参考帧里沿多条主射线804、806与808成像。另如图19所示,成像器712不一定直线移动,而是能沿路径1902运行,而且还可以同时平移和转动。此外,尽管图7~18的滤波器主要图示成从上到下变化,但是本发明的滤波器704能沿任何方向变化,例如滤波器704的特性能水平变化,此时拼图法包括使成像器712绕其光轴转动以捕获图象作处理。再者,虽然滤波器704能以不弯曲和/或不移动的方式装到摄像机上,但是也可使用相对摄像机702弯曲或移动的滤波器。在这种系统中,滤波器可在一系统图象的各帧之间弯曲或移动。
如图20A与20B所示,本发明的拼图技术能扩展图象测量的空间范围(即视场的宽度或高度)和总强度范围(和相应的动态范围)。图20A示出单帧2002与拼图2004的空间范围的差别,拼图2004由多个帧构成。图20B示出拼图2004扩展的空间范围与强度范围。实际上,由于拼图2004边缘附近的像素被采样较少次数,所以其总强度范围比拼图2004中心像素的强度范围小。该效应可从图21看出,在靠近拼图2004空间范围边缘的区域2102中,总强度范围缩小了。但在远离边缘的部分2104内,强度范围均一地更大。这一效应类似于人眼的中央凹:视场中心附近的分辨率更高,有利于景象中心部分比边缘部分更受关注的场合。
如上所述,由于外装滤波器相对于摄像机检测器容易偏离焦点,所以它对摄像机特性的作用容易变模糊。如在图22A所示的滤波器密度函数2202中,密度D在第一位置2214与第二位置2216之间跨过滤波器表面一维呈线性变化,但在该范围外不变。透射比T为10^(−D),因而所得成像器相应的强度灵敏度特性函数在整个滤波器上呈指数变化。由于滤波器密度函数2202的模糊作用,滤波器的有效密度函数2204有圆角。同样如图22B所示,有阶跃状密度函数2206的滤波器具有带圆角的有效密度函数2208。如图22C所示,摄像机外装滤波器可以具有任一种任选的密度函数2210,但若函数2210有锐角,则有效函数2212就有圆角。
对于空间变化衰减遮蔽函数f,一般可将有效遮蔽函数M模拟为:
M(x,y)=f(x,y)*h(x,y)                 (25)
式中h(x,y)是系统对远景聚焦时,摄像机对滤波器那样近的目标的散焦模糊点扩展函数(PSF)。对于圆对称的PSF,遮蔽有效地是x的一维函数,即f与h的Abel变换的卷积。例如,若把核模拟为高斯函数,则h是带标准偏差r的高斯函数,M(x) = erf(−x/r)。若从适当位置通过滤波器观看,由于M(x)可取0与1之间的任一值,任一景点在理论上都可不饱和地成像,而与该点亮度无关。因此,该简型系统在理论上有无限大的动态范围。
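公式(25)的模糊作用可数值验证:将阶跃状遮蔽f与高斯核h卷积,得到的有效遮蔽M在过渡处呈圆角。以下为示意性草图(numpy),核宽度与遮蔽取值均为假设。

```python
import numpy as np

def effective_mask(f, sigma):
    """有效遮蔽 M = f * h, h为标准偏差sigma(以采样点计)的高斯核。"""
    half = int(4 * sigma)
    t = np.arange(-half, half + 1)
    h = np.exp(-t**2 / (2.0 * sigma**2))
    h /= h.sum()                          # 归一化核
    fpad = np.pad(f, half, mode="edge")   # 边界按边缘值延拓
    return np.convolve(fpad, h, mode="valid")

f = np.where(np.arange(200) < 100, 1.0, 0.1)   # 阶跃状遮蔽(透射比)
M = effective_mask(f, sigma=5.0)               # 阶跃被圆化为平滑过渡带
```

sigma对应滤波器离焦程度:滤波器离摄像机焦面越远,sigma越大,有效遮蔽的过渡带越宽,这正是正文所述通过移动外装元件调节成像器特性的机理。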
以图23的简型遮断器2302为例,可进一步说明外装附件的模糊作用。虽然遮断器2302本身具有明显的急剧空间变化特性,但它对摄像机702特性的作用在摄像机702视场内是渐变的,如图24A与24B所示。图24A示出简型遮断器2302更圆滑的有效密度函数2402,图24B示出简型遮断器2302的有效密度函数的对数2406。此外,图24A还示出了对线性变化密度滤波器的有效密度函数2404的模糊作用,图24B示出线性变化密度滤波器有效密度函数2404的对数2408,对数函数2408是直线。
上述易于在带外装光学元件的成像器中出现的模糊作用,对成像器的可调性很有利,因为只要沿成像器的光轴改变该光学元件的位置,就可更改成像器的光学特性。例如,简型遮断器2302有效密度函数2402中间附近的斜率(见图24A与23),通过将遮断器2302移得更接近摄像机702的焦距而增大,而通过移得更远离来使该焦距减小。
尽管以上强调了导致具有空间变化、波长中性强度灵敏度特性的成像器的空间变化、波长中性密度滤波器,但是也可应用空间变化滤色器或其他基于波长的滤波器。例如,这种空间变化波长滤波器可以是一种中心波长λ0空间变化的带通滤波器(如干涉滤波器)。图25A示出滤波器上中心波长λ0与位置x的函数2502,该滤波器的中心波长λ0在视场内线性变化。滤波器函数2502产生一种具有波长灵敏度特性函数2504的成像器,函数2504在视场内也近似线性地变化,但有圆角。图25B的一例滤波器,其中心波长特性函数2506在视场内呈阶跃变化。这种滤波器导致的成像器,其波长灵敏度特性函数2508基本上对应于滤波器的阶跃状函数2506,不过因模糊作用而有圆角,因为该滤波器对摄像机有点散焦。如图25C所示,可以使用具有任一随选中心波长函数2510的带通滤波器,但若滤波器的函数2510有锐角,得出的成像器将具有带更多圆角的波长灵敏度特性函数2512。
本发明的成像系统可应用空间变化低通或高通滤波器。例如,图26A~26C示出了低通或高通滤波器上的截止波长λcut与位置x的示例函数2602、2606与2610。与图22A~22C和25A~25C所示的滤波器相似,模糊作用使得用各自滤波器构成的诸成像器各自的波长灵敏度特性函数2604、2608与2612圆化了。
图27进一步示出了模糊作用,图示为检测器灵敏度与光线束波长λ的关系曲线,该光线束的主射线靠近高通滤波器阵列两段的边界。一半束光通过截止波长为λ_cut^(1)的段,另一半束光通过截止波长为λ_cut^(2)的段,检测器的有效截止波长介于λ_cut^(1)与λ_cut^(2)之间。相对于两段各自的过渡带,主射线位于边界上的光束的过渡带更宽。
与上述空间变化衰减滤波器一样,改变滤波器沿成像器光轴的位置,也可调整波长滤波器的有效特性。具体而言,若滤波器更接近摄像机的焦距,其特性变得更锐、不圆和更斜,而随着滤波器移离摄像机焦距,其特性就变得更圆滑而不很斜。实际上,特性2506与2606分别示于图25B与26B的阶跃状波长滤波器阵列,若充分偏离焦点,可用于近似为特性2502与2602分别示于图25A与26A的线性变化滤波器。
本发明的空间变化带通滤波器还可用来提高彩色成像器对鉴别接收自景象的信号波长的精度(即动态范围)。如图28所示,典型的彩色成像器(如彩色摄像机)有三条相当宽的彩色通道2802、2804和2806,可分别接收蓝绿红光。对成像器接一个带较窄通带2808的滤波器,可提高其鉴别波长的能力。例如,若窄带滤波器的通带2808在摄像机较宽的蓝带2802内,则最后被摄像机检出的蓝光2810要比绿红光2812与2814强得多。相应地,由于窄带滤波器特性2808不像原始摄像机特性2802、2804与2806那么宽,测定的色彩比仅将信号定为“蓝”更仔细。窄带滤波器通带2808的中心波长λ0在视场上变化,因而成像器跨越景象作随动拍摄并捕获多幅图象时,各景点在多条色带内成像,各有一不同的中心波长λ0,因此能以更大的色分辨率测定各景点发射光的谱特性。
图29示出被分成三大部分的外接滤色器的用法,蓝色滤波器部分具有蓝色带通特性2904,绿色滤波器部分具有绿色带通特性2906,红色滤波器部分具有红色带通特性2908。三色滤波器阵列外装到彩色摄像机,摄像机的一组三色通道分别由通到摄像机中检测器的蓝绿红光的带通特性2902、2912与2910限定。当成像器跨越景象随动拍摄时,各景部通过外装的蓝绿红色滤波器部分观察并成像,因而各滤波器部分的各蓝绿红特性2904、2906和2908就同摄像机本身的各蓝绿红特性2902、2912和2910一起应用,所以该成像器实际上有9条色带,代表了传感器通道与外装滤波器元件特性所有可能的组合。相应地,摄像机的色分辨率提高了。一般而言,若摄像机本身有Ns条带,而附加的遮蔽有Nm条带,则成像器整体能测量NsNm条带。
根据本发明的另一个方面,具有空间变化聚焦特性的成像器可以跨越景象随动拍摄,而且可根据各景点在成像器712随动拍摄时捕获的成组图象中每幅图象内的聚焦情况,来确定诸景点与成像器712的距离。图31示出成像器712对分别由景象聚焦点3104与散焦点3106发射的光线束3108与3110的响应特性。物镜3102与摄像机702的焦平面3118的距离为ṽ。聚焦景点3104作为单点3114投射到焦平面3118上,但散焦景点3106以直径为d的圆盘3116投射到焦平面3118上。
一般而言,与摄像机702的孔径710的距离为ũ的目标处于聚焦状态;相应地,处于聚焦的任一目标的距离均为ũ。与之相对照,任何散焦的目标都与孔径710有不同的距离(如u)。
成像器经配置,可使目标聚焦的距离ũ在成像器视场内变化。若聚焦特性变化足够大,且可作足够多的快照,则在至少这些快照之一中,所有或大多数景点将处于或接近聚焦状态。测定成像器沿聚焦的目标接收的特定主射线并掌握了成像器的空间变化聚焦特性后,可按下法测定各景点的距离。
任何光学系统都有一个目标聚焦或接近聚焦的距离范围,目标移离该范围就会变模糊。该范围的大小通常称为系统的“景深”。本发明的成像系统把这一距离/深度范围转换成成像器内沿其光轴的位置范围,该位置范围的大小称为“焦深”。在许多场合中,景物所处的深度范围宽于景深,因而某些目标会散焦。参照图31的成像系统,距离u处目标点的模糊直径d为:

d = D·|uF − ṽu + Fṽ| / (Fu),    (26)
式中F为焦距,D为透镜孔径。能以不同的聚焦设定捕获多幅目标图象(如采集期间移动系统内部部件,诸如透镜或检测器)。选择在对应于目标点的像点使图像锐度最大的图象,通常可测出该目标点的聚焦状态。再者,可用“融合”法对整个景象构成完全清晰的图象。这种方法对每个景区或景点在景区或景点最清晰的图象中选择相应的点,再把整组选出的点组合成所有或大部分景点的清晰图象。
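公式(26)可用薄透镜关系直接验证:满足1/u + 1/ṽ = 1/F的目标距离处,模糊直径d为零;偏离该距离则d增大。以下为示意性草图,数值均为假设。

```python
def blur_diameter(u, v_tilde, F, D):
    """模糊直径 d = D·|uF − ṽu + Fṽ|/(F·u), 各量单位一致(如mm)。"""
    return D * abs(u * F - v_tilde * u + F * v_tilde) / (F * u)

F, D = 25.0, 6.25            # 假设的焦距与孔径
v_tilde = 26.0               # 假设的传感器平面位置
# 薄透镜公式 1/u + 1/v = 1/F 给出的聚焦目标距离
u_focus = 1.0 / (1.0 / F - 1.0 / v_tilde)     # = 650mm
```

在u=u_focus处d=0,其他距离处d>0,可据此由一组不同聚焦设定的图象推断深度。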
还可用其他技术测量景点的距离/深度。例如,若通过改变ṽ来改变聚焦设定,并用图象拉普拉斯算子测出锐度,就可将最佳聚焦的ṽ算为:

v = arg max_ṽ |∇²_{x,y} I_ṽ(x,y)|    (27)

式中∇²_{x,y}指图象平面中的拉普拉斯算子,而各图象I_ṽ(x,y)用ṽ参数化,即定义为ṽ的函数。要注意,一旦确定了最佳聚焦平面,就可以用它来推导目标点的深度u。另外,该深度不仅可通过求出最佳聚焦平面推导出来,还可通过基于两个或更多帧运用“散焦深度”法估算模糊直径d而求出。2001年5月8日颁发给Nayar等人的题为“Apparatus and Methods for Determining the Three-Dimensional Shape of an Object Using Active Illumination and Relative Blurring in Two Images Due to Defocus”的美国专利No.6,229,913已详述过该“散焦深度”法,其内容通过引用包括在这里。
图象中某一景点由尺寸对应于不确定位置U(即模糊量)的区域代表,而U由检测器阵列的相邻元件间的像差、衍射模糊或有限距离ΔXgrid造成。若模糊直径d不大于U,可认为该景点在系统的景深内,因为认为聚焦了。由于模糊直径取决于与最佳聚焦态的距离,所以d中的U变化对应于特定的深度(即距离)变化。例如研究这样一种情况,即为在特定的景深间隔内捕获一组图象,要将成像系统某个元件移动Δv。不用物理地改变Δv,可用一块折射率为n的透明介质改变光程长度。为作调整,要改变介质的厚度t或折射率n,实际上这两种方法可一起应用。根据视场内的厚度或折射率变化速率,系统计算帧间横越增量,在选择的景深间隔内产生期望的有效轴向变化。
将整个光学系统移近或移离景物,可避免移动部件,但这种方法只有在深度范围极小(诸如显微镜应用)时才实用。另一方面,若把一块玻璃或聚碳酸酯等透明介质放在检测器上,检测器与透镜间的光程长度就变长。对折射率为n的材料,沿光线的每毫米材料相当于n毫米自由空间中的传播,这有效地将ṽ的一部分拉长了n倍(若折射元件在摄像机内部),或拉长了ũ的一部分(若折射元件在摄像机外部),使原来聚焦的目标点变为散焦。反之,原来在检测器平面后面某一平面上聚焦的某些目标点,现在变成在检测器上聚焦了。
本发明一例空间变化折射元件是一块透明材料,厚度在视场内变化。或者,除了空间变化厚度以外,滤波器具有空间变化折射率,这类滤波器的几例特性示于图32A~32C。图32A示出折射率线性变化的棱镜或折射元件的折射特性。若将这种元件置于光学系统之外,诸如图8所示的成像器712之外,将基本上只偏转视场,由此改变了系统的投射中心,很少有助于延长成像器的景深。因此,最好使该元件位于或靠近检测器,如图11所示的成像器712中。若在图14所示成像器712中的扩散器1406上形成中间图象,则最好将元件1402直接装在扩散器前面。
若折射元件的折射特性和/或厚度作为视场内的位置的函数而变化,则得到的成像器将具有空间变化聚焦特性。在图32B的实例中,折射元件的厚度或折射率以阶跃状变化。实际上,折射元件的厚度或折射率可按图32C所示的任一种函数变化。
以上讨论了灵敏度或聚焦特性在视场内变化的成像器的用法。如上所述,这类空间变化灵敏度或聚焦特性可以包括空间变化强度灵敏度特性(如由空间变化衰减器形成)、空间变化波长灵敏度特性(如由空间变化滤色器形成)、空间变化偏振灵敏度特性(如由空间变化偏振滤波器形成)和/或空间变化聚焦特性(如由空间变化折射元件形成)。根据本发明另一个方面,可将二种或更多的上述特性组合在单个成像器内。如图33所示,摄像机可同时将空间变化衰减器3302、空间变化滤谱器3304、空间变化偏振滤波器3306和空间变化折射元件3308全部或部分内接或外接起来,使形成的成像器具有多种在成像器视场内变化的特性。
在有些场合,光的每种特性与其他特性无关,如在体积目标中,各像点接收来自不同深度的多个(或连续)点的光。任一点发射的光可以包括源自不同景点或景区的光(如不同景点或景区的散射或放射),因而各谱分量具有不同的偏振、亮度与聚焦态。
但在大多数情况下,某些特性会被简并或高度耦合。如在特定偏振分量与谱带内,来自特定深度的某种辐射可能强得无法不饱和地检测,因而最好用宽亮度动态范围的滤波器检测强度幅值。此外,若目标不是三维的,则深度测量应与光谱和偏振测量分开。尤其是,为避免检测器饱和引起的深度估算误差,在具有可变的厚度或折射率的滤波器中应用可变密度滤波处理,有利于扩展亮度动态范围。此时最好应用所有可能的带宽灵敏度、偏振角灵敏度和强度灵敏度组合,在各像素位置作多次独立的测量。此类测量可用配置成使用复合滤波器的成像器执行,该复合滤波器有各种区域,每个区域为特定灵敏度特性的空间变化专用。如图34所示,可以使用滤波器3410,其第一段3402用于空间改变成像器的第一灵敏度特性,第二段3404用于改变第二灵敏度特性,第三段3406用于改变第三灵敏度特性,第四段3408用于改变第四灵敏度特性。此外,各段3402、3404、3406与3408不一定限为具有任何特定的形状或位置,或沿任一特定方向具有各自空间变化的滤波器特性,而可以是任何形状和/或位置并在滤波器3410上沿任何方向空间变化的滤波器特性的区域。
须指出,各滤波器特性的变化可配置成对滤波器有一不同的空间频率,例如深度随cosx而变,中性密度随cos2x而变,中心波长随cos4x而变。而且,各滤波器特性的空间变化不一定是正弦形,也可采用方波形、锯齿形和其他图形。
另一方面,有些滤波器特性能不要求专用的独立滤波区域,如来自某景点或景区的光的偏振往往将比较强烈地依赖于光波长。相应地,某个空间变化偏振滤波器重叠一空间变化波长滤波器,很少或不会丢失信息。再者,来自景象所有部分的光不一定按其特性或以同一分辨率来测量。此外,对于速度远比测量完整度重要的场合,可用图33的滤波器3302、3304、3306与3308等成组重叠滤波器作快速扫描,不使用图34的复合滤波器3410;复合滤波器3410一般要求较慢的扫描来得到给定的分辨率。
用本发明的扫描法获取图象后,在相应的像素位置(即代表同一景点的像素位置)获得的数据经处理,可提取被处理图象各像素的高动态范围强度、偏振态、深度与色彩的值。为了融合各景点得到的所有原始数据,要配准一系列图象。
图35A示出一系列快照3502、3504、3506、3508和3510,其中在各幅快照的像素位置3522、3524、3526、3528和3530检测某景点发出的光。测定随便哪个代表各快照该景点的像素的位置,就能以数字方法对准图35B所示的快照3502、3504、3506、3508与3510,并处理相应于各像素位置3522、3524、3526、3528与3530的数据,可产生增强动态范围、谱信息、偏振信息和/或深度信息的优质像素,如上面针对本发明各种拼图技术所讨论的那样。配准方法包括下列方法的一种或多种:
1. The frame-to-frame motion may be known. This is the case, for example, when the imaging system is mounted on a motor whose speed is calibrated against the shift of the viewing direction between consecutive images. It is also the case for images taken from a satellite, aircraft, or other vehicle whose position is monitored.
2. The images can be registered manually, e.g., by matching control patches in the images. These patches typically contain distinctive image features whose form is robust to changes in the properties of the light coming from them. In other words, the shape of a control patch is similar across different wavelength bands, polarization states, and a wide range of light-intensity distributions.
3. The images can be registered automatically by tracking the control patches described in the second method. Such registration methods are well known; some use higher-level feature descriptions, but most maximize the correlation between patches or minimize the difference between them. These methods work better when the images change little from frame to frame; otherwise the similarity measures fail. For example, the content of a scene at λ=630nm can be completely uncorrelated with its content at λ=400nm. However, if the images are captured at wavelengths that do not differ greatly, and the wavelength bands of the captured light are sufficiently wide, consecutive images will generally be similar enough to be matched.
4. The images can be registered automatically by finding the warping parameters between images that provide the best overall match. Such methods are well known; commonly used matching metrics include variants of the MSE criterion, metrics robust to statistical outliers, and correlation. Like method 3, these methods are better suited to cases in which the images change little between frames. They can compare and register the images efficiently in a coarse-to-fine manner.
5. When the images change significantly between frames, the matching problem resembles that of registering images acquired by different sensors. Although the scene moves through the imager's field of view between frames, the mask dominates the image features, which can cause such algorithms to register an image translation smaller than the correct one, or even no translation at all. A well-known approach to this problem is to combine methods 3 and 4. In this combined approach, each image is high-pass filtered in order to strengthen the reliance on distinctive features that do not change easily between images. Registration is applied in a coarse-to-fine manner, and the overall warping parameters between the images are optimized.
6. Another way to register the images even when the changes are large, as in multi-sensor systems, is to use mutual information, i.e., the similarity of features between the images, as the matching criterion. This method generally requires only that the images not be completely statistically independent, which is a valid assumption.
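The MSE-style matching of methods 3 and 4 can be sketched as an exhaustive search over integer translations. This is only an illustrative sketch, not the patent's algorithm: the function name `register_translation` and the fixed search window are assumptions, and only pure translation between equally sized frames is handled.

```python
import numpy as np

def register_translation(ref, mov, max_shift=3):
    """Search integer shifts (dy, dx) of `mov` relative to `ref` and
    return the one minimizing the mean squared difference over the
    overlapping region (an MSE matching criterion)."""
    h, w = ref.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # crop both frames to their common overlap under this shift
            a = ref[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            b = mov[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

A coarse-to-fine implementation, as in methods 4 and 5, would run this search at each pyramid level and scale the result up as the starting point for the next finer level.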
In some cases the effect of the filtering can be partially compensated to facilitate matching. Suppose, for example, that the imager uses a variable-density filter. Such a filter acts as a mask, attenuating each portion of the field of view by a factor M that varies across the field of view. Because the masking is multiplicative, it is advantageous to take the logarithm of each image, so that the masking function M becomes an additive component; high-pass filtering the resulting log images then reduces the effect of the masking, since the masking typically varies slowly. If an estimate of M is available before the registration procedure begins, each pixel value of each image can be amplified by the corresponding value of 1/M beforehand; this step removes the biasing effect of the temporally constant masking on the motion estimation. Measurements darkened by strong attenuation become noisier once quantized by the sensor, and it is preferable to take this attenuation-dependent uncertainty into account when comparing and registering the images.
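The two compensations described here, division by an estimate of M and the multiplicative-to-additive change in the log domain, can be illustrated numerically. The radiance and mask values below are made up for the illustration; the patent does not prescribe them.

```python
import numpy as np

# A row of scene radiances I and a slowly varying density mask M acting on it:
I = np.array([10.0, 10.0, 10.0, 10.0])
M = np.array([1.0, 0.5, 0.25, 0.125])
g = M * I                       # what the detector measures: g = M * I

# Multiplicative masking becomes additive in the log domain,
# so high-pass filtering log g suppresses the slowly varying mask term:
assert np.allclose(np.log(g), np.log(M) + np.log(I))

# Dividing by the mask estimate flattens the field exactly when the
# estimate is exact:
assert np.allclose(g / M, I)
```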
Note also that if the overlapping regions are sampled more densely than the pixel grid of the individual images, the registration can be performed with sub-pixel accuracy; spatial super-resolution can therefore be obtained at the same time as the multidimensional properties of each scene point.
An example algorithm is based on a crude estimate of the mask M, with uncertainty ΔM. The estimate 1/M is used to flatten the image field. As noted above, pixels attenuated to very dark values (especially those quantized to zero) become noisy under this mapping transformation, so in all subsequent stages of the algorithm the measurements are preferably compared, discriminated, and fused with their uncertainties taken into account. The principles of the method are as follows:
1. Crude flat-fielding by 1/M. Let the light intensity that would fall on the detector for M=1 be I, measured in (dimensionless) gray-level units. After the optical filtering, the light falling on the detector is g(x,y)=M(x,y)I(x,y). For each measurement g±Δg, the scene radiance I±ΔI is estimated, with the uncertainties propagated to first order.
2. Because a high dynamic range image is used, errors at dark scene points are treated as being as important as errors at bright ones; for example, a 5% error at a bright point should be penalized like an equal relative error at a dark point. To equalize the brightness scales of the pixels, the logarithm of each measured scene radiance is computed, i.e., S=logI, yielding S±ΔS.
3. Coarse-to-fine paradigm. It is well known that a reliable and efficient way to register images is to use a pyramid (multiresolution) representation. After the images are registered and warped at a coarse spatial resolution, the coarse motion estimate serves as the initial condition for refining the transformation estimate at a finer scale, and so on. In contrast to conventional methods, which ignore the uncertainty of the data, the present method builds a maximum-likelihood pyramid of the S images: at each level of the pyramid, the value representing a neighborhood is the most probable value given the uncertainties of the participating pixels.
4. Maximization of the registration likelihood. The measurements S are assumed to be Gaussian-distributed (although the method is not limited to Gaussian distributions). The best registration is the one that minimizes the Mahalanobis distance between the images.
5. Registration to the mosaic. In a spherical coordinate system, registering only pairs of images leads to an accumulation of errors in the estimated image positions. It is therefore preferable to register each new image in the sequence to the most recently updated mosaic, and then fuse it with the updated mosaic.
The algorithm is detailed as follows. The detector has a response R over its dynamic range (which may be nonlinear), so the pixel value g̃ is:

    g̃ = R(g)    (28)

Linearizing the response, the estimated intensity at the detector is

    ĝ = R⁻¹(g̃),    (29)

and therefore:

    Δĝ = |dR⁻¹/dg̃| Δg̃    (30)

Assuming the estimated mask is independent of the measured signal,

    (ΔI)² = (Δĝ/M)² + (ĝΔM/M²)²,    (31)

where Δg̃ = 0.5 is assumed, since the camera output is in integer form (0~255 for an 8-bit detector). To avoid the potential instability of the logarithm at dark points (I=0), such points could be removed from the subsequent computations; it is preferable, however, to use S = log(1+I) rather than I in the computations. In any case,

    ΔS = |dS/dI| ΔI    (32)

Note that if the detector response R is logarithmic, equations (29)~(32) need not be applied, and S can be set to g̃ itself. Any pixel considered saturated (e.g., g̃ close to 255 for an 8-bit detector) is treated as highly uncertain; its corresponding ΔS is therefore set to a very large value (i.e., beyond the dynamic range of the system).
Assuming the S measurements are Gaussian, the log-likelihood of a value s_e given two independent measurements s₁±Δs₁ and s₂±Δs₂ is:

    log Likelihood ∼ −E² ≡ −[(s_e − s₁)/Δs₁]² − [(s_e − s₂)/Δs₂]²    (33)

The maximum-likelihood solution ŝ minimizes the Mahalanobis distance E:

    ŝ = Δŝ²(s₁/Δs₁² + s₂/Δs₂²),    (34)

where:

    Δŝ² = (0.5·d²E²/ds_e²)⁻¹ = (1/Δs₁² + 1/Δs₂²)⁻¹    (35)

The distance measure between the image measurements corresponding to this scene point is then:

    Ê² = [(ŝ − s₁)/Δs₁]² + [(ŝ − s₂)/Δs₂]²    (36)

Assuming all pixels of the images are independent, the distance measure for the whole frame, or for any other subset of the pixels, is:

    Ê²_total = Σ_{all pixels} Ê²(each pixel)    (37)

According to this objective function, the best registration between two frames (or between a new frame and the existing mosaic) is the one that minimizes Ê²_total; here each measurement pair s₁, s₂ corresponds to the values of corresponding pixels.
Note that Ê²_total generally grows with the number of pixels, which biases the registration toward minimizing the number of pixels in the sum of equation (37), i.e., toward reducing the overlap between the images. To counteract this effect, equation (37) can be normalized, e.g., by the number of pixels in the overlap region, or by Δs²_total, where:

    Δs²_total ≡ (Σ_{all pixels} Δŝ⁻²_{each pixel})⁻¹    (38)

If the statistical dependence between different measurements or different pixels cannot be neglected, the formula for Ê²_total can be generalized to use the covariance matrix of the measurements rather than just their variances.
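The inverse-variance fusion of equations (34)~(36) is compact enough to state directly in code. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def fuse_ml(s1, ds1, s2, ds2):
    """Maximum-likelihood fusion of two Gaussian measurements s1 +/- ds1
    and s2 +/- ds2: the inverse-variance weighted mean (eq. 34), its
    variance (eq. 35), and the residual Mahalanobis distance (eq. 36)."""
    var = 1.0 / (1.0 / ds1**2 + 1.0 / ds2**2)        # eq. (35)
    s_hat = var * (s1 / ds1**2 + s2 / ds2**2)        # eq. (34)
    E2 = ((s_hat - s1) / ds1)**2 + ((s_hat - s2) / ds2)**2   # eq. (36)
    return s_hat, np.sqrt(var), E2
```

Summing E2 over all overlapping pixels gives the registration objective of equation (37); a measurement flagged as saturated would simply enter with a huge ds, so it barely influences either the fused value or the distance.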
To make the registration more reliable and efficient, it is performed hierarchically, from coarse to fine resolution. A coarse representation of an image at a given pyramid level can be obtained by low-pass filtering the image with some kernel and then subsampling it; the width of the kernel depends on the level (the higher/coarser the level, the wider the kernel operating on the original image). In any case, a pixel value in such a representation is a weighted, normalized sum of the measured pixels:

    s = Σ_k ω_k s_k / Σ_k ω_k    (39)

where ω_k is the weight of the pixel value s_k.
It should be noted that, to simplify the derivation, the discussion here refers to building the pyramid levels from the original full-resolution image, whose pixels are independent. Pyramids, however, are usually constructed iteratively, and the pixels at the intermediate levels are not statistically independent. If additional accuracy is sought in the iterative process, the weighting should depend not only on the variances of the intermediate-level pixels but also on the full covariance matrices of their neighborhoods, and those matrices would accordingly also have to be propagated up the pyramid.
In a conventional pyramid the weights ω_k equal the Gaussian kernel values a_k. In the present method, however, equation (39) can be viewed as a generalization of equation (34): the weight assigned to a pixel decreases with decreasing Gaussian weight a_k and with increasing uncertainty Δs_k, so that ω_k = a_k/Δs_k², and:

    Δs² = (Σ_k ω_k)⁻¹    (40)

In the maximum-likelihood pyramid described above, the representation at each resolution level includes not only the weighted mean of the pixels but also the uncertainty of each pixel value. Because points with smaller uncertainty receive larger weights, uncertain regions (e.g., saturated regions or very dark points) lose importance at the coarser levels, and their representation is influenced more by neighboring stable points; a region represented by a single value is therefore more reliable. Accordingly, with S±ΔS represented at every scale, the images can be registered effectively by maximizing the matching likelihood.
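One coarser level of such a maximum-likelihood pyramid can be sketched in one dimension as follows. The helper name and the 3-tap kernel are illustrative assumptions; the weights follow ω_k = a_k/Δs_k², and the variance update follows the generalization of equation (35).

```python
import numpy as np

def ml_reduce(s, ds, kernel=(0.25, 0.5, 0.25)):
    """One coarser level of a 1-D maximum-likelihood pyramid: each
    coarse sample is the weighted mean of a 3-pixel neighborhood with
    weights w_k = a_k / ds_k^2, subsampled by a factor of 2."""
    out_s, out_ds = [], []
    for i in range(0, len(s) - 2, 2):
        w = np.array(kernel) / np.asarray(ds[i:i + 3]) ** 2
        out_s.append(np.sum(w * s[i:i + 3]) / np.sum(w))   # eq. (39)
        out_ds.append(np.sqrt(1.0 / np.sum(w)))            # eq. (40)-style
    return np.array(out_s), np.array(out_ds)
```

As in the text, a highly uncertain pixel (e.g., a saturated one, with huge ds) contributes almost nothing to the coarse value, which is dominated by its stable neighbors.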
It should be noted that the above method is not the only way to register and fuse an image sequence. For example, other weighting schemes can be used to determine the value at each point, or each point can be represented by a single qualifying measurement of that point. If the motion field is parameterized, the neutral-density mask does not change over time, and the scene is static, then all the parameters of the mask, the motion, and the unknown scene can be treated as a single optimization problem.
Once the set of images has been captured and registered and the imager characteristics are known (e.g., through calibration), the algorithm of the invention produces at least one datum (e.g., a pixel) representing each point or small region of the scene. The fusion produces each representative pixel from the particular pixel of each raw image that corresponds to the scene point being represented. The choice of fusion method generally depends on the scene property being imaged, such as brightness, color, polarization, or depth.
For example, to fuse intensity (i.e., brightness) images acquired with spatially varying attenuation, the following principles are preferably applied:
1. The value selected for a point should minimize the Mahalanobis distance to the data points; equations (34) and (35) are therefore used with s_k = I_k, where I_k = ĝ_k/M.
2. Saturated points of the raw images (g̃ close to 255 in an 8-bit camera) are assigned a very large uncertainty, as described above.
3. Seam-minimization methods are used to avoid possibly unaesthetic seams at the boundaries. At the boundaries of the mosaicked images there are transition regions between points estimated from different data sources. There are several ways to remove mosaic seams. One is to look for an "optimal" seam line, i.e., the one least conspicuous to the viewer; optimal seam-line selection is widely practiced in the art. Another is to apply a feathering procedure, in which the images are weighted by pixel position relative to the image center or boundary. This weighting is easily incorporated into the weighted averaging described in principle 1, and is analogous to the weighting used above in the pyramid construction; for example, the uncertainty can be multiplied by a factor that increases smoothly to ∞ at the image boundary.
4. Seams can also appear at the boundaries of saturated regions, where the uncertainty changes abruptly while g̃ usually changes little. Such seams can be removed by the techniques discussed in principle 3, and the delineation of the saturated regions is preferably smoothed as follows. Define a "low saturation value" a (e.g., 230 for an 8-bit camera) and a "high saturation value" b (e.g., 250). If g̃ reaches b, the pixel is considered saturated; if g̃ ≥ a and the point neighbors a saturated point, it is considered "saturation-associated"; and if g̃ ≥ a and the point neighbors a "saturation-associated" point, it too is considered "saturation-associated". In this way, "saturation-associated" points are always included in a group connected to saturated points, so the method does not affect points of the same intensity range that are unrelated to saturated points. Once all the points termed "saturation-associated" have been found, their uncertainties are multiplied by a factor (or some other function) that makes the transition gradual, from the regular uncertainty (multiplication by 1) to a very large uncertainty as the value approaches saturation.
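The gradual uncertainty multiplier for "saturation-associated" points can be sketched as follows. The exact transition function is elided in the text; the ramp (b − a)/(b − g̃) below is only one plausible choice, an assumption rather than the patent's formula.

```python
import numpy as np

def saturation_factor(g, a=230.0, b=250.0):
    """Uncertainty multiplier for saturation-associated pixels: 1 for
    g <= a, growing without bound as g approaches the saturation value b.
    The ramp (b - a) / (b - g) is an assumed, illustrative choice."""
    g = np.asarray(g, dtype=float)
    f = np.ones_like(g)
    ramp = g > a
    # clamp the denominator so pixels at or above b get a huge, finite factor
    f[ramp] = (b - a) / np.maximum(b - g[ramp], 1e-12)
    return f
```

Multiplying ΔS by this factor before the fusion of principle 1 makes nearly saturated pixels fade out of the weighted average instead of switching off abruptly at the region boundary.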
It should be noted that the data fusion of the invention is not limited to the example methods above; the data can also be fused using robust statistical methods that reduce outlier effects, or using iterative projection methods. Multiresolution methods, i.e., removing seams by analyzing the images at several different resolutions, can also be used, as is well known in the art.
It should also be noted that principles 3 and 4 above are not critical for producing a high dynamic range mosaic; they merely make the resulting image more aesthetic. Alternative approaches include: (1) selecting only a single "qualifying" measurement of each point as its representative value; and (2) selecting the value that produces the maximum contrast (e.g., at various scales).
For polarization data, the image points are analyzed as follows. Assuming that the elliptical polarization component can be neglected, and using three measurements per point, the polarization-state components of each pixel can be derived with equation (24). If more measurements than the required minimum are used, an overdetermined system of equations can be applied; solving it by minimizing the variance yields:

    [C, A_c, A_s]ᵀ = (M′M)⁻¹ M′ g̃    (41)

The method then sets I = 2C, A = √(A_c² + A_s²), P = A/C, and θ = sin⁻¹(A_s/A) = cos⁻¹(A_c/A).
It should be noted that the above is not the only way in which polarization information can be extracted from the data; for example, the polarization can be estimated using robust statistics. P, θ, and I can be estimated directly, i.e., skipping the intermediate variables A_c, A_s, and C; iterative projection methods can be applied (e.g., forcing I and P to be positive); or, rather than extracting polarization information, the images can simply be fused in a way that produces maximum contrast, by comparing their Laplacian pyramid representations and selecting the representation having the largest magnitude, as is well known in the art.
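Equation (41) is an ordinary least-squares solution. The sketch below assumes the usual polarizer measurement model g(α) = C + A_c·cos 2α + A_s·sin 2α for the rows of M; equation (24) itself is not reproduced in this excerpt, so that model is an assumption.

```python
import numpy as np

def estimate_polarization(angles, g):
    """Least-squares solution of an overdetermined set of polarizer
    measurements, eq. (41): [C, Ac, As]^T = (M'M)^-1 M' g, assuming
    rows g(a) = C + Ac*cos(2a) + As*sin(2a) (an assumed model)."""
    angles = np.asarray(angles, dtype=float)
    M = np.column_stack([np.ones_like(angles),
                         np.cos(2 * angles), np.sin(2 * angles)])
    C, Ac, As = np.linalg.lstsq(M, np.asarray(g, dtype=float), rcond=None)[0]
    A = np.hypot(Ac, As)
    # I = 2C, P = A/C, theta = asin(As/A) = acos(Ac/A), via atan2
    return 2 * C, A / C, np.arctan2(As, Ac)
```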
To derive depth information, a depth-from-focus method is applied to the corresponding points: the image giving the best focus is selected, as in conventional depth-from-focus techniques. The selected images are then fused, so that only the "sharp" pixels take part in the mosaic. For example, if the Laplacian is used to measure image sharpness, then

    Î(x,y) = I_{p̂(x,y)}(x,y),  where  p̂(x,y) = arg max_p |∇² I_p(x,y)|,    (42)

where (x,y) are coordinates in a static (outside-world) coordinate system and p is the frame index. The fusion need not be performed only at the pixel level, however; it can be carried out at multiple resolution scales. Nor is the technique limited to the Laplacian focus measure; other focus measures can be used.
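The per-pixel selection rule of equation (42) can be sketched as follows; the 5-point discrete Laplacian is one illustrative choice of focus measure, and the function name is an assumption.

```python
import numpy as np

def fuse_by_focus(stack):
    """Pixel-level depth-from-focus fusion per eq. (42): for each (x, y),
    keep the frame p with the largest |Laplacian|, and return the fused
    image together with the selected frame indices p_hat."""
    stack = np.asarray(stack, dtype=float)      # shape: (frames, rows, cols)
    lap = np.zeros_like(stack)
    # 5-point discrete Laplacian on the interior of each frame
    lap[:, 1:-1, 1:-1] = (stack[:, :-2, 1:-1] + stack[:, 2:, 1:-1] +
                          stack[:, 1:-1, :-2] + stack[:, 1:-1, 2:] -
                          4 * stack[:, 1:-1, 1:-1])
    p_hat = np.argmax(np.abs(lap), axis=0)      # best-focused frame per pixel
    fused = np.take_along_axis(stack, p_hat[None], axis=0)[0]
    return fused, p_hat
```

The index map p_hat is the raw material for a depth map, since each frame index corresponds to a focus setting; a multiresolution variant would apply the same selection on pyramid levels instead of single pixels.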
To derive the spectrum, corresponding measurements of the same scene point are made successively in different spectral bands. If the filter is not a bandpass filter but a high-pass or low-pass filter, narrow-band spectral information can be derived by differencing adjacent measurements. The spectrum can be measured with bands of varying widths, even including multiple bands. Taking into account the band structure, the positivity of the spectrum, and other constraints, a high-resolution spectrum can be extracted by solving the system of equations provided by the raw data. Prior information about the spectra of known sources and materials can also be used to determine which types of known sources and illumination components fit the measurements, including their relative intensities, temperatures, and/or spatial positions in the measurements.
Using the methods of the invention described herein, the dynamic range of any video camera (or still camera) can be extended and its spectral resolution significantly improved, whether by attaching a filter in front of the camera lens or even by exploiting the camera's inherent vignetting characteristics, and the polarization state and focus characteristics of the incoming signal can be determined directly.
Knowledge of the spectral content of a scene can yield information about the sources illuminating the scene at the time of capture, as well as the reflectances of the objects, so the scene can be rendered under different simulated illumination sources and characteristics. For example, an object photographed under incandescent light can be rendered, in simulation, as if it had been photographed under fluorescent light or at sunset.
It should also be noted that the spectral response characteristics of most detectors, even detectors incorporating RGB filters, are very different from those of the human eye; film and color prints have the same limitation. Given the actual spectral content of the scene, images can be rendered in a way much closer to direct observation of the scene by the human eye. Images can also be rendered according to the characteristics of a particular film, screen, or printer, rather than those of the human eye.
The technique of the invention extends the optical dynamic range of the imager while creating the mosaic; the dynamic range is extended at each scene point independently of the brightness of the surrounding points. The imager need not contain any moving parts; it moves only as required to create an ordinary mosaic. The technique of the invention can also be combined with other dynamic range extension methods, such as active CMOS detectors or AGC, for further extension. Note that the wider the density range of the density filter, the greater the optical dynamic range. For example, if M∈[10⁻⁴,1], the dynamic range of the detector is extended beyond its original range by about 13 bits (equivalent to an 80dB increase). Since filter densities are additive, several such filters can be cascaded to strengthen the spatially varying attenuation.
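The dynamic-range figure quoted above follows directly from the mask range; a sketch of the arithmetic for M ∈ [10⁻⁴, 1]:

```python
import math

# The mask multiplies the detectable intensity range by 1/min(M) = 1e4:
extension = 1 / 1e-4
extra_bits = math.log2(extension)       # ~13.3 additional bits
extra_db = 20 * math.log10(extension)   # ~80 dB
```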
Increasing the depth of focus produces sharp images over a wide range of distances. Acquiring polarization data makes it possible to remove or enhance reflected and semi-reflected visual effects.
Those skilled in the art will appreciate that the methods illustrated in FIGS. 1~6, 35A, and 35B can be implemented on various standard computer platforms operating under the control of suitable software, as described in the Appendix. In some cases, dedicated computer hardware, such as a peripheral card residing on the bus of a standard personal computer or workstation, may enhance the operational efficiency of the above methods.
FIGS. 37 and 38 illustrate example computer hardware suitable for practicing the invention. Referring to FIG. 37, the computer system includes a processing section 3710, a display 3720, a keyboard 3730, and a mouse. It can also include other input devices, such as an optical scanner 3750 for scanning an image medium 3700 and a camera 3780, and can further include a printer 3760. The computer system typically includes one or more disk drives 3770, which can read and write computer-readable media such as magnetic media (e.g., tape cassettes) or optical media (e.g., CD-ROMs), for storing data and application software.
FIG. 38 is a functional block diagram of the processing section 3710. The processing section 3710 typically includes a processing unit 3810, control logic 3820, and a memory unit 3830, along with a hard disk drive and interface, and also includes a timer 3850 (i.e., a clock circuit) and input/output ports 3840. Depending on the microprocessor used as the processing unit, the processing section 3710 can also include a coprocessor 3860. The control logic 3820, together with the processing unit 3810, provides the control necessary for communication between the memory unit 3830 and the input/output ports 3840. The timer 3850 provides a timing reference signal for the processing unit 3810 and the control logic 3820. The coprocessor 3860 enhances the ability to perform complex computations in real time; modern computers incorporate more of this processing capability into the processing unit itself rather than in a coprocessor.
The memory unit 3830 can include different types of memory, such as volatile and non-volatile memory and read-only and programmable memory. As shown in FIG. 38, the memory unit 3830 can include read-only memory (ROM) 3831, electrically erasable programmable read-only memory (EEPROM) 3832, and random-access memory (RAM) 3833. The invention can be practiced with different computer processors, memory configurations, data structures, and the like, and is not limited to any particular platform. For example, although FIG. 37 shows the processing section 3710 as part of a computer system, the processing section 3710 and/or its illustrated components can also be included in an imager such as a still camera or a moving-picture camera (e.g., a video camera).
Those skilled in the art will appreciate that the software represented by the source code in the Appendix can be written in various programming languages. The example software algorithms of the invention are written in the Matlab™ language. The Appendix provides computer source code for several such example algorithms.
Although the invention has been described in connection with specific exemplary embodiments, it should be understood that various changes, substitutions, and alterations can be made to the disclosed embodiments without departing from the spirit and scope of the invention as set forth in the appended claims.
                        APPENDIX
MATLAB source code for mosaicing, flat-fielding, image fusion, high dynamic range, multiresolution registration, maximum-likelihood pyramids, self-calibration of the attenuating mask, feathering of seams and saturated regions, etc.
Source code for forming a high dynamic range mosaic
function m=calchorizprof2(basefilename,postfix,netframes,left,right,bottom,roof);
imagehight=roof-bottom+1;
basephotoname=′photo′;
rawname=′basicim′;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CALCULATING THE MEAN PROFILE
ystep=-1;
%frame_inds=36:ystep:1;
frame_inds=length(netframes):ystep:1;
imagewidth=right-left+1;
M=zeros(length(frame_inds),imagewidth);
for indframe=1:length(frame_inds),
  yframe_ind=frame_inds(indframe);

   yframe1=netframes(yframe_ind);

   sframe1=sprintf(′%d′,yframe1);

   shem1=[basefilename sframe1 postfix];

   rawim=imread(shem1);

   basicim=double(flipud(rawim))-1;

   X=basicim(bottom:roof,left:right);
 %    LX=log2(X+eps);
 %    saturated=(X>sun);
 %    unsaturated=(~saturated);
 %    unsatinds=find(unsaturated);
 %    XN=NaN*ones(size(X));
 %    XN(unsatinds)=X(unsatinds);
 %    LXN=NaN*ones(size(X));
 %    LXN(unsatinds)=LX(unsatinds);
  OrdM(indframe,:)=mean(X);
				
 %  nonsatM(indframe,:)=nanmean(XN);
 %  LM(indframe,:)=mean(LX);
 %  nosatLM(indframe,:)=nanmean(LXN);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
tm=nanmean(OrdM);%The 0.05 is out of O(200) so it is very insignificant in the bright

                     %points while being still small in the dark O(1)points.
mx=max(tm);
%tm=0.005+tm/mx;
tm=0.001+tm/mx;
mx=max(tm);
m=tm/mx;
%ylm=log2(m);
%figure(1)
%hold off
%plot(m,′k′);
%set(gca,′xlim′,[1,imagewidth]);
%figure(2)
%hold off
%plot(ylm,′k′)
%set(gca,′ylim′,[-12,0])
%set(gca,′xlim′,[1,imagewidth]);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 function clipped=clipfrombig(oldcell,Lnew,Rnew,Dnew,Unew)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%   DECODING THE CELLS INTO THEIR CONSTITUENTS
photo=oldcell{1,1};     % the 1st (old) image
centerx=oldcell{2,1};%coordinates of center of old image
centery=oldcell{3,1};
   [oldrows,oldcols]=size(photo);
   Lold=centerx-(oldcols-1)/2;      %the Left side of the old image

     Rold=centerx+(oldcols-1)/2;     % the Right side

     Dold=centery-(oldrows-1)/2;     % the Down of the old image

     Uold=centery+(oldrows-1)/2;     % the Upper side of the old image

     greatL=Lold;          %the Left side of the combined-greater image

     greatR=Rold;

     greatD=Dold;

     greatU=Uold;

     spanx=greatR-greatL+1;

     spany=greatU-greatD+1;

     greatX=ones(spany,1)*(greatL:greatR);    %the coordinate grid of the combined-greater
image

     greatY=(greatD:greatU)′*ones(1,spanx);
   %Lnew;
   %Rnew;
   %Dnew;
   %Unew;
				
  [newLDy,newLDx]=find((greatX==Lnew) & (greatY==Dnew));

      [newLUy,newLUx]=find((greatX==Lnew) & (greatY==Unew));
      [newRDy,newRDx]=find((greatX==Rnew) & (greatY==Dnew));

      [newRUy,newRUx]=find((greatX==Rnew) & (greatY==Unew));

      [newLD_row,newLD_col]=xy2ij_brack([newLDx,newLDy],spany);

      [newRD_row,newRD_col]=xy2ij_brack([newRDx,newRDy],spany);

      [newLU_row,newLU_col]=xy2ij_brack([newLUx,newLUy],spany);

  [newRU_row,newRU_col]=xy2ij_brack([newRUx,newRUy],spany);

  clipped=photo(newLD_row:-1:newLU_row,newLD_col:newRD_col);

     clipped=flipud(clipped);
clear
pack
fuzzmarge=45;
transit=80;
%load ../decayfun
m=exp(log(valint)-max(log(valint)));
delta_m=(1/max(valint))*ones(size(m));
load iterdecay
levels_x=2;
levels_y=0;
minlevel=0;
Lshulaim=26;
%Lshulaim=0;
left=min(xint)+Lshulaim;
right=max(xint);
bottom=170;
roof=355;
imagehight=roof-bottom+1;
sun=250;
sunerror=1/eps;
SE=ones(3,5);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
cols=right-left+1;
prof=profil2(cols,transit);
inv_prof2=(1./prof).^2;
FETHER2=ones(imagehight,1)*inv_prof2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
oldlength=length(m);
m=m(1+Lshulaim:oldlength);
delta_m=delta_m(1+Lshulaim:oldlength);
sig=1;
invm=1./m;
invm_square=invm.^2;
invm_four=invm.^4;
a=(sig.^2).*invm_square;
b=(delta_m.^2).*invm_four;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
invM=ones(imagehight,1)*invm;
A=ones(imagehight,1)*a;
B=ones(imagehight,1)*b;
				
load ../coordm18.mat   %%%% LOADING THE COORDINATES
basefilename=′../gradfilt1_′;
postfix=′.GIF′;
basephotoname=′photo′;
rawname=′basicim′;
netframes=[1:9,11:37];
centerx1=0;    %this is always
centery1=0;    %this is always
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MERGING IMAGES IN A BIG FRAME
%ystep=-1;
%frame_inds=36:ystep:30;
ystep=-1;
frame_inds=36:ystep:2;
k=1;
for indframe=1:length(frame_inds),
tic
   yframe_ind=frame_inds(indframe);
   if k==1

    yframe1=netframes(yframe_ind);

    sframe1=sprintf(′%d′,yframe1);

    shem1=[basefilename sframe1 postfix];

    rawim=imread(shem1);

    basicim=double(flipud(rawim))-1;

    X=basicim(bottom:roof,left:right);

    Ya=X.*invM;

    saturated=(X>sun);

    dilsaturated=double(dilate(saturated,SE));

    DELTAYa2=(A+((X.^2).*B)).*(~dilsaturated) + sunerror.*dilsaturated;

    DELTAYa2=DELTAYa2.*FETHER2;
%      [rows,cols]=size(Ya);
%      CoordYa=ones(rows,1)*(1:cols);
%      fuzzline=round(fuzzmarge*rand(rows,1));
%      FuzzlineL=fuzzline*ones(1,cols);
%      fuzzline=cols-round(fuzzmarge*rand(rows,1));
%      FuzzlineR=fuzzline*ones(1,cols);
%      lobeseder=(CoordYa<FuzzlineL) | (CoordYa>FuzzlineR);
%      Ilobeseder=find(lobeseder);
%      clear FuzzlineR FuzzlineL lobeseder
%      tmp=Ya;
%      tmp(Ilobeseder)=NaN;
%      Ya=tmp;

    Acell{1,1}=Ya;

    Acell{2,1}=DELTAYa2;

    Acell{3,1}=centerx2(yframe_ind);

    Acell{4,1}=centery2(yframe_ind);

  else

    Acell=fusedcell;

  end
   yframe2=netframes(yframe_ind + ystep);
   sframe2=sprintf(′%d′,yframe2);
				
   shem2=[basefilename sframe2 postfix];
   rawim=imread(shem2);
   basicim=double(flipud(rawim))-1;
   X=basicim(bottom:roof,left:right);
   Yb=X.*invM;
   saturated=(X>sun);
   dilsaturated=double(dilate(saturated,SE));
   DELTAYb2=(A+((X.^2).*B)).*(~dilsaturated) + sunerror.*dilsaturated;
   DELTAYb2=DELTAYb2.*FETHER2;
   clear rawim basicim X
%   fuzzline=round(fuzzmarge*rand(rows,1));
%   FuzzlineL=fuzzline*ones(1,cols);
%   fuzzline=cols-round(fuzzmarge*rand(rows,1));
%   FuzzlineR=fuzzline*ones(1,cols);
%   lobeseder=(CoordYa<FuzzlineL) | (CoordYa>FuzzlineR);
%   Ilobeseder=find(lobeseder);
%   clear FuzzlineR FuzzlineL lobeseder
%   tmp=Yb;
%   tmp(Ilobeseder)=NaN;
%   Yb=tmp;
   Bcell{1,1}=Yb;
   Bcell{2,1}=DELTAYb2;
   Bcell{3,1}=centerx2(yframe_ind+ystep);
   Bcell{4,1}=centery2(yframe_ind+ystep);
   fusedcell=fuseml11(Acell,Bcell);
%fused=double(fusedcell{1,1});
%deltaz2=fusedcell{2,1};
%figure(1)
%imagesc(fused);axis xy;axis image
%figure(2)
%imagesc(deltaz2);axis xy;axis image
%pause

  k=k+1;

  clear Acell Bcell
toc
end
fused=double(fusedcell{1,1});
deltaz2=fusedcell{2,1};
greatcenterx=fusedcell{3,1};
greatcentery=fusedcell{4,1};
%save fethermosaic0.mat fused deltaz2 greatcenterx greatcentery
clear fusedcell
figure(3);
image(fused);      yshow24;
set(gcf,′menubar′,′figure′)
figure(4)
image(150-40*log(deltaz2));yshow24
set(gcf,′menubar′,′figure′)
				
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
logfused=log(1+fused);
mx=max(logfused(:));
imx8=(2^8)/mx;
logfused8=uint8(round(logfused*imx8));
figure(5);
image(logfused8);    yshow24;
set(gcf,′menubar′,′figure′)
clear saturated logfused valint tikunx tikuny invM invm invm_square
clear invm_square dilsaturated flatcurve flatfun centery2 centerx2
clear a b Ya Yb DELTAYb2 DELTAYa2 A B
  function fusedcell=fusem122(oldcell,newcell)
%
%  oldcell and newcell contain images to put in a single
%  coordinate system,and the vector-coordinates of the
%  image centers in (x,y).
%  fusedcell is the cell of the updated mosaic image
%  k is the number of images already fused
%
%  merging=′average′- The overlap area is an average of
%       all the images that exist in it,and the program
%       makes all weights be the same.
%  merging=′avdecay′-The overlap area is the average of the
%       old image or mosaic,withthe new image.So,older
%       images in the old mosaic decay exponentially in time.
%  merging=′split′- the overlap is cut by half - each side
%       comes from a different image.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%    DECODING THE CELLS INTO THEIR CONSTITUENTS
Zold   =oldcell{1,1};      % the 1st (old) image
DELTAZ2old=oldcell{2,1};
oldcenterx=oldcell{3,1};%coordinates of center of old image
oldcentery=oldcell{4,1};
Znew=newcell{1,1};      % the 2nd (new) image
DELTAZ2new=newcell{2,1};
newcenterx=newcell{3,1};
newcentery=newcell{4,1};
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[oldrows,oldcols]=size(Zold);
Lold=oldcenterx-(oldcols-1)/2;      %the Left side of the old image
Rold=oldcenterx+(oldcols-1)/2;      % the Right side
Dold=oldcentery-(oldrows-1)/2;      % the Down of the old image
Uold=oldcentery+(oldrows-1)/2;      % the Upper side of the old image
[newrows,newcols]=size(Znew);
Lnew=newcenterx-(newcols-1)/2;      %the Left side of the new (2nd) image
Rnew=newcenterx+(newcols-1)/2;
				
Dnew=newcentery-(newrows-1)/2;
Unew=newcentery+(newrows-1)/2;
greatL=min([Lold,Lnew]);       %the Left side of the combined-greater image
greatR=max([Rold,Rnew]);
greatD=min([Dold,Dnew]);
greatU=max([Uold,Unew]);
spanx=greatR-greatL+1;
spany=greatU-greatD+1;
greatX=ones(spany,1)*(greatL:greatR);    %the coordinate grid of the combined-greater image
greatY=(greatD:greatU)′*ones(1,spanx);
%%%%%%% PUTTING THE OLD IMAGE IN THE COMBINED GREAT-GRID
[oldLDy,oldLDx]=find((greatX==Lold) & (greatY==Dold));
[oldLUy,oldLUx]=find((greatX==Lold) & (greatY==Uold));
[oldRDy,oldRDx]=find((greatX==Rold) & (greatY==Dold));
[oldRUy,oldRUx]=find((greatX==Rold) & (greatY==Uold));
[oldLD_row,oldLD_col]=xy2ij_brack([oldLDx,oldLDy],spany);
[oldRD_row,oldRD_col]=xy2ij_brack([oldRDx,oldRDy],spany);
[oldLU_row,oldLU_col]=xy2ij_brack([oldLUx,oldLUy],spany);
[oldRU_row,oldRU_col]=xy2ij_brack([oldRUx,oldRUy],spany);
Zoldgreat=NaN*zeros(size(greatX));
Zoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
DELTAZoldgreat=NaN*zeros(size(greatX));
DELTAZoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=DELTAZ2old;
%%%%%%%%%  PUTTING THE NEW IMAGE IN THE COMBINED GREAT-GRID
[newLDy,newLDx]=find((greatX==Lnew) & (greatY==Dnew));
[newLUy,newLUx]=find((greatX==Lnew) & (greatY==Unew));
[newRDy,newRDx]=find((greatX==Rnew) & (greatY==Dnew));
[newRUy,newRUx]=find((greatX==Rnew) & (greatY==Unew));
[newLD_row,newLD_col]=xy2ij_brack([newLDx,newLDy],spany);
[newRD_row,newRD_col]=xy2ij_brack([newRDx,newRDy],spany);
[newLU_row,newLU_col]=xy2ij_brack([newLUx,newLUy],spany);
[newRU_row,newRU_col]=xy2ij_brack([newRUx,newRUy],spany);
Znewgreat=NaN*zeros(size(greatX));
Znewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=Znew;
DELTAZnewgreat=NaN*zeros(size(greatX));
DELTAZnewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=DELTAZ2new;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%% TAKING CARE OF THE OVERLAP AREA
nanold=uint8(isnan(Zoldgreat));
nannew=uint8(isnan(Znewgreat));
fused=NaN*zeros(size(greatX));
%fused(newLD_row:-1:newLU_row,newLD_col:newRD_col)=Znew;
%fused(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
besdernew=find(~nannew);
fused(besdernew)=Znewgreat(besdernew);
				
besderold=find(~nanold);
fused(besderold)=Zoldgreat(besderold);
DELTA2fused=NaN*zeros(size(greatX));
%DELTA2fused(newLD_row:-1:newLU_row,newLD_col:newRD_col)=DELTAZ2new;
%DELTA2fused(oldLD_row:-l:oldLU_row,oldLD_col:oldRD_col)=DELTAZ2old;
DELTA2fused(besdernew)=DELTAZnewgreat(besdernew);
DELTA2fused(besderold)=DELTAZoldgreat(besderold);
%Sigmaoldgreat=sqrt(DELTAZoldgreat);
%Sigmanewgreat=sqrt(DELTAZnewgreat);
Sigma2oldgreat=DELTAZoldgreat;
Sigma2newgreat=DELTAZnewgreat;
nangreat=(nanold | nannew);
E=~nangreat;
F=find(E);
if ~isempty(F)

       oldneto=NaN*ones(size(Zoldgreat));

       newneto=NaN*ones(size(Znewgreat));

       oldsig2neto=oldneto;

       newsig2neto=newneto;

       oldneto(F)=Zoldgreat(F);

       newneto(F)=Znewgreat(F);

       oldsig2neto(F)=Sigma2oldgreat(F);

       newsig2neto(F)=Sigma2newgreat(F);

       %size(oldneto(F))

       %size(newneto(F))

       %size(oldsig2neto(F)′)

       %size(newsig2neto(F)′)

       nominat=(oldneto(F)./oldsig2neto(F)′) + (newneto(F)./newsig2neto(F)′);

       denomin=(1./oldsig2neto(F)') + (1./newsig2neto(F)');

       sig2seam=1./denomin;

       seamvalue=sig2seam.*nominat;

       fused(F)=seamvalue;

          DELTA2fused(F)=sig2seam;
end
fused=flipud(fused);
DELTA2fused=flipud(DELTA2fused);
%%%%%%%%%% THE CENTRAL COORDINATE OF THE (WARPED) GREAT=COMBINED IMAGE
greatcenterx=(greatR+greatL)/2;
greatcentery=(greatU+greatD)/2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% PUTTING THE OUTPUT TO CELLS,FOR EASIER TRANSFER
fusedcell{1,1}=fused;
fusedcell{2,1}=DELTA2fused;
fusedcell{3,1}=greatcenterx;
fusedcell{4,1}=greatcentery;
				
function diffml=generallikely31(oldcell,newcell)
Zold=oldcell{1,1};     %THE 1ST SUB-IMAGE
Znew=newcell{1,1};     %THE 2ND SUB-IMAGE
DELTAZ2old=oldcell{2,1};
DELTAZ2new=newcell{2,1};
nanold=uint8(isnan(Zold));
nannew=uint8(isnan(Znew));
nangreat=(nanold | nannew);
E=~nangreat;
F=find(E);
Zoldneto=NaN*ones(size(Zold));
Znewneto=NaN*ones(size(Znew));
DELTAZ2oldneto=NaN*ones(size(Zold));
DELTAZ2newneto=NaN*ones(size(Znew));
bestvalue=NaN*ones(size(Zold));
if isempty(F)
   diffml=Inf;
else

       Zoldneto(F)=Zold(F);

     Znewneto(F)=Znew(F);
   DELTAZ2oldneto(F)=DELTAZ2old(F);
   DELTAZ2newneto(F)=DELTAZ2new(F);
   Sigma2old=DELTAZ2oldneto;
   Sigma2new=DELTAZ2newneto;
   nominat=(Zoldneto(F)./Sigma2old(F)) + (Znewneto(F)./Sigma2new(F));
   denomin=(1./Sigma2old(F)) + (1./Sigma2new(F));
   sig2seam=1./denomin;
   bestvalue(F)=sig2seam.*nominat;
   Eold=((bestvalue(F) -Zoldneto(F)).^2)./Sigma2old(F);
   Enew=((bestvalue(F) -Znewneto(F)).^2) ./Sigma2new(F);
   E=Eold+Enew;
   SSD=sum(E(:));
   generalized_denomin=sum(denomin(:));
   diffml=SSD/generalized_denomin;
%   diffm1=mean(E(:));
end
clear
load decayfun
				
levels_x=3;
levels_y=0;
minlevel=0;
left=min(xint);
right=max(xint);
bottom=170;
roof=355;
imagehight=roof-bottom+1;
sun=250;
sunerror=1/eps;
SE=ones(3,5);
%yepsilon=0.01;
yepsilon=eps;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
basefilename=′gradfilt1_′;
postfix=′.GIF′;
basephotoname=′photo′;
rawname=′basicim′;
netframes=[1:9,11:37];
m=calchorizprof4(basefilename,postfix,netframes,left,right,bottom,roof);
clear xint valint
%m=valint/max(valint);
%delta_m=1/max(valint);
%sig=1;
delta_m=0.01*ones(size(m));
sig=0.5;
invm=1./m;
invm_square=invm.^2;
c=(delta_m.^2).*invm_square;

  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
M   =ones(imagehight,1)*m;
epsM=yepsilon*M;
invM=ones(imagehight,1)*invm;
C   =ones(imagehight,1)*c;
SIG2=(sig.^2);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
centerx2=zeros(1,length(netframes));
centery2=zeros(1,length(netframes));
tikunx=zeros(1,length(netframes));
tikuny=zeros(1,length(netframes));
ystep=1;
centerx2guess=47*ystep;    %initializing the disparity
centery2guess=0;
%frame_inds=11:ystep:18;
frame_inds=1:ystep:35;
k=1;
				
for indframe=1:length(frame_inds),

  %%%%%%%%%%%%%%%%%%%%%%%%%%%% READING AND PREPARING THE RAW IMAGES

  yframe_ind=frame_inds(indframe);
tic

  if k==1

     yframe1=netframes(yframe_ind);

     sframe1=sprintf(′%d′,yframe1);

     shem1=[basefilename sframe1 postfix];

     rawim=imread(shem1);

     basicim=double(flipud(rawim))-1;

     X=basicim(bottom:roof,left:right);

     Za=log(yepsilon+X.*invM);

     saturated=(X>sun);

     dilsaturated=double(dilate(saturated,SE));

     DELTAZa2=( (1./((epsM+X).^2)) .* (SIG2+(C.*(X.^2)))  ).*(~dilsaturated) +
sunerror.*dilsaturated;

     centerx1=0;centery1=0;

     holdcell{1,1}=Za;

     holdcell{2,1}=DELTAZa2;

     holdcell{3,1}=centerx1;

     holdcell{4,1}=centery1;

     fusedcell=holdcell;

  else

     holdcell=fusedcell;

  end

  yframe2=netframes(yframe_ind+ystep)

  sframe2=sprintf(′%d′,yframe2);

  shem2=[basefilename sframe2 postfix];

  rawim=imread(shem2);

  basicim=double(flipud(rawim))-1;

  X=basicim(bottom:roof,left:right);

  Zb=log(yepsilon+X.*invM);

  saturated= (X>sun);

  dilsaturated=double(dilate(saturated,SE));

  DELTAZb2=( (1./((epsM+X).^2)) .* (SIG2+(C.*(X.^2))) ).*(~dilsaturated)   +
sunerror.*dilsaturated;

  hnewcell{1,1}=Zb;

  hnewcell{2,1}=DELTAZb2;

  if k==1

    hnewcell{3,1}=centerx2guess;

    hnewcell{4,1}=centery2guess;

  else

    hnewcell{3,1}=centerx2(old_yframe_ind)+centerx2guess;

    hnewcell{4,1}=centery2(old_yframe_ind)+centery2guess;

  end

  newcell=hnewcell;

  %if yframe2>11,

    %figure(1);  image(42*fusedcell{1,1});colormap(gray(256));axis xy;

    %figure(3);  imagesc(-log(fusedcell{2,1}));colormap(gray(256));axis xy;

    %figure(2);  image(42*Zb);colormap(gray(256));axis xy;

    %figure(4);  imagesc(-log((DELTAZb2)));colormap(gray(256));axis xy;

    %drawnow

    %1111111

    %pause

  %end
				
   clear rawim basicim
   %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%  DERIVING TRANSLATION BETWEEN IMAGES
[tikunx(yframe_ind+ystep),tikuny(yframe_ind+ystep)]=gmultireml31(holdcell,hnewcell,levels_x,le
vels_y,minlevel);%
  centerx2(yframe_ind+ystep)=newcell{3,1} + tikunx(yframe_ind+ystep);
  centery2(yframe_ind+ystep)=newcell{4,1} + tikuny(yframe_ind+ystep);
  [tikunx(yframe_ind+ystep)   tikuny(yframe_ind+ystep)]
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FUSING THE IMAGES
  newcell{3,1}=centerx2(yframe_ind+ystep);
  newcell{4,1}=centery2(yframe_ind+ystep);
  %%%%%%%%% NEED TO CORRECT THIS WITH THE NEW KIND OF FUSION
  fusedcell=fusem122(fusedcell,newcell);
  old_yframe_ind=yframe_ind+ystep;
  k=k+1;
toc
   save yytemp.mat  centerx2 centery2 tikunx tikuny
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear newcell oldcell
save coordml40.mat  centerx2 centery2 tikunx tikuny
fused=fusedcell{1,1};
deltaz2=fusedcell{2,1};
save fused40.mat fused deltaz2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear fusedcell
  figure(2);
  imagesc(fused);     yshow24;
  set(gcf,′menubar′,′figure′)
  figure(3);
  imagesc(-log((DELTAZb2)));   yshow24;
  set(gcf,′menubar′, ′figure′)
  %darklevel=3;
%sunlevel=249;
%      clippedl=validimag(bottom:roof,left:right).*flatfun;
%      hclippedl=log2(clippedl+1);            %LOG to deal with the high dynamic range
				
function [xmove,ymove]=gmlscalemov31(Acell,OBcell)
%
% [xmove,ymove]=mlscalemov4(Acell,OBcell)
% Acell and OBcell are cells containing images at a specific
% scale,and each of their initial center-coordinates.
% The routine searches for the best GLOBAL CORRELATION between
% the images,in a 5×5 neighborhood around the initial
% centers.
% THE IMAGES NEED NOT BE OF THE SAME SIZE.
% [xmove,ymove]give the shifting in (x,y) coordinates of
% Bcell relative to Acell - that is the correction of the
% center vector of Bcell.
%
% See also scalprod2 putinbig7
Bcell=OBcell;
%movementx=[-2:1:2];
%movementy=[-2:1:2];
movementx=[-1:1:1];
movementy=[-1:1:1];
centerx1=Acell{3,1};
centery1=Acell{4,1};
centerx2=Bcell{3,1};
centery2=Bcell{4,1};
for xind=1:length(movementx),
  centerx2_updated =centerx2+movementx(xind);
  Bcell{3,1}=centerx2_updated;

  for yind=1:length(movementy),

    centery2_updated=centery2+movementy(yind);

    Bcell{4,1}=centery2_updated;

    [oldgreatcell,newgreatcell]=putbigml3(Acell,Bcell);

    %photoA=oldgreatcell{1,1};      %THE 1ST SUB-IMAGE

    %photoB=newgreatcell{1,1};      %THE 2ND SUB-IMAGE

    %delta2A=oldgreatcell{2,1};     %THE 1ST SUB-IMAGE

    %delta2B=newgreatcell{2,1};     %THE 2ND SUB-IMAGE

    %figure(5);  image(42*photoA);colormap(gray(256));axis xy;

    %figure(6);  image(42*photoB);colormap(gray(256));axis xy;

    %figure(7);  imagesc(-log((delta2A)));colormap(gray(256));axis xy;

    %figure(8);  imagesc(-log((delta2B)));colormap(gray(256));axis xy;

    %22222

    %pause

    diffml(xind,yind)=generallikely31(oldgreatcell,newgreatcell);

  end
end
mindiff=min(diffml(:));
%diffm1
[xmovementind,ymovementind]= find(diffml==mindiff);
xmove=round(mean(movementx(xmovementind)));
ymove=round(mean(movementy(ymovementind)));
				
function [translatex,translatey]=gmultireml31(oldcell,newcell,levels_x,levels_y,minlevel)
%
% [translatex,translatey]=multiml5(oldcell,newcell,levels_x,levels_y,minlevel)
% oldcell and newcell have two input images at raw resolution
% who will be registerred.
% THE REGISTRATION IS DONE IN A COARSE-TO-FINE MANNER.The output vector
% is the (x,y) shift of the image in newcell relative to the one in
% oldcell.
%
% See also scaledmove7 mizoor3 putinbig6
shape=′valid′;
[oldgreatcell,newgreatcell]=puthullml2(oldcell,newcell);
Za=oldgreatcell{1,1};
DELTAZ2a=oldgreatcell{2,1};
centerx1=0;
centery1=0;
Zb=newgreatcell{1,1};
DELTAZ2b=newgreatcell{2,1};
centerx2(1)=0;
centery2(1)=0;
clear oldgreatcell newgreatcell
if ~(levels_x==levels_y)

      [a,maindirection]=max([levels_x,levels_y]);

      if maindirection==1                             %%%  x-direction is the main movement
unceratinty

    levelseq=[levels_x:-1:levels_y+1];

            for levelind=1:length(levelseq),

            level=levelseq(levelind);

            if ~(levelind==1)

            leveldifference=levelseq(levelind-1)-level;

            centerx2(levelind)=centerx2temp*(2^leveldifference);

            centery2(levelind)=centery2temp;

            end

        [photoA,delta2A]=mizoorml21_xy(Za,DELTAZ2a,level,levels_y,shape);

        [photoB,delta2B]=mizoorml21_xy(Zb,DELTAZ2b,level,levels_y,shape);

                    Acell{1,1}=photoA;

                    Acell{2,1}=delta2A;

                    Acell{3,1}=centerx1;

                    Acell{4,1}=centery1;
 % figure(5);   image(42*photoA); colormap(gray(256)); axis xy;
 % figure(6);   image(42*photoB); colormap(gray(256)); axis xy;
 % figure(7);   imagesc(-log((delta2A))); colormap(gray(256)); axis xy;
 % figure(8);   imagesc(-log((delta2B))); colormap(gray(256)); axis xy;
 % 22222
 % pause

                Bcell{1,1}=photoB;

                Bcell{2,1}=delta2B;

                   Bcell{3,1}=centerx2(levelind);

                   Bcell{4,1}=centery2(levelind);
				
                         [xmove,ymove]=gmlscalemov31(Acell,Bcell);

                         centerx2temp=centerx2(levelind)+xmove;

                         centery2temp=centery2(levelind)+ymove;

    end
   else                             %%% y-direction is the main movement unceratinty

     levelseq=[levels_y:-1:levels_x+1];

            for levelind=1:length(levelseq),

            level=levelseq(levelind);

            if ~(levelind==1)

            leveldifference=levelseq(levelind-1)-level;

            centerx2(levelind)=centerx2temp;

            centery2(levelind)=centery2temp*(2^leveldifference);

            end

       [photoA,delta2A]=mizoorml21_xy(Za,DELTAZ2a,levels_x,level,shape);

       [photoB,delta2B]=mizoorml21_xy(Zb,DELTAZ2b,levels_x,level,shape);

                   Acell{1,1}=photoA;

                   Acell{2,1}=delta2A;

                   Acell{3,1}=centerx1;

                   Acell{4,1}=centery1;

               Bcell{1,1}=photoB;

               Bcell{2,1}=delta2B;

                   Bcell{3,1}=centerx2(levelind);

                   Bcell{4,1}=centery2(levelind);

                   [xmove,ymove]=gmlscalemov31(Acell,Bcell);

                   centerx2temp=centerx2(levelind)+xmove;

                   centery2temp=centery2(levelind)+ymove;

   end
  end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if ~(levels_x==levels_y)
   [a,maindirection]=max([levels_x,levels_y]);
   if maindirection==1

    levelseq=[levels_y:-1:minlevel];

    centerx2(1)=centerx2temp*(2^leveldifference);

    centery2(1)=centery2temp;
   else

    levelseq=[levels_x:-1:minlevel];

           centerx2(1)=centerx2temp;

           centery2(1)=centery2temp*(2^leveldifference);
   end
else
   levelseq=[levels_x:-1:minlevel];
   centerx2(1)=0;
   centery2(1)=0;
end
for levelind=1:length(levelseq),
   level=levelseq(levelind);
  if ~(levelind==1)

   leveldifference=levelseq(levelind-1)-level;

   centerx2(levelind)=centerx2temp*(2^leveldifference);

   centery2(levelind)=centery2temp*(2^leveldifference);
  end
				
  [photoA,delta2A]=mizoorml21(Za,DELTAZ2a,level,shape);

  [photoB,delta2B]=mizoorml21(Zb,DELTAZ2b,level,shape);

  Acell{1,1}=photoA;

  Acell{2,1}=delta2A;

  Acell{3,1}=centerx1;

  Acell{4,1}=centery1;

  Bcell{1,1}=photoB;

  Bcell{2,1}=delta2B;

  Bcell{3,1}=centerx2(levelind);

  Bcell{4,1}=centery2(levelind);

  [xmove,ymove]=gmlscalemov31(Acell,Bcell);

  centerx2temp=centerx2(levelind)+xmove;

  centery2temp=centery2(levelind)+ymove;
  end
  translatex=centerx2temp*(2^minlevel);
  translatey=centery2temp*(2^minlevel);
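The coarse-to-fine registration above halves resolution level by level, estimates the shift at the coarsest level, and doubles the running center estimate each time it descends to a finer level (the `*(2^leveldifference)` steps). A minimal NumPy sketch of that scheme; the helper names and the sum-of-squared-differences criterion are illustrative stand-ins, not the listing's gmlscalemov31:

```python
import numpy as np

def downsample(img, levels):
    """Halve the image `levels` times with 2x2 box averaging."""
    for _ in range(levels):
        h = img.shape[0] // 2 * 2
        w = img.shape[1] // 2 * 2
        img = 0.25 * (img[0:h:2, 0:w:2] + img[1:h:2, 0:w:2]
                      + img[0:h:2, 1:w:2] + img[1:h:2, 1:w:2])
    return img

def best_shift(a, b, center, radius=1):
    """Search integer shifts around `center`, return the one that
    minimizes the sum of squared differences between a and rolled b."""
    best, best_err = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            err = np.sum((a - np.roll(b, (dy, dx), axis=(0, 1))) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def coarse_to_fine_shift(a, b, levels):
    """Estimate the (dy, dx) shift of b relative to a, refining the
    estimate from the coarsest pyramid level down to full resolution."""
    shift = (0, 0)
    for level in range(levels, -1, -1):
        shift = best_shift(downsample(a, level), downsample(b, level), shift)
        if level > 0:
            shift = (2 * shift[0], 2 * shift[1])  # carry to next finer level
    return shift
```

Because only a small radius is searched per level, the total work stays near-constant per level while the effective search range at full resolution grows as 2^levels.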
  function[enhanced,mnval,mxval]=histwin4(X,Y,photo,logphoto)
XS=sort(X);
YS=sort(Y);
L=round(XS(2));
R=round(XS(3));
D=round(YS(2));
U=round(YS(3));
%    LD   LU  RU  RD  LD
Xp= [L   L   R   R   L];
Yp= [D   U   U   D   D];
XO= (L+R)/2;
YO= (D+U)/2;
spanx=size(photo,2);
spany=size(photo,1);
[LD_row,LD_col]=xy2ij_brack([L,D],spany);
[RD_row,RD_col]=xy2ij_brack([R,D],spany);
[LU_row,LU_col]=xy2ij_brack([L,U],spany);
[RU_row,RU_col]=xy2ij_brack([R,U],spany);
stam=flipud(photo);
subarea=stam(LD_row:-1:LU_row,LD_col:RD_col);
mn=min(subarea(:));
mx=max(subarea(:));
lemata=subarea-mn;
mx=max(lemata(:));
imx16=(2^16)/mx;
substretch=uint16(round(lemata*imx16));
subareahist=histeq(substretch,256);
mx=max(subareahist(:));
				
imx8=(2^8)/double(mx);
subenhanced=uint8(round(double(subareahist)*imx8));
stam=flipud(logphoto);
stam(LD_row:-1:LU_row,LD_col:RD_col)=subenhanced;
enhanced=flipud(stam);
mnval=min(subarea(:));
mxval=max(subarea(:));
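histwin4 above stretches the selected sub-area to 16 bits, histogram-equalizes it (MATLAB's histeq), and pastes the rescaled 8-bit result back into the log image. A plain-NumPy stand-in for the equalization step, assuming an empirical-CDF mapping in place of histeq:

```python
import numpy as np

def equalize_window(img, top, bottom, left, right, levels=256):
    """Return a float copy of img whose [top:bottom, left:right] window
    has been histogram-equalized to [0, levels-1] via its empirical CDF
    (a stand-in for the stretch + histeq steps of histwin4)."""
    out = img.astype(float)
    win = out[top:bottom, left:right]
    vals, counts = np.unique(win, return_counts=True)
    cdf = np.cumsum(counts) / win.size      # empirical CDF of the window
    eq = np.interp(win, vals, cdf)          # each pixel -> its CDF value
    out[top:bottom, left:right] = np.round(eq * (levels - 1))
    return out
```

For uniformly distributed window values this maps the window onto a near-linear ramp spanning the full output range, which is the point of the local enhancement.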
clear
close all
pack
kamut=1240;
%lambda=3.5e+04;
%lambdalog=3.5e4;
%lambda=2.5e+04;
lambda=1e+04;
lambdalog=6e4;
load ../decayfun
clear m delta_m
load lsatmosaic0.mat
Lshulaim=0;oldLshulaim=0;
left=min(xint)+Lshulaim;
right=max(xint);
bottom=170;roof=355;
imagehight=roof-bottom+1;
imagewidth=right-left+1;
%sun=250;
%sun=100;
sun=190;
dark=2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
logfused=log(1+fused);
mx=max(logfused(:));
imx8=(2^8)/mx;
logfused8=uint8(round(logfused*imx8));
[rows,cols]=size(logfused8);
L=1;R=cols;
D=1;U=rows;
figure(1);
hold off
image(logfused8);
colormap(gray(256));axis(′image′,′off′,′xy′);
%Xwide=cols;Ywide=round(Xwide*rows/cols);
Xwide=cols/1.8;Ywide=round(Xwide*rows/cols);
set(gcf,′position′,[55 109 Xwide Ywide]);
set(gca,′position′,[0 0 1 1]);
vec=get(gcf,′position′);
Ywide=vec(3)*rows/cols;
set(gcf,′position′,[1 500 Xwide Ywide]);
				
  spanx=R-L+1;
  spany=U-D+1;
  X=uint16(ones(spany,1)*(L:R));      %the coordinate grid of the combined-greater image
  Y=uint16((D:U)′*ones(1,spanx));
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% COORDINATE GRID OF THE MOSAIC
  [bigrows,bigcols]=size(fused);
  fusedcell{1,1}=flipud(fused);      % the 1st (old) image
  fusedcell{2,1}=greatcenterx+oldLshulaim/2;%coordinates of center of old image
  fusedcell{3,1}=greatcentery;
  mxx=max(fused(:));
  ybase=sqrt(2);
  %ybase=2;
  denomin=log2(ybase);
  fracoctav=(log2(mxx/sun)/denomin)+1;
  intoctav=ceil(fracoctav);
  goodpts=[];
  for octavnum=1:intoctav,

  if octavnum==1

     minlevel=10*dark;

     maxlevel=sun;

  else

     minlevel=sun.*(ybase.^(octavnum-2));

     maxlevel=sun.*(ybase.^(octavnum-1));

  end

  inrange=((fused>minlevel) & (~(fused>maxlevel)));

  inrange_ind=find(inrange);

  lenrange=length(inrange_ind);

  if lenrange<kamut

     goodpts=[goodpts;inrange_ind];

  else

     linerand=1+round((lenrange-1)*(rand(kamut,1)));

     picked_ind=inrange_ind(linerand);

     goodpts=[goodpts;picked_ind];
  %  x=X(picked_ind);
  %  y=Y(picked_ind);
  end
end
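The loop above partitions the fused mosaic into geometric intensity octaves (ratio ybase) and keeps at most kamut randomly chosen pixels per octave, so every brightness band is represented roughly equally in the later fit. A sketch of that sampling policy; the parameter defaults here are illustrative, not the listing's values:

```python
import numpy as np

def sample_per_octave(values, base=2.0, floor=100.0, cap=50, seed=0):
    """Pick at most `cap` indices per geometric intensity band:
    band 0 is (0, floor], band k>0 is (floor*base^(k-1), floor*base^k].
    Mirrors the kamut-limited octave sampling of the fused mosaic."""
    rng = np.random.default_rng(seed)
    picked = []
    lo, hi = 0.0, floor
    while lo < values.max():
        idx = np.flatnonzero((values > lo) & (values <= hi))
        if idx.size > cap:
            idx = rng.choice(idx, size=cap, replace=False)
        picked.append(idx)
        lo, hi = hi, hi * base
    return np.concatenate(picked)
```

Without the cap, bright (or numerous dark) pixels would dominate the constraint set; the per-band cap keeps the subsequent estimation balanced across the dynamic range.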

    x=X(goodpts);

    y=Y(goodpts);

    length(x)

    figure(1);

    hold off

    image(logfused8);colormap(gray(256));axis(′image′,′off′,′xy′);

    hold on
%     plot(x,y,′r.′);axis xy

    plot(x,y, ′r*′);axis xy

  set(gca,′xtick′,0:10:2400);set(gca,′ytick′,6:10:250);
%    grid on;  axis on;

  drawnow

  pause(10)
[rows,cols]=size(fused);
				
greatL=(greatcenterx-(cols-1)/2);      %the Left side of the image
greatR=(greatcenterx+(cols-1)/2);      % the Right side
greatD=(greatcentery-(rows-1)/2);      % the Down of the image
greatU=(greatcentery+(rows-1)/2);      % the Upper side of the image
clear deltaz2 fused logfused8 X Y x y fusedcell logfused8 logfused
clear inrange inrange_ind intoctav lenrange linerand maxlevel minlevel
clear picked_ind imx8
xpix=1:imagewidth;
ystep=1;
frame_inds=1:ystep:36;
load ../coordml8.mat    %%%% LOADING THE COORDINATES
%basefilename=′../gradfilt1 ′;
basefilename=′/proj/cavel/users/yoav/MOSAIC/MAY26/gradfilt1_′;
postfix=′.GIF′;
basephotoname=′photo′;
rawname=′basicim′;
netframes=[1:9,11:37];
ILUTZIM=sparse([]);
tic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MERGING IMAGES IN A BIG FRAME
for indframe=1:length(frame_inds),
   yframe_ind=frame_inds(indframe);
   yframe=netframes(yframe_ind);
   sframe=sprintf(′%d′,yframe);
   shem2=[basefilename sframe postfix];
   rawim=imread(shem2);
   basicim=double(flipud(rawim))-1;
   photo=uint8(basicim(bottom:roof,left:right));
   centerx=centerx2(yframe_ind);
   centery=centery2(yframe_ind);

    oldcell{1,1}=photo;          % the 1st (old) image

    oldcell{2,1}=centerx;  %coordinates of center of old image
   oldcell{3,1}=centery;
[beseder,xbeseder,ybeseder,valbeseder]=subpts21(oldcell,greatL,greatR,greatD,greatU,goodpts);
   beseder_ind=find(beseder);
   xsmall=xbeseder(beseder_ind);
   ysmall=ybeseder(beseder_ind);
   valsmall=valbeseder(beseder_ind);
   [rows,cols]=size(photo);
   if  ~(indframe==1)

    bothbeseder=(oldbeseder & beseder);

        beseder_ind=find(bothbeseder);

        OLDxsmall=xbesederold(beseder_ind);

        OLDysmall=ybesederold(beseder_ind);

        OLDval= valold(beseder_ind);

        NEWxsmall=xbeseder(beseder_ind);

        NEWysmall=ybeseder(beseder_ind);
				
    NEWval  =valbeseder(beseder_ind);

    XPIX=ones(length(OLDxsmall),1)*xpix;

    OLDXSMALL=OLDxsmall*ones(1,imagewidth);

    IOLD=find(~(abs(XPIX-OLDXSMALL)′));

    chunk=sparse((zeros(length(OLDxsmall),imagewidth))′);

    chunk(IOLD)=NEWval;    % positive sign

    NEWXSMALL=NEWxsmall*ones(1,imagewidth);

    INEW=find(~(abs(XPIX-NEWXSMALL)′));

    chunk(INEW)=-OLDval;     %minus sign

    chunk=sparse(chunk′);

    ILUTZIM=[ILUTZIM;chunk];

     %figure(2); hold off; image(oldphoto);

     %colormap(gray(256)); axis(′image′,′off′,′xy′);

     %set(gcf,′position′,[59 292 612 108]);

     %set(gca,′position′,[0 0 1 1]);hold on

     %plot(OLDxsmall,OLDysmall,′r.′);axis xy

     %axis on;grid on;set(gca,′xtick′,0:10:700);set(gca,′ytick′,7:10:250);
   %figure(3);   hold off;  image(photo);

     %colormap(gray(256)); axis(′image′,′off′,′xy′);

     %Xwide=cols;Ywide=round(Xwide*rows/cols);

     %set(gcf,′position′,[59 100 612 108]);

     %set(gca,′position′,[0 0 1 1]);  hold on;

     %plot(NEWxsmall,NEWysmall,′r.′); axis xy

  %axis on;grid on;set(gca,′xtick′,0:10:700);set(gca,′ytick′,7:10:250);

  end

  oldbeseder=beseder;

  xbesederold=xbeseder;

  ybesederold=ybeseder;

  valold=valbeseder;
end
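Each overlapping frame pair in the loop above contributes sparse rows to ILUTZIM: +NEWval in the old x-column and -OLDval in the new x-column. A scene point seen through mask positions x_old and x_new satisfies v_new·m(x_old) - v_old·m(x_new) = 0, a homogeneous linear constraint on the mask profile m. A dense-NumPy sketch of that assembly (the listing uses MATLAB sparse matrices; names here are illustrative):

```python
import numpy as np

def build_constraints(old_x, new_x, old_val, new_val, width):
    """One row per point seen at mask columns old_x and new_x:
    new_val*m[old_x] - old_val*m[new_x] = 0, so the true attenuation
    profile m lies in the null space of the stacked rows
    (the role played by the chunk/ILUTZIM matrix)."""
    A = np.zeros((len(old_x), width))
    for r in range(len(old_x)):
        A[r, old_x[r]] += new_val[r]   # positive sign
        A[r, new_x[r]] -= old_val[r]   # minus sign
    return A
```

At mosaic scale each row has only two nonzeros, which is why the listing keeps the matrix sparse.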
toc
ILUTZ=sparse(ILUTZIM);
whos  ILUTZ
clear ILUTZIM
clear A oldphoto D U L R INEW IOLD Lshulaim NEWXSMALL NEWval NEWxsmall
clear NEWysmall OLDXSMALL OLDval OLDxsmall OLDysmall SE XPIX xpix
clear Xwide Ywide ans beseder beseder_ind bigcols bigrows bothbeseder
clear bottom chunk roof cols rows fracoctav frame_inds fusedcell
clear mx mxx netframes octavnum oldLshulaim oldbeseder oldcell
clear photo oldcell clipped fusedclipped saturated dilsaturated
clear hetclipped beseder netfused LI LG centerx centery Dnew Dold
clear Unew Uold Lnew Lold Rnew Rold centerx2 centery2 oldcols oldrows
clear bigrows bigcols greatcenterx greatcentery ystep fusedcell
clear rawim basicim X big2D basefilename basephotoname SE picked_ind
clear centerx centery imagehight indframe postfix right roof spanx spany
clear rawname sframe shem2 tikunx tikuny yframe yframe_ind sun sunerror
clear tikunx tikuny tzamtzam valbeseder valint valold valsmall vec
clear xbeseder xbesederold xsmall ybeseder ybesederold ysmall ystep
close all
pack
				
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%  
  ynegat=(ILUTZ<0);
  yposit=(ILUTZ>0);
  ynozer=(ynegat | yposit);
  ymeasured=sum(ynozer);
  meanmeasured=mean(ymeasured);
  %saf=min([1+(meanmeasured/10),5]);
  saf=1;
  reliable=find(~(ymeasured<saf));
  %mnn=max([min(reliable)-1,1]);
  mnn= min(reliable)-1;
  %mxx=min([max(reliable)+1,imagewidth]);
  mxx= max(reliable)+1;
  %I=find(ymeasured);
  %endmesures=min ([max(I)+1,imagewidth]);
  mnnozer=ynozer(:,1:mnn);
  J=[];
  if mnn>1
   J=find((sum(mnnozer′))′);
  elseif mnn==1

   J=find(mnnozer);
  end
  NEWILUTZ=ILUTZ;
  NEWILUTZ(J,:)=0;
  mxnozer=ynozer(:,mxx:imagewidth);
  J=[];
  if   (mxx<imagewidth)
   J=find((sum(mxnozer′))′);
  elseif (imagewidth==mxx)

   J=find(mxnozer);
  end
  NEWILUTZ(J, :)=0;
  ynegat=(NEWILUTZ<0);
  yposit=(NEWILUTZ>0);
  ynozer=(ynegat | yposit);
  ymeasured=sum(ynozer);
  yesmeasured=find(ymeasured>0);
  mnn=min(yesmeasured);
  mxx=max(yesmeasured);
  oldimagewidth=imagewidth;
  imagewidth=mxx-mnn+1;
  NETILUTZ=NEWILUTZ(:,mnn:mxx);
  clear I J NEWILUTZ mnnozer mxnozer reliable yesmeasured
  clear ynegat ynozer yposit meanmeasured
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  %imagewidth=oldimagewidth;
  tic
  amain=ones(imagewidth,1);
				
Amain=diag(amain);
aoff1=-0.5*ones(imagewidth-1,1);
aoff1(1)=-1;
Aoff1=diag(aoff1,1);
aoff_1=-0.5*ones(imagewidth-1,1);
aoff_1(imagewidth-1)=-1;
Aoff_1=diag(aoff_1,-1);
SM=Amain+Aoff1+Aoff_1;
SM(1,:)=0;
SM(imagewidth,:)=0;
clear Amain Aoff1 Aoff_1 aoff_1 aoff1 amain
sqlambda=sqrt(lambda);
HALAKUT=sqlambda*sparse(SM);
LOCENERG=NETILUTZ.^2;
%locenerg=sqrt(sum(LOCENERG′))′;
locenerg=(sum(LOCENERG′))′;
maxenerg=sqrt(max(locenerg(:)));
NORMILUTZ=NETILUTZ/235;
ALLILUTZ=[NORMILUTZ;HALAKUT];
clear HALAKUT SM LOCENERG NORMILUTZ NETILUTZ
pack
[U,S,V]=svd(full(ALLILUTZ),0);
s=diag(S);
[Y,I]=min(s)
clear S U
M=V(:,I);
[Mxx,I]=max (abs(M));
Imn=min (I); 
mpeak=M(Imn);
m=M/mpeak;
m_amud=m;
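The profile m is recovered above as the right singular vector belonging to the smallest singular value of the stacked data-plus-smoothness matrix, then normalized by its peak value; that is the role of svd(full(ALLILUTZ),0) followed by M/mpeak. A NumPy sketch under the same construction, with the -0.5/1/-0.5 second-difference rows mirroring SM:

```python
import numpy as np

def solve_profile(A, smooth_weight=1.0):
    """Estimate m from homogeneous constraints A m ~ 0 plus a
    second-difference smoothness penalty, via the right singular
    vector of the smallest singular value (cf. svd(full(ALLILUTZ),0))."""
    n = A.shape[1]
    S = np.zeros((n, n))            # smoothness rows: -0.5, 1, -0.5
    for i in range(1, n - 1):
        S[i, i - 1:i + 2] = [-0.5, 1.0, -0.5]
    stacked = np.vstack([A, np.sqrt(smooth_weight) * S])
    _, s, Vt = np.linalg.svd(stacked, full_matrices=False)
    m = Vt[np.argmin(s)]
    return m / m[np.argmax(np.abs(m))]   # normalize so the peak is 1
```

The minimizing unit vector of |Ax|^2 + w|Sx|^2 is exactly this singular vector; peak normalization fixes the arbitrary scale (and sign) of the homogeneous solution.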
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%invC=(NORMILUTZ′*NORMILUTZ) + 0.5*eye(imagewidth);
invC=(ALLILUTZ′*ALLILUTZ) + 0.5*eye(imagewidth);
C=inv(invC);
%NORMILUTZ=NETILUTZ/235;
%Sr=m_amud′*((NORMILUTZ′*NORMILUTZ)*m_amud);
Sr=m_amud′*(invC*m_amud);
%LOCENERG=NORMILUTZ.^2;
LOCENERG=ALLILUTZ.^2;
%locenerg=sqrt(sum(LOCENERG′))′;
locenerg=(sum(LOCENERG′))′;
n_keilu=sum(locenerg(:));
dof=length(m_amud)-1;
syx=sqrt(Sr/(n_keilu-dof));
syx2=syx^2;
				
C_norm=C* syx2;
sig_m=sqrt(diag(C_norm));
figure(5)
plot(log10(sig_m(1:imagewidth)./m_amud(mnn :mxx)))
grid on
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
toc
clear V
mlog=log(abs(m));
% P=mlog ′;
P=zeros(1,oldimagewidth);
P(mnn :mxx)=mlog;
leftmarg=1:mnn-1;
P(fliplr(leftmarg))=2*mlog(1)-mlog(leftmarg+1);
rightmarg=mxx+1:oldimagewidth;
oldrightmarg=rightmarg-oldimagewidth+imagewidth;
P(fliplr(rightmarg))=2*mlog(imagewidth)-mlog(oldrightmarg-1);
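The margins of P are filled above by point reflection about the end samples, P(-k) = 2·P(1) - P(1+k) and symmetrically on the right, so the smoothing that follows sees no artificial step at the borders. A small NumPy sketch of that odd extension:

```python
import numpy as np

def odd_extend(core, left_pad, right_pad):
    """Extrapolate a profile past both ends by point reflection about
    the end values, p[-k] = 2*p[0] - p[k] (and mirrored on the right),
    as done for P's left/right margins."""
    left = (2 * core[0] - core[1:left_pad + 1])[::-1]
    right = 2 * core[-1] - core[-2:-2 - right_pad:-1]
    return np.concatenate([left, core, right])
```

Odd (antisymmetric) extension preserves the slope at the boundary, so a locally linear profile extends as the same line.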
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
L=length(P);
Imain=ones(L,1);
if mnn>1
  Imain(1 :mnn)=1;
end
if mxx<L
  Imain(mxx+1:L)=1;
end
Imat=sparse(diag(Imain));
amain=ones(L,1);
Amain=diag(amain);
aoff1=-0.5*ones(L-1,1);
aoff1(1)=-1;
Aoff1=diag(aoff1,1);
aoff_1=-0.5*ones(L-1,1);
aoff_1(L-1)=-1;
Aoff_1=diag(aoff_1, -1);
S=sparse(Amain+Aoff1+Aoff_1);
S(1,:)=0;
S(L,:)=0;
clear Amain Aoff1 Aoff_1 aoff_1 aoff1 amain
StS=S′*S;
clear S;
				
  R=Imat + lambdalog*StS;
  iR=inv(R);
  Y=iR*P′;
  Ynorm=Y-max(Y);
  smoothm=exp(Ynorm′);
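The log-profile is smoothed above by solving Y = (Imat + lambdalog·StS)^(-1) P′, i.e. minimizing |y - p|^2 + λ|S y|^2 with S the second-difference operator. A NumPy sketch of the same Tikhonov step:

```python
import numpy as np

def smooth_log(p, lam):
    """Tikhonov-smooth a log-profile: minimize |y - p|^2 + lam*|S y|^2
    with S the -0.5/1/-0.5 second-difference operator (endpoints free),
    i.e. y = (I + lam*S'S)^{-1} p  (cf. R = Imat + lambdalog*StS)."""
    n = len(p)
    S = np.zeros((n, n))
    for i in range(1, n - 1):
        S[i, i - 1:i + 2] = [-0.5, 1.0, -0.5]
    R = np.eye(n) + lam * (S.T @ S)
    return np.linalg.solve(R, p)
```

With lam = 0 the solve is the identity; as lam grows, the result's curvature (second differences) can only shrink relative to the input, since the minimizer's objective is bounded by evaluating it at y = p.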
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  mlogdelta=(sig_m(1:imagewidth)./m_amud(mnn:mxx));
  %P=mlog′;
  dP=zeros(1,oldimagewidth);
  dP(mnn:mxx)=mlogdelta;
  leftmarg=1:mnn-1;
  dP(fliplr(leftmarg))=2*mlogdelta(1)-mlogdelta(leftmarg+1);
  rightmarg=mxx+1:oldimagewidth;
  oldrightmarg=rightmarg-oldimagewidth+imagewidth;
  dP(fliplr(rightmarg))=2*mlogdelta(imagewidth)-mlogdelta(oldrightmarg-1);
  dY=iR*dP′;
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  longm=exp(P);
  longdm=smoothm.*dY′;
  m_lemata=smoothm-longdm;
  m_lemala=smoothm+longdm;
  figure(1)
  hold off
  plot(longm,′b′)
  hold on
  plot(smoothm,′r′)
  plot(m_lemata,′m′)
  plot(m_lemala,′m′)
  set(gca,′xlim′,[1,oldimagewidth]);
  set(gca,′ylim′,[0 1]);
  dlog2m=dY′/log(2);
  log2m_lemata=log2(smoothm)-dlog2m;
  log2m_lemala=log2(smoothm)+dlog2m;
  figure(2)
  hold off
  plot(log2(abs(longm)),′b′)
  hold on
  plot(log2(smoothm),′r′)
  plot(log2m_lemata,′m′)
  plot(log2m_lemala,′m′)
  set(gca,′xlim′,[1,oldimagewidth]);
  m=smoothm;
  delta_m=longdm;
  save randecay37.mat xint m delta_m Ynorm smoothm C ILUTZ sig_m mnn mxx
				
load ../decayfun
origm=valint/max(valint);
figure(1)
hold on
plot(origm,′g′)
set(gca,′xlim′,[1,oldimagewidth]);
figure(2)
hold on
plot(log2(origm),′g′)
set(gca,′xlim′,[1,oldimagewidth]);
function[small,delta2small]=mizoorml21(big,delta2big,levels,shape);
newA=big;
newdelta2A=delta2big;
for iteration=1:levels,
   [reducedA,reducedelta2A]=reduceml21(newA,newdelta2A,shape);
   newA=reducedA;
   newdelta2A=reducedelta2A;
end
small=newA;
delta2small=newdelta2A;
function [small,delta2small]=mizoorml21_xy(big,delta2big,levels_x,levels_y,shape);
newA=big;
newdelta2A=delta2big;
if ~(levels_x==levels_y)
   for iteration=1:min([levels_x,levels_y]),

     [reducedA_x,reducedelta2A_x]=reduceml21_x(newA,newdelta2A,shape);

     [reducedA,reducedelta2A]=reduceml21_y(reducedA_x,reducedelta2A_x,shape);

            newA=reducedA;

     newdelta2A=reducedelta2A;
   end
   [a,maindirection]=max([levels_x,levels_y]);
   if maindirection==1   %%% x-direction is the main movement uncertainty

        for iteration=levels_y+1 :levels_x,

        [reducedA,reducedelta2A]=reduceml21_x(newA,newdelta2A,shape);

            newA=reducedA;

         newdelta2A=reducedelta2A;

      end

  else                                %%%  y-direction is the main movement uncertainty

        for iteration=levels_x+1 :levels_y,

        [reducedA,reducedelta2A]=reduceml21_y(newA,newdelta2A,shape);

            newA=reducedA;

        newdelta2A=reducedelta2A;

     end
   end
else
   levels=levels_x;
   for iteration=1:levels,

   [reducedA_x,reducedelta2A_x]=reduceml21_x(newA,newdelta2A,shape);
				
  [reducedA,reducedelta2A]=reduceml21_y(reducedA_x,reducedelta2A_x,shape);

         newA=reducedA;

  newdelta2A=reducedelta2A;
  end
end
small=newA;
delta2small=newdelta2A;
function diffm1=mostlikely2(oldcell,newcell)
%
% corrcoef=scalprod2(Zoldgreat,Znewgreat)
% Calculates the correlation coefficient (″normalized″)
% between partly overlapping images Zoldgreat and Znewgreat.
% The result is    corrcoef
Zold=oldcell{1,1};   %THE 1ST SUB-IMAGE
Znew=newcell{1,1};   %THE 2ND SUB-IMAGE
DELTAZ2old=oldcell{2,1};
DELTAZ2new=newcell{2,1};
nanold=uint8(isnan(Zold));
nannew=uint8(isnan(Znew));
nangreat=(nanold | nannew);
E=~nangreat;
F=find(E);
Zoldneto=NaN*ones(size(Zold));
Znewneto=NaN*ones(size(Znew));
DELTAZ2oldneto=NaN*ones(size(Zold));
DELTAZ2newneto=NaN*ones(size(Znew));
bestvalue=NaN*ones(size(Zold));
if isempty(F)

   diffm1=Inf;
else

          Zoldneto(F)=Zold(F);

          Znewneto(F)=Znew(F);

   DELTAZ2oldneto(F)=DELTAZ2old(F);

  DELTAZ2newneto(F)=DELTAZ2new(F);
%     Sigmaold=sqrt(DELTAZ2oldneto);
%     Sigmanew=sqrt(DELTAZ2newneto);

   Sigma2old=DELTAZ2oldneto;

   Sigma2new=DELTAZ2newneto;

    nominat=(Zoldneto(F)./Sigma2old(F)) + (Znewneto(F)./Sigma2new(F));

   denomin=(1./Sigma2old(F)) + (1./Sigma2new(F));

   sig2seam=1./denomin;

   bestvalue(F)=sig2seam.*nominat;
%     Eold=( (bestvalue(F) -Zoldneto(F))./Sigmaold(F) ).^2;
%     Enew=( (bestvalue(F) -Znewneto(F)) ./Sigmanew(F) ).^2;

   Eold=((bestvalue(F) -Zoldneto(F)) .^2) ./Sigma2old(F);

   Enew=((bestvalue(F) -Znewneto(F)) .^2) ./Sigma2new(F);

   E=Eold+Enew;

   diffm1=mean(E(:));
end
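mostlikely2 above fuses the two overlapping estimates by inverse-variance weighting (bestvalue = sig2seam.*nominat) and scores the registration by the mean chi-square residual of both measurements against the fused value. The same computation in NumPy, on scalars or whole arrays:

```python
import numpy as np

def ml_fuse(z_old, var_old, z_new, var_new):
    """Inverse-variance (maximum-likelihood) fusion of two noisy
    measurements of the same quantity; returns the fused value and
    its variance (cf. nominat, denomin, sig2seam)."""
    w = 1.0 / var_old + 1.0 / var_new
    fused = (z_old / var_old + z_new / var_new) / w
    return fused, 1.0 / w

def match_score(z_old, var_old, z_new, var_new):
    """Mean chi-square residual of both measurements against the fused
    value -- the quantity mostlikely2 returns as diffm1."""
    fused, _ = ml_fuse(z_old, var_old, z_new, var_new)
    e = (fused - z_old) ** 2 / var_old + (fused - z_new) ** 2 / var_new
    return np.mean(e)
```

With equal variances the fusion reduces to the plain average and the fused variance is halved, which matches intuition for two independent equally reliable measurements.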
				
function [translatex,translatey]=gmultireml31(oldcell,newcell,levels_x,levels_y,minlevel)
%
% [translatex,translatey]=multiml5(oldcell,newcell,levels_x,levels_y,minlevel)
% oldcell and newcell have two input images at raw resolution
% who will be registerred.
% THE REGISTRATION IS DONE IN A COARSE-TO-FINE MANNER.The output vector
% is the (x,y) shift of the image in newcell relative to the one in
% oldcell.
%
% See also scaledmove7 mizoor3 putinbig6
shape=′valid′;
[oldgreatcell,newgreatcell]=puthullml2(oldcell,newcell);
Za=oldgreatcell{1,1};
DELTAZ2a=oldgreatcell{2,1};
centerx1=0;
centery1=0;
Zb=newgreatcell{1,1};
DELTAZ2b=newgreatcell{2,1};
centerx2(1)=0;
centery2(1)=0;
clear oldgreatcell newgreatcell
if ~(levels_x==levels_y)

     [a,maindirection]=max([levels_x,levels_y]);

     if maindirection==1                        %%%  x-direction is the main movement uncertainty

    levelseq=[levels_x:-1:levels_y+1];

            for levelind=1:length(levelseq),

            level=levelseq(levelind);

            if ~(levelind==1)

            leveldifference=levelseq(levelind-1)-level;

            centerx2(levelind)=centerx2temp*(2^leveldifference);

            centery2(levelind)=centery2temp;

            end

        [photoA,delta2A]=mizoorml21_xy(Za,DELTAZ2a,level,levels_y,shape);

        [photoB,delta2B]=mizoorml21_xy(Zb,DELTAZ2b,level,levels_y,shape);

                   Acell{1,1}=photoA;

                   Acell{2,1}=delta2A;

                   Acell{3,1}=centerx1;

                   Acell{4,1}=centery1;
 % figure(5);   image(42*photoA); colormap(gray(256)); axis xy;
 % figure(6);   image(42*photoB); colormap(gray(256)); axis xy;
 % figure(7);   imagesc(-log((delta2A))); colormap(gray(256)); axis xy;
 % figure(8);   imagesc(-log((delta2B))); colormap(gray(256)); axis xy;
 % 22222
 % pause

               Bcell{1,1}=photoB;

               Bcell{2,1}=delta2B;
				
                         Bcell{3,1}=centerx2(levelind);

                         Bcell{4,1}=centery2(levelind);

                         [xmove,ymove]=mlscalemov21(Acell,Bcell);

                         centerx2temp=centerx2(levelind)+xmove;

                         centery2temp=centery2(levelind)+ymove;

    end
   else                             %%% y-direction is the main movement uncertainty

         levelseq=[levels_y:-1:levels_x+1];

                for levelind=1:length(levelseq),

                level=levelseq(levelind);

                if ~(levelind==1)

                leveldifference=levelseq(levelind-1)-level;

                centerx2(levelind)=centerx2temp;

                centery2(levelind)=centery2temp*(2^leveldifference);

                end

            [photoA,delta2A]=mizoorml21_xy(Za,DELTAZ2a,levels_x,level,shape);

            [photoB,delta2B]=mizoorml21_xy(Zb,DELTAZ2b,levels_x,level,shape);

                        Acell{1,1}=photoA;

                        Acell{2,1}=delta2A;

                        Acell{3,1}=centerx1;

                        Acell{4,1}=centery1;

                     Bcell{1,1}=photoB;

                     Bcell{2,1}=delta2B;

                        Bcell{3,1}=centerx2(levelind);

                        Bcell{4,1}=centery2(levelind);

                        [xmove,ymove]=mlscalemov21(Acell,Bcell);

                        centerx2temp=centerx2(levelind)+xmove;

                        centery2temp=centery2(levelind)+ymove;

   end
  end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if ~(levels_x==levels_y)

  [a,maindirection]=max([levels_x,levels_y]);

  if maindirection==1

     levelseq=[levels_y:-1:minlevel];

     centerx2(1)=centerx2temp*(2^leveldifference);

     centery2(1)=centery2temp;

  else

     levelseq=[levels_x:-1:minlevel];

            centerx2(1)=centerx2temp;

            centery2(1)=centery2temp*(2^leveldifference);
   end
else
   levelseq=[levels_x:-1:minlevel];
   centerx2(1)=0;
   centery2(1)=0;
end
for levelind=1:length(levelseq),
   level=levelseq(levelind);
  if ~(levelind==1)

   leveldifference=levelseq(levelind-1)-level;
				
   centerx2(levelind)=centerx2temp*(2^leveldifference);

   centery2(levelind)=centery2temp*(2^leveldifference);
  end
  [photoA,delta2A]=mizoorml21(Za,DELTAZ2a,level,shape);
  [photoB,delta2B]=mizoorml21(Zb,DELTAZ2b,level,shape);
  Acell{1,1}=photoA;
  Acell{2,1}=delta2A;
  Acell{3,1}=centerx1;
  Acell{4,1}=centery1;
  Bcell{1,1}=photoB;
  Bcell{2,1}=delta2B;
  Bcell{3,1}=centerx2(levelind);
  Bcell{4,1}=centery2(levelind);
  [xmove,ymove]=mlscalemov21(Acell,Bcell);
  centerx2temp=centerx2(levelind)+xmove;
  centery2temp=centery2(levelind)+ymove;
end
translatex=centerx2temp*(2^minlevel);
translatey=centery2temp*(2^minlevel);
clear
close all
pack
kamut=1240;
%lambda=3.5e+04;
%lambdalog=3.5e4;
%lambda=2.5e+04;
lambda=1e+04;
lambdalog=6e4;
load ../decayfun
clear m delta_m
load lsatmosaic0.mat
Lshulaim=0;oldLshulaim=0;
left=min(xint)+Lshulaim;
right=max(xint);
bottom=170; roof=355;
imagehight=roof-bottom+1;
imagewidth=right-left+1;
%sun=250;
%sun=100;
sun=190;
dark=2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
logfused=log(1+fused);
mx=max(logfused(:));
imx8=(2^8)/mx;
logfused8=uint8(round(logfused*imx8));
[rows,cols]=size(logfused8);
L=1;R=cols;
				
D=1;U=rows;
figure(1);
hold off
image(logfused8);
colormap(gray(256));axis(′image′,′off′,′xy′);
%Xwide=cols;Ywide=round(Xwide*rows/cols);
Xwide=cols/1.8;Ywide=round(Xwide*rows/cols);
set(gcf,′position′,[55 109 Xwide Ywide]);
set(gca,′position′,[0 0 1 1]);
vec=get(gcf,′position′);
Ywide=vec(3)*rows/cols;
set(gcf,′position′,[1 500 Xwide Ywide]);
spanx=R-L+1;
spany=U-D+1;
X=uint16(ones(spany,1)*(L:R));       %the coordinate grid of the combined-greater image
Y=uint16((D:U)′*ones(1,spanx));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% COORDINATE GRID OF THE MOSAIC
[bigrows,bigcols]=size(fused);
fusedcell{1,1}=flipud(fused);         % the 1st (old) image
fusedcell{2,1}=greatcenterx+oldLshulaim/2;   %coordinates of center of old image
fusedcell{3,1}=greatcentery;
mxx=max(fused(:));
%ybase=sqrt(2);
%denomin=log2(ybase);
%fracoctav=(log2(mxx/sun)/denomin) +1;
%intoctav=ceil(fracoctav);
[rows,cols]=size(fused);
greatL=(greatcenterx-(cols-1)/2);      %the  Left side of the image
greatR=(greatcenterx+(cols-1)/2);      % the Right side
greatD=(greatcentery-(rows-1)/2);      % the Down of the image
greatU=(greatcentery+(rows-1)/2);      % the Upper side of the image
clear deltaz2 fused logfused8 X Y x y fusedcell logfused8 logfused
clear inrange inrange_ind intoctav lenrange linerand maxlevel minlevel
clear picked_ind imx8
xpix=1:imagewidth;
ystep=1;
frame_inds=1:ystep:36;
load ../coordm18.mat    %%%% LOADING THE COORDINATES
basefilename=′/proj/cavel/users/yoav/MOSAIC/MAY26/gradfilt1_′;
postfix=′.GIF′; basephotoname=′photo′; rawname=′basicim′;
netframes=[1:9 ,11:37];
ILUTZIM=sparse([]);
tic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MERGING IMAGES IN A BIG FRAME
for indframe=1:length(frame_inds)-1,
   yframe_ind=frame_inds(indframe);
				
yframe=netframes(yframe_ind);
sframe=sprintf(′%d′,yframe);
shem2=[basefilename sframe postfix];
rawim=imread(shem2);
basicim=double(flipud(rawim))-1;
photo=uint8(basicim(bottom:roof,left:right));
centerx=centerx2(yframe_ind);
centery=centery2(yframe_ind);
oldcell{1,1}=photo;        % the 1st (old) image
oldcell{2,1}=centerx;   %coordinates of center of old image
oldcell{3,1}=centery;
oldphoto=photo;
yframe_ind=frame_inds(indframe+1);
yframe=netframes(yframe_ind);
sframe=sprintf(′%d′,yframe);
shem2=[basefilename sframe postfix];
rawim=imread(shem2);
basicim=double(flipud(rawim))-1;
photo=uint8(basicim(bottom:roof,left:right));
centerx=centerx2(yframe_ind);
centery=centery2(yframe_ind);
newcell{1,1}=photo;         % the 1st (old) image
newcell{2,1}=centerx;   %coordinates of center of old image
newcell{3,1}=centery;
[oldselect,newselect]=ptspick7(oldcell,newcell,greatL,greatR,greatD,greatU);
  [rows,cols]=size(photo);
oldxsmall=oldselect{1,1};
oldysmall=oldselect{2,1};
oldvals =oldselect{3,1};
newxsmall=newselect{1,1};
newysmall=newselect{2,1};
newvals  =newselect{3,1};
chunkleng=length(oldvals);
chunk=sparse(  (zeros(chunkleng,imagewidth)  )′ );
XPIX=ones(chunkleng,1)*xpix;
OLDXSMALL=oldxsmall*ones(1,imagewidth);
IOLD=find(~(abs(XPIX-OLDXSMALL)′));
chunk(IOLD)=newvals;                % positive sign
NEWXSMALL=newxsmall*ones(1,imagewidth);
INEW=find(~(abs(XPIX-NEWXSMALL)′));
chunk(INEW)=-oldvals;               %minus sign
				
   chunk=sparse(chunk′);
   ILUTZIM=[ILUTZIM;chunk];

    figure(2);  hold off;     image(oldphoto);

    colormap(gray(256));axis(′image′,′off′,′xy′);

    set(gcf,′position′,[59 292 612 108]);

    set(gca,′position′,[0 0 1 1]);  hold on

    plot(oldxsmall,oldysmall,′r.′);axis xy

    axis on;grid on; set(gca,′xtick′,0:10:700);set(gca,′ytick′,7:10:250);

     figure(3); hold off; image(photo);

    colormap(gray(256));axis(′image′,′off′,′xy′);

    set(gcf,′position′,[59 100 612 108]);

    set(gca,′position′,[0 0 1 1]);  hold on;

    plot(newxsmall,newysmall,′r.′);axis xy

     axis on;grid on; set(gca,′xtick′,0:10:700);set(gca,′ytick′,7:10:250);
pause
end
toc
5555555
pause
ILUTZ=sparse(ILUTZIM);
whos ILUTZ
clear ILUTZIM
clear A oldphoto D U L R INEW IOLD Lshulaim NEWXSMALL NEWval NEWxsmall
clear NEWysmall OLDXSMALL OLDval OLDxsmall OLDysmall SE XPIX xpix
clear Xwide Ywide ans beseder beseder_ind bigcols bigrows bothbeseder
clear bottom chunk roof cols rows fracoctav frame_inds fusedcell
clear mx mxx netframes octavnum oldLshulaim oldbeseder oldcell
clear photo oldcell clipped fusedclipped saturated dilsaturated
clear netclipped beseder netfused LI LG centerx centery Dnew Dold
clear Unew Uold Lnew Lold Rnew Rold centerx2 centery2 oldcols oldrows
clear bigrows bigcols greatcenterx greatcentery ystep fusedcell
clear rawim basicim X big2D basefilename basephotoname SE picked_ind
clear centerx centery imagehight indframe postfix right roof spanx spany
clear rawname sframe shem2 tikunx tikuny yframe yframe_ind sun sunerror
clear tikunx tikuny tzamtzam valbeseder valint valold valsmall vec
clear xbeseder xbesederold xsmall ybeseder ybesederold ysmall ystep
close all
pack
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
ynegat=(ILUTZ<0);
yposit=(ILUTZ>0);
ynozer=(ynegat | yposit);
ymeasured=sum(ynozer);
meanmeasured=mean(ymeasured);
%saf= min([ 1+(meanmeasured/10), 5]);
saf=1;
reliable=find(~(ymeasured<saf));
%mnn=max([min(reliable)-1, 1]);
				
mnn= min(reliable)-1;
%mxx=min ([max(reliable)+1,imagewidth]);
mxx= max(reliable)+1;
%I=find(ymeasured);
%endmesures=min([max(I)+1,imagewidth]);
mnnozer=ynozer(:,1:mnn);
J=[ ];
if mnn>1
  J=find((sum(mnnozer′)) ′);
elseif mnn==1
  J=find(mnnozer);
end
NEWILUTZ=ILUTZ;
NEWILUTZ(J,:)=0;
mxnozer=ynozer(:,mxx:imagewidth);
J=[ ];
if    (mxx<imagewidth)
  J=find((sum(mxnozer′)) ′);
elseif  (imagewidth==mxx)
   J=find(mxnozer);
end
NEWILUTZ(J,:)=0;
ynegat=(NEWILUTZ<0);
yposit=(NEWILUTZ>0);
ynozer=(ynegat | yposit);
ymeasured=sum(ynozer);
yesmeasured=find(ymeasured>0);
mnn=min(yesmeasured);
mxx=max(yesmeasured);
oldimagewidth=imagewidth;
imagewidth=mxx-mnn+1;
NETILUTZ=NEWILUTZ(:,mnn:mxx);
clear I J NEWILUTZ mnnozer mxnozer reliable yesmeasured
clear ynegat ynozer yposit meanmeasured
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%imagewidth=oldimagewidth;
tic
amain=ones(imagewidth,1);
Amain=diag(amain);
aoff1=-0.5*ones(imagewidth-1,1);
aoff1(1)=-1;
Aoff1=diag(aoff1,1);
aoff_1=-0.5*ones(imagewidth-1,1);
aoff_1(imagewidth-1)=-1;
Aoff_1=diag(aoff_1,-1);
SM=Amain+Aoff1+Aoff_1;
				
SM(1,:)=0;
SM(imagewidth,:)=0;
clear Amain Aoff1 Aoff_1 aoff_1 aoff1 amain
sqlambda=sqrt(lambda);
HALAKUT=sqlambda*sparse(SM);
LOCENERG=NETILUTZ.^2;
%locenerg=sqrt(sum(LOCENERG′))′;
locenerg=(sum(LOCENERG′))′;
maxenerg=sqrt(max(locenerg(:)));
NORMILUTZ=NETILUTZ/235;
ALLILUTZ=[NORMILUTZ;HALAKUT];
clear HALAKUT SM LOCENERG NORMILUTZ NETILUTZ
pack
[U,S,V]=svd(full(ALLILUTZ),0);
s=diag(S);
[Y,I]=min(s)
clear S U
M=V(:,I);
[Mxx,I]=max(abs(M));
Imn=min(I);
mpeak=M(Imn);
m=M/mpeak;
m_amud=m;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%invC=(NORMILUTZ′*NORMILUTZ) + 0.5*eye(imagewidth);
invC=(ALLILUTZ′*ALLILUTZ) + 0.5*eye(imagewidth);
C=inv(invC);
%NORMILUTZ=NETILUTZ/235;
%Sr=m_amud′*((NORMILUTZ′*NORMILUTZ)*m_amud);
Sr=m_amud′*(invC*m_amud);
%LOCENERG=NORMILUTZ.^2;
LOCENERG=ALLILUTZ.^2;
%locenerg=sqrt(sum(LOCENERG′))′;
locenerg=(sum(LOCENERG′))′;
n_keilu=sum(locenerg(:));
dof=length(m_amud)-1;
syx=sqrt(Sr/(n_keilu-dof));
syx2=syx^2;
C_norm=C*syx2;
sig_m=sqrt(diag(C_norm));
figure(5)
plot(log10(sig_m(1:imagewidth)./m_amud(mnn:mxx)))
grid on
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
				
toc
clear V
mlog=log(abs(m));
%P=mlog′;
P=zeros(1,oldimagewidth);
P(mnn:mxx)=mlog;
leftmarg=1:mnn-1;
P(fliplr(leftmarg))=2*mlog(1)-mlog(leftmarg+1);
rightmarg=mxx+1:oldimagewidth;
oldrightmarg=rightmarg-oldimagewidth+imagewidth;
P(fliplr(rightmarg))=2*mlog(imagewidth)-mlog(oldrightmarg-1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
L=length(P);
Imain=ones(L,1);
if mnn>1
  Imain(1:mnn)=1;
end
if mxx<L
  Imain(mxx+1:L)=1;
end
Imat=sparse(diag(Imain));
amain=ones(L,1);
Amain=diag(amain);
aoff1=-0.5*ones(L-1,1);
aoff1(1)=-1;
Aoff1=diag(aoff1,1);
aoff_1=-0.5*ones(L-1,1);
aoff_1(L-1)=-1;
Aoff_1=diag(aoff_1,-1);
S=sparse(Amain+Aoff1+Aoff_1);
S(1,:)=0;
S(L,:)=0;
clear Amain Aoff1 Aoff_1 aoff_1 aoff1 amain
StS=S′*S;
clear S;
R=Imat + lambdalog*StS;
iR=inv(R);
Y=iR*P′;
Ynorm=Y-max(Y);
smoothm=exp(Ynorm′);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
				
				<dp n="d72"/>
mlogdelta=(sig_m(1:imagewidth)./m_amud(mnn:mxx));
%P=mlog′;
dP=zeros(1,oldimagewidth);
dP(mnn:mxx)=mlogdelta;
leftmarg=1:mnn-1;
dP(fliplr(leftmarg))=2*mlogdelta(1)-mlogdelta(leftmarg+1);
rightmarg=mxx+1:oldimagewidth;
oldrightmarg=rightmarg-oldimagewidth+imagewidth;
dP(fliplr(rightmarg))=2*mlogdelta(imagewidth)-mlogdelta(oldrightmarg-1);
dY=iR*dP′;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
longm=exp(P);
longdm=smoothm.*dY′; 
m_lemata=smoothm-longdm;
m_lemala=smoothm+longdm;
figure(1)
hold off
plot(longm,′b′)
hold on
plot(smoothm,′r′)
plot(m_lemata,′m′)
plot(m_lemala,′m′)
set(gca,′xlim′,[1,oldimagewidth]);
set(gca,′ylim′,[0 1]);
dlog2m=dY′/log(2);
log2m_lemata=log2(smoothm)-dlog2m;
log2m_lemala=log2(smoothm)+dlog2m;
figure(2)
hold off
plot(log2(abs(longm)),′b′)
hold on
plot(log2(smoothm),′r′)
plot(log2m_lemata,′m′)
plot(log2m_lemala,′m′)
set(gca,′xlim′,[1,oldimagewidth]);
m=smoothm;
delta_m=longdm;
save randecay37.mat xint m delta_m Ynorm smoothm C ILUTZ sig_m mnn mxx
load ../decayfun
origm=valint/max(valint);
figure(1)
hold on
plot(origm,′g′)
set(gca,′xlim′,[1,oldimagewidth]);
figure(2)
				
hold on
plot(log2(origm),′g′)
set(gca,′xlim′,[1,oldimagewidth]);
clear
close all
pack
width=101;
half_width=(width-1)/2;
load nfuseupdated
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
logfused=log(1+fused);
mx=max(logfused(:));
imx16=(2^16)/mx;
logfused16=uint16(round(logfused*imx16));
histlog=double(histeq(logfused16,256));
mn=min(histlog(:));
histlog=histlog-mn;
mx=max(histlog(:));
imx8=(2^8)/double(mx);
histlog8=uint8(round(double(histlog)*imx8));
[rows,cols]=size(logfused16);
L=1;R=cols;
D=1;U=rows;
figure(2);
hold off
imagesc(histlog8);
colormap(gray(256));axis(′image′,′off′,′xy′);
Xwide=cols/1.8;Ywide=round(Xwide*rows/cols);
set(gcf,′position′,[55 109 Xwide Ywide]);
set(gca,′position′,[0 0 1 1]);
vec=get(gcf,′position′);
Ywide=vec(3)*rows/cols;
set(gcf,′position′,[1 500 Xwide Ywide]);
clear deltaz2 logfused
bakasha=[′\n PRESS ANY KEY BETWEEN ORDERS′ ...
         ′\n Press the letter ″n″ for normalized histogram′ ...
         ′\n Press the letter ″e″ for normalized histogram′ ...
         ′\n Press the letters ″s″,″m″,″l″ for small/medium/large windows \n\n′];
button=0;
sprintf(bakasha)
pause
plotedimg=histlog8;
figure(2)
koteretl=sprintf('Click LEFT mouse to mark area. Window width is %d',width);
set(gcf,'name',koteretl)
while ~(button==113),    %'q' = button==113
   figure(2)
   [x,y,button]=ginput(1);
   hold on
   if (button==115)    %'s' = button==115
      width=41;
      koteretl=sprintf('Click LEFT mouse to mark area. Window width is %d',width);
      set(gcf,'name',koteretl)
   elseif (button==109)  %'m' = button==109
      width=75;
      koteretl=sprintf('Click LEFT mouse to mark area. Window width is %d',width);
      set(gcf,'name',koteretl)
   elseif (button==108)  %'l' = button==108
      width=101;
      koteretl=sprintf('Click LEFT mouse to mark area. Window width is %d',width);
      set(gcf,'name',koteretl)
   elseif (button==1) | (button==3)
      half_width=(width-1)/2;
      XL=round(x-half_width);XR=round(x+half_width);
      YD=round(y-half_width);YU=round(y+half_width);
      XLL=max(XL,L); XRR=min(XR,R);YDD=max(YD,D);YUU=min(YU,U);
      X=[XLL XLL XRR XRR];   Y=[YDD YUU YUU YDD];
      plotedimg=plotcontrs5(X,Y,plotedimg,histlog8);
      figure(2);
      hold off
      imagesc(plotedimg);
      colormap(gray(256)); axis('image','off','xy');
%Xwide=cols/1.8; Ywide=round(Xwide*rows/cols);
%set(gcf,'position',[55 109 Xwide Ywide]);
%set(gca,'position',[0 0 1 1]);
%vec=get(gcf,'position');
%Ywide=vec(3)*rows/cols;
%set(gcf,'position',[1 500 Xwide Ywide]);
      axis('image','off','xy');  set(gca,'position',[0 0 1 1]);
%      ribuax=[X, X(1)]; ribuay=[Y, Y(1)];
%      hold on;plot(ribuax,ribuay,'m-')
      [enhanced,subenhanced,mnval,mxval]=stretchwin5(X,Y,fused,histlog8);
      smnval=sprintf('%d',round(mnval))
      smxval=sprintf('%d',round(mxval))
      figure
      imagesc(subenhanced);
      colormap(gray(256))
      axis('image','off','xy');set(gca,'position',[0 0 1 1]);
      set(gcf,'position',[100 100 2*width 2*width]);
   end
   pause(0.5)
end
  %tofileimg=flipud(plotedimg);
  %to_mx=max(tofileimg(:));
  %tofileimg=double(tofileimg)/double(to_mx);
%imwrite(tofileimg,′plotedmos1.jpg′,′jpeg′,′Quality′,100);
function plotedimg=plotcontrs5(X,Y,oldplotimg,logphoto)
XS=sort(X);
YS=sort(Y);
L=round(XS(2));
R=round(XS(3));
D=round(YS(2));
U=round(YS(3));
%   LD  LU  RU  RD  LD
Xp=[L   L   R   R  L];
Yp=[D   U   U   D  D];
X0=(L+R)/2;
Y0=(D+U)/2;
spanx=size(oldplotimg,2);
spany=size(oldplotimg,1);
[LD_row,LD_col]=xy2ij_brack([L,D],spany);
[RD_row,RD_col]=xy2ij_brack([R,D],spany);
[LU_row,LU_col]=xy2ij_brack([L,U],spany);
[RU_row,RU_col]=xy2ij_brack([R,U],spany);
stam=flipud(logphoto);
smol=stam(LD_row:-1:LU_row ,LD_col);
newsmol=uint8(255*double(smol<128));
yamin=stam(RD_row:-1:RU_row,RD_col);
newyamin=uint8(255*double(yamin<128));
lemata=stam(RD_row ,LD_col:RD_col);
newlemata=uint8(255*double(lemata<128));
lemala=stam(RU_row ,LU_col:RU_col);
newlemala=uint8(255*double(lemala<128));
%whos newsmol newyamin newlemata newlemala
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
stam=flipud(oldplotimg);
stam( LD_row:-1:LU_row ,LD_col)=newsmol;
stam( RD_row:-1:RU_row ,RD_col)=newyamin;
stam( RD_row ,LD_col:RD_col)=newlemata;
stam( RU_row ,LU_col:RU_col)=newlemala;
plotedimg=flipud(stam);
function prof=profil2(cols,transit)
mn=eps^7;
tmp=ones(1,cols);
L=1:transit;
R=cols-L+1;
%tmp(L)=mn+ (1-mn)*(L-1)/(transit-1);
%tmp(R)=mn+ (1-mn)*(L-1)/(transit-1);
tmp(L)=mn+(1-mn)*( 1 - cos(pi*(L-1)/(transit-1)) )/2;
tmp(R)=mn+(1-mn)*( 1 - cos(pi*(L-1)/(transit-1)) )/2;
prof=tmp;
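The raised-cosine edge profile built by profil2 above (near-zero at both image borders, 1.0 in the interior, Hann-shaped ramps of length `transit`) can be sketched in Python. This is an illustrative translation, not part of the patent listing; the function name and default `mn` are ours:

```python
import math

def feather_profile(cols, transit, mn=1e-12):
    """Weight profile: ~0 at both edges, 1 in the middle, ramping
    over `transit` samples with a raised-cosine (Hann) shape,
    mirroring the tmp(L)/tmp(R) assignments in profil2."""
    prof = [1.0] * cols
    for k in range(transit):                      # k = 0 .. transit-1
        w = mn + (1 - mn) * (1 - math.cos(math.pi * k / (transit - 1))) / 2
        prof[k] = w                               # left ramp
        prof[cols - 1 - k] = w                    # mirrored right ramp
    return prof
```

The inverse square of this profile is later used as a multiplicative penalty on the per-pixel variance (FETHER2), so pixels near frame borders carry almost no weight in the fusion.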
function [oldselect,newselect]=ptspick6(oldcell,newcell,greatL,greatR,greatD,greatU)
sun=240;
%sun=200;
%sun=235;
dark=2;
kamut=50;
segments=20;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%   DECODING THE CELLS INTO THEIR CONSTITUENTS
Zold=oldcell{1,1};     % the 1st (old) image
oldcenterx=oldcell{2,1};%coordinates of center of old image
oldcentery=oldcell{3,1};
[oldrows,oldcols]=size(Zold);
Lold=oldcenterx-(oldcols-1)/2;      %the  Left side of the old image
Rold=oldcenterx+(oldcols-1)/2;      % the Right side
Dold=oldcentery-(oldrows-1)/2;      % the Down  of the old image
Uold=oldcentery+(oldrows-1)/2;      % the Upper side of the old image
Znew=newcell{1,1};          % the 2nd (new) image
newcenterx=newcell{2,1};    %coordinates of center of new image
newcentery=newcell{3,1};
[newrows,newcols]=size(Znew);
Lnew=newcenterx-(newcols-1)/2;      % the Left side of the new image
Rnew=newcenterx+(newcols-1)/2;      % the Right side
Dnew=newcentery-(newrows-1)/2;      % the Down of the new image
Unew=newcentery+(newrows-1)/2;      % the Upper side of the new image
spanx=greatR-greatL+1;
spany=greatU-greatD+1;
greatX=ones(spany,1)*(greatL:greatR);       %the coordinate grid of the combined-greater image
greatY=(greatD:greatU)'*ones(1,spanx);
%%%%%%% PUTTING THE OLD IMAGE IN THE COMBINED GREAT-GRID
[oldLDy,oldLDx]=find ((greatX==Lold) & (greatY==Dold));
[oldLUy,oldLUx]=find ((greatX==Lold) & (greatY==Uold));
[oldRDy,oldRDx]=find ((greatX==Rold) & (greatY==Dold));
[oldRUy,oldRUx]=find ((greatX==Rold) & (greatY==Uold));
[oldLD_row,oldLD_col]=xy2ij_brack([oldLDx,oldLDy],spany);
[oldRD_row,oldRD_col]=xy2ij_brack([oldRDx,oldRDy],spany);
[oldLU_row,oldLU_col]=xy2ij_brack([oldLUx,oldLUy],spany);
[oldRU_row,oldRU_col]=xy2ij_brack([oldRUx,oldRUy],spany);
Zoldgreat=NaN*zeros(size(greatX));
Zoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
Zoldgreat=flipud(Zoldgreat);
%%%%%%%%% PUTTING THE NEW IMAGE IN THE COMBINED GREAT-GRID
[newLDy,newLDx]=find ((greatX==Lnew) & (greatY==Dnew));
[newLUy,newLUx]=find ((greatX==Lnew) & (greatY==Unew));
[newRDy,newRDx]=find ((greatX==Rnew) & (greatY==Dnew));
[newRUy,newRUx]=find ((greatX==Rnew) & (greatY==Unew));
[newLD_row,newLD_col]=xy2ij_brack([newLDx,newLDy],spany);
[newRD_row,newRD_col]=xy2ij_brack([newRDx,newRDy],spany);
[newLU_row,newLU_col]=xy2ij_brack([newLUx,newLUy],spany);
[newRU_row,newRU_col]=xy2ij_brack([newRUx,newRUy],spany);
Znewgreat=NaN*zeros(size(greatX));
Znewgreat(  newLD_row:-1:newLU_row,newLD_col:newRD_col )=Znew;
Znewgreat=flipud(Znewgreat);
%%%%%%%%%%%%%%% DIVIDING TO PARTS
oldLOnan= (~isnan(Zoldgreat));
newLOnan= (~isnan(Znewgreat));
LOnan= (oldLOnan & newLOnan);
oversegm=max(LOnan);
I=find(oversegm);
shortL=min(I);
shortR=max(I);
deltax=(shortR-shortL)/segments;
%%%%%%%%%%%%%%%% SELECTING GOOD POINTS
minlevel=dark;
maxlevel=sun;
oldinrange=(  (Zoldgreat>minlevel)  &  (~(Zoldgreat>maxlevel))  );
newinrange=(  (Znewgreat>minlevel)  &  (~(Znewgreat>maxlevel))  );
inrange= ( (oldinrange & newinrange)   & LOnan);
[rows,cols]=size(inrange);
greatCOLS=ones(rows,1)*(1:cols);
%figure(1);
%hold off
%imagesc(inrange);
%colormap(gray(256)); axis(′image′,′off′,′xy′);
%Xwide=cols/1.8;Ywide=round(Xwide*rows/cols);
%set(gcf,′position′,[1 500 Xwide Ywide]);
%set(gca,′position′,[0 0 1 1]);
%vec=get(gcf,′position′);
%Ywide=vec(3)*rows/cols;
%set(gcf,′position′, [1 500 Xwide Ywide]);
%pause
goodpts=[ ];
for ypart=1:segments,

   Lsegm= shortL + round(deltax*(ypart-1));

   Rsegm= shortL + round(deltax*(ypart));
%    insegment= (  (~(greatX < Lsegm)) & (~(greatX > Rsegm))  );

   insegment= (  (~(greatCOLS < Lsegm)) & (~(greatCOLS > Rsegm))  );

   beseder= (inrange & insegment);
%figure(1);
%hold off
%imagesc(beseder);
%colormap(gray(256));axis(′image′,′off′,′xy′);
%[rows,cols]=size(inrange);
%Xwide=cols/1.8; Ywide=round(Xwide*rows/cols);
%set(gcf,′position′,[1 500 Xwide Ywide]);
%set(gca,′position′,[0 0 1 1]);
%vec=get(gcf,′position′);
%Ywide=vec(3)*rows/cols;
%set(gcf,′position′, [1 500 Xwide Ywide]);
%pause
   inrange_ind=find(beseder);
   lenrange=length(inrange_ind);
   if lenrange<kamut

    goodpts=[goodpts;inrange_ind];
   else

    linerand=1+round((lenrange-1)*(rand(kamut,1)));

    picked_ind=inrange_ind(linerand);

    goodpts=[goodpts;picked_ind];
   end
end
%%%%%%%%% THE POINT COORDS & VALS
x=greatX(goodpts);
y=greatY(goodpts);
oldxsmall=x-Lold+1;
oldysmall=y-Dold+1;
newxsmall=x-Lnew+1;
newysmall=y-Dnew+1;
oldvals=Zoldgreat(goodpts);
newvals=Znewgreat(goodpts);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
oldselect{1,1}=oldxsmall;
oldselect{2,1}=oldysmall;
oldselect{3,1}=oldvals;
newselect{1,1}=newxsmall;
newselect{2,1}=newysmall;
newselect{3,1}=newvals;
function [oldgreatcell,newgreatcell]=putbigml3(oldcell,newcell)
%
% oldcell and newcell contain images to put in a single
% coordinate system, and the vector-coordinates of the
% image centers in (x,y).
% oldgreatcell and newgreatcell are the cells of the new large
% frame with a common coordinate system, in each of which
% the single images are embedded. These cells also contain the
% coordinates of the center of the big frames.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%   DECODING THE CELLS INTO THEIR CONSTITUENTS
Zold    =oldcell{1,1};      % the 1st (old) image
DELTAZ2old=oldcell{2,1};
oldcenterx=oldcell{3,1};  %coordinates of center of old image
oldcentery=oldcell{4,1};
Znew=newcell{1,1};        % the 2nd (new) image
DELTAZ2new=newcell{2,1};
newcenterx=newcell{3,1};
newcentery=newcell{4,1};
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[oldrows,oldcols]=size(Zold);
Lold=oldcenterx-(oldcols-1)/2;      % the Left side of the old image
Rold=oldcenterx+(oldcols-1)/2;      % the Right side
Dold=oldcentery-(oldrows-1)/2;      % the Down  of the old image
Uold=oldcentery+(oldrows-1)/2;      % the Upper side of the old image
[newrows,newcols]=size(Znew);
Lnew=newcenterx-(newcols-1)/2;      %the Left side of the new (2nd) image
Rnew=newcenterx+(newcols-1)/2;
Dnew=newcentery-(newrows-1)/2;
Unew=newcentery+(newrows-1)/2;
greatL=min([Lold,Lnew]);           %the Left side of the combined-greater image
greatR=max([Rold,Rnew]);
greatD=min([Dold,Dnew]);
greatU=max([Uold,Unew]);
spanx=greatR-greatL+1;
spany=greatU-greatD+1;
greatX=ones(spany,1)*(greatL:greatR);      %the coordinate grid of the combined-greater image
greatY=(greatD:greatU)'*ones(1,spanx);
%%%%%%% PUTTING THE OLD IMAGE IN THE COMBINED GREAT-GRID
[oldLDy,oldLDx]=find ((greatX==Lold) & (greatY==Dold));
[oldLUy,oldLUx]=find ((greatX==Lold) & (greatY==Uold));
[oldRDy,oldRDx]=find ((greatX==Rold) & (greatY==Dold));
[oldRUy,oldRUx]=find ((greatX==Rold) & (greatY==Uold));
[oldLD_row,oldLD_col]=xy2ij_brack([oldLDx,oldLDy],spany);
[oldRD_row,oldRD_col]=xy2ij_brack([oldRDx,oldRDy],spany);
[oldLU_row,oldLU_col]=xy2ij_brack([oldLUx,oldLUy],spany);
[oldRU_row,oldRU_col]=xy2ij_brack([oldRUx,oldRUy],spany);
Zoldgreat=NaN*zeros(size(greatX));
Zoldgreat(  oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col )=Zold;
Zoldgreat=flipud(Zoldgreat);
DELTAZoldgreat=NaN*zeros(size(greatX));
DELTAZoldgreat( oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col )=DELTAZ2old;
DELTAZoldgreat=flipud(DELTAZoldgreat);
%%%%%%%%% PUTTING THE NEW IMAGE IN THE COMBINED GREAT-GRID
[newLDy,newLDx]=find ((greatX==Lnew) & (greatY==Dnew));
[newLUy,newLUx]=find ((greatX==Lnew) & (greatY==Unew));
[newRDy,newRDx]=find ((greatX==Rnew) & (greatY==Dnew));
[newRUy,newRUx]=find ((greatX==Rnew) & (greatY==Unew));
[newLD_row,newLD_col]=xy2ij_brack([newLDx,newLDy],spany);
[newRD_row,newRD_col]=xy2ij_brack([newRDx,newRDy],spany);
[newLU_row,newLU_col]=xy2ij_brack([newLUx,newLUy],spany);
[newRU_row,newRU_col]=xy2ij_brack([newRUx,newRUy],spany);
Znewgreat=NaN*zeros(size(greatX));
Znewgreat(  newLD_row:-1:newLU_row,newLD_col:newRD_col )=Znew;
Znewgreat=flipud(Znewgreat);
DELTAZnewgreat=NaN*zeros(size(greatX));
DELTAZnewgreat(  newLD_row:-1:newLU_row,newLD_col:newRD_col )=DELTAZ2new;
DELTAZnewgreat=flipud(DELTAZnewgreat);
%%%%%%%%%%%%%%%%%% THE CENTRAL COORDINATE OF THE (WARPED) GREAT-COMBINED IMAGE
greatcenterx=(greatR+greatL)/2;
greatcentery=(greatU+greatD)/2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% PUTTING THE OUTPUT TO CELLS,FOR EASIER TRANSFER
oldgreatcell{1,1}=Zoldgreat;
oldgreatcell{2,1}=DELTAZoldgreat;
oldgreatcell{3,1}=greatcenterx;
oldgreatcell{4,1}=greatcentery;
newgreatcell{1,1}=Znewgreat;
newgreatcell{2,1}=DELTAZnewgreat;
newgreatcell{3,1}=greatcenterx;
newgreatcell{4,1}=greatcentery;
function [oldgreatcell,newgreatcell]=puthullmll(oldcell,newcell)
%
% oldcell and newcell contain images to put in a single
% coordinate system, and the vector-coordinates of the
% image centers in (x,y).
% oldgreatcell and newgreatcell are the cells of the new large
% frame with a common coordinate system, in each of which
% the single images are embedded. These cells also contain the
% coordinates of the center of the big frames.
ymargin=100;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%   DECODING THE CELLS INTO THEIR CONSTITUENTS
Zold   =oldcell{1,1};      % the 1st (old) image
DELTAZ2old=oldcell{2,1};
oldcenterx=oldcell{3,1};%coordinates of center of old image
oldcentery=oldcell{4,1};
Znew=newcell{1,1};      % the 2nd (new) image
DELTAZ2new=newcell{2,1};
newcenterx=newcell{3,1};
newcentery=newcell{4,1};
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[oldrows,oldcols]=size(Zold);
Lold=ceil(oldcenterx-(oldcols-1)/2)-0.5;       % the Left side of the old image
Rold=ceil(oldcenterx+(oldcols-1)/2)-0.5;       % the Right side
Dold=ceil(oldcentery-(oldrows-1)/2)-0.5;       % the Down of the old image
Uold=ceil(oldcentery+(oldrows-1)/2)-0.5;       % the Upper side of the old image
[newrows,newcols]=size(Znew);
Lnew=ceil(newcenterx-(newcols-1)/2)-0.5;       % the Left side of the new (2nd) image
Rnew=ceil(newcenterx+(newcols-1)/2)-0.5; 
Dnew=ceil(newcentery-(newrows-1)/2)-0.5;
Unew=ceil(newcentery+(newrows-1)/2)-0.5;
greatL=min([Lold,Lnew]);         %the Left side of the combined-greater image
greatR=max([Rold,Rnew]);
greatD=min([Dold,Dnew]);
greatU=max([Uold,Unew]);
spanx=greatR-greatL+1;
spany=greatU-greatD+1;
greatX=ones(spany,1)*(greatL:greatR);      %the coordinate grid of the combined-greater image
greatY=(greatD:greatU)'*ones(1,spanx);
%%%%%%% PUTTING THE OLD IMAGE IN THE COMBINED GREAT-GRID
[oldLDy,oldLDx]=find ((greatX==Lold) & (greatY==Dold));
[oldLUy,oldLUx]=find ((greatX==Lold) & (greatY==Uold));
[oldRDy,oldRDx]=find ((greatX==Rold) & (greatY==Dold));
[oldRUy,oldRUx]=find ((greatX==Rold) & (greatY==Uold));
[oldLD_row,oldLD_col]=xy2ij_brack([oldLDx,oldLDy],spany);
[oldRD_row,oldRD_col]=xy2ij_brack([oldRDx,oldRDy],spany);
[oldLU_row,oldLU_col]=xy2ij_brack([oldLUx,oldLUy],spany);
[oldRU_row,oldRU_col]=xy2ij_brack([oldRUx,oldRUy],spany);
Zoldgreat=NaN*zeros(size(greatX));
Zoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
DELTAZoldgreat=NaN*zeros(size(greatX));
DELTAZoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=DELTAZ2old;
%%%%%%%%%  PUTTING THE NEW IMAGE IN THE COMBINED GREAT-GRID
[newLDy,newLDx]=find ((greatX==Lnew) & (greatY==Dnew));
[newLUy,newLUx]=find ((greatX==Lnew) & (greatY==Unew));
[newRDy,newRDx]=find ((greatX==Rnew) & (greatY==Dnew));
[newRUy,newRUx]=find ((greatX==Rnew) & (greatY==Unew));
[newLD_row,newLD_col]=xy2ij_brack([newLDx,newLDy],spany);
[newRD_row,newRD_col]=xy2ij_brack([newRDx,newRDy],spany);
[newLU_row,newLU_col]=xy2ij_brack([newLUx,newLUy],spany);
[newRU_row,newRU_col]=xy2ij_brack([newRUx,newRUy],spany);
Znewgreat=NaN*zeros(size(greatX));
Znewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=Znew;
DELTAZnewgreat=NaN*zeros(size(greatX));
DELTAZnewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=DELTAZ2new;
%%%%%%%%%%%%%%%%%%%% %%%%%%%%  DEFINING HULL
Lnew_margin=Lnew-ymargin;
Rnew_margin=Rnew+ymargin;
Dnew_margin=Dnew-ymargin;
Unew_margin=Unew+ymargin;
hullL=min([max([Lold,Lnew_margin]),Lnew]);        %the Left side of the combined-greater image
hullR=max([min([Rold,Rnew_margin]),Rnew]);
hullD=min([max([Dold,Dnew_margin]),Dnew]);
hullU=max([min([Uold,Unew_margin]),Unew]);
[hullLDy,hullLDx]=find ((greatX==hullL) & (greatY==hullD));
[hullLUy,hullLUx]=find ((greatX==hullL) & (greatY==hullU));
[hullRDy,hullRDx]=find ((greatX==hullR) & (greatY==hullD));
[hullRUy,hullRUx]=find ((greatX==hullR) & (greatY==hullU));
[hullLD_row, hullLD_col]=xy2ij_brack([hullLDx,hullLDy],spany);
[hullRD_row,hullRD_col]=xy2ij_brack([hullRDx,hullRDy],spany);
[hullLU_row,hullLU_col]=xy2ij_brack([hullLUx,hullLUy],spany);
[hullRU_row,hullRU_col]=xy2ij_brack([hullRUx,hullRUy],spany);
Zoldhull=Zoldgreat( hullLD_row:-1:hullLU_row,hullLD_col:hullRD_col);
Znewhull=Znewgreat( hullLD_row:-1:hullLU_row,hullLD_col:hullRD_col);
DELTAZ2oldhull=DELTAZoldgreat( hullLD_row:-1:hullLU_row ,hullLD_col:hullRD_col);
DELTAZ2newhull=DELTAZnewgreat( hullLD_row:-1:hullLU_row ,hullLD_col:hullRD_col);
%%%%%%%%%%%%%%%%%% THE CENTRAL COORDINATE OF THE (WARPED) GREAT-COMBINED IMAGE
%figure(1)
%imagesc(Zoldhull);axis xy
%figure(2)
%imagesc(Znewhull);axis xy
%Zoldhull=flipud(Zoldhull);
%Znewhull=flipud(Znewhull);
%figure(3)
%imagesc(Zoldhull);axis xy;colormap(gray(256))
%figure(4)
%imagesc(Znewhull);axis xy;colormap(gray(256))
%size(Zoldhull)
hullcenterx=(hullR+hullL)/2;
hullcentery=(hullU+hullD)/2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% PUTTING THE OUTPUT TO CELLS,FOR EASIER TRANSFER
oldgreatcell{1,1}=Zoldhull;
oldgreatcell{2,1}=DELTAZ2oldhull;
oldgreatcell{3,1}=hullcenterx;
oldgreatcell{4,1}=hullcentery;
newgreatcell{1,1}=Znewhull;
newgreatcell{2,1}=DELTAZ2newhull;
newgreatcell{3,1}=hullcenterx;
newgreatcell{4,1}=hullcentery;
function [oldgreatcell,newgreatcell]=putinbig6(oldcell,newcell)
%
% oldcell and newcell contain images to put in a single
% coordinate system, and the vector-coordinates of the
% image centers in (x,y).
% oldgreatcell and newgreatcell are the cells of the new large
% frame with a common coordinate system, in each of which
% the single images are embedded. These cells also contain the
% coordinates of the center of the big frames.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%   DECODING THE CELLS INTO THEIR CONSTITUENTS
Zold=oldcell{1,1};      % the 1st (old) image
oldcenterx=oldcell{2,1};%coordinates of center of old image
oldcentery=oldcell{3,1};
Znew=newcell{1,1};      % the 2nd (new) image
newcenterx=newcell{2,1};
newcentery=newcell{3,1};
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[oldrows,oldcols]=size(Zold);
Lold=oldcenterx-(oldcols-1)/2;      %the  Left side of the old image
Rold=oldcenterx+(oldcols-1)/2;      % the Right side
Dold=oldcentery-(oldrows-1)/2;      % the Down of the old image
Uold=oldcentery+(oldrows-1)/2;      % the Upper side of the old image
[newrows,newcols]=size(Znew);
Lnew=newcenterx-(newcols-1)/2;      %the Left side of the new (2nd) image
Rnew=newcenterx+(newcols-1)/2;
Dnew=newcentery-(newrows-1)/2;
Unew=newcentery+(newrows-1)/2;
greatL=min([Lold,Lnew]);           %the Left side of the combined-greater image
greatR=max([Rold,Rnew]);
greatD=min([Dold,Dnew]);
greatU=max([Uold,Unew]);
spanx=greatR-greatL+1;
spany=greatU-greatD+1;
greatX=ones(spany,1)*(greatL:greatR);%the coordinate grid of the combined-greater image
greatY=(greatD:greatU)'*ones(1,spanx);
%%%%%%% PUTTING THE OLD IMAGE IN THE COMBINED GREAT-GRID
[oldLDy,oldLDx]=find ((greatX==Lold) & (greatY==Dold));
[oldLUy,oldLUx]=find ((greatX==Lold) & (greatY==Uold));
[oldRDy,oldRDx]=find ((greatX==Rold) & (greatY==Dold));
[oldRUy,oldRUx]=find ((greatX==Rold) & (greatY==Uold));
[oldLD_row,oldLD_col]=xy2ij_brack([oldLDx,oldLDy],spany);
[oldRD_row,oldRD_col]=xy2ij_brack([oldRDx,oldRDy],spany);
[oldLU_row,oldLU_col]=xy2ij_brack([oldLUx,oldLUy],spany);
[oldRU_row,oldRU_col]=xy2ij_brack([oldRUx,oldRUy],spany);
Zoldgreat=NaN*zeros(size(greatX));
Zoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
Zoldgreat=flipud(Zoldgreat);
%%%%%%%%% PUTTING THE NEW IMAGE IN THE COMBINED GREAT-GRID
[newLDy,newLDx]=find ((greatX==Lnew) & (greatY==Dnew));
[newLUy,newLUx]=find ((greatX==Lnew) & (greatY==Unew));
[newRDy,newRDx]=find ((greatX==Rnew) & (greatY==Dnew));
[newRUy,newRUx]=find ((greatX==Rnew) & (greatY==Unew));
[newLD_row,newLD_col]=xy2ij_brack([newLDx,newLDy],spany);
[newRD_row,newRD_col]=xy2ij_brack([newRDx,newRDy],spany);
[newLU_row,newLU_col]=xy2ij_brack([newLUx,newLUy],spany);
[newRU_row,newRU_col]=xy2ij_brack([newRUx,newRUy],spany);
Znewgreat=NaN*zeros(size(greatX));
Znewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=Znew;
Znewgreat=flipud(Znewgreat);
%%%%%%%%%%%%%%%%%% THE CENTRAL COORDINATE OF THE (WARPED) GREAT-COMBINED IMAGE
greatcenterx=(greatR+greatL)/2;
greatcentery=(greatU+greatD)/2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% PUTTING THE OUTPUT TO CELLS,FOR EASIER TRANSFER
oldgreatcell{1,1}=Zoldgreat;
oldgreatcell{2,1}=greatcenterx;
oldgreatcell{3,1}=greatcentery;
newgreatcell{1,1}=Znewgreat;
newgreatcell{2,1}=greatcenterx;
newgreatcell{3,1}=greatcentery;
function [small,smalldelta2]=reduceml21(big,bigdelta2,shape)
gs=zeros(5,1);
a=0.4;
gs(3)=a;
gs(2)=0.25;gs(4)=gs(2);
gs(1)=0.25-a/2;gs(5)=gs(1);
%invsig=1./sqrt(bigdelta2);
invsig2=1./bigdelta2;
weightedbig=big.*invsig2;
blurnominat=conv2(conv2(weightedbig,gs,shape),gs',shape);     %convolution in one line
blurdenomin=conv2(conv2(invsig2,gs,shape),gs',shape);
samples_row=1:2:size(blurnominat,1);          %sample grid in one line
samples_col=1:2:size(blurnominat,2);          %sample grid in one line
smallnominat=blurnominat(samples_row,samples_col);%sampling in one line
smalldenomin=blurdenomin(samples_row,samples_col);%sampling in one line
small=smallnominat./smalldenomin;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
newsig2=1./smalldenomin;
%smalldelta2=newsig.^2;
smalldelta2=newsig2;
function [small,smalldelta2]=reduceml21_x(big,bigdelta2,shape)
gs=zeros(5,1);
a=0.4;
gs(3)=a;
gs(2)=0.25; gs(4)=gs(2);
gs(1)=0.25-a/2;gs(5)=gs(1);
%invsig=l./sqrt(bigdelta2);
invsig2=1./bigdelta2;
weightedbig=big.*invsig2;
blurnominat=conv2(weightedbig,gs,shape);      %convolution in one line
blurdenomin=conv2(invsig2,gs,shape);
samples_col=1:2:size(blurnominat,2);      %sample grid in one line
smallnominat=blurnominat(:,samples_col);      %sampling in one line
smalldenomin=blurdenomin(:,samples_col);      %sampling in one line
small=smallnominat./smalldenomin;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
newsig2=1./smalldenomin;
%smalldelta2=newsig.^2;
smalldelta2=newsig2;
function [small,smalldelta2]=reduceml21_y(big,bigdelta2,shape)
gs=zeros(5,1);
a=0.4;
gs(3)=a;
gs(2)=0.25;gs(4)=gs(2);
gs(1)=0.25-a/2;gs(5)=gs(1);
%invsig=1./sqrt(bigdelta2);
invsig2=1./bigdelta2;
weightedbig=big.*invsig2;
blurnominat=conv2(weightedbig,gs',shape);     %convolution in one line
blurdenomin=conv2(invsig2,gs',shape);
samples_row=1:2:size(blurnominat,1);       %sample grid in one line
smallnominat=blurnominat(samples_row,:);       %sampling in one line
smalldenomin=blurdenomin(samples_row,:);       %sampling in one line
small=smallnominat./smalldenomin;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
newsig2=1./smalldenomin;
%smalldelta2=newsig.^2;
smalldelta2=newsig2;
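The reduceml21 variants above all follow one scheme: weight each pixel by its inverse variance, blur value-times-weight and weight with the 5-tap kernel gs, subsample every second pixel, and take the ratio, so reliable pixels dominate the reduced pyramid level. A 1-D Python sketch of that step (an illustration under our naming, not a transcription of the MATLAB):

```python
def reduce_ml_1d(values, var, kernel=(0.05, 0.25, 0.4, 0.25, 0.05)):
    """One maximum-likelihood pyramid-reduction step in 1-D:
    convolve value*weight and weight (weight = 1/variance) with a
    5-tap kernel, subsample by 2, and return the weighted mean and
    the variance of the reduced samples (1/summed-weight)."""
    n, half = len(values), len(kernel) // 2
    w = [1.0 / v for v in var]                 # inverse-variance weights
    num = [0.0] * n                            # convolved value*weight
    den = [0.0] * n                            # convolved weight
    for i in range(n):
        for j, g in enumerate(kernel):
            k = i + j - half
            if 0 <= k < n:                     # 'same'-shape, zero-padded
                num[i] += g * values[k] * w[k]
                den[i] += g * w[k]
    small = [num[i] / den[i] for i in range(0, n, 2)]
    small_var = [1.0 / den[i] for i in range(0, n, 2)]
    return small, small_var
```

The default kernel equals gs with a=0.4: [0.25-a/2, 0.25, a, 0.25, 0.25-a/2], which sums to 1 so constant regions are preserved.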
function Fethsatur=satursmooth7(X,uppersun,lowersun)
SE=ones(3);
mn=eps;
saturated=uint8(X>uppersun);
maybesatur=uint8(X>lowersun);
s=max(saturated(:));
if s==0
   Fethsatur=ones(size(X));
else
   shulaim=uint8(1);
   oldregion=uint8(saturated);
   %k=0;
   while ~(shulaim==0)

     %k=k+1

     dilsaturated=(dilate(oldregion,SE));

     newregion=(dilsaturated & maybesatur);

     added= newregion & (~(oldregion));

     shulaim=max(added(:));

     oldregion=newregion;
  end
   tmp=ones(size(X));
   fact=(1-mn)/(uppersun-(lowersun-1));
   for level=lowersun:uppersun,
     badpixels=((X==level) & newregion);
     badvalue=mn+(uppersun-level)*fact;
     lobeseder=find(badpixels);
     if ~isempty(lobeseder)
        tmp(lobeseder)=badvalue;
     end
   end
   lobeseder=find(saturated);
   tmp(lobeseder)=mn;
   Fethsatur=tmp;
end
function [enhanced,subenhanced,mnval,mxval]=stretchwin5(X,Y,photo,logphoto)
XS=sort(X);
YS=sort(Y);
L=round(XS(2));
R=round(XS(3));
D=round(YS(2));
U=round(YS(3));
%   LD  LU  RU  RD LD
Xp=[L  L   R   R  L];
Yp=[D  U   U   D  D];
X0=(L+R)/2;
Y0=(D+U)/2;
spanx=size(photo,2);
spany=size(photo,1);
[LD_row,LD_col]=xy2ij_brack([L,D],spany);
[RD_row,RD_col]=xy2ij_brack([R,D],spany);
[LU_row,LU_col]=xy2ij_brack([L,U],spany);
[RU_row,RU_col]=xy2ij_brack([R,U],spany);
stam=flipud(photo);
subarea=stam(LD_row:-1:LU_row,LD_col:RD_col);
mn=min(subarea(:));
lemata=subarea-mn;
mx=max(lemata(:));
imx8=(2^8)/mx;
subenhanced=uint8(round(lemata*imx8));
mn=min(logphoto(:));
lemata=double(logphoto)-double(mn);
mx=max(lemata(:));
imx8=(2^8)/mx;
stam=flipud(lemata*imx8);
stam(LD_row:-1:LU_row,LD_col:RD_col)=subenhanced;
enhanced=flipud(stam);
mnval=min(subarea(:));
mxval=max(subarea(:));
function [beseder,xbeseder,ybeseder,valbeseder]=subpts2l(oldcell,greatL,greatR,greatD,greatU,goodpts)
%sun=240;
%sun=200;
sun=235;
dark=2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%   DECODING THE CELLS INTO THEIR CONSTITUENTS
Zold=oldcell{1,1};     % the 1st (old) image
oldcenterx=oldcell{2,1};%coordinates of center of old image
oldcentery=oldcell{3,1};
[oldrows,oldcols]=size(Zold);
Lold=oldcenterx-(oldcols-1)/2;     %the Left side of the old image
Rold=oldcenterx+(oldcols-1)/2;     % the Right side
Dold=oldcentery-(oldrows-1)/2;     % the Down of the old image
Uold=oldcentery+(oldrows-1)/2;     % the Upper side of the old image
spanx=greatR-greatL+1;
spany=greatU-greatD+1;
greatX=ones(spany,1)*(greatL:greatR);   %the coordinate grid of the combined-greater image
greatY=(greatD:greatU)'*ones(1,spanx);
%%%%%%%  PUTTING THE OLD IMAGE IN THE COMBINED GREAT-GRID
[oldLDy,oldLDx]=find ((greatX==Lold) & (greatY==Dold));
[oldLUy,oldLUx]=find ((greatX==Lold) & (greatY==Uold));
[oldRDy,oldRDx]=find ((greatX==Rold) & (greatY==Dold));
[oldRUy,oldRUx]=find ((greatX==Rold) & (greatY==Uold));
[oldLD_row,oldLD_col]=xy2ij_brack([oldLDx,oldLDy],spany);
[oldRD_row,oldRD_col]=xy2ij_brack([oldRDx,oldRDy],spany);
[oldLU_row,oldLU_col]=xy2ij_brack([oldLUx,oldLUy],spany);
[oldRU_row,oldRU_col]=xy2ij_brack([oldRUx,oldRUy],spany);
Zoldgreat=NaN*zeros(size(greatX));
Zoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
Zoldgreat=flipud(Zoldgreat); 
%%%%%%%%% PUTTING THE NEW IMAGE IN THE COMBINED GREAT-GRID
vals=Zoldgreat(goodpts);
LOnan    =(~isnan(vals));
LObright =(vals<sun);
LOdark   =(~(vals<dark));
beseder=  ((LOnan & LObright) & LOdark);
%beseder= (LOnan);
beseder_ind=find(beseder);
besederpts=goodpts(beseder_ind);
x=greatX(besederpts);
y=greatY(besederpts);
xsmall=x-Lold+1;
ysmall=y-Dold+1;
xbeseder=NaN*ones(size(goodpts));
xbeseder(beseder_ind)=xsmall;
ybeseder=NaN*ones(size(goodpts));
ybeseder(beseder_ind)=ysmall;
valbeseder=NaN*ones(size(goodpts));
valbeseder(beseder_ind)=Zoldgreat(besederpts);
%spanx=Rold-Lold+1;
%spany=Uold-Dold+1;
%greatX=ones(spany,1)*(greatL:greatR);      %the coordinate grid of the combined-greater image
%greatY=(greatD:greatU)'*ones(1,spanx);
clear
close all
pack
%load ../decayfun
%clear m delta_m
load decay53337pairs
clear ILUTZ C Ynorm sig_m smoothm mnn mxx
%xint m delta_m Ynorm smoothm C ILUTZ sig_m mnn mxx
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
fuzzmarge=45;
transit=80;
uppersun=250;
lowersun=205;
levels_x=2;
levels_y=0;
minlevel=0;
Lshulaim=0;
left=min(xint)+Lshulaim;
right=max(xint);
bottom=170;
roof=355;
imagehight=roof-bottom+1;
SE=ones(3,5);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
cols=right-left+1;
prof=profil2(cols,transit);
inv_prof2=(1./prof).^2;
FETHER2=ones(imagehight,1)*inv_prof2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
oldlength=length(m);
m=m(1+Lshulaim:oldlength);
delta_m=delta_m(1+Lshulaim:oldlength);
sig=1;
invm=1./m;
invm_square=invm.^2;
invm_four  =invm.^4;
a=(sig.^2).*invm_square;
b=(delta_m.^2).*invm_four;
invM=ones(imagehight,1)*invm;
A=   ones(imagehight,1)*a;
B=   ones(imagehight,1)*b;
load ../coordml10.mat     %%%% LOADING THE COORDINATES
basefilename='../gradfilt1_';
postfix='.GIF';
basephotoname='photo';
rawname='basicim';
netframes=[1:9, 11:37];
centerx1=0;  %this is always
centery1=0;  %this is always
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MERGING IMAGES IN A BIG FRAME
stam=round(rand(1));
ystep=(2*stam)-1;
if ystep==-1
   frame_inds=36:ystep:2;
elseif ystep==1
   frame_inds=1:ystep:35;
end
%ystep=-1;
%frame_inds=36:ystep:2;
k=1;
for indframe=1:length(frame_inds),
%tic
   yframe_ind=frame_inds(indframe);
   if k==1

    yframe1=netframes(yframe_ind);

    sframe1=sprintf('%d',yframe1);

    shem1=[basefilename sframe1 postfix];
    rawim=imread(shem1);

    basicim=double(flipud(rawim))-1;

    X=basicim(bottom:roof,left:right);

    Ya=X.*invM;

    Fethsatur=satursmooth7(X,uppersun,lowersun);

    IFETHSATUR2=(1./Fethsatur).^2;

    DELTAYa2=(A+ ((X.^2).*B)).*IFETHSATUR2;

    DELTAYa2=DELTAYa2.*FETHER2;

    Acell{1,1}=Ya;

    Acell{2,1}=DELTAYa2;

    Acell{3,1}=centerx2(yframe_ind);

    Acell{4,1}=centery2(yframe_ind);

  else

    Acell=fusedcell;

  end
   yframe2=netframes(yframe_ind + ystep);
   sframe2=sprintf('%d',yframe2);
   shem2=[basefilename sframe2 postfix];
   rawim=imread(shem2);
   basicim=double(flipud(rawim))-1;
   X=basicim(bottom:roof,left:right);
   Yb=X.*invM;
   Fethsatur=satursmooth7(X,uppersun,lowersun);
   IFETHSATUR2=(1./Fethsatur).^2;
   DELTAYb2=(A+ ((X.^2).*B)).*IFETHSATUR2;
   DELTAYb2=DELTAYb2.*FETHER2;
   clear rawim basicim X Fethsatur IFETHSATUR2
   Bcell{1,1}=Yb;
   Bcell{2,1}=DELTAYb2;
   Bcell{3,1}=centerx2(yframe_ind+ystep);
   Bcell{4,1}=centery2(yframe_ind+ystep);
   fusedcell=fuseml22(Acell,Bcell);

  k=k+1;

  clear Acell Bcell
%toc
end
fused=double(fusedcell{1,1});
deltaz2=fusedcell{2,1};
greatcenterx=fusedcell{3,1};
greatcentery=fusedcell{4,1};
save nfuseupdated fused deltaz2 greatcenterx greatcentery
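The loop above carries a variance map (DELTAYa2/DELTAYb2) alongside each radiance estimate into fuseml22. The core operation such maximum-likelihood fusion rests on is the inverse-variance combination of two overlapping measurements; a generic Python sketch (not a transcription of fuseml22, which handles whole images and NaN-masked non-overlap regions):

```python
def fuse_two(ya, var_a, yb, var_b):
    """Inverse-variance (maximum-likelihood) fusion of two estimates
    of the same quantity; returns the fused value and its variance."""
    wa, wb = 1.0 / var_a, 1.0 / var_b          # precision weights
    fused = (wa * ya + wb * yb) / (wa + wb)    # weighted mean
    return fused, 1.0 / (wa + wb)              # fused variance shrinks
```

A saturated or feathered-edge pixel, whose variance was inflated by IFETHSATUR2 or FETHER2, thus contributes almost nothing where a well-exposed measurement of the same scene point exists.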
clear fusedcell
clear saturated logfused valint tikunx tikuny invM invm invm_square
clear invm_square dilsaturated flatcurve flatfun centery2 centerx2
clear a b Ya Yb DELTAYb2 DELTAYa2 A B
max(fused(:))
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
load nfuseupdated
logfused=log(1+fused);
mx=max(logfused(:));
%imx8=(2^8)/mx;
imx16=(2^16)/mx;
logfused16=uint16(round(logfused*imx16));
histlog=histeq(logfused16,256);
mx=max(histlog(:));
%imx8=(2^8)/double(mx);
imx16=(2^16)/double(mx);
histlog16=uint16(round(double(histlog)*imx16));
[rows,cols]=size(logfused16);
figure(2);
hold off
imagesc(histlog16);
colormap(gray(256));axis('image','off','xy');
Xwide=cols/1.8;Ywide=round(Xwide*rows/cols);
set(gcf,'position',[55 109 Xwide Ywide]);
set(gca,'position',[0 0 1 1]);
vec=get(gcf,'position');
Ywide=vec(3)*rows/cols;
set(gcf,'position',[1 300 Xwide Ywide]);
% [I,J]=xy2ij_brack(XYcoord,N);
% Converts x-y coordinates to ij coordinates:
%   I is the row-number of the element,
%   J is the column-number of the element
%   N is the number of rows in the matrix.
function [I,J]=xy2ij_brack(XYcoord,N);
X=XYcoord(:,1);
Y=XYcoord(:,2);
%index in row-number
I=N-Y+1;
%index in cols
J=X;
%IJcoord=[I,J];
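xy2ij_brack, used throughout the listings above, converts between the bottom-up (x, y) convention of the scripts and MATLAB's top-down (row, column) indexing. In Python terms the mapping is one line (this sketch keeps the 1-based convention of the MATLAB code; a 0-based port would use `n_rows - 1 - y`):

```python
def xy_to_ij(x, y, n_rows):
    """Convert 1-based (x, y), with y increasing upward, to 1-based
    (row, col) with rows counted from the top, as xy2ij_brack does."""
    return n_rows - y + 1, x
```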
Source code for forming the polarization mosaic
clear
close all
pack
load fusecomponl
[ROWS,COLS]=size(Lfused);
I0 =reshape(Lfused,ROWS*COLS,1);
I45=reshape(Mfused,ROWS*COLS,1);
I90=reshape(Rfused,ROWS*COLS,1);
F=[I0,I45,I90]';
alfa=[0,45,90]'*pi/180;
M=[ones(3,1),cos(2*alfa),sin(2*alfa)];
IM=inv(M);
Param=IM*F;
%%%%%%%%%%%%%%%%%%%%%%
%%%%%% finding the net size
%%%%%%%%%%%%%
bigC= Param(1,:);
bigC=reshape(bigC,ROWS,COLS);
stamC=sum(double(uint8(bigC)));
a=find(stamC>0);
l=min(a);
r=max(a);
stamC=sum(double(uint8(bigC)),2);
a=find(stamC>0);
d=min(a);
u=max(a);
lfused=Lfused(d:u,l:r);
mfused=Mfused(d:u,l:r);
rfused=Rfused(d:u,l:r);
[rows,cols]=size(lfused);
clear bigC Lfused Mfused Rfused a stamC
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
I0 =reshape(lfused,rows*cols,1);
I45=reshape(mfused,rows*cols,1);
I90=reshape(rfused,rows*cols,1);
F=[I0,I45,I90]';
alfa=[0,45,90]'*pi/180;
M=[ones(3,1),cos(2*alfa),sin(2*alfa)];
IM=inv(M);
Param=IM*F;
				
C= Param(1,:);
Ac=Param(2,:);
As=Param(3,:);
clear Param
%%%%%%%%%%%%%%%%%%%
%%%%%%  A positive,ambiguous theta
%%%%%%%%%%%%%%%%%%%
Twotheta=atan(As./Ac).*(~(Ac==0))-pi/2*((Ac==0).*(As<0))+pi/2*((Ac==0).*(~(As<0)));
twotheta=Twotheta -((Ac<0).*(As<0)*pi) +((Ac<0).*(~(As<0))*pi);
theta=(twotheta/2);
A=sqrt(Ac.^2 + As.^2);
theta=theta*180/pi;
%C=reshape(C,ROWS,COLS);
%A=reshape(A,ROWS,COLs);
%THETA=reshape(theta,ROWS,COLS);
C=reshape(C,rows,cols);
A=reshape(A,rows,cols);
THETA=reshape(theta,rows,cols);
clear Twotheta twotheta theta
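The block above inverts the three-polarizer model I(alpha) = C + Ac*cos(2*alpha) + As*sin(2*alpha), sampled at alpha = 0°, 45°, 90°, and then converts (Ac, As) into the modulation amplitude A and polarization angle theta with explicit quadrant bookkeeping. A NumPy re-expression of the same recovery, for illustration only (the function and variable names here are ours, not from the listing, and `arctan2` replaces the manual quadrant handling):

```python
import numpy as np

def decode_polarization(I0, I45, I90):
    """Recover per-pixel C, A, theta from readings through linear
    polarizers at 0, 45 and 90 degrees, assuming
    I(alpha) = C + Ac*cos(2*alpha) + As*sin(2*alpha)."""
    alpha = np.deg2rad([0.0, 45.0, 90.0])
    M = np.stack([np.ones(3), np.cos(2 * alpha), np.sin(2 * alpha)], axis=1)
    F = np.stack([np.ravel(I0), np.ravel(I45), np.ravel(I90)])   # 3 x Npixels
    C, Ac, As = np.linalg.inv(M) @ F
    A = np.hypot(Ac, As)                 # modulation amplitude
    theta = 0.5 * np.arctan2(As, Ac)     # polarization angle, radians
    shape = np.shape(I0)
    return C.reshape(shape), A.reshape(shape), theta.reshape(shape)
```

For these three particular angles the inversion reduces to C = (I0 + I90)/2, Ac = (I0 - I90)/2, As = I45 - C.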
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
ROWS=rows;
COLS=cols;
figure(1);
hold off
imagesc(C);
axis('image','off','xy');
%Xwide=COLS/0.7;
Xwide=COLS/0.9;
Ywide=round(Xwide*ROWS/COLS);
set(gcf,'position',[55 109 Xwide Ywide]);
set(gca,'position',[0 0 1 1]);
vec=get(gcf,'position');
Ywide=vec(3)*ROWS/COLS;
set(gcf,'position',[10 300 Xwide Ywide]);
colormap(gray(256))
AA=(C-A);
b=AA.*(AA>0);
figure(2);
hold off
imagesc(b);
axis('image','off','xy');
%Xwide=COLS/0.7;
Xwide=COLS/0.9;
Ywide=round(Xwide*ROWS/COLS);
set(gcf,'position',[55 109 Xwide Ywide]);
set(gca,'position',[0 0 1 1]);
vec=get(gcf,'position');
Ywide=vec(3)*ROWS/COLS;
set(gcf,'position',[10 300 Xwide Ywide]);
colormap(gray(256))
				
figure(3);
hold off
imagesc(A);
axis('image','off','xy');
%Xwide=COLS/0.7;
Xwide=COLS/0.9;
Ywide=round(Xwide*ROWS/COLS);
set(gcf,'position',[55 109 Xwide Ywide]);
set(gca,'position',[0 0 1 1]);
vec=get(gcf,'position');
Ywide=vec(3)*ROWS/COLS;
set(gcf,'position',[10 300 Xwide Ywide]);
colormap(gray(256))
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear A Ac As C F I0 I45 I90 THETA AA b
clear
close all
pack
load polarresl
[ROWS,COLS]=size(C);
c=reshape(C,1,ROWS*COLS);
a=reshape(A,1,ROWS*COLS);
ac=[c;a];
boundeda=min(ac);
boundedA=reshape(boundeda,ROWS,COLS);
figure(1);
hold off
imagesc(C);
imagesc(A);
axis('image','off','xy');
%Xwide=COLS/0.7;
Xwide=COLS/0.9;
Ywide=round(Xwide*ROWS/COLS);
set(gcf,'position',[55 109 Xwide Ywide]);
set(gca,'position',[0 0 1 1]);
vec=get(gcf,'position');
Ywide=vec(3)*ROWS/COLS;
set(gcf,'position',[10 300 Xwide Ywide]);
colormap(gray(256))
figure(3+yiter);
hold off
				
axis('image','off','xy');
%Xwide=COLS/0.7;
Xwide=COLS/0.9;
Ywide=round(Xwide*ROWS/COLS);
set(gcf,'position',[55 109 Xwide Ywide]);
set(gca,'position',[0 0 1 1]);
vec=get(gcf,'position');
Ywide=vec(3)*ROWS/COLS;
set(gcf,'position',[10 300 Xwide Ywide]);
colormap(gray(256))
AA=(C-A);
b=AA.*(AA>0);
figure(4+yiter);
hold off
imagesc(b);
axis('image','off','xy');
%Xwide=COLS/0.7;
Xwide=COLS/0.9;
Ywide=round(Xwide*ROWS/COLS);
set(gcf,'position',[55 109 Xwide Ywide]);
set(gca,'position',[0 0 1 1]);
vec=get(gcf,'position');
Ywide=vec(3)*ROWS/COLS;
set(gcf,'position',[10 300 Xwide Ywide]);
colormap(gray(256))
clear
close all
pack
load flatparams.mat
clear fused8 xlims ylims cparams
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
transit=40;
uppersun=250;
lowersun=205;
left=97;
right=600;
bottom=200;
roof=385;
imagehight=roof-bottom+1;
imagewidth=right-left+1;
LeftspaceL=135;
LeftspaceR=185;
RightspaceL=315;
RightspaceR=365;
Leftspacewidth= LeftspaceR-LeftspaceL+1;
Rightspacewidth=RightspaceR-RightspaceL+1;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
sig=0.5;
invM=flatfun(bottom:roof,left:right);
SIG2=(sig.^2);
cols=right-left+1;
				
prof=profil2(cols,transit);
inv_prof2=(1./prof).^2;
FETHER2=ones(imagehight,1)*inv_prof2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
load coordpoll.mat   %%%% LOADING THE COORDINATES
basefilename='stepol_';
postfix='.GIF';
basephotoname='photo';
rawname='basicim';
netframes=[1:15];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MERGING IMAGES IN A BIG FRAME
stam=1;
ystep=(2*stam)-1;
frame_inds=1:ystep:(length(netframes)-1);
k=1;
for indframe=1:length(frame_inds),
%tic
   yframe_ind=frame_inds(indframe)
   if k==1

    yframe1=netframes(yframe_ind);
    sframe1=sprintf('%d',yframe1);
    shem1=[basefilename sframe1 '_1' postfix];
    shem2=[basefilename sframe1 '_2' postfix];
    shem3=[basefilename sframe1 '_3' postfix];
    rawim1=imread(shem1);
    rawim2=imread(shem2);
    rawim3=imread(shem3);
    rawim=(double(flipud(rawim1))-1) + (double(flipud(rawim2))-1) + (double(flipud(rawim3))-1);
    basicim=(rawim/3);
    Za=basicim.*flatfun;
    Za=Za(bottom:roof,left:right);
    AL=NaN*ones(size(Za));AM=NaN*ones(size(Za));AR=NaN*ones(size(Za));
    DELTAAL2=NaN*ones(size(Za));DELTAAM2=NaN*ones(size(Za));DELTAAR2=NaN*ones(size(Za));
    AL(:,1:LeftspaceL-1)=Za(:,1:LeftspaceL-1);
    AM(:,LeftspaceR+1:RightspaceL-1)=Za(:,LeftspaceR+1:RightspaceL-1);
    AR(:,RightspaceR+1:imagewidth)=Za(:,RightspaceR+1:imagewidth);
    X=basicim(bottom:roof,left:right);
    Fethsatur=satursmooth7(X,uppersun,lowersun);
    IFETHSATUR2=(1./Fethsatur).^2;
    DELTAZa2=(SIG2*(invM.^2)).*IFETHSATUR2;
%       DELTAYa2=DELTAYa2.*FETHER2;
%       DELTAZa2=(SIG2*(invM.^2)).*(~dilsaturated) + sunerror.*dilsaturated;
    DELTAAL2(:,1:LeftspaceL-1)=DELTAZa2(:,1:LeftspaceL-1);
    DELTAAM2(:,LeftspaceR+1:RightspaceL-1)=DELTAZa2(:,LeftspaceR+1:RightspaceL-1);
    DELTAAR2(:,RightspaceR+1:imagewidth)=DELTAZa2(:,RightspaceR+1:imagewidth);
    ALcell{1,1}=AL;
    ALcell{2,1}=DELTAAL2;
    ALcell{3,1}=centerx2(yframe_ind);
    ALcell{4,1}=centery2(yframe_ind);
    AMcell=ALcell;
    AMcell{1,1}=AM;
    AMcell{2,1}=DELTAAM2;
    ARcell=ALcell;
    ARcell{1,1}=AR;
    ARcell{2,1}=DELTAAR2;
   else

   ALcell=Lfusedcell;
   AMcell=Mfusedcell;

   ARcell=Rfusedcell;
   end
   yframe2=netframes(yframe_ind+ystep);
   sframe1=sprintf('%d',yframe2);
   shem1=[basefilename sframe1 '_1' postfix];
   shem2=[basefilename sframe1 '_2' postfix];
   shem3=[basefilename sframe1 '_3' postfix];
   rawim1=imread(shem1);
   rawim2=imread(shem2);
   rawim3=imread(shem3);
   rawim=(double(flipud(rawim1))-1) + (double(flipud(rawim2))-1) + (double(flipud(rawim3))-1);
   basicim=(rawim/3);
   Zb=basicim.*flatfun;
   Zb=Zb(bottom:roof,left:right);
   BL=NaN*ones(size(Zb));BM=NaN*ones(size(Zb));BR=NaN*ones(size(Zb));
   DELTABL2=NaN*ones(size(Zb));DELTABM2=NaN*ones(size(Zb));  DELTABR2=NaN*ones(size(Zb));
   BL(:,1:LeftspaceL-1)=Zb(:,1:LeftspaceL-1);
   BM(:,LeftspaceR+1:RightspaceL-1)=Zb(:,LeftspaceR+1:RightspaceL-1);
   BR(:,RightspaceR+1:imagewidth)=Zb(:,RightspaceR+1:imagewidth);
   X=basicim(bottom:roof,left:right);
   Fethsatur=satursmooth7(X,uppersun,lowersun);
   IFETHSATUR2=(1./Fethsatur).^2;
   DELTAZb2=(SIG2*(invM.^2)).*IFETHSATUR2;
%   DELTAZb2=(SIG2*(invM.^2)).*(~dilsaturated) + sunerror.*dilsaturated;
%  DELTAZb2=DELTAZb2.*FETHER2;
   DELTABL2(:,1:LeftspaceL-1)=DELTAZb2(:,1:LeftspaceL-1);
   DELTABM2(:,LeftspaceR+1:RightspaceL-1)=DELTAZb2(:,LeftspaceR+1:RightspaceL-1);
   DELTABR2(:,RightspaceR+1:imagewidth)=DELTAZb2(:,RightspaceR+1:imagewidth);
   clear rawim basicim Fethsatur IFETHSATUR2
				
  BLcell{1,1}=BL;
  BLcell{2,1}=DELTABL2;
  BLcell{3,1}=centerx2(yframe_ind+ystep);
  BLcell{4,1}=centery2(yframe_ind+ystep);
  BMcell=BLcell;
  BMcell{1,1}=BM;
  BMcell{2,1}=DELTABM2;
  BRcell=BLcell;
  BRcell{1,1}=BR;
  BRcell{2,1}=DELTABR2;
  Lfusedcell=fuseml22(ALcell,BLcell);
  Mfusedcell=fuseml22 (AMcell,BMcell);
  Rfusedcell=fuseml22(ARcell,BRcell);
   k=k+1;
   clear ARcell BRcell AGcell BGcell ABcell BBcell
%toc
end
Lfused=double(Lfusedcell{1,1});
Mfused=double(Mfusedcell{1,1});
Rfused=double(Rfusedcell{1,1});
greatcenterx=Lfusedcell{3,1};
greatcentery=Lfusedcell{4,1};
save fusecomponl Lfused Mfused Rfused greatcenterx greatcentery
clear Rfusedcell Gfusedcell Bfusedcell
clear saturated logfused valint tikunx tikuny invM invm invm_square
clear invm_square dilsaturated flatcurve centery2 centerx2
clear a b Ya Yb DELTAYb2 DELTAYa2 A B FETHER2 AB AG AR BB BG BR
clear basefilename basephotoname greatcenterx  greatcentery imagehight
clear indframe inv_prof2 k lambda_pos left netframes postfix prof
clear rawname right roof sframe1 sframe2 shem1 shem2 stam transit
clear yframe1 yframe2 yframe_ind ystep
%clear bX CAMERESP invCAMERESP invCAMRESframe X
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
yGAMMA=0.5;
load fusecomponl
[ROWS,COLS]=size(Lfused);
fused(1:ROWS,1:COLS,1)=Lfused;
fused(1:ROWS,1:COLS,2)=Mfused;
fused(1:ROWS,1:COLS,3)=Rfused;
mx=max(fused(:));
fused=fused/mx;
oldHSV=rgb2hsv(fused);
luminance=oldHSV(:,:,3);
mn=min(luminance(:));mx=max(luminance(:));
newlumin=imadjust(luminance,[mn mx],[0 1],yGAMMA);
newHSV=oldHSV;
				
newHSV(:,:,3)=newlumin;
newfused=hsv2rgb(newHSV);
mx=max(newfused(:));
newfused=(newfused/mx).*(~(newfused<0));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
figure(1);
hold off
imagesc(newfused);
axis('image','off','xy');
%Xwide=COLS/0.7;
Xwide=COLS/0.9;
Ywide=round(Xwide*ROWS/COLS);
set(gcf,'position',[55 109 Xwide Ywide]);
set(gca,'position',[0 0 1 1]);
vec=get(gcf,'position');
Ywide=vec(3)*ROWS/COLS;
set(gcf,'position',[10 300 Xwide Ywide]);
newfused(:,:,1)=flipud(newfused(:,:,1));
newfused(:,:,2)=flipud(newfused(:,:,2));
newfused(:,:,3)=flipud(newfused(:,:,3));
%imwrite(newfused,'rawmosNOccd2.jpg','jpeg','quality',100);
function fusedcell=fuseml22(oldcell,newcell)
%
% oldcell and newcell contain images to put in a single
% coordinate system,and the vector-coordinates of the
% image centers in (x,y).
% fusedcell is the cell of the updated mosaic image
% k is the number of images already fused
%
% merging='average' - The overlap area is an average of
%      all the images that exist in it, and the program
%      makes all weights be the same.
% merging='avdecay' - The overlap area is the average of the
%      old image or mosaic, with the new image. So, older
%      images in the old mosaic decay exponentially in time.
% merging='split' - the overlap is cut by half - each side
%      comes from a different image.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%   DECODING THE CELLS INTO THEIR CONSTITUENTS
Zold    =oldcell{1,1};     % the 1st (old) image
DELTAZ2old=oldcell{2,1};
oldcenterx=oldcell{3,1};   %coordinates of center of old image
oldcentery=oldcell{4,1};
Znew=newcell{1,1};         % the 2nd (new) image
DELTAZ2new=newcell{2,1};
				
newcenterx=newcell{3,1};
newcentery=newcell{4,1};
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[oldrows,oldcols]=size(Zold);
Lold=oldcenterx-(oldcols-1)/2;     %the  Left side of the old image
Rold=oldcenterx+(oldcols-1)/2;     % the Right side
Dold=oldcentery-(oldrows-1)/2;     % the Down of the old image
Uold=oldcentery+(oldrows-1)/2;     % the Upper side of the old image
[newrows,newcols]=size(Znew);
Lnew=newcenterx-(newcols-1)/2;     %the Left side of the new (2nd) image
Rnew=newcenterx+(newcols-1)/2;
Dnew=newcentery-(newrows-1)/2;
Unew=newcentery+(newrows-1)/2;
greatL=min([Lold,Lnew]);          %the Left side of the combined-greater image
greatR=max([Rold,Rnew]);
greatD=min([Dold,Dnew]);
greatU=max([Uold,Unew]);
spanx=greatR-greatL+1;
spany=greatU-greatD+1;
greatX=ones(spany,1)*(greatL:greatR);      %the coordinate grid of the combined-greater image
greatY=(greatD:greatU)'*ones(1,spanx);
%%%%%%% PUTTING THE OLD IMAGE IN THE COMBINED GREAT-GRID
[oldLDy,oldLDx]=find((greatX==Lold) & (greatY==Dold));
[oldLUy,oldLUx]=find((greatX==Lold) & (greatY==Uold));
[oldRDy,oldRDx]=find((greatX==Rold) & (greatY==Dold));
[oldRUy,oldRUx]=find((greatX==Rold) & (greatY==Uold));
[oldLD_row,oldLD_col]=xy2ij_brack([oldLDx,oldLDy],spany);
[oldRD_row,oldRD_col]=xy2ij_brack([oldRDx,oldRDy],spany);
[oldLU_row,oldLU_col]=xy2ij_brack([oldLUx,oldLUy],spany);
[oldRU_row,oldRU_col]=xy2ij_brack([oldRUx,oldRUy],spany);
Zoldgreat=NaN*zeros(size(greatX));
Zoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
DELTAZoldgreat=NaN*zeros(size(greatX));
DELTAZoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=DELTAZ2old;
%%%%%%%%% PUTTING THE NEW IMAGE IN THE COMBINED GREAT-GRID
[newLDy,newLDx]=find((greatX==Lnew) & (greatY==Dnew));
[newLUy,newLUx]=find((greatX==Lnew) & (greatY==Unew));
[newRDy,newRDx]=find((greatX==Rnew) & (greatY==Dnew));
[newRUy,newRUx]=find((greatX==Rnew) & (greatY==Unew));
[newLD_row,newLD_col]=xy2ij_brack([newLDx,newLDy],spany);
[newRD_row,newRD_col]=xy2ij_brack([newRDx,newRDy],spany);
[newLU_row,newLU_col]=xy2ij_brack([newLUx,newLUy],spany);
[newRU_row,newRU_col]=xy2ij_brack([newRUx,newRUy],spany);
Znewgreat=NaN*zeros(size(greatX));
Znewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=Znew;
				
DELTAZnewgreat=NaN*zeros(size(greatX));
DELTAZnewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=DELTAZ2new;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%% TAKING CARE OF THE OVERLAP AREA
nanold=uint8(isnan(Zoldgreat));
nannew=uint8(isnan(Znewgreat));
fused=NaN*zeros(size(greatX));
%fused(newLD_row:-1:newLU_row,newLD_col:newRD_col)=Znew;
%fused(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
besdernew=find(~nannew);
fused(besdernew)=Znewgreat(besdernew);
besderold=find(~nanold);
fused(besderold)=Zoldgreat(besderold);
DELTA2fused=NaN*zeros(size(greatX));
%DELTA2fused(newLD_row:-1:newLU_row,newLD_col:newRD_col)=DELTAZ2new;
%DELTA2fused(oldLD_row:-1:oldLU_row,oldLD_coi:oldRD_col)=DELTAZ2old;
DELTA2fused(besdernew)=DELTAZnewgreat(besdernew);
DELTA2fused(besderold)=DELTAZoldgreat(besderold);
%Sigmaoldgreat=sqrt(DELTAZoldgreat);
%Sigmanewgreat=sqrt(DELTAZnewgreat);
Sigma2oldgreat=DELTAZoldgreat;
Sigma2newgreat=DELTAZnewgreat;
nangreat=(nanold | nannew);
E=~nangreat;
F=find(E);
if ~isempty(F)
       oldneto=NaN*ones(size(Zoldgreat));
       newneto=NaN*ones(size(Znewgreat));
       oldsig2neto=oldneto;
       newsig2neto=newneto;
       oldneto(F)=Zoldgreat(F);
       newneto(F)=Znewgreat(F);
       oldsig2neto(F)=Sigma2oldgreat(F);
       newsig2neto(F)=Sigma2newgreat(F);
       %size(oldneto(F))
       %size(newneto(F))
       %size(oldsig2neto(F)')
       %size(newsig2neto(F)')
       nominat=(oldneto(F)./oldsig2neto(F)') + (newneto(F)./newsig2neto(F)');
       denomin=(1./oldsig2neto(F)') + (1./newsig2neto(F)');
       sig2seam=1./denomin;
       seamvalue=sig2seam.*nominat;
       fused(F)=seamvalue;
       %DELTA2fused(F)=sigseam.^2;
       DELTA2fused(F)=sig2seam;
end
fused=flipud(fused);
				
DELTA2fused=flipud(DELTA2fused);
%%%%%%%%%% THE CENTRAL COORDINATE OF THE (WARPED) GREAT-COMBINED IMAGE
greatcenterx=(greatR+greatL)/2;
greatcentery=(greatU+greatD)/2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% PUTTING THE OUTPUT TO CELLS,FOR EASIER TRANSFER
fusedcell{1,1}=fused;
fusedcell{2,1}=DELTA2fused;
fusedcell{3,1}=greatcenterx;
fusedcell{4,1}=greatcentery;
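fuseml22 above fills each output pixel from whichever frame covers it, and in the overlap forms the inverse-variance (maximum-likelihood) combination (Z_old/sig2_old + Z_new/sig2_new)/(1/sig2_old + 1/sig2_new), whose fused variance is 1/(1/sig2_old + 1/sig2_new). A minimal NumPy sketch of that fusion rule, with illustrative names of our own choosing:

```python
import numpy as np

def fuse_ml(z_old, var_old, z_new, var_new):
    """Inverse-variance (maximum-likelihood) fusion of two registered
    images on a common grid. NaN marks pixels a frame does not cover."""
    fused = np.where(np.isnan(z_old), z_new, z_old)   # non-overlap: take whichever exists
    var = np.where(np.isnan(z_old), var_new, var_old)
    both = ~np.isnan(z_old) & ~np.isnan(z_new)        # overlap region
    w = 1.0 / var_old[both] + 1.0 / var_new[both]     # summed inverse variances
    fused[both] = (z_old[both] / var_old[both] + z_new[both] / var_new[both]) / w
    var[both] = 1.0 / w                               # fused variance shrinks in overlap
    return fused, var
```

With equal variances the overlap reduces to a plain average; a noisier frame is weighted down in proportion to its variance.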
function [small,delta2small]=mizoorml21(big,delta2big,levels,shape);
newA=big;
newdelta2A=delta2big;
for iteration=1:levels,

  [reducedA,reducedelta2A]=reduceml21(newA,newdelta2A,shape);

  newA=reducedA;

  newdelta2A=reducedelta2A;
end
small=newA;
delta2small=newdelta2A;
function [small,delta2small]=mizoorml21_xy(big,delta2big,levels_x,levels_y,shape);
newA=big;
newdelta2A=delta2big;
if ~(levels_x==levels_y)

   for iteration=1:min([levels_x,levels_y]),

       [reducedA_x,reducedelta2A_x]=reduceml21_x(newA,newdelta2A,shape);

       [reducedA,reducedelta2A]=reduceml21_y(reducedA_x,reducedelta2A_x,shape);

              newA=reducedA;

       newdelta2A=reducedelta2A;

   end

   [a,maindirection]=max([levels_x,levels_y]);

   if maindirection==1     %%% x-direction is the main movement uncertainty
          for iteration=levels_y+1:levels_x,
          [reducedA,reducedelta2A]=reduceml21_x(newA,newdelta2A,shape);
              newA=reducedA;
          newdelta2A=reducedelta2A;
        end
     else                  %%% y-direction is the main movement uncertainty
        for iteration=levels_x+1:levels_y,

        [reducedA,reducedelta2A]=reduceml21_y(newA,newdelta2A,shape);

            newA=reducedA;

        newdelta2A=reducedelta2A;

     end
				
  end
 else

  levels=levels_x;

  for iteration=1:levels,

     [reducedA_x,reducedelta2A_x]=reduceml21_x(newA,newdelta2A,shape);

     [reducedA,reducedelta2A]=reduceml21_y(reducedA_x,reducedelta2A_x,shape);

            newA=reducedA;

     newdelta2A=reducedelta2A;

   end
end
small=newA;
delta2small=newdelta2A;
clear
pack
levels_x=3;
levels_y=0;
minlevel=0;
yepsilon=eps;
left=97;
right=600;
bottom=200;
roof=385;
imagehight=roof-bottom+1;
imagewidth=right-left+1;
LeftspaceL=135;
LeftspaceR=185;
RightspaceL=315;
RightspaceR=365;
Leftspacewidth=LeftspaceR-LeftspaceL+1;
Rightspacewidth=RightspaceR-RightspaceL+1;
ystep=1;
%centerx2guess=100*ystep;    %initializing the disparity
centerx2guess=115*ystep;    %initializing the disparity
centery2guess=0;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
basefilename=′stepol_′;
postfix=′.GIF′;
basephotoname=′photo′;
rawname=′basicim′;
netframes=[1:15];
load flatparams.mat
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%spec=maspect2(basefilename,netframes,left,right,bottom,roof);
sig=0.5;
invM=flatfun(bottom:roof,left:right);
				
  SIG2=(sig.^2);
  sun=250;
  sunerror=1/eps;
  SE=ones(3,5);
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  %h=fspecial(′prewitt′);
  centerx2=zeros(1,length (netframes));
  centery2=zeros(1,length(netframes));
  tikunx=zeros(1,length(netframes));
  tikuny=zeros(1,length(netframes));
  frame_inds=1:ystep:(length(netframes)-1);
  k=1;
  for indframe=1:length(frame_inds-1),

  %%%%%%%%%%%%%%%%%%%%%%%%%%%% READING AND PREPARING THE RAW IMAGES

  yframe_ind=frame_inds(indframe);
tic

  if k==1

     yframe1=netframes(yframe_ind);
     sframe1=sprintf('%d',yframe1);
     shem1=[basefilename sframe1 '_1' postfix];
     shem2=[basefilename sframe1 '_2' postfix];
     shem3=[basefilename sframe1 '_3' postfix];
     rawim1=imread(shem1);
     rawim2=imread(shem2);
     rawim3=imread(shem3);
     rawim=(double(flipud(rawim1))-1) + (double(flipud(rawim2))-1) + (double(flipud(rawim3))-1);
     basicim=(rawim/3);
     Za=basicim.*flatfun;
     Za=Za(bottom:roof,left:right);
     X=basicim(bottom:roof,left:right);
     saturated=(X>sun);
     dilsaturated=double(dilate(saturated,SE));
     DELTAZa2=(SIG2*(invM.^2)).*(~dilsaturated) + sunerror.*dilsaturated;
     DELTAZa2(:,LeftspaceL :LeftspaceR)=sunerror*ones(imagehight,Leftspacewidth);
     DELTAZa2(:,RightspaceL:RightspaceR)=sunerror*ones(imagehight,Rightspacewidth);
     centerx1=0;centery1=0;
     holdcell{1,1}=Za;
     holdcell{2,1}=DELTAZa2;
     holdcell{3,1}=centerx1;
     holdcell{4,1}=centery1;
     fusedcell=holdcell;

  else

     holdcell=fusedcell;

  end

  yframe2=netframes(yframe_ind+ystep)

  sframe1=sprintf(′%d′,yframe2);
				
  shem1=[basefilename sframe1 '_1' postfix];
  shem2=[basefilename sframe1 '_2' postfix];
  shem3=[basefilename sframe1 '_3' postfix];
  rawim1=imread(shem1);
  rawim2=imread(shem2);
  rawim3=imread(shem3);
  rawim=(double(flipud(rawim1))-1) + (double(flipud(rawim2))-1) + (double(flipud(rawim3))-1);
  basicim=(rawim/3);
  Zb=basicim.*flatfun;
  Zb=Zb(bottom:roof,left:right);
  X=basicim(bottom:roof,left:right);
  saturated=(X>sun);
  dilsaturated=double(dilate(saturated,SE));
  DELTAZb2=(SIG2*(invM.^2)).*(~dilsaturated) + sunerror.*dilsaturated;
  DELTAZb2(:,LeftspaceL :LeftspaceR)=sunerror*ones(imagehight,Leftspacewidth);
  DELTAZb2(:,RightspaceL:RightspaceR)=sunerror*ones(imagehight,Rightspacewidth);
  hnewcell{1,1}=Zb;
  hnewcell{2,1}=DELTAZb2;
  if k==1

  hnewcell{3,1}=centerx2guess;

  hnewcell{4,1}=centery2guess;
  else

  hnewcell{3,1}=centerx2(old_yframe_ind)+centerx2guess;

  hnewcell{4,1}=centery2(old_yframe_ind)+centery2guess;
  end
  newcell=hnewcell;
  clear rawim basicim
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%  DERIVING TRANSLATION BETWEEN IMAGES
[tikunx(yframe_ind+ystep),tikuny(yframe_ind+ystep)]=gmultireml31(holdcell,hnewcell,levels_x,levels_y,minlevel);%
  centerx2(yframe_ind+ystep)=newcell{3,1} + tikunx(yframe_ind+ystep);
  centery2(yframe_ind+ystep)=newcell{4,1} + tikuny(yframe_ind+ystep);
  [tikunx(yframe_ind+ystep)   tikuny(yframe_ind+ystep)]
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FUSING THE IMAGES
  newcell{3,1}=centerx2(yframe_ind+ystep);
  newcell{4,1}=centery2(yframe_ind+ystep);
  %%%%%%%%% NEED TO CORRECT THIS WITH THE NEW KIND OF FUSION
  fusedcell=fuseml22(fusedcell,newcell);
  old_yframe_ind=yframe_ind+ystep;
  k=k+1;
toc
				
  save yytemp.mat centerx2 centery2 tikunx tikuny
  %pause
  end
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  %save coordpol2.mat centerx2 centery2 tikunx tikuny
  fused=fusedcell{1,1};
  deltaz2=fusedcell{2,1};
  greatcenterx=fusedcell{3,1};
  greatcentery=fusedcell{4,1};
  %save fusedpoll.mat fused deltaz2 greatcenterx greatcentery
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  clear fusedcell
  image(fused)
  colormap(gray(256))
  axis image
  axis xy
  set(gcf,'menubar','figure')
  function prof=profil2(cols,transit)
  mn=eps^7;
  tmp=ones(1,cols);
  L=1:transit;
  R=cols-L+1;
  %tmp(L)=mn+(1-mn) * (L-1)/(transit-1);
  %tmp(R)=mn+(1-mn) * (L-1)/(transit-1);
  tmp(L)=mn+(1-mn) * (1-cos(pi*(L-1)/(transit-1)))/2;
  tmp(R)=mn+(1-mn) * (1-cos(pi*(L-1)/(transit-1)))/2;
  prof=tmp;
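profil2 above returns a unit-height profile whose first and last `transit` columns ramp through a raised-cosine taper, used to feather the horizontal seams of adjacent frames. A NumPy equivalent, for illustration (0-based indexing replaces MATLAB's 1-based `L` and `R`):

```python
import numpy as np

def profil2(cols, transit):
    """Flat profile of length cols with raised-cosine tapers at both
    ends: rises from ~0 at the edges to 1 over `transit` columns."""
    mn = np.finfo(float).eps ** 7            # tiny floor instead of exact zero
    prof = np.ones(cols)
    L = np.arange(transit)                   # 0 .. transit-1
    ramp = mn + (1 - mn) * (1 - np.cos(np.pi * L / (transit - 1))) / 2
    prof[:transit] = ramp                    # left taper, rises toward 1
    prof[cols - 1 - L] = ramp                # right taper, mirrored
    return prof
```

The reciprocal of this profile squared (as in `inv_prof2` above) then inflates the variance near frame edges so edge pixels carry less weight in the fusion.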
  function [oldgreatcell,newgreatcell]=putbigm13(oldcell,newcell)
  %
  % oldcell and newcell contain images to put in a single
  % coordinate system,and the vector-coordinates of the
  % image centers in (x,y).
  % oldgreatcell and newgreatcell are the cells of the new large
  % frame with a common coorinate system,in each of which
  % the single images are embedded. These cells also contain the
  % coordinates of the center of the big frames.
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
				
%%%%%%%%%    DECODING THE CELLS INTO THEIR CONSTITUENTS
Zold      =oldcell{1,1};  % the 1st (old) image
DELTAZ2old=oldcell{2,1};
oldcenterx=oldcell{3,1}; %coordinates of center of old image
oldcentery=oldcell{4,1};
Znew=newcell{1,1};       % the 2nd (new) image
DELTAZ2new=newcell{2,1};
newcenterx=newcell{3,1};
newcentery=newcell{4,1};
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[oldrows,oldcols]=size(Zold);
Lold=oldcenterx-(oldcols-1)/2;      %the  Left side of the old image
Rold=oldcenterx+(oldcols-1)/2;      % the Right side
Dold=oldcentery-(oldrows-1)/2;      % the Down of the old image
Uold=oldcentery+(oldrows-1)/2;      % the Upper side of the old image
[newrows,newcols]=size(Znew);
Lnew=newcenterx-(newcols-1)/2;      % the Left side of the new (2nd) image
Rnew=newcenterx+(newcols-1)/2;
Dnew=newcentery-(newrows-1)/2;
Unew=newcentery+(newrows-1)/2;
greatL=min([Lold,Lnew]);           % the Left side of the combined-greater image
greatR=max([Rold,Rnew]);
greatD=min([Dold,Dnew]);
greatU=max([Uold,Unew]);
spanx=greatR-greatL+1;
spany=greatU-greatD+1;
greatX=ones(spany,1)*(greatL:greatR);       %the coordinate grid of the combined-greater image
greatY=(greatD:greatU)'*ones(1,spanx);
%%%%%%% PUTTING THE OLD IMAGE IN THE COMBINED GREAT-GRID
[oldLDy,oldLDx]=find((greatX==Lold) & (greatY==Dold));
[oldLUy,oldLUx]=find((greatX==Lold) & (greatY==Uold));
[oldRDy,oldRDx]=find((greatX==Rold) & (greatY==Dold));
[oldRUy,oldRUx]=find((greatX==Rold) & (greatY==Uold));
[oldLD_row,oldLD_col]=xy2ij_brack([oldLDx,oldLDy],spany);
[oldRD_row,oldRD_col]=xy2ij_brack([oldRDx,oldRDy],spany);
[oldLU_row,oldLU_col]=xy2ij_brack([oldLUx,oldLUy],spany);
[oldRU_row,oldRU_col]=xy2ij_brack([oldRUx,oldRUy],spany);
Zoldgreat=NaN*zeros(size(greatX));
Zoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=Zold;
Zoldgreat=flipud(Zoldgreat);
DELTAZoldgreat=NaN*zeros(size(greatX));
DELTAZoldgreat(oldLD_row:-1:oldLU_row,oldLD_col:oldRD_col)=DELTAZ2old;
DELTAZoldgreat=flipud(DELTAZoldgreat);
%%%%%%%%% PUTTING THE NEW IMAGE IN THE COMBINED GREAT-GRID
[newLDy,newLDx]=find((greatX==Lnew) & (greatY==Dnew));
				
[newLUy,newLUx]=find((greatX==Lnew) & (greatY==Unew));
[newRDy,newRDx]=find((greatX==Rnew) & (greatY==Dnew));
[newRUy,newRUx]=find((greatX==Rnew) & (greatY==Unew));
[newLD_row,newLD_col]=xy2ij_brack([newLDx,newLDy],spany);
[newRD_row,newRD_col]=xy2ij_brack([newRDx,newRDy],spany);
[newLU_row,newLU_col]=xy2ij_brack([newLUx,newLUy],spany);
[newRU_row,newRU_col]=xy2ij_brack([newRUx,newRUy],spany);
Znewgreat=NaN*zeros(size(greatX));
Znewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=Znew;
Znewgreat=flipud(Znewgreat);
DELTAZnewgreat=NaN*zeros(size(greatX));
DELTAZnewgreat(newLD_row:-1:newLU_row,newLD_col:newRD_col)=DELTAZ2new;
DELTAZnewgreat=flipud(DELTAZnewgreat);
%%%%%%%%%%%%%%%%%% THE CENTRAL COORDINATE OF THE (WARPED) GREAT-COMBINED IMAGE
greatcenterx=(greatR+greatL)/2;
greatcentery=(greatU+greatD)/2;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% PUTTING THE OUTPUT TO CELLS,FOR EASIER TRANSFER
oldgreatcell{1,1}=Zoldgreat;
oldgreatcell{2,1}=DELTAZoldgreat;
oldgreatcell{3,1}=greatcenterx;
oldgreatcell{4,1}=greatcentery;
newgreatcell{1,1}=Znewgreat;
newgreatcell{2,1}=DELTAZnewgreat;
newgreatcell{3,1}=greatcenterx;
newgreatcell{4,1}=greatcentery;
function[small,smalldelta2]=reduceml21(big,bigdelta2,shape);
gs=zeros(5,1);
a=0.4;
gs(3)=a;
gs(2)=0.25;gs(4)=gs(2);
gs(1)=0.25-a/2;gs(5)=gs(1);
%invsig=1./sqrt(bigdelta2);
invsig2=1./bigdelta2;
weightedbig=big.*invsig2;
blurnominat=conv2(conv2(weightedbig,gs,shape),gs',shape);     %convolution in one line
blurdenomin=conv2(conv2(invsig2,gs,shape),gs',shape);
samples_row=1 : 2 : size(blurnominat,1);         %sample grid in one line
samples_col=1 : 2 : size(blurnominat,2);         %sample grid in one line
smallnominat=blurnominat(samples_row,samples_col);      %sampling in one line
smalldenomin=blurdenomin(samples_row,samples_col);      %sampling in one line
small=smallnominat./smalldenomin;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
				
newsig2=1./smalldenomin;
%smalldelta2=newsig.^2;
smalldelta2=newsig2;
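reduceml21 above performs one level of an uncertainty-weighted Gaussian-pyramid reduction: both the weighted image Z/sig2 and the weight field 1/sig2 are blurred with the separable 5-tap kernel [0.25-a/2, 0.25, a, 0.25, 0.25-a/2] (a = 0.4) and subsampled 2:1; their ratio is the reduced image, and the reciprocal of the blurred weights is its variance. A NumPy sketch under those assumptions (function name is ours; border handling here is 'same'-style rather than MATLAB's `shape` argument):

```python
import numpy as np

def reduce_weighted(big, var_big):
    """One level of inverse-variance-weighted Gaussian pyramid reduction:
    blur numerator (image*weight) and denominator (weight) separably
    with the 5-tap kernel, then subsample every other row and column."""
    a = 0.4
    gs = np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])
    w = 1.0 / var_big                        # inverse-variance weights
    num = big * w

    def blur(img):                           # separable 2-D blur, 'same' borders
        img = np.apply_along_axis(lambda r: np.convolve(r, gs, mode='same'), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, gs, mode='same'), 0, img)

    bn, bd = blur(num), blur(w)
    small = bn[::2, ::2] / bd[::2, ::2]      # weighted mean on the coarse grid
    var_small = 1.0 / bd[::2, ::2]           # variance of the weighted mean
    return small, var_small
```

Because numerator and denominator are blurred identically, a constant image stays exactly constant at every level, while low-confidence pixels contribute little to the coarse grid.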
function[small,smalldelta2]=reduceml21_x(big,bigdelta2,shape)
gs=zeros(5,1);
a=0.4;
gs(3)=a;
gs(2)=0.25;gs(4)=gs(2);
gs(1)=0.25-a/2;gs(5)=gs(1);
%invsig=1./sqrt(bigdelta2);
invsig2=1./bigdelta2;
weightedbig=big.*invsig2;
blurnominat=conv2(weightedbig,gs,shape);    %convolution in one line
blurdenomin=conv2(invsig2,gs,shape);
samples_col=1:2:size(blurnominat,2);         %sample grid in one line
smallnominat=blurnominat(:,samples_col);    %sampling in one line
smalldenomin=blurdenomin(:,samples_col);    %sampling in one line
small=smallnominat./smalldenomin;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
newsig2=1./smalldenomin;
%smalldelta2=newsig.^2;
smalldelta2=newsig2;
function [small,smalldelta2]=reduceml21_y(big,bigdelta2,shape)
gs=zeros(5,1);
a=0.4;
gs(3)=a;
gs(2)=0.25;gs(4)=gs(2);
gs(1)=0.25-a/2;gs(5)=gs(1);
%invsig=1./sqrt(bigdelta2);
invsig2=1./bigdelta2;
weightedbig=big.*invsig2;
blurnominat=conv2(weightedbig,gs',shape);     %convolution in one line
blurdenomin=conv2(invsig2,gs',shape);
samples_row=1 :2 :size(blurnominat,1);          %sample grid in one line
smallnominat=blurnominat(samples_row,:);       %sampling in one line
smalldenomin=blurdenomin(samples_row,:);       %sampling in one line
small=smallnominat./smalldenomin;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
newsig2=1./smalldenomin;
%smalldelta2=newsig.^2;
smalldelta2=newsig2;
				
function Fethsatur=satursmooth7(X,uppersun,lowersun)
SE=ones(3);
mn=eps;
saturated=uint8(X>uppersun);
maybesatur=uint8(X>lowersun);
s=max(saturated(:));
if  s==0

  Fethsatur=ones(size(X));
else

  shulaim=uint8(1);

  oldregion=uint8(saturated);

  %k=0;

  while ~(shulaim==0)
     %k=k+1
     dilsaturated=(dilate(oldregion,SE));
     newregion=(dilsaturated & maybesatur);
     added=newregion & (~(oldregion));
     shulaim=max(added(:));
     oldregion=newregion;
  end

    tmp=ones(size(X));

    fact=(1-mn)/(uppersun-(lowersun-1));

    for level=lowersun:uppersun,

       badpixels=((X==level) & newregion);

       badvalue=mn+(uppersun-level)*fact;

       lobeseder=find(badpixels);

       if ~isempty(lobeseder)

          tmp(lobeseder)=badvalue;

       end

    end

    lobeseder=find(saturated);

    tmp(lobeseder)=mn;

    Fethsatur=tmp;
end
function [enhanced,subenhanced,mnval,mxval]=stretchwin5(X,Y,photo,logphoto)
XS=sort(X);
YS=sort(Y);
L=round(XS(2));
				
R=round(XS(3));
D=round(YS(2));
U=round(YS(3));
%    LD  LU  RU   RD  LD
Xp=[L   L    R    R  L];
Yp=[D   U    U    D  D];
X0=(L+R)/2;
Y0=(D+U)/2;
spanx=size(photo,2);
spany=size(photo,1);
[LD_row,LD_col]=xy2ij_brack([L,D],spany);
[RD_row,RD_col]=xy2ij_brack([R,D],spany);
[LU_row,LU_col]=xy2ij_brack([L,U],spany);
[RU_row,RU_col]=xy2ij_brack([R,U],spany);
stam=flipud(photo);
subarea=stam(LD_row:-1:LU_row,LD_col:RD_col);
mn=min(subarea(:));
lemata=subarea-mn;
mx=max(lemata(:));
imx8=(2^8)/mx;
subenhanced=uint8(round(lemata*imx8));
mn=min(logphoto(:));
lemata=double(logphoto)-double(mn);
mx=max(lemata(:));
imx8=(2^8)/mx;
stam=flipud(lemata*imx8);
stam(LD_row:-1:LU_row,LD_col:RD_col)=subenhanced;
enhanced=flipud(stam);
mnval=min(subarea(:));
mxval=max(subarea(:));
% [I,J]=xy2ij_brack(XYcoord,N);
% Converts x-y coordinates to ij coordinates:
%   I is the row-number of the element,
%   J is the column-number of the element
%   N is the number of rows in the matrix.
function [I,J]=xy2ij_brack(XYcoord,N);
%XYcoord
X=XYcoord(:,1);
Y=XYcoord(:,2);
				
%index in row-number
I=N-Y+1;
%index in cols
J=X;
%IJcoord=[I,J];

Claims (20)

1、一种成像方法,其特征在于包括:
用成像器作第一组测量以生成第一像值的第一步骤,第一组测量包括至少测量一次第一景区第一射线束的强度,第一射线束在成像器参考帧内具有第一主射线,成像器对第一主射线射线束具有第一强度灵敏度特性,成像器对第一主射线射线束的强度具有第一动态范围;
用成像器作第二组测量以生成第二像值的第二步骤,第二组测量包括至少测量一次第一景区发射的第二射线束的强度,第二射线束在成像器参考帧内具有第二主射线,第二主射线不同于第一主射线,成像器对第二主射线射线束具有第二强度灵敏度特性,第二与第一强度灵敏度特性不同,而且成像器对第二主射线射线束的强度具有第二动态范围;和
对第一与第二像值作拼合操作而生成第三像值,第三像值与成像器对第一和第二射线束强度中至少一个的第三动态范围相关,第三动态范围大于成像器第一与第二动态范围中的至少一个。
2、根据权利要求1的方法,其特征在于第一射线束包括第一电磁射线
束,第二射线束包括第二电磁射线束。
3、根据权利要求1的方法,其特征在于还包括下列方法之一:
在第一与第二步骤之间相对第一景区转动成像器;和
在第一与第二步骤之间相对第一景区平移成像器。
4、根据权利要求1的方法,还包括校正成像器,其特征在于,校正步骤
包括:
用成像器测量第一组第一主射线射线束的强度以生成第一组校正测量值;
通过测定下列值之一确定第一强度灵敏度特性的第一估值:a)第一组校正测量值之和,和b)第一组校正测量值的平均值;
用成像器测量第二组第二主射线射线束的强度,生成第二组校正测量值;和
通过测定下列值之一确定第二强度灵敏度特性的第二估值:a)第二组校正测量值之和,和b)第二组校正测量值的平均值。
5、根据权利要求1的方法,还包括校正成像器,其特征在于,校正步骤包
括:
用成像器作第三组测量而生成第四像值,第三组测量包括至少测量一次第三射线束的强度,第三射线束由第二景区发射并居于第一主射线;
用成像器作第四组测量而生成第五像值,第四组测量包括至少测量一次第四射线束的强度,第四射线束由第二景区发射并且有第二主射线;和
评估第一与第二强度灵敏度特性间的关系,评估步骤包括测定下列值之一:a)第四与第五像值之差,和b)第四与第五像值之比。
6. The method of claim 1, further comprising:
making a third set of measurements with the imager to generate a fourth image value, the third set of measurements comprising at least one measurement of an intensity of at least one selected spectral component of a third ray bundle, the third ray bundle emanating from the first scene region and having a third chief ray in the reference frame of the imager, the imager having a first spectral sensitivity characteristic for ray bundles having the third chief ray, the first spectral sensitivity characteristic comprising a bandpass characteristic having a first wavelength sensitivity band, the at least one selected spectral component of the third ray bundle having a wavelength within the first wavelength sensitivity band; and
making a fourth set of measurements with the imager to generate a fifth image value, the fourth set of measurements comprising at least one measurement of an intensity of at least one selected spectral component of a fourth ray bundle, the fourth ray bundle emanating from the first scene region and having a fourth chief ray in the reference frame of the imager, the fourth chief ray being different from the third chief ray, the imager having a second spectral sensitivity characteristic for ray bundles having the fourth chief ray, the second spectral sensitivity characteristic comprising a bandpass characteristic having a second wavelength sensitivity band, the at least one selected spectral component of the fourth ray bundle having a wavelength within the second wavelength sensitivity band, the second wavelength sensitivity band being different from the first.
7. The method of claim 1, further comprising:
making a third set of measurements with the imager to generate a fourth image value, the third set of measurements comprising at least one measurement of an intensity of at least one selected polarization component of a third ray bundle emanating from the first scene region, the third ray bundle having a third chief ray in the reference frame of the imager, the imager having a first polarization sensitivity characteristic for ray bundles having the third chief ray, the first polarization sensitivity characteristic comprising reduced sensitivity to ray components having polarization angles outside a first angular range, the at least one selected polarization component of the third ray bundle having a polarization angle within the first angular range; and
making a fourth set of measurements with the imager to generate a fifth image value, the fourth set of measurements comprising at least one measurement of an intensity of at least one selected polarization component of a fourth ray bundle, the fourth ray bundle emanating from the first scene region and having a fourth chief ray in the reference frame of the imager, the fourth chief ray being different from the third chief ray, the imager having a second polarization sensitivity characteristic for ray bundles having the fourth chief ray, the second polarization sensitivity characteristic comprising reduced sensitivity to signal components having polarization angles outside a second angular range, the at least one selected polarization component of the fourth ray bundle having a polarization angle within the second angular range, the second angular range being different from the first.
8. The method of claim 1, further comprising:
making a third set of measurements with the imager to generate a fourth image value, the third set of measurements comprising at least one measurement of an intensity of a third ray bundle emanating from the first scene region, the third ray bundle having a third chief ray in the reference frame of the imager, the imager having a first focal characteristic for ray bundles having the third chief ray, the first focal characteristic comprising a first focal length; and
making a fourth set of measurements with the imager to generate a fifth image value, the fourth set of measurements comprising at least one measurement of an intensity of a fourth ray bundle emanating from the first scene region, the fourth ray bundle having a fourth chief ray in the reference frame of the imager, the fourth chief ray being different from the third chief ray, the imager having a second focal characteristic for ray bundles having the fourth chief ray, the second focal characteristic comprising a second focal length, the second focal length being different from the first.
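Claim 8 varies the focal characteristic between the two chief rays. One common way to exploit such differently focused measurements, shown here as an assumption rather than the claim's own procedure, is to keep whichever reading of a scene region has the higher local contrast (a standard focus measure):

```python
def fuse_by_focus(patch_a, patch_b):
    """Given two small grayscale patches of the same scene region
    captured with different focus settings, keep the one with higher
    variance, i.e. higher local contrast (sharper focus)."""
    def variance(p):
        m = sum(p) / len(p)
        return sum((x - m) ** 2 for x in p) / len(p)
    return patch_a if variance(patch_a) >= variance(patch_b) else patch_b

# The high-contrast (in-focus) patch wins over the blurred one:
print(fuse_by_focus([0.0, 10.0, 0.0, 10.0], [5.0, 5.0, 5.0, 5.0]))
```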
9. An imaging method, comprising:
a first step of making a first set of measurements with an imager to generate a first image value, the first set of measurements comprising at least one measurement of an intensity of at least one selected polarization component of a first ray bundle emanating from a first scene region, the first ray bundle having a first chief ray in a reference frame of the imager, the imager having a first polarization sensitivity characteristic for ray bundles having the first chief ray, the first polarization sensitivity characteristic comprising reduced sensitivity to signal components having polarization angles outside a first angular range, the at least one selected polarization component of the first ray bundle having a polarization angle within the first angular range;
a second step of making a second set of measurements with the imager to generate a second image value, the second set of measurements comprising at least one measurement of an intensity of at least one selected polarization component of a second ray bundle emanating from the first scene region, the second ray bundle having a second chief ray in the reference frame of the imager, the second chief ray being different from the first chief ray, the imager having a second polarization sensitivity characteristic for ray bundles having the second chief ray, the second polarization sensitivity characteristic comprising reduced sensitivity to signal components having polarization angles outside a second angular range, the at least one selected polarization component of the second ray bundle having a polarization angle within the second angular range, the second angular range being different from the first;
a third step of moving the imager, comprising one of:
rotating the imager with respect to the first scene region between the first and second steps; and
translating the imager with respect to the first scene region between the first and second steps; and
using the first and second image values to determine a polarization state of one of the first and second ray bundles.
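The determination of polarization state in claim 9 can be sketched with the textbook Stokes relations v_a = (I + Q cos 2a + U sin 2a)/2, where v_a is the reading through a linear polarizer at angle a. Three readings at 0°, 45° and 90° (obtained, for example, as the imager rotates between measurement steps) suffice for the linear polarization state; this closed-form recovery is a standard derivation, not the patent's specific algorithm:

```python
import math

def polarization_state(v0, v45, v90):
    """Recover total intensity I, degree of linear polarization, and
    angle of polarization (radians) of a ray bundle from three readings
    taken through a linear polarizer at 0, 45 and 90 degrees."""
    I = v0 + v90            # total intensity (Stokes S0)
    Q = v0 - v90            # Stokes S1
    U = 2.0 * v45 - I       # Stokes S2
    dop = math.hypot(Q, U) / I      # degree of linear polarization
    aop = 0.5 * math.atan2(U, Q)    # angle of polarization
    return I, dop, aop

# Fully polarized light at 30 degrees with unit intensity:
# v0 = 0.75, v45 = (1 + sin 60)/2, v90 = 0.25
print(polarization_state(0.75, 0.9330127018922193, 0.25))
```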
10. The method of claim 9, wherein the first ray bundle comprises a first electromagnetic ray bundle, and the second ray bundle comprises a second electromagnetic ray bundle.
11. An imaging apparatus, comprising:
a first imager for making a first set of measurements to generate a first image value, the first set of measurements comprising at least one measurement of an intensity of a first ray bundle emanating from a first scene region, the first ray bundle having a first chief ray in a reference frame of the imager, the imager having a first intensity sensitivity characteristic for ray bundles having the first chief ray and a first dynamic range with respect to intensities of ray bundles having the first chief ray;
a second imager for making a second set of measurements to generate a second image value, the second set of measurements comprising at least one measurement of an intensity of a second ray bundle emanating from the first scene region, the second ray bundle having a second chief ray in the reference frame of the imager, the second chief ray being different from the first chief ray, the imager having a second intensity sensitivity characteristic for ray bundles having the second chief ray, the second intensity sensitivity characteristic being different from the first, and the imager having a second dynamic range with respect to intensities of ray bundles having the second chief ray; and
a processor for performing a mosaicing operation on the first and second measurements to generate a third image value, the third image value being associated with a third dynamic range with respect to the intensity of at least one of the first and second ray bundles, the third dynamic range being greater than at least one of the first and second dynamic ranges.
12. The apparatus of claim 11, wherein the first ray bundle comprises a first electromagnetic ray bundle, and the second ray bundle comprises a second electromagnetic ray bundle.
13. The apparatus of claim 11, wherein the second imager is the first imager, the first set of measurements is made no later than a first time, the second set of measurements is made no earlier than a second time, the second time being later than the first time, and the apparatus further comprises one of:
an arrangement for rotating the first imager with respect to the first and second scene regions between the first and second times; and
an arrangement for translating the first imager with respect to the first and second scene regions between the first and second times.
14. The apparatus of claim 11, further comprising:
a processor for using the imager to measure intensities of a first set of ray bundles having the first chief ray, to generate a first set of calibration measurements;
a processor for determining a first estimate of the first intensity sensitivity characteristic, comprising one of: a) a processor for determining a sum of the first set of calibration measurements, and b) a processor for determining an average of the first set of calibration measurements;
a processor for using the imager to measure intensities of a second set of ray bundles having the second chief ray, to generate a second set of calibration measurements; and
a processor for determining a second estimate of the second intensity sensitivity characteristic, comprising one of: a) a processor for determining a sum of the second set of calibration measurements, and b) a processor for determining an average of the second set of calibration measurements.
15. The apparatus of claim 11, further comprising:
a third imager for making a third set of measurements to generate a fourth image value, the third set of measurements comprising at least one measurement of an intensity of a third ray bundle emanating from a second scene region, the third ray bundle having the first chief ray;
a fourth imager for making a fourth set of measurements to generate a fifth image value, the fourth set of measurements comprising at least one measurement of an intensity of a fourth ray bundle emanating from the second scene region, the fourth ray bundle having the second chief ray; and
a processor for estimating a relationship between the first and second intensity sensitivity characteristics, comprising one of: a) a processor for determining a difference between the fourth and fifth image values, and b) a processor for determining a ratio of the fourth and fifth image values.
16. The apparatus of claim 11, further comprising:
a third imager for making a third set of measurements to generate a fourth image value, the third set of measurements comprising at least one measurement of an intensity of at least one selected spectral component of a third ray bundle emanating from the first scene region, the third ray bundle having a third chief ray in the reference frame of the imager, the imager having a first spectral sensitivity characteristic for ray bundles having the third chief ray, the first spectral sensitivity characteristic comprising a bandpass characteristic having a first wavelength sensitivity band, the at least one selected spectral component of the third ray bundle having a wavelength within the first wavelength sensitivity band; and
a fourth imager for making a fourth set of measurements to generate a fifth image value, the fourth set of measurements comprising at least one measurement of an intensity of at least one selected spectral component of a fourth ray bundle emanating from the first scene region, the fourth ray bundle having a fourth chief ray in the reference frame of the imager, the fourth chief ray being different from the third chief ray, the imager having a second spectral sensitivity characteristic for ray bundles having the fourth chief ray, the second spectral sensitivity characteristic comprising a bandpass characteristic having a second wavelength sensitivity band, the at least one selected spectral component of the fourth ray bundle having a wavelength within the second wavelength sensitivity band, the second wavelength sensitivity band being different from the first.
17. The apparatus of claim 11, further comprising:
a third imager for making a third set of measurements to generate a fourth image value, the third set of measurements comprising at least one measurement of an intensity of at least one selected polarization component of a third ray bundle emanating from the first scene region, the third ray bundle having a third chief ray in the reference frame of the imager, the imager having a first polarization sensitivity characteristic for ray bundles having the third chief ray, the first polarization sensitivity characteristic comprising reduced sensitivity to signal components having polarization angles outside a first angular range, the at least one selected polarization component of the third ray bundle having a polarization angle within the first angular range; and
a fourth imager for making a fourth set of measurements to generate a fifth image value, the fourth set of measurements comprising at least one measurement of an intensity of at least one selected polarization component of a fourth ray bundle emanating from the first scene region, the fourth ray bundle having a fourth chief ray in the reference frame of the imager, the fourth chief ray being different from the third chief ray, the imager having a second polarization sensitivity characteristic for ray bundles having the fourth chief ray, the second polarization sensitivity characteristic comprising reduced sensitivity to signal components having polarization angles outside a second angular range, the at least one selected polarization component of the fourth ray bundle having a polarization angle within the second angular range, the second angular range being different from the first.
18. The apparatus of claim 11, further comprising:
a third imager for making a third set of measurements to generate a fourth image value, the third set of measurements comprising at least one measurement of an intensity of a third ray bundle emanating from the first scene region, the third ray bundle having a third chief ray in the reference frame of the imager, the imager having a first focal characteristic for ray bundles having the third chief ray, the first focal characteristic comprising a first focal length; and
a fourth imager for making a fourth set of measurements to generate a fifth image value, the fourth set of measurements comprising at least one measurement of an intensity of a fourth ray bundle emanating from the first scene region, the fourth ray bundle having a fourth chief ray in the reference frame of the imager, the fourth chief ray being different from the third chief ray, the imager having a second focal characteristic for ray bundles having the fourth chief ray, the second focal characteristic comprising a second focal length, the second focal length being different from the first.
19. An imaging apparatus, comprising:
a first imager for making a first set of measurements to generate a first image value, the first set of measurements comprising at least one measurement of an intensity of at least one selected polarization component of a first ray bundle emanating from a first scene region, the first ray bundle having a first chief ray in a reference frame of the imager, the imager having a first polarization sensitivity characteristic for ray bundles having the first chief ray, the first polarization sensitivity characteristic comprising reduced sensitivity to signal components having polarization angles outside a first angular range, the at least one selected polarization component of the first ray bundle having a polarization angle within the first angular range;
a second imager for making a second set of measurements to generate a second image value, the second set of measurements comprising at least one measurement of an intensity of at least one selected polarization component of a second ray bundle emanating from the first scene region, the second ray bundle having a second chief ray in the reference frame of the imager, the second chief ray being different from the first chief ray, the imager having a second polarization sensitivity characteristic for ray bundles having the second chief ray, the second polarization sensitivity characteristic comprising reduced sensitivity to signal components having polarization angles outside a second angular range, the at least one selected polarization component of the second ray bundle having a polarization angle within the second angular range, the second angular range being different from the first, wherein the first set of measurements is made no later than a first time, the second set of measurements is made no earlier than a second time, and the second time is later than the first time;
a moving arrangement comprising one of:
an arrangement for rotating the first imager with respect to the first and second scene regions between the first and second times, and
an arrangement for translating the first imager with respect to the first and second scene regions between the first and second times; and
a processor for using the first and second image values to determine a polarization state of one of the first and second ray bundles.
20. The apparatus of claim 19, wherein the first ray bundle comprises a first electromagnetic ray bundle, and the second ray bundle comprises a second electromagnetic ray bundle.
CN01813115A 2000-07-21 2001-07-23 Method and apparatus for image mosaicing Pending CN1451230A (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US22002500P 2000-07-21 2000-07-21
US60/220,025 2000-07-21

Publications (1)

Publication Number Publication Date
CN1451230A true CN1451230A (zh) 2003-10-22

Family

ID=22821741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN01813115A Pending CN1451230A (zh) 2000-07-21 2001-07-23 Method and apparatus for image mosaicing

Country Status (6)

Country Link
US (1) US7440637B2 (zh)
JP (1) JP4015944B2 (zh)
KR (1) KR20030028553A (zh)
CN (1) CN1451230A (zh)
AU (1) AU2001277964A1 (zh)
WO (1) WO2002009424A2 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538757A (zh) * 2011-11-08 2012-07-04 Kunming University of Science and Technology Method for observing a bright source and surrounding dim-source targets
CN101534391B (zh) * 2008-02-05 2012-11-21 Fujifilm Corporation Imaging apparatus, imaging method, image processing apparatus, and image processing method
CN102957861A (zh) * 2011-08-24 2013-03-06 Sony Corporation Image processing device, control method therefor, and program
CN104937653A (zh) * 2013-03-21 2015-09-23 LG Electronics Inc. Display device and control method thereof
CN105264881A (zh) * 2013-04-25 2016-01-20 Airbus Defence and Space Image acquisition accumulating the signals of adjacent photodetectors
US9464938B2 (en) 2014-02-06 2016-10-11 The Boeing Company Systems and methods for measuring polarization of light in images
CN107370962A (zh) * 2016-07-29 2017-11-21 Guangdong Oppo Mobile Telecommunications Corp., Ltd. High dynamic range image capturing method, apparatus, and terminal device
CN108293093A (zh) * 2015-11-19 2018-07-17 Sony Corporation Image processing device and image processing method
CN110476118A (zh) * 2017-04-01 2019-11-19 SZ DJI Technology Co., Ltd. Low-profile multi-band hyperspectral imaging for machine vision
US10901197B2 (en) 2014-09-19 2021-01-26 Sony Corporation Medical observation device and lens barrel of medical observation device

Families Citing this family (134)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7738015B2 (en) 1997-10-09 2010-06-15 Fotonation Vision Limited Red-eye filter method and apparatus
US6789040B2 (en) * 2000-08-22 2004-09-07 Affymetrix, Inc. System, method, and computer software product for specifying a scanning area of a substrate
DE10109130B4 (de) * 2001-02-24 2015-02-19 Carl Zeiss Microscopy Gmbh Method for recording and displaying fluorescence images with high spatial resolution
US7273562B2 (en) * 2002-04-17 2007-09-25 Nutech 03, Inc. Ozone injection method and system
JP2007520934A (ja) * 2003-12-24 2007-07-26 Walker Digital, LLC Method and apparatus for automatically capturing and managing images
US7840038B2 (en) * 2006-12-08 2010-11-23 The Invention Science Fund I, Llc Defect correction based on “virtual” lenslets
US7232221B2 (en) * 2004-02-24 2007-06-19 Searete, Llc Volumetric imaging using “virtual” lenslets
US7220001B2 (en) * 2004-02-24 2007-05-22 Searete, Llc Defect correction based on “virtual” lenslets
US8169534B2 (en) * 2004-02-24 2012-05-01 The Invention Science Fund I, Llc Volumetric imaging using “virtual” lenslets
US7542592B2 (en) * 2004-03-29 2009-06-02 Siemens Corporate Research, Inc. Systems and methods for face detection and recognition using infrared imaging
US7173698B2 (en) * 2004-04-13 2007-02-06 The United States Of America As Represented By The Secretary Of The Army Simultaneous 4-stokes parameter determination using a single digital image
US7420542B2 (en) * 2004-05-25 2008-09-02 Avago Technologies Ecbu Ip Pte Ltd Apparatus for capturing and analyzing light and method embodied therein
KR101042638B1 (ko) * 2004-07-27 2011-06-20 Samsung Electronics Co., Ltd. Digital imaging apparatus for panoramic image generation and generation method thereof
JP4023494B2 (ja) * 2005-01-18 2007-12-19 Sony Corporation Imaging apparatus, imaging method, and method of designing an imaging apparatus
US7274449B1 (en) 2005-06-20 2007-09-25 United States Of America As Represented By The Secretary Of The Army System for determining stokes parameters
US20070031063A1 (en) * 2005-08-05 2007-02-08 Hui Zhou Method and apparatus for generating a composite image from a set of images
EP1761041A1 (de) * 2005-08-29 2007-03-07 Stueckler Gerd Digital film camera
US7970239B2 (en) 2006-01-19 2011-06-28 Qualcomm Incorporated Hand jitter reduction compensating for rotational motion
US8120658B2 (en) * 2006-01-19 2012-02-21 Qualcomm Incorporated Hand jitter reduction system for cameras
US8019179B2 (en) 2006-01-19 2011-09-13 Qualcomm Incorporated Hand jitter reduction for compensating for linear displacement
JP2009524849A (ja) * 2006-01-24 2009-07-02 The Trustees of Columbia University in the City of New York Systems, methods, and media for capturing scene images and depth geometry and generating a compensation image
KR100813978B1 (ko) * 2006-02-22 2008-03-17 Samsung Electronics Co., Ltd. Method and apparatus for recording and reproducing multimedia data
US7295312B1 (en) * 2006-05-10 2007-11-13 United States Of America As Represented By The Secretary Of The Army Rapid 4-Stokes parameter determination via Stokes filter wheel
JP2007318490A (ja) * 2006-05-26 2007-12-06 Olympus Imaging Corp Image processing apparatus, camera, and image processing program
IES20070229A2 (en) * 2006-06-05 2007-10-03 Fotonation Vision Ltd Image acquisition method and apparatus
WO2008085553A1 (en) * 2006-08-25 2008-07-17 Eliezer Jacob Improved digital camera with non-uniform image resolution
US9730573B2 (en) * 2007-03-20 2017-08-15 Given Imaging Ltd. Narrow band in-vivo imaging device
US8218822B2 (en) * 2007-05-14 2012-07-10 Pips Technology, Inc. Apparatus and method for recognizing the state of origin of a vehicle license plate
EP2156370B1 (en) * 2007-05-14 2011-10-12 Historx, Inc. Compartment segregation by pixel characterization using image data clustering
EP2162728B1 (en) * 2007-06-15 2016-07-27 Novartis AG Microscope system and method for obtaining standardized sample data
CA2604317C (en) 2007-08-06 2017-02-28 Historx, Inc. Methods and system for validating sample images for quantitative immunoassays
CA2596204C (en) * 2007-08-07 2019-02-26 Historx, Inc. Method and system for determining an optimal dilution of a reagent
WO2009029810A1 (en) * 2007-08-31 2009-03-05 Historx, Inc. Automatic exposure time selection for imaging tissue
TWI448808B (zh) * 2008-01-25 2014-08-11 Chi Mei Comm Systems Inc Projector autofocus system and method
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US8866920B2 (en) 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
EP2289235A4 (en) 2008-05-20 2011-12-28 Pelican Imaging Corp RECORDING AND PROCESSING IMAGES BY MONOLITHIC CAMERA ARRANGEMENT WITH HETEROGENIC IMAGE TRANSFORMER
JP4507265B2 (ja) * 2008-06-30 2010-07-21 Renesas Electronics Corporation Image processing circuit, and display panel driver and display device incorporating the same
WO2010033508A1 (en) * 2008-09-16 2010-03-25 Historx, Inc. Reproducible quantification of biomarker expression
US8391634B1 (en) 2009-04-28 2013-03-05 Google Inc. Illumination estimation for images
US8723926B2 (en) * 2009-07-22 2014-05-13 Panasonic Corporation Parallax detecting apparatus, distance measuring apparatus, and parallax detecting method
US8723118B2 (en) 2009-10-01 2014-05-13 Microsoft Corporation Imager for constructing color and depth images
US8514491B2 (en) 2009-11-20 2013-08-20 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
JP5486298B2 (ja) * 2009-12-28 2014-05-07 Canon Inc. Image processing apparatus and image processing method
WO2011127247A2 (en) * 2010-04-07 2011-10-13 Sanjay Krishna Apparatus and techniques of non-invasive analysis
US8774494B2 (en) * 2010-04-30 2014-07-08 Complete Genomics, Inc. Method and system for accurate alignment and registration of array for DNA sequencing
KR101824672B1 (ko) 2010-05-12 2018-02-05 FotoNation Cayman Limited Imager array structures and array cameras
WO2012004709A1 (en) * 2010-07-06 2012-01-12 Koninklijke Philips Electronics N.V. Generation of high dynamic range images from low dynamic range images
US20130188878A1 (en) * 2010-07-20 2013-07-25 Lockheed Martin Corporation Image analysis systems having image sharpening capabilities and methods using same
US9294755B2 (en) * 2010-10-20 2016-03-22 Raytheon Company Correcting frame-to-frame image changes due to motion for three dimensional (3-D) persistent observations
US8798393B2 (en) * 2010-12-01 2014-08-05 Google Inc. Removing illumination variation from images
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
EP2708019B1 (en) 2011-05-11 2019-10-16 FotoNation Limited Systems and methods for transmitting and receiving array camera image data
US8923401B2 (en) 2011-05-31 2014-12-30 Raytheon Company Hybrid motion image compression
WO2013003485A1 (en) * 2011-06-28 2013-01-03 Inview Technology Corporation Image sequence reconstruction based on overlapping measurement subsets
EP2726930A4 (en) 2011-06-28 2015-03-04 Pelican Imaging Corp OPTICAL ARRANGEMENTS FOR USE WITH AN ARRAY CAMERA
US20130265459A1 (en) 2011-06-28 2013-10-10 Pelican Imaging Corporation Optical arrangements for use with an array camera
US20130070060A1 (en) 2011-09-19 2013-03-21 Pelican Imaging Corporation Systems and methods for determining depth from multiple views of a scene that include aliasing using hypothesized fusion
WO2013049699A1 (en) 2011-09-28 2013-04-04 Pelican Imaging Corporation Systems and methods for encoding and decoding light field image files
JP5802520B2 (ja) * 2011-11-11 2015-10-28 Hitachi Industry & Control Solutions, Ltd. Imaging apparatus
JP6309459B2 (ja) * 2012-02-15 2018-04-11 Heptagon Micro Optics Pte. Ltd. Time-of-flight camera with stripe illumination
US9412206B2 (en) 2012-02-21 2016-08-09 Pelican Imaging Corporation Systems and methods for the manipulation of captured light field image data
US9230333B2 (en) 2012-02-22 2016-01-05 Raytheon Company Method and apparatus for image processing
US9210392B2 (en) 2012-05-01 2015-12-08 Pelican Imaging Corporation Camera modules patterned with pi filter groups
CN104508681B (zh) 2012-06-28 2018-10-30 FotoNation Cayman Limited Systems and methods for detecting defective camera arrays, optic arrays, and sensors
US20140002674A1 (en) 2012-06-30 2014-01-02 Pelican Imaging Corporation Systems and Methods for Manufacturing Camera Modules Using Active Alignment of Lens Stack Arrays and Sensors
US8619082B1 (en) 2012-08-21 2013-12-31 Pelican Imaging Corporation Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation
US20140055632A1 (en) 2012-08-23 2014-02-27 Pelican Imaging Corporation Feature based high resolution motion estimation from low resolution images captured using an array source
WO2014043641A1 (en) 2012-09-14 2014-03-20 Pelican Imaging Corporation Systems and methods for correcting user identified artifacts in light field images
US20140092281A1 (en) 2012-09-28 2014-04-03 Pelican Imaging Corporation Generating Images from Light Fields Utilizing Virtual Viewpoints
US9143711B2 (en) 2012-11-13 2015-09-22 Pelican Imaging Corporation Systems and methods for array camera focal plane control
WO2014130849A1 (en) 2013-02-21 2014-08-28 Pelican Imaging Corporation Generating compressed light field representation data
US9253380B2 (en) 2013-02-24 2016-02-02 Pelican Imaging Corporation Thin form factor computational array cameras and modular array cameras
US9638883B1 (en) 2013-03-04 2017-05-02 Fotonation Cayman Limited Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process
WO2014138695A1 (en) 2013-03-08 2014-09-12 Pelican Imaging Corporation Systems and methods for measuring scene information while capturing images using array cameras
US8866912B2 (en) 2013-03-10 2014-10-21 Pelican Imaging Corporation System and methods for calibration of an array camera using a single captured image
US9521416B1 (en) 2013-03-11 2016-12-13 Kip Peli P1 Lp Systems and methods for image data compression
WO2014164909A1 (en) 2013-03-13 2014-10-09 Pelican Imaging Corporation Array camera architecture implementing quantum film sensors
US9106784B2 (en) 2013-03-13 2015-08-11 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US9519972B2 (en) 2013-03-13 2016-12-13 Kip Peli P1 Lp Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9124831B2 (en) 2013-03-13 2015-09-01 Pelican Imaging Corporation System and methods for calibration of an array camera
WO2014159779A1 (en) 2013-03-14 2014-10-02 Pelican Imaging Corporation Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
WO2014153098A1 (en) 2013-03-14 2014-09-25 Pelican Imaging Corporation Photometric normalization in array cameras
US9445003B1 (en) 2013-03-15 2016-09-13 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
WO2014145856A1 (en) 2013-03-15 2014-09-18 Pelican Imaging Corporation Systems and methods for stereo imaging with camera arrays
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
US9633442B2 (en) 2013-03-15 2017-04-25 Fotonation Cayman Limited Array cameras including an array camera module augmented with a separate camera
US10122993B2 (en) 2013-03-15 2018-11-06 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
WO2014150856A1 (en) 2013-03-15 2014-09-25 Pelican Imaging Corporation Array camera implementing quantum dot color filters
EP2990850B1 (en) * 2013-04-26 2020-09-16 Hamamatsu Photonics K.K. Image acquisition device and method and system for acquiring focusing information for specimen
US9383259B2 (en) * 2013-08-29 2016-07-05 Nokia Technologies Oy Method, apparatus and computer program product for sensing of visible spectrum and near infrared spectrum
US9898856B2 (en) 2013-09-27 2018-02-20 Fotonation Cayman Limited Systems and methods for depth-assisted perspective distortion correction
US9264592B2 (en) 2013-11-07 2016-02-16 Pelican Imaging Corporation Array camera modules incorporating independently aligned lens stacks
DE102013112553A1 (de) * 2013-11-14 2015-06-03 Odos Imaging Ltd. Illumination device and method for illuminating an object
WO2015074078A1 (en) 2013-11-18 2015-05-21 Pelican Imaging Corporation Estimating depth from projected texture using camera arrays
US9456134B2 (en) 2013-11-26 2016-09-27 Pelican Imaging Corporation Array camera configurations incorporating constituent array cameras and constituent cameras
DE102014201181A1 (de) * 2014-01-23 2015-07-23 Robert Bosch Gmbh Camera system, in particular for a vehicle, and method for determining image information of a temporally pulsed signal source
WO2015134996A1 (en) 2014-03-07 2015-09-11 Pelican Imaging Corporation System and methods for depth regularization and semiautomatic interactive matting using rgb-d images
US9247117B2 (en) 2014-04-07 2016-01-26 Pelican Imaging Corporation Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array
KR102073206B1 (ko) 2014-05-06 2020-02-04 Zakariya Niyazi Imaging system, method, and applications
US9521319B2 (en) 2014-06-18 2016-12-13 Pelican Imaging Corporation Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor
WO2016054089A1 (en) 2014-09-29 2016-04-07 Pelican Imaging Corporation Systems and methods for dynamic calibration of array cameras
CN104267265B (zh) * 2014-09-30 2017-02-08 Xinjiang Astronomical Observatory, Chinese Academy of Sciences Evaluation system and method based on electromagnetic radiation of radio astronomy instruments and equipment
US9628695B2 (en) * 2014-12-29 2017-04-18 Intel Corporation Method and system of lens shift correction for a camera array
US9432193B1 (en) * 2015-02-05 2016-08-30 Sensory, Incorporated Face-based authentication with situational adaptivity
US9723197B2 (en) 2015-03-31 2017-08-01 Sony Corporation Depth estimation from image defocus using multiple resolution Gaussian difference
US9942474B2 (en) 2015-04-17 2018-04-10 Fotonation Cayman Limited Systems and methods for performing high speed video capture and depth estimation using array cameras
US9906733B2 (en) * 2015-06-22 2018-02-27 The Johns Hopkins University Hardware and system for single-camera stereo range determination
US10048127B2 (en) 2015-08-05 2018-08-14 Viavi Solutions Inc. Optical filter and spectrometer
TW201716933A (zh) * 2015-11-06 2017-05-16 PixArt Imaging Inc. Optical navigation device with defocused-image compensation and compensation circuit thereof
US10412280B2 (en) 2016-02-10 2019-09-10 Microsoft Technology Licensing, Llc Camera with light valve over sensor array
US10341565B2 (en) 2016-05-10 2019-07-02 Raytheon Company Self correcting adaptive low light optical payload
US10306155B2 (en) * 2016-06-28 2019-05-28 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Compact, high resolution thermal infrared imager
JP6864819B2 (ja) * 2016-06-30 2021-04-28 Fujifilm Business Innovation Corp. Information processing apparatus and program
JP6662745B2 (ja) 2016-10-04 2020-03-11 Sony Interactive Entertainment Inc. Imaging device, information processing system, and polarized-image processing method
US10742894B2 (en) * 2017-08-11 2020-08-11 Ut-Battelle, Llc Optical array for high-quality imaging in harsh environments
US10482618B2 (en) 2017-08-21 2019-11-19 Fotonation Limited Systems and methods for hybrid depth regularization
EP3606032B1 (en) * 2018-07-30 2020-10-21 Axis AB Method and camera system combining views from plurality of cameras
US11393115B2 (en) * 2018-11-27 2022-07-19 Infineon Technologies Ag Filtering continuous-wave time-of-flight measurements, based on coded modulation images
US11073610B2 (en) * 2019-01-31 2021-07-27 International Business Machines Corporation Portable imager
CN114600165A (zh) 2019-09-17 2022-06-07 Boston Polarimetrics, Inc. Systems and methods for surface modeling using polarization cues
EP4042101A4 (en) 2019-10-07 2023-11-22 Boston Polarimetrics, Inc. SYSTEMS AND METHODS FOR DETECTING SURFACE NORMALS USING POLARIZATION
KR102558903B1 (ko) 2019-11-30 2023-07-24 Boston Polarimetrics, Inc. Systems and methods for transparent object segmentation using polarization cues
JP7462769B2 (ja) 2020-01-29 2024-04-05 Intrinsic Innovation LLC Systems and methods for characterizing object pose detection and measurement systems
CN115428028A (zh) 2020-01-30 2022-12-02 Intrinsic Innovation LLC Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images
EP4100784A4 (en) * 2020-02-06 2024-03-06 Valve Corporation POSITION TRACKING SYSTEM FOR HEAD-MOUNTED DISPLAY SYSTEMS
WO2021243088A1 (en) 2020-05-27 2021-12-02 Boston Polarimetrics, Inc. Multi-aperture polarization optical systems using beam splitters
US12020455B2 (en) 2021-03-10 2024-06-25 Intrinsic Innovation Llc Systems and methods for high dynamic range image reconstruction
US12069227B2 (en) 2021-03-10 2024-08-20 Intrinsic Innovation Llc Multi-modal and multi-spectral stereo camera arrays
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US12067746B2 (en) 2021-05-07 2024-08-20 Intrinsic Innovation Llc Systems and methods for using computer vision to pick up small objects
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers
CN114166348B (zh) * 2021-12-16 2023-09-19 Institute of Optics and Electronics, Chinese Academy of Sciences Fast polarization imaging method based on the full Stokes vector

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6133638A (ja) * 1984-07-26 1986-02-17 Olympus Optical Co., Ltd. Endoscope imaging apparatus
JPH01115808A (ja) * 1987-10-29 1989-05-09 Toshiba Corp Rare gas recovery apparatus
JP2883648B2 (ja) * 1988-10-03 1999-04-19 Olympus Optical Co., Ltd. Image input/output device
US6078701A (en) * 1997-08-01 2000-06-20 Sarnoff Corporation Method and apparatus for performing local to global multiframe alignment to construct mosaic images
JPH11155808A (ja) * 1997-11-27 1999-06-15 Olympus Optical Co Ltd Endoscope imaging apparatus
JPH11331686A (ja) * 1998-05-14 1999-11-30 Matsushita Electric Ind Co Ltd Image input device
GB2341029B (en) * 1998-08-29 2002-12-31 Marconi Gec Ltd Cameras
US6454410B1 (en) * 1999-10-20 2002-09-24 The Trustees Of The University Of Pennsylvania Mosaicing and enhancement of images for ophthalmic diagnosis and documentation
US6909459B2 (en) * 2002-08-21 2005-06-21 Alpha Innotech Corporation Method of and apparatus for extending signal ranges of digital images

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101534391B (zh) * 2008-02-05 2012-11-21 Fujifilm Corporation Imaging apparatus, imaging method, image processing apparatus, and image processing method
US10455220B2 (en) 2011-08-24 2019-10-22 Sony Corporation Image processing device, method of controlling image processing device and program causing computer to execute method
CN102957861A (zh) * 2011-08-24 2013-03-06 Sony Corporation Image processing device, control method therefor, and program
US9609308B2 (en) 2011-08-24 2017-03-28 Sony Corporation Image processing device, method of controlling image processing device and program causing computer to execute method
CN102538757A (zh) * 2011-11-08 2012-07-04 Kunming University of Science and Technology Method for observing a bright source and surrounding dim-source targets
US9571732B2 (en) 2013-03-21 2017-02-14 Lg Electronics Inc. Display device and method for controlling the same
CN104937653A (zh) * 2013-03-21 2015-09-23 LG Electronics Inc. Display device and control method thereof
CN105264881A (zh) * 2013-04-25 2016-01-20 Airbus Defence and Space Image acquisition accumulating the signals of adjacent photodetectors
CN105264881B (zh) * 2013-04-25 2019-01-18 Airbus Defence and Space Image acquisition accumulating the signals of adjacent photodetectors
US9464938B2 (en) 2014-02-06 2016-10-11 The Boeing Company Systems and methods for measuring polarization of light in images
CN104833424B (zh) * 2014-02-06 2017-10-24 The Boeing Company Systems and methods for measuring polarization of light in images
US10901197B2 (en) 2014-09-19 2021-01-26 Sony Corporation Medical observation device and lens barrel of medical observation device
CN108293093A (zh) * 2015-11-19 2018-07-17 Sony Corporation Image processing device and image processing method
CN108293093B (zh) * 2015-11-19 2020-11-03 Sony Corporation Image processing device and image processing method
WO2018018934A1 (zh) * 2016-07-29 2018-02-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method, apparatus, and terminal device for capturing high dynamic range images
US10616499B2 (en) 2016-07-29 2020-04-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for capturing high dynamic range image, and electronic device
US10623654B2 (en) 2016-07-29 2020-04-14 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for capturing high dynamic range image, and electronic device
CN107370962B (zh) * 2016-07-29 2019-10-15 Oppo Guangdong Mobile Telecommunications Corp., Ltd. High dynamic range image capturing method, apparatus, and terminal device
CN107370962A (zh) * 2016-07-29 2017-11-21 Guangdong Oppo Mobile Telecommunications Corp., Ltd. High dynamic range image capturing method, apparatus, and terminal device
CN110476118A (zh) * 2017-04-01 2019-11-19 SZ DJI Technology Co., Ltd. Low-profile multi-band hyperspectral imaging for machine vision
US10962858B2 (en) 2017-04-01 2021-03-30 SZ DJI Technology Co., Ltd. Low-profile multi-band hyperspectral imaging for machine vision
CN110476118B (zh) * 2017-04-01 2021-10-15 SZ DJI Technology Co., Ltd. Low-profile multi-band hyperspectral imaging for machine vision

Also Published As

Publication number Publication date
WO2002009424A2 (en) 2002-01-31
AU2001277964A1 (en) 2002-02-05
US7440637B2 (en) 2008-10-21
KR20030028553A (ko) 2003-04-08
WO2002009424A3 (en) 2002-06-06
JP2004505512A (ja) 2004-02-19
JP4015944B2 (ja) 2007-11-28
US20040056966A1 (en) 2004-03-25

Similar Documents

Publication Publication Date Title
CN1451230A (zh) Method and apparatus for image mosaicing
US9721344B2 (en) Multi-aperture depth map using partial blurring
US8269861B2 (en) Image acquisition apparatus which has a focus detection function and program recording device
US7583858B2 (en) Image processing based on direction of gravity
US9832432B2 (en) Control apparatus, image pickup apparatus, control method, and non-transitory computer-readable storage medium
Schechner et al. Generalized mosaicing: Wide field of view multispectral imaging
US9172853B2 (en) Microlens array architecture for avoiding ghosting in projected images
US7215493B2 (en) Imaging system with a lens having increased light collection efficiency and a deblurring equalizer
US7616842B2 (en) End-to-end design of electro-optic imaging systems with constrained digital filters
US9774880B2 (en) Depth-based video compression
US20060078215A1 (en) Image processing based on direction of gravity
US20110157387A1 (en) Method and apparatus for generating image data
EP2375755A1 (en) Apparatus for detecting direction of image pickup device and moving body comprising same
CN101065955A (zh) Imaging apparatus and method therefor
WO2018221224A1 (ja) Image processing device, image processing method, and image processing program
US20050053309A1 (en) Image processors and methods of image processing
CN107077722A (zh) Image capturing apparatus and image capturing method
CN100349542C (zh) Real-time automatic non-invasive iris optical imaging apparatus
JP7218722B2 (ja) Information processing apparatus, information processing method, and program
US10225537B2 (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
US20210127046A1 (en) Device, method and computer program
US7898591B2 (en) Method and apparatus for imaging using sensitivity coefficients
US11539875B1 (en) Image-focusing method and associated image sensor
Zhang et al. Image Acquisition Devices
Bostater et al. Mobile Platform Pushbroom Sensor Motion Control, Image Corrections and Spectral Band Selection: Examples of Hyperspectral Imagery Using Low Flying Aircraft and Small Vessels in Coastal Littoral Areas

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication