WO2021134713A1 - Infrared image processing method and device - Google Patents

Infrared image processing method and device

Info

Publication number
WO2021134713A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
temperature
temperature value
neighborhood
difference
Prior art date
Application number
PCT/CN2019/130968
Other languages
English (en)
French (fr)
Inventor
张青涛
庹伟
赵新涛
鄢蕾
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/130968 priority Critical patent/WO2021134713A1/zh
Publication of WO2021134713A1 publication Critical patent/WO2021134713A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image

Definitions

  • This application relates to the field of infrared image processing, and in particular to an infrared image processing method and device.
  • Dead pixels can be divided into static dead pixels and dynamic dead pixels.
  • In the prior art, the recognition accuracy for dynamic dead pixels is low, and real imaged objects of a single pixel or 2-3 pixels are easily damaged by being mistaken for dead pixels, resulting in low infrared imaging quality, which negatively affects the remote discovery and detection capability of infrared systems.
  • the embodiments of the present application provide an infrared image processing method and device, which can improve the accuracy of identifying dynamic dead pixels.
  • In a first aspect, an embodiment of the present application provides an infrared image processing method, including: determining a first neighborhood of a first pixel in an infrared image, where the number of pixels included in the first neighborhood is a preset value; determining, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel; and determining a final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel, where the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of one or more pixels in a second neighborhood of the first pixel.
  • The infrared image processing method provided by the embodiments of the application determines the probability that a pixel is a dead pixel by means of a soft-threshold judgment, which effectively avoids judging other pixels around that pixel as dead pixels and improves the accuracy of identifying dynamic dead pixels. While removing dead pixels, it also preserves the remote discovery and detection capability of the infrared system.
  • In a second aspect, an embodiment of the present application provides an infrared image processing device, including a memory and a processor. The memory is used to store program instructions, and the processor is used to call the program instructions in the memory to make the device execute: determining a first neighborhood of a first pixel in an infrared image, where the number of pixels included in the first neighborhood is a preset value; determining, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel; and determining a final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel, where the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of one or more pixels in a second neighborhood of the first pixel.
  • The infrared image processing device provided by the embodiments of the application determines the probability that a pixel is a dead pixel by means of a soft-threshold judgment, which effectively avoids judging other pixels around that pixel as dead pixels and improves the accuracy of identifying dynamic dead pixels. While removing dead pixels, it also preserves the remote discovery and detection capability of the infrared system.
  • In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method provided in the first aspect of the embodiments of the present application is implemented.
  • FIG. 1 is a schematic diagram of the architecture of an infrared imaging system provided by an embodiment of the application
  • FIG. 2 is a schematic flowchart of an infrared image processing method provided by an embodiment of the application
  • FIG. 3 is a schematic diagram of a first neighborhood provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram of another first neighborhood provided by an embodiment of this application.
  • FIG. 5 is a schematic diagram of a second neighborhood provided by an embodiment of this application.
  • FIG. 6 is a schematic diagram of a mapping curve between temperature error and dead pixel probability according to an embodiment of the application
  • FIG. 7 is a schematic structural diagram of an infrared image processing device provided by an embodiment of the application.
  • Static dead pixels can be identified by response rate: pixels whose response rate falls outside a given range are static dead pixels. For example, pixels with a response rate within the range [X, Y] are non-static dead pixels, while pixels with a response rate greater than Y or less than X are static dead pixels, where, for example, X equals 0.5 and Y equals 1.
  • A dynamic dead pixel displays normally within a certain range, but beyond that range its brightness value is higher than that of the surrounding pixels. This is related to the temperature and gain of the sensor: as the sensor temperature or gain increases, dynamic dead pixels become more obvious. Pixels whose response rate is greater than a threshold are dynamic dead pixels; for example, pixels that appear in some image frames but not in others and whose response rate is greater than a threshold are dynamic dead pixels. Dead pixels newly generated during the user's use after the infrared sensor leaves the factory are also dynamic dead pixels.
  • the dead pixels involved in the infrared image processing method provided in the following embodiments of the present application may be dynamic dead pixels.
  • Fig. 1 shows a schematic structural diagram of an infrared imaging system provided by an embodiment of the present invention.
  • the system includes an infrared camera 10 and an infrared display device 20.
  • the infrared photographing device 10 may include an infrared sensor for acquiring infrared images.
  • the infrared display device 20 may be used to receive the infrared image sent by the infrared camera 10 and display the infrared image.
  • the infrared camera 10 may be mounted on a movable device (including but not limited to aircraft, boats, automobiles, etc., the aircraft 30 is taken as an example in this application). Specifically, it can be mounted on the gimbal of the aircraft 30 to complete the aerial photography mission of the corresponding target during the flight of the aircraft 30. Possibly, when the infrared photographing device 10 is mounted on an aircraft, the infrared display device 20 may be a ground control device of the aircraft 30.
  • The ground control device of the aircraft 30 can establish a communication connection with the aircraft 30 through a wireless connection (for example, a wireless connection based on WIFI or radio frequency communication), which is used to control the flight trajectory of the aircraft 30 and to receive the infrared images sent by the infrared camera 10.
  • the ground control equipment of the aircraft 30 may include a display screen for displaying the infrared image sent by the infrared camera 10.
  • In this case, the infrared camera 10 is used to capture infrared images/videos, and the aircraft 30 is used to implement the infrared image processing process provided in this application and to send the processed infrared images/videos to the ground control device for display.
  • Alternatively, the infrared camera 10 may be used to capture infrared images/videos, and the aircraft 30 may be used to send the infrared images/videos captured by the infrared camera 10 to the ground control device, which then implements the infrared image processing process provided in this application and displays the images.
  • the ground control device of the above-mentioned aircraft 30 may be a controller with a joystick, or may be a smart device such as a smart phone or a tablet computer.
  • the infrared photographing device 10 can be integrated with the infrared display device 20, that is, a device can realize the function of photographing infrared images and the function of displaying infrared images.
  • the infrared camera includes both the infrared photographing device 10 and the infrared display device 20.
  • the infrared camera is used to complete the infrared image/video shooting and the infrared image processing process provided in this application, and display the processed infrared image/video.
  • the infrared image processing method may include the following steps:
  • S201 Determine the first neighborhood of the first pixel in the infrared image.
  • the number of pixels included in the first neighborhood is a preset value.
  • the preset value may be M*N, for example.
  • M and N are both positive integers.
  • the first neighborhood may include, but is not limited to, 3*3 pixels, 3*5 pixels, 9*8 pixels, 11*11 pixels, and so on.
  • The first neighborhood may be centered on the first pixel P_{i,j}, as shown in FIG. 3.
  • The first neighborhood may also not be centered on the first pixel P_{i,j}, as shown in FIG. 4.
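  • As an informal illustration of how S201 might look in code (not part of the patent disclosure), the following Python/NumPy sketch selects an M*N first neighborhood around a pixel; the function name first_neighborhood, the offset parameter used to produce a non-centered window as in FIG. 4, and the clamping of the window at the image borders are all assumptions made for this example.

```python
import numpy as np

def first_neighborhood(temp, i, j, m=3, n=3, offset=(0, 0)):
    """Return an m*n window of the 2-D temperature map `temp` around
    pixel (i, j). `offset` shifts the window so that it need not be
    centered on the pixel (cf. FIG. 3 vs FIG. 4). The window is clamped
    at the image borders (border handling is an assumption; the patent
    does not specify it)."""
    di, dj = offset
    top = min(max(i - m // 2 + di, 0), temp.shape[0] - m)
    left = min(max(j - n // 2 + dj, 0), temp.shape[1] - n)
    return temp[top:top + m, left:left + n]

# Example: a 3*3 neighborhood centered on pixel (5, 5) of a random frame.
temps = np.random.uniform(20.0, 35.0, size=(16, 16)).astype(np.float32)
window = first_neighborhood(temps, 5, 5)
```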
  • S202 Determine the probability that the first pixel is a dead pixel according to the temperature value of the pixel in the first neighborhood.
  • S2021 Determine the allowable temperature difference according to the temperature values of the pixel points in the first neighborhood.
  • S2022 Determine the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference.
  • the regional average temperature value can be determined according to the temperature value of the pixel points in the first neighborhood, and then the allowable temperature difference can be determined according to the regional average temperature value and/or the regional temperature difference in the first neighborhood.
  • the regional average temperature value may be the average value of the temperature values of all pixels in the first neighborhood.
  • the regional temperature difference of the first neighborhood is the difference between the highest temperature value and the lowest temperature value of all pixels in the first neighborhood.
  • For example, if the highest temperature value among all pixels in the first neighborhood is T1 and the lowest temperature value is T2, the regional temperature difference of the first neighborhood is T1 - T2.
  • the regional temperature difference of the first neighborhood is the difference between the highest temperature value and the lowest temperature value within the preset temperature range.
  • For example, if the preset temperature range is (a, b), and within this range the highest temperature value of the pixels in the first neighborhood is b1 and the lowest temperature value is a1, then the regional temperature difference of the first neighborhood is b1 - a1.
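  • A rough sketch of how the two regional statistics described above could be computed follows; the helper name regional_stats and the optional preset_range argument (implementing the second variant of the regional temperature difference) are assumptions made for illustration.

```python
import numpy as np

def regional_stats(window, preset_range=None):
    """Return (regional average temperature, regional temperature
    difference) for a first-neighborhood window. With preset_range =
    (a, b), the regional temperature difference only considers pixels
    whose temperature lies inside (a, b); otherwise it is taken over
    all pixels in the window."""
    region_mean = float(np.mean(window))
    values = window
    if preset_range is not None:
        a, b = preset_range
        values = window[(window > a) & (window < b)]
    region_diff = float(values.max() - values.min()) if values.size else 0.0
    return region_mean, region_diff
```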
  • the allowable temperature difference can be determined based on the regional average temperature value and the regional temperature difference of the first neighborhood.
  • the temperature noise can be determined according to the regional average temperature value and the preset temperature-noise model.
  • the temperature-noise model is a model obtained based on the temperature of multiple known noises.
  • the allowable temperature difference can be determined based on the temperature noise and the regional temperature difference.
  • Specifically, the allowable temperature difference is determined as the sum of the product of the first coefficient and the temperature noise and the product of the second coefficient and the regional temperature difference.
  • the first coefficient is obtained according to the noise distribution law of the infrared system, and the introduction of the first coefficient can enable the noise of the infrared system to accurately act on the probability of dead pixels.
  • The second coefficient is determined by the performance of the infrared system. The introduction of the second coefficient allows the regional temperature difference permitted by the infrared system to act on the dead-pixel probability, reduces misjudgment of real single-pixel (or 2-3-pixel) objects, and prevents real objects from being removed as dead pixels, ensuring that the discovery and detection capability of the infrared system is not affected.
  • Specifically: allowable temperature difference = k0 * temperature noise + k1 * regional temperature difference, where k0 is the first coefficient and k1 is the second coefficient.
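  • A minimal numeric sketch of this rule is shown below. The linear temperature_noise function is only a stand-in, since the description states merely that the temperature-noise model is obtained from known noise measurements, and the default values of k0 and k1 are illustrative picks from the [1, 4] range mentioned later.

```python
def temperature_noise(region_mean):
    """Placeholder temperature-noise model: noise that grows mildly with
    the regional average temperature. In practice this model would be
    fitted from temperatures and noise measured during sensor production."""
    return 0.05 + 0.002 * region_mean

def allowable_temp_diff(region_mean, region_diff, k0=2.0, k1=1.5):
    """Allowable temperature difference = k0 * temperature noise
    + k1 * regional temperature difference."""
    return k0 * temperature_noise(region_mean) + k1 * region_diff
```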
  • the allowable temperature difference can be determined based on the regional average temperature value.
  • the temperature noise can be determined according to the regional average temperature value and a preset temperature-noise model, where the temperature-noise model is a model obtained based on the temperature of multiple known noises.
  • the allowable temperature difference can then be determined based on the temperature noise.
  • the allowable temperature difference is based on the product of the first coefficient and the temperature noise.
  • the first coefficient is obtained according to the noise distribution law of the infrared system.
  • Specifically: allowable temperature difference = k0 * temperature noise, where k0 is the first coefficient.
  • Possibly, the allowable temperature difference can be determined based on the regional temperature difference of the first neighborhood alone, for example as the product of the second coefficient and the regional temperature difference: allowable temperature difference = k1 * regional temperature difference, where k1 is the second coefficient, which may be determined by the performance of the infrared system.
  • the above-mentioned temperature-noise model may be a model obtained by training the temperature measured during the production of the infrared sensor and the noise corresponding to the temperature.
  • the values of the above-mentioned first coefficient k0 and second coefficient k1 determine the degree of misjudgment of dead pixels.
  • the allowable error can be calibrated in advance to determine the values of the first coefficient k0 and the second coefficient k1.
  • the value range of the first coefficient k0 and the second coefficient k1 is, for example, but not limited to [1, 4].
  • After the allowable temperature difference is determined, the probability that the first pixel is a dead pixel can be determined according to the allowable temperature difference and the temperature difference of the first pixel.
  • the temperature difference of the first pixel point is the difference between the initial temperature value of the first pixel point and the average temperature value of the area.
  • the probability that the first pixel is a dead pixel can be determined according to the difference between the temperature difference of the first pixel and the allowable temperature difference.
  • the difference between the temperature difference of the first pixel and the allowable temperature difference can be referred to as a temperature error.
  • the probability that the first pixel is a dead pixel can be determined through the mapping relationship.
  • the mapping relationship is used to characterize the corresponding relationship between different temperature errors and the probability of dead pixels.
  • the mapping relationship can be specifically represented by a curve as shown in FIG. 6.
  • the curve can be a mapping curve of different temperature errors and probabilities. After the temperature error is determined, the probability corresponding to the temperature error can be found according to the curve, that is, the probability that the first pixel is a dead pixel.
  • The mapping relationship is not limited to the mapping curve shown in FIG. 6; it may also be expressed in other forms, such as a mapping line or another curve, which is not limited in the embodiments of the present application.
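  • The exact shape of the curve in FIG. 6 is not disclosed here; purely as an illustration of a soft-threshold mapping, the sketch below uses a clipped linear ramp from temperature error to dead-pixel probability. The ramp end-points err_lo and err_hi, and the use of the absolute temperature difference, are assumptions.

```python
def dead_pixel_probability(pixel_temp, region_mean, allowed_diff,
                           err_lo=0.0, err_hi=1.0):
    """Map the temperature error to a dead-pixel probability.
    The first-pixel temperature difference is taken here as the absolute
    difference between the pixel's initial temperature and the regional
    average (an assumption); the temperature error is that difference
    minus the allowable temperature difference. Errors at or below
    err_lo map to probability 0, errors at or above err_hi map to 1,
    and errors in between are mapped linearly (illustrative curve only)."""
    pixel_diff = abs(pixel_temp - region_mean)
    temp_error = pixel_diff - allowed_diff
    if temp_error <= err_lo:
        return 0.0
    if temp_error >= err_hi:
        return 1.0
    return (temp_error - err_lo) / (err_hi - err_lo)
```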
  • S203 Determine the final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and the reconstructed temperature value of the first pixel.
  • the reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature value of one or more pixels in the second neighborhood of the first pixel.
  • the second neighborhood can be the same as the first neighborhood, or different from the first neighborhood.
  • the second neighborhood may be centered on the first pixel point, or may not be centered on the first pixel point.
  • Fig. 5 exemplarily shows a second neighborhood.
  • The final temperature value of the first pixel is the sum of the following two terms: the product of the reconstructed temperature value and the probability that the first pixel is a dead pixel, and the product of the initial temperature value of the first pixel and the probability that the first pixel is not a dead pixel. The probability that the first pixel is a dead pixel and the probability that it is not a dead pixel sum to 1.
  • Specifically, T = T_reconstructed * P_dead + T_initial * (1 - P_dead), where T is the final temperature value of the first pixel, T_reconstructed is the reconstructed temperature value of the first pixel, T_initial is the initial temperature value of the first pixel, and P_dead is the probability that the first pixel is a dead pixel.
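  • In code form the blend of S203 is a direct transcription of the formula above:

```python
def final_temperature(t_initial, t_reconstructed, p_dead):
    """Final temperature value: T = T_reconstructed * P_dead
    + T_initial * (1 - P_dead)."""
    return t_reconstructed * p_dead + t_initial * (1.0 - p_dead)
```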
  • the reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature value of a pixel in the second neighborhood, and the reconstructed temperature value of the first pixel is the temperature value of the aforementioned one pixel.
  • For example, if the reconstructed temperature value of the first pixel P_{i,j} is obtained by interpolation from the temperature value of the pixel P_{i,j-1} in the second neighborhood, then the reconstructed temperature value of the first pixel P_{i,j} is the temperature value of the pixel P_{i,j-1}.
  • Possibly, the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of multiple pixels in the second neighborhood, and the reconstructed temperature value of the first pixel is the average value or weighted average of the temperature values of the multiple pixels.
  • For example, if the reconstructed temperature value of the first pixel P_{i,j} is obtained by interpolation from the temperature values of the multiple pixels P_{i,j-2}, P_{i,j-1}, P_{i,j+1} and P_{i,j+2} in the second neighborhood, the reconstructed temperature value of the first pixel may be the average of the temperature values of these four neighboring pixels; or, if the four neighboring pixels carry different weights, the reconstructed temperature value of the first pixel is the weighted sum of their temperature values. It can be seen that the weights of the four neighboring pixels sum to 1.
  • Not limited to the four neighboring pixels listed above, in a specific implementation the reconstructed temperature value of the first pixel can also be obtained by interpolating the temperature values of other pixels in the second neighborhood, which is not limited in the embodiments of the present application. For example, it may be determined from the image content that the first pixel lies on an edge or contour of the image, and the direction of interpolation may then follow that contour or edge; that is, the reconstructed temperature value of the first pixel may be obtained by interpolating the temperature values of other pixels in the second neighborhood that lie on the contour or edge.
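  • A sketch of the reconstruction step under the simple horizontal-neighbor variant described above follows; the equal default weights and the helper name are assumptions, border handling is omitted, and contour- or edge-guided interpolation is not shown.

```python
import numpy as np

def reconstruct_temperature(temp, i, j, weights=(0.25, 0.25, 0.25, 0.25)):
    """Reconstruct the temperature of pixel (i, j) as a weighted mean of
    the horizontal neighbors P[i, j-2], P[i, j-1], P[i, j+1], P[i, j+2]
    in the second neighborhood; the weights are normalized to sum to 1.
    Assumes 2 <= j < width - 2 (no border handling)."""
    neighbors = np.array([temp[i, j - 2], temp[i, j - 1],
                          temp[i, j + 1], temp[i, j + 2]], dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), neighbors))
```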
  • By introducing the regional temperature difference, the temperature-difference proportional coefficient of the infrared system (i.e. the second coefficient k1), the temperature-noise model, and the noise proportional coefficient of the infrared system (i.e. the first coefficient k0), combined with a soft-threshold judgment, the infrared image processing method provided by the embodiments of the application can more accurately determine the probability that the current pixel is a dead pixel and effectively reduce the side effect of judging a single-pixel object (or a 2-3-pixel object) as a dead pixel; while removing dead pixels, it does not reduce the remote discovery and detection capability of the infrared system.
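  • As a closing illustration, the steps above can be chained for a single pixel. This reuses the illustrative helpers sketched earlier (first_neighborhood, regional_stats, allowable_temp_diff, dead_pixel_probability, reconstruct_temperature, final_temperature); the preset temperature range, coefficient values and mapping-curve shape are all assumptions rather than values taken from the patent.

```python
import numpy as np

def correct_pixel(temp, i, j, k0=2.0, k1=1.5, preset_range=(0.0, 50.0)):
    """Run the S201-S203 chain for one pixel (i, j) of the temperature
    map `temp` and return its final temperature value."""
    window = first_neighborhood(temp, i, j, 3, 3)                      # S201
    region_mean, region_diff = regional_stats(window, preset_range)
    allowed = allowable_temp_diff(region_mean, region_diff, k0, k1)
    p_dead = dead_pixel_probability(temp[i, j], region_mean, allowed)  # S202
    t_rec = reconstruct_temperature(temp, i, j)
    return final_temperature(temp[i, j], t_rec, p_dead)                # S203

# Toy demonstration: a flat 25 degC scene with one injected outlier.
frame = np.full((9, 9), 25.0, dtype=np.float32)
frame[4, 4] = 60.0                        # simulated dynamic dead pixel
corrected = correct_pixel(frame, 4, 4)    # -> 25.0 with these illustrative settings
```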
  • FIG. 7 shows a schematic structural diagram of an infrared image processing apparatus provided by an embodiment of the present application.
  • the infrared image processing apparatus 70 may include: at least one processor 701, such as a CPU, a memory 703, and at least one communication bus 702. Among them, the communication bus 702 is used to implement connection and communication between these components.
  • The memory 703 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory.
  • The memory 703 includes flash memory in the embodiments of the present invention.
  • the memory 703 may also be at least one storage system located far away from the foregoing processor 701.
  • the infrared image processing device 70 is the infrared photographing device 10, and the infrared image processing device 70 may also include an infrared sensor and at least one network interface. Among them, infrared sensors can be used to obtain infrared images.
  • the network interface can optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and a communication connection with the infrared display device 20 can be established through the network interface.
  • the infrared processing device 70 is an infrared display device 20, and the infrared processing device 70 may also include a display screen and a network interface.
  • the display screen can be used to display infrared images.
  • the network interface can optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and a communication connection with the infrared camera 10 can be established through the network interface.
  • the infrared processing device 70 may also include a user interface for receiving user operations on the infrared image by the user, or for controlling the infrared photographing device 10.
  • The user interface may include a touch screen, a keyboard or a mouse, a joystick, a physical button, and so on.
  • the network interface can be connected to a receiver, transmitter or other communication module.
  • Other communication modules can include but are not limited to WiFi module, Bluetooth module, etc.
  • It can be understood that the infrared image processing device 70 in the embodiments of the present invention can also include a receiver, a transmitter, and other communication modules.
  • the infrared processing device 70 is an integration of the infrared photographing device 10 and the infrared display device 20, and the infrared processing device 70 may also include an infrared sensor and a display screen. Among them, infrared sensors can be used to obtain infrared images. The display screen can be used to display infrared images.
  • the infrared processing device 70 may further include a user interface for receiving user operations performed by the user on the infrared image.
  • the processor 701 may be used to call program instructions stored in the memory 703 and perform the following operations:
  • Determine the first neighborhood of the first pixel in the infrared image, where the number of pixels included in the first neighborhood is a preset value; determine the probability that the first pixel is a dead pixel according to the temperature values of the pixels in the first neighborhood; and determine the final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and the reconstructed temperature value of the first pixel, where the reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature values of one or more pixels in the second neighborhood of the first pixel.
  • In some possible embodiments, when the device determines the probability that the first pixel is a dead pixel according to the temperature values of the pixels in the first neighborhood, it specifically executes: determining the allowable temperature difference according to the temperature values of the pixels in the first neighborhood; and determining the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference.
  • In some possible embodiments, when the device determines the allowable temperature difference according to the temperature values of the pixels in the first neighborhood, it specifically executes: determining the regional average temperature value according to the temperature values of the pixels in the first neighborhood; and determining the allowable temperature difference according to the regional average temperature value and/or the regional temperature difference in the first neighborhood.
  • the regional temperature difference is the difference between the highest temperature value and the lowest temperature value of all pixels in the first neighborhood.
  • the regional temperature difference is the difference between the highest temperature value and the lowest temperature value within a preset temperature range.
  • In some possible embodiments, when the device determines the allowable temperature difference according to the regional average temperature value and/or the regional temperature difference, it specifically executes: determining the temperature noise according to the regional average temperature value and a preset temperature-noise model, where the temperature-noise model is a model obtained according to the temperatures of a plurality of known noises; and determining the allowable temperature difference according to the temperature noise and the regional temperature difference.
  • The allowable temperature difference is determined as the sum of the product of the first coefficient and the temperature noise and the product of the second coefficient and the regional temperature difference, where the first coefficient is obtained according to the noise distribution law of the infrared system and the second coefficient is determined by the performance of the infrared system.
  • In some possible embodiments, when the device determines the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference, it specifically executes: calculating the difference between the initial temperature value of the first pixel and the regional average temperature value, this difference being the first-pixel temperature difference; and determining the probability that the first pixel is a dead pixel according to the difference between the first-pixel temperature difference and the allowable temperature difference.
  • The final temperature value of the first pixel is the sum of the following two terms: the product of the reconstructed temperature value and the probability that the first pixel is a dead pixel, and the product of the initial temperature value of the first pixel and the probability that the first pixel is not a dead pixel; the sum of the probability that the first pixel is a dead pixel and the probability that the first pixel is not a dead pixel is 1.
  • The reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature value of one pixel in the second neighborhood, and the reconstructed temperature value is the temperature value of that one pixel; or the reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature values of multiple pixels in the second neighborhood, and the reconstructed temperature value is the average value or weighted average of the temperature values of the multiple pixels.
  • the first neighborhood is different from the second neighborhood.
  • the first neighborhood is the same as the second neighborhood.
  • the first neighborhood is centered on the first pixel.
  • It can be understood that the functions of the infrared image processing device 70 in this embodiment can be specifically implemented according to the methods in the foregoing method embodiments, and details are not described herein again.
  • A person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and when executed, it may include the procedures of the above-mentioned method embodiments.
  • The storage medium can be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like. In the case of no conflict, the technical features in this embodiment and the implementations can be combined arbitrarily.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An infrared image processing method and device. The method includes: determining a first neighborhood of a first pixel in an infrared image (S201), where the number of pixels included in the first neighborhood is a preset value; determining, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel (S202); and determining a final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel (S203), where the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of one or more pixels in a second neighborhood of the first pixel. The method can improve the accuracy of identifying dynamic dead pixels.

Description

Infrared image processing method and device
Technical Field
This application relates to the field of infrared image processing, and in particular to an infrared image processing method and device.
Background
During manufacturing, dead pixels arise in an infrared sensor due to dust, the manufacturing process and other factors. Dead pixels can be divided into static dead pixels and dynamic dead pixels.
In the prior art, the recognition accuracy for dynamic dead pixels is low; real imaged objects of a single pixel or 2-3 pixels are easily damaged by being mistaken for dead pixels, which lowers infrared imaging quality and negatively affects the remote discovery and detection capability of infrared systems.
Summary
The embodiments of this application provide an infrared image processing method and device, which can improve the accuracy of identifying dynamic dead pixels.
In a first aspect, an embodiment of this application provides an infrared image processing method, including: determining a first neighborhood of a first pixel in an infrared image, where the number of pixels included in the first neighborhood is a preset value; determining, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel; and determining a final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel, where the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of one or more pixels in a second neighborhood of the first pixel.
The infrared image processing method provided by the embodiments of this application determines the probability that a pixel is a dead pixel by means of a soft-threshold judgment, which effectively avoids judging other pixels around that pixel as dead pixels and improves the accuracy of identifying dynamic dead pixels. While removing dead pixels, it also preserves the remote discovery and detection capability of the infrared system.
In a second aspect, an embodiment of this application provides an infrared image processing device, including a memory and a processor, where the memory is used to store program instructions and the processor is used to call the program instructions in the memory to make the device execute: determining a first neighborhood of a first pixel in an infrared image, where the number of pixels included in the first neighborhood is a preset value; determining, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel; and determining a final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel, where the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of one or more pixels in a second neighborhood of the first pixel.
The infrared image processing device provided by the embodiments of this application determines the probability that a pixel is a dead pixel by means of a soft-threshold judgment, which effectively avoids judging other pixels around that pixel as dead pixels and improves the accuracy of identifying dynamic dead pixels. While removing dead pixels, it also preserves the remote discovery and detection capability of the infrared system.
In a third aspect, an embodiment of this application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method provided in the first aspect of the embodiments of this application is implemented.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below.
FIG. 1 is a schematic architecture diagram of an infrared imaging system provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of an infrared image processing method provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a first neighborhood provided by an embodiment of this application;
FIG. 4 is a schematic diagram of another first neighborhood provided by an embodiment of this application;
FIG. 5 is a schematic diagram of a second neighborhood provided by an embodiment of this application;
FIG. 6 is a schematic diagram of a mapping curve between temperature error and dead-pixel probability provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of an infrared image processing device provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention.
First, two concepts involved in the embodiments of this application are introduced:
Static dead pixels: they can be identified by response rate. For example, pixels whose response rate falls outside a given range are static dead pixels; pixels with a response rate within the range [X, Y] are non-static dead pixels, while pixels with a response rate greater than Y or less than X are static dead pixels. For example, X equals 0.5 and Y equals 1.
Dynamic dead pixels: within a certain range a dynamic dead pixel displays normally, but beyond that range its brightness value is higher than that of the surrounding pixels. This is related to the temperature and gain of the sensor: as the sensor temperature or gain increases, dynamic dead pixels become more obvious. Pixels whose response rate is greater than a threshold are dynamic dead pixels; for example, pixels that appear in some image frames but not in others and whose response rate is greater than a threshold are dynamic dead pixels. Dead pixels newly generated during the user's use after the infrared sensor leaves the factory are also dynamic dead pixels.
The dead pixels involved in the infrared image processing methods provided in the following embodiments of this application may be dynamic dead pixels.
FIG. 1 shows a schematic architecture diagram of an infrared imaging system provided by an embodiment of the present invention. The system includes an infrared photographing device 10 and an infrared display device 20. The infrared photographing device 10 may include an infrared sensor for acquiring infrared images. The infrared display device 20 may be used to receive the infrared images sent by the infrared photographing device 10 and to display them.
In some possible embodiments, the infrared photographing device 10 may be mounted on a movable device (including but not limited to an aircraft, a boat, a car, and so on; this application takes the aircraft 30 as an example). Specifically, it may be mounted on the gimbal of the aircraft 30 to complete aerial photography tasks of corresponding targets while the aircraft 30 is in flight. Possibly, when the infrared photographing device 10 is mounted on an aircraft, the infrared display device 20 may be the ground control device of the aircraft 30. The ground control device of the aircraft 30 can establish a communication connection through a wireless connection (for example, a wireless connection based on WIFI or radio frequency communication), which is used to control the flight trajectory of the aircraft 30 and to receive the infrared images sent by the infrared photographing device 10. The ground control device of the aircraft 30 may include a display screen for displaying the infrared images sent by the infrared photographing device 10. In this case, the infrared photographing device 10 is used to capture infrared images/videos, and the aircraft 30 is used to implement the infrared image processing process provided in this application and to send the processed infrared images/videos to the ground control device for display. Alternatively, the infrared photographing device 10 may be used to capture infrared images/videos, and the aircraft 30 may be used to send the infrared images/videos captured by the infrared photographing device 10 to the ground control device, which then implements the infrared image processing process provided in this application and displays the images.
The ground control device of the aircraft 30 may be a controller with a joystick, or a smart device such as a smartphone or a tablet computer.
In some other possible embodiments, the infrared photographing device 10 may be integrated with the infrared display device 20, that is, a single device can both capture infrared images and display infrared images. For example, an infrared camera includes both the infrared photographing device 10 and the infrared display device 20. In this case, the infrared camera is used to capture infrared images/videos, to perform the infrared image processing process provided in this application, and to display the processed infrared images/videos.
Next, the infrared image processing method provided by the embodiments of this application is introduced with reference to the architecture of the infrared imaging system shown in FIG. 1.
As shown in FIG. 2, the infrared image processing method may include the following steps:
S201: Determine a first neighborhood of a first pixel in an infrared image.
Specifically, the number of pixels included in the first neighborhood is a preset value. The preset value may be, for example, M*N, where M and N are both positive integers. The first neighborhood may include, for example but not limited to, 3*3 pixels, 3*5 pixels, 9*8 pixels, 11*11 pixels, and so on.
Possibly, the first neighborhood may be centered on the first pixel P_{i,j}, as shown in FIG. 3.
Possibly, the first neighborhood may also not be centered on the first pixel P_{i,j}, as shown in FIG. 4.
S202: Determine, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel.
Specifically, S202 can be subdivided into the following steps:
S2021: Determine an allowable temperature difference according to the temperature values of the pixels in the first neighborhood.
S2022: Determine the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference.
Next, how to determine the allowable temperature difference is introduced.
Specifically, a regional average temperature value can first be determined according to the temperature values of the pixels in the first neighborhood, and the allowable temperature difference can then be determined according to the regional average temperature value and/or the regional temperature difference of the first neighborhood. The regional average temperature value may be the average of the temperature values of all pixels in the first neighborhood.
Possibly, the regional temperature difference of the first neighborhood is the difference between the highest and lowest temperature values among all pixels in the first neighborhood.
For example, if the highest temperature value of all pixels in the first neighborhood is T1 and the lowest temperature value is T2, the regional temperature difference of the first neighborhood is T1 - T2.
Possibly, the regional temperature difference of the first neighborhood is the difference between the highest and lowest temperature values within a preset temperature range.
For example, if the preset temperature range is (a, b), and within this range the highest temperature value of the pixels in the first neighborhood is b1 and the lowest temperature value is a1, then the regional temperature difference of the first neighborhood is b1 - a1.
Possibly, the allowable temperature difference can be determined according to the regional average temperature value and the regional temperature difference of the first neighborhood.
Specifically, a temperature noise can first be determined according to the regional average temperature value and a preset temperature-noise model, where the temperature-noise model is a model obtained according to the temperatures of a plurality of known noises. The allowable temperature difference can then be determined according to the temperature noise and the regional temperature difference.
Specifically, the allowable temperature difference is determined as the sum of the product of a first coefficient and the temperature noise and the product of a second coefficient and the regional temperature difference. The first coefficient is obtained according to the noise distribution law of the infrared system; introducing it allows the noise of the infrared system to act accurately on the dead-pixel probability judgment. The second coefficient is determined by the performance of the infrared system; introducing it allows the regional temperature difference permitted by the infrared system to act on the dead-pixel probability judgment, reduces misjudgment of real single-pixel (or 2-3-pixel) objects, prevents real objects from being removed as dead pixels, and ensures that the discovery and detection capability of the infrared system is not affected.
Specifically: allowable temperature difference = k0 * temperature noise + k1 * regional temperature difference, where k0 is the first coefficient and k1 is the second coefficient.
Possibly, the allowable temperature difference can be determined according to the regional average temperature value.
Specifically, the temperature noise can be determined according to the regional average temperature value and a preset temperature-noise model, where the temperature-noise model is a model obtained according to the temperatures of a plurality of known noises. The allowable temperature difference can then be determined according to the temperature noise.
Specifically, the allowable temperature difference is determined as the product of the first coefficient and the temperature noise, where the first coefficient is obtained according to the noise distribution law of the infrared system.
Specifically: allowable temperature difference = k0 * temperature noise, where k0 is the first coefficient.
Possibly, the allowable temperature difference can be determined according to the regional temperature difference of the first neighborhood.
Optionally, the allowable temperature difference can also be determined as the product of the second coefficient and the regional temperature difference of the first neighborhood, for example: allowable temperature difference = k1 * regional temperature difference, where k1 is the second coefficient, which may be determined by the performance of the infrared system.
The temperature-noise model may be a model obtained by training on the temperatures measured during production of the infrared sensor and the noise corresponding to those temperatures.
The values of the first coefficient k0 and the second coefficient k1 determine the degree of dead-pixel misjudgment. Specifically, the acceptable error can be calibrated in advance to determine the values of the first coefficient k0 and the second coefficient k1. For example, the value range of the first coefficient k0 and the second coefficient k1 is, for example but not limited to, [1, 4].
After the allowable temperature difference is determined, the probability that the first pixel is a dead pixel can be determined according to the allowable temperature difference and the first-pixel temperature difference, where the first-pixel temperature difference is the difference between the initial temperature value of the first pixel and the regional average temperature value.
Further, the probability that the first pixel is a dead pixel can be determined according to the difference between the first-pixel temperature difference and the allowable temperature difference. The difference between the first-pixel temperature difference and the allowable temperature difference can be called the temperature error.
Specifically, the probability that the first pixel is a dead pixel can be determined through a mapping relationship. The mapping relationship is used to characterize the correspondence between different temperature errors and dead-pixel probabilities, and can be represented by a curve such as the one shown in FIG. 6. The curve can be a mapping curve between different temperature errors and probabilities. After the temperature error is determined, the probability corresponding to that temperature error, that is, the probability that the first pixel is a dead pixel, can be found from the curve.
The mapping relationship is not limited to the mapping curve shown in FIG. 6; it may also be expressed in other forms, such as a mapping line or another curve, which is not limited in the embodiments of this application.
S203: Determine a final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel.
The reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of one or more pixels in a second neighborhood of the first pixel.
The second neighborhood may be the same as or different from the first neighborhood, and may or may not be centered on the first pixel. FIG. 5 exemplarily shows a second neighborhood.
Specifically, the final temperature value of the first pixel is the sum of the following two terms: the product of the reconstructed temperature value and the probability that the first pixel is a dead pixel, and the product of the initial temperature value of the first pixel and the probability that the first pixel is not a dead pixel. The probability that the first pixel is a dead pixel and the probability that it is not a dead pixel sum to 1.
Specifically, T = T_reconstructed * P_dead + T_initial * (1 - P_dead).
Here, T is the final temperature value of the first pixel, T_reconstructed is the reconstructed temperature value of the first pixel, T_initial is the initial temperature value of the first pixel, and P_dead is the probability that the first pixel is a dead pixel.
Possibly, the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature value of one pixel in the second neighborhood, in which case the reconstructed temperature value of the first pixel is the temperature value of that pixel.
For example, if the reconstructed temperature value of the first pixel P_{i,j} is obtained by interpolation from the temperature value of the pixel P_{i,j-1} in the second neighborhood, then the reconstructed temperature value of the first pixel P_{i,j} is the temperature value of the pixel P_{i,j-1}.
Possibly, the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of multiple pixels in the second neighborhood, in which case the reconstructed temperature value of the first pixel is the average value or weighted average of the temperature values of those pixels.
For example, if the reconstructed temperature value of the first pixel P_{i,j} is obtained by interpolation from the temperature values of the multiple pixels P_{i,j-2}, P_{i,j-1}, P_{i,j+1} and P_{i,j+2} in the second neighborhood, the reconstructed temperature value of the first pixel may be the average of the temperature values of these four neighboring pixels; or, if the four neighboring pixels carry different weights, the reconstructed temperature value of the first pixel is the weighted sum of their temperature values. It can be seen that the weights of the four neighboring pixels sum to 1.
Not limited to the four neighboring pixels listed above, in a specific implementation the reconstructed temperature value of the first pixel can also be obtained by interpolating the temperature values of other pixels in the second neighborhood, which is not limited in the embodiments of this application. For example, it may be determined from the image content that the first pixel lies on an edge or contour of the image, and the direction of interpolation may then follow that contour or edge; that is, the reconstructed temperature value of the first pixel may be obtained by interpolating the temperature values of other pixels in the second neighborhood that lie on the contour or edge.
By introducing the regional temperature difference, the temperature-difference proportional coefficient of the infrared system (i.e. the second coefficient k1), the temperature-noise model, and the noise proportional coefficient of the infrared system (i.e. the first coefficient k0), combined with a soft-threshold judgment, the infrared image processing method provided by the embodiments of this application can more accurately determine the probability that the current pixel is a dead pixel and effectively reduce the side effect of judging a single-pixel object (or a 2-3-pixel object) as a dead pixel; while removing dead pixels, it does not reduce the remote discovery and detection capability of the infrared system.
The methods of the embodiments of this application have been described in detail above. To facilitate better implementation of the above solutions, related devices for implementing them are provided below.
FIG. 7 shows a schematic structural diagram of an infrared image processing device provided by an embodiment of this application. The infrared image processing device 70 may include at least one processor 701, such as a CPU, a memory 703, and at least one communication bus 702. The communication bus 702 is used to implement connection and communication between these components. The memory 703 may be a high-speed RAM memory or a non-volatile memory, for example at least one disk memory; the memory 703 includes flash memory in the embodiments of the present invention. Optionally, the memory 703 may also be at least one storage system located far away from the processor 701.
In one possible embodiment, the infrared image processing device 70 is the infrared photographing device 10, in which case the infrared image processing device 70 may also include an infrared sensor and at least one network interface. The infrared sensor can be used to acquire infrared images. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), through which a communication connection with the infrared display device 20 can be established.
In another possible embodiment, the infrared processing device 70 is the infrared display device 20, in which case the infrared processing device 70 may also include a display screen and a network interface. The display screen can be used to display infrared images. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), through which a communication connection with the infrared photographing device 10 can be established.
Optionally, the infrared processing device 70 may also include a user interface for receiving user operations on the infrared image or for controlling the infrared photographing device 10. The user interface may include a touch screen, a keyboard or a mouse, a joystick, physical buttons, and so on.
It should be noted that the network interface can be connected to a receiver, a transmitter, or other communication modules, which may include but are not limited to a WiFi module, a Bluetooth module, and so on; it can be understood that the infrared image processing device 70 in the embodiments of the present invention may also include a receiver, a transmitter, and other communication modules.
In another possible embodiment, the infrared processing device 70 is an integration of the infrared photographing device 10 and the infrared display device 20, in which case the infrared processing device 70 may also include an infrared sensor and a display screen. The infrared sensor can be used to acquire infrared images, and the display screen can be used to display them.
Optionally, the infrared processing device 70 may also include a user interface for receiving user operations on the infrared image.
The processor 701 can be used to call the program instructions stored in the memory 703 and perform the following operations:
determining a first neighborhood of a first pixel in an infrared image, where the number of pixels included in the first neighborhood is a preset value;
determining, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel;
determining a final temperature value of the first pixel according to the initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel, where the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of one or more pixels in a second neighborhood of the first pixel.
In some possible embodiments, when the device determines, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel, it specifically executes:
determining an allowable temperature difference according to the temperature values of the pixels in the first neighborhood;
determining the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference.
In some possible embodiments, when the device determines the allowable temperature difference according to the temperature values of the pixels in the first neighborhood, it specifically executes:
determining a regional average temperature value according to the temperature values of the pixels in the first neighborhood;
determining the allowable temperature difference according to the regional average temperature value and/or a regional temperature difference of the first neighborhood.
In some possible embodiments, the regional temperature difference is the difference between the highest and lowest temperature values among all pixels in the first neighborhood.
In some possible embodiments, the regional temperature difference is the difference between the highest and lowest temperature values within a preset temperature range.
In some possible embodiments, when the device determines the allowable temperature difference according to the regional average temperature value and/or the regional temperature difference, it specifically executes:
determining a temperature noise according to the regional average temperature value and a preset temperature-noise model, where the temperature-noise model is a model obtained according to the temperatures of a plurality of known noises;
determining the allowable temperature difference according to the temperature noise and the regional temperature difference.
In some possible embodiments, the allowable temperature difference is determined as the sum of the product of a first coefficient and the temperature noise and the product of a second coefficient and the regional temperature difference, where the first coefficient is obtained according to the noise distribution law of the infrared system and the second coefficient is determined by the performance of the infrared system.
In some possible embodiments, when the device determines the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference, it specifically executes:
calculating the difference between the initial temperature value of the first pixel and the regional average temperature value, this difference being the first-pixel temperature difference;
determining the probability that the first pixel is a dead pixel according to the difference between the first-pixel temperature difference and the allowable temperature difference.
In some possible embodiments, the final temperature value of the first pixel is the sum of the following two terms: the product of the reconstructed temperature value and the probability that the first pixel is a dead pixel, and the product of the initial temperature value of the first pixel and the probability that the first pixel is not a dead pixel; the probability that the first pixel is a dead pixel and the probability that it is not a dead pixel sum to 1.
In some possible embodiments, the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature value of one pixel in the second neighborhood, and the reconstructed temperature value is the temperature value of that pixel; or the reconstructed temperature value of the first pixel is obtained by interpolation from the temperature values of multiple pixels in the second neighborhood, and the reconstructed temperature value is the average value or weighted average of the temperature values of those pixels.
In some possible embodiments, the first neighborhood is different from the second neighborhood.
In some possible embodiments, the first neighborhood is the same as the second neighborhood.
In some possible embodiments, the first neighborhood is centered on the first pixel.
It can be understood that the functions of the infrared image processing device 70 of this embodiment can be specifically implemented according to the methods in the foregoing method embodiments, and details are not repeated here.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and when executed, it may include the processes of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like. In the case of no conflict, the technical features in these embodiments and implementations can be combined arbitrarily.
What is disclosed above is merely a preferred embodiment of the present invention, which certainly cannot be used to limit the scope of the rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (28)

  1. An infrared image processing method, characterized by comprising:
    determining a first neighborhood of a first pixel in an infrared image, wherein the number of pixels included in the first neighborhood is a preset value;
    determining, according to temperature values of pixels in the first neighborhood, a probability that the first pixel is a dead pixel;
    determining a final temperature value of the first pixel according to an initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel, wherein the reconstructed temperature value of the first pixel is obtained by interpolation according to temperature values of one or more pixels in a second neighborhood of the first pixel.
  2. The method according to claim 1, characterized in that the determining, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel comprises:
    determining an allowable temperature difference according to the temperature values of the pixels in the first neighborhood;
    determining the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference.
  3. The method according to claim 2, characterized in that the determining an allowable temperature difference according to the temperature values of the pixels in the first neighborhood comprises:
    determining a regional average temperature value according to the temperature values of the pixels in the first neighborhood;
    determining the allowable temperature difference according to the regional average temperature value and/or a regional temperature difference of the first neighborhood.
  4. The method according to claim 3, characterized in that the regional temperature difference is the difference between the highest temperature value and the lowest temperature value among all pixels in the first neighborhood.
  5. The method according to claim 3, characterized in that the regional temperature difference is the difference between the highest temperature value and the lowest temperature value within a preset temperature range.
  6. The method according to any one of claims 3-5, characterized in that the determining the allowable temperature difference according to the regional average temperature value and/or the regional temperature difference comprises:
    determining a temperature noise according to the regional average temperature value and a preset temperature-noise model, wherein the temperature-noise model is a model obtained according to temperatures of a plurality of known temperature noises;
    determining the allowable temperature difference according to the temperature noise and the regional temperature difference.
  7. The method according to claim 6, characterized in that the allowable temperature difference is determined according to the sum of the product of a first coefficient and the temperature noise and the product of a second coefficient and the regional temperature difference, wherein the first coefficient is obtained according to a noise distribution law of an infrared system, and the second coefficient is determined by the performance of the infrared system.
  8. The method according to any one of claims 3-7, characterized in that the determining the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference comprises:
    calculating the difference between the initial temperature value of the first pixel and the regional average temperature value, the difference being a first-pixel temperature difference;
    determining the probability that the first pixel is a dead pixel according to the difference between the first-pixel temperature difference and the allowable temperature difference.
  9. The method according to any one of claims 1-8, characterized in that the final temperature value of the first pixel is the sum of the following two terms: the product of the reconstructed temperature value and the probability that the first pixel is a dead pixel, and the product of the initial temperature value of the first pixel and the probability that the first pixel is not a dead pixel; the sum of the probability that the first pixel is a dead pixel and the probability that the first pixel is not a dead pixel is 1.
  10. The method according to any one of claims 1-9, characterized in that the reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature value of one pixel in the second neighborhood, and the reconstructed temperature value is the temperature value of the one pixel; or
    the reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature values of multiple pixels in the second neighborhood, and the reconstructed temperature value is the average value or weighted average of the temperature values of the multiple pixels.
  11. The method according to any one of claims 1-10, characterized in that the first neighborhood is different from the second neighborhood.
  12. The method according to any one of claims 1-10, characterized in that the first neighborhood is the same as the second neighborhood.
  13. The method according to any one of claims 1-12, characterized in that the first neighborhood is centered on the first pixel.
  14. An infrared image processing device, characterized by comprising: a memory and a processor;
    wherein the memory is configured to store program instructions, and the processor is configured to call the program instructions in the memory to perform the following operations:
    determining a first neighborhood of a first pixel in an infrared image, wherein the number of pixels included in the first neighborhood is a preset value;
    determining, according to temperature values of pixels in the first neighborhood, a probability that the first pixel is a dead pixel;
    determining a final temperature value of the first pixel according to an initial temperature value of the first pixel, the probability that the first pixel is a dead pixel, and a reconstructed temperature value of the first pixel, wherein the reconstructed temperature value of the first pixel is obtained by interpolation according to temperature values of one or more pixels in a second neighborhood of the first pixel.
  15. The device according to claim 14, characterized in that when the processor determines, according to the temperature values of the pixels in the first neighborhood, the probability that the first pixel is a dead pixel, the processor specifically performs:
    determining an allowable temperature difference according to the temperature values of the pixels in the first neighborhood;
    determining the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference.
  16. The device according to claim 15, characterized in that when the processor determines the allowable temperature difference according to the temperature values of the pixels in the first neighborhood, the processor specifically performs:
    determining a regional average temperature value according to the temperature values of the pixels in the first neighborhood;
    determining the allowable temperature difference according to the regional average temperature value and/or a regional temperature difference of the first neighborhood.
  17. The device according to claim 16, characterized in that the regional temperature difference is the difference between the highest temperature value and the lowest temperature value among all pixels in the first neighborhood.
  18. The device according to claim 16, characterized in that the regional temperature difference is the difference between the highest temperature value and the lowest temperature value within a preset temperature range.
  19. The device according to any one of claims 16-18, characterized in that when the processor determines the allowable temperature difference according to the regional average temperature value and/or the regional temperature difference, the processor specifically performs:
    determining a temperature noise according to the regional average temperature value and a preset temperature-noise model, wherein the temperature-noise model is a model obtained according to temperatures of a plurality of known temperature noises;
    determining the allowable temperature difference according to the temperature noise and the regional temperature difference.
  20. The device according to claim 19, characterized in that the allowable temperature difference is determined according to the sum of the product of a first coefficient and the temperature noise and the product of a second coefficient and the regional temperature difference, wherein the first coefficient is obtained according to a noise distribution law of an infrared system, and the second coefficient is determined by the performance of the infrared system.
  21. The device according to any one of claims 16-20, characterized in that when the processor determines the probability that the first pixel is a dead pixel according to the initial temperature value of the first pixel and the allowable temperature difference, the processor specifically performs:
    calculating the difference between the initial temperature value of the first pixel and the regional average temperature value, the difference being a first-pixel temperature difference;
    determining the probability that the first pixel is a dead pixel according to the difference between the first-pixel temperature difference and the allowable temperature difference.
  22. The device according to any one of claims 14-21, characterized in that the final temperature value of the first pixel is the sum of the following two terms: the product of the reconstructed temperature value and the probability that the first pixel is a dead pixel, and the product of the initial temperature value of the first pixel and the probability that the first pixel is not a dead pixel; the sum of the probability that the first pixel is a dead pixel and the probability that the first pixel is not a dead pixel is 1.
  23. The device according to any one of claims 14-22, characterized in that the reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature value of one pixel in the second neighborhood, and the reconstructed temperature value is the temperature value of the one pixel; or
    the reconstructed temperature value of the first pixel is obtained by interpolation according to the temperature values of multiple pixels in the second neighborhood, and the reconstructed temperature value is the average value or weighted average of the temperature values of the multiple pixels.
  24. The device according to any one of claims 14-23, characterized in that the first neighborhood is different from the second neighborhood.
  25. The device according to any one of claims 14-23, characterized in that the first neighborhood is the same as the second neighborhood.
  26. The device according to any one of claims 14-25, characterized in that the first neighborhood is centered on the first pixel.
  27. The device according to any one of claims 14-26, characterized in that the device is an infrared camera, a movable device equipped with an infrared sensor, or a control device for controlling the movable device.
  28. A computer-readable storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor, the method according to any one of claims 1-13 is implemented.
PCT/CN2019/130968 2019-12-31 2019-12-31 红外图像处理方法及装置 WO2021134713A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130968 WO2021134713A1 (zh) 2019-12-31 2019-12-31 红外图像处理方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130968 WO2021134713A1 (zh) 2019-12-31 2019-12-31 红外图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2021134713A1 true WO2021134713A1 (zh) 2021-07-08

Family

ID=76686325

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130968 WO2021134713A1 (zh) 2019-12-31 2019-12-31 红外图像处理方法及装置

Country Status (1)

Country Link
WO (1) WO2021134713A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180316882A1 (en) * 2015-06-26 2018-11-01 Ulis Detection of bad pixels in an infrared image-capturing apparatus
CN106373094A (zh) * 2016-08-25 2017-02-01 中国科学院长春光学精密机械与物理研究所 一种红外图像的非均匀性校正方法及装置
CN108596862A (zh) * 2018-05-29 2018-09-28 深圳点扬科技有限公司 用于排除红外热像全景图干扰源的处理方法
CN109767441A (zh) * 2019-01-15 2019-05-17 电子科技大学 一种自动检测盲元标记方法

Similar Documents

Publication Publication Date Title
US10504242B2 (en) Method and device for calibrating dual fisheye lens panoramic camera, and storage medium and terminal thereof
US9235897B2 (en) Stereoscopic image generating device and stereoscopic image generating method
US10645364B2 (en) Dynamic calibration of multi-camera systems using multiple multi-view image frames
US20220076391A1 (en) Image Distortion Correction Method and Apparatus
US10198621B2 (en) Image-Processing device and method for foreground mask correction for object segmentation
US8682068B2 (en) Image processing apparatus, image processing method, and program
CN108989678B (zh) 一种图像处理方法、移动终端
US9007481B2 (en) Information processing device and method for recognition of target objects within an image
US20160275682A1 (en) Machine vision image sensor calibration
US11941796B2 (en) Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium
CN107113376A (zh) 一种图像处理方法、装置及摄像机
CN107566749B (zh) 拍摄方法及移动终端
TW201419853A (zh) 影像處理器及其影像壞點偵測方法
CN108257186B (zh) 标定图像的确定方法及装置、摄像机及存储介质
JP2010041419A (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
US10742852B2 (en) Image processing apparatus, object shape estimation method, and storage medium
WO2020114433A1 (zh) 一种深度感知方法,装置和深度感知设备
US20240259544A1 (en) Information processing apparatus, information processing method, and program
CN111145151A (zh) 一种运动区域确定方法及电子设备
CN114125280A (zh) 相机曝光控制方法、装置、设备、存储介质及程序产品
WO2021134713A1 (zh) 红外图像处理方法及装置
CN112598610A (zh) 一种深度图像获得方法、装置、电子设备及存储介质
JP2019020839A (ja) 画像処理装置、画像処理方法、及びプログラム
US20220254035A1 (en) Image processing apparatus, head-mounted display, and method for acquiring space information
CN111801709B (zh) 一种圆形特征检测方法、处理系统及具有存储功能的装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19958589

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19958589

Country of ref document: EP

Kind code of ref document: A1