WO2021092815A1 - Identification method, temperature measurement method, device and storage medium - Google Patents

Identification method, temperature measurement method, device and storage medium

Info

Publication number
WO2021092815A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
extreme
target image
pixel
image
Prior art date
Application number
PCT/CN2019/118225
Other languages
English (en)
French (fr)
Inventor
杨磊
张青涛
赵新涛
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201980032929.3A priority Critical patent/CN112154450A/zh
Priority to PCT/CN2019/118225 priority patent/WO2021092815A1/zh
Publication of WO2021092815A1 publication Critical patent/WO2021092815A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00 - Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J5/10 - Radiation pyrometry, e.g. infrared or optical thermometry using electric radiation detectors
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00 - Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J2005/0077 - Imaging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Definitions

  • This application relates to the field of image processing technology, and in particular to an extreme point recognition method, extreme position recognition method, temperature measurement method, device, infrared temperature measurement equipment, movable platform and storage medium.
  • the drone can be equipped with infrared temperature measuring equipment to conduct temperature inspections.
  • In infrared temperature measurement, if high- and low-temperature targets in a wide viewing angle can be captured quickly and accurately with a stable temperature reading, and an accurate, stable temperature value of the target can be obtained quickly once the target is aligned, the drone's temperature inspection efficiency will undoubtedly be improved.
  • the average temperature, minimum temperature and maximum temperature of a specified point or specified area are the focus of temperature measurement.
  • If the pixel values in the infrared image are mapped directly to temperature, the output temperature value will jump and its stability will be poor; when there are two extreme points with similar temperatures in the area, the marker indicating the extreme position on the video image will also switch repeatedly between the two extreme positions, so the temperature measurement display looks poor. Therefore, how to identify extreme points and extreme positions so as to improve the stability and accuracy of temperature measurement has become an urgent problem to be solved.
  • In view of this, this application provides an extreme point identification method, an extreme position identification method, a temperature measurement method, devices, an infrared temperature measurement device, a movable platform and a storage medium, which can identify extreme points and extreme positions more accurately.
  • the present application provides a method for identifying extreme points, the method including:
  • the target image includes a current frame image or an image of a designated area determined in the current frame image
  • the extreme point of the target image is determined according to the pixel points that are non-abnormal extreme points.
  • this application also provides a method for identifying extreme positions, the method including:
  • the target image includes a current frame image or an image of a designated area determined in the current frame image
  • the extreme position of the target image is determined according to the previous frame image of the target image.
  • this application also provides a temperature measurement method, which includes:
  • the target image is an infrared image
  • the temperature corresponding to the extreme point is determined according to the pixel value corresponding to the extreme point.
  • the target image includes a current frame image or an image of a designated area determined in the current frame image
  • the temperature of the target point is determined according to the filtering result.
  • the present application also provides an extreme point identification device, the extreme point identification device including a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when the computer program is executed, realize the steps of the above-mentioned extreme point identification method.
  • the present application also provides an extreme position recognition device, the extreme position recognition device including a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when the computer program is executed, realize the steps of the extreme position identification method as described above.
  • the present application also provides a temperature measurement device, the temperature measurement device including a memory and a processor;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when the computer program is executed, realize the steps of the temperature measurement method described above.
  • the present application also provides an infrared temperature measurement device, the infrared temperature measurement device including an optical lens, a photodetector, a memory, and a processor;
  • the optical lens is used to receive infrared light emitted by the target
  • the photodetector is electrically connected to the processor, and is configured to convert the infrared light into an electrical signal and send it to the processor for imaging;
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when the computer program is executed, realize the steps of the temperature measurement method described above.
  • this application also provides a movable platform, which includes an infrared temperature measurement device, a memory, and a processor;
  • the infrared temperature measuring device is mounted on the movable platform and used to obtain infrared images
  • the memory is used to store a computer program
  • the processor is configured to execute the computer program and, when the computer program is executed, realize the steps of the temperature measurement method described above.
  • In addition, the present application also provides a computer-readable storage medium that stores a computer program, and when the computer program is executed by a processor, the processor implements the steps of the methods described above.
  • The extreme point identification method, extreme position identification method, temperature measurement method, devices, infrared temperature measurement equipment, movable platform and storage medium proposed in this application can identify extreme points and extreme positions more accurately. When the extreme point identification method and the extreme position identification method are applied to temperature measurement, that is, in the temperature measurement method provided in this application, the accuracy, stability and efficiency of temperature measurement can be improved.
  • FIG. 1 is a schematic block diagram of a movable platform provided by an embodiment of the present application
  • FIG. 2 is a schematic block diagram of an infrared temperature measurement device provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of steps of an extreme point identification method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the effect of determining adjacent pixels provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the effect of determining a pixel area block provided by an embodiment of the present application.
  • FIG. 6 is another schematic diagram of the effect of determining adjacent pixels provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of steps of a method for recognizing extreme positions according to an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of steps of a temperature measurement method provided by an embodiment of the present application.
  • FIG. 9 is a schematic block diagram of an extreme point identification device provided by an embodiment of the present application.
  • FIG. 10 is a schematic block diagram of an extreme position identification device provided by an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of a temperature measurement device provided by an embodiment of the present application.
  • The embodiments of the present application provide an extreme point identification method, an extreme position identification method, a temperature measurement method, devices, infrared temperature measurement equipment, a movable platform and a storage medium, which can more accurately identify extreme points and extreme positions.
  • In this way, the temperature corresponding to the extreme point can be determined quickly and accurately, thereby improving the accuracy, stability and reliability of temperature measurement.
  • When the temperature measurement method is applied to a movable platform, the efficiency with which the movable platform inspects the temperature of a target can be improved.
  • the movable platform includes an aircraft, a robot, or an autonomous unmanned vehicle, etc.
  • the movable platform is equipped with infrared temperature measurement equipment, of course, it can also be other equipment that uses image acquisition to achieve temperature measurement.
  • the infrared temperature measurement equipment includes an infrared thermometer, an infrared array imaging device, and the like.
  • the movable platform 100 includes an infrared temperature measuring device 110, and the infrared temperature measuring device 110 is mounted on the movable platform 100 for acquiring infrared images.
  • For example, the movable platform 100 may be an aircraft; the aircraft includes a drone, and the drone includes a rotary-wing drone, such as a quadrotor drone, a hexarotor drone or an octorotor drone. It can also be a fixed-wing drone, or a hybrid of the rotary-wing type and the fixed-wing type, which is not limited here.
  • the mobile platform 100 includes a processor 101 and a memory 102, and the processor 101 and the memory 102 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • the processor 101 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • Specifically, the memory 102 may be a Flash chip, a read-only memory (ROM) disk, an optical disk, a USB flash drive, or a removable hard disk.
  • The processor is used to run a computer program stored in the memory and, when executing the computer program, to implement the steps of the extreme point identification method, the extreme position identification method or the temperature measurement method provided in the embodiments of the present application, so as to improve the accuracy of extreme point and extreme position identification, as well as the accuracy and stability of temperature measurement.
  • For example, the current frame image is collected by the infrared temperature measurement equipment and sent to the movable platform; the movable platform acquires the target image, determines multiple neighboring pixels of a pixel in the target image, determines whether the pixel is an abnormal extreme point according to the neighboring pixels, and determines the extreme point of the target image according to the pixels that are non-abnormal extreme points.
  • the target image includes a current frame image or an image of a designated area determined in the current frame image. After determining the extreme point of the target image, the temperature corresponding to the extreme point can be determined according to the extreme point.
  • the extreme point recognition method can quickly and accurately capture high and low temperature targets in a wide viewing angle and obtain a stable temperature.
  • the accurate and stable temperature of the target can be quickly obtained. Therefore, the temperature measurement efficiency of the drone is increased.
  • the infrared temperature measurement device 110 includes an optical lens 111, a photodetector 112, a memory 113 and a processor 114.
  • The optical lens 111 is used to receive the infrared light emitted by the target; the photodetector 112 is electrically connected to the processor 114 and is used to convert the infrared light into an electrical signal and send it to the processor 114 for imaging; the memory 113 is used to store a computer program; the processor 114 is used to execute the computer program and, when executing the computer program, to implement the steps of any of the extreme point identification, extreme position identification or temperature measurement methods provided in the embodiments of this application, so as to improve the accuracy of extreme point and extreme position identification, as well as the accuracy and stability of temperature measurement.
  • In addition, the infrared temperature measurement device can send the acquired infrared image, as the target image, to a movable platform, and the movable platform implements the steps of any of the extreme point identification, extreme position identification or temperature measurement methods provided in the embodiments of this application, so as to improve the identification accuracy of extreme points and extreme positions, as well as the accuracy, stability and efficiency of temperature measurement.
  • It should be noted that the extreme point identification method and the extreme position identification method provided in the embodiments of the present application are used to identify the extreme points of an image and their positions accurately and quickly, and to ensure the stability of the extreme position between different frames. Therefore, the extreme point identification method and the extreme position identification method can be applied not only to infrared temperature measurement, but also to other image processing fields, such as visual positioning.
  • an aircraft and an infrared temperature measuring device are taken as examples to introduce the methods provided in the embodiments of the present application in detail.
  • the first is to reduce the temperature measurement frequency.
  • When the temperature measurement frequency is low, the fluctuation of the temperature measurement result and of the extreme position becomes insignificant, but this comes at the expense of rapid, real-time temperature measurement.
  • The second is to smooth the temperature measurement pixels spatially or temporally. For example, instead of measuring the temperature at a specified point, the average temperature of a small area containing the specified point is measured. As a result, a small target occupying only a single pixel or a few pixels cannot be measured accurately, local high- or low-temperature points are smoothed out, or the local measurement area may not fall entirely on the target, resulting in inaccurate temperature measurement.
  • Smoothing in time mainly requires extending the temperature measurement time: the same target is measured multiple times and the multi-frame average is output as the temperature measurement result. As a consequence, the temperature measurement result does not reach an accurate, stable value immediately after the target is aligned, the temperature measurement response is slow, and it is difficult to measure accurately when the target temperature changes.
  • The above problems are mainly caused by inaccurate identification of the extreme points of the infrared images collected during temperature measurement, unstable extreme positions, or insufficient stability of the pixel values used to characterize the measured temperature.
  • the embodiments of the present application provide an extreme point identification method, an extreme position identification method, and a temperature measurement method to solve the problems of slow temperature measurement response, inaccurate temperature measurement results, and insufficient stability during temperature measurement.
  • FIG. 3 is a schematic flowchart of the steps of an extreme point identification method provided by an embodiment of the present application. This method is used to quickly and accurately identify the extreme points of the collected images.
  • the method for identifying extreme points includes steps S101 to S104.
  • the target image can be the current frame image collected, or the selected partial area image in the current frame image, that is, the designated area image.
  • the current frame of image may be the currently displayed image, or may be a specified frame of image to be processed.
  • S102 Determine multiple neighboring pixels of a pixel in the target image.
  • multiple neighboring pixels of each pixel in the target image can be determined. Or in order to determine the extreme point of the target image more quickly, multiple adjacent pixel points of some pixels in the target image may also be determined.
  • the part of the pixels may be quasi-extreme points.
  • For example, pixels with a pixel value greater than a first preset pixel value are regarded as quasi-extreme points, or pixels with a pixel value greater than a second preset pixel value are regarded as quasi-extreme points.
  • the first preset pixel value is greater than the second preset pixel value.
  • the adjacent pixel point may be a pixel point adjacent to the pixel point or a pixel point close to the pixel point.
  • The four pixels A1, A2, A3 and A4 adjacent to pixel A can all be used as neighboring pixels of pixel A; the four pixels A5, A6, A7 and A8 close to pixel A can also be used as neighboring pixels of pixel A.
  • a pixel area block may be constructed for the pixels in the target image, and the pixel points contained in the pixel area block of the pixel points are regarded as adjacent pixels of the pixel point.
  • a plurality of pixels adjacent to the pixel may be determined, and an area formed by the plurality of adjacent pixels may be regarded as a pixel area block.
  • That is, multiple pixels adjacent to the pixel are determined, and the pixels adjacent to the pixel and/or the pixels close to the pixel may be regarded as the neighboring pixels.
  • the pixel points adjacent to the pixel point B and/or the pixel points close to the pixel point B are regarded as adjacent pixels, and the area S formed by a plurality of adjacent pixels is regarded as a pixel area block.
  • For an edge pixel of the target image, determining the pixel area block may specifically be: determining the pixels adjacent to the pixel, mirroring these adjacent pixels to obtain mirrored pixels, and taking the area formed by the adjacent pixels and the mirrored pixels as the pixel area block. In this way, every pixel can constitute a pixel area block, thereby improving the identification accuracy of extreme points.
  • The edge pixels include pixels on the edges of the target image and pixels at its four corners.
  • Pixel mirror processing includes horizontal mirroring, vertical mirroring, and center point mirroring.
  • The pixels adjacent to pixel C (specifically, pixel C1, pixel C2 and pixel C3) are determined; these adjacent pixels are mirrored to obtain mirrored pixels (specifically, pixel C11, pixel C21, pixel C23 and pixel C31); and the area S formed by these adjacent pixels and mirrored pixels serves as the pixel area block.
  • the pixel area block of the pixels includes an area of 3*3 pixels, that is, 9 pixels form a pixel area block.
  • the pixel point is a central pixel point of the pixel area block.
  • other forms or numbers of pixels can also be used, such as an area of 5*5 pixels.
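  • For illustration only, the following Python sketch shows one possible way to build such a pixel area block, assuming a 2D numpy array as the image and reflection (mirror) padding for edge and corner pixels; the exact mirroring convention, the function name and the parameters are illustrative assumptions and not part of this application.

```python
import numpy as np

def pixel_area_block(image: np.ndarray, row: int, col: int, size: int = 3) -> np.ndarray:
    """Return the size x size pixel area block centered on (row, col).

    Edge and corner pixels are handled by mirroring the adjacent pixels
    across the border (reflect padding), so every pixel of the image gets
    a complete block.
    """
    half = size // 2
    padded = np.pad(image, half, mode="reflect")  # mirror the adjacent pixels across the border
    r, c = row + half, col + half                 # coordinates in the padded image
    return padded[r - half:r + half + 1, c - half:c + half + 1]
```

  • For example, pixel_area_block(image, 0, 0) returns a full 3*3 block for a corner pixel, with the missing neighbors filled by mirrored pixels.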
  • S103 Determine whether the pixel point is an abnormal extreme point according to the adjacent pixel point.
  • the pixel values of multiple neighboring pixels of the pixel can be compared with the pixel value of the pixel to determine whether the pixel is an abnormal extreme point.
  • Specifically, the maximum pixel value and the minimum pixel value are determined from the pixel values of the multiple neighboring pixels, and whether the pixel is an abnormal extreme point is determined according to the maximum pixel value and the minimum pixel value.
  • Determining whether the pixel is an abnormal extreme point according to the maximum pixel value and the minimum pixel value may specifically be: determining whether the pixel value corresponding to the pixel is greater than the maximum pixel value, and whether the difference between the pixel value corresponding to the pixel and the maximum pixel value is greater than or equal to a preset difference.
  • If the pixel value corresponding to the pixel is greater than the maximum pixel value, and the difference between the pixel value corresponding to the pixel and the maximum pixel value is greater than or equal to the preset difference, it is determined that the pixel is an abnormal extreme point.
  • If the pixel value corresponding to the pixel is less than the minimum pixel value, and the difference between the minimum pixel value and the pixel value corresponding to the pixel is greater than or equal to the preset difference, it is determined that the pixel is an abnormal extreme point.
  • the preset difference is the product of the difference between the maximum pixel value and the minimum pixel value multiplied by an extreme point determination coefficient, and the extreme point determination coefficient is a constant.
  • the corresponding preset difference value may be the same or different.
  • The determination of the abnormal extreme point described above can be expressed by the following formulas; if the pixel value of a pixel satisfies one of them, it can be determined that the pixel is an abnormal extreme point:
  • x - maxn ≥ k × (maxn - minn), with x > maxn; or minn - x ≥ k × (maxn - minn), with x < minn;
  • where minn is the minimum pixel value, maxn is the maximum pixel value, x represents the pixel value of the pixel, and k is the extreme point determination coefficient.
  • If the pixel value x of a pixel meets one of the above two conditions, it can be determined that the pixel is an abnormal extreme point; otherwise, the pixel is a non-abnormal pixel.
  • For a pixel area block composed of 3*3 pixels, the above formulas determine whether the value of the pixel differs significantly from the values of its nearest 8 pixels; if so, the point may be an isolated pixel, i.e. a salt-and-pepper noise point, and it is excluded as an abnormal pixel and does not participate in the identification of extreme points. This improves the accuracy of extreme point identification.
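  • As a hedged illustration of the check described above (a sketch, not the exact implementation of this application), the following function flags the center pixel of an odd-sized block as an abnormal extreme point when it exceeds the maximum, or falls below the minimum, of its neighbors by at least k × (maxn - minn); the default value of k is only an assumed placeholder, since the application merely states that the coefficient is a constant.

```python
import numpy as np

def is_abnormal_extreme(block: np.ndarray, k: float = 1.0) -> bool:
    """Return True if the center pixel of an odd-sized square block is an abnormal extreme point."""
    center = float(block[block.shape[0] // 2, block.shape[1] // 2])
    neighbors = np.delete(block.astype(np.float64).flatten(), block.size // 2)
    maxn, minn = neighbors.max(), neighbors.min()
    preset_diff = k * (maxn - minn)                 # preset difference = k * (maxn - minn)
    if center > maxn and center - maxn >= preset_diff:
        return True                                 # isolated bright (salt) pixel
    if center < minn and minn - center >= preset_diff:
        return True                                 # isolated dark (pepper) pixel
    return False
```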
  • the median filter can also be used to effectively remove pixels that may be salt and pepper noise.
  • However, the median filter will filter out true maximum and minimum values as noise, and will miss small-target extreme points that occupy only a single pixel or a few pixels, which is not conducive to accurate acquisition of extreme points.
  • S104 Determine the extreme point of the target image according to the pixel points that are non-abnormal extreme points.
  • Specifically, the pixel with the smallest pixel value and the pixel with the largest pixel value are determined from the plurality of pixels that are non-abnormal extreme points, and these two pixels are taken as the extreme points of the target image.
  • For example, determining the extreme points of the target image based on the pixels that are non-abnormal extreme points may specifically be: forming the determined non-abnormal extreme points into a non-abnormal extreme point set; determining, from the non-abnormal extreme point set, the pixel with the largest pixel value and the pixel with the smallest pixel value; and taking the pixel with the largest pixel value and the pixel with the smallest pixel value as the extreme points of the target image, so that the extreme points can be determined quickly.
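  • Building on the two illustrative helpers sketched earlier (assumptions, not the application's implementation), the extreme points of the target image can be selected as follows: every pixel flagged as abnormal is skipped, and the maximum and minimum pixels of the remaining non-abnormal set are returned.

```python
import numpy as np

def find_extreme_points(image: np.ndarray, k: float = 1.0):
    """Return ((max_row, max_col), (min_row, min_col)) over non-abnormal pixels."""
    max_pos = min_pos = None
    best_max = best_min = None
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            block = pixel_area_block(image, r, c)
            if is_abnormal_extreme(block, k):
                continue                  # abnormal pixels do not take part in the search
            val = float(image[r, c])
            if best_max is None or val > best_max:
                best_max, max_pos = val, (r, c)
            if best_min is None or val < best_min:
                best_min, min_pos = val, (r, c)
    return max_pos, min_pos
```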
  • The extreme point identification method provided by the foregoing embodiments can effectively eliminate abnormal extreme points, thereby improving the accuracy of extreme point identification, and provides a more accurate and faster extreme point identification approach for image processing that needs to determine extreme points.
  • the extreme position of the target image can also be determined.
  • If the output of the extreme temperature is updated according to the extreme point calculated for each frame of image, the extreme position and value will jump repeatedly in some scenes, especially when a scene contains two or more high- or low-temperature targets with similar temperatures, or when the high- or low-temperature target is a large background area; this is not conducive to detecting and observing the temperature of the real high- or low-temperature target.
  • determining the extreme value position of the target image specifically includes: determining the extreme value position of the target image according to a position determination condition satisfied by the target image.
  • the extreme position is the pixel coordinate of the pixel point corresponding to the extreme point in the target image.
  • determining the extreme position of the target image according to the position determination condition satisfied by the target image is specifically: determining whether the target image is the first frame image, or whether the target image is an updated designated area image.
  • If so, the pixel coordinates corresponding to the extreme point of the target image are taken as the extreme position of the target image.
  • the first frame image is the first frame image corresponding to when the target image is collected
  • the updated designated area image refers to the designated area image corresponding to the designated area changed in different current frame images. If the target image is the first frame image or an updated designated area image, the pixel coordinates corresponding to the extreme points of the target image are taken as the extreme positions of the target image.
  • If the target image is not the first frame image and is not an updated designated area image, determine the reference extreme point corresponding to the target image, and determine whether the reference extreme point is an abnormal extreme point;
  • If the reference extreme point is an abnormal extreme point, the pixel coordinates corresponding to the extreme point of the target image are taken as the extreme position of the target image.
  • Wherein, the reference extreme point is the pixel in the target image located at the extreme position of the previous frame image of the target image, and the pixel value corresponding to the reference extreme point is the reference extreme value.
  • In this case, a further judgment step may be performed to determine the extreme position of the target image.
  • If the reference extreme point is not an abnormal extreme point, calculate the difference between the reference extreme value and the pixel value corresponding to the extreme point of the target image, and determine whether the absolute value of the difference is less than or equal to a first preset threshold; if the absolute value of the difference is less than or equal to the first preset threshold, the extreme position in the previous frame image is taken as the extreme position of the target image.
  • If the absolute value of the difference is greater than the first preset threshold, the pixel coordinates corresponding to the extreme point of the target image are taken as the extreme position of the target image.
  • the first preset threshold value is a first preset multiple of the time-domain noise standard deviation.
  • The time-domain noise standard deviation is the standard deviation of the difference between two adjacent frames of the same target, and the same target can be a flat surface with uniform temperature.
  • the first preset multiple may be greater than or equal to 3 times, for example, the first preset multiple is 4 times the time-domain noise standard deviation.
  • Whether the reference extreme point is an abnormal extreme point can be determined in the manner of determining whether a pixel is an abnormal extreme point described in the above embodiments.
  • In addition, the first preset threshold corresponding to when the extreme point is a maximum value and the first preset threshold corresponding to when the extreme point is a minimum value may be set differently.
  • In some embodiments, the first preset threshold is positively correlated with the pixel value of the extreme point of the target image; that is, it can be set as a dynamic threshold related to the pixel value of the current extreme point: the larger the pixel value of the extreme point, the larger the first preset threshold may be.
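  • The position-stabilization logic described above can be summarized by the following sketch (maximum point only, for brevity). It reuses the illustrative helpers from the earlier sketches and assumes a known time-domain noise standard deviation; the first preset multiple of 4 follows the example given above, and all names are assumptions rather than the application's implementation.

```python
def stabilize_extreme_position(prev_position, target_image, noise_std,
                               first_multiple: float = 4.0, k: float = 1.0):
    """Return the extreme position to report for the target image."""
    cur_pos, _ = find_extreme_points(target_image, k)
    if prev_position is None:              # first frame or updated designated area
        return cur_pos
    # Reference extreme point: the pixel of the target image at the previous extreme position.
    ref_block = pixel_area_block(target_image, *prev_position)
    if is_abnormal_extreme(ref_block, k):
        return cur_pos
    reference_value = float(target_image[prev_position])
    current_value = float(target_image[cur_pos])
    first_threshold = first_multiple * noise_std
    if abs(reference_value - current_value) <= first_threshold:
        return prev_position               # keep the previous position so the marker does not jump
    return cur_pos
```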
  • the method of the foregoing embodiment can not only quickly and accurately identify the extreme point of the target image, but also determine the extreme position of the target image, so as to ensure the numerical accuracy of the extreme point and the stability of the extreme position. For example, in temperature measurement, the position of the extreme temperature measurement point can be stabilized to ensure that the extreme position will not jump during stable temperature measurement.
  • In some embodiments, the extreme point identification method may further include: determining whether the extreme point of the target image satisfies a preset filtering trigger condition; if the extreme point of the target image satisfies the preset filtering trigger condition, performing time-domain filtering on the pixel value corresponding to the extreme point of the target image.
  • If the extreme point of the target image does not satisfy the preset filtering trigger condition, the pixel value corresponding to the extreme point of the target image is not filtered in the time domain, and the temperature corresponding to the extreme point is determined according to the unfiltered pixel value corresponding to the extreme point of the target image.
  • Before time-domain filtering is performed on the pixel value corresponding to the extreme point of the target image, the pixel value can also be judged, specifically: calculating the difference between the reference extreme value of the previous frame image and the pixel value corresponding to the extreme point of the target image, and determining whether the absolute value of the difference is less than or equal to a second preset threshold.
  • the reference extremum value is the pixel value of the pixel point corresponding to the target image at the extremum position in the previous frame of the target image.
  • If the absolute value of the difference is less than or equal to the second preset threshold, the pixel value corresponding to the extreme point of the target image is filtered in the time domain; if the absolute value of the difference is greater than the second preset threshold, no time-domain filtering is performed on the pixel value corresponding to the extreme point of the target image.
  • the second preset threshold is a second preset multiple of the time-domain noise standard deviation.
  • The time-domain noise standard deviation is the standard deviation of the difference between two adjacent frames of the same target, and the same target can be a plane with uniform temperature.
  • the second preset multiple may be greater than or equal to 3 times, for example, the second preset multiple is 5 times the standard deviation of the time-domain noise.
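  • A compact sketch of this trigger decision is shown below; it assumes the second preset threshold is a multiple of the time-domain noise standard deviation (5 in the example above), and is illustrative rather than the application's implementation.

```python
def extreme_filter_triggered(cur_position, prev_position,
                             cur_value: float, reference_value: float,
                             noise_std: float, second_multiple: float = 5.0) -> bool:
    """Return True when time-domain filtering should be applied to the extreme value."""
    if cur_position != prev_position:       # the extreme position changed between frames
        return False
    second_threshold = second_multiple * noise_std
    return abs(cur_value - reference_value) <= second_threshold
```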
  • the time domain filtering includes one of IIR time domain filtering and FIR time domain filtering.
  • If IIR time-domain filtering is used, specifically, the product of the pixel value of the extreme point of the target image and a first time-domain filter coefficient is calculated, the product of the pixel value of the extreme point of the previous frame image and a second time-domain filter coefficient is calculated, and the sum of the two products is used as the filtering result; wherein the sum of the first time-domain filter coefficient and the second time-domain filter coefficient is 1.
  • The filtering result can be expressed by the following formula:
  • x_o(p, q) = (1 - ratio) × x_c(p, q) + ratio × x_r(p, q)
  • where x_c(p, q) is the pixel value of the extreme point of the target image, x_r(p, q) is the pixel value of the extreme point of the previous frame image of the target image, ratio is the time-domain filter coefficient, which can be set as a constant (in the embodiment of the present application it can be set to 0.9), (p, q) are the extreme point coordinates, and x_o(p, q) is the filtered output result.
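  • As a runnable illustration of this formula (with ratio = 0.9 as in the embodiment, and an assumed function name):

```python
def iir_temporal_filter(x_current: float, x_previous: float, ratio: float = 0.9) -> float:
    """x_o = (1 - ratio) * x_c + ratio * x_r; the two filter coefficients sum to 1."""
    return (1.0 - ratio) * x_current + ratio * x_previous
```

  • For example, iir_temporal_filter(105.0, 100.0) returns 100.5, so a single-frame jump of the extreme value is strongly attenuated while a sustained change is still followed over several frames.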
  • In other embodiments, median filtering can also be used to filter the extreme points of the target image, in order to improve the stability of the value of the extreme point.
  • the extreme point identification method provided by the above embodiments can accurately identify the extreme point of the image, and can ensure the stability of the extreme position and the value of the extreme point in the current frame of the image.
  • When the identification method is applied to the various image processing fields that need to identify extreme points, the accuracy, stability and reliability of image processing can be improved.
  • an embodiment of the present application further provides a temperature measurement method, which includes the following steps: using the extreme point identification method described in any one of the above embodiments to identify the extreme point of the target image, so The target image is an infrared image; the temperature corresponding to the extreme point is determined according to the pixel value corresponding to the extreme point.
  • the extreme point recognition method can accurately identify the extreme point of the image, and can ensure the stability of the extreme position and the value of the extreme point in the current frame of the image. Therefore, the position of the extreme temperature measurement point can be stabilized to ensure that the extreme position will not jump during stable temperature measurement.
  • In addition, the filter exit mechanism ensures the speed and accuracy of the temperature measurement response: fast and accurate temperature measurement can be achieved when switching to a new target, and the temperature measurement result can quickly follow when the target temperature changes suddenly.
  • FIG. 7 is a schematic flowchart of the steps of a method for recognizing extreme positions according to an embodiment of the present application. This method is used to determine the extreme position of the image to improve the stability of the extreme position.
  • the method for identifying extreme positions includes steps S201 to S203.
  • S201 Acquire a target image, where the target image includes a current frame image or a designated area image determined in the current frame image;
  • S202 Determine whether the target image is the first frame image, or whether the target image is an updated designated area image
  • S203 When the target image is not the first frame image and is not the updated designated area image, determine the extreme position of the target image according to the previous frame image of the target image.
  • the extreme position of the target image is the pixel coordinate of the pixel point corresponding to the extreme point of the target image;
  • The extreme position of the previous frame image is the pixel coordinates of the pixel corresponding to the extreme point of the previous frame image.
  • Specifically, determining the extreme position of the target image according to the previous frame image of the target image includes: determining a reference extreme point corresponding to the target image, and judging whether the reference extreme point is an abnormal extreme point; wherein the reference extreme point is the pixel in the target image located at the extreme position of the previous frame image of the target image.
  • If the reference extreme point is not an abnormal extreme point, the extreme position in the previous frame image is used as the extreme position of the target image; if the reference extreme point is an abnormal extreme point, the extreme point of the target image is used to determine the extreme position of the target image.
  • The pixel value corresponding to the reference extreme point is the reference extreme value; accordingly, taking the extreme position in the previous frame image as the extreme position of the target image specifically includes: calculating the difference between the reference extreme value and the pixel value corresponding to the extreme point of the target image, and determining whether the absolute value of the difference is less than or equal to a first preset threshold.
  • If the absolute value of the difference is less than or equal to the first preset threshold, the extreme position in the previous frame image is taken as the extreme position of the target image; if the absolute value of the difference is greater than the first preset threshold, the pixel coordinates corresponding to the extreme point of the target image are taken as the extreme position of the target image.
  • the first preset threshold value is a first preset multiple of the time-domain noise standard deviation.
  • The time-domain noise standard deviation is the standard deviation of the difference between two adjacent frames of the same target, and the same target can be a plane with uniform temperature.
  • the first preset multiple may be greater than or equal to 3 times, for example, the first preset multiple is 4 times the time-domain noise standard deviation.
  • Whether the reference extreme point is an abnormal extreme point can be determined in the manner of determining whether a pixel is an abnormal extreme point described in the above embodiments.
  • In addition, the first preset threshold corresponding to when the extreme point is a maximum value and the first preset threshold corresponding to when the extreme point is a minimum value may be set differently.
  • In some embodiments, the first preset threshold is positively correlated with the pixel value of the extreme point of the target image; that is, it can be set as a dynamic threshold related to the pixel value of the current extreme point: the larger the pixel value of the extreme point, the larger the first preset threshold may be.
  • the method of the foregoing embodiment can determine the extreme position of the target image, thereby ensuring the stability of the extreme position. For example, in temperature measurement, the position of the extreme temperature measurement point can be stabilized to ensure that the extreme position will not jump during stable temperature measurement.
  • FIG. 8 is a schematic flowchart of steps of a temperature measurement method provided by an embodiment of the present application. This temperature measurement method ensures the stability of temperature measurement.
  • the temperature measurement method includes steps S301 to S304.
  • S301 Acquire a target image, where the target image includes a current frame image or a designated area image determined in the current frame image;
  • S304 Determine the temperature of the target point according to the filtering result.
  • the target point includes an extreme point or a designated point
  • the extreme point includes a maximum point and/or a minimum point of the target image
  • The designated point is a selected location point on the target image.
  • the selected location point on the target image can be selected by the user, or the device can automatically recognize the selection, which is not limited here.
  • If the target point is an extreme point, the judgment can be made through the extreme position corresponding to the extreme point; that is, determining whether the target point of the target image meets the preset filtering trigger condition specifically includes: determining whether the extreme position of the target image is consistent with the extreme position of the previous frame image of the target image.
  • If the extreme position of the target image is consistent with the extreme position of the previous frame image, it is determined that the extreme point of the target image satisfies the preset filtering trigger condition; if the extreme position of the target image is inconsistent with the extreme position of the previous frame image, it is determined that the extreme point of the target image does not satisfy the preset filtering trigger condition.
  • the stability of the position is ensured, and the stability of the value is guaranteed.
  • In some embodiments, when the extreme position of the target image is consistent with the extreme position of the previous frame image of the target image, the difference between the pixel value at the extreme position in the previous frame image and the pixel value of the pixel at the corresponding position in the target image may further be calculated, and it is determined whether the absolute value of the difference is less than or equal to a second preset threshold.
  • If the absolute value of the difference is less than or equal to the second preset threshold, it is determined that the extreme point of the target image satisfies the preset filtering trigger condition; if the absolute value of the difference is greater than the second preset threshold, it is determined that the extreme point of the target image does not meet the preset filtering trigger condition.
  • the second preset threshold is a second preset multiple of the time-domain noise standard deviation.
  • The time-domain noise standard deviation is the standard deviation of the difference between two adjacent frames of the same target, and the same target can be a plane with uniform temperature.
  • the second preset multiple may be greater than or equal to 3 times, for example, the second preset multiple is 5 times the standard deviation of the time-domain noise.
  • If the target point is a designated point, the judgment can be made through the designated position corresponding to the designated point; that is, determining whether the target point of the target image meets the preset filtering trigger condition specifically includes: determining, according to the designated position, whether the designated point has moved; if the designated point has not moved, it is determined that the designated point meets the preset filtering trigger condition; if the designated point has moved, it is determined that the designated point does not meet the preset filtering trigger condition.
  • If the designated point moves in the target image, it can be determined that the designated point does not satisfy the preset filtering trigger condition, and time-domain filtering is then not performed on the designated point. Specifically, when the pixel value of the designated point changes between the previous and current frames, it can be confirmed that the point has moved. This ensures that the filtering mechanism is exited in time, improving the response speed and accuracy of temperature measurement.
  • For example, the movement of the designated point may correspond to switching to a new target; this also ensures that rapid and accurate temperature measurement can be achieved when switching to a new target, and that the temperature measurement result quickly follows when the target temperature changes suddenly.
  • In some embodiments, the designated point in the previous frame image of the current frame image can also be determined according to the designated position; the difference between the pixel value corresponding to the designated point in the current frame image and the pixel value corresponding to the designated point in the previous frame image is calculated; and it is determined whether the absolute value of the difference is less than a third preset threshold.
  • If the absolute value of the difference is less than or equal to the third preset threshold, it is determined that the designated point satisfies the filtering trigger condition; if the absolute value of the difference is greater than the third preset threshold, it is determined that the designated point does not meet the filtering trigger condition.
  • the third preset threshold is a third preset multiple of the time-domain noise standard deviation.
  • The time-domain noise standard deviation is the standard deviation of the difference between two adjacent frames of the same target, and the same target can be a plane with uniform temperature.
  • the third preset multiple may be greater than or equal to 3 times, for example, the third preset multiple is 5 times the time-domain noise standard deviation.
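  • The designated-point trigger can be sketched as follows; the movement flag and the third preset multiple of 5 follow the description above, while the function name and parameters are illustrative assumptions.

```python
def designated_filter_triggered(cur_value: float, prev_value: float, moved: bool,
                                noise_std: float, third_multiple: float = 5.0) -> bool:
    """Return True when the designated point's pixel value should be time-domain filtered."""
    if moved:                               # the designated point moved, e.g. switched to a new target
        return False
    third_threshold = third_multiple * noise_std
    return abs(cur_value - prev_value) <= third_threshold
```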
  • time domain filtering includes one of IIR time domain filtering and FIR time domain filtering.
  • If IIR time-domain filtering is used, the product of the pixel value of the extreme point of the target image and the first time-domain filter coefficient is calculated, the product of the pixel value of the extreme point of the previous frame image and the second time-domain filter coefficient is calculated, and the sum of the two products is used as the filtering result; wherein the sum of the first time-domain filter coefficient and the second time-domain filter coefficient is 1.
  • the filtering result can be expressed by the following formula:
  • x_o(m, n) = (1 - ratio) × x_c(m, n) + ratio × x_r(m, n)
  • where x_c(m, n) is the pixel value of the designated point (or extreme point) of the target image, x_r(m, n) is the pixel value of the designated point (or extreme point) of the previous frame image of the target image, ratio is the time-domain filter coefficient, which can be set as a constant (in the embodiment of the present application it can be set to 0.9), (m, n) are the extreme point coordinates, and x_o(m, n) is the filtered output result.
  • Finally, the temperature corresponding to the target point is determined according to the filtered pixel value, that is, the temperature of the target point is determined according to the filtering result, which can improve the stability and accuracy of the measured temperature.
  • FIG. 9 is a schematic block diagram of an extreme point identification device provided by an embodiment of the present application.
  • the extreme point identification device 400 includes a processor 401 and a memory 402, and the processor 401 and the memory 402 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • the processor 401 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • Specifically, the memory 402 may be a Flash chip, a read-only memory (ROM) disk, an optical disk, a USB flash drive, or a removable hard disk.
  • the processor is configured to run a computer program stored in a memory, and when executing the computer program, implement any of the extreme point identification methods provided in the embodiments of the present application.
  • the processor is configured to run a computer program stored in a memory, and implement the following steps when the computer program is executed:
  • Acquire a target image, the target image including a current frame image or a designated area image determined in the current frame image; determine a plurality of neighboring pixels of a pixel in the target image; determine, according to the neighboring pixels, whether the pixel is an abnormal extreme point; and determine the extreme point of the target image according to the pixels that are non-abnormal extreme points.
  • When determining multiple neighboring pixels of a pixel in the target image, the processor is configured to implement:
  • a pixel area block is constructed for the pixels in the target image, and the pixels included in the pixel area block of the pixel points are regarded as adjacent pixels of the pixel points.
  • When determining, according to the neighboring pixels, whether the pixel is an abnormal extreme point, the processor is configured to implement:
  • the maximum pixel value and the minimum pixel value are determined from the pixel values of the plurality of adjacent pixels, and whether the pixel point is an abnormal extreme point is determined according to the maximum pixel value and the minimum pixel value.
  • When determining, according to the maximum pixel value and the minimum pixel value, whether the pixel is an abnormal extreme point, the processor is configured to implement:
  • If the pixel value corresponding to the pixel is greater than the maximum pixel value, and the difference between the pixel value corresponding to the pixel and the maximum pixel value is greater than or equal to a preset difference, it is determined that the pixel is an abnormal extreme point; if the pixel value corresponding to the pixel is less than the minimum pixel value, and the difference between the minimum pixel value and the pixel value corresponding to the pixel is greater than or equal to the preset difference, it is determined that the pixel is an abnormal extreme point.
  • the preset difference is a product of the difference between the maximum pixel value and the minimum pixel value multiplied by an extreme point determination coefficient, and the extreme point determination coefficient is a constant.
  • When constructing a pixel area block for a pixel in the target image, the processor is configured to implement:
  • a plurality of pixels adjacent to the pixel points are determined, and an area formed by the plurality of adjacent pixels is regarded as a pixel area block.
  • When determining a plurality of pixels adjacent to the pixel, the processor is configured to implement:
  • a pixel point adjacent to the pixel point and/or a pixel point close to the pixel point is regarded as an adjacent pixel point.
  • When the processor determines a plurality of pixels adjacent to the pixel and regards the area formed by the plurality of adjacent pixels as a pixel area block:
  • the pixel area block of the pixel point includes an area of 3*3 pixels, and the pixel point is a central pixel point of the pixel area block.
  • When determining the extreme points of the target image based on the pixels that are non-abnormal extreme points, the processor is configured to implement:
  • a plurality of determined pixels that are non-abnormal extreme value points form a non-abnormal extreme value point set; determine the pixel point with the largest pixel value and the pixel point with the smallest pixel value from the non-abnormal extreme value point set; and The pixel point with the largest pixel value and the pixel point with the smallest pixel value are used as extreme points of the target image.
  • The processor further implements: determining the extreme position of the target image according to a position determination condition satisfied by the target image; wherein different position determination conditions correspond to different ways of determining the extreme position of the target image.
  • When determining the extreme position of the target image according to the position determination condition satisfied by the target image, the processor is configured to implement: determining whether the target image is the first frame image, or whether the target image is an updated designated area image; if the target image is the first frame image or an updated designated area image, taking the pixel coordinates corresponding to the extreme point of the target image as the extreme position of the target image.
  • The processor further implements: if the target image is not the first frame image and is not an updated designated area image, determining the reference extreme point corresponding to the target image, and determining whether the reference extreme point is an abnormal extreme point; if the reference extreme point is an abnormal extreme point, taking the pixel coordinates corresponding to the extreme point of the target image as the extreme position of the target image;
  • wherein the reference extreme point is the pixel in the target image located at the extreme position of the previous frame image of the target image.
  • the processor further implements:
  • If the reference extreme point is not an abnormal extreme point, calculate the difference between the reference extreme value and the pixel value corresponding to the extreme point of the target image, and determine whether the absolute value of the difference is less than or equal to a first preset threshold; if the absolute value of the difference is less than or equal to the first preset threshold, the extreme position in the previous frame image is taken as the extreme position of the target image.
  • If the absolute value of the difference is greater than the first preset threshold, the pixel coordinates corresponding to the extreme point of the target image are used as the extreme position of the target image.
  • the first preset threshold is a first preset multiple of the standard deviation of time-domain noise.
  • the first preset threshold corresponding to when the extreme point is a maximum value is different from the first preset threshold corresponding to when the extreme point is a minimum value.
  • the first preset threshold is positively correlated with the pixel value of the extreme point of the target image.
  • the processor further implements: if the extreme point of the target image meets a preset filtering trigger condition, time-domain filtering is performed on the pixel value corresponding to the extreme point of the target image.
  • the processor further implements: if the extreme point of the target image does not meet a preset filtering trigger condition, then no time domain filtering is performed on the pixel value corresponding to the extreme point of the target image, and The temperature corresponding to the extreme point is determined according to the pixel value corresponding to the extreme point of the target image.
  • The processor further implements: determining whether the extreme position of the target image is consistent with the extreme position of the previous frame image of the target image; if the extreme position of the target image is consistent with the extreme position of the previous frame image, determining that the extreme point of the target image meets the preset filtering trigger condition; if the extreme position of the target image is inconsistent with the extreme position of the previous frame image, determining that the extreme point of the target image does not satisfy the preset filtering trigger condition.
  • Before the processor performs time-domain filtering on the pixel value corresponding to the extreme point of the target image, the method further includes:
  • the second preset threshold is a second preset multiple of the standard deviation of time-domain noise.
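Both thresholds are expressed as multiples of the time-domain noise standard deviation, which the description estimates from the difference of two adjacent frames of a temperature-uniform target. A minimal sketch of such an estimate follows; the sqrt(2) normalization (assuming independent noise in the two frames) is an added assumption, and the 4x/5x multiples are only the examples quoted in the description.

```python
import numpy as np

def temporal_noise_std(frame_a, frame_b):
    """Estimate the time-domain noise standard deviation from two adjacent
    frames of the same temperature-uniform target.  Dividing the
    frame-difference std by sqrt(2) is an assumption, not something the
    document states."""
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    return float(diff.std() / np.sqrt(2.0))

# Example multiples quoted in the description (both at least 3x):
# sigma = temporal_noise_std(frame_a, frame_b)
# first_threshold, second_threshold = 4 * sigma, 5 * sigma
```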
  • the time domain filtering includes one of IIR time domain filtering and FIR time domain filtering.
  • the time-domain filtering is IIR time-domain filtering; the processor implementing the time-domain filtering on the pixel value corresponding to the extreme point of the target image includes: calculating the product of the pixel value of the extreme point of the target image and a first time-domain filtering coefficient, calculating the product of the pixel value of the extreme point of the previous frame of image and a second time-domain filtering coefficient, and taking the sum of the two products as the filtering result, where the sum of the first time-domain filtering coefficient and the second time-domain filtering coefficient is 1.
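Written out, this IIR variant reduces to a one-line update; ratio = 0.9 is the example coefficient quoted later in the description, not a required value.

```python
def iir_temporal_filter(cur_val, prev_val, ratio=0.9):
    """Single-tap IIR update from the description:
    output = (1 - ratio) * current + ratio * previous, with the two
    coefficients summing to 1.  prev_val is the extreme-point pixel
    value taken from the previous frame."""
    return (1.0 - ratio) * cur_val + ratio * prev_val
```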
  • the processor further implements: filtering the extreme points of the target image by using median filtering.
  • FIG. 10 is a schematic block diagram of an extreme position identification device provided by an embodiment of the present application.
  • the extreme position identification device 500 includes a processor 501 and a memory 502, and the processor 501 and the memory 502 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • the processor 501 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 502 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk, or a mobile hard disk.
  • the processor is configured to run a computer program stored in a memory, and when executing the computer program, implement any of the extreme position identification methods provided in the embodiments of the present application.
  • the processor is configured to run a computer program stored in a memory, and implement the following steps when the computer program is executed:
  • acquire a target image, the target image including the current frame image or a specified area image determined in the current frame image; determine whether the target image is the first frame image, or whether the target image is an updated specified area image; and when the target image is neither the first frame image nor an updated specified area image, determine the extreme value position of the target image according to the previous frame of the target image.
  • the processor implementing the determining of the extreme value position of the target image according to the previous frame of the target image includes: determining the reference extreme value point corresponding to the target image, where the reference extreme value point is the pixel point in the target image corresponding to the extreme value position in the previous frame of the target image; and if the reference extreme value point is not an abnormal extreme value point, taking the extreme value position in the previous frame of image as the extreme value position of the target image.
  • the processor further implements: if the reference extreme value point is an abnormal extreme value point, determining the extreme value position of the target image from the extreme value point of the target image.
  • the pixel value corresponding to the reference extreme value point is a reference extreme value; the processor implementing the taking of the extreme value position in the previous frame of image as the extreme value position of the target image includes: calculating the difference between the reference extreme value and the pixel value corresponding to the extreme value point of the target image, and determining whether the absolute value of the difference is less than or equal to a first preset threshold; and if the absolute value of the difference is less than or equal to the first preset threshold, taking the extreme value position in the previous frame of image as the extreme value position of the target image.
  • the processor further implements: if the absolute value of the difference is greater than the first preset threshold, the pixel coordinates corresponding to the extreme value point of the target image are used as the extreme value position of the target image.
  • the first preset threshold is a first preset multiple of the standard deviation of time-domain noise.
  • the first preset threshold used when the extreme point is a maximum value differs from the first preset threshold used when the extreme point is a minimum value.
  • the first preset threshold is positively correlated with the pixel value of the extreme point of the target image.
  • FIG. 11 is a schematic block diagram of a temperature measuring device provided by an embodiment of the present application.
  • the temperature measuring device 600 includes a processor 601 and a memory 602, and the processor 601 and the memory 602 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
  • the processor 601 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 602 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk, or a mobile hard disk.
  • the processor is configured to run a computer program stored in a memory, and when executing the computer program, implement any temperature measurement method as provided in the embodiments of the present application.
  • the processor is configured to run a computer program stored in a memory, and implement the following steps when the computer program is executed:
  • the extreme value point recognition method provided in the foregoing embodiments is used to identify the extreme value point of a target image, where the target image is an infrared image; and the temperature corresponding to the extreme value point is determined according to the pixel value corresponding to the extreme value point.
  • the processor is configured to run a computer program stored in a memory, and implement the following steps when the computer program is executed:
  • acquire a target image, where the target image includes the current frame image or the specified area image determined in the current frame image; determine the target point of the target image; if the target point of the target image meets a preset filtering trigger condition, perform time-domain filtering on the pixel value corresponding to the target point of the target image and obtain a filtering result; and determine the temperature of the target point according to the filtering result.
  • the target point includes an extreme value point or a designated point, the extreme value point includes a maximum value point and/or a minimum value point of the target image, and the designated point is a location point selected on the target image.
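Stringing those steps together, one schematic reading of this temperature-measurement flow is sketched below. The pixel_to_temperature calibration is hypothetical (the document does not give the pixel-to-temperature mapping), and measure_target_point is an illustrative name rather than anything from the disclosure.

```python
def pixel_to_temperature(value):
    """Hypothetical radiometric calibration; the linear mapping below is a
    placeholder, not the disclosed mapping."""
    return 0.04 * value - 273.15

def measure_target_point(target, pos, prev_filtered, trigger_ok, ratio=0.9):
    """pos: (row, col) of the extreme point or designated point.
    prev_filtered: previously filtered value at that point, or None.
    trigger_ok: outcome of the preset filtering trigger condition."""
    cur_val = float(target[pos])
    if trigger_ok and prev_filtered is not None:
        filtered = (1.0 - ratio) * cur_val + ratio * prev_filtered  # IIR update
    else:
        filtered = cur_val   # jump out of filtering so the reading responds quickly
    return pixel_to_temperature(filtered), filtered
```

When the trigger fails (a new target, or a sudden temperature change), the raw value is used directly, which is what the description calls jumping out of filtering to keep the response fast.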
  • the extreme value point corresponds to an extreme value position; the processor further implements: determining whether the extreme position of the target image is consistent with the extreme position of the previous frame of the target image; if the extreme position of the target image is consistent with the extreme position of the previous frame of image, determining that the extreme point of the target image satisfies the preset filtering trigger condition; and if the extreme position of the target image is inconsistent with the extreme position of the previous frame of image, determining that the extreme point of the target image does not satisfy the preset filtering trigger condition.
  • after the processor implements the determining of whether the extreme position of the target image is consistent with the extreme position of the previous frame of the target image, the method further includes: if the extreme position of the target image is consistent with the extreme position of the previous frame of image, calculating the difference between the reference extreme value and the pixel value corresponding to the extreme value point of the previous frame of image, where the reference extreme value is the pixel value of the pixel point in the target image corresponding to the extreme position in the previous frame of the target image; determining whether the absolute value of the difference is less than or equal to a second preset threshold; if the absolute value of the difference is less than or equal to the second preset threshold, determining that the extreme point of the target image satisfies the preset filtering trigger condition; and if the absolute value of the difference is greater than the second preset threshold, determining that the extreme point of the target image does not satisfy the preset filtering trigger condition.
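The two bullets above amount to a two-stage gate before an extreme point is filtered; a compact sketch (names and array layout are illustrative):

```python
def extreme_filter_trigger(target, prev_frame, cur_pos, prev_pos, second_threshold):
    """Filter the extreme point only if its position matches the previous
    frame and the value change stays within the second preset threshold."""
    if cur_pos != prev_pos:
        return False                              # position jumped: skip filtering
    ref_val = float(target[prev_pos])             # reference extreme value
    prev_val = float(prev_frame[prev_pos])        # previous frame's extreme value
    return abs(ref_val - prev_val) <= second_threshold
```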
  • the designated point corresponds to a designated position; the processor further implements: determining, according to the designated position, whether the designated point has moved; if the designated point has not moved, determining that the designated point satisfies the preset filtering trigger condition; and if the designated point has moved, determining that the designated point does not satisfy the preset filtering trigger condition.
  • the processor further implements: determining a designated point of the previous frame of the current frame of image according to the designated position; calculating the difference between the pixel value corresponding to the designated point of the current frame of image and the pixel value corresponding to the designated point of the previous frame of image; determining whether the absolute value of the difference is less than a third preset threshold; if the absolute value of the difference is less than or equal to the third preset threshold, determining that the designated point meets the filtering trigger condition; and if the absolute value of the difference is greater than the third preset threshold, determining that the designated point does not meet the filtering trigger condition.
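For a designated point, the analogous gate described above can be sketched as follows; again the names are illustrative, and the motion test itself is left to the caller (the document ties it to a pixel-value change at the designated position).

```python
def designated_filter_trigger(cur_frame, prev_frame, pos, moved, third_threshold):
    """Filter the designated point only if it has not moved and its value
    change stays within the third preset threshold."""
    if moved:
        return False
    diff = float(cur_frame[pos]) - float(prev_frame[pos])
    return abs(diff) <= third_threshold
```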
  • the second preset threshold is a second preset multiple of the standard deviation of time-domain noise.
  • the third preset threshold is a third preset multiple of the standard deviation of time-domain noise.
  • the processor implementing the time-domain filtering on the pixel value corresponding to the extreme point of the target image and obtaining the filtering result includes: calculating the product of the pixel value of the extreme point of the target image and a first time-domain filtering coefficient, calculating the product of the pixel value of the extreme point of the previous frame of image and a second time-domain filtering coefficient, and taking the sum of the two products as the filtering result, where the sum of the first time-domain filtering coefficient and the second time-domain filtering coefficient is 1.
  • the embodiments of the present application also provide a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the extreme point identification method, the extreme position identification method, or the temperature measurement method provided in the foregoing embodiments.
  • the computer-readable storage medium may be an internal storage unit of the device described in any of the foregoing embodiments, such as a hard disk or memory of the movable platform.
  • the computer-readable storage medium may also be an external storage device of the movable platform, for example, a plug-in hard disk equipped on the movable platform, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), etc.

Abstract

A recognition method, a temperature measurement method, a device, and a storage medium. The method comprises: acquiring a target image, the target image comprising a current frame image or a designated area image determined in the current frame image (S101); determining multiple neighboring pixel points of a pixel point in the target image (S102); determining, according to the neighboring pixel points, whether the pixel point is an abnormal extreme value point (S103); and determining an extreme value point of the target image according to the pixel points that are non-abnormal extreme value points (S104).

Description

识别方法、测温方法、设备及存储介质 技术领域
本申请涉及图像处理技术领域,尤其涉及一种极值点识别方法、极值位置识别方法、测温方法、装置、红外测温设备、可移动平台及存储介质。
背景技术
目前,在工业生产及设备故障诊断中,为了保证设备的正常运行以及产品的质量,经常需要对设备以及产品进行温度监测,目前一般都是靠人工手动用红外测温设备进行对准监测。而当设备在高处或人不易到达的地方时,可采用无人机搭载红外测温设备来进行巡检测温。在进行红外测温时,如果能够在广视角内快速准确地捕捉到高低温目标并得到稳定的温度,并且在对准目标时能快速得到目标的准确稳定的温度值,无疑能够显著增加无人机的巡检测温效率。
在使用红外测温设备测温时,指定点或指定区域的平均温度、最小温度和最大温度是测温关注的重点,然而由于红外噪声或其他因素的影响,直接采用红外图像中的像素值像来映射温度,会造成输出的温度值跳动且稳定性差;当区域内存在两个温度相近的极值点时,在视频图像上标记图像极值位置的标记点也会在两个极值位置中反复切换,测温显示观感性差。因此,如何识别极值点和极值位置以提高温度测量的稳定性和准确性成为亟需解决的问题。
发明内容
为了解决以上其中一个问题,本申请提供了一种极值点识别方法、极值位置识别方法、测温方法、装置、红外测温设备、可移动平台及存储介质,可以更为准确地识别极值点和极值位置。
第一方面,本申请提供了一种极值点识别方法,所述方法包括:
获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;
确定所述目标图像中像素点的多个临近像素点;
根据所述临近像素点确定所述像素点是否为异常极值点;
根据为非异常极值点的像素点确定所述目标图像的极值点。
第二方面,本申请还提供了一种极值位置识别方法,所述方法包括:
获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;
确定所述目标图像是否为第一帧图像,或者所述目标图像是否为更新的指定区域图像;
当所述目标图像不是第一帧图像且也不是更新的指定区域图像时,根据所述目标图像的前一帧图像确定所述目标图像的极值位置。
第三方面,本申请还提供了一种测温方法,所述方法包括:
采用上述的极值点识别方法识别目标图像的极值点,所述目标图像为红外图像;
根据所述极值点对应的像素值确定所述极值点对应的温度。
此外,还提供了另一种测温方法,所述方法包括:
获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;
确定所述目标图像的目标点;
若所述目标图像的目标点满足预设滤波触发条件,对所述目标图像的目标点对应的像素值进行时域滤波,并获得滤波结果;
根据所述滤波结果确定所述目标点的温度。
第四方面,本申请还提供了一种极值点识别装置,所述极值点识别装置包括存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如上述的极值点识别方法的步骤。
第五方面,本申请还提供了一种极值位置识别装置,所述极值位置识别装置包括存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现 如上述的极值位置识别方法的步骤。
第六方面,本申请还提供了一种测温装置,所述测温装置包括存储器和处理器;
所述存储器用于存储计算机程序;
所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如上述的测温方法的步骤。
第七方面,本申请还提供了一种红外测温设备,所述红外测温设备包括光学镜头、光电探测器、存储器和处理器;
所述光学镜头用于接收目标物发射的红外光;
所述光电探测器,与所述处理器电连接,用于将所述红外光转换成电信号并发送至所述处理器进行成像;
所述存储器用于存储计算机程序;
所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如上述的测温方法的步骤。
第八方面,本申请还提供了一种可移动平台,所述可移动平台包括红外测温设备、存储器和处理器;
所述红外测温设备,搭载在所述可移动平台上,用于获取红外图像;
所述存储器用于存储计算机程序;
所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如上述的测温方法的步骤。
第九方面,本申请还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现上述的地形检测方法。
本申请提出的一种极值点识别方法、极值位置识别方法、测温方法、装置、红外测温设备、可移动平台及存储介质,可以更为准确地识别极值点和极值位置,若将该极值点识别方法、极值位置识别方法应用在测温中,即本申请提供的测温方法,可以提高温度测量的准确性、稳定性和测温效率。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本申请。
附图说明
为了更清楚地说明本申请实施例技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一实施例提供的一种可移动平台的示意性框图;
图2是本申请一实施例提供的一种红外测温设备的示意性框图;
图3是本申请一实施例提供的一种极值点识别方法的步骤示意流程图;
图4是本申请实施例提供的确定临近像素点的效果示意图;
图5是本申请实施例提供的确定像素区域块的效果示意图;
图6是本申请实施例提供的另一种确定临近像素点的效果示意图;
图7是本申请一实施例提供的一种极值位置识别方法的步骤示意流程图;
图8是本申请一实施例提供的一种测温方法的步骤示意流程图;
图9是本申请一实施例提供的极值点识别装置的示意性框图;
图10是本申请一实施例提供的极值位置识别装置的示意性框图;
图11是本申请一实施例提供的测温装置的示意性框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
附图中所示的流程图仅是示例说明,不是必须包括所有的内容和操作/步骤,也不是必须按所描述的顺序执行。例如,有的操作/步骤还可以分解、组合或部分合并,因此实际执行的顺序有可能根据实际情况改变。
本申请的实施例提供了一种极值点识别方法、极值位置识别方法、测温方法、装置、红外测温设备、可移动平台及存储介质,可以更为准确地识别极值点和极值位置。
示例性的，若测温过程中采用极值点识别方法和/或极值位置识别方法，可以快速准确地确定极值点对应的温度，进而提高了温度测量的准确性、稳定性以及可靠性。若将测温方法应用在可移动平台中，利用可移动平台对目标物进行巡检测温，可以提高可移动平台的测温效率。
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
在本申请的实施例中,可移动平台包括飞行器、机器人或自动无人驾驶车辆等。可移动平台上搭载有红外测温设备,当然也可以是其他利用图像采集实现测温的设备。其中,该红外测温设备包括红外测温仪、红外阵列成像设备等。
如图1所示,可移动平台100包括红外测温设备110,红外测温设备110搭载在可移动平台100上,用于获取红外图像。
其中,可移动平台100可以为飞行器,该飞行器包括无人机,该无人机包括旋翼型无人机,例如四旋翼无人机、六旋翼无人机、八旋翼无人机,也可以是固定翼无人机,还可以是旋翼型与固定翼无人机的组合,在此不作限定。
可移动平台100包括处理器101和存储器102,处理器101和存储器102通过总线连接,该总线比如为I2C(Inter-integrated Circuit)总线。
具体地,处理器101可以是微控制单元(Micro-controller Unit,MCU)、中央处理单元(Central Processing Unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
具体地,存储器102可以是Flash芯片、只读存储器(ROM,Read-Only Memory)磁盘、光盘、U盘或移动硬盘等。
其中,所述处理器用于运行存储在存储器中的计算机程序,并在执行所述计算机程序时实现如本申请实施例提供的一种的极值点识别方法、极值位置识别或测温方法的步骤。以提高识别极值点和极值位置识别准确率,以及提高温度测量的准确性和稳定性。
示例性的,通过红外测温设备采集当前帧图像,并将当前帧图像发送给可移动平台;可移动平台获取目标图像,并确定所述目标图像中像素点的多个临近像素点,根据所述临近像素点确定所述像素点是否为异常极值点;以及根据为非异常极值点的像素点确定所述目标图像的极值点。其中,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像。确定目标图像的极值点后,即可以根据该极值点确定该极值点对应的温度。
由此,在使用飞行器进行红外测温时,通过该极值点识别方法能够在广视角内快速准确的捕捉到高低温目标并得到稳定的温度,对准目标时能快速得到目标准确稳定的温度值,因此增加了无人机的测温效率。
如图2所示,红外测温设备110包括光学镜头111、光电探测器112、存储器113和处理器114。
其中,光学镜头111用于接收目标物发射的红外光;光电探测器112与处理器114电连接,用于将所述红外光转换成电信号并发送至处理器114进行成像;存储器113用于存储计算机程序;处理器114用于执行所述计算机程序并在执行所述计算机程序时,实现如本申请实施例提供任一种的极值点识别方法、极值位置识别或测温方法的步骤。以提高识别极值点和极值位置识别准确率,以及提高温度测量的准确性和稳定性。
可以理解的是,红外测温设备可以将获取的红外图像作为目标图像发送至可移动平台,由所述可移动平台实现如本申请实施例提供任一种的极值点识别方法、极值位置识别或测温方法的步骤。以提高识别极值点和极值位置识别准确率,以及提高温度测量的准确性、稳定性和效率。
应理解的是,上述对于可移动平台、红外测温设备各组成部分的命名仅是出于标识的目的,并不应理解为对本说明书的实施例的限制。
应理解的是,本申请的实施例提供任一种的极值点识别方法和极值位置识别方法,用于准确且快速地识别出图像的极值点和极值点的极值位置,并确保不同帧之间的极值位置的稳定性。因此,极值点识别方法和极值位置识别方法不仅可以应用在基于红外测温中,还可以应用在其他图像处理领域中,比如视觉定位等。
还应理解的是,本申请的实施例提供任一种的极值点识别方法、极值位置识别方法和测温方法,不仅仅可以应用在红外测温设备或可移动平台中,还可以应用在其他电子设备。
为了便于理解,在本申请以下的实施例中以飞行器和红外测温设备为例,对本申请实施例提供的方法进行详细介绍。
在采用红外测温设备进行测温,比如红外阵列成像设备,如果直接采用每帧图像的像素点的像素值或极值点对应的像素值来映射温度,随机时域噪声或环境等引起的温度突变噪声必然会引起温度显示值的不稳定跳动,进而造成测 温效果变差。
目前,为了得到平滑稳定的测温结果,一般有以下几种方法:
一是降低测温频率,当测温频率较低时测温结果的跳动及极值位置的跳动将变得不明显,但这是以牺牲测温快速性及实时性为代价的。
二是进行测温像素空间上或时间上的平滑,如空间上不是进行指定点的测温,而测量的是包含指定点在内的一定小区域的平均温度,这样会造成只占单个或几个像素点的小目标无法准确测温,局部的高温或低温点被平滑,也可能测温局部区域没有全部落到测温目标上造成测温不准确,同时时间上平滑主要是测温时需要拉长测温时间,对同一测温目标采集多次,多帧平均才会输出准确的测温结果,这样会造成对准目标后刚开始的测温结果没有达到准确的稳定值,测温响应慢且难以实现目标温度变化下的准确测温。
造成以上问题,主要是测温时采集的红外图像的极值点识别不够准确、极值位置不稳定,或者用于表征测温值的像素值也不够稳定等原因。
因此,本申请的实施例提供了一种极值点识别方法、极值位置识别方法和测温方法以解决测温时测温响应慢、测温结果不准确以及不够稳定等问题。
请参阅图3,图3是本申请一实施例提供的一种极值点识别方法的步骤示意流程图。该方法用于快速准确地识别采集图像的极值点。
如图3所示,该极值点识别方法包括步骤S101至步骤S104。
S101、获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像。
其中,目标图像可以采集的当前帧图像,或者是在当前帧图像中选定的部分区域图像,即指定区域图像。
比如,通过显示当前帧图像,并获取用户在当前帧图像中选择的区域作为指定区域图像。
需要说明的是,当前帧图像可以是当前显示的图像,也可以为指定的某帧待处理图像。
S102、确定所述目标图像中像素点的多个临近像素点。
具体地,可以确定所述目标图像中的每个像素点的多个临近像素点。或者为了更快速地确定目标图像的极值点,也可以确定所述目标图像中的部分像素点的多个临近像素点。
示例性的,该部分像素点可以为准极值点,比如将像素值大于第一预设像素值的像素点作为准极值点,或者将像素值大于第二预设像素值的像素点作为准极值点。其中,第一预设像素值大于第二预设像素值。
需要说明的是,临近像素点可以是与该像素点相邻的像素点或者靠近该像素点的像素点。
示例性的,如图4所示,与像素点A相邻的四个像素点A1、像素点A2、像素点A3和像素点A4均可以作为像素点A的临近像素点;或者靠近像素点A的四个像素点A5、像素点A6、像素点A7和像素点A8也可以作为像素点A的临近像素点。
在一些实施例中,可以为所述目标图像中的像素点构建像素区域块,将所述像素点的像素区域块包含的像素点作为所述像素点的临近像素点。由此可以快速地确定临近像素点,并提高了极值点的识别效率。
具体地,可以确定多个与所述像素点临近的像素点,将多个所述临近的像素点构成的区域作为像素区域块。其中,确定多个与所述像素点临近的像素点,可以将与所述像素点相邻的像素点和/或靠近所述像素点的像素点作为临近的像素点。
示例性的,如图5所示,将像素点B相邻的像素点和/或靠近像素点B的像素点(具体为像素点B1、像素点B2、像素点B3、像素点B4、像素点B5、像素点B6、像素点B7和像素点B8)作为临近的像素点,多个所述临近的像素点构成的区域S作为像素区域块。
在一个实施例中,若所述像素点在所述目标图像中为边缘像素点;确定像素区域块具体可以为:确定与所述像素点临近的像素点,对所述临近的像素点作镜像处理以得到镜像像素点;以及将所述临近的像素点和所述镜像像素点构成的区域作为像素区域块。由此可以每个像素点均构成像素区域块,进而提高了极值点的识别准确率。
其中,边缘像素点包括目标图像的边缘上的像素点和四个顶角上的像素点。像素点作镜像处理包括水平镜像、竖直镜像和中心点镜像等。
示例性的,如图6所示,比如像素点C在目标图像中为边缘像素点,则确定与像素点C临近的像素点(具体为像素点C1、像素点C2和像素点C3),对这些临近的像素点作镜像处理以得到镜像像素点(具体为像素点C11、像素点 C21、像素点C21、像素点C23和像素点C31);以及将这些临近的像素点和镜像像素点构成的区域S作为像素区域块。
实际应用中,所述像素点的像素区域块包括3*3个像素点的区域,即9个像素点组成像素区域块。其中,所述像素点为所述像素区域块的中心像素点。当然也可以采用其他形式或数量的像素点,比如5*5个像素点的区域。
S103、根据所述临近像素点确定所述像素点是否为异常极值点。
具体地,可以将所述像素点的多个临近像素点的像素值与该像素点的像素值进行比较,以确定该像素点是否为异常极值点。
在一些实施例中,为了快速准确地确定所述像素点是否为异常极值点,具体可以采用:从所述多个临近像素点的像素值中确定最大像素值和最小像素值,根据所述最大像素值和最小像素值确定所述像素点是否为异常极值点。
示例性的,根据所述最大像素值和最小像素值确定所述像素点是否为异常极值点,具体可以采用:确定所述像素点对应的像素值是否大于所述最大像素值,以及所述像素点对应的像素值与所述最大像素值的差值是否大于或等于预设差值。
若所述像素点对应的像素值大于所述最大像素值,且所述像素点对应的像素值与所述最大像素值的差值大于或等于预设差值,则确定所述像素点为异常极值点。
若所述像素点对应的像素值小于所述最小像素值,且所述最小像素值与所述像素点对应的像素值的差值大于或等于预设差值,则确定所述像素点为异常极值点。
其中,所述预设差值为所述最大像素值与所述最小像素值的差值乘以极值点判断系数得到的乘积,所述极值点判断系数为常数。
需要说明的是,在与最大像素值和最小像素值比较时,对应的预设差值可以相同,也可以不同。
具体地,将上述确定异常极值点表示为以下公式,若像素点的像素值满足下述公式之一,则可以确定该像素点为异常极值点。
x > maxn 且 x - maxn ≥ k×(maxn - minn)；或
x < minn 且 minn - x ≥ k×(maxn - minn)
其中,minn为最小像素值,maxn为最大像素值,x表示该像素点的像素值,k为极值点判断系数,可以取一个经验值,比如k=2。
当像素点的像素值x符合以上两个条件之一时,则可以确定该点为异常极值点;否则该像素点为非异常像素点。
具体地,假设采用3*3个像素点组成的像素区域块,通过上述公式可以确定,若该像素点与最邻近的8个像素值均差距较大时,则表明该点可能是一个孤立的椒盐噪声点,作为异常像素点进行排除,将不会参与极值点的识别。由此提高了极值点识别的准确率。
在一些实施例中,也可以采用中值滤波的方式,有效去除可能为椒盐噪声的像素点,但中值滤波会将真实的最大值及最小值当作噪声滤除,也会漏检只占几个像素的小目标极值点,不利于极值点的准确获取。
通过上述方式遍历目标图像的每个像素点或者部分像素点,可以确定目标图像中的哪些像素点为异常极值点,哪些像素点为非异常极值点。
S104、根据为非异常极值点的像素点确定所述目标图像的极值点。
具体地,从多个为非异常极值点中确定具有最小像素值的像素点和具有最大像素值的像素点,该具有最小像素值的像素点和具有最大像素值的像素点即为所述目标图像的极值点。
在一些实施例中,根据为非异常极值点的像素点确定所述目标图像的极值点,具体可采用:将确定的多个为非异常极值点的像素点组成非异常极值点集合;从所述非异常极值点集合中确定具有最大像素值的像素点和具有最小像素值的像素点;以及将所述具有最大像素值的像素点和所述具有最小像素值的像素点作为所述目标图像的极值点,由此可以快速地确定极值点。
上述各实施例提供的极值点识别方法,可以有效地剔除异常极值点,进而提高了极值点识别的准确率,为需要确定极值点的图像处理提供了一种更为准确以及快速的识别方法。
在一些实施例中,在确定极值点之后,为了确定极值位置的稳定,还可以确定该目标图像的极值位置。比如按照每帧图像的极值点计算结果进行极值温度的输出更新,但是在某些场景下极值位置及数值会反复跳动,特别是当一个场景存在两个或多个温度接近的高低温目标或高低温目标是大面积背景时,此时不利于真实高低温目标的温度检测和观察。
其中,确定目标图像的极值位置具体为:根据所述目标图像满足的位置确定条件确定所述目标图像的极值位置。
需要说明的是,该极值位置为极值点对应的像素点在所述目标图像中的像素坐标。
其中,不同的位置确定条件确定所述目标图像的极值位置的方式不同。
示例性的,根据所述目标图像满足的位置确定条件确定所述目标图像的极值位置,具体为:确定所述目标图像是否为第一帧图像,或者所述目标图像是否为更新的指定区域图像。
若所述目标图像为第一帧图像或为更新的指定区域图像,将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
其中,第一帧图像为采集目标图像时对应的首帧图像,更新的指定区域图像是指在不同的当前帧图像中变化了指定区域对应的指定区域图像。若所述目标图像为第一帧图像或为更新的指定区域图像,将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
其中,若所述目标图像不是第一帧图像且也不是更新的指定区域图像,确定所述目标图像对应的参考极值点,并确定所述参考极值点是否为异常极值点;若所述参考极值点为异常极值点,将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
需要说明的是,所述参考极值点为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点,所述参考极值点对应的像素值为参考极值。
为了进一步地确保极值点对应的极值位置的稳定性,在确定所述参考极值点不是异常极值点之后,还可以做一步地判断处理,以确保所述目标图像的极值位置。
示例性的,若所述参考极值点不是异常极值点后,计算所述参考极值与所述目标图像的极值点对应的像素值的差值,并确定所述差值的绝对值是否小于或等于第一预设阈值;若所述差值的绝对值小于或等于所述第一预设阈值,则将所述前一帧图像中的极值位置作为所述目标图像的极值位置。
示例性的,若所述差值的绝对值大于所述第一预设阈值,则将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
需要说明的是,所述第一预设阈值为时域噪声标准差的第一预设倍数。其 中,时域噪声标准差为同一目标相邻两帧图像的差值,该同一目标可以温度均匀的平面。具体地,第一预设倍数可以为大于或等于3倍,比如第一预设倍数为时域噪声标准差的4倍。
需要说明的是,确定所述参考极值点是否为异常极值点可以上述实施例确定像素点是否为异常极值点的方式。
在一些实施例中,为了更为准确地确定目标图像的极值位置且保证极值位置的稳定性,可以将所述极值点为极大值时对应的第一预设阈值与所述极值点为极小值时对应的第一预设阈值设置不同。
具体地,所述第一预设阈值与所述目标图像的极值点的像素值正相关,即可以设定为和当前极值点的像素值有关的动态阈值,极值点的像素值较大时第一预设阈值可以相对更大一些。
上述实施例的方法不仅可以快速准确地识别目标图像的极值点,还可以确定目标图像的极值位置,以确保极值点的数值准确性以及极值位置的稳定性。比如应用在测温中,可以实现极值测温点位置的稳定,保证稳定测温时极值位置不会发生跳动。
在一些实施例中,在确定极值点的准确性以及极值位置的稳定性之后,还需要确保极值点在数值的稳定,因此该极值点的方法还可包括:确定所述目标图像的极值点是否满足预设滤波触发条件;若所述目标图像的极值点满足预设滤波触发条件,对所述目标图像的极值点对应的像素值进行时域滤波。
此外,若所述目标图像的极值点不满足预设滤波触发条件,则不对所述目标图像的极值点对应的像素值进行时域滤波,并根据所述目标图像的极值点对应的像素值确定所述极值点对应的温度。
示例性的,为了更为准确地确保极值点的数值稳定,确定所述目标图像的极值点是否满足预设滤波触发条件,具体可以采用:确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致;若所述目标图像的极值位置与所述前一帧图像的极值位置一致,确定所述目标图像的极值点满足预设滤波触发条件;若所述目标图像的极值位置与所述前一帧图像的极值位置不一致,则确定所述目标图像的极值点不满足预设滤波触发条件。
在一些实施例中,为了避免目标图像采集时摄像装置的干扰,比如,因为红外相机打快门或其他特殊情况时,前后帧同一目标图像输出亮度上可能会有 较大差异,如果此时进行了时域滤波处理则会带来滤波错误。
为此,在对所述目标图像的极值点对应的像素值进行时域滤波之前,还可以进行像素值的判断,具体为:计算参考极值与所述前一帧图像的极值点对应的像素值的差值;确定所述差值的绝对值是否小于或等于第二预设阈值。其中,所述参考极值为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点的像素值。
若所述差值的绝对值小于或等于所述第二预设阈值,则对所述目标图像的极值点对应的像素值进行时域滤波;若所述差值的绝对值大于所述第二预设阈值,则不对所述根据所述目标图像的极值点进行时域滤波。
需要说明的是,所述第二预设阈值为时域噪声标准差的第二预设倍数。其中,时域噪声标准差为同一目标相邻两帧图像的差值,该同一目标可以温度均匀的平面。具体地,第二预设倍数可以为大于或等于3倍,比如第二预设倍数为时域噪声标准差的5倍。
需要说明的是,在本申请的实施例中,所述时域滤波包括IIR时域滤波和FIR时域滤波的一种。
示例性的,比如采用IIR时域滤波,具体为计算所述目标图像的极值点的像素值与第一时域滤波系数的乘积,计算所述前一帧图像的极值点的像素值与第二时域滤波系数的乘积,将两个所述乘积之和作为滤波结果;其中,所述第一时域滤波系数与所述第二时域滤波系数的和为1。
具体地,滤波结果可以用以下公式表示:
x_o(p,q)=(1-ratio)×x_c(p,q)+ratio×x_r(p,q)
其中，x_c(p,q)为目标图像的极值点的像素值，x_r(p,q)为目标图像的前一帧图像的极值点的像素值，ratio为时域滤波系数，可设置为常数，在本申请的实施例中可取为0.9，(p,q)为极值点坐标，x_o(p,q)为滤波输出结果。
在一些实施例中,当然也可以采用中值滤波对所述目标图像的极值点进行滤波。以提高极值点的数值的稳定性。
上述各实施例提供的极值点识别方法,可以准确地识别到图像的极值点、并可以确保当前帧图像中的极值位置以及极值点的数值的稳定性。将该识别方法应用到各种需要识别极值点的图像处理领域,由此可以提高图像处理的准确性、稳定性和可靠性。
示例性的,本申请的实施例还提供一种测温方法,该测温方法包括以下步骤:采用上述实施例中任意一项所述的极值点识别方法识别目标图像的极值点,所述目标图像为红外图像;根据所述极值点对应的像素值确定所述极值点对应的温度。
由于极值点识别方法可以准确地识别到图像的极值点、并可以确保当前帧图像中的极值位置以及极值点的数值的稳定性。因此可实现极值测温点位置的稳定,保证稳定测温时极值位置不会发生跳动,同时滤波跳出机制(即何时进行时域滤波或不进行时域滤波)保证了测温响应的快速性及准确性,切换新目标时能实现快速准确测温,目标温度突变时测温结果能够快速跟随。
请参阅图7,图7是本申请实施例提供的一种极值位置识别方法的步骤示意流程图。该方法用于确定图像极值位置,以提高极值位置的稳定性。
如图7所示,该极值位置识别方法包括步骤S201至步骤S203。
S201、获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;
S202、确定所述目标图像是否为第一帧图像,或者所述目标图像是否为更新的指定区域图像;
S203、当所述目标图像不是第一帧图像且也不是更新的指定区域图像时,根据所述目标图像的前一帧图像确定所述目标图像的极值位置。
需要说明的是,所述目标图像的极值位置为所述目标图像的极值点对应的像素点的像素坐标;所述前一帧图像的极值位置为所述前一帧图像的极值点对应的像素点的像素坐标。
示例性的,根据所述目标图像的前一帧图像确定所述目标图像的极值位置,具体为:确定所述目标图像对应的参考极值点;判断所述参考极值点是否为异常极值点;其中,所述参考极值点为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点。
示例性的,若所述参考极值点不是异常极值点,将所述前一帧图像中的极值位置作为所述目标图像的极值位置;若所述参考极值点为异常极值点,将所述目标图像的极值点确定所述目标图像的极值位置。
在一些实施例中,所述参考极值点对应的像素值为参考极值;相应地,将所述前一帧图像中的极值位置作为所述目标图像的极值位置,具体为:计算所 述参考极值与所述目标图像的极值点对应的像素值的差值,并确定所述差值的绝对值是否小于或等于第一预设阈值。
示例性的,若所述差值的绝对值小于或等于所述第一预设阈值,则将所述前一帧图像中的极值位置作为所述目标图像的极值位置;若所述差值的绝对值大于所述第一预设阈值,则将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
需要说明的是,所述第一预设阈值为时域噪声标准差的第一预设倍数。其中,时域噪声标准差为同一目标相邻两帧图像的差值,该同一目标可以温度均匀的平面。具体地,第一预设倍数可以为大于或等于3倍,比如第一预设倍数为时域噪声标准差的4倍。
需要说明的是,确定所述参考极值点是否为异常极值点可以上述实施例确定像素点是否为异常极值点的方式。
在一些实施例中,为了更为准确地确定目标图像的极值位置且保证极值位置的稳定性,可以将所述极值点为极大值时对应的第一预设阈值与所述极值点为极小值时对应的第一预设阈值设置不同。
具体地,所述第一预设阈值与所述目标图像的极值点的像素值正相关,即可以设定为和当前极值点的像素值有关的动态阈值,极值点的像素值较大时第一预设阈值可以相对更大一些。
上述实施例的方法可以确定目标图像的极值位置,进而确保极值位置的稳定性。比如应用在测温中,可以实现极值测温点位置的稳定,保证稳定测温时极值位置不会发生跳动。
请参阅图8,图8是本申请实施例提供的一种测温方法的步骤示意流程图。该测温方法确保测温的稳定性。
如图8所示，该测温方法包括步骤S301至步骤S304。
S301、获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;
S302、确定所述目标图像的目标点;
S303、若所述目标图像的目标点满足预设滤波触发条件,对所述目标图像的目标点对应的像素值进行时域滤波,并获得滤波结果;
S304、根据所述滤波结果确定所述目标点的温度。
其中,所述目标点包括极值点或指定点,所述极值点包括所述目标图像的极大值点和/或极小值点,所述指定点为在所述目标图像上选定的位置点。
需要说明的是,在所述目标图像上选定的位置点可以由用户选择,或者设备自动识别选择,在此不做限定。
若目标点为极值点,可通过所述极值点对应极值位置进行确定,即确定所述目标图像的目标点是否满足预设滤波触发条件,具体为:确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致。
若所述目标图像的极值位置与所述前一帧图像的极值位置一致,确定所述目标图像的极值点满足预设滤波触发条件;若所述目标图像的极值位置与所述前一帧图像的极值位置不一致,则确定所述目标图像的极值点不满足预设滤波触发条件。通过极值位置即保证了位置上的稳定,有保证了数值上的稳定。
在一些实施例中,为了更好确定数值的稳定性,以及及时跳出滤波,进而确保了测温响应的快速性及准确性,切换新目标时能实现快速准确测温,目标温度突变时测温结果能够快速跟随。
示例性的,在确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致之后,若所述目标图像的极值位置与所述前一帧图像的极值位置一致,则需要计算参考极值与所述前一帧图像的极值点对应的像素值的差值,其中,所述参考极值为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点的像素值;确定所述差值的绝对值是否小于或等于第二预设阈值。
若所述差值的绝对值小于或等于所述第二预设阈值,确定所述目标图像的极值点满足预设滤波触发条件;若所述差值的绝对值大于所述第二预设阈值,确定所述目标图像的极值点不满足预设滤波触发条件。
需要说明的是,所述第二预设阈值为时域噪声标准差的第二预设倍数。其中,时域噪声标准差为同一目标相邻两帧图像的差值,该同一目标可以温度均匀的平面。具体地,第二预设倍数可以为大于或等于3倍,比如第二预设倍数为时域噪声标准差的5倍。
若目标点为指定点,可通过所述指定点对应指定位置进行确定,即确定所述目标图像的目标点是否满足预设滤波触发条件,具体为:根据所述指定位置确定所述指定点是否发生运动;若所述指定点未发生运动,则确定所述指定点 满足预设滤波触发条件;若所述指定点发生运动,则确定所述指定点不满足预设滤波触发条件。
具体地,若指定点在目标图像中发生了移动,可以确定所述指定点不满足预设滤波触发条件,则不对该指定点进行时域滤波。具体地,当所述指定点在前后帧发生了像素值变化,即可确认其发生了移动。进而确保及时跳出滤波机制,提高了测温响应速度及准确性。一般指定点在目标图像中发生了移动,有可能是切换了新目标,由此也确保了切换新目标时能实现快速准确测温,目标温度突变时测温结果能够快速跟随。
在一些实施例中,为了更为准确地确保测温在数值的稳定性,还可以根据所述指定位置确定所述当前帧图像的前一帧图像的指定点;计算所述当前帧图像的指定点对应的像素值与所述前一帧图像的指定点对应的像素值的差值;确定所述差值的绝对值是否小于第三预设阈值。
若所述差值的绝对值小于或等于所述第三预设阈值,则确定所述指定点满足滤波触发条件;若所述差值的绝对值大于所述第三预设阈值,则确定所述指定点不满足滤波触发条件。
需要说明的是,所述第三预设阈值为时域噪声标准差的第三预设倍数。其中,时域噪声标准差为同一目标相邻两帧图像的差值,该同一目标可以温度均匀的平面。具体地,第三预设倍数可以为大于或等于3倍,比如第三预设倍数为时域噪声标准差的5倍。
需要说明的是,所述时域滤波包括IIR时域滤波和FIR时域滤波的一种。
示例性的,比如采用IIR时域滤波,即计算所述目标图像的极值点的像素值与第一时域滤波系数的乘积,计算所述前一帧图像的极值点的像素值与第二时域滤波系数的乘积,将两个所述乘积之和作为滤波结果;其中,所述第一时域滤波系数与所述第二时域滤波系数的和为1。
具体地,滤波结果可以用以下公式表示:
x_o(m,n)=(1-ratio)×x_c(m,n)+ratio×x_r(m,n)
其中，x_c(m,n)为目标图像的指定点(或极值点)的像素值，x_r(m,n)为目标图像的前一帧图像的指定点(或极值点)的像素值，ratio为时域滤波系数，可设置为常数，在本申请的实施例中可取为0.9，(m,n)为指定点(或极值点)坐标，x_o(m,n)为滤波输出结果。
再根据滤波后的像素值确定该目标点对应的温度，即根据滤波结果确定目标点的温度，由此可以提高测温的稳定性和准确性。
请参阅图9,图9是本申请一实施例提供的极值点识别装置的示意性框图。该极值点识别装置400包括处理器401和存储器402,处理器401和存储器402通过总线连接,该总线比如为I2C(Inter-integrated Circuit)总线。
具体地,处理器401可以是微控制单元(Micro-controller Unit,MCU)、中央处理单元(Central Processing Unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
具体地,存储器402可以是Flash芯片、只读存储器(ROM,Read-Only Memory)磁盘、光盘、U盘或移动硬盘等。
其中,所述处理器用于运行存储在存储器中的计算机程序,并在执行所述计算机程序时实现如本申请实施例提供的任意一种极值点识别方法。
示例性的,所述处理器用于运行存储在存储器中的计算机程序,并在执行所述计算机程序时实现如下步骤:
获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;确定所述目标图像中像素点的多个临近像素点;根据所述临近像素点确定所述像素点是否为异常极值点;根据为非异常极值点的像素点确定所述目标图像的极值点。
在一些实施例中,所述处理器实现所述确定所述目标图像中像素点的多个临近像素点,包括:
为所述目标图像中的像素点构建像素区域块,将所述像素点的像素区域块包含的像素点作为所述像素点的临近像素点。
在一些实施例中,所述处理器实现所述根据所述临近像素点确定所述像素点是否为异常极值点,包括:
从所述多个临近像素点的像素值中确定最大像素值和最小像素值,根据所述最大像素值和最小像素值确定所述像素点是否为异常极值点。
在一些实施例中,所述处理器实现所述根据所述最大像素值和最小像素值确定所述像素点是否为异常极值点,包括:
若所述像素点对应的像素值大于所述最大像素值,且所述像素点对应的像素值与所述最大像素值的差值大于或等于预设差值,则确定所述像素点为异常 极值点;或者
若所述像素点对应的像素值小于所述最小像素值,且所述最小像素值与所述像素点对应的像素值的差值大于或等于预设差值,则确定所述像素点为异常极值点。
在一些实施例中,所述预设差值为所述最大像素值与所述最小像素值的差值乘以极值点判断系数得到的乘积,所述极值点判断系数为常数。
在一些实施例中,所述处理器实现所述为所述目标图像中的像素点构建像素区域块,包括:
确定多个与所述像素点临近的像素点,将多个所述临近的像素点构成的区域作为像素区域块。
在一些实施例中,所述处理器实现所述确定多个与所述像素点临近的像素点,包括:
将与所述像素点相邻的像素点和/或靠近所述像素点的像素点作为临近的像素点。
在一些实施例中,若所述像素点在所述目标图像中为边缘像素点;所述处理器实现所述确定多个与所述像素点临近的像素点,将多个所述临近的像素点构成的区域作为像素区域块,包括:
确定与所述像素点临近的像素点,对所述临近的像素点作镜像处理以得到镜像像素点;以及将所述临近的像素点和所述镜像像素点构成的区域作为像素区域块。
在一些实施例中,所述像素点的像素区域块包括3*3个像素点的区域,所述像素点为所述像素区域块的中心像素点。
在一些实施例中,所述处理器实现所述根据为非异常极值点的像素点确定所述目标图像的极值点,包括:
将确定的多个为非异常极值点的像素点组成非异常极值点集合;从所述非异常极值点集合中确定具有最大像素值的像素点和具有最小像素值的像素点;以及将所述具有最大像素值的像素点和所述具有最小像素值的像素点作为所述目标图像的极值点。
在一些实施例中,所述处理器还实现:根据所述目标图像满足的位置确定条件确定所述目标图像的极值位置;其中,不同的位置确定条件确定所述目标 图像的极值位置的方式不同。
在一些实施例中,所述处理器实现所述根据所述目标图像满足的位置确定条件确定所述目标图像的极值位置,包括:
确定所述目标图像是否为第一帧图像,或者所述目标图像是否为更新的指定区域图像;若所述目标图像为第一帧图像或为更新的指定区域图像,将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
在一些实施例中,所述处理器还实现:若所述目标图像不是第一帧图像且也不是更新的指定区域图像,确定所述目标图像对应的参考极值点,并确定所述参考极值点是否为异常极值点;若所述参考极值点为异常极值点,将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置;
其中,所述参考极值点为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点。
在一些实施例中,所述参考极值点对应的像素值为参考极值;所述处理器还实现:
若所述参考极值点不是异常极值点,计算所述参考极值与所述目标图像的极值点对应的像素值的差值,并确定所述差值的绝对值是否小于或等于第一预设阈值;若所述差值的绝对值小于或等于所述第一预设阈值,则将所述前一帧图像中的极值位置作为所述目标图像的极值位置。
在一些实施例中,若所述差值的绝对值大于所述第一预设阈值,则将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
在一些实施例中,所述第一预设阈值为时域噪声标准差的第一预设倍数。
在一些实施例中,所述极值点为极大值时对应的第一预设阈值与所述极值点为极小值时对应的第一预设阈值不同。
在一些实施例中,所述第一预设阈值与所述目标图像的极值点的像素值正相关。
在一些实施例中,所述处理器还实现:若所述目标图像的极值点满足预设滤波触发条件,对所述目标图像的极值点对应的像素值进行时域滤波。
在一些实施例中,所述处理器还实现:若所述目标图像的极值点不满足预设滤波触发条件,则不对所述目标图像的极值点对应的像素值进行时域滤波,并根据所述目标图像的极值点对应的像素值确定所述极值点对应的温度。
在一些实施例中,所述处理器还实现:确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致;若所述目标图像的极值位置与所述前一帧图像的极值位置一致,确定所述目标图像的极值点满足预设滤波触发条件;若所述目标图像的极值位置与所述前一帧图像的极值位置不一致,则确定所述目标图像的极值点不满足预设滤波触发条件。
在一些实施例中,所述处理器实现所述对所述目标图像的极值点对应的像素值进行时域滤波之前,还包括:
计算参考极值与所述前一帧图像的极值点对应的像素值的差值,其中,所述参考极值为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点的像素值;确定所述差值的绝对值是否小于或等于第二预设阈值;若所述差值的绝对值小于或等于所述第二预设阈值,执行所述对所述目标图像的极值点对应的像素值进行时域滤波的步骤;若所述差值的绝对值大于所述第二预设阈值,则不对所述根据所述目标图像的极值点进行时域滤波。
在一些实施例中,所述第二预设阈值为时域噪声标准差的第二预设倍数。
在一些实施例中,所述时域滤波包括IIR时域滤波和FIR时域滤波的一种。
在一些实施例中,所述时域滤波为IIR时域滤波;所述处理器实现所述对所述目标图像的极值点对应的像素值进行时域滤波,包括:
计算所述目标图像的极值点的像素值与第一时域滤波系数的乘积,计算所述前一帧图像的极值点的像素值与第二时域滤波系数的乘积,将两个所述乘积之和作为滤波结果;其中,所述第一时域滤波系数与所述第二时域滤波系数的和为1。
在一些实施例中,所述处理器还实现:采用中值滤波对所述目标图像的极值点进行滤波。
请参阅图10,图10是本申请一实施例提供的极值位置识别装置的示意性框图。该极值位置识别装置500包括处理器501和存储器502,处理器501和存储器502通过总线连接,该总线比如为I2C(Inter-integrated Circuit)总线。
具体地,处理器501可以是微控制单元(Micro-controller Unit,MCU)、中央处理单元(Central Processing Unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
具体地,存储器502可以是Flash芯片、只读存储器(ROM,Read-Only  Memory)磁盘、光盘、U盘或移动硬盘等。
其中,所述处理器用于运行存储在存储器中的计算机程序,并在执行所述计算机程序时实现如本申请实施例提供的任意一种极值位置识别方法。
示例性的,所述处理器用于运行存储在存储器中的计算机程序,并在执行所述计算机程序时实现如下步骤:
获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;确定所述目标图像是否为第一帧图像,或者所述目标图像是否为更新的指定区域图像;当所述目标图像不是第一帧图像且也不是更新的指定区域图像时,根据所述目标图像的前一帧图像确定所述目标图像的极值位置。
在一些实施例中,所述处理器实现所述根据所述目标图像的前一帧图像确定所述目标图像的极值位置,包括:
确定所述目标图像对应的参考极值点,其中,所述参考极值点为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点;若所述参考极值点不是异常极值点,将所述前一帧图像中的极值位置作为所述目标图像的极值位置。
在一些实施例中,所述处理器还实现:若所述参考极值点为异常极值点,将所述目标图像的极值点确定所述目标图像的极值位置。
在一些实施例中,所述参考极值点对应的像素值为参考极值;所述处理器在实现所述将所述前一帧图像中的极值位置作为所述目标图像的极值位置,包括:
计算所述参考极值与所述目标图像的极值点对应的像素值的差值,并确定所述差值的绝对值是否小于或等于第一预设阈值;若所述差值的绝对值小于或等于所述第一预设阈值,则将所述前一帧图像中的极值位置作为所述目标图像的极值位置。
在一些实施例中,所述处理器在还实现:若所述差值的绝对值大于所述第一预设阈值,则将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
在一些实施例中,所述第一预设阈值为时域噪声标准差的第一预设倍数。
在一些实施例中,所述极值点为极大值时对应的第一预设阈值与所述极值点为极小值时对应的第一预设阈值不同。
在一些实施例中,所述第一预设阈值与所述目标图像的极值点的像素值正相关。
请参阅图11,图11是本申请一实施例提供的测温装置的示意性框图。该测温装置600包括处理器601和存储器602,处理器601和存储器602通过总线连接,该总线比如为I2C(Inter-integrated Circuit)总线。
具体地,处理器601可以是微控制单元(Micro-controller Unit,MCU)、中央处理单元(Central Processing Unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
具体地,存储器602可以是Flash芯片、只读存储器(ROM,Read-Only Memory)磁盘、光盘、U盘或移动硬盘等。
其中,所述处理器用于运行存储在存储器中的计算机程序,并在执行所述计算机程序时实现如本申请实施例提供的任意一种测温方法。
示例性的,所述处理器用于运行存储在存储器中的计算机程序,并在执行所述计算机程序时实现如下步骤:
采用上述实施例提供的极值点识别方法识别目标图像的极值点,所述目标图像为红外图像;根据所述极值点对应的像素值确定所述极值点对应的温度
示例性的,所述处理器用于运行存储在存储器中的计算机程序,并在执行所述计算机程序时实现如下步骤
获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;确定所述目标图像的目标点;若所述目标图像的目标点满足预设滤波触发条件,对所述目标图像的目标点对应的像素值进行时域滤波,并获得滤波结果;根据所述滤波结果确定所述目标点的温度。
在一些实施例中,所述目标点包括极值点或指定点,所述极值点包括所述目标图像的极大值点和/或极小值点,所述指定点为在所述目标图像上选定的位置点。
在一些实施例中,所述极值点对应极值位置;所述处理器还实现:确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致;若所述目标图像的极值位置与所述前一帧图像的极值位置一致,确定所述目标图像的极值点满足预设滤波触发条件;若所述目标图像的极值位置与所述前一帧图像的极值位置不一致,则确定所述目标图像的极值点不满足预设滤波触发条 件。
在一些实施例中,所述处理器实现所述确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致之后,还包括:
若所述目标图像的极值位置与所述前一帧图像的极值位置一致,计算参考极值与所述前一帧图像的极值点对应的像素值的差值,其中,所述参考极值为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点的像素值;确定所述差值的绝对值是否小于或等于第二预设阈值;若所述差值的绝对值小于或等于所述第二预设阈值,确定所述目标图像的极值点满足预设滤波触发条件;若所述差值的绝对值大于所述第二预设阈值,确定所述目标图像的极值点不满足预设滤波触发条件。
在一些实施例中,所述指定点对应指定位置;所述处理器还实现:
根据所述指定位置确定所述指定点是否发生运动;若所述指定点未发生运动,则确定所述指定点满足预设滤波触发条件;若所述指定点发生运动,则确定所述指定点不满足预设滤波触发条件。
在一些实施例中,所述处理器还实现:根据所述指定位置确定所述当前帧图像的前一帧图像的指定点;计算所述当前帧图像的指定点对应的像素值与所述前一帧图像的指定点对应的像素值的差值;确定所述差值的绝对值是否小于第三预设阈值;若所述差值的绝对值小于或等于所述第三预设阈值,则确定所述指定点满足滤波触发条件;若所述差值的绝对值大于所述第三预设阈值,则确定所述指定点不满足滤波触发条件。
在一些实施例中,所述第二预设阈值为时域噪声标准差的第二预设倍数。
在一些实施例中,所述第三预设阈值为时域噪声标准差的第三预设倍数。
在一些实施例中,所述处理器实现所述对所述目标图像的极值点对应的像素值进行时域滤波,并获得滤波结果,包括:
计算所述目标图像的极值点的像素值与第一时域滤波系数的乘积,计算所述前一帧图像的极值点的像素值与第二时域滤波系数的乘积,将两个所述乘积之和作为滤波结果;其中,所述第一时域滤波系数与所述二时域滤波系数的和为1。
本申请的实施例中还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序中包括程序指令,所述处理器执行所 述程序指令,实现上述实施例提供的极值点识别方法、极值位置识别方或测温方法的步骤。
其中,所述计算机可读存储介质可以是前述任一实施例所述的设备的内部存储单元,例如所述可移动平台的硬盘或内存。所述计算机可读存储介质也可以是所述可移动平台的外部存储设备,例如所述可移动平台上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (53)

  1. 一种极值点识别方法,其特征在于,包括:
    获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;
    确定所述目标图像中像素点的多个临近像素点;
    根据所述临近像素点确定所述像素点是否为异常极值点;
    根据为非异常极值点的像素点确定所述目标图像的极值点。
  2. 根据权利要求1所述的方法,其特征在于,所述确定所述目标图像中像素点的多个临近像素点,包括:
    为所述目标图像中的像素点构建像素区域块,将所述像素点的像素区域块包含的像素点作为所述像素点的临近像素点。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述临近像素点确定所述像素点是否为异常极值点,包括:
    从所述多个临近像素点的像素值中确定最大像素值和最小像素值,根据所述最大像素值和最小像素值确定所述像素点是否为异常极值点。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述最大像素值和最小像素值确定所述像素点是否为异常极值点,包括:
    若所述像素点对应的像素值大于所述最大像素值,且所述像素点对应的像素值与所述最大像素值的差值大于或等于预设差值,则确定所述像素点为异常极值点;或者
    若所述像素点对应的像素值小于所述最小像素值,且所述最小像素值与所述像素点对应的像素值的差值大于或等于预设差值,则确定所述像素点为异常极值点。
  5. 根据权利要求4所述的方法,其特征在于,所述预设差值为所述最大像素值与所述最小像素值的差值乘以极值点判断系数得到的乘积,所述极值点判断系数为常数。
  6. 根据权利要求2所述的方法,其特征在于,所述为所述目标图像中的像素点构建像素区域块,包括:
    确定多个与所述像素点临近的像素点,将多个所述临近的像素点构成的区域作为像素区域块。
  7. 根据权利要求6所述的方法,其特征在于,所述确定多个与所述像素点临近的像素点,包括:
    将与所述像素点相邻的像素点和/或靠近所述像素点的像素点作为临近的像素点。
  8. 根据权利要求6所述的方法,其特征在于,若所述像素点在所述目标图像中为边缘像素点;所述确定多个与所述像素点临近的像素点,将多个所述临近的像素点构成的区域作为像素区域块,包括:
    确定与所述像素点临近的像素点,对所述临近的像素点作镜像处理以得到镜像像素点;以及
    将所述临近的像素点和所述镜像像素点构成的区域作为像素区域块。
  9. 根据权利要求6所述的方法,其特征在于,所述像素点的像素区域块包括3*3个像素点的区域,所述像素点为所述像素区域块的中心像素点。
  10. 根据权利要求1所述的方法,其特征在于,所述根据为非异常极值点的像素点确定所述目标图像的极值点,包括:
    将确定的多个为非异常极值点的像素点组成非异常极值点集合;
    从所述非异常极值点集合中确定具有最大像素值的像素点和具有最小像素值的像素点;以及
    将所述具有最大像素值的像素点和所述具有最小像素值的像素点作为所述目标图像的极值点。
  11. 根据权利要求1至10任一项所述的方法,其特征在于,所述方法还包括:
    根据所述目标图像满足的位置确定条件确定所述目标图像的极值位置;其中,不同的位置确定条件确定所述目标图像的极值位置的方式不同。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述目标图像满足的位置确定条件确定所述目标图像的极值位置,包括:
    确定所述目标图像是否为第一帧图像,或者所述目标图像是否为更新的指定区域图像;
    若所述目标图像为第一帧图像或为更新的指定区域图像,将所述目标图像 的极值点对应的像素坐标作为所述目标图像的极值位置。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    若所述目标图像不是第一帧图像且也不是更新的指定区域图像,确定所述目标图像对应的参考极值点,并确定所述参考极值点是否为异常极值点;
    若所述参考极值点为异常极值点,将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置;
    其中,所述参考极值点为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点。
  14. 根据权利要求13所述的方法,其特征在于,所述参考极值点对应的像素值为参考极值;所述方法还包括:
    若所述参考极值点不是异常极值点,计算所述参考极值与所述目标图像的极值点对应的像素值的差值,并确定所述差值的绝对值是否小于或等于第一预设阈值;
    若所述差值的绝对值小于或等于所述第一预设阈值,则将所述前一帧图像中的极值位置作为所述目标图像的极值位置。
  15. 根据权利要求14所述的方法,其特征在于,所述方法还包括:
    若所述差值的绝对值大于所述第一预设阈值,则将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
  16. 根据权利要求14所述的方法,其特征在于,所述第一预设阈值为时域噪声标准差的第一预设倍数。
  17. 根据权利要求14所述的方法,其特征在于,所述极值点为极大值时对应的第一预设阈值与所述极值点为极小值时对应的第一预设阈值不同。
  18. 根据权利要求14所述的方法,其特征在于,所述第一预设阈值与所述目标图像的极值点的像素值正相关。
  19. 根据权利要求1至10任一项所述的方法,其特征在于,所述方法还包括:
    若所述目标图像的极值点满足预设滤波触发条件,对所述目标图像的极值点对应的像素值进行时域滤波。
  20. 根据权利要求19所述的方法,其特征在于,所述方法还包括:
    若所述目标图像的极值点不满足预设滤波触发条件,则不对所述目标图像 的极值点对应的像素值进行时域滤波。
  21. 根据权利要求19所述的方法,其特征在于,所述方法包括:
    确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致;
    若所述目标图像的极值位置与所述前一帧图像的极值位置一致,确定所述目标图像的极值点满足预设滤波触发条件;
    若所述目标图像的极值位置与所述前一帧图像的极值位置不一致,则确定所述目标图像的极值点不满足预设滤波触发条件。
  22. 根据权利要求19所述的方法,其特征在于,所述对所述目标图像的极值点对应的像素值进行时域滤波之前,还包括:
    计算参考极值与所述前一帧图像的极值点对应的像素值的差值,其中,所述参考极值为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点的像素值;
    确定所述差值的绝对值是否小于或等于第二预设阈值;
    若所述差值的绝对值小于或等于所述第二预设阈值,执行所述对所述目标图像的极值点对应的像素值进行时域滤波的步骤;
    若所述差值的绝对值大于所述第二预设阈值,则不对所述根据所述目标图像的极值点进行时域滤波。
  23. 根据权利要求22所述的方法,其特征在于,所述第二预设阈值为时域噪声标准差的第二预设倍数。
  24. 根据权利要求19所述的方法,其特征在于,所述时域滤波包括IIR时域滤波和FIR时域滤波的一种。
  25. 根据权利要求24所述的方法,其特征在于,所述时域滤波为IIR时域滤波;所述对所述目标图像的极值点对应的像素值进行时域滤波,包括:
    计算所述目标图像的极值点的像素值与第一时域滤波系数的乘积,计算所述前一帧图像的极值点的像素值与第二时域滤波系数的乘积,将两个所述乘积之和作为滤波结果;
    其中,所述第一时域滤波系数与所述第二时域滤波系数的和为1。
  26. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    采用中值滤波对所述目标图像的极值点进行滤波。
  27. 一种测温方法,其特征在于,所述方法包括:
    采用权利要求1至26任一项所述的极值点识别方法识别目标图像的极值点,所述目标图像为红外图像;
    根据所述极值点对应的像素值确定所述极值点对应的温度。
  28. 一种极值位置识别方法,其特征在于,包括:
    获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;
    确定所述目标图像是否为第一帧图像,或者所述目标图像是否为更新的指定区域图像;
    当所述目标图像不是第一帧图像且也不是更新的指定区域图像时,根据所述目标图像的前一帧图像确定所述目标图像的极值位置。
  29. 根据权利要求28所述的方法,其特征在于,所述根据所述目标图像的前一帧图像确定所述目标图像的极值位置,包括:
    确定所述目标图像对应的参考极值点,其中,所述参考极值点为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点;
    若所述参考极值点不是异常极值点,将所述前一帧图像中的极值位置作为所述目标图像的极值位置。
  30. 根据权利要求29所述的方法,其特征在于,所述方法还包括:
    若所述参考极值点为异常极值点,将所述目标图像的极值点确定所述目标图像的极值位置。
  31. 根据权利要求29所述的方法,其特征在于,所述参考极值点对应的像素值为参考极值;所述将所述前一帧图像中的极值位置作为所述目标图像的极值位置,包括:
    计算所述参考极值与所述目标图像的极值点对应的像素值的差值,并确定所述差值的绝对值是否小于或等于第一预设阈值;
    若所述差值的绝对值小于或等于所述第一预设阈值,则将所述前一帧图像中的极值位置作为所述目标图像的极值位置。
  32. 根据权利要求31所述的方法,其特征在于,所述方法还包括:
    若所述差值的绝对值大于所述第一预设阈值,则将所述目标图像的极值点对应的像素坐标作为所述目标图像的极值位置。
  33. 根据权利要求31所述的方法,其特征在于,所述第一预设阈值为时域噪声标准差的第一预设倍数。
  34. 根据权利要求31所述的方法,其特征在于,所述极值点为极大值时对应的第一预设阈值与所述极值点为极小值时对应的第一预设阈值不同。
  35. 根据权利要求31所述的方法,其特征在于,所述第一预设阈值与所述目标图像的极值点的像素值正相关。
  36. 一种测温方法,其特征在于,包括:
    获取目标图像,所述目标图像包括当前帧图像或在所述当前帧图像中确定的指定区域图像;
    确定所述目标图像的目标点;
    若所述目标图像的目标点满足预设滤波触发条件,对所述目标图像的目标点对应的像素值进行时域滤波,并获得滤波结果;
    根据所述滤波结果确定所述目标点的温度。
  37. 根据权利要求36所述的方法,其特征在于,所述目标点包括极值点或指定点;所述极值点包括所述目标图像的极大值点和/或极小值点,所述极值点对应极值位置;所述指定点为在所述目标图像上选定的位置点,所述指定点对应指定位置。
  38. 根据权利要求37所述的方法,其特征在于,所述方法包括:
    确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致;
    若所述目标图像的极值位置与所述前一帧图像的极值位置一致,确定所述目标图像的极值点满足预设滤波触发条件;
    若所述目标图像的极值位置与所述前一帧图像的极值位置不一致,则确定所述目标图像的极值点不满足预设滤波触发条件。
  39. 根据权利要求38所述的方法,其特征在于,所述确定所述目标图像的极值位置与所述目标图像的前一帧图像的极值位置是否一致之后,还包括:
    若所述目标图像的极值位置与所述前一帧图像的极值位置一致,计算参考极值与所述前一帧图像的极值点对应的像素值的差值,其中,所述参考极值为所述目标图像的前一帧图像中的极值位置在所述目标图像对应的像素点的像素值;
    确定所述差值的绝对值是否小于或等于第二预设阈值;
    若所述差值的绝对值小于或等于所述第二预设阈值,确定所述目标图像的极值点满足预设滤波触发条件;
    若所述差值的绝对值大于所述第二预设阈值,确定所述目标图像的极值点不满足预设滤波触发条件。
  40. 根据权利要求37所述的方法,其特征在于,所述方法还包括:
    根据所述指定位置确定所述指定点是否发生运动;
    若所述指定点未发生运动,则确定所述指定点满足预设滤波触发条件;
    若所述指定点发生运动,则确定所述指定点不满足预设滤波触发条件。
  41. 根据权利要求40所述的方法,其特征在于,所述方法还包括:
    根据所述指定位置确定所述当前帧图像的前一帧图像的指定点;
    计算所述当前帧图像的指定点对应的像素值与所述前一帧图像的指定点对应的像素值的差值;
    确定所述差值的绝对值是否小于第三预设阈值;
    若所述差值的绝对值小于或等于所述第三预设阈值,则确定所述指定点满足滤波触发条件;
    若所述差值的绝对值大于所述第三预设阈值,则确定所述指定点不满足滤波触发条件。
  42. 根据权利要求39所述的方法,其特征在于,所述第二预设阈值为时域噪声标准差的第二预设倍数。
  43. 根据权利要求41所述的方法,其特征在于,所述第三预设阈值为时域噪声标准差的第三预设倍数。
  44. 根据权利要求36-43任一项所述的方法,其特征在于,所述对所述目标图像的极值点对应的像素值进行时域滤波,并获得滤波结果,包括:
    计算所述目标图像的极值点的像素值与第一时域滤波系数的乘积,计算所述前一帧图像的极值点的像素值与第二时域滤波系数的乘积,将两个所述乘积之和作为滤波结果;
    其中,所述第一时域滤波系数与所述二时域滤波系数的和为1。
  45. 一种极值点识别装置,其特征在于,所述极值点识别装置包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如权利要求1至26任一项所述的极值点识别方法的步骤。
  46. 一种测温装置,其特征在于,所述测温装置包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如权利要求27所述的测温方法的步骤。
  47. 一种极值位置识别装置,其特征在于,所述极值位置识别装置包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如权利要求28至35任一项所述的极值位置识别方法的步骤。
  48. 一种测温装置,其特征在于,所述测温装置包括存储器和处理器;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如权利要求36至44任一项所述的测温方法的步骤。
  49. 一种红外测温设备,其特征在于,所述红外测温设备包括光学镜头、光电探测器、存储器和处理器;
    所述光学镜头用于接收目标物发射的红外光;
    所述光电探测器,与所述处理器电连接,用于将所述红外光转换成电信号并发送至所述处理器进行成像;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如权利要求27所述的测温方法的步骤。
  50. 一种红外测温设备,其特征在于,所述红外测温设备包括光学镜头、光电探测器、存储器和处理器;
    所述光学镜头用于接收目标物发射的红外光;
    所述光电探测器,与所述处理器电连接,用于将所述红外光转换成电信号并发送至所述处理器进行成像;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如权利要求36至44任一项所述的测温方法的步骤。
  51. 一种可移动平台,其特征在于,所述可移动平台包括红外测温设备、存储器和处理器;
    所述红外测温设备,搭载在所述可移动平台上,用于获取红外图像;
    所述存储器用于存储计算机程序;
    所述处理器,用于执行所述计算机程序并在执行所述计算机程序时,实现如权利要求27所述的测温方法的步骤,或者实现如权利要求36至44任一项所述的测温方法的步骤。
  52. 根据权利要求51所述的可移动平台,其特征在于,所述可移动平台包括飞行器、无人驾驶车或机器人。
  53. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时使所述处理器实现如权利要求1至24中任一项所述的极值点识别方法,或者如权利要求27所述的测温方法,或者如权利要求28至35中任一项所述的极值位置识别方法,或者如权利要求36至44中任一项所述的测温方法。
PCT/CN2019/118225 2019-11-13 2019-11-13 识别方法、测温方法、设备及存储介质 WO2021092815A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980032929.3A CN112154450A (zh) 2019-11-13 2019-11-13 识别方法、测温方法、设备及存储介质
PCT/CN2019/118225 WO2021092815A1 (zh) 2019-11-13 2019-11-13 识别方法、测温方法、设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/118225 WO2021092815A1 (zh) 2019-11-13 2019-11-13 识别方法、测温方法、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021092815A1 true WO2021092815A1 (zh) 2021-05-20

Family

ID=73891996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118225 WO2021092815A1 (zh) 2019-11-13 2019-11-13 识别方法、测温方法、设备及存储介质

Country Status (2)

Country Link
CN (1) CN112154450A (zh)
WO (1) WO2021092815A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113916383B (zh) * 2021-09-29 2023-11-21 杭州微影软件有限公司 热成像温度测量方法、装置及电子设备
CN114544002A (zh) * 2022-02-17 2022-05-27 深圳市同为数码科技股份有限公司 测温跳变处理方法、装置、计算机设备及介质


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105547495A (zh) * 2016-01-25 2016-05-04 成都国铁电气设备有限公司 弓网高温干扰识别检测方法
CN107025648A (zh) * 2017-03-20 2017-08-08 中国人民解放军空军工程大学 一种电路板故障红外图像自动检测方法
CN109154815A (zh) * 2017-11-30 2019-01-04 深圳市大疆创新科技有限公司 最高温度点跟踪方法、装置和无人机
CN109859126A (zh) * 2019-01-17 2019-06-07 浙江大华技术股份有限公司 一种视频降噪方法、装置、电子设备及存储介质

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781351A (zh) * 2021-09-16 2021-12-10 广州安方生物科技有限公司 图像处理方法、设备及计算机可读存储介质
CN113781351B (zh) * 2021-09-16 2023-12-08 广州安方生物科技有限公司 图像处理方法、设备及计算机可读存储介质
CN115115822A (zh) * 2022-06-30 2022-09-27 小米汽车科技有限公司 车端图像处理方法、装置、车辆、存储介质及芯片
CN115115822B (zh) * 2022-06-30 2023-10-31 小米汽车科技有限公司 车端图像处理方法、装置、车辆、存储介质及芯片
CN115761212A (zh) * 2022-11-02 2023-03-07 北京鹰之眼智能健康科技有限公司 一种基于红外图像的人体状态预警系统
CN115761212B (zh) * 2022-11-02 2023-08-04 北京鹰之眼智能健康科技有限公司 一种基于红外图像的人体状态预警系统
CN116309625A (zh) * 2022-11-11 2023-06-23 句容市乡土树种研究所 适用于智慧农业的数据处理方法
CN116309625B (zh) * 2022-11-11 2024-01-30 句容市乡土树种研究所有限公司 适用于智慧农业的数据处理方法
CN116109604A (zh) * 2023-02-21 2023-05-12 天合光能(宿迁)光电有限公司 用于TOPCon结构太阳能电池板的栅线检测方法
CN116109604B (zh) * 2023-02-21 2023-11-07 天合光能(宿迁)光电有限公司 用于TOPCon结构太阳能电池板的栅线检测方法

Also Published As

Publication number Publication date
CN112154450A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2021092815A1 (zh) 识别方法、测温方法、设备及存储介质
US10659676B2 (en) Method and apparatus for tracking a moving subject image based on reliability of the tracking state
US20150278996A1 (en) Image processing apparatus, method, and medium for generating color image data
KR101687530B1 (ko) 촬상 시스템에 있어서의 제어방법, 제어장치 및 컴퓨터 판독 가능한 기억매체
CN106815869B (zh) 鱼眼相机的光心确定方法及装置
KR20170041636A (ko) 표시 제어장치, 표시 제어방법 및 프로그램
WO2017043258A1 (ja) 計算装置および計算装置の制御方法
US9990739B1 (en) Method and device for fisheye camera automatic calibration
US11880993B2 (en) Image processing device, driving assistance system, image processing method, and program
JP6694281B2 (ja) ステレオカメラおよび撮像システム
EP2793172B1 (en) Image processing apparatus, image processing method and program
JP2018182593A (ja) 画像処理装置、画像処理方法
CN111998959B (zh) 基于实时测温系统的温度校准方法、装置及存储介质
JP2013044597A (ja) 画像処理装置および方法、プログラム
JP2008026999A (ja) 障害物検出システム、及び障害物検出方法
WO2018179119A1 (ja) 映像解析装置、映像解析方法および記録媒体
US10432916B2 (en) Measurement apparatus and operation method of measurement apparatus
JP2021027584A (ja) 画像処理装置、画像処理方法およびプログラム
JP2009302731A (ja) 画像処理装置、画像処理プログラム、画像処理方法、および電子機器
TW201445458A (zh) 一種攝像設備的檢測裝置及方法
WO2022022136A1 (zh) 深度图像生成方法及装置、参考图像生成方法及装置、电子设备以及计算机可读存储介质
JP7040627B2 (ja) 算出装置、情報処理方法およびプログラム
CN110781712B (zh) 一种基于人脸检测与识别的人头空间定位方法
JP2019020839A (ja) 画像処理装置、画像処理方法、及びプログラム
EP4007265A1 (en) Image processing device, image processing method, program, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19952188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19952188

Country of ref document: EP

Kind code of ref document: A1