WO2024051067A1 - Infrared image processing method, apparatus, device, and storage medium - Google Patents

Infrared image processing method, apparatus, device, and storage medium

Info

Publication number
WO2024051067A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
target object
interest
infrared image
image
Prior art date
Application number
PCT/CN2023/072732
Other languages
English (en)
French (fr)
Inventor
徐召飞
李钢强
牟道禄
王少龙
王水根
Original Assignee
烟台艾睿光电科技有限公司
Priority date
Filing date
Publication date
Application filed by 烟台艾睿光电科技有限公司
Publication of WO2024051067A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/62 - Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Definitions

  • the present application relates to the field of image processing technology, and in particular to an infrared image processing method, device and equipment, and a computer-readable storage medium.
  • Infrared thermal imaging senses the infrared radiation emitted by an object through an infrared lens and an infrared detector, and forms an infrared image visible to the human eye through photoelectric conversion.
  • All objects above absolute zero (-273.15°C) radiate infrared rays. Because infrared thermal imaging can sense temperature, does not require visible light, and is immune to interference such as smoke and sand, it has found mature applications in medical, industrial, military and other fields.
  • However, infrared thermal imaging essentially reproduces the object's infrared radiation.
  • The converted infrared image is therefore very different from the visible light images the human eye usually observes.
  • Interpreting infrared thermal images requires a period of training and adaptation; in addition, such images only reflect the overall thermal radiation of an object and cannot reflect its details, so they are poorly suited to scenes where specific categories and details need to be distinguished.
  • this application provides an infrared image processing method, device and equipment, and a computer-readable storage medium that can effectively highlight the observation target.
  • In a first aspect, embodiments of the present application provide an infrared image processing method, including:
  • True color processing is performed on the contour area according to the category of the target object of interest, and the target object of interest is highlighted in the infrared image.
  • determining the target object of interest from the target objects includes:
  • a target object of interest is determined based on the selection operation on the target object.
  • performing true color processing on the contour area according to the category of the target object of interest and highlighting the target object of interest in the infrared image includes:
  • a corresponding color template is determined according to the category of the target object of interest, and the outline area of the target object of interest is colored according to the corresponding color template in the infrared image and then displayed.
  • determining the contour area of the target object of interest includes:
  • Determining a corresponding color template according to the category of the target object of interest, coloring the outline area of the target object of interest in the infrared image according to the corresponding color template and then displaying it includes:
  • the method further includes:
  • the motion trajectory prompt information of the target object of interest is displayed in the infrared image.
  • displaying the motion trajectory prompt information of the target object of interest in the infrared image according to the trajectory prediction information includes:
  • movement direction prompt information is displayed on one side of the target object of interest in the infrared image.
  • displaying the motion trajectory prompt information of the target object of interest in the infrared image according to the trajectory prediction information includes:
  • a movement path of the target object of interest is determined, and the movement path is displayed in the infrared image.
  • displaying the motion trajectory prompt information of the target object of interest in the infrared image according to the trajectory prediction information includes:
  • according to the trajectory prediction information, the next predicted position of the target object of interest in the infrared image is determined, and a corresponding virtual image of the target object of interest is displayed at the next predicted position in the infrared image.
  • performing target tracking on the target object of interest to obtain trajectory prediction information includes:
  • Target frames are set respectively according to different step lengths and scales, a search is performed with the target area as the starting position, and feature extraction is performed on the area where each target frame is located to obtain the corresponding second feature extraction results;
  • the area of the target frame whose feature similarity meets the requirements is determined as the target tracking area of the target object of interest;
  • Trajectory prediction information is obtained according to the target tracking area.
  • after determining the target tracking area of the target object of interest, the method further includes:
  • a reference step length and a reference direction for the next trajectory prediction of the target object of interest are determined.
  • setting the target frames respectively according to different step lengths and scales, searching with the target area as the starting position, and performing feature extraction on the area where each target frame is located to obtain
  • the corresponding second feature extraction results includes:
  • setting different scales in different proportions according to the size of the target object of interest, setting different step lengths according to linear motion rules and the category of the target object of interest, and
  • setting the target frames respectively according to the different step lengths, scales, and directions;
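As an illustration only, the candidate-frame search in the claims above can be sketched in Python. This is a simplified sketch, not the patent's method: the function name `track_search` is hypothetical, normalized cross-correlation stands in for the unspecified "second feature extraction" and similarity measure, and only step length (not scale) is varied.

```python
import numpy as np

def track_search(image, template, start, steps=(2, 4)):
    """Search candidate target frames offset from `start` (row, col) by
    several step lengths in 8 directions, score each frame against the
    template with normalized cross-correlation, and return the best frame's
    top-left corner and its score. `image`/`template` are 2D float arrays."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, start
    # candidate offsets: staying put, plus 8 directions per step length
    offsets = [(0, 0)] + [(dr * s, dc * s)
                          for s in steps
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)]
    for dr, dc in offsets:
        r, c = start[0] + dr, start[1] + dc
        if r < 0 or c < 0 or r + th > image.shape[0] or c + tw > image.shape[1]:
            continue  # candidate frame falls outside the image
        patch = image[r:r + th, c:c + tw]
        patch = patch - patch.mean()
        denom = np.linalg.norm(t) * np.linalg.norm(patch)
        score = float((t * patch).sum() / denom) if denom else 0.0
        if score > best_score:
            best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

The frame with the highest similarity becomes the target tracking area; its offset from `start` also suggests the reference step length and direction for the next prediction.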
  • performing target detection on the infrared image and determining the target object in the infrared image includes:
  • the target object mask image is fused with the original infrared image to obtain a target screening image, and target detection is performed on the target screening image to determine the category and location information of the target object contained in the infrared image.
  • performing target detection on the target screening image to determine the category and location information of the target object contained in the infrared image includes:
  • the detection model is obtained by training a neural network model with a training sample set, and the training sample set includes training sample images containing different target objects and their categories and location labels respectively.
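A minimal sketch of the mask-fusion step that produces the target screening image. The function name `screen_targets` is hypothetical, and since the patent does not specify the fusion operator, simple masking (keep pixels inside the mask, zero the background) is assumed:

```python
import numpy as np

def screen_targets(infrared, mask):
    """Fuse a binary target-object mask with the original infrared image:
    pixels inside the mask keep their original value, the background is
    zeroed, yielding a target screening image for the detector."""
    return np.where(mask > 0, infrared, 0)
```

The detector then runs only on the screened image, where background clutter has been suppressed.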
  • the infrared image is binarized to obtain a corresponding mask image, including:
  • the image morphological filtering process based on the mask image to obtain the target object mask image corresponding to the infrared image includes:
  • the mask image is subjected to erosion followed by dilation (a morphological opening) to obtain a target object mask image corresponding to the infrared image.
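The erosion-then-dilation filtering can be sketched in plain NumPy. This is a simplified stand-in for a library call such as OpenCV's morphological opening; a square structuring element is assumed:

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad, constant_values=False)
    out = np.ones(mask.shape, dtype=bool)
    for dr in range(k):
        for dc in range(k):
            out &= padded[dr:dr + mask.shape[0], dc:dc + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad, constant_values=False)
    out = np.zeros(mask.shape, dtype=bool)
    for dr in range(k):
        for dc in range(k):
            out |= padded[dr:dr + mask.shape[0], dc:dc + mask.shape[1]]
    return out

def open_mask(mask, k=3):
    """Erosion followed by dilation: removes speckle smaller than the
    structuring element while roughly preserving large target blobs."""
    return dilate(erode(mask, k), k)
```

Opening removes isolated noise pixels from the binarized mask while leaving target-sized blobs essentially unchanged.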
  • embodiments of the present application provide an infrared image processing device, including: an acquisition module for acquiring an infrared image; a detection module for performing target detection on the infrared image and determining the target objects in the infrared image, obtaining target detection results containing the category of each target object; a determination module for determining the target object of interest from the target objects according to the target detection results; a segmentation module for determining the outline area of the target object of interest; and a display module configured to perform true color processing on the outline area according to the category of the target object of interest and highlight the target object of interest in the infrared image.
  • embodiments of the present application provide an infrared thermal imaging device, including a processor, a memory connected to the processor, and a computer program stored on the memory and executable by the processor. When the computer program is executed by the processor, the infrared image processing method described in any embodiment of this application is implemented.
  • the infrared thermal imaging device also includes an infrared photography module and a display module connected to the processor; the infrared photography module is used to collect infrared images and send them to the processor; the display module is used to display the infrared images.
  • embodiments of the present application provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • when the computer program is executed by a processor, the infrared image processing method described in any embodiment of the present application is implemented.
  • By acquiring the infrared image, performing target detection on the infrared image to determine the target objects in the infrared image, determining the target object of interest from the target objects, and determining the outline area of the target object of interest,
  • the contour area is subjected to true color processing according to the category of the target object of interest, and the target object of interest is highlighted in the infrared image.
  • the target object of interest is determined from the target detection results.
  • the contour area of the target object of interest in the image is processed in true color according to the category of the target object, and the target object of interest is highlighted in the infrared image.
  • through true color processing, the target object of interest can be displayed with different coloring and texture highlighting according to its category, so that the target of interest in the infrared image is presented more clearly and is easier for the human eye to observe and identify.
  • the infrared image processing device, infrared thermal imaging device and computer-readable storage medium belong to the same concept as the corresponding infrared image processing method embodiments, and thus have the same technical effects; details are not repeated here.
  • Figure 1 is a schematic diagram of an application scenario of an infrared image processing method in an embodiment
  • Figure 2 is a schematic diagram of an application scenario of an infrared image processing method in another embodiment
  • Figure 3 is a schematic diagram of a contour segmentation image in an optional specific example
  • Figure 4 is a schematic diagram of true color conversion of the target object of interest in the contour segmentation image in an example
  • Figure 5 is a schematic diagram of an infrared image that highlights a target object in an example
  • Figure 6 is a schematic diagram of an infrared image showing prompt information about the movement direction of a target object in an example
  • Figure 7 is a schematic diagram showing an infrared image showing the movement path of a target object in an example
  • Figure 8 is a schematic diagram showing an infrared image showing the next predicted position of a target object in an example
  • Figure 9 is a schematic diagram of the target detection results obtained by performing target detection on infrared images in an example
  • Figure 10 is a schematic diagram of setting the tracking area of the target object of interest in the infrared image in an example
  • Figure 11 is a schematic diagram of the original infrared image in an example
  • Figure 12 is a schematic diagram of the mask image after binarization of the infrared image shown in Figure 11;
  • Figure 13 is a schematic diagram of the target object mask image after the mask image shown in Figure 12 is subjected to image morphology filtering processing;
  • Figure 14 is a schematic diagram of the target screening image obtained after the target object mask image shown in Figure 13 is applied to the original infrared image;
  • Figure 15 is a schematic structural diagram of an infrared thermal imaging device in an example
  • Figure 16 is a schematic diagram of the main flow of an infrared image processing method in an example
  • Figure 17 is a flow chart of an infrared image processing method in an optional specific example
  • Figure 18 is a schematic diagram of an image processing device in an embodiment
  • Figure 19 is a schematic structural diagram of an infrared thermal imaging device in an embodiment.
  • The terms "first", "second", and "third" involved are only used to distinguish similar objects and do not represent a specific ordering of the objects. It is understood that "first", "second", and "third" can be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described here can be implemented in sequences other than those illustrated or described herein.
  • FIG. 1 is a schematic diagram of an optional application scenario of the infrared image processing method provided by the embodiment of the present application.
  • the infrared thermal imaging device 11 includes a processor 12, a memory 13 connected to the processor 12, and an infrared shooting module 14.
  • the infrared thermal imaging device 11 collects infrared images in real time through the infrared shooting module 14 and sends them to the processor 12.
  • the memory 13 stores a computer program for implementing the infrared image processing method provided by the embodiment of the present application.
  • by executing the computer program, the processor 12 identifies the target objects in the infrared image through target detection, determines the target object of interest from the target objects, determines the outline area of the target object of interest, and performs true color processing on the outline area according to the category of the target object of interest to highlight the target object of interest in the infrared image.
  • the infrared thermal imaging device 11 can be any intelligent terminal that integrates an infrared shooting module 14 with an infrared image shooting function and has storage and processing capabilities, such as a handheld observer, various aiming devices, vehicle-mounted/airborne photoelectric load equipment, etc.
  • an infrared image processing method provided by an embodiment of the present application can be applied to the infrared thermal imaging device in the application scenario shown in Figure 1. Among them, the infrared image processing method includes the following steps:
  • Infrared thermal imaging equipment can include an image capturing module, which includes an infrared lens and an infrared detector.
  • the detector includes various refrigerated and uncooled short-wave, medium-wave, and long-wave detectors.
  • the obtaining the infrared image includes: the infrared thermal imaging device collects the infrared image of the target scene in real time through the image capturing module.
  • the infrared thermal imaging device does not include an image capture module, and the obtaining the infrared image includes: the infrared thermal imaging device acquires infrared images sent by other smart devices with image capture functions.
  • other smart devices It can be an infrared detector, mobile phone terminal, cloud, etc.
  • S103 Perform target detection on the infrared image and determine the target object in the infrared image.
  • Target detection can use traditional target detection algorithms based on Haar feature extraction, target detection algorithms based on deep learning models, etc.
  • the target object in the infrared image is identified through target detection, and the recognized target object is displayed in the infrared image to provide users with interactive operations for selection.
  • the target object can be the image content in the infrared image that the user wants to pay attention to, and can be one or more preset categories of objects, such as people, different animals, vehicles, etc.
  • Determining the target object in the infrared image through target detection, and then determining the target object of interest from the target detection results can improve the accuracy and efficiency of determining the target object of interest.
  • determining the target object of interest based on the target detection results may be done in at least one of the following ways: using a target object of a preset category in the target detection results as the target object of interest; using the target object at a preset position in the infrared image as the target object of interest; using a target object whose size meets preset requirements in the target detection results as the target object of interest; or selecting a target object from the target detection results as the target object of interest based on an interactive operation.
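The preset selection rules above could be sketched as follows. The detection-result format and all names here are hypothetical, and the rules shown (preferred category, minimum box area, largest-box tie-break) are just one plausible combination of the criteria listed:

```python
def pick_target_of_interest(detections, preferred_category="person",
                            min_area=100):
    """Pick the target object of interest from detection results using
    simple preset rules: prefer the preferred category, require a minimum
    box area, and break ties by taking the largest box.

    `detections` is a list of dicts with keys "category" and "box"
    (x, y, w, h) -- an illustrative result format, not the patent's."""
    def area(d):
        _, _, w, h = d["box"]
        return w * h
    candidates = [d for d in detections
                  if d["category"] == preferred_category and area(d) >= min_area]
    if not candidates:  # fall back to any sufficiently large detection
        candidates = [d for d in detections if area(d) >= min_area]
    return max(candidates, key=area) if candidates else None
```

In the interactive variant, the user's selection would simply replace these automatic rules.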
  • Determining the contour area of the target object of interest may include: extracting the contour line information of the target object of interest through a contour detection algorithm; determining the contour segmentation image of the target object of interest through an image segmentation algorithm; or determining the contour area of the target object of interest based on the position information of each target object obtained from the target detection results.
  • S107 Perform true color processing on the outline area according to the category of the target object of interest, and highlight the target object of interest in the infrared image.
  • Corresponding true color templates can be preset for different categories.
  • the outline area of the target object of interest can be colored with the corresponding true color template, so that the target object of interest is highlighted in the infrared image with different colors and textures according to its category.
  • Performing true color processing on the contour area gives each pixel in the imaging area of the target object of interest the three primary color components R, G, and B. These components directly determine the primary color intensity of the display device, producing colors consistent with the rendering of visible light images and better matched to the human visual system.
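A minimal sketch of such per-category coloring. The template colors and the intensity-modulation scheme are assumptions (real templates may also carry texture information, which is not modeled here):

```python
import numpy as np

# hypothetical per-category RGB color templates
COLOR_TEMPLATES = {"bird": (0, 0, 255), "rabbit": (255, 220, 0)}

def colorize(infrared, mask, category):
    """Render a grayscale infrared image as RGB and tint the contour area
    of the target of interest with its category's color template, modulated
    by the original intensity so the target's structure stays visible."""
    rgb = np.stack([infrared] * 3, axis=-1).astype(np.float32)
    color = np.array(COLOR_TEMPLATES[category], dtype=np.float32)
    weight = infrared[mask > 0, None] / 255.0
    rgb[mask > 0] = weight * color  # intensity-scaled category color
    return rgb.astype(np.uint8)
```

Each colored pixel thus carries R, G, B components that the display device can render directly, as the paragraph above describes.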
  • By acquiring the infrared image, performing target detection on the infrared image to determine the target objects in the infrared image, determining the target object of interest from the target objects, and determining the outline area of the target object of interest,
  • true color processing is performed on the outline area according to the category of the target object of interest, and the target object of interest is highlighted in the infrared image in a specified color; the target object of interest is determined from the target detection results obtained by performing target detection on the infrared image.
  • the outline area of the target object of interest is processed in true color according to the category of the target object, and the target object of interest is highlighted in the infrared image.
  • the target object of interest can thus be distinguished according to its category.
  • the target object is displayed with different coloring and texture highlighting, so that the target of interest in the infrared image is presented more clearly and is easier for the human eye to observe and identify.
  • S104 determine the target object of interest from the target objects, including:
  • a target object of interest is determined based on the selection operation on the target object.
  • the selected operation on the target object may be different according to different interactive operation types supported by the infrared thermal imaging device.
  • the infrared thermal imaging device provides an interactive touch interface, and the target objects identified through target detection in the infrared image are displayed in the interactive touch interface. The user can directly select a target object in the interactive touch interface as the current target object of interest.
  • the infrared thermal imaging device provides a non-touch display interface, and the infrared thermal imaging device is provided with operation buttons, and the target object detected and recognized in the infrared image is displayed on the non-touch display interface. In the display interface, there is a marquee on the non-touch display interface.
  • the user can move the marquee by operating buttons to select a target object as the current target of interest.
  • the infrared thermal imaging device can support voice input; the target objects recognized through target detection on the infrared image are displayed in the display interface, and each target object has a number that uniquely identifies it. The user can speak the number of the target object to be selected to complete the selection operation and determine the target object of interest.
  • the infrared thermal imaging device regards the selected target object as the current target object of interest, and subsequently determines the outline area, performs true color processing, etc., only for the selected target object of interest, meeting the user's need to select the current observation object in real time.
  • S107 perform true color processing on the contour area according to the category of the target object of interest, and highlight the target object of interest in the infrared image, including:
  • a corresponding color template is determined according to the category of the target object of interest, and the outline area of the target object of interest is colored according to the corresponding color template in the infrared image and then displayed.
  • the categories of target objects can be set based on the objects typically observed through image collection with the infrared thermal imaging device.
  • preset target object categories may include pedestrians, vehicles, birds, rabbits, cows, sheep, wolves, dogs, pigs, and cats.
  • different color templates can be preset for different target object categories. According to the category of the selected target object of interest, the corresponding color template is determined, and the target object of interest is colored and displayed in the infrared image according to that template. For example, the color template corresponding to birds is a blue template.
  • each color template may also include texture information.
  • For example, the color template corresponding to birds may be a blue feather texture template: if the target object of interest selected in the current infrared image is a bird, it is colored and displayed according to the blue feather texture template. The color template corresponding to rabbits may be a yellow fluff texture template: if the target object of interest selected in the current infrared image is a rabbit, it is colored and displayed according to the yellow fluff texture template.
  • the target objects of interest are colored and displayed by category using different color templates, so that the user's target of interest in the infrared image conforms to the way the human eye perceives visible light, improving the accuracy of target recognition.
  • S105 determining the outline area of the target object of interest includes:
  • Determining a corresponding color template according to the category of the target object of interest, coloring the outline area of the target object of interest in the infrared image according to the corresponding color template and then displaying it includes:
  • the position information of the target object of interest can be obtained according to the step of performing target detection on the infrared image.
  • the obtained target detection results include the categories and location information of all target objects contained in the infrared image.
  • the position information can be expressed as, but is not limited to, any of the following: a combination of the center point coordinates and size information of the target object of interest; the coordinates of the four corner points of its outer contour; or a combination of the coordinates of its lower left corner point and its size information.
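For illustration, converting between the center-plus-size form and the four-corner form of the position information is straightforward (function name hypothetical; image coordinates with y increasing downward are assumed):

```python
def center_to_corners(cx, cy, w, h):
    """Convert center-point-plus-size position information into the four
    corner coordinates of the outer contour's bounding box, ordered
    top-left, top-right, bottom-left, bottom-right."""
    x0, y0 = cx - w / 2, cy - h / 2
    return (x0, y0), (x0 + w, y0), (x0, y0 + h), (x0 + w, y0 + h)
```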
  • the area image content of the target object of interest is extracted from the infrared image to obtain a contour segmentation image of the target object of interest; the segmented contour segmentation image is colored according to the corresponding color template, and the colored contour segmentation image is fused with the original infrared image for display.
  • the segmentation method that obtains the corresponding contour segmentation image based on the position information of the target object of interest can include but is not limited to the Otsu method, gray threshold segmentation method, temperature segmentation method or deep learning model segmentation method, etc.
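Of the listed segmentation methods, the Otsu method is easy to sketch from first principles. This is a plain-NumPy version for illustration; libraries such as OpenCV and scikit-image provide equivalent built-ins:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the between-class
    variance of the foreground/background split of an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum_n = np.cumsum(hist)                       # pixels at or below t
    cum_mean = np.cumsum(hist * np.arange(256))   # intensity mass at or below t
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = cum_n[t] / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue  # one class is empty; split is degenerate
        mu0 = cum_mean[t] / cum_n[t]
        mu1 = (cum_mean[-1] - cum_mean[t]) / (cum_n[-1] - cum_n[t])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Thresholding at the returned value yields the binary mask from which the contour segmentation image is extracted.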
  • Figure 3 is a schematic diagram of a contour segmentation image in an optional specific example.
  • the target of interest in the contour segmentation image is colored according to preset true color templates for different categories, making the target more prominent and in line with human eye perception; for example, people and different types of animals are given different colors, to avoid misjudging a target when similar temperatures in the infrared image make specific types indistinguishable.
  • Figure 4 is a schematic diagram of the true color conversion of the target object of interest in the contour segmentation image.
  • Image fusion refers to using image processing technology to combine image data about the same target collected from multiple source channels, maximizing the extraction of beneficial information from each channel and synthesizing a high-quality image. This improves the utilization rate of image information and the accuracy and reliability of computer interpretation, and improves the spatial and spectral resolution of the original images, which is beneficial to monitoring.
  • the infrared thermal imaging device fuses the true-color-converted contour segmentation image of the target object of interest with the original infrared image, so that the colored target object of interest replaces the original target at its position in the original infrared image.
  • in the fused image, the target object of interest is highlighted in the original infrared image in a preset manner.
  • the infrared thermal imaging equipment can also enlarge the contour segmentation image of the target object of interest after true color conversion according to a preset ratio, and then fuse it with the original infrared image according to the specified display position.
  • the specified display position can refer to uniformly displaying the proportionally enlarged and colored target object of interest at a fixed position in the field of view of the current image, such as the lower left corner; or the proportionally enlarged and colored target object of interest can replace the original target, avoiding the situation where the original target is too small in the field of view to observe.
  • the infrared thermal imaging device can simultaneously fuse the colored contour segmentation image with the original infrared image at the target's position, and also enlarge the colored contour segmentation image by a preset ratio and fuse it with the original infrared image at the specified display position, as shown in Figure 5. The details and contours of the target object that users observe in the fused image are then clearer.
  • the infrared thermal imaging device fuses the contour segmentation image of the target object of interest with the original infrared image after coloring, or after coloring and proportional enlargement, so that the target object of interest is highlighted in the infrared image in the way the human eye perceives the colors of visible light images, making it easier to observe and identify.
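A sketch of the paste-at-position fusion and proportional enlargement described above. Nearest-neighbour enlargement and simple replacement fusion are assumptions for illustration; the patent does not fix either, and the function names are hypothetical:

```python
import numpy as np

def enlarge(patch_rgb, factor=2):
    """Nearest-neighbour enlargement of an RGB patch by an integer factor."""
    return np.repeat(np.repeat(patch_rgb, factor, axis=0), factor, axis=1)

def fuse_at(base_rgb, patch_rgb, top_left):
    """Paste a colored (optionally pre-enlarged) contour segmentation patch
    into an RGB rendering of the original infrared image at a specified
    display position, e.g. the lower-left corner of the field of view."""
    r, c = top_left
    h, w = patch_rgb.shape[:2]
    out = base_rgb.copy()
    out[r:r + h, c:c + w] = patch_rgb
    return out
```

Pasting at the target's own position implements in-place replacement; pasting the enlarged patch at a fixed corner implements the uniform-display variant.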
  • in some embodiments, after step S104 of determining the target object of interest from the target objects, the method further includes:
  • the motion trajectory prompt information of the target object of interest is displayed in the infrared image.
  • the trajectory prediction information may include at least one of the following: predicting the next location of the target object of interest, predicting the movement direction of the target object of interest, predicting the movement trajectory of the target object of interest, etc.
  • displaying the motion trajectory prompt information of the target object of interest in the infrared image according to the trajectory prediction information may mean marking the next predicted position of the target object of interest in the infrared image, marking the predicted movement direction of the target object of interest in the infrared image, marking the predicted movement trajectory of the target object of interest in the infrared image, and so on.
  • determining the target object of interest from the target detection results and performing target tracking only on the target object of interest can greatly reduce the computational cost of target tracking, improve its efficiency, and improve the accuracy of the target tracking results.
  • target detection is performed on infrared images to allow the user to select the target object of interest currently being focused on from the target detection results, which can simplify the complexity of subsequent trajectory prediction of the target object of interest and improve the accuracy of trajectory prediction.
  • target tracking is performed on the target object of interest to obtain trajectory prediction information, and the motion trajectory prompt information of the target object of interest is displayed in the infrared image while the target object of interest itself is highlighted;
  • the motion trajectory prompt information makes the target of interest more clearly presented in the infrared image, making it easier to accurately track the moving target based on that information.
  • displaying the motion trajectory prompt information of the target object of interest in the infrared image according to the trajectory prediction information includes:
  • according to the trajectory prediction information, motion direction prompt information is displayed on one side of the target object of interest in the infrared image; and/or,
  • according to the trajectory prediction information, the movement path of the target object of interest is determined, and the movement path is displayed in the infrared image; and/or,
  • according to the trajectory prediction information, the next predicted position of the target object of interest in the infrared image is determined, and a corresponding virtual image of the target object of interest is displayed at that next predicted position in the infrared image.
  • the trajectory prediction information may include one or more related information that can characterize the movement trajectory characteristics of the target object of interest.
  • the trajectory prediction information includes information that characterizes the movement direction of the target object of interest.
  • a designated mark is displayed in the infrared image on the side of the target object of interest corresponding to its movement direction;
  • the position of the designated mark indicates that the corresponding side is the direction of movement of the target object of interest.
  • for example, a circular mark displayed on the right side of the target object of interest indicates that the predicted movement direction of the target object of interest is to the right;
  • the trajectory prediction information includes information characterizing the movement path of the target object of interest.
  • the movement path of the target object of interest is displayed in the infrared image;
  • for example, an arrow displayed on the right side of the target object of interest represents its movement path;
  • the trajectory prediction information includes information that characterizes the position of the target object of interest at the next prediction moment.
  • a designated mark is displayed in the infrared image at the position of the target object of interest at the next prediction moment, and the position of the designated mark represents the next predicted position of the target object of interest. As shown in Figure 8,
  • a virtual icon is displayed at the end of the arrow on the right side of the target object of interest, and the position of the virtual icon represents the next predicted position of the target object of interest.
  • each type of motion trajectory prompt information can form a corresponding display mode:
  • display mode 1 displays the predicted motion direction prompt information;
  • display mode 2 displays the predicted motion path;
  • display mode 3 displays a virtual icon at the next predicted position.
  • the user can select among the multiple display modes so that the motion trajectory prompt information displayed in the current infrared image is the motion direction prompt information, the motion path, or the virtual icon.
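The three display modes can be read as a simple mode dispatch; a minimal sketch, in which the mode numbers follow the description above but the returned prompt kinds and the keys of the `prediction` dictionary are illustrative assumptions:

```python
# Hypothetical dispatch over the three display modes described above.
def trajectory_prompt(mode, prediction):
    if mode == 1:                       # display mode 1: motion direction mark
        return ("direction_mark", prediction["direction"])
    if mode == 2:                       # display mode 2: motion path arrow
        return ("path_arrow", prediction["path"])
    if mode == 3:                       # display mode 3: virtual icon
        return ("virtual_icon", prediction["next_position"])
    raise ValueError("unknown display mode")

prediction = {"direction": "right",
              "path": [(10, 10), (12, 10)],
              "next_position": (14, 10)}
```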
  • the trajectory prediction information of the target object of interest is obtained, and the motion trajectory prompt information of the target object of interest is displayed in the infrared image based on the trajectory prediction information.
  • performing target tracking on the target object of interest to obtain trajectory prediction information includes:
  • the area of the corresponding target frame whose feature similarity meets the requirements is determined as the target tracking area of the target object of interest;
  • trajectory prediction information is obtained according to the target tracking area.
  • FIG. 9 is a schematic diagram of the target detection result obtained by performing target detection on an infrared image in an optional specific example, including a first target object ID 1, a second target object ID 2, a third target object ID 3 and a fourth target object ID 4, from which the user can select the target object of interest through human-computer interaction.
  • the first target object ID 1 is selected as the target object of interest.
  • Infrared thermal imaging equipment performs feature extraction on selected target objects of interest for tracking and trajectory prediction.
  • the selectable target objects in the current scene are first locked,
  • the user then selects the target object he wants to focus on through human-computer interaction, and that object is tracked; subsequently, the single-
  • target feature extraction method used for target tracking and trajectory prediction not only simplifies the tracking calculation but also helps improve tracking accuracy.
  • the method for extracting features from a specified area of an infrared image may include, but is not limited to, Haar features, HOG features, or features extracted based on convolutional neural networks.
  • feature extraction is performed on the designated areas of the infrared image, including performing feature extraction on the target area where the target object of interest is located to obtain the first feature extraction result, and, when searching through the target frames, performing feature extraction separately on the area where each target frame is located to obtain the second feature extraction results.
  • HOG feature calculation only processes the content of the target area and mainly includes four steps: gradient calculation, gradient histogram calculation, block normalization, and HOG feature statistics.
  • the gradient calculation step includes: the horizontal gradient g_x of a single pixel is the difference between the pixel values of its left and right neighbors, and the vertical gradient g_y is the difference between the pixel values of its upper and lower neighbors; the total gradient intensity of the pixel is g = sqrt(g_x^2 + g_y^2), and the gradient direction is θ = arctan(g_y / g_x), where the gradient direction is taken as an absolute value.
  • the gradient histogram calculation step includes: dividing the pixel matrix into units (cells), and computing a gradient histogram from the gradient information of the pixels in each cell.
  • the block normalization step includes: because the variation range of the gradient intensity may exceed a preset value, the gradient intensity is normalized; multiple cells form a block, the values of all cells in the block are normalized,
  • and the feature vectors of the cells are concatenated in order to obtain the HOG feature of the block.
  • the HOG feature statistics step includes: collecting HOG features from all overlapping blocks in the detection window, and combining them into the final feature vector as the feature extraction result of the corresponding target area.
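The gradient-calculation step of the HOG pipeline above can be sketched as follows; the image is assumed to be a 2D list of grayscale values, and the unsigned (absolute-value) gradient direction is returned in degrees:

```python
import math

# Sketch of the HOG gradient step: horizontal/vertical neighbour
# differences, total gradient intensity, and unsigned gradient direction.
def gradients(img):
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # left/right difference
            gy = img[y + 1][x] - img[y - 1][x]   # upper/lower difference
            mag[y][x] = math.hypot(gx, gy)       # g = sqrt(gx^2 + gy^2)
            ang[y][x] = abs(math.degrees(math.atan2(gy, gx)))  # |direction|
    return mag, ang

mag, ang = gradients([[0, 0, 0], [0, 0, 10], [0, 0, 0]])
```

The histogram, block-normalization, and statistics steps would then bin these magnitudes by direction per cell; they are omitted here for brevity.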
  • the features extracted based on a convolutional neural network may refer to using a feature extraction model obtained by training an independent neural network model.
  • with this feature extraction model, features are extracted from the target area where the target object of interest is located to obtain the first feature extraction result, and, when searching through the target frames, feature extraction is performed on the area where each target frame is located to obtain the second feature extraction results.
  • the step of performing target detection on infrared images to determine the target object is completed using a target detection model obtained by training a neural network model.
  • feature extraction is also performed on a designated area of the infrared image. This can be achieved by reusing the feature extraction network layer in the target detection model.
  • target frames are set using the position of the target object of interest as the starting position to search around the target object of interest, the feature extraction result of the area where each target frame is located is compared with
  • the feature extraction result of the target area where the target object of interest is located, and the area of the target frame with the greatest feature similarity is determined as the target tracking area of the target object of interest, from which trajectory prediction information is obtained.
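A hedged sketch of this search-and-compare step: candidate target frames around the last known position are scored against the first feature extraction result, and the best-scoring frame becomes the target tracking area. Cosine similarity and the dictionary-shaped candidates are illustrative stand-ins for the HOG or CNN features the document describes.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (assumed metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_frame(target_feature, candidates, feature_of):
    """Return the candidate frame whose features best match the target's."""
    return max(candidates,
               key=lambda box: cosine(target_feature, feature_of(box)))

# Toy candidates: precomputed "features" stand in for per-frame extraction.
candidates = [{"feat": [0.0, 1.0], "id": 0}, {"feat": [2.0, 0.0], "id": 1}]
best = best_frame([1.0, 0.0], candidates, lambda b: b["feat"])
```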
  • the trajectory prediction information of the target object of interest is obtained based on the single target tracking result.
  • the calculation process of single-target tracking can be iterated, and the single-target
  • tracking result obtained in one iteration can be used to optimize the setting parameters of the target frames in the next iteration, further reducing the computational cost of target tracking and improving the accuracy of the target tracking results.
  • after determining the target tracking area of the target object of interest, the method further includes:
  • a reference step length and a reference direction for the next trajectory prediction of the target object of interest are determined.
  • the relative position information between the target tracking area and the target area may refer to calculating the vector size and vector direction based on the position center coordinates of the target tracking area and the position center coordinates of the target area.
  • the vector magnitude corresponds to the relative distance between the two and can be used as the
  • step-size reference for the target object's movement speed in the next trajectory prediction;
  • the vector direction corresponds to the relative angle between the two and can be used as the reference for the target object's movement direction in the next trajectory prediction.
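The derivation of the two references from the position centers can be written out directly; a small sketch (coordinate convention assumed):

```python
import math

# The vector magnitude between the two position centers serves as the
# step-size reference, and the vector angle as the direction reference,
# for the next trajectory prediction.
def motion_reference(target_center, tracking_center):
    dx = tracking_center[0] - target_center[0]
    dy = tracking_center[1] - target_center[1]
    step = math.hypot(dx, dy)                     # relative distance
    direction = math.degrees(math.atan2(dy, dx))  # relative angle in degrees
    return step, direction

step, direction = motion_reference((0, 0), (3, 4))
```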
  • in this way, the efficiency of the next trajectory prediction calculation for the target object of interest can be improved; the single-target tracking
  • calculation process can be iterated, and a corresponding set of reference step length and reference direction is obtained in each iteration;
  • the setting parameters of the target frames in the next iteration can be optimized based on the single-target tracking result obtained in one iteration, which further reduces the computational cost of target tracking and improves the accuracy of the target tracking results.
  • setting target frames respectively according to different step lengths and scales, searching through the target frames with the target area as the starting position, and performing feature extraction separately on the area where each target frame is located to obtain
  • the corresponding second feature extraction results includes:
  • different scales are set by scaling according to the size of the target object of interest at different ratios, and different step lengths are set according to linear-motion rules based on the category of the target object of interest.
  • target frames are respectively set according to different step lengths and different scales;
  • the scales of the target frames can be obtained by scaling the size of the target object of interest according to different proportions;
  • for example, if the size of the target object of interest is x,
  • the scales of the target frames can be 0.5x, 1x, 1.5x, 2x, and so on.
  • the different step lengths can be set, based on the category of the target object of interest and the motion characteristics of that category under a linear-motion assumption, in the upper, lower, left and right directions of the target object of interest respectively,
  • and target frames are set with the different step lengths and different scales, where different step lengths correspond to different pixel distances.
  • for a category of target object whose motion characteristics are faster, the corresponding step length is usually larger; conversely, for a category whose motion characteristics are slower, the corresponding step length is usually smaller.
  • for example, target frames of different scales, such as a 1x target frame and a 0.5x target frame, are set in sequence above, below, and on the left and right sides of the target object of interest; in this way, a tracking search range around the target object of interest is formed.
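The candidate-frame setup can be sketched as combining offsets of one step in the four linear-motion directions with several scale factors; the concrete step value and scale set below are illustrative:

```python
# Sketch: generate candidate target frames (center_x, center_y, width,
# height) around the target of interest at different steps and scales.
def candidate_frames(cx, cy, w, h, step, scales=(0.5, 1.0, 1.5, 2.0)):
    offsets = [(0, -step), (0, step), (-step, 0), (step, 0)]  # up/down/left/right
    return [(cx + dx, cy + dy, w * s, h * s)
            for dx, dy in offsets for s in scales]

frames = candidate_frames(10, 10, 4, 4, 2)  # 4 directions x 4 scales
```

A faster-moving category would be given a larger `step`, a slower one a smaller `step`, as described above.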
  • tracking and predicting different categories of target objects of interest in a linear motion manner can improve the efficiency of tracking calculations and the accuracy of target tracking results.
  • the target object mask image is fused with the original infrared image to obtain a target screening image, and target detection is performed on the target screening image to determine the category and location information of the target object contained in the infrared image.
  • the imaging temperature of the target object in the thermal imaging field of view is higher than the imaging temperature of the surrounding environment.
  • by binarizing the infrared image, the imaging position of each target object in the infrared image can be roughly located, yielding a mask image that locates the imaging position of each target object
  • in the image, as shown in Figure 11 and Figure 12. In an optional specific example, a schematic diagram of the mask image containing high-temperature targets, obtained after binarizing the original infrared image, shows that the white areas of the binary image correspond to high-temperature target areas, while the separated white breakpoints circled by the boxes represent non-high-temperature abnormal target areas.
  • the mask image is then subjected to image morphological filtering, and the resulting target object mask image contains the true high-temperature targets,
  • that is, a complete, clear-edged target object mask image as shown in Figure 11;
  • the non-high-temperature abnormal targets have been filtered out by the image morphological filtering.
  • the target object mask image is fused with the original infrared image to obtain a target screening image.
  • the target screening image is used as input for target detection to determine the category and location information of the target objects contained in the infrared image.
  • a mask image is formed, the mask image is subjected to image morphological filtering to form a target object mask image, the target object mask image is used to filter the image information contained in the infrared image, and the filtered images containing the real target object content are used for target detection; this can improve the efficiency of target detection and, with detection and recognition covering a wider range of target categories, improve the accuracy of the target detection results.
  • performing target detection on the target screening image to determine the category and location information of the target object contained in the infrared image includes:
  • the detection model is obtained by training a neural network model with a training sample set, and the training sample set includes training sample images containing different target objects and their categories and location labels respectively.
  • the detection model can be a convolutional neural network model obtained after training; the detection model is used to perform target detection on the target screening image to obtain, as the target detection result, the category and location information of the target objects contained in the infrared image.
  • the training sample set for training the detection model includes positive training sample images and negative training sample images.
  • a training sample set is established by collecting outdoor infrared images containing the target objects to be identified at different time periods, in different regions, and in different seasons;
  • the target objects to be identified can mainly include birds, rabbits, cattle, sheep, wolves, dogs, pigs, cats, pedestrians, cars, trucks, special vehicles, etc.
  • the positive training sample images can be original infrared images containing the target objects to be identified together with the category and location labels of the corresponding target objects; they can also be target screening images, obtained based on the formed target object mask image, containing the target objects to be identified together with the category and location labels of the corresponding target objects.
  • the inherent features that distinguish different categories can be obtained, such as the ears, paws and postures of animals, the outlines and corners of vehicles, and the physical signs of pedestrians, to form a detection model.
  • the number of original infrared images and target screening images in the positive training sample images can be set according to a certain proportion.
  • when the training samples for the detection model include a certain proportion of target screening images containing the target objects to be identified together with the corresponding category and location labels, the training speed of the detection model can be improved, the target recognition efficiency of the trained detection model can be improved, the efficiency and accuracy of target category screening and target position localization can be improved, and false detections caused by other interfering backgrounds can be reduced.
  • a mask image is formed, the mask image is subjected to image morphological filtering to form a target object mask image, the target object mask image is used to filter the image information contained in the infrared image, and the filtered
  • images containing the real target object content are fed to the trained neural network model for target detection; this overcomes the defect of poor detection performance in the existing technology of directly detecting targets in infrared images with a neural network model, and effectively improves the effect of target detection using neural networks.
  • the infrared image is binarized to obtain a corresponding mask image, including:
  • the image morphological filtering process based on the mask image to obtain the target object mask image corresponding to the infrared image includes:
  • the mask image is subjected to image processing of first erosion and then dilation to obtain the target object mask image corresponding to the infrared image.
  • the imaging temperature of target objects in infrared images is usually higher than the imaging temperature of the surrounding environment.
  • after the original infrared image is obtained, pixels above the temperature threshold and pixels below the temperature threshold are assigned values respectively to form the corresponding mask image.
  • the first grayscale value is 255 and the second grayscale value is 0.
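The binarization amounts to a per-pixel threshold test; a minimal sketch, assuming the image is a 2D list of values comparable against the temperature threshold:

```python
# Pixels at or above the threshold get the first grayscale value (255),
# the rest the second grayscale value (0).
def binarize(img, threshold, high=255, low=0):
    return [[high if px >= threshold else low for px in row] for row in img]

mask = binarize([[10, 200], [90, 250]], 100)
```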
  • Image morphology filtering is mainly used to solve the problem of intermittent edge points in the binarized mask image, which causes internal holes and fragmentation in the same target object.
  • image morphological filtering processes the mask
  • image in real time with an opening operation that first erodes and then dilates.
  • the original infrared image is sequentially subjected to binarization and to image processing of first erosion and then dilation, so as to obtain a mask image of the real high-temperature targets, thereby improving the accuracy of detecting and identifying target objects from infrared images.
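The opening operation (erosion followed by dilation) can be sketched on a binary mask with an assumed 3x3 structuring element; isolated noise pixels disappear while compact high-temperature regions survive:

```python
# Toy morphological opening on a binary (0/255) mask, 3x3 neighbourhood.
def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # keep a pixel only if its whole 3x3 neighbourhood is set
            if all(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 255
    return out

def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # set a pixel if any in-bounds 3x3 neighbour is set
            if any(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if 0 <= y + dy < h and 0 <= x + dx < w):
                out[y][x] = 255
    return out

def opening(mask):
    return dilate(erode(mask))

mask = [[0] * 7 for _ in range(7)]
for y in range(1, 4):
    for x in range(1, 4):
        mask[y][x] = 255        # a compact 3x3 high-temperature region
mask[5][5] = 255                # an isolated noise pixel ("burr")
cleaned = opening(mask)
```

After opening, the isolated pixel is removed while the 3x3 block is restored intact, which is exactly the burr-and-noise elimination described above.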
  • please refer to FIGS. 15 to 17; a specific example is used for description below.
  • infrared thermal imaging equipment includes but is not limited to lenses and detectors, image processors, and display components.
  • the detectors include various cooled and uncooled short-wave, medium-wave and long-wave detectors;
  • the image processor includes but is not limited to various FPGAs (Field-Programmable Gate Arrays), SoCs (Systems on a Chip), microcontrollers, etc.
  • the display components include but are not limited to OLED (Organic Light-Emitting Diode), LCD (Liquid Crystal Display), LCOS (Liquid Crystal on Silicon), optical waveguide and other display modules.
  • Infrared rays are projected onto the focal plane detector through the lens.
  • the detector converts the infrared light signal into an electronic signal and transmits it to the image processor.
  • the image processor is the core component of the image processing system of the infrared thermal imaging equipment and plays a vital role: the signal transmitted
  • from the detector is processed, and the processed image is transmitted to the display component for display.
  • the human eye obtains the final infrared image by observing the display component.
  • the main processes involved in the image processor are shown in Figure 16 and Figure 17.
  • the implementation of the main processes includes the following steps:
  • S12 Binarize the infrared image to obtain a mask image ( Figure 12).
  • the temperature threshold setting can come from the temperature sensor or user settings.
  • the human eye is more sensitive to high-temperature targets; the points above the temperature threshold in the thermal imaging field of view are set to 255, and the points below the temperature threshold are set to 0, forming a binary mask.
  • the mask image is first eroded and then dilated to obtain a real high-temperature target mask image (Figure 13).
  • the acquired mask will have edge point discontinuities, and there will be internal holes for the same target.
  • the acquired mask image is first eroded and then dilated in order to eliminate burrs and noise and obtain the real target data.
  • the main operations of target classification and positioning include: collecting a large number of outdoor samples at different time periods, in different areas and in different seasons, mainly covering samples such as birds, rabbits, cattle, sheep, wolves, dogs, pigs, cats, pedestrians, cars, trucks and special vehicles; using deep learning methods to extract features from the data set, the inherent features that distinguish different categories can be obtained, such as the ears, paws and postures of animals, the outlines and corners of vehicles, and the physical signs of pedestrians, to form a detection model that can also distinguish backgrounds and has better overall adaptability to the environment. By inputting the high-temperature target content obtained from the infrared image into the trained target detection neural network, the category, label and coordinate position of the target can be obtained.
  • S17: perform target tracking on the target of interest. After the user selects a target, feature extraction is performed on the selected target of interest for tracking and trajectory prediction.
  • Step 1 Extract features of the target of interest.
  • commonly used single-target feature extraction methods include but are not limited to Haar features, HOG features, or features extracted based on convolutional neural networks. Taking HOG features as an example, assume the target with ID label 1 is selected; the target region is extracted based on the coordinate position of the ID 1 target.
  • the target detection network outputs four values (x, y, w, h) for each target as its position information, where (x, y) is the center point of the target and (w, h) is the width and height of the target.
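Since (x, y) is the box center and (w, h) its width and height, extracting the target region for feature calculation can be sketched as (integer sizes assumed):

```python
# Crop the region described by a center-format box (x, y, w, h) from a
# 2D image; conversion to a top-left corner is the only arithmetic needed.
def crop(img, x, y, w, h):
    top, left = int(y - h / 2), int(x - w / 2)
    return [row[left:left + int(w)] for row in img[top:top + int(h)]]

region = crop([[r * 10 + c for c in range(6)] for r in range(6)], 3, 3, 2, 2)
```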
  • HOG feature calculation only processes the content of the target area and mainly includes gradient calculation, gradient histogram calculation, block normalization and HOG feature statistics; the steps are explained taking gradient calculation as an example.
  • the horizontal gradient g_x of a single pixel is the difference between the pixel values of its left and right neighbors, and the vertical gradient g_y is the
  • difference between the pixel values of its upper and lower neighbors; from these, the total gradient intensity of the pixel g = sqrt(g_x^2 + g_y^2) is obtained,
  • together with the gradient direction θ = arctan(g_y / g_x), where the gradient direction is taken as an absolute value.
  • Step 2 Track and predict the target according to linear motion, and search around the target according to different step sizes and different scales.
  • Different step sizes refer to different pixel distances in the four directions of up, down, left and right.
  • different scales refer to setting candidate areas of different scaled sizes at different positions based on the initial ID 1 target size, such as 0.5x, 1x, 1.5x, 2x, etc.
  • features are also extracted from each selected candidate area and compared with the feature extraction result of the target of interest, and the area with the greatest feature similarity is selected as the single-target tracking result; the vector between the center points of the single-target tracking result and the selected target is then calculated,
  • where the vector magnitude serves as the step-size reference for the next prediction of the target's movement speed and the vector direction serves as the reference for the next prediction of the target's movement direction, with feature extraction focused on the region at this position.
  • Step 3: the above steps 1 and 2 can be repeated to obtain multiple sets of motion reference step lengths and motion reference directions, and the accuracy of the single-target motion trajectory will gradually improve.
  • the regional image content is extracted based on the coordinate position information of target ID 1, and then a background-foreground segmentation method is used to obtain the accurate contour of the target of interest;
  • segmentation methods include but are not limited to the Otsu method, grayscale threshold segmentation, temperature segmentation, or convolutional neural network segmentation.
  • the display mode can include a variety of modes, such as: directly displaying the enlarged original image, using the enlarged true-color target to replace the original target; displaying the predicted position, with a circle representing the position and an arrow representing the predicted movement direction; displaying the predicted position and adding a virtual target, etc.
  • the infrared image processing device can be implemented using an infrared handheld aiming device.
  • the infrared image processing device includes: an acquisition module 1121, configured to acquire an infrared image; a detection module 1122, configured to perform target detection on the infrared image and determine the target objects in the infrared image; a determination module 1123, configured to determine the target object of interest from the target objects; a segmentation module 1124, configured to determine the outline area of the target object of interest; and a display module 1125, configured to perform true-color processing on the outline area according to the category of the target object of interest so as to highlight the target object of interest in the infrared image.
  • the determination module 1123 is configured to determine the target object of interest based on the selected operation on the target object.
  • the display module 1125 is also configured to determine a corresponding color template according to the category of the target object of interest, and to color the outline area of the target object of interest in the infrared image according to the corresponding color
  • template before displaying it.
  • the segmentation module 1124 is also configured to segment the target object of interest according to the position information of the target object of interest to obtain a contour segmentation image of the target object of interest;
  • the display module 1125 is also configured to determine a corresponding color template according to the category of the target object of interest, color the contour segmentation image according to the corresponding color template, and fuse the colored contour segmentation image with the original infrared image; and/or, after the colored contour segmentation image is enlarged according to a preset ratio, fuse it with the original infrared image according to the specified display position.
  • a tracking module is also included for performing target tracking on the target object of interest to obtain trajectory prediction information; the display module 1125 is also configured to display in the infrared image according to the trajectory prediction information. Display the motion trajectory prompt information of the target object of interest.
  • the display module 1125 is also configured to display motion direction prompt information on one side of the target object of interest in the infrared image according to the trajectory prediction information; and/or, based on the trajectory prediction information information, determine the movement path of the target object of interest, and display the movement path in the infrared image; and/or determine the movement path of the target object of interest in the infrared image according to the trajectory prediction information. a next predicted position, and display a corresponding virtual image of the target object of interest at the next predicted position in the infrared image.
  • the tracking module is also used to determine the target object of interest based on the selection operation on the target objects; perform feature extraction on the target area where the target object of interest is located to obtain the first feature extraction result; set target frames respectively according to different step lengths and scales, search through the target frames with the target area as the starting position, and perform feature extraction on the area where each target frame is located to obtain the corresponding second feature extraction results; according to the feature similarity between the second feature extraction result corresponding to each target frame and the first feature extraction result, determine the area of the corresponding target frame whose feature similarity meets the requirements as the target tracking area of the target object of interest; and obtain trajectory prediction information according to the target tracking area.
  • the tracking module is also configured to determine the reference step length and reference direction for the next trajectory prediction of the target object of interest based on the relative position information of the target tracking area and the target area.
  • the tracking module is further configured to set different scales by scaling the size of the target object of interest in different proportions, and to set different step lengths according to a linear-motion rule based on the category of the target object of interest; target frames are set in each direction of the target area where the target object of interest is located, according to the different step lengths and scales; a search is performed within a set image area starting from the target area, and feature extraction is performed on the area covered by each target frame to obtain the corresponding second feature extraction results.
  • the detection module 1122 is further configured to: binarize the infrared image to obtain a corresponding mask image; perform image morphological filtering on the mask image to obtain a target object mask image corresponding to the infrared image; fuse the target object mask image with the original infrared image to obtain a target screening image; and perform target detection on the target screening image to determine the category and position information of the target objects contained in the infrared image.
  • the detection module 1122 is further configured to perform target detection on the target screening image through a trained detection model to determine the category and position information of the target objects contained in the infrared image; the detection model is obtained by training a neural network model with a training sample set, which includes training sample images containing different target objects together with their category and position labels.
  • the detection module 1122 is further configured to set pixels in the infrared image above a temperature threshold to a first gray value and pixels below the temperature threshold to a second gray value to obtain the corresponding mask image, and to apply erosion-then-dilation image processing to the mask image to obtain the target object mask image corresponding to the infrared image.
  • for the infrared image processing device provided in the above embodiments, the division into the above program modules is only an example; in practical applications, the above processing may be allocated to different program modules as needed, i.e., the internal structure of the device may be divided into different program modules to complete all or part of the method steps described above.
  • the infrared image processing device provided by the above embodiments and the infrared image processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
  • the present application provides an infrared thermal imaging device.
  • Figure 19 is a schematic diagram of an optional hardware structure of the infrared thermal imaging device provided by an embodiment of the present application.
  • the infrared thermal imaging device includes a processor 111, and
  • the memory 112 connected to the processor 111 is used to store various types of data to support the operation of the image processing device, and stores a computer program for implementing the infrared image processing method provided by any embodiment of the present application; when the computer program is executed by the processor, the steps of the infrared image processing method provided by any embodiment of the present application are implemented, achieving the same technical effect, which is not repeated here to avoid repetition.
  • the infrared thermal imaging device also includes an infrared photography module 113 and a display module 114 connected to the processor 111.
  • the infrared photography module 113 is used to capture infrared images and send them to the processor as images to be processed.
  • the display module 114 is configured to display the infrared image output by the processor 111 that highlights the target object of interest.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • when the computer program is executed by a processor, each process of the above infrared image processing method embodiments is implemented, achieving the same technical effect, which is not repeated here to avoid repetition.
  • the computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; they can of course also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present invention, in essence or in the part that contributes over the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal (which may be an infrared imaging device, a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide an infrared image processing method, device and equipment, and a storage medium, relating to the field of image processing technology. The infrared image processing method includes: acquiring an infrared image; performing target detection on the infrared image to determine the target objects in the infrared image; determining a target object of interest from the target objects; determining the contour area of the target object of interest; and performing true-color processing on the contour area according to the category of the target object of interest so as to highlight the target object of interest in the infrared image. By highlighting the target object of interest in the infrared image through true-color processing, the target object of interest can be displayed prominently with different colors and textures according to its category, so that the target of interest is presented more clearly in the infrared image and is easier for the human eye to observe and recognize.

Description

Infrared image processing method, device and equipment, and storage medium
Technical Field
The present application relates to the field of image processing technology, and in particular to an infrared image processing method, device and equipment, and a computer-readable storage medium.
Background
The principle of infrared thermal imaging is to sense the infrared radiation emitted by an object itself through an infrared lens and an infrared detector, and to form an infrared image visible to the human eye through optical-to-electrical conversion. Any object above absolute zero (-273.15°C) radiates infrared light, so infrared thermal imaging has an extremely wide range of applications. Because it senses temperature, does not require visible light, and avoids interference such as smoke and dust, infrared thermal imaging has mature applications in the medical, industrial, and military fields.
However, since the human eye cannot observe infrared light, current infrared thermal imaging basically reproduces the radiation distribution of infrared rays; the converted infrared image differs greatly from the visible-light images the human eye normally observes, so using infrared thermal imaging requires a period of training and adaptation. In addition, an infrared thermal image only reflects the overall thermal radiation of an object and cannot show its details, making it unsuitable for scenarios that require distinguishing specific categories and details.
Technical Problem
To solve the problems of the prior art, the present application provides an infrared image processing method, device and equipment, and a computer-readable storage medium capable of effectively highlighting an observed target.
Technical Solution
In a first aspect, an embodiment of the present application provides an infrared image processing method, including:
acquiring an infrared image;
performing target detection on the infrared image to determine the target objects in the infrared image, obtaining a target detection result containing the category of each target object;
determining a target object of interest from the target objects according to the target detection result;
determining the contour area of the target object of interest;
performing true-color processing on the contour area according to the category of the target object of interest, so as to highlight the target object of interest in the infrared image.
可选的,所述从所述目标对象中确定感兴趣目标对象,包括:
根据对所述目标对象的选定操作,确定感兴趣目标对象。
可选的,所述根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将所述感兴趣目标对象在所述红外图像中进行凸显,包括:
根据所述感兴趣目标对象的类别确定对应色彩模板,在所述红外图像中将所述感兴趣目标对象的所述轮廓区域按照所述对应色彩模板进行着色后进行显示。
可选的,所述确定所述感兴趣目标对象的轮廓区域,包括:
根据所述感兴趣目标对象的位置信息,对所述感兴趣目标对象进行分割得到所述感兴趣目标对象的轮廓分割图像;
所述根据所述感兴趣目标对象的类别确定对应色彩模板,在所述红外图像中将所述感兴趣目标对象的所述轮廓区域按照所述对应色彩模板进行着色后进行显示,包括:
根据所述感兴趣目标对象的类别确定对应色彩模板,按照所述对应色彩模板对所述轮廓分割图像进行着色;
将着色后的所述轮廓分割图像与原始的所述红外图像进行融合;和/或,将着色后的所述轮廓分割图像按预设比例进行放大后,按照指定显示位置与原始的所述红外图像进行融合。
可选的,所述从所述目标对象中确定感兴趣目标对象之后,还包括:
对所述感兴趣目标对象进行目标跟踪,以获得轨迹预测信息;
根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息。
可选的,所述根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息,包括:
根据所述轨迹预测信息,在所述红外图像的所述感兴趣目标对象的一侧显示运动方向提示信息。
可选的,所述根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息,包括:
根据所述轨迹预测信息,确定所述感兴趣目标对象的运动路径,在所述红外图像中显示所述运动路径。
可选的,所述根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息,包括:
根据所述轨迹预测信息,确定所述感兴趣目标对象在所述红外图像中的下一预测位置,并在所述红外图像中的所述下一预测位置显示所述感兴趣目标对象的对应虚拟图像。
可选的,所述对所述感兴趣目标对象进行目标跟踪,以获得轨迹预测信息,包括:
对所述感兴趣目标对象所在的目标区域进行特征提取,得到第一特征提取结果;
按照不同的步长和尺度分别设定目标框,通过所述目标框以所述目标区域 为起始位置进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果;
根据各所述目标框对应的所述第二特征提取结果与所述第一特征提取结果的特征相似度,将特征相似度满足要求的对应目标框所在区域确定为所述感兴趣目标对象的目标追踪区域;
根据所述目标追踪区域获得轨迹预测信息。
可选的,所述根据各所述目标框对应的所述第二特征提取结果与所述第一特征提取结果的特征相似度,将特征相似度满足要求的对应目标框所在区域确定为所述感兴趣目标对象的目标追踪区域之后,还包括:
根据所述目标追踪区域与所述目标区域的相对位置信息,确定对所述感兴趣目标对象的下一次轨迹预测的参考步长和参考方向。
可选的,所述按照不同的步长和尺度分别设定目标框,通过所述目标框以所述目标区域为起始位置进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果,包括:
根据所述感兴趣目标对象的尺寸按不同比例缩放设置不同的尺度,根据所述感兴趣目标对象的类别按线性运动规则设置不同的步长,在所述感兴趣目标对象所在的目标区域的各个方向,按不同的所述步长和所述尺度分别设置目标框;
以所述目标区域为起始位置在设定图像区域内进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果。
可选的,所述对所述红外图像进行目标检测,确定所述红外图像中的目标对象,包括:
对所述红外图像进行二值化处理,得到对应的掩膜图像;
基于所述掩膜图像进行图像形态学滤波处理,得到所述红外图像对应的目标对象掩膜图像;
将所述目标对象掩膜图像与原始的所述红外图像进行融合,得到目标筛选图像,对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息。
可选的,所述对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息,包括:
通过训练后的检测模型对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息;
其中,所述检测模型由训练样本集对神经网络模型进行训练后得到,所述训练样本集包括分别包含不同目标对象及其类别、位置标签的训练样本图像。
可选的,所述对所述红外图像进行二值化处理,得到对应的掩膜图像,包括:
将所述红外图像中高于温度阈值的像素点置第一灰度值;将低于所述温度阈值的像素点置第二灰度值,得到对应的掩膜图像;
所述基于所述掩膜图像进行图像形态学滤波处理,得到所述红外图像对应的目标对象掩膜图像,包括:
对所述掩膜图像进行先腐蚀后膨胀的图像处理,得到所述红外图像对应的目标对象掩膜图像。
In a second aspect, an embodiment of the present application provides an infrared image processing device, including: an acquisition module configured to acquire an infrared image; a detection module configured to perform target detection on the infrared image, determine the target objects in the infrared image, and obtain a target detection result containing the category of each target object; a determination module configured to determine a target object of interest from the target objects according to the target detection result; a segmentation module configured to determine the contour area of the target object of interest; and a display module configured to perform true-color processing on the contour area according to the category of the target object of interest so as to highlight the target object of interest in the infrared image.
In a third aspect, an embodiment of the present application provides an infrared thermal imaging device, including a processor, a memory connected to the processor, and a computer program stored on the memory and executable by the processor; when executed by the processor, the computer program implements the infrared image processing method of any embodiment of the present application.
The infrared thermal imaging device further includes an infrared photography module and a display module connected to the processor; the infrared photography module is configured to capture infrared images and send them to the processor, and the display module is configured to display the infrared image output by the processor in which the target object of interest is highlighted.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the infrared image processing method of any embodiment of the present application.
有益效果
上述实施例中,通过获取红外图像,对红外图像进行目标检测以确定红外图像中的目标对象,从所述目标对象中确定感兴趣目标对象,确定所述感兴趣目标对象的轮廓区域,根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将感兴趣目标对象在红外图像中进行凸显,通过对红外图像进行目标检测,从目标检测结果中确定感兴趣目标对象,在红外图像中对感兴趣目标对象的轮廓区域按照目标对象的类别进行真彩处理,将感兴趣目标对象在红外图像中进行凸显,通过真彩处理,可实现按目标对象的类别将感兴趣目标对象通过不同着色、纹理凸出显示,使得红外图像中关注的感兴趣目标更加清晰地呈现,更便于人眼观察和识别。
上述实施例中,红外图像处理装置、红外热成像设备及计算机可读存储介质与对应的红外图像处理方法实施例属于同一构思,从而分别与对应的红外图像处理方法实施例具有相同的技术效果,在此不再赘述。
附图说明
图1为一实施例中红外图像处理方法的应用场景示意图;
图2为另一实施例中红外图像处理方法的应用场景示意图;
图3为一可选的具体示例中轮廓分割图像的示意图;
图4为一示例中对轮廓分割图像中感兴趣目标对象进行真彩转换后的示意图;
图5为一示例中对目标对象进行凸显的红外图像的示意图;
图6为一示例中显示有目标对象运动方向提示信息的红外图像的示意图;
图7为一示例中显示有目标对象运动路径的红外图像的示意图;
图8为一示例中显示有目标对象下一预测位置的红外图像的示意图;
图9为一示例中对红外图像进行目标检测得到的目标检测结果的示意图;
图10为一示例中对红外图像中感兴趣目标对象的追踪区域设置的示意图;
图11为一示例中原始的红外图像的示意图;
图12为图11所示红外图像二值化处理后的掩膜图像的示意图;
图13为图12所示掩膜图像进行图像形态学滤波处理后的目标对象掩膜图像的示意图;
图14为图13所示目标对象掩膜图像作用于原始的红外图像后得到的目标筛选图像的示意图;
图15为一示例中红外热成像设备的结构示意图;
图16为一示例中红外图像处理方法的主要流程示意图;
图17为一可选的具体示例中红外图像处理方法的流程图;
图18为一实施例中图像处理装置的示意图;
图19为一实施例中红外热成像设备的结构示意图。
本发明的实施方式
以下结合说明书附图及具体实施例对本发明技术方案做进一步的详细阐述。
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述,所描述的实施例不应视为对本申请的限制,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
在以下的描述中,涉及到“一些实施例”的表述,其描述了所有可能实施例的子集,需要说明的是,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
在以下的描述中,所涉及的术语“第一、第二、第三”仅仅是区别类似的 对象,不代表针对对象的特定排序,可以理解地,“第一、第二、第三”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
请参阅图1,为本申请实施例提供的红外图像处理方法的一可选应用场景的示意图,其中,红外热成像设备11包括处理器12、与所述处理器12连接的存储器13和红外拍摄模块14。所述红外热成像设备11通过所述红外拍摄模块14实时采集红外图像发送给处理器12,所述存储器13内存储有实施本申请实施例所提供的红外图像处理方法的计算机程序,处理器12通过执行所述计算机程序,通过对红外图像进行目标检测以识别红外图像中的目标对象,从目标对象中确定出感兴趣目标对象,确定出感兴趣目标对象的轮廓区域并根据感兴趣目标对象的类别进行真彩处理,在红外图像中将感兴趣目标对象进行凸显。其中,红外热成像设备11可以是集成有红外图像拍摄功能的红外拍摄模块14、且具备存储和处理功能的各类智能终端,如手持观察仪、各类瞄准类设备、车载/机载光电载荷设备等。
请参阅图2,为本申请一实施例提供的红外图像处理方法,可以应用于图1所示应用场景中的红外热成像设备。其中,红外图像处理方法包括如下步骤:
S101,获取红外图像。
红外热成像设备可以包括图像拍摄模块,图像拍摄模块包括红外镜头和红外探测器,探测器包含各种制冷、非制冷的短波、中波、长波探测器。所述获取红外图像包括:红外热成像设备通过图像拍摄模块实时采集目标场景的红外图像。在另一些可选的实施例中,红外热成像设备不包括图像拍摄模块,所述获取红外图像包括:红外热成像设备获取具备图像拍摄功能的其它智能设备发送的红外图像,这里,其它智能设备可以是红外探测器、手机终端、云端等。
S103,对所述红外图像进行目标检测,确定所述红外图像中的目标对象。
红外热成像设备对红外图像进行目标检测,通过目标检测识别红外图像中的目标对象。目标检测可以采用传统的基于Haar特征提取对目标进行检测的目标检测算法、基于深度学习模型的目标检测算法等。通过目标检测对红外图像中的目标对象进行识别,识别到的目标对象显示在红外图像中可提供用户交互式操作以进行选定。
其中目标对象可以是红外图像中用户想要关注的图像内容,可以是一个或多个预设类别的对象,如人、不同动物、车辆等。
S104,从所述目标对象中确定感兴趣目标对象;
通过目标检测确定出红外图像中的目标对象,再从目标检测结果中确定出感兴趣目标对象,可以提高确定感兴趣目标对象的准确性和效率。可选的,根据目标检测结果确定感兴趣目标对象,可以是如下至少一种方式:将目标检测结果中的预设类别的目标对象作为感兴趣目标对象;将目标检测结果中位于所 述红外图像中预设位置的目标对象作为感兴趣目标对象;将目标检测结果中尺寸满足预设要求的目标对象作为感兴趣目标对象;基于交互式操作从目标检测结果中选取目标对象作为感兴趣目标对象。
S105,确定所述感兴趣目标对象的轮廓区域;
确定感兴趣目标对象的轮廓区域,可以是通过轮廓检测算法提取所述感兴趣目标对象的轮廓线信息;也可以是通过图像分割算法确定出所述感兴趣目标对象的轮廓分割图像;还可以是基于目标检测结果中得到的各目标对象的位置信息确定的所述感兴趣目标对象的轮廓区域等。
S107,根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将所述感兴趣目标对象在所述红外图像中进行凸显。
将感兴趣目标对象按类别对其轮廓区域进行真彩处理,可以针对不同类别预先设置对应的真彩模板,通过对应真彩模板对感兴趣目标对象的轮廓区域进行着色,实现将感兴趣目标对象在所述红外图像中按类别以不同颜色、纹理进行凸显。对感兴趣目标对象的轮廓区域进行真彩处理,可使得感兴趣目标对象的成像区域内,感兴趣目标对象图像的每个像素值中,有R、G、B三个基色分量,每个基色分量直接决定显示设备的基色强度而产生彩色,与可见光图像的显色一致,更符合人眼感知系统。
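The true-color step above assigns R, G, B components to each pixel of the contour region according to a per-category template. A minimal NumPy sketch of that idea follows; the `COLOR_TEMPLATES` palette, the function name, and the intensity-modulated blending are illustrative assumptions, not the patent's actual scheme.

```python
import numpy as np

# Hypothetical per-category colour templates (RGB); the patent leaves the
# concrete palettes (and any texture patterns) to the implementation.
COLOR_TEMPLATES = {"bird": (0, 0, 255), "rabbit": (255, 255, 0)}

def apply_color_template(gray, contour_mask, category):
    """Tint the contour region of a grayscale infrared frame with the
    category's RGB template, leaving the rest of the frame grayscale.
    The template is modulated by local intensity so shading survives."""
    rgb = np.stack([gray] * 3, axis=-1).astype(np.float32)
    color = np.asarray(COLOR_TEMPLATES[category], dtype=np.float32)
    shade = (gray[contour_mask].astype(np.float32) / 255.0)[:, None]
    rgb[contour_mask] = color * shade
    return np.rint(rgb).astype(np.uint8)
```

Because the template is modulated by the original gray level, hotter (brighter) parts of the target stay brighter after coloring, which preserves some of the thermal detail inside the highlighted region.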
上述实施例中,通过获取红外图像,对红外图像进行目标检测以确定红外图像中的目标对象,从所述目标对象中确定感兴趣目标对象,确定所述感兴趣目标对象的轮廓区域,根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将感兴趣目标对象在红外图像中按指定颜色进行凸显,通过对红外图像进行目标检测,从目标检测结果中确定感兴趣目标对象,在红外图像中对感兴趣目标对象的轮廓区域按照目标对象的类别进行真彩处理,将感兴趣目标对象在红外图像中进行凸显,通过真彩处理,可实现按目标对象的类别将感兴趣目标对象通过不同着色、纹理凸出显示,使得红外图像中关注的感兴趣目标更加清晰地呈现,更便于人眼观察和识别。
可选的,S104,从所述目标对象中确定感兴趣目标对象,包括:
根据对所述目标对象的选定操作,确定感兴趣目标对象。
对目标对象的选定操作可以是根据红外热成像设备支持的不同交互式操作类型而不同。可选的,红外热成像设备提供交互式触控界面,将对红外图像进行目标检测识别到的目标对象显示在对交互式触控界面中,用户可通过在交互式触控界面中直接选取某一目标对象作为当前关注的感兴趣目标对象。在另一些可选实施例中,红外热成像设备提供非触控式显示界面、且红外热成像设备上设有操作按键,将对红外图像进行目标检测识别到的目标对象显示在非触控式显示界面中,非触控式显示界面上有选取框,用户可通过操作按键移动选取框来选取某一目标对象作为当前关注的感兴趣目标对象。作为另一些可选的实施例,红外热成像设备可以支持语音式输入,将对红外图像进行目标检测识别 到的目标对象显示在显示界面中,各目标对象分别带有唯一表征其身份信息的编号,用户可以语音输入想要选取的目标对象的编号来完成选定操作以确定感兴趣目标对象。
红外热成像设备根据用户的选定操作,将选定的目标对象作为当前关注的感兴趣目标对象,后续仅对选定的感兴趣目标对象进行轮廓区域的确定、真彩处理等,便于满足用户实时选定当前观测对象的需求。
在一些实施例中,所述S107,根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将所述感兴趣目标对象在所述红外图像中进行凸显,包括:
根据所述感兴趣目标对象的类别确定对应色彩模板,在所述红外图像中将所述感兴趣目标对象的所述轮廓区域按照所述对应色彩模板进行着色后进行显示。
目标对象的类别可以根据通过红外热成像设备进行图像采集而实现目标定位的对象类别来确定,以红外热成像设备为应用于户外的狩猎场景中的红外瞄准设备为例,可以预先设定目标对象的类别包括行人、车辆、鸟类、兔子、牛、羊、狼、狗、猪、猫。可选的,针对不同目标对象的类别可以预设不同的色彩模板,根据选定的感兴趣目标对象的类别,相应确定其对应的色彩模板,在红外图像中将感兴趣目标对象按照其对应的色彩模板进行着色并显示。如,针对鸟类的对应色彩模板为蓝色模板,若当前红外图像中选定的感兴趣目标对象为鸟类,则根据蓝色模板对感兴趣目标对象进行着色并显示;针对兔子的对应色彩模板为黄色模板,若当前红外图像中选定的感兴趣目标对象为兔子,则根据黄色模板对感兴趣目标对象进行着色并显示。可选的,各个色彩模板中还可以包括纹理信息,如,针对鸟类的对应色彩模板为蓝色羽毛纹理模板,若当前红外图像中选定的感兴趣目标对象为鸟类,则根据蓝色羽毛纹理模板对感兴趣目标对象按蓝色羽毛纹理模板进行着色并显示;针对兔子的对应色彩模板为黄色绒毛纹理模板,若当前红外图像中选定的感兴趣目标对象为兔子,则根据黄色绒毛纹理模板对感兴趣目标对象按黄色绒毛纹理模板进行着色并显示。
通过按感兴趣目标对象的类别设置不同的色彩模板,可以实现对不同的关注目标采用不同颜色、纹理进行凸显,用户可以根据自己的偏好,对自己更加关注的目标对象类别采用更醒目、更符合自己感知度的色彩来凸显显示,减少不适应和用眼疲劳,减少通过红外图像来实现对关注目标进行追踪的过程中出现失误。
上述实施例中,将感兴趣目标对象按类别采用不同的色彩模板进行着色显示,使得红外图像中对用户的关注目标能够符合人眼对可见光的感知方式,提升对目标识别的准确性。
在一些实施例中,所述S105,确定所述感兴趣目标对象的轮廓区域,包括:
根据所述感兴趣目标对象的位置信息,对所述感兴趣目标对象进行分割得到所述感兴趣目标对象的轮廓分割图像;
则所述根据所述感兴趣目标对象的类别确定对应色彩模板,在所述红外图像中将所述感兴趣目标对象的所述轮廓区域按照所述对应色彩模板进行着色后进行显示,包括:
根据所述感兴趣目标对象的类别确定对应色彩模板,按照所述对应色彩模板对所述轮廓分割图像进行着色;
将着色后的所述轮廓分割图像与原始的所述红外图像进行融合;和/或,将着色后的所述轮廓分割图像按预设比例进行放大后,按照指定显示位置与原始的所述红外图像进行融合。
感兴趣目标对象的位置信息可以是根据对红外图像进行目标检测的步骤得到,如,通过对红外图像进行目标检测,得到的目标检测结果包括所述红外图像中包含的全部目标对象的对应类别及位置信息。位置信息的表达形式可以是但不限于如下任意一种:感兴趣目标对象的中心点坐标与尺寸信息的组合、感兴趣目标对象的外轮廓中四个角点的坐标组合、感兴趣目标对象的左下角点坐标与尺寸信息的组合。根据感兴趣目标对象的位置信息,对红外图像中感兴趣目标对象的区域图像进行内容提取,得到感兴趣目标对象的轮廓分割图像,将分割得到的感兴趣目标对象的轮廓分割图像按对应色彩模板进行着色,将着色后的轮廓分割图像与原始的红外图像进行融合显示。
The segmentation method for obtaining the contour segmentation image from the position information of the target object of interest may include, but is not limited to, Otsu's method, gray-level threshold segmentation, temperature segmentation, or deep-learning-model segmentation. Figure 3 is a schematic diagram of a contour segmentation image in an optional specific example. Based on the contour segmentation image and the category of the target object of interest, the target of interest is colored according to the true-color template preset for each category, making the target more prominent and consistent with human visual perception; for example, people and different types of animals are colored separately, avoiding misjudgment of targets that cannot be distinguished in an infrared image because their temperatures are similar. Figure 4 is a schematic diagram of the contour segmentation image after true-color conversion of the target object of interest.
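Among the segmentation options just listed, Otsu's method is compact enough to sketch. The following NumPy version is an illustrative implementation (not code from the patent): it picks the 8-bit gray threshold that maximizes between-class variance and returns a binary foreground mask.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximises the
    between-class variance of the 8-bit intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    cum = np.cumsum(hist)                          # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))    # class-0 intensity sums
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total
        if w0 in (0.0, 1.0):                       # one class empty: skip
            continue
        m0 = cum_mean[t - 1] / cum[t - 1]
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var = w0 * (1.0 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def segment(gray):
    """Binary foreground mask from the Otsu threshold."""
    return (gray >= otsu_threshold(gray)).astype(np.uint8)
```

On a bimodal infrared patch (warm target against a cooler background) the returned threshold falls between the two modes, so the mask isolates the target's silhouette.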
图像融合(Image Fusion)是指将多源信道所采集到的关于同一目标的图像数据经过图像处理技术,最大限度的提取各自信道中的有利信息,最后综合成高质量的图像,以提高图像信息的利用率、改善计算机解译精度和可靠性、提升原始图像的空间分辨率和光谱分辨率,利于监测。红外热成像设备将对感兴趣目标对象进行真彩转换后的轮廓分割图像与原始的红外图像进行融合,使得着色后的感兴趣目标对象替代原始的红外图像中原始目标的位置,得到将所述感兴趣目标对象在原始的红外图像中按照预设方式进行凸显的融合图像。
可选的,红外热成像设备还可以对感兴趣目标对象进行真彩转换后的轮廓分割图像按预设比例进行放大后,按照指定显示位置与原始的所述红外图像进行融合,指定显示位置可以是指,统一将按比例放大、着色后的感兴趣目标对象显示于当前图像视野中的某个位置,如左下角;或,将按比例放大、着色后 的感兴趣目标对象替代原始目标,以避免原始目标可能在视野中尺寸较小,不利于观察。
红外热成像设备可以同时将着色后的所述轮廓分割图像与原始的所述红外图像进行融合、及将着色后的所述轮廓分割图像按预设比例进行放大后,按照指定显示位置与原始的红外图像进行融合,如图5所示,用户可以在融合图像中观测到的目标对象的细节、轮廓可以更加清晰。
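The two fusion modes described above (replacing the target region in place, and pasting an enlarged copy at a designated display position such as the lower-left corner) can be sketched as follows. `upscale` and `paste` are illustrative helper names, and nearest-neighbour enlargement stands in for whatever scaling method the device actually uses.

```python
import numpy as np

def upscale(img, factor):
    """Nearest-neighbour enlargement by an integer factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def paste(canvas, patch, top, left, mask=None):
    """Overwrite a region of `canvas` with `patch`; with `mask` given,
    only the True pixels are copied (a minimal alpha-less fusion)."""
    h, w = patch.shape[:2]
    region = canvas[top:top + h, left:left + w]
    if mask is None:
        region[...] = patch
    else:
        region[mask] = patch[mask]
    return canvas
```

Pasting the colored segmentation back at the target's own coordinates gives the in-place highlight; pasting an `upscale`d copy at a fixed corner gives the magnified preview mode.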
上述实施例中,红外热成像设备通过将感兴趣目标对象的轮廓分割图像通过着色后、或通过着色再按比例放大后与原始的红外图像进行融合,实现在红外图像中将关注的目标对象按照人眼对可见光图像的颜色感知的方式进行凸出显示,更便于人眼观察和识别。
可选的,所述S104,从所述目标对象中确定感兴趣目标对象之后,还包括:
对所述感兴趣目标对象进行目标跟踪,以获得轨迹预测信息;
根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息。
其中,轨迹预测信息可以包括如下至少之一:预测感兴趣目标对象的下一位置、预测感兴趣目标对象的运动方向、预测感兴趣目标对象的运动轨迹等。相应的,根据所述轨迹预测信息在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息,可以是指在红外图像中标记出感兴趣目标对象的下一预测位置、在红外图像中标记出感兴趣目标对象的预测运动方向、在红外图像中标记出感兴趣目标对象的预测运动轨迹等。
基于目标检测结果确定感兴趣目标对象,仅对感兴趣目标对象进行目标跟踪,可以大大减小目标跟踪的计算量和效率,且可以提升目标跟踪结果的精准性。
上述实施例中,通过对红外图像进行目标检测,以提供用户从目标检测结果中选取当前关注的感兴趣目标对象,可以简化后续对感兴趣目标对象进行轨迹预测的复杂度,提升轨迹预测的准确性;另一方面,对感兴趣目标对象进行目标跟踪以获取轨迹预测信息,并在红外图像中显示感兴趣目标对象的运动轨迹提示信息,在红外图像中将感兴趣目标对象进行凸显同时显示其运动轨迹提示信息,使得红外图像中关注的感兴趣目标更加清晰地呈现,便于根据运行轨迹提示信息对运动目标进行准确追踪。
在一些实施例中,所述根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息,包括:
根据所述轨迹预测信息,在所述红外图像的所述感兴趣目标对象的一侧显示运动方向提示信息;和/或,
根据所述轨迹预测信息,确定所述感兴趣目标对象的运动路径,在所述红 外图像中显示所述运动路径;和/或,
根据所述轨迹预测信息,确定所述感兴趣目标对象在所述红外图像中的下一预测位置,并在所述红外图像中的所述下一预测位置显示所述感兴趣目标对象的对应虚拟图像。
其中,轨迹预测信息可以包括一种或多种能够表征感兴趣目标对象的运动轨迹特性的相关信息,如,轨迹预测信息包括表征感兴趣目标对象的运动方向的信息,根据所述轨迹预测信息,在红外图像的感兴趣目标对象的运动方向的一侧显示指定标识,通过该指定标识的位置表示该对应侧为感兴趣目标对象的运动方向,如图6所示,显示于感兴趣目标对象右侧的圆形标记,表示感兴趣目标对象的预测运动方向为右侧;轨迹预测信息包括表征感兴趣目标对象的运动路径的信息,根据所述轨迹预测信息,在红外图像中显示感兴趣目标对象的运动路径,如图7所示,显示于感兴趣目标对象右侧的箭头,通过该箭头表示感兴趣目标对象的运动路径;轨迹预测信息包括表征感兴趣目标对象在下一预测时刻的位置的信息,根据所述轨迹预测信息,在红外图像中感兴趣目标对象在下一预测时刻的位置处显示指定标识,通过该指定标识的位置表示感兴趣目标对象的下一预测位置,如图8所示,显示于感兴趣目标对象右侧箭头末端的虚拟图标,该虚拟图标的位置即表示感兴趣目标对象的下一预测位置。
可选的,每一运动轨迹提示信息的类型可以相应形成为一个显示模式,如显示模式1为显示预测的运动方向提示信息、显示模式2为显示预测的运动路径、显示模式3为在预测的下一位置显示虚拟图标,用户可以从多个显示模式中进行选择,以选定在当前红外图像中显示对应的运动轨迹提示信息为运动方向提示信息、运动路径或者虚拟图标。
上述实施例中,通过对用户选定的感兴趣目标对象进行单目标跟踪,得到感兴趣目标对象的轨迹预测信息,并根据轨迹预测信息在红外图像中显示感兴趣目标对象的运动轨迹提示信息,以便于用户可通过红外图像快速、准确地捕获到关注的运动目标,尤其适合于户外狩猎的应用场景。
在一些实施例中,所述对所述感兴趣目标对象进行目标跟踪,以获得轨迹预测信息,包括:
对所述感兴趣目标对象所在的目标区域进行特征提取,得到第一特征提取结果;
按照不同的步长和尺度分别设定目标框,通过所述目标框以所述目标区域为起始位置进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果;
根据各所述目标框对应的所述第二特征提取结果与所述第一特征提取结果的特征相似度,将特征相似度满足要求的对应目标框所在区域确定为所述感兴趣目标对象的目标追踪区域;
根据所述目标追踪区域获得轨迹预测信息。
For the target objects recognized by target detection on the infrared image, the user can filter the recognized targets according to interest and select one of them as the target object of interest. As shown in Figure 9, a schematic diagram of a target detection result obtained by performing target detection on an infrared image in an optional specific example includes a first target object ID 1, a second target object ID 2, a third target object ID 3, and a fourth target object ID 4; through human-computer interaction, the user can select a target object of interest, for example the first target object ID 1. The infrared thermal imaging device performs feature extraction on the selected target object of interest for tracking and trajectory prediction. By performing target detection on the infrared image captured in the current scene, the candidate target objects in the scene are first locked in for the user to choose from; the user then selects one target object to follow through human-computer interaction, after which a single-target feature extraction approach can be used for tracking and trajectory prediction, which not only simplifies the tracking computation but also improves tracking accuracy.
Feature extraction on a designated region of the infrared image may include, but is not limited to, Haar features, HOG features, or features extracted by a convolutional neural network. Here, feature extraction on a designated region includes extracting features from the target area where the target object of interest is located to obtain the first feature extraction result, and, when searching with the target frames, extracting features from the area covered by each target frame to obtain the second feature extraction results. Taking the selection of the first target object ID 1 and HOG feature extraction as an example: according to the position information (x, y, w, h) of the first target object ID 1, where (x, y) is the center point of the first target object ID 1 and (w, h) is its width and height, the HOG feature computation processes only the content of the target area and mainly includes four steps: gradient computation, gradient histogram computation, block normalization, and HOG feature collection. The gradient computation step includes: for a single pixel, the horizontal gradient gx is the difference between the pixel values of its left and right neighbors, and the vertical gradient gy is the difference between the pixel values of its upper and lower neighbors; the total gradient magnitude of the pixel is then g = √(gx² + gy²), and the gradient direction is θ = arctan(gy/gx), where the gradient direction takes its absolute value. The gradient histogram computation step includes: dividing a pixel matrix into cells and computing a gradient histogram from the gradient information of the pixels in each cell. The block normalization step includes: when the range of gradient magnitudes exceeds a preset value, normalizing the gradient magnitudes; several cells form a block, and the feature vectors of all cells in the normalized block are concatenated in order to obtain the block's HOG feature. The HOG feature collection step includes: collecting the HOG features of all overlapping blocks in the detection window and combining them into a final feature vector as the feature extraction result for the corresponding target area.
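The per-pixel gradient step described above can be sketched in NumPy as follows; this is an illustration of the neighbour-difference formulas, not the patent's implementation.

```python
import numpy as np

def pixel_gradients(img):
    """Per-pixel horizontal/vertical gradients as neighbour differences,
    plus gradient magnitude and (absolute) direction, as in the HOG step."""
    img = img.astype(np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # right neighbour minus left
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # lower neighbour minus upper
    mag = np.sqrt(gx ** 2 + gy ** 2)         # total gradient magnitude
    ang = np.abs(np.arctan2(gy, gx))         # direction, absolute value
    return gx, gy, mag, ang
```

On a horizontal intensity ramp the vertical gradient vanishes and the magnitude equals the constant horizontal difference, which makes the formulas easy to check by hand.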
The features extracted by a convolutional neural network may refer to a feature extraction model obtained by training an independent neural network model; this feature extraction model extracts features from the target area where the target object of interest is located to obtain the first feature extraction result, and, when searching with the target frames, extracts features from the area covered by each target frame to obtain the second feature extraction results. In another optional embodiment, the step of performing target detection on the infrared image to determine the target objects is completed by a target detection model obtained by training a neural network model; here, feature extraction on a designated region of the infrared image may also be implemented by reusing the feature extraction network layers of that target detection model.
Target frames are set according to different step lengths and scales, a search is performed around the target object of interest starting from its position, the feature extraction result of the area covered by each target frame is compared with that of the target area where the target object of interest is located, and the area of the target frame with the greatest feature similarity is determined as the target tracking area of the target object of interest so as to obtain the trajectory prediction information.
In the above embodiment, single-target tracking is performed on the target object of interest selected by the user, and the trajectory prediction information of the target object of interest is obtained from the single-target tracking result. The single-target tracking computation can be iterated: the tracking result of one iteration can be used to optimize the target-frame setting parameters of the next iteration, further reducing the computation load of target tracking and improving the accuracy of the tracking result.
可选的,所述根据各所述目标框对应的所述第二特征提取结果与所述第一特征提取结果的特征相似度,将特征相似度满足要求的对应目标框所在区域确定为所述感兴趣目标对象的目标追踪区域之后,还包括:
根据所述目标追踪区域与所述目标区域的相对位置信息,确定对所述感兴趣目标对象的下一次轨迹预测的参考步长和参考方向。
目标追踪区域与目标区域的相对位置信息可以是指,根据目标追踪区域的位置中心坐标与目标区域的位置中心坐标,计算向量大小和向量方向,向量大小与二者相对距离对应,可以作为下一次轨迹预测对目标对象运动速度的步长参考,向量方向与二者相对夹角对应,可以作为下一次轨迹预测对目标对象运动方向的参考。
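The reference step length and reference direction described above reduce to the magnitude and angle of the vector between the two area centres; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def step_and_direction(prev_center, new_center):
    """Vector from the previous target area centre to the tracked area
    centre: its length is the step-length reference for the next
    prediction, its angle the direction reference (degrees)."""
    dx = new_center[0] - prev_center[0]
    dy = new_center[1] - prev_center[1]
    return float(np.hypot(dx, dy)), float(np.degrees(np.arctan2(dy, dx)))
```

Feeding these two values back into the next search narrows the step lengths and directions that need to be tried, which is exactly the iterative refinement the paragraph describes.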
上述实施例中,通过根据目标追踪区域和目标区域的相对位置信息确定下一次轨迹预测的参考步长和参考方向,可以提升对感兴趣目标对象的下一次轨迹预测计算的效率;单目标跟踪的计算过程可以迭代,根据每一次迭代过程得到对应一组参考步长和参考方向,可以根据一个迭代过程得到的单目标跟踪结果对下一迭代过程中目标框的设置参数进行优化,可进一步减小目标跟踪的计算量、提升目标跟踪结果的精准性。
可选的,所述按照不同的步长和尺度分别设定目标框,通过所述目标框以所述目标区域为起始位置进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果,包括:
根据所述感兴趣目标对象的尺寸按不同比例缩放设置不同的尺度,根据所述感兴趣目标对象的类别按线性运动规则设置不同的步长,在所述感兴趣目标 对象所在的目标区域的各个方向,按不同的所述步长和所述尺度分别设置目标框;
以所述目标区域为起始位置在设定图像区域内进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果。
The scales of the target frames can be obtained by scaling the size of the target object of interest in different proportions; for example, if the size of the target object of interest is x, the scales of the target frames can be 0.5x, 1x, 1.5x, 2x, and so on. The different step lengths can be set, according to the category of the target object of interest and the motion characteristics of that category, in a linear-motion manner in the up, down, left, and right directions around the target object of interest, with target frames set at different step lengths and scales; the different step lengths correspond to different pixel distances. For a target whose motion characteristic is a faster motion speed, the step lengths are usually set larger; conversely, for a slower target, the step lengths are usually set smaller. As shown in Figure 10, based on the size x of the target object of interest, target frames of scales 1x and 0.5x are set in order above the target object of interest, target frames of scales 1x and 0.5x below it, and target frames of 1x, 0.5x, and 1x in order to its right, thereby forming the tracking area range for the target object of interest.
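The multi-step, multi-scale search described above can be sketched as follows. As an assumption for illustration, a normalized intensity histogram stands in for the Haar/HOG/CNN features the patent names, since it is size-invariant and lets candidates of different scales be compared directly; all function names here are illustrative.

```python
import numpy as np

def crop(img, box):
    """Extract the patch for box = (centre_x, centre_y, width, height)."""
    x, y, w, h = box
    x0, y0 = max(0, x - w // 2), max(0, y - h // 2)
    return img[y0:y0 + h, x0:x0 + w]

def hist_feature(patch, bins=16):
    """Size-invariant stand-in feature: a normalised intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(1, patch.size)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def search_best_box(img, box, steps=(4, 8), scales=(0.5, 1.0, 1.5, 2.0)):
    """Scan candidate boxes around `box` in the four axis directions at
    every step/scale combination; return the candidate whose features
    are most similar to the template's, plus that similarity."""
    template = hist_feature(crop(img, box))
    x, y, w, h = box
    best, best_sim = box, -1.0
    for s in steps:
        for dx, dy in ((s, 0), (-s, 0), (0, s), (0, -s)):
            for sc in scales:
                cand = (x + dx, y + dy, max(2, int(w * sc)), max(2, int(h * sc)))
                patch = crop(img, cand)
                if patch.size == 0:
                    continue
                sim = cosine(template, hist_feature(patch))
                if sim > best_sim:
                    best, best_sim = cand, sim
    return best, best_sim
```

The winning candidate plays the role of the target tracking area; its offset from the starting box then supplies the reference step length and direction for the next prediction.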
上述实施例中,按照线性运动的方式对不同类别的感兴趣目标对象进行追踪预测,可以提升追踪计算的效率及目标跟踪结果的精准性。
可选的,所述S103,对所述红外图像进行目标检测,确定所述红外图像中的目标对象,包括:
对所述红外图像进行二值化处理,得到对应的掩膜图像;
基于所述掩膜图像进行图像形态学滤波处理,得到所述红外图像对应的目标对象掩膜图像;
将所述目标对象掩膜图像与原始的所述红外图像进行融合,得到目标筛选图像,对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息。
热成像视野中目标对象的成像温度高于周围环境的成像温度,对红外图像进行二值化处理,可以将红外图像中目标对象的对应成像位置进行粗定位,得到可将各目标对象在红外图像中的成像位置进行定位的掩膜图像,如图11和图12所示,为一可选的具体示例中,对原始的红外图像进行二值化处理后,得到含高温目标的掩膜图像的示意图,其中,二值化图像中白色区域对应为高温目标区域,方框圈出的分离的白色断点则表示非高温异常目标区域。二值化处理过程中,对应同一目标对象可能会存在边缘点断续,对于同一个目标对存在内部空洞的情况,通过对掩膜图像进行图像形态学滤波处理,消除掩膜图像中的毛刺、噪声,得到各目标对象轮廓更完整、清晰的目标对象掩膜图像,如图13所示,为可选的具体示例中,对掩膜图像进行图像形态学滤波处理,得到的包含真正高温目标更完整、清晰边缘的目标对象掩膜图像,如图11方框圈出的非 高温异常目标通过图像形态学滤波处理后已被过滤。将所述目标对象掩膜图像与原始的所述红外图像进行融合,得到目标筛选图像,如图14所示,以目标筛选图像作为输入进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息。
上述实施例中,通过形成掩膜图像,对掩膜图像进行图像形态学滤波处理形成目标对象掩膜图像,利用目标对象掩膜图像对红外图像中包含的图像信息进行筛选,将筛选后的包含真正目标对象内容的图像用于目标检测,可以在能够覆盖更大类别的目标对象的检测识别的基础上,提升目标检测的效率,以及提升目标检测结果的准确性。
可选的,所述对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息,包括:
通过训练后的检测模型对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息;
其中,所述检测模型由训练样本集对神经网络模型进行训练后得到,所述训练样本集包括分别包含不同目标对象及其类别、位置标签的训练样本图像。
检测模型可采用训练后得到的卷积神经网络模型,采用检测模型对目标筛选图像进行目标检测,得到所述红外图像中包含的目标对象的类别、位置信息的目标检测结果。其中,对检测模型进行训练的训练样本集包括正训练样本图像和负训练样本图像。通过采集户外不同时间段、不同地区、不同季节的包含有待识别的目标对象的红外图像来建立训练样本集,待识别的目标对象可以主要包括鸟类、兔子、牛、羊、狼、狗、猪、猫、行人、轿车、卡车、特种车辆等。正训练样本图像可以是包含有待识别的目标对象、及带有对应目标对象的类别、位置标签的原始红外图像;也可以是基于形成目标对象掩膜图像得到的包含有待识别的目标对象、及带有对应目标对象的类别、位置标签的目标筛选图像。通过深度学习的方法对训练样本集中训练样本图像进行特征提取,能够获取不同类别之间固有的特征,如动物的耳朵、爪子、姿态,车辆的轮廓、边角,行人的体征等,形成检测模型。正训练样本图像中原始红外图像、目标筛选图像的数量可以按照一定比例设置,对检测模型的训练样本包括一定比例的含有待识别的目标对象、及带有对应目标对象的类别、位置标签的目标筛选图像,可以提升检测模型的训练速度,提升训练后得到的检测模型对目标识别的效率,提升目标类别筛选和目标位置定位的识别效率和准确性,减少因其它干扰背景带来的错误检测。
上述实施例中,通过形成掩膜图像,对掩膜图像进行图像形态学滤波处理形成目标对象掩膜图像,利用目标对象掩膜图像对红外图像中包含的图像信息进行筛选,将筛选后的包含真正目标对象内容的图像用于训练后的神经网络模型来进行目标检测,如此,可以克服现有技术中通过神经网络模型直接对红外图像中目标进行检测的目标检测效果不好的缺陷,有效地提升了利用神经网络 模型对红外图像进行处理的应用能力。
可选的,所述对所述红外图像进行二值化处理,得到对应的掩膜图像,包括:
将所述红外图像中高于温度阈值的像素点置第一灰度值;将低于所述温度阈值的像素点置第二灰度值,得到对应的掩膜图像;
所述基于所述掩膜图像进行图像形态学滤波处理,得到所述红外图像对应的目标对象掩膜图像,包括:
对所述掩膜图像进行先腐蚀后膨胀的图像处理,得到所述红外图像对应的目标对象掩膜图像。
The imaging temperature of a target object in an infrared image is usually higher than that of the surrounding environment. The original infrared image is acquired, and pixels above the temperature threshold and pixels below it are assigned values respectively to form the corresponding mask image. In this embodiment, the first gray value is 255 and the second gray value is 0. The image morphological filtering mainly addresses the problem that, in the binarized mask image, discontinuous edge points leave internal holes or splits within a single target object; in this embodiment, the morphological filtering applies an opening operation to the mask image, i.e., erosion followed by dilation.
In the above embodiment, the original infrared image undergoes binarization followed by erosion-then-dilation image processing, which yields a mask image of the truly hot targets and improves the accuracy of detecting and recognizing target objects in the infrared image.
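The binarize-then-open pipeline above can be sketched in plain NumPy as follows; this is illustrative only, and a real device would likely use an optimized library routine for the morphology.

```python
import numpy as np

def binarize(img, thresh, hi=255, lo=0):
    """Pixels above the temperature threshold become `hi`, the rest `lo`."""
    return np.where(img > thresh, hi, lo).astype(np.uint8)

def erode(mask, k=3):
    """k x k erosion: a pixel keeps 255 only if its whole window is 255."""
    p = k // 2
    padded = np.pad(mask, p, constant_values=0)
    out = np.full_like(mask, 255)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]])
    return out

def dilate(mask, k=3):
    """k x k dilation: a pixel becomes 255 if any pixel in its window is 255."""
    p = k // 2
    padded = np.pad(mask, p, constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]])
    return out

def open_mask(mask, k=3):
    """Opening = erosion then dilation; removes specks smaller than the kernel."""
    return dilate(erode(mask, k), k)
```

Isolated hot pixels (noise, the spurious break-points circled in Figure 12) vanish under erosion and are never restored by the dilation, while large hot regions keep their shape — which is exactly why the opening leaves only the true high-temperature targets.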
请参阅图15至图17,为了能够对本申请实施例提供的红外图像处理方法具有更加整体的理解,下面以一具体示例进行说明。
如图15所示,红外热成像设备包括但不限于镜头及探测器、图像处理器、显示组件,其中探测器包含各种制冷、非制冷的短波、中波、长波探测器;图像处理器包括但不限于各种FPGA(Field Programmable Gate Array,现场可编辑门阵列)、Soc(System on a Chip,系统级芯片)、单片机等,显示组件不限于OLED(Organic Light-Emitting Diode,有机发光二极管)、LCD(Liquid Crystal Display,液晶显示器)、LCOS(Liquid Crystal on Silicon,硅基液晶)、光波导等各种显示模块。
红外线通过镜头投射到焦平面探测器上,探测器将红外的光信号转换为电子信号后传输给图像处理器,图像处理器是红外热成像设备的图像处理系统的核心组成部分,对探测器传输过来的信号进行处理,处理后的图像传输给显示组件进行显示,人眼通过观察显示组件来获得最终的红外图像。图像处理器所涉及的主要流程如图16所示,如图17所示,主要流程的实施过程包括如下步骤:
S11,获取红外图像(即原始的红外数据,如图10);
S12,对红外图像进行二值化,得到掩膜图像(图12)。根据所处环境的温 度对红外图像进行预处理,温度的阈值设置可以来自于温度传感器或用户设置,人眼对于高温目标更敏感;将热成像视野中高于温度阈值的点置为255,低于温度阈值的点置为0,形成一个二值化mask。
S13,对掩膜图像先腐蚀后膨胀,获得真正高温目标掩膜图像(图13)。所获取到的mask会存在边缘点断续,对于同一个目标会存在内部空洞,对所获取的mask图像先腐蚀后膨胀,目的是消除毛刺、噪声获得真正目标数据。
S14,将真正高温目标掩膜图像与红外图像融合,得到目标筛选图像(图14)。掩膜中是白色的部分保留原始图像内容,掩膜中是黑色的部分,则将原始图像中的内容置为0。
S15,将目标筛选图像送入预先训练的卷积神经网络中进行目标类别筛选和和目标的坐标位置定位,得到目标检测结果(图9)。使用筛选后的目标筛选图像作为神经网络模型的输入,可以减少因其他干扰背景带来的错误检测。目标分类与定位操作的主要操作方式包括:采集户外不同时间段、不同地区、不同季节的大量样本,主要包括鸟类、兔子、牛、羊、狼、狗、猪、猫、行人、轿车、卡车与特种车辆等样本;使用深度学习的方法对数据集进行特征提取,能够获取不同类别之间固有的特征,如动物的耳朵、爪子、姿态,车辆的轮廓、边角,行人的体征等,形成一个检测模型,同时可以对背景做区分,对环境的整体适应性较好。将从红外图像中获取到的高温目标内容,输入已训练完成的目标检测神经网络中,即可获取目标的类别、标号和目标的坐标位置。
S16,用户根据目标检测结果,选取感兴趣目标;
S17,对感兴趣目标进行目标跟踪。用户选择目标后,对选择的感兴趣目标进行特征提取,用于跟踪和轨迹预测。
Step 1: perform feature extraction on the target of interest. Common single-target feature extraction methods include, but are not limited to, Haar features, HOG features, or features extracted by a convolutional neural network. Taking HOG features as an example, suppose the target with ID 1 is selected; the target is extracted according to the coordinate position of target ID 1. For each target, the target detection network outputs four values (x, y, w, h) as its position information, where (x, y) is the target's center point and (w, h) its width and height. The HOG feature computation processes only the target area and mainly includes four steps: gradient computation, gradient histogram computation, block normalization, and HOG feature collection. Taking gradient computation as an example (the other steps follow the standard procedure): for a single pixel, the horizontal gradient gx is the difference between the pixel values of its left and right neighbors, and the vertical gradient gy is the difference between the pixel values of its upper and lower neighbors; the total gradient magnitude of the pixel is then g = √(gx² + gy²), and the gradient direction is θ = arctan(gy/gx), where the gradient direction takes its absolute value.
步骤二,按照线性运动的方式对目标进行追踪预测,按照不同的步长,不同的尺度,在目标周围进行搜索,其中不同步长指的是在上下左右四个方向,分别跨越不同的像素距离设定目标框,不同尺度指的是在不同位置根据初始ID1目标大小,设置不同缩方尺寸大小的选择区域,比如0.5x、1x、1.5x、2x等。对于选择的目标区域同样进行特征提取,与对感兴趣目标的特征提取结果进行比较,选择特征相似度最大的目标区域作为单目标跟踪结果;计算单目标跟踪结果与选择目标的位置中心点的向量大小和向量方向,其中向量大小作为下一次预测目标运动速度的步长参考,向量方向为下一次预测目标运动方向的参考,重点提取该位置的区域特征。
步骤三,上述步骤一和二可以重复,得到多组运动参考步长和运动参考速度,对于单目标运动轨迹的精度会逐渐提升。
S181,对感兴趣目标进行区域图像内容提取,获取感兴趣目标的精确轮廓(图3);
对输出的所有目标进行筛选后,假设选定目标ID 1作为感兴趣目标,则根据目标ID 1的坐标位置信息,进行区域图像内容提取,然后利用背景与前景的分割方法获取感兴趣目标的精确轮廓,包括但不限于大津法、灰度阈值分割法、温度分割法或卷积神经网络分割法。
S182,根据获取到的精确轮廓和感兴趣目标的类别,按照不同类别预置的真彩模板,对感兴趣目标进行着色(图4)。通过着色使得感兴趣目标更加凸显,符合人眼感知。整体而言,对人、不同类型的动物分别赋色,避免因温度相近而不能区分具体类型,尤其能够满足户外打猎场景的要求;
S19,将预测轨迹信息和着色后的感兴趣目标与原始红外图像进行融合显示(图5至图8中任一显示模式)。原始目标可能在视野中过小,不利于观察,可以将其放大后,按照使用者设定位置与原始红外图像融合显示;经过目标凸显方法处理后,红外图像中目标的细节和轮廓更加清晰,相较于单纯的灰度图像,明显的颜色差异比灰度差异更明显,更容易发现目标。显示模式可以包括多种,如:直接放大原图显示,使用放大后的真彩目标替换原始目标;显示预测位置,使用圆圈代表位置,箭头代表预测运动方向;显示预测位置,并增加虚拟目标等。
请参阅图18,本申请另一方面,提供一种红外图像处理装置,在示例性实施例中,该红外图像处理装置可以采用红外手持式瞄准设备实施。红外图像处理装置包括:获取模块1121,用于获取红外图像;检测模块1122,用于对所述红外图像进行目标检测,确定所述红外图像中的目标对象;确定模块1123,用 于从所述目标对象中确定感兴趣目标对象;分割模块1124,用于确定所述感兴趣目标对象的轮廓区域;显示模块1125,用于根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将所述感兴趣目标对象在所述红外图像中进行凸显。
可选的,所述确定模块1123,用于根据对所述目标对象的选定操作,确定感兴趣目标对象。
可选的,所述显示模块1125,还用于根据所述感兴趣目标对象的类别确定对应色彩模板,在所述红外图像中将所述感兴趣目标对象的所述轮廓区域按照所述对应色彩模板进行着色后进行显示。
可选的,分割模块1124,还用于根据所述感兴趣目标对象的位置信息,对所述感兴趣目标对象进行分割得到所述感兴趣目标对象的轮廓分割图像;所述显示模块1125,还用于根据所述感兴趣目标对象的类别确定对应色彩模板,按照所述对应色彩模板对所述轮廓分割图像进行着色;将着色后的所述轮廓分割图像与原始的所述红外图像进行融合;和/或,将着色后的所述轮廓分割图像按预设比例进行放大后,按照指定显示位置与原始的所述红外图像进行融合。
可选的,还包括跟踪模块,用于对所述感兴趣目标对象进行目标跟踪,以获得轨迹预测信息;所述显示模块1125,还用于根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息。
可选的,所述显示模块1125,还用于根据所述轨迹预测信息,在所述红外图像的所述感兴趣目标对象的一侧显示运动方向提示信息;和/或,根据所述轨迹预测信息,确定所述感兴趣目标对象的运动路径,在所述红外图像中显示所述运动路径;和/或,根据所述轨迹预测信息,确定所述感兴趣目标对象在所述红外图像中的下一预测位置,并在所述红外图像中的所述下一预测位置显示所述感兴趣目标对象的对应虚拟图像。
可选的,所述跟踪模块,还用于根据对所述目标对象的选定操作确定感兴趣目标对象;对所述感兴趣目标对象所在的目标区域进行特征提取,得到第一特征提取结果;按照不同的步长和尺度分别设定目标框,通过所述目标框以所述目标区域为起始位置进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果;根据各所述目标框对应的所述第二特征提取结果与所述第一特征提取结果的特征相似度,将特征相似度满足要求的对应目标框所在区域确定为所述感兴趣目标对象的目标追踪区域;根据所述目标追踪区域获得轨迹预测信息。
可选的,所述跟踪模块,还用于根据所述目标追踪区域与所述目标区域的相对位置信息,确定对所述感兴趣目标对象的下一次轨迹预测的参考步长和参考方向。
可选的,所述跟踪模块,还用于根据所述感兴趣目标对象的尺寸按不同比例缩放设置不同的尺度,根据所述感兴趣目标对象的类别按线性运动规则设置不同的步长,在所述感兴趣目标对象所在的目标区域的各个方向,按不同的所 述步长和所述尺度分别设置目标框;以所述目标区域为起始位置在设定图像区域内进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果。
可选的,所述检测模块1122,还用于对所述红外图像进行二值化处理,得到对应的掩膜图像;基于所述掩膜图像进行图像形态学滤波处理,得到所述红外图像对应的目标对象掩膜图像;将所述目标对象掩膜图像与原始的所述红外图像进行融合,得到目标筛选图像,对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息。
可选的,所述检测模块1122,还用于通过训练后的检测模型对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息;其中,所述检测模型由训练样本集对神经网络模型进行训练后得到,所述训练样本集包括分别包含不同目标对象及其类别、位置标签的训练样本图像。
可选的,所述检测模块1122,还用于将所述红外图像中高于温度阈值的像素点置第一灰度值;将低于所述温度阈值的像素点置第二灰度值,得到对应的掩膜图像;对所述掩膜图像进行先腐蚀后膨胀的图像处理,得到所述红外图像对应的目标对象掩膜图像。
需要说明的是:上述实施例提供的红外图像处理装置在实现目标凸显和跟踪处理过程中,仅以上述各程序模块的划分进行举例说明,在实际应用中,可以根据需要而将上述处理分配由不同的程序模块完成,即可将装置的内部结构划分成不同的程序模块,以完成以上描述的全部或部分方法步骤。另外,上述实施例提供的红外图像处理装置与红外图像处理方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
本申请另一方面提供一种红外热成像设备,请参阅图19,为本申请实施例提供的红外热成像设备的一个可选的硬件结构示意图,所述红外热成像设备包括处理器111、与所述处理器111连接的存储器112,存储器112内用于存储各种类别的数据以支持图像处理装置的操作,且存储有用于实现本申请任一实施例提供的红外图像处理方法的计算机程序,所述计算机程序被所述处理器执行时,实现本申请任一实施例提供的红外图像处理方法的步骤,且能达到相同的技术效果,为避免重复,这里不再赘述。
可选的,所述红外热成像设备还包括与所述处理器111连接的红外拍摄模块113和显示模块114,所述红外拍摄模块113用于拍摄红外图像作为待处理图像发送给所述处理器111。所述显示模块114用于显示所述处理器111输出的对所述感兴趣目标对象进行凸显的所述红外图像。
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述红外图像处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-OnlyMemory,简称ROM)、 随机存取存储器(RandomAccessMemory,简称RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是红外图像设备、手机,计算机,服务器或者网络设备等)执行本发明各个实施例所述的方法。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围之内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (18)

  1. 一种红外图像处理方法,其特征在于,包括:
    获取红外图像;
    对所述红外图像进行目标检测,确定所述红外图像中的目标对象,得到包含各所述目标对象的类别的目标检测结果;
    根据所述目标检测结果,从所述目标对象中确定感兴趣目标对象;
    确定所述感兴趣目标对象的轮廓区域;
    根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将所述感兴趣目标对象在所述红外图像中进行凸显。
  2. 如权利要求1所述的红外图像处理方法,其特征在于,所述从所述目标对象中确定感兴趣目标对象,包括:
    根据对所述目标对象的选定操作,确定感兴趣目标对象。
  3. 如权利要求1所述红外图像处理方法,其特征在于,所述根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将所述感兴趣目标对象在所述红外图像中进行凸显,包括:
    根据所述感兴趣目标对象的类别确定对应色彩模板,在所述红外图像中将所述感兴趣目标对象的所述轮廓区域按照所述对应色彩模板进行着色后进行显示。
  4. 如权利要求3所述红外图像处理方法,其特征在于,所述确 定所述感兴趣目标对象的轮廓区域,包括:
    根据所述感兴趣目标对象的位置信息,对所述感兴趣目标对象进行分割得到所述感兴趣目标对象的轮廓分割图像;
    所述根据所述感兴趣目标对象的类别确定对应色彩模板,在所述红外图像中将所述感兴趣目标对象的所述轮廓区域按照所述对应色彩模板进行着色后进行显示,包括:
    根据所述感兴趣目标对象的类别确定对应色彩模板,按照所述对应色彩模板对所述轮廓分割图像进行着色;
    将着色后的所述轮廓分割图像与原始的所述红外图像进行融合;和/或,将着色后的所述轮廓分割图像按预设比例进行放大后,按照指定显示位置与原始的所述红外图像进行融合。
  5. 如权利要求1至4中任一项所述红外图像处理方法,其特征在于,所述从所述目标对象中确定感兴趣目标对象之后,还包括:
    对所述感兴趣目标对象进行目标跟踪,以获得轨迹预测信息;
    根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息。
  6. 如权利要求5所述红外图像处理方法,其特征在于,所述根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息,包括:
    根据所述轨迹预测信息,在所述红外图像的所述感兴趣目标对象的一侧显示运动方向提示信息。
  7. 如权利要求5所述红外图像处理方法,其特征在于,所述根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息,包括:
    根据所述轨迹预测信息,确定所述感兴趣目标对象的运动路径,在所述红外图像中显示所述运动路径。
  8. 如权利要求5所述红外图像处理方法,其特征在于,所述根据所述轨迹预测信息,在所述红外图像中显示所述感兴趣目标对象的运动轨迹提示信息,包括:
    根据所述轨迹预测信息,确定所述感兴趣目标对象在所述红外图像中的下一预测位置,并在所述红外图像中的所述下一预测位置显示所述感兴趣目标对象的对应虚拟图像。
  9. 如权利要求5所述红外图像处理方法,其特征在于,所述对所述感兴趣目标对象进行目标跟踪,以获得轨迹预测信息,包括:
    对所述感兴趣目标对象所在的目标区域进行特征提取,得到第一特征提取结果;
    按照不同的步长和尺度分别设定目标框,通过所述目标框以所述目标区域为起始位置进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果;
    根据各所述目标框对应的所述第二特征提取结果与所述第一特征提取结果的特征相似度,将特征相似度满足要求的对应目标框所在区域确定为所述感兴趣目标对象的目标追踪区域;
    根据所述目标追踪区域获得轨迹预测信息。
  10. 如权利要求9所述红外图像处理方法,其特征在于,所述根据各所述目标框对应的所述第二特征提取结果与所述第一特征提取结果的特征相似度,将特征相似度满足要求的对应目标框所在区域确定为所述感兴趣目标对象的目标追踪区域之后,还包括:
    根据所述目标追踪区域与所述目标区域的相对位置信息,确定对所述感兴趣目标对象的下一次轨迹预测的参考步长和参考方向。
  11. 如权利要求9所述红外图像处理方法,其特征在于,所述按照不同的步长和尺度分别设定目标框,通过所述目标框以所述目标区域为起始位置进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果,包括:
    根据所述感兴趣目标对象的尺寸按不同比例缩放设置不同的尺度,根据所述感兴趣目标对象的类别按线性运动规则设置不同的步长,在所述感兴趣目标对象所在的目标区域的各个方向,按不同的所述步长和所述尺度分别设置目标框;
    以所述目标区域为起始位置在设定图像区域内进行搜索,分别对各所述目标框所在区域进行特征提取,得到对应的第二特征提取结果。
  12. 如权利要求1所述红外图像处理方法,其特征在于,所述对所述红外图像进行目标检测,确定所述红外图像中的目标对象,包括:
    对所述红外图像进行二值化处理,得到对应的掩膜图像;
    基于所述掩膜图像进行图像形态学滤波处理,得到所述红外图像 对应的目标对象掩膜图像;
    将所述目标对象掩膜图像与原始的所述红外图像进行融合,得到目标筛选图像,对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息。
  13. 如权利要求12所述红外图像处理方法,其特征在于,所述对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息,包括:
    通过训练后的检测模型对所述目标筛选图像进行目标检测,确定所述红外图像中包含的目标对象的类别、位置信息;
    其中,所述检测模型由训练样本集对神经网络模型进行训练后得到,所述训练样本集包括分别包含不同目标对象及其类别、位置标签的训练样本图像。
  14. 如权利要求12所述红外图像处理方法,其特征在于,所述对所述红外图像进行二值化处理,得到对应的掩膜图像,包括:
    将所述红外图像中高于温度阈值的像素点置第一灰度值;将低于所述温度阈值的像素点置第二灰度值,得到对应的掩膜图像;
    所述基于所述掩膜图像进行图像形态学滤波处理,得到所述红外图像对应的目标对象掩膜图像,包括:
    对所述掩膜图像进行先腐蚀后膨胀的图像处理,得到所述红外图像对应的目标对象掩膜图像。
  15. 一种红外图像处理装置,其特征在于,包括:
    获取模块,用于获取红外图像;
    检测模块,用于对所述红外图像进行目标检测,确定所述红外图像中的目标对象,得到包含各所述目标对象的类别的目标检测结果;
    确定模块,用于根据所述目标检测结果,从所述目标对象中确定感兴趣目标对象;
    分割模块,用于确定所述感兴趣目标对象的轮廓区域;
    显示模块,用于根据所述感兴趣目标对象的类别对所述轮廓区域进行真彩处理,将所述感兴趣目标对象在所述红外图像中进行凸显。
  16. 一种红外热成像设备,其特征在于,包括处理器、与所述处理器连接的存储器及存储在所述存储器上并可被所述处理器执行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至14中任一项所述的红外图像处理方法。
  17. 如权利要求16所述红外热成像设备,其特征在于,还包括与所述处理器连接的红外拍摄模块和显示模块;
    所述红外拍摄模块用于采集红外图像并发送给所述处理器;
    所述显示模块用于显示所述处理器输出的对所述感兴趣目标对象进行凸显的所述红外图像。
  18. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至14任一项所述的红外图像处理方法。
PCT/CN2023/072732 2022-09-07 2023-01-17 红外图像处理方法、装置及设备、存储介质 WO2024051067A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211086689.5 2022-09-07
CN202211086689.5A CN115170792B (zh) 2022-09-07 2022-09-07 红外图像处理方法、装置及设备、存储介质

Publications (1)

Publication Number Publication Date
WO2024051067A1 true WO2024051067A1 (zh) 2024-03-14

Family

ID=83480917

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072732 WO2024051067A1 (zh) 2022-09-07 2023-01-17 红外图像处理方法、装置及设备、存储介质

Country Status (2)

Country Link
CN (1) CN115170792B (zh)
WO (1) WO2024051067A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170792B (zh) * 2022-09-07 2023-01-10 烟台艾睿光电科技有限公司 Infrared image processing method, apparatus and device, and storage medium
CN115633939B (zh) * 2022-10-13 2023-06-13 北京鹰之眼智能健康科技有限公司 Method for obtaining composite metabolic-state regions based on infrared images
CN115393578B (zh) * 2022-10-13 2023-04-21 北京鹰之眼智能健康科技有限公司 Composite-region data processing system for obtaining metabolic state

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180054573A1 (en) * 2015-05-08 2018-02-22 Flir Systems, Inc. Isothermal image enhancement systems and methods
CN110084830A (zh) * 2019-04-07 2019-08-02 西安电子科技大学 Video moving-target detection and tracking method
CN111753692A (zh) * 2020-06-15 2020-10-09 珠海格力电器股份有限公司 Target object extraction method, product detection method, apparatus, computer, and medium
CN113378818A (zh) * 2021-06-21 2021-09-10 中国南方电网有限责任公司超高压输电公司柳州局 Electrical equipment defect determination method and apparatus, electronic device, and storage medium
CN114581351A (zh) * 2022-03-02 2022-06-03 合肥英睿系统技术有限公司 Target-enhanced image display method, apparatus, device, and storage medium
CN115170792A (zh) * 2022-09-07 2022-10-11 烟台艾睿光电科技有限公司 Infrared image processing method, apparatus and device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6792136B1 (en) * 2000-11-07 2004-09-14 Trw Inc. True color infrared photography and video
CN101277429B (zh) * 2007-03-27 2011-09-07 中国科学院自动化研究所 Method and system for fusion processing and display of multi-channel video information in surveillance
CN103854285A (zh) * 2014-02-27 2014-06-11 西安电子科技大学 SAR image terrain segmentation method based on random projection and improved spectral clustering
CN104134073B (zh) * 2014-07-31 2017-09-05 郑州航空工业管理学院 One-class-normalization-based single-class classification method for remote sensing images
CN108460794B (zh) * 2016-12-12 2021-12-28 南京理工大学 Binocular stereoscopic infrared salient-target detection method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GE, MAN; SUN, SHAO-YUAN; XI, LIN; QIAO, SHUAI: "Infrared Image Colorization Based on Monocular Depth Estimation", MICROCOMPUTER INFORMATION, WEIJISUANJI XINXI, CHINA, vol. 28, no. 10, 31 October 2012 (2012-10-31), China , pages 413 - 414, XP009553893, ISSN: 1008-0570 *

Also Published As

Publication number Publication date
CN115170792A (zh) 2022-10-11
CN115170792B (zh) 2023-01-10

Similar Documents

Publication Publication Date Title
WO2024051067A1 (zh) Infrared image processing method, apparatus and device, and storage medium
CN109271921B (zh) Intelligent recognition method and system based on multispectral imaging
CN106446873B (zh) Face detection method and device
WO2020078229A1 (zh) Target object recognition method and apparatus, storage medium, and electronic apparatus
CN111126325B (zh) Video-based intelligent personnel security recognition and statistics method
WO2021051601A1 (zh) Method and system for selecting detection boxes using Mask R-CNN, electronic device, and storage medium
CN110379020B (zh) Laser point cloud colorization method and apparatus based on a generative adversarial network
WO2020206850A1 (zh) Image annotation method and apparatus based on high-dimensional images
CN105930822A (zh) Face capture method and system
US20110025834A1 (en) Method and apparatus of identifying human body posture
CN112381075B (zh) Method and system for face recognition in specific machine-room scenarios
US20230334890A1 (en) Pedestrian re-identification method and device
TW202026948A (zh) Liveness detection method, apparatus, and storage medium
US20220012884A1 (en) Image analysis system and analysis method
CN111881849A (zh) Image scene detection method and apparatus, electronic device, and storage medium
CN112541403B (zh) Indoor person fall detection method using an infrared camera
US20190096066A1 (en) System and Method for Segmenting Out Multiple Body Parts
CN111008576A (zh) Pedestrian detection and model training and updating method, device, and readable storage medium
CN111563398A (zh) Method and apparatus for determining information of a target object
CN111967527A (zh) Artificial-intelligence-based peony variety identification method and identification system
CN113780145A (zh) Sperm morphology detection method and apparatus, computer device, and storage medium
JP7074174B2 (ja) Discriminator learning device, discriminator learning method, and computer program
CN111898427A (zh) Multispectral pedestrian detection method based on a feature-fusion deep neural network
Jiang et al. Depth image-based obstacle avoidance for an in-door patrol robot
WO2021118386A1 (ru) Method for obtaining a set of objects of a three-dimensional scene

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23861779

Country of ref document: EP

Kind code of ref document: A1