CN117036207A - Method for enhancing infrared image in three-dimensional acquisition box - Google Patents


Info

Publication number
CN117036207A
CN117036207A
Authority
CN
China
Prior art keywords
target
pixel point
portrait color
value
image
Prior art date
Legal status: Granted
Application number
CN202311302715.8A
Other languages
Chinese (zh)
Other versions
CN117036207B (en)
Inventor
李慧
张立坤
陆明明
Current Assignee
Huiyigu Traditional Chinese Medicine Technology Tianjin Co ltd
Original Assignee
Huiyigu Traditional Chinese Medicine Technology Tianjin Co ltd
Priority date
Filing date
Publication date
Application filed by Huiyigu Traditional Chinese Medicine Technology Tianjin Co., Ltd.
Priority to CN202311302715.8A
Publication of CN117036207A
Application granted
Publication of CN117036207B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/10 Image enhancement or restoration by non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20052 Discrete cosine transform [DCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20061 Hough transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image enhancement, and in particular to a method for enhancing an infrared image in a three-dimensional acquisition box, comprising the following steps: acquiring a target infrared portrait image in the three-dimensional acquisition box and performing image conversion on it to obtain a target portrait color map; screening a mouth triangle region out of the target portrait color map; determining a radiation intensity index for each pixel in the mouth triangle region; clustering the pixels of the target portrait color map outside the mouth triangle region; determining a radiation intensity index for each pixel in the resulting target cluster set; determining a transmittance for each pixel in the target portrait color map; and enhancing the target portrait color map according to the transmittance and radiation intensity index of each pixel. By processing the image data of the target infrared portrait image, the invention improves the infrared image enhancement effect.

Description

Method for enhancing infrared image in three-dimensional acquisition box
Technical Field
The invention relates to the technical field of image enhancement, in particular to an infrared image enhancement method in a three-dimensional acquisition box.
Background
A three-dimensional acquisition box is a device for acquiring the geometric shape and texture information of an object's surface. It can convert a physical object into a digitized three-dimensional model and is widely used in computer graphics, industrial design, cultural heritage protection, virtual reality and other fields. In medical three-dimensional acquisition, for example, a three-dimensional temperature model is often constructed by combining a three-dimensional image with an infrared image. The infrared image may be a photograph of a person, here referred to as an infrared portrait image. During acquisition, the infrared image is often affected by various factors, so the image quality of the acquired infrared image is low, and the three-dimensional temperature model obtained by fusing it with the three-dimensional image is inaccurate. Image enhancement of the infrared image is therefore often required. At present, images are usually enhanced as follows: histogram equalization is performed on the image according to its gray-level histogram to obtain an enhanced image.
However, when histogram equalization is performed on an infrared image directly according to its gray-level histogram, the following technical problem often arises in image enhancement: because histogram equalization only redistributes the gray-value distribution of the image, less prominent but important pixel information may be lost, so the enhancement effect on the infrared image is poor.
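To make the baseline concrete, the following is a minimal, illustrative sketch of gray-level histogram equalization (the function and variable names are ours, not the patent's):

```python
import numpy as np

def hist_equalize(gray):
    """Plain gray-level histogram equalization: map each gray level
    through the normalized cumulative histogram. The mapping depends
    only on the gray-value distribution, which is exactly why fine
    per-pixel information can be lost, as the text observes."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum() / gray.size          # normalized cumulative histogram
    return (255 * cdf[gray]).astype(np.uint8)

out = hist_equalize(np.array([[0, 0], [255, 255]], dtype=np.uint8))
```

Every pixel with the same gray value maps to the same output value, regardless of its spatial context.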
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In order to solve the technical problem of poor effect of enhancing infrared images, the invention provides an infrared image enhancement method in a three-dimensional acquisition box.
The invention provides a method for enhancing an infrared image in a three-dimensional acquisition box, which comprises the following steps:
acquiring a target infrared portrait image in a three-dimensional acquisition box, and performing image conversion on the target infrared portrait image to obtain a target portrait color map;
screening a mouth triangle region out of the target portrait color map according to the target infrared portrait image;
determining a radiation intensity index corresponding to each pixel in the mouth triangle region according to the dark channel image corresponding to the target portrait color map;
clustering the pixels of the target portrait color map outside the mouth triangle region according to the B-channel image corresponding to the target portrait color map to obtain a target cluster set;
determining a radiation intensity index corresponding to each pixel in each target cluster of the target cluster set according to the dark channel image corresponding to the target portrait color map;
determining a transmittance corresponding to each pixel in the target portrait color map according to the pixel value and gradient magnitude corresponding to each pixel in the target portrait color map;
and enhancing the target portrait color map according to the transmittance and radiation intensity index corresponding to each pixel in the target portrait color map to obtain a target enhanced image.
Optionally, screening the mouth triangle region out of the target portrait color map according to the target infrared portrait image includes:
performing edge detection on the target infrared portrait image to obtain a set of edge contour regions;
screening the edge contour region with the largest area out of the set of edge contour regions as a reference region;
screening the pixel with the smallest gray value out of the reference region as the nose tip pixel;
constructing a first target straight line and a second target straight line, perpendicular to each other, through the nose tip pixel, and determining the two intersection points of the first target straight line with the edge of the reference region as the first auricle pixel and the second auricle pixel, respectively;
screening a target face contour out of the target infrared portrait image by means of the Hough transform;
screening the chin pixel out of the intersection points of the second target straight line with the target face contour;
connecting the first auricle pixel, the second auricle pixel and the chin pixel pairwise to obtain a target triangle region;
and determining the region of the target portrait color map corresponding to the target triangle region as the mouth triangle region.
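The final connection step can be illustrated with a small sketch. Assuming the two auricle pixels and the chin pixel have already been located, this hypothetical helper (names are ours) rasterizes the triangle obtained by connecting them pairwise:

```python
import numpy as np

def triangle_mask(shape, p1, p2, p3):
    """Rasterize the triangle obtained by connecting three landmark
    points pairwise (standing in for the first auricle pixel, second
    auricle pixel and chin pixel). Points are (row, col); a pixel is
    inside when the three edge side-tests do not disagree in sign."""
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    def side(a, b):
        # Signed side of the a->b edge for every pixel.
        return (cc - a[1]) * (b[0] - a[0]) - (rr - a[0]) * (b[1] - a[1])
    d1, d2, d3 = side(p1, p2), side(p2, p3), side(p3, p1)
    has_neg = (d1 < 0) | (d2 < 0) | (d3 < 0)
    has_pos = (d1 > 0) | (d2 > 0) | (d3 > 0)
    return ~(has_neg & has_pos)  # True inside or on an edge

# Hypothetical landmark positions on an 8x8 grid.
mask = triangle_mask((8, 8), (0, 0), (0, 7), (7, 3))
```

The resulting boolean mask selects the mouth triangle region in the color map.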
Optionally, determining the radiation intensity index corresponding to each pixel in the mouth triangle region according to the dark channel image corresponding to the target portrait color map includes:
determining the region of the dark channel image corresponding to the mouth triangle region as the mouth dark channel region;
screening a preset proportion of the pixels with the largest pixel values out of the mouth dark channel region as first candidate pixels to obtain a first candidate pixel set;
determining, for each first candidate pixel in the first candidate pixel set, the corresponding pixel in the mouth triangle region as a second candidate pixel to obtain a second candidate pixel set;
and determining the radiation intensity index corresponding to each pixel in the mouth triangle region according to the second candidate pixel set.
Optionally, determining the radiation intensity index corresponding to each pixel in the mouth triangle region according to the second candidate pixel set includes:
determining the average of the R channel values corresponding to all the second candidate pixels in the second candidate pixel set as the mouth representative R value;
determining the average of the G channel values corresponding to all the second candidate pixels in the second candidate pixel set as the mouth representative G value;
determining the average of the B channel values corresponding to all the second candidate pixels in the second candidate pixel set as the mouth representative B value;
combining the mouth representative R value, the mouth representative G value and the mouth representative B value into the mouth representative radiation index;
and determining the mouth representative radiation index as the radiation intensity index corresponding to each pixel in the mouth triangle region.
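The sub-steps above amount to averaging the colors of the pixels whose dark-channel values are largest within the region. A compact sketch, under the assumptions that the dark channel is taken per pixel (the patent does not fix a window) and that all names are ours:

```python
import numpy as np

def region_radiation_index(rgb, mask, top_ratio=0.1):
    """Average R, G and B over the region pixels whose dark-channel
    value is among the largest `top_ratio` fraction (the preset
    proportion); the triple serves as the radiation intensity index
    of every pixel in the region."""
    dark = rgb.min(axis=2)                    # per-pixel dark channel
    vals = np.sort(dark[mask])
    k = max(1, int(round(top_ratio * vals.size)))
    thresh = vals[-k]                         # k-th largest dark value
    cand = mask & (dark >= thresh)            # the candidate pixels
    return tuple(rgb[cand].mean(axis=0))      # (R, G, B) index

rgb = np.zeros((2, 2, 3))
rgb[0, 0] = [10.0, 20.0, 30.0]
rgb[0, 1] = [5.0, 5.0, 5.0]
rgb[1, 0] = [1.0, 2.0, 3.0]
index = region_radiation_index(rgb, np.ones((2, 2), dtype=bool), top_ratio=0.25)
```

The same routine applies unchanged to each target cluster later in the method.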
Optionally, clustering the pixels of the target portrait color map outside the mouth triangle region according to the B-channel image corresponding to the target portrait color map to obtain a target cluster set includes:
clustering all the pixels of the target portrait color map outside the mouth triangle region according to their B channel values, and determining each resulting cluster as a target cluster to obtain a target cluster set, where the B channel value corresponding to a pixel is its value in the B-channel image.
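The patent does not fix a particular clustering algorithm; a simple 1-D k-means over the per-pixel B channel values, sketched below with assumed names, is one plausible realization:

```python
import numpy as np

def cluster_by_b(b_values, k=2, iters=20):
    """1-D k-means over per-pixel B-channel values; returns a cluster
    label per value. Each resulting label group is one target cluster."""
    b = np.asarray(b_values, dtype=float)
    centers = np.linspace(b.min(), b.max(), k)   # spread initial centers
    for _ in range(iters):
        labels = np.abs(b[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = b[labels == j].mean()
    return labels

# Two well-separated groups of B values cluster apart.
labels = cluster_by_b([0, 1, 2, 100, 101, 102], k=2)
```

The number of clusters is a design parameter the patent leaves open.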
Optionally, determining the radiation intensity index corresponding to each pixel in each target cluster of the target cluster set according to the dark channel image corresponding to the target portrait color map includes:
determining the region of the dark channel image corresponding to the target cluster as the target dark channel region corresponding to the target cluster;
screening a preset proportion of the pixels with the largest pixel values out of the target dark channel region as third candidate pixels to obtain a third candidate pixel set corresponding to the target cluster;
determining, for each third candidate pixel in the third candidate pixel set, the corresponding pixel in the target cluster as a fourth candidate pixel to obtain a fourth candidate pixel set corresponding to the target cluster;
determining the average of the R channel values corresponding to all the fourth candidate pixels in the fourth candidate pixel set as the target representative R value corresponding to the target cluster;
determining the average of the G channel values corresponding to all the fourth candidate pixels in the fourth candidate pixel set as the target representative G value corresponding to the target cluster;
determining the average of the B channel values corresponding to all the fourth candidate pixels in the fourth candidate pixel set as the target representative B value corresponding to the target cluster;
combining the target representative R value, the target representative G value and the target representative B value into the target representative radiation index corresponding to the target cluster;
and determining the target representative radiation index as the radiation intensity index corresponding to each pixel in the target cluster.
Optionally, determining the transmittance corresponding to each pixel in the target portrait color map according to the pixel value and gradient magnitude corresponding to each pixel in the target portrait color map includes:
determining a target depth of field corresponding to each pixel in the target portrait color map according to the pixel value and gradient magnitude corresponding to each pixel in the target portrait color map;
determining the product of a preset medium scattering coefficient and the target depth of field corresponding to each pixel in the target portrait color map as the initial transmission index corresponding to that pixel;
and determining the transmittance corresponding to each pixel in the target portrait color map according to the initial transmission index corresponding to that pixel, where the initial transmission index and the transmittance are negatively correlated.
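The last two sub-steps follow the Beer-Lambert attenuation model. The text only requires the transmittance to be negatively correlated with the initial transmission index, so exp(-x) is one admissible choice, used in this hedged sketch (names are ours):

```python
import math

def transmittance(depth, beta=0.1):
    # Initial transmission index: preset medium scattering
    # coefficient times the target depth of field.
    index = beta * depth
    # Any function decreasing in `index` satisfies the text;
    # exp(-index) is the usual Beer-Lambert form and stays in (0, 1].
    return math.exp(-index)

t_near = transmittance(1.0)    # small depth of field -> high transmittance
t_far = transmittance(10.0)    # large depth of field -> low transmittance
```

Larger depths thus yield lower transmittance, as the negative correlation requires.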
Optionally, determining the target depth of field corresponding to each pixel in the target portrait color map according to the pixel value and gradient magnitude corresponding to each pixel in the target portrait color map includes:
determining a gradient representative value corresponding to each pixel in the target portrait color map according to the gradient magnitudes of the pixels within a preset window around that pixel, where the gradient magnitudes within the preset window and the gradient representative value are positively correlated;
performing information-content analysis on the preset window corresponding to each pixel in the target portrait color map to obtain the high-frequency information content corresponding to that pixel;
determining a first depth-of-field index corresponding to each pixel in the target portrait color map according to the gradient representative value and high-frequency information content corresponding to that pixel, where the gradient representative value and the high-frequency information content are both positively correlated with the first depth-of-field index;
determining the average of the R channel values of all the pixels within the preset window corresponding to each pixel in the target portrait color map as the window representative R value corresponding to that pixel;
determining the average of the B channel values of all the pixels within the preset window corresponding to each pixel in the target portrait color map as the window representative B value corresponding to that pixel;
normalizing the difference between the window representative R value and the window representative B value corresponding to each pixel in the target portrait color map to obtain a second depth-of-field index corresponding to that pixel;
and determining the target depth of field corresponding to each pixel in the target portrait color map according to the first depth-of-field index and second depth-of-field index corresponding to that pixel, where the first depth-of-field index and the second depth-of-field index are both negatively correlated with the target depth of field.
Optionally, performing the information-content analysis on the preset window corresponding to each pixel in the target portrait color map to obtain the high-frequency information content corresponding to each pixel includes:
performing a discrete cosine transform (DCT) on the preset window corresponding to each pixel in the target portrait color map to obtain a DCT result corresponding to that pixel;
and determining the sum of the squares of all the high-frequency coefficients in the DCT result corresponding to each pixel in the target portrait color map as the high-frequency information content corresponding to that pixel.
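A sketch of this computation follows; the names are ours, and the cutoff deciding which DCT coefficients count as high-frequency is an assumption, since the text only says "all high-frequency coefficients":

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal 1-D DCT-II basis as an n x n matrix."""
    k = np.arange(n, dtype=float)[:, None]
    i = np.arange(n, dtype=float)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def high_freq_energy(window, cutoff=None):
    """Sum of squares of the high-frequency DCT coefficients of a
    square window; 'high-frequency' is taken as coefficients whose
    row and column indices sum to at least `cutoff` (default n)."""
    n = window.shape[0]
    c = dct_matrix(n)
    coeffs = c @ window @ c.T            # 2-D DCT via the separable basis
    if cutoff is None:
        cutoff = n
    idx_sum = np.add.outer(np.arange(n), np.arange(n))
    return float((coeffs[idx_sum >= cutoff] ** 2).sum())

flat = np.full((4, 4), 7.0)                              # constant window
checker = (np.indices((4, 4)).sum(0) % 2).astype(float)  # alternating pattern
hf_flat = high_freq_energy(flat)
hf_checker = high_freq_energy(checker)
```

A constant window has no high-frequency content, while a pixel-level alternating pattern concentrates its energy there, matching the role of this quantity as a sharpness cue.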
Optionally, enhancing the target portrait color map according to the transmittance and radiation intensity index corresponding to each pixel in the target portrait color map to obtain a target enhanced image includes:
according to the transmittance and radiation intensity index corresponding to each pixel in the target portrait color map, determining the target enhanced pixel value corresponding to each pixel in the target portrait color map by the formula:

J_u = (I_u - A_u) / max(t_u, t) + A_u

where J_u is the target enhanced pixel value corresponding to the u-th pixel in the target portrait color map; I_u is the pixel value corresponding to the u-th pixel; A_u is the radiation intensity index corresponding to the u-th pixel; t_u is the transmittance corresponding to the u-th pixel; t is a preset transmission threshold; max(t_u, t) is the maximum of t_u and t; and u is the sequence number of a pixel in the target portrait color map;
and updating the pixel value corresponding to each pixel in the target portrait color map to its target enhanced pixel value to obtain the target enhanced image.
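The per-pixel update of this step can be applied in vectorized form. A minimal sketch with assumed array names, where `t0` plays the role of the preset transmission threshold:

```python
import numpy as np

def enhance(img, A, t_map, t0=0.1):
    """Apply J_u = (I_u - A_u) / max(t_u, t0) + A_u per pixel.
    img: HxWx3 target portrait color map (float), A: radiation
    intensity index (an RGB triple), t_map: HxW transmittance,
    t0: the preset transmission threshold."""
    t_clamped = np.maximum(t_map, t0)          # max(t_u, t0)
    return (img - A) / t_clamped[..., None] + A

img = np.array([[[0.5, 0.4, 0.3]]])            # a single RGB pixel
A = np.array([0.8, 0.8, 0.8])                  # its radiation intensity index
J = enhance(img, A, np.array([[0.5]]))         # transmittance above threshold
J_low = enhance(img, A, np.array([[0.01]]))    # transmittance clamped to t0
```

Clamping by the threshold keeps the division stable where the transmittance is very small.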
The invention has the following beneficial effects:
The method for enhancing an infrared image in a three-dimensional acquisition box of the invention enhances the image by processing the image data of the target infrared portrait image, solves the technical problem of a poor enhancement effect on infrared images, and improves the enhancement effect. First, a target infrared portrait image in a three-dimensional acquisition box is acquired and converted, which facilitates the subsequent image enhancement. Because the carbon dioxide exhaled during breathing usually makes the temperature of the mouth and nose region higher than normal, the pseudo-color image of this high-temperature region often shows a mist-like distribution, and the influence of exhalation at the mouth and nose is usually large compared with the heat radiated by the rest of the body, the target infrared portrait image often has low accuracy there and fuses poorly with the three-dimensional image. Screening the mouth triangle region out of the target portrait color map based on the target infrared portrait image therefore allows this region to be accurately enhanced later. Basing the radiation intensity index of each pixel in the mouth triangle region on the dark channel image corresponding to the target portrait color map improves the accuracy of that index. Likewise, clustering the pixels outside the mouth triangle region based on the B-channel image corresponding to the target portrait color map improves the accuracy of the target clusters.
Basing the radiation intensity index of each pixel in each target cluster on the dark channel image likewise improves its accuracy. Considering both the pixel value and the gradient magnitude of each pixel in the target portrait color map improves the accuracy of the transmittance. Finally, enhancing the target portrait color map according to the transmittance and radiation intensity index of each pixel enhances the target infrared portrait image and improves the image enhancement effect; compared with enhancement by gray-histogram equalization, the method accurately enhances the mouth triangle region and the other regions separately and retains the important information of the image, thereby improving the image enhancement effect.
Drawings
To illustrate the embodiments of the invention and the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for enhancing an infrared image in a three-dimensional collection box according to the present invention;
fig. 2 is a schematic diagram of an infrared image of a subject of the present invention.
The reference numerals include: image 201, first straight line 202, second straight line 203, first hollow point C1, second hollow point C2, and third hollow point C3.
Detailed Description
To further explain the technical means adopted by the invention to achieve its intended purpose and their effects, the specific implementation, structure, features and effects of the technical solution of the invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a method for enhancing an infrared image in a three-dimensional acquisition box, which comprises the following steps:
acquiring a target infrared portrait image in a three-dimensional acquisition box, and performing image conversion on the target infrared portrait image to obtain a target portrait color map;
screening a mouth triangle region out of the target portrait color map according to the target infrared portrait image;
determining a radiation intensity index corresponding to each pixel in the mouth triangle region according to the dark channel image corresponding to the target portrait color map;
clustering the pixels of the target portrait color map outside the mouth triangle region according to the B-channel image corresponding to the target portrait color map to obtain a target cluster set;
determining a radiation intensity index corresponding to each pixel in each target cluster of the target cluster set according to the dark channel image corresponding to the target portrait color map;
determining a transmittance corresponding to each pixel in the target portrait color map according to the pixel value and gradient magnitude corresponding to each pixel in the target portrait color map;
and enhancing the target portrait color map according to the transmittance and radiation intensity index corresponding to each pixel in the target portrait color map to obtain a target enhanced image.
Each step is developed in detail below:
Referring to FIG. 1, a flow of some embodiments of a method for enhancing an infrared image in a three-dimensional acquisition box according to the invention is shown. The method comprises the following steps:
Step S1, acquiring a target infrared portrait image in a three-dimensional acquisition box, and performing image conversion on the target infrared portrait image to obtain a target portrait color map.
In some embodiments, a target infrared portrait image in the three-dimensional acquisition box can be acquired and subjected to image conversion to obtain a target portrait color map.
Here, the three-dimensional acquisition box is a device that acquires the geometric shape and texture information of an object's surface in a non-contact manner and converts a physical object into a digitized three-dimensional model. It can quickly and accurately capture the shape, detail and texture of an object and generate corresponding three-dimensional point-cloud data or a three-dimensional model, which can be used in applications such as design, analysis, simulation and cultural heritage protection. A three-dimensional acquisition box generally consists of a light source, a camera or sensor, an operation unit and a control unit, and is widely used in fields including, but not limited to, industrial design and manufacturing, art and cultural heritage protection, building and construction engineering, virtual reality and game development, and medicine and the biosciences. In medical three-dimensional acquisition, a three-dimensional temperature model is often constructed by combining a three-dimensional image with an infrared image. The target infrared portrait image may be an infrared image of a human face, for example of the face alone or of the region above the chest; the portrait may be upright, with the top of the head parallel to the top edge of the image. The target infrared portrait image may be representative of the temperature of the human body.
The pixel value of a pixel in the target infrared portrait image represents the infrared radiation intensity at that position. The target portrait color map is the RGB (Red, Green, Blue) image corresponding to the target infrared portrait image.
Acquiring the target infrared portrait image in the three-dimensional acquisition box and performing image conversion on it facilitates the subsequent image enhancement.
As an example, this step may include the steps of:
the first step is to obtain the target infrared image in the three-dimensional collection box.
For example, an infrared image of the face of a human body may be acquired by an infrared camera as a target infrared image map. Wherein the target infrared image map is also a gray scale map.
It should be noted that a long-wave infrared thermal imaging camera can be erected in the three-dimensional acquisition box, which can reduce, to a certain extent, the influence of the external environment on the image captured inside; the target infrared portrait map is collected by this long-wave infrared thermal imaging camera. The pixel value corresponding to each pixel point in the target infrared portrait map can represent the infrared radiation intensity at that position.
And secondly, performing image conversion on the target infrared portrait graph to obtain a target portrait color graph.
For example, the target infrared portrait map may be converted into an RGB image by OpenCV's applyColorMap function, and the RGB image is taken as the target portrait color map.
It should be noted that the applyColorMap function can realize pseudo-color mapping of the single-channel infrared image, i.e., the target infrared portrait map. In the mapped pseudo-color image, i.e., the target portrait color map, the closer the color at a position is to a warm tone, the stronger the infrared radiation at that position; the closer it is to a cool tone, the weaker the infrared radiation at that position.
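As an illustration, the pseudo-color mapping idea can be sketched with NumPy alone. This is a minimal stand-in for OpenCV's applyColorMap; the simple warm-to-cold colormap below is an assumption for illustration, not the exact colormap used by the invention:

```python
import numpy as np

def pseudo_color(gray):
    """Map a single-channel infrared image to an RGB pseudo-color image.

    Hot (high-radiation) pixels map toward warm colors (large R),
    cold pixels toward cool colors (large B); an assumed simple colormap.
    """
    g = gray.astype(np.float64)
    t = (g - g.min()) / max(g.max() - g.min(), 1e-12)  # normalize to [0, 1]
    r = (255 * t).astype(np.uint8)                     # warm channel grows with radiation
    b = (255 * (1 - t)).astype(np.uint8)               # cool channel shrinks with radiation
    gch = (255 * (1 - np.abs(2 * t - 1))).astype(np.uint8)  # green peaks mid-range
    return np.stack([r, gch, b], axis=-1)

ir = np.array([[0, 128], [192, 255]], dtype=np.uint8)
rgb = pseudo_color(ir)  # shape (2, 2, 3); the hottest pixel maps to pure red
```

In practice one would call `cv2.applyColorMap` with a built-in colormap instead; the sketch only shows the warm/cool convention the description relies on.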
And S2, screening out a mouth triangle area from the target portrait color image according to the target infrared portrait image.
In some embodiments, the mouth triangle area may be selected from the target portrait color map according to the target infrared portrait map.
It should be noted that, because exhaled breath contains carbon dioxide, the temperature of the mouth and nose region is often higher than normal body-surface temperature, and the high-temperature region at the mouth and nose often appears as a mist-like distribution in the pseudo-color image. Compared with the heat radiated by the body itself, the influence of breathing at the mouth and nose is often large, so the target infrared portrait map often presents low accuracy and fuses poorly with the three-dimensional image. Therefore, screening the mouth triangle area from the target portrait color map based on the target infrared portrait map facilitates subsequent accurate enhancement of that area.
As an example, this step may include the steps of:
and firstly, carrying out edge detection on the target infrared portrait map to obtain an edge contour region set.
Wherein the edge contour regions in the set of edge contour regions may be regions surrounded by closed edges.
For example, a Canny edge detection algorithm can be utilized to perform edge detection on the target infrared portrait map, and each region surrounded by a closed edge is taken as an edge contour region, to obtain the edge contour region set.
And secondly, screening out the edge contour region with the largest area from the edge contour region set, and taking the edge contour region as a reference region.
The reference area may represent a portrait taken in the target infrared portrait map.
And thirdly, screening out the pixel point with the minimum gray value from the reference area as the nose tip pixel point.
For the target infrared portrait map, the nose tip and the auricles have fewer blood vessels and thinner skin, so their temperature is lower than that of other parts and their corresponding gray values in the target infrared portrait map are lower; since the gray value corresponding to the nose tip pixel point is lower than those corresponding to the auricle pixel points, the pixel point with the minimum gray value in the reference area is the nose tip pixel point.
Fourth, a first target straight line and a second target straight line which are perpendicular to each other are constructed through the nose tip pixel points, and two intersection points where the first target straight line and the edge of the reference area intersect are respectively determined as a first auricle pixel point and a second auricle pixel point.
The first target straight line may be the straight line passing through the nose tip pixel point and parallel to the width direction of the target infrared portrait map. The second target straight line may be the straight line passing through the nose tip pixel point and parallel to the height direction of the target infrared portrait map.
And fifthly, screening out a target face contour from the target infrared portrait map by Hough transform.
For example, a hough transform may be used to obtain a fitted ellipse in the edge detection result map and determine the fitted ellipse as the target face contour. The edge detection result image may be an image obtained by performing edge detection on the target infrared image.
And sixthly, screening out chin pixel points from the intersection point of the second target straight line and the target face outline.
For example, the intersection point closest to the bottom of the image may be selected from the intersection points of the second target straight line and the target face contour as the chin pixel point.
And seventhly, carrying out pairwise connection on the first auricle pixel point, the second auricle pixel point and the chin pixel point to obtain a target triangle area.
As shown in fig. 2, image 201 may represent a target infrared image map; the first straight line 202 may represent a first target straight line; the second line 203 may represent a second target line; the intersection of the first line 202 and the second line 203 may represent a nose tip pixel point; the first open point C1 may characterize a first auricle pixel point; the second open point C2 may represent a second auricle pixel point; the third open point C3 may characterize the chin pixel point. And the first hollow point C1, the second hollow point C2 and the third hollow point C3 are connected in pairs, and the triangular area enclosed by the first hollow point C1, the second hollow point C2 and the third hollow point C3 can represent a target triangular area.
And eighth, determining the target triangle area corresponding to the area in the target portrait color drawing as a mouth triangle area.
The mouth triangle area may be the area obtained by performing image conversion on the target triangle area, i.e., the RGB area corresponding to the target triangle area. Mist generated by human breathing is often more obvious in the mouth triangle area.
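The geometric part of steps three through seven can be sketched as follows, assuming the reference (face) region mask is already available from the preceding edge detection. The helper name is illustrative, and using the mask boundary in place of the fitted Hough ellipse for the chin point is a simplifying assumption:

```python
import numpy as np

def mouth_triangle_vertices(gray, face_mask):
    """Sketch of steps 3-7: locate nose tip, auricle, and chin vertices.

    gray: 2-D infrared grayscale image.
    face_mask: boolean mask of the reference (face) region, assumed to
    come from prior edge detection; its lower boundary stands in for the
    fitted face-contour ellipse when picking the chin point.
    """
    ys, xs = np.nonzero(face_mask)
    vals = gray[ys, xs]
    k = int(np.argmin(vals))                # nose tip = minimum gray in the face region
    ny, nx = int(ys[k]), int(xs[k])

    row = np.nonzero(face_mask[ny, :])[0]   # first target line (horizontal through nose tip)
    ear_left, ear_right = (ny, int(row[0])), (ny, int(row[-1]))

    col = np.nonzero(face_mask[:, nx])[0]   # second target line (vertical through nose tip)
    chin = (int(col[-1]), nx)               # lowest face pixel on that line
    return (ny, nx), ear_left, ear_right, chin

gray = np.full((7, 7), 200)
gray[3, 3] = 50                              # coolest spot: the nose tip
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True                        # toy face region
nose, e1, e2, chin = mouth_triangle_vertices(gray, mask)
```

Connecting `e1`, `e2`, and `chin` pairwise then yields the target triangle area of step seven.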
It should be noted that the image enhancement process of the present invention can be regarded as improving the dark channel defogging algorithm to a certain extent, thereby achieving image enhancement and improving the enhancement effect. The dark channel defogging algorithm is a relatively common image enhancement algorithm: it computes the dark channel of the image to obtain a mask image that can represent the density distribution of fog, and then estimates the transmission map from the mask image so as to recover the scene radiance in the fog-free state, thereby realizing image defogging.
For the target portrait color map, the photographed target is a portrait, and the person's breathing often has a certain influence on the clarity of the target infrared portrait map. Because exhaled breath contains carbon dioxide, the temperature of the mouth and nose region is often higher than normal, and the high-temperature region at the mouth and nose often appears mist-like in the pseudo-color image; compared with the heat radiated by the body, the influence of breathing at the mouth and nose is often large, so the target infrared portrait map often presents low accuracy and fuses poorly with the three-dimensional image. Therefore, the radiation intensity index and transmittance corresponding to each pixel point are determined adaptively, the dark channel defogging algorithm is adaptively improved, and adaptive enhancement of each pixel point is realized.
Infrared images are usually captured without relying on a natural light source; since infrared radiation is used in a closed environment, atmospheric light is usually absent, so the adaptive radiation intensity index of each pixel point is used to replace the global atmospheric light value of the existing dark channel defogging algorithm.
When acquiring the dark channel image, the minimum of the three channel values corresponding to each pixel point in the target portrait color map can first be recorded, and a minimum-value filter is convolved over each pixel point to obtain the dark channel image of the target portrait color map; in the fog-free state the values of most pixel points in the dark channel image are 0, which can represent the dark channel prior of the original image. A region of larger values in the dark channel image indicates that fog may exist in that region. If the pixels with the top 0.1% largest values in the dark channel were directly selected to give the radiation intensity value, real infrared radiation light could not be distinguished from local highlight regions, nor could variation of the infrared radiation light be adapted to; that is, in an actual scene, the infrared radiation values at different positions may differ. According to the above analysis and prior knowledge, fog on the human face is often concentrated and obvious in the mouth and nose region compared with other regions, so in order to improve the effect of image defogging enhancement, the mouth triangle area is often required.
And S3, determining a radiation intensity index corresponding to each pixel point in the triangular region of the mouth according to the dark channel image corresponding to the target portrait color drawing.
In some embodiments, the radiation intensity index corresponding to each pixel point in the mouth triangle area may be determined according to the dark channel image corresponding to the target portrait color map.
The dark channel image corresponding to the target portrait color map may be the dark channel image of the target portrait color map, obtained as follows: take the minimum of the three channel values corresponding to each pixel point in the target portrait color map, and perform convolution over each pixel point with a minimum-value filter to obtain the dark channel image of the target portrait color map.
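The dark channel computation described above can be sketched in NumPy as follows; the 3×3 minimum-filter window size and edge-replication padding are assumptions for illustration, since the description does not fix them here:

```python
import numpy as np

def dark_channel(rgb, win=3):
    """Dark channel of an RGB image: per-pixel channel minimum,
    then a win x win minimum filter (edge pixels use replicated padding)."""
    m = rgb.min(axis=2)                    # channel-wise minimum at each pixel
    h, w = m.shape
    pad = win // 2
    padded = np.pad(m, pad, mode="edge")
    out = np.empty_like(m)
    for i in range(h):                     # sliding minimum filter
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].min()
    return out

rgb = np.full((3, 3, 3), 100, dtype=np.uint8)
rgb[1, 1] = [10, 200, 200]                 # one pixel with a low channel minimum
dc = dark_channel(rgb)                     # the low value spreads over each 3x3 window
```

In a real pipeline the inner loop would be replaced by `cv2.erode` or `scipy.ndimage.minimum_filter`; the loop form just makes the operation explicit.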
It should be noted that, based on the dark channel image corresponding to the target portrait color drawing, the accuracy of determining the radiation intensity index corresponding to each pixel point in the mouth triangle area can be improved.
As an example, this step may include the steps of:
first, the area in the dark channel image corresponding to the mouth triangle area is determined as the mouth dark channel area.
The mouth dark channel area may be the area in the dark channel image corresponding to the mouth triangle area.
And secondly, screening out, from the mouth dark channel area, the preset duty ratio of pixel points having the largest pixel values, as first candidate pixel points, to obtain a first candidate pixel point set.
The preset duty ratio may be a proportion set in advance. For example, the preset duty ratio may be 0.1%.
For example, a pixel point with the largest pixel value of 0.1% can be screened out from the dark channel region of the mouth and used as a first candidate pixel point, so as to obtain a first candidate pixel point set.
And thirdly, determining each first candidate pixel point in the first candidate pixel point set to be a second candidate pixel point corresponding to the pixel point in the mouth triangle area, and obtaining a second candidate pixel point set.
The second candidate pixel point may be a pixel point obtained by performing image conversion on the first candidate pixel point. The second candidate pixel point is namely the RGB pixel point corresponding to the first candidate pixel point.
The fourth step of determining, according to the second candidate pixel point set, a radiation intensity index corresponding to each pixel point in the mouth triangle area may include the following sub-steps:
and a first sub-step of determining the average value of the R channel values corresponding to all the second candidate pixel points in the second candidate pixel point set as the R value represented by the mouth.
Wherein the R-channel value may be an R-value comprised by the RGB-value.
And a second sub-step of determining the average value of the G channel values corresponding to all the second candidate pixels in the second candidate pixel set as the mouth representative G value.
The G channel value may be a G value included in the RGB value.
And a third sub-step of determining the average value of the B channel values corresponding to all the second candidate pixels in the second candidate pixel set as the mouth representative B value.
The B-channel value may be a B value included in the RGB value.
And a fourth sub-step of combining the mouth representing R value, the mouth representing G value and the mouth representing B value into a mouth representing radiation index.
For example, the mouth representative radiation index may be:

$$A_m = \left(\bar{R}_m, \bar{G}_m, \bar{B}_m\right)$$

wherein $A_m$ is the mouth representative radiation index. $\bar{R}_m$ is the mouth representative R value, i.e., the average of the R channel values corresponding to all second candidate pixel points in the second candidate pixel point set. $\bar{G}_m$ is the mouth representative G value, i.e., the average of the G channel values corresponding to all second candidate pixel points in the second candidate pixel point set. $\bar{B}_m$ is the mouth representative B value, i.e., the average of the B channel values corresponding to all second candidate pixel points in the second candidate pixel point set.
It should be noted that the mouth representative radiation index may serve as the adaptive radiation intensity index of each pixel point in the mouth triangle area. That is, the radiation intensity index corresponding to each pixel point in the mouth triangle area may be $A_m$.
And a fifth substep, determining the radiation index represented by the mouth as the radiation intensity index corresponding to each pixel point in the triangular region of the mouth.
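A sketch of steps two through five: select the top fraction of pixels by dark-channel value within a masked region and average their RGB values to form the representative radiation index. Function and parameter names are illustrative; the same selection applies per target cluster in step S5:

```python
import numpy as np

def region_radiation_index(rgb, dark, mask, ratio=0.001):
    """Representative radiation index (R-mean, G-mean, B-mean) of a region.

    rgb:  H x W x 3 color image; dark: H x W dark channel image;
    mask: boolean region mask (e.g. the mouth triangle area);
    ratio: preset duty ratio (0.001 = 0.1%, per the description).
    Averages the RGB values of the pixels whose dark-channel values fall
    in the top `ratio` fraction of the region.
    """
    ys, xs = np.nonzero(mask)
    n = max(1, int(round(len(ys) * ratio)))        # keep at least one pixel
    order = np.argsort(dark[ys, xs])[::-1][:n]     # largest dark-channel values
    sel = rgb[ys[order], xs[order]].astype(np.float64)
    return sel.mean(axis=0)                        # (R-mean, G-mean, B-mean)

rgb = np.zeros((2, 5, 3), dtype=np.uint8)
rgb[0, 0] = [100, 50, 25]
dark = np.arange(10)[::-1].reshape(2, 5)           # maximum dark value at (0, 0)
mask = np.ones((2, 5), dtype=bool)
idx = region_radiation_index(rgb, dark, mask, ratio=0.1)
```

With `ratio=0.1` on this 10-pixel toy region, exactly the single brightest dark-channel pixel is selected, so the index equals that pixel's RGB value.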
And S4, clustering the pixel points except the mouth triangle area in the target portrait color image according to the B channel image corresponding to the target portrait color image to obtain a target cluster set.
In some embodiments, the pixel points in the target portrait color map except the mouth triangle area may be clustered according to the B-channel image corresponding to the target portrait color map, so as to obtain a target cluster set.
The B-channel image corresponding to the target portrait color map may be an image in which each pixel point of the target portrait color map retains only its B channel value.
It should be noted that, based on the B-channel image corresponding to the target portrait color image, the pixel points in the target portrait color image except the mouth triangle area are clustered, so that the accuracy of determining the target clustering cluster can be improved.
As an example, according to the B-channel values corresponding to the pixels except the mouth triangle area in the target portrait color drawing, clustering the pixels except the mouth triangle area in the target portrait color drawing, and determining each obtained cluster as a target cluster to obtain a target cluster set. The B-channel value corresponding to the pixel point may be a B-channel value corresponding to the pixel point in the B-channel image.
For example, according to the B channel value corresponding to each pixel point except the mouth triangle area in the target portrait color drawing, clustering is performed on each pixel point except the mouth triangle area in the target portrait color drawing through a K-means clustering algorithm, and each obtained cluster is determined as a target cluster, so as to obtain a target cluster set.
It should be noted that image segmentation is performed on the pixel points outside the mouth triangle area in the target portrait color map based on a clustering algorithm. Prior knowledge shows that when human breathing produces fog, the carbon dioxide it contains makes the fog temperature slightly higher than the body-surface temperature, and where the temperature is higher, the R channel value of the pixel points in the corresponding area of the target portrait color map is larger than the other two channels. Because fog is diffusive, it is not always concentrated in the mouth triangle area; when it diffuses to other areas, the temperature of the diffused area is often higher than normal, so the adaptive radiation intensity index cannot be acquired there accurately. However, since the B channel value is less affected by high temperature, it can preliminarily reflect the fog-free appearance of the target portrait, and the areas outside the mouth triangle area can therefore be clustered by B channel value. Accordingly, the target portrait color map is processed into a B channel image, i.e., each pixel point retains only its B channel value; the remaining pixel points outside the mouth triangle area are clustered with the K-means clustering algorithm to realize image segmentation, with the B channel image corresponding to the target portrait color map as the input of the clustering algorithm and the clustering result as the output.
And S5, determining a radiation intensity index corresponding to each pixel point in each target cluster in the target cluster set according to the dark channel image corresponding to the target portrait color map.
In some embodiments, the radiation intensity index corresponding to each pixel point in each target cluster in the target cluster set may be determined according to the dark channel image corresponding to the target portrait color map.
It should be noted that, based on the dark channel image corresponding to the target portrait color map, the accuracy of determining the radiation intensity index corresponding to each pixel point in each target cluster can be improved.
As an example, this step may include the steps of:
and a first step of determining the area corresponding to the target cluster in the dark channel image as a target dark channel area corresponding to the target cluster.
The target dark channel area may be the area in the dark channel image corresponding to the target cluster.
And a second step of screening out, from the target dark channel area, the preset duty ratio of pixel points having the largest pixel values, as third candidate pixel points, to obtain a third candidate pixel point set corresponding to the target cluster.
For example, for each target cluster, a pixel point with the largest pixel value of 0.1% may be screened from the target dark channel area corresponding to the target cluster, and used as a third candidate pixel point, to obtain a third candidate pixel point set corresponding to the target cluster.
And thirdly, determining that each third candidate pixel point in the third candidate pixel point set corresponds to a pixel point in the target cluster as a fourth candidate pixel point, and obtaining a fourth candidate pixel point set corresponding to the target cluster.
The fourth candidate pixel point may be a pixel point obtained by performing image conversion on the third candidate pixel point. The fourth candidate pixel point is namely the RGB pixel point corresponding to the third candidate pixel point.
And step four, determining the average value of the R channel values corresponding to all the fourth candidate pixel points in the fourth candidate pixel point set as the target representative R value corresponding to the target cluster.
And fifthly, determining the average value of the G channel values corresponding to all the fourth candidate pixel points in the fourth candidate pixel point set as the target representative G value corresponding to the target cluster.
And sixthly, determining the average value of the B channel values corresponding to all the fourth candidate pixel points in the fourth candidate pixel point set as the target representative B value corresponding to the target cluster.
Seventh, combining the target representative R value, the target representative G value and the target representative B value into a target representative radiation index corresponding to the target cluster.
For example, the target representative radiation index corresponding to the target cluster may be:

$$A_i = \left(\bar{R}_i, \bar{G}_i, \bar{B}_i\right)$$

wherein $A_i$ is the target representative radiation index corresponding to the $i$-th target cluster in the target cluster set. $\bar{R}_i$ is the target representative R value corresponding to the $i$-th target cluster, i.e., the average of the R channel values corresponding to all fourth candidate pixel points in the fourth candidate pixel point set corresponding to that target cluster. $\bar{G}_i$ is the target representative G value corresponding to the $i$-th target cluster, i.e., the average of the G channel values corresponding to all fourth candidate pixel points in the fourth candidate pixel point set corresponding to that target cluster. $\bar{B}_i$ is the target representative B value corresponding to the $i$-th target cluster, i.e., the average of the B channel values corresponding to all fourth candidate pixel points in the fourth candidate pixel point set corresponding to that target cluster. $i$ is the serial number of the target cluster in the target cluster set.
It should be noted that the target representative radiation index corresponding to a target cluster may serve as the radiation intensity index of each pixel point in that target cluster. That is, the radiation intensity index corresponding to each pixel point in the $i$-th target cluster in the target cluster set may be $A_i$.
And eighth, determining the target representative radiation index as a radiation intensity index corresponding to each pixel point in the target cluster.
And S6, determining the transmittance corresponding to each pixel point in the target portrait color map according to the pixel value and the gradient amplitude corresponding to each pixel point in the target portrait color map.
In some embodiments, the transmittance corresponding to each pixel point in the target portrait color map may be determined according to the pixel value and the gradient amplitude corresponding to each pixel point in the target portrait color map.
The gradient amplitude is also called gradient value and gradient size. And a Sobel operator can be used for acquiring the gradient amplitude corresponding to the pixel point.
It should be noted that, the accuracy of determining the transmissivity corresponding to each pixel point in the target portrait color drawing can be improved by comprehensively considering the pixel value and the gradient amplitude corresponding to each pixel point in the target portrait color drawing.
As an example, this step may include the steps of:
the first step, determining the target depth of field corresponding to each pixel point in the target portrait color map according to the pixel value and the gradient amplitude value corresponding to each pixel point in the target portrait color map may include the following sub-steps:
And a first sub-step of determining a gradient representative value corresponding to each pixel point in the target portrait color map according to the gradient amplitude corresponding to each pixel point in a preset window corresponding to each pixel point in the target portrait color map.
The gradient amplitude corresponding to a pixel point in the preset window may be positively correlated with the gradient representative value. The preset window may be a window of preset size; for example, the preset window may be a 5×5 window.
A second sub-step of analyzing and processing the information quantity of the preset window corresponding to each pixel point in the target portrait color drawing to obtain the high-frequency information quantity corresponding to each pixel point in the target portrait color drawing, which comprises the following steps:
firstly, discrete cosine transform is carried out on a preset window corresponding to each pixel point in the target portrait color drawing, and DCT transform results corresponding to each pixel point in the target portrait color drawing are obtained.
For example, discrete cosine transform can be performed on a preset window corresponding to each pixel point in the target portrait color drawing, so as to obtain a DCT result corresponding to the preset window, and the DCT result is used as the DCT result corresponding to the pixel point.
And then, determining the sum of squares of all high-frequency coefficients in the DCT conversion result corresponding to each pixel point in the target portrait color drawing as the high-frequency information quantity corresponding to each pixel point in the target portrait color drawing.
For example, for each pixel point in the target portrait color map, the sum of squares of all high-frequency coefficients in the DCT transform result corresponding to that pixel point can be determined as its high-frequency information amount. The high-frequency coefficients in the DCT transform result are often denoted AC, i.e., the coefficients located in the high-frequency region of the DCT transform result.
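The high-frequency information amount can be sketched with a small matrix-form DCT-II: all coefficients except the DC term (top-left) are taken as AC coefficients, squared, and summed, as described above. The orthonormal matrix construction is a standard form, shown here so the example is self-contained:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block, via the DCT matrix C."""
    n = block.shape[0]
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)            # DC row of the orthonormal DCT matrix
    return C @ block @ C.T

def high_freq_energy(window):
    """High-frequency information amount of a preset window:
    sum of squared DCT coefficients excluding the DC term."""
    d = dct2(window.astype(np.float64))
    d[0, 0] = 0.0                          # drop the DC coefficient; keep all AC
    return float((d ** 2).sum())

flat = np.full((5, 5), 7.0)               # constant window: no high-frequency content
ramp = np.outer(np.arange(5.0), np.ones(5))  # varying window: nonzero AC energy
```

A flat window yields (numerically) zero high-frequency energy, while any spatial variation produces a positive value, matching the intended "more high-frequency content implies a clearer, nearer region" interpretation.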
And a third sub-step of determining a first depth index corresponding to each pixel point in the target portrait color map according to the gradient representative value and the high-frequency information quantity corresponding to each pixel point in the target portrait color map.
Wherein, the gradient representative value and the high-frequency information quantity can be positively correlated with the first depth index.
And a fourth sub-step of determining an average value of R channel values corresponding to all pixel points in a preset window corresponding to each pixel point in the target portrait color drawing as a window representing R value corresponding to each pixel point in the target portrait color drawing.
And a fifth substep, determining the average value of the B channel values corresponding to all the pixels in the preset window corresponding to each pixel in the target portrait color drawing as the window representing the B value corresponding to each pixel in the target portrait color drawing.
And a sixth substep, normalizing the difference value between the value corresponding to the window representative R value and the value corresponding to the window representative B value corresponding to each pixel point in the target portrait color map to obtain a second depth index corresponding to each pixel point in the target portrait color map.
And a seventh sub-step of determining a target depth of field corresponding to each pixel point in the target portrait color map according to the first depth of field index and the second depth of field index corresponding to each pixel point in the target portrait color map.
The first depth of field indicator and the second depth of field indicator may both be inversely related to the target depth of field.
For example, the formula for determining the target depth of field corresponding to each pixel point in the target portrait color map may be:

$$S_u = 1 - \frac{\operatorname{norm}\!\left(F_u\right) + \operatorname{norm}\!\left(\bar{R}_u - \bar{B}_u\right)}{2}, \qquad F_u = \bar{g}_u \times H_u, \qquad \bar{g}_u = \frac{1}{N_u}\sum_{p=1}^{N_u} g_{u,p}$$

wherein $S_u$ is the target depth of field corresponding to the $u$-th pixel point in the target portrait color map. $F_u$ is the first depth of field index corresponding to the $u$-th pixel point. $g_{u,p}$ is the gradient amplitude corresponding to the $p$-th pixel point in the preset window corresponding to the $u$-th pixel point. $\bar{g}_u$ is the gradient representative value corresponding to the $u$-th pixel point. $N_u$ is the number of pixel points in the preset window corresponding to the $u$-th pixel point. $H_u$ is the high-frequency information amount corresponding to the $u$-th pixel point. $g_{u,p}$ is positively correlated with $\bar{g}_u$, and $\bar{g}_u$ and $H_u$ are both positively correlated with $F_u$. $\operatorname{norm}(\bar{R}_u - \bar{B}_u)$ is the second depth of field index corresponding to the $u$-th pixel point. $\bar{R}_u$ is the value corresponding to the window representative R value of the $u$-th pixel point, i.e., the average of the R channel values corresponding to all pixel points in the preset window corresponding to the $u$-th pixel point. $\bar{B}_u$ is the value corresponding to the window representative B value of the $u$-th pixel point, i.e., the average of the B channel values corresponding to all pixel points in the preset window corresponding to the $u$-th pixel point. $\operatorname{norm}$ is a normalization function, whose output range may be $[0,1]$. $\operatorname{norm}(F_u)$ and $\operatorname{norm}(\bar{R}_u - \bar{B}_u)$ are both negatively correlated with $S_u$.
It should be noted that when an infrared camera captures a target, far and near targets present different infrared radiation as the radiation attenuates with distance, and the thermal contrast of distant objects is often low. Therefore, the target depth of field corresponding to a pixel point can be used to judge whether it belongs to a near-view or a far-view pixel point. The larger the gradient representative value, the clearer the details in the preset window corresponding to the pixel point, the more likely it is a near-view region, and the smaller the corresponding depth of field. The larger the high-frequency information amount, the more high-frequency content the preset window contains, the more likely it is a near-view region, and the smaller the corresponding depth of field. Since higher temperature tends to present warmer tones in the image, the corresponding R channel value tends to be larger than the other channel values; lower temperature tends to present cooler tones, with the corresponding B channel value larger than the other channel values. Therefore the difference between the window representative R value and the window representative B value can represent the thermal contrast in the preset window corresponding to the pixel point, and this thermal contrast is often larger in near-view areas than in far-view areas; the larger it is, the more likely the area where the pixel point is located is a near-view area, and the smaller the corresponding depth of field tends to be. Thus, the larger the target depth of field, the more likely the pixel point is a far-view pixel point; conversely, the more likely it is a near-view pixel point.
Because the images in the present invention are captured in an enclosed space, the foreground object is close to the camera; therefore, the depth values can be processed with a normalization function.
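The depth-of-field construction described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the patent only specifies the directions of correlation, so the product forms combining the gradient representative value, the high-frequency information amount, and the R-B thermal contrast are assumptions, as are the function names, the 8×8 window size, and the min-max normalization.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block (used below for the
    high-frequency information amount)."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def _norm(x, eps=1e-12):
    """Min-max normalization to [0, 1] (the norm(.) function in the text)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + eps)

def target_depth_of_field(rgb, win=8):
    """One depth-of-field value per win x win window of an RGB image in [0, 1].

    Per window:
      g  - gradient representative value (mean gradient amplitude),
      H  - high-frequency information amount (sum of squared DCT
           coefficients outside the low-frequency corner),
      D1 - first depth index, positively correlated with g and H,
      D2 - second depth index, mean(R) - mean(B) thermal contrast,
      d  - target depth of field, negatively correlated with D1 and D2.
    """
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)

    rows, cols = gray.shape[0] // win, gray.shape[1] // win
    d1 = np.zeros((rows, cols))
    d2 = np.zeros((rows, cols))
    hi_mask = np.ones((win, win), dtype=bool)
    hi_mask[:2, :2] = False  # treat the 2x2 DCT corner as low frequency

    for i in range(rows):
        for j in range(cols):
            r0, c0 = i * win, j * win
            block = gray[r0:r0 + win, c0:c0 + win]
            g = grad[r0:r0 + win, c0:c0 + win].mean()
            H = np.square(dct2(block)[hi_mask]).sum()
            d1[i, j] = g * H  # assumed product form for the first depth index
            d2[i, j] = (rgb[r0:r0 + win, c0:c0 + win, 0].mean()
                        - rgb[r0:r0 + win, c0:c0 + win, 2].mean())

    # Larger D1/D2 -> nearer, warmer, sharper scene -> smaller depth of field.
    return 1.0 - _norm(_norm(d1) * _norm(d2))
```

Detailed, warm windows map toward depth 0 (near view); flat, cool windows map toward depth 1 (far view), matching the stated correlations.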
And secondly, determining the product of a preset medium scattering coefficient and a target depth of field corresponding to each pixel point in the target portrait color drawing as an initial transmission index corresponding to each pixel point in the target portrait color drawing.
And thirdly, determining the transmissivity corresponding to each pixel point in the target portrait color drawing according to the initial transmissivity index corresponding to each pixel point in the target portrait color drawing.
Wherein the initial transmission index may be inversely related to the transmittance.
For example, the formula for determining the transmittance corresponding to each pixel point in the target portrait color map may be:

t_u = exp(-β × d_u)

wherein, t_u is the transmittance corresponding to the u-th pixel point in the target portrait color map. β is a preset medium scattering coefficient; for example, β may be taken as 0.5. d_u is the target depth of field corresponding to the u-th pixel point. β × d_u is the initial transmission index corresponding to the u-th pixel point. exp(·) is an exponential function with the natural constant as its base, namely the natural constant raised to the power of its argument. t_u is negatively correlated with β × d_u. u is the serial number of the pixel point in the target portrait color map.
The larger the target depth of field corresponding to the u-th pixel point, the more likely the pixel point is a far-view pixel point; conversely, the more likely it is a near-view pixel point. The transmittance characterizes the degree of attenuation of light passing through the medium, namely the degree of attenuation of infrared radiation passing through fog.
And S7, enhancing the target portrait color drawing according to the transmittance and the radiation intensity indexes corresponding to each pixel point in the target portrait color drawing, and obtaining a target enhanced image.
In some embodiments, the target portrait color image may be enhanced according to the transmittance and the radiation intensity index corresponding to each pixel point in the target portrait color image, so as to obtain a target enhanced image.
It should be noted that the transmittance and the radiation intensity index corresponding to each pixel point in the target portrait color map are comprehensively considered, thereby realizing the enhancement of the target portrait color map and, in turn, of the target infrared portrait map, and improving the image enhancement effect.
As an example, this step may include the steps of:
the first step, according to the transmissivity and radiation intensity index corresponding to each pixel point in the target portrait color map, determining a formula corresponding to the target enhanced pixel value corresponding to each pixel point in the target portrait color map as follows:
wherein,is the first one in the target portrait color pictureuTarget enhanced pixel values corresponding to the individual pixel points. / >Is the first one in the target portrait color pictureuThe pixel value corresponding to each pixel point is the first pixel value in the target portrait color drawinguEach pixel point performs a pixel value before image enhancement. />Is the first one in the target portrait color pictureuAnd the radiation intensity index corresponding to each pixel point. />Is the first one in the target portrait color pictureuPersonal imageThe transmittance corresponding to the pixel point.tIs a preset transmission threshold, e.g.,tmay be 0.1./>Is->Andtis the maximum value of (a). />Is a function of taking the maximum value.uIs the serial number of the pixel point in the target portrait color drawing.
And secondly, updating the pixel value corresponding to each pixel point in the target portrait color drawing to the corresponding target enhanced pixel value to obtain a target enhanced image.
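The two steps above amount to the standard dark-channel-prior recovery, with the per-pixel radiation intensity index playing the role of atmospheric light. The sketch below is a minimal illustration under that reading; the names `enhance`, `I`, `A` are not from the patent.

```python
import numpy as np

def enhance(I, A, t, t0=0.1):
    """J = (I - A) / max(t, t0) + A, applied per pixel.

    I  - H x W x 3 input (the target portrait color map),
    A  - radiation intensity index, broadcastable to I (per pixel or per region),
    t  - H x W transmittance map,
    t0 - preset transmission threshold preventing division by a tiny t.
    """
    t = np.maximum(np.asarray(t, dtype=float), t0)
    if t.ndim == np.ndim(I) - 1:  # lift an H x W map over the channel axis
        t = t[..., None]
    return (I - A) / t + A
```

Where t = 1 the image is unchanged (J = I); where t is small, the deviation from the radiation intensity index is amplified, restoring contrast lost to scattering.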
In summary, image enhancement is realized by improving the dark channel defogging enhancement algorithm: a mouth triangle region is constructed according to the facial features in the target portrait color map, and an adaptive radiation intensity index is constructed to adapt to the different self-infrared radiation at different region positions; meanwhile, the definition and the thermal contrast are determined according to the preset window corresponding to each pixel point, from which the target depth of field and then the transmittance are obtained. This completes the adaptive improvement of the dark channel algorithm for the target infrared portrait image scene, making the fusion of the three-dimensional temperature model in medical three-dimensional image construction more accurate. The above embodiments are only for illustrating the technical solution of the present invention and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments can still be modified, or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention and are intended to be included within the scope of the invention.

Claims (10)

1. The method for enhancing the infrared image in the three-dimensional collection box is characterized by comprising the following steps of:
acquiring a target infrared image in a three-dimensional acquisition box, and performing image conversion on the target infrared image to obtain a target portrait color image;
according to the target infrared portrait drawing, a mouth triangle area is screened out from the target portrait color drawing;
determining a radiation intensity index corresponding to each pixel point in the mouth triangular region according to the dark channel image corresponding to the target portrait color drawing;
clustering pixel points except the mouth triangle area in the target portrait color image according to the B channel image corresponding to the target portrait color image to obtain a target cluster set;
determining a radiation intensity index corresponding to each pixel point in each target cluster in the target cluster set according to the dark channel image corresponding to the target portrait color map;
determining the transmissivity corresponding to each pixel point in the target portrait color drawing according to the pixel value and the gradient amplitude corresponding to each pixel point in the target portrait color drawing;
and enhancing the target portrait color image according to the transmittance and the radiation intensity indexes corresponding to each pixel point in the target portrait color image, so as to obtain a target enhanced image.
2. The method for enhancing an infrared image in a three-dimensional collection box according to claim 1, wherein said screening a mouth triangle area from said target portrait color map according to said target infrared portrait map comprises:
performing edge detection on the target infrared image graph to obtain an edge contour area set;
screening out an edge contour region with the largest area from the edge contour region set, and taking the edge contour region as a reference region;
screening out the pixel point with the minimum gray value from the reference area as a nose tip pixel point;
constructing a first target straight line and a second target straight line which are perpendicular to each other through the nose tip pixel point, and determining two intersection points of the first target straight line and the edge of the reference area as a first auricle pixel point and a second auricle pixel point respectively;
screening out a target face contour from the target infrared portrait image through Hough transform;
screening chin pixel points from the intersection points of the second target straight line and the target face outline;
carrying out pairwise connection on the first auricle pixel point, the second auricle pixel point and the chin pixel point to obtain a target triangle area;
And determining the target triangle area corresponding to the area in the target portrait color drawing as a mouth triangle area.
3. The method for enhancing an infrared image in a three-dimensional collection box according to claim 1, wherein the determining a radiation intensity index corresponding to each pixel point in the mouth triangle according to the dark channel image corresponding to the target portrait color map comprises:
determining the area corresponding to the mouth triangle area in the dark channel image as a mouth dark channel area;
screening out, from the mouth dark channel area, a preset proportion of the pixel points with the largest pixel values as first candidate pixel points to obtain a first candidate pixel point set;
determining the pixel point in the mouth triangle area corresponding to each first candidate pixel point in the first candidate pixel point set as a second candidate pixel point to obtain a second candidate pixel point set;
and determining a radiation intensity index corresponding to each pixel point in the mouth triangle area according to the second candidate pixel point set.
4. A method for enhancing an infrared image in a three-dimensional collection box according to claim 3, wherein said determining, according to said second set of candidate pixels, a radiation intensity index corresponding to each pixel in said mouth triangle comprises:
Determining an average value of R channel values corresponding to all the second candidate pixel points in the second candidate pixel point set as a mouth representative R value;
determining the average value of the G channel values corresponding to all the second candidate pixel points in the second candidate pixel point set as a mouth representative G value;
determining the average value of the B channel values corresponding to all the second candidate pixel points in the second candidate pixel point set as a mouth representative B value;
combining the mouth representing R value, the mouth representing G value and the mouth representing B value into a mouth representing radiation index;
and determining the radiation index represented by the mouth as the radiation intensity index corresponding to each pixel point in the triangular region of the mouth.
5. The method for enhancing an infrared image in a three-dimensional collection box according to claim 1, wherein the clustering the pixels except the mouth triangle area in the target portrait color map according to the B-channel image corresponding to the target portrait color map to obtain a target cluster set includes:
clustering all the pixel points except the mouth triangle area in the target portrait color image according to the B channel values corresponding to all the pixel points except the mouth triangle area in the target portrait color image, and determining each obtained cluster as a target cluster to obtain a target cluster set, wherein the B channel value corresponding to the pixel point is the B channel value corresponding to the pixel point in the B channel image.
6. The method for enhancing an infrared image in a three-dimensional collection box according to claim 1, wherein the determining a radiation intensity index corresponding to each pixel point in each target cluster in the target cluster set according to the dark channel image corresponding to the target portrait color map includes:
determining the area corresponding to the target cluster in the dark channel image as a target dark channel area corresponding to the target cluster;
screening out, from the target dark channel area, a preset proportion of the pixel points with the largest pixel values as third candidate pixel points to obtain a third candidate pixel point set corresponding to the target cluster;
determining the pixel point in the target cluster corresponding to each third candidate pixel point in the third candidate pixel point set as a fourth candidate pixel point to obtain a fourth candidate pixel point set corresponding to the target cluster;
determining an average value of R channel values corresponding to all fourth candidate pixel points in the fourth candidate pixel point set as a target representative R value corresponding to the target cluster;
determining an average value of the G channel values corresponding to all the fourth candidate pixel points in the fourth candidate pixel point set as a target representative G value corresponding to the target cluster;
Determining an average value of B channel values corresponding to all fourth candidate pixel points in the fourth candidate pixel point set as a target representative B value corresponding to the target cluster;
combining the target representative R value, the target representative G value and the target representative B value into a target representative radiation index corresponding to the target cluster;
and determining the target representative radiation index as a radiation intensity index corresponding to each pixel point in the target cluster.
7. The method for enhancing an infrared image in a three-dimensional collection box according to claim 1, wherein determining the transmittance of each pixel point in the target portrait color map according to the pixel value and the gradient amplitude of each pixel point in the target portrait color map comprises:
determining a target depth of field corresponding to each pixel point in the target portrait color map according to the pixel value and the gradient amplitude corresponding to each pixel point in the target portrait color map;
determining the product of a preset medium scattering coefficient and a target depth of field corresponding to each pixel point in the target portrait color map as an initial transmission index corresponding to each pixel point in the target portrait color map;
And determining the transmissivity corresponding to each pixel point in the target portrait color drawing according to the initial transmissivity index corresponding to each pixel point in the target portrait color drawing, wherein the initial transmissivity index and the transmissivity are in negative correlation.
8. The method for enhancing an infrared image in a three-dimensional collection box according to claim 7, wherein determining a target depth of field corresponding to each pixel point in the target portrait color map according to a pixel value and a gradient amplitude corresponding to each pixel point in the target portrait color map comprises:
determining a gradient representative value corresponding to each pixel point in the target portrait color map according to the gradient amplitude corresponding to each pixel point in a preset window corresponding to each pixel point in the target portrait color map, wherein the gradient amplitude corresponding to the pixel point in the preset window and the gradient representative value are positively correlated;
carrying out information quantity analysis processing on a preset window corresponding to each pixel point in the target portrait color drawing to obtain high-frequency information quantity corresponding to each pixel point in the target portrait color drawing;
determining a first depth of field index corresponding to each pixel point in the target portrait color map according to a gradient representative value and a high-frequency information quantity corresponding to each pixel point in the target portrait color map, wherein the gradient representative value and the high-frequency information quantity are positively correlated with the first depth of field index;
Determining an average value of R channel values corresponding to all pixel points in a preset window corresponding to each pixel point in the target portrait color drawing as a window representing R value corresponding to each pixel point in the target portrait color drawing;
determining an average value of B channel values corresponding to all pixel points in a preset window corresponding to each pixel point in the target portrait color drawing as a window representative B value corresponding to each pixel point in the target portrait color drawing;
normalizing the difference value of the value corresponding to the window representative R value and the value corresponding to the window representative B value corresponding to each pixel point in the target portrait color drawing to obtain a second depth index corresponding to each pixel point in the target portrait color drawing;
and determining the target depth of field corresponding to each pixel point in the target portrait color map according to a first depth of field index and a second depth of field index corresponding to each pixel point in the target portrait color map, wherein the first depth of field index and the second depth of field index are in negative correlation with the target depth of field.
9. The method for enhancing an infrared image in a three-dimensional collection box according to claim 8, wherein the performing information quantity analysis processing on a preset window corresponding to each pixel point in the target portrait color drawing to obtain a high-frequency information quantity corresponding to each pixel point in the target portrait color drawing comprises:
Performing discrete cosine transform on a preset window corresponding to each pixel point in the target portrait color drawing to obtain a DCT (discrete cosine transform) result corresponding to each pixel point in the target portrait color drawing;
and determining the sum of squares of all high-frequency coefficients in the DCT conversion result corresponding to each pixel point in the target portrait color drawing as the high-frequency information quantity corresponding to each pixel point in the target portrait color drawing.
10. The method for enhancing an infrared image in a three-dimensional collection box according to claim 1, wherein the enhancing the target portrait color drawing according to the transmittance and the radiation intensity index corresponding to each pixel point in the target portrait color drawing to obtain a target enhanced image comprises:
according to the transmittance and radiation intensity index corresponding to each pixel point in the target portrait color map, determining a formula corresponding to a target enhanced pixel value corresponding to each pixel point in the target portrait color map as follows:

J_u = (I_u - A_u) / max(t_u, t) + A_u

wherein, J_u is the target enhanced pixel value corresponding to the u-th pixel point in the target portrait color map; I_u is the pixel value corresponding to the u-th pixel point; A_u is the radiation intensity index corresponding to the u-th pixel point; t_u is the transmittance corresponding to the u-th pixel point; t is a preset transmission threshold; max(t_u, t) is the maximum of t_u and t; max(·) is a maximum value function; u is the serial number of the pixel point in the target portrait color map; and updating the pixel value corresponding to each pixel point in the target portrait color map to the corresponding target enhanced pixel value to obtain a target enhanced image.
CN202311302715.8A 2023-10-10 2023-10-10 Method for enhancing infrared image in three-dimensional acquisition box Active CN117036207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311302715.8A CN117036207B (en) 2023-10-10 2023-10-10 Method for enhancing infrared image in three-dimensional acquisition box

Publications (2)

Publication Number Publication Date
CN117036207A true CN117036207A (en) 2023-11-10
CN117036207B CN117036207B (en) 2024-01-19

Family

ID=88639490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311302715.8A Active CN117036207B (en) 2023-10-10 2023-10-10 Method for enhancing infrared image in three-dimensional acquisition box

Country Status (1)

Country Link
CN (1) CN117036207B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140072216A1 (en) * 2012-09-10 2014-03-13 Google Inc. Image De-Hazing by Solving Transmission Value
CN105469372A (en) * 2015-12-30 2016-04-06 广西师范大学 Mean filtering-based fog-degraded image sharp processing method
GB2585754A (en) * 2019-05-14 2021-01-20 Univ Beijing Science & Technology Underwater image enhancement method and enhancement device
CN113487509A (en) * 2021-07-14 2021-10-08 杭州电子科技大学 Remote sensing image fog removing method based on pixel clustering and transmissivity fusion
CN116456200A (en) * 2023-04-21 2023-07-18 安徽大学 Defogging system and method for infrared camera based on polarization imaging
US20230281955A1 (en) * 2022-03-07 2023-09-07 Quidient, Llc Systems and methods for generalized scene reconstruction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨俊霞: "基于特征保持的降质图像增强方法研究", 中国优秀硕士学位论文全文数据库 信息科技辑, no. 1, pages 138 - 2905 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant