CN112907571A - Target judgment method based on multispectral image fusion recognition - Google Patents

Target judgment method based on multispectral image fusion recognition

Info

Publication number
CN112907571A
CN112907571A (application CN202110316711.XA)
Authority
CN
China
Prior art keywords
image
visible light
target
infrared
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110316711.XA
Other languages
Chinese (zh)
Inventor
陆胜美
耿可可
卢山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Drum Tower Hospital
Original Assignee
Nanjing Drum Tower Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Drum Tower Hospital
Priority to CN202110316711.XA
Publication of CN112907571A
Legal status: Pending

Classifications

    • G06T 7/0012 — Biomedical image inspection
    • G06F 18/213 — Feature extraction, e.g. by transforming the feature space
    • G06F 18/24 — Classification techniques
    • G06F 18/253 — Fusion techniques of extracted features
    • G06N 3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 — Neural networks; learning methods
    • G06T 7/33 — Image registration using feature-based methods
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/40 — Extraction of image or video features
    • G06T 2207/10048 — Infrared image
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30088 — Skin; Dermal
    • G06V 2201/07 — Target detection


Abstract

The invention discloses a target judgment method based on multispectral image fusion recognition, comprising the following steps: S1, collecting existing image samples; S2, importing the image samples into a computer in a specific format, reading and analyzing the features of the sample images, labeling them according to injection duration and post-injection features, and establishing a sample database; S3, constructing a multispectral image and target recognition algorithm, collecting image data with an infrared camera and a visible light camera respectively, and performing image registration; a deep learning model automatically extracts the time-series features of the multispectral images, which are then sent to a classifier for target detection, classification and positioning; S4, uploading the target detection and classification results to the APP, where medical staff review the classification results to obtain the final result. The invention improves the precision and accuracy of target detection, raises the working efficiency of medical staff and safeguards patients' safe medication.

Description

Target judgment method based on multispectral image fusion recognition
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a target judgment method based on multispectral image fusion recognition.
Background
At present, penicillin skin test results are judged by a nurse through visual observation against a criterion executed per the clinical medication recommendation of the Pharmacopoeia of the People's Republic of China: the result is positive if there is local redness with a diameter greater than 1 cm, if the skin dome is more than 3 mm larger than the original dome, or if there is a local red halo. In actual practice, owing to differences in nurses' subjective judgment and interference from objective environmental factors (lighting, the patient's skin characteristics), it is difficult to judge accurately and uniformly the size of the skin dome, the range of the red halo, and the boundary between the dome and the surrounding skin (for example on sensitive skin); even with measuring tools such as a skin test ruler, different choices of measuring points yield different results. When the skin test reaction is ambiguous, nurses tend to judge it positive in order to avoid risk, yet penicillin skin test control trials show that more than 95 percent of such patients are false positives (data from control tests performed by hospital departments), and a misjudged patient loses the optimal opportunity for medication. A more intelligent and accurate technique is therefore needed to assist nurses in making a more precise determination.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides a target judgment method based on multispectral image fusion recognition, aiming to overcome the subjective variation among nurses in visually observing and judging penicillin skin test results and to improve the accuracy and precision of result judgment.
In order to achieve the purpose, the invention adopts the following technical scheme: a target judgment method based on multispectral image fusion recognition comprises the following steps:
s1, collecting the existing image sample;
s2, importing the image sample into a computer in a specific format, reading and analyzing the characteristics of the sample image, labeling according to the injection duration and the characteristics after injection, and establishing a sample database;
s3, constructing a multispectral image and a target recognition algorithm, and respectively collecting image data by using an infrared camera and a visible light camera to perform image registration; automatically extracting the time sequence characteristics of the multispectral image by using a deep learning model, and sending the multispectral image to a classifier for target detection, classification and positioning;
S4, the target recognition algorithm performs recognition judgment on the image sample; if the algorithm's judgment is consistent with the manual judgment, the result is labeled directly in the sample database; otherwise, the result is labeled after third-party review and correction;
s5, uploading the target detection and classification results to the APP, and judging the classification results by medical staff to obtain final results.
In order to optimize the technical scheme, the specific measures adopted further comprise:
Further, in step S1, the acquired image samples include pictures taken at 0 minutes and at 20 minutes after injection, and medical staff give the judgment results.
Further, in step S2, the judgment results at the "0 minutes after injection" time, the "20 minutes after injection" time, and both times together are respectively labeled.
Further, in step S3, the multispectral image and target recognition algorithm recognizes and detects features in the infrared image and the visible light image, wherein the features include the size and color of the skin dome, the range of the red halo, and the boundary between the skin dome and the surrounding skin.
Further, in the image registration process, an image calibration plate is manufactured, and features in the infrared image are enhanced through a local heating method to complete combined calibration.
Further, step S3 includes constructing an off-line registration method based on the infrared image and the visible light image, first obtaining a mapping transformation matrix H of the infrared image and the visible light image by using the image calibration plate and the feature points, and then implementing registration between the multispectral image pairs by using the H matrix.
Further, step S3 specifically includes the following steps:
S31, acquire image data with an infrared camera and a visible light camera respectively, where a group of data acquired for the same scene comprises an infrared image and a visible light image;
S32, judge whether more than 20 groups of image pairs have been acquired; if not, go to step S33; if so, go to step S34;
S33, take 4 corresponding pairs of feature points in the infrared image and the visible light image and compute the H matrix from the visible light image to the infrared image plane; continue with step S32;
S34, average the 20 H matrices and store the result;
S35, use the remaining groups to compute an H matrix and output the calibration result; medical staff judge the calibration effect, and if it is poor, return to step S32; if it is good, image registration is finished.
Further, the H matrix is a homography matrix, represented as:

$$H = \begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix}$$

The pixel conversion relationship between the infrared image and the visible light image can be written as:

$$s \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} = H \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix}$$

where $(x_1, y_1)$ are the pixel coordinates of the visible light image, $(x_2, y_2)$ are the pixel coordinates of the infrared image, $s$ is a scale factor, and $h_{00}$–$h_{22}$ are the parameters of the homography matrix.
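A minimal NumPy sketch of this pixel conversion, working in homogeneous coordinates and dividing out the scale factor; the H values below are illustrative (a pure translation), not from the patent:

```python
import numpy as np

def map_visible_to_infrared(H: np.ndarray, x1: float, y1: float):
    """Map a visible-light pixel (x1, y1) into the infrared image
    plane with homography H, using homogeneous coordinates."""
    p = H @ np.array([x1, y1, 1.0])
    return p[0] / p[2], p[1] / p[2]  # divide out the scale factor

# Illustrative H: identity plus a translation of (5, -3) pixels.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(map_visible_to_infrared(H, 100.0, 200.0))  # (105.0, 197.0)
```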
Further, step S3 includes constructing a target detection neural network model: VGG16 is used to extract features from the visible light image and the infrared image respectively, with the fully connected layers removed; a multi-modal image feature-map fusion module is added after the fourth convolution block; an RPN network, an ROI network and two fully connected layers are then added, and the fully connected feature information is sent to a bounding-box regression prediction head and a classification prediction head respectively, yielding the final target detection result and the positioning of the skin test target region in the visible light image.
Further, after the fourth VGG-16 convolution module, the feature maps produced by the visible light and infrared sub-networks are fused in the multi-modal image feature-map fusion module: the feature maps are stacked with a concat operation, and a 1 × 1 convolution layer performs feature dimension reduction so that the channel dimension of the fused feature map is reduced to 512.
The invention has the following beneficial effects: 1. Optimized techniques are adopted for image acquisition precision, image registration, feature extraction, feature fusion and result detection, making full use of fine-grained features such as the skin dome size, the red halo range and color, and the boundary with the surrounding skin, so the accuracy of the target detection result is improved.
2. As the sample database keeps expanding, the massive data provides ever more information to the algorithm, the benefit of deep learning becomes more evident, and the target recognition accuracy grows higher and higher. Finally, a mobile phone APP is created; a nurse needs only a mobile phone to obtain and review the detection result.
3. The occupational risk borne by medical staff is reduced, and their working pressure when facing hard-to-judge skin test results is relieved.
Drawings
FIG. 1 is a flowchart of a target determination method based on sensor image time-series feature analysis according to an embodiment of the present invention.
Fig. 2 is a flowchart of a multispectral image registration method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a visible light image and infrared image registration calibration plate according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of a penicillin skin test result detection neural network model based on multispectral image fusion according to an embodiment of the present invention.
Fig. 5 is a structure diagram of a multi-spectral image feature fusion model according to an embodiment of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
It should be noted that terms such as "upper", "lower", "left", "right", "front" and "back" used in the present invention are for clarity of description only and are not intended to limit the practicable scope of the invention; changes or adjustments of their relative relationships, without substantive alteration of the technical content, shall also be regarded as within the scope of the invention.
Penicillin is one of the most commonly used antibiotics at present; it has high curative effect and low toxicity but readily causes anaphylactic reaction. Therefore, before any penicillin preparation is used, an allergy test (skin test) is required, and the medicine may be used only when the result is negative. Test method: 0.1 ml (50 U) of penicillin test solution is injected intradermally and the result is observed after 20 minutes. Judgment of the intradermal test: (1) Negative: the skin dome is unchanged, with no redness, swelling or red halo around it, and no subjective symptoms. (2) Positive: local redness with a diameter greater than 1 cm, or a skin dome more than 3 mm larger than the original, or a local red halo.
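For illustration only, the pharmacopoeia criterion quoted above can be written as a small decision rule; the function name, parameters and unit choices are hypothetical and not part of the patent:

```python
def skin_test_positive(halo_diameter_cm: float,
                       dome_diameter_mm: float,
                       original_dome_diameter_mm: float,
                       local_redness: bool) -> bool:
    """Illustrative encoding of the intradermal test criterion:
    positive if there is local redness with diameter > 1 cm,
    or the skin dome grew more than 3 mm over the original dome."""
    if local_redness and halo_diameter_cm > 1.0:
        return True
    if dome_diameter_mm - original_dome_diameter_mm > 3.0:
        return True
    return False

# A dome that grew 4 mm is positive; an unchanged dome with no redness is negative.
print(skin_test_positive(0.5, 10.0, 6.0, False))  # True
print(skin_test_positive(0.5, 6.0, 6.0, False))   # False
```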
As shown in the accompanying drawings, the penicillin skin test result determination technology based on multispectral image fusion recognition in the present embodiment includes:
skin test multispectral image acquisition equipment, a hospital information system, a high-performance computer, a multispectral image and target recognition algorithm, and a mobile phone APP, the "Penicillin Skin Test Result Judgment Application".
The target judgment method based on multispectral image fusion recognition applied in the embodiment comprises the following steps:
and S1, acquiring an image sample.
The image samples collected by the invention are skin test images from a database provided by a hospital; the collected samples include pictures taken at 0 minutes and at 20 minutes after injection, and medical staff give the judgment results and enter them into the system.
When the hospital acquires skin test images, the acquisition device is installed in advance at the location where patients undergo the skin test. The multispectral image acquisition device adopts the certificate-photo capture instrument used in banking, the S500L office high-speed document camera. Its 5-megapixel high-definition camera connects to operating systems from Windows 7 upward and provides image and text editing functions. The camera arm is an inverted-L type; during acquisition the patient's skin test arm is laid on its base and photographed under the camera, keeping the shooting distance fixed and the light intensity consistent, which minimizes interference.
S2, importing the image samples into the high-performance computer in RAW format and reading and analyzing the features of the sample images. This includes time-stamping, one by one, the two pictures of skin dome formation (0 minutes after injection) and result judgment (20 minutes after injection), with medical staff labeling negative and positive results; these steps are completed for every penicillin skin test image, and the sample database is established.
S3, constructing the multispectral image and target recognition algorithm: image data are collected by an infrared camera and a visible light camera respectively and registered; a deep learning model automatically extracts the time-series features of the multispectral images, such as the size and color of the skin dome, the range of the red halo, and changes of the boundary with the surrounding skin, which are sent to a pre-trained classifier for target detection, classification and positioning.
Specifically, an off-line registration method based on the infrared image and the visible light image is constructed to ensure that the same target occupies the same position in the image planes of both images. During image registration, an aluminum alloy calibration plate is designed and manufactured, the features in the infrared image are enhanced by local heating, and the plate is rotated and translated to obtain the H matrix and complete the joint calibration. As shown in Fig. 3, each group of pictures comprises an infrared image and a visible light image; because the groups are captured at different shooting angles, the H matrix obtained by this method is more robust. First the mapping transformation matrix H between the infrared and visible light images is obtained from the calibration plate and feature points, and finally the H matrix is used to register the multispectral image pairs.
As shown in fig. 2, the method comprises the following steps:
S31, acquire image data with an infrared camera and a visible light camera respectively, where a group of data acquired for the same scene comprises an infrared image and a visible light image;
S32, judge whether more than 20 groups of image pairs have been acquired; if not, go to step S33; if so, go to step S34;
S33, take 4 corresponding pairs of feature points in the infrared image and the visible light image and compute the H matrix from the visible light image to the infrared image plane. H is a homography matrix, a 3 × 3 transformation matrix expressing the mapping from one image to another:

$$H = \begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix}$$

The pixel conversion relationship between the infrared image and the visible light image can be written as:

$$s \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} = H \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix}$$

where $(x_1, y_1)$ are the pixel coordinates of the visible light image, $(x_2, y_2)$ are the pixel coordinates of the infrared image, $s$ is a scale factor, and $h_{00}$–$h_{22}$ are the parameters of the homography matrix.
Continuing with step S32;
S34, average the 20 H matrices and store the result; averaging reduces the error introduced by poor-quality feature points.
S35, use the remaining groups to compute an H matrix and verify whether it achieves a satisfactory effect; output the calibration result, i.e. the image registration effect. Medical staff judge the calibration effect: if it is poor, return to step S32; if it is good, image registration is finished.
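Steps S31–S34 can be sketched as follows. `estimate_homography` is a generic least-squares solver standing in for whatever estimator the patent's implementation uses, and the point data are synthetic:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate a 3x3 homography from 4 (or more) point pairs by
    solving the linear system with h22 fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def calibrate(groups):
    """Steps S31-S34: one H per image pair (4 feature-point pairs each),
    averaged over the groups to damp errors from poor-quality points."""
    Hs = [estimate_homography(src, dst) for src, dst in groups]
    return np.mean(Hs, axis=0)

# Illustrative check: 20 groups generated from a known translation.
H_true = np.array([[1, 0, 5], [0, 1, -3], [0, 0, 1]], float)
src = [(0, 0), (100, 0), (0, 100), (100, 100)]
dst = [tuple((H_true @ np.array([x, y, 1]))[:2]) for x, y in src]
H_avg = calibrate([(src, dst)] * 20)
print(np.allclose(H_avg, H_true))  # True
```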
The embodiment of the invention also provides a penicillin skin test result detection method (a target detection neural network model) based on multispectral image fusion. VGG16 is used to extract features from the visible light image and the infrared image respectively, with the fully connected layers removed; comparison over a large amount of experimental data led to placing a multi-modal image feature-map fusion module after the fourth convolution block. An RPN (region proposal network), an ROI (region of interest) network and two fully connected layers are then added, and the fully connected feature information is sent to a bounding-box regression prediction head and a classification prediction head respectively, yielding the final penicillin skin test detection result and the positioning of the skin test target region in the visible light image.
After the fourth VGG-16 convolution module, the feature maps produced by the visible light and infrared sub-networks are fused in the fusion module: the feature maps are stacked with a concat operation, and a 1 × 1 convolution layer performs feature dimension reduction, bringing the channel dimension of the fused feature map down to 512, as shown in Fig. 5.
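The concat-plus-1×1-convolution fusion described above can be sketched in NumPy; the spatial size and the random weights below are illustrative stand-ins for the learned VGG-16 features and fusion parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature maps from the two sub-networks after the fourth VGG-16 block
# (spatial size is illustrative; VGG16's fourth block outputs 512 channels).
vis_feat = rng.standard_normal((512, 28, 28))
ir_feat = rng.standard_normal((512, 28, 28))

# "concat" stacks along the channel axis: 512 + 512 -> 1024 channels.
fused = np.concatenate([vis_feat, ir_feat], axis=0)

# A 1x1 convolution is a per-pixel linear map over channels; the weights
# here are random stand-ins for learned parameters.
w = rng.standard_normal((512, 1024)) * 0.01
reduced = np.einsum('oc,chw->ohw', w, fused)
print(reduced.shape)  # (512, 28, 28)
```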
Each newly collected skin test picture is brought into the sample database following the steps above, so the computer obtains ever more learning samples.
S4, after the database exceeds 1000 samples, recognition and judgment of skin test images by the target recognition algorithm begins. If the algorithm's judgment is consistent with the manual judgment, the skin test result is labeled directly; otherwise, the result is labeled after third-party review and correction. When a new penicillin skin test image is imported, the computer presents a judgment based on its earlier learning and the algorithm. As the sample database expands, the computer's recognition accuracy grows until it reaches or approaches 95–100%. A mobile phone APP, the "Penicillin Skin Test Result Judgment Application", is created, and the judgments it produces assist medical staff. It should be noted that the invention only assists medical staff in judging the skin reaction of the penicillin skin test; if a patient's skin test is accompanied by other bodily reactions, the nurse must make a comprehensive analysis and judgment.
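The consistency rule of step S4 (store the label directly when the algorithm and the manual judgment agree, otherwise defer to third-party review) can be sketched as a small helper; the names are illustrative:

```python
def label_sample(algo_result, manual_result, third_party_result=None):
    """Step S4 consistency rule: when the algorithm agrees with the
    manual judgment the label is stored directly; otherwise a
    third-party review decides the final label."""
    if algo_result == manual_result:
        return algo_result
    if third_party_result is None:
        raise ValueError("disagreement: third-party review required")
    return third_party_result

print(label_sample("negative", "negative"))              # negative
print(label_sample("positive", "negative", "negative"))  # negative
```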
S5, uploading the target detection and classification results to a mobile phone APP program, and judging the classification results by medical staff to obtain final results.
The experimental verification method of the embodiment of the invention comprises the following steps:
1. and establishing a multi-modal skin test target change image dataset. The project constructs an image management system for skin test change image data management and data set establishment, and specifically comprises the following functions:
(1) and (3) data uploading: uploading environmental data acquired by the multi-modal optical sensor to a data image management system;
(2) target location and label marking: the infrared thermal image and the corresponding visible light image are displayed simultaneously; after fine registration of the multispectral image pair, framing a target object on the visible light image automatically frames the corresponding target area on the infrared thermal image;
(3) evaluation and checking of labeling results: to improve the accuracy of manually labeled pictures, the labeled results are checked and verified by comparing the overlap of the target bounding boxes labeled twice; if the discrepancy is large, a third labeling is performed, and the average of the two closest labels is taken as the final result;
(4) training sample expansion: target samples are expanded by rotation, translation, scaling, blurring, etc., improving the generalization ability of the deep neural network and suppressing the overfitting caused by too small a sample set.
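A minimal sketch of this sample-expansion step using only simple geometric transforms (the scaling and blurring mentioned in the text are omitted for brevity):

```python
import numpy as np

def augment(img):
    """Expand one labeled target sample by simple geometric
    transforms (rotation, translation, flip), as in step (4)."""
    out = [img]
    out += [np.rot90(img, k) for k in (1, 2, 3)]         # rotations
    out.append(np.roll(img, shift=(2, 2), axis=(0, 1)))  # translation
    out.append(np.fliplr(img))                           # mirror
    return out

sample = np.arange(64).reshape(8, 8)
print(len(augment(sample)))  # 6
```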
2. Building the test verification platform. An RGB and infrared camera perception platform is set up for experimental verification of the penicillin skin test result detection scheme based on multi-modal image fusion. The multimodal optical sensor, composed of an industrial camera and an infrared thermal imager, supports development of the skin test result detection and recognition algorithm and ROS programming by reading the onboard camera and infrared thermal images.
The method reads, recognizes, analyzes and processes skin test sample images through dedicated software and extracts feature rules from massive information; in measuring the size of the skin dome and the range of the red halo and in distinguishing skin color and the boundary with the surrounding skin, it has advantages the human eye cannot match. The penicillin skin test result judgment technique based on multispectral image fusion recognition is an innovation benefiting both patients and nurses and a new application of artificial intelligence (AI) in nursing; the longer it is applied, the higher its accuracy.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (10)

1. The target judgment method based on multispectral image fusion recognition is characterized by comprising the following steps of:
s1, collecting the existing image sample;
s2, importing the image sample into a computer in a specific format, reading and analyzing the characteristics of the sample image, manually marking according to the injection duration and the characteristics after injection, and establishing a sample database;
s3, constructing a multispectral image and a target recognition algorithm, respectively collecting image data by using an infrared camera and a visible light camera, and carrying out image registration; automatically extracting the time sequence characteristics of the multispectral image by using a deep learning model, and sending the multispectral image to a classifier for target detection, classification and positioning;
S4, the target recognition algorithm performs recognition judgment on the image sample; if the algorithm's judgment is consistent with the manual judgment, the result is labeled directly in the sample database; otherwise, the result is labeled after third-party review and correction;
s5, uploading the target detection and classification results to the APP, and judging the classification results by medical staff to obtain final results.
2. The target judgment method according to claim 1, wherein in step S1 the acquired image samples include pictures taken at 0 minutes and at 20 minutes after injection, and medical staff give the judgment result.
3. The object determination method according to claim 1, wherein in step S2, the determination results of "0 minute injection" time, "20 minute injection" time and both times are respectively labeled.
4. The target judgment method according to claim 1, wherein in step S3 the multispectral image and target recognition algorithm recognizes and detects features in the infrared image and the visible light image, wherein the features include the size and color of the skin dome, the range of the red halo, and the boundary between the skin dome and the surrounding skin.
5. The method of claim 1, wherein during the image registration, an image calibration board is fabricated, and features in the infrared image are enhanced by local heating to complete the joint calibration.
6. The method of claim 5, wherein step S3 includes constructing an offline registration method based on the infrared image and the visible light image: the image calibration board and feature points are first used to obtain the mapping transformation matrix H between the infrared image and the visible light image, and the H matrix is then used to register the multispectral image pairs.
7. The object determination method according to claim 6, wherein step S3 specifically includes the steps of:
S31, acquiring image data with an infrared camera and a visible light camera respectively, where each group of data acquired for the same scene comprises one infrared image and one visible light image;
S32, judging whether more than 20 groups of image pairs have been acquired; if not, proceeding to step S33; if so, proceeding to step S34;
S33, selecting 4 corresponding pairs of feature points in the infrared image and the visible light image, and calculating the H matrix from the visible light image to the infrared image plane; then returning to step S32;
S34, averaging the 20 H matrices and storing the result;
S35, evaluating the remaining groups with the calculated H matrix and outputting the calibration result; the calibration effect is judged by medical staff: if the effect is poor, return to step S32; if the effect is good, the image registration is complete.
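The patent gives no code for steps S31–S35; the following is a minimal numpy sketch, assuming the standard Direct Linear Transform (DLT) is used to estimate one H matrix from the 4 feature-point pairs of step S33 and that the 20 matrices of step S34 are combined by element-wise averaging. The function names and the DLT formulation are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src (visible light) points to
    dst (infrared) points from 4 correspondences via the DLT: each pair
    contributes two linear equations in the 9 entries of H."""
    A = []
    for (x1, y1), (x2, y2) in zip(src_pts, dst_pts):
        A.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1, -x2])
        A.append([0, 0, 0, x1, y1, 1, -y2 * x1, -y2 * y1, -y2])
    A = np.asarray(A, dtype=float)
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so that h22 = 1

def average_homographies(H_list):
    """Step S34: average the per-pair H matrices into one offline matrix."""
    return np.mean(np.stack(H_list), axis=0)
```

With exact correspondences from 4 points in general position (no 3 collinear), the 8x9 system has a one-dimensional null space and the true homography is recovered up to scale.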
8. The object determination method according to claim 7, wherein the H matrix is a homography matrix and is expressed by the following formula:
$$H=\begin{bmatrix}h_{00}&h_{01}&h_{02}\\h_{10}&h_{11}&h_{12}\\h_{20}&h_{21}&h_{22}\end{bmatrix}$$
The pixel conversion relationship between the infrared image and the visible light image can be written as:
$$\begin{bmatrix}x_2\\y_2\\1\end{bmatrix}\sim H\begin{bmatrix}x_1\\y_1\\1\end{bmatrix},\qquad x_2=\frac{h_{00}x_1+h_{01}y_1+h_{02}}{h_{20}x_1+h_{21}y_1+h_{22}},\quad y_2=\frac{h_{10}x_1+h_{11}y_1+h_{12}}{h_{20}x_1+h_{21}y_1+h_{22}}$$
where $(x_1, y_1)$ are the pixel coordinates of the visible light image, $(x_2, y_2)$ are the pixel coordinates of the infrared image, and $h_{00}$–$h_{22}$ are the parameters of the homography matrix.
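As a sanity check on the pixel conversion relationship, a short sketch (not from the patent) of mapping a visible light pixel into the infrared image plane, including the division by the homogeneous third component:

```python
import numpy as np

def visible_to_infrared(H, x1, y1):
    """Map visible-light pixel (x1, y1) to the infrared image plane using
    homography H; the homogeneous result is normalised by its third
    component to recover (x2, y2)."""
    u, v, w = H @ np.array([x1, y1, 1.0])
    return u / w, v / w

# Example: a pure-translation homography shifts every pixel by (+5, -3).
H_shift = np.array([[1.0, 0.0, 5.0],
                    [0.0, 1.0, -3.0],
                    [0.0, 0.0, 1.0]])
```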
9. The target determination method of claim 1, wherein step S3 includes constructing a target detection neural network model: VGG16, with its fully connected layers removed, is used to extract features from the visible light image and the infrared image respectively; a multi-modal image feature map fusion module is added after the fourth convolution layer, followed by an RPN network, an ROI network and two fully connected layers; the fully connected feature information is fed to the bounding-box regression prediction head and the classification prediction head respectively, yielding the final target detection result and the localization of the skin test target region in the visible light image.
10. The object determination method of claim 9, wherein after the fourth VGG16 convolution module the feature maps produced by the visible light and infrared sub-networks are fused by the multi-modal image feature map fusion module: the feature maps are stacked along the channel dimension with a concat operation, and a 1×1 convolution layer performs feature dimension reduction so that the channel count of the fused feature map is reduced to 512.
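Not part of the patent text: a minimal numpy sketch of the fusion step described in claim 10 — channel-wise concatenation of the two sub-network feature maps followed by a 1×1 convolution, which at kernel size 1×1 reduces to a linear map over channels applied at every spatial position. The shapes, the absence of bias and activation, and the function name are simplifying assumptions.

```python
import numpy as np

def fuse_feature_maps(feat_vis, feat_ir, weight):
    """Concatenate two (C, H, W) feature maps along the channel axis and
    reduce the doubled channel count with a 1x1 convolution, implemented
    as a matrix multiply over channels. `weight` has shape (C_out, 2*C)."""
    stacked = np.concatenate([feat_vis, feat_ir], axis=0)   # (2C, H, W)
    c, h, w = stacked.shape
    flat = stacked.reshape(c, h * w)                        # (2C, H*W)
    reduced = weight @ flat                                 # (C_out, H*W)
    return reduced.reshape(weight.shape[0], h, w)           # (C_out, H, W)

# In the patent's setting the two VGG16 conv4 maps would each have 512
# channels, giving 1024 after concat and 512 again after the 1x1 conv.
```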
CN202110316711.XA 2021-03-24 2021-03-24 Target judgment method based on multispectral image fusion recognition Pending CN112907571A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110316711.XA CN112907571A (en) 2021-03-24 2021-03-24 Target judgment method based on multispectral image fusion recognition


Publications (1)

Publication Number Publication Date
CN112907571A true CN112907571A (en) 2021-06-04

Family

ID=76106664


Country Status (1)

Country Link
CN (1) CN112907571A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6215893B1 (en) * 1998-05-24 2001-04-10 Romedix Ltd. Apparatus and method for measurement and temporal comparison of skin surface images
CN202093419U (en) * 2010-12-28 2011-12-28 罗健 Multi-function working computer having touch screen and specialized for nurses
KR101670335B1 * 2015-10-07 2016-10-28 Kumoh National Institute of Technology Industry-Academic Cooperation Foundation An ultrasound and nuclear medicine integrated imaging probe system for obtaining high-resolution images
CN107456249A (en) * 2016-06-03 2017-12-12 谢晴 A kind of hand-held skin test analyzer
CN108272437A (en) * 2017-12-27 2018-07-13 中国科学院西安光学精密机械研究所 Spectral detection system and sorter model construction method for skin disease diagnosis
CN109152537A (en) * 2016-05-23 2019-01-04 布鲁德普医疗有限公司 A kind of skin examination equipment of exception for identification
CN109145799A (en) * 2018-08-13 2019-01-04 湖南志东科技有限公司 A kind of object discrimination method based on multi-layer information
WO2020171281A1 * 2019-02-22 2020-08-27 Thermoeye Co., Ltd. Visible light and infrared fusion image-based object detection method and apparatus
CN112070757A (en) * 2020-09-16 2020-12-11 重庆康盛医道信息科技有限公司 Skin allergen prick automatic detection analysis method based on deep learning algorithm


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KEKE GENG et al.: "Low-observable targets detection for autonomous vehicles based on dual-modal sensor fusion with deep learning approach", Proc IMechE Part D: J Automobile Engineering *
ZOU Wei et al.: "Low-distinguishability target detection method for autonomous driving vehicles based on multi-modal feature fusion", China Mechanical Engineering *
CHEN Li et al.: "A skin detection technique based on a fusion method", Science and Technology Square *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344914A (en) * 2021-07-09 2021-09-03 重庆医科大学附属第一医院 Method and device for intelligently analyzing PPD skin test result based on image recognition
CN113344914B (en) * 2021-07-09 2023-04-07 重庆医科大学附属第一医院 Method and device for intelligently analyzing PPD skin test result based on image recognition
CN117809124A (en) * 2024-03-01 2024-04-02 青岛瑞思德生物科技有限公司 Medical image association calling method and system based on multi-feature fusion
CN117809124B (en) * 2024-03-01 2024-04-26 青岛瑞思德生物科技有限公司 Medical image association calling method and system based on multi-feature fusion

Similar Documents

Publication Publication Date Title
CN108921100B (en) Face recognition method and system based on visible light image and infrared image fusion
KR102161477B1 (en) Improved method for accurate temperature of thermal imaging camera using face recognition
US8345936B2 (en) Multispectral iris fusion for enhancement and interoperability
CN111339951A (en) Body temperature measuring method, device and system
CN112033545B (en) Human body temperature infrared measurement method and device and computer equipment
CN107657639A (en) A kind of method and apparatus of quickly positioning target
US20120133753A1 (en) System, device, method, and computer program product for facial defect analysis using angular facial image
US20180182091A1 (en) Method and system for imaging and analysis of anatomical features
JPH11244261A (en) Iris recognition method and device thereof, data conversion method and device thereof
US20120157800A1 (en) Dermatology imaging device and method
CN112907571A (en) Target judgment method based on multispectral image fusion recognition
KR102162683B1 (en) Reading aid using atypical skin disease image data
CN102479322A (en) System, apparatus and method for analyzing facial defect by facial image with angle
Besinger et al. Optical flow based analyses to detect emotion from human facial image data
WO2023155488A1 (en) Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion
CN113435236A (en) Home old man posture detection method, system, storage medium, equipment and application
TWI557601B (en) A puppil positioning system, method, computer program product and computer readable recording medium
CN110110606A (en) The fusion method of visible light neural network based and infrared face image
CN111275754B (en) Face acne mark proportion calculation method based on deep learning
CN117373110A (en) Visible light-thermal infrared imaging infant behavior recognition method, device and equipment
Chang et al. AI HAM 10000 database to assist residents in learning differential diagnosis of skin cancer
CN113808256B (en) High-precision holographic human body reconstruction method combined with identity recognition
CN111428577B (en) Face living body judgment method based on deep learning and video amplification technology
Strąkowska et al. Automatic eye corners detection and tracking algorithm in sequence of thermal medical images
KR102473744B1 (en) A method of diagnosing strabismus through the analysis of eyeball image from cover and uncovered test

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination