CN117523649B - Mining iris safety recognition detection method, system, medium and terminal


Info

Publication number: CN117523649B (granted publication of application CN202410008035.3A; earlier publication CN117523649A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 彭斌 (Peng Bin), 罗杰 (Luo Jie), 王进 (Wang Jin)
Applicant and current assignee: Chengdu Keruite Electric Automation Co., Ltd.
Legal status: Active (granted)
Prior art keywords: image, iris, texture, gray level, features


Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The invention provides a mining iris safety recognition detection method, system, medium and terminal, relating to the technical field of mining personnel safety management. The method comprises: acquiring an iris image of an eye of a target object and extracting a quality-qualified image from the first iris image, recorded as a second iris image, by an overall image sharpness evaluation method; performing image recognition on the first texture image based on a depthwise separable convolution structure to obtain texture features in the first texture image; performing feature extraction on the second texture image to obtain its color features; combining the texture features and the color features by a threshold binarization method, applying dimension reduction, and splicing to obtain the spliced iris integral features; and inputting the spliced iris integral features into a preset fatigue degree detection model for fatigue recognition to judge whether the iris of the target object belongs to a fatigue mode. The invention has the beneficial effects of detecting the fatigue state of miners before they descend into the well, preventing fatigued descent, and improving underground safety.

Description

Mining iris safety recognition detection method, system, medium and terminal
Technical Field
The invention relates to the technical field of mining personnel safety management, and in particular to a mining iris safety recognition detection method and a corresponding system, medium and terminal.
Background
Actual mining operations at coal mines and other mine enterprises face many unpredictable safety hazards such as gas outburst, water inrush, roof fall and collapse, so mine production safety cannot be ignored. Given the high-risk nature of production and exploitation work, there is a high demand for safety screening of miners before they enter the well.
Usually, the identity of miners is verified by face recognition before they enter the well so that authentic and valid personal information can be obtained. Iris recognition technology, by contrast, exploits the lifelong invariance and individual variability of human iris textures to realize identity recognition. At present, however, the actual mining environment of coal mines and similar mines is poor, with heavy dust and wind-blown sand; face recognition alone is a single modality and inevitably introduces errors when acquiring miners' facial information. Moreover, according to related statistics, accidents caused by descending into the well while fatigued account for a large proportion of underground accidents: the air underground is thin and the physical demands of production and exploitation work are high, and some miners feel unwell before descending, or are excessively fatigued from long-term underground work, yet still insist on working; underground work is difficult, and fatigue makes safety accidents far more likely. It is therefore necessary to research a mining iris safety recognition detection method that can detect the fatigue state of miners.
Disclosure of Invention
The invention aims to provide a mining iris safety recognition detection method, system, medium and terminal to solve the above problems. To achieve this purpose, the technical scheme adopted by the invention is as follows:
in a first aspect, the present application provides a mining iris safety recognition detection method, including:
acquiring an iris image of an eye of a target object, preprocessing the iris image to obtain a preprocessed first iris image, and extracting a quality-qualified image from the first iris image by an overall image sharpness evaluation method, recording it as a second iris image, wherein the second iris image comprises a first texture image and a second texture image, the first texture image comprises spots, filaments and stripes, and the second texture image comprises pigments and blood vessels;
performing image recognition on the first texture image based on a depthwise separable convolution structure to obtain texture features in the first texture image;
extracting features of the second texture image by utilizing a Yolov4 model to obtain color features of the second texture image, wherein the Yolov4 model comprises selection information of a backbone network, a threshold value for non-maximum suppression and prior frame information;
combining the texture features and the color features by adopting a threshold binarization method to realize dimension reduction treatment, and splicing the processed texture features and color features to obtain spliced iris integral features;
inputting the spliced iris integral features into a preset fatigue degree detection model for fatigue recognition, judging whether the iris of the target object belongs to a fatigue mode; if so, judging the fatigue degree of the target object and giving an alarm, and if not, judging that the target object is in a safe state.
Preferably, the acquiring an iris image of the eye of the target object, preprocessing the iris image to obtain a preprocessed first iris image, and extracting a quality-qualified image from the first iris image by an overall image sharpness evaluation method and recording it as a second iris image comprises:
acquiring iris images of eyes of a target object through a camera device, and performing iris ring positioning, iris ring normalization and normalized iris image enhancement on the iris images to obtain first iris images after image enhancement;
scaling the first iris image, removing a background area exceeding a preset proportion to obtain a first iris image with the background removed, and weakening noise of the first iris image by Gaussian filtering to obtain a preprocessed first iris image;
and evaluating the preprocessed first iris image by a sharpness evaluation method based on the Tenengrad gradient function, removing blurred and severely blurred images from the first iris image, and retaining the clear images, recorded as second iris images, wherein the evaluation method extracts the gradient magnitude of the image with the Sobel operator and accumulates it to calculate the sharpness of the image.
Preferably, the performing image recognition on the first texture image based on the depthwise separable convolution structure to obtain texture features in the first texture image comprises:
extracting features of the first texture image through a depthwise separable convolution structure to obtain a first feature image, wherein the first feature image comprises a speckle feature image, a filament feature image and a stripe feature image;
respectively carrying out gray level conversion on the speckle characteristic image, the filament characteristic image and the stripe characteristic image to obtain a corresponding speckle gray level image, a filament gray level image and a stripe gray level image, and respectively connecting pixel points with the same gray level value in the speckle gray level image, the filament gray level image and the stripe gray level image, wherein a linear interpolation method is adopted to carry out interpolation treatment on the connecting lines to obtain a first speckle gray level image, a first filament gray level image and a first stripe gray level image;
fusing the first speckle gray level image, the first filament gray level image and the first stripe gray level image to obtain a second characteristic image, performing edge detection on the second characteristic image, and converting an image outside the second characteristic image into white;
and clustering the R, G, B components of all pixel points in the second characteristic image respectively, averaging the center points of all the obtained clusters, and taking the average value as the texture feature of the first texture image.
Preferably, the edge detection is performed on the second feature image, and the image outside the second feature image is converted into white, which includes:
denoising the second characteristic image to obtain a denoised second characteristic image, and calculating the pixel intensity gradient of the denoised second characteristic image;
non-maximum suppression is carried out on the denoised pixels of the denoised second characteristic image according to the pixel intensity gradient, and an edge image of the denoised second characteristic image is obtained;
decomposing the edge image by a wavelet transformation method to obtain three high-frequency components after the decomposition processing, namely a spot edge image, a filament edge image and a stripe edge image, wherein the spot edge image comprises the low-frequency component of the spots in the edge image in the horizontal direction and the high-frequency component in the vertical direction, the filament edge image comprises the high-frequency component of the filaments in the edge image in the horizontal direction and the low-frequency component in the vertical direction, and the stripe edge image comprises the high-frequency components of the stripes in the edge image in both the horizontal and vertical directions, and the wavelet function adopted by the wavelet transformation method is the Mexican hat function;
and respectively carrying out tracking processing on the spot edge image, the filament edge image and the stripe edge image to obtain a first edge image, a second edge image and a third edge image, and converting images except the first edge image, the second edge image and the third edge image into white.
Preferably, the extracting the features of the second texture image by using the Yolov4 model, to obtain color features of the second texture image, includes:
based on a deep learning target detection algorithm, adjusting the second texture image to a preset size and inputting the second texture image into a Yolov4 target detection model after training, determining a rectangular frame, and recording a region surrounded by the rectangular frame as a pigment region and a blood vessel region, wherein determining the rectangular frame comprises: performing sliding processing on the second texture image by utilizing a sliding window, determining a plurality of first center points, performing mapping processing on the second iris image based on the plurality of first center points, and determining a plurality of second center points; generating a plurality of candidate anchor frames at each second center point based on anchor frames of a preset size, wherein the anchor frames of the preset size are obtained from training data of a Yolov4 target detection model, and determining pigment areas and blood vessel areas in the second texture image according to the plurality of candidate anchor frames;
performing component extraction on the pigment region to obtain a hue value, a saturation value and a brightness value, recorded as first color quantization extraction values, and performing component extraction on the blood vessel region to obtain a hue value, a saturation value and a brightness value, recorded as second color quantization extraction values;
and equally dividing the first color quantization extraction values and the second color quantization extraction values respectively to obtain at least four equal parts of color quantization extraction values, and combining the color quantization extraction values of each part to obtain the color features of the second texture image.
In a second aspect, the application further provides a mining iris safety recognition detection system, which comprises an acquisition module, a recognition module, an extraction module, a processing module and a judgment module, wherein:
the acquisition module: used for acquiring an iris image of an eye of a target object, preprocessing the iris image to obtain a preprocessed first iris image, and extracting a quality-qualified image from the first iris image by an overall image sharpness evaluation method, recording it as a second iris image, wherein the second iris image comprises a first texture image and a second texture image, the first texture image comprises spots, filaments and stripes, and the second texture image comprises pigments and blood vessels;
the identification module: used for performing image recognition on the first texture image based on a depthwise separable convolution structure to obtain texture features in the first texture image;
the extraction module: used for extracting features of the second texture image by utilizing a Yolov4 model to obtain color features of the second texture image, wherein the Yolov4 model comprises selection information of a backbone network, a threshold value for non-maximum suppression and prior frame information;
the processing module: used for combining the texture features and the color features by a threshold binarization method to realize dimension reduction, and splicing the processed texture features and color features to obtain the spliced iris integral features;
the judging module: used for inputting the spliced iris integral features into a preset fatigue degree detection model for fatigue recognition, judging whether the iris of the target object belongs to a fatigue mode; if so, judging the fatigue degree of the target object and giving an alarm, and if not, judging that the target object is in a safe state.
In a third aspect, the present application further provides a readable storage medium, where a computer program is stored, where the computer program, when executed by a processor, implements the steps of the mining iris safety recognition detection method described above.
In a fourth aspect, the present application further provides a terminal, including a memory and a processor, where the memory stores a computer program, and the computer program when executed by the processor causes the processor to execute the steps of the mining iris safety recognition detection method.
The beneficial effects of the invention are as follows: the invention extracts images with an overall image sharpness evaluation method, ensuring the sharpness of the images used; over-blurred or severely blurred images are rapidly screened out via the Sobel operator and clear iris images are selected, facilitating subsequent calculation; and a target detection model is added to identify and locate the pigment region and blood vessel region, offering higher processing speed and positioning accuracy than traditional recognition methods.
According to the invention, the first texture image is subjected to texture extraction, so that the speckle characteristic image, the filament characteristic image and the stripe characteristic image are rapidly determined, and further, the speckle gray level image, the filament gray level image and the stripe gray level image are determined, and preparation is made for later judgment of the texture characteristics.
According to the method, all pixel points in the second characteristic image are clustered, the cluster center points are rapidly determined, and the average of all center points is taken as the texture feature of the first texture image, so that the texture degree of the first texture image can be judged from different color characteristics. Denoising the second characteristic image improves the sharpness and distinguishability of the image and thus the effect of image recognition, detection and analysis; non-maximum suppression selects an optimal bounding box from a group of overlapping boxes and removes the remaining redundant detection boxes that overlap the maximum too much, yielding the edge image of the second characteristic image. The invention applies dimension reduction when processing the image, which improves calculation efficiency, benefits the robustness of the overall recognition algorithm, prevents loss of the spatial information of the original features, and effectively characterizes changes in the iris texture.
The invention detects the fatigue state of miners before they enter the well, prevents fatigued descent into the well, and improves the safety of miners underground.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a mining iris safety recognition detection method according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of a mining iris safety recognition detection system according to an embodiment of the present invention;
fig. 3 is a terminal structure diagram of a mining iris safety recognition detection method according to an embodiment of the invention.
In the figure: 701. an acquisition module; 7011. a first processing unit; 7012. a second processing unit; 7013. an evaluation unit; 702. An identification module; 7021. a first extraction unit; 7022. a conversion unit; 7023. a fusion unit; 70231. a denoising unit; 70232. a third processing unit; 70233. a fourth processing unit; 70234. a fifth processing unit; 7024. a clustering unit; 703. an extraction module; 7031. a training unit; 7032. a second extraction unit; 7033. dividing units; 704. a processing module; 705. and a judging module.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Example 1
The embodiment provides a mining iris safety recognition detection method.
Referring to fig. 1, the method is shown to include steps S100, S200, S300, S400, and S500.
S100, acquiring an iris image of an eye of a target object, preprocessing the iris image to obtain a preprocessed first iris image, and extracting a quality-qualified image from the first iris image by an overall image sharpness evaluation method, recording it as a second iris image, wherein the second iris image comprises a first texture image and a second texture image, the first texture image comprises spots, filaments and stripes, and the second texture image comprises pigments and blood vessels.
It should be noted that the clearer an image is, the richer the detail it contains; the larger the gray-level change between adjacent pixels, the clearer the image is considered to be. Extracting images with an overall sharpness evaluation method therefore ensures the sharpness of the images used. Quality evaluation is needed because actually acquired iris images are affected by many factors during acquisition and may be blurred or heavily occluded by eyelids and eyelashes, so image quality must be judged at the acquisition stage and unqualified images screened out while keeping the usable-image rate high. The gray value of the pupil region is small and differs markedly from that of other regions; to extract the pupil region, the gray histogram of the iris image can be computed, a threshold set for judgment, and the iris image binarized. The pupil area is then calculated in the extracted pupil region as an index of the iris region, a threshold is set, and the number of pixels in the pupil is compared with that threshold to achieve screening. In the present application, a method based on high-frequency energy in the Fourier domain may also be used for quality evaluation, which is not described in detail herein.
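A minimal sketch of the pupil-area screening just described; the binarization threshold and pixel-count bounds are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

def pupil_area_quality_ok(iris_gray, bin_thresh=50, min_pupil_px=800, max_pupil_px=6000):
    """Coarse quality screen: binarize the dark pupil and check its pixel count.

    bin_thresh, min_pupil_px and max_pupil_px are illustrative, not from the patent.
    """
    # The pupil is the darkest region, so a low fixed threshold isolates it.
    _, mask = cv2.threshold(iris_gray, bin_thresh, 255, cv2.THRESH_BINARY_INV)
    pupil_px = int(np.count_nonzero(mask))
    # Too few pixels: pupil occluded or blurred; too many: over-dark or off-target frame.
    return min_pupil_px <= pupil_px <= max_pupil_px
```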
It will be appreciated that S101, S102 and S103 are included in this step S100, wherein:
s101, acquiring iris images of eyes of a target object through a camera device, and performing iris ring positioning, iris ring normalization and normalized iris image enhancement on the iris images to obtain first iris images after image enhancement;
It should be noted that, first, the collected iris image is initially preprocessed: the iris image generally includes the human eye and the surrounding eyelid area, while the iris texture region is only a part of it, so interference from non-iris regions such as the eyelid, eyelashes, pupil and sclera must be eliminated. Second, the iris texture region deforms elastically with changes in illumination, and iris images acquired from the same person at different times have a certain rotational deviation, so the iris ring must be normalized to generate a normalized iris image of unified standard. To enhance the effect of subsequent texture description, the contrast of the iris texture is enhanced by an image enhancement algorithm.
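The patent does not name a specific normalization algorithm; a common choice consistent with "iris ring normalization" is Daugman-style rubber-sheet unwrapping, sketched here under that assumption (the circle parameters are presumed already located, and the input is an 8-bit grayscale image):

```python
import numpy as np
import cv2

def normalize_iris(img, pupil_c, pupil_r, iris_c, iris_r, out_h=64, out_w=512):
    """Rubber-sheet unwrapping of the iris ring into a fixed-size rectangle
    (a common normalization; the patent does not name a specific algorithm)."""
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0, 1, out_h)
    out = np.zeros((out_h, out_w), dtype=img.dtype)
    for i, r in enumerate(radii):
        for j, t in enumerate(thetas):
            # Linearly interpolate between the pupil boundary and the iris boundary.
            x = (1 - r) * (pupil_c[0] + pupil_r * np.cos(t)) + r * (iris_c[0] + iris_r * np.cos(t))
            y = (1 - r) * (pupil_c[1] + pupil_r * np.sin(t)) + r * (iris_c[1] + iris_r * np.sin(t))
            out[i, j] = img[int(round(y)) % img.shape[0], int(round(x)) % img.shape[1]]
    return cv2.equalizeHist(out)  # simple contrast enhancement of the normalized strip
```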
S102, scaling the first iris image, removing a background area exceeding a preset proportion to obtain a first iris image with the background removed, and weakening noise of the first iris image by Gaussian filtering to obtain a preprocessed first iris image;
The method comprises: enhancing the scaled first iris image to obtain an enhanced first iris image; determining an anchor frame for the iris in the first iris image by a K-means algorithm; and scaling the enhanced iris image according to the aspect ratio of the anchor-framed iris image to obtain the processed iris image.
In the actual iris acquisition process, because of the special physiological structure of the human eye and external factors such as dust in the surrounding environment, eyelid occlusion during blinking causes partial loss of the iris region, and overly dense eyelashes occlude the iris texture, directly affecting the recognition result; the occluded background portion therefore needs to be removed.
In terms of noise attenuation, the background-removed first iris image may be Gaussian filtered. After the previous operations the background-removed first iris image can be read, but its noise would be amplified by the subsequent operations, so a Gaussian filter is chosen to filter the image, which effectively segments eyelid and eyelash noise regions. In this step, Gaussian filtering scans every pixel of the image with a convolution kernel, multiplying the neighborhood pixel values by the weights at the corresponding positions and summing; the whole process can be regarded as convolving the image with a Gaussian normal distribution. A 3×3 Gaussian kernel with variance 2 may be selected for the filtering in this step.
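A minimal sketch of this filtering step; note that cv2.GaussianBlur takes the standard deviation rather than the variance, so variance 2 becomes sigma = √2:

```python
import cv2
import numpy as np

# Gaussian smoothing as described: a 3x3 kernel with variance 2
# (cv2.GaussianBlur expects the standard deviation, hence sigma = sqrt(2)).
def denoise_iris(first_iris_img):
    return cv2.GaussianBlur(first_iris_img, ksize=(3, 3), sigmaX=np.sqrt(2))
```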
S103, evaluating the preprocessed first iris image by a sharpness evaluation method based on the Tenengrad gradient function, removing blurred and severely blurred images from the first iris image, and retaining the clear images, recorded as second iris images, wherein the evaluation method extracts the gradient magnitude of the image with the Sobel operator and accumulates it to calculate the sharpness of the image.
The sharpness evaluation based on the Tenengrad gradient function is calculated as follows:

$$S(x,y)=\sqrt{G_x(x,y)^2+G_y(x,y)^2},\qquad F=\sum_{x=1}^{M}\sum_{y=1}^{N}\left[S(x,y)\right]^2$$

where S(x, y) is the gradient magnitude of the preprocessed first iris image at a point, Gx and Gy are the horizontal and vertical gradients extracted with the Sobel operator, F is the sharpness evaluation value of the preprocessed first iris image, and M and N denote the resolution of the preprocessed first iris image. The Tenengrad gradient function extracts the gradient magnitude of the image via the Sobel operator, computing the horizontal and vertical gradients separately; for the same image, the higher the gradient values, the clearer the image.
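A minimal sketch of the Tenengrad screening under the formula above; the pass/fail threshold is application-specific and assumed here:

```python
import cv2
import numpy as np

def tenengrad_score(gray):
    """Tenengrad sharpness: accumulate squared Sobel gradient magnitudes."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient Gx
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient Gy
    s = np.sqrt(gx ** 2 + gy ** 2)                   # S(x, y)
    return float(np.sum(s ** 2))                     # F; higher means sharper

def keep_sharp_images(images, thresh):
    # thresh is application-specific; the patent does not give a value.
    return [im for im in images if tenengrad_score(im) > thresh]
```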
It can be understood that an average gray value is obtained after processing with the Sobel operator; the larger the value, the clearer the image. The purpose of this algorithm is to rapidly screen out over-blurred or severely blurred images and select the clear ones, recorded as second iris images. In particular, for eyelash segmentation, textured and non-textured regions may also be separated based on an energy-minimization image segmentation algorithm, using pixel intensity values to segment the pupil, iris and background.
S200, performing image recognition on the first texture image based on a depthwise separable convolution structure to obtain texture features in the first texture image.
It will be appreciated that the present step S200 includes steps S201, S202, S203 and S204, wherein:
S201, extracting features of the first texture image through a depthwise separable convolution structure to obtain a first feature image, wherein the first feature image comprises a speckle feature image, a filament feature image and a stripe feature image;
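Since the method relies on a depthwise separable convolution structure without detailing it, a minimal PyTorch sketch of such a block may help; the channel counts and input size are illustrative:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) convolution
    followed by a 1x1 (pointwise) convolution, which cuts parameters and
    computation compared with a standard convolution."""
    def __init__(self, in_ch, out_ch, kernel=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel, padding=kernel // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# e.g. a 3-channel texture image -> 16 feature maps (channel count is illustrative)
features = DepthwiseSeparableConv(3, 16)(torch.randn(1, 3, 64, 512))
```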
It should be noted that when the nerves are tense or fatigued, spots appear where the tissue fibers are weak or short of oxygen, and the filaments and stripes, light white in color, increase. When the fiber density of the iris is very sparse and without luster, the iris likewise reflects a fatigued state and poor resistance to disease; when the iris fibers are as tight and lustrous as silk, the target object is not in a fatigued state. Extracting the features of the first texture image is therefore important.
S202, respectively carrying out gray level conversion on a speckle characteristic image, a filament characteristic image and a stripe characteristic image to obtain a speckle gray level image, a filament gray level image and a stripe gray level image which correspond to each other, and respectively connecting pixel points with the same gray level value in the speckle gray level image, the filament gray level image and the stripe gray level image, wherein a linear interpolation method is adopted to carry out interpolation processing on the connecting lines to obtain a first speckle gray level image, a first filament gray level image and a first stripe gray level image;
It can be understood that by performing texture extraction on the first texture image, the speckle characteristic image, the filament characteristic image and the stripe characteristic image are rapidly determined, and further, the speckle gray level image, the filament gray level image and the stripe gray level image are determined, so that preparation is made for later determination of the texture characteristics.
When immunity is poor, that is, when the iris reflects fatigue, the iris fibers (the filaments) appear loose like sackcloth (density grade 4), spots increase, and stripes appear as white lines; if the body's tissue functions have strong immunity, that is, when not fatigued, the iris fibers combine tightly like silk, almost no spots appear, stripes decrease, and the density is grade 1, indicating good health.
S203, fusing the first speckle gray level image, the first filament gray level image and the first stripe gray level image to obtain a second characteristic image, performing edge detection on the second characteristic image, and converting an image outside the second characteristic image into white;
S204, clustering the R, G, B components of all pixel points in the second characteristic image respectively, averaging the center points of all the obtained clusters, and taking the average value as the texture feature of the first texture image.
It can be understood that after edge detection is performed on the second feature image in this step, the image outside the second feature image is converted to white; other colors could also be used, but white is convenient for observation and prevents color-feature errors from arising during the extraction of the second feature image.
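A minimal sketch of the clustering in S204, assuming k-means with an illustrative cluster count (the patent fixes neither the clustering algorithm nor k):

```python
import numpy as np
from sklearn.cluster import KMeans

def texture_feature_from_rgb(second_feature_img, k=4):
    """Cluster the R, G, B values of all pixels and average the cluster
    centers; k = 4 is an illustrative choice, not from the patent."""
    pixels = second_feature_img.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_.mean(axis=0)  # one (R, G, B) mean of the k centers
```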
In step S203, edge detection is performed on the second feature image, and an image outside the second feature image is converted into white, which includes S2031, S2032, S2033, and S2034, where:
s2031, denoising the second characteristic image to obtain a denoised second characteristic image, and calculating the pixel intensity gradient of the denoised second characteristic image;
s2032, performing non-maximum suppression on the denoised pixels of the denoised second feature image according to the pixel intensity gradient to obtain an edge image of the denoised second feature image;
Denoising the second characteristic image improves the sharpness and distinguishability of the image and thus the effect of image recognition, detection and analysis; non-maximum suppression selects an optimal bounding box from a group of overlapping boxes and removes the remaining redundant detection boxes that overlap the maximum too much, yielding the edge image of the second characteristic image.
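The denoise, gradient and non-maximum-suppression pipeline of S2031 and S2032 matches the classical Canny detector, so a sketch may simply lean on cv2.Canny; the blur kernel and hysteresis thresholds are illustrative:

```python
import cv2

def edge_image(second_feature_img):
    """Denoise -> intensity gradient -> non-maximum suppression, realized
    here with the classical Canny detector as a stand-in; the 50/150
    hysteresis thresholds are illustrative."""
    gray = cv2.cvtColor(second_feature_img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # denoising step
    return cv2.Canny(blurred, 50, 150)            # gradient + NMS + hysteresis
```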
S2033, decomposing the edge image by a wavelet transformation method to obtain three high-frequency components after the decomposition processing, namely a spot edge image, a filament edge image and a stripe edge image, wherein the spot edge image comprises the low-frequency component of the spots in the edge image in the horizontal direction and the high-frequency component in the vertical direction, the filament edge image comprises the high-frequency component of the filaments in the edge image in the horizontal direction and the low-frequency component in the vertical direction, and the stripe edge image comprises the high-frequency components of the stripes in the edge image in both the horizontal and vertical directions, and the wavelet function adopted by the wavelet transformation method is the Mexican hat function;
The wavelet function used may also be the Morlet wavelet, the sym6 wavelet, or the like, in addition to the Mexican hat function. Specifically, the high-frequency components correspond to the edges, details and noise of the image, so the edge image needs to be decomposed and enhanced; after decomposition, applying different enhancement methods to the components yields higher-quality enhanced high-frequency components and thus a high-quality enhanced image, further improving image quality.
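A sketch of the S2033 decomposition using pywt. Since pywt.dwt2 requires a discrete wavelet and the Mexican hat is a continuous wavelet (normally used with pywt.cwt), the sym6 wavelet named above as an alternative is used here; the mapping of detail sub-bands to spot, filament and stripe images follows the description above and is an assumption:

```python
import pywt

def decompose_edges(edge_img):
    """One-level 2-D DWT of the edge image into three detail sub-bands."""
    _, (lh, hl, hh) = pywt.dwt2(edge_img, 'sym6')
    spot_edges = lh      # treated as low-frequency horizontal, high-frequency vertical
    filament_edges = hl  # treated as high-frequency horizontal, low-frequency vertical
    stripe_edges = hh    # high-frequency in both directions
    return spot_edges, filament_edges, stripe_edges
```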
S2034, respectively carrying out tracking processing on the spot edge image, the filament edge image and the stripe edge image to obtain a first edge image, a second edge image and a third edge image, and converting images except the first edge image, the second edge image and the third edge image into white.
It should be noted that, the purpose of edge detection is to identify the points with obvious brightness changes in the first edge image, the second edge image and the third edge image, and convert the images outside the first edge image, the second edge image and the third edge image into white, which is clearer, facilitates the subsequent observation and prevents errors.
S300, extracting features of the second texture image by utilizing the Yolov4 model to obtain color features of the second texture image, wherein the Yolov4 model comprises selection information of a backbone network, a threshold value for non-maximum suppression and prior frame information.
It will be appreciated that the present step S300 includes S301, S302 and S303, where:
s301, based on a deep learning target detection algorithm, adjusting a second texture image to a preset size, inputting the second texture image into a Yolov4 target detection model after training, determining a rectangular frame, and recording a region surrounded by the rectangular frame as a pigment region and a blood vessel region, wherein determining the rectangular frame comprises: performing sliding processing on the second texture image by utilizing a sliding window, determining a plurality of first center points, performing mapping processing on the second iris image based on the plurality of first center points, and determining a plurality of second center points; generating a plurality of candidate anchor frames at each second center point based on anchor frames of a preset size, wherein the anchor frames of the preset size are obtained from training data of a Yolov4 target detection model, and determining pigment areas and blood vessel areas in the second texture image according to the plurality of candidate anchor frames;
It should be noted that Yolo is an anchor-based neural network target detection algorithm, trained with methods such as CutMix, Mosaic and Self-Adversarial Training. Specifically, the picture may be resized to a preset size, for example 520×520, and input into the trained Yolov4 target detection model to obtain a 2×3 rectangular frame; the region enclosed by the rectangular frame is then taken as the pigment region and blood vessel region. Target detection algorithms divide into one-stage algorithms, such as the Yolo and SSD algorithms, and two-stage algorithms, such as the R-CNN series, which are not detailed here. Specific parameters in Yolov4, such as the choice of feature extraction backbone network, the non-maximum suppression threshold and the prior frames, can be adjusted to ideal values by the operator, without specific requirements. In this step a target detection model is added to identify and locate the pigment region and blood vessel region, offering higher processing speed and positioning accuracy than traditional recognition methods.
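A self-contained sketch of the candidate-anchor generation described above; the grid size and anchor sizes are illustrative stand-ins for the priors learned from Yolov4 training data:

```python
import numpy as np

def candidate_anchors(img_w, img_h, grid=13, anchor_sizes=((32, 32), (64, 48), (96, 96))):
    """Generate candidate anchor boxes at every grid-cell center mapped back
    to image coordinates (the 'second center points' of the description)."""
    boxes = []
    for gy in range(grid):
        for gx in range(grid):
            # Grid-cell center in image coordinates.
            cx = (gx + 0.5) * img_w / grid
            cy = (gy + 0.5) * img_h / grid
            for w, h in anchor_sizes:
                boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)  # shape (grid*grid*len(anchor_sizes), 4) as x1, y1, x2, y2
```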
S302, performing component extraction on the pigment region to obtain a hue value, a saturation value and a brightness value, recorded as first color quantization extraction values, and performing component extraction on the blood vessel region to obtain a hue value, a saturation value and a brightness value, recorded as second color quantization extraction values;
The hue value, saturation value, and brightness value in this step are extracted as quantized values of the color feature.
S303, equally dividing the first color quantization extraction value and the second color quantization extraction value to obtain at least four equal parts of color quantization extraction values, and combining the color quantization extraction values of each part to obtain the color characteristics of the second texture image.
Histogram statistics or equal division is performed on the first and second color quantization extraction values, and the resulting color quantization extraction values are combined into a multi-dimensional joint color feature, improving the accuracy of image color recognition. In this embodiment, eye fatigue may cause iris melanin deposition; when emotion is excited, blood circulation accelerates, the blood vessels in the iris dilate and the eye color lightens, and fatigue can likewise cause iris congestion. Recognizing the color features of the second texture image therefore lays the foundation for subsequently judging whether a miner is fatigued.
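A minimal sketch of S302 and S303, assuming OpenCV's HSV value ranges and region masks produced by the detection step; the four-part quantization follows the "at least four equal parts" above, and the exact binning is an assumption:

```python
import cv2
import numpy as np

def region_color_feature(img_bgr, pigment_mask, vessel_mask, parts=4):
    """Extract mean hue/saturation/value per region, quantize each channel
    into `parts` equal bins, and concatenate the two regions' codes."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)

    def quantized(mask):
        h, s, v = [float(hsv[..., c][mask > 0].mean()) for c in range(3)]
        # OpenCV ranges: H in [0, 180), S and V in [0, 256).
        return [int(h // (180 / parts)), int(s // (256 / parts)), int(v // (256 / parts))]

    return np.array(quantized(pigment_mask) + quantized(vessel_mask))
```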
S400, combining the texture features and the color features by adopting a threshold binarization method to realize dimension reduction processing, and splicing the processed texture features and color features to obtain the spliced iris integral features.
It can be understood that in this step the texture features and color features are combined to generate a feature map, which has a clear texture structure and obvious boundaries, and the feature map is dimension-reduced using 8-bit coding, taking the pixel median 128 of the image as the threshold. The calculation formula is:

$$B(i,j)=\begin{cases}1, & F(i,j)\geq 128\\ 0, & F(i,j)<128\end{cases}$$

where F(i, j) is the feature map, B(i, j) is the binarized feature map, and (i, j) is the image position. The threshold-binarized feature map keeps the texture information represented in the original feature map while greatly reducing the feature dimension, realizing dimension reduction, improving calculation efficiency, benefiting the robustness of the overall recognition algorithm, preventing loss of the spatial information of the original features, and effectively characterizing changes in the iris texture.
It should be noted that splicing the texture features and the color features yields the spliced iris integral features, which improves the accuracy of iris detection, i.e. ensures highly accurate detection of iris feature conditions across all different pixel-area proportions. The principle of iris recognition technology is based on the common characteristics of human eyes: an image processing method is used to locate the iris region and extract its texture features, which are spliced to obtain the overall iris features, and the identity of miners is determined by comparing the similarity of different iris texture features, laying a solid foundation for the subsequent recognition of fatigue degree.
S500, inputting the integral characteristics of the spliced iris into a preset fatigue degree detection model for fatigue recognition, judging whether the iris of the target object belongs to a fatigue mode, if so, judging the fatigue degree of the target object and giving an alarm, and if not, judging that the target object is in a safe state.
It can be understood that this step takes the iris diameter as one scoring point: the average iris diameter of a normal adult is 11.4 to 12 mm, and if the iris contracts abnormally or there are problems of nervous system impairment and fatigue, the iris diameter will be too small, while with no abnormality the diameter displays normally. Light response describes the iris's reaction to light stimuli, generally including both contraction and dilation; a normal light response indicates a normal contractive reaction to light stimuli, while an abnormal response indicates nervous system impairment or is regarded as fatigue. According to different thresholds the method distinguishes three modes, namely common fatigue, moderate fatigue and severe fatigue, fully describing the state of the iris, so that the iris feature conditions under different fatigue degrees can be better identified and the accuracy of fatigue degree detection improved; the fatigue diagnosis and severity of the object to be identified are recorded according to the three threshold conditions. If the detection object does not belong to a fatigue mode, the object to be identified is safe and may work in the well.
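A minimal sketch of the three-threshold mode decision, assuming the fatigue degree detection model outputs a scalar score in [0, 1]; the threshold values are illustrative, not from the patent:

```python
def fatigue_mode(score, t_mild=0.4, t_moderate=0.6, t_severe=0.8):
    """Map a model's fatigue score to the three fatigue modes described above."""
    if score < t_mild:
        return "safe"              # may descend into the well
    if score < t_moderate:
        return "common fatigue"
    if score < t_severe:
        return "moderate fatigue"
    return "severe fatigue"        # alarm and bar entry
```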
Example 2
As shown in fig. 2, the embodiment provides a mining iris safety recognition detection system, and the system described with reference to fig. 2 includes an acquisition module 701, a recognition module 702, an extraction module 703, a processing module 704 and a judgment module 705, where:
the acquisition module 701: used for acquiring an iris image of an eye of a target object, preprocessing the iris image to obtain a preprocessed first iris image, and extracting a quality-qualified image from the first iris image by an overall image sharpness evaluation method, recording it as a second iris image, wherein the second iris image comprises a first texture image and a second texture image, the first texture image comprises spots, filaments and stripes, and the second texture image comprises pigments and blood vessels;
the identification module 702: used for performing image recognition on the first texture image based on a depthwise separable convolution structure to obtain texture features in the first texture image;
the extraction module 703: used for extracting features of the second texture image by utilizing a Yolov4 model to obtain color features of the second texture image, wherein the Yolov4 model comprises selection information of a backbone network, a threshold value for non-maximum suppression and prior frame information;
the processing module 704: used for combining the texture features and the color features by a threshold binarization method to realize dimension reduction, and splicing the processed texture features and color features to obtain the spliced iris integral features;
the judging module 705: used for inputting the spliced iris integral features into a preset fatigue degree detection model for fatigue recognition, judging whether the iris of the target object belongs to a fatigue mode; if so, judging the fatigue degree of the target object and giving an alarm, and if not, judging that the target object is in a safe state.
Specifically, the acquisition module 701 includes a first processing unit 7011, a second processing unit 7012, and an evaluation unit 7013, wherein:
First processing unit 7011: used for acquiring iris images of the eyes of the target object through a camera device, and performing iris ring positioning, iris ring normalization and normalized iris image enhancement on the iris images to obtain the image-enhanced first iris image;
Second processing unit 7012: used for scaling the first iris image, removing a background area exceeding a preset proportion to obtain a background-removed first iris image, and attenuating the noise of the first iris image by Gaussian filtering to obtain the preprocessed first iris image;
Evaluation unit 7013: used for evaluating the preprocessed first iris image by a sharpness evaluation method based on the Tenengrad gradient function, removing blurred and severely blurred images from the first iris image, and retaining the clear images, recorded as second iris images, wherein the evaluation method extracts the gradient magnitude of the image with the Sobel operator and accumulates it to calculate the sharpness of the image.
Specifically, the identification module 702 includes a first extraction unit 7021, a transformation unit 7022, a fusion unit 7023, and a clustering unit 7024, wherein:
First extraction unit 7021: used for extracting features of the first texture image through a depthwise separable convolution structure to obtain a first feature image, wherein the first feature image comprises a speckle feature image, a filament feature image and a stripe feature image;
Conversion unit 7022: used for performing gray level transformation on the speckle feature image, the filament feature image and the stripe feature image respectively to obtain the corresponding speckle gray level image, filament gray level image and stripe gray level image, and connecting the pixel points with the same gray value in the speckle gray level image, the filament gray level image and the stripe gray level image respectively, wherein a linear interpolation method is adopted to interpolate the connecting lines to obtain a first speckle gray level image, a first filament gray level image and a first stripe gray level image;
Fusion unit 7023: used for fusing the first speckle gray level image, the first filament gray level image and the first stripe gray level image to obtain a second characteristic image, performing edge detection on the second characteristic image, and converting the image outside the second characteristic image to white;
Clustering unit 7024: used for clustering the R, G, B components of all pixel points in the second characteristic image respectively, averaging the center points of all the obtained clusters, and taking the average value as the texture feature of the first texture image.
Specifically, the fusing unit 7023 includes a denoising unit 70231, a third processing unit 70232, a fourth processing unit 70233, and a fifth processing unit 70234, wherein:
Denoising unit 70231: used for denoising the second characteristic image to obtain a denoised second characteristic image, and calculating the pixel intensity gradient of the denoised second characteristic image;
Third processing unit 70232: used for performing non-maximum suppression on the pixels of the denoised second characteristic image according to the pixel intensity gradient to obtain an edge image of the denoised second characteristic image;
Fourth processing unit 70233: used for decomposing the edge image by a wavelet transformation method to obtain three high-frequency components after the decomposition processing, namely a spot edge image, a filament edge image and a stripe edge image, wherein the spot edge image comprises the low-frequency component of the spots in the edge image in the horizontal direction and the high-frequency component in the vertical direction, the filament edge image comprises the high-frequency component of the filaments in the edge image in the horizontal direction and the low-frequency component in the vertical direction, and the stripe edge image comprises the high-frequency components of the stripes in the edge image in both the horizontal and vertical directions, the wavelet function adopted by the wavelet transformation method being the Mexican hat function;
Fifth processing unit 70234: used for tracking the spot edge image, the filament edge image and the stripe edge image respectively to obtain a first edge image, a second edge image and a third edge image, and converting the images other than the first edge image, the second edge image and the third edge image to white.
Specifically, the extraction module 703 includes a training unit 7031, a second extraction unit 7032, and a dividing unit 7033, wherein:
Training unit 7031: used for adjusting the second texture image to a preset size based on a deep learning target detection algorithm, inputting it into the trained Yolov4 target detection model, determining a rectangular frame, and recording the region enclosed by the rectangular frame as the pigment region and blood vessel region, wherein determining the rectangular frame comprises: performing sliding processing on the second texture image with a sliding window, determining a plurality of first center points, performing mapping processing on the second iris image based on the plurality of first center points, and determining a plurality of second center points; and generating a plurality of candidate anchor frames at each second center point based on anchor frames of a preset size, wherein the anchor frames of the preset size are obtained from the training data of the Yolov4 target detection model, and determining the pigment region and blood vessel region in the second texture image according to the plurality of candidate anchor frames;
Second extraction unit 7032: used for performing component extraction on the pigment region to obtain a hue value, a saturation value and a brightness value, recorded as first color quantization extraction values, and performing component extraction on the blood vessel region to obtain a hue value, a saturation value and a brightness value, recorded as second color quantization extraction values;
Division unit 7033: used for equally dividing the first color quantization extraction values and the second color quantization extraction values respectively to obtain at least four equal parts of color quantization extraction values, and combining the color quantization extraction values of each part to obtain the color features of the second texture image.
It should be noted that, regarding the system in the above embodiment, the specific manner in which the respective modules perform the operations has been described in detail in the embodiment regarding the method, and will not be described in detail herein.
Example 3
Corresponding to the above method embodiment, a readable storage medium is also provided in this embodiment, and a readable storage medium described below and a mining iris safety recognition detection method described above may be referred to correspondingly.
The readable storage medium stores a computer program which when executed by a processor realizes the steps of the mining iris safety recognition detection method of the method embodiment.
The readable storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, and the like.
Example 4
Fig. 3 is a block diagram of a terminal according to an embodiment of the present application. As shown in the figure, the terminal 4 of this embodiment includes: at least one processor 40 (only one is shown in fig. 3), a memory 41 and a computer program 42 stored in the memory 41 and executable on the at least one processor 40, the processor 40 implementing the steps in any of the various method embodiments described above when executing the computer program 42.
The terminal 4 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The terminal 4 may include, but is not limited to, a processor 40, a memory 41. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the terminal 4 and is not limiting of the terminal 4, and may include more or fewer components than shown, or may combine some components, or different components, e.g., the terminal may further include input and output devices, network access devices, buses, etc. The processor 40 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or a memory of the terminal 4. The memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the terminal 4. The memory 41 is used for storing the computer program as well as other programs and data required by the terminal. The memory 41 may also be used for temporarily storing data that has been output or is to be output.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principle of the present invention shall fall within its protection scope.
The foregoing is merely illustrative of the present invention, which is not limited thereto; any variation or substitution that a person skilled in the art could readily conceive falls within its scope. The protection scope of the invention is therefore subject to the protection scope of the claims.

Claims (7)

1. A mining iris safety recognition detection method, characterized by comprising the following steps:
acquiring an iris image of an eye of a target object, preprocessing the iris image to obtain a preprocessed first iris image, and extracting an image with qualified quality from the first iris image by using an image overall definition evaluation method, the extracted image being recorded as a second iris image, wherein the second iris image comprises a first texture image and a second texture image, the first texture image comprises spots, filaments and stripes, and the second texture image comprises pigments and blood vessels;
performing image recognition on the first texture image based on a depth separable convolution structure to obtain texture features in the first texture image;
extracting features of the second texture image by using a Yolov4 model to obtain color features of the second texture image, wherein the Yolov4 model comprises selection information of a backbone network, a threshold for non-maximum suppression, and prior frame information;
combining the texture features and the color features by a threshold binarization method to achieve dimension reduction processing, and splicing the processed texture features and color features to obtain spliced iris integral features;
inputting the spliced iris integral characteristics into a preset fatigue degree detection model for fatigue recognition, judging whether the iris of the target object belongs to a fatigue mode, if so, judging the fatigue degree of the target object and giving an alarm, and if not, judging that the target object is in a safe state;
wherein performing image recognition on the first texture image based on the depth separable convolution structure to obtain the texture features in the first texture image comprises the following steps: performing feature extraction on the first texture image through the depth separable convolution structure to obtain a first feature image, wherein the first feature image comprises a spot feature image, a filament feature image and a stripe feature image; performing gray level transformation on the spot feature image, the filament feature image and the stripe feature image respectively to obtain a corresponding spot gray level image, filament gray level image and stripe gray level image, and connecting pixel points with the same gray level value in each of the three gray level images respectively, wherein a linear interpolation method is used to interpolate along the connecting lines to obtain a first spot gray level image, a first filament gray level image and a first stripe gray level image;
fusing the first spot gray level image, the first filament gray level image and the first stripe gray level image to obtain a second feature image, performing edge detection on the second feature image, and converting the image outside the second feature image into white;
clustering the R, G and B components of all pixel points in the second feature image respectively, averaging the center points of all resulting clusters, and taking the average value as the texture feature of the first texture image;
wherein extracting the features of the second texture image by using the Yolov4 model to obtain the color features of the second texture image comprises: based on a deep learning target detection algorithm, adjusting the second texture image to a preset size and inputting it into a trained Yolov4 target detection model to determine a rectangular frame, and recording the regions surrounded by the rectangular frame as a pigment region and a blood vessel region, wherein determining the rectangular frame comprises: performing sliding processing on the second texture image by using a sliding window to determine a plurality of first center points, and performing mapping processing on the second iris image based on the plurality of first center points to determine a plurality of second center points;
generating a plurality of candidate anchor frames at each second center point based on anchor frames of a preset size, wherein the anchor frames of the preset size are obtained from the training data of the Yolov4 target detection model, and determining the pigment region and the blood vessel region in the second texture image according to the plurality of candidate anchor frames;
performing component extraction on the pigment region to obtain a hue value, a saturation value and a brightness value, recorded as first color quantization extraction values, and performing component extraction on the blood vessel region to obtain a hue value, a saturation value and a brightness value, recorded as second color quantization extraction values;
and equally dividing the first color quantization extraction values and the second color quantization extraction values respectively to obtain at least four equal parts of color quantization extraction values, and combining the color quantization extraction values of each part to obtain the color features of the second texture image.
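As a non-authoritative illustration of the depth separable convolution structure recited in claim 1, a minimal PyTorch sketch of one depthwise separable block follows; the channel counts, kernel size and input resolution are assumptions, since the patent does not disclose the actual network architecture:

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # A depthwise convolution (one filter per input channel) followed by a
    # 1x1 pointwise convolution that mixes channels; this factorization uses
    # far fewer parameters than a standard convolution of the same size.
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Illustrative use on a single-channel iris texture image, producing three
# feature maps loosely matching the spot, filament and stripe feature images:
block = DepthwiseSeparableConv(in_ch=1, out_ch=3)
y = block(torch.randn(1, 1, 128, 128))  # shape: (1, 3, 128, 128)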
2. The mining iris safety recognition detection method according to claim 1, wherein acquiring the iris image of the eye of the target object, preprocessing the iris image to obtain the preprocessed first iris image, and extracting the image with qualified quality from the first iris image as the second iris image by using the image overall definition evaluation method comprises:
acquiring the iris image of the eye of the target object through a camera device, and performing iris ring positioning, iris ring normalization and normalized iris image enhancement on the iris image to obtain an image-enhanced first iris image;
scaling the first iris image and removing a background area exceeding a preset proportion to obtain a background-removed first iris image, and attenuating noise in the first iris image by Gaussian filtering to obtain the preprocessed first iris image;
and evaluating the preprocessed first iris image by using a definition evaluation method based on the Tenengrad gradient function, removing blurred and severely blurred images from the first iris image, and retaining the clear images, recorded as the second iris image, wherein the evaluation method extracts the gradient amplitude of the image by using a Sobel operator and accumulates it to calculate the definition of the image.
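The Tenengrad evaluation recited in claim 2 can be sketched as follows; the mean of the squared Sobel gradient magnitude is one common variant of the accumulation, and the qualification threshold is a placeholder assumption, since the patent gives no numeric value:

import cv2
import numpy as np

def tenengrad_sharpness(gray):
    # Accumulate the squared Sobel gradient magnitude over the image;
    # larger values indicate a sharper (better focused) image.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))

def is_qualified(gray, threshold=50.0):
    # threshold is a placeholder; in practice it would be tuned on a set of
    # known-sharp and known-blurred iris images.
    return tenengrad_sharpness(gray) >= threshold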
3. The mining iris safety recognition detection method according to claim 1, wherein performing edge detection on the second feature image and converting the image outside the second feature image into white comprises:
denoising the second feature image to obtain a denoised second feature image, and calculating the pixel intensity gradient of the denoised second feature image;
performing non-maximum suppression on the pixels of the denoised second feature image according to the pixel intensity gradient to obtain an edge image of the denoised second feature image;
decomposing the edge image by a wavelet transform method to obtain three high-frequency components after decomposition, namely a spot edge image, a filament edge image and a stripe edge image, wherein the spot edge image comprises the low-frequency component of spots in the edge image in the horizontal direction and the high-frequency component in the vertical direction, the filament edge image comprises the high-frequency component of filaments in the edge image in the horizontal direction and the low-frequency component in the vertical direction, and the stripe edge image comprises the high-frequency components of stripes in the edge image in both the horizontal and vertical directions, and the wavelet function used by the wavelet transform method is a Mexican hat function;
and tracking the spot edge image, the filament edge image and the stripe edge image respectively to obtain a first edge image, a second edge image and a third edge image, and converting the images other than the first edge image, the second edge image and the third edge image into white.
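A rough sketch of the edge pipeline of claim 3, using OpenCV's Canny detector (which performs the gradient and non-maximum suppression steps internally) and PyWavelets for the sub-band decomposition; PyWavelets' discrete transform does not offer the Mexican hat wavelet named in the claim, so 'haar' stands in here, and the mapping of the three high-frequency sub-bands to spot, filament and stripe images is an assumed reading of the component description:

import cv2
import pywt  # PyWavelets

def edge_subbands(gray):
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)
    # Canny: intensity gradient, non-maximum suppression, hysteresis
    # thresholding; the 50/150 thresholds are assumptions.
    edges = cv2.Canny(denoised, 50, 150)
    # One-level 2-D DWT; cH, cV, cD are the three high-frequency sub-bands.
    _, (cH, cV, cD) = pywt.dwt2(edges.astype(float), 'haar')
    spot_edges = cH      # low-frequency horizontal / high-frequency vertical
    filament_edges = cV  # high-frequency horizontal / low-frequency vertical
    stripe_edges = cD    # high-frequency in both directions
    return spot_edges, filament_edges, stripe_edges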
4. A mining iris safety recognition detection system based on the mining iris safety recognition detection method of claim 1, characterized by comprising:
an acquisition module: configured to acquire an iris image of an eye of a target object, preprocess the iris image to obtain a preprocessed first iris image, and extract an image with qualified quality from the first iris image by using an image overall definition evaluation method, the extracted image being recorded as a second iris image, wherein the second iris image comprises a first texture image and a second texture image, the first texture image comprises spots, filaments and stripes, and the second texture image comprises pigments and blood vessels;
an identification module: configured to perform image recognition on the first texture image based on a depth separable convolution structure to obtain texture features in the first texture image;
an extraction module: configured to extract features of the second texture image by using a Yolov4 model to obtain color features of the second texture image, wherein the Yolov4 model comprises selection information of a backbone network, a threshold for non-maximum suppression, and prior frame information;
a processing module: configured to combine the texture features and the color features by a threshold binarization method to achieve dimension reduction processing, and splice the processed texture features and color features to obtain spliced iris integral features;
a judging module: configured to input the spliced iris integral features into a preset fatigue degree detection model for fatigue recognition and judge whether the iris of the target object belongs to a fatigue mode; if so, judge the fatigue degree of the target object and give an alarm; if not, judge that the target object is in a safe state;
wherein the identification module comprises:
a first extraction unit: configured to perform feature extraction on the first texture image through the depth separable convolution structure to obtain a first feature image, wherein the first feature image comprises a spot feature image, a filament feature image and a stripe feature image;
a conversion unit: configured to perform gray level transformation on the spot feature image, the filament feature image and the stripe feature image respectively to obtain a corresponding spot gray level image, filament gray level image and stripe gray level image, and to connect pixel points with the same gray level value in each of the three gray level images respectively, wherein a linear interpolation method is used to interpolate along the connecting lines to obtain a first spot gray level image, a first filament gray level image and a first stripe gray level image;
a fusion unit: configured to fuse the first spot gray level image, the first filament gray level image and the first stripe gray level image into a second feature image, perform edge detection on the second feature image, and convert the image outside the second feature image into white;
a clustering unit: configured to cluster the R, G and B components of all pixel points in the second feature image respectively, average the center points of all resulting clusters, and take the average value as the texture feature of the first texture image;
wherein the extraction module comprises:
a training unit: configured to adjust the second texture image to a preset size based on a deep learning target detection algorithm, input it into a trained Yolov4 target detection model to determine a rectangular frame, and record the regions surrounded by the rectangular frame as a pigment region and a blood vessel region, wherein determining the rectangular frame comprises: performing sliding processing on the second texture image by using a sliding window to determine a plurality of first center points, and performing mapping processing on the second iris image based on the plurality of first center points to determine a plurality of second center points; generating a plurality of candidate anchor frames at each second center point based on anchor frames of a preset size, wherein the anchor frames of the preset size are obtained from the training data of the Yolov4 target detection model, and determining the pigment region and the blood vessel region in the second texture image according to the plurality of candidate anchor frames;
a second extraction unit: configured to perform component extraction on the pigment region to obtain a hue value, a saturation value and a brightness value, recorded as first color quantization extraction values, and to perform component extraction on the blood vessel region to obtain a hue value, a saturation value and a brightness value, recorded as second color quantization extraction values;
a dividing unit: configured to equally divide the first color quantization extraction values and the second color quantization extraction values respectively to obtain at least four equal parts of color quantization extraction values, and to combine the color quantization extraction values of each part to obtain the color features of the second texture image.
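For illustration, the clustering unit recited above can be sketched as follows, assuming K-means is applied to each colour channel taken separately with three clusters; the claim fixes neither the clustering algorithm nor the cluster count:

import cv2
import numpy as np

def texture_feature(image_bgr, k=3):
    # Cluster each B/G/R channel's pixel values with K-means and use the
    # mean of the resulting cluster centres as that channel's feature.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    feature = []
    for ch in range(3):
        samples = image_bgr[:, :, ch].reshape(-1, 1).astype(np.float32)
        _, _, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                   cv2.KMEANS_PP_CENTERS)
        feature.append(float(centers.mean()))
    return np.array(feature, dtype=np.float32)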
5. The mining iris safety recognition detection system according to claim 4, wherein the acquisition module comprises:
a first processing unit: configured to acquire an iris image of the eye of the target object through a camera device, and perform iris ring positioning, iris ring normalization and normalized iris image enhancement on the iris image to obtain an image-enhanced first iris image;
a second processing unit: configured to scale the first iris image, remove a background area exceeding a preset proportion to obtain a background-removed first iris image, and attenuate noise in the first iris image by Gaussian filtering to obtain the preprocessed first iris image;
an evaluation unit: configured to evaluate the preprocessed first iris image by a definition evaluation method based on the Tenengrad gradient function, remove blurred and severely blurred images from the first iris image, and retain the clear images, recorded as the second iris image, wherein the evaluation method extracts the gradient amplitude of the image by using a Sobel operator and accumulates it to calculate the definition of the image.
6. A readable storage medium, wherein a computer program is stored on the readable storage medium, and when executed by a processor, the computer program implements the mining iris safety recognition detection method according to any one of claims 1 to 3.
7. A terminal comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the mining iris safety recognition detection method as claimed in any one of claims 1 to 3.
CN202410008035.3A 2024-01-04 2024-01-04 Mining iris safety recognition detection method, system, medium and terminal Active CN117523649B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410008035.3A CN117523649B (en) 2024-01-04 2024-01-04 Mining iris safety recognition detection method, system, medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410008035.3A CN117523649B (en) 2024-01-04 2024-01-04 Mining iris safety recognition detection method, system, medium and terminal

Publications (2)

Publication Number Publication Date
CN117523649A (en) 2024-02-06
CN117523649B (en) 2024-03-15

Family

ID=89753454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410008035.3A Active CN117523649B (en) 2024-01-04 2024-01-04 Mining iris safety recognition detection method, system, medium and terminal

Country Status (1)

Country Link
CN (1) CN117523649B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170091550A1 (en) * 2014-07-15 2017-03-30 Qualcomm Incorporated Multispectral eye analysis for identity authentication
US11074675B2 (en) * 2018-07-31 2021-07-27 Snap Inc. Eye texture inpainting

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101882222A (en) * 2009-06-26 2010-11-10 哈尔滨工业大学 Iris partitioning and sunlight radiating canal extracting method based on basic-element structure definition and region growing technology
CN108256378A (en) * 2016-12-29 2018-07-06 广州映博智能科技有限公司 Driver Fatigue Detection based on eyeball action recognition
CN108720851A (en) * 2018-05-23 2018-11-02 释码融和(上海)信息科技有限公司 A kind of driving condition detection method, mobile terminal and storage medium
CN112711308A (en) * 2020-12-29 2021-04-27 成都科瑞特电气自动化有限公司 Intelligent edge calculation server device for face recognition
CN113505672A (en) * 2021-06-30 2021-10-15 上海聚虹光电科技有限公司 Iris acquisition device, iris acquisition method, electronic device, and readable medium
CN113793336A (en) * 2021-11-17 2021-12-14 成都西交智汇大数据科技有限公司 Method, device and equipment for detecting blood cells and readable storage medium
CN115310061A (en) * 2021-12-27 2022-11-08 重庆科创职业学院 Security computer security authentication system and authentication method
CN115512251A (en) * 2022-11-04 2022-12-23 深圳市瓴鹰智能科技有限公司 Unmanned aerial vehicle low-illumination target tracking method based on double-branch progressive feature enhancement
CN117333359A (en) * 2023-09-08 2024-01-02 西北大学 Mountain-water painting image super-resolution reconstruction method based on separable convolution network
CN117082665A (en) * 2023-10-17 2023-11-17 深圳市帝狼光电有限公司 LED eye-protection desk lamp illumination control method and system

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Deep learning methods for object detection in smart manufacturing; Ahmad H M et al.; Journal of Manufacturing Systems; 2022-12-31; Vol. 64; 181-196 *
A fast driver fatigue detection method; Jiang Wenbo et al.; Electronic Design Engineering; 2015-12-05 (No. 23); 42-44, 47 *
Research on an impact-point localization algorithm based on salient object detection and binocular vision; Zhou Xuan; China Master's Theses Full-text Database (Engineering Science and Technology II); 2022-11-15 (No. 11); C032-4 *
A visible-light iris recognition method based on a convolutional-neural-network-like model; Liu Xiaonan et al.; Chinese Journal of Scientific Instrument; 2017-11-15 (No. 11); 39-46 *
An iris recognition algorithm based on texture direction energy features; Deng Yubo et al.; Computer Engineering and Applications; 2016-08-10; Vol. 53 (No. 15); 196-199 *
Research on lightweight deep networks for fast flame detection; Wang Bin et al.; Journal of Computer Engineering & Applications; 2022-12-31; Vol. 58 (No. 17); 256-262 *

Also Published As

Publication number Publication date
CN117523649A (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN105138954B (en) A kind of image automatic screening inquiry identifying system
CN101689301A (en) Detecting haemorrhagic stroke in ct image data
US20090232397A1 (en) Apparatus and method for processing image
CN104484652A (en) Method for fingerprint recognition
CN106203338B (en) Human eye state method for quickly identifying based on net region segmentation and threshold adaptive
CN114842524B (en) Face false distinguishing method based on irregular significant pixel cluster
Velliangira et al. A novel forgery detection in image frames of the videos using enhanced convolutional neural network in face images
Wazirali et al. Hybrid Feature Extractions and CNN for Enhanced Periocular Identification During Covid-19.
CN114241542A (en) Face recognition method based on image stitching
CN117523649B (en) Mining iris safety recognition detection method, system, medium and terminal
CN110633666A (en) Gesture track recognition method based on finger color patches
Habib et al. Brain tumor segmentation and classification using machine learning
CN110909601A (en) Beautiful pupil identification method and system based on deep learning
CN115995097A (en) Deep learning-based safety helmet wearing standard judging method
CN113486712B (en) Multi-face recognition method, system and medium based on deep learning
CN112418085B (en) Facial expression recognition method under partial shielding working condition
Kaur et al. An efficient scheme for brain tumor detection of MRI brain images using Euclidean distance with FVT
Akila et al. Detection of melanoma skin cancer using segmentation and classification algorithms
CN110264418A (en) Method for enhancing picture contrast, system and device
CN110619696B (en) Vehicle door unlocking method, device, equipment and medium
Xia et al. A multi-scale gated network for retinal hemorrhage detection
Dhiravidachelvi et al. Computerized detection of optic disc in diabetic retinal images using background subtraction model
CN110929681B (en) Wrinkle detection method
Su-qiong et al. Tie-dye technique and pattern features
Li et al. Night Fatigue Driving Detection Algorithm based on Lightweight Zero-DCE

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant