CN116503933A - Periocular feature extraction method and device, electronic equipment and storage medium - Google Patents

Periocular feature extraction method and device, electronic equipment and storage medium

Info

Publication number
CN116503933A
CN116503933A (application CN202310592929.7A)
Authority
CN
China
Prior art keywords
periocular
feature map
feature
key
spatial
Prior art date
Legal status
Granted
Application number
CN202310592929.7A
Other languages
Chinese (zh)
Other versions
CN116503933B (en)
Inventor
张小亮
李茂林
吴明岩
魏衍召
杨占金
戚纪纲
Current Assignee
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd
Priority to CN202310592929.7A
Publication of CN116503933A
Application granted
Publication of CN116503933B
Legal status: Active


Classifications

    • G06V 40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/193: Eye characteristics, e.g. of the iris; preprocessing; feature extraction
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/048: Activation functions
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

This application relates to the field of image processing and provides a periocular feature extraction method and device, an electronic device, and a storage medium. The method and device obtain a more robust periocular feature map that contains both periocular local features and periocular global features.

Description

Periocular feature extraction method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and apparatus for extracting periocular features, an electronic device, and a storage medium.
Background
With the rapid development of technology and the increasing complexity of the social environment, requirements for information security and privacy protection continue to grow. Single-modality biometrics such as fingerprints are increasingly easy to counterfeit and thus less secure, so schemes that combine multiple biometric modalities, for example fingerprints together with facial features, are adopted for better protection. However, facial features degrade under interference such as mask or glasses occlusion, which lowers face recognition performance. The periocular region contains rich color and texture features and, compared with whole-face features, is less affected by aging and expression changes and therefore more stable. Periocular features can thus be combined with single-modality biometrics such as the face or the iris to form multi-modality auxiliary recognition that protects information and privacy.
A periocular image generally contains periocular global features and periocular local features. Periocular global features describe the periocular region as a whole, such as its overall color and shape; periocular local features include details such as eyelash texture. At present, either convolution alone is applied to the periocular image to obtain periocular local features, ignoring the global features, so that the image is characterized only by local features; or a self-attention mechanism alone is applied to obtain periocular global features, ignoring the local features, so that the image is characterized only by global features. Characterizing the periocular region by only local features or only global features is one-sided and less robust. How to obtain a more robust periocular feature map that contains both periocular local features and periocular global features is therefore a problem to be solved.
Disclosure of Invention
In order to obtain a more robust periocular feature map containing periocular local features and periocular global features, the present application provides a periocular feature extraction method and device, an electronic device, and a storage medium.
In a first aspect, the present application provides a method for extracting periocular features, which adopts the following technical scheme:
a periocular feature extraction method comprising:
acquiring a periocular image;
extracting spatial features of the periocular image to obtain at least one spatial feature map, wherein each spatial feature map comprises periocular local features and periocular global features;
determining a screening coefficient corresponding to each spatial feature map based on each spatial feature map, wherein the screening coefficient is used for screening key periocular features in each spatial feature map;
and determining a key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient.
With this scheme, a periocular image is acquired and spatial feature extraction is performed on it to obtain at least one spatial feature map, each containing periocular global features and periocular local features. Because each spatial feature map records both the global and the local feature conditions of the periocular image, a screening coefficient can be determined for each spatial feature map. The screening coefficient is used to screen the key global features and key local features in the at least one spatial feature map; processing the spatial feature map corresponding to the screening coefficient therefore yields those key features, i.e., the key periocular feature map corresponding to the periocular image, which is a more robust periocular feature map.
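For orientation, a minimal PyTorch sketch of how these four steps could fit together is given below. Every function body is a stand-in (the actual extraction and screening structures are detailed later in this description), and all names, channel counts, and shapes are illustrative assumptions rather than the patent's implementation.

```python
import torch
import torch.nn.functional as F

def spatial_feature_extraction(x):
    # Stand-in for step S102: any backbone producing per-channel spatial maps.
    weight = torch.randn(8, x.shape[1], 3, 3)
    return F.conv2d(x, weight, padding=1)               # (N, 8, H, W)

def screening_coefficients(maps):
    # Stand-in for step S103: one coefficient in (0, 1) per spatial feature map.
    pooled = maps.mean(dim=(2, 3)) + maps.amax(dim=(2, 3))
    return torch.sigmoid(pooled)                        # (N, 8)

def key_feature_maps(maps, coeffs):
    # Stand-in for step S104: keep only the key singular values of each map.
    n, c, _, _ = maps.shape
    u, s, vt = torch.linalg.svd(maps, full_matrices=False)
    keep = (coeffs * s.shape[-1]).long().clamp(min=1)   # key-element count per map
    idx = torch.arange(s.shape[-1]).expand(n, c, -1)
    s = torch.where(idx < keep.unsqueeze(-1), s, torch.zeros_like(s))
    return u @ torch.diag_embed(s) @ vt                 # (N, 8, H, W)

image = torch.randn(1, 3, 224, 224)                     # step S101: acquired image
maps = spatial_feature_extraction(image)
key_maps = key_feature_maps(maps, screening_coefficients(maps))
print(key_maps.shape)                                   # torch.Size([1, 8, 224, 224])
```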
In another possible implementation manner, the determining the key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient includes:
performing singular value decomposition on each spatial feature map to obtain a singular value matrix, a left orthogonal matrix, and a right orthogonal matrix corresponding to each spatial feature map;
determining key elements in the corresponding singular value matrix based on the screening coefficient corresponding to each spatial feature map;
setting elements other than the key elements to 0 to obtain a key singular value matrix;
determining a key spatial feature map corresponding to each spatial feature map based on the key singular value matrix, the left orthogonal matrix, and the right orthogonal matrix;
and determining the key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map.
With this scheme, each spatial feature map contains part of the periocular global features and part of the periocular local features. Singular value decomposition of each spatial feature map yields a singular value matrix, a left orthogonal matrix, and a right orthogonal matrix, where the singular value matrix records the key periocular information of that map. The key elements of the singular value matrix are then determined from the screening coefficient; the remaining elements are of low importance, so they are set to 0, ignoring the unimportant elements and giving a key singular value matrix. From the key singular value matrix, the left orthogonal matrix, and the right orthogonal matrix, the key spatial feature map corresponding to each spatial feature map is reconstructed, and from the key elements of all key spatial feature maps a robust key periocular feature map containing the key periocular elements is obtained.
In another possible implementation manner, the determining the key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map includes:
if the number of spatial feature maps is one, determining the key spatial feature map corresponding to that spatial feature map as the key periocular feature map of the periocular image;
if the number of spatial feature maps is at least two, stitching the key spatial feature maps corresponding to the respective spatial feature maps to obtain the key periocular feature map of the periocular image.
With this scheme, when there are at least two spatial feature maps, the key spatial feature map corresponding to each of them contains key elements of part of the periocular local features and part of the periocular global features, so stitching the key spatial feature maps corresponding to the respective spatial feature maps joins all the key elements into a whole, giving a more comprehensive and accurate key periocular feature map.
In another possible implementation manner, the performing spatial feature extraction on the periocular image to obtain at least one spatial feature map includes:
extracting local spatial features and global spatial features of the periocular image to obtain a periocular local feature map and a periocular global feature map;
performing channel separation on the periocular local feature map according to a preset channel number to obtain a periocular local sub-feature map corresponding to each channel, and performing channel separation on the periocular global feature map according to the preset channel number to obtain a periocular global sub-feature map corresponding to each channel;
determining a first key feature map of each periocular local sub-feature map and a second key feature map of each periocular global sub-feature map;
stitching the first key feature map and the second key feature map belonging to the same channel to obtain a periocular feature mosaic corresponding to each channel;
performing spatial weight extraction on the periocular image to obtain a spatial weight feature map;
and multiplying the spatial weight feature map element by element with the periocular feature mosaic corresponding to each channel to obtain the at least one spatial feature map.
With this scheme, local and global spatial feature extraction on the periocular image yields a periocular local feature map and a periocular global feature map. Each is separated by channel according to the preset channel number, giving a periocular local sub-feature map and a periocular global sub-feature map per channel; channel separation allows the global and local periocular features to be analyzed more accurately and comprehensively. Since each sub-feature map contains part of the key periocular features, a first key feature map is determined for each periocular local sub-feature map and a second key feature map for each periocular global sub-feature map, and the first and second key feature maps of the same channel are stitched to obtain a periocular feature mosaic per channel. Spatial weight extraction on the periocular image yields a spatial weight feature map, and multiplying it element by element with the periocular feature mosaic of each channel weights the key features, producing the at least one spatial feature map.
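As a rough sketch of the per-channel separation, stitching, and spatial weighting described above (the key-feature selection inside each channel is shown separately below), under assumed shapes and channel counts:

```python
import torch

def fuse_local_global(local_map, global_map, weight_map, channels=8):
    # Channel separation: one sub-feature map per preset channel.
    local_subs = torch.chunk(local_map, channels, dim=1)
    global_subs = torch.chunk(global_map, channels, dim=1)
    mosaics = []
    for loc, glo in zip(local_subs, global_subs):
        # Stand-in for the first/second key feature maps of this channel.
        mosaics.append(torch.cat([loc, glo], dim=1))    # stitch same-channel pair
    mosaic = torch.cat(mosaics, dim=1)                  # (N, 2*channels, H, W)
    return weight_map * mosaic                          # element-wise weighting

local_map = torch.randn(1, 8, 56, 56)                   # periocular local feature map
global_map = torch.randn(1, 8, 56, 56)                  # periocular global feature map
weight_map = torch.sigmoid(torch.randn(1, 1, 56, 56))   # spatial weight feature map
print(fuse_local_global(local_map, global_map, weight_map).shape)  # (1, 16, 56, 56)
```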
In another possible implementation manner, the determining the first key feature map of each periocular local sub-feature map and the second key feature map of each periocular global sub-feature map includes:
determining at least two first feature values of each periocular local sub-feature map and at least two second feature values of each periocular global sub-feature map;
screening a first number of first feature values from the at least two first feature values, wherein each screened first feature value is larger than every first feature value that is not screened;
screening a second number of second feature values from the at least two second feature values, wherein each screened second feature value is larger than every second feature value that is not screened, and the first number and the second number are determined based on a target dimension, the target dimension being the dimension of each periocular local sub-feature map or of each periocular global sub-feature map;
determining a first key feature map of each periocular local sub-feature map based on the first number of first feature values, and determining a second key feature map of each periocular global sub-feature map based on the second number of second feature values.
With this scheme, each periocular local sub-feature map contains at least two first feature values and each periocular global sub-feature map contains at least two second feature values. Screening out the first number of largest first feature values keeps exactly the values whose corresponding elements best represent the periocular local features in the local sub-feature map; likewise, screening out the second number of largest second feature values keeps the elements that best represent the periocular global features in the global sub-feature map. The first key feature map of each periocular local sub-feature map is then determined from the screened first feature values, and the second key feature map of each periocular global sub-feature map from the screened second feature values, so that each key feature map retains only the most representative periocular information.
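Reading the "feature values" as the singular values of each sub-feature map, consistent with the SVD machinery used elsewhere in this patent (an interpretive assumption), the per-map screening could look like the following sketch:

```python
import torch

def key_feature_map(sub_map: torch.Tensor, keep: int) -> torch.Tensor:
    # Keep the `keep` largest singular values of one (H, W) sub-feature map.
    u, s, vt = torch.linalg.svd(sub_map, full_matrices=False)
    kept = torch.zeros_like(s)
    top = torch.topk(s, keep)                 # the screened (largest) feature values
    kept[top.indices] = top.values
    return u @ torch.diag(kept) @ vt          # key feature map of the sub-feature map

sub_map = torch.randn(56, 56)                 # one periocular sub-feature map
print(key_feature_map(sub_map, keep=16).shape)   # torch.Size([56, 56])
```

Here `keep` plays the role of the first or second number, which the claim derives from the target dimension.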
In another possible implementation manner, the performing spatial weight extraction on the periocular image to obtain a spatial weight feature map includes:
performing normalization processing on the periocular image to obtain the spatial weight feature map.
With this scheme, the periocular image is normalized so that the spatial weight feature map is obtained more reliably.
In another possible implementation manner, the determining the key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient further includes:
if a preset condition is not met, executing the following steps in a loop until the preset condition is met:
extracting spatial features of the feature map obtained last time to obtain at least one current spatial feature map;
determining the at least one current spatial feature map as the target feature map, or determining the key periocular feature map obtained last time together with the at least one current spatial feature map as the target feature map;
determining a current screening coefficient corresponding to the target feature map based on the target feature map, wherein the current screening coefficient is used for screening key periocular features in the target feature map;
determining a current key periocular feature map of the periocular image based on the current screening coefficient and the target feature map;
the preset condition includes: the number of times the key periocular feature map has been determined reaches a preset threshold;
the feature map obtained last time includes either of the following:
the key periocular feature map obtained last time;
the spatial feature map extracted last time when the key periocular feature map was obtained last time.
With this scheme, if the preset condition is not met, spatial feature extraction continues on the feature map obtained last time to produce at least one current spatial feature map; the target feature map is determined from the at least one current spatial feature map, optionally together with the key periocular feature map obtained last time; the current screening coefficient corresponding to the target feature map is then determined, and from it and the target feature map the current key periocular feature map of the periocular image is determined. Whether the preset condition is met is then checked again, and the steps repeat until it is. Cyclically extracting spatial features and determining the current key periocular features further improves the accuracy of the key periocular feature map of the periocular image and the quality of the finally obtained key periocular feature map.
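A sketch of this loop, reusing the stand-in helpers from the pipeline sketch above; the round count standing in for the preset threshold, and the round at which downsampling happens, are assumed values:

```python
import torch.nn.functional as F

def iterative_refinement(image, num_rounds=3, downsample_round=2):
    # Preset condition: the key periocular feature map has been determined
    # `num_rounds` times; each round re-extracts from the previous result.
    feature_map = image
    for round_idx in range(num_rounds):
        if round_idx == downsample_round:      # optional mid-loop downsampling
            feature_map = F.avg_pool2d(feature_map, 2)
        spatial_maps = spatial_feature_extraction(feature_map)
        coeffs = screening_coefficients(spatial_maps)
        feature_map = key_feature_maps(spatial_maps, coeffs)  # next round's input
    return feature_map
```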
In another possible implementation manner, the extracting spatial features of the periocular image to obtain at least one spatial feature map includes: downsampling the periocular image, and performing spatial feature extraction on the downsampled periocular image to obtain the at least one spatial feature map;
and/or,
the extracting spatial features of the feature map obtained last time to obtain at least one current spatial feature map includes: when the number of times the key periocular feature map has been determined reaches a specified count, downsampling the feature map obtained last time, and performing spatial feature extraction on the downsampled feature map to obtain the at least one current spatial feature map.
With this scheme, downsampling the periocular image, and/or downsampling the feature map during the loop once the number of determinations reaches the specified count, reduces the resolution of the periocular image and/or of the feature map obtained last time, which reduces the amount of computation and improves the efficiency of periocular feature recognition.
In a second aspect, the present application provides a periocular feature extraction device, which adopts the following technical scheme:
A periocular feature extraction device comprising:
the image acquisition module is used for acquiring periocular images;
the spatial feature extraction module is used for extracting spatial features of the periocular image to obtain at least one spatial feature map, and each spatial feature map comprises periocular local features and periocular global features;
the coefficient determination module is used for determining a screening coefficient corresponding to each spatial feature map based on each spatial feature map, wherein the screening coefficient is used for screening key periocular features in each spatial feature map;
and the key periocular feature extraction module is used for determining a key periocular feature map of the periocular image based on the screening coefficient and a spatial feature map corresponding to the screening coefficient.
With this scheme, the image acquisition module acquires the periocular image, and the spatial feature extraction module performs spatial feature extraction on it to obtain at least one spatial feature map, each containing periocular global features and periocular local features. The coefficient determination module determines a screening coefficient for each spatial feature map, the screening coefficient being used to screen the key global features and key local features in the at least one spatial feature map, so that the key periocular feature extraction module processes the corresponding spatial feature map according to the screening coefficient to obtain those key features, i.e., the key periocular feature map corresponding to the periocular image, which is a more robust periocular feature map.
In another possible implementation manner, the key periocular feature extraction module is specifically configured to, when determining the key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient:
performing singular value decomposition on each spatial feature map to obtain a singular value matrix, a left orthogonal matrix, and a right orthogonal matrix corresponding to each spatial feature map;
determining key elements in the corresponding singular value matrix based on the screening coefficient corresponding to each spatial feature map;
setting elements other than the key elements to 0 to obtain a key singular value matrix;
determining a key spatial feature map corresponding to each spatial feature map based on the key singular value matrix, the left orthogonal matrix, and the right orthogonal matrix;
and determining the key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map.
In another possible implementation manner, the key periocular feature extraction module is specifically configured to, when determining the key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map:
if the number of spatial feature maps is one, determining the key spatial feature map corresponding to that spatial feature map as the key periocular feature map of the periocular image;
if the number of spatial feature maps is at least two, stitching the key spatial feature maps corresponding to the respective spatial feature maps to obtain the key periocular feature map of the periocular image.
In another possible implementation manner, the spatial feature extraction module is specifically configured to, when performing spatial feature extraction on the periocular image to obtain at least one spatial feature map:
extracting local spatial features and global spatial features of the periocular image to obtain a periocular local feature map and a periocular global feature map;
performing channel separation on the periocular local feature map according to a preset channel number to obtain a periocular local sub-feature map corresponding to each channel, and performing channel separation on the periocular global feature map according to the preset channel number to obtain a periocular global sub-feature map corresponding to each channel;
determining a first key feature map of each periocular local sub-feature map and a second key feature map of each periocular global sub-feature map;
stitching the first key feature map and the second key feature map belonging to the same channel to obtain a periocular feature mosaic corresponding to each channel;
performing spatial weight extraction on the periocular image to obtain a spatial weight feature map;
and multiplying the spatial weight feature map element by element with the periocular feature mosaic corresponding to each channel to obtain the at least one spatial feature map.
In another possible implementation manner, the spatial feature extraction module is specifically configured to, when determining a first key feature map of each periocular local sub-feature map and a second key feature map of each periocular global sub-feature map:
determining at least two first feature values of each periocular local sub-feature map and at least two second feature values of each periocular global sub-feature map;
screening a first number of first feature values from the at least two first feature values, wherein each screened first feature value is larger than every first feature value that is not screened;
screening a second number of second feature values from the at least two second feature values, wherein each screened second feature value is larger than every second feature value that is not screened, and the first number and the second number are determined based on a target dimension, the target dimension being the dimension of each periocular local sub-feature map or of each periocular global sub-feature map;
determining a first key feature map of each periocular local sub-feature map based on the first number of first feature values, and determining a second key feature map of each periocular global sub-feature map based on the second number of second feature values.
In another possible implementation manner, the spatial feature extraction module is specifically configured to, when performing spatial weight extraction on the periocular image to obtain a spatial weight feature map:
performing normalization processing on the periocular image to obtain the spatial weight feature map.
In another possible implementation, the apparatus further includes:
the loop module is used for, when a preset condition is not met, executing the following steps in a loop until the preset condition is met:
extracting spatial features of the feature map obtained last time to obtain at least one current spatial feature map;
determining the at least one current spatial feature map as the target feature map, or determining the key periocular feature map obtained last time together with the at least one current spatial feature map as the target feature map;
determining a current screening coefficient corresponding to the target feature map based on the target feature map, wherein the current screening coefficient is used for screening key periocular features in the target feature map;
determining a current key periocular feature map of the periocular image based on the current screening coefficient and the target feature map;
the preset condition includes: the number of times the key periocular feature map has been determined reaches a preset threshold;
the feature map obtained last time includes either of the following:
the key periocular feature map obtained last time;
the spatial feature map extracted last time when the key periocular feature map was obtained last time.
In another possible implementation manner, the spatial feature extraction module is specifically configured to, when performing spatial feature extraction on the periocular image to obtain at least one spatial feature map: downsample the periocular image, and perform spatial feature extraction on the downsampled periocular image to obtain the at least one spatial feature map;
and/or,
the loop module is specifically configured to, when performing spatial feature extraction on the feature map obtained last time to obtain at least one current spatial feature map: when the number of times the key periocular feature map has been determined reaches a specified count, downsample the feature map obtained last time, and perform spatial feature extraction on the downsampled feature map to obtain the at least one current spatial feature map.
In a third aspect, the present application provides an electronic device, which adopts the following technical scheme:
an electronic device, the electronic device comprising:
at least one processor;
a memory;
at least one application, wherein the at least one application is stored in the memory and configured to be executed by the at least one processor, the at least one processor being configured to perform the periocular feature extraction method according to any one of the possible implementations of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium, which adopts the following technical scheme:
a computer-readable storage medium storing a computer program that, when executed on a computer, causes the computer to perform the periocular feature extraction method according to any one of the first aspect.
In summary, the present application includes at least one of the following beneficial technical effects:
1. A periocular image is acquired and spatial feature extraction is performed on it to obtain at least one spatial feature map of the periocular image, each containing periocular global features and periocular local features. Because the spatial feature maps record the global and local feature conditions of the periocular image, a screening coefficient is determined for each spatial feature map and used to screen the key global features and key local features in the at least one spatial feature map; processing the spatial feature map corresponding to the screening coefficient yields those key features, i.e., the key periocular feature map corresponding to the periocular image, which is a more robust periocular feature map;
2. Each spatial feature map contains part of the periocular global features and part of the periocular local features. Singular value decomposition of each spatial feature map yields a singular value matrix, a left orthogonal matrix, and a right orthogonal matrix, the singular value matrix recording the key periocular information. The key elements of the singular value matrix are determined from the screening coefficient; the remaining elements, being of low importance, are set to 0 so that unimportant elements are ignored, giving a key singular value matrix. From the key singular value matrix, the left orthogonal matrix, and the right orthogonal matrix, the key spatial feature map corresponding to each spatial feature map is obtained, and the key elements of all key spatial feature maps are stitched into a whole, giving a robust key periocular feature map containing the key periocular elements.
Drawings
Fig. 1 is a flow chart of a method for extracting periocular features according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of key feature extraction in an embodiment of the present application.
Fig. 3 is a schematic flow chart of spatial feature extraction in an embodiment of the present application.
Fig. 4 is a schematic flow chart of cyclically performing spatial feature extraction and key feature extraction in an embodiment of the present application.
Fig. 5 is a schematic structural view of an apparatus for extracting periocular features in an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings.
Those skilled in the art may, after reading this specification, make modifications to the embodiments that do not involve a creative contribution, and all such modifications are protected by patent law within the scope of the claims of the present application.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. Unless otherwise specified, the character "/" herein generally indicates an "or" relationship between the associated objects.
Embodiments of the present application are described in further detail below with reference to the drawings attached hereto.
An embodiment of the present application provides a periocular feature extraction method, executed by an electronic device. The electronic device may be a server or a terminal device, where the server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud computing services. The terminal device may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, or a desktop computer. The terminal device and the server may be connected directly or indirectly through wired or wireless communication, which is not limited here. As shown in Fig. 1, the method includes step S101, step S102, step S103, and step S104, wherein,
Step S101, acquiring a periocular image.
Specifically, the periocular image may be acquired by an image acquisition device, such as a camera, connected to the electronic device by wire or wirelessly. After the image acquisition device captures the periocular image, the electronic device acquires it. The periocular image may be of a human or of an animal, which is not limited here.
Step S102, extracting spatial features of the periocular image to obtain at least one spatial feature map.
Each spatial feature map contains periocular local features and periocular global features.
In this embodiment, the periocular image is an image of the region where the eye is located and may contain features such as the eyeball, eyelashes, eyebrows, and wrinkles, i.e., the periocular local features; the whole formed by these eyeball, eyelash, eyebrow, and wrinkle features is regarded as the periocular global feature.
Specifically, a spatial feature map records, for each of the local features segmented from the periocular image, properties such as its spatial position, relative orientation, and eyeball size; a spatial feature map therefore contains both periocular local features and periocular global features. Spatial features strengthen the ability to describe and distinguish the content of the periocular image, so at least one spatial feature map can be obtained by spatial feature extraction, the number of spatial feature maps being determined by the channel dimension of the periocular image.
Step S103, determining a screening coefficient corresponding to each spatial feature map based on each spatial feature map.
The screening coefficient is used for screening the key periocular features in each spatial feature map.
In this embodiment, after the at least one spatial feature map is obtained, global max pooling may be applied to the at least one spatial feature map to obtain a 1×1 first feature map containing periocular texture information, and global average pooling may be applied to it to obtain a 1×1 second feature map containing periocular background information.
The screening coefficient is then determined from the first feature map and the second feature map, so that features reflecting the periocular global condition with a low degree of correlation and features reflecting the periocular local condition with a low degree of correlation can be filtered out.
Specifically, referring to Fig. 2, the electronic device may first convolve the first feature map and the second feature map and reduce their channel dimension according to a scaling coefficient r1, obtaining a reduced first feature map and a reduced second feature map. The reduced feature maps are then convolved again and enlarged in the channel dimension according to the scaling coefficient r1, restoring the original dimensions; this adds nonlinear features while keeping the channel dimensions of the first and second feature maps unchanged. The two convolutions use kernels of the same size, for example 1×1 kernels. In other embodiments, the first and second feature maps may instead be enlarged first according to the scaling coefficient r1 and then reduced according to it.
The restored first feature map and the restored second feature map are added element by element to obtain an overall feature map, which is normalized by a Sigmoid function into a weight value between 0 and 1. The higher the weight value, the more important the element; the weight value thus characterizes the proportion of key elements in the singular value matrix, so the screening coefficient may be the weight value itself. Alternatively, the screening coefficient may be obtained by subtracting the weight value from 1, in which case the screening coefficient characterizes the proportion of elements of lower importance in the singular value matrix.
In other embodiments, only global max pooling or only global average pooling may be applied to the at least one spatial feature map.
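A minimal sketch of this coefficient branch, assuming 1×1 convolutions, a ReLU between the reduction and the restoration, and a reduction ratio r1 = 4; all shapes are illustrative:

```python
import torch
import torch.nn as nn

class ScreeningCoefficient(nn.Module):
    def __init__(self, channels: int, r1: int = 4):
        super().__init__()
        self.reduce = nn.Conv2d(channels, channels // r1, kernel_size=1)
        self.restore = nn.Conv2d(channels // r1, channels, kernel_size=1)

    def forward(self, spatial_maps: torch.Tensor) -> torch.Tensor:
        # Global max pooling -> 1x1 first map (texture); average -> second (background).
        first = spatial_maps.amax(dim=(2, 3), keepdim=True)    # (N, C, 1, 1)
        second = spatial_maps.mean(dim=(2, 3), keepdim=True)   # (N, C, 1, 1)
        # Reduce then restore the channel dimension to add nonlinear features.
        first = self.restore(torch.relu(self.reduce(first)))
        second = self.restore(torch.relu(self.reduce(second)))
        weight = torch.sigmoid(first + second)                 # weight value in (0, 1)
        return 1.0 - weight  # variant where the coefficient counts low-importance elements

coeff = ScreeningCoefficient(channels=8)(torch.randn(2, 8, 56, 56))
print(coeff.shape)   # torch.Size([2, 8, 1, 1])
```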
Step S104, determining the key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient.
After the screening coefficient is determined, the key periocular feature map is determined from the screening coefficient used for screening the key periocular features and the at least one spatial feature map obtained earlier. The key periocular feature map contains only the key global features and the key local features, i.e., more robust periocular features are obtained.
In order to obtain the key periocular feature map from the screening coefficient, the determining, in step S104, of the key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient specifically includes step Sa (not shown), step Sb (not shown), step Sc (not shown), step Sd (not shown), and step Se (not shown), wherein,
Step Sa, performing singular value decomposition on each spatial feature map to obtain the singular value matrix, the left orthogonal matrix, and the right orthogonal matrix corresponding to each spatial feature map.
Referring to Fig. 2, the electronic device converts each spatial feature map into a matrix and performs singular value decomposition on the matrix corresponding to each spatial feature map, thereby obtaining the singular value matrix, the left orthogonal matrix, and the right orthogonal matrix corresponding to that spatial feature map. The spatial feature map can thus be expressed as the product of the three matrices:

$$A_i = U_i \Sigma_i V_i^{T}, \quad i = 1, 2, \dots, m$$

where $A_i$ denotes the singular value decomposition of the $i$-th spatial feature map, $U_i$ is the left orthogonal matrix corresponding to the $i$-th spatial feature map, $\Sigma_i$ is the singular value matrix corresponding to the $i$-th spatial feature map, $V_i^{T}$ is the right orthogonal matrix corresponding to the $i$-th spatial feature map, and $m$ is the number of spatial feature maps. The singular value matrix records the key important information in the spatial feature map. It is a diagonal matrix, that is, all elements other than those on the main diagonal are 0, and the diagonal elements are arranged in descending order from large to small. The larger the value of an element in the singular value matrix, the higher its importance, and the better it serves as a value representing the periocular features.
Suppose the singular value matrix corresponding to a certain spatial feature map is

$$\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_n), \quad \sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_n \ge 0$$

where $n$ is the number of elements in the singular value matrix and $\sigma_j$ is the $j$-th singular value.
Further, referring to Fig. 2, before performing singular value decomposition the electronic device may convolve each spatial feature map once to enlarge its dimension and then convolve it again to restore the dimension; in other embodiments, the dimension of each spatial feature map may be reduced first and then enlarged. Specifically, the dimension of each spatial feature map is scaled by a scaling coefficient r2, which may be the same as or different from r1 in step S104. The two convolutions and the dimension scaling allow rich periocular features to be extracted from each spatial feature map.
Step Sb, determining the key elements in the corresponding singular value matrix based on the screening coefficient corresponding to each spatial feature map.
In this embodiment, because the periocular global features and periocular local features recorded in each spatial feature map differ, each spatial feature map corresponds to its own screening coefficient and singular value matrix. The screening coefficient is used to screen out the key periocular features, so the electronic device can determine, according to the screening coefficient corresponding to each spatial feature map, the key elements in its singular value matrix, the key elements being the elements that best represent the periocular global features and periocular local features.
Taking step S104 as an example, if the screening coefficient is obtained by subtracting the weight value from 1, then, since the singular value matrix contains n elements, the electronic device computes the product of the screening coefficient and n and rounds it down, obtaining the number of low-importance elements to be filtered out. Because the elements of the singular value matrix are arranged in descending order from large to small, the leading elements of the singular value matrix are determined to be the key elements. Suppose n equals 50176 and the screening coefficient equals 0.4; then ⌊0.4 × 50176⌋ equals 20070, and 50176 − 20070 equals 30106, so the electronic device determines the first 30106 elements of the singular value matrix as the key elements.
If the screening coefficient is the weight value itself, the rounded product of the screening coefficient and n is directly the number of key elements to be determined, and the electronic device determines that many leading elements of the singular value matrix as the key elements.
In other embodiments, the product of the screening coefficient and n may instead be rounded up, which is not described in detail here.
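A quick check of the worked numbers (illustrative only):

```python
import math

n = 50176                           # elements in the singular value matrix
coeff = 0.4                         # screening coefficient = 1 - weight value
filtered = math.floor(coeff * n)    # low-importance elements to discard: 20070
key = n - filtered                  # leading elements kept as key elements: 30106
print(filtered, key)                # 20070 30106
```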
Step Sc, setting the elements other than the key elements to 0 to obtain the key singular value matrix.
Specifically, after determining the key elements that best represent the periocular global features and periocular local features, the electronic device sets the values of the other elements to 0. This filters out the singular values corresponding to noise in each spatial feature map, or the singular values weakly correlated with the periocular features, keeping only the singular values highly correlated with them, and thus yields the key singular value matrix. In addition, setting the elements other than the key elements to 0 reduces the subsequent amount of computation.
Taking step Sb as an example, the electronic device has determined the first 30106 elements of the singular value matrix as key elements, and has determined that the elements after the 30106-th element are of lower importance and of less relevance for characterizing the key periocular local features and key periocular global features. The electronic device may therefore determine the elements after the 30106-th element as 0.
Setting the elements other than the key elements to 0, i.e., setting the elements after the 30106-th element to 0, gives the key singular value matrix:

$$\Sigma' = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_{30106}, 0, \dots, 0)$$
Step Sd, determining the key spatial feature map corresponding to each spatial feature map based on the key singular value matrix, the left orthogonal matrix, and the right orthogonal matrix.
The electronic device screens key elements only in the singular value matrix and performs no operation on the left orthogonal matrix or the right orthogonal matrix, and since the non-key elements of the singular value matrix are set to 0, the dimensions of the singular value matrix remain unchanged. As shown in Fig. 2, the electronic device multiplies the left orthogonal matrix, the key singular value matrix, and the right orthogonal matrix, i.e., $A_i' = U_i \Sigma_i' V_i^{T}$, to obtain the key spatial feature map corresponding to each spatial feature map.
Step Se, determining the key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map.
Specifically, if the number of spatial feature maps is one, the number of corresponding key spatial feature maps is also one, i.e., all key elements are contained in that key spatial feature map, and the key spatial feature map corresponding to the spatial feature map is determined as the key periocular feature map of the periocular image.
If the number of spatial feature maps is at least two, each key spatial feature map records part of the key periocular features. As shown in Fig. 2, the electronic device stitches the key spatial feature maps corresponding to the spatial feature maps, thereby obtaining the key periocular feature map of the periocular image, which contains all the key periocular local features and periocular global features.
In order to obtain, from the periocular image, spatial feature maps that contain both periocular local features and periocular global features, the performing of spatial feature extraction on the periocular image in step S102 to obtain at least one spatial feature map specifically includes step S1021 (not shown in the figure), step S1022 (not shown in the figure), step S1023 (not shown in the figure), step S1024 (not shown in the figure), step S1025 (not shown in the figure), and step S1026 (not shown in the figure), wherein,
Step S1021, performing local spatial feature extraction and global spatial feature extraction on the periocular image respectively to obtain a periocular local feature map and a periocular global feature map.
Specifically, suppose the periocular image has 3 channels and a resolution of 224×224. The electronic device performs local spatial feature extraction on the periocular image. Referring to Fig. 3, the electronic device may apply depth-wise convolution (DWConv) with a kernel of a specified size to the periocular image to obtain feature maps with a preset number of channels C; for example, with a 5×5 kernel, depth-separable convolution extracts more significant periocular semantic information. The feature maps of the C channels are then divided according to a preset number of branches and a preset ratio; for example, with 3 preset branches and a preset ratio of 3:3:2, the electronic device divides the C channels into three branches in sequence, the three branches containing 3C/8, 3C/8, and C/4 of the feature maps respectively. Dividing the feature maps into several branches makes it convenient to subsequently extract periocular local features over different receptive fields without increasing the amount of computation. Each branch's feature maps are then depth-separably convolved with kernels of different sizes to extract periocular local features of different receptive fields. When kernels of different sizes are used, each kernel can also be given a different dilation (hole) coefficient, i.e., the spacing with which the kernel samples the feature map.
For example, the feature maps of the first branch are convolved with a 3×3 kernel with dilation coefficient 1 to extract periocular local features within a smaller receptive field; the feature maps of the second branch are convolved with a 5×5 kernel with dilation coefficient 2 to extract periocular local features within a larger receptive field; and the feature maps of the third branch are convolved with a 7×7 kernel with dilation coefficient 3 to extract periocular local features within a large receptive field. Then, as shown in Fig. 3, the periocular local features obtained in the three receptive fields are stitched along the channel dimension, and the stitched periocular local features can be fed into a 1×1 convolution for nonlinear mapping to add nonlinear features, finally giving the periocular local feature map. The kernel size and dilation coefficient used for each branch can be obtained through simulation tests and can be adaptively modified according to the actual situation to ensure the extraction effect.
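A sketch of this local branch under the stated kernel sizes and dilation coefficients, with an assumed channel count of 64 and a plain convolution standing in for the initial depth-wise convolution (a true depth-wise stem from the 3 input channels would constrain the channel count):

```python
import torch
import torch.nn as nn

class LocalBranch(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        c1 = c2 = channels * 3 // 8                    # 3:3:2 split of the channels
        c3 = channels - c1 - c2
        self.stem = nn.Conv2d(3, channels, 5, padding=2)
        self.split = (c1, c2, c3)
        # Depth-wise (groups == channels) dilated convolutions, one per branch;
        # padding = dilation * (kernel - 1) / 2 keeps the spatial size unchanged.
        self.b1 = nn.Conv2d(c1, c1, 3, padding=1, dilation=1, groups=c1)
        self.b2 = nn.Conv2d(c2, c2, 5, padding=4, dilation=2, groups=c2)
        self.b3 = nn.Conv2d(c3, c3, 7, padding=9, dilation=3, groups=c3)
        self.mix = nn.Conv2d(channels, channels, 1)    # 1x1 nonlinear mapping

    def forward(self, x):
        x1, x2, x3 = torch.split(self.stem(x), self.split, dim=1)
        out = torch.cat([self.b1(x1), self.b2(x2), self.b3(x3)], dim=1)
        return torch.relu(self.mix(out))

print(LocalBranch()(torch.randn(1, 3, 224, 224)).shape)   # (1, 64, 224, 224)
```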
Specifically, referring to fig. 3, when global spatial feature extraction is performed on the periocular image, the electronic device first normalizes the periocular image, thereby improving the convergence speed. Then the periocular image is input to three 1×1 convolution layers respectively, obtaining a query feature map Q, a key feature map K and a value feature map V. The three 1×1 convolution layers have different parameters; the 1×1 convolutions unify the dimensions of Q, K and V and add nonlinear features, so that Q, K and V can conveniently be operated on by matrix multiplication.
The obtained query feature map Q, key feature map K and value feature map V are input into a multi-head self-attention layer and operated on by the following formula, so that global features of the periocular image in different aspects can be obtained and the semantic information of the periocular image is enriched:

Attention(Q, K, V) = Softmax( (Q · MaxPool(K)ᵀ) / √d_k ) · MaxPool(V)

wherein √d_k is an adjustment coefficient used to scale Q · MaxPool(K)ᵀ and prevent the inner product from becoming too large; specifically, d_k may be the dimension of the key feature map; Attention(·) denotes the multi-head self-attention layer operation, MaxPool(·) is the maximum pooling, Softmax(·) is the normalization, and MaxPool(K)ᵀ is the transposed matrix of the max-pooled key feature map K. The electronic device calculates the product of the query feature map Q and the max-pooled, transposed key feature map K and divides it by the adjustment coefficient √d_k, obtaining a matrix whose column number equals the row number of the max-pooled value feature map V. The obtained matrix is normalized into a coefficient matrix with values between 0 and 1, and the product of the coefficient matrix and the max-pooled value feature map V yields the multi-aspect global feature map of the periocular image.
Further, referring to fig. 3, in order to reduce the amount of calculation in the subsequent matrix products, the key feature map K and the value feature map V are each subjected to one-dimensional maximum pooling, which reduces the number of pixels of the periocular image participating in the calculation; the query feature map Q and the one-dimensionally max-pooled K and V are then input to the multi-head self-attention module for calculation.
As shown in fig. 3, the feature map output by the multi-head self-attention module is convolved to increase nonlinear features, finally obtaining the periocular global feature map; the convolution kernel used to increase the nonlinear features may be 1×1.
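Below is a minimal sketch of this pooled attention, matching the formula above; the token shapes, the pooling size, and the function name are assumptions, not the patent's implementation.

```python
# Attention(Q, K, V) = Softmax(Q · MaxPool(K)^T / sqrt(d_k)) · MaxPool(V),
# with K and V shortened by 1-D max pooling to cut the computation.
import math
import torch
import torch.nn.functional as F

def pooled_attention(q, k, v, pool: int = 4):
    # q, k, v: (batch, n_tokens, d_k), from the three 1x1 convolutions.
    d_k = q.shape[-1]
    # 1-D max pooling along the token axis reduces the pixels that
    # participate in the matrix products.
    k_p = F.max_pool1d(k.transpose(1, 2), pool).transpose(1, 2)
    v_p = F.max_pool1d(v.transpose(1, 2), pool).transpose(1, 2)
    attn = torch.softmax(q @ k_p.transpose(1, 2) / math.sqrt(d_k), dim=-1)
    return attn @ v_p  # (batch, n_tokens, d_k)

q = torch.randn(1, 56 * 56, 32)
print(pooled_attention(q, q, q).shape)  # torch.Size([1, 3136, 32])
```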
Step S1022, performing channel separation on the periocular local feature map according to the preset channel number to obtain a periocular local sub-feature map corresponding to each channel, and performing channel separation on the periocular global feature map according to the preset channel number to obtain a periocular global sub-feature map corresponding to each channel.
Wherein, the staff can write the preset channel number into the electronic device through input devices such as a mouse, keyboard or touch screen. Referring to fig. 3, assume the preset channel number is G. The electronic device separates the periocular local feature map along the channel dimension into G single-channel feature maps, namely the preset channel number of periocular local sub-feature maps. In order to ensure the accuracy and uniformity of the subsequent splicing of the periocular local features and the periocular global features, the electronic device likewise separates the periocular global feature map into G feature maps, namely the preset channel number of periocular global sub-feature maps.
Step S1023, determining a first key feature map of each periocular local sub-feature map and a second key feature map of each periocular global sub-feature map.
For the embodiment of the application, each periocular local sub-feature map includes part of the periocular local features, and each periocular global sub-feature map includes part of the periocular global features. Each periocular local sub-feature map contains both elements which are critical and can best represent the periocular local features and elements which have low correlation with them; likewise, each periocular global sub-feature map contains elements which are critical and can best represent the periocular global features as well as elements which have low correlation with them. The electronic device therefore determines a first key feature map for each periocular local sub-feature map and a second key feature map for each periocular global sub-feature map.
And step S1024, splicing the first key feature map and the second key feature map belonging to the same channel to obtain a periocular feature mosaic map corresponding to each channel.
Specifically, as shown in fig. 3, the G single-channel first key feature maps are spliced along the channel dimension to obtain the periocular key local features, and the G single-channel second key feature maps are spliced along the channel dimension to obtain the periocular key global features. The first key feature map and the second key feature map belonging to the same channel are spliced in space, i.e., added element by element, to form the periocular feature mosaic map corresponding to each channel; after the spatial splicing, the periocular feature mosaic maps may be convolved to increase nonlinear features. In addition, the feature map can be activated with an activation function to add further nonlinear features; the activation function may be a GELU activation function or another activation function.
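A minimal sketch of this step follows, assuming G=8 channels and a 56×56 resolution (both illustrative): channel-wise pairs are added element by element, then a 1×1 convolution and a GELU activation add nonlinearity.

```python
# Per-channel spatial splicing of first and second key feature maps.
import torch
import torch.nn as nn

G, H, W = 8, 56, 56
first_keys = torch.randn(1, G, H, W)   # periocular key local features
second_keys = torch.randn(1, G, H, W)  # periocular key global features

splice = first_keys + second_keys      # element-by-element spatial splicing
mix = nn.Sequential(nn.Conv2d(G, G, kernel_size=1), nn.GELU())
mosaic = mix(splice)                   # one mosaic map per channel
print(mosaic.shape)                    # torch.Size([1, 8, 56, 56])
```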
Step S1025, spatial weight extraction is carried out on the periocular image to obtain a spatial weight feature map.
For the embodiment of the application, the space weight feature extraction is performed on the periocular image to obtain a space weight feature map, and the weights in the space weight feature map are used for carrying out weight calculation on elements in the space feature map, so that obvious periocular global features and periocular local features in the periocular image can be highlighted.
Further, step S1025 may be performed simultaneously with step S1021, or before or after step S1021.
And step S1026, multiplying the space weight feature map and the periocular feature mosaic map corresponding to each channel element by element to obtain at least one space feature map.
Specifically, referring to fig. 3, element-by-element multiplication means multiplying each weight in the spatial weight feature map by the element at the same position in the periocular feature mosaic map; for example, the weight in the first row and first column of the spatial weight feature map is multiplied by the element in the first row and first column of the periocular feature mosaic map, and so on. The spatial feature map corresponding to each channel is determined from the results of the element-by-element multiplication.
In order to determine the first key feature map and the second key feature map, in step S1023, the first key feature map of each periocular local sub-feature map and the second key feature map of each periocular global sub-feature map are determined, and specifically include the following steps:
At least two first feature values for each periocular local sub-feature map and at least two second feature values for each periocular global sub-feature map are determined. And screening a first number of first characteristic values from the at least two first characteristic values, wherein the screened first characteristic values are larger than first characteristic values which are not screened in the at least two first characteristic values. And screening a second number of second characteristic values from the at least two second characteristic values, wherein the screened second characteristic values are larger than the second characteristic values which are not screened in the at least two second characteristic values. Wherein the first number and the second number are determined based on a target dimension, the target dimension being a dimension of each periocular local sub-feature map or a dimension of each periocular global sub-feature map. A first key feature map for each periocular local sub-feature map is determined based on a first number of first feature values, and a second key feature map for each periocular global sub-feature map is determined based on a second number of second feature values.
Specifically, referring to fig. 3, since each periocular local sub-feature map is obtained by separation along the channel dimension, the electronic device can calculate at least two first feature values λ₁, λ₂, … of each periocular local sub-feature map according to the vector transformation used during channel separation, and then sort the first feature values from large to small to obtain a sorting result λ₁ ≥ λ₂ ≥ …; the larger the value of λᵢ, the higher the importance of its corresponding feature vector.
Similarly, the electronic device calculates at least two second feature values δ₁, δ₂, … of each periocular global sub-feature map according to the vector transformation used during channel separation, and sorts them from large to small to obtain a sorting result δ₁ ≥ δ₂ ≥ …; the larger the value of δᵢ, the higher the importance of its corresponding feature vector.
Because the dimensions of the periocular local sub-feature map and the periocular global sub-feature map are the same, the electronic device may determine the first number according to a preset ratio and the dimension of either sub-feature map. Assume the preset ratio is 2/3 and the dimension is P2 (P2 may be the same as or different from P1 in step S104). The electronic device determines the product of 2/3 and the dimension P2 and rounds it down to obtain the first number, i.e. ⌊2P2/3⌋. It takes the first ⌊2P2/3⌋ first feature values of each periocular local sub-feature map, multiplies each of these first feature values with its corresponding feature vector, and splices the products to form the first key feature map of a single channel; for example, a certain first key feature map is [λ₁v₁, λ₂v₂, …, λₘvₘ], where m = ⌊2P2/3⌋ and vᵢ is the feature vector corresponding to λᵢ.
After determining the first number of first feature values, in order to keep the dimensions of the obtained spatial feature map and of the periocular local or global sub-feature maps unchanged, the electronic device may determine the second number from the dimension P2 and the first number, i.e. P2 − ⌊2P2/3⌋. The electronic device takes the first P2 − ⌊2P2/3⌋ second feature values in each periocular global sub-feature map, multiplies each with its corresponding feature vector, and splices the products to form the second key feature map of a single channel; for example, a certain second key feature map is [δ₁u₁, δ₂u₂, …], where uᵢ is the feature vector corresponding to δᵢ. In other embodiments, the product of 2/3 and the dimension P2 may instead be rounded up, which is not described in detail here.
In other embodiments, if the number of first feature values and the number of second feature values are both one, the electronic device directly calculates the feature vector of the single first feature value, which can form the first key feature map of the single channel, and directly calculates the feature vector of the single second feature value, which can form the second key feature map of the single channel.
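As one concrete reading of this screening, the sketch below treats the "feature values" of a P2×P2 sub-feature map as its singular values obtained by decomposition (an assumption; the text only states they come from a vector transformation), keeps the first ⌊2P2/3⌋ of them, and rebuilds the key map from the retained values and their vectors.

```python
# Keep the largest feature values and their vectors to form a key map.
import torch

def key_feature_map(sub_map: torch.Tensor, ratio: float = 2 / 3):
    p2 = sub_map.shape[-1]
    m = int(ratio * p2)                   # "first number", rounded down
    u, s, vh = torch.linalg.svd(sub_map)  # s is already sorted descending
    # Multiply each kept value with its vectors and splice the results.
    return (u[:, :m] * s[:m]) @ vh[:m, :]

sub = torch.randn(6, 6)                   # one P2 x P2 sub-feature map
print(key_feature_map(sub).shape)         # torch.Size([6, 6])
```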
In order to obtain a spatial weight feature map, in step S1025, spatial weight extraction is performed on the periocular image to obtain a spatial weight feature map, which specifically includes the following steps:
and carrying out normalization processing on the periocular image to obtain a spatial weight characteristic diagram.
Specifically, referring to fig. 3, the electronic device may first perform convolution processing on the periocular image using a convolution kernel of 1×1 size, thereby increasing the nonlinear characteristics while maintaining the resolution of the periocular image unchanged. And then carrying out average pooling on the convolved periocular image to obtain a third feature map comprising periocular background information. And carrying out maximum pooling on the periocular image after convolution processing to obtain a fourth feature map comprising periocular texture information. The electronic equipment splices the third feature map and the fourth feature map in the channel dimension, so that a spliced feature map is obtained, wherein the spliced feature map comprises background information of the eyes and texture information of the eyes. Then the electronic device can use a larger convolution kernel to convolve the spliced feature images, for example, a convolution kernel with the size of 7×7 is used for convolution, so that the space features around eyes are extracted with a larger receptive field, and a target space feature image is obtained. And the electronic equipment performs normalization processing on the target space feature map, and maps elements in the target space feature map between 0 and 1 to obtain a space weight feature map. Specifically, the electronic device may input the target spatial feature map to a Sigmoid function for normalization processing, so as to obtain a spatial weight feature map. In other embodiments, other functions may also be used for normalization.
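A minimal sketch of this spatial weight branch and of the element-by-element weighting in step S1026 follows. It assumes the average and maximum pooling operate across the channel dimension, so that resolution is preserved for the multiplication; the module name and channel counts are illustrative, not the patent's implementation.

```python
# Spatial weight extraction: 1x1 conv, avg/max pooling, 7x7 conv, Sigmoid.
import torch
import torch.nn as nn

class SpatialWeight(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.pre = nn.Conv2d(c, c, kernel_size=1)   # keeps resolution
        # 7x7 convolution over the concatenated maps: a large receptive
        # field for the periocular spatial features.
        self.conv7 = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pre(x)
        avg = x.mean(dim=1, keepdim=True)   # periocular background info
        mx, _ = x.max(dim=1, keepdim=True)  # periocular texture info
        w = torch.sigmoid(self.conv7(torch.cat([avg, mx], dim=1)))
        return w  # weights mapped between 0 and 1, one per position

img = torch.randn(1, 3, 224, 224)
weights = SpatialWeight(3)(img)
mosaics = torch.randn(1, 8, 224, 224)   # mosaic maps from step S1024
spatial_maps = weights * mosaics        # step S1026, element by element
print(spatial_maps.shape)               # torch.Size([1, 8, 224, 224])
```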
In other embodiments, only the maximum pooling or only the average pooling may be performed on the periocular image, or the periocular image may be directly normalized to obtain the spatial weight feature map.
In order to further improve the quality of the key periocular feature map, step S103 further includes the following steps:
if the preset condition is not met, the following steps are circularly executed until the preset condition is met:
extracting spatial features of the feature map obtained last time to obtain at least one current spatial feature map;
determining at least one obtained current space feature map as a target feature map, or determining the last obtained key periocular feature map and at least one current space feature map as a target feature map;
determining a current screening coefficient corresponding to the target feature map based on the target feature map, wherein the current screening coefficient is used for screening key periocular features in the target feature map;
determining a current key periocular feature map of the periocular image based on the current screening coefficient and the target feature map;
the preset conditions comprise: the number of times of determination of the key periocular feature map reaches a preset number of times threshold.
The last obtained feature map comprises any one of the following:
The last obtained key periocular feature map;
the last time the key periocular feature map was obtained, the last time the spatial feature map was extracted.
For the embodiment of the application, assume the preset number-of-times threshold is 20. After the key periocular feature map of the periocular image is obtained, the electronic device counts the number of times the key periocular feature map has been determined and judges whether this number has reached 20. If not, spatial feature extraction needs to continue and the key periocular feature map needs to be determined again: the electronic device performs spatial feature extraction on the feature map obtained last time, then determines the current screening coefficient according to the target feature map, and determines the current key periocular feature map according to the current screening coefficient and the target feature map. The electronic device then again judges whether the number of determinations of the key periocular feature map (the first obtained key periocular feature map plus the current key periocular feature maps determined during the loop) has reached 20; if not, it continues the spatial feature extraction and key feature determination, and so on. When the number of determinations reaches 20, the loop ends, and the quality of the obtained key periocular feature map is higher. Referring to fig. 4, the key feature extraction includes determining the current screening coefficient based on the target feature map and determining the current key periocular feature map of the periocular image based on the current screening coefficient; after the number of key feature extractions reaches the preset number of times, the final key periocular features are obtained from the resulting key periocular feature map through the full connection layer.
Specifically, the spatial feature extraction of the feature map obtained last time at each cycle may be performed in the manner described in steps S1021 to S1026. Similarly, the determination of the current screening coefficient and the determination of the current key periocular feature map for each cycle may be performed in the manner described in steps S103 to S104.
Further, referring to fig. 4, when the current spatial feature extraction is performed using both the key periocular feature map obtained last time and the spatial feature map obtained by the last spatial feature extraction, the latter is applied as a residual to the current spatial feature extraction, so that gradients can be maintained across multiple spatial feature extractions and parameter convergence is accelerated. Similarly, when the target feature map consists of the key periocular feature map obtained last time and at least one current spatial feature map, the key periocular feature map obtained last time is applied as a residual to the currently determined key periocular feature map, so that gradients can be maintained across the repeated determinations and parameter convergence is accelerated.
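The loop structure with the residual connection can be sketched as follows; extract_spatial() and key_features() are hypothetical wrappers standing in for steps S1021–S1026 and S103–S104, and the stand-in lambdas exist only to make the sketch runnable.

```python
# Iterative refinement with a residual from the previous key map.
import torch

def refine(periocular_image, extract_spatial, key_features, max_iters=20):
    spatial = extract_spatial(periocular_image)
    key_map = key_features(spatial)        # first key periocular map
    for _ in range(1, max_iters):          # loop until the preset count
        current = extract_spatial(key_map)
        # The previous key map enters as a residual, keeping gradients
        # flowing across repeated extractions.
        target = key_map + current
        key_map = key_features(target)
    return key_map

f = lambda x: x * 0.9          # stand-in spatial extraction
g = lambda x: torch.relu(x)    # stand-in key-feature screening
print(refine(torch.randn(1, 8, 56, 56), f, g, max_iters=5).shape)
```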
In order to reduce the amount of calculation, performing spatial feature extraction on the periocular image in step S102 to obtain at least one spatial feature map specifically includes: downsampling the periocular image, and performing spatial feature extraction on the downsampled periocular image to obtain at least one spatial feature map,
and/or,
extracting the spatial features of the feature map obtained last time to obtain at least one current spatial feature map, wherein the method comprises the following steps: and when the determined times of the key periocular feature images reach the designated times, performing downsampling processing on the feature images obtained last time, and performing spatial feature extraction on the feature images obtained last time after the downsampling processing to obtain at least one current spatial feature image.
Specifically, assume the resolution of the periocular image is 224×224. Downsampling can preliminarily reduce the resolution of the periocular image and/or of the feature map obtained last time; for example, after the electronic device acquires the periocular image, the 224×224 periocular image may be downsampled to obtain a periocular image with a resolution of 56×56.
Taking the preset number-of-times threshold of 20 described above as an example, assume the designated times are the third, sixth, tenth and twentieth times. When the electronic device judges that the number of key feature extractions has reached a designated time, the feature map obtained last time needs to be downsampled before the next spatial feature extraction, so that the resolution of the feature map obtained last time decreases in turn and the amount of calculation is gradually reduced.
Specifically, the downsampling process may use a convolution layer to downsample the periocular image and/or the last obtained feature map, or may use maximum pooling to downsample the periocular image and/or the last obtained feature map, which is not limited herein.
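Both options can be sketched in a few lines; the shapes below are assumptions, and each stage halves the resolution, so a factor-4 reduction such as 224×224 → 56×56 would stack two such stages.

```python
# Two interchangeable downsampling choices, per the text above.
import torch
import torch.nn as nn

x = torch.randn(1, 8, 224, 224)
down_conv = nn.Conv2d(8, 8, kernel_size=3, stride=2, padding=1)  # learned
down_pool = nn.MaxPool2d(kernel_size=2, stride=2)                # fixed
print(down_conv(x).shape, down_pool(x).shape)  # both (1, 8, 112, 112)
```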
The above embodiment describes a method for extracting periocular features from the viewpoint of a method flow, and the following embodiment describes an apparatus for extracting periocular features from the viewpoint of a virtual module or a virtual unit, specifically the following embodiment.
The embodiment of the present application provides a periocular feature extraction device 20, as shown in fig. 5, the periocular feature extraction device 20 specifically may include:
an image acquisition module 201 for acquiring an periocular image;
the spatial feature extraction module 202 is configured to perform spatial feature extraction on the periocular image to obtain at least one spatial feature map, where each spatial feature map includes periocular local features and periocular global features;
the coefficient determining module 203 is configured to determine a screening coefficient corresponding to each spatial feature map based on each spatial feature map, where the screening coefficient is used to screen key periocular features in each spatial feature map;
the key periocular feature extraction module 204 is configured to determine a key periocular feature map of the periocular image based on the screening coefficient and a spatial feature map corresponding to the screening coefficient.
The embodiment of the application provides a periocular feature extraction device. The image acquisition module 201 acquires a periocular image, and the spatial feature extraction module 202 performs spatial feature extraction on the periocular image to obtain at least one spatial feature map, each including periocular global features and periocular local features. Since the spatial feature maps record the global and local feature conditions of the periocular image, the coefficient determination module 203 can determine a screening coefficient corresponding to each spatial feature map, the screening coefficient being used to screen the key global features and key local features in the at least one spatial feature map. The key periocular feature extraction module 204 then further processes the corresponding spatial feature map according to the screening coefficient to obtain the key global and key local features in the at least one spatial feature map, i.e., determines the key periocular feature map corresponding to the periocular image, thereby obtaining a periocular feature map with higher robustness.
In one possible implementation manner of this embodiment of the present application, when determining the key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient, the key periocular feature extraction module 204 is specifically configured to:
Singular value decomposition is carried out on each space feature map to obtain a singular value matrix, a left orthogonal matrix and a right orthogonal matrix corresponding to each space feature map;
determining key elements in the corresponding singular value matrix based on the screening coefficients corresponding to each space feature graph;
determining elements except the key elements as 0 to obtain a key singular value matrix;
determining a key space feature map corresponding to each space feature map based on the key singular value matrix, the left orthogonal matrix and the right orthogonal matrix;
and determining a key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map.
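A minimal sketch of this singular-value screening follows; how the screening coefficient maps to the number k of retained values is an assumption, as is the square map shape.

```python
# SVD screening: zero the non-key singular values, rebuild the map.
import torch

def key_spatial_map(spatial_map: torch.Tensor, k: int) -> torch.Tensor:
    u, s, vh = torch.linalg.svd(spatial_map)  # left/right orthogonal + values
    s_key = torch.zeros_like(s)
    s_key[:k] = s[:k]                         # keep only the key elements
    return u @ torch.diag(s_key) @ vh         # key spatial feature map

m = torch.randn(8, 8)                         # one spatial feature map
print(key_spatial_map(m, k=3).shape)          # torch.Size([8, 8])
```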
In one possible implementation manner of this embodiment of the present application, when determining the key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map, the key periocular feature extraction module 204 is specifically configured to:
if the number of the space feature images is one, determining the key space feature image corresponding to the space feature images as the key periocular feature image of the periocular image;
if the number of the space feature images is at least two, the key space feature images corresponding to the space feature images are spliced to obtain key periocular feature images of periocular images.
In one possible implementation manner of the embodiment of the present application, when the spatial feature extraction module 202 performs spatial feature extraction on the periocular image to obtain at least one spatial feature map, the spatial feature extraction module is specifically configured to:
extracting local spatial features and global spatial features of the periocular images to obtain periocular local feature images and periocular global feature images;
performing channel separation on the periocular local feature map according to the number of preset channels to obtain a periocular local sub-feature map corresponding to each channel, and performing channel separation on the periocular global feature map according to the number of preset channels to obtain a periocular global sub-feature map corresponding to each channel;
determining a first key feature map of each periocular local sub-feature map and a second key feature map of each periocular global sub-feature map;
splicing the first key feature map and the second key feature map belonging to the same channel to obtain a periocular feature mosaic map corresponding to each channel;
extracting spatial weight of the periocular image to obtain a spatial weight feature map;
and multiplying the space weight feature map and the periocular feature mosaic map corresponding to each channel element by element to obtain at least one space feature map.
In one possible implementation manner of the embodiment of the present application, the spatial feature extraction module 202 is specifically configured to, when determining the first key feature map of each periocular local sub-feature map and the second key feature map of each periocular global sub-feature map:
determining at least two first characteristic values of each periocular local sub-characteristic map and at least two second characteristic values of each periocular global sub-characteristic map;
screening a first number of first feature values from the at least two first feature values, wherein the screened first feature values are larger than first feature values which are not screened in the at least two first feature values;
screening a second number of second feature values from the at least two second feature values, wherein the screened second feature values are larger than the second feature values which are not screened in the at least two second feature values, the first number and the second number are determined based on target dimensions, and the target dimensions are dimensions of each periocular local sub-feature map or dimensions of each periocular global sub-feature map;
a first key feature map for each periocular local sub-feature map is determined based on a first number of first feature values, and a second key feature map for each periocular global sub-feature map is determined based on a second number of second feature values.
In one possible implementation manner of the embodiment of the present application, when the spatial feature extraction module 202 extracts the spatial weight of the periocular image to obtain the spatial weight feature map, the spatial feature extraction module is specifically configured to:
and carrying out normalization processing on the periocular image to obtain a spatial weight characteristic diagram.
In one possible implementation manner of the embodiment of the present application, the apparatus 20 further includes:
and the circulation module is used for, when the preset condition is not met, circularly executing the following steps until the preset condition is met:
extracting spatial features of the feature map obtained last time to obtain at least one current spatial feature map;
determining at least one obtained current space feature map as a target feature map, or determining the last obtained key periocular feature map and at least one current space feature map as a target feature map;
determining a current screening coefficient corresponding to the target feature map based on the target feature map, wherein the current screening coefficient is used for screening key periocular features in the target feature map;
determining a current key periocular feature map of the periocular image based on the current screening coefficient and the target feature map;
the preset conditions comprise: the determined times of the key periocular feature map reach a preset times threshold;
The last obtained feature map comprises any one of the following:
the last obtained key periocular feature map;
the last time the key periocular feature map was obtained, the last time the spatial feature map was extracted.
In one possible implementation manner of the embodiment of the present application, when the spatial feature extraction module 202 performs spatial feature extraction on the periocular image to obtain at least one spatial feature map, the spatial feature extraction module is specifically configured to: performing downsampling on the periocular image, performing spatial feature extraction on the downsampled periocular image to obtain at least one spatial feature map,
and/or,
the circulation module is specifically configured to, when performing spatial feature extraction on the feature map obtained last time to obtain at least one current spatial feature map: and when the determined times of the key periocular feature images reach the designated times, performing downsampling processing on the feature images obtained last time, and performing spatial feature extraction on the feature images obtained last time after the downsampling processing to obtain at least one current spatial feature image.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the periocular feature extraction device 20 described above may refer to the corresponding process in the foregoing method embodiment, and will not be described herein again.
In an embodiment of the present application, as shown in fig. 5, an electronic device 30 includes: a processor 301 and a memory 303, wherein the processor 301 is coupled to the memory 303, for example via a bus 302. Optionally, the electronic device 30 may also include a transceiver 304. It should be noted that, in practical applications, the number of transceivers 304 is not limited to one, and the structure of the electronic device 30 does not limit the embodiment of the present application.
The processor 301 may be a CPU (Central Processing Unit), a general purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 301 may also be a combination that implements computing functionality, e.g., a combination of one or more microprocessors, a combination of a DSP and a microprocessor, etc.
Bus 302 may include a path to transfer information between the components. Bus 302 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. Bus 302 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean there is only one bus or one type of bus.
The memory 303 may be, but is not limited to, a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 303 is used for storing application program codes for executing the present application and is controlled to be executed by the processor 301. The processor 301 is configured to execute the application code stored in the memory 303 to implement what is shown in the foregoing method embodiments.
Among them, electronic devices include, but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. But may also be a server or the like. The electronic device shown in fig. 5 is only an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
The present application provides a computer readable storage medium having a computer program stored thereon which, when run on a computer, causes the computer to perform the corresponding method embodiments described above. Compared with the related art, in the embodiment of the application, a periocular image is acquired and spatial feature extraction is performed on it, so that at least one spatial feature map of the periocular image can be obtained, each comprising periocular global features and periocular local features. Because the global and local feature conditions of the periocular image are recorded in the spatial feature maps, a screening coefficient corresponding to each spatial feature map can be determined, the screening coefficient being used for screening the key global features and key local features in the at least one spatial feature map. The spatial feature map corresponding to the screening coefficient is then further processed to obtain the key global and key local features in the at least one spatial feature map, i.e., the key periocular feature map corresponding to the periocular image is determined, thereby obtaining a robust periocular feature map.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
The foregoing is only a partial embodiment of the present application, and it should be noted that, for a person skilled in the art, several improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A periocular feature extraction method, comprising:
acquiring a periocular image;
extracting spatial features of the periocular image to obtain at least one spatial feature map, wherein each spatial feature map comprises periocular local features and periocular global features;
Determining a screening coefficient corresponding to each space feature map based on each space feature map, wherein the screening coefficient is used for screening key periocular features in each space feature map;
and determining a key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient.
2. The method according to claim 1, wherein the determining the key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient includes:
performing singular value decomposition on each space feature map to obtain a singular value matrix, a left orthogonal matrix and a right orthogonal matrix corresponding to each space feature map;
determining key elements in the corresponding singular value matrix based on the screening coefficients corresponding to each space feature map;
determining elements except the key elements as 0 to obtain a key singular value matrix;
determining a key space feature map corresponding to each space feature map based on the key singular value matrix, the left orthogonal matrix and the right orthogonal matrix;
and determining a key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map.
3. The method according to claim 2, wherein determining the key periocular feature map of the periocular image based on the key spatial feature map corresponding to each spatial feature map includes any one of:
if the number of the space feature images is one, determining a key space feature image corresponding to the space feature images as a key periocular feature image of the periocular image;
if the number of the space feature images is at least two, splicing the key space feature images corresponding to the space feature images respectively to obtain the key periocular feature images of the periocular images.
4. The method for extracting periocular features according to claim 1, wherein the performing spatial feature extraction on the periocular image to obtain at least one spatial feature map includes:
extracting local spatial features and global spatial features of the periocular image to obtain a periocular local feature map and a periocular global feature map;
performing channel separation on the periocular local feature map according to the preset channel number to obtain periocular local sub-feature maps corresponding to each channel, and performing channel separation on the periocular global feature map according to the preset channel number to obtain periocular global sub-feature maps corresponding to each channel;
Determining a first key feature map of each periocular local sub-feature map and a second key feature map of each periocular global sub-feature map;
splicing the first key feature map and the second key feature map belonging to the same channel to obtain a periocular feature mosaic map corresponding to each channel;
carrying out space weight extraction on the periocular image to obtain a space weight feature map;
and multiplying the space weight feature map and the periocular feature mosaic map corresponding to each channel element by element to obtain the at least one space feature map.
5. The method according to claim 4, wherein determining the first key feature map of each periocular local sub-feature map and the second key feature map of each periocular global sub-feature map comprises:
determining at least two first characteristic values of each periocular local sub-characteristic map and at least two second characteristic values of each periocular global sub-characteristic map;
screening a first number of first feature values from at least two first feature values, wherein the screened first feature values are larger than first feature values which are not screened in the at least two first feature values;
screening a second number of second feature values from at least two second feature values, wherein the screened second feature values are larger than non-screened second feature values in the at least two second feature values, and the first number and the second number are determined based on a target dimension, and the target dimension is the dimension of each periocular local sub-feature map or the dimension of each periocular global sub-feature map;
Determining a first key feature map of each periocular local sub-feature map based on the first number of first feature values, and determining a second key feature map of each periocular global sub-feature map based on the second number of second feature values.
6. The method of claim 1, wherein the determining the key periocular feature map of the periocular image based on the screening coefficient and the spatial feature map corresponding to the screening coefficient further comprises:
if the preset condition is not met, the following steps are circularly executed until the preset condition is met:
extracting spatial features of the feature map obtained last time to obtain at least one current spatial feature map;
determining at least one obtained current space feature map as a target feature map, or determining the last obtained key periocular feature map and the at least one current space feature map as target feature maps;
determining a current screening coefficient corresponding to the target feature map based on the target feature map, wherein the current screening coefficient is used for screening key periocular features in the target feature map;
determining a current key periocular feature map of the periocular image based on the current screening coefficient and the target feature map;
The preset conditions include: the determined times of the key periocular feature map reach a preset times threshold;
the last obtained feature map comprises any one of the following:
the last obtained key periocular feature map;
the last time the key periocular feature map was obtained, the last time the spatial feature map was extracted.
7. The method for extracting periocular features of claim 6, wherein performing spatial feature extraction on the periocular image to obtain at least one spatial feature map comprises: performing downsampling on the periocular image, performing spatial feature extraction on the downsampled periocular image to obtain at least one spatial feature map,
and/or,
the step of extracting the spatial features of the feature map obtained last time to obtain at least one current spatial feature map comprises the following steps: and when the determined times of the key periocular feature images reach the designated times, carrying out downsampling on the feature images obtained last time, and carrying out spatial feature extraction on the feature images obtained last time after the downsampling, so as to obtain the at least one current spatial feature image.
8. A periocular feature extraction device, comprising:
The image acquisition module is used for acquiring periocular images;
the spatial feature extraction module is used for extracting spatial features of the periocular image to obtain at least one spatial feature map, and each spatial feature map comprises periocular local features and periocular global features;
the coefficient determining module is used for determining screening coefficients corresponding to each space feature map based on each space feature map, and the screening coefficients are used for screening key periocular features in each space feature map;
and the key periocular feature extraction module is used for determining a key periocular feature map of the periocular image based on the screening coefficient and a spatial feature map corresponding to the screening coefficient.
9. An electronic device, comprising:
at least one processor;
a memory;
at least one application program, wherein the at least one application program is stored in the memory and configured to be executed by the at least one processor, the at least one application program: for performing a periocular feature extraction method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed in a computer, causes the computer to execute a periocular feature extraction method according to any one of claims 1 to 7.
CN202310592929.7A 2023-05-24 2023-05-24 Periocular feature extraction method and device, electronic equipment and storage medium Active CN116503933B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310592929.7A CN116503933B (en) 2023-05-24 2023-05-24 Periocular feature extraction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310592929.7A CN116503933B (en) 2023-05-24 2023-05-24 Periocular feature extraction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116503933A true CN116503933A (en) 2023-07-28
CN116503933B CN116503933B (en) 2023-12-12

Family

ID=87318337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310592929.7A Active CN116503933B (en) 2023-05-24 2023-05-24 Periocular feature extraction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116503933B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304847A (en) * 2017-11-30 2018-07-20 腾讯科技(深圳)有限公司 Image classification method and device, personalized recommendation method and device
CN108009503A (en) * 2017-12-04 2018-05-08 北京中科虹霸科技有限公司 Personal identification method based on periocular area
CN109508695A (en) * 2018-12-13 2019-03-22 北京中科虹霸科技有限公司 Eye multi-modal biological characteristic recognition methods
CN112308654A (en) * 2019-07-31 2021-02-02 株式会社资生堂 Eye makeup commodity recommendation program, method, device and system
CN111429407A (en) * 2020-03-09 2020-07-17 清华大学深圳国际研究生院 Chest X-ray disease detection device and method based on two-channel separation network
WO2022042124A1 (en) * 2020-08-25 2022-03-03 深圳思谋信息科技有限公司 Super-resolution image reconstruction method and apparatus, computer device, and storage medium
CN112101314A (en) * 2020-11-17 2020-12-18 北京健康有益科技有限公司 Human body posture recognition method and device based on mobile terminal
CN112541433A (en) * 2020-12-11 2021-03-23 中国电子技术标准化研究院 Two-stage human eye pupil accurate positioning method based on attention mechanism
WO2022179215A1 (en) * 2021-02-23 2022-09-01 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112927783A (en) * 2021-03-30 2021-06-08 泰康保险集团股份有限公司 Image retrieval method and device
CN115760657A (en) * 2021-08-27 2023-03-07 中移(杭州)信息技术有限公司 Image fusion method and device, electronic equipment and computer storage medium
CN113870289A (en) * 2021-09-22 2021-12-31 浙江大学 Facial nerve segmentation method and device for decoupling and dividing treatment
CN114897932A (en) * 2022-03-31 2022-08-12 北京航天飞腾装备技术有限责任公司 Infrared target tracking implementation method based on feature and gray level fusion
CN116152751A (en) * 2022-12-23 2023-05-23 东软睿驰汽车技术(沈阳)有限公司 Image processing method, device, system and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YANG Y et al., "Latitude and binocular perception based blind stereoscopic omnidirectional image quality assessment for VR system", Signal Processing, vol. 173, pp. 1-18
ZHANG Cheng et al., "Concrete image classification based on three-channel separated feature fusion and support vector machine", Journal of Graphics, vol. 42, no. 6, pp. 917-923
PENG Shan, "Research on object detection algorithm based on feature fusion and attention mechanism", China Master's Theses Full-text Database (Information Science and Technology), no. 1, pp. 138-2779
LIANG Bin et al., "Video face retrieval method based on singular value decomposition and improved PCA", Computer Engineering and Applications, vol. 49, no. 11, pp. 177-182

Also Published As

Publication number Publication date
CN116503933B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
CN109255392B (en) Video classification method, device and equipment based on non-local neural network
US11354797B2 (en) Method, device, and system for testing an image
CN110503076B (en) Video classification method, device, equipment and medium based on artificial intelligence
CN105069424B (en) Quick face recognition system and method
CN110059728B (en) RGB-D image visual saliency detection method based on attention model
CN109684969B (en) Gaze position estimation method, computer device, and storage medium
Li et al. Inference of a compact representation of sensor fingerprint for source camera identification
CN111797882B (en) Image classification method and device
CN111597884A (en) Facial action unit identification method and device, electronic equipment and storage medium
CN109063776B (en) Image re-recognition network training method and device and image re-recognition method and device
CN110929836B (en) Neural network training and image processing method and device, electronic equipment and medium
CN113191489B (en) Training method of binary neural network model, image processing method and device
US20160189396A1 (en) Image processing
CN112868019B (en) Feature processing method and device, storage medium and program product
CN114298997B (en) Fake picture detection method, fake picture detection device and storage medium
CN110809126A (en) Video frame interpolation method and system based on adaptive deformable convolution
CN104616013A (en) Method for acquiring low-dimensional local characteristics descriptor
CN107368803A (en) A kind of face identification method and system based on classification rarefaction representation
Štruc et al. Removing illumination artifacts from face images using the nuisance attribute projection
CN116310462B (en) Image clustering method and device based on rank constraint self-expression
CN116503933B (en) Periocular feature extraction method and device, electronic equipment and storage medium
CN111667495A (en) Image scene analysis method and device
CN116258873A (en) Position information determining method, training method and device of object recognition model
CN116630152A (en) Image resolution reconstruction method and device, storage medium and electronic equipment
CN112132253A (en) 3D motion recognition method and device, computer readable storage medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant