CN117560455B - Image feature processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117560455B
CN117560455B (application CN202410041219.XA)
Authority
CN
China
Prior art keywords
image
feature
target
features
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410041219.XA
Other languages
Chinese (zh)
Other versions
CN117560455A (en)
Inventor
黄余格
钟智舟
糜予曦
张菁芸
王军
王少鸣
周水庚
丁守鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202410041219.XA
Publication of CN117560455A
Application granted
Publication of CN117560455B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44Secrecy systems
    • H04N1/448Rendering the image unintelligible, e.g. scrambling
    • H04N1/4486Rendering the image unintelligible, e.g. scrambling using digital data encryption
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof
    • G06V40/53Measures to keep reference information secret, e.g. cancellable biometrics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0861Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L9/0869Generation of secret information including derivation or calculation of cryptographic keys or passwords involving random numbers or seeds
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2209/00Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
    • H04L2209/08Randomization, e.g. dummy operations or using noise

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image feature processing method, apparatus, device, and storage medium, which can be applied to various scenarios such as cloud technology, artificial intelligence, intelligent transportation, and assisted driving. The method includes: performing feature extraction processing on a target image to obtain unencrypted image features of the target image; generating a feature key for the unencrypted image features; performing rotation processing on the unencrypted image features based on the feature key to obtain rotated image features; and discarding part of the feature elements in the rotated image features to obtain the encrypted image features of the target image. The application can improve both the efficiency and the security of image feature encryption.

Description

Image feature processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an image feature processing method, apparatus, device, and storage medium.
Background
Artificial intelligence (Artificial Intelligence, AI) is a comprehensive technology of computer science that studies the design principles and implementation methods of various intelligent machines, so that machines can perceive, reason, and make decisions. Artificial intelligence technology is a comprehensive discipline, generally including sensors, dedicated AI chips, cloud computing, distributed storage, big data processing technology, pre-training model technology, operation/interaction systems, mechatronics, and the like. A pre-training model, also called a large model or a foundation model, can be widely applied to downstream tasks in all major directions of artificial intelligence after fine-tuning. With the development of technology, artificial intelligence will be applied in more fields and play an increasingly important role.
Image feature processing is also one of the important application directions of artificial intelligence. In the related art, to secure the image features of an image (for example, features extracted by a machine learning model), the image features are often encrypted based on a homomorphic encryption (Homomorphic Encryption, HE) algorithm. However, homomorphic encryption suffers from high computational complexity, and it cannot guarantee the security of the image features once the key is leaked.
Disclosure of Invention
Embodiments of the present application provide an image feature processing method, apparatus, electronic device, computer readable storage medium, and computer program product, which can improve efficiency and security of image feature encryption.
The technical solutions of the embodiments of the present application are implemented as follows:
an embodiment of the present application provides an image feature processing method, including:
performing feature extraction processing on a target image to obtain unencrypted image features of the target image;
generating a feature key for the unencrypted image features;
performing rotation processing on the unencrypted image features based on the feature key to obtain rotated image features;
and discarding part of the feature elements in the rotated image features to obtain the encrypted image features of the target image.
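As an illustrative sketch only (not the patented implementation), the four steps above can be modeled in Python: step 1 is stubbed with a random feature vector, the feature key is modeled as a random seed, the rotation as a key-derived random orthogonal matrix, and the discard step as keeping a fixed fraction of the elements. All of these modeling choices are assumptions.

```python
import numpy as np

def generate_feature_key() -> int:
    """Step 2: generate a feature key (modeled here as a random seed)."""
    return int(np.random.default_rng().integers(0, 2**31))

def rotate_features(features: np.ndarray, key: int) -> np.ndarray:
    """Step 3: rotate the unencrypted features with a key-derived random
    orthogonal matrix (QR decomposition of a seeded Gaussian matrix)."""
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.standard_normal((features.size, features.size)))
    return q @ features

def discard_elements(rotated: np.ndarray, keep_ratio: float = 0.75) -> np.ndarray:
    """Step 4: discard part of the feature elements; this is what makes
    the encryption irreversible even if the feature key leaks."""
    return rotated[: int(rotated.size * keep_ratio)]

def encrypt_features(features: np.ndarray) -> tuple[np.ndarray, int]:
    """Steps 2-4 chained; step 1 (feature extraction) is stubbed below."""
    key = generate_feature_key()
    return discard_elements(rotate_features(features, key)), key

# Step 1 stub: a 512-dimensional unencrypted feature vector.
unencrypted = np.random.default_rng(0).standard_normal(512)
encrypted, key = encrypt_features(unencrypted)
print(encrypted.shape)  # (384,)
```

Because the rotation is orthogonal, it preserves vector norms and pairwise angles, which is what allows matching to still work on encrypted features; the discard step is what makes the mapping non-invertible.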
An embodiment of the present application further provides an image recognition method, including:
receiving an image recognition request for an image to be recognized, where the image recognition request requests identification of whether the image to be recognized belongs to a target object, the target object includes the objects to which the images corresponding to the target encrypted image features in an image feature library belong, and each target encrypted image feature is obtained based on the image feature processing method provided by the embodiments of the present application;
in response to the image recognition request, extracting unencrypted to-be-recognized features of the image to be recognized, and matching the unencrypted to-be-recognized features against the target encrypted image features to obtain a matching result;
when the matching result indicates that the image feature library contains a target encrypted image feature that matches the unencrypted to-be-recognized features, determining a recognition result that the image to be recognized belongs to the target object;
and when the matching result indicates that the image feature library contains no target encrypted image feature that matches the unencrypted to-be-recognized features, determining a recognition result that the image to be recognized does not belong to the target object.
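A minimal sketch of the matching step, under the assumption (not stated explicitly in this summary) that the probe feature is re-encrypted with each registered feature key and compared by cosine similarity against a threshold; the keep ratio, the threshold value, and storing a key per library entry are all illustrative choices.

```python
import numpy as np

def encrypt_with_key(features: np.ndarray, key: int, keep_ratio: float = 0.75) -> np.ndarray:
    """Re-apply the registered key's rotation + discard so the probe
    becomes comparable with the stored encrypted features (assumed protocol)."""
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.standard_normal((features.size, features.size)))
    rotated = q @ features
    return rotated[: int(rotated.size * keep_ratio)]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(probe: np.ndarray, library, threshold: float = 0.8) -> bool:
    """library: list of (feature_key, target_encrypted_feature) pairs."""
    return any(cosine(encrypt_with_key(probe, key), target) >= threshold
               for key, target in library)

rng = np.random.default_rng(1)
enrolled = rng.standard_normal(256)   # features of a registered image
impostor = rng.standard_normal(256)   # features of an unrelated image
library = [(1234, encrypt_with_key(enrolled, 1234))]

print(match(enrolled, library))  # True: identical features match exactly
print(match(impostor, library))
```

Unrelated feature vectors land near zero cosine similarity after the shared rotation and discard, so they fall well below any reasonable threshold.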
An embodiment of the present application further provides an image feature processing apparatus, including:
a feature extraction module, configured to perform feature extraction processing on a target image to obtain unencrypted image features of the target image;
a key generation module, configured to generate a feature key for the unencrypted image features;
a feature rotation module, configured to perform rotation processing on the unencrypted image features based on the feature key to obtain rotated image features;
and a feature discarding module, configured to discard part of the feature elements in the rotated image features to obtain the encrypted image features of the target image.
An embodiment of the present application further provides an image recognition apparatus, including:
a receiving module, configured to receive an image recognition request for an image to be recognized, where the image recognition request requests identification of whether the image to be recognized belongs to a target object, the target object includes the objects to which the images corresponding to the target encrypted image features in an image feature library belong, and each target encrypted image feature is obtained based on the image feature processing method provided by the embodiments of the present application;
a matching module, configured to extract, in response to the image recognition request, unencrypted to-be-recognized features of the image to be recognized, and match the unencrypted to-be-recognized features against the target encrypted image features to obtain a matching result;
a first determining module, configured to determine a recognition result that the image to be recognized belongs to the target object when the matching result indicates that the image feature library contains a target encrypted image feature that matches the unencrypted to-be-recognized features;
and a second determining module, configured to determine a recognition result that the image to be recognized does not belong to the target object when the matching result indicates that the image feature library contains no target encrypted image feature that matches the unencrypted to-be-recognized features.
An embodiment of the present application further provides an electronic device, including:
a memory, configured to store computer-executable instructions;
and a processor, configured to implement the method provided by the embodiments of the present application when executing the computer-executable instructions stored in the memory.
An embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions or a computer program, where the computer-executable instructions or the computer program, when executed by a processor, implement the method provided by the embodiments of the present application.
An embodiment of the present application further provides a computer program product, including computer-executable instructions or a computer program, where the computer-executable instructions or the computer program, when executed by a processor, implement the method provided by the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
By applying the embodiments of the present application, for the extracted unencrypted image features of a target image, a feature key of the unencrypted image features is first generated; the unencrypted image features are then rotated based on the feature key to obtain rotated image features; and finally, part of the feature elements in the rotated image features are discarded to obtain the encrypted image features of the target image. In this way: (1) encryption of the unencrypted image features is achieved simply by rotating them based on the feature key and discarding part of the feature elements of the rotated image features, which simplifies the computation involved in image feature encryption, improves encryption efficiency, and reduces the consumption of computing resources; (2) discarding part of the feature elements makes the image feature encryption irreversible, so the image features cannot be decrypted and restored even if the key is leaked, which improves the security of image feature encryption.
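The irreversibility claimed in (2) can be illustrated numerically: even an attacker who knows the rotation matrix and the kept elements can at best recover a least-squares estimate, and the component of the feature vector lying in the discarded subspace is lost. The sketch below uses an assumed random orthogonal rotation and a fixed discard of 128 of 512 elements; it is an illustration, not the patented scheme.

```python
import numpy as np

rng = np.random.default_rng(7)
d, kept = 512, 384                                 # discard 128 of 512 elements

features = rng.standard_normal(d)                  # unencrypted image features
q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # key-derived rotation (assumed)
encrypted = (q @ features)[:kept]                  # rotate, then discard

# An attacker who knows q and the kept elements can only solve a
# least-squares system over the kept rows of q (minimum-norm solution).
recovered, *_ = np.linalg.lstsq(q[:kept, :], encrypted, rcond=None)

err = np.linalg.norm(recovered - features) / np.linalg.norm(features)
print(f"relative reconstruction error: {err:.2f}")
```

With these parameters the relative error concentrates near √(128/512) ≈ 0.5: roughly a quarter of the feature energy lies in the discarded subspace and is unrecoverable by construction, with or without the key.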
Drawings
FIG. 1 is a schematic architecture diagram of an image feature processing system according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 3A is a schematic flowchart of an image feature processing method according to an embodiment of the present application;
FIG. 3B is a schematic flowchart of an image recognition method according to an embodiment of the present application;
FIG. 4 is a first schematic diagram of an application scenario of an image feature processing method according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of feature extraction according to an embodiment of the present application;
FIG. 6 is a schematic diagram of spherical linear interpolation according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a feature element discarding process according to an embodiment of the present application;
FIG. 8 is a second schematic diagram of an application scenario of the image feature processing method according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of feature matching according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a comparison of test results of an image feature processing method according to an embodiment of the present application;
FIG. 11 shows verification results of the protection performance of the image feature processing method according to an embodiment of the present application;
FIG. 12 shows test results of the operation overhead of the image feature processing method according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with one another where no conflict arises.
In the following description, the terms "first", "second", "third", and the like are merely used to distinguish similar objects and do not denote a particular order. It is understood that, where permitted, "first", "second", and "third" may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the embodiments of the application is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms involved in the embodiments of the present application are explained; the following explanations apply to these terms as used throughout.
1) Client: an application running in a terminal to provide various services, such as a client supporting feature processing (e.g., feature encryption) of images (e.g., face images).
2) In response to: used to represent the condition or state on which a performed operation depends; when the condition or state is satisfied, the one or more operations performed may be in real time or with a set delay. Unless otherwise specified, there is no limitation on the order in which the operations are performed.
3) Computer Vision (CV): a science that studies how to make machines "see"; more specifically, it uses cameras and computers in place of human eyes to perform machine vision tasks such as recognition and measurement on a target, and further performs graphics processing so that the computer produces an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Large-model technology has brought important changes to the development of computer vision; pre-trained models in the vision field, such as Swin-Transformer, ViT, V-MoE, and MAE, can be quickly and widely applied to specific downstream tasks through fine-tuning. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, optical character recognition (Optical Character Recognition, OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, three-dimensional (3D) techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric techniques such as face recognition and fingerprint recognition, and liveness detection.
Based on the above description of the terms and terminology involved in the embodiments of the present application, the embodiments of the present application will be described in detail below. Embodiments of the present application provide an image feature processing method, apparatus, electronic device, computer readable storage medium, and computer program product, which can improve efficiency and security of image feature encryption.
It should be noted that, in the present application, during the application of the embodiments, the collection of relevant data should strictly comply with the requirements of relevant laws and regulations, the informed consent or separate consent of the personal information subject should be obtained, and subsequent data use and processing should be carried out within the scope authorized by laws, regulations, and the personal information subject.
The image feature processing system provided by the embodiments of the present application is described below. Referring to FIG. 1, FIG. 1 is a schematic architecture diagram of an image feature processing system according to an embodiment of the present application. To support an exemplary application, the image feature processing system 100 includes: a server 200, a network 300, and terminals (terminals 400-1 and 400-2 are shown as examples). A terminal is connected to the server 200 through the network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two, with data transmission over wireless or wired links.
In one exemplary scenario, the terminal 400-1 corresponds to a first user (e.g., an operator of image-feature-related data) and may run a client supporting feature processing (e.g., feature encryption) of images (e.g., face images). In practical application, in response to an image feature encryption instruction for a target image triggered by the first user, the terminal 400-1 sends an image feature encryption request for the target image to the server 200. The server 200 receives the image feature encryption request sent by the terminal 400-1; in response to the request, performs feature extraction processing on the target image to obtain unencrypted image features of the target image; generates a feature key for the unencrypted image features; rotates the unencrypted image features based on the feature key to obtain rotated image features; and discards part of the feature elements in the rotated image features to obtain the encrypted image features of the target image.
In one exemplary scenario, the server 200 may also register the encrypted image features of the target image in the image feature library of the terminal 400-2. In practical applications, the terminal 400-2 corresponds to a second user and may run a client (for example, a face recognition client, used for recognizing whether an image to be recognized belongs to a target object), where the image feature library includes a plurality of target encrypted image features, and the target object is an object whose image is registered in the image feature library. The terminal 400-2 receives the encrypted image features registered by the server 200 and stores them in the image feature library as target encrypted image features. In another exemplary scenario, upon receiving an image recognition request for an image to be recognized (requesting identification of whether the image belongs to the target object), the terminal 400-2 may match the encrypted to-be-recognized features of the image against each target encrypted image feature in the image feature library to obtain a matching result; if the matching succeeds, the image to be recognized belongs to the target object (recognition succeeds), and if the matching fails, the image to be recognized does not belong to the target object (recognition fails).
In some embodiments, the image feature processing method provided by the embodiment of the present application is implemented by an electronic device, for example, may be implemented by a terminal alone, may be implemented by a server alone, or may be implemented by a terminal and a server cooperatively. The embodiment of the application can be applied to various scenes, including but not limited to cloud technology, artificial intelligence, intelligent transportation, assisted driving, audio and video, computer vision, biological feature payment, biological feature unlocking, biological feature identity verification and the like.
In some embodiments, the electronic device implementing the image feature processing method provided by the embodiments of the present application may be any of various types of terminals or servers. The server (e.g., server 200) may be an independent physical server, or a server cluster or distributed system composed of multiple physical servers. The terminal (e.g., terminal 400) may be, but is not limited to, a notebook computer, tablet computer, desktop computer, smartphone, smart voice interaction device (e.g., smart speaker), smart home appliance (e.g., smart television), smart watch, vehicle-mounted terminal, wearable device, Virtual Reality (VR) device, aircraft, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
In some embodiments, the image feature processing method provided by the embodiments of the present application may be implemented by means of cloud technology. Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize computation, storage, processing, and sharing of data. It is a general term for network, information, integration, management platform, and application technologies based on the cloud computing business model; these resources can form a pool and be used flexibly on demand. Cloud computing will become an important support, since background services of technical network systems require large amounts of computing and storage resources. As an example, a server (e.g., server 200) may also be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, web services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and big data and artificial intelligence platforms.
In some embodiments, the image feature processing method provided by the embodiments of the present application may be implemented by means of blockchain technology. A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. By way of example, multiple servers may be organized into a blockchain, each server being a node on the blockchain; information connections exist between the nodes, and information may be transferred over these connections. The data related to the image feature processing method provided by the embodiments of the present application (such as the encrypted image features of the target image) may be stored on the blockchain.
In some embodiments, the terminal or the server may implement the image feature processing method provided by the embodiments of the present application by running various computer-executable instructions or computer programs. For example, the computer-executable instructions may be microprogram-level commands, machine instructions, or software instructions. The computer program may be a native program or software module in an operating system; a native application (APP), i.e., a program that needs to be installed in the operating system to run; or an applet that can be embedded in any APP, i.e., a program that only needs to be downloaded into a browser environment to run. In general, the computer-executable instructions may be instructions of any form, and the computer program may be an application, module, or plug-in of any form.
The electronic device for implementing the image feature processing method provided by the embodiment of the application is described below. Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 500 provided in the embodiment of the present application may be a terminal or a server. As shown in fig. 2, the electronic device 500 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in electronic device 500 are coupled together by bus system 540. It is appreciated that the bus system 540 is used to enable connected communications between these components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to the data bus. The various buses are labeled as bus system 540 in fig. 2 for clarity of illustration.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination of the two. The memory 550 may include one or more storage devices physically located away from the processor 510. The memory 550 includes volatile memory or non-volatile memory, and may also include both. The non-volatile memory may be a read-only memory (Read Only Memory, ROM), and the volatile memory may be a random access memory (Random Access Memory, RAM). The memory 550 described in the embodiments of the present application is intended to include any suitable type of memory.
In some embodiments, memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 552, configured to reach other electronic devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (Universal Serial Bus, USB), and the like;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
The input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the image feature processing apparatus provided by the embodiments of the present application may be implemented in software. FIG. 2 shows an image feature processing apparatus 555 stored in the memory 550, which may be software in the form of a program, a plug-in, or the like, including the following software modules: a feature extraction module 5551, a key generation module 5552, a feature rotation module 5553, and a feature discarding module 5554. These modules are logical, and thus may be combined arbitrarily or further split according to the functions implemented; the functions of each module are described below.
The image feature processing method provided by the embodiments of the present application is described below. As described above, the method is implemented by an electronic device, for example, by a server or a terminal alone, or by a server and a terminal cooperatively; the execution subject of each step is not repeated hereinafter. Referring to FIG. 3A, FIG. 3A is a schematic flowchart of an image feature processing method according to an embodiment of the present application; the method includes:
Step 101: and carrying out feature extraction processing on the target image to obtain unencrypted image features of the target image.
In step 101, the target image may be an image whose image features need to be encrypted, such as a face image, a palm print image, a fingerprint image, or an iris image. For the target image, feature extraction processing is first performed to obtain the unencrypted image features of the target image. In the embodiment of the present application, after the unencrypted image features of the target image are extracted, the security of the unencrypted image features can be improved by encrypting them, thereby preventing the target image from being restored by anyone who obtains the unencrypted image features and preventing the security of the image information in the target image from being compromised.
In some embodiments, the feature extraction process may be performed on the target image to obtain unencrypted image features of the target image by: and extracting global features of the target image to obtain the global features of the target image, and taking the global features as unencrypted image features of the target image.
Here, a global feature is extracted from the target image; when only the global feature is encrypted later, the speed of image feature encryption can be improved and the time consumption of image feature encryption reduced. As an example, the feature extraction processing for the global feature may be performed by a pre-trained neural network model, which may be constructed based on a convolutional neural network, a recurrent neural network, a deep neural network, or the like. For example, the neural network model may be a residual network (ResNet) with a corresponding fully connected layer, through which the features extracted by the neural network model can be integrated into one global feature.
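As an illustrative sketch only (not the implementation of this application), the backbone-plus-fully-connected pipeline described above can be imitated with a toy linear map standing in for the ResNet backbone; the L2 normalization reflects the later requirement that extracted features lie on a hypersphere. All shapes and names here are assumptions.

```python
import numpy as np

def extract_global_feature(image: np.ndarray, fc_weights: np.ndarray) -> np.ndarray:
    """Toy stand-in for 'backbone + fully connected layer': flatten the
    image, apply a linear map (the fully connected layer), and
    L2-normalize so the global feature lies on the unit hypersphere."""
    backbone_out = image.reshape(-1)          # stand-in for backbone features
    feature = fc_weights @ backbone_out       # fully connected layer
    return feature / np.linalg.norm(feature)  # project onto the hypersphere

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))           # tiny placeholder "image"
fc_weights = rng.standard_normal((512, 64))   # yields a 512-dimensional global feature
feature = extract_global_feature(image, fc_weights)
```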
In some embodiments, the feature extraction process may be performed on the target image to obtain unencrypted image features of the target image by: extracting features of the target image to obtain unencrypted multiple image sub-features, wherein each image sub-feature is used for representing part of features in the target image; the plurality of image sub-features are taken as unencrypted image features of the target image. Specifically, the local features of the target dimensions can be extracted from the target image, so as to obtain unencrypted image sub-features with multiple target dimensions; and taking the image sub-features of the multiple target dimensions as unencrypted image features of the target image.
Here, the target dimension may be preset and kept low; for example, the target dimension may be 16. By extracting local features of the target dimension from the target image, unencrypted image sub-features of multiple target dimensions can be obtained; that is, the unencrypted image features of the target image include multiple image sub-features of the target dimension. As an example, instead of the extraction of the global feature described above, the fully connected layer corresponding to the neural network model may be replaced with a convolution layer (whose parameters may be preset), through which image sub-features of multiple target dimensions can be generated based on the features extracted by the neural network model. It should be noted that the image sub-features (i.e., local features) obtained in the embodiment of the present application are not obtained by "hard splitting", but are more robust features learned by the neural network model, which can completely preserve each important feature of the target image (such as the eyes, nose, and mouth of a face image) in the image sub-features. Therefore, if each local feature is encrypted separately in the subsequent feature encryption, the difficulty of restoring the target image from the image features is increased, and the security of image feature encryption can be improved; moreover, by setting the target dimension reasonably, the computational complexity of encrypting each local feature can be reduced, the speed of image feature encryption is ensured, and the time consumption of image feature encryption and the occupation of computing resources are reduced.
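Continuing the illustrative sketch (all parameters are assumptions, not the patented configuration), the convolution layer that replaces the fully connected layer can be imitated as a per-location linear projection to the target dimension (here 16), producing multiple normalized image sub-features:

```python
import numpy as np

def extract_sub_features(feature_map: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """feature_map: (C, K) backbone features at K spatial locations (stand-in).
    proj: (16, C) plays the role of the preset convolution layer that maps
    each location to a 16-dimensional sub-feature; each sub-feature is
    L2-normalized so it lies on a low-dimensional hypersphere."""
    subs = proj @ feature_map                      # (16, K)
    subs = subs / np.linalg.norm(subs, axis=0)     # normalize each column
    return subs.T                                  # (K, 16): K sub-features

rng = np.random.default_rng(1)
sub_features = extract_sub_features(rng.standard_normal((64, 32)),
                                    rng.standard_normal((16, 64)))
```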
Step 102: a feature key for the unencrypted image feature is generated.
In step 102, a feature key of the unencrypted image feature needs to be generated, which is used for the rotation process of the unencrypted image feature. In some embodiments, the feature key for the unencrypted image feature may be generated by: randomly sampling from the target distribution to obtain a random key; dividing the random key by the modular length of the random key to obtain the characteristic key of the unencrypted image characteristic.
Here, random data conforming to the target distribution may be randomly generated, random sampling is then performed from that random data, and the sampling result is used as the random key. In practical applications, since the unencrypted image feature is the start point vector of the rotation and the feature key is the end point vector of the rotation processing, the random key may be identical in structure to the unencrypted image feature, e.g., identical in dimension. The modular length of the random key is then obtained, and the random key is divided by its modular length to obtain the feature key of the unencrypted image feature. As an example, the rotation processing may be implemented by means of spherical linear interpolation (Spherical Linear Interpolation, Slerp). Since the extracted unencrypted image feature is processed so as to lie on an n-dimensional (typically 512-dimensional) hypersphere, for the result of Slerp interpolation to also lie on the hypersphere, Slerp requires that the start point vector (i.e., the unencrypted image feature to be rotated) and the end point vector lie on the hypersphere at the same time. Therefore, the target distribution may be a standard normal distribution; that is, a random key may be sampled from the standard normal distribution and divided by its modular length, so that a feature key satisfying the condition can be quickly generated. In this way, randomness is introduced by taking the randomly generated feature key as the rotation end point vector of the feature rotation, achieving the effect of protecting the image features and improving their security.
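A minimal sketch of this key-generation step, assuming a 512-dimensional feature: sample from the standard normal distribution and divide by the modular length, so the key lands on the same unit hypersphere as the feature.

```python
import numpy as np

def generate_feature_key(dim: int, rng: np.random.Generator) -> np.ndarray:
    """Randomly sample a key from the standard normal distribution (the
    target distribution) and divide it by its modular length (L2 norm),
    yielding an end point vector on the unit hypersphere for Slerp."""
    random_key = rng.standard_normal(dim)
    return random_key / np.linalg.norm(random_key)

key = generate_feature_key(512, np.random.default_rng(42))
```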
In some embodiments, the unencrypted image feature includes a plurality of image sub-features, such that a feature key for the unencrypted image feature may be generated by: generating sub-feature keys of the sub-features of each image respectively, wherein different keys exist among the sub-feature keys of the sub-features of each image; or generating a target feature key for the image sub-features, and taking the target feature key as the sub-feature key of each image sub-feature.
Here, the sub-feature keys of the respective image sub-features may be generated separately for the respective image sub-features, and different keys exist between the sub-feature keys of the respective image sub-features. It should be noted that the presence of different keys may refer to the fact that each of the sub-feature keys is different, or may refer to the fact that some of the sub-feature keys are identical and some of the sub-feature keys are different. Therefore, a plurality of different sub-feature keys can be adopted to encrypt and protect the image sub-features, so that the security of the image features is further improved, and the encryption effect is better. Of course, a sub-feature key can be used for each image sub-feature, so that the encryption protection of the image features can be realized, the time consumption of the image feature encryption can be reduced, and the speed of the image feature encryption can be improved. Note that, the generation method of the sub-feature key of the image sub-feature may refer to the generation method of the feature key of the image feature, which is not described herein.
Step 103: and carrying out rotation processing on the unencrypted image features based on the feature key to obtain rotation image features.
In step 103, for the unencrypted image feature of the target object, the unencrypted image feature is rotated based on the generated feature key to obtain a rotated image feature. In some embodiments, the unencrypted image feature includes a plurality of image sub-features, so in step 103, rotating the unencrypted image feature is in effect rotating each image sub-feature to obtain a rotated image sub-feature of each image sub-feature, and the rotated image sub-features of the plurality of image sub-features constitute the rotated image feature. The rotation processing of the unencrypted image features (i.e., the image sub-features) is described next.
In some embodiments, the unencrypted image feature may be rotated based on the feature key to obtain a rotated image feature by: taking the unencrypted image characteristics as a starting point vector of spherical linear interpolation, taking the characteristic key as an end point vector of spherical linear interpolation, and determining a vector included angle between the starting point vector and the end point vector; performing spherical linear interpolation processing on the starting point vector and the end point vector based on the vector included angle to obtain an interpolation result; and taking the interpolation result as a rotation image characteristic. Specifically, the spherical linear interpolation processing is performed on the starting point vector and the end point vector based on the vector included angle, and an interpolation result can be obtained through the following steps: acquiring a first angle weight of the end point vector, and determining a second angle weight of the start point vector based on the first angle weight; and performing spherical linear interpolation processing on the starting point vector and the end point vector based on the vector included angle, the first angle weight and the second angle weight to obtain an interpolation result.
Here, the vector included angle between the start point vector and the end point vector is first determined. Then a first angle weight of the end point vector is obtained, where the first angle weight may be a preset hyperparameter t, and a second angle weight of the start point vector is determined based on the first angle weight. In practical applications, the sum of the first angle weight and the second angle weight may be 1, so the second angle weight = 1 − the first angle weight. Then, based on the vector included angle, the first angle weight, and the second angle weight, interpolation processing is performed on the start point vector and the end point vector to obtain an interpolation result, which is the rotated image feature. In practical application, the spherical linear interpolation processing may be implemented by the following formula (1):

e′ = [sin((1 − t)·θ) / sin θ]·e + [sin(t·θ) / sin θ]·k ; formula (1)

where t is the hyperparameter (i.e., the first angle weight), 1 − t is the second angle weight, θ is the vector included angle between the start point vector e and the end point vector k, and e′ is the interpolation result.
Therefore, the rotation of the unencrypted image features is realized through Slerp interpolation, the process is simple to realize and has the characteristic of light weight, the speed of image feature encryption can be improved, and the time consumption of image feature encryption and the occupation of calculation resources can be reduced.
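The Slerp rotation described above can be sketched directly as code (a generic Slerp implementation with t as the first angle weight; symbol names are illustrative):

```python
import numpy as np

def slerp(start: np.ndarray, end: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two unit vectors:
    the result stays on the unit hypersphere."""
    cos_theta = np.clip(np.dot(start, end), -1.0, 1.0)
    theta = np.arccos(cos_theta)                 # vector included angle
    sin_theta = np.sin(theta)
    return (np.sin((1.0 - t) * theta) * start
            + np.sin(t * theta) * end) / sin_theta

rng = np.random.default_rng(7)
feature = rng.standard_normal(512)
feature /= np.linalg.norm(feature)               # start point vector
key = rng.standard_normal(512)
key /= np.linalg.norm(key)                       # end point vector
rotated = slerp(feature, key, t=0.3)             # rotated image feature
```

At t = 0 the result is the original feature and at t = 1 it is the key, so the hyperparameter controls how far the feature is rotated toward the random key.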
Step 104: and discarding part of characteristic elements in the rotation image characteristics to obtain the encrypted image characteristics of the target image.
In step 104, after the rotated image feature is obtained in step 103, a portion of the feature elements in the rotated image feature may be discarded (e.g., set to 0) to obtain the encrypted image feature of the target image. In this way, the irreversibility of image feature encryption is realized by discarding feature elements: even if an attacker obtains the feature key, the unencrypted image feature cannot be restored, which further improves the security of the image features and enhances the encryption effect. Since in step 103 the rotation processing is actually performed on each image sub-feature to obtain a rotated image sub-feature of each image sub-feature, in step 104, when feature elements are discarded, part of the feature elements in each rotated image sub-feature are discarded to obtain an encrypted rotated image sub-feature of each rotated image sub-feature, and the plurality of encrypted rotated image sub-features constitute the encrypted image feature of the target image. Next, the discarding of feature elements of the rotated image features (i.e., the rotated image sub-features) is described.
In some embodiments, the encrypted image features of the target image may be obtained by discarding some feature elements in the rotated image features by: randomly determining characteristic elements of partial dimensions from the rotated image characteristics; discarding feature elements of part of dimensions randomly determined in the rotation image features to obtain the encrypted image features of the target image. Here, the dimension discarding number of the randomly discarded dimensions may be preset, and then the feature elements of the dimension discarding number may be randomly determined, so that the feature elements of a part of the randomly determined dimensions in the rotated image feature may be discarded. Therefore, the feature elements to be discarded are randomly determined, the randomness of the image feature encryption is further increased, and the security of the image feature encryption is improved.
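The random discarding just described can be sketched as follows (the discard count and the choice of 0 as the specific value are illustrative assumptions):

```python
import numpy as np

def discard_random_dims(rotated: np.ndarray, n_drop: int,
                        rng: np.random.Generator):
    """Randomly pick n_drop dimensions of the rotated feature and set
    them to a specific value (0), making the encryption irreversible;
    the chosen indices are returned so a query feature can later be
    discarded in the same dimensions during matching."""
    encrypted = rotated.copy()
    drop_idx = rng.choice(encrypted.size, size=n_drop, replace=False)
    encrypted[drop_idx] = 0.0
    return encrypted, drop_idx

rng = np.random.default_rng(3)
rotated = rng.standard_normal(512)
encrypted, dropped = discard_random_dims(rotated, n_drop=128, rng=rng)
```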
In some embodiments, the encrypted image features of the target image may be obtained by discarding some feature elements in the rotated image features by: setting part of characteristic elements in the rotating image characteristics as specific values to obtain encrypted image characteristics of the target image; or setting partial characteristic elements in the rotation image characteristic as random values to obtain the encrypted image characteristic of the target image.
In some embodiments, the unencrypted image feature includes a plurality of image sub-features, and the rotated image feature includes a plurality of rotated image sub-features obtained by rotating the plurality of image sub-features. Thus, the encrypted image feature of the target image may be obtained by discarding part of the feature elements in the rotated image feature as follows: determining the importance degree of each of the plurality of rotated image sub-features; and discarding part of the feature elements of the plurality of rotated image sub-features based on the importance degrees to obtain the encrypted image feature of the target image, wherein fewer dimensions are discarded from rotated image sub-features with a high importance degree than from rotated image sub-features with a low importance degree.
Here, the following processing is performed for each rotated image sub-feature: first, the importance degree of the rotated image sub-feature may be determined, the importance degree indicating how important the rotated image sub-feature is for characterizing the target image. In practice, self-attention processing may be performed on the plurality of rotated image sub-features (or on the image sub-features from which they were rotated) to obtain an attention matrix (attention map), which includes the importance degree of each rotated image sub-feature and reflects how important each rotated image sub-feature is for characterizing the target image. In this way, the importance degree of the rotated image sub-feature can be obtained from the attention matrix. Then, part of the feature elements of the plurality of rotated image sub-features are discarded based on the importance degree of each rotated image sub-feature to obtain the encrypted image feature of the target image, where the number of dimensions of the discarded elements is smaller for rotated image sub-features with a high importance degree than for those with a low importance degree.
In some embodiments, based on the importance level, the encrypted image features of the target image may be obtained by discarding part of the feature elements of the plurality of rotated image sub-features by performing the steps of: for each rotation image sub-feature, the following processing is performed: determining the dimension discarding quantity corresponding to the importance degree of the rotating image sub-feature; discarding feature elements of the dimension discarding number from the rotating image sub-features to obtain encrypted rotating image sub-features of the rotating image sub-features; wherein the encrypted rotated image sub-features of the plurality of rotated image sub-features comprise encrypted image features of the target image.
Here, the number of dimension drops corresponding to the importance degree of the rotated image sub-feature is first determined. In some embodiments, the number of dimension drops corresponding to the importance degree may be determined in one of the following modes. In the first mode, a mapping relationship between the importance degree and the number of dimension drops is acquired, and the number of dimension drops corresponding to the importance degree is determined based on the mapping relationship. In the second mode, a plurality of importance degree intervals and the interval dimension-drop number corresponding to each importance degree interval are acquired; the target importance degree interval in which the importance degree is located is determined from the importance degree intervals; and the interval dimension-drop number corresponding to the target importance degree interval is taken as the number of dimension drops corresponding to the importance degree. In the third mode, the total number of dimension drops corresponding to the rotated image feature is acquired; the ratio of the importance degree to the sum of the importance degrees of the plurality of rotated image sub-features is determined; and the total number of dimension drops is multiplied by the ratio to obtain the number of dimension drops.
For the first mode, a mapping relationship between the importance degree and the number of dimension drops may be set in advance. After the importance degree of the rotated image sub-feature is determined, the number of dimension drops corresponding to that importance degree is determined based on the mapping relationship.
For the second mode, a plurality of importance degree intervals may be set in advance, and for each importance degree interval, a corresponding interval dimension-drop number may be set. In this way, after the importance degree of the rotated image sub-feature is determined, the target importance degree interval in which it falls is determined from the plurality of importance degree intervals, and the interval dimension-drop number corresponding to the target importance degree interval is the number of dimension drops corresponding to the importance degree of the rotated image sub-feature.
For the third mode, the total number of dimension discarding of the dimension to be discarded may be set in advance. Thus, the number of dimension drops corresponding to each rotated image sub-feature is determined based on the total number of dimension drops. Specifically, after determining the importance degree of the rotated image sub-feature, a ratio of the importance degree of the rotated image sub-feature to the sum of the importance degrees of the plurality of rotated image sub-features may be determined, and then the total number of dimension drops and the ratio are multiplied to obtain the number of dimension drops corresponding to the importance degree of the rotated image sub-feature.
And finally, discarding feature elements of the dimension discarding number from the rotating image sub-feature after obtaining the dimension discarding number corresponding to the importance degree of the rotating image sub-feature, thereby obtaining the encrypted rotating image sub-feature of the rotating image sub-feature. The encrypted rotated image sub-features of the plurality of rotated image sub-features may constitute encrypted image features of the target image.
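The third mode (splitting a total dimension-drop budget by the importance ratio) can be sketched as follows; the rounding policy (flooring, with the remainder assigned to the first sub-feature) is an assumption not specified in the text:

```python
import numpy as np

def drops_per_sub_feature(importances, total_drops: int) -> np.ndarray:
    """Third mode: each sub-feature's dimension-drop count is the total
    number of dimension drops multiplied by the ratio of its importance
    to the sum of all importances (floored; the rounding remainder is
    assigned to the first sub-feature so the budget is met exactly)."""
    imp = np.asarray(importances, dtype=float)
    ratios = imp / imp.sum()
    drops = np.floor(total_drops * ratios).astype(int)
    drops[0] += total_drops - drops.sum()   # distribute rounding remainder
    return drops

counts = drops_per_sub_feature([5, 3, 2], total_drops=10)
```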
By applying this embodiment, feature elements are discarded from each rotated image sub-feature according to how important that sub-feature is for characterizing the target image, so that fewer feature elements are discarded from rotated image sub-features with a high importance degree and more from those with a low importance degree. In this way, on the basis of realizing image feature encryption by discarding feature elements, the resulting encrypted image feature can still express the target image accurately, the loss of important features is reduced, and the encryption effect of the image features is improved.
In some embodiments, feature elements of a dimension may be discarded by discarding feature elements of a dimension discard number by: setting the characteristic elements of the dimension discarding number as a specific value; or the feature elements of the dimension discard number are set to random values. Here, the specific value may be 0, i.e., for the discarded feature element, the feature element is set to 0; the random value can be generated based on a preset random algorithm, so that the randomness of the image characteristic encryption can be further increased, and the safety of the image characteristic encryption is improved.
In some embodiments, part of the feature elements in the rotated image feature may be discarded through the following steps to obtain the encrypted image feature of the target image: discarding part of the feature elements in the rotated image feature to obtain an intermediate encrypted image feature of the target image; and performing feature standardization processing on the intermediate encrypted image feature to obtain the encrypted image feature of the target image. If the unencrypted image feature includes a plurality of image sub-features, the intermediate encrypted image feature also includes a plurality of intermediate encrypted image sub-features; feature standardization processing is then performed on each intermediate encrypted image sub-feature to obtain its encrypted image sub-feature, and the encrypted image sub-features of the plurality of intermediate encrypted image sub-features constitute the encrypted image feature of the target image.
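Assuming the feature standardization is L2 normalization back onto the unit hypersphere (consistent with the Slerp requirement above; the exact standardization is not specified in the text), the step can be sketched as:

```python
import numpy as np

def finalize_encrypted_feature(intermediate: np.ndarray) -> np.ndarray:
    """Feature standardization: re-project the intermediate encrypted
    feature (whose norm shrank when elements were discarded) onto the
    unit hypersphere so it is directly comparable, e.g. by cosine
    similarity, with other encrypted features."""
    return intermediate / np.linalg.norm(intermediate)

intermediate = np.array([3.0, 0.0, 4.0])   # toy intermediate encrypted feature
encrypted = finalize_encrypted_feature(intermediate)
```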
In some embodiments, after obtaining the encrypted image features of the target image, the encrypted image features may also be stored as target encrypted image features in an image feature library. The image feature library includes a plurality of target encrypted image features. In actual implementation, the encrypted image features and the feature key association required for generating the encrypted image features may be stored in an image feature library. Thus, the image feature library includes a plurality of target encrypted image features, and a target feature key required to generate each target encrypted image feature. It should be noted that if the target encrypted image feature includes a plurality of target encrypted image sub-features, and each target encrypted image sub-feature is generated by a different target feature key, each target encrypted image sub-feature and its corresponding target feature key may be respectively associated and stored; if the target encrypted image feature includes a plurality of target encrypted image sub-features, each corresponding to the same target feature key, then the target feature key may be stored in association with the plurality of target encrypted image sub-features.
By applying the embodiment of the present application, for the extracted unencrypted image features of the target image, a feature key of the unencrypted image features is first generated; then, based on the feature key, the unencrypted image features are rotated to obtain rotated image features; and finally, part of the feature elements in the rotated image features are discarded to obtain the encrypted image features of the target image. Encrypting the unencrypted image features by rotating them based on the feature key and discarding part of the feature elements of the rotated image features has two benefits: (1) the operational complexity of image feature encryption is simplified, the efficiency of image feature encryption is improved, and the occupation of computing resources is reduced; (2) the irreversibility of image feature encryption is realized by discarding part of the feature elements, so that decryption and restoration of the image features cannot be achieved even if the key is leaked, and the security of image feature encryption is improved.
The image recognition method provided by the embodiment of the application is described below. The image recognition method provided by the embodiment of the application is implemented by the electronic equipment, for example, the image recognition method can be implemented by a server or a terminal singly or cooperatively. The execution subject of each step will not be repeated hereinafter. Referring to fig. 3B, fig. 3B is a schematic flow chart of an image recognition method according to an embodiment of the present application, where the image recognition method according to the embodiment of the present application includes:
step 201: an image recognition request for an image to be recognized is received.
The image recognition request requests identification of whether the image to be identified belongs to a target object, where the target object includes the objects to which the images corresponding to the target encrypted image features in the image feature library belong, and each target encrypted image feature is obtained by the image feature processing method provided in the embodiments of the present application.
In some embodiments, after obtaining the encrypted image feature of the image based on the image feature processing method provided in the above embodiments of the present application, the encrypted image feature may also be stored as the target encrypted image feature in the image feature library. Based on this, the image feature library includes a plurality of target encrypted image features. In actual implementation, the encrypted image features and the feature key association required for generating the encrypted image features may be stored in an image feature library. Thus, the image feature library includes a plurality of target encrypted image features, and a target feature key required to generate each target encrypted image feature. It should be noted that if the target encrypted image feature includes a plurality of target encrypted image sub-features, and each target encrypted image sub-feature is generated by a different target feature key, each target encrypted image sub-feature and its corresponding target feature key may be respectively associated and stored; if the target encrypted image feature includes a plurality of target encrypted image sub-features, each corresponding to the same target feature key, then the target feature key may be stored in association with the plurality of target encrypted image sub-features.
Here, the above-described image feature library may be applied to an identification scene that identifies whether an image to be identified belongs to a target object, the image feature library including target encrypted image features of an image of the target object. That is, the object to which the image corresponding to the target encrypted image feature belongs may be referred to as a target object. It should be noted that, whether the image to be identified belongs to the target object may refer to whether the image to be identified belongs to a specific target object, or whether the image to be identified belongs to the group of the plurality of target objects corresponding to the image feature library.
In some exemplary scenarios, the image to be identified is a face image, and then the identification scenario is an identification scenario based on face identification for identifying whether the face image belongs to a target object; the image to be identified is a palm print image, and the identification scene is an identification scene based on palm print identification for identifying whether the palm print image belongs to a target object; the image to be identified is a fingerprint image, and the identification scene is an identification scene based on fingerprint identification for identifying whether the fingerprint image belongs to a target object; the image to be identified is an iris image, and the identification scene is an identification scene based on iris identification for identifying whether the iris image belongs to a target object. The identification scenes can be applied to actual scenes such as biometric payment, biometric unlocking (such as electronic equipment unlocking, access control unlocking and the like), biometric identity verification and the like.
It should be noted that if the image recognition request carries the identifier of a specific target object, that is, the image recognition request indicates which specific target object the image to be recognized may belong to, then the unencrypted feature to be identified only needs to be matched with the target encrypted image feature of that specific target object. If the image recognition request does not carry the identifier of a specific target object, that is, the image recognition request does not indicate which specific target object the image to be recognized may belong to, the unencrypted feature to be identified is matched with each target encrypted image feature respectively. When the matching result indicates that the image feature library contains a target encrypted image feature matching the unencrypted feature to be identified, a recognition result that the image to be identified belongs to the target object is determined; when the matching result indicates that the image feature library contains no target encrypted image feature matching the unencrypted feature to be identified, a recognition result that the image to be identified does not belong to the target object is determined.
Step 202: and responding to the image recognition request, extracting the unencrypted features to be recognized of the images to be recognized, and matching the unencrypted features to be recognized with the target encrypted image features to obtain a matching result.
It should be noted that, the implementation logic for extracting the unencrypted feature to be identified of the image to be identified may refer to the implementation logic for extracting the unencrypted feature of the target image; and each target encrypted image feature in the image feature library adopts the same generation logic as the encrypted image feature.
In some embodiments, the matching result may be obtained by matching the unencrypted feature to be identified with each target encrypted image feature as follows. For each target encrypted image feature, the following processing is performed: acquiring the target feature key used to generate the target encrypted image feature, and rotating the unencrypted feature to be identified based on the target feature key to obtain a rotated feature to be identified; determining the target dimensions of the feature elements discarded when the target encrypted image feature was generated, and discarding the feature elements located in those target dimensions in the rotated feature to be identified to obtain the encrypted feature to be identified of the unencrypted feature to be identified; determining the feature similarity between the encrypted feature to be identified and the target encrypted image feature; when the feature similarity reaches a similarity threshold, determining a matching result that the unencrypted feature to be identified matches the target encrypted image feature; and when the feature similarity does not reach the similarity threshold, determining a matching result that the unencrypted feature to be identified does not match the target encrypted image feature.
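The matching procedure above (re-rotate the query with the target's key, drop the same dimensions, then compare against a threshold) can be sketched as follows. This is an illustrative outline only: the `rotate` step is passed in as a function, and all names are assumptions rather than the application's actual implementation.

```python
import math

def drop_dims(vec, dims):
    """Zero out the feature elements in the given target dimensions (hypothetical helper)."""
    return [0.0 if i in dims else v for i, v in enumerate(vec)]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match(query, target_encrypted, target_key, dropped_dims, rotate, threshold=0.5):
    """Re-apply the target's encryption steps to the query feature, then compare."""
    rotated = rotate(query, target_key)                  # same rotation as at enrollment
    encrypted_query = drop_dims(rotated, dropped_dims)   # discard the same dimensions
    return cosine(encrypted_query, target_encrypted) >= threshold
```

A query therefore never needs to be decrypted against the stored feature; it is pushed through the same one-way transform and compared in the encrypted domain.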
It should be noted that, the implementation logic for performing the rotation processing on the unencrypted feature to be identified based on the target feature key may refer to the implementation logic for performing the rotation processing on the unencrypted image feature based on the feature key. In this way, the encrypted feature to be identified of the unencrypted feature to be identified is generated in the same generation mode as the target encrypted image feature in the image feature library, so that the matching of the target encrypted image feature and the unencrypted feature to be identified can be realized by matching the target encrypted image feature and the encrypted feature to be identified, the accuracy of feature matching can be improved, and the identification accuracy of the image to be identified is improved.
In some embodiments, the encrypted feature to be identified comprises a plurality of encrypted sub-features to be identified, the target encrypted image feature comprises a plurality of target encrypted image sub-features, and the encrypted sub-features to be identified and the target encrypted image sub-features are in one-to-one correspondence. Thus, the feature similarity between the encrypted feature to be identified and the target encrypted image feature can be determined by: determining the importance degree of each encrypted sub-feature to be identified, wherein the importance degree indicates how important the encrypted sub-feature to be identified is for characterizing the image to be identified; for each encrypted sub-feature to be identified, determining the sub-feature similarity between the encrypted sub-feature to be identified and its corresponding target encrypted image sub-feature; and determining the feature similarity between the encrypted feature to be identified and the target encrypted image feature based on the importance degree of each encrypted sub-feature to be identified and the sub-feature similarity corresponding to each encrypted sub-feature to be identified.
The implementation logic for determining the importance degree of each encrypted sub-feature to be identified may refer to the implementation logic for determining the importance degree of each rotated image sub-feature. The sub-feature similarity can be represented by the cosine distance between the encrypted sub-feature to be identified and its corresponding target encrypted image sub-feature. When determining the feature similarity, the importance degree of each encrypted sub-feature to be identified can be used as its weight, so that the sub-feature similarities corresponding to the encrypted sub-features to be identified are subjected to weighted summation based on these weights; the weighted summation result is the feature similarity between the encrypted feature to be identified and the target encrypted image feature. In this way, the feature similarity between the encrypted feature to be identified and the target encrypted image feature can be accurately calculated.
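The weighted summation described above can be sketched as follows. This is a minimal illustration: in practice the importance degrees come from the self-attention map, whereas here they are simply given as inputs.

```python
import math

def sub_cosine(a, b):
    """Per-group cosine similarity (the 'sub cosine distance' of the text)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def weighted_similarity(query_subs, target_subs, importances):
    """Feature similarity = importance-weighted sum of per-group similarities."""
    sims = [sub_cosine(q, t) for q, t in zip(query_subs, target_subs)]
    return sum(w * s for w, s in zip(importances, sims))
```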
Step 203: and when the matching result represents that the image feature library has the target encrypted image feature matched with the unencrypted feature to be identified, determining to obtain an identification result that the image to be identified belongs to the target object.
Here, when the matching result characterizes that the image feature library contains a target encrypted image feature matching the unencrypted feature to be identified, the identification result is that the image to be identified belongs to a target object; specifically, the image to be identified belongs to the target object corresponding to the matched target encrypted image feature.
Step 204: and when the matching result represents that the image feature library does not have the target encrypted image feature matched with the unencrypted feature to be identified, determining to obtain an identification result that the image to be identified does not belong to the target object.
By applying the embodiment of the application, each target encrypted image feature in the image feature library is obtained based on the image feature processing method provided by the embodiment of the application, so that the safety of the target encrypted image feature in the image feature library can be ensured, the target encrypted image feature in the image feature library for image recognition is prevented from being decrypted and restored due to leakage, and the safety of image recognition is further improved.
An exemplary application of the embodiment of the present application in an actual application scenario is described below, taking a face image as an example of the target image. In the face recognition field, a neural network model can extract a highly aggregated identity feature vector (i.e., a face feature, also called an identity template vector) from a face image, and a face recognition device deployed offline needs to store both the neural network model and the face feature library of users (including a plurality of face features). However, because an attacker can restore face information from the face features, once an attacker breaks into the device, the attacker can acquire the face information behind the face feature library, which causes a serious privacy disclosure problem. Therefore, the face features require protection. In the related art, protection schemes for the face features include: (1) Cryptographically protecting the face features based on a homomorphic encryption (Homomorphic Encryption, HE) algorithm. However, the homomorphic encryption algorithm has the defect of high computational complexity, and homomorphic encryption cannot guarantee the security of the face features once the key is leaked. (2) Encrypting and protecting the face features based on hash encryption.
However, due to the inherent intra-class variability of biometric measurements, hash encryption is difficult to apply to face feature encryption; this is because the hash function used for hash encryption amplifies small differences in the input, that is, a small change in the input produces a completely different output, which would cause the face features to change during encryption protection, making it difficult to guarantee the accuracy of face recognition (realized by matching the face features of the face image to be recognized with the face features in the face feature library).
Based on this, the embodiment of the application provides a protection scheme for the face features, so as to at least solve the above-mentioned problems. In the embodiment of the application, (1) unlike the way of extracting a face image as a global feature (namely, an identity template vector), the embodiment of the application changes the network structure used for extracting features of the face image, so that grouped local features can be generated (namely, a plurality of face sub-features, each face sub-feature being a local feature of the face image, collectively called the face feature), with independence among groups, and with the dimension of each face sub-feature controlled at a low level (for example, 16 dimensions). (2) Each face sub-feature is rotated to obtain a first rotation sub-feature. Specifically, first, a feature key of each face sub-feature is randomly generated on the hypersphere where the plurality of face sub-features are located, wherein the feature keys of the face sub-features may be the same or different; then, for each face sub-feature, the face sub-feature is taken as the start point vector of spherical linear interpolation (Spherical Linear Interpolation, Slerp), the feature key of the face sub-feature is taken as the end point vector of Slerp, and the face sub-feature is rotated through Slerp to obtain the first rotation sub-feature of the face sub-feature. (3) For each first rotation sub-feature, the importance degree of the first rotation sub-feature is acquired (indicating how important the first rotation sub-feature is for expressing the face image), and feature elements of the first rotation sub-feature are discarded (set to 0) according to the importance degree of the first rotation sub-feature, obtaining a second rotation sub-feature.
(4) Feature standardization processing is performed on each second rotation sub-feature (for example, normalizing each second rotation sub-feature onto its respective hypersphere) to obtain the encrypted face sub-feature of each face sub-feature, and each encrypted face sub-feature of the face image is stored together with the feature keys used in generating the encrypted face sub-features.
In practical application, the embodiment of the application can be effectively applied to the face recognition scene, not only can the safety of the face feature library on the equipment be ensured, but also the face recognition precision can be ensured, and the cost is low. Referring to fig. 4, fig. 4 is a schematic diagram of an application scenario of an image feature processing method according to an embodiment of the present application. Here, the face recognition procedure includes the following steps: (1) the server registers the user. The user transmits the face image of the user to a server side, the server side performs feature extraction on the face image through a neural network model to obtain unencrypted face features (comprising a plurality of unencrypted face sub-features), and the unencrypted face features are stored in a database of the server side. (2) the server issues the database. The server copies the unencrypted face features of the database and operates the image feature processing method provided by the application to encrypt the copied unencrypted face features to obtain encrypted face features (comprising a plurality of encrypted face sub-features); the encrypted face features are registered on an offline device (running with a client supporting face recognition and with the same neural network model as the server side). And (3) face recognition. 
The offline device collects the face image to be recognized of the user; performs feature extraction on the face image to be identified through the neural network model to obtain an unencrypted face feature to be identified (including a plurality of unencrypted face sub-features to be identified); runs the image feature processing method provided by the application to encrypt the unencrypted face feature to be identified, obtaining an encrypted face feature to be identified; matches the encrypted face feature to be recognized with the registered encrypted face features one by one: if the matching is successful, a face recognition result of passing the face recognition is obtained, and if the matching fails, a face recognition result of not passing the face recognition is obtained; and displays the face recognition result to the user. Therefore, the embodiment of the application realizes the face recognition scene without changing the interaction mode of the user side, has low inference and storage overhead, and can improve the user experience in the face recognition scene.
The embodiments of the present application will be described in detail. The embodiment of the application comprises the following steps:
(1) Extracting face sub-features of the face image. Referring to fig. 5, fig. 5 is a schematic flow chart of feature extraction according to an embodiment of the present application. Here, unlike the way of extracting a face image as a global feature (i.e., an identity template vector), the embodiment of the present application changes the network structure used for feature extraction of the face image, i.e., changes the fully connected layer of the neural network model (e.g., the residual network ResNet) into a convolution layer, such that multiple sets of local features can be generated (i.e., face sub-features, each also referred to as a feature group; N in fig. 5 represents the number of feature groups), independent of each other, with the dimension of each face sub-feature controlled at a low order of magnitude (e.g., 16 dimensions; S in fig. 5 represents the group size (dimension) of each feature group). It should be noted that the feature groups obtained by the embodiment of the application are not realized by "hard splitting", but are more robust feature groups learned by the neural network model; to ensure the accuracy of face identification, the neural network model completely preserves each important identification feature in the face image (such as eyes, nose, mouth, etc.) within the feature groups.
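As a minimal illustration of the grouped layout described above (N feature groups of S dimensions each; the flat vector layout is an assumption made for this sketch, not the network's actual tensor format), a length-N×S feature can be viewed as N independent sub-features:

```python
def group_features(flat, n_groups, group_size):
    """Split a flat feature vector of length n_groups * group_size into
    n_groups local sub-features (feature groups) of group_size dimensions each."""
    assert len(flat) == n_groups * group_size
    return [flat[i * group_size:(i + 1) * group_size] for i in range(n_groups)]
```

Each group can then be rotated, dropped, and normalized independently, which is what gives the scheme its group-wise independence.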
(2) Generating a feature key for each face sub-feature. The feature key is used to rotate the face sub-feature, and the rotation is realized through Slerp. Since the face features extracted by the neural network model are, after processing, located on an n-dimensional (512-dimensional in general) hypersphere, if the interpolation result is to remain on the hypersphere, the start point vector of Slerp (i.e., the face sub-feature to be rotated, which already lies on the hypersphere) and the end point vector must both lie on the hypersphere. Thus, a random key can be sampled from a standard normal distribution and divided by its modulus to quickly generate a feature key satisfying this condition (serving as the end point vector required by Slerp).
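The key-generation step can be sketched as follows: sample from a standard normal distribution and divide by the modulus (L2 norm), which places the key on the unit hypersphere. Function and variable names are assumptions.

```python
import math
import random

def generate_feature_key(dim, rng=random):
    """Sample a random key from a standard normal distribution and divide it
    by its modulus, so the resulting feature key lies on the unit hypersphere."""
    raw = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in raw))
    return [x / norm for x in raw]
```

Dividing a standard-normal sample by its norm yields a uniformly distributed point on the hypersphere, which is why this construction is both fast and unbiased.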
(3) Rotation of face sub-features. Here, the rotation of the face sub-feature is achieved through Slerp. Referring to fig. 6, fig. 6 is a schematic diagram of spherical linear interpolation provided by an embodiment of the present application. Here, the face sub-feature to be rotated is taken as the start point vector p0 required by Slerp, the feature key is taken as the end point vector p1 required by Slerp, and the rotation of the face sub-feature is realized through the following formula (1):
Slerp(p0, p1; t) = (sin((1 − t)·θ) / sin θ)·p0 + (sin(t·θ) / sin θ)·p1; formula (1)
where t is the interpolation parameter, θ is the included angle between the start point vector p0 and the end point vector p1, and Slerp(p0, p1; t) is the interpolation result; here, the interpolation result refers to the first rotation sub-feature obtained by rotating the face sub-feature.
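The Slerp rotation above can be sketched in code as follows; this is an illustrative implementation of standard spherical linear interpolation between unit vectors, with variable names chosen for this sketch.

```python
import math

def slerp(p0, p1, t):
    """Spherical linear interpolation between unit vectors p0 and p1:
    result = sin((1-t)*theta)/sin(theta) * p0 + sin(t*theta)/sin(theta) * p1."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p0, p1))))
    theta = math.acos(dot)              # included angle between start and end vectors
    if theta < 1e-8:                    # vectors nearly coincide; interpolation is trivial
        return list(p0)
    s = math.sin(theta)
    w0 = math.sin((1.0 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(p0, p1)]
```

Because both endpoints lie on the hypersphere, the result stays on the hypersphere for every t, which is exactly the property the key-generation step was designed to guarantee.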
The feature key can also be regarded as a feature group with the same structure; since the feature keys obtained by normal-distribution sampling are independent of the feature groups, the interpolation results of the feature groups also remain independent from group to group. Moreover, realizing the rotation of the face sub-features through Slerp reduces both computation time and storage overhead.
(4) Discarding (dropping) feature elements of the first rotation sub-feature to obtain a second rotation sub-feature. Referring to fig. 7, fig. 7 is a schematic flow chart of a feature element discarding process provided by an embodiment of the present application. Here, for each first rotation sub-feature, the second rotation sub-feature may be obtained in the following ways. As shown in way (1) of fig. 7: randomly discarding feature elements in a specific number of dimensions of the first rotation sub-feature (i.e., setting the discarded feature elements to zero); this provides good feature protection. As shown in way (2) of fig. 7: performing self-attention processing on the plurality of first rotation sub-features through a machine learning model to obtain an attention matrix (attention map), where the attention matrix reflects the importance degree of each first rotation sub-feature (indicating how important the first rotation sub-feature is for representing the face image); accordingly, fewer feature elements can be discarded from first rotation sub-features with a high importance degree, and more feature elements can be discarded from first rotation sub-features with a low importance degree. For example, importance level intervals and corresponding numbers of discarded dimensions may be set, and then, according to the interval in which the importance degree of a first rotation sub-feature falls, feature elements in the corresponding number of dimensions are discarded (i.e., set to zero). In practical applications, the total numbers of dimensions discarded by way (1) and way (2) may be the same.
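The two discarding ways can be sketched as follows. The allocation rule in way (2), dropping a count proportional to (1 − importance), is an illustrative choice standing in for the interval-based rule described above; all names are assumptions.

```python
import random

def random_drop(sub_feature, num_drop, rng=random):
    """Way (1): randomly discard feature elements in num_drop dimensions
    (discarded elements are set to zero)."""
    dims = set(rng.sample(range(len(sub_feature)), num_drop))
    return [0.0 if i in dims else v for i, v in enumerate(sub_feature)]

def importance_drop(sub_features, importances, budget):
    """Way (2): discard fewer dimensions from important groups and more from
    unimportant ones; here the per-group drop count is proportional to
    (1 - importance), and the lowest dimensions are zeroed for simplicity."""
    out = []
    for sub, imp in zip(sub_features, importances):
        n = round(budget * (1.0 - imp))
        out.append([0.0 if i < n else v for i, v in enumerate(sub)])
    return out
```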
(5) Performing feature standardization processing on each second rotation sub-feature to obtain the encrypted face sub-feature of each face sub-feature. For example, the second rotation sub-features are normalized onto their respective hyperspheres. Because part of the feature elements of the second rotation sub-feature have been discarded, the included angle θ between the encrypted face sub-feature corresponding to the second rotation sub-feature and the feature key is hidden, achieving the effect of privacy protection for the face features.
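The feature standardization step can be sketched as an L2 normalization back onto the unit hypersphere (a minimal sketch; the function name is an assumption):

```python
import math

def normalize_to_hypersphere(vec):
    """Project the element-dropped vector back onto the unit hypersphere
    by dividing by its L2 norm."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]
```

Renormalizing after the drop step rescales the surviving elements, so the original angle θ to the feature key can no longer be read off the stored vector.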
(6) Face matching process. Referring to fig. 8, fig. 8 is a second application scenario schematic diagram of the image feature processing method according to the embodiment of the present application. When the offline device receives a face recognition request, it performs feature extraction on the face image to be recognized through the neural network model to obtain an unencrypted face feature to be recognized (including a plurality of face sub-features to be recognized), and matches the face feature to be identified with the encrypted face features in the face feature library. Specifically, each encrypted face feature has an associated feature key, generated when the encrypted face feature was created, which serves as the entry point for matching the face feature to be identified against it. In this way, in the process of matching the face feature to be identified with each encrypted face feature, for each encrypted face feature: the feature key corresponding to the encrypted face feature is taken as the end point vector of Slerp, the face feature to be identified is taken as the start point vector of Slerp, and the face feature to be identified is rotated to obtain a first rotation feature to be identified; feature elements of the first rotation feature to be identified are discarded based on the same discarding pattern used for the encrypted face feature, obtaining a second rotation feature to be identified; feature standardization processing is performed on the second rotation feature to be identified to obtain an encrypted face feature to be identified (including a plurality of encrypted face sub-features to be identified); and the encrypted face feature to be identified is matched with the encrypted face feature to obtain a matching score.
Referring to fig. 9, fig. 9 is a schematic flow chart of feature matching according to an embodiment of the present application. When determining the matching score, first, self-attention processing is performed on the encrypted face feature to be identified (including a plurality of encrypted face sub-features to be identified) through a machine learning model to obtain an attention matrix (attention map), where the attention matrix reflects the importance degree of each encrypted face sub-feature to be identified (indicating how important the sub-feature is for expressing the face image to be identified); then, the sub cosine distance between each encrypted face sub-feature to be identified and the corresponding sub-feature of the encrypted face feature is determined; finally, the sub cosine distance corresponding to each encrypted face sub-feature to be identified is multiplied by the corresponding importance degree to obtain a multiplication result for each sub-feature, the multiplication results are summed to obtain the final cosine distance, and this cosine distance is taken as the matching score between the encrypted face feature to be identified and the encrypted face feature. The face recognition result of the face image to be recognized is then determined based on the matching scores between the encrypted face feature to be identified and the encrypted face features.
For example, if the maximum matching score of the encrypted face feature to be identified and the encrypted face feature reaches a matching score threshold, the matching is successful, and a face identification result passing the face identification is obtained; and if the maximum matching score of the encrypted face features to be identified and the encrypted face features does not reach the matching score threshold, indicating that the matching is failed, and obtaining a face recognition result that the face recognition fails.
By applying the embodiment of the application: (1) Randomness is introduced by taking the randomly generated feature key as the rotation end point vector of the feature rotation, and irreversibility is achieved by discarding feature elements, so that the face features of the face image are privacy-protected. (2) The scheme has high recognition accuracy and short operation time, while maintaining friendly storage overhead. (3) The safety of the offline face recognition device can be ensured, in particular the safety of the face feature library on the offline device. Even if an attacker breaks into the offline device and acquires the face features and the feature keys, the attacker cannot reconstruct usable face information from them. (4) The method has strong universality: it does not require changing the offline face recognition service flow during deployment, can be embedded into any offline face recognition service scenario, achieves good face recognition accuracy at low cost, and has good application potential. The beneficial effects obtained by the embodiments of the present application are described in detail below in conjunction with experimental data.
(1) Task availability: face recognition accuracy.
Here, the present application is compared, on common data sets used for testing face recognition accuracy such as LFW, CFP-FP, AgeDB, CPLFW, CALFW, and IJB-B, with a scheme without a feature privacy protection function (ArcFace) and with schemes with a feature privacy protection function (the key-based MLP-Hash scheme, the key-based PolyProtect scheme, the non-key-based IronMask scheme, and the non-key-based ASE scheme). Referring to fig. 10, fig. 10 is a schematic diagram illustrating a comparison of test results of the image feature processing method according to the embodiment of the present application. Here, the first 2 rows of the table shown in fig. 10 are test results of schemes without the feature privacy protection function (including ArcFace, and the scheme ours_baseline of the present application with the feature encryption step not performed); the last 5 rows of the table shown in fig. 10 are test results of schemes with the feature privacy protection function (including the scheme ours_protect of the present application with the feature encryption step performed, PolyProtect, MLP-Hash, ASE, and IronMask). As can be seen from the last 5 rows of the table shown in fig. 10, among schemes with the feature privacy protection function, the present application obtains the optimal result (the contents of the dashed boxes) on most of the data sets, and obtains the suboptimal result (the contents of the solid boxes) on two data sets.
(2) Privacy preserving performance.
Here, it is assumed that an attacker can acquire the encryption algorithm, the encrypted face features, and the feature keys of the present application, and then restores the face image from the acquired face features using a diffusion-model-based face synthesis method (IDiff-Face). Fig. 11 shows the verification results for the protection performance of the image feature processing method provided by the embodiment of the present application. Here, three different settings were made: the case without encryption, the case of encryption with a single feature key (i.e., the feature keys of all face sub-features are the same), and the case of encryption with multiple feature keys (i.e., the feature keys of the face sub-features all differ). In all three settings the face image itself is unprotected, and the variable is the face feature itself. As shown in fig. 11, in the case without encryption, IDiff-Face highly restores the face information corresponding to the face features, which causes serious privacy disclosure; under the other two settings, the application exhibits excellent defensive performance and distorts the mapping from the face features to the face information.
(3) Computational overhead of the present application. Referring to fig. 12, fig. 12 shows test results for the operation overhead of the image feature processing method according to the embodiment of the present application. Here, the test results were obtained under the same hardware environment, from which it can be seen that the operation overhead of the present application is lower than that of the other schemes.
It should be noted that, the faces appearing in the drawings provided in the embodiments of the present application are all machine synthesized, and are not real faces.
Continuing with the description below of an exemplary architecture of image feature processing device 555 implemented as a software module provided by embodiments of the present application, in some embodiments, as shown in fig. 2, the software modules stored in image feature processing device 555 of memory 550 may include: the feature extraction module is used for carrying out feature extraction processing on the target image to obtain unencrypted image features of the target image; a key generation module for generating a feature key of the unencrypted image feature; the feature rotation module is used for carrying out rotation processing on the unencrypted image features based on the feature key to obtain rotation image features; and the feature discarding module is used for discarding part of feature elements in the rotating image features to obtain the encrypted image features of the target image.
In some embodiments, the feature extraction module is further configured to perform feature extraction on the target image to obtain a plurality of unencrypted image sub-features, where each image sub-feature is used to characterize a portion of features in the target image; and taking the plurality of image sub-features as unencrypted image features of the target image.
In some embodiments, the key generation module is further configured to randomly sample from the target distribution to obtain a random key; dividing the random key by the modular length of the random key to obtain the characteristic key of the unencrypted image characteristic.
In some embodiments, the unencrypted image feature includes a plurality of image sub-features, and the key generation module is further configured to generate a sub-feature key for each of the image sub-features, where the sub-feature keys of the image sub-features differ from one another; or to generate a target feature key for the image sub-features and take the target feature key as the sub-feature key of each image sub-feature.
In some embodiments, the feature rotation module is further configured to take the unencrypted image feature as a start vector of spherical linear interpolation, take the feature key as an end vector of spherical linear interpolation, and determine a vector angle between the start vector and the end vector; performing spherical linear interpolation processing on the starting point vector and the end point vector based on the vector included angle to obtain an interpolation result; and taking the interpolation result as the rotation image characteristic.
In some embodiments, the unencrypted image feature includes a plurality of image sub-features, and the rotated image feature includes a plurality of rotated image sub-features resulting from rotating the plurality of image sub-features; the feature discarding module is further configured to perform the following processing: determining the respective degrees of importance of the plurality of rotated image sub-features; and discarding part of the feature elements of the plurality of rotated image sub-features based on the importance degrees to obtain the encrypted image feature of the target image, wherein rotated image sub-features with a high importance degree have fewer discarded dimensions than rotated image sub-features with a low importance degree.
In some embodiments, the feature discarding module is further configured to set a part of feature elements in the rotated image feature to a specific value, to obtain an encrypted image feature of the target image; or setting partial characteristic elements in the rotation image characteristic as random values to obtain the encrypted image characteristic of the target image.
In some embodiments, the feature discarding module is further configured to discard a portion of feature elements in the rotated image feature to obtain an intermediate encrypted image feature of the target image; and carrying out feature standardization processing on the intermediate encrypted image feature to obtain the encrypted image feature of the target image.
The embodiments of the application also provide an image recognition device, which comprises: a receiving module for receiving an image recognition request for an image to be recognized, wherein the image recognition request queries whether the image to be recognized belongs to a target object, the target object comprises the objects to which the images corresponding to the target encrypted image features in an image feature library belong, and each target encrypted image feature is obtained by the image feature processing method provided by the embodiments of the application; a matching module for, in response to the image recognition request, extracting an unencrypted feature to be recognized from the image to be recognized, and matching the unencrypted feature to be recognized against each target encrypted image feature to obtain a matching result; a first determining module for determining a recognition result that the image to be recognized belongs to a target object when the matching result indicates that the image feature library contains a target encrypted image feature matching the unencrypted feature to be recognized; and a second determining module for determining a recognition result that the image to be recognized does not belong to a target object when the matching result indicates that the image feature library contains no target encrypted image feature matching the unencrypted feature to be recognized.
In some embodiments, the matching module is further configured to perform the following processing for each target encrypted image feature: acquiring the target feature key used to generate the target encrypted image feature, and rotating the unencrypted feature to be recognized based on the target feature key to obtain a rotated feature to be recognized; determining the target dimensions of the feature elements discarded when the target encrypted image feature was generated, and discarding the feature elements located in those target dimensions from the rotated feature to be recognized to obtain an encrypted feature to be recognized; determining the feature similarity between the encrypted feature to be recognized and the target encrypted image feature; when the feature similarity reaches a similarity threshold, determining a matching result that the unencrypted feature to be recognized matches the target encrypted image feature; and when the feature similarity does not reach the similarity threshold, determining a matching result that the unencrypted feature to be recognized does not match the target encrypted image feature.
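Putting the matching steps together: the query feature is pushed through the same key rotation and the same dimension drops as the enrolled feature, and the comparison happens entirely in the encrypted domain. Everything below (the function names, the interpolation coefficient, the cosine threshold of 0.6) is an illustrative assumption rather than the patent's fixed parameters:

```python
import numpy as np

def slerp_rotate(feature, key, t=0.5):
    """Rotate a feature toward a key by spherical linear interpolation."""
    f = feature / np.linalg.norm(feature)
    k = key / np.linalg.norm(key)
    omega = np.arccos(np.clip(np.dot(f, k), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return f
    return (np.sin((1.0 - t) * omega) * f + np.sin(t * omega) * k) / np.sin(omega)

def match_query(query, target_key, target_drop_dims, target_encrypted,
                threshold=0.6):
    """Encrypt the unencrypted query exactly as the target was encrypted,
    then compare by cosine similarity in the encrypted domain."""
    rotated = slerp_rotate(query, target_key)          # same key, same rotation
    enc = np.delete(rotated, target_drop_dims)         # same discarded dimensions
    enc = enc / np.linalg.norm(enc)
    similarity = float(np.dot(enc, target_encrypted))  # both unit-norm vectors
    return similarity >= threshold, similarity
```

Because the plaintext feature never meets the stored template directly, a leaked feature library exposes only rotated, dimension-reduced vectors.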
In some embodiments, the encrypted feature to be recognized comprises a plurality of encrypted sub-features to be recognized, the target encrypted image feature comprises a plurality of target encrypted image sub-features, and the encrypted sub-features to be recognized correspond one-to-one with the target encrypted image sub-features; the matching module is further configured to determine the importance of each encrypted sub-feature to be recognized; to determine, for each encrypted sub-feature to be recognized, the sub-feature similarity between that encrypted sub-feature and its corresponding target encrypted image sub-feature; and to determine the feature similarity between the encrypted feature to be recognized and the target encrypted image feature based on the importance of each encrypted sub-feature to be recognized and its corresponding sub-feature similarity.
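The importance-weighted aggregation described above can be sketched as a weighted sum of per-sub-feature cosine similarities; normalizing the weights to sum to one is an assumption:

```python
import numpy as np

def weighted_similarity(query_subs, target_subs, importances):
    """Aggregate per-sub-feature cosine similarities, weighted by the
    importance of each sub-feature (weights normalized to sum to 1)."""
    sims = [float(np.dot(q, t) / (np.linalg.norm(q) * np.linalg.norm(t)))
            for q, t in zip(query_subs, target_subs)]
    w = np.asarray(importances, dtype=float)
    w = w / w.sum()                     # assumed normalization of importances
    return float(np.dot(w, sims))
```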
It should be noted that the description of the device embodiments above is similar to the description of the method embodiments, and the device embodiments have beneficial effects similar to those of the method embodiments, which are therefore not repeated here. Technical details not exhausted in the image feature processing device provided by the embodiments of the application can be understood from the description of the foregoing method embodiments.
Embodiments of the present application also provide a computer program product comprising computer-executable instructions or a computer program stored in a computer-readable storage medium. The processor of the electronic device reads the computer-executable instructions or the computer program from the computer-readable storage medium, and the processor executes the computer-executable instructions or the computer program to cause the electronic device to perform the method provided by the embodiment of the application.
Embodiments of the present application also provide a computer-readable storage medium having stored therein computer-executable instructions or a computer program which, when executed by a processor, cause the processor to perform the method provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be a memory such as RAM, ROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM; it may also be any device comprising one of the above memories or any combination thereof.
In some embodiments, computer-executable instructions may be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, in the form of programs, software modules, scripts, or code, and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, computer-executable instructions may, but need not, correspond to files in a file system, may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext markup language (Hyper Text Markup Language, HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, computer-executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
The foregoing descriptions are merely exemplary embodiments of the present application and are not intended to limit the protection scope of the present application. Any modification, equivalent replacement, or improvement made within the spirit and scope of the present application shall fall within the protection scope of the present application.

Claims (14)

1. An image feature processing method, the method comprising:
Performing feature extraction processing on a target image to obtain unencrypted image features of the target image;
generating a feature key of the unencrypted image feature;
taking the unencrypted image feature as the start vector of spherical linear interpolation, taking the feature key as the end vector of spherical linear interpolation, and determining the vector angle between the start vector and the end vector;
performing spherical linear interpolation on the start vector and the end vector based on the vector angle to obtain a rotated image feature;
and discarding some of the feature elements in the rotated image feature to obtain the encrypted image feature of the target image.
2. The method according to claim 1, wherein the performing feature extraction processing on the target image to obtain unencrypted image features of the target image includes:
extracting features from the target image to obtain a plurality of unencrypted image sub-features, wherein each image sub-feature is used for representing part of the features of the target image;
and taking the plurality of image sub-features as the unencrypted image feature of the target image.
3. The method of claim 1, wherein the generating the feature key for the unencrypted image feature comprises:
randomly sampling from a target distribution to obtain a random key;
and dividing the random key by the norm of the random key to obtain the feature key of the unencrypted image feature.
4. The method of claim 1, wherein the unencrypted image feature includes a plurality of image sub-features, the generating a feature key for the unencrypted image feature comprising:
generating a sub-feature key for each of the image sub-features respectively, wherein the sub-feature keys of different image sub-features differ from one another;
or generating a target feature key for the plurality of image sub-features, and taking the target feature key as the sub-feature key of each of the image sub-features.
5. The method of claim 1, wherein the unencrypted image feature includes a plurality of image sub-features, and the rotated image feature includes a plurality of rotated image sub-features obtained by rotating the plurality of image sub-features; and wherein discarding some of the feature elements in the rotated image feature to obtain the encrypted image feature of the target image comprises:
determining the respective importance of each of the plurality of rotated image sub-features;
and discarding some feature elements of the plurality of rotated image sub-features based on the importance to obtain the encrypted image feature of the target image, wherein, after the discarding, a rotated image sub-feature of high importance has fewer dimensions than a rotated image sub-feature of low importance.
6. The method of claim 1, wherein discarding some feature elements in the rotated image feature to obtain an encrypted image feature of the target image comprises:
setting some of the feature elements in the rotated image feature to a specific value to obtain the encrypted image feature of the target image;
or setting some of the feature elements in the rotated image feature to random values to obtain the encrypted image feature of the target image.
7. The method of claim 1, wherein discarding some feature elements in the rotated image feature to obtain an encrypted image feature of the target image comprises:
discarding some of the feature elements in the rotated image feature to obtain an intermediate encrypted image feature of the target image;
and performing feature standardization on the intermediate encrypted image feature to obtain the encrypted image feature of the target image.
8. An image recognition method, the method comprising:
receiving an image recognition request for an image to be recognized, wherein the image recognition request queries whether the image to be recognized belongs to a target object, the target object comprises the objects to which the images corresponding to the target encrypted image features in an image feature library belong, and each target encrypted image feature is obtained by the image feature processing method according to any one of claims 1-7;
in response to the image recognition request, extracting an unencrypted feature to be recognized from the image to be recognized, and matching the unencrypted feature to be recognized against each target encrypted image feature to obtain a matching result;
when the matching result indicates that the image feature library contains a target encrypted image feature matching the unencrypted feature to be recognized, determining a recognition result that the image to be recognized belongs to a target object;
and when the matching result indicates that the image feature library contains no target encrypted image feature matching the unencrypted feature to be recognized, determining a recognition result that the image to be recognized does not belong to a target object.
9. The method of claim 8, wherein matching the unencrypted feature to be recognized against each target encrypted image feature to obtain a matching result comprises:
performing the following processing for each target encrypted image feature:
acquiring the target feature key used to generate the target encrypted image feature, and rotating the unencrypted feature to be recognized based on the target feature key to obtain a rotated feature to be recognized;
determining the target dimensions of the feature elements discarded when the target encrypted image feature was generated, and discarding the feature elements located in those target dimensions from the rotated feature to be recognized to obtain an encrypted feature to be recognized;
determining the feature similarity between the encrypted feature to be recognized and the target encrypted image feature;
when the feature similarity reaches a similarity threshold, determining a matching result that the unencrypted feature to be recognized matches the target encrypted image feature;
and when the feature similarity does not reach the similarity threshold, determining a matching result that the unencrypted feature to be recognized does not match the target encrypted image feature.
10. The method of claim 9, wherein the encrypted feature to be recognized comprises a plurality of encrypted sub-features to be recognized, the target encrypted image feature comprises a plurality of target encrypted image sub-features, and the encrypted sub-features to be recognized correspond one-to-one with the target encrypted image sub-features; and wherein determining the feature similarity between the encrypted feature to be recognized and the target encrypted image feature comprises:
determining the importance of each encrypted sub-feature to be recognized;
for each encrypted sub-feature to be recognized, determining the sub-feature similarity between that encrypted sub-feature and its corresponding target encrypted image sub-feature;
and determining the feature similarity between the encrypted feature to be recognized and the target encrypted image feature based on the importance of each encrypted sub-feature to be recognized and its corresponding sub-feature similarity.
11. An image feature processing apparatus, characterized in that the apparatus comprises:
The feature extraction module is used for carrying out feature extraction processing on the target image to obtain unencrypted image features of the target image;
a key generation module for generating a feature key of the unencrypted image feature;
The feature rotation module is used for taking the unencrypted image feature as the start vector of spherical linear interpolation, taking the feature key as the end vector of spherical linear interpolation, and determining the vector angle between the start vector and the end vector; and for performing spherical linear interpolation on the start vector and the end vector based on the vector angle to obtain a rotated image feature;
and the feature discarding module is used for discarding some of the feature elements in the rotated image feature to obtain the encrypted image feature of the target image.
12. An electronic device, the electronic device comprising:
a memory for storing computer executable instructions;
a processor for implementing the method of any one of claims 1 to 10 when executing computer-executable instructions stored in the memory.
13. A computer readable storage medium storing computer executable instructions or a computer program, which when executed by a processor, implement the method of any one of claims 1 to 10.
14. A computer program product comprising computer-executable instructions or a computer program, which, when executed by a processor, implements the method of any one of claims 1 to 10.
CN202410041219.XA 2024-01-11 2024-01-11 Image feature processing method, device, equipment and storage medium Active CN117560455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410041219.XA CN117560455B (en) 2024-01-11 2024-01-11 Image feature processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117560455A CN117560455A (en) 2024-02-13
CN117560455B true CN117560455B (en) 2024-04-26

Family

ID=89813248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410041219.XA Active CN117560455B (en) 2024-01-11 2024-01-11 Image feature processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117560455B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847003A (en) * 2015-01-15 2016-08-10 深圳印象认知技术有限公司 Encryption method of biological feature, encryption matching method and encryption system, and encryption matching system
KR102095364B1 (en) * 2018-12-12 2020-04-01 인천대학교 산학협력단 Method and apparatus for image data encryption using rubik's cube principle
CN114782462A (en) * 2022-03-08 2022-07-22 北京邮电大学 Semantic weighting-based image information hiding method
CN115546846A (en) * 2022-07-29 2022-12-30 深圳绿米联创科技有限公司 Image recognition processing method and device, electronic equipment and storage medium
KR20230086038A (en) * 2021-12-08 2023-06-15 조선대학교산학협력단 Apparatus and Method for Encrypting and Compressing Image
CN116361830A (en) * 2023-03-20 2023-06-30 深圳市佳信捷智慧物联有限公司 Face recognition method, device and storage medium for secure encryption


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yunyu Li, Jiantao Zhou, Yuanman Li, Oscar C. Au. "Reducing the ciphertext expansion in image homomorphic encryption via linear interpolation technique." 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2015, full text. *


Similar Documents

Publication Publication Date Title
CN111680672B (en) Face living body detection method, system, device, computer equipment and storage medium
Gu et al. Securing input data of deep learning inference systems via partitioned enclave execution
CN111538968A (en) Identity verification method, device and equipment based on privacy protection
Zhang et al. An efficient parallel secure machine learning framework on GPUs
CN110874571A (en) Training method and device of face recognition model
Leroux et al. Privacy aware offloading of deep neural networks
US9009486B2 (en) Biometric authentication apparatus, biometric authentication method, and computer readable storage medium
Karri Secure robot face recognition in cloud environments
CN113766085B (en) Image processing method and related device
CN111475690B (en) Character string matching method and device, data detection method and server
Jasmine et al. A privacy preserving based multi-biometric system for secure identification in cloud environment
CN107742141B (en) Intelligent identity information acquisition method and system based on RFID technology
CN117560455B (en) Image feature processing method, device, equipment and storage medium
CN113239852B (en) Privacy image processing method, device and equipment based on privacy protection
CN113542527B (en) Face image transmission method and device, electronic equipment and storage medium
CN111461091B (en) Universal fingerprint generation method and device, storage medium and electronic device
CN114048453A (en) User feature generation method and device, computer equipment and storage medium
CN113901502A (en) Data processing method and device, electronic equipment and storage medium
CN113518061A (en) Data transmission method, device, apparatus, system and medium in face recognition
CN112348060A (en) Classification vector generation method and device, computer equipment and storage medium
Ma Face recognition technology and privacy protection methods based on deep learning
Santos et al. Medical Systems Data Security and Biometric Authentication in Public Cloud Servers
CN114826689B (en) Information input method, security authentication method and electronic equipment
Wang et al. Internet of vehicles based on TrustZone and optimized RSA
CN115396222B (en) Device instruction execution method, system, electronic device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant